This is dumb. Literally nothing has changed. Anyone who knows anything about LLMs knows that they’ve struggled with math more than almost any other discipline. It sounds counterintuitive for a computer to be shit at math, but that’s because an LLM’s “intelligence” comes from mimicry. They do not calculate math the way a calculator does. They generate every response from a probability distribution constructed from billions of human text inputs. They are as smart, and as fallible, as Wikipedia + Reddit + Twitter, etc., etc. They are as fallible as the dataset they were built from.
Think about how ice cream sales correlate with drownings. There is no direct causality, but that won’t stop an LLM from seeing the pattern or implying causality, because it has no real intelligence and doesn’t know any better.
“Prompt engineering” is about understanding an LLM’s strengths and weaknesses, and learning how to work with them to build out a context and efficiently achieve an end result, whatever that desired result may be. It’s not dead, and it’s not going anywhere as long as LLMs exist.
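To make the “probability distribution” point concrete, here is a toy sketch in Python. The prompt and the probabilities are invented for illustration (a real model scores tens of thousands of candidate tokens at every step), but the mechanism is the same: the model samples a likely continuation; it never runs an arithmetic unit.

import random

# Hypothetical next-token probabilities after the prompt "2 + 2 =".
# These numbers are made up; a real LLM produces them from its learned weights.
next_token_probs = {
    " 4": 0.90,    # the common continuation in human text, so it usually looks right
    " 5": 0.04,    # wrong answers still carry probability mass
    " 22": 0.03,
    " four": 0.03,
}

# Generating a reply is just sampling in proportion to those probabilities --
# no addition is ever performed.
token = random.choices(list(next_token_probs), weights=list(next_token_probs.values()))[0]
print("2 + 2 =" + token)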
I really wish all of these companies racing to replace their existing software features and employees with LLMs understood this. So many applications are dependent on a response being 100% accurate for a very specific request as opposed to being 80% accurate for a wide variety of requests. “Based on training data, here’s what a response to your input might look like” is pretty good for conversational language and image generation, but it sucks for anything requiring computation or expertise. Worst of all, it’s so confidently wrong about things I might as well be back on Reddit.
They totally understand it. And OpenAI has solved it. For example, while researching The Ultimate Answer to Life, the Universe and Everything, I asked it to calculate 6 by 9 in base 13 and got the correct answer: 42.
ChatGPT didn’t use the LLM to calculate that. It only used the LLM to understand an obscure and deliberately confusing chapter of the Hitchhiker’s Guide book, and to write and execute this Python script.
# To calculate six by nine in base 13, we multiply the numbers in our standard
# decimal system and then convert the result to base 13.

# Calculate 6 * 9 in decimal
result_decimal = 6 * 9

# Convert the result to base 13.
# The easiest approach is to use the divmod() function repeatedly to get the
# remainder (which corresponds to the base 13 digit) and update the quotient
# for the next iteration until the quotient is 0.
def decimal_to_base_n(num, base):
    if num == 0:
        return "0"
    digits = []
    while num:
        num, remainder = divmod(num, base)
        digits.append(str(remainder))
    return ''.join(digits[::-1])

# Convert the decimal result to base 13
result_base_13 = decimal_to_base_n(result_decimal, 13)
result_base_13
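If you want to double-check that answer without trusting the model, Python’s built-in int() can parse a base-13 string back into decimal:

print(int("42", 13))   # 54
print(6 * 9)           # 54, so "42" really is six by nine in base 13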
You know, I had gotten frustrated using it because it wouldn’t understand me, but now I’ll use this approach to find out how it understands me.
I mean, it’s not really like humans are good at math either; we are good at making abstractions and following linear rules, but we are slow and fallible. Digital computation is just about the best possible method for doing math. LLMs are decent at abstraction and general problem solving, though. They are not as creative as people, but they are still pretty good! It’s a step in the right direction for true AGI. Honestly, even when we have AGI, I doubt it will ever beat raw CPUs in computation speed.
Machine learning could find those strengths and weaknesses and learn to work around them likely better than a human could. It’s just trial and error. There’s nothing about the human brain that makes it better suited to understanding the inner logic of an LLM.
Congrats. You don’t understand the difference between a statistical model and a human.
I expected more from a gaylord fartmaster. 2/10.
In what way?
Why couldn’t even a basic reinforcement learning model be used to brute force “figure out what input gives desired X output”?
Because the training data is man-made, so it will never be 100% accurate, and critical thought is required to set the desired output and to understand whether the output makes sense?
Statistical models find patterns in ones and zeros. They don’t apply critical thought.
Actually, most (I think all, but I’m not 99% positive) machine learning models are incapable of doing straight arithmetic. Due to the way they are built, ML models, including deep learning models, can only learn relationships in a limited input space.
This is most apparent when you test LLMs on different arithmetic operations:
- For addition, it does okay up until you get to the millions or billions
- Multiplication, I think, breaks down at the 100/1000 level
- Exponents break almost immediately
- Give it decimal values and it also breaks relatively quickly, for any operation
This has to do with the fact that LLMs are, at bottom, stacks of linear functions with simple nonlinearities between them, so higher-order operations break down faster.
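If you want to reproduce that drop-off yourself, a rough harness like the sketch below works: generate random problems at increasing operand sizes and score exact-match accuracy. Note that ask_llm() is a stand-in for whatever chat API or local model you happen to use, not a real library call.

import random

def ask_llm(prompt: str) -> str:
    # Placeholder: wire this up to your model of choice.
    raise NotImplementedError

def accuracy(op, symbol, digits, trials=50):
    correct = 0
    for _ in range(trials):
        a = random.randint(10 ** (digits - 1), 10 ** digits - 1)
        b = random.randint(10 ** (digits - 1), 10 ** digits - 1)
        reply = ask_llm(f"What is {a} {symbol} {b}? Answer with only the number.")
        try:
            correct += int(reply.strip().replace(",", "")) == op(a, b)
        except ValueError:
            pass  # unparseable answers count as wrong
    return correct / trials

# Example: compare addition and multiplication as the operands grow.
# for d in (2, 4, 6, 8):
#     print(d, accuracy(lambda a, b: a + b, "+", d),
#              accuracy(lambda a, b: a * b, "*", d))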