• 1 Post
  • 7 Comments
Joined 10 months ago
Cake day: January 19th, 2024



  • I love that example. Microsoft’s Copilot (based on GPT-4) immediately doesn’t disappoint:

    Microsoft Copilot: Two pounds of feathers and a pound of lead both weigh the same: two pounds. The difference lies in the material—feathers are much lighter and less dense than lead. However, when it comes to weight, they balance out equally.

    It’s annoying that for many things, like basic programming tasks, it manages to generate reasonable output that is good enough to goad people into trusting it, yet it hallucinates obviously wrong stuff or follows completely insane approaches on anything off the beaten path. Every other day I have to spend an hour justifying to a coworker why I wrote code the way I did when the AI has given him another “great” suggestion, like opening a hidden window with a UI control to query a database instead of going through our ORM.




  • Is this a case of “here, LLM trained on millions of lines of text from Cold War novels, fictional alien invasions, nuclear apocalypses and the like, please assume there is a tense diplomatic situation and write the next actions taken by either party”?

    But it’s good that the researchers made explicit what should be clear: these LLMs aren’t thinking/reasoning “AI” that is being consulted, they just serve up a remix of likely sentences that might reasonably follow the gist of the provided prior text (“context”). A corrupted hive mind of fiction authors and actions that served their ends of telling a story.

    That being said, I could imagine /some/ use if an LLM were trained/retrained exclusively on verified information describing real actions and outcomes in 20th century military history. It could serve as a brainstorming aid, pointing out possible actions or possible responses of the opponent which decision makers might not have thought of.


  • I know this is naive, but sometimes I wish we’d be bolder in brainstorming alternative ways the economy could work.

    Imagine, for example, that the IRS sent a yearly, mandatory “happiness questionnaire” to all employees of a company (compare the World Happiness Report). This questionnaire would then have a major influence on how much tax the company has to pay, so much so that it would be cheaper to make employees happy and content than to squeeze them for every ounce of labor they can give.

    Or an official switch to 6 hour days, except to get those 2 hours less, you have to use them for growing your own food. Shorter workdays, more time with family, more self-reliance. And a strong motivation for cities to provide more green spaces and community gardens.

    Very naive ideas with lots of problems, yes, but I wish the concept of revenue generation weren’t so thoroughly entrenched in our heads as the guiding principle of all we do and dream of.