• 0 Posts
  • 19 Comments
Joined 2 years ago
Cake day: June 28th, 2023


  • It is very tangential here, but I think this whole concept of “searching everything indiscriminately” can get a little ridiculous anyway. For example, when I’m looking for the latest officially approved (!) version of some document in SharePoint, I don’t want search to bring up tons of draft versions that are either sitting on my personal OneDrive or were shared with me at some point in the past, random e-mails, etc. Yet, apparently, there is no decent option for filtering, because supposedly “that’s against the philosophy” and “nobody should even need or want such a feature” (why not???).

    In some cases, context and metadata are even more important than the content of a document itself (especially when related to topics such as law/compliance, accounting, etc.). However, maybe the loss of this insight is another piece of collateral damage from the current AI hype.

    Edit: By the way, this fits surprisingly well with the security vulnerability described here. An external email is used that purports to contain information about internal regulations. What is the point of a search that includes external sources for this type of question, even without the hidden instructions to the AI?


  • As I’ve pointed out earlier in this thread, it is probably fairly easy for someone devoid of empathy and a conscience to manipulate and control people. Most scammers and cult leaders appear to operate from similar playbooks, and it is easy to imagine how these techniques could be incorporated into an LLM (either intentionally or even unintentionally, as the training data is probably full of examples). That doesn’t mean the LLM is in any way sentient, though. However, it also doesn’t mean there is no danger. At risk are, on the one hand, psychologically vulnerable people and, on the other hand, people who are too easily convinced that this AI is a genius and will soon be able to do all the brainwork in the world.



  • These systems are incredibly effective at mirroring whatever you project onto them back at you.

    Also, it has often been pointed out that toxic people (from school bullies and domestic abusers up to cult leaders and dictators) often appear to operate from similar playbooks. Of course, this has been reflected in many published works (both fictional and non-fictional) and can also be observed in real time on social media, online forums etc. Therefore, I think it isn’t surprising when a well-trained LLM “picks up” similar strategies (this is another reason - besides energy consumption - why I avoid using chatbots “just for fun”, by the way).

    Of course, “love bombing” is a key tool employed by most abusers, and chatbots appear to be particularly good at doing this, as you pointed out (by telling people what they want to hear, mirroring their thoughts back to them etc.).


  • Some of the comments on this topic remind me a bit of the days when people insisted that Google could only ever be the “good guy” because Google had been sued by big publishing companies in the past (and the big publishers didn’t look particularly good in some of these cases). So now, conversely, some people seem to assume that Disney must always be the only “bad guy” no matter what the other side does (and who else the other side had harmed besides Disney).





  • This is just naive web crawling: Crawl a page, extract all the links, then crawl all the links and repeat.

    It’s so ridiculous - supposedly these people have access to a super-smart AI (which is supposedly going to take all our jobs soon), but the AI can’t even tell them which pages are worth scraping multiple times per second and which are not. Instead, they regularly appear to kill their hosts like maladapted parasites. It’s probably not surprising, but still absurd.

    Edit: Of course, I strongly assume that the scrapers don’t use the AI in this context (I guess they only used it to write their code based on old Stack Overflow posts). Doesn’t make it any less ridiculous, though.
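    The “crawl a page, extract all the links, then crawl all the links and repeat” pattern from the comment above can be sketched in a few lines of Python. This is an illustrative toy, not anyone’s actual scraper: the URLs and page contents are invented, and real fetches are replaced by an in-memory dict so it runs offline. Note what’s missing - there is no rate limiting, no robots.txt check, and no judgment about which pages are worth fetching; a visited-set is the only guard, which is exactly the host-killing behavior being mocked.

    ```python
    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin


    class LinkExtractor(HTMLParser):
        """Collects href targets from <a> tags."""

        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)


    def naive_crawl(start_url, fetch):
        """Breadth-first crawl: fetch a page, extract links, repeat.

        `fetch(url)` returns HTML or None. Every newly discovered URL is
        requested exactly once - no politeness, no prioritization.
        """
        seen = {start_url}
        queue = deque([start_url])
        visited_order = []
        while queue:
            url = queue.popleft()
            html = fetch(url)
            if html is None:
                continue
            visited_order.append(url)
            parser = LinkExtractor()
            parser.feed(html)
            for href in parser.links:
                absolute = urljoin(url, href)  # resolve relative links
                if absolute not in seen:
                    seen.add(absolute)
                    queue.append(absolute)
        return visited_order


    # Toy in-memory "site" standing in for network fetches (invented data).
    SITE = {
        "https://example.org/": '<a href="/a">a</a> <a href="/b">b</a>',
        "https://example.org/a": '<a href="/">home</a>',
        "https://example.org/b": '<a href="/a">a</a>',
    }

    print(naive_crawl("https://example.org/", SITE.get))
    # → ['https://example.org/', 'https://example.org/a', 'https://example.org/b']
    ```

    A well-behaved crawler would add per-host delays, honor robots.txt, and cap revisit frequency; the point of the sketch is that none of that is required for the loop to “work”, which is why naive scrapers hammer sites the way the comment describes.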



  • Under the YouTube video, somebody just commented that they believe that in the end, the majority of people are going to accept AI slop anyway, because that’s just how people are. Maybe they’re right, but to me it seems that sometimes, the most privileged people are the ones who are most impressed by form over substance, and this seems to be the case with AI at the moment. I don’t think this necessarily applies to the population as a whole, though. The possibility that oligopolistic providers such as Google might eventually leave people with no other choice by making reliable search results almost unreachable is another matter.


  • I’m not surprised that this feature (which was apparently introduced by Canva in 2019) is AI-based in some way. It was just never marketed as such, probably because in 2019, AI hadn’t become a common buzzword yet. It was simply called “background remover” because that’s what it does. What I find so irritating is that these guys on LinkedIn not only think this feature is new and believe it’s only possible in the context of GenAI, but apparently also believe that this is basically just the final stepping stone to AI world domination.


  • This somehow reminds me of a bunch of senior managers in corporate communications on LinkedIn who got all excited over the fact that with GenAI, you can replace the background of an image with something else! That’s never been seen before, of course! I’m assuming that in the past, these guys could never be bothered to look into tools as widespread as Canva, where a similar feature had been present for many years (before the current GenAI hype, I believe, even if the feature may use some kind of AI technology - I honestly don’t know). Such tools are only for the lowly peasants, I guess - and quite soon, AI is going to replace all the people who know where to click to access a feature like “background remover”, anyway!



  • Of course, it has long been known that some private investors will buy shares in any company just because its name contains something like “.com” or “blockchain”. However, if a company invests half a billion in an “.ai” company, shouldn’t it make sure that the business model is actually AI-based?

    Maybe, if we really wanted to replace something with AI, we should start with the VC investors themselves. In this case, we might not actually see any changes for the worse.

    Edit: Of course, investors only bear part of the blame if fraud was involved. But the company apparently received a large part of its funding in 2023, following reports of similar lies as early as 2019. I find it hard to imagine that tech-savvy investors really wouldn’t have had a chance to spot the problems earlier.

    Edit No. 2: Of course, it is also conceivable that the investors didn’t care at all because they were only interested in the baseless hype, which they themselves fueled. But with such large sums of money at stake, I still find it hard to imagine that there was apparently so little due diligence.


  • As all the book authors on the list were apparently real, I guess the “author” of this supplemental insert remembered to google their names and to remove all references to fake books from fake authors made up by AI, but couldn’t be bothered to do the same with the book titles (too much work for too little money, I suppose?). And for an author to actually read these books before putting them on a list is probably too much to ask for…

    It’s also funny how some people seem to justify this by saying that the article is just “filler material” around ads. I don’t know, but I believe most people don’t buy printed newspapers in order to read nonsensical “filler material” garnished with advertising. The use of AI is a big problem in this case, but not the only one.