Why, a hexvex of course!

  • 1 Post
  • 24 Comments
Joined 1 year ago
Cake day: June 10th, 2023

  • Ehh, I have a different vision here - AI is useful, it’s just going down the hypermonetisation path at the moment. It’s not great because your data is being scraped and used to fuel paywalled content - that is largely why most folks object.

    It’s also badly implemented, draining a lot of system resources when plugged into an OS for little more than a showy web search.

    Eventually, after a suitable lag, we’ll see Linux AI as the AI we always wanted: a local option with reasonable resource demands.

    The real game changer will be a shift towards custom hardware for AIs (they’re just huge probability models with a lot of repetitive similar calculations). At the moment, we use GPUs as they’re the best option for these calculations. As the specialist hardware is developed, and gets cheaper, we’ll see more local models and thus more Linux AI goodness.
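    The “repetitive similar calculations” point can be made concrete: inference in these models is dominated by repeated matrix multiplies, which is exactly the workload GPUs (and, increasingly, dedicated accelerators) are built for. A toy NumPy sketch, with made-up layer counts and sizes purely for illustration:

    ```python
    import numpy as np

    # Toy "probability model": a stack of dense layers. Real models are far
    # larger, but the workload shape is the same: repeated matrix multiplies.
    rng = np.random.default_rng(0)
    layers = [rng.standard_normal((512, 512)) / np.sqrt(512) for _ in range(8)]

    def forward(x):
        # Each layer is one big matmul plus a cheap nonlinearity; the matmuls
        # dominate the cost, and they are what specialist hardware parallelises.
        for w in layers:
            x = np.maximum(x @ w, 0.0)  # ReLU
        # Softmax turns the final activations into a probability distribution.
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    probs = forward(rng.standard_normal((1, 512)))
    print(probs.shape)  # one probability distribution over 512 outputs
    ```

    Everything here reduces to `x @ w`, which is why hardware that does nothing but fast, cheap matrix multiplication changes the economics of running these models locally.
    
    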


  • At this point, I can use Linux for most things except older fangames, reliable printing (seriously, CUPS is a pain), and some MMORPGs.

    Once I get a month without the university shitting its pants and changing policy overnight, I’ll eat the learning curve and switch (and actually learn to troubleshoot Wine rather than relying on searches).

    When I move, I’m thinking Mint with Cinnamon, because I love that desktop.


  • I think a lot of equity arguments bug me because they often fail to address the real issue (at least in the workplace). It’s a matter of attitude, rather than parity/proportionality.

    However much we hate it, the majority of people in a STEM field will still seek out a straight white man when looking for authority/expertise. That isn’t because he is the greatest expert, or because he holds the highest accessible authority, but because it is an ingrained belief. That’s just wrong, on so very many levels, that I cannot even begin to express how stupid it is.

    Some people have spotted this issue, but their solution is abhorrent - denigrate this group. Raise a generation that looks on this group with contempt, to at least remove the component of authority. It will solve the problem, but it will create a lot more down the line as it becomes the accepted solution. Shall we have a generational genetic lottery forever?

    Oddly enough, I think the “blurring of gender lines” brought about by the trans movement might offer a more meaningful solution to some part of this problem, as it erases the categories themselves, rather than attempting to shift their position.


  • Ah, the calculator fallacy; hello, my old friend.

    So, a calculator is a great shortcut, but it’s useless for most mathematics (i.e. proof!). A lot of people assume that having a calculator means they do not need to learn mathematics - a lot of people are dead wrong!

    In terms of exams being about memory, I run mine open book (i.e. students can take pre-prepped notes in). Did you know some students still cram and forget right after the exams? And that they forget even faster for coursework?

    Your argument is a good one, but let’s take it further - let’s rebuild education into an employer-centric training system, focusing on the use of digital tools alone. It works well, productivity skyrockets, for a few years - but then the humanities die out, pure mathematics (which helped create AI) dies off, and so does theoretical physics/chemistry/biology. Suddenly, innovation slows down, and you end up with stagnation.

    Rather than moving us forward, such a system would lock us into place and likely create out-of-date workers.

    At the end of the day, AI is a great tool, but so is a hammer, and (like AI today) it was a good tool for solving many of the problems of its time. However, I wouldn’t want to only learn how to use a hammer - otherwise, how would I be replying to you right now?!?


  • I’ll field this because it does raise some good points:

    It all boils down to how much you trust what is essentially matrix multiplication, trained on the internet, with some very arbitrarily chosen initial conditions. Early on when AI started cropping up in the news, I tested the validity of answers given:

    1. For topics aimed at 10–18 year olds, it does pretty well. Its answers are generic, and it makes mistakes every now and then.

    2. For 1st–3rd year degree level, it really starts to make dangerous errors, but it’s a good tool for summarising materials from textbooks.

    3. Masters+, it spews (very convincing) bollocks most of the time.

    Recognising the mistakes in (1) requires checking against the course notes, something most students manage. Recognising the mistakes in (2) is often something a stronger student can manage, but not a weaker one. As for (3), you need to be an expert to recognise the mistakes (it literally misinterpreted my own work back at me at one point).

    The irony is, education in its current format is already working with AI: it’s teaching people how to correct the errors it gives. Theming assessment around an AI is a great idea, until you have to create one (the very fact the field is moving fast means that everything you teach about it is out of date by the time a student needs it for work).

    However, I do agree that education as a whole needs overhauling. How to do this? Maybe fund it a bit better, so we’re able to hire folks to help develop better courses - at the moment, every “great course” you’ve ever taken was paid for in blood (i.e. 50-hour weeks of teaching/marking/prepping/meeting arbitrary research requirements).