• dual_sport_dork 🐧🗡️@lemmy.world · 8 months ago

    Well, here’s another example of the level tech journalism has sunk to.

    163-inch 4K Micro-LED television that one home theater expert described as “tall as Darth Vader.” Each of the TV’s 8.3 million pixels is an independent, miniscule LED, a feat for which TCL charges over $100,000.

    But here’s the real surprise: TCL’s new TV isn’t the most pixel-dense or exotic display ever produced.

    No fucking shit, Sherlock. It is trivial these days to buy a laptop with a much smaller screen but exactly the same 3840x2160 = 8,294,400 pixels on it. Smaller screen, same number of pixels, more pixel-dense. The Sony Xperia Z5 Premium is a phone with that same pixel count.

    Duh…?
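
    For anyone who wants to check the math, here’s a minimal Python sketch of the pixel-density comparison. The 15.6-inch laptop diagonal is just an assumed example and the Z5 Premium’s 5.5-inch screen size is from memory, so treat the exact figures as ballpark rather than quoted specs.

    ```python
    import math

    def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
        """Pixels per inch for a panel of the given resolution and diagonal size."""
        return math.hypot(width_px, height_px) / diagonal_in

    # The same 3840x2160 = 8,294,400 pixels spread across very different diagonals.
    displays = {
        "TCL 163-inch TV": 163.0,
        "15.6-inch 4K laptop (assumed size)": 15.6,
        "Sony Xperia Z5 Premium, 5.5-inch": 5.5,
    }

    for name, diagonal in displays.items():
        print(f"{name}: ~{ppi(3840, 2160, diagonal):.0f} ppi")
    # Roughly 27 ppi for the TV, ~282 ppi for the laptop, ~800 ppi for the phone.
    ```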

    The Vision Pro is wireless out of the box, but it’s somewhat heavy, struggles with meager battery life which, and can’t match the fidelity of Varjo or Pimax headsets.

    Apparently nobody proofreads or does any copy editing anymore, either. Or maybe the whole damn thing is outsourced to ChatGPT now, who the fuck knows.

    • Marcbmann@lemmy.world · 8 months ago

      It’s definitely written by someone who’s never used a VR headset. It only takes a second to realize that these screens are nowhere near the resolution of your eye. Ya know, ’cause small text that would be easily read on my phone is blurry as fuck on a VR headset.

      • Tarquinn2049@lemmy.world · 8 months ago

        I can see someone who last tried VR ten years ago putting on an Apple Vision Pro and being shocked that the resolution is so high, only to be informed that it’s a modest increase over other current headsets and that they’re all pretty clear now. But really, they should know that if it were anywhere near “retina resolution”, Apple would have been all over making that claim.
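
        For a rough sense of why nobody calls these headsets “retina”, here’s a quick pixels-per-degree sketch. The per-eye pixel counts and the ~100° field of view are ballpark assumptions for illustration, not quoted specs; ~60 ppd is the commonly cited figure at which 20/20 vision stops resolving individual pixels.

        ```python
        # Ballpark numbers only -- assumed for illustration, not official specs.
        RETINA_PPD = 60.0  # ~1 arcminute per pixel, the usual 20/20 benchmark

        headsets = {
            "Apple Vision Pro (assumed ~3660 px/eye, ~100 deg FOV)": (3660, 100),
            "Typical current headset (assumed ~2160 px/eye, ~100 deg FOV)": (2160, 100),
        }

        for name, (horizontal_px, fov_deg) in headsets.items():
            ppd = horizontal_px / fov_deg  # crude average; ignores lens distortion
            print(f"{name}: ~{ppd:.0f} ppd, about {ppd / RETINA_PPD:.0%} of 'retina'")
        ```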

    • NightAuthor@lemmy.world · 8 months ago

      ChatGPT writes much better than this article; here’s its response to your comment as an example:

      Your frustration with the state of tech journalism is understandable, especially when it comes to covering high-end technology like the 163-inch 4K Micro-LED television by TCL. It’s true that on a surface level, comparing the pixel density of large screens to that of smaller devices like laptops or smartphones can seem straightforward. However, the engineering and design challenges involved in scaling up screen technology while maintaining image quality cannot be understated. TCL’s achievement lies not just in the number of pixels but in creating a large-scale display that maintains high-quality imagery, which is a different kind of technological feat compared to miniaturization.

      Regarding the editorial errors and the critique of the writing quality, it’s clear that tech journalism, like many fields, faces challenges in maintaining high standards amid the fast pace of technology news and the pressure to publish quickly. While it’s disappointing to see, it highlights the ongoing need for rigorous proofreading and editorial oversight in publishing. However, attributing these issues to automation or outsourcing without evidence might not fully capture the complexities and pressures faced by publishers today. It’s crucial for the industry to address these issues to maintain credibility and provide the insightful, accurate tech coverage that readers deserve.

    • KairuByte@lemmy.dbzer0.com · 8 months ago

      maybe the whole damn thing is outsourced to ChatGPT now, who the fuck knows.

      I don’t understand why so many people assume an LLM would make glaring errors like this…

      • drislands@lemmy.world · 8 months ago

        …because they frequently do? Glaring errors are like, the main thing LLMs produce besides hype.

        • KairuByte@lemmy.dbzer0.com · 8 months ago

          They make glaring errors in logic and confidently state things that are not true. But their whole “deal” is writing proper sentences based on predictive models. They don’t make mistakes like the one in the excerpt above.

          • drislands@lemmy.world · 8 months ago

            Y’know what, that’s a fair point. Though I’m not the original commenter from the top, heh.

          • Garbanzo@lemmy.world · 8 months ago

            I’m imagining that the first output didn’t cover everything they wanted, so they tweaked it, pasted the results together, and fucked it up.

          • GlitterInfection@lemmy.world · 8 months ago

            Pretty soon, glaring errors like this will be the only way to tell human writing from LLM writing.

            Then soon after that the LLMs will start producing glaring grammatical errors to match the humans.