Lemmy search isn’t great, or I’m too new to it, so I can’t tell if this has been posted here before.

  • vegetaaaaaaa@lemmy.world · 1 year ago (edited)

    awesome-selfhosted maintainer here. This critique comes up often (and I sometimes agree…), but it’s hard to properly “fix”:

    Any rule that enforces some kind of “quality” guideline has to be written explicitly into the contribution guidelines, so as not to waste submitters’ (and maintainers’) time.

    As you can see, there are already minimal rules in place (software has to be actively maintained and properly documented, the first release must be older than 4 months, and it must of course be fully Free and Open-source…). Anything more is very hard to word objectively, or is plain unfair - in the 7 years (!) I have spent maintaining the list, I have thought about this for countless hours.
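
    For illustration, two of those rules could even be checked automatically for GitHub-hosted submissions. A rough sketch - this is not the list’s actual tooling, and the one-year “actively maintained” threshold here is my own assumption:

    ```python
    # Illustrative sketch only - not awesome-selfhosted's actual tooling.
    # Checks two submission rules for a GitHub-hosted project, using real
    # fields from the public GitHub REST API.
    from datetime import datetime, timedelta, timezone
    import requests

    API = "https://api.github.com/repos"

    def iso(ts: str) -> datetime:
        return datetime.fromisoformat(ts.replace("Z", "+00:00"))

    def check_submission(owner_repo: str) -> dict:
        releases = requests.get(f"{API}/{owner_repo}/releases").json()
        commits = requests.get(f"{API}/{owner_repo}/commits").json()
        now = datetime.now(timezone.utc)
        first_release = min((iso(r["published_at"]) for r in releases), default=None)
        last_commit = iso(commits[0]["commit"]["committer"]["date"])
        return {
            # rule: first release must be older than 4 months
            "first_release_older_than_4_months":
                first_release is not None and now - first_release > timedelta(days=4 * 30),
            # "actively maintained" approximated as a commit within the last
            # year - the exact threshold is an assumption, not the list's rule
            "actively_maintained": now - last_commit < timedelta(days=365),
        }

    print(check_submission("awesome-selfhosted/awesome-selfhosted"))
    ```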

    For example, rejecting new projects because an existing, already-listed one does effectively the same thing would give an unfair advantage to older projects, “locking out” newer ones. Moreover, you will rarely find two projects with the exact same feature set, workflow, release frequency, technical requirements… and every user has different needs and requirements, so yes, users of the list are expected to do some research to find the best solution for their particular needs.

    This is, of course, less true for some categories (why are there so many pastebins??). But again, it’s hard to find clear, objective criteria for what deserves to be listed and what does not.

    If we started rejecting projects because “I don’t have a need for it” or “I already use a somewhat equivalent solution and am not going to switch”, that would discard 90% of the entries in the list (and not necessarily the worst ones). I do check that projects being added are in a “production-ready” state, and I ask more questions during reviews if needed. But it’s hard to be more selective than we already are without falling into subjective “I like it/I don’t like it” reasoning (let’s ban all Node.js-based projects, npm is horrible and a security liability; let’s also ban all projects so convoluted and impossible to build and install properly that Docker is the only installation option… you see where this leads?)

    Also, Free Software has always been very fragmented, which is both a strength and a weakness. The list simply reflects that.

    Another idea I contemplated is linking each project to a “review” thread for the software in question. But I will not host or moderate such a forum/review board, and it would be heavily brigaded by PR departments looking to promote their companies’ software.

    An HTML version (based on the same data) is coming out soon that will hopefully make the list easier to browse.

    I am open to other suggestions, keeping in mind the points above…

    > 250+ self hostable apps

    1268 exactly.

    You can help clean up the list of unmaintained projects by working on this issue.

    • JoeyJoeJoeJr@lemmy.ml · 1 year ago

      I would imagine the source for most projects is hosted on GitHub or similar platforms? Perhaps you could consider forks, stars, and followers as “votes” and sort each subcategory based on the votes. I would imagine that would be scriptable - the script could be included in the awesome list repo and run periodically. It would be kind of interesting to tag “releases” and see how the sort order changes over time. If you wanted to get fancy, the sorting could probably happen as part of a CI task.

      If workable, the obvious benefit is that you don’t have to exclude anything for subjective reasons, and it’s easier for readers of the list to quickly find the “most used” options.
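
      As a rough sketch of what I mean (the entry format here is invented, but stargazers_count, forks_count and subscribers_count are real GitHub API fields):

      ```python
      # Quick illustrative sketch, not production code: sort the entries in a
      # category by GitHub "votes" (stars + forks + watchers).
      import requests

      def votes(source_url: str) -> int:
          """source_url is assumed to look like https://github.com/{owner}/{repo}."""
          owner_repo = source_url.rstrip("/").split("github.com/")[1]
          data = requests.get(f"https://api.github.com/repos/{owner_repo}").json()
          return data["stargazers_count"] + data["forks_count"] + data["subscribers_count"]

      # invented example entries - the real list's data format may differ
      category = [
          {"name": "Project A", "source": "https://github.com/example/project-a"},
          {"name": "Project B", "source": "https://github.com/example/project-b"},
      ]
      category.sort(key=lambda e: votes(e["source"]), reverse=True)
      for entry in category:
          print(entry["name"])
      ```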

      Just an idea off the top of my head. You may have already thought about it, and/or it may be full of holes.

      • vegetaaaaaaa@lemmy.world · 1 year ago (edited)

        > I would imagine that would be scriptable - the script could be included in the awesome list repo and run periodically.

        The next version of the list will be based on https://github.com/awesome-selfhosted/awesome-selfhosted-data (raw YAML data), so it will be much easier to integrate with scripts. There is already a CI system running at https://github.com/awesome-selfhosted/awesome-selfhosted-data/actions, and a preview of an enriched export at https://nodiscc.github.io/awesome-selfhosted-html-preview/ that takes stars, last-update dates and other metadata into account. This will all go live “soon”.
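
        For example, consuming the raw YAML data with your own script could look something like this (a sketch only - the directory layout and field names below are assumptions, check the repository’s actual schema):

        ```python
        # Rough sketch of consuming the raw YAML data with a custom script.
        # The software/ directory layout and the "licenses"/"name" fields are
        # assumptions - check the repository's actual schema.
        from pathlib import Path
        import yaml  # pip install pyyaml

        entries = []
        for path in Path("awesome-selfhosted-data/software").glob("*.yml"):
            entries.append(yaml.safe_load(path.read_text(encoding="utf-8")))

        # e.g. count projects per license
        by_license: dict[str, int] = {}
        for e in entries:
            for lic in e.get("licenses", []):
                by_license[lic] = by_license.get(lic, 0) + 1

        for lic, count in sorted(by_license.items()):
            print(f"{lic}: {count} projects")
        ```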

        > Perhaps you could consider forks, stars, and followers as “votes” and sort each subcategory based on the votes.

        > it’s easier for readers of the list to quickly find the “most used” options.

        This would exclude (or push to the bottom of the list) all projects that are not hosted on these (mostly proprietary) platforms. Right now only metadata from GitHub is being parsed; in the future this will expand to GitLab, and maybe Gitea instances or similar, but it will take time, and not all platforms have stars/followers/forks features. It would also introduce a huge bias, as GitHub projects will have a lot more forks/followers/… than projects hosted on independent forges. Star counts can also be (and absolutely are) manipulated by some projects that want to get “trending”.

        Also, popularity != quality. A project whose code is hosted on cgit can be as good as, or even better than, a project on GitHub (even more so in the context of self-hosting…).

        > Just an idea off the top of my head. You may have already thought about it, and/or it may be full of holes.

        It was a good idea :) But as you can see, it has its flaws.