  • You’re talking about free and open competition in a perfect-competition marketplace. This is an ideal (similarly far-fetched as communism/socialism*) where there are low barriers to entry and consumers have good information to make well-informed choices. In this world competition bids down excess profits in the long run - essentially to consumers’ benefit, not the benefit of producers. Wages are low, but it doesn’t matter so much because competition keeps prices low.

    Capitalism wants to increase the return to capital, so it works against competition to create market power (by many means, including legal-system power and regulatory capture, as well as tacit or explicit corruption), both over consumers and over its own supply chain (e.g. employees). It inherits its legacy from rentierism and the landowners who also like to monopolize land, ration it and have tenants bid up rents.

    ‘Objective sources’ on economics? Good luck. Economists are so bi-assed that most of them can spew shit out of two holes simultaneously.

    • Both communism and perfect competition probably work fine in a small closed community, where everyone pretty much has repeated interactions with everyone - visibility - and there will be other examples where they each work fine-ish. But on a large enough scale, anonymity and human nature come into play. The reality is that human trust is excellent, but some people will abuse it when they think they’ll get away with it, and that destroys it.

  • Best replacement for excel is: anything that doesn’t rape your data whilst pouring sugar in your gas tank. /s

    TL;DR - R, Python and MariaDB for real data-analysis stuff, plus a minor role for whatever spreadsheet package.

    For hobby analysis / data manipulation, storage, graphs and general stats fuckery, here’s my advice, as someone who does this stuff - badly, I might add - for a shitty public-sector organisation that just can’t decide whether to bend over M$'s barrel or Oracle’s barrel:

    • Use R (via RStudio if you need an “environment”) for more statsy stuff and easier graphs.

    • Use Python for more general mathsy / programmy / web-scrapy stuff - it can do decent graphs with libraries like plotly and matplotlib; scipy, numpy and pandas are the other basic libraries for analysis, maths and large datasets (a minimal sketch follows this list). People like using ‘Jupyter notebooks’ - I don’t get it personally - but 50 Phil Ochs fans can’t be that wrong.

    • Set up MariaDB or something if you need databasey stuff; I doubt you need to look at more hardcore stuff like PostgreSQL for “hobbying”. My personal (single-user) databases were built several years ago and MariaDB is just fine for that, but some of the high-volume transactional DBs at work do use PostgreSQL.
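    To give a flavour of the Python side, here’s a minimal sketch of the pandas + matplotlib workflow I mean; the file name (“measurements.csv”) and the “date”/“value” columns are made-up examples, not anything from a real dataset:

```python
# Minimal sketch: load a CSV with pandas, summarise it, and plot with matplotlib.
# "measurements.csv" and the "date"/"value" columns are made-up examples.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("measurements.csv", parse_dates=["date"])

# quick summary stats - roughly what you'd build a pivot table for in excel
print(df.describe())

# monthly means, plotted and saved to a PNG
monthly = df.groupby(df["date"].dt.to_period("M"))["value"].mean()
monthly.plot(kind="line", title="Monthly mean value")
plt.tight_layout()
plt.savefig("monthly_means.png")
```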

    These are all good to learn in my experience, even if you think they’re harder than excel (are they though? Array formulae!?). They’re sort of interoperable, subject to learning, and they naturally have their open-source annoyances: a million ways to do everything, and versioning issues. (Excel still has fucking vlookup() though - talk about legacy baggage - but no, it’s not as bad as the open-source maelstroms.)

    You can still output data into a spreadsheet for viewing, formatting and messing with stuff - but there are other ways.

    Footnote: yes, I do still use excel, but mostly for the final formatted report for a customer who wants it. Having R/Python write data directly into excel is so much better than letting excel open anything itself. Excel just can’t let an innocent SNOMED code go unmolested; you have to be on high alert if you let excel actually do anything.
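    For what it’s worth, the “have Python write the xlsx” step can be as simple as the sketch below; the file names and the “snomed_code” column are made up for illustration, and it assumes the openpyxl package is installed for .xlsx output:

```python
# Minimal sketch: read codes as strings and write them straight to .xlsx,
# so nothing round-trips through excel's type guessing.
# "extract.csv", "snomed_code" and "report.xlsx" are made-up names;
# assumes the openpyxl package is installed for xlsx output.
import pandas as pd

# dtype=str stops pandas itself from turning long codes into floats
df = pd.read_csv("extract.csv", dtype={"snomed_code": str})

df.to_excel("report.xlsx", sheet_name="final", index=False)
```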

    A spreadsheet is also handy for messy data cleansing - for looking at the mess, to help refine the R/Python cleansing script. I’d happily use LibreOffice/ODS for any of this, but I don’t fancy putting the request in to IT and . . . having to speak to IT about it.
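    The cleansing script itself tends to be an iterative pandas pass, something like this sketch; again, the file and column names (“name”, “dob”) are hypothetical:

```python
# Minimal sketch of an iterative cleansing pass; "messy_extract.csv" and the
# "name"/"dob" columns are hypothetical.
import pandas as pd

df = pd.read_csv("messy_extract.csv", dtype=str)

# normalise whitespace and case, coerce dates, and flag what still won't parse
df["name"] = df["name"].str.strip().str.title()
df["dob"] = pd.to_datetime(df["dob"], errors="coerce", dayfirst=True)

# dump the rows that still look wrong so you can eyeball them in a spreadsheet
df[df["dob"].isna()].to_csv("still_messy.csv", index=False)
```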