When I resumed the blog I wrote that it would be temporary. I can be more productive now, and the blog is only a means for procrastination, so I will suspend it until I have nothing better to do or something I need to say.
After writing about the Salk dilemma (something that probably already has another name) I had planned to write a number of pessimistic posts, but I would not like to become a prophet of doom. For some time I will focus on deep learning; then I will probably continue on the technical side, trying to get things done, until it feels like a tar pit or I find something more useful to do.
The contents of those posts are probably either obvious or something people prefer not to know, so I will leave them as open issues and give a short description of each here. If any of them is particularly interesting to you, let me know: writing 500 words on any of them should not take too long, serving your disappointment sooner rather than later.
- All valuable things are means to health (broadly defined). We can apply science to axiology.
- We are at war: our enemies are killing us.
- Everything is a computer: The obnoxious computer scientist.
- Disregard others.
- Nothing is what it seems: the delusional kakonomy.
- Opportunity cost: slower progress means longer exposure to existential risks.
If those were not focused enough on the economy, I also had a series of posts centered on it:
- Capitalism is the best system we have; we should keep updating it.
- Transaction-based markets are unsuitable for the knowledge economy: artificial scarcity, patents (including on medicines), copyrights, OSS, value creation and capture, etc.
- Collaboration is more complex but more efficient than competition. Can we get it right?
- Bullshit jobs, AI future, and saying goodbye to reality.
- Our economy as an AI for the maximization of profit, and an existential risk.
- Some alternatives (this needs more work):
- Classics: technocracy, meritocracy,…
- Basic income and prizes.
I wanted to study AI, but the economy is the largest and most consequential multi-agent system, and the most dangerous AI. Now I have to either find ways to break it or create a new one; explaining the current one does not seem to be helping (maybe it needs scientific rigor). I will listen if you have a plan for a change to something scientifically (empirically or formally) proven to be better than the state of the art; I may even join.
PS: before I am accused of a messiah complex, I know perfectly well that I will most likely fail. Nevertheless, I am alive and have the privilege of existing for a brief period of time (about as brief as everybody else's, I hope there are no surprises). Trying, even if failing, is the best idea I could come up with for using that time. It is worthwhile, maybe the only worthwhile thing to do. Humans control the environment to a good extent; now we need to become responsible and accountable for it (e.g. global warming) and for our future society, economy, politics…