☆ Yσɠƚԋσʂ ☆
  • 145 Posts
  • 31 Comments
Joined 5Y ago
Cake day: Jan 18, 2020

Seems like my home is actually living rent free in that head of yours. 😂



I find small services work fine for well defined and context free tasks. For example, say you have common tasks like handling user authorization, PDF generation, etc. Having a common service to handle those is a good idea. This sort of service bus can be shared by different apps, which can then focus on their business logic and reuse the common functionality.

However, anything that’s part of a common workflow and has shared state is much better handled within a single application. Splitting things out into services creates a ton of overhead, and doesn’t actually address any problems since you have to be able to reason about the entirety of the workflow anyway. You end up with a much more difficult development process where you need a bunch of services running. Making API calls means having to add endpoints, do authentication, etc., whereas within a single app you just make a function call. Debugging and tracing become a lot more difficult, and so on.
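A toy sketch of that overhead difference in Python (all names here — `generate_invoice`, `billing.internal` — are made up for illustration):

```python
import json

# In a single application, shared-workflow logic is just a function call:
def generate_invoice(order: dict) -> str:
    # Hypothetical business logic: same process, same state, easy to trace.
    return f"Invoice for order {order['id']}: ${order['total']:.2f}"

invoice = generate_invoice({"id": 42, "total": 99.5})

# Split out as a service, the same call grows extra moving parts: an HTTP
# client, serialization, auth headers, and a server that must be running.
def generate_invoice_remote(order: dict) -> str:
    payload = json.dumps(order)                     # serialize the request
    headers = {"Authorization": "Bearer <token>"}   # authenticate the call
    # response = http_post("https://billing.internal/invoice", payload, headers)
    # ...plus retries, timeouts, and distributed tracing to debug failures.
    raise NotImplementedError("every call site now needs network plumbing")
```

The in-process version shows up in a stack trace and a debugger; the remote version turns the same failure into a network investigation.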


The amount of churn in the JS ecosystem really is phenomenal. The worst part is that a lot of it is just churn for the sake of churn, without any tangible benefit that you can see.


I imagine it would be the same dynamic, and you could have an emulation layer on the chip with its own instruction set for legacy code while providing direct access to the native instruction set.


I’m not familiar enough with how Habit and Ante represent memory allocation to say, but part of the problem right now is that there’s already a VM baked into the chip to provide the PDP-11 style emulation on top of it. Ideally, we’d want chips that expose their native behavior, and then craft languages to take advantage of it, much like what we’re seeing happen with graphics chips.


I very much agree, this is just a crazy period we have to live through, but sanity will return.


This dynamic illustrates how capitalism goes through different stages. Early on, companies compete on quality trying to attract customers with better products, and you end up with quality things that work well, last a long time, and so on. However, eventually you get to the point where the same volumes of the product are no longer needed, and that’s when you start seeing things like planned obsolescence creep in because the logic of capitalism is that you have to keep selling and growing indefinitely.


Crazy how we’re basically losing the right to personal property under late stage capitalism.


Not just software anymore, increasingly physical products too thanks to the whole IoT nonsense where every appliance you buy has to connect to the manufacturer to work now.



That’s not what the article is saying though. It’s arguing that the memory model that imperative languages assume is not actually how modern chips work. What we end up with effectively is a VM on the chip that pretends to be a really fast PDP-11 style architecture. Writing assembly against this VM still has the same problem. Interestingly, the way modern chips are designed actually fits better with functional style that doesn’t rely on global state.
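As a loose illustration of the stylistic difference (Python as an analogy, not chip-level code): the imperative version threads every step through one mutable cell, while the functional version computes independent values that could in principle run in any order, or in parallel:

```python
from functools import reduce

values = [1, 2, 3, 4]

# Imperative style: a single mutable accumulator every step depends on,
# forcing the iterations into one sequential chain.
total = 0
for v in values:
    total += v * v   # each iteration reads and writes shared state

# Functional style: independent pure computations, combined at the end.
squares = list(map(lambda v: v * v, values))    # no shared state, order-free
total_fp = reduce(lambda a, b: a + b, squares, 0)

assert total == total_fp == 30
```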


It was published a while back, but everything it talks about still very much applies today.





Same experience here: I set it as my default git diff viewer and never looked back.



I think the whole discussion of the architecture was pretty interesting, and it gives some insight into what ActivityPub platforms can do to facilitate scaling going forward. There’s been a lot of discussion previously about whether ActivityPub imposes some inherent limitations on scaling, and it looks like they’ve demonstrated that it can scale pretty far.