A general programming discussion community.
Rules:
- Be civil.
- Please start discussions that spark conversation.
C may not meet the author's definition of low level, but it is still far lower level than most viable alternatives.
The author is confusing two completely different things and comes to a wrong conclusion.
He states that C isn’t low level because CPUs are much more complex today. But those aren’t related. His argument would be no different if he claimed assembly isn’t a low-level language.
That the CPU speculatively executes instructions and maintains many levels of cache doesn’t change the fact that C is low level. Even if you wrote a program in raw opcodes, you couldn’t change that.
There was a single paragraph supporting his argument: that optimizing compilers can create machine code wildly different from what you might expect.
Then he goes off on a complete tangent of how C isn’t good for parallel processing which has nothing to do with his thesis.
I think for these types of discussions it’s really necessary to clearly define what “low level” really means, something both you and the author kinda skip over. I think a reasonable definition is the number of layers of abstraction between the language’s model of the machine and the actual hardware.
The author is correct that nowadays, on lots of hardware, there are considerably more abstractions in place and the C abstract machine does not accurately represent high performance modern consumer processors. So the language is not as low level as it was before. At the same time, many languages exist that are still way higher level than C is.
I’d say C is still in the same place on the abstraction ladder it’s always been, but the floor is deeper nowadays (and the top probably higher as well).
But it’s the hardware that has changed, not C. As I said, by his argument, assembly isn’t a low-level programming language either.
Besides, early RISC CPUs from the ’80s had out-of-order write-back, so this isn’t new. By the ’90s, all RISC designs were out-of-order. The first was the IBM System/360 Model 91 from the 1960s.
I agree!
Indeed, I could have worded that a bit better. But I think we agree on the fundamental points.
Short answer: No, this guy is all the way up his own rear end.
Longer answer:
Author: “C is not ‘close to hardware’”
Also Author: “Successful one-to-one struct comparisons may require padding, which isn’t automatically applied!!!”
Like if you have an entire PhD on this stuff and you don’t understand how and why you need to pad, when you need to do it, and how to calculate the proper amount of padding, maybe somebody should’ve stopped you before you showed your whole ass on the Internet like that.
(Padding is applied to align chunks of data to the size of memory accesses possible in a given architecture; it is extremely system-dependent, and you use it in very specific circumstances that you, a beginner, do not need to understand right now, other than to say that if the senior says thou shalt not fuck with my struct, you better not.)
By this logic assembly isn’t low level either.
C is low level because it allows you to manipulate pointers. Most higher-level languages don’t let you do that.
I see the argument that C on an Intel CPU is not low level enough for this person. On Arm Cortex-M and Cortex-R series CPUs, it’s low enough. (I don’t know enough about the A series to say.)
The gripe is mostly that there’s microcode in the pipeline for branch prediction, and that takes the control away from them. If you want to own that control, you’re going to lose on speed.
Should you be bothered? Generally, no. If you are, there are CPU options out there with smaller pipelines and much less prediction going on. But don’t expect them to compete in the same arena as an application-class CPU (Intel, AMD, likely Arm A series).
The C compiler or third-party libraries can provide support for parallel execution and SIMD. That article is mostly used by people attempting to argue that C’s strength as a good low-level abstraction is false, which it isn’t. C is the best portable abstraction over a generic CPU that I know of. If you start adding parallel features and SIMD like the article suggests, you’ll end up with something that’s not a portable low-level abstraction. To be portable, those features would have to be implemented as slow, fake variants on platforms that don’t support them. We can always argue about where to draw the line, but I think C nailed it pretty well on what to include in the language and what to leave up to extensions and libraries.
C is not a perfect abstraction, because it is portable. Non-portable features for specific architectures are accessed through libraries or compiler extensions, and many compilers even include memory-safe features. It’s a big advantage, though, to know assembly for your target platform when developing in C, so that you become aware, for example, that x86 actually detects integer overflow and sets an overflow flag, even though that’s not directly accessible from C. Compilers often implement extensions for such features, but you can also extend C yourself with small functions for architecture-specific features written in assembly.
ITT: People who didn’t understand the article
OP: You should not be bothered. The author’s arguments are perfectly valid IMO, but they’re way way beyond a beginner level. C is already a fairly challenging language to get your head around, and the author is going way beyond that into arguments about the fundamental theoretical underpinnings of C and its machine model, and the hellish complexities of modern microcode-and-silicon CPU design. You don’t need to worry about it. You can progress your development through:
… and not worry about step 5 until much much later.
deleted by creator
Er… sort of. He brings up some towards the end:
I would add to that Go, with its channel model of concurrency, which I quite like, and numpy, which in my experience does an excellent job of giving you fast parallelized operations on big parallel structures while still giving you a simple imperative model for quick simple operations. There are also languages like Erlang or ML that try to do things in a totally different way, which in theory can lend itself to much better use of parallelism, but I’m not really familiar with them and I have no idea how well the theoretical promise works out in terms of real-world results.
I’d be interested to see someone with this guy’s level of knowledge talk about how well any of that maps onto actually well-parallelized operations when solving real problems on real-world CPUs (in the specific sense he means when he criticizes how well C maps to them), because personally I don’t really know.