Husband, father, kabab lover, history buff, chess fan and software engineer. Believes creating software must resemble art: intuitive creation and joyful discovery.

🌎 linktr.ee/bahmanm

Views are my own.

  • 18 Posts
  • 17 Comments
Joined 1Y ago
Cake day: Jun 26, 2023

My fellow software engineer, it's the year 2024...
cross-posted from: https://lemmy.ml/post/17978313

> Shameless plug: I am the author.

cross-posted from: https://lemmy.ml/post/15607790

> Just wanted to share some (exciting) news about my Common Lisp project, [euler-cl](https://github.com/bahmanm/euler-cl). I finally got the time to sit down and integrate it with [Codecov](https://about.codecov.io/)! This means a couple of cool things:
>
> * 📈 Test Coverage Tracking: I can now see how well my code is tested over time, giving valuable insights into code quality.
> * 🏅 Codecov Badge: euler-cl now sports a snazzy Codecov badge to show off!
> * 📦 Reusable Setup: The code and setup process should be simple enough to be used as a reference to integrate Codecov (and potentially other services) into your own Common Lisp projects!
>
> If you're interested this commit is almost all you need: https://github.com/bahmanm/euler-cl/commit/855b014
>
> Let me know in the comments if you have any questions or want to chat about integrating Codecov into your own projects!

Docker Image Fusions for Simpler Workflows!
If you've found yourself manually crafting complex Docker images or repeatedly installing tools, I've got something for you 😁 Check out "fusions" in the bdockerimg project (https://github.com/bahmanm/bdockerimg).

---

With fusions, you merge base images into powerful composite images. Currently there are:

* sdkman.bmakelib
* quicklisp.bmakelib

---

Let me know what other fusions would make your Docker life easier 🙏

Thanks for the pointer! Very interesting. I may actually end up doing a prototype to see how far I can get.


Could bdockerimg make your dev life easier?
I've been working on a small project called **bdockerimg**. It's a collection of pre-built Docker images for some less common development tools (currently bmakelib, QuickLisp, and SDKMAN). The idea is to streamline setup, especially for CI/CD pipelines, where I found myself repeating the same Dockerfile steps a lot. Basic functionality tests are included for a bit of extra peace of mind.

---

👀 Here's the repo if you're interested: https://github.com/bahmanm/bdockerimg

🗣 And here's the Matrix room: https://matrix.to/#/#bdockerimg:matrix.org

---

I'm curious:

* Does this seem like something you might find useful?
* Are there any specific tools you'd love to see as easy-to-use Docker images?

---

This project is still in its early stages, so any feedback or contributions are much appreciated 🙏

[ANN] bmakelib v0.6.0 with enums
cross-posted from: https://lemmy.ml/post/8492082

> _[bmakelib](https://github.com/bahmanm/bmakelib) is a collection of useful targets, recipes and variables you can use to augment your Makefiles._
>
> ---
>
> I just released bmakelib v0.6.0 w/ the main highlight being the ability to define enums and validate variable values against them.
>
> ---
>
> ➤ Makefile:
>
> ```Makefile
> define-enum : bmakelib.enum.define( DEPLOY-ENV/dev,staging,prod )
> include define-enum
>
> deploy : bmakelib.enum.error-unless-member( DEPLOY-ENV,ENV )
> deploy :
> 	@echo 🚀 Deploying to $(ENV)...
> ```
>
> ➤ Shell:
>
> ```text
> $ make ENV=local-laptop deploy
> *** 'local-laptop' is not a member of enum 'DEPLOY-ENV'. Stop.
>
> $ make ENV=prod deploy
> 🚀 Deploying to prod...
> ```

I usually capture all my development-time “automation” in Make and Ansible files. I also use makefiles to provide a consistent set of commands for the CI/CD pipelines to work w/, in case different projects use different build tools. That way CI/CD only needs to know about `make build`, `make test`, `make package`, … instead of Gradle/Maven/… specific commands.
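For illustration, such an “interface” makefile for a Gradle project might look something like this (a made-up sketch; the Gradle task names are just examples):

```Makefile
# CI/CD invokes these targets for every project, regardless of the
# underlying build tool.
.PHONY : build test package

build :
	./gradlew classes

test :
	./gradlew test

package :
	./gradlew assemble
```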

Most of the time, the makefiles are quite simple and don’t need many comments. However, there are times when that’s not the case, hence the need to write a line of comment on particular targets and variables.


> Can you explain what you mean by “check the environment”, and why you’d need to do that before anything else?

One recent example is a makefile (in a subproject) w/ a dozen targets to provision machines and run Ansible playbooks. Almost all the targets need at least a few variables to be set. Additionally, I needed any fresh invocation to clean the “build” directory before starting the work.

At first, I tried capturing those variables w/ a bunch of `ifeq`s, `shell`s and `define`s. However, I wasn’t satisfied w/ the results for a couple of reasons:

  1. Subjectively speaking, it didn’t turn out as nice and easy-to-read as I wanted it to.
  2. I had to replicate my (admittedly simple) clean target as a shell command at the top of the file.

Then I tried capturing that in a target using `bmakelib.error-if-blank` and `bmakelib.default-if-blank`, as below.

```Makefile
##############

.PHONY : ensure-variables

ensure-variables : bmakelib.error-if-blank( VAR1 VAR2 )
ensure-variables : bmakelib.default-if-blank( VAR3,foo )

##############

.PHONY : ansible.run-playbook1

ansible.run-playbook1 : ensure-variables cleanup-residue | $(ansible.venv)
ansible.run-playbook1 :
	...

##############

.PHONY : ansible.run-playbook2

ansible.run-playbook2 : ensure-variables cleanup-residue | $(ansible.venv)
ansible.run-playbook2 :
	...

##############
```

But this was not DRY as I had to repeat myself.

That’s why I thought there might be a better way of doing this, which led me to the manual and then to the method I describe in the post.


> running specific targets or rules unconditionally can lead to trouble later as your Makefile grows up

That is true! My concern is that when the number of targets which don’t need that initialisation grows, I may have to rethink my approach.

I’ll keep this thread posted on how this pans out as the makefile scales.


> Even though I’ve been writing GNU Makefiles for decades, I still am learning new stuff constantly, so if someone has better, different ways, I’m certainly up for studying them.

Love the attitude! I’m in the same boat. I could have just kept doing what I already knew, but I thought a bit of manual reading would be well worth it.


> That’s a great starting point - and a good read anyways!

Thanks 🙏


How do you comment your makefiles?
cross-posted from: https://lemmy.ml/post/6863402

> Fed up w/ my ad-hoc scripts to display the targets and variables in a makefile(s), I've decided to write a reusable piece of code to do that: https://github.com/bahmanm/bmakelib/issues/81
>
> ---
>
> The first step toward that would be to understand the common commenting styles. So far I have identified 4 patterns in the wild which you can find below.
>
> Are there any style guides/conventions around this topic? Any references to well-written makefiles I can get inspiration from?
>
> ---
>
> ### A
>
> ```
> VAR1 = foo ## short one-liner comment
> my-target: ## short one-liner comment
> 	…
> ```
>
> ### B
>
> ```
> # longer comment which
> # may span
> # several lines
> VAR1 = foo
>
> ## comments can be prefixed w/ more than #
> ## lorem ipsum dolor
> my-target:
> 	…
> ```
>
> ### C
>
> ```
> #####
> # a comment block which is marked w/ several #s on
> # an otherwise blank line
> #####
> VAR1 = foo
> ```
>
> ### D
>
> ```
> #####
> #> # heading 1
> # This is a variation to have markdown comments
> # inside makefile comments.
> #
> # ## It's a made-up style!
> # I came up w/ this style and used it to document `bmakelib`.
> # For example: https://is.gd/QtiqyA (opens github)
> #<
> #####
> VAR1 = foo
> ```

GNU Make - Unconditionally run a target before any other targets
cross-posted from: https://lemmy.ml/post/6856563

> When writing a (GNU) Makefile, there are times when you need a particular target(s) to be run before anything else. That can be for example to check the environment, ensure variables are set or prepare a particular directory layout.
>
> ... take advantage of GNU Make's mechanism of `include`ing and `make`ing makefiles which is described in detail in the manual:
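In case it's useful, here's a minimal sketch of that mechanism as I read it from the manual (the file name `prologue.mk` and its recipe are made up, not necessarily what the post lands on). Make attempts to re-make every `include`d makefile before doing anything else, so a rule for a file that is never actually created runs unconditionally on every invocation:

```Makefile
# prologue.mk never exists on disk, so Make runs this recipe on every
# invocation, before considering any other target.  The leading '-'
# keeps Make from complaining that the file is missing.
-include prologue.mk

prologue.mk :
	@echo '🧹 Preparing the build directory...'
	@rm -rf build/ && mkdir -p build/
```

Since the recipe never creates `prologue.mk`, Make doesn't restart itself and simply carries on with the requested goals.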


Thanks. At least I’ve got a few clues to look for when auditing such code.


How to audit a shell-completion script?
I just stumbled upon a collection of bash completions which can be quite handy: https://github.com/perlpunk/shell-completions

I tried mojo, cpan and pip completions in a sandbox and they worked like a charm!

The only question I've got is: has anyone ever done a security audit of the repository? Has anyone taken the time to look at the code? I could try auditing but I'm not even sure what to look for. I feel quite wary of giving an unknown source access to my bash session and what I type.

I’ve got to admit that your points about the author’s presentation skills are all correct! Perhaps the reason I was able to relate to the material and ignore those flaws is that it’s a topic I’ve been actively struggling w/ for the past few years 😅

That said, I’m still happy that this wasn’t a YouTube video or we’d be having this conversation in the comments section (if ever!) 😂


To your point and @krnpnk@feddit.de’s RE embedded systems:

It’s absolutely true that such a mindset is probably not going to work in an embedded environment. The author, w/o explicitly mentioning it anywhere, is really talking about distributed systems where you’ve got plenty of resources, stable network connectivity and a log/trace ingestion solution (like Sumo or Datadog) alongside your setup.

That’s indeed an expensive setup, esp for embedded software.


The narrow scope and the stylistic problems aside, I believe the author’s view is correct, if a bit radical.
One of the major pain points of troubleshooting distributed systems is sifting through the logs produced by different services and teams w/ different takes on what the important bits of information in a log message are.

It gets extremely hairy when you’ve got a non-linear lifeline for a request (ie branches of execution.) And it’s even worse when you need to keep your logs free of any type of information which could potentially identify a customer.

The article and the conversation here got me thinking that maybe a combo of tracing and structured logging can help simplify investigations.


Thanks for sharing your insights.


Thinking out loud here…

In my experience with traditional logging and distributed systems, timestamps and request IDs do store the information required to partially reconstruct a timeline:

  • In the case of a linear (single-branch) timeline, you can always “query” by a request ID and order by the timestamps - and that’s pretty much what tracing will do too.
  • Things, however, get complicated when you’ve got a timeline w/ multiple branches.
    For example, consider the following relatively simple diagram.
    Reconstructing the causality and join/fork relations between the execution nodes is almost impossible using traditional logs, whereas a tracing solution will turn this into a nice visual w/ all the spans and sub-spans.

That said, logs do shine when things go wrong; when you start your investigation by using a stacktrace in the logs as a clue. That (stacktrace) is something that I’m not sure a tracing solution will be able to provide.


> they should complement each other

Yes! You nailed it 💯

Logs are indispensable for troubleshooting (and potentially nothing else) while tracers are great for, well, tracing the data/request throughout the system and analysing the mutations.


I’m not sure how this got cross-posted! I most certainly didn’t do it 🤷‍♂️


Tracing: structured logging, but better in every way
**TL;DR:** The author argues that free-form logging is quite useless/expensive to use. They also argue that structured logging is less effective than tracing, mainly b/c of the difficulty of inferring timelines and causality.

---

I find the arguments very plausible. In fact, I very rarely use logs produced by several services b/c most of the time they just confuse me. The only time I heavily use logs is when troubleshooting a single service and looking at its stdout (or `kubectl logs`.)

However, I have very little experience w/ tracing (I've used it in my hobby projects but, obviously, they never represent the reality of complex distributed systems.)

Have you got real-world experience w/ tracing in larger systems? Care to share your take on the topic?

I think I understand where RMS was coming from RE “recursive variables”. As I wrote in my blog:

> Recursive variables are quite powerful as they introduce a pinch of imperative programming into the otherwise totally declarative nature of a Makefile.

> They extend the capabilities of Make quite substantially. But like any other powerful tool, one needs to use them sparingly and responsibly or end up w/ a complex and hard-to-debug Makefile.

In my experience, most of the time I can avoid using recursive variables and instead lay out the rules and prerequisites in a way that achieves the same. However, occasionally, I have to resort to them, and I’m thankful that RMS didn’t win and they exist in GNU Make today 😅 IMO purist solutions have a tendency to turn out impractical.
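A tiny, made-up example of that one rule in action (not from the blog post; assumes GNU date is available):

```Makefile
# Recursive (=): the value is re-computed every time it is expanded.
now = $(shell date +%s)

# Simple (:=): the value is computed exactly once, right here.
start := $(shell date +%s)

.PHONY : demo
demo :
	@echo start=$(start) now=$(now)
	@sleep 2
	@echo start=$(start) now=$(now)  # 'start' is frozen; 'now' has moved on
```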



Variables in GNU Make: Simple and Recursive
cross-posted from: https://lemmy.ml/post/4908824

> There are two major flavours of variables in GNU Make: "simple" and "recursive".
>
> While simple variables are quite simple and easy to understand, they can be limiting at times. On the other hand, recursive variables are powerful yet tricky.
>
> ...
>
> There is exactly one rule to recall when using recursive variables...
>
> 🧠 The value of a recursive variable is computed every time it is expanded.

I’ll just quote my comment from a similar post earlier 😅

A bit too long for my brain, but nonetheless it’s written in plain English, conveys the message very clearly and is definitely a very good read on the topic. Thanks for sharing.


[CODE-REVIEW] Determine if given lists intersect
cross-posted from: https://lemmy.ml/post/4591838

> Suppose I need to find out if the intersection of an arbitrary number of lists or sequences is empty.
>
> Instead of the obvious _O(n^2^)_ approach I used a hash table to achieve an _O(n)_ implementation.
>
> Now, `loop` mini-language aside, is this idiomatic elisp code? Could it be improved w/o adding a lot of complexity?
>
> _You can view the [same snippet w/ syntax highlighting on pastebin](https://pastebin.com/bSYZrdpR)._
>
> ```lisp
> (defun seq-intersect-p (seq1 seq2 &rest sequences)
>   "Determine if the intersection of SEQ1, SEQ2 and SEQUENCES is non-nil."
>   (cl-do* ((sequences `(,seq1 ,seq2 ,@sequences) (cdr sequences))
>            (seq (car sequences) (car sequences))
>            (elements (make-hash-table :test #'equal))
>            (intersect-p nil))
>       ((or (seq-empty-p sequences)) intersect-p)
>     (cl-do* ((seq seq (cdr seq))
>              (element (car seq) (car seq)))
>         ((or intersect-p (seq-empty-p seq)) intersect-p)
>       (if (ht-get elements element)
>           (setf intersect-p t)
>         (ht-set elements element t)))))
>
> (defun test-seq-intersect-p ()
>   "Test cases."
>   (cl-assert (null (seq-intersect-p '()
>                                     '())))
>   (cl-assert (null (seq-intersect-p '(1)
>                                     '())))
>   (cl-assert (null (seq-intersect-p '(1 2)
>                                     '(3 4)
>                                     '(5 6))))
>   (cl-assert (seq-intersect-p '(1 2)
>                               '(3 4)
>                               '(5 6)
>                               '(1)))
>   t)
>
> (test-seq-intersect-p)
> ```

Now that I know which endpoints I’m interested in and which arguments I need to pass, exporting them to Prometheus is my next step. Though I wasn’t sure where to begin - I was thinking about writing the HTTP requests in Java or Python and exporting the results from there.

Blackbox exporter is definitely easier and cleaner. Thanks for the tip 💯


Using Make and cURL to measure Lemmy's performance
cross-posted from: https://lemmy.ml/post/4560181

> *A follow-up on [[DISCUSS] Website to monitor Lemmy servers' performance/availability](https://lemmy.ml/post/4489142)*
>
> ---
>
> I wanted to experiment w/ Lemmy's APIs to, eventually, build a public-facing performance monitoring solution for Lemmy.
>
> It started w/ a couple of shell commands which I found myself repeating. Then I recalled the saying *"Don't repeat yourself - make Make make things happen for you!"* and, well, stopped typing commands in bash.
>
> Instead I, incrementally, wrote a makefile to do the crud work for me (esp thanks to its declarative style): https://github.com/bahmanm/lemmy-clerk/blob/v0.0.1/run-clerk
>
> ---
>
> TBH there's nothing special about the file. But I thought I'd share this primarily b/c it is a demonstration of the patterns I usually use in my makefiles and I'd love some feedback on those.
>
> Additionally, it's a real-world use-case for [bmakelib](https://github.com/bahmanm/bmakelib) (a library that I maintain 😎 )
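For a rough flavour of the kind of target involved, here's a simplified, hypothetical sketch (not the actual contents of run-clerk; `LEMMY_HOST` and `time.site` are names I made up for illustration):

```Makefile
LEMMY_HOST ?= lemmy.ml

# Time a single API call using curl's built-in timer variables.
.PHONY : time.site
time.site :
	@curl --silent --output /dev/null \
		--write-out '$(LEMMY_HOST) /api/v3/site: %{time_total}s (HTTP %{http_code})\n' \
		'https://$(LEMMY_HOST)/api/v3/site'
```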

cross-posted from: https://lemmy.ml/post/4440129

> *I am not the author.*
>
> https://addons.mozilla.org/en-GB/android/addon/github-license-observer/
>
> https://github.com/galdor/github-license-observer
>
> This is a cool little addon to help you tell, at a glance, if the repository you're browsing on github has an [open source](https://opensource.org/licenses/) license.
>
> Especially relevant nowadays given the trend to convert previously OS repos to non-OS licenses as a business model (eg Akka or Terraform.)

*"Don't repeat yourself. Make Make make things happen for you!"* 😎 I just created a public room dedicated to all things about Make and Makefiles. `#.mk:matrix.org` or matrix.to/#/#.mk:matrix.org Hope to see you there.

cross-posted from: https://lemmy.ml/post/4027414

> TIL that I can use Perl's `Benchmark` module to time and compare the performance of different commands in an OS-agnostic way, ie as long as Perl is installed.
>
> For example, to benchmark curl, wget and httpie you could simply run:
>
> ```Perl
> $ perl -MBenchmark=:all \
>   -E '$cmd_wget = sub { system("wget https://google.com > /dev/null 2>&1") };' \
>   -E '$cmd_curl = sub { system("curl https://google.com > /dev/null 2>&1") };' \
>   -E '$cmd_httpie = sub { system("https https://google.com > /dev/null 2>&1") };' \
>   -E '$timeresult = timethese(15, { "wget" => $cmd_wget, "curl" => $cmd_curl, "httpie" => $cmd_httpie });' \
>   -E 'cmpthese($timeresult)'
> ```
>
> which on my old T530 produces:
>
> ```
> Benchmark: timing 15 iterations of curl, httpie, wget...
>
>       curl:  2 wallclock secs ( 0.00 usr  0.00 sys +  0.42 cusr  0.11 csys =  0.53 CPU) @ 28.30/s (n=15)
>     httpie:  8 wallclock secs ( 0.00 usr  0.01 sys +  4.63 cusr  0.79 csys =  5.43 CPU) @  2.76/s (n=15)
>       wget:  3 wallclock secs ( 0.00 usr  0.00 sys +  0.53 cusr  0.19 csys =  0.72 CPU) @ 20.83/s (n=15)
>
>          Rate httpie   wget   curl
> httpie 2.76/s     --   -87%   -90%
> wget   20.8/s   654%     --   -26%
> curl   28.3/s   925%    36%     --
> ```
>
> Very handy indeed ❤

I wonder why all the downvotes!? The linked article was a good read IMO. What did I miss here?


Good point 👍

Likewise, I never thought I’d need a timestamp w/ a finer resolution than millis, until my tests started failing:

There is a feature in bmakelib (called !!logged) which logs the stdout/err of a given target to disk. When I was writing tests for it, I noticed that occasionally my tests failed where they shouldn’t have (for context, the tests used to create files w/ millis resolution and then check the contents.) Turned out my tests were fast enough that more than one of them would run and finish within a single millisecond, causing the “expected” files to be overwritten.

That’s how I got to thinking that it may be something which can be added to bmakelib.

The benefit is that you don’t need to do much and you ensure the timestamp has a high resolution. That will make it harder to produce difficult-to-debug bugs 😅

The downsides are 1) the cognitive load (yet another thing to know about) and 2) filenames/variables/… will have 3 extra characters which stand for the µsecond fraction.
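For a concrete flavour, something along these lines is what I have in mind (a sketch assuming GNU date; `timestamp.micro` is a made-up name, not the final API):

```Makefile
# µsecond-resolution timestamp, computed once per invocation.
# '%6N' truncates GNU date's nanoseconds field to 6 digits.
timestamp.micro := $(shell date '+%Y%m%d-%H%M%S-%6N')

.PHONY : show-timestamp
show-timestamp :
	@echo $(timestamp.micro)  # eg 20240101-120000-123456
```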

Does that make sense?


[DISCUSS] bmakelib - generate high resolution timestamp
*bmakelib is a minimalist standard library for writing **Makefile**s.*

What do you think about being able to easily generate µsecond precision timestamps in a Makefile?

![](https://lemmy.ml/pictrs/image/f26d4c81-51d1-4dba-8c6f-29d61ae80ca8.png)

Please take a second to look at https://github.com/bahmanm/bmakelib/issues/42 & share your thoughts/emojis 🙏

[DISCUSS] IBM using LLMs to convert COBOL to Java
cross-posted from: https://lemmy.ml/post/3758187

> It's not the 1st time a language/tool will be lost to the annals of the job market, eg VB6 or FoxPro. Though previously all such cases used to happen gradually, giving most people enough time to adapt to the changes.
>
> I wonder what it's going to be like this time now that the machine, w/ the help of humans of course, can accomplish an otherwise multi-month risky corporate project much faster? What happens to all those COBOL developer jobs?
>
> Pray share your thoughts, esp if you're a COBOL professional and have more context around the implications of this announcement 🙏


Thanks! That’s definitely a good starting point.


A community around data
Are there communities focused on data storage and processing? I'm looking for one where people talk a lot about **databases** (No/SQL), **message brokers** (eg Kafka or RabbitMQ) and **data retrieval/processing** patterns.

*I'd create one myself, but I already know I haven't got the time to mod anything.*