Wastholm.com

In my first decade writing Makefiles, I developed the bad habit of liberally using GNU Make’s extensions. I didn’t know the line between GNU Make and the portable features guaranteed by POSIX. Usually it didn’t matter much, but it would become an annoyance when building on non-Linux systems, such as on the various BSDs. I’d have to specifically install GNU Make, then remember to invoke it (i.e. as gmake) instead of the system’s make.

I’ve since become familiar and comfortable with make’s official specification, and I’ve spent the last year writing strictly portable Makefiles. Not only are my builds now portable across all unix-like systems, my Makefiles are also cleaner and more robust. Many of the common make extensions — conditionals in particular — lead to fragile, complicated Makefiles and are best avoided anyway. It’s important to be able to trust your build system to do its job correctly.

This tutorial should be suitable for make beginners who have never written their own Makefiles before, as well as experienced developers who want to learn how to write portable Makefiles.

In the future, if you want a job, you must be as unlike a machine as possible: creative, critical and socially skilled. So why are children being taught to behave like machines?

Our extensive use of Perl to build many of our internal services often comes as a surprise to many, and we can understand why. Perl is a dinosaur among mainstream programming languages. It lacks the glamour that other, relatively younger languages have. There is also a common misconception in the programming world that modern software engineering practices cannot be followed with a language like Perl. In this post, we hope to debunk that myth. We want to give you a glimpse of the developer experience (DX) here at Semantics3, where we write a lot of Perl code but still manage to employ the latest engineering best practices. We would like to highlight that we are able to do so with the help of a tool-chain written entirely in Perl.

This is a long piece (sorry, not sorry!) written specifically about the high-level aspects of deployment: collaboration, safety, and pace. There's plenty to be said about the low-level aspects as well, but those are harder to generalize across languages and, to be honest, a lot closer to being solved than the high-level process aspects. I love talking about how teams work together, and deployment is one of the most critical parts of working with other people. I think it's worth your time to evaluate, from time to time, how your team is faring.

Nest Labs, a home automation company acquired by Google in 2014, will disable some of its customers' home automation control devices in May. This move is causing quite a stir among people who purchased the $300 Revolv Hub devices—customers who reasonably expected that the promised "lifetime" of updates would enable the hardware they paid for to actually work, only to discover the manufacturer can turn their device into a useless brick when it so chooses.

I don’t know about you, but I’ve always been uncomfortable with Jenkins’ apparent statefulness. You set up your Jenkins server, configure it exactly as you want it, then DON’T TOUCH IT.

Fortunately I now have a solution. With a combination of Docker, Python’s Jenkins API modules, the Jenkins Job Builder Python module, and some orchestration using docker-compose, I can reproduce my Jenkins state at will from code and run it in isolated environments, improving it in iterative, trackable steps.
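As a rough illustration of the jobs-as-code idea, the sketch below uses the python-jenkins module to make a server's job list match what the code declares. The URL, credentials, and placeholder job configuration are assumptions; in a real setup the config XML would come out of Jenkins Job Builder definitions rather than the empty template used here.

    # Minimal sketch: drive a Jenkins server's job list from code with python-jenkins.
    # URL, credentials, and the job dictionary are placeholders, not a real setup.
    import jenkins

    JENKINS_URL = "http://localhost:8080"   # e.g. a throwaway instance from docker-compose
    JOBS = {
        # job name -> config.xml; Jenkins Job Builder would normally generate the XML
        "example-build": jenkins.EMPTY_CONFIG_XML,
    }

    def apply_jobs(server, jobs):
        """Create or update each job so the server state matches the code."""
        for name, config_xml in jobs.items():
            if server.job_exists(name):
                server.reconfig_job(name, config_xml)
            else:
                server.create_job(name, config_xml)

    if __name__ == "__main__":
        server = jenkins.Jenkins(JENKINS_URL, username="admin", password="admin")
        print("Connected to Jenkins", server.get_version())
        apply_jobs(server, JOBS)

Because the script converges the server on whatever the code declares, throwing the container away and re-running it reproduces the same Jenkins state.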

This is a simple Bellman-Ford bot that uses the negative cycle detection feature of the algorithm to find favorable currency trades to make in forex markets (in this case we are targeting bitcoin/fiat/scryptcoin markets on btc-e and other exchanges).

Typically the trades the bot finds yield less than 0.5% profit, take three steps, and must be filled quickly to be profitable.
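The trick that makes Bellman-Ford fit this problem is weighting each exchange-rate edge with the negative logarithm of its rate, so that any sequence of trades whose rates multiply to more than 1 appears as a negative-weight cycle. Below is a minimal sketch of that detection step in Python, using made-up rates rather than live market data.

    # Sketch: convert each rate r to a weight of -log(r); a trade cycle whose rates
    # multiply to more than 1 then has negative total weight, which Bellman-Ford detects.
    from math import log

    def find_negative_cycle(vertices, edges):
        """edges: iterable of (src, dst, weight). Returns a list of vertices on a
        negative cycle, or None if no such cycle exists."""
        dist = {v: 0.0 for v in vertices}    # start everything at 0 to catch any cycle
        pred = {v: None for v in vertices}
        last_relaxed = None
        for _ in range(len(vertices)):       # |V|-1 relaxation rounds plus one check round
            last_relaxed = None
            for u, v, w in edges:
                if dist[u] + w < dist[v] - 1e-12:
                    dist[v] = dist[u] + w
                    pred[v] = u
                    last_relaxed = v
        if last_relaxed is None:
            return None
        v = last_relaxed                     # walk back |V| steps to land inside the cycle
        for _ in range(len(vertices)):
            v = pred[v]
        cycle, u = [v], pred[v]
        while u != v:
            cycle.append(u)
            u = pred[u]
        cycle.reverse()
        return cycle

    rates = {                                # placeholder rates, not live market data
        ("USD", "BTC"): 1 / 30000.0,
        ("BTC", "LTC"): 350.0,
        ("LTC", "USD"): 87.0,
    }
    currencies = {c for pair in rates for c in pair}
    edges = [(a, b, -log(r)) for (a, b), r in rates.items()]
    print(find_negative_cycle(currencies, edges))

With these placeholder rates the three-step cycle multiplies out to about 1.015, roughly 1.5% above break-even, so the function reports it.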

The USB Weather Data Receiver USB-WDE1 wirelessly receives data from various ELV weather sensors at 868 MHz. The receiver is connected to a USB port on the computer, so no additional power supply is required. The data is transmitted via a simple serial ASCII protocol, which is well documented by ELV. A Raspberry Pi running Raspbian handles the data acquisition, keeping power consumption very low while remaining completely flexible.
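For the acquisition script itself, reading the receiver's ASCII output can be as simple as the sketch below, which uses the pyserial package. The device node, line speed, and semicolon-separated field layout are assumptions and should be checked against the ELV protocol documentation.

    # Sketch: read the receiver's serial ASCII lines with pyserial.
    # Port name, baud rate, and field layout are assumptions, not confirmed values.
    import serial

    PORT = "/dev/ttyUSB0"   # assumed device node on the Raspberry Pi
    BAUD = 9600             # assumed line speed

    with serial.Serial(PORT, BAUD, timeout=60) as receiver:
        while True:
            line = receiver.readline().decode("ascii", errors="replace").strip()
            if not line:
                continue                 # read timed out with no data
            fields = line.split(";")     # sensor values arrive as semicolon-separated fields
            print(fields)                # replace with real parsing/logging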

If you've ever used git bisect, you know what an incredibly useful tool it is. It allows you to do a binary search through commits to find out which commit caused a particular error. Many people seem unaware of git bisect run ..., which automates this even further, but it has a limitation: it won't let you search for a particular error; it only detects success or failure. So I decided to do something about that.
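One way to get that behaviour is to hand git bisect run a small wrapper that searches the test output for the specific error and maps the result onto bisect's exit-code convention (0 = good, 1 to 127 = bad, 125 = skip this commit). The test command, error text, and script name below are placeholders, not the author's actual tool.

    #!/usr/bin/env python3
    # Sketch of a "git bisect run" helper that hunts for one specific error message
    # instead of treating any failure as bad. Command and error text are placeholders.
    # Usage: git bisect run ./bisect_for_error.py
    import subprocess
    import sys

    TEST_CMD = ["make", "test"]          # placeholder: whatever reproduces the problem
    ERROR_TEXT = "segmentation fault"    # placeholder: the particular error to hunt for

    try:
        result = subprocess.run(TEST_CMD, capture_output=True, text=True, timeout=600)
    except (OSError, subprocess.TimeoutExpired):
        sys.exit(125)                    # could not run or hung: tell bisect to skip this commit

    output = result.stdout + result.stderr
    if ERROR_TEXT in output:
        sys.exit(1)                      # the specific error is present: mark this commit bad
    sys.exit(0)                          # anything else, including other failures, counts as good

After marking an initial good and bad commit, you would run git bisect run ./bisect_for_error.py and let bisect narrow down the offending commit.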

Backupninja allows you to coordinate system backups by dropping a few simple configuration files into /etc/backup.d/. Most programs you might use for making backups don't have their own configuration file format. Backupninja provides a centralized way to configure and schedule many different backup utilities. It allows for secure, remote, incremental filesystem backups (via rdiff-backup), compressed incremental data, backups of system and hardware info, encrypted remote backups (via duplicity), safe backups of MySQL/PostgreSQL databases, Subversion or Trac repositories, burning of CDs/DVDs or creation of ISOs, and incremental rsync with hardlinking.
