ed(1) is Turing-Complete

Can a mere text editor be powerful enough to calculate anything a computer can? Sure, there’s Emacs, but it’s really just a Lisp interpreter, so it hardly counts. It turns out that even the humble line editor ed(1) is theoretically capable of such computation.

ed(1), or simply ed (the “(1)” comes from the man page section), is the standard text editor. It is the original Unix text editor and the ancestor, directly or indirectly, of many modern utilities, including sed, grep and vi. Unlike most editors today, ed is a line editor: instead of presenting the user with a screen full of text as a visual editor would, it acts on lines in a buffer through commands.

A system that is Turing-complete is a system that can simulate any Turing machine, and a Turing machine is an abstract machine that can compute anything that can be computed with an algorithm (most programming languages are Turing-complete). To show that a system is Turing-complete, one only needs to show that it can simulate another Turing-complete system. Note that Turing completeness implies nothing about the efficiency of the system.

Rule 110 is an example of a Turing-complete system. It is an elementary cellular automaton (similar to Conway’s Game of Life, but in one dimension only) that was shown to be Turing-complete by Matthew Cook around 2000. As an elementary cellular automaton, the rule determines each cell’s value in the next generation from its current value and the values of its two adjacent cells.
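To make that concrete, here is a minimal Python sketch (separate from the ed implementation) of one Rule 110 step over a circular row of cells. The rule gets its name from its transition table: the eight possible 3-cell neighborhoods map to the eight bits of 110 = 0b01101110.

```python
# One Rule 110 step: each cell's next state is looked up from the 3-cell
# neighborhood (left, self, right), encoded as a number 0-7, which selects
# a bit of 110 (0b01101110). The row wraps around at the edges.
def rule110_step(cells):
    n = len(cells)
    return [
        (110 >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 15 + [1]  # start from a single live cell
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = rule110_step(row)
```

Run for enough generations, this produces the characteristic triangular patterns in which Cook found the structures used in his Turing-completeness proof.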

I had an idea one day: if you could call ed recursively, you’d be able to implement Rule 110 and prove that ed is Turing-complete.

So I got to work on it. It turns out, yes, it is possible to simulate Rule 110 in ed, showing that ed is Turing-complete.

Here it is, the code in full (as of writing), licensed under the MIT License. Instructions on how to run the simulation can be found in the GitHub repository. An overview and a command-by-command walkthrough follow.


!exec ed '%' < '%'


But before that, I’d like to thank Tom Ryder for sparking my interest in ed, and @ed1conf for reviewing the script and keeping my interest in ed high.

Explanation →


Taming your Temporary Files

I don’t know about you, but there have been many times when I wanted to download something only temporarily, with the intent to delete it later: saving an image to upload elsewhere, or saving email attachments I didn’t need to keep just to open them in an external program. There have also been times when I wanted to test something out and had to create files temporarily, whether creating a test program, extracting a compressed archive, or even creating a named pipe (FIFO). I used to keep a ~/tmp directory on disk for this, but it would slowly fill up when I neglected to delete these temporary files, thinking that they “might be useful later”; even to this day, I still have about 12GB of supposedly “temporary” files.

The largest files and folders in my tmp folder. Sorry, but I’m not going to show you my files!

How do you prevent these temporary files from filling up your disk? The obvious answer is to just delete them, and good for you if that works for you, but it evidently doesn’t work for me. You could stop downloading things or generating temporary files, but that’s unreasonable. One solution I came up with when I still ran Windows was to create a fixed-size container and put my temporary files there. That gave my temporary files a maximum size, so they couldn’t overrun my disk. It worked for a while, but I made the mistake of making the container too small (only 256 MB), so in time, I was putting my larger temporary files in a tmp directory outside my container. I was also still not really deleting my temporary files, so in time, the container filled up.

So space-boxing the files is a good thing, but how do you make sure that your temporary files stay temporary and actually get deleted?
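One possible answer, sketched here as an /etc/fstab entry (the mount point and the 2G size cap are illustrative assumptions, not a recommendation), is a RAM-backed tmpfs: like the container, it has a hard size limit, but its contents also disappear on every reboot, so forgotten files can’t accumulate.

```shell
# Illustrative /etc/fstab line: a size-capped, RAM-backed scratch directory.
# /home/user/tmp and size=2G are assumptions; adjust both for your own setup.
tmpfs  /home/user/tmp  tmpfs  rw,size=2G,mode=0700  0  0
```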

Let’s do this in Linux! →

Fixing a filesystem check or mount failure

Beginners on Linux will probably encounter a file system check or mount failure at least once. I know I did. Multiple times. But hard shutdowns are hard to avoid when your computer just freezes! Some people might press i to ignore it every time. Others blindly follow guides, unaware of the effects on their computer.

I for one want to know what the commands I type actually do. So without further ado, with reference to Simon Richter’s reply on an Ask Ubuntu thread, here’s how you can resolve the issue permanently (complete with explanations of each command):

Step 1: Boot up Computer

Press that power button. 😉

Step 2: Enter Manual Recovery Mode

In the GNU GRUB boot loader, press m to enter manual recovery mode.

Step 3: Remount the System

Remount your file system read-only. This prevents the kernel from writing to the disk while it is being checked and repaired.

  • mount is the command.
  • -o allows you to specify mount options.
    • ro is the read-only option.
    • remount is the option to remount an already mounted system.
  • / is the mount point for the entire system.
mount -o ro,remount /

Step 4: Repair the System

Run the file system consistency check and interactive repair. You do not need to understand all of fsck’s internals; it checks the file system’s on-disk structures against the rules of the file system format and repairs any inconsistencies it finds.

  • fsck is the repair command.
  • -f is the flag to force a check even if the file system appears clean.
fsck -f

Step 5: Write the Changes

Write the changes to your system with a sync.

  • sync is the command; it flushes any buffered file system data out to the disk.
sync


Step 6: Reboot Computer

Reboot the system, and the changes will take effect.


Next time you turn on your computer, everything should be running smoothly!

Meet Topanga, my Arch Linux Boot!

Happy New Year everyone!

A lot happened in 2015 for me, especially in the computer side of things.

My Hewlett Packard (HP Pavilion m6 w/ AMD Quad Core A8 4500M APU @ 2.0GHz and AMD Radeon Dedicated Graphics (1GB) w/ 8GB RAM) finally gave up on me. To summarize: I dropped my laptop in high school and damaged the left hinge, and by design, the fan vent sits on the underside of the case. As much as that sounds like a problem, at the time I didn’t think much of it.

Continue reading

Package Management Systems Explained

Any Linux user should be familiar with package management systems. After all, they are a key component of every Linux distro.

Package management systems are an effective way to organize packages for installation, upgrade and deletion.

Package management systems have three sub-components: repositories, packages and package managers. A repository is a database of packages that users can search, download and install. A package contains the files for a particular program along with metadata that includes the package’s name, size and dependencies. When a user wants to install a package, the package manager will automatically search the repositories and install any missing dependencies.
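As a toy sketch of that last point (the package names and dependency data here are invented for illustration, not taken from any real repository), dependency resolution is at its core a walk over a dependency graph, installing each package’s dependencies before the package itself:

```python
# Toy dependency resolver: given a package, list everything that must be
# installed first. Real package managers do far more (versions, conflicts,
# downloads), but the core idea is a walk over declared dependencies.
DEPS = {  # hypothetical repository metadata
    "editor": ["libgui", "libtext"],
    "libgui": ["libc"],
    "libtext": ["libc"],
    "libc": [],
}

def install_order(pkg, seen=None):
    seen = set() if seen is None else seen
    order = []
    for dep in DEPS[pkg]:
        if dep not in seen:
            seen.add(dep)
            order += install_order(dep, seen)
    return order + [pkg]

print(install_order("editor"))  # → ['libc', 'libgui', 'libtext', 'editor']
```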

As a user of Linux Mint, the package manager I’m familiar with is APT/dpkg through Synaptic. While I have no experience integrating package management into my own programming, I can comment on the advantages and disadvantages of a package management system from the perspective of a user accustomed to Windows.


1) Easy installation, upgrade and deletion.

One of the most annoying parts about installing, upgrading or deleting anything on Windows is browsing through all the different options available and figuring out which programs go hand-in-hand with one another.

Continue reading

Season’s Greetings From Linux!

It’s that time of year again! The time when we go around doing last-minute shopping, spend time with friends and family, and remember Christ’s birth.

Programs tend to be more or less the same all year round, but a couple of them change depending on the time of year, especially for Christmas.


From December 18 to January 1, VLC media player’s striped orange cone sports a Santa hat.

VLC Christmas icon

Continue reading