UNICOS UPDATE

The other day I felt nostalgic and wandered over to the great archive.org site, originally to look for Commodore 64 material. Then, I thought, what the hell! and typed ‘cray’ into the search box. Lo and behold, a lot of hits! There are manuals and a CD image with some marketing or teaching material on it, but among the results, two CD images stood out.

Both are UNICOS install images. I’ll have to see if they are complete and what version (and for what machine) they are, but I wanted to share the news: it appears that there is now a publicly available OS image (maybe even two) for my simulator.

Cray update

Recently I noticed that the C compiler hung on the simulator. I’ve tracked the problem down and fixed it. The online version, as well as the latest sources, should now have the fix and be working.

For the curious, I’ve written up a small story on the reasons and the fixes.

More retro-computing

I ventured into the past again, this time by reviving an old ZX81 computer. The short project included expanding the original memory to 16k and getting the old dinosaur to output video correctly.

You can read about the adventure – and download the design files in case you want to replicate my work – here.

VCF PNW

Well, that’s a mouthful. It stands for ‘Vintage Computer Festival, Pacific Northwest’. It’s happening this weekend, and I’m presenting the Cray work I’ve been doing. Which reminded me that I should also update this page with direct links to all the random babbling I’ve done about this project over the years.

First off, all the articles about the Cray XMP and COS restoration work:

  1. Prelude
  2. A New Hope
  3. The Hunt for the Red Bootcode
  4. The Return of the Cray
  5. The Matrix
  6. First update
  7. Needle in the Hay-stack
  8. A Brave New World
  9. Multiple Platforms
  10. Jobs
  11. Parallels
  12. Turbo

The second series is about the newer J90 (and YMP-el) simulation work, running UNICOS:

  1. A Bridge too Far
  2. That Obscure Object of Desire
  3. Exploring the CD
  4. A New Simulator, part 1
  5. A New Simulator, part 2
  6. A New Simulator, part 3
  7. The UNICOS file-system
  8. Oldies but goodies
  9. A debug session
  10. To SSH and beyond

You can download the latest version of the simulator (including source code).

However, if you’ve read all the above (as I’m sure you just did), you’ll know that the download page doesn’t actually contain functional UNICOS images: I don’t have the rights to distribute them. Because of that, I’ve put the simulator on-line, so – while you still can’t run it yourself – you can experience the full glory of a late 20th century supercomputer.

To anyone coming here because they ran into my exhibit at the VCF PNW, thank you again! To everyone else, thank you for dropping by my page, and enjoy!

The Return of the Cray files

Good news, everyone!

A very nice gentleman – who decided to remain anonymous – contacted Chris and me, saying he has some X-MP tapes we might be interested in. As it turned out, he has an absolute treasure chest of material, in fact much more than ‘just’ a collection of X-MP tapes.

This gave me a reason to re-start my work on the Cray simulator project; in fact, I’ve decided to start a new series. If you’re interested, check it out here.

The Second Computer Revolution

I’ve had a glimpse into the future. It was bright. It was delightful. It was alien and frightening. Nevertheless, I have seen it and now I can’t shake it out of my head. I’ll write it down and hope that it will bring some peace to my mind. Or at least upset yours…

The first revolution

When the computer was invented in the ’40s (all right, some will for certain debate that; let’s just say that the first electrical, digital computers were invented around then), it had a very immediate purpose: to help solve equations faster.

In other words, to help kill people faster.

But with that, the Genie was out of the bottle, and people soon started to see that these machines could be used for other things as well. Among the first applications were payroll processing and the famous election-night predictions. Soon anything and everything that had to do with numbers was ‘computerized’.

If you look carefully, however, you can already see the main characteristic of this revolution: computers are used to automate existing activities. They solve existing equations faster. They replace people in payroll. They become a better bean-counter, an enhanced piece of paper, an improved pen. In other words, the computer became the universal tool.

This is not a great surprise if you think about its origin: a computer is essentially a physical realization of the Universal Turing Machine. A machine that can emulate all other (Turing) machines – a universal tool indeed.

With all that progress though, you would be hard pressed to find an application of computers that really required one. Sure, it’s nice to have a smart-phone in our pockets, but we could make phone-calls without them. It’s neat that we can program the microwave down to the second, but surely ovens predate computers. The latest hype – driverless cars – is just another addition to this long list. Sure, you can sit in a car today and get to places without driving: it’s called a taxi cab!

Few people saw the second revolution coming. Edsger W. Dijkstra was one of them (note the date on the article: 1988). I, for one, wasn’t. I needed to see it first-hand.

The second revolution

So what is this second revolution business? This is the phase when computers start to perform truly unique tasks, things that would be impossible to do without them. When these machines stop being empty shells that we can shape into whatever tool we want them to be – a pen or a cab driver – and become a first-class tool in their own right. I will give you four examples of what is possible when you treat computing machines as a new class of tools:

Video games

I’m not talking about chess games with a computer here. Those belong to the previous phase, where the computer replaced your human opponent. I’m talking about true video games, like Space Invaders, Pac-Man or Mario Bros. Here – and this idea comes from Bret Victor’s talk – the gameplay is truly unique. These games could not possibly have existed without computers. You can’t make Pac-Man out of paper or blocks of wood. You can’t play Space Invaders against a human opponent. You can’t write down the story of Mario in a book. Their existence is only possible due to the invention of computers. I could have chosen more recent examples, but I wanted to go with the classics to prove that the idea is not new: all these games were released around 1980. These old games also show that the computer is not there to make dazzling graphics, surround sound or photorealistic rendering. It’s there to create a new art form, if you wish; a new type of expression, a new way of engaging an audience. In fact, a new type of audience as well: the gamer.

To give it a name, let’s call the innovation here responsive interactivity: the idea that the object you play with not only interacts with you, but reacts to your actions and changes its behavior based on them. Which leads to my second point:

Interactive art

This is the general idea that Bret Victor further explores in his talk Stop Drawing Dead Fish. Responsive interactivity – that computers can be taught to react to your input – can be exploited as an art medium. His example is to create an animated live performance on stage using simple shapes, pre-programmed behaviors and – of course – a computer. This is not a traditional cartoon, however. Being a live performance adds a spontaneity to it that never existed in that art-form. It’s probably closest to puppet theater, though that analogy breaks down as well, since at least part of the action of the ‘puppets’ is not controlled by the ‘puppeteers’.

Internet

My third example is one of those over-used words that mean too many things to actually mean anything. To understand its true disruptive nature, you have to look at the social change it creates. It was made most evident in the Arab Spring. Sure, revolutions have happened before – even waves of revolutions that swept continents – but the way the internet allows people to interact on a global level is new and unprecedented. The free flow of information and goods literally tears the old social, economic and political fabric apart. Regional leadership becomes irrelevant and next to impossible. Central control of information – the main tool of mass manipulation – becomes very hard. The old ‘bad word’ for this is globalization, but the main driving force behind that is the Internet too. Friedman wrote a whole book on the subject.

The key idea is democratized information distribution. It is the next step over Gutenberg (who made it possible for everyone to consume ideas): make it possible for everyone to create and distribute ideas.

Computational Architecture

From these grand visions, let’s get back to something more mundane, and the main trigger for my little rant here: architecture. In his TED talk, Michael Hansmeyer talks about building unimaginable shapes – columns, in his case. The point of his talk is that by using computers he was able to create forms that not only could not have been created without computers, they could not even have been imagined without them. The very process of creating these forms involves computer algorithms (fractals, if I understand what’s going on behind the scenes correctly). When I first saw his work, I got very excited. This was the first time I realized that I was looking at something truly new. And since – at least in my view – big architectural changes have always been tied to enabling technologies, like the invention of the arch or of reinforced concrete, this new application of computers appears to be truly revolutionary. True, it’s only involved in the esthetics, not in the structure – it’s still a column, in fact arguably a column that can’t support weight at the moment – but it’s the first baby-step towards something much more important: a new artistic language for architecture, a truly new style.

The spark in this example is to make the algorithm and the human equal entities in the creative process.

Hallmarks of a revolution

Let me close by summarizing the key aspect of these wide-ranging examples: analogies break down. In fact, if you find a good analogy to explain what’s going on (you store ‘files’ in a ‘folder’ just like you put ‘papers’ in ‘filing cabinets’), chances are, it’s not revolutionary.

Consequently, as Elon Musk put it at the end of his TED talk (at 19:32, if you’re not in the mood to listen through 20 minutes of unrelated stuff), you have to let go of reasoning through analogy to arrive at these truly new ideas. You have to use axiomatic thinking (or work the way physicists do, as he put it). You have to boil things down to their fundamental building blocks and start building things up from there. Or, as I suspect happened in some of the cases above, just try random ideas, be brave enough not to give up, and stumble upon something.

It’s frightening to work on something that doesn’t feel comfortable, known and familiar. It’s all too easy to incrementally improve upon the known – putting a touch-screen on a cell-phone.

This is what Bret Victor is talking about in his ‘The Future of Programming’ talk.
This is what Alan Kay was talking about in ‘The Computer Revolution Hasn’t Happened Yet’.
This is what Dijkstra was talking about in ‘On the cruelty of really teaching computing science’.

Maybe we should start listening…

Update: Karolin Lohmus translated this entry into Estonian. You can read it here.

Help! I need somebody…

Now that the Cray simulator is finally booting the OS, I have to plead for your help! I need SW to run on this machine. I need source code, I need compilers, I need tools. Without these, the machine is almost as dead as if I hadn’t done anything. The OS is just the framework to do useful work in; by itself, it’s useless.

If you have or know of anybody who has experience, old backup tapes, disks, anything that can be used with this machine, please contact me!

On Function Call Inlining

I’ve heard so many people claim that the main benefit of inlining is that the compiler can save the call and return instructions. Some examples:

http://stackoverflow.com/questions/145838/benefits-of-inline-functions-in-c
http://www.exforsys.com/tutorials/c-plus-plus/inline-functions.html

That’s all well and good, but it doesn’t even scratch the surface of the benefits of inlining.

The main reason inlining is a powerful optimization tool is that, once the function is inlined, the compiler can optimize it together with the call-site. Constants can be propagated into the function body. Loops can be re-arranged more efficiently, code can be hoisted, registers can be allocated (more) optimally; essentially all the heavy artillery that an optimizing compiler has can be deployed on the merged function. Furthermore, since the compiler has full information about what’s going on inside the called function, it doesn’t have to make conservative assumptions about side-effects, subsequent function calls, etc.
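To make this concrete, here is a minimal sketch of my own (the function names are hypothetical, not taken from any real codebase). Once sum_scaled() is inlined into total(), the compiler can see that scale is the constant 1, fold the multiplication away entirely and optimize what remains as a plain summation loop – none of which is possible across an opaque function call:

    /* A small helper; compiled in isolation, the compiler must treat
       'scale' as an unknown run-time value. */
    static int sum_scaled(const int *data, int len, int scale)
    {
        int sum = 0;
        for (int i = 0; i < len; ++i)
            sum += data[i] * scale;
        return sum;
    }

    int total(const int *data, int len)
    {
        /* After inlining, 'scale == 1' is propagated into the body:
           the multiply disappears and the loop can be optimized as a
           simple sum. No saved call/return comes close to that. */
        return sum_scaled(data, len, 1);
    }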

The effects of all the above are way more impactful than the saved call and return – even if you take the potential branch-misprediction penalty and parameter save operations into consideration.

Some get it right. Some make a note on this and move on.

The down-side, of course, is that inlining can increase code-size. Over-use of inlining might generate more (instruction) cache-misses, which can hurt your performance quite a bit. My personal guidelines:

  • Small functions should be inlined
  • Large functions with only a few call-sites could be inlined
  • Large functions called from a number of places should not be inlined – though a few specific call-sites could be

Of course, most of these decisions are made by your compiler already, but checking the results – and, on occasion, forcing the compiler to bend your way – can be beneficial: unless you use profile-guided optimization, the compiler has to make inlining decisions based on a static view of the code, so it can’t take, for example, execution frequency into consideration.
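If you do decide to bend the compiler your way, most toolchains offer explicit hints. A quick sketch with the GCC/Clang spellings (MSVC uses __forceinline and __declspec(noinline) instead; the function names are made up for illustration):

    /* Tiny, hot helper: force inlining regardless of the heuristics. */
    __attribute__((always_inline)) static inline int clamp8(int v)
    {
        return v < 0 ? 0 : (v > 255 ? 255 : v);
    }

    /* Large, cold error path: keep it out of line so it doesn't
       pollute the instruction cache at every call-site. */
    __attribute__((noinline)) void report_error(const char *msg);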

Finally, the biggest enemies of inlining (in C++) are virtual functions: it’s very hard for the compiler to see through a virtual function call and realize that you always (or most of the time) call the same virtual function. Providing non-virtual variants and manually calling them in cases when inlining is expected is probably the best way around this problem.
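Here is a minimal sketch of that work-around (the class and member names are mine, purely for illustration): the virtual override simply forwards to a non-virtual variant, and hot loops that know the concrete type call the non-virtual variant directly, where the compiler can inline it:

    #include <cstddef>

    class Filter {
    public:
        virtual ~Filter() = default;
        virtual int apply(int sample) const = 0;  // opaque to the optimizer
    };

    class Gain : public Filter {
    public:
        explicit Gain(int g) : g_(g) {}
        int apply_direct(int sample) const { return sample * g_; }   // inlinable
        int apply(int sample) const override { return apply_direct(sample); }
    private:
        int g_;
    };

    void process(const Gain &f, int *buf, std::size_t n)
    {
        for (std::size_t i = 0; i < n; ++i)
            buf[i] = f.apply_direct(buf[i]);  // no virtual dispatch in the loop
    }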

EDIT: a kind person provided a Slovakian translation here: https://www.zoobio.fr/edu/2017/09/22/na-vyvolanie-funkcie-inlining/

Welcome

Welcome to the newly re-designed Modular Circuits site. I’ve finally taken the time to move the content over to a new blog-based system, where I can keep the site more up-to-date and where – most importantly – you, dear reader, can comment and help me improve the content as well.

It will take some time to get all the kinks out, so please bear with me and report any issues you may find.

And now that I’ve grabbed your attention, may I direct it to the whole new ‘Articles’ section above, where you can find new (and as of now still incomplete) content on H-bridges and electro-mechanical systems.