
Linux used to be able to run Xenix and SCO UNIX applications. I have not tried this in years, but I'm wondering if the mechanism can be shared such that any OS can be emulated.


Linux still can: https://sourceforge.net/projects/ibcs-us/

That's my project. I also did the port of iBCS to amd64, which is what ibcs-us is based on.

I moved the support out of the kernel because the rate of change of the kernel's internal APIs makes maintaining invasive kernel modules like this far too much work. Every time I looked at the kernel, the way syscalls worked internally had changed in some way, and broke ibcs64 as a consequence.

Decades ago Linux had full-on support for other syscall interfaces via its personality mechanism. See personality(2). Most of the functionality has been ripped out as the kernel's internal API evolved. I'm guessing maintainers noticed the personality stuff wasn't used, so it was easier to delete it than port it to their shiny new API.

At the time I thought that would be a disaster for iBCS. Now, after porting ibcs64 to user space, I think personality(2) was the wrong way to do it. It turns out iBCS is mostly glue that emulates foreign syscalls using Linux equivalents, and as ibcs-us demonstrates, that kernel space glue runs equally well in user space. It should not be in the kernel, where it bloats things and could introduce security holes.

Once you realise the glue code can run anywhere, there is only one problematic area remaining: directing the foreign syscalls to your user space glue code. Here ibcs-us could really use some kernel help to make it both fast and safe, but it's a use case that apparently hasn't crossed the Linux kernel devs' minds.

This is exactly the same problem the Wine developers are having, and are trying to solve in these LWN articles. To me their solution looks like a horrible kludge. That isn't a criticism. I don't see how anything that tries to make minimal changes to the kernel is going to be anything other than a kludge.

If you want to do it properly, you need to add some mechanism to the kernel that lets user space redirect whatever syscall method is in use to a user space trampoline page. There are any number of syscall methods out there: int's, sysenter's, lcalls. The kernel has to provide a way of trapping each and redirecting them to the trampoline, but once they are there they are the emulator's problem. Mostly. The emulator then needs a second mechanism to make the real Linux kernel syscall that isn't redirected. One way to do that is to say "syscalls from within this address range aren't to be emulated". I’m guessing that’s what OpenBSD does now, but replace “aren’t to be emulated” with “are allowed”.

This could be thought of as the kernel providing a way to virtualise its own syscall interface. There is no need to stop at one level either, as there is no good reason one virtualised interface shouldn't end in another trampoline, unbeknownst to it. So a Xenix 286 emulator could be running inside a Xenix 386 emulator running on Linux. It would be lovely for ibcs-us of course, but LD_PRELOAD shims, virtual machines and yes, even Wine all want to do the same thing, so it is a broadly applicable use case.


So I remember a big use case for SCO UNIX was small governments (like at the town level). Do you still see such users of iBCS?

Anyway, this seems like a very good idea. It provides a very general light-weight VM mechanism.


> So I remember a big use case for SCO UNIX was small governments (like at the town level). Do you still see such users of iBCS?

I don't know who they are, which is the normal situation in the open source world. The few I've interacted with seem to use Informix. I, or rather the company I work for, am the only exception I've met to that.


Were those 286 Xenix applications or 386 Xenix applications?

Running 286 Xenix applications on a 386 or later Unix or Unix-like system is fairly straightforward as far as the 386 kernel is concerned. 286 Xenix used a different syscall mechanism than 386 Unixes did, so you didn't have the problem of the kernel needing to deal with 286 and 386 syscalls ending up in the same place.

My recollection, from being half of the two-person team that implemented 286 Unix binary compatibility for System V Release 3.2 on the 386 at ISC, and then being half of the different two-person team that did 286 Xenix binary compatibility, is that this is what was required in the kernel:

1. Adding a mechanism to allow a process to set up 16-bit segments as aliases to portions of its address space.

2. Modifying exec to recognize a 286 Unix or Xenix binary, and turn the exec into an exec of /bin/i286emul or /bin/x286emul, with the path to the 286 binary as an argument.

3. I don't remember if we had to do anything special for the 286 system call mechanism, because I don't actually remember what that mechanism was. I don't remember if what the binaries would try to do for a syscall would simply cause a trappable signal, and then the signal handler would recognize the system call and take care of it, or if the system call mechanism used something like a call gate that the 386 code had to set up, and so we had to add a mechanism to allow the 386 code to do that.

That was almost all the kernel work. Pretty much everything else could be handled in user mode code. When you would run a 286 Xenix binary, that turned into an exec of /bin/x286emul. x286emul would read the Xenix binary, allocate memory for it and load it, map segments to it, and jump to the code, switching the processor to 16-bit mode.

When the Xenix code did a system call and it ended up in the handler in x286emul, for most system calls it was simply a matter of copying arguments from where the 286 call put them to where the corresponding 386 call expected them, making the 386 system call, then putting results in the right place, and returning to the Xenix code.

For some system calls, more work was needed. Signal handling, for instance. If the 286 code wanted to trap a signal, the 386 code would have to set a trap of its own, and its handler would then have to deal with the stack fiddling and such to deliver it to the 286 code.

Speaking of signal handling, that led to the stupidest, most annoying meeting I've ever had to attend.

ISC had done the official 386 port of System V Release 3 under contract from AT&T, and we did the 286 Unix binary compatibility as part of that. Later, there was a deal between AT&T and Sun and Microsoft to make Unixes more compatible, which included Xenix compatibility, and we were doing the Xenix compatibility as part of that, under contract from Microsoft.

For the 286 binary compatibility, ISC did the whole thing--all the user mode code in i286emul and all the supporting kernel changes. But for Xenix compatibility, ISC was just doing x286emul. Microsoft was doing any kernel modifications needed, which wasn't very much because the mods already there for i286emul mostly worked fine for x286emul.

There was one difference between Xenix and Unix signal handling that we could not fully address in x286emul. We needed kernel support--some sort of per-process flag that would tell it the process wanted signals to behave like Xenix signals. We asked Microsoft to add such a flag.

A bit later they got back to us and said there were issues with such a flag that could not be settled by email or phone. We had to have an in-person meeting. So me and the other guy working on x286emul had to fly early one morning from Los Angeles to Seattle, take a rental car to Redmond, and attend a meeting with Microsoft.

At the meeting, we introduced ourselves and the half dozen or so Microsoft people introduced themselves, and then they presented this issue that could not be handled by email or phone: should this flag be controlled by an ioctl, or should it be an optional new parameter for the signal() call [1]. We gave our opinion, that was the end of the meeting, and we headed for the airport.

[1] or something like that. I don't remember the exact second option.


Well, 386, but 286 compatibility would be nice also. I'm still annoyed that AMD dropped virtual 8086 mode from x86_64. I liked it because Windows ran MS-DOS programs better than MS-DOS ever did, and I would still use this facility if it were available in 64-bit operating systems. Virtual 8086 mode in Linux was also pretty good, but the graphics modes were better supported in Windows.


To be fair, anything written against an actual 8086 can be brute-force software emulated faster than real time at this point. It's no excuse, but it's less of a problem than it could be.


This is entirely true, but I can tell you that DOS OrCAD does not work as well in DOSBox as it does in Windows XP. The issue is not performance, but the variable-size video modes available in Windows and not in DOSBox.


Oh, absolutely; I wasn't saying that a full-featured emulator actually exists, just that the performance difference was huge enough that you could get away with using one as a replacement for amd64's missing virtualization. Video mode support is an operating system interface problem rather than a hardware-can't-run-fast-enough problem.


https://en.wikipedia.org/wiki/Intel_Binary_Compatibility_Sta...

The Intel Binary Compatibility Standard (iBCS) is a standardized application binary interface (ABI) for Unix operating systems on Intel-386-compatible computers, published by AT&T, Intel and SCO in 1988, and updated in 1990.


Yes, and I also remember that SCO UNIX worked but SCO Xenix did not. But really I wanted Xenix to work. I seem to remember it worked once, but then stopped working when they implemented the standard. Or something; it was a long time ago.


I had a friend get WordPerfect for SCO working under Linux. That was around 1996. I found these instructions.

https://www.tldp.org/HOWTO/WordPerfect-5.html

Same friend had a DEC Alpha and got Digital Unix Netscape to run under Linux on the Alpha.

http://users.bart.nl/~geerten/FAQ-17.html



