
For Mail/Calendar/Contacts/Tasks you should really consider Evolution. I switched ~2 years ago, and it's amazing: stable, sleek, and with tons of options.

It has a bad reputation because, back in the day, it was buggy and bloated. But I haven't hit a single bug over these years, and while it eats a significant amount of memory, it's on par with other options (and these days everyone has plenty of RAM).

I'd love to see more people giving it a second chance.


Centralization is a _huge_ privacy issue. Especially when combined with real-world data, like a phone number.

Let's assume OWS is playing nice, and they really don't store any relevant metadata. How can we be sure that a third party is not eavesdropping on their communications?

Even with end-to-end encryption, given enough time, an attacker can easily build a user relationship network, something _very_ dangerous in the wrong hands.

If you really care about privacy, you should consider options like BitMessage, Onion.chat, Ricochet, Tox or GNU Ring. Or, as a middle ground between those (which are quite mobile-unfriendly, due to their P2P nature) and Signal/WhatsApp/Telegram, a federated service like XMPP (as the article suggests) or Matrix.


Telegram-FOSS (the FOSS friendly fork you can find on F-Droid, not the official app on Play Store) maintainer here. Telegram is NOT a "secure messaging app".


Quote from the Telegram-FOSS Github Page: "Telegram is a messaging app with a focus on speed and security..."


That comes from upstream's README.md. I've never paid any attention to it, but now that you've pointed it out, I'm seriously considering changing that paragraph, or adding a note/disclaimer somewhere.

After all these years and promises, server's code is still closed, federation is nowhere to be found, their update/commit policy for the official Android app is a joke [1] (they even closed its issue tracker), and I'm really tired of their "trust us, we're not evil" policy [2].

If you haven't switched to Matrix/Riot, do it right away.

[1] https://github.com/DrKLO/Telegram/commits/master

[2] https://telegram.org/faq#q-who-are-the-people-behind-telegra...


As one of the resident cryptography nerds: Matrix/Riot seems to be in every way better than Telegram. I still need to review it before I can wholesale recommend it, but it was audited by NCC Group previously.


I don't seem to understand Matrix/Riot.

It appears to be like IRC, but it also integrates with Gitter and IRC, and has an identity federator called vector.im.


Matrix is a federated chat protocol. It is like IRC or XMPP, but it synchronizes history and uses an HTTP-based protocol. There are bridges to IRC, XMPP, and Gitter.

Riot is a client. It used to be named Vector.im.


Three days ago, a Jolla C replaced an iPhone 5S as my main phone. To be fair, my using iOS was just a kind of experiment. Before that, I was using a CM build without Google Apps, with F-Droid as my only app repository.

I must say that, while it still needs some polishing, Sailfish OS (which is not 100% free, but is quite close, and provides a complete Linux experience) does the job. My Pebble works, my BT car kit works, and I have all the apps I need. In fact, my biggest complaint is about the hardware, which is underpowered and has an atrocious camera. An official port from Jolla to some mid-level device would make me _very_ happy.

In some countries, like mine, the FOSS mobile OS killer has a name: WhatsApp. Without official support for FFOS or Ubuntu Touch, and being very aggressive against third party apps (banning their users), most people can't even think of them as an option for daily usage.

Jolla gets around this by bundling a commercial Android (Dalvik) emulator. Not the best solution, but quite a pragmatic one.


I have a Jolla too, and while I love it as an N9 successor, I'd be lying if I said I wasn't disappointed that they didn't open-source the whole system. This failed to attract a critical mass of developers and has left the platform quite stagnant. I understand their investors were afraid of open-sourcing key assets, but their alternative plan hasn't worked well either.

Does it make sense to go the N900 route now? Does anyone have experience with this? Some worthy successors like the Pyra and Neo900 are coming out, and an N900 can still be found easily to use while waiting for these newcomers. How good is Maemo these days?

An obvious alternative is to get a Nexus and install CopperheadOS + FDroid. The ecosystem is very polished and lively, plus hardware is very good albeit with planned obsolescence due to the lack of long-term kernel updates.


What do you notice about the 'complete Linux experience'? I cannot find too much on the Sailfish OS homepage; going to Mer shows it's a minimal Linux with Qt on top. For me a complete Linux experience would be xterm and sudo apt-get install build-essential. Is there anything like that? I would buy a phone/tablet device that runs Ubuntu (or Debian) smoothly (all hardware supported) in a heartbeat, but it seems nothing is there yet, so I currently have to go for the Pyra, which has all that plus a keyboard and replaceable batteries.

Also, the Mer wiki mentions an Android compatibility layer, and you mention an Android emulator; would either or both not solve the WhatsApp issue?


> For me a complete Linux experience would be xterm and sudo apt-get install build-essential. Is there anything like that?

Pretty much, yes. With developer mode on, you gain access to a terminal with bash and the other expected utils. There is a mobile terminal application with a custom keyboard which is more or less usable; you can also ssh into the phone without any problem. As for the package manager, there is pkcon (some alternatives are probably also available), and you can install build utils without any problem.

I used Jolla for some time before moving to iOS. It was a very cool experience. I really liked having a phone that I could hack and mess around with. The gesture-based UI was nice; it was amazingly comfortable after getting used to it. The main drawbacks were the lack of apps and the reliability of the phone. The Android apps work poorly: they are sluggish, and plenty of apps are not going to work due to the lack of Google Play Services. Also, with Android apps you get the old, flawed Android permission model. The phone crashes from time to time, sometimes at the least suitable moment (stability varies between updates).


I found Jolla comparatively very stable and the Android support was good enough for my purposes, given that it makes little sense to buy a Jolla phone for the sole purpose of running Android apps. People judge Jolla harshly for things they will forgive Android for. "has unfortunately stopped" is disturbingly common in the Android world, coming from Jolla, even before we get into the horrible quality of the vast majority of apps on the Google Play Store. Clearly if your #1 priority is apps from Google Play Store, you should buy an Android phone. But if you want anything different, or if you want an actual Linux phone.... my only objection with Jolla is that they seem starved for resources to regularly release hardware and to target markets like North America at all. So no matter how great a job they do, almost nobody sees their work.


Just to clarify: I was not speaking about application stability, I spoke about the whole OS. I've never seen native Jolla apps crash, and the Android apps also seem to be rather stable when they work. However, the whole phone can die - the screen goes black and the status LED starts to blink red - in the middle of usage, or just when it sits in your pocket. The phone will restart in half a minute or so, but it's rather infuriating when it does so while you're trying to answer a call.


At flexVDI, we use Xamarin for building our macOS client (Xamarin.Mac), sharing most of the code with the client for Windows, built with Visual Studio. Both of them link against a shared library, written in C, which implements the core functionality.

I must say that, while it has its own quirks and nuisances (especially Xamarin Studio, which was pretty buggy until version 6.x), it does the job pretty well.

In fact, when we wrote our iOS and Android clients, Xamarin was still pretty immature. But if we had to rewrite them today, it would be one of our first options, right after using the native frameworks (which ensure the best results, but drastically increase the costs).


I can't help but think they're trying to fix something that isn't broken at all.

Adding new abstraction layers rarely helps when doing systems programming. You (as in "the developer") want to be as near to the machine as possible. C does this pretty well.

Perhaps I'm just getting old :-(


In this case it seems like a very thin wrapper that leverages the type system to allow catching a whole class of errors at compile time, like using exhaustiveness checks to make sure a function call handles all possible return values. I think the small overhead is well worth it.

The original API is not "broken" per se, it's just limited by the language features ("magical" return values vs. tagged unions, or enums as Rust calls them).
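To illustrate the exhaustiveness point, here's a minimal sketch. The names are made up, and `wrap_fork` takes the raw C-style return value as a parameter so the example runs without actually forking; a real wrapper would call into libc:

```rust
// Hypothetical sketch: -1 = error, 0 = child, >0 = parent pid.
// The enum forces callers to handle every case at compile time.

#[derive(Debug, PartialEq)]
enum ForkResult {
    Parent { child: i32 },
    Child,
}

// Stand-in for the raw C call; a real wrapper would call libc's fork()
// and consult errno on failure.
fn wrap_fork(raw: i32) -> Result<ForkResult, i32> {
    match raw {
        -1 => Err(-1),
        0 => Ok(ForkResult::Child),
        pid => Ok(ForkResult::Parent { child: pid }),
    }
}

fn main() {
    // This match must be exhaustive, or the program won't compile.
    match wrap_fork(1234) {
        Ok(ForkResult::Parent { child }) => println!("in parent, child pid = {}", child),
        Ok(ForkResult::Child) => println!("in child"),
        Err(e) => println!("fork failed: {}", e),
    }
}
```

Forgetting the `Err` arm (or a variant) is a compile error rather than a silently ignored -1, which is exactly the class of bug the wrapper is meant to catch.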


It's not even clear to me that there's any overhead to the Rust version. Checking error codes that should be checked isn't overhead. Checking them inefficiently would be overhead, but the Rust version looks like it should compile down to something pretty similar to what the equivalent C switch blocks would produce.


> It's not even clear to me that there's any overhead to the Rust version.

There is a slight bit of stack overhead: Option<ForkResult> is at least {tag:u8, {tag:u8, pid:i32}}, and due to alignment constraints it's actually {tag: u32, {tag: u32, pid: i32}}. A nonzero wrapper[0] would allow folding either ForkResult or Option into a 0-valued pid_t and remove one level of tagging: http://is.gd/yxStW1

Beyond that you'd need generalised enum folding in order to fold two tags into the underlying value (you'd denote that pid_t is nonzero and nonnegative for instance)

[0] which is unstable, so not really an option
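These layouts can be checked directly with `size_of` (the `ForkResult` enum below is my own stand-in mirroring the sketch above, and NonZero wrappers have since stabilized; whether the Option tag gets folded is a compiler layout decision, so it's printed rather than assumed):

```rust
use std::mem::size_of;
use std::num::NonZeroI32;

#[allow(dead_code)]
enum ForkResult {
    Parent { pid: i32 },
    Child,
}

fn main() {
    // A tag plus an i32 payload, padded to 4-byte alignment: 8 bytes.
    println!("ForkResult:         {} bytes", size_of::<ForkResult>());
    // Whether the extra Option tag gets folded away depends on the
    // compiler's layout optimizations, so just report it.
    println!("Option<ForkResult>: {} bytes", size_of::<Option<ForkResult>>());
    // Documented guarantee: 0 is free to represent None, so Option
    // costs no tag at all here.
    println!("Option<NonZeroI32>: {} bytes", size_of::<Option<NonZeroI32>>());
}
```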


We do have a planned optimization that would fold the tags for cases like `Option<ForkResult>` to give a word pair, which should be returned in %eax:%edx (or %rax:%rdx).


Really? That's exciting. Missed enum layout optimizations are one of my few issues with Rust right now.


But if the wrappers get inlined (which they should be) then SROA kicks in and promotes the tags to SSA values, where other optimizations such as SCCP can eliminate them. Optimizing compilers are awesome :)


Theoretically, it should be possible to have a union based on the value of the int {-1, 0, positive}, which would use only one 32-bit integer.
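A rough sketch of that encoding (all names made up): store only the raw i32 and interpret the three cases at the API boundary, so no tag bytes are ever kept around:

```rust
// The i32 itself is the whole stored representation; the enum only
// exists transiently, as a "view" of the raw value.

#[derive(Debug, PartialEq)]
enum ForkView {
    Error,        // raw == -1
    Child,        // raw == 0
    Parent(i32),  // raw > 0 (the child's pid)
}

fn view(raw: i32) -> ForkView {
    match raw {
        -1 => ForkView::Error,
        0 => ForkView::Child,
        pid => ForkView::Parent(pid),
    }
}

fn encode(v: &ForkView) -> i32 {
    match v {
        ForkView::Error => -1,
        ForkView::Child => 0,
        ForkView::Parent(pid) => *pid,
    }
}

fn main() {
    for raw in [-1, 0, 4242] {
        let v = view(raw);
        assert_eq!(encode(&v), raw); // the encoding round-trips losslessly
        println!("{} -> {:?}", raw, v);
    }
}
```

The trade-off is that the invariant ("-1 means error") lives in the conversion functions instead of the type system, which is basically where C keeps it today.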


I'm not talking about the efficiency of the resulting binary, but the "distance" from what the programmer is thinking to what the machine will really do.

Compiler optimizations aside, C does a pretty good job at this. It's way more efficient than writing assembly, but you're still basically just moving memory around while doing some arithmetic. Easy to understand in "machine" terms.

Of course, this is only relevant when you're doing low-level stuff, like kernel or drivers programming. For the userland, Rust really looks like a nice language (I've played with it just a bit), and I'd be really happy if it pushes C++ away ;-)


Compiler optimizations included, C does a terrible job at this. It puts forward a seductive but terrible mirage of simple mappings and understandings which are just plain broken. And then you add multithreading into the mix and it gets even worse, even without optimizations.

We live in a world of many cores, and multiple CPUs all over the place - in your GPUs, your hard drives, motherboard controllers - and the intrinsic language support for multithreading literally does not exist as part of the C99 standard? One has to reach out to a mixture of POSIX, and the compiler extensions the POSIX implementation uses to annotate memory barriers so the optimizer won't break things, and intrinsics that introduce atomic operations, and... gah!

C and C++ do such a terrible job of this I have to resort to disassembly to debug program behavior far too frequently. These are the only languages I'm forced to do this with. If C or C++ were really "close to what the machine will really do", I'd expect the opposite result.

Even simple things like class and structure layouts and type sizes are controlled by a mess of compiler and architecture specific rules and extensions to control the application of those rules with regards to padding, alignment, etc. which I get to debug. Ever had to debug differences in class layout between MSVC and Clang due to differently handling EBCO in a multiple inheritance environment? What about handling alignment of 8-byte types on 32-bit architectures differently? At least you've replaced all uses of "long" because of the mixture of LP64 and LLP64 compilers out there...? And what about when two incompatible versions of the standard library with different type layouts get linked in by a coworker? These are the symptoms of a language that doesn't control what the machine is really doing very well at all.

When I really need tight control over what the machine will do at a low level, my tools are actual (dis)assembly, intrinsics, an understanding of the underlying hardware itself, and simple code that eschews features requiring significant runtime support or underpinnings. None of those are C or C++ specific. The last one requires some knowledge of how a language's features are implemented - C and C++ might be broken enough that you're forced to wrestle with that topic, when it's more optional in other languages, but... that still doesn't make it C or C++ specific.

</rant>


Rust is just as close to the hardware as C, it just checks your code more.


Then you are missing the whole point of Rust. The point of Rust IS to allow you to be close to the machine while maintaining a much higher level of safety. Rust is designed to do this with little overhead.

In this day and age, with big software packages and security an ever-increasing concern, it really is high time programming languages did more to help us avoid the bugs which expose us to hackers and crackers.

I do have an affinity for C, but as an Objective-C programmer currently coding in Swift, I am really seeing how many more bugs the compiler helps me uncover.

I think Rust is on the right track. It is a long overdue change to systems programming.


The proposition that Rust is offering is not new. In the 90s Modula-2 was touted as "a better, safer way" of doing system programming than C. It failed to get traction outside of education because it failed to offer a compelling reason for people to migrate. Those that do not study history are doomed to repeat its mistakes.

In the example given it's possible to write a similar library in C to protect against unwanted side effects or bad API design. I'm sure several have been written over the years.

Rust is a great language with lots of improvements over other system programming languages, but that is not going to be enough to get people to switch. You have to show that it's good enough to be worth throwing away 40 odd years of experience and well understood best practice. Something that is going to take a long time and big public projects to do. If just being better was good enough Plan 9 would have been a roaring success and Linux (if it happened) would probably be a footnote in history.

C and UNIX have survived as long as they have not because better alternatives haven't come along, but because the alternatives haven't offered a compelling reason to switch. Unfortunately at least now Rust is falling into the same category.

See also: Niccolo Machiavelli, The Prince


Safe systems languages already existed before C was a thing.

Modula-2 is just one example.

The Burroughs B5000 was programmed in a safe systems programming language in 1961.

https://en.wikipedia.org/wiki/Executive_Systems_Problem_Orie...

https://en.wikipedia.org/wiki/NEWP

"NEWP is a block-structured language very similar to Extended ALGOL. It includes several features borrowed from other programming languages which help in proper software engineering. These include modules (and later, super-modules) which group together functions and their data, with defined import and export interfaces. This allows for data encapsulation and module integrity. Since NEWP is designed for use as an operating system language, it permits the use of several unsafe constructs. Each block of code can have specific unsafe elements permitted. Unsafe elements are those only permitted within the operating system. These include access to the tag of each word, access to arbitrary memory elements, low-level machine interfaces, etc. If a program does not make use of any unsafe elements, it can be compiled and executed by anyone. If any unsafe elements are used, the compiler marks the code as non-executable. It can still be executed if blessed by a security administrator."

Sounds similar to modern practices? Done before C and UNIX were a thing.

C and UNIX have survived this long, because they go together as one, just like JavaScript is the king of the browser, C was the only way to go when coding on UNIX systems.


> Unfortunately at least now Rust is falling into the same category.

Rust offers one major thing that Modula-2 never did: eliminating memory management problems (also concurrency problems) with zero overhead. In the '80s and '90s it was not known just how dangerous memory management problems could be (use-after-free was thought to be a harmless annoyance). Not now in 2016, with every single browser engine falling to remote code execution via UAF in Pwn2Own.


Modula2 was from an era before the internet was ubiquitous and everyone had computers in the pocket. To compare lack of uptake of a "safer language" from a time when the internet and attack surface was so much smaller to now seems disingenuous. C and UNIX go hand in hand, nobody is disputing their worth or tenacity. I fail to see how a proposition that is not new detracts from Rust.


Different language, but Modula-3 is actively maintained again. https://github.com/modula3/cm3


Modula-3 descends from Modula-2, although not directly.

Some of the Xerox PARC Mesa/Cedar researchers went to work for DEC (later Compaq) and created Modula-2+ with feedback from Niklaus Wirth, who had actually used Mesa as inspiration for Modula and Modula-2.

Eventually Modula-2+ evolved into Modula-3.

Nowadays I would say part of its ideas live on in C#.


I don't think you want to be as near to the machine as possible, otherwise all systems programmers would write machine code. You want to have powerful abstractions that the compiler can see through to produce optimal code.


I tend to agree with you, with one caveat. We did a C-like scripting language and added one thing that C was missing: a variable value that is "undefined", which is not the same as zero. Really simple, but now you can do stuff like

    pid_t p;

    if (defined(p = fork())) {
        /* parent / child stuff here */
    } else {
        /* fork error here */
    }

It's pretty much the same as try / catch, we just implemented it as part of the variable. And any scalar or complex type can be undefined.

I suspect if C had this a lot of these code samples would be a little more clear. Maybe? Dunno, it's worked well for us. And we like C a lot.


Sorry if I'm too harsh, but saying that ZoL is "rock solid" and "production ready" sounds like a joke to my ears.

ZFS is an extremely complex filesystem, and it took Sun _years_ of internal testing first, and hundreds of angry customers later (sadly, at some point, the only way to improve a product is through real world testing), to reach a milestone where it was really production ready.

I know that in these days of Docker and unicorns, "production ready" has a very different meaning than it did years ago, but still...


Of course you can't compare the trouble Sun originally had in creating ZFS with the difficulty of porting it to Linux when it has been used for years and you can build on stable code and learn from the other implementations. This comparison is quite unfair and lacking in substance. It's not like the ZoL guys had to start from scratch and reverse engineer.

It depends of course, I would always choose FreeBSD over Linux when it comes to ZFS, but then again I would always choose a BSD-OS regardless of whether I want to use ZFS or not.

Certainly take it with a grain of salt, but the developers themselves announced ZoL to be production ready in 2013/03[1], and Richard Yao (also a developer of ZoL) argued in a blog post in 2014/09[2] that ZoL was stable and production ready, and elaborated on this. Some interesting comments are also here[3].

And all this also was quite a while ago, ZoL has been used by users for years now, it's come a long way, I have used it to some extent a while ago and had no bad experiences.

In the end everyone has to decide for themselves what "production ready" really means, and how high the threshold is to deserve that label. But I think you can reasonably make the argument that it is...

[1] https://groups.google.com/a/zfsonlinux.org/forum/m/?fromgrou...

[2] https://clusterhq.com/2014/09/11/state-zfs-on-linux/

[3] http://linux.slashdot.org/story/14/09/11/1421201/the-state-o...


A curious (and probably unintended) side effect is that, if its implementation of the Mach API is complete enough, this should make it possible to run Hurd user space servers/translators on FreeBSD.

Not sure if useful, but would be cool (in a weird, nerdy way) anyway.


It isn't. For one thing, it's a port of an old OSF Mach kernel. Hurd uses GNU Mach, which is descended from CMU Mach 3.0. They don't support memory objects, their threading is bare bones, they assume a bootstrap server and there would be no way to actually get glibc and Hurd RPC working.


The OSF vs GNU Mach thing is not a problem. It was years ago, but I managed to run a Hurd translator statically linked against a slightly modified glibc on OSF Mach (the one bundled with MkLinux, you can see its code on my repo https://github.com/slp/mkunity).

In fact, if you look at Mach support code on glibc's code, you'll see build time conditionals for supporting non-GNU Mach versions.

The bootstrap server is not a problem either, but the lack of memory objects would indeed break all libpager-based translators, among other stuff.

As a PoC, I wrote a filesystem translator (https://github.com/slp/anonfs) which doesn't rely on memory objects, implementing conventional read/write semantics (no mmap() support, though).


The option of using MkLinux as a host was raised as a hypothetical many years ago, but was never tried. I don't have spare PPC hardware in mind, but it's interesting to hear that it works.

Porting the NextBSD work to the Linux kernel API is certainly something on my list in case rump integration for Hurd doesn't pan out, in any event.


Some years ago, I spent a lot of time studying GNU Mach and Hurd (I've also made some small contributions). I think I can say that I know both pretty well. I even started a project to preserve the OSF Mach + MkLinux source code (https://github.com/slp/mkunity), a very cool project for its time (circa 1998).

These days I prefer to do my kernel hacking on monolithic kernels, mainly NetBSD. I've stopped working on Mach, Hurd and other experimental microkernels (there're a bunch out there) because it was becoming increasingly frustrating.

If you'd ask me to define the problem with microkernels in one word, it would be "complexity". And it's a kind of complexity that impacts everything:

- Debugging is hard: On monolithic kernels, you have a single image, with both code and state. Hunting a bug is just a matter of jumping into the internal debugger (or attaching an external one, or generating a dump, or...) and looking around. On Hurd, the state is spread among Mach and the servers, so you'll have to look at each one trying to follow the trail left by the bug.

- Managing resources is hard: Mach knows everything about the machine, but nothing about the user. The server knows everything about the user, but nothing about the machine. And keeping them in sync is too expensive. Go figure.

- Obtaining reasonable performance is har... impossible: You want to read() a pair of bytes from disk? Good: prepare a message, call into Mach, yield a while until the server is scheduled, copy the message, unmarshal it, process the request, prepare another message to Mach to read from disk, call into Mach, yield waiting for rescheduling, obtain the data, prepare the answer, call into Mach, yield waiting for rescheduling, obtain your 2 bytes. Easy!
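That hop-heavy path can be sketched as a toy simulation (plain Rust threads and channels standing in for Mach tasks and ports; nothing here is real Mach code): a client read() that has to bounce through a filesystem server, counting each message hop.

```rust
use std::sync::mpsc;
use std::thread;

// A read() request marshalled into a message, carrying its own reply port.
struct ReadRequest {
    len: usize,
    reply: mpsc::Sender<Vec<u8>>,
}

// Returns the data plus the number of message hops it took.
fn simulated_read(len: usize) -> (Vec<u8>, u32) {
    let (to_server, server_inbox) = mpsc::channel::<ReadRequest>();

    // The "filesystem server" lives in its own protection domain (a thread
    // here). A real system would hop onwards to a disk driver, adding even
    // more round trips; we just fabricate the data.
    let server = thread::spawn(move || {
        if let Ok(req) = server_inbox.recv() {
            req.reply.send(vec![0u8; req.len]).unwrap();
        }
    });

    let mut hops = 0;
    let (reply_tx, reply_rx) = mpsc::channel();
    to_server.send(ReadRequest { len, reply: reply_tx }).unwrap();
    hops += 1; // client -> server
    let data = reply_rx.recv().unwrap();
    hops += 1; // server -> client
    server.join().unwrap();
    (data, hops)
}

fn main() {
    let (data, hops) = simulated_read(2);
    println!("read {} bytes in {} message hops (a direct syscall needs none)",
             data.len(), hops);
}
```

Even this minimal version pays two sends, two receives, and a scheduler handoff for two bytes; the monolithic equivalent is one trap into the kernel.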

In the end, Torvalds was right. The user doesn't want to work with the OS, he wants to work with his application. This means the OS should be as invisible as possible, and fulfill userland requests following the shortest path. Microkernels don't comply with this requirement, so from a user's perspective, they fail natural selection.

That said, if you're into kernels, microkernels are different and fun! Don't miss the opportunity to do some hacking on one of them. Just don't be a fool like me, and avoid becoming obsessed with trying to achieve the impossible.


It's like the argument about excessive modularity in software design in general: you can split a system into so many little pieces that each one of them becomes very (deceptively) simple, but in doing so you've also introduced a significant amount of extra complexity in the communication between those pieces.

Personally, I think modularity is good up to the extent that it reduces complexity by removing duplication, but beyond that it's an unnecessary abstraction that obfuscates more than simplifies.


The communication would've happened anyway. Now it just happens through a common mechanism with strong isolation. That all the most resilient systems, especially in safety-critical space, are microkernels speaks for itself. For instance, MINIX 3 is already quite robust for a system that's had hardly any work at all on it. Windows and UNIX systems each took around a decade to approach that. Just the driver isolation by itself goes a long way.

Now, I'd prefer an architecture where we can use regular programming languages and function calls. A number of past and present hardware architectures are designed to protect things such as pointers or control flow. Those in production are not, but have MMU's & at least two rings. So, apps on them will both get breached due to inherently broken architecture and can be isolated through microkernel architecture with interface protections, too. So, it's really a kludgey solution to a problem caused by stupid hardware.

Still hasn't been a single monolithic system to match their reliability, security, and maintenance without clustering, though.


>For instance, MINIX 3 is already quite robust for a system that's had hardly any work at all on it. Windows and UNIX systems each took around a decade to approach that. Just the driver isolation by itself goes a long way.

MINIX3 also has hardly any work done WITH it, so I don't think we can compare it to Windows and UNIX systems regarding robustness, unless we submit it to the same wide range of scenarios, use cases and workloads...


I'd like to see a battery of tests to see where it's truly at. Yet, there's still not a MINIX Hater's Handbook or something similar. That's more than UNIX's beginnings can say. ;)


Communication would've happened, but probably between far fewer actors. So, you have a communication channel which is orders of magnitude slower, and bigger communication needs. Not good.

That said, about the reliability point, I agree with you. If you're building a specialized system and reliability is your main concern, microkernels+multiservers are the way to go (or, perhaps, virtualization with hardware extensions, but this is a pretty new technology for some industries).

Probably you're going to need to add orthogonal persistence to the mix, to be able to properly recover from a server failure, or an alternative way to sync states, which will also have an impact on performance. But again, you're gaining reliability in exchange.


The communication channel does get slower. The good news is that applications are often I/O bound: lots of comms can happen between such activity if designed for that. One trick used in the 90's was to modify a processor to greatly reduce both context switching and message passing overhead. A similar thing could be done today.

Of course, if one can modify a CPU, I'd modify it to eliminate the need for message-passing microkernels. :)


I think this is a good example of the law of conservation of complexity[1]. You can't reduce complexity, you can only change what's complex. In the case of monolithic kernels versus microkernels, it sounds like going to a microkernel moves the complexity from the overall design into the nuts and bolts of interprocess communication.

[1] https://en.wikipedia.org/wiki/Law_of_conservation_of_complex...


You've just hit the nail right on the head.



If you'd ask me to define the problem with microkernels with one word, that would be "complexity".

The problem with Mach, you mean. All the examples you listed are specific to it.


I don't know about other implementations, but I remember the original design of l4hurd (based on L4Ka) was even more complex. I'd say this applies to all "pure" multiserver designs.


Check out Genode.org, MINIX 3, or QNX. Seem to have gotten a lot more done than Hurd despite being microkernel-based OS's. KeyKOS is one of the best from back in the day with EROS being a nice x86 variant of it. Turaya Desktop is based on Perseus Framework.

Many working systems in production from timesharing to embedded to desktop that are microkernel-based. Hurd and Mach's problems are most likely due to design choices that created problems.


I don't know about the others, but at least QNX and MINIX 3 both cheated a little, e.g. by allowing servers to write directly to other user space programs.

Also, the presence of microkernel+multiserver systems is still quite marginal in comparison with their monolithic counterparts.


Of course, a virtualized Guest will have a performance penalty.

With that phrase I meant that, following this guide, you can make use of the Virtualization Extensions from the Cortex-A7 which powers the Raspberry Pi 2.

This is pretty useful for running multiple isolated services (such as a media server and an ownCloud instance), doing some kernel hacking, or testing a variety of ARMv7 distributions.

