
Ada used to be the basis for the CS curriculum at Cal Poly Pomona (circa '92-'98). Ada teaches good practice. One prominent feature is that there is no implicit casting; add a float and an int, and you get a compile error. It's good to know that you have to state exactly what you want and the compiler enforces this.


Haskell is similar in this sense. So is Golang (IIRC; I've only coded a few hundred lines in it, two years ago). Example from a Haskell REPL, where Haskell won't even upcast an Int (machine int) to an Integer (unbounded int, python style):

    Prelude> let a = 1 :: Int
    Prelude> let b = 2 :: Integer
    Prelude> a+b

    <interactive>:4:3:
        Couldn't match expected type `Int' with actual type `Integer'
        In the second argument of `(+)', namely `b'
        In the expression: a + b
        In an equation for `it': it = a + b
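(For what it's worth, the fix is to spell out the conversion yourself; fromIntegral widens the Int to an Integer, so the addition typechecks:)

```haskell
a :: Int
a = 1

b :: Integer
b = 2

-- Explicit widening: fromIntegral converts the Int to Integer,
-- so (+) gets two Integers and compiles fine.
total :: Integer
total = fromIntegral a + b   -- total == 3
```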


> unbounded int, python style

Arrrgh!! You mean "Lisp style". Lisp has had arbitrary-precision integers ("bignums") since 1971 -- long before Python was invented.


Whenever I talk about Haskell I always try to make my comparisons to Java, C, or Python for maximum relatability.

I think saying "unbounded int" is enough, but then people might ask "what exactly do you mean by unbounded?", so I say it's about the same as a Python int. Haskell itself predates Python.

And IBM vacuum tube machines had arbitrary precision integers in the mid 50s, and transistor based ones in the 60s ;)


> Whenever I talk about Haskell I always try to make my comparisons to Java, C, or Python for maximum relatability.

Okay, well, then, thank you for giving me the opportunity to educate people on a bit of history :-)

> IBM vacuum tube machines had arbitrary precision integers in the mid 50s, and transistor based ones in the 60s

I seriously doubt it. They may well have had variable precision, but I don't think they had unbounded precision. That is, there was always a limit such that a result over that limit would either wrap around or trap (I don't know which), but that limit could be different for different instructions. (I'll bet there was a hardware-enforced maximum limit too, on the order of 12 or 15 digits.)

It wasn't until 1969 that Knuth published algorithms for arbitrary-precision arithmetic. Also consider that arbitrary precision requires heap allocation, which certainly wasn't being done in hardware in the 1950s and '60s.
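(Not that it needs anything exotic in software: the schoolbook carry-propagation that such algorithms formalize is a short exercise in any language with heap-allocated lists. A sketch in Haskell, representing a number as a little-endian list of decimal digits; the name addDigits is mine, not Knuth's:)

```haskell
-- Arbitrary-precision addition on little-endian decimal digit lists.
-- e.g. 99 is [9,9]; addDigits [9,9] [1] gives [0,0,1], i.e. 100.
addDigits :: [Int] -> [Int] -> [Int]
addDigits = go 0
  where
    go 0 []     [] = []
    go c []     [] = [c]                 -- final carry-out
    go c []     ys = go c ys []          -- one operand exhausted: swap
    go c (x:xs) ys =
      let (y, ys') = case ys of []     -> (0, [])
                                (d:ds) -> (d, ds)
          s        = x + y + c
      in s `mod` 10 : go (s `div` 10) xs ys'
```

The point being that the digit lists can grow without bound, which is exactly where the heap allocation comes in.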

If you can substantiate your claim, I'll be suitably impressed, but I'm extremely skeptical.


To add some specificity to gamegoblin's reference, I quote from page 28 of the IBM 1401 Data Processing System: Reference Manual[1]:

The two factors to be combined are added within core storage without the use of special accumulators or counters. Because any storage area can be used as an accumulator field, the capacity for performing arithmetic functions is not limited by standard-size accumulators or by a predetermined number of accumulators within the system.

[1] http://bitsavers.informatik.uni-stuttgart.de/pdf/ibm/140x/A2...


I was paraphrasing from http://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic#...

So in the 50s they had integers with up to 2^9 - 1 (== 511) decimal digits.

In the 60s, the transistor-based one could do integers as big as your memory, the largest memory offered gave you up to 60K digits.


Okay, very good. I stand corrected. (Well, mostly. I would pedantically argue that you may have arbitrary-precision arithmetic, but you don't have arbitrary-precision integers until they're a functional datatype, so you can write simply 'a + b' to add them -- with the automatic allocation that that in general requires. But this is a quibble.)
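(In Haskell's terms, Integer is exactly such a datatype: you write plain 'a + b' at any magnitude, and the allocation happens behind the scenes:)

```haskell
-- Integer is unbounded and heap-allocated, so ordinary (+) just works,
-- far past any machine word, with no overflow and no trap.
big :: Integer
big = 2 ^ 200 + 1
```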


I used ADA at university and never touched it again. It was ADA 83, and the ADA 95 object extensions looked horrible.

I remember seeing PL/SQL circa Oracle 8i and it looking a lot like ADA.


> I remember seeing PL/SQL circa Oracle 8i and it looking a lot like ADA.

Not a coincidence. The designers of PL/SQL were largely inspired by Ada's basic syntax and structure.

It's not an acronym, by the way, so no need to all-caps it.


The object extensions aren't bad; they just suffer from a combination of an unusual object model and an even more unusual use of terminology.


I was actually a little excited when I found out that the university I went to was an Ada school. Except, it turned out to be outdated information; it was actually a C++ school when I was there, and it started the process of becoming a Java school before I left.



