
I'm looking forward to my first project with Go. It appears to offer a lot with minimal complexity.

> Because Go has so little magic, I think this was easier than it would have been in other languages. You don’t have the magic that other languages have that can make seemingly simple lines of code have unexpected functionality. You never have to ask “how does this work?”, because it’s just plain old Go code.

That lack of magic and his comparison to C# sounds like a really good mix.



> It appears to offer a lot with minimal complexity.

Actually I think it offers little with minimal complexity.


Here's a blog post from Rob Pike about the design philosophies inherent in Go, and how that affected adoption from C/C++ developers vs. Python, Ruby, etc.

https://commandcenter.blogspot.com/2012/06/less-is-exponenti...


> That lack of magic

I have a really hard time understanding what people mean when they say magic. In every language I've ever worked in I spend a fair bit of time saying "how does this work". Go doesn't seem any different in that regard to me.


Rails is the epitome of the magic philosophy. Stuff "happens" through inference because you touched some part of the code (check out routes, foo_{url,path} methods, url_for, and passing models as arguments to those) or the database schema (defining accessors from DB fields, or adding features when magic-imbued names are used, such as type, version or foo_id/foo_type). This makes one feel fast and powerful at first, but as the application grows one has to memorise all sorts of conventions and DSLs, both Rails's and one's own, and this is more and more stuff the developers have to remember instead of its being explicitly stated in the code. As the application grows in scope, it is bound to veer ever so slightly away from the Holy Conventional Way and trip onto something lurking in a dark corner, and that's where things start to break for seemingly no reason at all, unless you dive deeper into the cave where dragons, born of someone's eagerness to be smart, lie asleep.

IOW "Magic" is wanting to achieve extreme generalisation through combined use of conventions and dynamic features of languages, which inevitably leads to gotchas†, corner cases, and pitfalls[0] as well as significant cognitive dead weight due to the very nature of its implicitness.

† ever tried to mix STI, polymorphism and url_for?

[0]: http://urbanautomaton.com/blog/2013/08/27/rails-autoloading-...


I've been pondering this topic lately, as I come back to Rails (seems every few years I'll write a Rails app on the side, and my brain has totally forgotten everything since last time): one other way to look at this "magic" is that it makes programming feel "intuitive". When I'm not sure how to do something, I often find I can just "guess" the right and most natural way, and the code will just work. For that reason, I always feel like I'm most productive when writing Ruby (I really love Go too, just for different reasons).

I can totally see how the situation you describe would be frustrating too, I felt the same about Java annotations when they came out, and on massive code bases it could become a nightmare when used to the extreme. My own experience has been that Go scales very well to large code bases, I've never wanted to try the same with Rails.


I've had this same experience bouncing back to Rails for contract gigs. You always feel this temptation, like you're missing out on 'real' programming: performance with low-level code, or super clever languages like Haskell or Clojure. But ultimately Rails is just a great programming experience for getting the job done.

Despite its faults and the problems with using 'magic' frameworks like Rails, it's a really great language/framework for what it's meant to do. And it still is in 2017, despite what some people say (although Elixir/Phoenix is getting there, if it can reach the same scale of adoption as Rails).

That's the end-of-the-road lesson: there are right tools for different jobs. There is no 'perfect' solution. No rabbit to keep chasing.

Either way, though, it's still good to get exposure to as many different languages as possible (low level a la C, easily parallelised languages a la Erlang, some Lisps, typed languages like Haskell, dynamic FP a la Clojure, etc.).


For example, properties in C# can be method calls, and while they appear to have the complexity of accessing a field, they could actually be arbitrarily algorithmically complex. This leads to a programmer down the road calling one in a tight loop, expecting field-access overhead and getting someone's complex property-method logic. That is an example of magic.

IMHO magic is when the run-time or space complexity of code isn't obvious from its on-screen representation.


So by that definition of magic, something like ranging over a channel is magic?

It looks like a simple for each loop, just like over a slice or map, but under the covers involves locking semantics.

If so, I guess I'll buy that definition of magic; I'm just not sure that's any different from knowing what the language does.


In Go, there is a minimal set of primitives (like channels and slices) to learn. Once you know them, they're fairly intuitive.

In C#, properties can be arbitrarily complex. You can't just know how properties "work" and then do mental shorthand on them. Every time you look at a new codebase you might have to dig through several files to find out what one line does.


But the "magic" in this case is that properties can be methods. Once you know that how is it any different than methods?

In Go, methods can be arbitrarily complex. You can't know how they work without digging through several files to find out what one line does.

Another example of magic in go would be method names. You have no idea if it is safe to change the name of a method because it could be satisfying an interface far away from the definition site (or in the case of exported methods nowhere you have access to).

We could go back and forth all day about what is and is not magic, but it still just seems like "language differences" to me. If the claim is "go does a lot less for you than other languages, so has a lot less opportunity for magic", I could probably concede that.


>But the "magic" in this case is that properties can be methods. Once you know that how is it any different than methods?

It's different because you have to apply that general knowledge in every single case when reading code that you are not familiar with.

In Go or Java, the information on whether a.x is a constant-time variable access or a function call of arbitrary complexity is available at the call site. You don't have to look it up. It's one less thing to do when reading code.

And when you do have to look up what an expression means, how straightforward is it? Consider this expression:

  f(x)
In Go f(x) means whatever the function f does, and f is exactly one function in the current package.

f(x) in C++ (and to a slightly lesser degree in Java, C# or Swift) is one of a set of functions called f. Knowing which one actually gets called requires knowledge of tens of pages of name lookup rules plus knowledge of possibly large swaths of the codebase.

It is often claimed that languages more powerful than Go just have a steeper learning curve. But it's not true. Even if you know all the name lookup rules of your favorite language (do you?), you still have to apply them every single time you read unfamiliar code.

In my view it's pretty simple. If you have to read a lot of unfamiliar code all the time then Go is great. If you can know both a more powerful language and your codebase inside out, then Go will be frustrating for its lack of abstraction features.


> Another example of magic in go would be method names. You have no idea if it is safe to change the name of a method because it could be satisfying an interface far away from the definition site (or in the case of exported methods nowhere you have access to).

Is this true? If you changed the name of a method, the type would no longer satisfy that interface and your code would not compile.


I don't think that's quite right. E.g. if you do call a method, its complexity is unknowable with only local context, so it can't be obvious. I would rewrite that to:

> Magic is when the run-time or space complexity of code is misrepresented by its on-screen representation.

An apparent field access that is actually a method is misleading. Calling a method explicitly just directs you to check that method to know for sure.


I think we are on the same page.


Yes, I'm just being pedantic about how you say it. :)


So these will be made into functions with uncertain O complexity. How is this situation preferable?


Most of the time in Go, people use fields directly, so there is a clear difference between struct.Field and struct.Method(): struct.Field is preferred, and you only have to worry about uncertain complexity if you see struct.Method(). The parent is saying that in C#, struct.Field might be a simple access or it might be a complex method.


When the programmer sees a function call, they know its complexity is O(?) and will (hopefully) vet it before calling it in a tight, performance-sensitive loop.


When a C# programmer sees any call to code they don't know - property or function - they should know its complexity is O(?) and will (hopefully) vet it before calling it in a tight performance sensitive loop.

I get that someone could say "Go has fields, and they're always fast" and that seems like a great facility of the language, but any C# developer that says similar about instance members is wrong, and has some invalid assumptions about the language they use.


A corollary to this is that C# property accesses can have side effects. I recently started working on a legacy code base where reordering accesses to a set of properties on an object produced different results!

And I'm not saying that properties are strictly a negative; in some cases, they can be very useful for refactoring an underlying implementation without having to change the API exposed to callers. But just like any magic, it needs to be applied thoughtfully and judiciously.


Take a look at a java codebase that uses:

  * Complex DI frameworks (Spring Bean*Processors, event listeners, XML config)
  * classpath scanning-based autowiring (See Spring @Component)
  * aspect weaving-based autowiring (See Spring @Configurable)
  * Code littered with annotations that invite aspect-based pointcuts
  * Complex ORMs like hibernate that are incredibly difficult to use properly
And you'll start to get an idea of how ridiculous things can be. Golang is making a huge mistake by not adding generics. 99.9% of the complexity in a typical Java codebase has zero to do with generics and everything to do with the insane abuses of the JVM classloading system that the Java community has subjected itself to, as well as abuses of overly complex libraries like Spring and Hibernate.

If the Java community allowed itself to write simple golang-like code the majority of the time, there'd be much less defection to golang in my opinion.


There is nothing language-specific or magic about those things. You could write those things in Go (and you will see people do so) as the language starts getting more adoption.

Go goes further and encourages code gen, so that will probably be the way you start seeing terrible frameworks being built.

In any case, "configuration as code" doesn't seem like a good definition of "magic" to me.


My point exactly. The issue isn't java the language. The issue is the flexibility of the JVM runtime and how people are abusing it.

Also, if load-time aspect-weaving and classpath-scanning-based autodiscovery don't count as magic to you, then not much will. Code generation at least has the huge, huge, humongous advantage that you have code on disk that you can read and debug.

I also admire Racket's macro system for coming with IDE support for introspecting and debugging the code generated by macros. Macros are a much better design because they generally run at compile time and they generally only make local code transformations that are much easier to reason about, as opposed to the sweeping global changes a weaver will make.


When people say "magic", what they often mean is "code over here can affect the execution of code over there in an implicit way". Like in Ruby, I could conditionally monkey-patch a function into an object someone way over there was using, causing code to break.

Other languages, like those with stronger type systems, will not allow this to happen.


Yea, monkey patching is helpful when dealing with a 3rd party library that needs to be tweaked 10 layers up the inheritance chain without having to change the object type all over the whole system.

If it gets overused it causes problems but there are times when it is close to a miracle. That said, there is a reason ruby devs are so test conscious.


Yeah - of course monkey patching has good uses :) The problem is that when you're trying to debug an issue, it's another thing that you'll have to remember - "is anyone monkey patching something in here?"


Yea, I can't work on Ruby codebases without something like Rubymine where I can jump straight to the declaration for that exact reason.


One of the things that Go eschews is operator overloading.

    a := b + c
What's the runtime complexity of this statement? How much memory will it cause to be allocated? In Go there are only two possibilities for what this code is doing: either this is string concatenation, or it's adding two numbers. Both are immediately comprehensible in their impact on run time and memory.

In C#, you can overload operators, so the + could in theory do anything. And what's bad about that is that it is deceptive. It's easy to miss the fact that this line might actually be doing something complex.

It also means that if someone is looking at your code, they can't make any assumptions about what any particular line of code is doing, without complete understanding of a vast amount of code.

This is one of the pieces of magic that I'm glad go doesn't have.


Hogwash. In a language without operator overloading, this would be something like `a.assign(b.plus(c))`, which really doesn't tell you more about what goes on under the wraps than the operator form.

What can be confusing is what the meaning of `+` (or `plus`) is. In some cases it can be fairly obvious (e.g. concatenating sequences), while in others not so much. Operator overloading is nice, but has to be used tastefully (like every abstraction or language tool).


> In C#, you can overload operators, so the + could in theory do anything.

> What can be confusing is what the meaning of `+` (or `plus`) is

QED


Any function can do anything. I can write a function called "read_from_file()" that doesn't read any files.

Amazing, I know.

Also please actually read the comment before replying:

> Operator overloading is nice, but has to be used tastefully.


The point the OP made was that operator overloading is not nice - it means that any operator (not just function) can do anything. It makes code harder to read and reason about.


    a := Sum(b, c)
How can you be sure that Sum actually does a sum without looking at its implementation?


I think you are missing the point. Of course you can't assume what a function will do with certainty.


From a CS point of view, + is just a function name like any other.

It's a concept used in lambda calculus, and it has been present in computing since Lisp.

It's also part of abstract mathematics, where operator symbols get defined for proofs.


> From CS point of view + is just a function name just like any other.

From a Go point of view, it isn't.


Just because Go eschews decades of CS knowledge, in the name of making programmers "easy to hire" for Google[1], doesn't make it less true.

[1] - According to the language designers' own words


What you wrote isn't a universal truth. In Go, + is not a function like any other. There's no argument to this.


What is the difference, apart from the notation?


The point is in most languages operator overloading is no more complex than a method call.

It's not exactly magic.


Overloading + could be magic if you want it to be. In go, + is exactly what you think it is. In languages with operator overloading, I literally could make + do whatever I wanted to.


Just like you can make a function do something totally unrelated to how it is called, do what it is actually in the name, wipe out the hard drive, launch missiles, whatever.


> How much memory will it cause to be allocated?

In Go, it's impossible to tell because "a" might be captured by a closure, in which case it will be heap-allocated. But if escape analysis promoted it to the stack or a register, then it will not allocate memory.


I honestly don't understand the difference with 'a := add(b, c)'. Does it really make that much difference? And the lack of overloading makes maths-heavy code horrible to read.

Of course, you can go to the C extreme: no overloading, no specialisation or anything, every function name means one thing. That does add some nice properties, but is a pain in the ass when naming things!


The difference is more apparent in the other (non-overloaded) case. When in Go (or C, …) you see an expression "x << y", you know immediately that this is just a shift operation, mapping to at most a couple of machine instructions. In certain other languages it's most likely an integer shift, too. Still, you have to carefully consider the context, lest this simple shift expression cause synchronous I/O to some space probe near Mars.


"they can't make any assumptions about what any particular line of code is doing, without complete understanding of a vast amount of code."

Yes, this is a problem with many designs. More importantly, they can't easily look up what an operator does. But this problem can easily be avoided if the set of operators used in a particular scope had to be explicitly specified. For example, if you want "+" to mean bigint addition from a set of bigint math operators, you would have to import that set into the scope, kind of like this:

  import_operators "bigint"
  a := b + c
Now you still have overloading, but it is very clear where to look up an operator and which set of operators is used in this scope.

"In go there's only two possibilities for what this code is doing..."

There should be only one possibility, though. Dual-meaning operators provide no value, apart from familiarity with design mistakes of the past.


I agree, I wish + wasn't string concatenation either.


One of the few good decisions PHP made is not using + for concatenation.


> In go there's only two possibilities for what this code is doing... either this is string concatenation, or it's adding two numbers

That is overloading! What Go eschews is user-defined overloading.


One way to look at it: in some languages (I would suggest C, C++, Clojure), when you start learning them and then compare the code you write to the code of popular major or standard libraries, they look completely different.

They say that Go is different and special this way, in that advanced Go code doesn't look all that different from average Go code.

Anyway, I can't really judge; I've never looked at or tried to learn Go, hence the "they say".


I don't know about this axis of "magic vs. muggle" we're talking about, but for me the phenomenon we're talking about seems like it can best be described by the ease with which you can find the code that implements specific behavior. Go (and Java) are pretty good at this. Python is OK. Ruby is awful.


This is true except in the case of structural satisfaction for interfaces. You can't trust a rename refactoring in an IDE in Go, for instance, due to this.

And in the case of channel select behavior...and ranges over a channel...and the context object, etc.

My point being, "magic" seems to be code for "familiarity with the language and its idioms". Which I'll grant might be easier in Go because of how limiting it is.


Magic is when you access struct pointer members w/o the asterisk. ;-)



