Hacker News | ninegunpi's comments

tl;dr: Balancing tradeoffs and benefits during disclosure is sometimes a hard job, and if the authors chose to do it this way, they likely had a reason. You don't have to trust me on this, but there is no commercial agenda behind it.

Disclaimer: I happen to work at the same company as the authors. I was not involved in writing this, but I witnessed all the research that led to this post. I saw huge internal arguments about how much should be disclosed, and how, given the context (see below), before this article was written.

1. I can attest that all these bugs were found in one physical device, which is still widely used to this day. I have seen it. Moreover, this device has more relatives than we could easily enumerate, some of them potentially vulnerable to a subset of the identified bugs as well. The "vendor" is aware, and nothing has changed for a while; in some ways things are getting worse (the blast radius grows over time). This is the result of economics rather than negligence: "the vendor" in this case is a mixed bag of responsibility shared between several parties, not all of them commercial, and not all of them still existing to this very date, I believe.

2. In a normal situation, a responsible disclosure path, instead of what you've been reading in the post, would be the right way to go. However, context matters here: the authors happen to live in a country that is at war right now (it takes about 5 seconds to figure out, looking at the website), so their ability to talk about security vulnerabilities is a bit different from what you'd expect, for reasons that are not hard to understand. They use vague language, distort a few important details, and focus on frivolous illustrations to avoid unnecessary damage.

Publicly pointing out practical exploitation vectors, in a way that is understandable to anyone in the field of practice, is helpful enough:

* Some people will now have an explanation for why their toy cars were stolen, and will consider changing their supplier of toy car equipment.

* Some people conducting engineering risk analysis will understand, looking at their toy car and some of its settings, that this is not a "potential theoretical vulnerability", and will consider alternatives.

Consider the blog post and its examples to be didactic material for an ongoing discussion about some hardware among field practitioners. The authors needed something to point at and say "this is how X can be exploited to do Y", without delivering a two-hour lecture on cryptographic bugs that were already obvious 15 years ago.

3. Why not name the vendor and list the devices? Consider the context again, please.

It's easy to wave your hand and say "if people are idiots who use hardware and devices that are known to be vulnerable, we should let them screw themselves", disclose the name of the vendor, and go on with your life. However:

* Pointed out directly, these vulnerabilities could easily lead to more than "the market levelling out discrepancies" (which, as we all know, does not always happen harmlessly). Because exploitation is so easy, it could lead to more physical damage and deaths happening immediately around the authors of this post.

* Not publishing at all would lead to these devices being used over and over again, and to obvious cryptographic bugs being dismissed as "theoretical threats", because the remote toy car community is full of "Internet of Stuff" people who dismiss cryptographic vulnerabilities on the basis of "it's crypto, who knows how to exploit it, we've got more important stuff to worry about right now".


The emotional climate of infosec has always been perceived from the outside as pessimistic, paranoid, and panicky, but I think that perception is greatly exaggerated.

FUD, bullshit, lack of skilled people, lack of budget, lack of understanding from adjacent departments, chaos, mayhem, overtime, incidents, and a creeping "I'm not sure what's going on" have always been part of the profession. Learning to accept frustration, constant change, ill-formed perceptions, and rejection is part of this career choice and a selection factor in the long term. Learning to look at the world from a certain angle that is hard to unlearn (especially if you're good at it) is the mental equivalent of a firefighter's calluses.

If you can bear with it all, being on the defensive side, a kind of digital first responder (regardless of where exactly you are in the industry), is a fun job, and a calling for some.

(Edits: Typos)


I love Cryptopals beyond my ability to articulate it well. This is monumental work that many engineers owe their "cryptography 101-404" education to.

For a long while, the company I work for used Cryptopals as a means to train and qualify interns before letting them touch serious practical stuff.

Not that knowing the answers is a problem (public solutions, albeit clunky ones, existed before), but such careful explanations do remove some of the fun of solving the challenges yourself.

We've got to come up with our own take without a public walkthrough now. Is anyone aware of any similar sets of challenges? CryptoHack is okay, but it's fairly deterministic, and that removes the "oh, where do I go now?" feeling, which is essential to sit with if you're going to learn to love the craft.
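For anyone unfamiliar with the flavor of these challenges, here's a sketch in the spirit of Set 1's single-byte XOR exercise. The plaintext and the crude frequency-scoring heuristic are my own illustration, not taken from the actual challenge set:

```python
# Recover a single-byte XOR key by scoring candidate plaintexts on
# how "English-looking" they are (a classic Cryptopals Set 1 move).

def score(text: bytes) -> float:
    """Crude English-ness score: count occurrences of common characters."""
    common = b"etaoin shrdlu"
    return sum(text.lower().count(c) for c in common)

def crack_single_byte_xor(ct: bytes) -> tuple[int, bytes]:
    """Try all 256 keys, keep the one whose decryption scores highest."""
    key = max(range(256), key=lambda k: score(bytes(b ^ k for b in ct)))
    return key, bytes(b ^ key for b in ct)

pt = b"the quick brown fox jumps over the lazy dog"
ct = bytes(b ^ 0x2A for b in pt)          # "encrypt" with key 0x2A
key, recovered = crack_single_byte_xor(ct)
print(hex(key), recovered)                 # 0x2a b'the quick brown fox ...'
```

The fun of the real set is that nobody hands you the scoring function; figuring out what "English-looking" means is the exercise.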

Edits: typos.


This makes my day. Thank you so much for the kind words!


To scale shipping static content, I'd rather look into a CDN with proper caching instead of maintaining ten layers of abstraction just to feel good about how modern my stack is.


Yes, but what about non-static?

Also, a CDN would classify as "serverless" under the definition used by the article.


Isn't RASP just slapping WAF-like signature detection directly into your application data streams? How would RASP mitigate:

1. Insiders with direct access to the database?

2. The same SQL bypass techniques used to bypass WAFs?

3. Developer errors in query logic that enable custom injections?


If your security strategy relies on one or two security controls, you're doomed most of the time.

We've added SQL filtering as a defense-in-depth measure: it has a convenient seat in the architecture and complements every other mitigation that proper application developers and DBAs should be doing (and frequently get wrong).

Even ORMs get bypassed once in a while:

- https://github.com/mysqljs/mysql/issues/342

- https://github.com/sequelize/sequelize/issues/5671

- (okay, we can avoid this one by saying nothing proper exists in the NodeJS world) https://bertwagner.com/2018/03/06/2-5-ways-your-orm-will-all...

Dumb concatenation can nullify the merit of even quite an advanced ORM: a textbook example of misusing Ruby's ActiveRecord (is that proper enough?) made it as far as the OWASP testing guide: https://www.owasp.org/index.php/Testing_for_ORM_Injection_(O...

Prepared statements get cooked wrong as well, but rarely; that's why they are a viable line of defense, though not the sole one (as nothing should be):

https://www.reddit.com/r/netsec/comments/ww9qm/sqli_bypassin...

https://stackoverflow.com/questions/134099/are-pdo-prepared-...

(In fact, I've seen with my own eyes exactly what the first comment in that Reddit post mentions.)
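To make the concatenation point concrete, here's a minimal sketch in Python with sqlite3 (my own toy schema, not one of the ORMs linked above), showing the same query written dangerously and safely:

```python
# Dumb string concatenation vs. a parameterized query against sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

payload = "' OR '1'='1"   # classic injection payload supplied as a "name"

# Vulnerable: the payload rewrites the WHERE clause and every row leaks.
leaked = conn.execute(
    "SELECT name FROM users WHERE name = '" + payload + "'"
).fetchall()

# Parameterized: the driver treats the payload as data, not SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (payload,)
).fetchall()

print(len(leaked), len(safe))  # 2 0
```

Any ORM, WAF, or RASP sits in front of exactly this failure mode; the filtering layer is there for the day someone writes the first version anyway.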


I actually came here to comment on this matter. The anecdotal evidence of several people I know is that playing FPS games with a trackpad (pretty much torture) improves intuitive touchpad usage to the extent that they don't notice any discomfort with it in daily life.

Perhaps the hand-eye coordination training that happens in FPS shooters translates to better touchpad usage in normal cases.

(Still, I'm told that trying to play FPS games competitively with it just sucks.)


What you are describing is effectively two aspects of the first step of traditional Buddhist shamatha meditation: awareness of a chosen object and awareness of the present signals of the body. So yes, meditation it is.


>We may never be able to build a machine that can recognize the full diversity of human emotional experience

Even humans have a lot of problems recognizing the full diversity of their own emotional experience, unless appropriately trained.


Descendants of Aristotle still find the limitations of the system amusing; that's amusing in itself.

I hope to live to see the day when the philosophical advancements of the 20th century (or the rediscovery of 2500-year-old Indian logic, if you like), formalized in accessible forms, gain widespread acceptance. That could leave plenty of people, whose job it is to juggle limited abstractions, with the need to pick more useful jobs.

No pun intended: these are terribly useful abstractions we've built our world on, but they barely hold up to a thorough reality check, and they leave out a lot as "paradoxes".


This article reminds me of how undergrad economics is taught.

1. Assume humans have a known, unchanging utility function that can be globally maximized, and assume they maximize it at all times

2. Lay out a whole bunch of reasons why this makes no sense

3. Ignore #2 and proceed to build a whole theory on #1


Please add some substance, or I will make your argument for you.

Yes! We all need to get on board with constructivist mathematics [0][1] already. Construction is very similar to computation, and it is not inconsistent to take "all reals are computable" or "all functions are continuous" (the same rules Turing discovered) as axioms if we like. We can therefore move computer science fully onto a foundation that is more rigorous than typical maths.

[0] https://plato.stanford.edu/entries/mathematics-constructive/

[1] https://www.ams.org/journals/bull/2017-54-03/S0273-0979-2016...
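For a flavor of what "all reals are computable" can mean operationally, here's a toy encoding of my own devising (not from the linked references): a real number is a program that, given n, returns a rational approximation within 2^-n, and arithmetic acts on programs rather than on completed infinite objects:

```python
# A computable real as an approximation program: n -> rational within 2**-n.
from fractions import Fraction
from typing import Callable

Real = Callable[[int], Fraction]

def add(x: Real, y: Real) -> Real:
    # To land within 2**-n overall, query each argument at precision n+1.
    return lambda n: x(n + 1) + y(n + 1)

def sqrt2() -> Real:
    def approx(n: int) -> Fraction:
        # Binary search for sqrt(2): shrink [lo, hi] until width <= 2**-n.
        lo, hi = Fraction(1), Fraction(2)
        while hi - lo > Fraction(1, 2 ** n):
            mid = (lo + hi) / 2
            if mid * mid <= 2:
                lo = mid
            else:
                hi = mid
        return lo
    return approx

x = add(sqrt2(), sqrt2())      # the program computing 2*sqrt(2)
print(float(x(20)))            # ~2.8284271...
```

Nothing here ever holds "the" real number sqrt(2); there are only finitely-specified procedures yielding approximations on demand, which is the constructive reading.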


You've made a far better one than I did below. Hats off.


Keep in mind that Aristotelian logic did not stop with Aristotle. It kept developing through the Middle Ages and even into the present day. Now it's called term logic. Fred Sommers made tremendous advances in expanding syllogistic logic into something more versatile than what Aristotle worked on.


Indeed. Yet it is still based on the True/False pair, which reflects neither reality nor human experience in most cases. Where it is applicable, it works perfectly. But the scope is limited.


Many-valued logics have been studied for quite a while. As far as I know, their applications are fairly limited.

https://en.wikipedia.org/wiki/Many-valued_logic


1. "Quite a while" is less than a hundred years, post-Gödel, and within math? Compared to 2000+ years of Aristotelian logic's dominance in the hard sciences, just because the Romans inherited most of their scientific views from the Greeks rather than from the Indians or Chinese?

2. It depends on the domains of applicability, if you think about it.

In pure CS and math? Yes, the visible value is limited, because most problems we choose to solve can be handled with the mathematical apparatus we're already armed with. The value I know of is mostly limited to optimizing problems that have poor solutions under binary logic.

In practical engineering? ATPG, to my understanding, requires multi-valued logic. Analyzing large phenomena and automating decision-making become an order of magnitude simpler, with better efficiency over the chosen metric: temperature controllers, decisions based on photo-metering (autofocus, exposure adjustment), etc. Somehow, even with the lack of readily available building blocks and tooling, it turns out there are problems people are motivated to solve from scratch, and MVL/fuzzy logic comes in handy.

This is just stuff I've overheard over a lifetime among bright engineers.

3. Moreover, the biggest impact is not in CS (just as the originally presented paradox isn't); it's on human judgment, decision-making, and general assessment of reality, where "neither true nor false" ("I don't know") is the first stepping stone toward making the world a much easier place to live in.
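To make "neither true nor false" concrete, here's a tiny sketch of Kleene's strong three-valued logic (K3). The encoding of the truth values on [0, 1] is my own choice; min/max/1-x for and/or/not is the standard Kleene (and fuzzy-logic) definition:

```python
# Kleene's strong three-valued logic: True, Unknown, False.
T, U, F = 1.0, 0.5, 0.0

def k3_not(a): return 1.0 - a
def k3_and(a, b): return min(a, b)
def k3_or(a, b): return max(a, b)

# "p or not p" is a tautology in two-valued logic, but if p is unknown,
# the honest answer K3 gives is... unknown:
print(k3_or(U, k3_not(U)))  # 0.5

# With definite inputs it agrees with classical logic:
print(k3_or(T, k3_not(T)))  # 1.0
```

The same min/max connectives, applied over the whole continuum [0, 1] instead of three points, give the fuzzy logic used in the controllers mentioned above.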

