Hacker News | yazaddaruvala's comments

At least in theory. If the model is the same, the embeddings can be reused by the model rather than recomputing them.

I believe this is what they mean.

In practice, how fast will the model change (including the tokenizer)? How fast will the vector db be fully backfilled to match the model version?

That would be the “cache hit rate” of sorts and how much it helps likely depends on some of those variables for your specific corpus and query volumes.


> the embeddings can be reused by the model

I can't find any evidence that this is possible with Gemini or any other LLM provider.


Yeah, assuming what you're saying is true and continues to be,

it seems the embeddings would just be useful as a "nice corpus search" mechanism for some regular RAG.


This can’t be what they mean. Even if it were somehow possible, embeddings lose information and are not reversible, i.e. embeddings do not magically compress the actual text into a vector in a way that a model can implicitly recover the source text from the vector.


LLMs can’t take embeddings as input (unless I’m really confused). Even if they could, the embeddings would have lost all word sequence and structure (they wouldn’t make sense to the LLM).


Any insights into why game engines prefer triangles rather than Gaussians for fast rendering?

Are triangles cheaper for the rasterizer, antialiasing, or something similar?


Cheaper for everything, ultimately.

A triangle by definition is guaranteed to be co-planar; three vertices must describe a single flat plane. This means every triangle has a single normal vector across it, which is useful for calculating angles to lighting or the camera.
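For instance, that single normal is just a cross product of two edge vectors; a minimal sketch in plain Python (vertex values here are illustrative):

```python
# Sketch: every triangle has one normal, computed via the cross product
# of two edge vectors. No external libraries needed.
def triangle_normal(a, b, c):
    # Edge vectors from vertex a
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    # Cross product u x v is perpendicular to the triangle's plane
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

print(triangle_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # → [0, 0, 1]
```

A quad's four corners would need an extra step (or a split into two triangles) because no single such normal is guaranteed to exist.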

It's also very easy to interpolate points on the surface of a triangle, which is good for texture mapping (and many other things).
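One common way to do that interpolation is with barycentric weights; a toy sketch (the weights and attribute values are made up for illustration):

```python
# Sketch: interpolating a per-vertex attribute (e.g. one texture
# coordinate) across a triangle using barycentric weights.
def interpolate(weights, attrs):
    # weights: barycentric weights (non-negative, sum to 1)
    # attrs: the attribute's value at each of the three vertices
    return sum(w * a for w, a in zip(weights, attrs))

# Halfway along the edge between vertex 0 and vertex 1:
print(interpolate((0.5, 0.5, 0.0), (0.0, 1.0, 0.25)))  # → 0.5
```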

It's also easy to work out if a line or volume intersects a triangle or not.

Because they're the simplest possible representation of a surface in 3D, the individual calculations per triangle are small (and more parallelisable as a result).


Triangles are the simplest polygons, and simple is good for speed and correctness.

Older GPUs natively supported quadrilaterals (four sided polygons), but these have fundamental problems because they're typically specified using the vertices at the four corners... but these may not be co-planar! Similarly, interpolating texture coordinates smoothly across a quad is more complicated than with triangles.

Similarly, older GPUs had good support for "double-sided" polygons, where both sides were rendered. It turned out that 99% of the time you only want one side, because you can only see the outside of a solid object. Rendering the inside back-face is a pointless waste of computing power. Dropping double-sided support actually simplified rendering algorithms by removing some conditionals in the mathematics.

Eventually, support for anything but single-sided triangles was in practice emulated with a bunch of triangles anyway, so these days we just stopped pretending and use only triangles.


As an aside, a few early 90s games did experiment with spheroid sprites to approximate 3D rendering, including the DOS game Ecstatica [1] and the (unfortunately named) SNES/Genesis game Ballz 3D [2]

[1] https://www.youtube.com/watch?v=nVNxnlgYOyk

[2] https://www.youtube.com/watch?v=JfhiGHM0AoE


> triangles cheaper for the rasterizer

Yes, using triangles simplifies a lot of the math, and GPUs were created to be really good at doing the math related to triangle rasterization (affine transformations).


Yes, cheaper. Quads are subject to becoming non-planar, leading to shading artifacts.

In fact, I believe that under the hood all 3D models are triangulated.


Yes. Triangles are cheap. Ridiculously cheap. For everything.



Fwiw at Amazon it’s expected that the first 33%-50% of the meeting is reading time.

The rest is time for feedback, discussion, and ideally a decision or collecting action items.


The reality is that competition at Amazon is intense.

Amazon corporate is roughly broken down like this:

L4: 45%, L5: 45%, L6: 7%, L7: 2.5%, L8+: 0.5%

Only the first promotion has guardrails (L4 to L5). There you’re typically just expected to be able to write at all. From there, many people end their careers at L5.

Going from L5 to L6 requires being the best person on your team for multiple years running, compared against some really smart, focused, and motivated people. You’re also expected to write well, read, and coach other people’s writing, even if only passably.

Going from L6 to L7 is very difficult. One of the biggest differentiators really is scale. If you still need help writing docs, that’ll slow you down and you won’t scale. If you’re slow to read and provide valuable feedback, that again will hold back your scale (many people compensate by adding more time to their work days).

However, the funny thing is the doc writing culture at Amazon is built by and for L8+ leaders. Everything else was just training and weeding people out.

Going from L7 to L8 is where “in-meeting reads” really start showcasing the differences between leaders. I’ve known smarter people who made better decisions but whose careers stagnated, while “80th percentile” decision makers grew, because the speed to grok information and deliver valuable feedback 9 times out of 10 matters more than the incremental benefit of being right 10 out of 10.

So I get your concerns about in-meeting read-throughs, but just keep in mind who and what the practice is de facto built for.


The real benefit of doc writing isn’t decision making, it’s education. It allows everyone at Amazon to evaluate the author’s ability to refine their “chain of thought”.

The nice side effect is the author taking 10x more time to save 10x L+1 and L+2 leaders (i.e. more expensive people) from spending that same time trying to understand it.


Yet there are companies as successful as Amazon, or more so, that don’t use this approach.

That clearly shows it’s a cultural or quirky thing; otherwise there would be a clear correlation between a company’s success and its doc culture.


> The nice side effect is the author taking 10x more time to save 10x L+1 and L+2 leaders (ie more expensive people) from spending that same time trying to understand it.

IMHO this is analogous to most of the economic justification for not being sucky at documentation. When you apply this analysis to the user-facing side, an hour of documentation can save hundreds of hours of user thrashing.

This is something that I would assume is intuitive, but under the capitalist mode of software development, it seems to be an uphill fight to get the economic rationale into management's heads. There are of course exceptions, and they stand out in excellence.

When you shortchange documentation, you're externalizing costs that shared efficiencies say should be internalized. Kind of like dumping sewage in the lake, or driving on leaded gas.


Blood oxygen sensors seem relatively cheap and low power.

I wonder if they could use that as the feedback mechanism.

Ideally, if the sensors are small, low-power, and cheap enough, CO2 and lactic acid levels would also be good to monitor for increasing blood flow.


I empathize with your opinion Dig1t. War is horrible and meat grinders are especially so.

But when a victimized population like Ukraine decides it wants to keep fighting, especially given:

> Ukraine should be opposed to depopulating itself.

Then you gotta ask yourself:

If they would rather die than suffer the consequences of a compromise, then maybe they know something about the consequences of that compromise that you and I don’t? And if so, maybe we should continue trusting the victims not to compromise?


+1 I’ve really enjoyed using more declarative languages in recent years.

At work I’ve been helping push “use SQL with the best practices we learned from C++ and Java development” and it’s been working well.

It’s identical to your point. We no longer need to care about pointers. We need to care about defining the algorithms and parallel processing (multi-threaded and/or multi-node).

Fun fact: even porting optimized C++ to SQL has resulted in performance improvements.
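As a toy illustration of that declarative style (using Python's built-in sqlite3; the table and data are made up, not from any real workload):

```python
import sqlite3

# Toy sketch: an aggregation written declaratively in SQL, where the
# engine picks the execution strategy (indexes, parallelism) for us,
# instead of hand-written loops over rows.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [("a", 10.0), ("a", 5.0), ("b", 7.0)])

# Declarative: state *what* result we want, not *how* to compute it
rows = con.execute(
    "SELECT customer, SUM(amount) FROM orders "
    "GROUP BY customer ORDER BY customer").fetchall()
print(rows)  # → [('a', 15.0), ('b', 7.0)]
```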


Ok, so maybe a controversial opinion:

I've been buying local, pasture raised chickens for the last 10 years. I am very fortunate to have had the income to allow me to do so. I also don't eat that many eggs (roughly a dozen a month - so it hasn't been that expensive).

The price of my eggs was always between $8-$12 / dozen (including this weekend, when I easily found and bought another 2 dozen). I get that I was buying "already expensive eggs", because apparently other people were buying eggs at $2 / dozen.

However, to be frank, I'm not sure how people expect eggs to be so cheap. Taking into account the land, the water, the feed, the labor, the transportation all to create a dozen eggs, it must cost more than $2.

Clearly paying a little more for the eggs has allowed me to support farms which are robust to large shocks like this (both in terms of input costs and in terms of health of chickens). I really hope as a society we can all move away from the unsustainable farms and improve the economics of sustainable farming so that everyone can afford locally grown, healthy eggs for centuries to come.

In the meanwhile, there will be people who have to buy fewer eggs (whether because of health regulations, or because reality checks like the current market shock will always exist).

Hopefully, after this crisis, through graduated health regulation we can cause a controlled increase to the floor price of unsustainably grown eggs, while also (through technology and economies of scale) reducing the floor price of locally sourced, sustainably grown eggs.


Feed at a large scale operation is a lot cheaper than you’d think. The bulk of the food is soybean meal left over from oil production and distillers dried grains with solubles left over from ethanol production. The feed manufacturers make deals with those producers for their leftover product for very cheap. They supplement the feed with some other stuff like oyster shells for calcium: bone meal from meat producers, bakery meal from stale or expired bread, wheat middlings from milling flour, and so on. None of them are expensive primary products, just whatever the cheapest local sources are producing as waste in huge quantities. Some places will even give the stuff away because the cost of transporting it exceeds its worth as compost. Since the input ingredients are variable and the feed manufacturers have to plan for that, they offer the big farms steep discounts on long term contracts that fix their costs.

A chicken lays a few hundred eggs per year so they’re very economically productive and you can house hundreds or thousands of them per coop somewhere the land, water, and labor are cheap.

Although we’ve sacrificed animal welfare, sanitation, and quality to get those prices.


Until the past few years $1 was normal here, often less when they went on sale. Also, most eggs in the supermarket are locally grown. Transporting them is a PITA both due to fragility and spoilability.


The store is happy to lose a dollar on the eggs to get you to stop there, it's not just about the production.

