Would you say ctDNA tools are sensitive and specific enough now to be able to make a decision about post-op adjuvant therapies? “Now that I’ve had surgery, did the R0 resection get it all, or do I need to do chemo and a challenging medication like mitotane?”
I’ve seen it most commonly thought of as using ctDNA to detect relapse earlier.
So, more like: did the tumor come back? And if it does, can ctDNA detect the relapse before you would otherwise find it with standard imaging? Most studies I’ve seen show that it can, and that ctDNA is a good biomarker for early detection of relapse.
The case for proactively looking for circulating tumor DNA without an initial diagnosis or underlying genetic condition is a bit dicier IMHO. For example, what I’d really like to know (I haven’t read this article, but I’m pretty familiar with the field) is how many people had a detectable cancer in their plasma (ctDNA) but didn’t receive a cancer diagnosis. It’s been known for a while that you can detect precancerous lesions well before a formal cancer diagnosis. But what’s still an open question, AFAIK, is how many people have precancerous lesions or positive ctDNA hits that never form a tumor?
Thank you for this! My wife recently got a pathology report back reporting a rare variant of a rare cancer, and I’m trying to get back into genomics (an almost-finished master’s degree) now to see if there’s anything my computering can do to aid. I’ve contacted Broad Clinical Labs.
Yes, still relevant. The same can be accomplished with other tools, but it will probably be more difficult.
The idea of Glamorous Toolkit is that it’s a collection of tools you use to solve software problems by making little explanatory tools. You might start out with some bigger problem like “I need to make this service fast”, come up with a question like “what does the flow of data I care about look like through the service?” and then try to answer that question by making tools that could analyze/visualize logging output or a stack trace or whatever makes sense in your software’s context.
The technique of “making little explanatory tools to help answer a question” is Moldable Development, similar to how Test Driven Development is “write a failing big feature test, then write little tests and make them pass until the big one passes”.
You can make little tools to explain away questions you have while you’re working with plugins or shell scripts or whatever you’re comfortable with and that’s “Moldable Development”. The Glamorous Toolkit just happens to be a nice system of tools that make it easy to make more little tools to help explain away problems.
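To make that concrete, here’s the kind of throwaway “little tool” I mean, in plain Python (the log format and stage names are invented for illustration): instead of eyeballing raw logs to answer “where does my data spend its time?”, you write a ten-line analyzer for exactly that question.

```python
# A throwaway "little tool" in the moldable spirit: answer one specific
# question ("how much time per pipeline stage?") from logs shaped like
# "req-42 parse 12ms". The format is made up for this example.
from collections import defaultdict

def stage_latency(log_lines):
    """Sum milliseconds spent per stage across all requests."""
    totals = defaultdict(int)
    for line in log_lines:
        _req, stage, ms = line.split()
        totals[stage] += int(ms.rstrip("ms"))
    return dict(totals)

log = [
    "req-42 parse 12ms",
    "req-42 enrich 40ms",
    "req-43 parse 9ms",
    "req-43 enrich 55ms",
]
print(stage_latency(log))  # {'parse': 21, 'enrich': 95}
```

You’d throw this away (or keep it next to the service) once the question is answered; the point is that it’s cheap enough to build per-question.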
Hope that helps! Lmk if you want to see some examples.
Source and bias: I worked closely with the developers before I had to take a break from work and computer stuff.
What a great idea. I believe in this a trillion percent, due to the personal experience of watching myself and a stronger developer tackle similar problems at the same time. I relied on my brain and the standard tools of my system. He of course did the same, but also made a custom visualizer for the core of the algorithm. Night and day improvement!
So I'm a believer in the principles. But I'm also curious about throwaway743950's question. What are the things in the Glamorous Toolkit that concretely make it better for this style of programming than traditional tools? You say "[it] just happens to be a nice system of tools that make it easy to make more little tools", but that's got to be downplaying it. Switching environments is an agonizingly costly thing to do. What rewards await those who make the jump? Rubies? Emeralds? Custom views-with-nested-sub-custom-views? Curious (but not yet won over) readers want to know.
I would add one thing that makes GT very different from other tools and very hard to recreate: these tools are ACTUAL objects, and the things you see are also ACTUAL objects, not just dummy representations of them as you see in other dataviz tools like plotting libraries or shell scripts.
This means your tools and visualizations are just context-specific views of your objects. You aren’t limited in how these tools can interact with those objects, because you are never working with static data; it’s always the actual objects.
It's hard to put into words, but it's similar to the difference between println debugging and a lisp repl or smalltalk debugger. They technically do the same thing but the actual implementation of them makes a world of difference.
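A rough sketch of the “views over live objects” idea in plain Python, with all names invented for illustration: a view is just a function registered for a type, and it always receives the actual object, never a serialized copy, so it can call methods, follow references, and never go stale.

```python
# Minimal view registry: context-specific views dispatched on the type
# of a live object. Everything here is a made-up example, not GT's API.
views = {}

def view(cls):
    """Decorator: register a view function for a class."""
    def register(fn):
        views.setdefault(cls, []).append(fn)
        return fn
    return register

class Order:
    def __init__(self, items):
        self.items = items
    def total(self):
        return sum(price for _name, price in self.items)

@view(Order)
def summary_view(order):
    # The view computes from the live object, so it can call methods
    # rather than render a frozen snapshot.
    return f"{len(order.items)} items, total {order.total()}"

def inspect_obj(obj):
    """Run every registered view for the object's type."""
    return [fn(obj) for fn in views.get(type(obj), [])]

print(inspect_obj(Order([("book", 12), ("pen", 3)])))  # ['2 items, total 15']
```

This is of course a pale shadow of doing it in an environment where the whole graphical stack is itself made of such objects, which is the point the debugger/driller comparison gets at.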
Actually it wouldn't be difficult to add similar view protocols to Python objects as well (I used GT extensively a couple years ago). Pretty much everything is possible, but the live debugger and driller would be really difficult to replicate, which is where GT really shines for me. Alas it was just too much to properly bridge it with Python, where the majority of my work lies, and GT becomes overwhelmed when passed a fraction of data Python handles with ease.
Simple views, sure, but tools like the driller or debugger are great examples of what I'm trying to highlight when I say that having the views work over actual objects is really important.
Because if it weren't for the fact that the graphical stack is implemented as Smalltalk objects, you couldn't build tools like the driller or debugger; they would have to be implemented as a secondary piece of software that loses the original context.
For example, I built a custom tool for myself when I was working on a p2p network: a section of the codebase had some non-obvious control flow, since it was handling multiple different p2p networks at the same time. Normally this is where you'd include a diagram in the docs, but in about an hour I built a custom code editor for the class that visualized all the control flow and explained the cases in a flow diagram, simply by introspecting on the methods defined in the class. And this tool never fell out of sync like a static diagram would, since it wasn't hardcoded by me. From that point on, I worked within this tool whenever handling anything related to it.
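A hypothetical miniature of that kind of tool, sketched in Python (the `Handler` class and its methods are stand-ins I invented, not from the actual p2p codebase): walk a class's AST and recover which of its own methods each method calls. A GT tool would render this as a live diagram; here we just print the edges.

```python
# Sketch: derive a method-level call graph from source by introspection,
# so the "diagram" can never fall out of sync with the code.
import ast

SRC = """
class Handler:
    def on_message(self, msg):
        if msg.get("kind") == "peer":
            self.handle_peer(msg)
        else:
            self.handle_data(msg)
    def handle_peer(self, msg):
        self.log(msg)
    def handle_data(self, msg):
        self.log(msg)
    def log(self, msg):
        pass
"""

def call_graph(class_src):
    """Map each method name to the other methods of the class it calls."""
    cls = ast.parse(class_src).body[0]
    methods = {n.name for n in cls.body if isinstance(n, ast.FunctionDef)}
    edges = {}
    for meth in cls.body:
        if isinstance(meth, ast.FunctionDef):
            edges[meth.name] = sorted({
                n.func.attr for n in ast.walk(meth)
                if isinstance(n, ast.Call)
                and isinstance(n.func, ast.Attribute)
                and n.func.attr in methods
            })
    return edges

print(call_graph(SRC))
# {'on_message': ['handle_data', 'handle_peer'],
#  'handle_peer': ['log'], 'handle_data': ['log'], 'log': []}
```

The real tool was richer (an editor with the diagram inline), but this is the core trick: the view is computed from the code itself, so it stays correct for free.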
And fwiw, the python story is pretty seamless from my usage of it a few months ago. I was able to integrate and use python libraries into this project without much hassle.
Over the last couple of years we added a reasonably extensive infrastructure for working with Python from within GT. You can define contextual inspector views both in GT and in Python for live Python objects, for example. There is also a debugger for Python.
Also, GT is now a distributed Smalltalk system, too. We use it in production settings to compute large jobs on large data sets :)
I think moldable development needs a few tutorials demonstrating concrete use cases. Without those it feels promising in the abstract, but I doubt many developers are able to connect with it and figure out how to actually use it.
It feels so open ended that I wouldn’t know where to start. And I’ve actually spent several hours exploring Glamorous Toolkit!
Thank you for the interest and for taking the time to explore the environment!
There are quite a number of videos and explanations now, but we are still struggling to package them in a way that seems more approachable.
We would need help with this. If you are interested, I would offer to have a session with you that we record and in which we go through your questions and I provide live explanations. Join us on Discord and we take it from there: https://discord.gg/FTJr9gP
> and then try to answer that question by making tools that could analyze/visualize logging output or a stack trace
We already have industry standards for doing this. Why would I want to build some micro-tool/throw-away code to do what another tool does much better and battle tested?
That's the pitch but it does not seem the reality. It seems like the exact opposite of unix philosophy, one tool to rule them all, rather than doing one thing really well.
I can achieve the same with unix philosophy, using the tools and languages I already know.
Indeed, it is possible to build tools elsewhere. The question is: do you build them, and if yes, when?
What we show with GT is that it is possible to build such tools for every development problem. This leads to thousands of micro tools per system that should co-exist.
GT is not a large tool to rule them all. In the Unix analogy, it is Unix, not one of the tools :).
This still leaves the question of why one would want to build those tools when standard tools already exist. Because systems are highly contextual. This means we can predict classes of problems but not specific ones, which in turn means that any ready-made tool built before the problem is known will not address the specificity of that problem.
This is actually not that new of an idea. Testing is already done like that. We do not download tests from the web and run them on our system. We develop them as part of development after we know the problem. It's that contextualization that makes us stop every time a single test fails as we know that each of them captures something that our system specifically cares about.
Now, a test is a tool. We can extend the same idea to any other tool.
I think you believe we all struggle with your framed problem and your tool is some panacea. In reality, every tool and process has its limitations and rough edges, including yours as you've hopefully gleaned in comments here.
Being married to a specific tool like GT is limiting. GT doesn't work with most industry languages _today_, even though _in theory_ it could. It's written and scripted in a language few use, which makes it unapproachable.
I definitely do not want people to marry their tools. That does not sound like a good idea, not even if the tool is glamorous :).
More seriously, thank you for sparring with me.
GT is free and open-source. It's extensive. It comes with documentation, too. We even document the practices and the process. With public case studies. With peer-reviewed publications. And we even bet our own livelihood that it works for tackling hard problems in significant systems that others cannot tackle.
So, yes, we are not just claiming that the problem exists. We have seen it validated first-hand over a large period of time (15+ years) so we are reporting on it :).
This experience points to the idea that decreasing the cost of creating a tool is much more important than the tools that exist out of the box.
Regarding the support for other languages, it's true that we only have analysis support for a couple of dozen languages. But creating the support for a new one is often measured in days. For example, it took a couple of weeks to add COBOL to the set. I challenge you to find even one properly working open-source COBOL parser (we looked and could not really find one). In GT you can find a whole free and open-source infrastructure :).
GT is certainly not a panacea. It's a documentation of how the approach can work. I am not aware of any other environment in which tools can be built in minutes and in which thousands of them practically co-exist. If this appeals to people, and it does appeal to some, now they have a vehicle to practice with. And for those who choose not to, that's OK as well :).
Shopify is the best platform for online stores, as well as a diverse company of happy polyglots who hack on everything between UX, data warehouses, and low-level CRuby VM performance issues.
We value personal growth, and developer quality of life in a huge way and show it by having safe deployment systems anyone can trigger; facilitated retrospectives + off-sites with Corey Haines; GitHub and PRs for everything; small, mixed teams of folks with 0 to 30 years of experience, and encouragement from the still-hacking CEO to give back to open source on company time.
High throughput + realtime tools for customers; we hired the author of SkyDB to continue making it more awesome. People on this team have backgrounds in low-level kernel work, Smalltalk VM development, and deployment of national wireless systems.
Stack
The right thing to do should be the easiest thing to do; this team works on things like implementing Kafka across the entire Shopify system to let anyone pub/sub to any call-site, opening the door for easy instrumentation/experimentation.
Performance
We’re in the process of sharding Shopify in order to isolate and better service stores experiencing $3,000,000+/hour flash sales. People who worked on IBM’s specialized JVM are now hacking here, while also contributing context threading patches to CRuby off the side of their desk for fun.
Point of Sale
If you like making beautiful hardware + beautiful software that you can see used by customers who tell you this is the Holy Grail of retail systems, then you go here.
Operations
Engineers here work on projects like Chef, distributed cloud+iron failover systems, DDOS mitigation stuff and our new datacenters.
More teams
Data (giant, columnar data-stores + PhDs with backgrounds in finance and astrophysics)
Admin (Batman.js + UX professionals + designers who code)
Self help (machine learning for people learning)
Apps (JS SDKs for seamless iframes + APIs)
Checkout (provide low-latency experiences while dealing with high-latency payment gateways; don’t drop any of the zillion$$$ going by)
Platform + Growth (biz dev science + gamification of ecommerce education)
Internal tools (Fast Company-covered staff bonus system aka “Unicorn”, Bayesian classification of customer questions assigned to support experts, hand-made phone queueing systems wrapped around Twilio, etc., etc.)
Billing (hard-core testers need only apply)
Payments (integrations with Stripe + fraudster f-ing)
Core/API (everything not here)
(more but secret)
Seeking full-time programmers, and data analysts to work in Toronto, Montreal, and Ottawa, Canada. We’ll help with visas.
Seeking operations staff anywhere (remote friendly).
This bonus/props thing is really just for fun and recognition more than anything else. Performance reviews and salary adjustments have their own separate process.