Perspective from a computational biologist: campus-hosted HPC means the direct cost pressure is seen as IT staff and hardware costs, and researchers are encouraged to use the available capacity. This is good.
Externally-hosted HPC means every single compute job is seen as something that directly costs money. This negatively affects the quality of scientific output (research playfulness, creativity, focus on the research itself, etc.).
Yes, no question this is true. The costing models in use (and often required by granting agencies) make apples-to-apples cost comparisons almost impossible, and impose significant false costs on research budgets.
It is caused primarily by three factors: (1) extremely asymmetric testing rates between the two groups; (2) inclusion of cases <14 or even <21 days post-vaccination in the “unvaccinated” column; and (3) in some cases, astonishingly, even “unknown vaccination status” being counted in the “unvaccinated” column.
Number 2 essentially exploits a transient post-vaccination increase in vulnerability in order to sell more vaccinations. I am at a loss for words to describe just how scandalous this is.
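To make factor (1) concrete, here is a toy calculation with entirely made-up numbers (the groups, rates, and variable names are all hypothetical; the point is only the arithmetic of asymmetric testing):

```python
# Hypothetical illustration of factor (1): asymmetric testing rates.
# All numbers below are invented purely for the sake of the arithmetic.

def apparent_case_rate(true_infection_rate, testing_rate):
    """Detected cases per capita, when only tested infections are counted."""
    return true_infection_rate * testing_rate

true_rate = 0.02       # assume the SAME underlying infection rate in both groups
vax_testing = 0.20     # 20% of infections among the vaccinated get tested
unvax_testing = 0.60   # 60% among the unvaccinated (3x more testing)

vax_cases = apparent_case_rate(true_rate, vax_testing)      # 0.004
unvax_cases = apparent_case_rate(true_rate, unvax_testing)  # 0.012

# Apparent "effectiveness" = 1 - rate ratio, despite identical true rates:
apparent_effectiveness = 1 - vax_cases / unvax_cases        # ~0.667
```

With identical true infection rates, a 3x testing asymmetry alone manufactures an apparent "effectiveness" of about 67%.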
I agree, and I also urge people to watch this footage. It can be found here[1] (it is also featured for a few seconds in the other videos [2-4]).
There are two takes on this by comedians here[2] and here[3], which I personally found intriguing (and funny).
There is also a comprehensive (serious) take here[4] showing a bunch of extra footage I hadn't seen yet.
If anyone watches these takes and disagrees with them, I would love to hear specifically where you disagree.
Especially: how can the omissions shown in [4] that the media are making be explained, e.g. not showing the several incidents where masses of Trump supporters actively stopped the violent provocateurs (protecting police, stopping them from smashing a window, etc.)? I am genuinely curious.
This video is likely being misinterpreted. It doesn't really make sense, because there are masses of plain-clothes citizens on this side of the barricade before it is opened, including the guy who steps forward and gestures for everyone to "come on over." The police at that barricade had already lost the line by this time. The zooming at the beginning makes it hard to see exactly who is lifting and moving the barricades, though; perhaps it could be analyzed better frame by frame.
Considering [4] features Rudy "trial by combat" Giuliani, I'm going to take a pass on using that as a news source. Allowing him to shape your message is probably not ideal!
Yep. I believe you can also become an Australian permanent resident and then citizen through that visa. You just need a spare 5 to 15 million dollars AUD to invest, depending :). IIRC Steve Wozniak went for it or at least was considering it. Details here: https://www.border.gov.au/Lega/Lega/Form/Immi-FAQs/what-is-t...
I'm curious what your experience in Australia has been for you to think that. In my experience that's not what I'd call a lot in Australia: your average basic IT contractor will most commonly charge $80-120/hr, let alone any sort of specialty.
Don't most contractors use recruiting companies when they start out, then rely on word of mouth and networks? That might be why the Hudson/Greythorn rates seem low.
Hm, yeah, that could be true. I think the big organisations (banks, insurance companies, super funds, Telstra, NBN, etc.) don't often take on contractors directly either.
I've always been in a startup or just a full-time employee, but my point still stands: if that's an average rate, why did the article point it out?
Someone above suggested it's because the journalist is lazy and the number sounds big enough to the general public. Of course, professionals with these skills would regularly demand "more than" that.
It seems the "world's largest" claim is correct by a decent margin; I can't find any battery installation that comes close to the 100MW output and 400MWh capacity claimed here. I wonder what the second-largest solar-battery farm is (that is at least financed and in planning, as this one is)? Or is there a bigger one?
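For a sense of scale, the two claimed figures together imply how long the battery could sustain its maximum output (a back-of-envelope check, taking the article's numbers at face value):

```python
# Back-of-envelope: claimed figures from the article.
power_mw = 100        # maximum output (MW)
capacity_mwh = 400    # storage capacity (MWh)

# Duration the battery could run flat-out before emptying:
hours_at_full_output = capacity_mwh / power_mw  # 4.0 hours
```

Four hours at full output is what distinguishes it from shorter-duration grid batteries sized mainly for frequency response.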
FWIW, The CSIRO logo is in the shape of Australia (the dot representing Tasmania), as it's an Australian organisation. It also sort of looks like WiFi signal bars to me, and I guess the CSIRO has some legitimacy for that seeing as they were involved in inventing WiFi: https://www.csiro.au/en/Research/D61/Areas/Wireless-and-netw...
Does anyone know a good way to implement wireless roaming around a large home? I had Ubiquiti's Zero-Handoff on my list of things to check out, but it seems they don't use it anymore, and as noted above it was a hack anyway.
For years I have been using (and recommending to others) multiple access points (of any brand) wired together and configured with the same SSID and password. This usually works mostly OK, although VoIP calls tend to drop out when moving around the house (and thus changing BSSID). I'm keen to fix the VoIP issues; maybe I'm missing something, such as special routers or an 802.11 standard that would work better.
If the OP of this thread would take a few minutes to read through the Ubiquiti forums, they'd find tens of conversations in which someone from Ubiquiti or a power user has explained why you shouldn't be using the old ZHO hacks. I said this yesterday in another Ubiquiti-focused comment, but: 1) ZHO forces all APs into the same L2 network, and 2) ZHO puts all APs on the same channel and SSID. The latter should be blatant enough for anyone to see that it's not going to scale or be performant. ZHO was implemented for old, low-bandwidth devices that needed to be on the same network. Full stop. Don't use ZHO!
Beyond that, do some due diligence when setting up your networks. Even with 802.11n/ac in the 5GHz range you should be aware of best spectrum usage, know about things like DFS, and consider how wide a band you're allocating to the channel selection you're making (hint: it impacts overlap). Do not (do not, do not) use auto channel selection unless you're forced to. I get it, you may not have control over this in a dense environment (apartment), but if you can reasonably control your airspace, map out channel utilization, especially if you have more than one AP. Then go to the next step: figure out where the worst spots for reception in your house are and, as the person above me states, tune RSSI down to get clients kicked as they "roam". This will force them to reselect much faster. One comment mentioned 8-second reassociations; that's just bad tuning and clear ignorance of the problem - you'd have the same thing with any other vendor.
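As a toy sketch of why channel overlap matters (this is the familiar 2.4 GHz case; the helper names and the greedy selection are my own illustration, not any planning tool):

```python
# Why only channels 1, 6 and 11 are conventionally "safe" together in 2.4 GHz.
# Channel n has its centre at 2407 + 5*n MHz; a 20 MHz-wide 802.11
# transmission effectively needs ~25 MHz of centre spacing to stay clear
# of its neighbours' spectral masks.

def centre_mhz(channel):
    """Centre frequency of a 2.4 GHz channel, in MHz."""
    return 2407 + 5 * channel

def non_overlapping(channels, min_spacing_mhz=25):
    """Greedily pick channels whose centres are at least min_spacing apart."""
    picked = []
    for ch in sorted(channels):
        if all(abs(centre_mhz(ch) - centre_mhz(p)) >= min_spacing_mhz
               for p in picked):
            picked.append(ch)
    return picked

print(non_overlapping(range(1, 12)))  # [1, 6, 11]
```

The same reasoning is why an 80 MHz-wide 5 GHz channel eats four 20 MHz channels' worth of spectrum: wider allocations mean fewer non-overlapping choices.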
I think Ubiquiti makes some really great products for the price. Have they violated the GPL? Quite possibly, but as others have said, I don't think that's been proven judicially as of yet. You will get one of the best products at this price point, though, and when set up correctly it will be much more performant than things like Eero or Google Wifi, which rely on wireless as the backhaul between APs. If you can't use cabled infrastructure, there are options: 1) powerline is now "decent", as in 400-700 Mbit is attainable depending on your situation, and it's also much cheaper now (look for the HomePlug AV2 standard); 2) dedicated, directional wireless backhaul is a better option than an omni backhaul running on hardware that is already handling your clients. That is Ubiquiti's forte. They only recently released the UniFi product, but you can use a NanoBeam, Bullet or NanoStation in your home to move traffic between dedicated APs.
Ultimately, if you want good wifi you have to pay for it and plan and deploy it correctly. The second half of that is why we have Eero and Google Wifi - because people don't do it. They buy Ubiquiti or some other enterprise-focused WLAN solution, plug it in, do minimal work, and then say it doesn't work or sucks, without RTFM.
Finally, if you want "fast roaming" you should know what you want before buying a product, since the device OS needs to support it. Apple has basically said they're supporting 802.11r: https://support.apple.com/en-us/HT202628. And 802.11r is in the latest builds of the UniFi firmware, but not really exposed in the GUI config because, well, people would probably shoot themselves in the foot with it.
Appreciate this comment. I did want to follow up that we are not a Ubiquiti shop but have instead taken over other deployments of it. My own Ubiquiti deployment experience is nil, so it's useful to hear that the actual implementation of Zero Handoff is a hack... nasty, and that certainly explains the problems we've seen.
We're still in the process of tuning one client with a significant investment in UniFi. ZHO is not enabled in that case, and I am going down the routes you've described re RSSI tuning and RF mapping, but we still see blatant performance issues. I'll readily admit that being forced to resort to forums for support, and to outdated setup articles, isn't exactly confidence-inspiring and helps fuel my distaste for the product. We're stuck with it, though.
This is likely one of those things that requires experience, but as I'm hoping to install a Ubiquiti-focused system in my house over the next few weeks, do you have any suggested resources? Particularly on the subjects of spectrum usage, DFS, and channel selection. I'm not in an apartment, but I'm still of the opinion that neighboring network boxes are causing congestion on my local system. We're all on Google Fiber with the provided network boxes, which seem to automatically select the same channel. Simply selecting a different channel hasn't seemed to improve the situation (and has potentially made things worse).
The /r/homenetworking subreddit will be happy to answer questions and walk you through your deployment, plan, etc. in whatever level of detail. They basically spend most of their time telling people which Ubiquiti gear to get (since "buy good, purpose-built hardware" is the #1 solution to most people's problems), why to get each piece, and how to arrange it - so basically exactly the sort of thing you're asking.
Today most, if not all, enterprise-grade APs have settings for minimum RSSI values. Think of it this way: having one client on your WLAN with a weak connection introduces errors and retransmissions that impact every client to some extent (it's a shared medium). In a deployment wherein you have multiple APs working together (on different channels) there's going to be overlap, so you have continuity of service. You may move around and stay connected to WAP-B with an RSSI of -70 while being closer to WAP-A with an RSSI of -50. By default your OS will try to hang on to that connection as long as it can, since it's not scanning in the background for something better on a different BSSID (the BSSID is the MAC address; in a multi-AP environment you'll generally have a single SSID and each AP has a unique BSSID).
With minimum-RSSI implementations, the AP will kick (deauth) clients that no longer meet the minimum RSSI level. This forces them to reconnect, and they will go with the strongest BSSID advertising the same SSID. Still rudimentary, but it works rather well - I don't even notice roaming in my home anymore after tuning. It's very evident when you're on a network that has defaulted to auto and all of the channel selection and power output is up for grabs.
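The minimum-RSSI behaviour can be sketched in a few lines (the function names, data shapes, and the -75 dBm floor are purely illustrative - not any vendor's API):

```python
# Toy sketch of minimum-RSSI roaming: the AP deauths any client whose
# signal drops below a configured floor, forcing it to reassociate with
# the strongest BSSID advertising the same SSID.

MIN_RSSI_DBM = -75  # illustrative floor; tune per deployment

def aps_to_deauth(client_rssi_by_ap, min_rssi=MIN_RSSI_DBM):
    """Return (ap, client) pairs the AP side should kick (RSSI below floor)."""
    return [(ap, client)
            for ap, clients in client_rssi_by_ap.items()
            for client, rssi in clients.items()
            if rssi < min_rssi]

def best_bssid(rssi_by_bssid):
    """After a deauth, the client reassociates with the strongest BSSID."""
    return max(rssi_by_bssid, key=rssi_by_bssid.get)

# The roaming scenario from the comment above: stuck on WAP-B at -70
# even though WAP-A is now stronger at -50. Once kicked, the client
# picks WAP-A on its own.
print(best_bssid({"WAP-A": -50, "WAP-B": -70}))  # WAP-A
```

The real mechanism lives in the AP firmware and the client's 802.11 state machine, of course; this just shows why a deauth at the floor makes reselection fast instead of waiting for the client to give up on a dying link.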
If you're looking at doing this with Ubiquiti, they have documentation and even point out the warts of simple RSSI tuning. Trawling the forums before doing a deployment is something you should do, as this equipment isn't really something that will autoconfigure itself very well. I mean, it works - but you can clearly see in this thread that people who just expect it to work have bad things to say about its effectiveness. What they don't understand is that a lot of those problems are self-inflicted.
Proteins don't usually "learn" how to fold from other proteins; their structure is determined by their amino acid sequence. In vivo, chaperone proteins help proteins fold into their native structures by preventing premature aggregation, but the final fold is still a result of the amino acid sequence. Less commonly, there are some rare chaperones that induce specific folds that wouldn't otherwise occur, but these are the exception, not the rule. You are right, however, that prions are an exception: prions are uniquely misfolded, amyloid-prone proteins that cause the normally folded form of the same protein to refold into the prion form, leading to a chain reaction of conversion. As far as I know, there is still a lot of research to be done to verify the theories regarding prions.
It doesn't make sense to say newly synthesised proteins arise mostly from amino acid metabolism; metabolism involves both catabolism (breakdown) and anabolism (synthesis). Amino acid degradation yields glucose via gluconeogenesis or cellular energy (ATP), whereas amino acid synthesis is of course what supplies proteins, as proteins are made of amino acids. So yeah, it doesn't really make sense. Perhaps you meant to say that newly synthesised protein uses amino acids that are most often sourced from proteolysis?
As for red blood cells, they don't have any mitochondria, so they only metabolise glucose and other sugars (and anaerobically at that, given their function of carrying oxygen). They certainly don't metabolise amino acids or proteins, so I'm not sure where you got that idea.
Yeah, I oversimplified protein 'chaperoning' for brevity's sake. As for prion recruitment of other proteins leading to amyloid plaque formation in the brain, I concede it's an open question. But my money would be on 'yes' to this hypothesis, given recent studies that have been coming out (except for a particular German one that concluded the opposite based on a mouse model of PD).
I'm probably on the wrong end of the metabolic semantics here; I was specifically referring to hydrolysis (or is the correct term hydrogenation?) occurring in red blood cells. If this is not considered part of normal metabolic processes, then mea culpa. The particular (pro)drug I mentioned (likely) relies on this mechanism to cleave a covalent bond between the two constituents of the prodrug, producing a 'time-release' effect: the constituents being l-lysine and dexamfetamine/dextroamphetamine ('lisdexamfetamine').
For example, here's one of the few serious studies I've been able to find on its metabolism (i.e. not a single-dose study that concludes 'yes, this leads to amphetamine in the blood'): http://www.tandfonline.com/doi/full/10.3109/21556660.2013.77... . There was another (better) one somewhere on PubMed, but I can't seem to find it atm (although it was also an in vitro study). I also found this more general paper quite interesting (although I'm guessing it's outdated?): https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1142344/
I admit my understanding of the process by which amino acids are synthesised into proteins in vivo is not the best. I'm actually a little hazy on how I first arrived at the notion that an over-abundance of l-lysine could disrupt the Krebs cycle. I probably came up with it during one of my long treks across Wikipedia, so it's very likely wrong...