
Posted by EvgeniyZh 10/23/2024

Yes, we did discover the Higgs (theoryandpractice.org)
281 points | 150 comments
lokimedes 10/25/2024|
Not long after the initial discovery, we had enough data for everyone at the experiments to simply run a basic invariant-mass calculation and see the mass peak popping up.

Once I could "see" the peak, without having to conduct statistical tests against expected background, it was "real" to me.

In these cynical times, it may be that everything is relative and "post-modern subjective p-hacking", but sufficient data usually ends these discussions. The real trouble is that we have a culture that is addicted to progress theater, and can't wait for the data to get in.

louthy 10/25/2024||
> run a basic invariant-mass calculation and see the mass peak popping up.

For the idiots in this post (me), could you please explain what that entails and why it helps confirm the discovery?

fnands 10/25/2024|||
Not the original commenter, but also ex-HEP person:

The invariant mass is the rest mass of the particle (i.e. its "inherent" mass). You can calculate it by taking the final-state decay products of the original particle (i.e. the particles that are actually observed by the detector) and summing up their four-vectors (squared).

You can plot the invariant mass calculated from any particular final state, and for a rare particle like the Higgs the majority of the contributions to your plot will be from background processes (i.e. not Higgs decays) that decay into the same final state.

If you have a lot of Higgs decays in your sample you should be able to see a clear peak in the distribution at the invariant mass of the Higgs boson, a clear sign that the Higgs (or something with the same mass) exists.

Often by the time the discovery has reached statistical significance, you might not really be able to see such a clear sign in the mass distribution. I.e. the calculations are telling you it's there but you can't see it that clearly.

I wouldn't really say this helps confirm the discovery in a scientific sense, just that it's reassuring that the signal is so strong that you can see it by eye.
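For the curious, the invariant-mass calculation described above can be sketched in a few lines of Python. This is only an illustration in natural units (c = 1); the two-photon "event" below is made up and has nothing to do with real detector data:

```python
import math

def invariant_mass(particles):
    """Invariant mass of a set of decay products.

    Each particle is a four-vector (E, px, py, pz) in natural units
    (c = 1): sum the four-vectors, then take the Minkowski norm
    m^2 = E^2 - |p|^2 of the total.
    """
    E = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    m2 = E**2 - (px**2 + py**2 + pz**2)
    return math.sqrt(max(m2, 0.0))

# Toy example: two back-to-back photons, 62.5 GeV each.
# Their combined invariant mass is 125 GeV, so events like this
# would fill the bin where the Higgs peak sits.
photons = [(62.5, 62.5, 0.0, 0.0), (62.5, -62.5, 0.0, 0.0)]
print(invariant_mass(photons))  # 125.0
```

In a real analysis you would histogram this quantity over millions of events and look for a bump above the smooth background.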

exmadscientist 10/25/2024|||
> just that it's reassuring that the signal is so strong that you can see it by eye

It's really something when this happens. I worked on a big neutrino experiment searching for theta_13, where our goals were to (a) determine if theta_13 was dead zero or not (being truly zero would have a Seriously Major Effect in theories) and then (b) to measure its value if not.

Our experiment was big, expensive, and finely tuned to search for very, very small values of theta_13. We turned the thing on and... right there, there was a dip. Just... there. On the plot. All the data blinding schemes needed to guarantee our best resolution kind of went out the window when anyone looking at the most basic status plot could see the dip immediately!

On the one hand, it was really great to know that everything worked, we'd recorded a major milestone in the field (along with our competition, all of whom were reading out at basically the same time), and the theorists would continue to have nothing to do with their lives because theta_13 was, in fact, nonzero. On the other hand... I wasted how many years of my life dialing this damned detector in for what now? (It wasn't wasted effort, not at all... but you get the feeling.)

nick3443 10/25/2024||||
Like this one? https://cds.cern.ch/record/1546765/files/figs_gamma_gamma_ma...
Filligree 10/25/2024||||
Squared four-vectors?

I'm only an amateur, but wouldn't that give different results depending on the choice of units? I.e., I usually use c = 1.

dguest 10/25/2024|||
The math is

m^2 c^4 = E^2 - p^2 c^2

where m is the mass, E is the total energy of the decay products, and p is the magnitude of the vector sum of their momenta.

Those units should work out (they certainly do if you set c = 1).

Filligree 10/25/2024||
Ah, I see. I was assuming you meant the 4-momentum. Though I'm not sure this doesn't come out to the same thing.
dguest 10/25/2024||
What I showed is the squared 4-momentum when you use the Minkowski metric [1], assuming "squared" means the self-dot product. The formulation above is just another way to write the Minkowski dot product.

[1]: https://en.wikipedia.org/wiki/Minkowski_space#Minkowski_metr...
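As a quick sanity check of the equivalence described above, in natural units (c = 1); the four-vector here is invented for illustration:

```python
# Minkowski self-dot product with signature (+, -, -, -):
# p . p = E^2 - px^2 - py^2 - pz^2, which is the squared
# invariant mass in units where c = 1.
def minkowski_dot(a, b):
    return a[0] * b[0] - a[1] * b[1] - a[2] * b[2] - a[3] * b[3]

p = (5.0, 3.0, 0.0, 0.0)  # E = 5, |p| = 3  ->  m^2 = 25 - 9 = 16
m_squared = minkowski_dot(p, p)
print(m_squared)  # 16.0
```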

sixo 10/25/2024|||
You use the same units on both sides of the equation, so it's fine; it's like counting "meters squared".
WalterBright 10/25/2024|||
What about the loss of mass released as energy inherent to the decay process?
cwillu 10/25/2024||
“Energy” is only released as the energy and momentum of the resulting particles.
throwawaymaths 10/25/2024|||
Yeah. The Higgs evidence is pretty convincing visually. Not so sure about LIGO. There is an extraordinary claim of noise reduction that requires extraordinary evidence, and it's all obfuscated behind adaptive machine-learning-based filtering, and the statistical analysis on that is unparseable to a non-expert (which is worrisome). The pulsar timing network, though, is easily believable.

Luckily, there are pretty simple statistics that one can throw at that once the third detector comes online. Hopefully that happens before we spend too much money on LISA.

It's basically this, from the article, but from astro:

> Particle physics does have situations where the hypotheses are not so data driven and they rely much more heavily on the theoretical edifice of quantum field theory and our simulation of the complicated detectors. In these cases, the statistical models are implicitly defined by simulators, which is actually a very hot topic that blends classical statistics with modern deep learning. We often say that the simulators don't have a tractable likelihood function. This applies to frequentist hypothesis testing, confidence intervals, and Bayesian inference. Confronting these challenging situations is what motivated simulation-based inference, which is applicable to a host of scientific disciplines.

polyphaser 10/27/2024|||
There's no fancy machine learning needed to detect some "bright" LIGO signals (some of the first black hole-black hole mergers). Given a set of template signals, a matched filter tries to find the one that best matches the noisy signal your instrument recorded. In order for a MF to work, what you really need is a good understanding of the noise in your instrument's observations -- and there are very few people in the world better at that than LIGO folks. LIGO spent almost 20 years in construction and R&D, and almost 30 in planning. When you hear stories about how they can detect trucks miles away, and detect the waves crashing on the coast, it's possible because of scores of PhD students who spent years characterizing each and every component that affects LIGO's noise levels. All of this to say that it's possible to download their data online and do a quick MF analysis (I did that), and with a little bit of work you get a blindingly bright statistical significance of 20-sigma or so. The actual result quoted in the papers was a bit higher. That's a testament to how well the instrument was built and its behaviour understood.
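The matched-filter idea described above can be sketched with synthetic data. The "chirp" template, noise level, and normalization below are arbitrary stand-ins, not anything from the real LIGO pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a template bank entry: a rising-frequency
# "chirp". Not a real waveform model.
t = np.linspace(0.0, 1.0, 4096)
template = np.sin(2 * np.pi * (20 + 60 * t) * t)
template /= np.linalg.norm(template)  # unit-normalize the template

# Bury the template in white Gaussian noise.
noise_sigma = 0.2
data = template + rng.normal(0.0, noise_sigma, t.size)

# Matched-filter statistic for white noise: the inner product of the
# data with the (unit-norm) template. Under pure noise this statistic
# is Gaussian with standard deviation noise_sigma.
stat = float(np.dot(data, template))
significance = stat / noise_sigma
print(f"detection statistic: {significance:.1f} sigma")
```

With the template actually present in the data, the statistic lands well above the noise floor; rerunning on pure noise keeps it near zero.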
maxnoe 10/25/2024|||
How do you explain the LIGO detection of a neutron star-neutron star merger that was simultaneously observed as a GRB by many, many other telescopes?

https://en.m.wikipedia.org/wiki/GW170817

throwawaymaths 10/26/2024||
That's an n of one. (I could be wrong, but it's the only multi-messenger GW event we've seen; with how many events are being generated, you'd have expected more.) It could be a coincidence. The angular resolution of LIGO is not exactly amazing, and we don't have a real estimate of the distance to the source of the GW. In fact, IIUC, the event could be in exactly the opposite direction too.

1 for 29 by 2019:

https://scholarcommons.sc.edu/phys_facpub/164/

thaumasiotes 10/25/2024|||
> In these cynical times, it may be that everything is relative and "post-modern subjective p-hacking", but sufficient data usually ends these discussions.

I don't think that's right. I think having an application is what ends the discussions.

If you have a group of people who think CD players work by using lasers, and a rival group who think they do something entirely different, and only the first group can actually make working CD players, people will accept that lasers do what group #1 says they do.

MajimasEyepatch 10/25/2024|||
On the other hand, nearly everyone believes in black holes, and there's no practical use for that information. The difference is that "we pointed a telescope at the sky and saw something" is easier for a layman to understand and requires somewhat less trust than "we did a bunch of complex statistical work on data from a machine you couldn't possibly hope to understand."
6gvONxR4sf7o 10/25/2024|||
I think this is where it's worth differentiating between different types of "believes in" (and why I think modal logics are cool). I can convince myself that a thing seems safe to believe, or I can tangibly believe it, or I can believe it in a way that allows me to confidently manipulate it, or I could even understand it (which you could call a particular flavor of belief). Practical use seems to fit on that spectrum.

I certainly don't believe in black holes in the same manner that I believe in the breakfast I'm eating right now.

hn72774 10/25/2024||||
> there's no practical use for that information

The information paradox is closer to us than we think!

Joking aside, another perspective on practical use is all of the technology and research advanced that have spun out of black hole research. Multi-messenger astronomy for example. We can point a telescope at the sky where two black holes merged.

throwawaymaths 10/25/2024||
Has that been done?! IIRC there is only one multi-messenger observation among LIGO's results (of which there have been many), which casts doubt on LIGO
hn72774 10/25/2024||
It is still nascent according to this https://rubinobservatory.org/news/multi-messenger-astro
throwawaymaths 10/25/2024|||
There's a lot of (warranted, imo) skepticism there too. I'm sorry I can't find the citation, but there was a Japanese paper out this year that claimed the ML post-processing of the EHT data produces a qualitatively similar image given random data.
dspillett 10/25/2024||||
> people will accept that lasers do what group #1 says they do

Most people. Some fringe groups will believe it is all a front, and they are only pretending that so-called “lasers” are what make the CD player work when in fact it is alien tech from Area 51 or eldritch magics neither of which the public would be happy about. What else would CDDA stand for, if not Compliant Demon Derived Audio? And “Red Book”. Red. Book. Red is the colour of the fires of hell and book must be referring to the Necronomicon! Wake up sheeple!

biofox 10/25/2024||||
Counterpoint: Vaccines work, but far too many people think that COVID vaccines contain Jewish-made GPS tracking devices that act as micro-antennae to allow Bill Gates to sterilise them using 5G.
gpderetta 10/25/2024|||
That's a common misunderstanding. The mind controlling COVID vaccines are being spread by chemtrails. The 5G signal is only used by pilots to decide when to start spraying.
thaumasiotes 10/25/2024||||
[flagged]
seanhunter 10/25/2024|||
This sort of willful devil's advocacy doesn't further discussion and is extremely unhelpful. To take your first example, you know and I know that you can tell whether a CD player works. You put in a CD, you press "play", it plays.

If you have some theory as to why that test is inadequate, it's on you to lay your cards on the table and state it so there's something substantive to discuss. Until then you're just trolling.

thaumasiotes 10/25/2024||
Why do you think I presented those questions side by side?
oneshtein 10/25/2024|||
It looks more convincing this way, especially when your audience knows nothing about antibodies. HN is just not your target audience. IMHO, it would work well in a pub after a drink or two.
seanhunter 10/25/2024||||
I think the reason you presented those questions side by side is that you are trolling. I won't be responding further.
romwell 10/25/2024|||
>Why do you think I presented those questions side by side?

Perhaps because cowardice prevented you from stating your point clearly, so instead you resorted to vague implications in the form of a question.

A fine tactic which gives the coward a plausible deniability defense when their bullshit is called out: I didn't say that, you said that.

Coupled with other similar techniques, e.g.:

* You know what I mean

* Questions that have actual answers, but are asked rhetorically to make an implication: How many cats are eaten alive each year by immigrants?

* Both-sidesing/false balance (giving equal weight to contradicting points of view, regardless of what the reality has to say)

* Ambiguous implication that can be taken in many ways, depending on how the asker defines the terms: Do you really think the immigrants are helping the economy?

* Non-sequiturs and non-answers

* False dichotomies

* Etc

...this gives the coward a superpower to start a heated discussion, where the coward never actually says anything directly.

Instead, the opponents exert an enormous effort to debunk each leaf on the tree of possible interpretations of the coward's incomplete thought, effectively doing the thinking for the coward in a futile attempt to nail down what point the coward intended to make.

(There is no such clear point, other than I am right and you are wrong).

By the time it becomes clear that no possible interpretation is supported by reality, the coward silently leaves the discussion, and says the same exact things elsewhere, feeling empowered by having made others frustrated.

That frustration counts as a victory in an argument in the coward's view, with the assumption that it comes from an inability to counter the coward's (unstated, vague, implied) points.

In reality, it's infuriating to be made to guess what the other person wants to say, doubly so when they are the one forcing you to play that game, which, miraculously, you always seem to lose, because the outcome is invariably "that's not quite what I'm saying" (and what the coward is saying is never directly stated by the coward).

At the same time, this protects the coward from being ridiculed for any of the views they're promulgating in the discussion.

This is why the coward never states the views directly, and instead e.g. "presents questions side by side".

If the coward were to say outright "Unlike CD players, there's actually no clear way to say whether vaccines work or not", the coward would be laughed out of the room, and they know it, which is why they resort to implications and rhetorical questions about the coward's motivation.

I hope this fully answers the question asked.

12_throw_away 10/27/2024||
Well said, this is extremely good.
romwell 10/28/2024||
Thank you for this comment!

Sometimes I wonder whether it's worth writing anything of this sort, or whether I'm just screaming into the void.

Your comment makes it absolutely worth it.

roywiggins 10/25/2024||||
It's not hard to find people who are germ theory truthers. They have alternative "explanations" for why nobody gets polio or smallpox anymore.
otabdeveloper4 10/26/2024||
No need for scare quotes. Post hoc fallacies aren't proof, and anyway there's doubt whether post hoc is even true for polio and smallpox.

Way too many confounding factors here, and nobody will risk an experiment with a control group in this situation.

qup 10/25/2024||||
They're talking about additional functions (side-effects) of vaccines, really. It's secondary to whether they work or not.

We can't tell whether a CD player has a listening device in it, for instance.

bongodongobob 10/25/2024|||
Yes.
nsxwolf 10/25/2024|||
COVID vaccines may "work", but they're pretty lame compared to something like the varicella vaccine, where the disease basically disappears off the face of the earth.
IX-103 10/25/2024||
We'll see if varicella stays gone. It's tricky in that it can embed itself in the host genome and come back later. That means that until the last person exposed to the virus dies, we can't really consider it gone. Good luck convincing people to continue vaccinating for a disease no one has seen in a couple decades. Of course, if varicella was able to infect germ-line cells it would be even worse...

COVID on the other hand doesn't have such a mechanism, and just relies on being really contagious. So if everyone would stay up to date in their boosters and continue masking in public places, we may be able to get rid of it in a couple of years.

vlovich123 10/25/2024||
> So if everyone would stay up to date in their boosters and continue masking in public places, we may be able to get rid of it in a couple of years.

By that logic we'd have gotten rid of the flu. Vaccines for rapidly mutating viruses like flu and COVID can't keep up and remain an epidemic. The only disease we've actually been able to eliminate worldwide due to vaccines is smallpox. We'd have gotten rid of measles too if crazies hadn't decided the MMR vaccine causes autism due to criminally fraudulent research.

zehaeva 10/25/2024||
Didn't one strain of the flu become extinct during the pandemic because we masked up and stayed away from each other for a year? One would think that if we just kept that up we'd get rid of all the others.

https://www.npr.org/2024/10/18/nx-s1-5155997/influenza-strai...

vlovich123 10/25/2024||
It's really hard to draw causal links here. It could be any number of factors or required all of them together. In fact, if that had worked, why didn't COVID or other strains die too? And China had much more severe & prolonged lockdowns but that didn't eliminate anything extra for them.

Don't underestimate the impact of viral interference -- flu & COVID are both respiratory infections, and COVID was much more infectious. Some flu strains probably just couldn't remain competitive with the combined set of other flu and COVID strains.

While masking and social distancing have a beneficial impact on limiting the spread of respiratory diseases, there are practical reasons why it doesn't work to eliminate them altogether, and this ignores the possibility and likelihood of other reservoirs reintroducing the disease. For example, if North America remains masked & socially isolated but the virus persists in Europe, then as soon as North America opens up you'll get the virus in North America again. And imagining a simultaneous worldwide lockdown is a laugh -- even during COVID, governments were not globally coordinated, and even within national governments there was mixed local coordination.

Aside from all that, let's say it was purely a result of masking and social distancing. The consequences of that were quite severe & catastrophic, not to mention that no one actually stayed away (vs. limiting their normal contacts) & there were plenty of practical reasons it wasn't possible (e.g. getting groceries). Life involves death & risk, and it's pretty clear that even before the vaccines became available many people were not OK with the tradeoff COVID entailed (e.g. Florida).

whatshisface 10/25/2024|||
By that line of reasoning, the moon does not exist.
dguest 10/25/2024||
I think the gigantic bumps that Kyle pointed to "discovered" the higgs.

The statistical interpretation showing a 5 sigma signal was certainly essential, but I suspect it would have taken the collaborations much longer to publish if there wasn't a massive bump staring them in the face.

mellosouls 10/25/2024||
The article here is responding to an original blog post [1] that is not really saying the Higgs was not discovered (despite its trolling title), but raising questions about the meaning of "discovery" in systems that are so complicated as those in modern particle physics.

I think the author is using the original motivation of musing on null hypotheses to derive the title "The Higgs Discovery Did Not Take Place", and he has successfully triggered the controversy the subtitle ironically denies and the inevitable surface reading condemnations that we see in some of the comments here.

[1] https://www.argmin.net/p/the-higgs-discovery-did-not-take

noslenwerdna 10/25/2024||
He is implying that the scientists involved haven't thought of those questions, when in reality this field is one of the strictest in terms of statistical procedures like pre-registration, blinding, multiple hypothesis testing, etc.

Also he makes many factual claims that are just incorrect.

Just seems like an extremely arrogant guy who hasn't done his homework

ttpphd 10/25/2024|||
A computer scientist/electrical engineer who is arrogant? I dunno, I need to see the statistical test to believe that's possible.
eightysixfour 10/25/2024||
Computers are a "complete" system where everything they do is inspectable and, eventually, explainable, and I have observed that people who work with computers (myself included) overestimate their ability to interrogate and explain complex, emergent systems - economics, physics, etc. - which are not literally built on formal logic.
dekhn 10/25/2024||
A single computer might be complete (even then, not everything is inspectable unless you have some very expensive equipment), but distributed systems are not.

There was an entire class of engineers at google- SREs- many of whom were previously physicists (or experts in some other quantitative field). A fraction of them (myself included) were "cluster whisperers"- able to take a collection of vague observations and build a testable hypothesis of why things were Fucked At Scale In Prod. Then come up with a way to fix it that didn't mess up the rest of the complete system.

Nothing- not even computers are truly built on formal logic. They are fundamentally physics-driven machines with statistical failure rates, etc. There's nothing quite like coming across a very expensive computer which occasionally calculates the equivalent of 1*1 = inf, simply because some physical gates have slightly more electrical charge on them due to RF from a power supply that's 2 feet away.

eightysixfour 10/25/2024||
I think you're mixing up two different things: the challenges of building these systems at scale, and their fundamental properties. Take your example of the expensive computer returning 1*1 = inf because of a nearby power supply - that actually proves my point about computers being knowable systems. You were able to track down that specific environmental interference precisely because computers are built on logic with explicit rules and dependencies. When these types of errors are caught, we know because they do not conform to the rules of the system, which are explicitly defined, by us. We can measure and understand their failures exactly because we designed them.

Even massive distributed systems, while complex, still follow explicit rules for how they change state. Every bit of information exists in a measurable form somewhere. Sure, at Google scale we might not have tools to capture everything at once, and no single person could follow every step from electrical signal to final output. But it's theoretically possible - which is fundamentally different from natural systems.

You could argue the universe itself is deterministic (and philosophically, I agree), but in practice, the emergent systems we deal with - like biology or economics - follow rules we can't fully describe, using information we can't fully measure, where complete state capture isn't just impractical, it's impossible.

Vegenoid 10/25/2024||
To simply illustrate your point: if you see a computer calculate 1*1=∞ occasionally, you know the computer is wrong and something is causing it to break.

If you see a particle accelerator occasionally make an observation that breaks the standard model, depending on what it is breaking you can be very confident that the observation is wrong, but you cannot know that with absolute certainty.

eightysixfour 10/25/2024||
Great explanation, thank you.
BeetleB 10/25/2024|||
> when in reality this field is one of the strictest in terms of statistical procedures like pre registeration, blinding, multiple hypothesis testing etc

I'm not in HEP, but my graduate work had overlap with condensed matter physics. I worked with physics professors/students in a top 10 physics school (which had Nobel laureates, although I didn't work with them).

Things may have changed since then, but the majority of them had no idea what pre-registration meant, and none had taken a course on statistics. In most US universities, statistics is not required for a physics degree (although it is for an engineering one). When I probed them, the response was "Why should we take a whole course on it? We study what we need in quantum mechanics courses."

No, my friend. You studied probability. Not statistics.

Whatever you can say about reproducibility in the social sciences, a typical professor in those fields knew and understood an order of magnitude more statistics than physicists.

noslenwerdna 10/25/2024||
As an ex-HEP, I can confirm that yes, we had blinding and did correct for multiple hypothesis testing explicitly. As Kyle Cranmer points out, we called it the "look elsewhere effect." Blinding is enforced by the physics group. You are not allowed to look at a signal region until you have basically finished your analysis.

For pre-registration, this might be debatable, but what I meant was that we have teams of people looking for specific signals (SUSY, etc). Each of those teams would have generated monte carlo simulations of their signals and compared those with backgrounds. Generally speaking, analysis teams were looking for something specific in the data.

However, there are sometimes more general "bump hunts", which you could argue didn't have preregistration. But on the other hand, they are generally looking for bumps with a specific signature (say, two leptons).

So yes, people in HEP generally are knowledgeable about stats... and yes, this field is extremely strict compared to psychology for example.
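The look-elsewhere effect mentioned above is easy to demonstrate with a toy Monte Carlo; the bin count, background level, and threshold below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy look-elsewhere effect: scan many independent mass bins for an
# excess under the null hypothesis (no signal, just Poisson noise).
n_bins, n_experiments, background = 100, 10_000, 50
threshold = background + 3 * np.sqrt(background)  # "3 sigma" excess

local_hits = 0   # one pre-chosen bin fluctuates past the threshold
global_hits = 0  # ANY of the 100 bins fluctuates past the threshold

for _ in range(n_experiments):
    counts = rng.poisson(background, n_bins)
    if counts[0] > threshold:
        local_hits += 1
    if counts.max() > threshold:
        global_hits += 1

print(f"local false-positive rate:  {local_hits / n_experiments:.4f}")
print(f"global false-positive rate: {global_hits / n_experiments:.4f}")
```

The pre-chosen bin exceeds the threshold only rarely, while "anywhere in the spectrum" exceeds it far more often, which is why a local significance must be corrected before claiming a discovery.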

exmadscientist 10/25/2024|||
> so complicated as those in modern particle physics

But... modern particle physics is one of the simplest things around. (Ex-physicist here, see username.) It only looks complicated because it is so simple that we can actually write down every single detail of the entire thing and analyze it! How many other systems can you say that about?

spookie 10/25/2024|||
Other systems might not be part of a field as mature as yours, I would argue.
exmadscientist 10/25/2024||
It has nothing to do with "maturity" and everything to do with just hierarchy in general. There is something to the old XKCD joke: https://xkcd.com/435/ because the disciplines really are divided like that. You have to know physics to do chemistry well. You have to know chemistry to do biology well. You have to know biology to ... etc.

Whereas to do physics well you need only mathematics. Well, at least, to do the theories well. To actually execute the experiments is, ah, more challenging.

So I would argue the Standard Model is pretty much the only thing in all of human knowledge that depends on no other physical theories. It's the bottom. Shame it's pretty useless (intractable) as soon as you have three or more particles to calculate with, though....

jaculabilis 10/25/2024||
> I think the author is using the original motivation of musing on null hypotheses to derive the title "The Higgs Discovery Did Not Take Place",

It's probably a reference to "The Gulf War Did Not Take Place" by Jean Baudrillard, which took a similar critical view of the Gulf War as TFA takes of the Higgs discovery.

mellosouls 10/25/2024||
Possibly! I remember that but completely missed it as an inspiration here.
stephantul 10/25/2024||
I think it is good this post was written, I learned a lot, but it makes me sad that it was prompted by such an obvious trolling attempt.
scaramanga 10/25/2024|
not to nitpick, but I think "reactionary" or "aspiring crank" are probably more descriptive :)

"This isn't music, back in my day we had Credence"

ayhanfuat 10/25/2024||
Here is Ben Recht’s response: https://www.argmin.net/p/toward-a-transformative-hermeneutic...
12_throw_away 10/25/2024||
Oof.

A Berkeley academic invoking "it's actually your fault for believing the words that I wrote" and following it up with a "I'm not mad, I actually find this amusing" ... it's just disappointing.

dguest 10/25/2024|||
Which is actually very reasonable, it ends with

> In any event, I use irreverence (i.e., shitposting) to engage with tricky philosophical questions. I know that people unfamiliar with my schtick might read me as just being an asshole. That’s fair.

People are piling the hate on Ben Recht here. I appreciate that he's calling his post what it is rather than doubling down.

It's also a great chance to lecture people on 4-momentum, thanks everyone!

munchler 10/25/2024||
That is some fancy backpedaling.
dekhn 10/25/2024||
Every time I see a criticism like Recht's (or Hossenfelder's), I ask: "could this theoretical scientist go into the lab and conduct a real experiment?" I mean, find some challenging experiment that requires setting up a complex interferometer (or spectroscope, or molecular biology cloning), collect data, analyze it, and replicate an existing well-known theory?

Even though I'm a theoretical physicist I've gone into the lab and spent the time to learn how to conduct experiments and what I've learned is that a lot of theoretical wrangling is not relevant to actually getting a useful result that you can be confident in.

Looking at Recht's publication history, it looks like few of his papers ever do real-world experiments; mostly, they use simulations to "verify" the results. It may very well be that his gaps in experimental physics lead him to his conclusion.

rob_c 10/26/2024||
Just to pile on 'Ben', but sorry to break it to machine learning computing enthusiasts.

We (particle physicists) have been performing similar, and in a lot of ways much more complex, analyses using ML tools in production for decades.

Please stop shrouding your new 'golden goose' of AI/ML modelling in mystery. It's 'just' massively multi-dimensional regression analysis, with all of the problems, advantages and improvements that brings...

Why is there some beef that nature is complex? If you had the same vitriol toward certain other fields, we'd be worrying about big pharma's reproducibility crisis, and that's just the tip of the iceberg of problems in modern science, to say nothing of the fact that most people are illiterate when it comes to algebra...

rsynnott 10/25/2024||
Honestly, while it's an interesting article, I'm not sure why one would even give the nonsense it's addressing the dignity of a reply.

Hadn't realised Higgs boson denialism was really a thing.

thowfeir234234 10/25/2024|
The parent-poster is a very well known professor in ML/Optimization at Berkeley EECS.
TheOtherHobbes 10/25/2024|||
One of the smaller trade journals of EE was Wireless World. (It closed in 2008.)

In its pages you could find EE professors and chartered engineers arguing that Einstein was so, so wrong, decades after relativity was accepted.

I'd trust an EE to build me a radio, but I wouldn't let an EE anywhere near fundamental physics.

MajimasEyepatch 10/25/2024|||
I can't find the source at the moment, but I've seen it reported in the past that engineers are actually unusually likely to be fundamentalist Christians who believe in creationism. Engineers are also unusually likely to be Islamist terrorists, though there are many reasons for that. [1] There's a certain personality type that is drawn to engineering that believes the whole world can be explained by their simple pet model and that they are smarter than everyone else.

[1] https://www.nytimes.com/2010/09/12/magazine/12FOB-IdeaLab-t....

rsynnott 10/25/2024|||
> I can't find the source at the moment, but I've seen it reported in the past that engineers are actually unusually likely to be fundamentalist Christians who believe in creationism.

If it's the same thing I'm thinking of, it was kinda flawed, IMO, in that it was a comparison of such beliefs amongst various types of scientists, with, for some reason, engineers thrown in, too. And yeah, it's kind of unsurprising that engineers are more into unscientific nonsense than various types of scientists, because engineers aren't scientists. It would be more surprising if they were significantly worse than the _general population_, but I don't think that it showed that.

stracer 10/25/2024||||
> There's a certain personality type that is drawn to engineering that believes the whole world can be explained by their simple pet model and that they are smarter than everyone else.

Lots of failed theorists with that personality type/flaw as well.

yard2010 10/25/2024|||
What the heck did I just read. It feels like BS - Isn't the sample too small?

https://archive.is/FfEK4

olddustytrail 10/25/2024|||
You mean 400 people? No, that isn't too small. Why would you think it was?
dekhn 10/25/2024|||
it's sociology- a field which frequently does not provide evidence for its claims.
fecal_henge 10/25/2024|||
All this suggests is that chartership, professorship and shitty journal authorship are poor metrics for credibility.

Keeping EEs and any E for that matter away from fundamental physics is a shortcut to producing a whole lot of smoke and melted plastic.

AnimalMuppet 10/25/2024|||
Uh huh. And that makes said professor an expert in 1) epistemology, and/or 2) experimental particle physics? Why, no. No, it doesn't.

I mean, I'm as prone to the "I'm a smart guy, so I understand everything" delusion as the next person, but I usually only show it in the comments here. (And in private conversations, of course...)

hydrolox 10/25/2024||
To be fair, maybe there is a decent overlap between the people who saw the original post and this one, which might at least dispel the 'myths' raised in the original. Also, since this rebuttal was written by a physicist (someone much more involved in the field), it's also a defence of their own field.
12_throw_away 10/25/2024||
The article this is responding to is some of the worst anti-science, anti-intellectual FUD I've seen in a while, with laughably false conceits like (paraphrased) "physics is too complicated, no one understands it" and thus "fundamental research doesn't matter".

Worse, the author of the original FUD is a professor of EE at Berkeley [1] with a focus in ML. It almost goes without saying, but EE and ML would not exist without the benefit of a lot of fundamental physics research over the years on things that, according to him, "no one understands".

[1] https://people.eecs.berkeley.edu/~brecht/

KolenCh 10/28/2024||
Having been in his lectures in the past, he is the kind of person who teaches you to question what you are told/taught. You should know he basically does the same thing to the field of ML (as this does to HEP).

You know, when I first read this thread and the 3 posts involved, I found the original post Ben wrote arrogant and hard to swallow. But once I searched who he is, and recognized him, knowing his character I immediately "got his point". While not an expert in either field, I have graduate-level education in both HEP and ML. My point is that my conclusion is unlikely to be due to a lack of understanding of these fields, but more because of my understanding of who he is…

Admittedly, he should not have assumed people would read it the way he intended it to be perceived. It took a lot of contextualization, including the expectation set by the title, which he explained in later posts, to really take his posts seriously.

xeonmc 10/25/2024||
> ...is too complicated, no one understands it.

Quoth the AI researcher.

nyc111 10/25/2024||
This debate reminded me of Matt Strassler's recent post that most of the data observed in the accelerators is thrown away [1]:

    So what’s to be done? There’s only one option: 
    throw most of that data away in the smartest way
    possible, and ensure that the data retained is 
    processed and stored efficiently.
I thought that was strange. It's like there is too much data and our technology is not up to it, so let's throw away everything we cannot process. Throwing away data "in the smartest way possible" did not convince me.

[1] https://profmattstrassler.com/2024/10/21/innovations-in-data...

elashri 10/25/2024||
I would like a chance to jump in on this point, because this problem is a function of two things.

The first is the throughput at which your trigger (data acquisition system) can save the data and transfer it to permanent storage. This usually involves multiple steps, and most of them happen in real time.

The other problem is the storage itself and how the data is kept (duplicated and distributed to analysts), which at the scale we operate at is insanely costly. If we were to save, say, 20% of the generated collision data, we would fill the entire world's cloud storage in a couple of runs. Also, the vast majority of the data is background and useless, so you would do a lot of work to clean it and apply your selections, which we do anyway, but now you are dealing with another problem: the analysts would need to handle much more data, and trying new things (ideas and searches) becomes more costly, which discourages them.

So you work in a very constrained way. You improve your capabilities in computing and storage, you present a good physics case for what data to keep (i.e. deploy a trigger line that picks out a physics signal the experiment is sensitive to), and then you let natural selection take place (metaphorically, of course).

Most experiments cannot keep everything because of these data acquisition constraints.
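The cascade of trigger selections described above can be sketched as a toy pipeline. This is purely illustrative: the event fields, thresholds, and background model below are all invented, not anything from a real experiment's trigger menu.

```python
import random

random.seed(0)

def hardware_trigger(event):
    # Fast first-stage cut: keep only events with a large energy deposit.
    # The 25 GeV threshold is an invented, illustrative value.
    return event["max_et"] > 25.0

def software_trigger(event):
    # Slower, more refined second-stage cut on the surviving events.
    return event["n_candidates"] >= 2 and event["max_et"] > 40.0

def make_event():
    # Toy background: mostly soft deposits, rarely multiple candidates.
    return {
        "max_et": random.expovariate(1 / 10.0),
        "n_candidates": random.choice([0, 0, 0, 1, 1, 2]),
    }

events = [make_event() for _ in range(100_000)]
after_l1 = [e for e in events if hardware_trigger(e)]
saved = [e for e in after_l1 if software_trigger(e)]

print(f"generated: {len(events)}, after stage 1: {len(after_l1)}, saved: {len(saved)}")
print(f"overall kept fraction: {len(saved) / len(events):.4%}")
```

Each stage only ever discards events, so the kept fraction shrinks multiplicatively through the chain, which is how real triggers achieve their enormous overall rejection.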

dguest 10/25/2024||
The technology really is not up to it, though.

To give some numbers:

- The LHC has 40M "events" (bunch of collisions) a second.

- The experiments can afford to save around 2000 of them per second.

This is a factor of 20k between what they collide and what they can afford to analyze. There is just no conceivable way to expand the LHC computing and storage by a factor of 20k.

A valid question would be why they don't just collide fewer protons. The problem is that when you study processes on a length scale smaller than a proton, you really can't control when they happen. You just have to smash a lot and catch the interesting collisions.

So yeah, it's a lot of "throwing away data" in the smartest way possible.
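The numbers above can be sanity-checked with back-of-envelope arithmetic. The per-event size of ~1 MB is my own illustrative assumption, not a figure from the comment:

```python
# Rates quoted above (approximate).
collision_rate_hz = 40_000_000  # ~40M bunch crossings per second
saved_rate_hz = 2_000           # ~2k events per second written out

reduction = collision_rate_hz / saved_rate_hz
print(f"rejection factor: {reduction:,.0f}")  # the ~20k quoted above

# Assuming a rough ~1 MB per saved event (illustrative), one day of
# continuous running still produces a large volume of kept data:
event_size_mb = 1.0
seconds_per_day = 86_400
daily_tb = saved_rate_hz * event_size_mb * seconds_per_day / 1e6
print(f"saved per day: ~{daily_tb:.0f} TB")

# Without the trigger, the raw stream would be 20,000x larger:
raw_daily_eb = daily_tb * reduction / 1e6
print(f"hypothetical raw stream per day: ~{raw_daily_eb:.1f} EB")
```

Even the post-trigger stream is hundreds of terabytes a day; the untriggered stream would be exabytes a day, which makes the "no conceivable way to expand by 20k" point concrete.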

-------------------

All that said, it might be a stretch to say the data is "thrown away", since that implies that it was ever acquired. The data that doesn't get saved generally doesn't make it off a memory buffer on a sensor deep within the detector. It's never piped through an actual CPU or assembled into any meaningful unit with the millions of other readouts.

If keeping the data was one more trivial step, the experiments would keep it. As it is they need to be smart about where the attention goes. And they are! The data is "thrown away" in the sense that an astronomy experiment throws away data by turning off during the day.

aeonik 10/26/2024||
My 200 MHz oscilloscope "throws away" a lot of data compared to the 4 GHz scope I used at work, but for the signals I'm looking for, it doesn't matter at all.
RecycledEle 10/25/2024|
The team behind the LHC laid out the criteria for discovering the Higgs boson before beginning their experiments.

They never came close to what they said they needed.

But they now claim they succeeded in finding the Higgs boson.

And the paper setting out the criteria has been memory-holed.

I call BS on the Higgs boson team.

More comments...