Members banned from this thread: domer76, Charoite, Walt, Doc Dutch and Geeko Sportivo



Thread: How Did This Guy Get To Bankrupt The World? See The Timeline Of Errors For Yourself.

  1. #16

    Quote Originally Posted by StoneByStone View Post
    #triggered
    Even for a Millennial you're exceedingly juvenile.

  2. #17

    Quote Originally Posted by Grajonca View Post
    Even for a Millennial you're exceedingly juvenile.
    For mocking your snowflake tendencies?

  3. #18

    He has a lot to answer for; he reminds me of that arrogant sod Michael Mann, to be honest.
    Forget Ferguson's personal failures – it's his science that needs scrutiny

    Newspapers aren’t the place to debate expert advice on a crisis. Advisors advise, ministers decide. We should keep politics out of science.

    These three cries – and numerous variations upon them – have become common refrains as the UK’s increasingly fractious debate on the lockdown, the science behind it, and the best way to lift its various restrictions rolls on.

    At first, they sound completely reasonable and unarguable: people are stepping up to the plate to help the government make life-or-death decisions in a time of crisis. That’s an admirable thing to do. What’s more, they’re doing it with years of expertise in their field behind them. Of course we should leave them to their work, and let them help guide our course.

    The reality, of course, is messier.

    Perhaps the most contentious of the government’s high-profile scientific advisors is professor Neil Ferguson of Imperial College, who heads up that university’s epidemiological modelling team, and whose model was credited as influential in sparking the lockdown.

    Ferguson was the subject of surely unwelcome press attention this week when his lockdown liaisons with a married lover were splashed across the newspaper front pages. It was clear he would have to step down from his role on the government’s scientific advisory committee at this point – but only for the hypocrisy of not following the rules that he was influential in shaping, nothing more.

    It is nonsense to imply, as some have, that Ferguson’s participation in a little (apparently ethical) non-monogamy affects his work as a scientist in any way. It does not – hypocrisy was his sin here, nothing more. This would provide no reason for SAGE or for ministers not to continue consulting his modelling, or even informally consulting Ferguson himself if they so choose.

    But if we are to say that Ferguson and the Imperial model’s work should be judged on its own merits, that does mean that we – all of us – should be allowed to do just that. We cannot sit back and allow it to stand because he’s the expert. Other people with relevant expertise should be able to see the team’s workings, to ask awkward questions, and to loudly disagree.

    For a long time, this was all but impossible. In a fairly unusual break from best practice, Ferguson did not release the code on which his model runs (and has run in various forms for several years), saying it was largely undocumented and would make little sense to outsiders.

    This is poor practice for multiple reasons, not least of which is that replicating another’s work is a core principle of science and essential for checking workings. It’s also well known among programmers and scientists alike that most code eventually contains errors and idiosyncrasies, for which we must remain constantly vigilant.

    Far, far simpler models than Ferguson’s have ended up containing huge errors that have drastically altered their conclusions.

    A pseudonymous post on Lockdown Sceptics has done exactly that sort of preliminary analysis, on a version of Ferguson’s code that has been cleaned up by Microsoft and others.

    It raises a series of concerns that the published version of the model introduces randomness where it shouldn’t. Such models are intended to include some randomness – the concept is that they are run many, many times and we take an average, given that the path of the spread of a virus is itself subject to chance.

    But factors like the computer type upon which the code is running should not affect the result – and when those developing a model can’t rule out systemic errors (as they don’t seem to know what’s behind them at all), that should worry us. No model should be above questioning.
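
    To make that concrete, here is a toy C++ sketch – invented for illustration, and nothing to do with the Imperial code itself – of how a seeded, stochastic simulation is meant to be run many times and averaged:

        // Toy stochastic "outbreak" model, invented for illustration only.
        // Each run is fully determined by its seed; averaging many seeded
        // runs is how stochastic models are meant to be used.
        #include <iostream>
        #include <random>

        int toy_outbreak(unsigned seed) {
            std::mt19937 rng(seed);                        // pseudo-random, replayable from the seed
            std::poisson_distribution<int> secondary(2);   // infections caused by each case
            int infections = 0;
            for (int cases = 0; cases < 100; ++cases)
                infections += secondary(rng);
            return infections;
        }

        int main() {
            const int runs = 1000;
            double total = 0;
            for (unsigned seed = 1; seed <= runs; ++seed)  // seeds recorded: every run replayable
                total += toy_outbreak(seed);
            std::cout << "mean infections: " << total / runs << '\n';
        }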

    We should, though, pause well short of that article’s conclusion, which suggests that all papers based on the code should be retracted and that ‘all academic epidemiology be defunded’ – a leap that risks putting one and one together and making 11,000.

    Ferguson’s model has not led the UK down a drastically different path from that of many other countries – indeed, it recommended lockdown relatively late compared with the models used elsewhere. It likely contains errors, but it is hardly a huge outlier from the international consensus. Those looking for anything to show lockdown is an error should search for another straw to grasp.

    We should, though, welcome the efforts to test and even to tear down the Imperial model. This is what the scientific process is – a spirited and often fractious public debate, a battleground of ideas. It is rarely as high-minded and public-spirited as those who place it on a pedestal would hope.

    Peer reviewers savage a paper because it contradicts their own research, or because they’ve guessed who the author is and can’t stand them. Institutions battle for fame and for funding. People hold grudges. Personality, like politics, doesn’t stop at the water’s edge – good work comes out of dubious motivations.

    Science also doesn’t stop at the journal or at peer review. The disastrous MMR study on autism by Andrew Wakefield may have been boosted by supporters in the media, but it was published in a peer-reviewed journal. The drug thalidomide passed all appropriate scientific and medical checks. Continued scrutiny might not be nice, but it can save lives.

    We should be grateful to anyone stepping up to try to help tackle coronavirus. But that shouldn’t stop us for a second from holding their feet to the fire either.

    https://app.spectator.co.uk/2020/05/...y/content.html

  4. The Following User Groans At cancel2 2022 For This Awful Post:

    FUCK THE POLICE (05-18-2020)

  5. #19

    Thank you. This is excellent.
    Abortion rights dogma can obscure human reason & harden the human heart so much that the same person who feels
    empathy for animal suffering can lack compassion for unborn children who experience lethal violence and excruciating
    pain in abortion.

    Unborn animals are protected in their nesting places, humans are not. To abort something is to end something
    which has begun. To abort life is to end it.



  6. The Following 2 Users Say Thank You to Stretch For This Post:

    cancel2 2022 (05-09-2020), Grokmaster (05-17-2020)

  7. #20


    Tom, leftists won't click a link if they think it doesn't support their narrative, you lazy sod.

    On March 16th a professor from Imperial College in London called Neil Ferguson used a mathematical computer model he created in 2009 to estimate the infection rate and death toll of the coronavirus.

    Only 2 days later he entered “self isolation”, having supposedly contracted the coronavirus himself.

    He said that he expected 60% of the country to contract the coronavirus, and predicted up to 2.2 million deaths in the USA and 500,000 in the UK.

    On the back of Ferguson's prediction, two of the world's largest economies completely shut down.

    Only 9 days after scaring the world into an unprecedented lockdown, Ferguson drags himself out of “self isolation” to speak to a parliamentary committee and reveals that he has now readjusted his model and “now feels confident that the death toll in the UK will be below 20,000”.


    BTW, he was forced to resign in disgrace when it was revealed that he'd been breaking his own "quarantine" to fuck a married woman.

    https://www.cnn.com/2020/05/05/uk/ne...ntl/index.html

  8. #21

    Quote Originally Posted by Grajonca View Post
    You're right, stupid is far too insipid a word to describe people like you. Cretinous, bovine, pig-headed are nearer the mark.
    Why whinge about their trolling if you don't threadban the trolls, you dozy cunt?

  9. #22

    Quote Originally Posted by Grajonca View Post
    I didn't read the 5G bullshit but Ferguson has much to answer for regardless.
    Haw, haw, haw, haw, haw, haw...........................haw.
    " First they came for the journalists...
    We don't know what happened after that . "

    Maria Ressa.

  10. #23

    Quote Originally Posted by Grajonca View Post
    Here is a woman with 30 years’ experience in IT, and what she has to say about Ferguson’s computer model is truly devastating. In years to come people are going to ask how the fuck did we let this garbage code decide government policy?


    Code Review of Ferguson’s Model


    Imperial finally released a derivative of Ferguson’s code. I figured I’d do a review of it and send you some of the things I noticed. I don’t know your background so apologies if some of this is pitched at the wrong level.

    My background. I wrote software for 30 years. I worked at Google between 2006 and 2014, where I was a senior software engineer working on Maps, Gmail and account security. I spent the last five years at a US/UK firm where I designed the company’s database product, amongst other jobs and projects. I was also an independent consultant for a couple of years. Obviously I’m giving only my own professional opinion and not speaking for my current employer.

    The code. It isn’t the code Ferguson ran to produce his famous Report 9. What’s been released on GitHub is a heavily modified derivative of it, after having been upgraded for over a month by a team from Microsoft and others. This revised codebase is split into multiple files for legibility and written in C++, whereas the original program was “a single 15,000 line file that had been worked on for a decade” (this is considered extremely poor practice). A request for the original code was made 8 days ago but ignored, and it will probably take some kind of legal compulsion to make them release it. Clearly, Imperial are too embarrassed by the state of it ever to release it of their own free will, which is unacceptable given that it was paid for by the taxpayer and belongs to them.

    The model. What it’s doing is best described as “SimCity without the graphics”. It attempts to simulate households, schools, offices, people and their movements, etc. I won’t go further into the underlying assumptions, since that’s well explored elsewhere.

    Non-deterministic outputs. Due to bugs, the code can produce very different results given identical inputs. They routinely act as if this is unimportant.

    This problem makes the code unusable for scientific purposes, given that a key part of the scientific method is the ability to replicate results. Without replication, the findings might not be real at all – as the field of psychology has been finding out to its cost. Even if their original code was released, it’s apparent that the same numbers as in Report 9 might not come out of it.

    Non-deterministic outputs may take some explanation, as it’s not something anyone previously floated as a possibility.

    The documentation says:

    The model is stochastic. Multiple runs with different seeds should be undertaken to see average behaviour.

    “Stochastic” is just a scientific-sounding word for “random”. That’s not a problem if the randomness is intentional pseudo-randomness, i.e. the randomness is derived from a starting “seed” which is iterated to produce the random numbers. Such randomness is often used in Monte Carlo techniques. It’s safe because the seed can be recorded and the same (pseudo-)random numbers produced from it in future. Any kid who’s played Minecraft is familiar with pseudo-randomness because Minecraft gives you the seeds it uses to generate the random worlds, so by sharing seeds you can share worlds.

    Clearly, the documentation wants us to think that, given a starting seed, the model will always produce the same results.
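
    To see what that promise means in practice, here is a five-line C++ sketch of my own – not Imperial’s code – showing that two generators given the same seed must agree forever:

        // Illustration only, not Imperial's code: two generators started from
        // the same seed produce identical streams, which is why a recorded
        // seed should make a model run exactly replicable.
        #include <cassert>
        #include <random>

        int main() {
            std::mt19937 a(42), b(42);    // identical seeds
            for (int i = 0; i < 1000000; ++i)
                assert(a() == b());       // same seed -> same numbers, every time
        }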

    Investigation reveals the truth: the code produces critically different results, even for identical starting seeds and parameters.

    I’ll illustrate with a few bugs. In issue 116 a UK “red team” at Edinburgh University reports that they tried to use a mode that stores data tables in a more efficient format for faster loading, and discovered – to their surprise – that the resulting predictions varied by around 80,000 deaths after 80 days.


    That mode doesn’t change anything about the world being simulated, so this was obviously a bug.

    The Imperial team’s response is that it doesn’t matter: they are “aware of some small non-determinisms”, but “this has historically been considered acceptable because of the general stochastic nature of the model”. Note the phrasing here: Imperial know their code has such bugs, but act as if it’s some inherent randomness of the universe, rather than a result of amateur coding. Apparently, in epidemiology, a difference of 80,000 deaths is “a small non-determinism”.

    Imperial advised Edinburgh that the problem goes away if you run the model in single-threaded mode, like they do. This means they suggest using only a single CPU core rather than the many cores that any video game would successfully use. For a simulation of a country, using only a single CPU core is obviously a dire problem – as far from supercomputing as you can get. Nonetheless, that’s how Imperial use the code: they know it breaks when they try to run it faster. It’s clear from reading the code that in 2014 Imperial tried to make the code use multiple CPUs to speed it up, but never made it work reliably. This sort of programming is known to be difficult and usually requires senior, experienced engineers to get good results. Results that randomly change from run to run are a common consequence of thread-safety bugs. More colloquially, these are known as “Heisenbugs”.
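
    For readers who haven’t met a thread-safety bug, here is a deliberately broken toy C++ program of my own (nothing to do with the Imperial code) that shows the symptom – identical inputs, a different answer each run:

        // Deliberately broken illustration: two threads update a shared counter
        // with no synchronisation. The data race means the printed total varies
        // from run to run even though nothing about the input has changed.
        #include <iostream>
        #include <thread>

        int counter = 0;                  // shared and NOT atomic: a data race

        void work() {
            for (int i = 0; i < 1000000; ++i)
                ++counter;                // read-modify-write, not thread-safe
        }

        int main() {
            std::thread t1(work), t2(work);
            t1.join();
            t2.join();
            std::cout << counter << '\n'; // expected 2000000; usually less, and different each run
        }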

    But Edinburgh came back and reported that – even in single-threaded mode – they still see the problem. So Imperial’s understanding of the issue is wrong. Finally, Imperial admit there’s a bug by referencing a code change they’ve made that fixes it. The explanation given is “It looks like historically the second pair of seeds had been used at this point, to make the runs identical regardless of how the network was made, but that this had been changed when seed-resetting was implemented”. In other words, in the process of changing the model they made it non-replicable and never noticed.

    Why didn’t they notice? Because their code is so deeply riddled with similar bugs and they struggled so much to fix them that they got into the habit of simply averaging the results of multiple runs to cover it up… and eventually this behaviour became normalised within the team.

    In issue #30, someone reports that the model produces different outputs depending on what kind of computer it’s run on (regardless of the number of CPUs). Again, the explanation is that although this new problem “will just add to the issues” … “This isn’t a problem running the model in full as it is stochastic anyway”.

    Although the academic on those threads isn’t Neil Ferguson, he is well aware that the code is filled with bugs that create random results. In change #107, which he authored, he comments: “It includes fixes to InitModel to ensure deterministic runs with holidays enabled”. In change #158 he describes the change only as “A lot of small changes, some critical to determinacy”.

    Imperial are trying to have their cake and eat it. Reports of random results are dismissed with responses like “that’s not a problem, just run it a lot of times and take the average”, but at the same time, they’re fixing such bugs when they find them. They know their code can’t withstand scrutiny, so they hid it until professionals had a chance to fix it, but the damage from over a decade of amateur hobby programming is so extensive that even Microsoft were unable to make it run right.

    No tests. In the discussion of the fix for the first bug, Imperial state the code used to be deterministic in that place but they broke it without noticing when changing the code.

    Regressions like that are common when working on a complex piece of software, which is why industrial software-engineering teams write automated regression tests. These are programs that run the program with varying inputs and then check the outputs are what’s expected. Every proposed change is run against every test and if any tests fail, the change may not be made.
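
    As a sketch of what such a test looks like – with a toy stand-in model of my own, since the real one is the thing under dispute – the pattern is simply a fixed seed in, and a recorded “golden” output compared against:

        // Sketch of a golden-output regression test. run_model is a toy
        // stand-in, invented for illustration; the pattern is the point:
        // the same seed and inputs must reproduce the recorded output.
        #include <cmath>
        #include <iostream>
        #include <random>

        double run_model(unsigned seed) {                  // toy stand-in for the real model
            std::mt19937 rng(seed);
            std::normal_distribution<double> daily(100.0, 10.0);
            double total = 0;
            for (int day = 0; day < 365; ++day)
                total += daily(rng);
            return total;
        }

        int main() {
            const double golden = run_model(42);           // in real use, recorded from a trusted run
            double result = run_model(42);                 // re-run with identical seed and inputs
            bool ok = std::fabs(result - golden) < 1e-9;   // deterministic code needs no slack
            std::cout << (ok ? "ok" : "REGRESSION") << '\n';
            return ok ? 0 : 1;
        }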

    The Imperial code doesn’t seem to have working regression tests. They tried, but the extent of the random behaviour in their code left them defeated. On 4th April they said: “However, we haven’t had the time to work out a scalable and maintainable way of running the regression test in a way that allows a small amount of variation, but doesn’t let the figures drift over time.”

    Beyond the apparently unsalvageable nature of this specific codebase, testing model predictions faces a fundamental problem, in that the authors don’t know what the “correct” answer is until long after the fact, and by then the code has changed again anyway, thus changing the set of bugs in it. So it’s unclear what regression tests really mean for models like this – even if they had some that worked.

    Undocumented equations. Much of the code consists of formulas for which no purpose is given. John Carmack (a legendary video-game programmer) surmised that some of the code might have been automatically translated from FORTRAN some years ago.

    For example, on line 510 of SetupModel.cpp there is a loop over all the “places” the simulation knows about. This code appears to be trying to calculate R0 for “places”. Hotels are excluded during this pass, without explanation.

    This bit of code highlights an issue Caswell Bligh has discussed in your site’s comments: R0 isn’t a real characteristic of the virus. R0 is both an input to and an output of these models, and is routinely adjusted for different environments and situations. A model that consumes its own outputs as inputs is a problem well known in the private sector – it can lead to rapid divergence and incorrect prediction. There’s a discussion of this problem in section 2.2 of the Google paper, “Machine learning: the high interest credit card of technical debt”.
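
    A toy loop of my own (invented numbers, not the Imperial model) shows why feeding a model’s output back in as its input is dangerous – any systematic bias compounds instead of cancelling:

        // Illustration only: an estimated R0 is recycled as the next cycle's
        // input. A 5% systematic bias compounds to roughly +63% in ten cycles,
        // with no change at all in the underlying epidemic.
        #include <iostream>

        int main() {
            double r0 = 2.4;                   // initial input estimate
            const double bias = 1.05;          // hypothetical 5% over-estimate per cycle
            for (int cycle = 1; cycle <= 10; ++cycle) {
                r0 = r0 * bias;                // model "output" fed back as input
                std::cout << "cycle " << cycle << ": R0 = " << r0 << '\n';
            }
        }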

    Continuing development. Despite being aware of the severe problems in their code that they “haven’t had time” to fix, the Imperial team continue to add new features; for instance, the model attempts to simulate the impact of digital contact tracing apps.

    Adding new features to a codebase with this many quality problems will just compound them and make them worse. If I saw this in a company I was consulting for I’d immediately advise them to halt new feature development until thorough regression testing was in place and code quality had been improved.

    Conclusions. All papers based on this code should be retracted immediately. Imperial’s modelling efforts should be reset with a new team that isn’t under Professor Ferguson, and which has a commitment to replicable results with published code from day one.

    On a personal level, I’d go further and suggest that all academic epidemiology be defunded. This sort of work is best done by the insurance sector. Insurers employ modellers and data scientists, but also employ managers whose job is to decide whether a model is accurate enough for real world usage and professional software engineers to ensure model software is properly tested, understandable and so on. Academic efforts don’t have these people, and the results speak for themselves.

    My identity. Sue Denim isn’t a real person (read it out). I’ve chosen to remain anonymous partly because of the intense fighting that surrounds lockdown, but there’s also a deeper reason. This situation has come about due to rampant credentialism and I’m tired of it. As the widespread dismay by programmers demonstrates, if anyone in SAGE or the Government had shown the code to a working software engineer they happened to know, alarm bells would have been rung immediately. Instead, the Government is dominated by academics who apparently felt unable to question anything done by a fellow professor. Meanwhile, average citizens like myself are told we should never question “expertise”. Although I’ve proven my Google employment to Toby, this mentality is damaging and needs to end: please, evaluate the claims I’ve made for yourself, or ask a programmer you know and trust to evaluate them for you.

    https://lockdownsceptics.org/code-re...rgusons-model/
    I find it truly mind-boggling, albeit predictable, that the media hasn’t seen fit to highlight this issue. The world is going to hell in a handcart and they seem more concerned with what some dopey CNN reporter said at a White House press conference.

  11. The Following User Groans At cancel2 2022 For This Awful Post:

    FUCK THE POLICE (05-18-2020)

  12. The Following 2 Users Say Thank You to cancel2 2022 For This Post:

    dukkha (05-17-2020), Grokmaster (05-17-2020)

  13. #24

    Computer code for Ferguson's model which predicted 500,000 would die from Covid-19 and inspired Britain's 'Stay Home' plan is a 'mess which would get you fired in private industry' say data experts

    https://www.dailymail.co.uk/news/art...s-experts.html

  14. The Following User Groans At cancel2 2022 For This Awful Post:

    FUCK THE POLICE (05-18-2020)

  15. The Following 2 Users Say Thank You to cancel2 2022 For This Post:

    Earl (05-18-2020), Grokmaster (05-17-2020)

  16. #25


    This is his mistress. She looked pretty good with a bit of slap on, but side on, dog rough!!


  17. The Following User Says Thank You to cancel2 2022 For This Post:

    Grokmaster (05-17-2020)

  18. #26

    What is the mechanism for calling for the total destruction of the economy, and why would I want to do that? I cannot believe people are that twisted to believe such a thing. Righties say the stupidest shit and other righties chime in. How do you live with such hate?

  19. The Following User Groans At Nordberg For This Awful Post:

    cancel2 2022 (05-17-2020)

  20. #27

    Quote Originally Posted by Nordberg View Post
    What is the mechanism for calling for the total destruction of the economy, and why would I want to do that? I cannot believe people are that twisted to believe such a thing. Righties say the stupidest shit and other righties chime in. How do you live with such hate?
    Man, you're either incredibly naive or just plain stupid, maybe both.

  21. The Following User Groans At cancel2 2022 For This Awful Post:

    FUCK THE POLICE (05-18-2020)

  22. The Following User Says Thank You to cancel2 2022 For This Post:

    Grokmaster (05-17-2020)

  23. #28

    You lazy sod.


    On March 16th a professor from Imperial College in London called Neil Ferguson used a mathematical computer model he created in 2009 to estimate the infection rate and death toll of the coronavirus.

    Only 2 days later he entered “self isolation” having supposedly contracted the coronavirus himself.

    He said that the USA would see up to 2.2 million deaths and predicted 500,000 for the UK.

    His report clearly stated that he believed we needed to lock the country down for up to 18 months.

    On March 20th America shut its borders to travelers from Europe, and shortly after, the UK.

    On March 25th, only 9 days after scaring the world into an unprecedented lockdown, Ferguson drags himself out of “self isolation” to speak to a parliamentary committee and reveals that he has now “readjusted his model” and “now feels confident that the death toll in the UK will be below 20,000”!





    https://www.thepause.com/consciousness/how-did-this-guy-get-to-bankrupt-the-world-see-the-timeline-of-errors-for-yourself/

  24. #29

    Quote Originally Posted by Grajonca View Post
    Here is a woman with 30 years’ experience in IT, and what she has to say about Ferguson’s computer model is truly devastating. In years to come people are going to ask how the fuck did we let this garbage code decide government policy?
    JPP leftists won't read that, Tom. They wouldn't understand it if they tried, either.

  25. #30

    Neil Ferguson's model has been the bedrock of the Anglo-American lockdown.

    It is his Imperial College modelling that predicted a genocidal number of deaths and a severe crisis unless a total lockdown was implemented, even when rival models, like one from Oxford University, predicted otherwise.

    Now, it is understandable why politicians panic.

    No one wants to be blamed for inaction, especially in the early days, when Italy’s socialized health care was collapsing.

    It turns out, the model was severely flawed.

    The model’s software was 13 years old, and the program produced effectively random predictions.

    This particular scientist has a history of failed predictions.

    Shutdowns aren’t sustainable.



    https://thefederalist.com/2020/05/18/how-blind-faith-in-scientific-expertise-wrecked-the-economy/
