Gulf Stream itself in danger of disruption - open discussion

This map is egregiously incorrect. The Gulf Stream does not flow across the North Atlantic at all.

The Gulf Stream flows from the Bahamas along the eastern coast of North America, fading to nothing along the way. The stream that crosses the North Atlantic Ocean is a COLD current (flowing in deep waters) known as the Atlantic Drift current. It is NOT fed by the Gulf Stream in any way.

The westward-flowing current that feeds the Gulf Stream is the Atlantic North Equatorial Current.

Currents are driven by unequal heating of the Earth's surface. This unequal heating drives both ocean and air currents. NOTHING can shut them down unless you shut off the Sun or make the Earth stop spinning.

You're wrong but you won't admit it!
 
> Honestly, I don't know how to respond to this.
Probably because this is new to you?
> I like you, but statistics is all about using a minimum amount of data to attempt to derive a signal.
WRONG. Statistics does NOT have ANYTHING to do with interpolation, which is MAKING UP NUMBERS.
> It's no different than machine learning.
Machine 'learning' is not statistics and has nothing to do with it.
> Surely you would agree that AI is able to find patterns using minimal data?
No. It is not. AI is about discerning patterns in data... period. It then uses that to expand a choice matrix. Example: dictionaries used in speech recognition are made up of phonemes that are preprogrammed in at first, then expanded as more voice samples arrive. This is the basis of services like Alexa. The Echo devices that run Alexa also employ a unique and very effective noise-cancellation method involving multiple microphones.

Siri has a less well-developed dictionary, but has the advantage of using simple microphones with a timed sampling method for noise rejection (the irritating beep you have to wait for before speaking). It also has the advantage of being able to use offline preprogrammed dictionaries, making it reasonably well suited for automotive use.

Google has a well-developed dictionary, and also uses multiple microphones for noise rejection on its Google Home devices. On Android devices such as phones, people speak directly into the phone, removing the need for multiple microphones.
> That's why there are confidence intervals and minimum sample sizes.

The term 'confidence interval' is a buzzword, and so is 'minimum sample size'. They are commonly used among AI programmers but don't really mean much.

Machines do not learn. They fake it. AI is about building a decision matrix that is expanded by additional information, which effectively fakes 'learning'. Computers, above all, are general-purpose sequencing machines and I/O devices (telegraphy and telemetry), no smarter than your average washing-machine timer. Whether it's an ARM chip, an Intel CPU, an AMD CPU, or an Nvidia GPU, all computers are simple sequencers, cycling off a single oscillator. Statistical math is not involved in any of it.
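The 'decision matrix expanded by additional information' idea described above can be sketched as a lookup table that grows as new samples arrive. A minimal sketch follows; the function names and phoneme codes are invented for illustration and are not from any real speech system:

```python
def classify(decision_matrix, features, default="unknown"):
    """Look the feature tuple up in the matrix; unseen inputs get a default."""
    return decision_matrix.get(features, default)

def learn(decision_matrix, features, label):
    """Expand the matrix with one more observed pairing."""
    decision_matrix[features] = label

# Start with a small preprogrammed matrix (like a phoneme dictionary).
matrix = {("k", "ae", "t"): "cat"}
learn(matrix, ("d", "ao", "g"), "dog")      # expand with a new sample

word = classify(matrix, ("d", "ao", "g"))   # now recognized
miss = classify(matrix, ("f", "ih", "sh"))  # still unknown
```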

The closest mathematics to AI is probability math, not statistical math. Probability math is not capable of prediction either, due to its use of random numbers.

Neither statistical math nor probability math makes use of a randU type random number.
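Since 'confidence interval' and 'minimum sample size' come up in the exchange above, here for reference is a minimal sketch of how a 95% confidence interval for a sample mean is conventionally computed; the sample values are invented for illustration:

```python
import math

def mean_confidence_interval(sample, z=1.96):
    """Return (mean, low, high) for an approximate 95% CI (z = 1.96)."""
    n = len(sample)
    m = sum(sample) / n
    # Sample variance with Bessel's correction (n - 1).
    var = sum((x - m) ** 2 for x in sample) / (n - 1)
    sem = math.sqrt(var / n)  # standard error of the mean
    return m, m - z * sem, m + z * sem

# Invented temperature readings, degrees C.
readings = [18.2, 18.5, 17.9, 18.1, 18.4, 18.0, 18.3, 18.2]
mean, low, high = mean_confidence_interval(readings)
```

A larger sample shrinks the standard error, which is where 'minimum sample size' requirements come from: the interval must be narrow enough to be useful.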
 

Machine learning is comparing a statistical analysis of the new set of data against the old set from the last look, then also comparing the result against the expected result and developing the bias factor that tells the machine if it's on track.

Sorry if you think I'm wrong, but I have some experience making such a program.
I'll link it from my other computer in a future post to prove to you I know what I'm talking about.
 
> You're wrong but you won't admit it!

[Attached image: surface_currents_sm2.jpg]


This is a simplified map. There are other smaller currents not shown on this map.

The so-called El Nino and La Nina events are the results of slight changes in the positions of the north and south equatorial currents relative to the equatorial counter-current. A similar effect takes place in the Atlantic, but that ocean is smaller and the effect is weaker.

When the equatorial counter-current is stronger, you get an El Nino event, moving warm water off the coast of South America. This type of event can cause the northern jet stream to split in two as it crosses North America, one branch bringing extra moisture to southern SOTC and across the deserts of North America, the other more northerly but weaker than usual. The event is so named due to the loss of good fishing off the coast of the Americas. Fish go where the food is, and warm waters tend to smother the nutrients welling up from below that feed the algae and plankton forming the base of this food chain.

When the equatorial counter-current is 'pinched off' by the northern and southern equatorial currents, warmer water moves westward off the coast of Asia, producing a La Nina event. This often results in a very 'wavy' shape to the northern jet stream, producing unusual areas of cold and often a heavy snowpack in the mountains. The event is so named due to the 'blessing' of good fishing conditions off the Americas.

A 'neutral' year is when things are in the middle of these two extremes.

There is no natural oscillator controlling where these currents flow. It is random, like the weather. The randomness is of type randR (the same type of randomness as dice).
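The dice-style randomness invoked here (what the post labels randR) can be illustrated with a quick simulation: over many rolls of a fair die, each face turns up about one time in six, even though no single roll can be predicted. A minimal sketch:

```python
import random

def roll_frequencies(rolls=60_000, seed=1):
    """Roll a fair six-sided die and tally how often each face appears."""
    rng = random.Random(seed)  # seeded so the tally is reproducible
    counts = {face: 0 for face in range(1, 7)}
    for _ in range(rolls):
        counts[rng.randint(1, 6)] += 1
    return counts

freqs = roll_frequencies()
# Each face should land near 10,000 of 60,000 rolls (1/6 each).
```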

Sailors and fishermen (particularly of Polynesia) have long been aware of the shifts in these currents, since they affect where to sail for an ocean crossing in the area.

Today, many buoys are stationed along these currents to help generate a prediction of an event for the next winter season, as the Sun passes the equator. They are not nearly enough to measure the temperature of the ocean in this area, but they can indicate relative changes in warm and cold water at each buoy.
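The 'relative changes at each buoy' idea reduces to comparing each buoy's current reading against its own baseline. A minimal sketch, with buoy names and readings invented for illustration:

```python
def relative_changes(baseline, current):
    """Return per-buoy temperature change relative to that buoy's baseline."""
    return {buoy: round(current[buoy] - baseline[buoy], 2)
            for buoy in baseline}

# Invented equatorial buoys, degrees C.
baseline = {"buoy_A": 26.5, "buoy_B": 27.1, "buoy_C": 28.0}
current  = {"buoy_A": 27.3, "buoy_B": 27.0, "buoy_C": 28.6}
changes = relative_changes(baseline, current)
# Positive values mean warming at that buoy since the baseline period.
```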
 
>Machine 'learning' is not statistics and has nothing to do with it.

This made me laugh and cry.
https://en.wikipedia.org/wiki/Machine_learning

Machine learning (ML) is the study of computer algorithms that improve automatically through experience.[1] It is seen as a part of artificial intelligence. Machine learning algorithms build a model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to do so.[2] Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or unfeasible to develop conventional algorithms to perform the needed tasks.
 
> Machine learning is comparing a statistical analysis of the new set of data against the old set from the last look, then also comparing the result against the expected result and developing the bias factor that tells the machine if it's on track.
Nope. You are talking about probability math, not statistical math. There are no 'should be' values in statistical math. There is no 'expected result'. That's a random number of type randU (a number thought up in someone's head... the so-called pseudo-random number). The 'bias factor' is similarly a randU. A statistical summary can produce different results ON THE SAME DATA due to the use of randN during selection. Comparing two random numbers, then comparing the result against an expected random number, is rather pointless.

In probability math, you are talking about the likelihood of an expected value. Like statistical math, it cannot actually predict that value, but it can indicate the likelihood of such a value.
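The 'likelihood of an expected value' phrasing above can be made concrete with basic probability math: for two fair dice, the chance that the total equals 7 is exactly 6/36, counted by enumerating outcomes rather than predicting any single roll. A minimal sketch:

```python
from fractions import Fraction
from itertools import product

def likelihood_of_sum(target, dice=2, sides=6):
    """Exact probability that the dice total equals target, by enumeration."""
    outcomes = list(product(range(1, sides + 1), repeat=dice))
    hits = sum(1 for roll in outcomes if sum(roll) == target)
    return Fraction(hits, len(outcomes))

p = likelihood_of_sum(7)  # 6 favorable outcomes out of 36
```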

> Sorry if you think I'm wrong, but I have some experience making such a program.
I realize you think your experience building such programs involves statistical math, but it does not. Like I said, it is closer to probability math. This is actually an errant belief among many working in AI, since it is pushed by computer programming schools.
> I'll link it from my other computer in a future post to prove to you I know what I'm talking about.
No need. I believe you do work in AI. You are using the same terminology a lot of AI programmers tend to use.
I am, among other things, a mathematician, a scientist, a computer programmer, and a hardware engineer. I also design computer systems for industrial, medical, and aerospace use, often incorporating AI into these systems. It is my own business and it is a successful business.

Whether you believe my credentials is neither here nor there. Blind forums such as JPP make such claims rather useless, since there is no way to prove them.

The arguments presented, however, are valid regardless of credentials.
 
>Machine 'learning' is not statistics and has nothing to do with it.

This made me laugh and cry.
...deleted Holy Wikipedia Link and Quote...
False authority fallacy. Wikipedia as a reference is discarded on sight. You cannot use it as a reference with me. Too many articles are badly written, incomplete, or just plain wrong.
 