The human mind is difficult to create, replicate, or duplicate. It has tremendous capacity and complexity, allowing people to understand and learn about the world around them by taking in information and applying past experience and other stimuli. Human minds are, simply put, incredibly ingenious, and they have made the world what it is today. However, all of this happened over many lifetimes and through countless individual contributions. In short, it took a long time. What if there were a way to increase speed and efficiency? A way for something to be good at what a human is not? This is one of the many things machine learning makes possible. Machine learning represents one of the most complex groups of computer technologies to date, and it is important to understand everything that led up to it.
Before the 1940s, almost no one believed that machine learning was possible. Human minds were considered unique, unlike any other intelligence, and certainly unlike machines made of mere metal, materials, and electricity. That changed in 1943, when "two scientists, Warren S. McCulloch and Walter H. Pitts, publish the groundbreaking paper A Logical Calculus of the Ideas Immanent in Nervous Activity" (CHM, 2022). This paper marked the start of machine learning: it described the world's first algorithm that mimicked the human mind. It mattered because most people had always assumed that human intelligence was superior to any other intelligence in the world; with this work, many began to believe that a machine might be able to produce something similar to the dominant intelligence of the human mind. Afterward, in 1948, "Norbert Wiener would publish his book Cybernetics" (CHM, 2022), and in 1951 Alan Turing, the famous mathematician, proposed "that if a computer, based on written replies to questions, could not be distinguished from a human respondent, then it must be 'thinking'" (CHM, 2022). These works helped machine learning gain popularity and widespread recognition, drawing researchers and funding to the Dartmouth Summer Research Project, the birthplace of artificial intelligence. Algorithms soon followed that allowed machines to play checkers and tic-tac-toe and to recognize patterns. Automation came next, with robotic arms performing step-by-step tasks on assembly lines; these arms advanced further, and their uses broadened over the years to include various robust or more technical tasks. Then in 1979, "The Stanford Cart, a long-term research project undertaken at Stanford University between 1960 and 1980" (CHM, 2022), was completed.
This progression shows machine learning evolving from basic tasks to far more complicated ones. The cart could traverse a room full of obstacles under computer control, and it later became autonomous. The project served as the basic configuration and test bed for what would eventually become the lunar rover. Without machine learning, spacecraft, satellites, and vehicles could not readjust to minor changes or recalculate when problems arise, and anything autonomous would simply be impossible. However, these advancements were only the start. A large majority of the population still did not believe machine learning could beat the best of the best among human minds. It was far faster and more efficient, certainly, but it did nothing superhuman, nothing that humans could not also do. This fueled a major debate over whether machine learning could be smarter than, and defeat, the height of the human mind. The answer came at the chess match between IBM's Deep Blue and Garry Kasparov: "With the ability to evaluate 200 million positions per second, IBM's Deep Blue chess computer defeats the current world chess champion, Garry Kasparov on May 11" (CHM, 2022). The rise of the World Wide Web and the internet then spread the controversy throughout the world. People's perspectives on machine learning shifted, and large corporations began investing in these advanced systems and integrating them into their applications. The idea that machine learning could do many things better than the human mind had been solidified. Machine learning algorithms offered accurate ways to track, monitor, and satisfy user recommendations, from recognizing different kinds of videos to identifying individuals to making suggestions. Companies improved their businesses by aligning with what consumers wanted.
The logistics of machine learning resemble the way a human mind comes to understand something, though with a key difference: machine learning may need far more data to reach an understanding than a human mind does. To understand how machine learning works, it helps to know what it is built on. Machine learning systems are written in programming languages such as Python, JavaScript, C++, Java, and R, which act as translators allowing a human mind to communicate a set of instructions to a computer. "Machine learning is essentially concerned with extracting models from data, often (though not exclusively) using them for prediction" (Hüllermeier, p.1, 2021). In other words, machine learning uses data to give accurate feedback about certain elements of an application; the more data it has, the more useful the information it can provide. For the human mind, the analogue is practice: humans learn and get things right by practicing over and over. Machine learning uses data points as practice, and with more data it can practice more until it is almost always right. The difference is that machine learning can examine millions of data points quickly, while a human mind would take a very long time to do so. This is one of the things that makes machine learning so useful. To learn from this data, there has to be an algorithm. These algorithms are complex and mathematically heavy; they process data to build a model, showing the machine how to collect, use, and learn from a data set.
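The idea of "extracting a model from data and using it for prediction" can be sketched with a minimal example: fitting a straight line to a handful of points by ordinary least squares, then predicting a new value. The data here (practice hours versus test scores) is invented for illustration, and real systems would use far larger datasets and dedicated libraries.

```python
# Minimal sketch of "extracting a model from data" via simple linear
# regression (ordinary least squares), then using it for prediction.
# The toy dataset below is invented for illustration.

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form least-squares estimates.
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Training" data: hours of practice vs. test score (hypothetical).
hours = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 64, 70]

slope, intercept = fit_line(hours, scores)

def predict(x):
    return slope * x + intercept

print(round(predict(6), 1))  # -> 73.9, the model's predicted score
```

More data points would, as the essay notes, generally make the fitted model more reliable; this is the "practice" analogy in its simplest form.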
There are three main kinds of machine learning: supervised, unsupervised, and reinforcement algorithms. Each has unique qualities that determine when it should be used. In supervised machine learning, the algorithm is trained to separate data into various classifications, whether general or specific. This is valuable for statistical research, where a supervised method "combines a large number of summary statistics (Materials and Methods) that provide complementary information about the shape of the genealogy underlying a region of the genome" (Schrider, p.8, 2018). With labeled data, researchers can find correlations and determine where and what each piece of data is, and they can also see how the machine arrives at its results and conclusions. This is possible because supervised classification and regression have defined parameters: independent and dependent variables. Unsupervised learning, by contrast, is suited to research without predefined parameters. It looks for any correlation within the datasets: hidden patterns, groups, or other unnoticed relationships. Doing so creates a "system that does not involve human bias; we do this through an unsupervised machine learning approach" (Cheng, p.5, 2021). It can reach accurate conclusions about things a human mind might miss or take too long to see, without the dataset being altered by a human's assumptions; the dataset is left to the machine, away from outside interference. Reinforcement learning is trial and error: when prompted, the system is rewarded if correct and punished if incorrect, and it usually works through many steps to reach its goal. It is used to teach new skills and to increase the frequency of desired actions. For example, "in ML-reinforced biosensors, it is essential to select appropriate algorithms according to the signal characteristics and the application" (Zhang, p.4, 2021); in such a setting, the system must learn from its decision-making process, using data from its sensors, and be tested over time on whether each action was good or bad.
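The reward-and-punishment loop of reinforcement learning can be sketched as a simple epsilon-greedy agent: it tries actions, receives rewards, and gradually shifts toward the action that pays off. The two actions and their reward probabilities below are invented for illustration; real reinforcement learning involves states and multi-step goals.

```python
import random

# Sketch of trial-and-error (reinforcement) learning: an epsilon-greedy
# agent estimates the value of two actions purely from rewards.
# The reward probabilities are hypothetical and hidden from the agent.

random.seed(0)
true_reward = {"A": 0.2, "B": 0.8}   # the environment's secret
estimates = {"A": 0.0, "B": 0.0}     # the agent's learned values
counts = {"A": 0, "B": 0}
epsilon = 0.1                        # how often to explore at random

for step in range(2000):
    if random.random() < epsilon:    # explore: try a random action
        action = random.choice(["A", "B"])
    else:                            # exploit: pick the best estimate
        action = max(estimates, key=estimates.get)
    reward = 1 if random.random() < true_reward[action] else 0
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    estimates[action] += (reward - estimates[action]) / counts[action]

best = max(estimates, key=estimates.get)
print(best)  # the agent should discover that "B" pays off more often
```

The "reward if correct, punish if incorrect" signal is the only feedback the agent ever sees, which is exactly what distinguishes this paradigm from supervised learning's labeled examples.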
Machine learning has had a great impact on the current world because of how useful it is and how well it does what it is supposed to do. It can model, gather, store, and analyze data; recognize patterns, trends, and behaviors; make predictions; and react to simulations, all quickly and effectively. Its largest problem is the lack of good data: a great deal of data is needed, not all of it is good, and finding all the outliers and inconsistencies slows its improvement and perfection. Still, machine learning has huge implications and impacts on the world. It is commonly used by some of the largest and most powerful corporations and is still improving. Amazon, for example, uses "algorithms importantly in today's online world with www.amazon.com making 2,500,000 price changes every day based on their algorithm" (Yeoman, p.2, 2019). Giant corporations like Alphabet, Apple, Facebook, Tesla, and Walmart do the same, commonly using machine learning in the services they provide to consumers. Alphabet offers services such as Google and YouTube, where machine learning is responsible for predicting what consumers would like to see, from recommended searches to videos. These systems obtain data by tracking consumers' activity on a website, from whether certain links are clicked to how long users spend looking at what is recommended to them. That data allows machine learning to recommend items of similar interest. It is all around us: Facebook does similar things, though with different algorithms, and a great deal goes on behind the scenes when applications use machine learning to give consumers what they want. Even Walmart uses machine learning to manage inventory, place orders, and predict purchases in stores and online. "The analysis reveals that at a low range of mean demand, a firm should completely rely on an 'on-demand' public cloud provider" (Yeoman, p.3, 2019). This shows that machine learning can infer a variety of things about consumers from their data. These are only the most visible impacts so far. Machine learning is still growing and becoming much more complex; work on it has not stopped, and its impact is only growing.
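The recommendation pattern described above, tracking which items a user engaged with and suggesting items that similar users liked, can be sketched with a simple overlap score. The users and interests below are invented for illustration; production recommenders use far richer signals and models.

```python
# Sketch of a similarity-based recommender: suggest items liked by
# users whose interest histories overlap with yours.
# All user names and interests here are invented for illustration.

histories = {
    "alice": {"cooking", "travel", "gardening"},
    "bob":   {"cooking", "travel", "chess"},
    "carol": {"chess", "boxing"},
}

def recommend(user, histories):
    """Rank items the user hasn't seen by how many shared-interest users liked them."""
    seen = histories[user]
    scores = {}
    for other, items in histories.items():
        if other == user:
            continue
        overlap = len(seen & items)       # shared interests = similarity
        for item in items - seen:         # only unseen items are candidates
            scores[item] = scores.get(item, 0) + overlap
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice", histories))  # "chess" ranks first (via bob's overlap)
```

Even this toy version captures the essay's point: the system never asks users what they want; it infers it from recorded behavior.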
Artificial intelligence is another category that machine learning has made possible. A recent example is Apple's Siri; many other corporations soon followed suit with Amazon's Alexa, Microsoft's Cortana, and Google's Assistant. These services are part of the broader push toward artificial intelligence, and they need a great deal more improvement before anything approaching true intelligence is reached. As more people use them, they become slightly better, but they are far from perfect. "Artificial intelligence is the science of studying how to build intelligent programs and machines that can solve problems in an imaginative manner" (Fersht, p.2, 2019). In other words, such systems would be able to think for themselves, beyond what is explicitly in their programming, and the implications of that are astronomical. Tesla has been rising due to its artificial intelligence advancements, from autonomous driving to robots. The company is known for much more than its cars; it currently leads in good data, the kind that yields useful information for building the best artificial intelligence algorithms. This is why so many corporations are racing against each other, such as Facebook's massive investment in developing Meta: even though it has lost a great deal in the short term, the company believes artificial intelligence will rise in the long term. Artificial intelligence has since become far more advanced, producing deep fakes and generated images that look more and more like the real thing.
Plans for machine learning center on broadening its versatility, since its potential seems to have no end. Medical industries have been experimenting with genetic algorithms and with implementing machine learning in their healthcare systems, thanks to the many different ways an algorithm can be written. The open problem is whether it is ethical for machine learning to take care of patients; these systems can sometimes behave unethically and cause harm. To make progress, they need more data, which means more use of such systems within the "therapeutic relationship and will need to be bound by the core ethical principles, such as beneficence and respect for patients, that have guided clinicians" (Char, p.10, 2018). In practice, this means clinicians would work alongside machine learning as it is brought into the medical field; until there is enough good data, machine learning does not seem likely to operate in medicine on its own anytime soon, because collecting the necessary good data is difficult and takes a long time. Machine learning can also improve research by "allowing the inverse model to vary the aperture field in a statistically-informed fashion" (Hawkins, p.27, 2020). In other words, researchers use machine learning to find out what would happen, also known as simulation: it brings together all the data and makes accurate predictions, which helps speed up processes that would otherwise take a very long time across many computations.
In conclusion, machine learning has been around for nearly a lifetime and has advanced enormously since it was first introduced to the world. Things people never would have dreamed possible are now real; few believed how valuable machine learning would become. It is now used to track nearly everything online and surrounds us in everyday life. Machine learning has become a commodity, used as an effective business strategy for learning about consumers, and data has grown exponentially more important than before. With information about individuals, corporations can target and serve the wants of the masses. And this is only the beginning: its capabilities have not been perfected, and machine learning is expanding into every part of the world, from medicine to simulations to games, writing, speaking, and creating. It will become more important than ever as the world discovers the full extent of its capabilities.
Resources
Chai, W. (2020, October 20). A timeline of machine learning history. WhatIs.com. Retrieved November 28, 2022, from https://www.techtarget.com/whatis/A-Timeline-of-Machine-Learning-History
Char, D. S., Shah, N. H., & Magnus, D. (2018). Implementing machine learning in health care — addressing ethical challenges. New England Journal of Medicine, 378(11), 981–983. https://doi.org/10.1056/nejmp1714229
Cheng, T.-Y., Huertas-Company, M., Conselice, C. J., Aragón-Salamanca, A., Robertson, B. E., & Ramachandra, N. (2021). Beyond the Hubble sequence – exploring galaxy morphology with unsupervised machine learning. Monthly Notices of the Royal Astronomical Society, 503(3), 4446–4465.
Fersht, A. R. (2021). Alphafold – a personal perspective on the impact of machine learning. Journal of Molecular Biology, 433(20), 167088. https://doi.org/10.1016/j.jmb.2021.167088
Hawkins, A. J., Fox, D. B., Koch, D. L., Becker, M. W., & Tester, J. W. (2020). Predictive inverse model for advective heat transfer in a short‐circuited fracture: Dimensional Analysis, machine learning, and field demonstration. Water Resources Research, 56(11). https://doi.org/10.1029/2020wr027065
Hüllermeier, E., & Waegeman, W. (2021). Aleatoric and epistemic uncertainty in Machine Learning: An introduction to concepts and methods. Machine Learning, 110(3), 457–506. https://doi.org/10.1007/s10994-021-05946-3
Schrider, D. R., Ayroles, J., Matute, D. R., & Kern, A. D. (2018). Supervised machine learning reveals introgressed loci in the genomes of drosophila simulans and D. Sechellia. PLOS Genetics, 14(4). https://doi.org/10.1371/journal.pgen.1007341
Timeline of Computer History. (2022). AI & Robotics: Timeline of Computer History. Computer History Museum. Retrieved November 28, 2022.
Yeoman, I. (2019). Algorithms. Journal of Revenue and Pricing Management. https://doi.org/10.1057/s41272-019-00196-4
Zhang, K., Wang, J., Liu, T., Luo, Y., Loh, X. J., & Chen, X. (2021). Machine learning‐reinforced noninvasive biosensors for Healthcare. Advanced Healthcare Materials, 10(17), 2100734. https://doi.org/10.1002/adhm.202100734