G**P
‘As of early 2017, AI is currently used by many tech giants including Microsoft, Apple, Uber, Google, Facebook, and IBM.’
Author Michael Taylor offers no biographical information to provide a reference for his expertise in writing this book, but begin reading and absorbing this well-illustrated manual that is designed for beginners only (as Michael states, ‘This book is designed as a visual introduction to neural networks. It is for BEGINNERS and those who have minimal knowledge of the topic. If you already have a general understanding, you might not get much out of this book’), and as such it is a solid starting point for a complex subject.

Michael’s manner of definition, explanation, and teaching is easily accessible and even a pleasure to read. He first defines his subject: ‘Neural networks have made a gigantic comeback in the last few decades and you likely make use of them everyday without realizing it, but what exactly is a neural network? What is it used for and how does it fit within the broader arena of machine learning? To start, we’ll begin with a high-level overview of machine learning and then drill down into the specifics of a neural network…. A neural network, also known as an artificial neural network, is a type of machine learning algorithm that is inspired by the biological brain. It is one of many popular algorithms that is used within the world of machine learning, and its goal is to solve problems in a similar way to the human brain. Neural networks are part of what’s called Deep Learning, which is a branch of machine learning that has proved valuable for solving difficult problems, such as recognizing things in images and language processing. Neural networks take a different approach to problem solving than that of conventional computer programs. To solve a problem, conventional software uses an algorithmic approach, i.e. the computer follows a set of instructions in order to solve a problem. In contrast, neural networks approach problems in a very different way by trying to mimic how neurons in the human brain work. In fact, they learn by example rather than being programmed to perform a specific task. Technically, they are composed of a large number of highly interconnected processing elements (nodes) that work in parallel to solve a specific problem, which is similar to how the human brain works.’

Taking us into the meat of the book, Michael informs us, ‘There are many reasons why neural networks fascinate us and have captivated headlines in recent years. They make web searches better, organize photos, and are even used in speech translation. Heck, they can even generate encryption. At the same time, they are also mysterious and mind-bending: how exactly do they accomplish these things? What goes on inside a neural network? On a high level, a network learns just like we do, through trial and error. This is true regardless if the network is supervised, unsupervised, or semi-supervised. Once we dig a bit deeper though, we discover that a handful of mathematical functions play a major role in the trial and error process. It also becomes clear that a grasp of the underlying mathematics helps clarify how a network learns. This is why the following chapters will be devoted to understanding the mathematics that drive a neural network. To do this, we will use a feedforward network as our model and follow input as it moves through the network.’

This is an intelligent, well-scripted book, rich in helpful diagrams, that takes a topic about which most of us have little knowledge and turns it into fresh, usable knowledge. And that is a feat! Grady Harp, September 17
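To make the quoted idea of input moving through a feedforward network concrete, here is a minimal sketch of a single forward pass in Python with NumPy. It is not taken from the book; the layer sizes, random weights, and sigmoid activation are illustrative assumptions.

import numpy as np

# Illustrative layer sizes: 3 inputs, 4 hidden nodes, 2 outputs
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # weights connecting input layer to hidden layer
b1 = np.zeros(4)               # hidden-layer biases
W2 = rng.normal(size=(2, 4))   # weights connecting hidden layer to output layer
b2 = np.zeros(2)               # output-layer biases

def sigmoid(z):
    """Squash each node's weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def feedforward(x):
    """Pass one input vector through the network, layer by layer."""
    hidden = sigmoid(W1 @ x + b1)      # each hidden node combines all inputs
    output = sigmoid(W2 @ hidden + b2) # output nodes combine all hidden values
    return output

print(feedforward(np.array([0.5, -1.2, 3.0])))

Training would then adjust W1, b1, W2, and b2 by comparing the outputs against known examples and nudging the weights, which is the learning-by-example, trial-and-error process the quoted passage describes and the book develops mathematically.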
P**S
Very Useful
Make Your Own Neural Network by Michael Taylor is an introductory text that is meant to teach people the basics needed to understand what neural networks are, how they function, and the different types. The book begins by taking the reader through the topics that will be discussed. It then moves into a discussion of the types of neural networks and their different uses and components. This section is going to be the most useful to people who are new to the topic, as it introduces much of the terminology the reader will need in order to follow the later chapters. The next major topic the book dives into is the math behind neural networks. This is the section that lets the reader know the topic is not for the faint of heart: working with these networks requires a solid grasp of algebra, statistics, and calculus. This section also includes charts that help the reader better understand the concepts in this and later chapters. After the opening chapters the book begins to dive into the heavier concepts that make up the content of the book.

This book is very informative and will be very useful to anyone who is looking to brush up on the idea of neural networks or looking to be introduced to it. I will say that it may be a heavy read for anyone who does not have a background in this topic. I had a hard time keeping up with some of the content and had to reread some sections to be sure I understood what was going on. This is not a knock on the author or the content; it is just a very in-depth subject that requires some background knowledge. The book was well edited and presented in a very easy-to-follow format. Great job to the author.
N**I
Book quality is good and printing quality looks clean
Looks good
A**S
Well, it is for a very special type of beginner (meaning very advanced)
If you have a pretty good handle on calculus (partial derivatives, chain rule, etc.), statistics (various cost functions), S-curves (logistic regression, tanh, etc.), and some coding in Python and TensorFlow... then you are a qualified beginner, able to digest the information within this book.

The "basic" example of building a DNN model with TensorFlow near the end of the book takes 47 lines of fairly cryptic code. The next example, teaching you how to build another DNN model with another piece of Google software, takes 29 lines of pretty challenging code. In other words, there is absolutely nothing "beginner-like" within this book. But that is not the author's fault. There is nothing beginner-like about DNN models. They are at the cutting edge of AI, and not the simplest edge. To the author's credit, I am not sure he could have made the subject any clearer or more "spelled out" than he already has. The way he explains in detail every single Greek math formula is brilliant. I wish other technically oriented authors took his lead in explaining Greek formulas as well as he has. That part of the book was a real pleasure to work through.

When the author explains the theory, he is world class. When he chooses coding examples, I may not be able to evaluate him fairly, but I intuitively feel he did not use the most straightforward examples for a "beginner" audience. Again, that may not be his fault; it could be the nature of TensorFlow and the related programs.

I have a bit of experience developing DNNs, and I found developing DNNs with the R deepnet package far easier than what he demonstrates with TensorFlow. In a single line of code using that R package you can specify a very well-specified DNN. I gather that among AI aficionados, TensorFlow is the coolest package to use, but it certainly seems far more complicated than some of the alternatives.
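For comparison, here is a rough sketch, not one of the book's examples, of how a small DNN can be specified in just a few lines using TensorFlow's high-level Keras API; the layer sizes, toy data, and training settings are my own illustrative assumptions.

import numpy as np
import tensorflow as tf

# Toy data: 100 samples, 8 features, binary labels (purely illustrative)
X = np.random.rand(100, 8).astype("float32")
y = (X.sum(axis=1) > 4).astype("float32")

# A small fully connected network defined with the high-level Keras API
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)    # train briefly on the toy data
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]

This sits somewhere between the book's lower-level 47-line example and the one-line R deepnet call mentioned above; the line count depends heavily on which layer of the API you write against.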