
Neural Networks Beginnings
Jade Carter
The book is an excellent resource for those who want to get acquainted with the basics of neural networks and their application in life. It explains in detail what a neuron is and how it works in a neural network, what weights and biases are, how a neuron makes decisions, and how a neural network is built. In addition, the book covers training neural networks, the main types of neural networks (fully connected, convolutional, and recurrent), and their application to classification, regression, and clustering problems. The author explains in detail how to use these methods in neural networks and how they can help in solving complex problems. Whether you are new to the field of neural networks or have some experience with them, this book will be a useful resource for expanding your knowledge and skills. It provides clear and accessible information about a technology that is becoming increasingly important in our lives.

Jade Carter
Neural Networks Beginnings

Introduction

Neural networks are computer systems that attempt to imitate the functioning of the human brain. They consist of neurons that are connected and process information by transmitting it through neural connections. Each neuron performs a simple function, but together they can process complex tasks.
Neural networks are important because they allow us to solve tasks that were previously impossible or very difficult to solve using traditional programming methods. They are used in various fields, including image and sound processing, speech recognition, economic trend forecasting, production process management, and much more.
Currently, neural networks are one of the key components of machine learning and artificial intelligence. They can be trained on large amounts of data and gradually improve their results, making them very useful for solving tasks that were previously inaccessible for automation.
The goal of this book is to introduce the reader to the basics of neural networks, starting with simple concepts and methods and ending with more complex topics. In the book, you will learn how neurons work, how to train neural networks, how to choose the appropriate neural network for a particular task, and how to apply neural networks to solve classification, regression, and clustering tasks.
The book is aimed at beginners and does not require any prior knowledge in the field of machine learning. It provides the reader with a complete practical guide to working with neural networks, which will help to start applying them in their own projects. By reading the book, you will acquire the necessary knowledge and practical skills for working with neural networks, as well as learn about the latest trends and developments in this field.

Our book will help you to:
Understand how neural networks work and what tasks they can solve;
Learn about various types of neural networks and choose the most suitable one for a particular task;
Learn how to create and train neural networks using different libraries and tools;
Master techniques for working with data, preparing data, and selecting the most appropriate model parameters to achieve the best results;
Learn about the application of neural networks in various fields such as image processing, speech recognition, text analysis, forecasting, and more;
Gain practical skills in working with neural networks through examples that can be applied in real projects.
In this book, we will focus on a practical approach and provide numerous examples and tasks that will help you better understand and absorb the material. You will learn how to create neural networks from scratch, train them on real data, and evaluate their results. We will also provide a wealth of resources and links to help you continue your learning and development in this field.
We are confident that this book will be useful for anyone interested in neural networks, machine learning, and artificial intelligence. Whether you are a student, IT professional, or simply a technology enthusiast, you will find a lot of useful information and practical skills in this book. Let's begin our journey into the world of neural networks!

Chapter 1: Basics of Neural Networks
Neural networks are a powerful tool in the field of artificial intelligence and machine learning. They are used in many applications, such as speech recognition, image processing, and forecasting. However, to understand how a neural network works, one must start with the basics.
The basic unit of a neural network is a neuron. A neuron is a simple information processing unit that mimics the function of a nerve cell in our brain. A neuron receives input signals from other neurons and generates an output signal that is passed on to other neurons.
Each neuron in a neural network has weights and biases. Weights determine how important each input signal is to the neuron's function, while biases are added to the sum of the input signals to make the neuron more flexible and enable it to make decisions based on a wider range of input data.
When a neuron receives input data, it multiplies the inputs by the weights and adds the bias. It then applies an activation function, which determines whether the neuron should activate and pass on the signal further through the network. The activation function can vary depending on the task the neural network is designed to perform. For example, the activation function can be sigmoid, hyperbolic tangent, ReLU (Rectified Linear Unit), and many others.
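To make this concrete, here is a minimal sketch in Python (using NumPy) of a single neuron computing its output with different activation functions. The inputs, weights, and bias are made-up values chosen purely for illustration:

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def relu(x):
    return np.maximum(0, x)

# Made-up inputs, weights, and bias for a single neuron
inputs = np.array([0.5, -1.2, 3.0])
weights = np.array([0.4, 0.7, -0.2])
bias = 0.1

# Weighted sum of the inputs plus the bias
z = np.dot(inputs, weights) + bias

# The activation function determines the neuron's output
print("sigmoid:", sigmoid(z))  # squashes z into (0, 1)
print("tanh:", np.tanh(z))     # squashes z into (-1, 1)
print("ReLU:", relu(z))        # zero for negative z, identity otherwise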

A neural network consists of many neurons that are organized into layers. There are several types of layers, but the most common types are input, hidden, and output layers. The input layer takes in the input data, while the output layer produces the result of the neural network's processing. Hidden layers are located between the input and output layers and perform various computations that help the neural network solve the task.
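As a minimal sketch of this layered structure, the toy Keras model below has an input of 4 features, one hidden layer, and an output layer of 3 classes; the layer sizes are arbitrary and not tied to any particular task:

from tensorflow import keras

# A toy network: 4 input features, one hidden layer, 3 output classes
model = keras.Sequential([
    keras.layers.Dense(8, activation='relu', input_shape=(4,)),  # hidden layer
    keras.layers.Dense(3, activation='softmax')  # output layer
])
model.summary()  # prints the layers and the number of trainable parameters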
When we talk about how a neural network is constructed, we are referring to how it organizes neurons into layers, how each neuron processes input signals, and what activation functions are used. There are many different neural network architectures, and the choice of a specific architecture depends on the specific task we want to solve.
It is important to understand that a neural network learns by adjusting the weights and biases to achieve the best result on the training data. Neural network training occurs in several stages. In the first stage, we provide input data and the desired output for that data. The neural network then predicts the result, and we compare it to the desired result to determine the error.
Using backpropagation, we can adjust the weights and biases to reduce the error and improve the accuracy of the predictions. This process is repeated many times until we reach the desired level of accuracy.
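The sketch below illustrates this error-driven update loop on a single neuron trained with gradient descent. It is not full backpropagation through a multi-layer network, but the principle of adjusting weights and biases to reduce the error is the same; the toy data is made up:

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Toy data: two inputs per example, binary target (an AND-like function)
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 0.0, 0.0, 1.0])

rng = np.random.default_rng(0)
weights = rng.normal(size=2)
bias = 0.0
learning_rate = 0.5

for epoch in range(1000):
    # Forward pass: predict outputs for all examples
    pred = sigmoid(X @ weights + bias)
    # Error between predictions and desired outputs
    error = pred - y
    # Gradient of the squared error with respect to the pre-activation (chain rule)
    grad = error * pred * (1 - pred)
    # Update weights and bias in the direction that reduces the error
    weights -= learning_rate * (X.T @ grad) / len(y)
    bias -= learning_rate * grad.mean()

print("Predictions after training:", sigmoid(X @ weights + bias).round(2))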
To better understand the concepts we've learned in Chapter 1, let's look at some examples of using neural networks:
Automatic Speech Recognition:
The neural network takes an audio file and breaks it down into sequences of fragments. Each fragment represents a short segment of sound that may contain speech sound samples.
Then each fragment is passed through a layer of neurons that use recurrent connections. This means that each neuron stores its previous state in memory and uses it to make decisions at the current step.
After the neural network processes all the sound fragments, we will obtain a sequence of probabilities for each speech sound sample in the file. Then we use a language model to generate a transcription of the speech.
Recommendation System:
The neural network takes user data, such as their preferences, purchases, viewing history, etc.
Then the neural network analyzes this data and uses it to predict what the user may be interested in. For example, if the user previously purchased science fiction books, the neural network may recommend other books on this topic.
For this, different types of neural networks can be used, such as convolutional or recurrent neural networks.
Automatic Emotion Detection:
The neural network takes data about a person's voice, facial expressions, or body gestures.
Then the neural network analyzes this data and uses it to determine the person's emotional state. For example, the neural network may determine that a person is happy, sad, angry, or experiencing other emotions.
For this, convolutional neural networks, recurrent neural networks, or a combination of different network types can be used.

These are just some examples of how neural networks can be applied in real life. Each of these examples can be implemented using different types of neural networks and configurations, and each may require a large amount of data for training. However, understanding the basics of how neural networks work and their structural elements, such as neurons, weights, and activation functions, is key to building effective neural networks and solving various machine learning tasks.
The examples described in the first chapter can be implemented using various software tools for machine learning and neural network development. Let's look at the most popular ones.
TensorFlow: an open-source machine learning framework developed by Google. TensorFlow supports various types of neural networks and makes it easy to create, train, and deploy machine learning models.
Keras: a high-level interface for building neural networks that runs on top of TensorFlow. Keras simplifies the process of creating neural networks and allows for quick experimentation with different architectures and hyperparameters.
PyTorch: an open-source machine learning framework developed by Facebook. PyTorch also supports various types of neural networks and has a user-friendly interface for creating and training models.
Scikit-learn: a Python library for machine learning. Scikit-learn includes many machine learning algorithms, including some types of neural networks, and simplifies the process of creating and evaluating models.
The specific choice of working environment depends on the specific task and the developer's personal preferences. However, all of these tools have extensive documentation and user communities that can help in the process of working with them.

Let's take a closer look at the implementation of the practical examples mentioned above in the TensorFlow environment.
1. Digit recognition in images. For digit recognition in images, we can use a neural network with several convolutional layers and fully connected layers, built with the TensorFlow library. Below is an approximate implementation of such a neural network.

The first step is to import the necessary TensorFlow modules and load the training and testing data:

import tensorflow as tf
from tensorflow import keras
#Load MNIST dataset
(train_images, train_labels), (test_images, test_labels) = keras.datasets.mnist.load_data()
#Convert data to a format suitable for training a neural network and normalize it
train_images = train_images.reshape((60000, 28, 28, 1))
train_images = train_images / 255.0
test_images = test_images.reshape((10000, 28, 28, 1))
test_images = test_images / 255.0

Define the neural network model. In this example, we will use a neural network with three convolutional layers, the first two each followed by a max pooling layer, and two fully connected layers. The output layer consists of 10 neurons corresponding to the digit classes and uses the softmax activation function.

model = keras.Sequential([
    keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Conv2D(64, (3, 3), activation='relu'),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Conv2D(64, (3, 3), activation='relu'),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(10, activation='softmax')
])

Then we can compile the model, specifying the loss function, optimizer, and metrics for evaluating the model's performance.

model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])

After that, we can start the training process by passing the training data to the model (here the test set is also passed in as validation data) and specifying the number of epochs (full passes over the training data) and the batch size (the number of examples processed in one step).

model.fit(train_images, train_labels, epochs=5, batch_size=64, validation_data=(test_images, test_labels))

Finally, we can evaluate the performance of the model on the test data.
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)

The result of training is a model that takes an image of a handwritten digit as input and predicts which digit is depicted in it. This code trains a neural network to classify images from the MNIST dataset; the trained network can then be used to recognize digits in images that were not part of the training set. To do this, simply feed the image to the network and read off the probability of each digit class.
To check the accuracy of the model, a test set of images with known labels (i.e. correct answers) can be used, and the model's predictions can be compared to these labels. The higher the accuracy of the model on the test data, the more successfully it performs the task of recognizing digits.
After training the model, it can be used to recognize digits in new images, for example, in an application for reading handwritten digits on postal codes, bank checks, or in other areas where automatic digit recognition is required.
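For instance, a single new image can be classified with the trained model as in the sketch below; it borrows an already-normalized image from the test set, but any 28x28 grayscale image scaled to [0, 1] would be handled the same way:

import numpy as np

# Borrow one normalized test image for illustration
new_image = test_images[0]
probs = model.predict(new_image.reshape(1, 28, 28, 1))
print("Predicted digit:", np.argmax(probs, axis=-1)[0])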
2. Automatic Speech Recognition. To implement the second example in the TensorFlow environment, we will build a small LSTM-based classifier that works on MFCC features extracted from an audio file. A real speech recognition system would be trained on a large corpus of labeled recordings; the code below is a simplified sketch that loads a single wav file, assumed to contain one spoken digit, and walks through the full pipeline from audio loading to prediction. Here is what the implementation looks like in TensorFlow:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
#Defining the architecture of the neural network
model = keras.Sequential(
    [
        layers.LSTM(128, input_shape=(None, 13)),
        layers.Dense(64, activation="relu"),
        layers.Dense(32, activation="relu"),
        layers.Dense(10, activation="softmax"),
    ]
)
#Compiling the model
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.001),
    loss=keras.losses.CategoricalCrossentropy(),
    metrics=["accuracy"],
)
#Loading the audio file
audio_file = tf.io.read_file("audio.wav")
audio, _ = tf.audio.decode_wav(audio_file)
audio = tf.squeeze(audio, axis=-1)
audio = tf.cast(audio, tf.float32)
#Splitting into frames of 640 samples with a step of 320 and computing the STFT;
#pad_end=True pads the signal so the last incomplete frame is not dropped
stfts = tf.signal.stft(audio, frame_length=640, frame_step=320, pad_end=True)
spectrograms = tf.abs(stfts)
#Extracting 13 MFCC features per frame (a 16 kHz sample rate is assumed)
num_spectrogram_bins = stfts.shape[-1]
mel_matrix = tf.signal.linear_to_mel_weight_matrix(
    num_mel_bins=40, num_spectrogram_bins=num_spectrogram_bins, sample_rate=16000)
mel_spectrograms = tf.tensordot(spectrograms, mel_matrix, 1)
log_mel_spectrograms = tf.math.log(mel_spectrograms + 1e-6)
mfccs = tf.signal.mfccs_from_log_mel_spectrograms(log_mel_spectrograms)[..., :13]
#Data preparation for training: the file is assumed to contain a single spoken digit
labels = ["one", "two", "three", "four", "five", "six", "seven", "eight", "nine", "zero"]
label_to_index = dict(zip(labels, range(len(labels))))
index_to_label = dict(zip(range(len(labels)), labels))
spoken_word = "one"  # assumed content of audio.wav
y_train = tf.one_hot([label_to_index[spoken_word]], depth=len(labels))
X_train = mfccs[None, ...]
#Training the model
history = model.fit(X_train, y_train, epochs=10)
#Making a prediction
predicted_probs = model.predict(X_train)
predicted_index = int(tf.argmax(predicted_probs, axis=-1)[0])
#Outputting the result
print("Predicted label:", index_to_label[predicted_index])

This code implements a simplified speech recognition example using a neural network built with TensorFlow and Keras. The first step is to define the network architecture using the Keras Sequential API. A recurrent LSTM layer takes in a sequence of 13-dimensional MFCC feature vectors. It is followed by fully connected layers with the relu activation function and one output layer with the softmax activation function, which outputs a probability for each spoken-digit class.
Next, the model is compiled using the compile method. The Adam optimizer with a learning rate of 0.001 is chosen, the loss function is categorical cross-entropy, and classification accuracy is used as the metric.
Then a sound file in wav format is loaded and decoded using tf.audio.decode_wav and converted to float32 values. The signal is split into frames of 640 samples with a step of 320; padding is added so that the last incomplete frame is not lost.

Next, mel-frequency cepstral coefficient (MFCC) features are extracted from each frame: the STFT magnitudes are projected onto a mel filter bank, the logarithm is taken, and tf.signal.mfccs_from_log_mel_spectrograms produces the coefficients, of which the first 13 are kept. These features are used to train the model.
To train the model, the data needs to be prepared. In this toy example, the file is assumed to contain a single spoken digit, and its class label is one-hot encoded using tf.one_hot. The prepared features and label are then passed to the model's fit method.
After training, the predict method is run on the same data, and the class with the highest probability is selected.
Finally, the predicted class label is printed.

3. Recommender System
For convenience, let's describe the process in five steps:
Step 1: Data collection
The first step in creating a recommender system is data collection. This involves gathering data about users, such as their preferences, purchases, browsing history, and so on. This data can be obtained from various sources, such as databases or user logs.
Step 2: Data preparation
After the data is collected, it needs to be prepared. For example, data preprocessing may be required to clean it from noise and outliers. Various techniques can be used for this, such as standardization and normalization of the data.
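As a brief sketch of what this step can look like in NumPy (on a made-up feature matrix), standardization rescales each feature to zero mean and unit variance, while min-max normalization maps values into the [0, 1] range:

import numpy as np

# Made-up user-feature matrix: rows are users, columns are numeric features
data = np.array([[25.0, 40000.0],
                 [32.0, 52000.0],
                 [47.0, 61000.0]])

# Standardization: zero mean and unit variance per column
standardized = (data - data.mean(axis=0)) / data.std(axis=0)

# Min-max normalization: each column rescaled into [0, 1]
normalized = (data - data.min(axis=0)) / (data.max(axis=0) - data.min(axis=0))

print(standardized)
print(normalized)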
Step 3: Model training
Once the data is prepared, we can proceed to model training. To create a recommender system, we can use various types of neural networks, such as convolutional neural networks or recurrent neural networks. The model should be trained on the training set of data.
Step 4: Model testing
After training the model, we need to test it to ensure that it works correctly. To do this, we can use a testing set of data. During testing, we can analyze metrics such as accuracy and recall.
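These metrics can be computed directly from binary predictions and ground-truth labels, as in this sketch with made-up arrays:

import numpy as np

# Made-up ground truth and model predictions (1 = relevant item)
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])

accuracy = (y_true == y_pred).mean()
true_positives = np.sum((y_pred == 1) & (y_true == 1))
precision = true_positives / np.sum(y_pred == 1)
recall = true_positives / np.sum(y_true == 1)

print(f"accuracy={accuracy:.2f}, precision={precision:.2f}, recall={recall:.2f}")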
Step 5: Model application
After the model has passed testing, it can be used to recommend content to users. For example, we can use the model to recommend science fiction books to a user who has previously purchased such books. In this case, the model can use data about the user to predict what they might be interested in.

The code for a recommender system depends on the type of user and item data available, as well as on the model architecture employed. Here is an example of a simple matrix factorization-based recommender system that uses user-item ratings data:

import numpy as np
#load data
ratings = np.array([
[5, 3, 0, 1],
[4, 0, 0, 1],
[1, 1, 0, 5],
[1, 0, 0, 4],
[0, 1, 5, 4],
])
#initialize parameters
num_users, num_items = ratings.shape
num_factors = 2
learning_rate = 0.01
num_epochs = 1000
#initialize user and item matrices
user_matrix = np.random.normal(scale=1./num_factors, size=(num_users, num_factors))
item_matrix = np.random.normal(scale=1./num_factors, size=(num_factors, num_items))
#matrix factorization training
for epoch in range(num_epochs):
    for i in range(num_users):
        for j in range(num_items):
            if ratings[i][j] > 0:
                error = ratings[i][j] - np.dot(user_matrix[i, :], item_matrix[:, j])
                user_matrix[i, :] += learning_rate * (error * item_matrix[:, j])
                item_matrix[:, j] += learning_rate * (error * user_matrix[i, :])
#predict ratings for all users and items
predicted_ratings = np.dot(user_matrix, item_matrix)
#recommend items for a specific user
user_id = 0
recommended_items = np.argsort(predicted_ratings[user_id])[::-1]
print("Recommendations for user", user_id)
print(recommended_items)

In this example, we used matrix factorization to build a recommender system. We initialized user and item matrices with random values and trained them based on known user and item ratings. We then used the obtained matrices to predict ratings for all users and items, and then recommended items based on these predictions for a specific user. In real systems, more complex algorithms and more diverse data can be used.
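One practical refinement not shown above: items the user has already rated are usually excluded before recommending. Here is a sketch of that filtering step, reusing the arrays from the example:

# Exclude items the user has already rated before ranking recommendations
already_rated = ratings[user_id] > 0
candidate_scores = np.where(already_rated, -np.inf, predicted_ratings[user_id])
recommended_items = np.argsort(candidate_scores)[::-1]
print("Unseen items ranked for user", user_id, ":", recommended_items)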

4. Automatic Emotion Detection.
Process description.
We import the necessary modules from TensorFlow.
We create a model using convolutional neural networks. The model takes input data in the form of a 48x48x1 pixel image. Conv2D, BatchNormalization, and MaxPooling2D layers are used to extract features from the image. The Flatten layer converts the extracted features into a one-dimensional vector. Dense, BatchNormalization, and Dropout layers are used to classify emotions into 7 categories (happiness, sadness, anger, etc.). We compile the model, specifying the optimizer, loss function, and metrics. We train the model on the training dataset, validating on a separate validation dataset. We evaluate the accuracy of the model on the test dataset. Finally, we use the model to predict emotions on new data.

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
# Creating a model
model = keras.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(48, 48, 1)),
    layers.BatchNormalization(),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Dropout(0.25),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.BatchNormalization(),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Dropout(0.25),
    layers.Conv2D(128, (3, 3), activation='relu'),
    layers.BatchNormalization(),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(256, activation='relu'),
    layers.BatchNormalization(),
    layers.Dropout(0.5),
    layers.Dense(7, activation='softmax')
])
# Compiling the model.
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# Training the model (train_data/train_labels and val_data/val_labels are assumed
# to be preloaded 48x48x1 image arrays with one-hot encoded emotion labels)
history = model.fit(train_data, train_labels, epochs=50, validation_data=(val_data, val_labels))
# Evaluation of the model
test_loss, test_acc = model.evaluate(test_data, test_labels)
print('Test accuracy:', test_acc)
# Using the model on new data (new_data is assumed to be an array of 48x48x1 images)
predictions = model.predict(new_data)

This code creates a convolutional neural network for recognizing emotions on 48x48 pixel images.
The first layer uses a 3x3 convolution with 32 filters and a ReLU activation function that takes 48x48x1 input images. Then follow layers of batch normalization, max pooling with a 2x2 filter size, and dropout to help prevent overfitting.
Two additional convolutional layers with increased filter numbers and similar normalization and dropout layers are then added. A flattening layer follows, which converts the multidimensional input to a one-dimensional vector.
Next are two fully connected layers with ReLU activation and batch normalization, as well as dropout layers. The final layer contains 7 neurons and uses the softmax activation function to determine the probability of each of the 7 emotions.
The optimizer Adam, the categorical_crossentropy loss function, and the accuracy metric are used to compile the model. The model is trained on the training data for 50 epochs with validation on the validation data.
After training, the model is evaluated on the test data, and the accuracy of predictions is displayed. Then the model is used to predict emotions on new data.
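For example, the probability vectors returned by predict can be mapped to human-readable emotion names as in the sketch below. The seven label names here are assumptions, since the chapter does not fix a specific dataset; an actual dataset (such as FER2013) defines its own label order:

import numpy as np

# Assumed label order; replace with the order used by your training data
emotion_labels = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

# predictions has shape (num_images, 7): one probability per emotion class
predicted_indices = np.argmax(predictions, axis=-1)
for i, idx in enumerate(predicted_indices[:5]):
    print(f"image {i}: {emotion_labels[idx]} ({predictions[i, idx]:.2%})")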
Conclusion on Chapter 1:
In this chapter, we have covered the fundamental concepts underlying neural networks. We learned what a neuron is, how it works in a neural network, what weights and biases are, how a neuron makes decisions, and how a neural network is constructed. We also discussed the process of training a neural network and how it adjusts its weights and biases to improve prediction accuracy.
