Python | Stock Market Prediction Using News Sentiments

This is Part-2 of a two-part series. Please read Part-1 here: https://medium.com/@kala.shagun/stock-market-prediction-using-news-sentiments-f9101e5ee1f4
Modeling
The ML models used here are selected based on the production requirement: we want to deploy the model. Since a time-series model needs to be retrained in production every time new data points arrive in order to keep predictions accurate, we will use only models with low training time complexity, i.e., models that train quickly on new data.
1. ARIMA
An ARIMA model is a class of statistical models for analyzing and forecasting time series data.
ARIMA is an acronym that stands for AutoRegressive Integrated Moving Average. It is a generalization of the simpler AutoRegressive Moving Average and adds the notion of integration.
This acronym is descriptive, capturing the key aspects of the model itself. Briefly, they are:
  • AR: Autoregression. A model that uses the dependent relationship between an observation and some number of lagged observations.
  • I: Integrated. The use of differencing of raw observations (e.g. subtracting an observation from observation at the previous time step) in order to make the time series stationary.
  • MA: Moving Average. A model that uses the dependency between an observation and a residual error from a moving average model applied to lagged observations.
Each of these components is explicitly specified in the model as a parameter. The parameters of the ARIMA model are defined as follows:
  • p: The number of lag observations included in the model, also called the lag order.
  • d: The number of times that the raw observations are differenced, also called the degree of differencing.
  • q: The size of the moving average window, also called the order of moving average.
Let’s plot Autocorrelation and Partial Autocorrelation Plot to identify the above parameter values.
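As a rough sketch, the two plots can be produced with statsmodels; price_diff here stands for the (differenced) closing-price series and is an assumed name, not necessarily the variable used in the original notebook.

import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

# price_diff: the differenced closing-price series (assumed name)
fig, axes = plt.subplots(2, 1, figsize=(12, 6))
plot_acf(price_diff.dropna(), lags=40, ax=axes[0])   # q: first lag where the ACF crosses the confidence band
plot_pacf(price_diff.dropna(), lags=40, ax=axes[1])  # p: first lag where the PACF crosses the confidence band
plt.tight_layout()
plt.show()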
p — The lag value where the PACF chart crosses the upper confidence interval for the first time. If you look closely, in this case p = 2.
q — The lag value where the ACF chart crosses the upper confidence interval for the first time. If you look closely, in this case q = 2.
d — In the differencing method, a shift of one period produced a stationary time series, so we will use d = 1.
We forecast the stationary time series obtained after differencing using ARIMA, and then invert the transformation to recover the original time series.
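A minimal sketch of this step with statsmodels, assuming the train/test closing prices are in train_price and test_price (assumed names) and using the (p, d, q) = (2, 1, 2) values identified above. Passing d = 1 lets statsmodels handle the differencing and the inverse transformation internally, which is equivalent to the manual approach described here.

from math import sqrt
from sklearn.metrics import mean_squared_error
from statsmodels.tsa.arima.model import ARIMA

# p=2, d=1, q=2 as read off the PACF/ACF plots and the differencing step
model = ARIMA(train_price, order=(2, 1, 2))
fitted = model.fit()

# Forecast one step per test point; the forecast is already on the original price scale
forecast = fitted.forecast(steps=len(test_price))
rmse = sqrt(mean_squared_error(test_price, forecast))
print(f"RMSE from ARIMA: {rmse:.2f}")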
Forecasting using the ARIMA model:
RMSE from ARIMA = 1707.77
Let’s try to improve the prediction using more advanced methods.
2. SARIMAX
The ARIMA model considers only the trend information in the data and ignores seasonal variation. SARIMAX is a variation of the ARIMA model that also accounts for seasonal variation in the data. Although our data does not show strong seasonality, it is worth a try.
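A sketch of the SARIMAX fit under the same assumptions as the ARIMA example above; the seasonal order used here is illustrative rather than the exact values from the original notebook.

from math import sqrt
from sklearn.metrics import mean_squared_error
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Non-seasonal order as before; (P, D, Q, s) is an illustrative seasonal order
model = SARIMAX(train_price, order=(2, 1, 2), seasonal_order=(1, 1, 1, 12))
fitted = model.fit(disp=False)

forecast = fitted.forecast(steps=len(test_price))
rmse = sqrt(mean_squared_error(test_price, forecast))
print(f"RMSE from SARIMAX: {rmse:.2f}")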
RMSE from SARIMAX = 964.97
Woah! RMSE got down to 964 from 1707. SARIMAX really works well.
3. Facebook Prophet
Prophet is an open-source library published by Facebook that is based on decomposable (trend + seasonality + holidays) models. It lets us make time-series predictions with good accuracy using simple, intuitive parameters, and it supports including the impact of custom seasonality and holidays!
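A minimal sketch of how Prophet could be fitted to this data; train_df and test_df with 'date' and 'price' columns are assumed names, and enabling daily seasonality is illustrative.

from math import sqrt
import pandas as pd
from sklearn.metrics import mean_squared_error
from prophet import Prophet  # older installs: from fbprophet import Prophet

# Prophet expects a DataFrame with columns 'ds' (date) and 'y' (value)
train = pd.DataFrame({"ds": pd.to_datetime(train_df["date"]), "y": train_df["price"]})

m = Prophet(daily_seasonality=True)
m.fit(train)

future = m.make_future_dataframe(periods=len(test_df))
forecast = m.predict(future)
# m.plot_components(forecast)  # produces the trend/weekly/yearly plots discussed below

pred = forecast["yhat"].tail(len(test_df)).values
rmse = sqrt(mean_squared_error(test_df["price"].values, pred))
print(f"RMSE from Facebook Prophet: {rmse:.2f}")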
RMSE from Facebook Prophet = 709.70
Nice! RMSE has further reduced to 709 from 964, but it is still far from an acceptable prediction. Let's try deep learning models now.
Before going ahead, let’s look at some useful plots Facebook Prophet provides:
Our data has some seasonal information present. This is why SARIMAX also performed well.
The following points can be observed from the above graphs:
  1. Our data shows an upward trend.
  2. The stock price goes up on Saturday and remains almost flat during weekdays.
  3. There is a high chance of observing the 52-week low in the stock price in the late-August to early-September period.
  4. Stock Price fluctuates during the whole day.
4. LSTM Model
Finance is highly nonlinear, and sometimes stock price data can even seem completely random. Traditional time-series methods such as ARIMA and SARIMAX are effective only when the series is stationary, which is a restrictive assumption that requires the series to be preprocessed by taking log returns (or other transforms). The main issue, however, arises when implementing these models in a live trading system, as there is no guarantee of stationarity as new data is added.
This is combated by using neural networks (sequential models such as LSTM, GRU, etc.), which do not require stationarity. Furthermore, neural networks are by nature effective at finding relationships in data and using them to predict (or classify) new data.
The LSTM model needs validation data as well to fine-tune the parameters. Let’s split the data again.
def create_dataset(dataset, look_back=1):
    dataX, dataY = [], []
    for i in range(len(dataset) - look_back - 1):
        a = dataset[i:(i + look_back), 0]
        dataX.append(a)
        dataY.append(dataset[i + look_back, 0])
    return np.array(dataX), np.array(dataY)

To prepare the data, the stock prices are first scaled using a MinMax scaler. We give the LSTM 60 features: X = the stock prices of the last 60 consecutive days, and Y = the actual stock price on the 61st day.
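A sketch of this preparation step, reusing the create_dataset helper above; prices stands for the closing prices as an (n, 1) NumPy array and is an assumed name.

import numpy as np
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(prices)          # prices: (n_samples, 1) array of closing prices (assumed name)

look_back = 60                                 # 60 previous days as features
X, y = create_dataset(scaled, look_back)

# Keras LSTM layers expect 3-D input: (samples, timesteps, features per timestep)
X = np.reshape(X, (X.shape[0], X.shape[1], 1))
print(X.shape, y.shape)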
Since our data is not too large, we will build a simple single-layer model.
!rm -rf ./logs/
keras.backend.clear_session()
%load_ext tensorboard

model = Sequential()

# Adding the input layer
model.add(LSTM(units=48, activation='tanh',
               kernel_initializer=tf.keras.initializers.glorot_uniform(seed=26),
               input_shape=(X_train.shape[1], 1)))

# Adding the output layer
model.add(Dense(1, name="output_layer"))

# Compiling the RNN
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
              loss=root_mean_squared_error)

# Using TensorBoard
logdir = "logs"
tensorboard_callback = TensorBoard(log_dir=logdir, histogram_freq=5, write_graph=True)

# Fitting the RNN to the Training set
model.fit(trainX, trainY, epochs=50, batch_size=16,
          validation_data=(cvX, cvY), callbacks=[tensorboard_callback])
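The RMSE reported below could be computed along these lines (a sketch; testX, testY and the fitted MinMax scaler minmax are assumed names consistent with the rest of the post):

from math import sqrt
from sklearn.metrics import mean_squared_error

# Predict on the held-out windows, undo the MinMax scaling, then score on the price scale
pred_scaled = model.predict(testX)                       # shape: (n_test, 1)
pred_prices = minmax.inverse_transform(pred_scaled)
true_prices = minmax.inverse_transform(testY.reshape(-1, 1))

rmse = sqrt(mean_squared_error(true_prices, pred_prices))
print(f"RMSE from LSTM: {rmse:.2f}")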

RMSE from LSTM = 285.53
Hats off to this deep learning marvel. RMSE has gone down to 285, compared to 709 from the Facebook Prophet model. The LSTM model has predicted very accurately. Let's try more advanced LSTM variations.
5. LSTM with News Polarity
We will use only five years of stock data for this model, since we have news data available only for the period 2015–2019.
This model takes 61 features: X = the stock prices of the last 60 consecutive days (60 features) plus the news sentiment of the 60th day, and Y = the actual stock price on the 61st day. All stock prices are scaled here as well.
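A sketch of how these 61-feature samples could be assembled; scaled_prices and sentiment_scores are assumed to be aligned 1-D arrays (one value per trading day), and the helper name is hypothetical.

import numpy as np

def create_dataset_with_sentiment(scaled_prices, sentiment_scores, look_back=60):
    # Each sample: 60 scaled prices + the sentiment score of the 60th day (61 values in total)
    X, y = [], []
    for i in range(len(scaled_prices) - look_back - 1):
        window = scaled_prices[i:i + look_back]                             # last 60 days of prices
        features = np.append(window, sentiment_scores[i + look_back - 1])   # + sentiment of day 60
        X.append(features)
        y.append(scaled_prices[i + look_back])                              # price on day 61
    X = np.array(X).reshape(-1, look_back + 1, 1)                           # 3-D input for the LSTM
    return X, np.array(y)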
!rm -rf ./logs/
keras.backend.clear_session()
%load_ext tensorboard

model = Sequential()

# Adding the input layer
model.add(LSTM(units=128, activation='tanh',
               kernel_initializer=tf.keras.initializers.glorot_uniform(seed=26),
               input_shape=(trainX.shape[1], 1), unroll=True))

# Adding the output layer
model.add(Dense(1, name="output_layer"))

# Compiling the RNN
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.01),
              loss=root_mean_squared_error)

# Using TensorBoard
logdir = "logs"
tensorboard_callback = TensorBoard(log_dir=logdir, histogram_freq=5, write_graph=True)

# Fitting the RNN to the Training set
model.fit(trainX, trainY, epochs=30, batch_size=64,
          validation_data=(cvX, cvY), callbacks=[tensorboard_callback])

RMSE from LSTM with news polarity = 170.91
Wow! RMSE has further reduced to 170.91 from 285. News sentiment has helped the LSTM improve the prediction further.
The predicted results look very accurate now. We will pick this last model as our best model.
This best model takes the stock prices of the last 60 days along with the VADER news-sentiment compound score for the last day, and it predicts the stock price for the next day.
Summary
Best Model: LSTM with News Sentiments
RMSE from Best Model: 170.91
Anomaly Detection
In this section, we will try to find the anomalies in our stock price data, i.e., the points that our best model did not learn correctly.
Let’s plot the errors to identify the outliers.
Considering a 3% acceptable error, let’s find anomalies.
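A sketch of this step, assuming actual and predicted are aligned NumPy arrays of test-set prices (assumed names):

import numpy as np

# Percentage error of the best model on each test day
percent_error = np.abs(actual - predicted) / actual * 100

# Days where the model is off by more than the 3% acceptable error are flagged as anomalies
anomaly_mask = percent_error > 3.0
print(f"{anomaly_mask.sum()} anomalous days out of {len(actual)}")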
Anomalies identified on the error plot:
# Visualising the anomalies
plt.figure(figsize=(16, 6))
plt.plot(test_data[61:]['date'], test_data[61:]['price'].values,
         color='orange', label='Test Data')
plt.scatter(anamolies['date'], anamolies['price'],
            marker='*', s=200, color='black', label='Anomaly')
plt.title('Anomaly Detection in Stock Prices')
plt.xlabel('Years')
plt.ylabel('NIFTY Index Prices')
plt.legend()
plt.show()

Anomalies identified on the Nifty stock data:
We can observe that anomalies are present when there is a steep rise or drop in stock prices. This can happen due to a major event occurring on those days. Let's analyze the tweets for the days with anomalies.
Word clouds for positive and negative news:
We can see that in positive tweets the most common word is 'cut', which is a negative-sentiment word, while in negative tweets some of the common words are 'highest', 'biggest', and 'peak', all of which are positive-sentiment words.
We can conclude that the VADER sentiment analyzer did not assign the correct sentiments and sentiment scores to these tweets.
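For reference, this is roughly how a VADER compound score is obtained for a headline (a sketch; the headline and preprocessing are illustrative, not taken from the dataset used here):

from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

# As noted above, VADER treats 'cut' as a negative word, so the compound score of a
# headline like this is likely to skew negative even though the news is good for the market.
print(analyzer.polarity_scores("Government announces tax cut to boost growth"))
# returns a dict with 'neg', 'neu', 'pos' and a 'compound' score in [-1, 1]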
Final Model Pipeline for Deployment
# Function to Predict Next Day Index Price
def prediction_single_day(date):
    # date: Enter date for which you want next day's price prediction.

    # Loading Data
    data = pd.read_csv('data_processed_final.csv')
    with open('min_max.pickle', 'rb') as i:
        minmax = pickle.load(i)

    # Predicting
    data['price'] = minmax.transform(data['price'].values.reshape(-1, 1))

    model = create_model()
    model.load_weights('LSTM_with_Sentiments.h5')

    try:
        present_day = data[data['date'] == date].index[0]

        last_60_days_price = data['price'][present_day-59:present_day+1].values
        last_day_news_score = data[data['date'] == date]['score']

        prediction_array = np.append(last_60_days_price, last_day_news_score).reshape(-1, 1)
        prediction_array = np.expand_dims(prediction_array, axis=0)

        print("Predicting Next Working Day's Nifty 50 Index Price...\n")

        predicted_stock_price = model.predict(prediction_array)
        predicted_stock_price = minmax.inverse_transform(predicted_stock_price)
        predicted_stock_price = predicted_stock_price[0][0]

        actual_price = data['price'][present_day]
        actual_price = minmax.inverse_transform([[actual_price]])
        actual_price = actual_price[0][0]

        print(f'Predicted Index Price for the next working day after {date}: {predicted_stock_price}')
        print(f'Actual Index Price for the next working day after {date}: {actual_price}\n')

    except (IndexError, UnboundLocalError):
        print('Entered Date should lie between period 2015-01-01 and 2019-12-31 and should not lie on a stock market holiday. Please enter a correct date.')
    except:
        print('Invalid Date Format. Please put date in yyyy-mm-dd format.')

# Function to Predict Price for Random 60 Consecutive Days
def prediction_multiple_days():
    # Loading Data
    data = pd.read_csv('data_processed_final.csv')
    with open('min_max.pickle', 'rb') as i:
        minmax = pickle.load(i)

    # Predicting
    data['price'] = minmax.transform(data['price'].values.reshape(-1, 1))

    prediction_prices = []
    actual_prices = []

    random.seed(20)
    n = random.randint(0, len(data)-60)
    random_date = data['date'][n]
    print(f'Predicting for next 60 days from date: {random_date}')

    model = create_model()
    model.load_weights('LSTM_with_Sentiments.h5')

    for i in range(n, n+60):
        date = data['date'][i]
        present_day = data[data['date'] == date].index[0]

        last_60_days_price = data['price'][present_day-59:present_day+1].values
        last_day_news_score = data[data['date'] == date]['score']

        prediction_array = np.append(last_60_days_price, last_day_news_score).reshape(-1, 1)
        prediction_array = np.expand_dims(prediction_array, axis=0)

        predicted_stock_price = model.predict(prediction_array)
        predicted_stock_price = minmax.inverse_transform(predicted_stock_price)
        predicted_stock_price = predicted_stock_price[0][0]

        actual_price = data['price'][present_day]
        actual_price = minmax.inverse_transform([[actual_price]])
        actual_price = actual_price[0][0]

        prediction_prices.append(predicted_stock_price)
        actual_prices.append(actual_price)

    plt.figure(figsize=(12, 7))
    plt.plot(prediction_prices, color='red', label='Predicted Prices')
    plt.plot(actual_prices, color='green', label='Actual Prices')
    plt.title('Nifty Index Prediction for 60 Consecutive Days')
    plt.xlabel('Days')
    plt.ylabel('Prices')
    plt.legend()
    plt.show()

    RMSE = sqrt(mean_squared_error(prediction_prices, actual_prices))
    print(f"RMSE: {RMSE}")

The normal (unquantized) best model takes 649 ms to predict the stock price for the next day.
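The latency figures quoted in this post can be measured roughly like this (a sketch using the single-day prediction function defined above; the date is only an example):

import time

start = time.perf_counter()
prediction_single_day('2019-06-14')   # any valid trading day between 2015-01-01 and 2019-12-31
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"Prediction latency: {elapsed_ms:.1f} ms")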
Post-training Quantization
Quantization refers to techniques for performing computations and storing tensors at lower bit-widths than floating-point precision. A quantized model executes some or all of the operations on tensors with integers rather than floating-point values. This allows the model to run faster, but it comes at a cost in accuracy.
By default, the model weights are saved in Float32 format, but they can be reduced to Float16 or Int8 to make the calculations faster; due to the approximation, we can expect a small drop in accuracy.
As we can see above, it currently takes around 800 ms for our model to predict the next price. With the help of quantization techniques, we will try to reduce this runtime.
Converting Models into TFLite Models and Saving Them
For quantization, we will convert the Float32 weights into Float16 weights to make the prediction calculation faster.
run_model = tf.function(lambda x: model(x))

BATCH_SIZE = 64
STEPS = None
INPUT_SIZE = 1
concrete_func = run_model.get_concrete_function(
    tf.TensorSpec([BATCH_SIZE, STEPS, INPUT_SIZE], model.inputs[0].dtype))
MODEL_DIR = "./saved_model"

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_quant_model = converter.convert()

# saving converted model in "converted_quant_model.tflite" file
open("converted_quant_model.tflite", "wb").write(tflite_quant_model)

Quantized Model Pipeline for Deployment
# Function to Predict Next Day Index Price
def prediction_single_day_quantized(date):
    # date: Enter date for which you want next day's price prediction.

    # Loading Data
    data = pd.read_csv('data_processed_final.csv')
    with open('min_max.pickle', 'rb') as i:
        minmax = pickle.load(i)

    # Predicting
    data['price'] = minmax.transform(data['price'].values.reshape(-1, 1))

    # Initialize the interpreter
    interpreter = tf.lite.Interpreter(model_path="converted_quant_model.tflite")
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()
    input_shape = input_details[0]['shape']

    try:
        present_day = data[data['date'] == date].index[0]

        last_60_days_price = data['price'][present_day-59:present_day+1].values
        last_day_news_score = data[data['date'] == date]['score']

        prediction_array = np.append(last_60_days_price, last_day_news_score).reshape(-1, 1)
        prediction_array = np.expand_dims(prediction_array, axis=0)

        print("Predicting Next Working Day's Nifty 50 Index Price...\n")

        # Test model on input data.
        input_data = np.array(prediction_array, dtype=np.float32)
        interpreter.set_tensor(input_details[0]['index'], input_data)

        interpreter.invoke()

        predicted_stock_price = interpreter.get_tensor(output_details[0]['index'])
        predicted_stock_price = minmax.inverse_transform(predicted_stock_price)
        predicted_stock_price = predicted_stock_price[0][0]

        actual_price = data['price'][present_day]
        actual_price = minmax.inverse_transform([[actual_price]])
        actual_price = actual_price[0][0]

        print(f'Predicted Index Price for the next working day after {date}: {predicted_stock_price}')
        print(f'Actual Index Price for the next working day after {date}: {actual_price}\n')

    except (IndexError, UnboundLocalError):
        print('Entered Date should lie between period 2015-01-01 and 2019-12-31 and should not lie on a stock market holiday. Please enter a correct date.')
    except:
        print('Invalid Date Format. Please put date in yyyy-mm-dd format.')

# Function to Predict Price for Random 60 Consecutive Days
def prediction_multiple_days_quantized():
    # Loading Data
    data = pd.read_csv('data_processed_final.csv')
    with open('min_max.pickle', 'rb') as i:
        minmax = pickle.load(i)

    # Predicting
    data['price'] = minmax.transform(data['price'].values.reshape(-1, 1))

    prediction_prices = []
    actual_prices = []

    random.seed(20)
    n = random.randint(0, len(data)-60)
    random_date = data['date'][n]
    print(f'Predicting for next 60 days from date: {random_date}')

    # Initialize the interpreter
    interpreter = tf.lite.Interpreter(model_path="converted_quant_model.tflite")
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()
    input_shape = input_details[0]['shape']

    for i in range(n, n+60):
        date = data['date'][i]
        present_day = data[data['date'] == date].index[0]

        last_60_days_price = data['price'][present_day-59:present_day+1].values
        last_day_news_score = data[data['date'] == date]['score']

        prediction_array = np.append(last_60_days_price, last_day_news_score).reshape(-1, 1)
        prediction_array = np.expand_dims(prediction_array, axis=0)

        # Test model on input data.
        input_data = np.array(prediction_array, dtype=np.float32)
        interpreter.set_tensor(input_details[0]['index'], input_data)

        interpreter.invoke()

        predicted_stock_price = interpreter.get_tensor(output_details[0]['index'])
        predicted_stock_price = minmax.inverse_transform(predicted_stock_price)
        predicted_stock_price = predicted_stock_price[0][0]

        actual_price = data['price'][present_day]
        actual_price = minmax.inverse_transform([[actual_price]])
        actual_price = actual_price[0][0]

        prediction_prices.append(predicted_stock_price)
        actual_prices.append(actual_price)

    plt.figure(figsize=(12, 7))
    plt.plot(prediction_prices, color='red', label='Predicted Prices')
    plt.plot(actual_prices, color='green', label='Actual Prices')
    plt.title('Nifty Index Prediction for 60 Consecutive Days')
    plt.xlabel('Days')
    plt.ylabel('Prices')
    plt.legend()
    plt.show()

    RMSE = sqrt(mean_squared_error(prediction_prices, actual_prices))
    print(f"RMSE: {RMSE}")

The quantized model takes 60.8 ms to predict the stock price for the next day. That's a huge improvement! But notice the increase in RMSE as well.
Performance comparison between the normal and quantized model:
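Summarizing the figures quoted in this post (the quantized model's RMSE is only described as higher, so it is not restated numerically here):

Model                   Prediction time   RMSE
Normal LSTM (Float32)   ~649 ms           170.91
Quantized (Float16)     ~60.8 ms          higher than the normal model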
We can clearly observe that the prediction time is reduced significantly when we use the quantized model, but accuracy decreases, since the RMSE of the quantized model is higher.
Quantization is a great technique when we need faster predictions without caring too much about accuracy. Quantized models consume less space as well, which makes them well suited to mobile and online applications where quick results are needed.
Conclusion
In this case study, we learned how to handle and process time-series data and build deep learning models with a production perspective. Stock price time series are considered among the most challenging time series, and we were able to predict the Nifty index data with high accuracy. We also learned how to optimize the model in the post-training phase to make it ready for deployment.
Further Improvements
Here are some leads that could improve on the solution discussed above:
  1. Collect news data for more years to have more data points.
  2. Deep learning models work very well with large amounts of data, but we have limited stock price data. For a more extensive analysis, we can use hourly stock price data instead of daily data to increase the number of data points, which should improve accuracy.
  3. Play more with the LSTM architecture and hyperparameters to improve the model accuracy.
  4. Instead of using the pre-trained VADER sentiment analyzer, we can train our own model by first creating training data. This custom-trained model should give better sentiment results, since it will be trained on stock-market news language.
  5. Recent research suggests that GANs and reinforcement learning can also be used to predict the stock market better.
  6. Anomalies can be handled better by retraining on data with the correct sentiment scores, with the help of a custom-trained sentiment analyzer.
  7. For even faster prediction, the model can be quantized to Int8, but this will reduce accuracy significantly (a minimal sketch follows this list).
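A minimal sketch of Int8 post-training quantization with TensorFlow Lite, under the assumption that trainX (the scaled training input used earlier) is available to serve as a representative dataset; whether every LSTM op converts cleanly to Int8 depends on the TensorFlow Lite version, so treat this as a starting point rather than a drop-in replacement for the Float16 conversion above.

import numpy as np
import tensorflow as tf

def representative_dataset():
    # A small sample of real inputs lets the converter calibrate the Int8 ranges
    for sample in trainX[:100]:
        yield [np.expand_dims(sample, axis=0).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
tflite_int8_model = converter.convert()

open("converted_int8_model.tflite", "wb").write(tflite_int8_model)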
Code Reference
Contact Links
Email: kala.shagun@gmail.com
LinkedIn: https://www.linkedin.com/in/shagun-kala-061a3b127/
Papers Referred
  1. Stock Price Prediction Using News Sentiment Analysis: http://davidanastasiu.net/pdf/papers/2019-MohanMSVA-BDS-stock.pdf
  2. Sentiment Analysis of Twitter Data for Predicting Stock Market Movements: http://arxiv.org/pdf/1610.09225v1.pdf
Other References
  1. AppliedAICourse.com
  2. www.tensorflow.org/lite/performance/post_training_quantization
  3. https://github.com/sonalimedani/TF_Quantization/blob/master/quantization.ipynb
  4. https://towardsdatascience.com/end-to-end-time-series-analysis-and-modelling-8c34f09a3014
  5. https://medium.com/analytics-vidhya/stock-prices-prediction-using-machine-learning-and-deep-learning-techniques-with-python-codes-a630c0d3f137
  6. https://udibhaskar.github.io/practical-ml/debugging%20nn/neural%20network/overfit/underfit/2020/02/03/Effective_Training_and_Debugging_of_a_Neural_Networks.html
  7. https://machinelearningmastery.com/how-to-develop-lstm-models-for-time-series-forecasting/
Translated from: https://medium.com/@kala.shagun/stock-market-prediction-using-news-sentiments-dc4c24c976f7
