Forecasting in Practice

Does Deep Learning Make Sense?

In the previous articles, I presented three fundamentally different forecasting scenarios:

  • products with short history, seasonality, and promotional events (Project 1)
  • sparse data, where the key challenge is predicting whether a sale will occur at all (Project 2)
  • daily data influenced by external factors such as holidays or location-specific effects (Project 3)


Each of these projects leads to different model behavior — and more importantly, to different conclusions about which approach makes practical sense in real-world applications.
At the same time, several common patterns emerge that help determine which approach is the most appropriate:

  • stable data with long historical coverage — simpler statistical models are often fully sufficient
  • seasonality — basic patterns can be handled by statistical models, while more complex scenarios require models capable of working with multiple inputs
  • promotional events and external influences — benefits appear only when models can incorporate additional information (ML, DL)
  • irregular fluctuations and short history — this is where deep learning begins to show a significant advantage
  • sparse data — the challenge shifts to predicting the existence of sales, where deep learning achieves substantially better results
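For the sparse-data case, the shift from "how much will sell" to "will a sale occur at all" is often handled with a two-stage decomposition. The sketch below is a hypothetical, minimal illustration of that idea on invented data (not code from the projects above): stage one estimates the probability that any sale occurs, stage two estimates the size of a sale given that one occurs, and expected demand is their product.

```python
import random
from collections import defaultdict

random.seed(0)

# Invented sparse sales history: (promo_flag, units_sold) per day.
# Most days have zero sales; promotions raise both the chance and size of a sale.
history = []
for _ in range(1000):
    promo = random.random() < 0.3
    p_sale = 0.35 if promo else 0.08
    units = random.randint(2, 8) if random.random() < p_sale else 0
    history.append((promo, units))

# Stage 1: estimate P(sale | promo) from occurrence frequencies.
# Stage 2: estimate E[units | sale, promo] from positive-sales days only.
days, sales, total = defaultdict(int), defaultdict(int), defaultdict(int)
for promo, units in history:
    days[promo] += 1
    if units > 0:
        sales[promo] += 1
        total[promo] += units

def expected_demand(promo: bool) -> float:
    p = sales[promo] / days[promo]               # chance any sale occurs
    size = total[promo] / max(sales[promo], 1)   # average size, given a sale
    return p * size                              # combined expected daily demand
```

In practice each stage would be a proper model (a classifier and a regressor) with real features; the decomposition itself is the point here.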

Implementation

From an implementation perspective, the differences between approaches are substantial.

Statistical models are typically easy to deploy. In many companies it is enough to export data from internal systems, perform basic preprocessing, and run the forecasting model itself.

Machine learning represents an intermediate step. It allows models to work with multiple inputs and capture more complex relationships, but it still requires data preparation, feature engineering, and model tuning. The implementation complexity is higher than for statistical methods, but significantly lower than for deep learning.

With deep learning, the situation becomes considerably more complex. These models contain a large number of parameters, require extensive data preparation, feature engineering, and repeated tuning. In addition, each dataset usually requires an individual approach.

As a result, deep learning implementations in practice often take months and require a specialized team. This is also one of the reasons why many projects remain only in the experimental phase and never reach real production environments.

This complexity is naturally reflected in the overall cost as well. Deep learning requires not only expertise, but often more powerful infrastructure, including GPUs. For many smaller companies, it therefore initially makes more sense to stay with simpler approaches.

Is an x% Difference Significant Enough?

At first glance, the difference between models may appear relatively small — often only a few percentage points. However, this is primarily true for stable scenarios (such as Project 3), where models have sufficient historical data and the behavior is relatively predictable.

In more complex cases (such as sparse data in Project 2 or promotional scenarios in Project 1), the differences increase significantly — often reaching tens of percent. In addition, aggregate metrics frequently hide important differences: a model may perform well on average while still failing in critical situations. For some products, the forecasting deviation may be minimal, while for others the errors become substantial — especially during fluctuations, seasonal peaks, or for less stable products. These are precisely the situations that have the greatest practical impact.
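How an aggregate metric can hide a critical failure is easy to demonstrate. The following sketch uses invented numbers purely for illustration: the volume-weighted portfolio error looks healthy, while one low-volume product is off by 150%.

```python
# Hypothetical figures, invented for illustration: a volume-weighted aggregate
# metric can look acceptable while one product fails badly.
actual   = {"A": 100, "B": 120, "C": 80, "D": 10}
forecast = {"A": 102, "B": 118, "C": 81, "D": 25}   # "D" is low-volume and volatile

# Per-product absolute percentage error.
per_product = {sku: abs(actual[sku] - forecast[sku]) / actual[sku] * 100
               for sku in actual}

# Volume-weighted MAPE across the whole portfolio.
wmape = (sum(abs(actual[s] - forecast[s]) for s in actual)
         / sum(actual.values()) * 100)

print(f"weighted MAPE: {wmape:.1f}%")   # looks acceptable overall
print(f"worst product: {max(per_product, key=per_product.get)} "
      f"({max(per_product.values()):.0f}% error)")
```

This is why per-product evaluation, especially for unstable or promotion-driven products, matters more than a single portfolio-level number.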

Forecasting differences then directly affect real business operations:

  • higher inventory levels and tied-up capital
  • product unavailability and unfulfilled customer demand
  • the need for operational interventions in planning


Another important factor is the complexity of the decision-making process itself. In most companies, planning relies on a combination of simple models and the experience of specific individuals. This approach works reasonably well for smaller product portfolios, but as data volume and relationship complexity increase, it becomes increasingly difficult to maintain. 
The final outcome then depends heavily on individual expertise — and such knowledge is not always transferable or stable over time.

Deep learning therefore provides value not only through improved metrics. Its main advantage lies in the ability to systematically handle data complexity and reduce dependence on manual decision-making.

Is Deep Learning Implementation Worth It?

Thanks to modern libraries and frameworks, models can now be built and tested relatively quickly. However, this solves only part of the problem.

In practice, the biggest challenges are:

  • data preparation
  • feature engineering
  • systematic testing of different variants
  • evaluation of results in the context of a specific product portfolio


The model itself is only one component of the overall solution. Without a framework that enables experiment management, result comparison, and consistent handling of data, the process quickly turns into isolated experiments with little real-world impact. 
In practice, I have found an iterative approach to work best, where different model types, parameters, and input features are systematically tested, evaluated, and compared — both through quantitative metrics and through visual inspection of specific products. An important factor is not only forecast accuracy, but also model stability in critical scenarios.
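A minimal sketch of such an iterative comparison loop is shown below. The models and data are deliberately trivial placeholders; the point is the structure: every candidate is fit on the same training window and scored on the same hold-out window, so variants can be compared on equal footing.

```python
# Minimal experiment-loop sketch: fit each candidate on a training window,
# score it on a hold-out window, and compare. Models here are trivial baselines.
history = [12, 15, 11, 14, 30, 13, 12, 16, 29, 14, 13, 15]  # invented series
train, test = history[:-4], history[-4:]

candidates = {
    "naive_last":   lambda tr: [tr[-1]] * len(test),             # repeat last value
    "overall_mean": lambda tr: [sum(tr) / len(tr)] * len(test),  # historical mean
    "moving_avg_3": lambda tr: [sum(tr[-3:]) / 3] * len(test),   # 3-period average
}

def mae(actual, pred):
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

results = {name: fn(train) for name, fn in candidates.items()}
scores = {name: mae(test, pred) for name, pred in results.items()}
best = min(scores, key=scores.get)
```

In a real setting the candidates would include statistical, ML, and DL models, the split would be a rolling window, and the scores would be broken down per product rather than reported as a single number.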

LLMs do not solve the forecasting problem itself, but they help with result analysis, interpretation of model behavior, and automation of parts of the decision-making process. Their role is supportive — but when combined with traditional forecasting models, they can significantly accelerate analysis and simplify interpretation of results.

Connection to Real-World Planning

Most planning systems — such as inventory management or financial planning platforms — use forecasts as one of their key inputs. However, these forecasts are often based on statistical methods that cannot capture more complex data behavior. When the forecasts are inaccurate, the impact propagates throughout the entire process — from purchasing and inventory levels to product availability for customers.


Integrating these systems with forecasts generated by deep learning models can significantly improve prediction accuracy and, as a result, improve decision quality across the entire planning process.

Does Deep Learning Make Sense?

The answer is not universal.


For simple and stable scenarios, basic approaches are often fully sufficient. However, once the data becomes more complex — combining seasonality, irregularity, short history, or external influences — the benefits of deep learning become substantial. In real-world sales forecasting, simple scenarios are often the exception rather than the rule.

Conclusion

The choice of model is not the main challenge. The real challenge is how effectively you can work with data, test different approaches, and evaluate results in the context of your specific product portfolio. The differences between approaches are reflected not only in metrics, but also in the real operation of the business — inventory levels, product availability, and the number of operational interventions required in planning. 


Deep learning provides the greatest value in complex scenarios. However, the advantage over simpler approaches does not appear automatically — it becomes meaningful only when the data is properly analyzed, feature engineering is carefully designed, and the entire experiment is structured correctly. 

What Determines Success

  • how quickly you can test new variants
  • how well you understand your data
  • how effectively you can translate results into decisions


Start with a specific business problem, not with model selection.
Focus on the part of the portfolio where forecasting errors have the greatest impact.
Test approaches systematically and evaluate them based on real business impact, not only on forecasting metrics.
