To say that the supply chain and logistics industries are in one of the most challenging periods in their history would be a massive understatement. But what is perhaps most concerning is that many of the issues driving the current crisis aren’t new. Instead, they have been swept under the rug for decades.
Perhaps more than any others, the supply chain and logistics sectors are synonymous with an aversion to change and slow technology adoption. And while many organizations in these spaces have hit fast-forward on their digital transformation efforts since the start of COVID, these industries are perhaps more fragmented than ever, with businesses at very different stages of technological maturity. This is particularly true when it comes to sharing information and data between stakeholders across both industries.
With that in mind, it could be argued that the Biden administration’s Freight Logistics Optimization Works (FLOW) initiative – which aims to ease data-sharing friction in the supply chain and logistics industries – could not have come at a better time. However, to get this pilot program right, several data considerations need to be addressed early on; otherwise, FLOW may end up as another failed attempt to modernize America’s stumbling supply chain.
Here are a few areas in particular that need to be considered first and foremost.
The Role of AI: Where Does It Fit?
With the popularity of AI continuing to grow, the AI supply chain market is projected to be worth over $14 billion by 2028. That said, with many supply chain and logistics companies only just dipping their toes into the AI waters, the Biden administration needs to find a way to reconcile the disparate data operations environments that exist in these sectors today.
If it does not, the initiative will face significant hurdles in creating a reliable, standardized approach to information and data sharing. Moreover, if handled poorly, it will likely create a stratified environment in which some companies are shut out of this new data-sharing workflow while others reap its benefits more readily.
Data transparency remains a major problem across much of the supply chain, particularly when it comes to procurement and on-time delivery (OTD). For the industry to operate as efficiently as possible, organizations need access to all the data required to communicate clearly with one another and optimize their own performance. Unfortunately, due to a variety of factors – from opaque brokers to siloed internal technology infrastructures – this level of transparency simply doesn’t exist in the supply chain today.
Focus on Available Capacity
Capacity is a good place to start when deciding which areas most need better data and oversight. The capacity landscape over the last couple of years has been defined by unpredictability, with long periods of low capacity suddenly interrupted by short bursts of high capacity. This has made it incredibly challenging for the industry to anticipate demand and “keep things moving” with any semblance of consistency.
In addition, this lack of capacity insight has had knock-on effects: carriers have had to rethink their routes and fuel usage, which in turn has led to further backups at loading docks and the higher costs associated with all of these variables. Finding ways to boost capacity data insights must therefore be a foremost priority.
There is certainly a lot of work to be done to boost data efficiency in the supply chain and logistics space. But if the FLOW initiative can get this new data infrastructure right, it could prove to be a massive success in moving the supply chain forward in the years ahead.