More information may boost accuracy, but challenges exist
A flood of precision agriculture technology to America's farm fields over the past two decades has created an abundance of information, a mixed blessing for agencies such as the U.S. Department of Agriculture as they track the country's crop production.
The USDA publishes weekly and monthly reports on current prices, volume, quality and other market data related to crops grown in the United States. Because the agency’s reports are considered the most reliable source of information on the status of U.S. crop production, they often drive the ebb and flow of markets for those products.
Kansas State University agricultural economist Jesse Tack says, however, that the volumes of information now available thanks to precision measurements of farm fields may be inadvertently creating confusion about total crop production in the U.S.
“In particular, there are questions about how accurate USDA crop production estimates are compared to private-sector estimates,” Tack said.
Tack is the lead author of an article published recently in the journal Applied Economic Perspectives and Policy, which indicates there are significant differences between collecting data with on-farm surveys – as the USDA does – and aggregating the large volumes of data available from precision technologies such as yield monitors, drones and global-positioning satellites.
The article is co-written by Robert Johansson, the chief economist at the USDA, and agricultural economists at Mississippi State University and the University of Kentucky.
Tack noted that information obtained on crops sometimes can be biased due to the agricultural strength in a given region, “and this bias persists in the presence of larger samples. Even when the focus is on major production regions, biases in the estimates and imprecision persist.”
Tack said the paper also reports that sampling from large farms tends to introduce bias and imprecision into overall estimates of crop production.
“To the extent that large farms are the early adopters of precision technology, our results suggest that, if not corrected for bias, data from those farms could introduce inaccuracy relative to a representative national sample,” he said.
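The sampling problem Tack describes can be illustrated with a small, entirely hypothetical simulation (the numbers below are invented for illustration and are not from the paper): if large farms both adopt precision equipment first and tend to yield more per acre, then averaging only the adopters' data overstates the national average, while reweighting the sample to match the known share of large versus small farms pulls the estimate back toward the truth.

```python
import random

random.seed(0)

# Hypothetical population: 80% small farms, 20% large farms,
# with large farms yielding more per acre on average (bu/acre).
farms = [("small", random.gauss(150, 20)) for _ in range(8000)]
farms += [("large", random.gauss(180, 20)) for _ in range(2000)]

true_mean = sum(y for _, y in farms) / len(farms)

# Naive estimate using only precision-tech adopters (here, the
# large farms), mimicking data that arrives from early adopters.
adopters = [y for size, y in farms if size == "large"]
naive = sum(adopters) / len(adopters)

# Post-stratified estimate: combine the adopter data with a small
# survey of small farms, weighted by the known population shares.
small_survey = random.sample([y for s, y in farms if s == "small"], 200)
reweighted = 0.8 * (sum(small_survey) / len(small_survey)) + 0.2 * naive

print(f"true mean:        {true_mean:.1f}")
print(f"adopters only:    {naive:.1f}")      # biased upward
print(f"post-stratified:  {reweighted:.1f}") # close to true mean
```

Post-stratification is one standard correction of the kind the article alludes to; the farm counts, yield distributions and 80/20 split here are assumptions chosen only to make the bias visible.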
For decades, the USDA has measured and forecast crop production with data collected from representative farms across the country. That system, relied on heavily for decades, is now being challenged by machines that can measure every inch of farmland they pass.
Some private companies are beginning to test the industry’s openness to precision technology forecasts, typically aggregating large amounts of data from multiple sources. Some of those companies filled the void when the USDA was not able to produce its weekly reports during the recent partial shutdown of the U.S. government.
Further, the traditional ways of measuring and forecasting crop production take more time and resources, including money. Farmers, on the other hand, often can provide precision technology data at the push of a button.
But, Tack notes, precision technology is not a perfect answer, and the information it provides should be used with some caution.
“The new data is potentially not very representative because of who has the equipment and whether or not they decide to share it,” Tack said. “The USDA works very hard to get representative samples. So there is a trade-off. The current approach is costly but reliable, while the new approach is cheap but potentially unreliable.”
He adds: “Augmenting USDA data sources for estimating various crop-production statistics using big data from machinery is possible and offers potential. While this article does not map out the path that is required to generate hybrid statistics, it does raise the potential and illustrate several means by which the limitations of the use of machine data can be corrected for.”
Original release Feb. 12, K-State Research and Extension