This is the second article in an eight-part series about the top four challenges in solar performance monitoring and how to overcome them.
In the first article of this eight-part series on solar performance monitoring, we discussed the challenge of working with real-world (read: imperfect, messy, “noisy”) operating data. Many monitoring applications fail when trying to ingest and generate insights from noisy plant operating data. In this second article, we discuss another common challenge we encounter when using solar monitoring software: the challenge of scale.
By “scale,” I mean that solar monitoring applications need to consume large volumes of data, at high velocity, from a wide variety of technologies.
Coming out of the traditional power industry, I was surprised to find that the solar asset class generates more data than a similar fossil power plant. Traditional power plants are composed of large, sophisticated rotating equipment, boilers and fuel handling systems. You would think that a fossil plant would have many more sensors and meters than a relatively simple solar power plant of the same size. Not so.
Why is that? Though solar power equipment is technically much simpler than its fossil power plant equivalents, a solar plant has far more individual generators (PV modules) and electrical evacuation systems than a fossil plant. For example, a typical 500 MW solar power plant has over 100 times more sensors than a 500 MW coal power plant.
Then add the fact that solar asset owners are onboarding plants at more than 120 times the rate of fossil power owners, and you start to see how big the data mountain we have to climb really is. Power Factors’ typical customer generates over a million tag values an hour and over 50,000 events per day. That’s a lot of data to deal with!
While we address the problem of high volume, we also need to consume that data at high frequency. Data frequency is driven primarily by solar operator and regulatory needs.
Solar remote operations centers are responsible for near-real-time monitoring of plant status and events. Five-minute sample frequencies are considered the minimum requirement for most operators. For those who also need to monitor plant breaker status and meet rapid response service level agreements from owners, even that may be too infrequent.
In addition, grid operators and regulators are increasingly relying on renewable power assets to provide operational support to the grid, including voltage and frequency control. To meet these response, control and monitoring requirements, asset monitoring applications are being asked to work with sub-minute and even sub-second data.
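To make those rates concrete, here is a rough back-of-envelope sketch in Python. The per-hour figure is the one quoted earlier in this article for a typical Power Factors customer; the tag count used in the sampling comparison is a purely hypothetical illustration, not a figure for any specific plant.

```python
# Back-of-envelope scale arithmetic. The per-hour figure is the one quoted in
# this article for a typical customer; the tag count used in the sampling
# comparison below is a hypothetical illustration only.

SECONDS_PER_HOUR = 3600
SECONDS_PER_DAY = 24 * SECONDS_PER_HOUR

# Volume: a million tag values per hour, expressed per second and per year.
values_per_hour = 1_000_000
print(f"~{values_per_hour / SECONDS_PER_HOUR:,.0f} values/second")
print(f"~{values_per_hour * 24 * 365:,} values/year")

# Velocity: how the sample interval multiplies daily volume for a
# hypothetical plant with 100,000 tags.
tags = 100_000
for interval_s in (300, 60, 1):  # 5-minute, 1-minute, 1-second sampling
    per_day = tags * SECONDS_PER_DAY // interval_s
    print(f"{interval_s:>4}s sampling: {per_day:,} samples/day")
```

Running the sketch shows the jump plainly: the same hypothetical plant that produces tens of millions of samples per day at five-minute intervals produces billions per day at one-second intervals.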
As if the volume and velocity challenges were not enough, the solar power market is still a young industry, which introduces a wide variety of technology that monitoring systems have to work with. The industry has dozens of inverter, tracker, SCADA and module manufacturers, each with their own open and proprietary communication protocols and alert codes.
The asset monitoring application needs to be able to communicate with each vendor’s unique interface, map its alert codes to a common set of standards, and generate meaningful events and performance analytics.
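As a minimal illustration of that mapping step, the sketch below normalizes a few vendor-specific alert codes into a common event taxonomy. The vendor names, codes and categories are invented for illustration; they do not reflect any actual manufacturer’s protocol or Power Factors’ implementation.

```python
# Minimal sketch of vendor alert-code normalization.
# Vendor names, codes and categories are hypothetical examples only.

from dataclasses import dataclass

@dataclass
class NormalizedEvent:
    plant_id: str
    device_id: str
    category: str      # common taxonomy, e.g. "INVERTER_TRIP"
    vendor_code: str   # original code preserved for traceability

# Per-vendor lookup tables mapping proprietary codes to a common taxonomy.
VENDOR_CODE_MAP = {
    "vendor_a": {"E101": "INVERTER_TRIP", "W204": "DC_STRING_UNDERPERFORMANCE"},
    "vendor_b": {"FLT-7": "INVERTER_TRIP", "TRK-3": "TRACKER_STALL"},
}

def normalize(vendor: str, code: str, plant_id: str, device_id: str) -> NormalizedEvent:
    """Translate a vendor-specific alert code into the common taxonomy."""
    category = VENDOR_CODE_MAP.get(vendor, {}).get(code, "UNKNOWN")
    return NormalizedEvent(plant_id, device_id, category, code)

# Two different vendor codes resolve to the same common category.
print(normalize("vendor_a", "E101", "plant-01", "inv-07").category)   # INVERTER_TRIP
print(normalize("vendor_b", "FLT-7", "plant-02", "inv-12").category)  # INVERTER_TRIP
```

The point of the sketch is the design choice, not the table itself: proprietary codes are translated at the edge of the platform so that downstream event and analytics logic only ever sees one common vocabulary.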
This is no small task. The comparatively more mature wind and fossil power industries have consolidated down to a few equipment vendors and have adopted standard communication and failure codes, dramatically reducing the technology variety in those asset classes. Solar is not there yet.
An industrial-strength software application is needed if we are going to successfully acquire, process and analyze data of this magnitude. IT professionals call this a robust and scalable software platform.
Robustness speaks to the software’s reliability and availability, and it must be built into the platform design: the infrastructure must be highly available and able to maintain application performance under stress.
Tech Terms defines software scalability as “scalable hardware or software [that] can expand to support increasing workloads. This capability allows computer equipment and software programs to grow over time, rather than needing to be replaced.” Like robustness, a scalable platform must be designed from the ground up.
Many solar monitoring software platforms on the market were not designed with these capabilities in mind. These platforms were initially built to meet the needs of incentive programs where simple energy production monitoring was required by public agencies to demonstrate the solar project was producing electricity commensurate with the incentive program rating. These monitoring applications never envisioned the magnitude of the market nor the data volumes that today’s and tomorrow’s software platforms would need to process.
As the scale increased, more and more capability had to be bolted onto these platforms to keep them from breaking. The problem with this approach is that the foundational data acquisition and processing engine was not designed to handle this much scale; eventually the weight of the bolt-ons will crush the software.
We need to start over with a robust and scalable design using tools and a back-end infrastructure that can adapt to the growing market needs.
The challenge of scale is debilitating to most solar monitoring software applications simply because they were not designed to address the volume, velocity and variety of data thrown at them. Until this fundamental problem is resolved, users will continue to be frustrated with software monitoring tools that break far too often.
Want to learn more about how Power Factors’ Drive Pro asset performance management (APM) platform helps you overcome the challenge of scale?
Steve Hanawalt is EVP and Founder at Power Factors.