To find out what driving’s like when you have a sixth sense, I took a radar-equipped Audi A8 around the highways and byways of Stuttgart, Germany. It was great.
I couldn’t help but smile when I pulled behind a huge truck and, resisting the temptation to hit the brakes, focused on steering. The adaptive cruise-control system, which uses a new radar from Robert Bosch that can see hundreds of meters ahead, did the rest. The system gently nestled the car behind the juggernaut and accelerated at my command, so I was able to pull out into the passing lane, all the while getting the most out of the 4.2-liter diesel, which rapidly propelled me to the speed I’d selected.
The system did have its foibles. Once, the radar locked onto the car in front of me, but when that car turned hard to the right and then hard to the left, the radar came unlocked. So I took control, applying the brakes well before the emergency braking would have kicked in. That episode was a little disconcerting. Still, I could easily get used to this gizmo.
Most people who have driven for a while using such a radar are loath to ever give it up. And the number of such devotees will only grow as this technology—which now adds about US $1000 to the price of the car—becomes more affordable. The first commercial system appeared in Japan in 1997, on the Toyota Celsior; others soon followed in some top-of-the-line models from the likes of BMW, Jaguar, Lexus, Nissan, and Mercedes. The market has been expanding at about 40 percent a year, and as prices fall, that rate should rise.
Today’s systems can dramatically reduce your risk of rear-ending someone else’s car, and when most cars have such radars, they will also be much less likely to rear-end you. Once every vehicle on the road is able to sense and avoid others, there’ll be no reason why they won’t be able to negotiate tailing distances among themselves. Eventually, they might even be sending radio messages about their intentions to one another and to monitors on the roadway over ad hoc communication networks. Smart roads may thus emerge organically.
The first step in that evolution, the democratization of radar, is clearly under way. Next year Bosch will release a less expensive version of its radar, with a range of 160 meters, two-thirds that of the one I tested. This won’t be a problem, though, because it’s intended for cars that don’t go nearly as fast.
Falling costs are the key, but of course, costs don’t fall by themselves. Engineers have done their part by ditching the expensive compound semiconductors in their radar sets in favor of the old standby, silicon—but a special form of silicon that’s been speeded up.
In the late 1960s, workers at Mullard Research Laboratories, in England, developed a car radar system that operated at 10 gigahertz, and RCA used the same frequency in its 1972 system. To make the next step and cram such a radar into a small space—such as under the hood of a car—manufacturers had to shrink the array of antennas, keeping each antenna far enough from its neighbors to allow for good resolution of detail. They accomplished this task by moving first to 34 GHz, then to 50 GHz, and recently to 77 GHz. The choice of frequency has something to do with the absorption of microwaves in the air and a lot to do with legislation: The law places strict limits on power for the lower frequencies, which is why systems in the lower bands can look forward just a few meters, only enough to avoid fender benders in stop-and-go traffic.
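The connection between frequency and antenna size is simple geometry: higher frequency means shorter wavelength, and shorter wavelength means the antenna elements can sit closer together. The back-of-the-envelope Python sketch below makes the point; the half-wavelength element spacing is a common rule of thumb, not a figure taken from any of these systems.

```python
# Why higher carrier frequencies allow smaller antenna arrays:
# elements are commonly spaced about half a wavelength apart, and
# wavelength = c / frequency, so the array shrinks as frequency rises.

C = 3.0e8  # speed of light, m/s

for freq_ghz in (10, 34, 50, 77):
    wavelength_mm = C / (freq_ghz * 1e9) * 1e3
    print(f"{freq_ghz:>3} GHz: wavelength ~{wavelength_mm:5.1f} mm, "
          f"half-wave spacing ~{wavelength_mm / 2:4.1f} mm")

# At 10 GHz the wavelength is about 30 mm; at 77 GHz it is under 4 mm,
# so the same number of elements fits in a far smaller package.
```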
To manage the higher frequencies, long-range auto radars have until recently required seven or more gallium arsenide–based chips to generate, amplify, and detect the microwave signals. That set of chips costs from $20 to $60—not all that much, it might seem. But those chips have to be connected and tested, and if one fails to work, it must be rooted out and replaced. This labor adds substantially to the cost of any radar based on gallium arsenide technology.
In 2009, the German chipmaker Infineon Technologies, based in Neubiberg, produced a system designed around a single silicon-based chip. Then it teamed up with Bosch and started supplying a more flexible, two-chip variant for radar systems in 2010 models of the Audi A8, Porsche Panamera, and Volkswagen Touareg. Not only are these new systems less expensive, they also have significantly better performance, allowing them to cover more than four times the area in front of the car, four times as accurately.
Even specialists in the gallium arsenide industry expect that silicon chips will grab most of the car radar market. Asif Anwar, director of the program for gallium arsenide and compound-semiconductor technologies at the market-research firm Strategy Analytics, in England, predicts that over the next three years, silicon’s share of the chip market for automotive radar will grow from nearly nothing to perhaps 60 percent. Although Infineon will then have captured most of the resulting $120 million market for silicon-based radar chips, it already faces the first signs of serious competition: U.S. chipmaker Freescale Semiconductor, in Austin, Texas, has just started sending samples of its silicon-based chip to automotive radar makers. Other companies are surely following suit.
Infineon has thus overturned the conventional view that silicon chips would never be able to generate, detect, and amplify high frequencies. The problem is that electrons move slowly through those chips—which is why a decade ago Infineon and a handful of other companies were using the faster gallium arsenide to build automotive radar chips. But in mid-2002 Infineon got out of the gallium arsenide business. A year later it was in discussions with Bosch about automotive radar chips based on silicon.
“At that time everybody thought this was not possible to do with silicon-based technologies,” recollects Rudolf Lachner, Infineon’s program manager for radar technologies. “But we did some high-speed circuits, such as voltage-controlled oscillators, which worked at 77 GHz.”
To realize such high speeds in a silicon transistor, Infineon’s engineers inserted into the heart of the device a thin layer that was four parts silicon and one part germanium. The idea was hardly new. Indeed, it can be traced to theoretical work that Nobel Prize–winning physicist Herbert Kroemer, now at the University of California, Santa Barbara, did way back in the 1950s. However, the world had to wait until 1975 for the first real device, made at AEG Research Center (now part of Daimler) in Ulm, Germany. Infineon’s claim to fame comes from boosting this kind of transistor to record speeds, thanks to improvements in internal configuration and material quality.
Adding that layer of silicon germanium alloy introduces electric fields that present the moving electron with the equivalent of a downhill path, speeding it up automatically. Now even transistors with 50-nanometer-thick base layers can reach the speeds demanded by 77-GHz automotive radar.
Switching to the new transistor delivers another benefit—very low noise levels. You can speed up conventional silicon transistors by thinning the base layer, but you’ll just impede the flow of electrons and increase background noise. To muffle it, you could try to reduce the resistance of the base by doping the silicon with traces of boron, whose atoms each have three electrons in the outer shell, rather than silicon’s four. Because there aren’t enough electrons to form all the covalent bonds required, you get a “hole,” or virtual positive particle, which moves freely through the crystal, increasing its conductivity. Unfortunately, increasing the base doping this way reduces the amplification, or gain. Working with a silicon germanium base layer gets around this problem because it makes its own contribution to the gain, offsetting the losses caused by doping. “You can make the base doping very high,” explains Lachner. “And by making it very high, you get a very low base resistance, which improves the noise behavior of your transistor,” he says.
The fundamental insight stemmed from work Infineon did in the early 1990s while developing chips for next-generation mainframe computers. That project never took off. Nor was the company able to market its chips to mobile-phone vendors: As conventional transistors shrank, their lower cost proved more important than the lower power consumption of Infineon’s chips. But soon after, it became clear to Infineon that this technology was a perfect fit for auto radar.
Perhaps it’s the fiendishly high speeds of the autobahns that have made Germany so keen on technology to avoid collisions. Or it could be government aid. In 2004, Infineon began a three-year automotive radar program with €10 million in subsidies from the German government. That project allowed the company to collaborate with automotive radar system makers Bosch and Continental and carmakers BMW and Daimler.
Infineon’s prototype could operate up to only about 80 GHz, good enough for use in an oscillator but not in the amplifier. That’s because for a transistor to deliver reasonable gain at a given frequency, it needs to top out at about three times that value. In 2007, by improving the quality of the boron-doped silicon germanium in the base, Infineon’s engineers increased the transistor’s maximum operating frequency to the requisite level and soon went on to produce the first commercial silicon germanium automotive radar chips, which ran at 77 GHz. Four years later, Infineon continues to churn out the chips at its huge fab in Regensburg, Germany.
Inserting the silicon germanium layer into the device requires no exotic techniques or extraordinary tools: Infineon simply uses 200-millimeter silicon wafers and grows thin silicon films on top using conventional chemical-vapor deposition. At the appropriate point during the process, a valve opens, germanium-based gases flow into the growth chamber, and a silicon germanium film forms.
One such wafer can yield thousands of chips. “This gives us enough headroom to produce as many automotive radar systems as we would like,” explains Lachner. In fact, most of the fab’s output of 10 000 wafers goes to other purposes. If Infineon somehow captured the entire automotive market overnight, it could easily satisfy the demand.
So why do other companies, such as TriQuint Semiconductor, in Hillsboro, Ore., and United Monolithic Semiconductors, in Orsay, France, still produce automotive radar chips based on pricey gallium arsenide? For one thing, gallium arsenide is still the biggest player in the radar market at the moment, and these firms can sell a lot of chips, at least for a few years. Also, these companies don’t necessarily have silicon production lines to switch to, nor would it make sense to build a full-blown silicon fab for car radar alone.
Cost isn’t the only thing driving change. It’s not only cheaper to use one Infineon chip (or two, in the fancier system); it’s also more effective than the handful of gallium arsenide chips it replaces. When Bosch adopted Infineon’s product for its third-generation long-range radar (dubbed, unimaginatively, the LRR3), both the minimum and maximum ranges of its system got better: The minimum range dropped from 2 meters to half a meter, and the maximum range shot from 150 to 250 meters. At the same time, the detection angle doubled to 30 degrees, and the accuracy of angle and distance measurements increased fourfold. The superiority stems from the significantly higher radar bandwidth used in the systems containing the silicon-based chips, says Thomas Fuehrer, Bosch’s senior manager for strategic marketing for driver assistance: “It is around 200 megahertz on the LRR2, and we are now using 500 MHz on the LRR3.”
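The link between sweep bandwidth and precision follows from a textbook radar relationship: range resolution is roughly the speed of light divided by twice the bandwidth. The short Python sketch below applies it to the two bandwidths Fuehrer cites; it is an illustration of the general formula, not a calculation from Bosch's specifications.

```python
# Range resolution of a frequency-modulated radar: roughly c / (2 * bandwidth).
C = 3.0e8  # speed of light, m/s

def range_resolution_m(bandwidth_hz: float) -> float:
    return C / (2 * bandwidth_hz)

for label, bw_hz in (("LRR2, ~200 MHz", 200e6), ("LRR3, ~500 MHz", 500e6)):
    print(f"{label}: ~{range_resolution_m(bw_hz):.2f} m")
# Widening the sweep from 200 MHz to 500 MHz shrinks the resolvable
# distance cell from about 0.75 m to about 0.3 m.
```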
Another selling point is the new system’s compact size—just 7.4 by 7 by 5.8 centimeters. “If you are comparing it with the competitor’s systems, this really is a very small masterpiece,” Fuehrer says. What it means is that automobile designers can stick this thing just about anywhere—even in the headlamp assembly.
The system employs four antennas and a big plastic lens to shoot microwaves forward and also detect the echoes, all the while ramping the emission frequency back and forth over that big fat 500-MHz band. (Because the ramping is so fast, the chance of two or more radars interfering is extraordinarily low.) The system compares the amplitudes and phases of the echoes, pinpointing each car within range to within 10 cm in distance and 0.1 degree in displacement from the axis of motion. Then it works out which cars are getting closer or farther away by using the Doppler effect—the motion-induced change in frequency that makes a train whistle seem to rise in pitch as it approaches and fall as it pulls away. In all, the radar can track 33 objects at a time.
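For readers who like to see the principle in numbers, here is a minimal sketch of how a frequency-ramping radar of this general type turns a measured beat frequency into distance and a Doppler shift into closing speed. It is not Bosch's signal-processing chain; the ramp duration and the example echo values are invented, and only the 77-GHz carrier and 500-MHz sweep come from the text.

```python
# Minimal FMCW-style range and closing-speed estimates (illustrative only).

C = 3.0e8           # speed of light, m/s
F_CARRIER = 77e9    # carrier frequency, Hz (from the article)
BANDWIDTH = 500e6   # sweep bandwidth, Hz (from the article)
RAMP_TIME = 1e-3    # duration of one frequency ramp, s (assumed)

def range_from_beat(beat_hz: float) -> float:
    """The echo's round-trip delay shows up as a beat frequency proportional to distance."""
    sweep_rate = BANDWIDTH / RAMP_TIME   # Hz of sweep per second
    round_trip_s = beat_hz / sweep_rate
    return C * round_trip_s / 2

def closing_speed_from_doppler(doppler_hz: float) -> float:
    """The Doppler shift of the carrier gives the rate at which the gap is closing."""
    return doppler_hz * C / (2 * F_CARRIER)

# Invented example echo: a 333-kHz beat and a 5.1-kHz Doppler shift
print(f"range ~{range_from_beat(333e3):.0f} m")                       # ~100 m
print(f"closing speed ~{closing_speed_from_doppler(5.1e3):.1f} m/s")  # ~9.9 m/s
```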
On the Audi A8, you receive two separate warnings when you get worryingly close to the car in front. First, a high-pitched alarm sounds, and a light appears on the dashboard. If that sound-and-light show doesn’t work, then comes a short, sharp brake to snap you out of your stupor. “Tests and studies show that most drivers will then immediately look forward at the road and notice if they are too close,” says Bernhard Lucas, head of Bosch’s department for developing car radar hardware.
Even braking may not prevent a collision: Statistics gathered by Bosch show that nearly half of rear-end crashes are caused by drivers pressing the brake pedal too softly. But if that happened in the radar-equipped Audi A8, additional braking would be applied automatically.
If worse comes to worst, the braking system goes into action by itself. “In rare cases where the driver is completely unable to do anything—he is helpless or half dead—full emergency braking is applied when the crash is really unavoidable,” says Lucas. Then the car decelerates abruptly, throwing the driver forward into the safety belt with up to six times the force of gravity but minimizing what would otherwise be a catastrophic impact with the car in front.
Of course, you’ll probably never have to call on such emergency powers to save your life. Few people even consider such features when purchasing a car. That’s why the day-to-day operation of the system is important for winning over the driver. Today, the benefits come mainly in the form of a radar-enhanced cruise control. You can set your radar to lock onto the vehicle in front and keep pace with it, braking and speeding up appropriately. You specify the following distance and the maximum allowable speed, which can be as high as 250 kilometers per hour (155 miles per hour).
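A toy sketch of that gap-keeping logic is below. It is emphatically not Bosch's or Audi's controller (production systems use carefully tuned, safety-certified control loops), and every name and number is invented for illustration.

```python
# Toy adaptive-cruise-control logic: hold the set speed on an open road,
# keep pace behind a slower vehicle, and ease off when the gap gets short.
# Purely illustrative; not any manufacturer's algorithm.

def cruise_command(own_speed, set_speed, gap, desired_gap, lead_speed=None):
    """Return a target speed in m/s. lead_speed is None when no vehicle is detected ahead."""
    if lead_speed is None:
        return set_speed                                         # open road
    if gap < desired_gap:
        return min(set_speed, lead_speed * gap / desired_gap)    # too close: back off
    return min(set_speed, max(lead_speed, own_speed))            # keep pace, capped

# Example: cruising at 33 m/s with a 36 m/s set speed, truck ahead at 25 m/s,
# current gap 40 m, desired gap 50 m -> the command drops to about 20 m/s.
print(cruise_command(own_speed=33, set_speed=36, gap=40, desired_gap=50, lead_speed=25))
```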
It is interesting that when Audi, Porsche, and VW started making radar-ready cars last year, all three companies chose to use the radar as a driving aid rather than a full-blown autopilot. They thus reduced their liability for any accidents that might ensue. Today, it’s clear that the main roadblocks for a software-based chauffeur are legal worries and perhaps fear of the unknown. Should any automaker dare to take the plunge, the technology will not be lacking.
This article originally appeared in print as “A Driver’s Sixth Sense.”
Richard Stevenson got a Ph.D. in physics at the University of Cambridge. There he focused on gallium nitride and other compound semiconductors, the beat he usually covers for Spectrum. This time, he writes about an advanced yet inexpensive car-radar system from Infineon Technologies that’s based on good old silicon. “With the shift to silicon, we may soon see luxury features like adaptive cruise control on economy cars,” he says.
A leading Argo program scientist describes the quiet revolution in undersea tech
Glenn Zorpette is editorial director for content development at IEEE Spectrum. A Fellow of the IEEE, he holds a bachelor's degree in electrical engineering from Brown University.
A robotic underwater craft called a profiling float was deployed by the French research vessel Pourquoi Pas in 2020.
In the puzzle of climate change, Earth’s oceans are an immense and crucial piece. The oceans act as an enormous reservoir of both heat and carbon dioxide, the most abundant greenhouse gas. But gathering accurate and sufficient data about the oceans to feed climate and weather models has been a huge technical challenge.
Over the years, though, a basic picture of ocean heating patterns has emerged. The sun’s infrared, visible-light, and ultraviolet radiation warms the oceans, with the heat absorbed particularly in Earth’s lower latitudes and in the eastern areas of the vast ocean basins. Thanks to wind-driven currents and large-scale patterns of circulation, the heat is generally driven westward and toward the poles, eventually escaping to the atmosphere and space.
This heat loss comes mainly from a combination of evaporation and reradiation into space. This oceanic heat movement helps make Earth habitable by smoothing out local and seasonal temperature extremes. But the transport of heat in the oceans and its eventual loss upward are affected by many factors, such as the ability of the currents and wind to mix and churn, driving heat down into the ocean. The upshot is that no model of climate change can be accurate unless it accounts for these complicating processes in a detailed way. And that’s a fiendish challenge, not least because Earth’s five great oceans occupy 140 million square miles, or 71 percent of the planet’s surface.
“We can see the clear impact of the greenhouse-gas effect in the ocean. When we measure from the surface all the way down, and we measure globally, it’s very clear.” —Susan Wijffels
Providing such detail is the purpose of the Argo program, run by an international consortium involving 30 nations. The group operates a global fleet of some 4,000 undersea robotic craft scattered throughout the world’s oceans. The vessels are called “floats,” though they spend nearly all of their time underwater, diving thousands of meters while making measurements of temperature and salinity. Drifting with ocean currents, the floats surface every 10 days or so to transmit their information to data centers in Brest, France, and Monterey, Calif. The data is then made available to researchers and weather forecasters all over the world.
The Argo system, which produces more than 100,000 salinity and temperature profiles per year, is a huge improvement over traditional methods, which depended on measurements made from ships or with buoys. The remarkable technology of these floats and the systems technology that was created to operate them as a network were recognized this past May with the IEEE Corporate Innovation Award, at the 2022 Vision, Innovation, and Challenges Summit. Now, as Argo unveils an ambitious proposal to increase the number of floats to 4,700 and increase their capabilities, IEEE Spectrum spoke with Susan Wijffels, senior scientist at the Woods Hole Oceanographic Institution on Cape Cod, Mass., and cochair of the Argo steering committee.
Why do we need a vast network like Argo to help us understand how Earth’s climate is changing?
Susan Wijffels: Well, the reason is that the ocean is a key player in Earth’s climate system. So, we know that, for instance, our average climate is really, really dependent on the ocean. But actually, how the climate varies and changes, beyond about a two-to-three-week time scale, is highly controlled by the ocean. And so, in a way, you can think that the future of climate—the future of Earth—is going to be determined partly by what we do, but also by how the ocean responds.
Aren’t satellites already making these kinds of measurements?
Wijffels: The satellite observing system, a wonderful constellation of satellites run by many nations, is very important. But they only measure the very, very top of the ocean. They penetrate a couple of meters at the most. Most are only really seeing what’s happening in the upper few millimeters of the ocean. And yet, the ocean itself is very deep, 5, 6 kilometers deep, around the world. And it’s what’s happening in the deep ocean that is critical, because things are changing in the ocean. It’s getting warmer, but not uniformly warm. There’s a rich structure to that warming, and that all matters for what’s going to happen in the future.
How was this sort of oceanographic data collected historically, before Argo?
Wijffels: Before Argo, the main way we had of getting subsurface information, particularly things like salinity, was to measure it from ships, which you can imagine is quite expensive. These are research vessels that are very expensive to operate, and you need to have teams of scientists aboard. They’re running very sensitive instrumentation. And they would simply prepare a package and lower it down the side into the ocean. And to do a 2,000-meter profile, it would maybe take a couple of hours. To go to the seafloor, it can take 6 hours or so.
The ships really are wonderful. We need them to measure all kinds of things. But to get the global coverage we’re talking about, it’s just prohibitive. In fact, there are not enough research vessels in the world to do this. And so, that’s why we needed to try and exploit robotics to solve this problem.
Pick a typical Argo float and tell us something about it, a day in the life of an Argo float or a week in the life. How deep is this float typically, and how often does it transmit data?
Wijffels: They spend 90 percent of their time at 1,000 meters below the surface of the ocean—an environment where it’s dark and it’s cold. A float will drift there for about nine and a half days. Then it will make itself a little bit smaller in volume, which increases its density relative to the seawater around it. That allows it to sink down to 2,000 meters. Once there, it will halt its downward trajectory and switch on its sensor package. Once it has collected the intended complement of data, it expands, lowering its density. As the then lighter-than-water automaton floats back up toward the surface, it takes a series of measurements in a single column. Then, once it reaches the sea surface, it transmits that profile back to us via a satellite system. We also get a location for that profile through the global positioning system satellite network. Most Argo floats at sea right now are measuring temperature and salinity at a pretty high accuracy level.
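A rough sketch of the buoyancy arithmetic may help here. An Argo float changes its displaced volume (typically by pumping oil into or out of an external bladder), which shifts its density above or below that of the surrounding seawater. The numbers below are invented for illustration and are not the specifications of any real float.

```python
# Illustrative buoyancy arithmetic for a profiling float (made-up numbers).

FLOAT_MASS_KG = 25.0        # assumed mass of the float
SEAWATER_DENSITY = 1027.0   # kg/m^3, a typical deep-ocean value
HULL_VOLUME_L = 24.0        # assumed displaced volume with the bladder empty

def float_density(bladder_volume_l: float) -> float:
    """Density of the float for a given amount of oil pushed into the external bladder."""
    total_volume_m3 = (HULL_VOLUME_L + bladder_volume_l) / 1000.0
    return FLOAT_MASS_KG / total_volume_m3

for bladder_l, phase in ((0.0, "bladder empty: denser than seawater, so the float sinks"),
                         (0.5, "bladder inflated: lighter than seawater, so the float rises")):
    print(f"{float_density(bladder_l):7.1f} kg/m^3  ({phase})")
```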
How big is a typical data transmission, and where does it go?
Wijffels: The data is not very big at all. It’s highly compressed. It’s only about 20 or 30 kilobytes, and it goes through the Iridium network now for most of the float array. That data then comes ashore from the satellite system to your national data centers. It gets encoded and checked, and then it gets sent out immediately. It gets logged onto the Internet at a global data assembly center, but it also gets sent immediately to all the operational forecasting centers in the world. So the data is shared freely, within 24 hours, with everyone that wants to get hold of it.
This visualization shows some 3,800 of Argo’s floats scattered across the globe. Argo Program
You have 4,000 of these floats now spread throughout the world. Is that enough to do what your scientists need to do?
Wijffels: Currently, the 4,000 we have is a legacy of our first design of Argo, which was conceived in 1998. And at that time, our floats couldn’t operate in the sea-ice zones and couldn’t operate very well in enclosed seas. And so, originally, we designed the global array to be 3,000 floats; that was to kind of track what I think of as the slow background changes. These are changes happening across 1,000 kilometers in around three months—sort of the slow manifold of what’s happening to subsurface ocean temperature and salinity.
So, that’s what that design is for. But now, we have successfully piloted floats in the polar oceans and the seasonal sea-ice zones. So we know we can operate them there. And we also know now that there are some special areas like the equatorial oceans where we might need higher densities [of floats]. And so, we have a new design. And for that new design, we need to get about 4,700 operating floats into the water.
But we’re just starting now to really go to governments and ask them to provide the funds to expand the fleet. And part of the new design calls for floats to go deeper. Most of our floats in operation right now go only as deep as about 2,000 meters. But we now can build floats that can withstand the oceans’ rigors down to depths of 6,000 meters. And so, we want to build and sustain an array of about 1,200 deep-profiling floats, with an additional 1,000 of the newly built units capable of tracking the oceans’ geochemistry. But this is new. These are big, new missions for the Argo infrastructure that we’re just starting to try and build up. We’ve done a lot of the piloting work; we’ve done a lot of the preparation. But now, we need to find sustained funding to implement that.
A new generation of deep-diving Argo floats can reach a depth of 6,000 meters. A spherical glass housing protects the electronics inside from the enormous pressure at that depth. MRV Systems/Argo Program
What is the cost of a typical float?
Wijffels: A typical core float, which just measures temperature and salinity and operates to 2,000 meters, costs, depending on the country, between $20,000 and $30,000. But they each last five to seven years. And so, the cost per profile that we get, which is what really matters for us, is very low—particularly compared with other methods [of acquiring the same data].
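Those figures imply a strikingly low cost per profile. A quick back-of-the-envelope calculation, using only the purchase cost, lifetime, and roughly 10-day cycle mentioned in the interview:

```python
# Cost per profile implied by the interview's figures (rough arithmetic only).

for cost_usd, lifetime_years in ((20_000, 5), (30_000, 7)):
    profiles = lifetime_years * 365 / 10   # about one profile every 10 days
    print(f"${cost_usd:,} over {lifetime_years} years: ~{profiles:.0f} profiles, "
          f"~${cost_usd / profiles:.0f} per profile")
# Either way, on the order of $100 per temperature-and-salinity profile,
# versus hours of research-vessel time for a single shipboard cast.
```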
What kind of insights can we get from tracking heat and salinity and how they’re changing across Earth’s oceans?
Wijffels: There are so many things I could talk about, so many amazing discoveries that have come from the Argo data stream. There’s more than a paper a day that comes out using Argo. And that’s probably a conservative view. But I mean, one of the most important things we need to measure is how the ocean is warming. So, as the Earth system warms, most of that extra heat is actually being trapped in the ocean. Now, it’s a good thing that that heat is taken up and sequestered by the ocean, because it makes the rate of surface temperature change slower. But as it takes up that heat, the ocean expands. So, that’s actually driving sea-level rise. The ocean is pumping heat into the polar regions, which is causing both sea-ice and ice-sheet melt. And we know it’s starting to change regional weather patterns as well. With all that in mind, tracking where that heat is, and how the ocean circulation is moving it around, is really, really important for understanding both what's happening now to our climate system and what's going to happen to it in the future.
What has Argo’s data told us about how ocean temperatures have changed over the past 20 years? Are there certain oceans getting warmer? Are there certain parts of oceans getting warmer and others getting colder?
Wijffels: The signal in the deep ocean is very small. It’s a fraction, a hundredth of a degree, really. But we have very high precision instruments on Argo. The warming signal came out very quickly in the Argo data sets when averaged across the global ocean. If you measure in a specific place, say a time series at a site, there's a lot of noise there because the ocean circulation is turbulent, and it can move heat around from place to place. So, any given year, the ocean can be warm, and then it can be cool…that’s just a kind of a lateral shifting of the signal.
“We have discovered through Argo new current systems that we knew nothing about....There’s just been a revolution in our ability to make discoveries and understand how the ocean works.” —Susan Wijffels
But when you measure globally and monitor the global average over time, the warming signal becomes very, very apparent. And so, as we’ve seen from past data—and Argo reinforces this—the oceans are warming faster at the surface than at their depths. And that’s because the ocean takes a while to draw the heat down. We see the Southern Hemisphere warming faster than the Northern Hemisphere. And there’s a lot of work that’s going on around that. The discrepancy is partly due to things like aerosol pollution in the Northern Hemisphere’s atmosphere, which actually has a cooling effect on our climate.
But some of it has to do with how the winds are changing. Which brings me to another really amazing thing about Argo: We’ve had a lot of discussion in our community about hiatuses or slowdowns of global warming. And that’s because of the surface temperature, which is the metric that a lot of people use. The oceans have a big effect on the global average surface temperature estimates because the oceans comprise the majority of Earth’s surface area. And we see that the surface temperature can peak when there’s a big El Niño–Southern Oscillation event. That’s because, in the Pacific, a whole bunch of heat from the subsurface [about 200 or 300 meters below the surface] suddenly becomes exposed to the surface. [Editor’s note: The El Niño–Southern Oscillation is a recurring, large-scale variation in sea-surface temperatures and wind patterns over the tropical eastern Pacific Ocean.]
What we see are these kinds of chaotic natural phenomena, such as the El Niño–Southern Oscillation, that just transfer heat vertically in the ocean. And if you measure vertically through the El Niño region of the tropical Pacific, that all cancels out. And so, the actual change in the amount of heat in the ocean doesn’t see those hiatuses that appear in surface measurements. It’s just a staircase. And we can see the clear impact of the greenhouse-gas effect in the ocean. When we measure from the surface all the way down, and we measure globally, it’s very clear.
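A toy illustration of that column-integral point: moving heat up or down within a column changes the surface reading but not the column total. The layer values below are invented, and summing temperatures stands in for heat content only under the simplifying assumption of equal layer masses and heat capacities.

```python
# Vertical redistribution changes the surface value, not the column total (toy example).

column = [24.0, 27.0, 12.0, 4.0]   # invented layer temperatures, surface first
print("surface:", column[0], "| column sum:", sum(column))

# An El Nino-like event exposes warm subsurface water at the surface:
column[0], column[1] = column[1], column[0]
print("surface:", column[0], "| column sum:", sum(column))
# The surface value jumps; the vertically integrated value does not.
```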
Argo was obviously designed and established for research into climate change, but so many large scientific instruments turn out to be useful for scientific questions other than the ones they were designed for. Is that the case with Argo?
Wijffels: Absolutely. Climate change is just one of the questions Argo was designed to address. It’s really being used now to study nearly all aspects of the ocean, from ocean mixing to just mapping out what the deep circulation, the currents in the deep ocean, look like. We now have very detailed maps of the surface of the ocean from the satellites we talked about, but understanding what the currents are in the deep ocean is actually very, very difficult. This is particularly true of the slow currents, not the turbulence, which is everywhere in the ocean like it is in the atmosphere. But now, we can do that using Argo because Argo gives us a map of the sort of pressure field. And from the pressure field, we can infer the currents. We have discovered through Argo new current systems that we knew nothing about. People are using this knowledge to study the ocean eddy field and how it moves heat around the ocean.
People have also made lots of discoveries about salinity; how salinity affects ocean currents and how it is reflecting what’s happening in our atmosphere. There’s just been a revolution in our ability to make discoveries and understand how the ocean works.
During a typical 10-day cycle, an Argo float spends most of its time drifting at depth, then descends to 2,000 meters and makes readings as it ascends to the surface, where it transmits its data via a satellite network. Argo Program
As you pointed out earlier, the signal from the deep ocean is very subtle, and it’s a very small signal. So, naturally, that would prompt an engineer to ask, “How accurate are these measurements, and how do you know that they’re that accurate?”
Wijffels: So, at the inception of the program, we put a lot of resources into a really good data-management and quality-assurance system. That’s the Argo Data Management system, which broke new ground for oceanography. And so, part of that innovation is that we have, in every nation that deploys floats, expert teams that look at the data. When the data is about a year old, they look at that data, and they assess it in the context of nearby ship data, which is usually the gold standard in terms of accuracy. And so, when a float is deployed, we know the sensors are routinely calibrated. And so, if we compare a freshly calibrated float’s profile with one from a float that might be six or seven years old, we can tell whether the older sensors have drifted. What’s more, some of the satellites that Argo is designed to work with also give us the ability to check whether the float sensors are working properly.
And through the history of Argo, we have had issues. But we’ve tackled them head on. We have had issues that originated in the factories producing the sensors. Sometimes, we’ve halted deployments for years while we waited for a particular problem to be fixed. Furthermore, we try and be as vigilant as we can and use whatever information we have around every float record to ensure that it makes sense. We want to make sure that there’s not a big bias, and that our measurements are accurate.
You mentioned earlier there’s a new generation of floats capable of diving to an astounding 6,000 meters. I imagine that as new technology becomes available, your scientists and engineers are looking at this and incorporating it. Tell us how advances in technology are improving your program.
Wijffels: [There are] three big, new things that we want to do with Argo and that we’ve proven we can do now through regional pilots. The first one, as you mentioned, is to go deep. And so that meant reengineering the float itself so that it could withstand and operate under really high pressure. And there are two strategies to that. One is to stay with an aluminum hull but make it thicker. Floats with that design can go to about 4,000 meters. The other strategy was to move to a glass housing. So the float goes from a metal cylinder to a glass sphere. And glass spheres have been used in ocean science for a long time because they’re extremely pressure resistant. So, glass floats can go to those really deep depths, right to the seafloor of most of the global ocean.
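To give a sense of the “really high pressure” involved, a quick hydrostatic estimate (pressure is roughly seawater density times gravity times depth, ignoring compressibility; the density used is a typical value, not a program figure):

```python
# Approximate hydrostatic pressure at Argo operating depths.

RHO = 1027.0   # seawater density, kg/m^3 (typical value)
G = 9.81       # gravitational acceleration, m/s^2

for depth_m in (2_000, 4_000, 6_000):
    pressure_mpa = RHO * G * depth_m / 1e6
    print(f"{depth_m:>5} m: ~{pressure_mpa:5.1f} MPa (~{pressure_mpa * 9.87:.0f} atmospheres)")
# At 6,000 meters the housing must survive roughly 60 MPa, about 600 atmospheres.
```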
The game changer is a set of sensors that are sensitive and accurate enough to measure the tiny climate-change signals that we’re looking for in the deep ocean. And so that requires an extra level of care in building those sensors and a higher level of calibration. And so we’re working with sensor manufacturers to develop and prove calibration methods with tighter tolerances and ways of building these sensors with greater reliability. And as we prove that out, we go to sea on research vessels, we take the same sensors that were in our shipboard systems, and compare them with the ones that we’re deploying on the profiling floats. So, we have to go through a whole development cycle to prove that these work before we certify them for global implementation.
What about batteries? Are they ultimately the limit on a float’s lifetime? I mean, I imagine you can’t recharge a battery that’s 2,000 meters down.
Wijffels: You’re absolutely right. Batteries are one of the key limitations for floats right now as regards their lifetime, and what they’re capable of. If there were a leap in battery technology, we could do a lot more with the floats. We could maybe collect data profiles faster. We could add many more extra sensors.
So, battery power and energy management is a big, important aspect of what we do. And in fact, the way we task the floats has been a problem, particularly with lithium batteries, because the floats spend about 90 percent of their time sitting in the cold and not doing very much. During their drift phase, we sometimes turn them on to take some measurements. But still, they don’t do very much. They don’t use their buoyancy engines. This is the engine that changes the volume of the float.
And what we’ve learned is that these batteries can passivate. And so, we might think we’ve loaded a certain number of watts onto the float, but we never achieve the rated power level because of this passivation problem. But we’ve found different kinds of batteries that really sidestep that passivation problem. So, yes, batteries have been one thing that we’ve had to figure out so that energy is not a limiting factor in float operation.