Many equipment manufacturers are facing a problem: the performance requirements for signal transfer are becoming ever more critical as data rates continue to increase and the limits of physics leave minimal headroom.
Interconnects, once a simple commodity, now present a highly complex design and manufacturing task, one for which most equipment manufacturers are finding they have neither a sufficiently experienced design staff nor the necessary in-house knowledge. As a result, interconnect designs fail and have to go through additional, unplanned design iterations.
It’s the age of the internet that has brought on this situation. The number of users worldwide has grown, and so has the bandwidth of the content they consume, such as ever-higher-definition images and video, leaving systems needing to transport huge volumes of data traffic.
Data centers, for example, are massively expanding in order to serve their customers. Yet the physical footprint of many data centers cannot grow, because the cost of data center real estate keeps climbing. The equipment must therefore become denser, and the interconnects have to move data faster.
Consumer electronics offer another example, particularly the coming generations of gaming consoles. The graphics they deliver will have even higher resolution and include support for 3D displays, demanding connectors and cables that provide 10 to 20 times the data rate of the current generation of games hardware. Soon, copper cables simply won’t be able to handle it.
The world of video transfer alone is driving significant changes in the interconnect world. Corporate websites will become video rich over the next couple of years and it won’t be long before a high percentage of us have our TV delivered through the internet.
What will this mean for the interconnect world? Data center, telecoms, and IT devices will need to pass much greater amounts of data, almost instantaneously, and the connectors doing that job today are certainly not capable of meeting that demand.
At the moment, the interconnect world is still tracking Moore’s Law: the number of transistors per unit of chip area doubles roughly every 18 months, and processing power, storage requirements, and interconnect data rates are scaling along with it.
Where a single channel SFP (small form-factor pluggable) connector once was adequate, soon came the need for SFP+, which doubled the throughput. Then the QSFP (quad SFP) came a couple of years later and used a connector that was only 50% larger than the original SFP.
At each speed increase, transmission limitations dictated shorter maximum interconnect lengths. Current predictions say data transfer will continue down a similar path for another five years before copper can no longer carry these speeds without unacceptable losses. We know, for example, that 40 Gigabit/s over copper will be possible, but only up to seven meters.
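A rough illustration of why reach shrinks at each speed step: copper attenuation per meter grows roughly with the square root of frequency (skin-effect dominated), so for a fixed channel loss budget the achievable length falls as the data rate rises. The sketch below uses purely illustrative numbers, not measured cable data or vendor specifications.

```python
import math

def max_reach_m(loss_budget_db, atten_db_per_m_at_10g, data_rate_gbps):
    """Estimate reach for a fixed loss budget, assuming per-meter
    attenuation scales with sqrt(data rate) relative to a 10 Gb/s
    reference (skin-effect approximation; illustrative only)."""
    atten = atten_db_per_m_at_10g * math.sqrt(data_rate_gbps / 10.0)
    return loss_budget_db / atten

# Hypothetical cable: 2 dB/m at 10 Gb/s, 20 dB total channel budget.
for rate in (10, 40, 100):
    print(f"{rate:>3} Gb/s -> ~{max_reach_m(20.0, 2.0, rate):.1f} m")
# 10 Gb/s -> ~10.0 m, 40 Gb/s -> ~5.0 m, 100 Gb/s -> ~3.2 m
```

With these assumed figures, quadrupling the data rate halves the reach, which mirrors the pattern the industry has seen at each SFP-generation step.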
The jury is out on 100 Gigabit/s over copper and beyond. I’m not saying it can’t be done, but to combat transmission losses the cables need to become wider and heavier, and sooner or later it won’t be practical in many applications to make copper cables any thicker. Plus, the achievable distances (under seven meters) will no longer be sufficient. At Volex, we’re now seeing a lot of copper replaced with optical solutions, and we forecast many more. As fiber optics comes into its own, many manufacturers face a move from copper to fiber, a technology with which many have limited experience.
The Business Case
Whether designing copper or fiber interconnects, the problem electronic design engineers now face is that the signal loss budgets for the interconnects in their systems are becoming very small in order to achieve the required bandwidth and data rates.
So small, in fact, that many lack the skills to translate those loss budgets into the connector, cable, and termination selections the cable assemblies need in order to guarantee dependable performance.
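Translating a loss budget into component choices is, at its simplest, bookkeeping: the insertion losses of every connector, termination, and cable run must sum to less than the channel budget, with margin to spare. The sketch below shows the kind of check involved; the component names and loss values are hypothetical, not catalog figures.

```python
def within_budget(budget_db, components, margin_db=3.0):
    """Sum per-component insertion losses (dB) and check them against
    the channel loss budget, reserving a design margin.
    All values here are hypothetical, for illustration only."""
    total = sum(loss for _, loss in components)
    return total + margin_db <= budget_db, total

# Hypothetical channel: two board connectors plus a 5 m cable run.
channel = [
    ("board connector A", 1.5),
    ("board connector B", 1.5),
    ("cable, 5 m @ 2 dB/m", 10.0),
]
ok, total = within_budget(20.0, channel)
print(f"total loss {total} dB -> {'fits' if ok else 'exceeds'} 20 dB budget")
```

The hard part in practice is not the arithmetic but knowing realistic loss figures for each candidate part across frequency, temperature, and manufacturing spread, which is exactly where inexperienced teams get caught out.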
In my experience, engineers often choose the wrong connector for the application, and it simply doesn’t provide a reliable or workable interconnect solution for the project.
Other times, the interconnects are low on the list of development priorities. In many systems and equipment projects, interconnects seem to be an afterthought.
Over the years, I’ve found that design teams that do not consider interconnects until the last minute are always in a redesign mode. They spend time solving problems that they should’ve addressed at the front end of the design effort. This can be very costly, both financially and in missed deadlines. Missing your time-to-market can have a negative impact far beyond a few lost sales, such as losing out to a competitor’s product or losing brand reputation.
Asking for help from interconnect experts can also pay off in ancillary ways. In the aircraft industry, people get bonus points for weight reduction. They want composite materials, anything that can do the job for the least possible weight — because the compound effect of every gram reduced (over thousands of components) can make a significant difference to an airline’s fuel bills over the lifetime of each plane. Shifting from copper to fiber not only gave aircraft higher data rate connectivity, it gave them additional advantages of weight reduction and EMC (electromagnetic compatibility).
Many companies have taken advantage of horizontal integration, recognizing certain functional elements that they could sub-contract without any loss of intellectual property or interruption to business, allowing them to focus on their core competencies.
It can work very well, but the potential danger to the business lies in having to rely on the capability and quality of the partners you commit to carrying out the sub-contracted work.
The Power of Partners
When looking for an interconnect provider, you need one that will contribute significantly to the design process: one that will look at your equipment design (preferably as early as the block-diagram stage) and ask the right questions. That way, by the time you reach the end of the main design process, all of the interconnects will already be designed and tested, with no trial and error.
We recently worked with a medical imaging equipment manufacturer that wanted to sharpen up the image produced by one of their scanning devices. At the same time they wanted to reduce the number of cable assemblies so, with our combination of high-speed copper, fiber, and radio frequency expertise, we created a hybrid product that replaced a lot of radio frequency lines with optical lines.
We also combined everything into a single customized multi-function connector, which included the power, the input signal from the scanner, and the optical signal which feeds the display.
Don’t Bleed at the Leading Edge
Finding the right design partner can help avoid the consequences of the old adage “stay away from the bleeding edge or you’ll get cut.”
Over the years, we’ve found that the best way to mitigate the potential risks when leading-edge equipment designs demand leading-edge interconnect designs is to undertake technology roadmap sharing under a non-disclosure agreement.
Technology roadmap sharing offers our partner customers our full expertise in how we see the market and technology evolving, and what we’re doing to address it in terms of our competencies, capabilities, and design projects internally. Our partner gives us an equally frank vision of where they see their own technology roadmap going, and what this is going to mean in terms of interconnect capability requirements. We both share short-, mid-, and long-term views. This could happen on an annual basis, for example, in what we call a technical alignment meeting.
The result of this is that both companies are better prepared for the future. Both companies can amend their plans based on shared knowledge.
I’ve lost count of the number of times I’ve wanted to say, “If only you’d called us in here three months ago, you wouldn’t still be in development.”
Being at the leading edge doesn’t have to be high risk when you partner with experts in non-core specialist fields.