A SMARTER SOLUTION

ANNUAL: Interrupt Inside 2020

PERSONAL SAFETY VS. PERSONAL INTEGRITY
A report on Swedes' opinions regarding technology, progress and safety. Produced by Sylog AB, a Data Respons company.

Reliable control system for Danish recycling system
Dansk Retursystem manages a world-class recycling system which retrieves, counts and sorts empty cans and bottles. An impressive 90% is sent for recycling. Data Respons Solutions delivers a customised and trustworthy central control system that ensures reliable operations 24/7.

Enabling the Young
Young people are our future, and we want to be part of giving coming generations the best possible starting point and the ability to grow and prosper into educated, healthy and valuable individuals.

The "One VM" Concept
Would it be fair to say that GraalVM is a step closer to the "One VM" concept? Yes, according to the virtual machine experts from MicroDoc. Here's an overview of what GraalVM can do for software developers.

Kenneth Ragnvaldsen – Enabling a Digital Future with Data Respons
Kenneth Ragnvaldsen, CEO of Data Respons, had a chat with AKKA Technologies on how Data Respons is enabling a digital future. As everything around us becomes more and more connected and gathers more and more data, we are constantly enabling digital products, processes and business models. Together, we develop smarter products and systems, and as a consequence we create a more efficient, productive and sustainable world.

Man vs. machine – a software engineer and his Tesla
Meet Hans Christian Lønstad, CTO of Data Respons Solutions. A software engineer with 20+ years of experience at Data Respons, Hans Christian knows a thing or two about technology, and he is the proud owner of a Tesla Model 3. So, what would be more obvious than to ask him how that relationship is going? Is the much-hyped car brand delivering on its promise? What are the upsides and downsides of owning a Tesla? And what are his thoughts on the current state of the automotive industry?

NEWS

Data Respons welcomes Frobese to the family!
Data Respons welcomes Frobese GmbH as the latest addition to a growing family of niche tech companies in Europe. Frobese is a cooperative and successful team of experts specialised in software consulting for German banks and insurance companies.

IT Sonix, a Data Respons company, wins contract to develop platform for renewable energy trading
IT Sonix has been awarded a contract worth 2.5 million euros to develop the software for an international online platform that enables B2B energy trading.

Data Respons has signed the guide against greenwashing
Data Respons signed the guide against greenwashing in Norway in early 2020 and has followed its instructions both in Norway and in our subsidiaries in Sweden, Denmark, Germany, France and Taiwan. Now, as the guide launches internationally, we want to take a stand and promise that we will do our utmost to report honestly and be transparent about our biggest challenges in reducing emissions.

Data Respons subsidiary captures Swedish defence contract
The Swedish Data Respons subsidiary Sylog has won a contract in the integrated logistics support programme for the Swedish Armed Forces Materiel Administration.

Hacking the home office
Europe is once again putting on the brakes, demanding strict social distancing and extended use of home offices. We have been through it before, and many of us have not been part of a physical work environment since March. Our CEO, Kenneth Ragnvaldsen, has a few learning points to share on how the pandemic and home office solutions are affecting us all.

SUSTAINABILITY: Enabling a better future for homeless children in Nepal
For years Data Respons has supported homeless children in Nepal. Through our engagement we aim to develop basic infrastructure and prevent trafficking through education.

Sign up for our newsletter and get the latest news on innovative solutions, new technology and new business opportunities from Data Respons.
DATA RESPONS ESG REPORT

Taking overall responsibility is an important core value at Data Respons. As a responsible business, we address some of the challenges the world is facing related to inequality, climate change, health and poor access to quality education.

Becoming carbon neutral
Global emissions must fall by 7.6% every year from now until 2030 to stay within the 1.5°C ceiling on temperature rises that scientists say is necessary to avoid disastrous consequences. One of Data Respons' core values is to take responsibility, and we acknowledge that slowing down climate change is one of the greatest challenges we need to take on in our time.

Data Respons continues to support the UN Global Compact
We support the UN Global Compact and conduct our business in line with its ten principles related to human rights, labour standards, environment and anti-corruption. We also align our efforts with the UN Sustainable Development Goals and our company values: taking responsibility, performing, being generous and having fun.

"It is important for Data Respons to enable young people to have the best opportunities to grow and prosper!" – Kenneth Ragnvaldsen, CEO

Sustainability through technology
We believe technology development is vital to enable a sustainable future! At Data Respons, we believe that new technology is a key enabler for a more sustainable world. In fact, many of our projects contribute through the innovation of smarter and greener technology solutions, leaving a lasting sustainable footprint.

Helping save lives with technology
Data Respons has been working with Laerdal Medical for nearly a decade, delivering IoT solutions including wireless handheld controllers used to simulate training scenarios and various control units placed inside the simulator.

The AV1 robot lets children go to school when they can't
In the UK, more than 72,000 children are missing out on their childhood due to long-term illness. That means in every sixth classroom there is an empty desk. When a pupil can't attend class themselves, AV1 takes their place. AV1 is a telepresence robot for children and young adults suffering from long-term illness.

IoT-based solution for innovative energy management
The EnergyBase system automatically optimises energy consumption with its self-learning algorithms and controls the energy flows in your home. The system allows you to collect, store and intelligently distribute self-generated energy throughout the house.
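To make the idea concrete, below is a minimal sketch in Python of the kind of rule-based dispatch decision such a home energy manager has to make every metering interval: cover the load from solar first, then from the battery, then from the grid, and store any surplus. This is illustrative only – the actual EnergyBase algorithms are self-learning and far more sophisticated – and every name and number in it is an assumption, not product code.

```python
# Illustrative only: a naive rule-based dispatcher for self-generated energy.
# The real system is self-learning; this just shows the basic decision.
from dataclasses import dataclass

@dataclass
class Battery:
    capacity_kwh: float
    level_kwh: float = 0.0

    def charge(self, kwh: float) -> float:
        """Store up to `kwh`; return the surplus that did not fit."""
        stored = min(kwh, self.capacity_kwh - self.level_kwh)
        self.level_kwh += stored
        return kwh - stored

    def discharge(self, kwh: float) -> float:
        """Draw up to `kwh`; return how much the battery could supply."""
        drawn = min(kwh, self.level_kwh)
        self.level_kwh -= drawn
        return drawn

def dispatch(pv_kwh: float, load_kwh: float, battery: Battery) -> dict:
    """Cover the load from PV first, then battery, then grid;
    store PV surplus in the battery and export only what does not fit."""
    if pv_kwh >= load_kwh:
        exported = battery.charge(pv_kwh - load_kwh)
        return {"grid_import": 0.0, "grid_export": exported}
    deficit = load_kwh - pv_kwh
    from_battery = battery.discharge(deficit)
    return {"grid_import": deficit - from_battery, "grid_export": 0.0}

if __name__ == "__main__":
    bat = Battery(capacity_kwh=10.0, level_kwh=2.0)
    print(dispatch(pv_kwh=3.5, load_kwh=1.0, battery=bat))  # surplus is stored
    print(dispatch(pv_kwh=0.2, load_kwh=2.0, battery=bat))  # battery covers deficit
```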
MANAGEMENT & BOARD
Click to find out more about our management and board of directors.

CULTURE & VALUES
Data Respons is truly a growth company with a strong customer focus and a technology-driven culture. Click to find out more.

JOIN US! – creating the technology of tomorrow

It starts from inside!
When you are part of a Data Respons team, your hard work matters. Whether you develop software that makes cars safer or create the highest-performing PCB design, you will most definitely make an impact for our customers! As the industry embraces the digital transformation, our competencies and experience within IoT, artificial intelligence, cloud and security are vital for its success. In other words, our people are rolling up their sleeves as we speak!

Create the technology solutions of tomorrow!
Our teams have a front-row seat to exciting innovations developed together with our customers within fields such as automation, robotics, AI and connectivity. Our specialists' work can be found inside almost every corner of the tech-enabled world, making it smart from the inside.

SMART PEOPLE: Our teams include people from all tech disciplines, and we are constantly on the lookout for new talent within software development, UI/UX design, hardware and project management.

On top of your game
Staying healthy is something we take seriously at Data Respons. In order to perform both at work and at home, we need to keep a sound mind and a strong body. That is why we exercise together, and why we have made keeping our people healthy, and on top of their game, a KPI! More importantly, our 'InShape' programme is for every level of fitness, with the common goal of staying active.

OUR MARKETS

Data Respons has a solid and well-balanced customer base within several industries, built on our strong competence in IoT, digitalisation and embedded technologies. Our geographical footprint, coupled with more than 30 years of experience, has given us relevant vertical competence within these areas.

MOBILITY – Smarter and more sustainable transportation
The transport and automotive industry is undergoing its largest transformation in several decades, driven by multiple new disruptive technologies combined with stricter safety and environmental requirements. Innovation and technology advances are making the industry more advanced.

AUTOMATION – Improve performance beyond human capabilities
The Smart Factory, robotised, digitalised and data-driven (AI), changes the way we look at production and automation going forward. The ongoing evolution from smart optical sensors to data analytics based on big data processing improves the speed and quality of all industrial and consumer products and goods while driving down their cost.

TELECOM & MEDIA – Meet the growth in demand for data
The cross-industry trend of a more data-driven, smarter and connected society is challenging existing telecom infrastructure to provide connectivity, bandwidth and standard protocols supporting new services. The ongoing investment in 5G technology and networks is the key foundation for a broader roll-out of new value-adding IoT applications.

ENERGY & MARITIME – Unlocking opportunities with digitalisation
Global energy consumption will continue to grow, driving investments in cleaner and more sustainable energy production to limit global emissions. Digital technology disruption offers new communication solutions, enabling the industry to become more data-driven and to optimise and improve asset efficiency.

SPACE, DEFENCE & SECURITY – Sophisticated and secure operations
The ongoing digitalisation is creating new opportunities across all industries. Robust sensors, secure communication, and advanced video and image processing solutions are enabling the industry to explore new areas and improve operational accuracy without compromising human lives.

MEDTECH – Life-saving technology
Smarter and more innovative products and solutions will transform the enormous healthcare industry into a high-tech sector. Digitalisation of patient records and workflows, data analytics and AI-supported patient diagnostics (patient self-care), advanced simulation and training systems, and robot-assisted surgery are all fast-growing technology areas that will improve healthcare quality and lower the cost of service.

FINANCE & PUBLIC – Flexible, secure and scalable applications and infrastructure
The banking industry has undertaken comprehensive digitalisation, automating as many processes as possible to stay competitive. However, the emergence of fintech companies requires further adaptation in all aspects of the value chain. For several traditional players this means a total remake of their system infrastructure into a modern, software- and cloud-based framework, to offer customers flexible and platform-independent services, and the implementation of artificial intelligence (AI) to support key decision processes.
CONTACT US

Please fill out the form and we will contact you.

Data Respons AS (Corporate)
Sandviksveien 26
N-1363 Høvik
Tel: +47 67 11 20 00

Press contact: Sebastian Eidem, Chief Communications Officer, +47 932 23 964

Sign up for our newsletter and get the latest news on innovative solutions and new technology.

LATEST NEWS

- The "One VM" Concept
- Kenneth Ragnvaldsen – Enabling a Digital Future with Data Respons
- Man vs. machine – a software engineer and his Tesla
- New virtual machine for the cars of tomorrow
- Six sustainable tech projects from 2020
- GraalVM – the Swiss Army knife of virtual machines
- A 3D-printed carbon fiber robotic arm
- Frobese – the strength of being an expert in both IT and banking
- IT Sonix, a Data Respons company, wins contract to develop platform for renewable energy trading
- Data Respons welcomes Frobese to the family!
- Connecting Cranes to The Cloud
- Interrupt Inside 2020 – for the first time in interactive format!
- Monitoring the electric grid for a greener future
- Three software specialists on 5G opportunities
- Personal safety vs. personal integrity
- MicroDoc, a Data Respons company, is re-elected to the Java Executive Committee
- Data Respons has signed the guide against greenwashing
- 5G is a game changer for the military
- Data Respons subsidiary captures Swedish defence contract
- Get to know Guillaume Wolf – the youngest and newest General Manager in the DR family

CONNECTING CRANES TO THE CLOUD

Take a look at the image below. If the only thing you see is a crane with electronics in it, then you're missing the big picture. These cranes are connected to the cloud by a gateway developed by Data Respons Solutions, a key enabler in Cargotec's journey towards digitalisation.

Connectivity is king
Cargotec is one of the world's leading providers of cargo and load handling solutions. Cargotec and its business areas Kalmar, Hiab and MacGregor are well known in the cargo and load handling industry for products, services and solutions that support their customers in ports, at sea and on roads. While Cargotec enables smarter cargo flow with its leading cargo handling solutions and services, it also embraces industry trends like automation, robotics, electrification and other business activities closely connected to digitalisation.

In all of this, connectivity is a fundamental prerequisite. Being able to connect and communicate with all equipment in ports, on ships and on trucks is essential, regardless of where it might be located. Only then can the next step be taken: designing new services and opening new revenue streams from remote monitoring, smart services, predictive maintenance and more. This is where the box comes in.

Across all brands
Cargotec turned to Data Respons for expert knowledge and engineering expertise in how to connect its equipment – and almost all of it, across the entire range of Cargotec brands. Anders Jansson, Sales Director at Data Respons Solutions Sweden, elaborates:

– Initially Cargotec asked us to design a standard gateway to connect the truck cranes manufactured under their Hiab brand. While we were preparing that project, Cargotec went further, focusing more strongly on digitalisation. That decision allowed us to broaden the scope and build a gateway to fit nearly all products across all Cargotec brands. We've designed all the hardware and a large part of the software of the Cargotec Gateway.

Robust and versatile
Its model name is CE-IMX6-01, and with it Data Respons has taken gateway versatility and robustness to the extreme, as required by the customer. The sturdiness of the device is remarkable. It is built to take a severe beating, withstand a pressure washer and resist salt spray. Furthermore, it has an operating temperature range from -40 to +80 degrees Celsius, all of this to enable it to function under extreme conditions.

On top of that, the Cargotec Gateway is multi-lingual in various ways. It speaks a number of tech languages: it can communicate via Bluetooth, wifi and ethernet, and has a 4G modem for connecting to cellular networks. It can also connect from nearly any location around the world. The gateway has been certified for more than 45 regions and connects to a telecom network via an e-SIM card, a dedicated chip with the same functionality as a conventional physical SIM card.
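To give a feel for what such a gateway actually does, here is a small, purely hypothetical Python sketch of the pattern its multiple radios imply: package a set of equipment readings as a telemetry record and push it over the first available uplink, in order of preference. This is not Cargotec's or Data Respons' gateway software; the link names, the link check and the transport below are all placeholders.

```python
# Hypothetical sketch - not the CE-IMX6-01 firmware. Shows the
# publish-with-link-failover pattern a multi-radio gateway implies.
import json
import time

UPLINKS = ["ethernet", "wifi", "cellular_4g"]  # preference order: cheapest first

def link_is_up(link: str) -> bool:
    """Placeholder for a real link check (driver/OS specific)."""
    return link == "cellular_4g"  # pretend only the 4G modem has coverage

def send(link: str, payload: bytes) -> None:
    """Placeholder for the actual transport (e.g. a TLS socket or MQTT)."""
    print(f"sending {len(payload)} bytes via {link}")

def publish_telemetry(equipment_id: str, readings: dict) -> bool:
    """Serialise one telemetry record and push it over the first live link."""
    record = {
        "equipment_id": equipment_id,
        "timestamp": time.time(),
        "readings": readings,
    }
    payload = json.dumps(record).encode("utf-8")
    for link in UPLINKS:
        if link_is_up(link):
            send(link, payload)
            return True
    return False  # all links down: a real gateway would buffer and retry

if __name__ == "__main__":
    publish_telemetry("crane-0042", {"load_kg": 1250, "boom_angle_deg": 37.5})
```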
IoT ecosystem
But ruggedness and global connectivity are far from the only features that make the gateway a major achievement in Cargotec's journey towards industrial digitalisation. Without fitting into the Cargotec IoT ecosystem it wouldn't be worth much, and in this respect Data Respons has taken the role of Cargotec's development partner, with responsibilities beyond the design of the gateway itself. Hans Christian Lønstad, CTO of Data Respons Solutions, explains:

– We've worked closely together with Cargotec engineers and data specialists to find the optimal technical solutions, all the way from crane to cloud. We've worked with Cargotec on a number of issues well beyond the technical issues related to the gateway. Together we've developed an understanding of how to combine hardware and software on different levels to make the IoT ecosystem work as a whole. We've been involved – and still are – as advisors on how the entire system should behave technically to live up to Cargotec's long-term plans and visions for IoT.

Holistic approach
According to Hans Christian Lønstad, this holistic approach has been key to the successful development of the CE-IMX6-01 Gateway, demonstrated by its large production volume. The overall goal is to handle the increasing cargo volumes crisscrossing our globe with as little environmental impact as possible. And since Cargotec serves industries that cover the majority of the world's gross domestic product, this can make a huge difference.

The Cargotec digitalisation vision
Tuomas Martinkallio, Director, Digitalisation, Kalmar Mobile Solutions:

– Digitalisation is one of Cargotec's strategic must-win battles. Our target has been to achieve full connectivity for the equipment we manufacture, across all Cargotec brands. And that's what we've achieved. Digitalisation is one of the key initiatives at Cargotec, and currently connectivity is available for 99 percent of our equipment.

– Digitalisation enables new business models and service offerings, and our goal is that 40 percent of our net sales should come from software and services. Connectivity is a key tool for introducing new kinds of digital products to our customers, so that we can enhance their operations and safety. What we aim to do is offer new kinds of services to the market. We want to develop real business on top of connectivity and data. Connectivity is nice, but it is not a value in itself. The magic happens when we can add value based on the data.

– For instance, Hiab has developed HiConnect, which offers equipment owners real-time data about their equipment's operation and condition. Kalmar has Kalmar Insight, a performance management tool for cargo handling operations.

– The Cargotec Gateway is the crucial element here. We needed a flexible gateway that could fit nearly all our products, cope with harsh environments and connect globally. It also had to be cost-effective. And as we are market leaders, it had to meet the same high quality standards as our basic products.

– The businesses under the Cargotec umbrella are quite diverse. That means we sometimes have needs that are total opposites of each other. First of all we needed a partner that could understand our need for flexibility and provide us with the right technical solution for it. That is why we chose Data Respons.

– With the technical infrastructure in place, we are now working with the data we are collecting. We use the data for analytics and to add more value on top of it. We really feel the data is valuable, and we are getting a good understanding of what is happening with our equipment. In that area we're really strong, in my opinion. When we combine our R&D understanding with the actual operational data, we can create real value for our customers.

BY: Arne Vollertsen for Data Respons

SMARTER SOLUTIONS FROM INSIDE

Data Respons develops and delivers custom solutions by combining engineering services with standard embedded computer products from leading partners. We are involved throughout the entire process, from specification and development to volume deliveries and next-generation issues.

Specifications
Tell us about your requirements, and we will help you develop a solution adapted to your surroundings. Specifications may include parameters for conditions such as temperature fluctuations, vibration or humidity. In addition, special adaptations such as form factor, performance requirements or industry certifications may be required.

Customised development
The development of embedded solutions is tailored to each individual customer and project. Data Respons can help customers at every stage of design, development and production, from prototype to serial delivery. This demands a system perspective on embedded solutions and knowledge of customised HW/SW development, choice of technology platform and suppliers, preparation of test documentation, and certification of prototypes.

Next generation
Next generation is about helping our customers prepare for further development. An embedded solution normally involves a fair number of components from several suppliers, and we keep our customers up to date with regard to changes, upgrades and end-of-life issues. In addition we advise on alternative solutions and next-generation technology.

Prototyping, testing and industrialisation
Prototyping, testing and industrialisation are part of our operational capabilities.
Data Respons has comprehensive local testing facilities and cooperates closely with major test laboratories, enabling us to test products and solutions according to specific customer needs. We provide our customers with quality assurance, predictability and traceability. We can handle all necessary paperwork in terms of shipping and customs, or ship products directly to the customer or the end-customer.

Our test facilities
Running into test problems at an accredited lab can be a very costly affair. Detecting, eliminating and fixing compliance issues at an early stage can not only reduce costs and time to market, it can improve your chances of passing fully accredited certification tests. Data Respons Solutions has the necessary equipment for pre-testing in accordance with customer functionality. Our team of experienced engineers performs pre-certification tests for you, within sectors such as automotive, consumer, marine, medical, defence, subsea and telecom.

"If your product or system passes our pre-compliance tests, it's a good indication that it will pass fully accredited certification tests," says Ingvild Johansen, OEM Solution Department Manager.

In the event that a product or system fails to pass the tests, we can provide rapid assistance in identifying the problems, as well as access to the resources needed to resolve them. Data Respons Solutions also has expertise and equipment for development tests and pre-certification tests for consultancy services.

Pre-test benefits:
- Detect, eliminate and fix issues at an early stage
- Reduced cost
- Improved chances of passing accredited tests
- Make it to market

Innovation
Access to the primary region for embedded technology in the world, highly skilled expertise and work across different industries drive our innovation.

Rugged solutions for extreme conditions
In tough, challenging environments computer equipment must have special qualities in order to function optimally. This is especially important if the system must meet the demands of classification companies.

Bridge to Asia
Data Respons has established long-term strategic partnerships with our global partners, primarily located in Asia.

QUALITY, ENVIRONMENT, OCCUPATIONAL HEALTH AND SAFETY

Quality mission
Our mission is to strengthen our customers' competitive power by providing the best solutions. We will continuously improve in order to provide R&D services and solutions that fulfil or exceed the expectations of our customers, employees, partners and other relevant interested parties in terms of quality, ethical and social conduct, and long-term sustainability.

OHS policy and mission
We shall work systematically to prevent injuries and sickness among our employees. Our goal is to ensure a safe work environment for all our employees. We shall strive to continuously improve our OHS rules and ensure that they comply with applicable law and regulations in the countries we operate in. Data Respons shall set clear goals against which we will be measured regularly. OHS is a management responsibility, and each manager is responsible for meeting Data Respons' OHS goals and ensuring that work is executed according to our OHS rules and regulations. For us to meet our OHS goals, each employee is responsible for working within Data Respons' rules.

Environmental policy & targets
Data Respons aims to conduct technology projects contributing to a more sustainable world, especially those making the world greener, stronger, smarter and more equal.

- We shall, as a company, comply with all relevant environmental legislation as well as Data Respons' internal environmental requirements.
- We shall design and deliver solutions that comply with all relevant environmental legislation, environmental requirements from customers and other interested parties, and Data Respons' internal environmental requirements, and make sure that products and solutions can be recycled or disposed of safely at the end of product life.
- We shall select transportation of goods and people with knowledge of, and a goal of contributing to, reduced pollution and CO2 emissions. This includes extensive use of new communication technology to reduce unnecessary business travel.
- We shall contribute to product development embracing technology that addresses and solves environmental challenges.
- We shall continuously improve our processes in order to prevent pollution and secure sustainable operations.

ISO certification
The trend towards serial deliveries of embedded solutions creates new requirements for quality assurance. ISO certification acts as a safeguard and a guarantee that the customer receives a high-quality product or service. In this sense, quality assurance is an internal tool which ultimately benefits the end-user.

Downloads: Data Respons' certificates · Data Respons Code of Conduct · Data Respons Supplier Conduct Principles

OUR HISTORY

Data Respons is truly a growth company with a strong customer focus and a technology-driven culture. The company has grown from 50 million NOK in 1998 to almost two billion 20 years later, through a combination of robust organic development and selected acquisitions, corresponding to an annual growth rate of 17%.

Throughout our history we have made tough priorities in order to arrive at the robust platform we have today. This includes the establishment of new offices to secure proximity to customers, a drive for internationalisation, smaller and larger acquisitions, and the recruitment of great talents, but also changes of strategy, restructuring, and the closing down of businesses that did not perform as desired. However, we have never given up on a customer development project – regardless of complexity – in the company's 30 years of history. Our employees have always been our most important asset, and our core values have always been the same: taking responsibility, performing, being generous and having fun.

1986 – 1999

1986: Data Respons is established at Høvik in Norway, with the main business idea of engineering advanced specialist system solutions based on open standard technology, operating as a project organisation working for leading industrial companies and start-ups.

1998: Established as a leading provider of embedded solutions in Norway, with revenues of 50 million NOK. Focus remains on engineering customised computer solutions for customers in the defence, telecom and maritime industries – assignments that are challenging both technologically and in terms of environmental requirements.
1999: Expanding footprint in Norway through the establishment of offices in Bergen and Trondheim. First expansion outside Norway through the establishment of Data Respons in Denmark.

2000 – 2005

2000: Continued expansion in Norway with an office in Kongsberg, an important industrial cluster with a high density of engineers. Expansion to Sweden through the establishment of Data Respons AB.

2001: Continued strong growth, with revenues surpassing 100 million NOK, and a break-through in Sweden with a major defence contract with SAAB. Data Respons ASA is listed on the Oslo Stock Exchange under the ticker DAT.

2002: Data Respons OY is established with an office in Helsinki (Finland), giving presence in all Nordic countries except Iceland. The group now employs 125 people across 7 offices in Norway, Sweden, Denmark and Finland.

2003: Growth continues, with revenues surpassing 150 million NOK. Data Respons is acknowledged as a leading player in the Nordics, with customers such as Ericsson, Kongsberg Group, VMETRO, ABB, Brüel & Kjær, Volvo and SAAB.

2005: Kenneth Ragnvaldsen is appointed CEO. The business is restructured to improve margins, a new management team is recruited, and growth outside Norway reaches 52%. A new vision, "Leading in embedded solutions in Europe by 2010", is established (Data Respons was the first company to use the term "Embedded Solutions").

2006 – 2010

2006: Embedded Solutions continues to drive growth, accounting for almost 70% of the business. The first step towards becoming a leading player in Europe is taken through the establishment of Data Respons in Germany during the first quarter. Rune Wahl is appointed CFO. First bolt-on acquisitions completed: Certified Computer Technology (CCT), a supplier of advanced computer equipment to the maritime industry, and Centrex, a niche player with specialist competence within telecom. The Data Respons integration centre is established to strengthen competitiveness and the capacity to deliver larger customised embedded solutions. The company continues to grow, reaching close to 400 MNOK in revenues. FPGA specialist company Digitas AS is acquired, further strengthening the services offering in Norway. The number of employees reaches 226, with approx. 50% in Norway.

2007: The strong growth continues and Data Respons surpasses 0.5 BNOK in revenues, reaching 635 MNOK. The number of employees amounts to 376, up from 226 in 2006. Throughout 2007, Data Respons continues to improve its local presence, with a total of 14 offices across the Nordics and Germany, including new locations in Jutland (Denmark), Linköping and Gothenburg (Sweden). A quality and technology centre is established in Taipei (Taiwan).

2008: Sylog AB and Syrén Software AB are acquired, forming the platform for R&D Services in Sweden. At the same time, Sweden overtakes Norway as the market with the most employees in Data Respons. R&D Services continues to grow, accounting for approx. 40% of revenues. Strategic acquisitions include Lundinova AB in Lund (Sweden) and Ipcas in Erlangen (Germany). Two new offices are established, in Stavanger (Norway) and Västerås (Sweden).

2009: Like many other companies, Data Respons is impacted by the financial crisis of 2008 and experiences negative growth for the first time in decades. Several actions are taken to refocus the company.

2010: A new vision is launched: "A smarter solution starts from inside". This is our company's DNA described in one sentence. We truly believe that we can make the world smarter, and we think that this starts from the inside – whether inside the heads of our specialist engineers or in new technology embedded into the world's products and solutions.

2011 – 2017

2012: Sylog is named Sweden's fastest-growing consulting company. Data Respons' positive development continues and profitability improves significantly, enabling the company to pay dividends for the first time since listing – and it has paid an annual dividend to its shareholders ever since.

2015: A strategy shift towards a more software-oriented company is implemented. Data Respons establishes TechPeople A/S as a joint venture with 50% ownership. The strong growth continues, mainly driven by Sweden (48%), which becomes the largest region, followed by Norway (36%). Mega trends like the Internet of Things (IoT) and industrial digitalisation take off. Interest in Data Respons picks up and international ownership increases to 10%. A strategic decision to develop a stronger presence in Germany is anchored in the Board.

2016: Growth continues and revenues pass 1 billion NOK for the first time – a new milestone for the company. In September, Data Respons acquires 100% of the shares in MicroDoc Computersysteme GmbH, a software technology company in Germany with headquarters in Munich, establishing a platform for R&D Services in Germany. The market value of Data Respons increases by close to 40% and investor interest increases significantly.

2017: Sylog AB completes three bolt-on acquisitions to strengthen its position in Sweden. Revenues continue to grow, reaching 20% revenue growth – record revenue again! In March, Data Respons acquires the remaining 50% of the shares in TechPeople A/S. Foreign ownership increases to 58% at year-end.

2017 – 2020

2018: The company enjoys solid performance across all business areas, combined with an industry-wide digitalisation trend, enabling another record year. In addition, Data Respons welcomes 125 new specialists through the acquisitions of Germany-based IT SONIX and XPURE, two leading R&D companies with niche software technology know-how.

2019: An all-time high for Data Respons, contributing to 17% annual growth over the last 19 years. The group represents more than 1,400 specialists and has multiple operations across Germany and the Nordics. Data Respons also acquires Donat Group GmbH, a German R&D Services company headquartered in Ingolstadt with 140 employees, and inContext AB, a Swedish R&D Services company located in Stockholm with 80 employees.

2020: AKKA Technologies acquires all of the shares in Data Respons at an equity value of NOK 3.7 billion. The acquisition creates Europe's largest digital solutions powerhouse, able to address the high-volume and fast-paced growth in the digital market. Data Respons also launches Data Respons France; located in Paris, the company is able to access much of the European continent and support our parent company, AKKA Technologies, and its customer base in France.

2021 –

2021: Data Respons acquires Frobese GmbH, a cooperative and successful team of experts specialising in consulting for banks and insurance companies. Frobese is located in Hanover and has 96 employees.

SUSTAINABILITY THROUGH TECHNOLOGY

We believe technology development is vital to enable a sustainable future! Data Respons strives to explore technology projects contributing to a more sustainable world, especially those making the world greener, stronger, smarter and more equal.

- Smart farming: automated precision feeding station promotes sustainable livestock production
- Smart radon detector helps reduce the risk of lung cancer
- Intelligent street lighting in Copenhagen
- Safety at sea with wireless sensor technology
- Energy efficiency through digitalisation
- Scanning for epilepsy using smartphones
- Reliable control system for Danish recycling system
- Helping save lives with technology
- Reducing waste with reverse vending
- Fighting the Pacific oysters with an optical robot
- Smart, green and affordable charging
- Intelligent energy management
- The classroom robot that lets children be at school even when they can't go
- IoT-based solution for innovative energy management
- 20 blood tests in a day without hospitalisation
- Sensors keeping people safe

SMART, GREEN AND AFFORDABLE CHARGING

The Easee charging robot lets you power up to three electric cars simultaneously – smart, green and affordable. With an Easee charging robot, you get access to groundbreaking technology that charges your electric car when power prices are at their lowest. This is one of several Data Respons projects within electric vehicle powering, fuel cells and smart chargers.

Specialists from Data Respons contributed to the project with hardware development and software enabling wireless communication between the charging robot and the car. The Easee charging robot enables affordable charging of up to three electric vehicles simultaneously, promoting clean and affordable energy and a more seamless shift towards zero-emission transportation.
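As an illustration of what "charges your electric car when the power prices are the lowest" can mean in its simplest form, here is a short Python sketch that picks the cheapest hours from a day-ahead hourly price list. Easee's actual charging logic is not described here, so treat both the function and the prices as assumptions.

```python
# Illustrative sketch - not Easee's algorithm. Pick the cheapest hours
# of the day that together give the car the charging time it needs.
def cheapest_hours(prices_per_hour: list[float], hours_needed: int) -> list[int]:
    """Return the hour indices with the lowest prices, in chronological order."""
    ranked = sorted(range(len(prices_per_hour)), key=lambda h: prices_per_hour[h])
    return sorted(ranked[:hours_needed])

if __name__ == "__main__":
    # Hypothetical day-ahead spot prices in EUR/kWh, index = hour of day.
    prices = [0.31, 0.28, 0.22, 0.18, 0.17, 0.19, 0.25, 0.34, 0.40, 0.38,
              0.33, 0.30, 0.29, 0.27, 0.26, 0.28, 0.32, 0.41, 0.45, 0.42,
              0.37, 0.33, 0.30, 0.27]
    # The car needs 4 hours of charging before morning:
    print(cheapest_hours(prices, hours_needed=4))  # -> [2, 3, 4, 5]
```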
SENSORS KEEPING PEOPLE SAFE

XeThru technology is used in sensors that detect presence, distance and motion. In addition, it can monitor vital signs like pulse and breathing, meaning it can be used to discover sleeping disorders and to monitor a patient's vital signs from home. XeThru is a technology that can improve people's quality of life, personal comfort and safety.

Specialists from Data Respons R&D Services contributed directly to Novelda's development teams with in-depth software expertise and programming resources. The XeThru technology enables early discovery of certain disorders and allows elderly people to be cared for at home, benefiting from a respiration monitoring device that keeps a check on vital signs.

INTELLIGENT ENERGY MANAGEMENT

Smartly develops and delivers simple and effective solutions for energy management. The Smartly energy system is a simple and effective solution for energy management for housing companies and businesses, covering measurement, reporting and management of energy consumption.

Specialists from Data Respons contributed to the project by stress testing the product, identifying issues by provoking failure, and making sure the safety features installed work as designed. The Smartly energy system enables energy to be distributed wisely and helps people monitor their costs and adjust their use accordingly.

SAFETY AT SEA WITH WIRELESS SENSOR TECHNOLOGY

ScanReach is a maritime IoT company developing wireless connectivity platforms enabling personnel and asset control in complex and confined steel environments like vessels. Data Respons R&D Services have assisted ScanReach in developing their product In:Range, a system that tracks the location of crew members on board a vessel in an emergency situation. Every crew member wears a signal tag which responds to sensors in the ship's interior, communicating with the bridge and the nearest rescue station.

Specialists from Data Respons R&D Services have helped ScanReach with industry expertise, hardware and mechanics within the system, as well as certification for maritime use and EX approval. Lives can be saved using this system, as delays in identifying and locating missing personnel lead to further delays in mobilising assistance and providing potentially life-saving treatment.

20 BLOOD TESTS IN A DAY WITHOUT HOSPITALISATION

Fluisense's wearable and fully automated blood sampling system, Fluispotter®, is a unique new technology for the collection and storage of up to 20 serial blood samples in 20 hours with minimal stress for the patient. Fluispotter is programmable and collects the samples according to individually designed sampling schedules decided by the user. Drops of blood are dispensed onto a roll of special paper at a requested rate and can subsequently be analysed one by one.

Consultants from a Data Respons subsidiary have primarily developed the brushless engine unit, the blood circulation control in the catheter, and the valve that opens and closes for access to blood sampling. They have also developed and designed the user interface for programming the device, and contributed to the preparation of requirements specifications, tests, software audit control, etc. The Fluispotter automated blood sampling system will enable better medical research and more accurate diagnostics. The device also contributes to more sustainable healthcare, as it enables this type of blood monitoring without hospitalising the patient, saving money and hospital beds.
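To give a feel for what an "individually designed sampling schedule" could look like in its simplest form, here is a small Python sketch that spreads a requested number of samples evenly across the 20-hour window mentioned above. It is purely illustrative, not Fluisense's firmware; the even spacing is an assumption, since the real schedules are designed by the user.

```python
# Illustrative sketch - not Fluisense's firmware. Spread up to 20 samples
# evenly across a 20-hour window, one drop of blood dispensed per slot.
from datetime import datetime, timedelta

MAX_SAMPLES = 20
WINDOW_HOURS = 20

def build_schedule(start: datetime, n_samples: int) -> list[datetime]:
    """Return the sampling times, evenly spaced across the window."""
    if not 1 <= n_samples <= MAX_SAMPLES:
        raise ValueError(f"supports 1..{MAX_SAMPLES} samples")
    step = timedelta(hours=WINDOW_HOURS / n_samples)
    return [start + i * step for i in range(n_samples)]

if __name__ == "__main__":
    for t in build_schedule(datetime(2021, 3, 1, 8, 0), n_samples=10):
        print(t.strftime("%d.%m %H:%M"))  # one sample every two hours
```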
OUR COMPANIES

MICRODOC | GERMANY – Advanced software development, digitalisation and IoT
MicroDoc is a technology-oriented company with more than 60 specialists in SW development, Java and system design, as well as SW solutions for IoT, mobile/network infrastructure and embedded applications.

IT SONIX & XPURE | GERMANY – Niche providers of specialist services and SW technology
IT SONIX and XPURE are leading niche providers of specialist services and SW technology (Java, Embedded, Cloud and AI) specifically aimed at "Connected Car" solutions, the internet of things, mobile services and embedded applications.

EPOS | GERMANY – Automotive IT and computer-aided testing (CAT)
EPOS CAT designs, develops and operates tailor-made software solutions to support and optimise customers' business processes, mainly targeting the automotive industry.

SYLOG | SWEDEN – Specialist consultants in system and SW development, technology and IT
Sylog's customers are world leaders in telecom, automotive, defence, medtech, finance, the media and the gaming industry. Passion, knowledge and freedom are Sylog's keywords.

DATA RESPONS SOLUTIONS | NO/SE/DK/DE – Smart embedded and industrial IoT solutions
Data Respons Solutions designs, develops and delivers smart embedded and industrial IoT solutions by combining specialist engineering competence with standard embedded components from leading technology partners.

DATA RESPONS R&D SERVICES | NORWAY – A complete technology partner from sensor level to the mobile application
Data Respons R&D Services provides specialist services through development projects, consulting services and technology consulting.

DONAT IT | GERMANY – Specialised software services within the mobility sector
Donat IT is a leading niche provider of software solutions and specialist services within software development and architecture, system integration and test management, as well as business-critical R&D IT services.

INCONTEXT | SWEDEN – Interconnect, autonomous systems and embedded software
inContext is a fast-growing R&D services company that specialises in interconnect, electrification, embedded SW technology, mechanical design and project management.

TECHPEOPLE | DENMARK – A highly specialised consultancy company with expertise in embedded and IT solutions
A leading technology partner – from system architecture, mechanical and HW design to software and application development and communication solutions for embedded and IoT solutions.

DATA RESPONS FRANCE | FRANCE – Specialists within advanced software development, digitalisation and IoT
The technological complexity is increasing as more sensors and units are connected, enormous amounts of data are collected and analysed, and systems are integrated both at the edge and in cloud-based platforms, whilst maintaining end-to-end security.

FROBESE | GERMANY – Specialists within consulting for banks and insurance companies
Frobese focuses on business expertise, project management, meeting quality standards and software development.

LIFE-SAVING TECHNOLOGY (MEDTECH)

Developing devices and systems within the medical and life sciences requires an understanding of, and experience in, living up to tough criteria on EMC/ESD design, as well as certification and documentation in compliance with government regulations. Data Respons has delivered several certified systems, applications and simulation equipment to leading players in the medtech industry.

+38%: growth in the IoT-based healthcare market from 2015 to 2020
75%: share of patients who expect to use digital services in the future
$38bn: global medical education market in 2024 (+4.3% annual growth)

Applications & experience:
- Image classification using artificial neural networks (AI)
- Redesign of diagnostic equipment for certification processes
- Control and surveillance of medical devices and appliances
- Diagnostic screening of blood and other bodily fluids
- Wireless medical applications
- Remote-control training equipment for medical simulation, requirements and test
- Modular system: development of a documentation solution for a scanner application

Related: Scanning for epilepsy using smartphones · Medtech contract in Norway · Helping save lives with technology · 20 blood tests in a day without hospitalisation

SMART RADON DETECTOR HELPS REDUCE THE RISK OF LUNG CANCER

According to the Norwegian Cancer Society (NCS), radon is a contributing cause of 370 cancer cases in Norway each year and the second most common cause of lung cancer after smoking. The Norwegian technology company Airthings has created a range of smart real-time radon detectors which let you monitor your fluctuating radon levels over time.

Smart detectors allow for continuous measurement of fluctuating radon levels, in contrast to the traditional approach of charcoal meters that had to be analysed in a lab. Measuring radon digitally and continuously is typically more accurate and reliable. The data provides a richer understanding of the gas levels in your property, so that you can take more effective action if needed. The detectors play an important role in decreasing the number of radon-related cancer cases.

Specialists from Data Respons R&D Services contributed with expert SW knowledge throughout the product range. SDG 3: the solution helps homeowners and businesses detect rises in radon levels, enabling them to take action early and protect themselves from exposure that can lead to cancer.

HELPING SAVE LIVES WITH TECHNOLOGY

Laerdal is a major manufacturer of medical equipment and medical training products based in Stavanger, Norway. Their mission is to help save lives through medical technology. Data Respons has been working with Laerdal Medical for nearly a decade, delivering IoT solutions including wireless handheld controllers used to simulate training scenarios and various control units placed inside the simulator. We contributed with full hardware development of both units, as well as the firmware securing connectivity between the mannequin and the tablet.

"One of the most rewarding things in working with Laerdal is taking part in their vision to help save lives – from simulators for premature infants to pre-hospital and in-hospital emergency care providers." – Terje Jensvik, Technical Manager, Data Respons Solutions

Additionally, Data Respons R&D Services has recently helped Laerdal explore new ways wireless technology can optimise their products' user-friendliness. The training system enables high-quality education and training for all levels of medical staff and students in medical institutions worldwide. The solution helps save lives, as hospital staff and students are trained in a safe yet realistic and practical environment, with up-to-date training material ensured through the cloud solution.

INTERRUPT INSIDE – AN IN-DEPTH MAGAZINE ABOUT EMBEDDED TECHNOLOGY, IOT AND DIGITALISATION

Here you can read in-depth technology articles written by Data Respons' own R&D specialists and published in the magazine Interrupt Inside. The full magazine is available for download in PDF format (editions 2014-2018).
Articles:
- The "One VM" Concept
- New virtual machine for the cars of tomorrow
- GraalVM – the Swiss Army knife of virtual machines
- Connecting Cranes to The Cloud
- Interrupt Inside 2020 – for the first time in interactive format!
- Monitoring the electric grid for a greener future
- 5G is a game changer for the military
- Controlling the power needed to de-ice drones
- No Internet of Things without strong cyber security
- Electrification and autonomous driving – the mega trends pushing the boundaries of wire harness design
- Agile Teams – What are the benefits behind the buzzword?
- Greener electronics, yes please, but how?
- The 2020s: the decade of software-defined mobility
- Software-driven cost cutting and performance optimisation of wind turbines
- The Internet of Insured Things – IoT platform for preventive monitoring
- The optimal toolbox for Open Source Development
- Rapid development and beyond with Oracle APEX
- Code quality assurance with PMD – an extensible static code analyser for Java and other languages
- Atlassian Suite: tools for every team and more agility in projects
- Bringing the Internet to the Internet of Things
- Data logging & autonomous vehicles
- How optimal is your approach?
- Autonomous cable survey with magnetometers
- Developing an emergency communication device for disaster relief work
- Performance-aware energy-efficient data cache accesses
- Improving motion control in a bipolar printer
- Automotive: An industry in change
- A shortcut to embedded SmartMesh networks
- EnergyBASE: IoT-based solution for innovative energy management
- Fake?
- Agile System Modeling
- Distributed monitoring & control using DDS
- Drones and wireless video
- End-of-life (EOL)
- SoC FPGA Evaluation Guidelines
- Processors for high temperature applications
- Pros & cons of using the STM32CubeMX code generation tool instead of manually writing drivers for an ARM Cortex-M microcontroller
- FEM Modelling
- Industrial Connected Things
- Device specific power consumption control

In hindsight, the innocence of our shock at such simple component deception seems almost quaint. We were stunned by the sheer audacity of the fraudsters and did not realize that hawking empty capsules as real components is actually one of the more benign forms of counterfeiting. Counterfeit electronic components, at that time an almost unknown issue, would accelerate over the following 10 years to become a visible and acknowledged problem, with thousands of reported incidents in 2005, which in turn increased another 300% by 2008.
The surge of counterfeit components around the turn of the millennium is closely related to fundamental changes in the electronics component supply chain at and around that time. The admittance of China into the World Trade Organization (WTO) in 2001 resulted in the lifting of export bans for non-governmental entities. A surge of manufacturing outsourcing and the development of global shipping shifted the manufacturing centre of gravity to Asia, specifically China, a region with weak protection and understanding of intellectual property, creating distance between the OEMs and their supply chain. Somewhat earlier, major efforts to establish responsible e-waste handling led to a massive export of hazardous waste, in the form of discarded electronics, to China and other developing countries, creating a substantial industry centred on e-waste recycling. This industry, intended for the recovery of precious metals from electronic assemblies, became a growing source of reclaimed electronic components.

Types of Counterfeits

The word counterfeit evokes associations of unauthorized copies. An early and famous case affecting thousands of computer motherboards involved a capacitor electrolyte made from a formula first stolen, then corrupted, which caused the capacitors to burst and the computers to malfunction. The case alone cost the computer makers more than USD 100 million. However, making copies, now specifically termed cloning, is just one of many ways of creating counterfeit parts, and not even the most common. Other major sources of counterfeits are excess inventory improperly disposed of, legitimately produced parts rejected by the test process, legitimate parts re-marked and re-labelled as parts of better performance, and the aforementioned empty capsules. But the most common, and perhaps most sinister, counterfeits are parts reclaimed from used and discarded electronic products, primarily in Chinese backyard operations. The boards are typically heated over open fire to as much as 400°C (far higher than the approved rated reflow temperature) to liquefy the solder, then hit and thumped against the concrete floor until the parts fall off. After sorting and cleaning in whatever water is available at the site, the top markings are ground down and a new topcoat is applied before the parts are marked, labelled, packaged, and reintroduced as fresh parts through the grey market. This group constituted an estimated 80-90% of the component counterfeit market in 2012, which in turn was assumed to be 8-10% of the total electronic components market, representing an annual revenue loss of USD 7-8 billion to the semiconductor industry.

Risks and consequences

However, this is only a fraction of the overall cost counterfeit components represent to society, albeit maybe the only one that is close to quantifiable. Correcting a problem caused by a counterfeit component, once detected, may exceed the value of the component by orders of magnitude. A counterfeit component that is not detected may, in the worst case, cause serious loss of infrastructure and endanger people's lives and safety. A salvaged waste component has already spent an unknown and possibly significant percentage of its useful service life before being recycled. Add to that an unknown, and possibly inappropriate, service history, and the fact that the part comes from a board that was discarded, and it is clear that reclaimed electronic components can only be legitimately used in non-critical applications.
However, the vast majority of reported counterfeit incidents are in the military and aerospace segments, and include cases involving safety- and mission-critical systems. It is clear that these segments are particularly susceptible to counterfeits, and not merely better at discovering incidents. Since the reported discovery in 2011 of counterfeits in major military systems like the F16 fighter jet, and the realization of the risks they represent, the attention on fighting counterfeits has been intense, far greater than in regular commercial markets. The test and qualification regimes of the defense sector, along with the consciousness of the consequences of failure, contribute to superior detection of substandard quality. However, defense and aerospace, with product lifetimes spanning several decades, are especially mismatched to component life cycles of a few years, and rely on a steady supply of components that are in effect obsolete. These hard-to-come-by parts are most easily found in the grey market. Hence, the problem of counterfeit components is closely related to the ever-mounting problem of obsolescence and life-cycle management (see the article in the previous issue of Interrupt Inside).

On the face of it, avoiding counterfeit components should be simple: buy components directly from component makers and reputable authorized distributors only, and you have no problem. Not until you need a part that is unavailable through those channels, that is. Considering a complex military system like a fighter jet or a helicopter, adding to it variants, upgrades and maintenance, it is obvious that the supply chain is extremely large and convoluted, involving sub-contractors with sub-contractors at multiple levels. Each of them battles its own difficulties with obsolescence, lead times, delivery pressure and cost, with varying levels of maturity and control in handling the parts supply, not to mention ethics. The temptations to make use of the grey market are manifold. A vendor of a sub-component, pressured and committed to a delivery date but missing a handful of critical components, gets instant relief from a smaller broker. An EMS provider, cut to the bone on price by his customer, sees the opportunity of recovering some of his profit by procuring the most expensive parts from another friendly broker. The appearance of counterfeit components in military planes is no mystery, once you understand these dynamics.

Fighting counterfeits

Early topcoats could easily be removed with an acetone wipe, and date and lot codes printed on the components themselves were often incorrectly formatted relative to the specifications from the vendor. Fakes were therefore relatively easy to detect once looked for. However, counterfeiters are steadily getting better at what they do, so the technology to detect frauds must improve as well. Companies are, as an example, working on using botanical DNA to mark chips, and the use of RFID tags has long been considered, but the long-term impact of improved marking will only be relevant for cloned components. To aid detection, IPC has developed inspection training and certification for detection of counterfeit components. One must assume that frequent re-education is necessary. Several automated test and inspection systems targeting detection of counterfeit components are also about to hit high-end markets. Parallel to improvements in process and technology, the distribution of counterfeit components is fought in American courtrooms.
In response to relatively recent definitions of new offences introduced into law, motivated by the appearance of counterfeit parts in defense systems, the FBI has stepped up its investigation of component fraud, and several American brokers have been subject to high-profile prosecution and sentenced to lengthy incarceration. The message is clear: if you supply US defense companies you need to be sure that all components are genuine, or face charges and prison terms. The original "manufacturers" and distributors of the counterfeit parts are of course still out of reach.

Staying Safe

Inspection and prosecution aside, getting on top of the counterfeit component problem necessitates getting control of the supply chain. Traceability from manufacture through distribution and assembly is indispensable for any OEM or sub-system manufacturer who wants to be confident that their product is clean of counterfeits. In this context too, coordinated industry responses are important, such as SAE International's standard AS5553 for procurement of electronic parts, directly motivated by the volume of fraudulent parts in the supply chain.

Counterfeit products are not limited to electronic components. Fakes, primarily clones, are widespread in all markets. Every year more than 100 million fake phones are put in circulation. Fake ball bearings, car parts, cables, network servers, safety textiles and vehicle airbags are among the many well-known and severe examples of counterfeits discovered. Considering the profits involved, the fragmentation of the supply chain and the many pressures on manufacturers, there is little reason to expect the fight against counterfeiting to be won outright. It appears that the only path to successful avoidance of counterfeit components goes through solid life-cycle management. The link between counterfeit component avoidance and obsolescence management cannot be overstated, and actions taken to avoid "distress procurement" are also actions to keep fake components out of the factory.

Department of Defense definition:
"An unlawful or unauthorized reproduction, substitution, or alteration that has been knowingly mismarked, misidentified, or otherwise misrepresented to be an authentic, unmodified electronic part from the original manufacturer, or a source with the express written authority of the original manufacturer or current design activity, including an authorized aftermarket manufacturer. Unlawful or unauthorized substitution includes used electronic parts represented as new, or the false identification of grade, serial number, lot number, date code, or performance characteristics."

SAE International Standard AS5553 definition:
"A fraudulent part that has been confirmed to be a copy, imitation, or substitute that has been represented, identified, or marked as genuine, and/or altered by a source without legal right with intent to mislead, deceive, or defraud."
Agile Teams – What are the benefits behind the buzzword?

BY: Linda Søgaard, Data Respons R&D Services

How can we best respond to change, and how do we deal with uncertainty? In 2001 a group of software developers got together to address this issue – the solution was a manifesto and 12 principles.

Agile is the ability to create and respond to change. It's about understanding the environment and situation you're in, identifying the uncertainty in front of you and figuring out how to adapt to it. Agile teams focus on close interaction between the people involved, and the methodology uses both incremental and iterative development. Agile software development is an umbrella term for solving problems through collaboration between self-organizing and cross-functional teams, with a strong focus on people, how they work and how they collaborate.

The Agile Manifesto

In 2001 seventeen developers sat down together to address issues that software developers faced. The result was a manifesto with four core values and 12 principles, which provide guidance on how to respond to change and deal with uncertainty. Based on the manifesto, the developers formulated 12 principles – the guiding practices that help teams apply the agile method.

The Scrum Master

An agile team usually consists of a few people with different backgrounds, knowledge, skills and experience. Within agile methodology there are several frameworks; one example is Scrum. At Sylog, Scrum is used in almost every project. Björn is a Specialist Software Engineer at Sylog, and a Scrum Master.

– When you're putting together a team, it's important to think about several things. One is to have all the knowledge needed to solve the project within the team; at the same time you want people with different skills and experience. The members of a good team also complement each other and are specialists together. Secondly, it is important to have people who are helpful and who can take constructive criticism. When working agile you must be flexible and adapt to change, because the customer can add, delete or change requirements continuously, and sometimes you will be told to do something differently from what you have already delivered.

Working agile with Scania

Scania is a major Swedish manufacturer of commercial vehicles – big trucks, buses and engines – and related services, and a world-leading provider of transport solutions in more than 100 countries. For ten years Björn has worked with agile teams, and he is currently working as a consultant at Scania, leading an agile team there.

– Every day at the same time we have a daily meeting called the pulse meeting. Here we bring everyone up to date, and each team member states what they have worked on, what's next and whether they have any issues that need support.
One of the things that sets agile teams apart from other methods is that the customer has an active role as the project owner and is closely involved in the project and the team, participating in meetings, decisions and progress.

– For Scania, we have demonstrations every third week where we brief them on what we have done and the road ahead. This is an effective way of working with a project, and by having the customer so involved, we can easily change what we have done (or continue developing if they are pleased) and deliver more precisely on the expectation. After the demonstration we plan for the next three weeks. In this way, the customer gets more value for the money and can save time by getting what they want right away. And we avoid the situation where the customer discovers late in production that the end result is not what they wanted, or that it doesn't solve the real issue.

Reducing time and costs

Agile teamwork is a more flexible way of working than other methodologies. The process reduces time to market and allows for closer and better communication with the customer. Scania is a diligent user of agile teams, and Sophie Höglund, Head of System Development Service Application at Scania, says her department has worked agile since 2015.

– Working agile gives several benefits. When you plan the sprints, you shorten the delivery arrangements, which benefits all parties. Seen from the customer side, working agile does not affect the price tag directly, but it does affect what you get out of it. By working agile you get the breadth of a team, as well as safety (through a team sharing responsibility), efficiency and flexibility. By working agile, you as a customer get to be a key part of the team, and you have the opportunity to contribute and make changes along the way.

For the consultants, working agile means freedom under responsibility – they can test their thinking, learn by experimenting, and at the same time deliver results to the customer and get feedback right away. Additionally, as a consultant you often don't get to work closely with your colleagues, but in agile teams you can collaborate and learn together with your team members. This way you get to know your colleagues better, and the company gets a stronger professional and social environment.

The digital journey

Through the industrial revolutions the automotive industry has been transformed: from basic vehicles in the 1800s to smart and connected vehicles in the 1900s, and now to vehicles with IoT and IoE systems. Sophie has worked in the automotive industry for a long time and has seen these changes herself.

– Software has provided new opportunities. Scania has gone from delivering big trucks and buses to becoming part of a transport system, and we are now one of the bigger employers of IT competence in Stockholm. Digitalization has provided many opportunities – such as being connected, getting more information, better understanding the customer's needs, and understanding how everything fits together so we can optimize in different ways. Everything from apps that give the customer information about the vehicle's status, to how the logistics centre is able to see the trucks on their routes, and of course all the new digital support we may offer our service network – the area my department is focusing on. All this is now possible thanks to software. We are in a time where software is more important than ever before. The fourth industrial revolution is here, and the future is getting smarter every day.
THE AGILE MANIFESTO

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

The 12 principles:
1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
2. Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
4. Business people and developers must work together daily throughout the project.
5. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
6. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
7. Working software is the primary measure of progress.
8. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
9. Continuous attention to technical excellence and good design enhances agility.
10. Simplicity – the art of maximizing the amount of work not done – is essential.
11. The best architectures, requirements, and designs emerge from self-organizing teams.
12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

End-of-life (EOL)

BY: Haldor Husby, Principal Development Engineer, Data Respons

It happens with increasing frequency: a maker of electronic components issues a notification to customers announcing "End-of-Life (EOL) and Last-Time-Buy (LTB)" for certain chip components. The LTB date is the last date on which the vendor will accept orders for the parts concerned. Before this date, manufacturers using any of the affected parts in their products must determine once and for all how many parts they need. Upside potential and market erosion are two in a range of uncertain variables to be considered when the same manufacturers attempt to balance the capital cost of buying too many parts against the opportunity loss of buying too few. In market segments with long development cycles, companies may receive the first EOL notices before the product even enters series production.

Moore's law, yet again

At the heart of the problem of component obsolescence is Moore's law and the ever-shrinking dimensions of the transistor, the essential atomic building block of an integrated circuit. Moore's observation that the number of transistors in an integrated circuit doubles every second year has held true for 50 years, and the rate at which the number doubles is only now slowing down. Miniaturisation has brought steady improvements in aspects of performance such as switching speed and power consumption, but make no mistake: the primary driver for – and overriding motivation behind – this rapid technological development has been cost savings. A smaller transistor occupies a smaller area, and in the silicon world, cost is proportional to consumed area. Take Intel's recent transition from a 22nm to a 14nm process, which coincided with their change from the Haswell to the Broadwell processor families. Based on the same microarchitecture as the Haswell processors, the Broadwell processors nevertheless boast a 35% increase in the number of transistors while their die size has shrunk by 37%. Intel did not transition from a 22nm to a 14nm process in order to offer us all better performance and added functionality (important processor performance parameters like clock speed levelled off a decade ago). Their sole motive was to convert the area reduction into a cost reduction. The added functionality was simply thrown in as an incentive to computer makers and consumers to switch to Broadwell as quickly as possible.
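As a back-of-the-envelope check on that motive – a sketch of ours using only the figures quoted above, and ignoring the (higher) wafer cost of a newer process – the shrink alone roughly halves the silicon cost per transistor:

```python
# Back-of-the-envelope cost per transistor after the 22nm -> 14nm shrink,
# using only the Haswell -> Broadwell figures quoted in the article.
# Simplifying assumption: wafer cost per unit area is the same on both nodes.
transistor_ratio = 1.35   # Broadwell: 35% more transistors than Haswell
die_area_ratio   = 0.63   # Broadwell: die size shrunk by 37%

# In the silicon world, cost is proportional to consumed area, so:
relative_cost_per_transistor = die_area_ratio / transistor_ratio
print(f"Cost per transistor: {relative_cost_per_transistor:.2f}x the Haswell level")
# -> about 0.47x, i.e. the area reduction alone halves the cost per transistor
```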
As a process with smaller feature sizes comes on line, a chip manufacturer starts to port its high-volume products to the new process in order to realise the cost reduction it offers and get a return on the cost of developing the process. Low runners are left behind, and as their volumes drop, the manufacturer seeks to limit its obligations to customers and to shut the process down. This is when the manufacturer issues EOL notices. On the one hand, market segments with lower volumes benefit tremendously from this: the incredible volumes of the consumer market support a development model that delivers vast computing power at a very low cost. On the other hand, those segments with product lifetimes of a decade or more must learn to operate with a supply chain geared for product lifetimes of only one or two years.

Reactive approach

Obsolescence is nothing new, but it is becoming more and more serious as a product maintenance problem. What began as an irritation has gradually become a serious burden to many organisations. The fact that this has happened gradually may explain why many organisations still deal with it in a reactive manner. Typically nothing happens until the company receives an EOL/LTB notice. That then triggers a frantic attempt to size up the last order, an activity that must normally be concluded within 180 days. Very few companies have forecasts allowing them to see a decade into the future with any precision, and the many uncertain variables that must be taken into consideration make it an almost impossible task to strike the right balance between opportunity and cost.

[Figure: Haswell (top) and Broadwell (bottom). Die shrink in the transition from a 22nm to a 14nm process (source: Intel)]

Buying and stockpiling components create other problems too. Even when the future need is correctly estimated, components in storage may represent a significant, if not unacceptable, amount of tied-up capital. Reserve charges and an increased reluctance among distributors to hold inventory for more than two years will affect the business case for continued manufacture of the product. Besides, electronic components go stale. After a couple of years in storage, the solderability suffers, leading to a higher yield loss. This is bad enough under normal circumstances, but it is much worse when the lost parts cannot be replaced. Moreover, maverick lots in storage may go undetected for years, causing yield and reliability problems long after the expiry of the guarantee and the end of support from the supplier. A need for parts a long time after LTB may tempt buyers into the grey market, where all the fake components are. Although it can extend product life for a few years, the reactive approach is seldom viable.

Proactive Approaches

To get a handle on obsolescence management it is helpful to categorise parts along these lines:

– Category A: unique, single-sourced parts critical to function (examples are processors and FPGAs)
– Category B: integrated parts which may have complex functionality but are to some extent standardised and have multiple sources (memory and power components)
– Category C: so-called chiclets or popcorn parts – standardised parts available from many suppliers (passive components, logic gates etc.)

Each category may then be associated with an obsolescence risk calculated on the basis of the probability of obsolescence and the consequence if it happens. Once obsolescence is viewed in terms of risk, it ceases to be an unpredictable and devastating "force majeure" and becomes instead something that can be managed with well-established techniques of risk tracking and mitigation. Category A components represent the highest obsolescence risk, which means they must be given priority in a good obsolescence management strategy and tracked more closely than the other categories.
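To make the probability-times-consequence view concrete, here is a minimal sketch of ours (not from the article; the categories follow the list above, but the weights, aging proxy and part numbers are invented) of ranking a bill of materials by obsolescence risk:

```python
# Minimal obsolescence-risk scoring sketch: probability x consequence.
# Category weights and the years-on-market aging proxy are hypothetical.
from dataclasses import dataclass

@dataclass
class Part:
    number: str
    category: str         # "A" unique/single-sourced, "B" multi-source, "C" commodity
    years_on_market: int  # crude proxy for the probability of an EOL notice

def obsolescence_risk(part: Part) -> float:
    """Risk = probability of obsolescence x consequence if it happens."""
    probability = min(1.0, part.years_on_market / 10)      # rises with part age
    consequence = {"A": 1.0, "B": 0.4, "C": 0.1}[part.category]
    return probability * consequence

bom = [Part("FPGA-X7", "A", 6), Part("DDR-8G", "B", 4), Part("RES-10K", "C", 12)]
for p in sorted(bom, key=obsolescence_risk, reverse=True):
    print(f"{p.number}: risk {obsolescence_risk(p):.2f}")  # track the top scores closely
```

Run against a few hundred part numbers, a ranking like this makes the point of the article tangible: only the handful of category A parts at the top of the list need close tracking.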
And while an embedded circuit board may comprise a few hundred individual part numbers, there will typically be no more than a handful of category A components among them. The majority will fall under category C, which may safely be tracked only casually or not at all. The task is suddenly far less daunting. Mitigation, when EOL strikes, will also be different for each category. For category A components – which are by nature unique – there may be little option other than to perform a Last Time Buy to forecast. But for category B parts the primary mitigation technique is to search for and evaluate replacement parts. When this is performed up-front, resulting in an Approved Vendors List (AVL), the obsolescence of parts has no particular consequence and may be handled as part of the normal procurement and manufacturing activity.

Another element of proactive obsolescence management is the use of planned technology refresh cycles. Keeping a product alive for decades will often require one or more design revisions to replace obsolete components. Based on the anticipated lifespan of the category A components, the refresh interval may – to great advantage – be decided already during the initial design phase. Doing this divides the active lifespan of the product into shorter periods of defensive logistic obsolescence management, each separated by a technology refresh release. The first advantage of this is that, whereas forecasting and stockpiling for 10 years might not ever be viable, doing so for three or four years (until the next technology refresh) is normally quite straightforward: instead of having to bridge an indefinite parts gap, Last Time Buys performed in this context only need to bridge the parts gap until the next planned revision of the product. Secondly, the use of planned refresh intervals makes the design resource requirement visible in the organisation, as each revision is planned and not a suddenly conceived and hastily executed stunt project performed to fulfil an unexpected order. Unplanned revisions will burden any design team, as the timeline is often tight and the necessary design resources already allocated to other tasks. Planned revisions, on the other hand, contribute quality to the design and future-proofing, as the purpose of the revision is to consider the life expectancy of the entire design, not just the obsolete parts. Thirdly, making plans for technology refresh cycles during the initial design phase forces designers explicitly to consider the life expectancy of the parts they choose, and this will in turn influence key design decisions such as part selection and architecture. A design is a series of trade-offs made under pressure of time and cost, and product life expectancy does not always get the attention it is due in the process.

Obsolescence Management in the Design Phase

Effective tracking and mitigation strategies aside, the long-term sustainability of an electronic circuit board is no doubt contingent on choices made during the design stage. This is obviously the case during the part selection process, as all parts causing an obsolescence problem were once selected during a design phase. However, it is an illusion to think that making the "right" component choices may solve the problem of obsolescence. A wiser view would be that due consideration of a part's life expectancy contributes to an overall obsolescence management strategy. Knowledge of the suppliers and their markets, as well as their commitment to their own and industry roadmaps, should inform parts selection.
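Returning to the refresh-cycle arithmetic above, a toy calculation of ours (all figures invented) shows why a Last Time Buy sized to the next planned refresh is so much more tractable than one sized to the end of product life:

```python
# Toy Last Time Buy sizing under planned refresh cycles; all figures invented.
def last_time_buy(annual_demand: int, years_to_bridge: float,
                  yield_loss: float = 0.03, safety_margin: float = 0.10) -> int:
    """Parts to order at LTB to bridge a given gap.

    yield_loss covers stale parts lost to degraded solderability in storage;
    safety_margin covers forecast uncertainty.
    """
    base = annual_demand * years_to_bridge
    return round(base * (1 + yield_loss) * (1 + safety_margin))

# Bridging 3 years to a planned refresh vs. 10+ years of remaining product life:
print(last_time_buy(annual_demand=2000, years_to_bridge=3))   # ~6800 parts
print(last_time_buy(annual_demand=2000, years_to_bridge=10))  # ~22700 parts
```

Beyond the tripled capital tied up, the 10-year figure also multiplies the exposure to storage aging and forecast error, which is exactly the argument for planned refresh intervals.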
After-market support and extension-of-life programs are important, and it is crucial to consider the intended market for a given component. Parts intended for the automotive market will be around for much longer than those meant for tablet computers. Individual parts may perish, but the overall architecture must stay the course. A stable architecture limits the scope of mid-life technology updates, and it helps reduce the extent of the re-qualification that is required after a design change. An enduring architecture favours a modular design with thin, standard interfaces between sub-systems. The general drive towards higher integration obstructs this goal, but designers should be conscious that a very integrated or convoluted design is hard to maintain and may require an extensive re-design effort.

Also crucial to the choice of architecture is the realisation that software development costs always exceed hardware development costs, even for simple systems. And the gap between the two will only expand with time. Hardware resources are abundant and easy to include, but making them useful requires software effort, and coding efficiency is not likely to increase sharply anytime soon. Consequently, a system design intended to last for decades will be one that limits the software effort required after a hardware design update. Preferred characteristics are modular designs and layered software structures. But the most important thing to realise is that system design for long product life is a multi-disciplinary undertaking.

Commercial Offerings

The emergence of businesses offering solutions of various kinds bears witness to obsolescence as an escalating problem. A range of companies provide logistical help in the form of planning and tracking tools as well as database and other services. Another niche is filled by operations like the German company HTV, which offers a proprietary long-term storage and conservation process for components and assemblies; they claim to decelerate the component aging process by a factor of 12-15. Paired with ongoing test and monitoring, their offer makes it technically viable to stock components for one or two decades. A very distinguished position in the component obsolescence industry is held by Rochester Electronics. Supplied by a great number of the major semiconductor manufacturers, they take over processes and portfolios once retired by the original manufacturer. Their warehouse presently holds more than 10 billion "obsolete" components of their own manufacture, and they even offer part re-creation as part of an overall service called Extension-of-Life®.

Concluding Remarks

The concepts and techniques discussed in this article are only elements of a proactive approach to life-cycle management. How these elements are used as part of an overall obsolescence management strategy depends on the product in question. Someone who earns a living by making and selling off-the-shelf single-board computers will take a different approach from someone who uses embedded electronics as part of a much larger and more expensive system. The former must account for the premium necessarily added by life-extending measures on a board-by-board basis, while the latter may rather view it as part of an overall system cost picture. Constructing the right strategy may be hard, but technology companies will find it increasingly difficult to survive in modern market conditions without a sustainable product maintenance strategy.
Moore's Law

"The number of transistors in a dense integrated circuit doubles approximately every 2 years."

Probably coined by Carver Mead in the early seventies in recognition of predictions made by Gordon Earle Moore of Intel in a seminal paper from 1965. Moore's original paper predicted a doubling every year, but he modified it in 1975. Recalling that the integrated circuit, or "chip", had been invented only a few years earlier, the relative adherence to Moore's law 50 years later is evidence of its influence as a roadmap. Moore's actual agenda was to say something about cost optimisation in chip-making.

[Figure, to the right: "Number of components per integrated function for minimum cost per component extrapolated over time" – the five data points on which Moore based his prediction (source: Intel)]

Next article in this series: Counterfeit electronic components

The problems of component obsolescence and counterfeit parts are close cousins. Fake parts find their way into equipment of all kinds, including military airplanes and weapons systems. A substantial majority of these components mimic hard-to-find parts discontinued by their original manufacturer. In the next article in this series, we take a closer look at the issue of counterfeit electronic components, the industry's response to it and the protective measures available to companies making electronic products.

No Internet of Things without strong cyber security

BY: Arne Vollertsen for Data Respons & René Matthiassen, TechPeople consultant, CISSP, CISM, ISO27001 senior lead implementer and auditor

The concept of IoT holds great potential: by connecting millions of devices to the internet we can save time and money and become more efficient, and we can offer our customers more convenience, better service and much more. But no grand vision without a snake pit of problems: with the Internet of Things comes the Internet of Threats. We need to protect our new network-aware systems and devices. There will be no Internet of Things without a strong focus on cyber security.

Some security experts compare the current state of IoT security with asbestos. They predict that in a few years' time we'll look back asking ourselves: "What were we thinking?" Others draw parallels to the World Wide Web of 1994-95, arguing that IoT will be a security train wreck for years before we eventually figure it out. Messages like these may paint an overly gloomy picture of the challenges within IoT. Nevertheless, cyber security is a crucial IoT prerequisite, not least due to IoT's close interaction with the physical world.
IoT threats can go far beyond the well-known, conventional Internet threats like credit card theft. They could disable home security systems, manipulate navigation systems on connected vehicles, disrupt smart medical devices or knock out entire energy systems.

IoT is speeding up

At Data Respons we have broad experience and a long track record in IoT security, and currently we are experiencing a significant increase in customer inquiries and projects in the IoT cyber security domain. No wonder, because IoT is coming at us with terrific speed. We are connecting more and more devices and systems to the Internet, whether they're industrial control systems, cars, cameras, door locks, fitness trackers or medical technology. By 2020, the number of installed IoT devices is forecast to grow to nearly 31 billion worldwide. And IoT threats are increasing simultaneously: experts predict that in 2020 more than 25 per cent of enterprise attacks will involve IoT.

Increasing awareness

Luckily, awareness of the importance of IoT security is increasing. For instance, it was a wake-up call for the IoT business when, in 2016, the Mirai botnet succeeded in enslaving millions of devices, including IP cameras and routers, turning them into centrally controlled botnets for Distributed Denial of Service (DDoS) attacks. There are still Mirai variants, like Mukashi, out there constantly scanning the web for vulnerable IoT devices, looking for weakly protected machines with factory-default credentials or common passwords. Moreover, in June 2020 the largest independent consumer body in the UK, Which?, revealed that 3.5 million cheap wireless cameras produced in China and distributed worldwide could potentially be hijacked by hackers.

New security agenda

So, the picture is quite clear: IoT sets a whole new agenda for cyber security. It's not enough to take security concepts and standards from the world of modern administrative IT and adapt them to this new domain. Furthermore, we have to keep in mind the closeness of IoT to the physical world, together with the increased complexity and multi-layer nature of many IoT ecosystems. All this requires a multi-level approach to security. For the sake of clarity, let's divide IoT projects into two different categories, each of which requires a different approach: firstly, developing a completely new IoT product from scratch, and secondly, adapting a legacy system to the new world of IoT.

Greenfield projects

Developing new IoT products is relatively straightforward, seen from a security perspective. Starting from scratch gives you the advantage of incorporating security at an early stage of your design. You can do security-by-default, taking all the right decisions when it comes to patches, updates, access control, user authentication etc., integrating security from the very beginning. Also, greenfield projects allow you to adopt a holistic security approach. Thinking holistically is the best way to handle the complexity of the multi-layer IoT ecosystem. It means thinking security on every level, whether on the sensor/actuator and gateway level, encrypting the data sent through the system, or securing the stored data and the web and mobile applications being developed.

Risk assessment

Another important approach is risk assessment. It helps you channel your security effort into where it's most needed and where it will make the biggest difference. Risk assessment means finding vulnerabilities and threats, estimating the likelihood of a threat becoming reality, finding ways to mitigate attacks etc.
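As a minimal illustration of that likelihood-times-impact logic – a sketch of ours, not a Data Respons tool; the assets, threats and scores are all invented – a simple risk matrix can rank where the security budget should go first:

```python
# Minimal IoT risk assessment sketch: rank threats by likelihood x impact.
# Assets, threats and 1-5 scores are invented for illustration only.
threats = [
    # (asset, threat, likelihood 1-5, impact 1-5)
    ("IP camera",  "factory-default credentials", 5, 4),
    ("gateway",    "unpatched firmware",          4, 5),
    ("cloud API",  "weak transport encryption",   2, 5),
    ("mobile app", "insecure local storage",      3, 2),
]

def risk(likelihood: int, impact: int) -> int:
    return likelihood * impact  # classic 5x5 risk matrix

# Highest scores first: mitigation effort goes where it matters most.
for asset, threat, lik, imp in sorted(threats, key=lambda t: -risk(t[2], t[3])):
    print(f"{risk(lik, imp):>2}  {asset}: {threat}")
```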
It is crucial that risk assessment is done for the complete end-to-end value chain of an IoT product or service, bearing in mind that it is more complex than conventional digital services. An IoT solution will typically blend technologies, devices, software, connectivity, data storage etc., so there is much to consider. For instance, you may have designed an IoT device with great security features. But if you fail to think security when you're designing the app associated with it, you might get in trouble. Likewise if there are flaws in the cloud solution you have chosen for storing your data. Risk assessment is gaining momentum, driven among other things by standards and legislation requiring developers to take a holistic, risk-based approach to IoT security. Furthermore, this approach helps you prioritize your development resources and spend your security budget where it makes the biggest difference.

Legacy systems

A whole new challenge arises when we want to adapt older systems to the modern IoT world. Bringing systems developed 20 or 30 years ago into the new world of IoT requires much consideration regarding security. Quite understandably, the companies responsible for these systems want to give their customers access to the new business opportunities coming from IoT. As an example, manufacturers of ship engines and other heavy-duty ship equipment are looking for ways to bring their machinery online, thus creating new possibilities for service and maintenance. But opening these legacy systems for access and connectivity to the internet, from everywhere and from a wide range of devices, means exposing them to a new world of security risks. Connecting to the Internet means connecting to potential cyber threats.

Low level of security

This is particularly challenging, as these legacy systems are "born" with a very low level of security, both in terms of the way they have been developed and the way they are maintained. Now they have to be aligned with the modern cyber security world and meet state-of-the-art requirements for patching, updates, password protection etc. That is a major challenge. Probably the companies responsible for these legacy systems are not in the habit of issuing security patches, simply because they have never been required to do so. Patches were released when there was a requirement for e.g. new functionality. Making legacy systems that were never designed with security in mind comply with modern security requirements is a complex task. But it has to be done, because all the advantages coming from connectedness will turn into threats if we are unable to ensure the confidentiality, integrity and availability of these systems.

The need for security standards

The vast majority of IoT devices, and devices used in ICS (Industrial Control Systems), do not follow, and have not been designed to follow, security standards or guidelines. This means that we'll need to "pave the road while we drive it", i.e. design and implement security during the implementation, instead of during the design of the products or early in the products' lifecycle. Some security standards do exist, though, like IEC 62443. Others are about to be developed, e.g. by ENISA (the European Union Agency for Cybersecurity) and ISO (the International Organization for Standardization). These will become available in the years to come.

More pro-activity needed

Luckily, awareness regarding cyber security is rising.
The media is publishing cyber crime stories on an almost daily basis, and manufacturers and service providers face considerable pressure from customers as well as from governments and regulators if they are found neglecting their security responsibilities. But still we see more reactivity than pro-activity. All too often, security experts or tech-savvy users are the ones that find and publish security flaws. Only then will manufacturers fix the problem, and by that time the damage done could be significant. In the coming years we will be witnessing numerous incidents in which IoT devices are used for cyber attacks or in which customer data has been compromised. The companies affected will react in retrospect, but ideally it should be the other way around: because of high security standards and heightened awareness we will – hopefully soon – get to a point where reacting in retrospect is rare and incidents are kept to a minimum.

Well-known dilemma

However, the well-known dilemma between convenience and security will continue to challenge companies, developers and cyber security experts alike. The old saying about password complexity also applies to IoT security: the longer and more complex, the more secure, but the more tiresome as well. On the one hand, companies and customers want convenience and ease-of-use. They want devices and services available at their fingertips without the hassle of security procedures. On the other hand, we have the security experts pushing for confidentiality, integrity and availability. The tricky thing is to find the balance between these two considerations. But when you consider this dilemma more closely, you'll find that there is no getting around security. In fact, although the starting point for many companies in IoT is the cost savings and convenience IoT has to offer, they quickly realize that only with security in place can they focus on realising the potential of IoT: optimizing processes, boosting service, reducing costs and designing outstanding customer experiences.

IoT vulnerabilities

– Patching: Patches are not released with the same frequency as is common in the IT world. That leaves vulnerabilities in the system for a long time before patches are sent out to fix the problem. Or worse: some devices are not designed to receive patches/updates at all.
– Weak passwords: Some IoT devices have only 4-, 5- or 6-digit passwords, and this lack of complexity means they are easily breakable. Also, it may not be possible to change the admin user of the device, and default usernames and passwords are easy to find on the Internet.
– Communication: Is communication from the device encrypted, and if so, is the encryption strong enough? Is data encrypted both in transit and at rest?
– Faulty software: When you develop your IoT product it may be a good idea to reuse software developed by others. However, you have to check that the software you're reusing is without security flaws, and that you're using the newest version of the code.
– End-of-life: What happens if the component or device you're using reaches end-of-life and is no longer supported by the supplier?
– Privacy protection: Do you have any data about your user stored on the device? What about third-party integrations?
– Only one layer of security: One layer is not enough. You need defence in depth, where several layers of security are used to protect data and information.

FEM Modelling

BY: Erik Asplund, Principal Development Engineer, Data Respons

This article will focus on piezoelectricity, structural dynamics and acoustics. When combined with signal processing, the entire operation of e.g. a measurement system can be simulated, tested and modified before a real-life prototype is produced and tested.

FEM (finite element method) is a method for solving the differential equations that describe e.g. a mechanical problem by subdividing the solution into a number of smaller parts. In modelling this means that an arbitrary structure is divided into a number of structural elements, often shaped as rectangles or triangles (in two dimensions) or rectangular boxes and tetrahedrons (in three dimensions). Examples of technology areas where FEM modelling is used are structural mechanics, acoustics, fluid dynamics, heat flow, optics and electromagnetic fields. Data Respons has contributed with FEM modelling in several projects dealing with various applications, such as elastic wave propagation in steel bolts, acoustic noise pollution in offshore piling, sound propagation in district heating pipes, and a fish tag.

The sound of ice

Some of us have perhaps noticed the peculiar and captivating sound that occurs when we skip a stone over thin, newly formed ice on a lake. There are a number of videos on YouTube illustrating this phenomenon and the sound the skipping makes. It is an interesting challenge to model the structural dynamic and acoustic phenomena that cause the fascinating sound. To mimic the proper conditions we need to model an ice sheet of e.g. 2 cm on top of a water volume. Above the ice is the air conducting the sound to our ears. The impact of the stone is modelled as a momentary point force acting perpendicular to the ice in the very centre of the model. In reality the listener is static and the stone hits the ice at progressively increasing distances from the listener. In the model we do the opposite and move the listener to different positions (radii) from the pounding stone. This way we can model the situation as a cylindrical 2D model, which greatly reduces the model complexity compared to a full 3D model. It is interesting to notice that a bounce from the stone creates an essentially finite acoustic pulse in the air, even though the ice sheet vibrates for a relatively long period of time. However, the sound pulse in the air gets progressively longer as the distance from the bouncing point increases. The initial part of the sound emanates from the ice immediately surrounding the listener, while the later part of the pulse is dominated by the sound that travelled through the air from the point where the stone hit the ice.
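As a purely illustrative aside – this is our own toy, not the Comsol model described in the article – the audible character of such a pulse can be approximated by synthesising a short, decaying tone burst at the thickness-dependent peak frequency reported in the figure captions below (about 1500 Hz for a 2 cm sheet, about 700 Hz for 5 cm):

```python
# Toy synthesis of the 'skipping stone on ice' pulse (illustrative only;
# the decay rate and duration are invented, the peak frequencies come
# from the article: ~1500 Hz for 2 cm ice, ~700 Hz for 5 cm ice).
import math, struct, wave

def ice_pulse(freq_hz: float, seconds: float = 0.3, rate: int = 44100) -> list[int]:
    """Exponentially decaying sine burst approximating the short pulse."""
    samples = []
    for n in range(int(seconds * rate)):
        t = n / rate
        amp = math.exp(-t * 25)  # fast decay: the pulse is essentially finite
        samples.append(int(32767 * amp * math.sin(2 * math.pi * freq_hz * t)))
    return samples

with wave.open("ice_2cm.wav", "w") as f:
    f.setnchannels(1); f.setsampwidth(2); f.setframerate(44100)
    f.writeframes(b"".join(struct.pack("<h", s) for s in ice_pulse(1500.0)))
```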
Advantages with modelling

A reduced need for practical tests and prototypes is not the only advantage of modelling. It also visualises phenomena otherwise impossible to see or measure, and thereby increases the intuitive understanding of the modelled process. Examples are sound and mechanical stress fields that are made visible by the modelling tool. In practical tests and measurements there is always noise present, and it is sometimes difficult to independently change a single parameter in order to study its influence on the total system. In modelling, individual parameters can be altered at will, and noise is not present unless it is deliberately added to the model.

Ultrasonic underwater data communication link

The performance of an ultrasonic underwater data communication link depends on several factors, such as the transducer design, the form of modulation and the acoustic propagation situation. The signal processing necessary for the modulation converting data bits to acoustic pulses can be modelled in Matlab, and the performance of the transducer together with the acoustic circumstances can be modelled using FEM. Combined, these two tools let us model and fine-tune the entire system before moving on to prototype development and practical tests. An ultrasonic transducer is often resonant, and its efficiency is strongly dependent on the frequency of the carrier chosen for the data modulation. It is thus essential to know the resonance frequency of the transducer and also the sound radiation pattern, as the output power varies with the radiation direction. The piezoelectric element vibrates, and the vibrations spread through the entire transducer structure and perhaps further. The acoustic radiation field depends on the entire mechanical design of the transducer, and modelling reveals how different parts of the transducer vibrate and affect the emitted field. FEM enables us to test and modify the mechanical design in order to reach an optimal acoustic output. The picture to the left shows (greatly exaggerated) how the entire transducer vibrates when the piezo ring is excited by the electric 50 kHz signal. This Comsol Multiphysics model includes piezoelectricity, structural dynamics and acoustics, and thus models the complete chain from electric input to acoustic output.

Figure captions:

Listen to the sound below – modelled acoustic pulses at a height of 1.5 m above the ice and at a distance of 10 m (blue) and 40 m (green) from the point where the stone hit the ice. To the human ear, the pulses created by the model sound just like the real acoustic pulses. Hear the sound when modelled on a 2 cm thick sheet of ice.

The picture shows the modelled acoustic pressure field (air cross-section) in 5 m (y-axis) of air above the 2 cm ice sheet at 15 ms after the bouncing moment.
It is evident from the picture that the sound reaching the human ear, located somewhere along the x-axis (radius), is composed of both sound emanating directly from the ice surface beneath and a pulse travelling through the air from the point where the stone hit the ice. The sound coming from the ice below is the result of a horizontally propagating vertical vibration of the ice acting on the air above. The pitch of the pulses depends on the thickness of the ice: a 5 cm ice sheet results in a frequency peak at approximately 700 Hz, and a 2 cm ice sheet results in a peak at approximately 1500 Hz.

This picture shows the pulse (acoustic pressure) after 100 ms, propagating from the point of impact in the centre. The top surface of the model is a matched-impedance surface allowing the sound to properly leave the model at 2 m above the surface of the ice. Below the ice is the water volume, and the model is equally good at modelling what a diver in the water would hear from the skipping stone.

The picture shows a transducer model complete with piezoelectric ring and housing. Moulded plastic covers the piezo ring and protects it from the water. It is important to get the design correct and use a suitable material to achieve a good mechanical impedance match between the piezo ring and the surrounding water.

This is a visualisation of the acoustic level field in the water surrounding the transducer. It is obvious that the intensity of the sound at 50 kHz varies with both the aspect angle and distance, and that the maximum communication range will depend on the orientation of the transducer. The colour scale is in dB rel. 1 µPa.

The graph to the right shows the acoustic output level referred to a distance of 1 m in the horizontal plane of the transducer. Clearly the transducer is resonant and is suitable for a carrier frequency of approximately 50 kHz. The model includes the piezoelectric effect, and the transducer is excited by 35 Vrms.

Electrification and autonomous driving – the mega trends pushing the boundaries of wire harness design

BY: Arne Vollertsen for Data Respons & Martin Lampinen, Managing Director, inContext AB

Everybody is talking about autonomous driving and electric cars. However, not many are aware of the invisible helper making it all happen. It is the car's nervous system – the cables and connections that make signals and data flow inside the vehicle, enabling the super-sophisticated features of a modern, sensing vehicle. Say hello to the wire harness.

Wires are just wires, you may think. How complicated could that possibly be? Well, extremely complicated in fact, at least since our cars started morphing into computers on wheels. The 50s and 60s are long gone. Back then, power steering, electric windows and the occasional aircon were the height of luxury motoring. Nowadays the metal skin of a premium car hides a multitude of sensors, actuators, control units, high-performance computers, infotainment systems etc., and more features and components are being added at breathtaking speed: five years ago, vehicles had 25 per cent fewer circuits than today's cars. Five years from now, that number will increase by another 30 per cent.

Indispensable connectivity

The wire harness is the spider's web in the middle of it all, and it is indispensable to nearly all aspects of a modern vehicle. That is why designing the wire harness of a state-of-the-art car, bus or truck requires both a general understanding of car components like sensors, actuators, batteries, motors etc., and knowledge of the nuts and bolts of electrical systems design. You need to know everything about wires and connectors, and you need to understand the vehicle as a whole to be able to design a wire harness that is clever and cost-efficient while also being easy to assemble and service. Welcome to the world of wire harness design, right now struggling with a nasty cross-pressure: how do we connect an ever-increasing number of components with less and less space at our disposal?
Wiring experts

inContext is one of a handful of specialist companies focusing on wire harness design, and its 80+ developers are involved in a broad range of projects in the Swedish vehicle industry. Their expert skills in complete electrical systems design go into the development of new cars, buses and trucks that incorporate cutting-edge technologies. For instance, inContext is working on a new electrically powered bus, electrification of a plug-in hybrid truck, and wire harness design for the special requirements of military vehicles. Also, inContext contributes to future autonomous vehicle concepts with interconnect, electrification and software development.

The next generation harness

In short, the inContext people know what they are talking about when you ask them what the next generation wire harness will look like: it will enable more powerful electrical systems to operate vehicles, as the latest electrical connectivity allows ever more signals from on-board sensors, other vehicles, road-based infrastructure and satellites to be streamed into a high-performance computer. That computer, in turn, will transmit signals through the wire harness to braking, steering and other control systems. All this is gradually maturing into a technical infrastructure for electrification and autonomous driving – an exciting vision indeed, but a vision not without challenges.

More stuff, less space

To begin with, as mentioned above, there is the cross-pressure issue: a growing number of sensors and other devices are being added to the vehicle, needing more wires to integrate them into the car's system. But at the same time vehicles are expected to become smaller, thinner and lighter. So, where to put the new wire spaghetti when you've got less space at your disposal? It's hard to discard anything, as you still need all the traditional vehicle components for it to work properly. Wire harness designers are competing fiercely with all the other teams in charge of developing a new vehicle. They all need their piece of the shrinking space allocated for their specific use, so everybody needs to compromise to make it work.

Going modular

Modularity is one of the keywords in resolving that dilemma, looking into the future of wire harness design. Designing with modularity in mind can help cope with the cramped space and the rising number of wires in a modern vehicle, particularly because many vehicles are produced in a number of different variants. In theory, you could design a wire harness that could handle all the vehicle options and features on offer. But that would be too costly, it would add to the vehicle's weight, and it would take up too much space. Instead you need to think LEGO. With a modular design you can expand the basic harness with sub-harnesses where needed. That approach also facilitates assembly and servicing, especially in the heavy vehicle industry, where many inContext customers operate. When assembling a vehicle, instead of rolling out the complete wire harness and installing it at once, you can do it in sequence. This plug-and-play approach to assembly makes good sense when a vehicle comes in many different variants, as is the case in the heavy vehicle industry. And what makes sense in assembly makes sense in maintenance as well. It is a lot easier to replace a wire harness designed in a modular fashion. You avoid having to replace the whole thing because of one cable breaking down.
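To illustrate the LEGO idea in data terms – a hypothetical sketch of ours, in which the module names, wires and variant options are all invented – a variant's harness can be composed from a base harness plus option-specific sub-harnesses:

```python
# Sketch of modular harness composition: base harness + option sub-harnesses.
# Module names, wires and variants are invented for illustration.
BASE = {"power_trunk", "CAN_backbone", "lighting_front", "lighting_rear"}

SUB_HARNESSES = {
    "towing":  {"trailer_connector", "brake_light_feed"},
    "crane":   {"crane_CAN_stub", "crane_power_feed"},
    "sleeper": {"cabin_heater_feed", "interior_lighting_ext"},
}

def harness_for(options: list[str]) -> set[str]:
    """Compose the wire set for a vehicle variant from base + selected modules."""
    wires = set(BASE)
    for option in options:
        wires |= SUB_HARNESSES[option]
    return wires

# A crane truck gets only its own modules - and only those get swapped in service.
print(sorted(harness_for(["towing", "crane"])))
```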
The magnetic field created by high-voltage cables tends to disturb low-voltage systems, so when designing a harness you need to factor in this electromagnetic compatibility (EMC) noise. To protect the signals running in the low-voltage communication cables, you have to be careful not to route them near their high-voltage siblings. And that is quite a challenge, especially with limited space at your disposal. High-voltage cables can pose a threat to humans as well. If a passenger riding an electric bus carries a pacemaker, the EMC noise coming from the bus motor could interfere with it. That is yet another risk that has to be addressed by wire harness designers in collaboration with component owners. And apart from EMC noise there is the sheer size of high-voltage cables, not to mention their cost. For both reasons they need to be as short as possible.

Autonomy coming

Everybody is talking about self-driving vehicles. inContext is contributing to this megatrend, as well as to electrification, by designing reliable, cost-effective wire harnesses that are easy to assemble and service. Moreover, next generation wire harnesses may enable extremely powerful electrical systems to operate vehicles without human intervention. For that we need more sensors, more bandwidth and bigger computers – and all this is leading to a re-engineering of automotive wire harnesses. The industry is thinking about architecture in new ways, for instance finding inspiration in high-security domains like aerospace. Think multi-layer redundancy, fault tolerance, advanced connectivity and cyber security. Those are the requirements of the future, and you have to think really hard to meet these goals while keeping down weight, power consumption and overall cost.

New types of cables

One way of addressing these trends is rethinking the wires themselves. We are moving towards a larger variety of wires, compared to the regular wires used in a standard vehicle CAN system. CAN is not enough to transfer the huge amounts of data in the system, and CAN cables need to be complemented by coax- and Ethernet-type cables. However, many of these are not really adapted to the automotive industry, so manufacturers are working to develop new types of cables to meet the changing requirements in the industry.

The harness of the future

No doubt, wire harnesses are evolving rapidly to meet the challenges posed by electrification and autonomous driving. But what are the long-term perspectives? Looking into the future, what will a state-of-the-art vehicle wire harness look like in 10 years? According to the inContext experts, wire harness design will probably be totally different from now. Today we use wires because they are flexible and easy to route, but the future of wires may not even be wires. Most of the signals could be communicated wirelessly, provided we find satisfactory solutions to all the cyber security issues that will inevitably follow. Wireless technologies have a number of advantages, for instance when it comes to saving weight and avoiding EMC noise. However, power cables are difficult to replace entirely. To simplify that part we might begin using modular busbars running through the whole vehicle, functioning as the main power source for vehicle electronics. No matter what, the use of wires will most likely go down, while vehicle complexity will continue to increase.
When it comes to the brain and nervous system of a vehicle, a paradigm change is on its way – we are pushing the boundaries of wire harness design.

BY: Arne Vollertsen for Data Respons & Martin Lampinen, Managing Director, inContext AB

R&D Services – we make the technology you need, in all our R&D departments

Data Respons delivers consultancy services, R&D development projects and experienced specialists with extensive industry knowledge. Our highly experienced specialists have a broad range of expertise from various disciplines and can cover all parts of the development cycle. Data Respons' unique business model enables customers to choose a form of collaboration that suits their needs. We can provide a complete competency platform during a development project with the knowledge of our R&D specialists.

Increase your project capacity: Get on-demand access to high-end technical expertise, as well as a well-proven agile methodology, when you need to scale up your development project or complement your existing R&D team.

Dedicated specialist teams: Our development specialists work as an extension of your in-house team, offering flexibility and cost-effectiveness. We bring extensive industrial knowledge and skills according to your project needs, and transparency through dynamic and agile work models.

Faster time to market: Data Respons has the resources to start new projects immediately. We bring more than 30 years of experience with 800+ talented specialists who have both the technology skills you need and the appropriate industrial knowledge. This allows your company to be more agile and bring projects faster to your customers.

Complete technology house: Data Respons can develop everything from sensor level to the app, making us a good partner for our customers' digital transition. We can provide a complete competency platform during a development project. Our engineers specialise in understanding the environmental challenges and demands on our customers' products, on top of being best-in-class within their technical disciplines. This combination of experience and knowledge is the foundation that makes us specialists in embedded development.

In-depth industry knowledge: Servicing a diverse range of customers requires in-depth industry knowledge and an understanding of the conditions and markets our customers deal with. Data Respons has more than 30 years of industry experience and can provide a high level of competence within our customers' challenges, such as environmental standards and certification processes. Our teams have experience ranging from the automotive industry to medical and healthcare technology, allowing us to bring a complete competency platform into your projects.
Dynamic and agile methodology: Our specialist teams use a dynamic work model based on documented procedures as well as tacit knowledge to ensure workflow, productivity and transparency. It is a model that has been developed from the inside and aims to balance the relationship between quality, functionality, time and cost in a development project.

Specialist services include engineering consultancy, project management, custom software development, IoT application development, Java, software development, end-to-end development, UX/UI design, embedded software, application security, electronic and hardware development, DevOps as-a-service, Atlassian Confluence / Atlassian Jira, mechanical design, and test and quality. Java is a registered trademark of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Mobility – smarter, safer and more sustainable transportation

Data Respons is a niche supplier of specialist development services and high-tech solutions to several leading players in the transport and automotive industry. We have extensive experience in developing smarter, connected and digital software and hardware solutions supporting the ongoing transformation of the automotive industry and improving logistics efficiency.

- 470 million connected cars on the road by 2025
- 200 sensors per car by 2020
- 6.4 million fleet management systems in active use in Europe

Applications & experience:
- Digitalisation of the cockpit and infotainment systems of cars and trucks
- SW and solutions for connected car, autonomous driving and car sharing systems
- IoT and SW solutions for automated asset/fleet management systems
- Communication systems for non-disrupted internet access on trains
- Fanless computer and safety-critical SW solutions for vehicle applications
- Embedded SW solutions for an integrative cloud-based platform, as well as the back-end communication, mobile services and the related interfaces
- Systems for optimising fuel efficiency in public transportation, and system integration
- R&D IT services assisting all phases of the full software development cycle
- Software for electronic fare collection, fleet management and POS/POI systems
- CAT Measurement Technologies and testing services
- Advanced telematics and logistics solutions for cars and trucks
- IoT-based mobile and web services to manage logistics processes and the economic efficiency of vehicles
- Obstacle-tracking systems for aircraft
- Robust digital signage and interactive mobile multimedia systems for public transportation

The transport and automotive industry is undergoing the largest transformation in several decades, driven by multiple new disruptive technologies combined with stricter safety and environmental requirements.
Innovation and technological advances are making the industry more sophisticated, with additional sensors, more embedded software and increased demand for data processing. Furthermore, the introduction of always-connected vehicles is enabling a broad range of new value-adding services.

A shortcut to embedded SmartMesh networks

In the emerging world of Internet-of-Things, wireless low-power mesh networks are more relevant than ever. Data Respons has gained valuable experience with one particular technology after employing it in a large industrial instrumentation project, namely Linear Technology's SmartMesh IP. As a specialist in embedded solutions, Data Respons recently became an official Linear SmartMesh partner, after developing the QuickStart Library: a software library that greatly reduces development time for embedded applications of SmartMesh IP.

SmartMesh IP is a wireless technology pioneered by Dust Networks, a company owned by Linear Technology. A descendant of ultra-low-power and ultra-high-reliability protocols such as WirelessHART, the SmartMesh IP protocol is based on the 6LoWPAN and 802.15.4e standards. It features a time-slotted, channel-hopping mesh network where every node knows exactly when to listen, talk or sleep, resulting in a very power-efficient and collision-free packet exchange.

Mesh-to-the-edge

Every device in the mesh network has the same routing capabilities, often referred to as "mesh-to-the-edge", as it provides redundant routing to the edge of the network. This allows for a self-forming and self-healing network that constantly adapts to changes in topology, while maintaining extremely high data reliability, even in harsh frequency environments.

Motes and manager

A SmartMesh IP network consists of one or several wireless nodes, known as motes, which collect and relay data, and a network manager. The manager has two fundamental functions: firstly, it is an access point (AP) that acts as a gateway between the mesh network and the monitoring or control network. Secondly, it runs the network application software that continuously makes decisions on how to build and maintain the mesh network. The Embedded Manager is a self-contained solution where both the AP function and the network management algorithms run on a single chip. This setup, illustrated in Figure 1, is however limited to smaller networks, as the single AP has a hardware constraint of 100 motes and a throughput of 36.4 packets per second. The customer software communicates with the manager directly through a serial Application Programming Interface (API). A second, newer alternative is the Virtual Manager, where the network application runs on an x86 virtual machine, while only the AP functionality remains on-chip. The AP, together with bridge SW on a locally connected MCU or PC, then constitutes an AP gateway that connects remotely to the virtual manager. This connection can be serial, Ethernet, wifi or even cellular, as long as it can support the maximum throughput of 40 packets per second from the AP. In this setup, illustrated in Figure 2, the customer application interacts with the virtual manager through an HTTP-based API.
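As a back-of-the-envelope consequence of these limits (our arithmetic, not a vendor figure): in a fully populated embedded-manager network, 100 motes share the AP throughput of 36.4 packets per second, so the average sustainable publish rate per mote is

\[
\frac{36.4\ \text{packets/s}}{100\ \text{motes}} \approx 0.36\ \text{packets/s per mote},
\]

i.e. roughly one upstream packet every three seconds per mote, before allowing any headroom for retries or downstream traffic.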
Adding multiple APs can scale the network to support thousands of motes, as well as increase the available throughput, reduce latency or achieve redundancy.

Master and slave

The typical use for a mesh network is to publish sensor data from each node to a centralized application for processing, storage and/or visualization. As illustrated in Figure 3, a SmartMesh IP mote can operate in two different modes. Running in master mode, the on-board ARM Cortex-M3 processor can access sensors and other I/O directly, and it runs an application that terminates commands and controls network joining. An On-Chip Software Development Kit (SDK) allows a user to write applications directly on the mote, on top of the SmartMesh IP network protocol stack. Alternatively, the mote can run as a slave to a connected device, expecting the master device to terminate commands and control network joining via a serial API. This puts more complexity in the hands of the user, but is often the most viable option in an embedded solution, as a custom MCU adds more flexibility.

C library

Since both the SmartMesh mote and the embedded manager have a similar serial API that the typical embedded application has to interact with, Linear provides a complete implementation of both in the SmartMesh C Library. This library abstracts commands into simple function calls, handling serial formatting and framing for the high-level data link control (HDLC) protocol used in all serial communication with SmartMesh devices. The library also makes sure to match sent commands with the ensuing replies, passing them back through a callback function. Notifications received from the SmartMesh device are also parsed and correctly acknowledged, before they too are passed back "up" through a callback function. Still, implementing the API itself is not necessarily the hardest part. On the manager side there is little to no required intervention, as it will autonomously start creating a network upon power-up – the connected customer software simply needs to subscribe to the desired notifications, while commands and interactions are stateless, and thus reasonably straightforward. By contrast, on the mote side a software designer has to be aware of mote states and the corresponding behavior, as well as the correct sequence of configurations and commands to join a network. Linear found that this knowledge barrier sometimes prevented potential customers from embedding SmartMesh IP in their applications, which is why the need for a simpler starting point emerged.

Quickstart library

The QuickStart Library (QSL) developed by Data Respons abstracts the mote interface one step further: a finite state machine (FSM) schedules the necessary sequence of commands depending on the current state, events and replies from the mote, leaving only a minimal and intuitive API for the user. For example, the steps necessary to configure the mote, set up sockets, initialize a search for and join a network, as well as request a certain bandwidth, are all hidden in a simple call to connect. Downstream user payloads are also handled by storing them in a circular inbox buffer with a configurable size, where calls to read will pop the oldest message in the inbox, if any. send queues a payload for transmission to the manager, while isConnected is a simple way to check if the mote is still connected (this way the user application can determine whether a failed send is the result of not being connected or an actual transmission failure). Lastly, init should be called once upon startup, and will simply initialize the data structures and establish the serial connection to the mote.
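To make the shape of this API concrete, the following is a minimal sketch of a publishing loop built on the calls just described. The function names follow the article's terminology with a qsl_ prefix, but the exact signatures, the network ID, ports and timeout values are illustrative assumptions, not the library's verbatim interface.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical QSL-style interface, mirroring the calls described above.
   The real prototypes live in the library's headers. */
bool    qsl_init(void);
bool    qsl_connect(uint16_t netId, const uint8_t *joinKey,
                    uint16_t srcPort, uint32_t reqBandwidthMs);
bool    qsl_send(const uint8_t *payload, uint8_t size, uint16_t destPort);
uint8_t qsl_read(uint8_t *buffer);
bool    qsl_isConnected(void);

int main(void)
{
    uint8_t inbox[128];

    qsl_init();  /* initialize data structures and the serial connection */

    /* connect() hides mote configuration, socket setup, network search,
       joining and the bandwidth request; it keeps trying until it succeeds
       or its configured timeout expires, so we simply loop on it. */
    while (!qsl_connect(1229, NULL, 60000, 5000))
        ;

    for (;;) {
        uint8_t sample[2] = { 0x12, 0x34 };  /* stand-in sensor reading */

        if (!qsl_send(sample, sizeof sample, 60000)) {
            /* A failed send is either a queueing/transmission failure or a
               lost connection; isConnected() tells the two apart. */
            if (!qsl_isConnected())
                qsl_connect(1229, NULL, 60000, 5000);
        }

        /* Pop any downstream payloads from the circular inbox buffer. */
        uint8_t n;
        while ((n = qsl_read(inbox)) > 0)
            printf("received %u bytes from the manager\n", n);

        /* sleep until the next publish interval (platform specific) */
    }
}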
Except for read, which returns the number of bytes read, the API only returns simple Booleans to let the user application know if an attempt was successful, avoiding the need to interpret any response codes. Furthermore, send and connect have a configurable timeout, such that the user application can be sure that the call will return within a set limit. While send only makes one attempt at queueing a packet, connect keeps trying to join a network until it succeeds or the configured timeout expires.

Platform independent

Designed to be highly portable, the QSL (and the underlying C Library) is written in C without any hardware-specific code, allowing its use "as-is" on any C-based platform. Platform-dependent functions only have their prototypes declared, leaving their definitions to the user. For instance, a developer has to define how to feed the watchdog (if any) or how individual bytes are written to or read from the serial port. Figure 4 illustrates the library structure, where the hardware-specific categories that need definitions are listed to the left (the watchdog and the lock for concurrency are optional). To further help developers get started, complete sample code is provided for a set of commonly used platforms: Raspberry Pi, Atmel SAM C21 and STM32, as well as a generic example for the ARM mbed operating system. The sample code for these platforms also includes implementations of the necessary prototypes.

Rapid mesh network prototyping

The QSL is accompanied by a detailed guide, with step-by-step instructions on how to get started with the typical case of data publishing from an external MCU. The guide also explains how to get a demo up and running with the sample code provided for the supported platforms, and includes guidance on existing tools that can visualize data arriving on the manager side, as well as transmit data downstream to motes. This allows a developer to integrate a prototype mesh network with their embedded system within only a few hours. As the name implies, the QSL is primarily meant to help developers get started with embedding SmartMesh IP in their applications. The library is not an exhaustive API for the SmartMesh IP mote, although its interface is adequate for most simple applications, as it provides functionality for data transmission and configuration of the most important network settings. Furthermore, by extending its functionality, or simply by using it as a thorough how-to, the QSL can reduce development time for advanced applications that require more features from the underlying mote interface.

BY: Jon-Håkon Bøe Røli, Development Engineer, Data Respons

Note: On March 10th, 2017, Linear Technology Corporation officially became part of Analog Devices, Inc.
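As an illustration of how small such a port can be, here is a sketch of the kind of platform hooks a POSIX host might supply for the serial and watchdog categories mentioned above. The hook names are invented for this example; the actual prototypes are dictated by the library headers, and the baud rate should be checked against the device datasheet.

#include <stdint.h>
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

static int uart_fd = -1;

/* Open and configure the serial device the mote is attached to.
   115200 8N1 raw mode is a typical setting (assumption; verify against
   the mote's documentation). */
void port_serial_open(const char *device)
{
    struct termios tio;

    uart_fd = open(device, O_RDWR | O_NOCTTY);
    tcgetattr(uart_fd, &tio);
    cfmakeraw(&tio);
    cfsetispeed(&tio, B115200);
    cfsetospeed(&tio, B115200);
    tcsetattr(uart_fd, TCSANOW, &tio);
}

/* Write a single HDLC-framed byte to the mote. */
void port_serial_write(uint8_t b)
{
    write(uart_fd, &b, 1);
}

/* Blocking read of a single byte from the mote; returns bytes read. */
int port_serial_read(uint8_t *b)
{
    return (int)read(uart_fd, b, 1);
}

/* Nothing to do on a Linux host; a bare-metal port would kick its
   hardware watchdog here. */
void port_feed_watchdog(void)
{
}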
MICRODOC – a respected partner for advanced software development, digitalization and IoT. Managing Directors: Hans Kamutzki / Hendrik Höfer. Founded: 1991. Joined Data Respons: September 2016. Headquarters: Munich (Germany). Employees: 68 (2017). MicroDoc is a technology-oriented company with more than 60 specialists in SW development, Java and system design, as well as SW solutions for IoT, mobile/network infrastructure and embedded applications. The company's focus on complex software technology and software infrastructure has made it a respected partner for large corporations and even for other software businesses. The engineering team comprises highly skilled professionals from more than 10 different countries. Teams are composed to match customers' requirements, with a mix of experienced senior experts augmented by younger aspiring developers. Operating from three offices in Germany (Munich, Berlin, Stuttgart), the company serves leading corporations from a variety of business domains, including automotive, self-service systems, telecommunication, utilities and financial services. MicroDoc has specialized in solving challenging software problems which require in-depth knowledge of end-to-end technology and business scenarios (including mainframe computers, networks, desktops, mobile devices and embedded systems).

EPOS – a leading consulting and service company for automotive IT and computer aided testing (CAT). Managing Directors: Dr. Ing. Heidi Sauer / Mr. Günter See. Founded: 1991. Joined Data Respons: December 2017. Headquarters: Ingolstadt (Germany). Employees: 97 (2017). EPOS CAT designs, develops and operates tailor-made software solutions to support and optimize customers' business processes, mainly targeting the automotive industry. Modern vehicles contain increasingly complex IT systems, driving demand for software development, test and technical support to comply with strict industry safety regulations. Measurement and test systems represent a significant cost factor in vehicle development and quality assurance. The proprietary "computer aided testing" (CAT) software solution supports customers in managing ever-shorter product development cycles in an efficient and secure way. The company's engineers are located close to customers to secure efficient development and rapid-response support, and to evolve their industry know-how.

TECHPEOPLE – a highly specialized consultancy company with expertise in embedded and IT solutions. Managing Director: Kim Fahrenholtz. Founded: 2010. Joined Data Respons: December 2017. Headquarters: Copenhagen (Denmark). Employees: 63 (2017). TechPeople is a leading technology partner – from system architecture, mechanical and HW design to software and application development and communication solutions for embedded and IoT systems – with a broad network of more than 60 specialists. Operating out of two offices in Denmark (Herlev and Aarhus), they offer the best consultants in the market for advanced development within medtech, energy and telecom. Through long-term customer relationships they have demonstrated the capability to deploy new technology and develop innovative solutions. Example projects include an interface between a hearing-aid device and mobile applications, development of advanced communication systems for disaster zones based on mesh technology, and support for the development of a complete IoT solution for energy management – from sensors and actuators connected to a cloud platform, enabling the user to control everything from a mobile application.

SYLOG – a specialist consultancy company within system and SW development, technology and IT. Managing Director: Johan Jacobsson. Founded: 2002. Joined Data Respons: December 2007. Headquarters: Stockholm (Sweden). Employees: 413 (2017). Sylog is a fast-growing Swedish consultancy company located in Stockholm, Gothenburg and Linköping. The company provides specialist and engineering services to some of the largest and most innovative companies in Sweden, supporting development of anything from platform-independent payment solutions to smart network solutions enabling future IoT applications. The technological complexity is increasing as more sensors and units are connected, enormous amounts of data are collected and analysed, and systems are integrated both at the edge and in cloud-based platforms, whilst maintaining end-to-end security.
An individual consultant or a smaller team seldom possesses all the required knowledge and skill sets to solve a complex technology challenge. The company therefore combines senior specialists from Sylog with young engineers from YABS (Young Aces by Sylog), and complements this with sub-contractors from its subsidiary Profinder (as needed), to ensure that customers are equipped with the necessary capabilities to develop new innovative solutions that increase their competitiveness and strengthen their position. Sylog's customers are world leaders in telecom, automotive, defense, medtech, finance, the media and the gaming industry. Passion, knowledge and freedom are Sylog's keywords.

DATA RESPONS R&D SERVICES – a complete technology partner from sensor level to the mobile application. Managing Director: Ivar A. Melhuus Sehm. Founded: 1986. Fully owned by Data Respons: since the beginning. Headquarters: Høvik (Norway). Employees: 89 (2017). Data Respons R&D Services provides specialist services through development projects, consulting services and technology consulting. The company delivers specialist services, development projects and experienced specialists with extensive industry knowledge. With five locations in Norway (Oslo, Høvik, Kongsberg, Stavanger and Bergen), the company makes sure to be close to its customers, enabling efficient collaboration and knowledge transfer. Over the last 30 years, the company has acquired expertise and valuable insight into physical environments and industry standards across several industries, enabling it to deliver high-quality development projects and services. Its specialists cover a broad range of competences and disciplines, enabling them to develop everything from apps and cloud-based services to intelligent sensors and IoT solutions. The flexible delivery model can support any customer need – from an R&D specialist to a complete team.

DATA RESPONS SOLUTIONS – a leading provider of smart embedded and industrial IoT solutions in the Nordics and Germany. Managing Director: Jørn E. Toppe. Founded: 1986. Fully owned by Data Respons: since the beginning. Headquarters: Høvik (Norway). Employees: 93 (2017). Data Respons Solutions designs, develops and delivers smart embedded and industrial IoT solutions by combining specialist engineering competence with standard embedded components from leading technology partners. The company is involved throughout the entire process – from specification, system architecture, HW design, software development, secure connections, test and qualification to volume deliveries. It also provides value-adding services based on specialist competence, including technical support and lifecycle management services, and is involved in next-generation studies. The demand for increased SW content, more functionality, higher performance and securely connected solutions continues to increase. A customised solution for your specific needs will often result in a lower cost of ownership and ensure that your system has appropriate end-to-end security, with a fallback solution to avoid data being compromised.

IT SONIX / XPURE – leading niche providers of specialist services and SW technology. Managing Director: Dr. Andreas Lassmann. Founded: 1999. Joined Data Respons: 2018. Headquarters: Leipzig (Germany). Employees: 125 (2017). IT Sonix and XPURE are situated in Leipzig with 125 employees. The companies are leading niche providers of specialist services and SW technology (Java, Embedded, Cloud and AI) specifically aimed at "Connected Car" solutions, internet of things, mobile services and embedded applications. They have been active in telematics, communication and project management for more than 15 years, specializing in agile software development for client-server, mobile applications and on-board units.
The companies are deeply involved in the ongoing digital transition of some of the leading automotive brands in Germany – one of the world's most dynamic and R&D-intensive industries.

Our business

Our more than 1,400 specialists across the Nordics and Germany are developing innovative products and services, solving complex technology challenges for our customers. We have a lifetime perspective on our relationships and work closely with customers – from idea to implementation. Customers benefit from our multi-disciplinary engineering competence and industry know-how developed over the last 30+ years. We develop everything from sensor level to the mobile app, making us a good partner for our customers' digital transition. "Smarter" challenges us to explore new technology! Smarter for us means data-oriented, analytical, digitalised and securely connected solutions enabling a more sustainable future. "Inside" refers to the "brain" of any machine, system or industrial robot.
Inside also means inside the minds of our specialists, striving to improve their skills every day. Data Respons is a technology company delivering R&D engineering services, software and hardware development, and smarter embedded and IoT solutions. Our vision: a smarter solution starts from inside. Our mission: to strengthen our customers' competitiveness by developing innovative solutions fulfilling specifications. Our mission drives Data Respons to work closely with the customer as a long-term preferred partner. Based on our multi-disciplinary engineering competence and broad industry experience developed over the last 35 years, we support our customers from specification to delivery. It is important that our customers experience the benefits of speed, innovation, quality and cost improvements – our main value propositions to the market.

Automation – improve performance beyond human capabilities

The Smart Factory, robotised, digitalised and data-driven (AI), changes the way we look at production and automation going forward. The ongoing evolution from smart optical sensors to data analytics based on big-data processing increases speed and quality, while reducing the cost of all industrial and consumer products and goods. Data Respons works closely with customers in all aspects of this comprehensive industry, from building smarter robots and developing sensor-based systems to software-heavy, cloud-based asset management systems.

- 14% annual growth in the worldwide supply of industrial robots
- 60%+ of global manufacturers use data recorded from connected IoT devices to optimise production
- 15% productivity increase in delivery and supply chain performance, driven by the implementation of IoT

Applications & experience:
- Control systems for reverse vending systems, waste recognition & sorting
- Intelligent sensor monitoring (temperature, humidity & radiation) for automation and production systems
- Predictive analytics and advanced test robot applications
- Control systems for data acquisition (DAQ) and optical-sensor-based measurement systems
- Asset management systems (AI) for predictive production and robotising
- Reliable Ethernet-based monitoring systems for industrial automation
- Smart factory systems and machine vision
- R&D IT services and system integration assisting all phases of the full software development cycle
- System solutions for control, command and surveillance of machinery

The classroom robot that lets children be at school even when they can't go…

Specialists from Data Respons contributed expert SW knowledge, mentoring No Isolation's technical team on Yocto and the Linux distribution. The AV1 robot allows pupils who are unable to be physically present at school to participate in classes and communicate with their friends, making sure they don't unnecessarily miss out on their education.
The telepresence robot helps children and young adults suffering from long-term illness to feel less lonely and isolated.

Culture & Values – we live our values every day!

Taking responsibility: by taking total responsibility we mean helping our colleagues, getting involved, showing enthusiasm and being loyal.

To perform: having an underlying will to succeed in everything we do, and a desire to acquire new knowledge and explore new ways to achieve our aim of being the best at what we do.

Being generous: we strive to have an open-minded, inclusive and learning-based attitude and culture.

Having fun: in Data Respons, crazy, non-standard ideas are valued. A good laugh and a sense of humour bring energy.

Sharing our knowledge: It is within the Data Respons mindset to share what you know with others who can benefit from it. We do that through our mentoring programs, as well as by sharing our specialists' in-depth knowledge in articles, presentations and technical talks.

Staying in shape together: Our colleagues across the group help each other stay in shape by exercising together and challenging one another to take part in various sports events. These three guys represent our Swedish R&D company Sylog AB. In addition, all employees within the group can take part in our InShape program, where we motivate and reward being active in our daily lives.

Having fun together: When working for a Data Respons company you will notice we bring a little madness to work. We try to have fun, either by bringing our sense of humour to the workplace or by participating in some of the activities that take place locally. Some of us go on hiking trips, or we meet to have a beer and a round of bowling. A few people even meet to fly drones.

Sylog – specialist consultants in system and SW development, technology and IT
Managing Director: Johan Jacobsson. Headquarters: Stockholm (Sweden). Founded: 2002. Joined Data Respons: 2007. Employees: 413 (2017).

Telecom & Media – meet the growth in demand for data

The cross-industry trend of a more data-driven, smarter and connected society is challenging existing telecom infrastructure to provide the connectivity, bandwidth and standard protocols supporting new services. The ongoing investment in 5G technology and networks is the key foundation for a broader roll-out of new value-adding IoT applications. Data Respons is a niche supplier of specialist development and test services, supporting players throughout the entire value chain in developing next generation networks (e.g. 5G and cellular IoT) and leveraging new service opportunities from these networks. We also have broad experience in developing customised IoT endpoints, infrastructure and applications ensuring a trusted security chain.

- 1000x more mobile data volume enabled in 5G compared to 4G networks
- 7x increase in smartphone traffic globally from 2015 to 2021
- 20 billion connected IoT devices by 2020

Applications & experience:
- Software and solutions for access networks (5G, 4G, Cellular IoT, Radio)
- Infrastructure and system solutions for mobile networks and satellite broadband
- IoT endpoints (sensors & actuators), IoT infrastructure (routers & gateways) and IoT security systems
- Software-heavy cloud solutions, mobile services platforms and digital services applications
- Solutions and platforms for integrated telematics, cloud infrastructure and IoT connectivity
- JAVA Virtual Machines for embedded applications in telecommunication
- Media gateways, Voice over IP, video streaming and digital signage
- High-volume portable accessories for M2M solutions, music and imaging
- Secure communication for control of air traffic, shipping and public transportation
Data Respons has joined the AKKA Group: AKKA is an international leader listed on the Euronext stock exchange, providing engineering consulting and R&D services for clients in the fields of mobility, life sciences, telecommunications, energy and defense.

ESG report 2019: Data Respons recently released an ESG report together with the annual report for 2019. Take a moment to find out how we work for a more sustainable world through technology.

Data Respons annual report: Our integrated report for 2019 is out. Last year was another record year across the board, and we upped our efforts on sustainability reporting. Contact: Rune Wahl, CFO, phone +47 950 36 046.

Interrupt Inside 2020 – for the first time in interactive format! We have wrapped up our most-read and most popular articles of this year in a magazine. The content gathers in-depth technology articles written by Data Respons' own software development and R&D specialists. We hope you enjoy reading them and find some useful insights. Lean back and start reading!

Agile System Modeling

The concept of modeling system requirements and design is not a new one. However, recent advances in languages and tools have created opportunities for reducing the total development effort for embedded systems and improving quality. This article aims to present some of these opportunities, based on the authors' experiences. Keywords are traceability, and multiple consistent requirements, design and test views. The article gives an overview of the SysML language, its usage and potential benefits, along with advice on how to get started with system modeling and literature recommendations.

Block diagrams and flow diagrams are perhaps the de facto standard for visually describing the structure and functions of embedded systems. These diagrams have a good track record, but one shortcoming is that they cannot easily be integrated with other diagram types, or even with other diagrams of their own type, using common tools. If you for example develop five block diagrams in Microsoft Visio with some common elements, you will need to maintain each of them individually. Make a change in one, and you will need to validate the others. If flow diagrams for these elements are also developed, the effort of making changes and ensuring consistency increases exponentially. A solution to making multiple diagrams consistent is to use a tool that integrates several diagrams and diagram types using a relational database. This was done by UML to unify the world of software modeling. UML is now a 20-year-old, mature software modeling language that promotes an object-oriented mindset. A major strength of UML is the ability to combine diagrams showing SW structure and SW behavior, and to reuse elements. Reuse is also a key aspect of SysML, and portions of models can be reused between product generations or variants. The most used UML diagram types are arguably the class, sequence and use-case diagrams. UML tools let elements be reused between diagrams, and a change in one element is therefore reflected in all diagrams showing that element. This enables more aspects of a system to be documented with diagrams, for less effort. A common criticism of UML is that it is "red tape" that gets in the way of coding, and UML will indeed let you describe SW all the way down to SW function level. Compared to UML, the Systems Modeling Language (SysML) is more lightweight, more general, and targeted towards modeling requirements and architecture.
SysML background

In 2001, the International Council on Systems Engineering (INCOSE) and the Object Management Group (OMG) issued the "UML for Systems Engineering" request for proposal, with the intention of adapting UML for system specification and design. In the 16 years since, SysML (now at version 1.4) has developed into a mature and more agile language than UML, suitable for modeling requirements, hardware, software and processes. In addition, a SysML model provides opportunities for documenting the relationships between requirements and system components at any level of decomposition, in accordance with best practices and also functional safety requirements.

SysML 1.4 and Enterprise Architect

The SysML language is a profile of UML, and provides both a notation – in the form of diagrams, elements and relationships – and the semantics of these. Some diagrams are directly adopted from UML, the requirements diagram and parametric diagram are new diagram types, and some UML diagrams have been left out of SysML. The authors have used Enterprise Architect from Sparx Systems for SysML modeling. It is a feature-rich and flexible modeling tool with good support for SysML 1.4. The process of modeling an aspect of the system is to first create a diagram of a suitable type (see SysML 1.4 Diagram Types), secondly drag in any previously defined elements, and thirdly define any new elements or relationships. After that, descriptive text and visual formatting can be added for increased readability. Enterprise Architect lets you hide relationships and element properties on a per-diagram basis, so a diagram can show what you want and nothing more.

Systems modeling in Data Respons

SysML models have been used in Data Respons by the authors since 2011. The applications have ranged from concept studies and internal process descriptions to requirements specifications and architecture descriptions. For requirements specifications, the authors have created models with full bi-directional traceability between system requirements and environmental requirements (aviation). In the automotive industry, the authors have established bi-directional traceability in a system requirements specification down to software unit level. This has shown that SysML models can be an efficient means to achieve traceability between system requirements and stakeholder requirements, and also down to low-level design. The authors have also used a SysML model for stakeholder management, capturing requirements, exploring solution concepts and developing system architecture. Capturing this information in the same model has shown the strength of using a model containing diagrams as a tool for communicating and validating design decisions in an iterative manner. The model also proved efficient for establishing a shared terminology and understanding of the system under development, for exploring solution concepts as a team, and for documenting system architecture at multiple levels of decomposition.

Modeling tools

There are several SysML-capable tools to choose from, with both commercial and open source licenses. Googling "SysML tools" yields lists of popular tools, comparisons and feature lists. SysML underwent significant changes up to version 1.3. The current version of the standard (1.4) has been around since 2015. Some tools are better than others at implementing new SysML features, and not all available tools have mature enough SysML support for efficient system modeling. No Magic MagicDraw, Altova UModel and Sparx Enterprise Architect are among the most popular SysML-capable modeling tools.

Getting started

SysML is a language.
In order to create a model in the SysML language that serves a purpose in a given project, the purpose must first be defined; this might for example be a requirements specification, consistent design diagrams, interface specifications, test management, or a full architecture description. Secondly, a suitable model structure and workflow must be established. This is arguably the most critical challenge of working with SysML models. Knowledge of Systems Engineering best practices and experience with SysML or UML and the modeling tool is recommended. A SysML model is structured using packages. These are logical containers that hold diagrams and other elements. Elements in the same or different packages can have relationships with each other. Even though the element and relationship types have defined SysML semantics, practice shows that these are not always defined clearly enough. While using the semantics of the SysML standard is a good thing, we have found that the usage of diagram, element and relationship types should not be tied too closely to the SysML semantics, but should instead be documented on a per-model basis, thus ensuring consistency within the model. The package diagram below shows one possible model structure with package dependencies.

Model setup recommendations

SysML models should be structured on a per-project basis in order to meet project-specific requirements. However, the experience of the authors is that following some general rules when setting up the model makes it more readable and maintainable:

- Diagram legends can be defined once and used in several diagrams. Use legends and element colors consistently to help make the model readable.
- All diagrams should have a text box describing what aspect of the system the diagram shows. For improved readability, do not rely solely on SysML notations like the diagram header.
- Document the usage of SysML element and connector "Types" and "Stereotypes", and make sure the meaning is unique. This is a prerequisite for a consistent model, improves traceability, and enables complex model searches.
- Use a "package" diagram to establish a package structure and document package dependencies. This serves as an overview of the model, and helps in managing changes to the model.
- Define a logical system break-down, and structure all information in accordance with this. The items in the breakdown structure should represent logical parts of the system (housing, power module, controller SW etc.). Break down as many levels as needed.
- Manage the model scope, and stop to consider return on investment before modeling below "architecture level".
- Quick visualization of relationships benefits from a properly structured model. Make sure you understand the visualization capabilities of the modeling tool before deciding on the model structure. For Enterprise Architect these are the "Traceability view", the "Relationship Matrix" and the "Insert Related Elements" feature.

Traceability, reporting and visualization

Projects may require traceability between stakeholder requirements, system requirements, components, test-cases and tests at different levels. As long as the model setup recommendations above are followed, custom searches can be saved and performed quickly on the model, without the need to document complex relationships directly. Examples of possible custom searches are listed below:

- Passed test-cases at SW component level tracing to a set of stakeholder requirements.
- Components impacted by changes in a requirement, and tests that must be re-run.
- Stakeholder requirements not yet verified at component level.
Sometimes there are better ways to present or share information than using diagrams. Most SysML-capable modeling tools have several options for reporting and presenting data. Enterprise Architect has a customizable report generator for MS Word and PDF, an .html generator and an .XMI import/export function, in addition to version control integration. Also, relationships in packages can be presented using the relationship matrix, and any diagram can be presented in list format. This makes it possible to generate different but consistent reports and visualizations.

SysMod – the systems modeling toolbox

SysML provides a language with notation and semantics, but does not advise on the process of system modeling. SysMod is a framework for modeling the system from stakeholder requirements to a product architecture, using examples with SysML and Enterprise Architect. This can be a good starting point for determining the scope of the modeling effort and the model structure. SysMod describes at a high level what should be modeled and the relationships between packages. See the literature recommendations for a description of SysMod.

Summary

SysML with supporting tools provides opportunities for reducing the documentation effort and increasing quality in all stages of development projects. In the initial phases of a project, a SysML model can improve communication and help validate requirements and design decisions. Inconsistency is easier to discover through the use of visual models. The project manager can track progress using custom searches across complex relationships, for example by the number of customer requirements that are verified at component level. After traceability between requirements, design and test has been established, use-cases or user stories can be prioritized more efficiently for each phase of development. Impact analysis diagrams can be generated and used for change management. Use-case or user-story based development can benefit from giving the developer auto-generated views of requirements and architecture, providing relevant information for the specific use case or user story. The ability to document relationships from system objective through requirements and design makes it possible to trace all functionality back to customer requirements and business value. This is also valuable for testing, as test coverage can easily be measured. Diagrams showing complex relationships can be auto-generated based on custom searches. At project delivery, customer documentation or internal documents can be auto-generated from the model using custom templates – for example design descriptions, interface descriptions and test reports. Consistency is ensured when all reports are generated from the same model. The model can also be reused for future generations of the product to speed up the initial phases of a project. If a model is structured in a manner that facilitates its purpose, the results can be requirements and architecture descriptions that are more consistent and less time-consuming to develop and maintain than document-based specifications. The visual notation of SysML gives the model user a quicker understanding of requirements and architecture; this can make collaboration with stakeholders and within the development team more efficient. A prerequisite for this is that some modeling guidelines are followed in structuring and developing the model. The model structure decided on initially will impact its usability later in the project.
Consideration of the model's purpose and potential scope must therefore be given as early as possible. This article gives some recommendations for structuring system models. To get a clearer picture of the opportunities and limitations of SysML models and the Enterprise Architect modeling tool, we recommend the literature listed below. SYSMOD – The Systems Modeling Toolbox gives an overview of the modeling process, and can be great input for deciding on the modeling scope and structure.

Literature recommendations:
- A Practical Guide to SysML – the SysML language. Authors: Friedenthal, Moore, Steiner.
- SYSMOD – The Systems Modeling Toolbox – a Systems Engineering process based on best practices, using SysML. Author: Tim Weilkiens.
- 50 Enterprise Architect Tricks – useful tips and tricks for modeling in Sparx Enterprise Architect. Author: Peter Doomen.

References:
- INCOSE – International Council on Systems Engineering: http://www.incose.org/about
- UML – Unified Modeling Language: http://www.uml.org/what-is-uml.htm
- SysML – OMG Systems Modeling Language: http://www.omgsysml.org
- OMG – Object Management Group: http://www.omg.org
- SYSMOD: https://leanpub.com/sysmod

BY: Fredrik Bakke, Senior Development Engineer, Data Respons & Svein Tore Ekre, Senior Development Engineer, Data Respons

"Drone" and "UAV" (Unmanned Aerial Vehicle) are generic terms that include many types of unmanned, remotely controlled aerial vehicles: fixed-wing planes, helicopters and multi-rotors. Professional drones have a wide range of applications. Aerial photography during sports events no longer has to rely on expensive full-size helicopters, and real-estate agents frequently use drones for documentation. Drones locate missing people, and can monitor habitats exposed to a risk of pollution. Electricity companies are now inspecting some of their high-voltage lines without expensive power outages and risky climbs. Even a conservative industry like the railways is considering drones for inspection of disrupted tracks in areas with limited access. Several companies plan to deliver small packages by drone, but it is not a commercial reality yet, partly due to regulatory limitations. Larger military drones have been common for decades, but recently small, stealthy nano-UAVs have been developed for shorter reconnaissance missions.
Figure 1 shows the Prox Dynamics Black Hornet, which is also intended for use by rescue workers. It can give situational awareness or discover victims trapped in collapsed building structures. Drones can be piloted in two different ways: either line of sight, by visually observing the drone, or by First Person View (FPV). In an FPV system the video image from an onboard camera is transmitted by radio to a personal video display on the ground, in the form of a screen or video goggles. Figure 2 shows a typical set of video goggles with circular polarised antenna and embedded receiver. Systems span from simple low-cost setups to advanced systems with high-power video transmitters and ground receivers with directional tracking antennas that offer ranges of tens of km. The range of the wireless video link is limited by a number of factors. The path loss itself will diminish the signal as distance increases, and obstacles in the line of sight can give additional attenuation. However, in a natural environment there are some less obvious challenges to the radio link that require clever solutions. We will take an in-depth look at the two main issues. Other sources of radio transmission in the environment can interfere with the main signal. If the interfering signals occur in the same frequency band as the wireless video link, they will act as in-band noise. This reduces the signal-to-noise ratio, resulting in a noisy video image and limited range of the link. A typical interferer can be the video transmitter of another drone in the area, a nearby WiFi hotspot or a mobile phone. The problem can be minimised by selecting a channel as far away in frequency from the interferer as possible, or by moving the video receiver and antenna. If the source of interference is powerful but outside the band of the wireless link, it is called a blocker. The blocking signal can penetrate insufficient front-end channel filtering and compress the dynamic range of the Low Noise Amplifier (LNA). A simplified diagram of a receiver signal chain is shown in Figure 3. Typical high-power blockers can be radars, broadcast towers or military radios. Technical measures for handling interference include good front-end channel filtering of the video receiver and a directional ground antenna that suppresses interference from other directions. Directional antennas with a narrow beam and high directional gain will also increase the received strength of the signal from the drone. The antenna can even be equipped with a tracker that automatically points the antenna at the moving drone, using GPS coordinates from the drone as input to the control system and the tracking algorithm. Even with a strong, noise-free signal, a radio link can get sudden dropouts, especially in cluttered or urban environments. This can be due to a reflected propagation path cancelling the direct propagation path. The cancelling occurs because of the phase shift associated with the different propagation delays. It occurs at specific points in space and can disappear just by moving the antenna less than one wavelength. In addition to signal cancellation, multipath propagation also results in symbol delay spread: the symbols from the various paths arrive at different times, causing bit errors if the delay is significant. Figure 4 shows the principle of multipath propagation and delay spread.
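To make the cancellation effect concrete, here is a small numpy sketch (our illustration, with an assumed 5.8 GHz video band and a single reflected path; any phase change at the reflection point itself is ignored for simplicity). When the extra path length approaches half a wavelength, the direct and reflected arrivals are in anti-phase and the combined signal almost disappears:

```python
import numpy as np

freq = 5.8e9                        # assumed FPV video band
wavelength = 3e8 / freq             # ≈ 5.2 cm

def combined_amplitude(extra_path_m: float, reflection_gain: float = 0.9) -> float:
    """Sum of a unit-amplitude direct path and one attenuated reflected path."""
    phase_shift = 2 * np.pi * extra_path_m / wavelength
    reflected = reflection_gain * np.exp(1j * phase_shift)
    return abs(1.0 + reflected)

for extra in (0.0, wavelength / 4, wavelength / 2):
    print(f"path difference {extra * 100:5.2f} cm -> "
          f"combined amplitude {combined_amplitude(extra):.2f}")
# At ~2.6 cm of extra path the two arrivals nearly cancel: a deep fade that
# can appear or vanish by moving the antenna a fraction of a wavelength.
```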
The two main strategies for dealing with multipath fading are avoidance of reflected signals or constructive combination of them. The most intuitive way to avoid reflected signals is to have a directional focus along the line of sight. As explained previously, this can be accomplished with directional antenna systems. The reflected signals will approach at an angle outside the main lobe of the antenna and will be attenuated. Another less intuitive yet simple way is adding antenna polarisation. The most effective type for this purpose is circular polarisation, where the radio wave, due to the antenna shape, propagates with a twisting motion. A receiving antenna with matching polarisation will pick up this signal but suppress un-polarised signals. Reflected signals are avoided because the reflection disrupts the polarisation. Figure 5 shows circular polarised antennas connected to a diversity-switching FPV receiver. An important factor to consider is antenna positioning on the drone, because losses are introduced when the angle between the transmit and receive antennas increases. At exactly 90° the loss is theoretically infinite, so a combination of antennas with different angles can be beneficial on a drone. This strategy can also diminish the impact of flying into the null of the antenna radiation pattern: omnidirectional antennas have a doughnut-like radiation pattern with a null along the perpendicular axis, and when flying directly over the ground station antenna, the drone can hit this spot. The strategy of reflection avoidance has one major downside, and that is the dependence on a line-of-sight path. When reflected signals are suppressed, major obstacles in the radio path cannot be handled efficiently, resulting in a rapidly declining signal. Constructive combination of reflected signals can solve not just the multipath fading problem, but also maintain sufficient signal strength when path obstacles occur. A commonly used mechanism for constructive combination of multipath signals is diversity. Diversity can be employed on both the transmitting and the receiving side by having two or more antennas connected. When the antennas are spaced by a distance of around a wavelength or more, the probability of severe signal cancelling is greatly reduced. The signals from the antennas can be passively summed before entering the receiver, or the receiver can actively switch to the antenna with the strongest signal. The radio can also have duplicated receive chains and sum the signals at baseband. A more sophisticated mechanism evolving from diversity is Multiple Input Multiple Output (MIMO). Here a higher number of antennas is used to receive and analyse several multipath signals. The phase and amplitude of the signal at each antenna are measured and compared, and a fast algorithm calculates how each signal must be processed to make up the wanted signal: each signal must be precisely phase-shifted and amplified to enable constructive combining. This process is called weighting. Combining and individually treating a higher number of reflected paths gives a much higher signal-to-noise ratio than plain diversity. Consequently more advanced modulations can be employed, resulting in higher link bandwidth. The weighting of each signal can be done at different stages in the receiver signal chain; with the advances in compact, low-power digital processing, it is most common to do this on the baseband samples after analog-to-digital conversion. Figure 6 shows a simplified block diagram of this principle.
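The weighting step can be illustrated with maximum ratio combining, one classic weighting scheme (our example; the article does not state which algorithm any particular receiver uses). The sketch assumes the receiver already has a flat-fading channel estimate per antenna:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical flat-fading channel seen by 4 receive antennas.
h = (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(2)

symbols = np.exp(1j * np.pi / 4) * np.ones(100)            # transmitted symbols
noise = 0.1 * (rng.normal(size=(4, 100)) + 1j * rng.normal(size=(4, 100)))
received = h[:, None] * symbols[None, :] + noise           # per-antenna signals

# Maximum ratio combining: weight each branch with the conjugate of its
# channel estimate so all paths add in phase (the "weighting" step above).
weights = np.conj(h) / np.sum(np.abs(h) ** 2)
combined = weights @ received                              # back to one stream

print(np.mean(np.abs(combined - symbols) ** 2))            # small residual error
```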
The MIMO concept is widely used in LTE (Long Term Evolution), or 4th generation, mobile data transmission, and has recently been adapted for use in high-end digital video links for FPV. The camera used in FPV setups is typically light and compact, but robust and very light-sensitive. Apart from the optics, the image sensor is the most critical part for high-quality video. There are currently two main types of image sensors available: Complementary Metal Oxide Semiconductor (CMOS) and Charge Coupled Device (CCD). CMOS sensors are most commonly used. They are inexpensive, require low operational voltage, and have lower power consumption than CCD sensors. Most drone pilots demand high-performance video, even when moving fast and during vibrations. In this area CMOS image sensors have some shortcomings. Images are scanned pixel by pixel, line by line, and there is a limit to this scanning speed. This way of capturing the image is called a rolling shutter. On a fast-moving drone with high-frequency vibration from the motors and propellers, artifacts, distortion and even wobbling will be noticeable in the image. CCD cameras, on the other hand, collect the image differently: the light-induced charge of each pixel is collected from all lines simultaneously. This is referred to as a global shutter and is the reason why CCD cameras can handle fast movement and vibration without image distortion. The cells or pixels require very small amounts of light, and hence the sampling of the full frame can be fast. Since CCD readout and signal conversion require less active circuitry on the sensor itself, the raw pictures will contain less noise than the output from a CMOS sensor. CCD sensors also have a higher dynamic range, i.e. a bigger difference between the low-light threshold and saturation of a pixel. The best CCD cameras for consumer FPV setups are sufficiently light-sensitive even for night flights with only a few light sources present. CCD cameras, however, are more expensive and require a higher operational voltage to run the charge-coupled cells in the image detector. The signal conversion circuitry also consumes more power than the distributed CMOS pixel amplifiers. Most consumer, and even professional, FPV systems still use analog wireless video transmission, with very compact and low-cost systems. Figure 7 shows a complete analog FPV transmitter system giving approximately 1 km range (an AA cell is shown for size comparison). Analog video is real-time and requires no image compression or advanced processing. This results in near-zero latency between the image captured by the camera and the one viewed by the pilot. Latency of just tens of milliseconds is very noticeable when flying fast with FPV, and the hundreds of milliseconds often seen in standard digital High Definition (HD) systems for consumer use will pose a safety threat. Another surprisingly favorable property of analog video is the gradually increasing picture noise when the video link starts to break down, for example when reaching the boundary of the radio range or encountering interference or obstacles. A pilot experiencing this instinctively knows to turn the drone around or avoid an obstacle. With a consumer-grade digital video link, a rapidly decreasing signal-to-noise ratio will often result in stuttering and frozen video frames.
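To put those latency figures into perspective, a quick back-of-the-envelope calculation (ours, with assumed speeds and typical latency classes) shows how far a drone travels before the pilot even sees an obstacle:

```python
# Assumed figures for illustration only.
speed_kmh = 100                       # a fast FPV drone
speed_ms = speed_kmh / 3.6            # ≈ 27.8 m/s

for latency_ms in (5, 40, 200):       # analog, low-latency digital, consumer HD
    blind_distance = speed_ms * latency_ms / 1000
    print(f"{latency_ms:>4} ms latency -> {blind_distance:.1f} m flown blind")
```

At 200 ms the drone has moved more than five metres before the pilot can react, which is why consumer-grade digital links can pose a safety threat at speed.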
In the last couple of years, intense development has gone into creating compact, low-latency HD video links. The digital radio transmission system needs robustness to preserve an acceptable framerate even under rapidly deteriorating radio conditions. Fast-adapting dynamic modulation schemes are applied to narrow the required bandwidth during periods of reduced signal quality: a reasonable framerate is maintained while restricting video resolution. A wireless high-framerate HD video link has to carry very high bitrates, and hence an advanced modulation is needed. This puts a strict requirement on the signal-to-noise ratio, so a variant of the antenna MIMO concept previously explained is applied, together with Orthogonal Frequency Division Multiplexing (OFDM). OFDM is very effective for communication over channels with frequency-selective fading (different frequency components of the signal experience different fading). With a traditional single wideband carrier, frequency-selective fading is complex to handle. OFDM mitigates the problem by converting the high-speed serial data into parallel low-bandwidth subcarriers. Some subcarriers are reserved as pilot carriers (used for channel estimation/equalisation and to combat magnitude and phase errors in the receiver) and some are left unused to act as guard bands. The reservation of subcarriers for guard bands helps reduce out-of-band radiation, and thus eases the requirements on transmitter front-end filters. In the receiver, fading correction is applied to each subcarrier. This can be seen as a form of diversity, called frequency diversity. OFDM is a well-known principle from WiFi and WiMAX systems, and from DAB broadcasting. Figure 8 shows a simplified block diagram of the OFDM concept. The underlying modulation is dynamically selected depending on the available signal-to-noise ratio of the channel. Quadrature Amplitude Modulation (QAM) is a modulation form where each constellation point represents a certain amplitude and phase of the radio signal. At 64QAM one symbol equals 6 bits and yields a high bitrate per subcarrier, but this constellation can be used only at a good signal-to-noise ratio. Under deteriorating channel conditions the modulation steps down to 16QAM, and continues to simpler constellations if the channel worsens. For very bad conditions Binary Phase Shift Keying (BPSK), with only two constellation points, is employed, where each point represents a phase shift. With one bit per symbol it gives a very modest bitrate, but is correspondingly robust in noisy conditions. In this state the video resolution and framerate are just sufficient to fly safely for a short duration. Figures 9 and 10 show a simulation of how interference outside and inside the channel affects a 16QAM constellation. Unlike WiFi radio links, a low-latency FPV link cannot depend on an acknowledgement for every data frame, and does not support re-sending of failed data frames. Instead, Forward Error Correction (FEC) coding is employed to handle most of the occurring bit errors. The few remaining failed frames will be part of the displayed video stream. This can be observed as occasional macroblocking in the image, but will not affect the image quality much if the adaptive modulation reacts quickly to worsening radio conditions. Figure 11 shows screenshots taken during a drive test of a digital wireless HD video link; it is a good illustration of an adaptive modulation scheme in operation.
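The step-down behaviour can be sketched in a few lines of Python. This is a minimal illustration with made-up SNR thresholds; a real link also applies hysteresis, changes FEC rates and adapts per subcarrier:

```python
MODULATIONS = [            # (name, bits per symbol, rough SNR needed in dB)
    ("BPSK", 1, 4),        # thresholds are illustrative, not from a standard
    ("QPSK", 2, 8),
    ("16QAM", 4, 15),
    ("64QAM", 6, 22),
]

def select_modulation(snr_db: float):
    """Pick the densest constellation the current channel supports."""
    usable = [m for m in MODULATIONS if snr_db >= m[2]]
    return usable[-1] if usable else MODULATIONS[0]   # BPSK as last resort

for snr in (25, 17, 9, 2):
    name, bits, _ = select_modulation(snr)
    print(f"SNR {snr:>2} dB -> {name:6} ({bits} bit/symbol)")
```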
Wireless video for FPV drone piloting is still immature technology, and we will see compact and low-cost HD FPV systems emerge in the near future. The key to lowered cost is increased integration of systems on chip and the resulting high volumes. Paradigm shifts will occur when totally new radio, camera or display concepts emerge. The next generation of cellular and WiFi technology, termed 5G, will exploit dynamic beamforming to increase system gain and keep interference low. Together with more sophisticated MIMO, this will increase performance and transmission bandwidth further. It is likely that these concepts will be applied to future FPV systems when the technology matures. This will result in higher performance with extended range, higher image quality and better reliability. It will enable drones to handle more of our present challenges, and those we have not come to think of yet.
di|Bjørn Bergersen, Specialist Development Engineer, Data Respons
st|BY:
h1|Drones and wireless video
h2|Lately, drones have become very popular both as professional tools and for recreation and air sports competitions. In a series of articles we will present the different systems in a drone, including the ground support equipment. This article focuses on how wireless video systems are implemented to give a First Person View (FPV) to the pilot remote-controlling the drone. Wireless video for first person view. Challenges and antenna solutions for wireless video. Interference. Multipath fading due to reflection. Avoidance of reflected signals. Constructive combination of reflected signals. Analog FPV transmission. Digital FPV transmission. Future development.
h3|FPV camera
pa|Data Respons is working with leading technology providers to enable state-of-the-art software and infrastructure supporting efficient operations. Our specialist competences and relevant cross-industry technology experience make us an interesting partner in this market.
st|FINANCE & PUBLIC. $14 trillion transaction value from next-gen payment technology. $248 billion of global fintech investments. 64% of people use their smartphone for mobile payments on a weekly basis.
h1|Flexible, secure and scalable applications and infrastructure
h3|The banking industry has undertaken a comprehensive digitalisation, automating as many processes as possible to stay competitive. However, the emergence of fintech companies requires further adaptation in all aspects of the value chain. For several traditional players this means a total remake of their system infrastructure into a modern, software- and cloud-based framework to offer customers flexible and platform-independent services, and the implementation of artificial intelligence (AI) to support key decision processes.
st|APPLICATIONS & EXPERIENCE. New digital cloud-based SW architectures, structures and frameworks. Digital payment solutions for modern fintech. Digital cloud SW and systems. Integration of state-of-the-art memory grids for web and mobile applications. Advanced test systems SW and programs. Application modernisation. Provision of B2C processes into multiple channels. Integration of core software into a private cloud infrastructure.
SELECTED CUSTOMERS
Data Respons delivers record high quarterly results. Data Respons results for the 4th quarter 2018. Contract of 20 million NOK.
pa|Data Respons has, for the fourth year running, signed an agreement supporting the international chess championship for young talents, the Stockholm Chess Challenge. Data Respons supports The Society for Street Children in Nepal, a non-profit fund-raising organisation working for the provision of permanent accommodation for street children, or those who risk becoming street children, in Nepal. Since 1998 Data Respons has cooperated with the humanitarian organisation “On Own Feet”, which works with children in war-torn countries. Data Respons R&D Services is a main sponsor of a team of innovative youth representing tomorrow’s technology experts in the international competition First Lego League. Data Respons supports the official U19 development team of the UCI WorldTeam BORA – hansgrohe, Team Auto Eder Bayern. Our company MicroDoc supports students with a scholarship program during bachelor or master studies through I.C.S. (International Co-operative Studies). Every year our employees at Ingolstadt-based EPOS CAT contribute to the United Nations Children’s Fund by participating in the Unicef Company Charity Run! As a bonus to the donations made, this activity also brings the team together and promotes good health! Data Respons company IT Sonix supports the project “Total Garden”, in which the kindergarten Kita Am Kirschgarden rehabilitated its outdoor areas to motivate children to be more active during the day.
st|ENABLING THE YOUNG through quality education. ENABLING THE YOUNG to have fun & perform. ENABLING THE YOUNG to combat inequality. Enabling the Young through Chess. Enabling street children in Nepal. On Own Feet. Enabling technology experts of tomorrow! Enabling young cyclists to perform! Scholarship programme. Running together against hunger and hardship. Enabling child mobility in Leipzig.
h3|Young people are our future and we want to be a part of giving coming generations the best starting point possible and the ability to grow and prosper into educated, healthy and valuable individuals. This is why Data Respons has set up a fund called Enabling the Young. The fund supports a wide range of efforts where young people benefit, and we seek worthy causes where we feel assured that the support given will go more or less directly to the cause, with very few administrative expenses.
em|“It is important for Data Respons to enable young people to have the best opportunities to grow and prosper!” Kenneth Ragnvaldsen, CEO Data Respons
pa|Vertical Farming is high-tech growing of crops in large windowless industrial buildings close to the world’s mega cities. Plants are stacked on shelves, with their roots in water instead of soil. Sunlight is replaced by LED light, while water enriched with nutrients circulates in a large, closed system. Thus Vertical Farming can achieve total independence from outside weather conditions. Crops can be harvested several times a year, and food can be produced close to where it is consumed. Since Dickson D. Despommier, professor of microbiology at Columbia University, launched the concept of Vertical Farming in 1999, it has been quite popular among futurists, tech trendsetters and others promoting new technology for handling climate change. But so far it has been rather difficult to turn the Vertical Farming vision into reality. By 2050 Earth’s population will have increased from 7.5 to approximately 10 billion people, with two thirds of them living in urban areas. As a result, global food production has to increase by 70 per cent. This is where vertical farming could come in, not least because food could be produced directly within the mega cities of the future, eliminating the environmental impact of transport. Until now Vertical Farming has been unable to evolve into large-scale production. The few attempts made soon brought one crucial weakness to light: profitability. Naturally, it dampens the enthusiasm when a head of lettuce coming from Vertical Farming ends up being 10 to 20 times more expensive than a conventional one. The main reason for this lack of profitability is the amount of manual handling needed. Vertical Farming is labour-intensive and needs rigorous automation before it would make commercial sense to go from small experimental demonstrators to large-scale production facilities. This is where Watney the robot gardener comes in. Watney is being developed by electronics engineer Servet Coskun. He is designing the autonomous self-driving robot with a scissor lift to move the 250 x 80 cm plant trays stacked three stories high at the experimental facility. For a year, Servet and his company have run their own small-scale Vertical Farming facility together with a local company operating a chain of restaurants and canteens, to identify the most labour-intensive operating procedures and find ways to automate them. – We soon found out that handling of the trays should be at the top of the list, and to automate that procedure we designed Watney. It has scissor lift and forklift functionality and can drive autonomously to a plant tray. It then moves up to the desired height, takes out the tray and moves it to a new position. – You can find similar self-driving robots in other domains, small-scale ones in pharmacies and big ones in the manufacturing industry and in logistics centers. We have designed Watney to fit the specific requirements of Vertical Farming, and I expect Watney to be ready for series production by the end of this year or early next year. – In the long run we will provide Watney with additional functionalities, like a camera monitoring the growth of the plants and checking for pests. Watney will also be able to cut the plants. Our long-term goal is to enable Watney to do everything a gardening worker does – and more.
Servet Coskun is primarily using off-the-shelf components to build Watney, an approach he picked up while studying and working for a company developing autonomous vehicles for industry and logistics. Those engineers preferred to use standard components when possible, developing their own electronics only if they could come up with a more customized and cheaper solution. – Watney will help us reach our goal, which is reducing manual handling in Vertical Farming by 50 per cent, says Servet Coskun. – That goal is very ambitious, but it is necessary to get there to make Vertical Farming competitive and scalable. However, Watney will not be able to achieve a 50 per cent reduction on its own. It needs help, and analysis of the work processes has highlighted other areas that could benefit from being automated. – While the handling of plant trays is at the top of our list, cleaning and preparing the system after a harvest comes second. In Vertical Farming plants grow in a hydroponic system, in water circulating in a closed environment. They grow in large trays, and after a harvest each tray has to be cleaned and roots and other residue removed. That is done by connecting a secondary system, which cleans the trays using water with hydrogen peroxide added. After cleaning, the tray is reconnected to the primary system. That process we want to automate as well. – Another work-intensive process is calibrating the sensors monitoring the closed system supplying the plants with water. In hydroponics the water has to have specific characteristics, e.g. a pH value of around 6.5, which is a little lower than ordinary drinking water. When growing, the plants emit basic substances causing the pH level to rise. For the pH level to remain at 6.5 we have to continually measure it and reduce it if necessary. – A pH sensor is very sensitive and requires calibration once a month, which is messy work. You remove it from its usual place in the main tank of the watering system. Then you clean it and put it in three different calibration solutions. Each time you have to wait until it delivers a stable measurement, which you then register in your computer. Then you return it to the main water tank. – On top of that, the sensor measuring the salinity of the water needs to be calibrated in two calibration solutions, while the oxygen sensor needs one calibration solution. A large Vertical Farming plant is equipped with a large number of sensors, so it makes good sense to find ways to automate the handling of them. The solution Servet is using today is a range of sensors running on low-power boards from Particle, communicating via WiFi. By the way, Servet Coskun’s robot gardener is named after the main character in the 2015 Ridley Scott movie The Martian. In 2035, astronaut and botanist Mark Watney, played by Matt Damon, is left behind on Mars but manages to survive by growing vegetables in an improvised hi-tech nursery. TechPeople is a consultancy house within the Data Respons group. The company is based in Copenhagen and specialises in embedded solutions and IT business systems. TechPeople has specialists within hardware, software and mechanical development, project management and product testing. TechPeople’s innovative customers range from large international companies to creative start-ups.
di|Arne Vollertsen for TechPeople A/S
st|BY:
h1|Meet Servet Coskun, who eats, sleeps and breathes green technology!
h2|Servet is a specialist electronics engineer working for Data Respons subsidiary TechPeople. He has his heart set on technology and sustainability, and in his spare time he started his own start-up company with the mission of making Vertical Farming competitive with conventional farms and greenhouses. To achieve this goal, Servet designed a self-driving robot gardener called Watney.
sp|Vertical Farming. Vertical farming has the potential to contribute to handling some of the great challenges the world is facing. A step towards profitability. Analysing operating procedures. Off-the-shelf components. Watney needs help. Cumbersome calibration of sensors. Managing Director, TechPeople.
pa|IT SONIX / XPURE are situated in Leipzig and have 125 employees. The companies are leading niche providers of specialist services and SW technology (Java, Embedded, Cloud and AI) specifically aimed at “Connected Car” solutions, internet of things, mobile services and embedded applications. They have been active in telematics, communication and project management for more than 15 years, specialising in agile software development for client-server, mobile applications and on-board units. The companies are deeply involved in the ongoing digital transition for some of the leading automotive brands in Germany – one of the world’s most dynamic and R&D-intensive industries.
st|Leading niche providers of specialist services and SW technology (Java, Embedded, Cloud and AI).
IT Sonix & XPURE – Managing Director: Dr. Andreas Lassmann. HQ: Leipzig (Germany). Founded: 1999. # of employees: 125 (2017). Joined Data Respons: 2018.
pa|The frobese GmbH Informatikservices specialists have a strong record in large projects, managing the business and IT-architectural lifecycle, especially in core banking and general financial service businesses. We develop customized concepts, transformation strategies and solutions that build up or support the business of our customers, and we also step up to the project management front line to meet goals and deliver success. Our long-standing customers include NORD/LB, KKH, Finanz Informatik and VÖB Service. Our expertise extends to IT/strategy consulting, transformation of core financial business structures, business process management, organizational consulting, business field development, IT quality management, requirements management, procedures and models, software architectures, implementation, test management and testing.
We have experts and skills especially in the fields of:
li|core banking and insurance business requirements, operational processes and IT architecture, and their modernization and transformation
li|management of large-scale projects in the financial business
li|requirements analysis and conceptual design for customer-specific IT and non-IT solutions
li|agile management and development of high-quality software with a broad set of methods, concepts and platforms
st|Frobese is a cooperative and successful team of experts specialized in consulting for banks and insurance companies. We focus on business expertise, project management, meeting quality standards and software development.
Frobese – Managing Directors: Dr. Dirk Frobese, Nick Stöcker. HQ: Hanover (Germany). Founded: 1998. # of employees: 96 (2020). Joined Data Respons: December 2020.
h1|Hacking the home office
h2|Europe is once again turning on the brakes, demanding strict social distancing and extended use of home offices. We have been through it before, and many of us have not been part of a physical work environment since March. Our CEO, Kenneth Ragnvaldsen, has a few learning points to share on how the pandemic and home office solutions are affecting us all. Lessons learned and two appeals from our CEO, Kenneth Ragnvaldsen.
h4|For all those who are not veterans: connect with your manager as often as you need to. All you experienced specialists and managers: connect with a new team member as often as you would in the office.
em|Europe is once again turning on the brakes, demanding strict social distancing and extended use of home offices. We have been through it before, and many of us have not been part of a physical work environment since March. But our own data shows that we as an organization are coping well and even excelling in these challenging times. However, there are a few learning points that we want to share, and there is more and more data on how the pandemic and home office solutions are affecting us all. A recent study from the consultancy firm EY shows that it is the young digital-native talents who are struggling the most with the home office concept. And the explanation is obvious when you think of it. Coping with a prolonged home office situation is easier when you are a veteran. When you know your colleagues. When you know your tasks, your boss and all the processes and systems that come with the territory. If you, on the other hand, are a newly recruited talent who doesn’t know anyone and still hasn’t fully understood your new job, it is stressful and uncomfortable to sit alone on an everyday basis. For us, not hiring young talents and specialists is not an option. We are taking a small piece of responsibility by offering jobs and opportunities to newly educated talents. During the pandemic we have continued to hire people across our group because we are optimistic, and we are able to absorb new people into our organisation. But onboarding new people with a lockdown in place is not ideal. We have two appeals to our employees for the rest of 2020, and they might be useful for any other organisation as well: Frequently asking questions as the new team member can be uncomfortable. And even more so if it means calling your manager without knowing if it’s a good time or not. But as the newest team member this is your responsibility. Good communication is a two-way street, and you therefore need to do your part. And trust me, it will always feel good to have reached out. There is a range of platforms where it’s possible to reach out to a colleague. You don’t have to make a call. Sharing some insight or just a funny gif also counts as reaching out, and it opens the door to a useful dialogue. In the current situation there are numerous young people sitting in their apartments with questions they don’t feel justify an email to their manager. For managers especially, it has been shown that frequent virtual team meetings and joint virtual coffee breaks can make a positive difference when it comes to motivation, stress and mental health.
To hack the home office, we all need to step up our communication efforts. If we take responsibility, we can make a difference for everyone, but especially for those who are the newest members of each local family of colleagues. Stay safe, Kenneth
pa|In order to be a positive factor in the fight against climate change, we are reducing our own emissions and focusing on developing technology that matters to the planet. We have set a target to make our company carbon neutral by 2025. To become carbon neutral, Data Respons needs to reduce its emissions throughout the entire value chain. And we need to start by understanding how much carbon we are responsible for emitting. To find out just how much carbon (in CO2 equivalents) Data Respons emits per year, we asked Endrava, a consultancy specialised in climate and energy, to help us estimate our greenhouse gas emissions. Our operation is international, but not very infrastructure- or resource-demanding. Hence, our framework for emissions mapping is based on calculations of emissions from energy consumption, transportation of people, transportation of goods, and IT equipment. We found that in 2018 Data Respons companies were responsible for 1385 tons of CO2 equivalents, which amounts to 1.1 tons per employee. In total we emit the same as 139 average Norwegians, or 691 cars, per year. We found that over 70% of our emissions are transportation-based, almost equally divided between transportation of people and transportation of goods. Another 20% is from energy consumption and 9% is from IT equipment. Our approach to reducing these numbers comes in several parallel steps. Firstly, all our companies are taking concrete steps to reduce their direct carbon footprint. This means switching to guaranteed renewable energy sources in our office locations. Secondly, all our locations have implemented, or are implementing, recycling systems for their office waste. Thirdly, and importantly, we are making new IT policies to increase the time we use equipment before replacing it. These are the easy steps… Transportation of goods represents 37% of our emissions, and it is difficult to achieve large reductions there, as our method of transportation impacts our customers. Still, it is an opportunity to incentivise choosing sea-based transportation rather than airfreight, and to increase environmental awareness both internally and among our customers and suppliers. Another large share, 34% of our emissions, is the transportation of people, mostly air travel. We are looking into various ways of decreasing the number of flights by finding better and more efficient ways to conduct meetings. As we have experienced this spring, with the challenges of the Covid-19 virus, a number of meetings can be quite effectively executed through digital platforms. Nevertheless, we still need to meet our people across the world, but we are now adding CO2 footprint to the mix of costs to consider when choosing our destinations. When there is a need for travel, we promote the use of UN-certified carbon credits. Internally, we also encourage our staff to use green transportation like trains, electric cars and bicycles, or to walk.
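As a quick sanity check of the figures above (the per-person and per-car footprints are derived from the numbers quoted, not separately sourced):

```python
total_t = 1385                  # tonnes CO2e, Data Respons companies, 2018
per_employee_t = 1.1            # tonnes CO2e per employee

print(f"Implied headcount: ~{total_t / per_employee_t:.0f}")   # ≈ 1259 employees
print(f"Per Norwegian:     ~{total_t / 139:.1f} t/year")       # ≈ 10 t CO2e
print(f"Per car:           ~{total_t / 691:.1f} t/year")       # ≈ 2 t CO2e
```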
As a long-standing technology provider, we can use our industry knowledge to promote the benefits of designing technology solutions in a circular manner, increasing resource efficiency in the production and life cycle of our customers' products and hence decreasing their footprint, and we use eco-design throughout our own product development process. Like most operations we will, inevitably, always produce some carbon, and becoming carbon neutral is a process that must take place over time. To achieve our goals, we are introducing carbon budgets for our subsidiary companies. Each of our companies will have a yearly carbon budget based on the emissions registered for it in 2018. The carbon budget will decrease every year, in line with our goal of becoming carbon neutral in 2025. We expect that increased environmental awareness and focus will help our company accomplish the first steps towards carbon neutrality. However, we acknowledge that after the low-hanging fruit, such as reducing air travel, has been picked, we must work with new technological solutions to continue the journey towards becoming carbon neutral. We need to get all our customers, suppliers and employees on board with the carbon-neutral ambition, which is both a cultural and a technical challenge. In addition, we need to establish the financial flexibility to choose more expensive but cleaner sources of energy and materials. And, in order to uphold our ambition, we are open to neutralising some emissions by buying UN-certified carbon credits, after all other measures have been implemented. Carbon credits are a practical and effective way to address climate change and encourage the growth of renewable energy. Our investments in carbon credits will help fund new projects that reduce greenhouse gas pollution, and they increase the cost of polluting by reducing the number of available carbon credits. We strongly believe that technology is an enabler for green growth. According to the Allied Market Research Group, the global green technology and sustainability market is expected to reach $44.61 billion by 2026. In other words, there are huge opportunities within innovative environmental and advanced technologies that enable organisations to unlock more value from their energy resources and improve efficiency. To take an active part in accelerating sustainability through technology, Data Respons has set a goal of being involved in at least 100 sustainable technology projects in 2020. Each of these projects must contribute to reaching one or more of the UN's Sustainable Development Goals. In addition to the environmental and social benefits of focusing on these types of projects, we find that the younger generations of development specialists look for these values when choosing an employer. As a company we depend on new technology to help us become carbon neutral, and hopefully our own specialists and solutions will play key roles in developing that technology, making it easier for every company to pursue value creation and sustainability at the same time.
di|Elisabeth Andenæs, Corporate Brand and Communications manager
li|A carbon dioxide equivalent, or CO2 equivalent, abbreviated as CO2-eq, is a metric measure used to compare the emissions from various greenhouse gases on the basis of their global-warming potential (GWP), by converting amounts of other gases to the equivalent amount of carbon dioxide with the same global-warming potential.
(Eurostat Statistics Explained) “Green Technology and Sustainability Market Statistics: 2026”, Allied Market Research Group, 2020.
st|BY:
h1|Becoming carbon neutral
h2|Global emissions must fall by 7.6% every year from now until 2030 to stay within the 1.5°C ceiling on temperature rises that scientists say is necessary to avoid disastrous consequences. One of Data Respons’ core values is to take responsibility, and we acknowledge that slowing down climate change is one of the greatest challenges we need to take on in our time.
h3|To zero from what? Easy at start… …the toughest reductions will come last. Accelerating sustainability through technology.
sp|Taking responsibility for our own direct emissions. Combining sustainability and business considerations. Increasing resource efficiency in our processes and solutions. Introducing carbon budgets to get to zero.
pa|MicroDoc is a technology-oriented company with more than 60 specialists in SW development, Java and system design, as well as SW solutions for IoT, mobile/network infrastructure and embedded applications. The company’s focus on complex software technology and software infrastructure has made it a respected partner for large corporations and even for other software businesses. The engineering team comprises highly skilled professionals from more than 10 different countries. Teams are composed to match customers’ requirements, with a mix of experienced senior experts augmented by younger aspiring developers. Operating from three offices in Germany (Munich, Berlin, Stuttgart), the company serves leading corporations from a variety of business domains, including automotive, self-service systems, telecommunication, utilities and financial services. MicroDoc has specialised in solving challenging software problems that require in-depth knowledge of end-to-end technology and business scenarios (including mainframe computers, networks, desktops, mobile devices and embedded systems).
st|Advanced software development, digitalisation and IoT.
MicroDoc – Managing Directors: Dr. Christian Kuka, Florian Oehlschlegel. HQ: Munich (Germany). Founded: 1991. # of employees: 68 (2017). Joined Data Respons: September 2016.
pa|EPOS CAT designs, develops and operates tailor-made software solutions to support and optimise customers’ business processes, mainly targeting the automotive industry. Modern vehicles contain increasingly complex IT systems, driving demand for software development, test and technical support to comply with strict industry safety regulations. Measurement and test systems represent a significant cost factor in vehicle development and quality assurance. Their proprietary “computer-aided testing” (CAT) software solution supports customers in managing ever shorter product development cycles in an efficient and secure way. The company’s engineers are located close to customers to secure efficient development and rapid-response support, and to evolve their industry know-how.
st|A leading consulting and service company for automotive IT and computer-aided testing (CAT).
EPOS CAT – Managing Director: Andreas Muench. HQ: Ingolstadt (Germany). Founded: 1991. # of employees: 97 (2017). Joined Data Respons: 2017.
pa|Donat IT is a leading niche provider of software solutions and specialist services within software development and architecture, system integration and test management, as well as business-critical R&D IT services.
st|Specialised software services within the mobility sector.
Donat IT – Managing Director: Ediba Hastor. HQ: Ingolstadt (Germany). Founded: 1980. # of employees: 145. Joined Data Respons: 2019.
pa|inContext is a fast-growing R&D services company that specialises in interconnect, electrification, embedded SW technology, mechanical design and project management. inContext was founded in 2006 and has, since the start, focused on building long-term relations with its clients within the electrical equipment, automotive and home appliances industries. The company provides experienced and dedicated consultants who support clients in future-defining projects. inContext has five years in a row been awarded Dagens Industri’s Gasellföretag (Gazelle company) award, as well as Veckans Affärer’s Superföretag (a Swedish business magazine’s Top Company award).
st|Interconnect, autonomous systems and embedded software.
inContext – Managing Director: Martin Lampinen. HQ: Stockholm (Sweden). Founded: 2006. # of employees: 78 (2019). Joined Data Respons: 2019.
h1|Annual General Meeting
sp|Protocol AGM 2020. Notice of Annual General Meeting and attachments. AGM archive.
pa|To emphasise the challenge, some call it “The Mother of All Tech Battles”: in our effort to digitalise, connect and automate every aspect of mobility, we need to handle steeply increasing system complexity, cyber threats, new business models and lawmaking issues, just to name a few of the many obstacles ahead. As experts in embedded and IoT solutions, and a trusted and experienced technology partner to the transport and automotive industry, at Data Respons we face these challenges on a daily basis. Based on that expertise, we feel we have a fairly clear view of the things to come in the 2020s, a decade that doubtlessly will bring enormous changes in the mobility sector.
The vehicle industry has come a long way since the introduction of cruise control, the first vehicle feature that integrated mechanical and electrical systems. That was in the 1950s. Now, cars are computers on wheels running millions of lines of code. An average new car is managed by between 70 and 100 Electronic Control Units, and it constantly monitors itself and its surroundings with hundreds of sensors. Partly, this revolution has been triggered by other technology areas, for instance telecom. There has been exponential growth in memory and processor power, while components have become cheaper, smaller and more robust. On top of that, a set of new components like radar, infrared cameras and ordinary cameras are being added to the system, further increasing complexity on all levels. Facing this complexity, vehicle architecture is evolving as well. Currently, there are two main approaches in vehicle architecture: either one large computer serving the whole vehicle, or a distributed set of computers with a network between them. Both paradigms have their pros and cons, and it remains to be seen which approach will prevail. However, as autonomous driving slowly develops, component and system complexity is increasing, and with it the amount of data to be processed. To handle that complexity it is tempting to look for inspiration in aerospace and aviation, or similar domains operating complex, mission-critical systems. That makes good sense, e.g. when it comes to data analysis, as these areas produce a lot of data to be processed, analysed and combined in the most efficient and correct ways. But there is a difference. In aerospace and aviation you operate in controlled areas, and traffic is heavily regulated. Though obviously not without risk, it happens in a fairly controlled environment. That is not the case with a self-driving vehicle navigating an urban area with its unpredictable mix of conventional cars, pedestrians, children, pet animals etc. So, is the object detected by the car’s radar a rock, a plastic bag or a child? Or is it somebody walking across the street with a bicycle, like in Tempe, Arizona, in March 2018, when a woman was killed by a self-driving Uber car? A truly autonomous vehicle is still a thing of the future, although Tesla is leading the way with self-learning algorithms. But there have been a number of accidents in the US which clearly indicate that autonomous vehicles cannot be trusted 100 per cent. They still depend on driver intervention, although some car manufacturers seem to be over-selling their partially automated vehicles, e.g. by using the term “Autopilot” to describe a driver-assist system. Following an investigation of a 2018 crash in California in which the driver of a Tesla died, Robert L. Sumwalt, chairman of the US National Transportation Safety Board, summed up the situation this way: “It’s time to stop enabling drivers in any partially automated vehicle to pretend that they have driver-less cars.” (New York Times, 26.2.2020) To be on the safe side, it is sensible to restrict fully autonomous vehicles to controlled environments like industrial sites or harbours. But all that may change quickly. Industry roadmaps show that by 2025 almost every car manufacturer will have a fully autonomous car in its product portfolio. From that point on, the number of autonomous vehicles will rise quickly. When approximately 50 per cent of all vehicles have become autonomous, it would make sense to gradually allow autonomous vehicles in non-restricted areas.
We could see autonomous driving in semi-controlled environments, for example on motorways. Regulators may decide that some parts of a motorway are only to be used by autonomous vehicles, with drivers switching back to manual when leaving the motorway and heading for urban areas. As mentioned, with improvements in vehicle autonomy the complexity of the vehicle will increase significantly. With one exception: the car’s powertrain. Compared to an electrical engine, a conventional combustion engine has more mechanical parts, and it is much more difficult to control injection times, combustion in the cylinders etc. In this regard the vehicle of the future will be simpler. However, when it comes to the powertrain, manufacturers face an altogether different challenge: what will be the fuel of the future? Although it is widely agreed that the conventional fuel combustion engine will be a parenthesis in human history, the battle about what will come next is still raging. Currently electricity seems to be gaining the upper hand, but although batteries are a much more efficient way of using energy than gasoline, they have an environmental impact, requiring rare minerals and recycling when worn out. For instance, extracting 1 ton of lithium requires 2 million litres of water. And cobalt, another rare metal required to manufacture batteries, comes primarily from thousands of small, private mines in the highly unstable Democratic Republic of Congo, often involving child labour. For these reasons, a probable future scenario is a battery pack on board the vehicle, combined with electrification of roads through induction via the road surface or other energy transmission technologies. But all that is extremely hard to predict. By the end of this decade things may have changed, and other, superior technologies may have emerged. Regardless of which powertrain technology prevails, multi-layer connectedness will be the dominant feature of any future car. The vehicle will connect to its immediate surroundings, to local infrastructure, to other vehicles, and to the cloud, all at the same time. The vehicle will monitor its surroundings through an array of different sensors, and it will receive data from surrounding infrastructure like traffic lights or an approaching emergency vehicle. Also, vehicles will communicate with each other. This short-range vehicle-to-vehicle communication could come into play when a vehicle is part of a train of vehicles: if the car in front detects an obstacle and hits the brakes, it will instantly signal to the cars behind it to brake as well. Thus a vehicle can extend its on-board sensor capacity to thousands of additional sensors in its vicinity. In addition to vehicle-to-vehicle and vehicle-to-near-infrastructure communication there will be long-range connectivity enabling other features, like user-based insurance, condition monitoring or various car-as-a-service solutions. The data produced by the vehicle is stored in large data lakes, to be utilized for instant analysis, for development of new services and much more. Manufacturers, scientists, authorities and others can dig into these data lakes, e.g. to design more efficient logistics and mobility systems. Accessing and utilizing these vast amounts of data ought to be beneficial to all involved, provided that integrity and privacy are guaranteed. Obviously, these new possibilities create many risks as well, so it is crucial to stress the importance of cyber security and data integrity.
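As a toy illustration of what message integrity means in vehicle-to-vehicle signalling, here is a minimal Python sketch of the brake-warning scenario above. It is our own simplification: real V2V security stacks (for example those built on IEEE 1609.2) use certificate-based signatures rather than the pre-shared key assumed here:

```python
import hashlib, hmac, json, time

SHARED_KEY = b"demo-key-distributed-out-of-band"   # assumption for the sketch

def make_brake_warning(vehicle_id: str, decel_ms2: float) -> bytes:
    """Build a brake-warning message with an integrity tag appended."""
    body = json.dumps({
        "vehicle": vehicle_id,
        "event": "HARD_BRAKING",
        "deceleration_ms2": decel_ms2,
        "timestamp": time.time(),      # lets receivers reject stale replays
    }).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return body + b"|" + tag.encode()

def verify(message: bytes) -> bool:
    """Brake only if the integrity tag checks out."""
    body, _, tag = message.rpartition(b"|")
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected.encode(), tag)

msg = make_brake_warning("WBA-1234", 8.2)
print(verify(msg))                        # True: act on the warning
print(verify(msg.replace(b"8.2", b"0")))  # False: tampered message is ignored
```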
In addition to vehicle-to-vehicle and vehicle-to-near-infrastructure communication there will be long-range connectivity enabling other features, like user-based insurance, condition monitoring or various car-as-a-service solutions. The data produced by the vehicle is stored in large data lakes, to be utilized for instant analysis, for development of new services and much more. Manufacturers, scientists, authorities and others can dig into these data lakes, e.g. to design more efficient logistics and mobility systems. Accessing and utilizing these vast amounts of data ought to be beneficial to all involved, provided that integrity and privacy are guaranteed. Obviously, these new possibilities create many risks as well, and in this context it is crucial to stress the importance of cyber security and data integrity. Imagine if criminals could get access to data showing that a car and its owners are out of town, leaving their house unguarded and making it an easy target for a break-in. Or imagine a trucking company trying to hurt a competitor by breaking into its system, downloading faulty roadmaps to the competitor's fleet management system, deleting freight orders etc. When it comes to cyber security and data integrity, the mobility industry can look to sectors in which these issues are mission-critical, banking and finance for instance, where transactions, access and confidentiality are guarded by state-of-the-art technology. Also, the digitalisation of mobility will give lawmakers some hard nuts to crack. Laws will need to be rewritten, nationally and internationally, and there will be tough debates on the freedom of the individual and the right to privacy versus what's best for society and for the environment. As an example, most people would probably agree that it is in everybody's best interest to allow an approaching emergency vehicle to take over control of the vehicles in front of it and force them to pull over, so that it can reach an accident as quickly as possible. But how about taking that scenario one step further: In everyday traffic, should local authorities be allowed to take control of a number of cars and reroute them to avoid congestion? Or maybe even prevent a number of vehicle owners from using their vehicles for a period of time, for the sake of the environment? Imagine walking out to your car in the morning to drive to your office, just for it to tell you "No, not today, please use public transportation instead". Digitalising and automating the mobility sector will not only pose big challenges to lawmakers. Businesses are also taking huge risks and making significant investments in technology, knowing that some of it may not make it to mass production. As is widely known, Tesla, the technology leader in autonomous electric vehicles, is burning billions of dollars. Still, in 2019 Tesla produced only 367,500 vehicles, next to nothing compared to the world's large-scale vehicle manufacturers, which churn out between 7 and 10 million units a year. With so much happening in the mobility sector, manufacturers have a lot on their plate. Simultaneously, they integrate new components into vehicle systems, they develop algorithms for autonomous driving, and they work on electrification of the powertrain. Adding to that, the challenges of electrification of the powertrain and autonomous driving are attracting significant investment from companies outside the vehicle sector. New companies focusing on either developing algorithms for autonomy or technology for electrification are emerging. And as profitability in the vehicle industry is slowly shifting from metal and mechanics to software and services, there may very well be a new Ford or a new Toyota among them. The future may see completely new business cases in the vehicle industry, shifting from a car brand as we know it to a service provided by a nondescript shell on wheels. Truly, these are exciting times in the mobility sector.
di|Arne Vollertsen for TechPeople A/S and Crister Nilsson, Consultant Manager & Automotive Business Area responsible, Sylog AB st|BY: A computer on wheels · Increasing amounts of data · A thing of the future · The powertrain is simple · Vehicle-to-X · Securing data lakes · Rewriting laws and regulations · Huge investments h1|The 2020s: the decade of software-defined mobility h2|Electrification, autonomous driving and all-embracing vehicle connectivity are fundamentally changing the way we move goods and people around, and the digitalisation of mobility has the potential to help us handle the huge challenges the world is facing regarding urbanisation, sustainability and climate change. No doubt, the 2020s will be the decade of software-defined mobility. pa|A leading technology partner – from system architecture, mechanical and HW design to software and application development and communication solutions for embedded and IoT solutions. TechPeople has a broad network of more than 60 specialists covering everything from system architecture, mechanical and embedded hardware design to software and application development. Operating out of two offices in Denmark (Herlev and Aarhus), they offer the best consultants in the market for advanced development within medtech, energy and telecom. Through long-term customer relationships they have demonstrated the capability to deploy new technology and develop innovative solutions. Example projects include an interface between a hearing-aid device and mobile applications, development of advanced communication systems for disaster zones based on mesh technology, and supporting the development of a complete IoT solution for energy management, from sensors and actuators connected to a cloud platform enabling the user to control everything from a mobile application. st|A highly specialised consultancy company with expertise in embedded and IT solutions. MANAGING DIRECTOR: Gilad Mizrahi · HQ: Copenhagen (Denmark) · FOUNDED: 2010 · # OF EMPLOYEES: 63 (2017) · JOINED DATA RESPONS: 2017 (fully owned) sp|TECHPEOPLE pa|I grew up in Lyon and went to business school in Grenoble. During my studies, I started as an intern at AKKA Detroit, where I stayed for one year. After finishing school, I worked at AKKA for two years as a member of the Merger & Acquisition team, where, among other deals, I was involved in the process of analyzing and understanding the operations of Data Respons. After that I was executive assistant & project manager for Mauro Ricci (CEO of AKKA) for seven months. In my free time, I enjoy skiing and mountain biking in the Alps. The launch was very well received in the market. We are a startup with all the positive aspects: a young General Manager, recruiting a new team, scalable and able to be quick in decision-making processes.
I now have the chance to create a new company culture, but at the same time lean on useful resources in DR and AKKA to assist on new customer projects and in building a company from scratch. These first weeks I've had two priorities: finding the best talents that will create the foundation of Data Respons France, and the first customers. I've had plenty of interviews and met different people who gave me a lot of valuable insights and perspectives. I am also in discussions with potential customers, but this is more challenging. It's no secret that in the French market you need to speak French to really connect with the customer. So, the plan is to build up a small team of two key persons: a sales expert and a technical engineer. All other resources will be drawn from other Data Respons companies. The big milestone is to be profitable as soon as possible, sometime in the next 3-6 months. The short-term goal is to recruit and onboard a team of key specialists to be able to deliver the first projects: a salesperson and a software specialist. Everything else will come later on. What characterizes the French market: which opportunities and challenges do you expect Data Respons France to meet as you enter the market? What's unique about DR France is that it's specialized in complex software development. In the French market there are only a few competitors that focus primarily on software development, so there's a huge opportunity to grow. Each time we build something, we will have the sustainability goal of becoming CO2-neutral by 2025 in mind. Sustainability is not only about the environment; it's also crucial to employ people of all ages, origins, cultures and genders. We will have a dedicated focus on recruiting a diversified group of people. I am happy to meet all of you and to get to know the DR family. Don't hesitate to reach out, share ideas or just have a chat di|Isabelle Sarah Borchsenius. Marketing, Communication and Sustainability Manager at Data Respons st|BY: h1|Get to know Guillaume Wolf – the youngest and newest General Manager in the DR family h2|The launch of Data Respons France is the start of an exciting chapter for all of us in the DR family. We are following the establishment of our brand in a new market with great excitement. Guillaume Wolf, General Manager of Data Respons France, has been a part of this journey since the first of October, giving his best effort to build up the Paris office: finding solid key employees as well as the first customers. We used the opportunity to have a chat with Guillaume on how it is going and what's keeping him occupied these days. Get to know Data Respons France! h4|Can you introduce yourself: background, work experience, background from AKKA? How did the launch of Data Respons France go so far? Were there any challenges or surprises? What are your expectations for the foreseeable future and, more short term, the next months? How will DR France contribute to reaching Data Respons' sustainability goal of becoming CO2-emission neutral by 2025? What are your ambitions in this area? pa|Kenneth Ragnvaldsen, CEO, Data Respons ASA, tel. +47 913 90 918. Rune Wahl, CFO, Data Respons ASA, tel.
+47 950 36 046 Data Respons is a full-service, independent technology company and a leading player in the IoT, industrial digitalisation and embedded solutions market. We provide R&D services and smarter solutions to OEM companies, system integrators and vertical product suppliers in a range of market segments such as Transport & Automotive, Industrial Automation, Telecom & Media, Space, Defence & Security, Medtech, Energy & Maritime, and Finance & Public Sector. Data Respons ASA is listed on the Oslo Stock Exchange (ticker: DAT) and is part of the information technology index. The company has offices in Norway, Sweden, Denmark, Germany and Taiwan. This information is subject to the disclosure requirements pursuant to section 5-12 of the Norwegian Securities Trading Act st|For further information: About Data Respons h1|Medtech contract in Norway h2|Data Respons has signed a frame agreement with an annual value of NOK 40 million with a Norwegian supplier of medical equipment. sp|The contract comprises delivery of R&D services and smart solutions embedded in the customer's advanced systems. Deliveries will start in 2019. – Companies across all industries focus on innovation and development and are leveraging intelligent sensors, advanced connectivity solutions (e.g. IoT) and more data-driven processes. This requires specialist competence that is at the core of Data Respons' business and competence profile. Implementation of new technology and new smart solutions will be a prerequisite for enabling a more efficient utilisation of resources – everything from asset utilisation, energy optimisation and recycling to knowledge and competence sharing. Data Respons will play an important role going forward, assisting companies across all industries to leverage technology opportunities. We have a target of completing at least fifty technology development projects annually so that we, together with our customers, can contribute to a more sustainable future, says Kenneth Ragnvaldsen, CEO of Data Respons ASA.
Within the heavy vehicle industry, companies are investing large sums in electrification of the power transmission chain. There are many reasons for this, but it is primarily the minimisation of fossil fuels that is driving the development. The development of self-driving vehicles is also in its infancy. The technology exists, and is used extensively in controlled environments such as mines. The technology for self-driving vehicles will drive the development of more advanced electronics in the form of new and faster sensors. Calculation speed in self-driving vehicles will need to be increased, together with increased bandwidth in the communication between subsystems. Some functions will also require the vehicle to be connected, partly to other vehicles and partly to back-end systems. The telecommunications industry will thus have become a part of the vehicle industry. If the 1990s was the telecommunications industry's decade, there is a major chance that the 2020s will be the vehicle industry's decade. Views on ownership of vehicles will change, and laws will need to be rewritten, nationally and internationally. Taking the technology out into open traffic will require a concentrated development effort in a number of different technological segments. Sylog is a consultancy company specializing in product development, engineering and IT, and has been part of the Data Respons group since 2007. Their specialties are: di|Crister Nilsson, Vehicle Industry Manager at Sylog AB li|Project management
Development
Testing and quality assurance
Configuration Management
st|BY: h1|Automotive: An industry in change h2|(Next issue teaser): Our world is in a state of constant technological development. During the post-war period there have been major military technological achievements, much of which has spread to civil society. A rarely witnessed technological leap took place within the computer industry in the 1990s: processor speed doubled every 18 months and storage units rapidly became more effective. It became possible to develop the mobile telecommunications systems, which have subsequently been further developed in new generations of technology. h3|Electrification Autonomous vehicles pa|The contracts comprise development of smart solutions improving efficiency and supporting new digital product and service offerings. The assignment includes application modernisation, provision of B2C processes into multiple channels and integration of software into a private cloud infrastructure. – The traditional banking industry is challenged by new technology (fintech), payment solutions from new players (Google, Amazon, Apple Pay), and internet banks operating at a significantly lower cost base. In order to address competition, significant investments in automation of core processes are required. In addition, a complete remake and modernisation of existing infrastructure into a modern, cloud-based, and platform-independent solution, and development of new applications and services, is needed to meet new customer demands. Our specialist expertise within embedded software and experience with digitalisation projects from other industries make us a relevant partner in this market, says Kenneth Ragnvaldsen, CEO of Data Respons ASA.
For further information: Kenneth Ragnvaldsen, CEO, Data Respons ASA, tel. +47 913 90 918. Rune Wahl, CFO, Data Respons ASA, tel. +47 950 36 046 Data Respons is a full-service, independent technology company and a leading player in the IoT, industrial digitalisation and embedded solutions market. We provide R&D services and smarter solutions to OEM companies, system integrators and vertical product suppliers in a range of market segments such as Transport & Automotive, Industrial Automation, Telecom & Media, Space, Defence & Security, Medtech, Energy & Maritime, and Finance & Public Sector. Data Respons ASA is listed on the Oslo Stock Exchange (ticker: DAT) and is part of the information technology index. The company has offices in Norway, Sweden, Denmark, Germany and Taiwan. This information is subject to the disclosure requirements pursuant to section 5-12 of the Norwegian Securities Trading Act di|Data Respons, 23.01.2019 st|BY: About Data Respons h1|Contract of 20 million NOK h2|Data Respons has signed contracts with a leading player in the German banking industry for development of business-critical systems. pa|All 17 of the UN SDGs are relevant to our business, yet we have chosen to focus on four main areas: Good Health and Well-being, Quality Education, Climate Action and environmental issues, and Reduced Inequalities. We find that we can contribute more within these four areas and that they are enablers that further strengthen the full set of UN goals. One of Data Respons' core values is taking total responsibility – and when it comes to health we mean taking responsibility for your own body and mind as well as looking out for others. Emerging technologies like AI, 5G and smart devices enable digitalisation within healthcare. For example, we have developed technology that ensures high-quality training of medical staff for a customer in the medtech industry. As a responsible business we support several efforts promoting good health. All our employees benefit from our motivational In-Shape program, facilitating activity in everyone's daily lives. Education is key to achieving many of the other sustainability goals. Quality education can break the cycle of poverty, reduce inequalities and advance gender equality. In 2016 and 2017 we contributed specialist competence to the development of No Isolation's classroom robot, allowing children on long-term sick leave to attend school from home through a telepresence robot. In 2018 we started a new partnership with the organisation The Society for Street Children in Nepal, focusing on educating children and teenage girls into nurses and midwives. We are also pleased to continue our work with the organisation On Own Feet, giving children in war-torn countries access to schools. Inequalities based on income, gender, age, disability, sexual orientation, class, ethnicity and religion are, according to the UN, still persisting across the world. We believe technology can empower people with disabilities. An example is the ReSound device creating a direct connection between a hearing aid and your smart devices – allowing people with hearing disabilities to access the sound from their smart devices directly through their hearing aids.
Aside from technological efforts, we cooperate with a range of charities promoting the education of women and opportunities for the mentally disabled, and enabling children living in poverty to go to school. In Data Respons, we have a target of being involved in more than 50 sustainable technology projects annually. This means that we seek to engage in customer development projects that have a positive impact on the environment. Our specialists have participated in various development projects within electric optimization technology. One example is the Easee charging robot that automatically manages the charging of your electric vehicle according to supply and demand on the power grid, optimizing power consumption and lowering costs, making it more affordable to drive electric. Another example promoting clean and affordable energy, and involving a larger ecosystem, is EnBW's EnergyBase. This system automatically optimizes energy consumption with its self-learning algorithm and controls the entire energy consumption in your home, enabling users to produce energy and sell it back to the grid st|UN Sustainable Development Goals Read more about our efforts promoting good health and well-being: classroom robot · On Own Feet · Easee charging robot · EnergyBase sp|UN Sustainable Development Goals The UN Sustainable Development Goals were adopted by all the world's governments at the United Nations in 2015 and provide a common and necessary roadmap. In Data Respons, we celebrate these goals and believe in making a difference from inside. Inside the technology and inside our companies. We strive to explore technology projects contributing to a more sustainable world, especially those making the world greener, stronger, smarter and more equal. Ensuring healthy lives and promoting well-being for all at all ages. Ensure inclusive and quality education for all and promote lifelong learning. To reduce inequalities within and among countries. Take urgent action to tackle climate change and its impacts pa|We truly believe that technology and the digital revolution will enable solutions that will solve the big problems and challenges of our time. In times of increasingly divided societies and fake news, retaining a clear view of the challenges we face becomes even more essential. This sets high standards for communicating clearly, transparently and honestly about our emissions and our efforts to reduce them. In order to avoid greenwashing and presenting our business as greener than it is, we need to fully understand the real impact of our products and services to address our common cause: fighting climate change. This is why we take CO2 mapping and sustainability reporting seriously and commit to using internationally approved standards (GRI). We believe that only if we manage to build trust with our audiences by keeping our promises can we actually contribute to the green shift. Data Respons signed the guide to avoid greenwashing in Norway in early 2020 and has followed its instructions both in Norway and in our subsidiaries in Sweden, Denmark, Germany, France and Taiwan. Now, as the guide launches internationally, we want to take a stand and promise that we will do our utmost not to promote anything as greener than it actually is.
In all our communication we will strive to report honestly about the efforts being made, as well as being open and transparent about our biggest challenges regarding reducing emissions. "Greenwashing is a form of misleading marketing or communication, where a product, service or company is presented as 'better' in respect to climate change, the environment or human rights issues, without proper documentation to back this claim." (source: Skift) The guide suggests ten principles to make these commitments easier to follow and to measure. We commit to being honest, accountable and transparent in our CO2-reduction efforts and everything else. We commit to reporting openly on all our emissions through a yearly ESG report that follows the GRI standard. Data Respons has also been a member of the UN Global Compact initiative since 2018, thus adopting its reporting requirements. Sustainability efforts are not limited to marketing and communication departments but are integrated throughout the company structure. We map our carbon emissions every year, and every subsidiary in the group is involved in that process. There is also monthly and yearly reporting on a number of ESG factors in the group that measures the effects of all relevant actions and ambitions. We have set concrete targets for our sustainability ambitions. In 2019 we counted more than 70 customer projects that had a positive and direct impact on the UN SDGs. Among a long list of projects, we have helped windmills produce more energy through better software. We have enabled the possibility to share both cars and car chargers. In addition, we have reduced the fuel needed in both trains and trucks, and we have developed first-aid technology that saves lives on a regular basis. We are fully transparent about our emissions in our yearly ESG report. Sustainability is a continuous process, and we believe in the importance of integrating it in every aspect of our business. We only buy carbon offsets for emissions that cannot be avoided, but we strongly believe that carbon offsets are an important tool for the world to reduce its emissions. We have used a comprehensive ESG reporting format for our yearly integrated reports and a strict use of the UN SDGs. We have a dedicated focus on not using such terms in our external and internal communication to make our efforts look better than they are. Instead, we seek to report honestly about our projects and draw the connection to the UN sustainability goals where appropriate. We always strive to take a holistic view of the impacts of our work, both negative and positive. We use the UN Sustainable Development Goals as continuous guidelines and refer to them when appropriate. Donations and sponsorships are used strategically to support the UN goals. Among others, we support organisations that educate the next generation and prevent human trafficking in the long term. di|"Taking real responsibility to fight the biggest challenge of our time is an important value at Data Respons. As a responsible business, we address some of the challenges the world is facing related to inequality, climate change, health and poor access to quality education. Making false promises on these commitments would be fatal for our trustworthiness and credibility. As we continue to grow internationally, we aim to build a valuable, strong and international brand. Effective environmental communication will report how our business in technology development impacts nature and humans." st|Kenneth Ragnvaldsen, CEO. What is greenwashing? Our take on these guidelines 1. Be honest and accountable 2. Make sure sustainability efforts are not limited to your communications and marketing departments 3.
Avoid talking about the importance of sustainability if your company has not made serious efforts on these issues 4. Do not under-communicate your company's own emissions and negative impacts. 5. Be careful about using a big share of the marketing budget on small measures that do not affect your company's footprint significantly. 6. Avoid buying a clean conscience (e.g. through climate quotas). 7. Use established labelling 8. Be careful using terms such as "better for the climate, nature, and the environment". 9. "Cherry picking" from the UN Sustainable Development Goals can lead you astray 10. Donations and sponsorships are great, but not proof that you are working on sustainability issues st|Chief Communication Officer h1|Data Respons has signed the guide against greenwashing h2|Data Respons signed the Norwegian guide against greenwashing in early 2020. Now we have signed the international guide and will integrate it in all our subsidiaries. bo|Marketing, Communication and Sustainability Manager pa|Our specialists at Copenhagen-based TechPeople helped Signify with a solution that controls the wireless communication between the lamps using the mesh network technology Zigbee, along with sharing their domain knowledge, helping Signify make smart and future-proof choices when choosing hardware. A solution based on standard components ensures future access to components, which is important when producing a long-life product. When light is connected it makes cities smarter, helping municipalities save energy, reduce operational costs and keep their citizens safer st|h1|Intelligent street lighting in Copenhagen h2|Signify (prev. Philips) is world leading within advanced lighting systems. They have developed the iconic street lamps called "The Copenhagen" into a connected and smarter system, saving energy and cost and making city life safer. pa|In a typical car bought in the last decade, there are over 70 computers connected in an internal network. There are sensors that sense if you crash, triggering safety mechanisms to save your life. There is also a network of sensors monitoring the engine in order to keep performance and emissions within the tolerated limits, or to control the charging of batteries in hybrid or electric cars. In modern ships, we find highly complex controller networks, so much so that a modern ship is more like a floating factory, managing power and controlling ballast tanks and thrusters based on a number of inputs from the bridge and from sensors, while also providing the operator on the bridge with crucial information. The thrusters can be procured from one vendor, while the dynamic positioning system can be bought from a different vendor, without sacrificing interoperability.
The key to success for these kinds of networks is that the machines speak the same language. Information a sensor gathers is typically not displayed directly to a human, but must rather be interpreted and "understood" by another machine. A climate control system, for example, needs to know in which units temperature and humidity are measured and how this information is encoded on the network. For this to work, it is important that devices adhere to standards. In automotive, cars typically use the Controller Area Network (CAN) for communication. Initially, CAN was mostly a communication standard defining how raw information should be sent on the wire, but it has later been extended with more standards that define in detail the behaviour of specific applications. The CANopen standards provide a wide range of device profiles for a myriad of applications, for everything from medical tomography to large crane installations. There is even a standard for writing a standard, if none of the existing ones fits your application. The marine industry has also extended CANopen by standardizing how highly reliable redundant networks can be built for ships, maintaining control over the ship even in events such as a fire disabling parts of the ship. A lot of good work has been put into making such standards for interoperability, but how does it relate to a smarter and more connected internet of things? Without standards, we risk ending up with an internet of incompatible things, or, as Apple alumnus Jean-Louis Gassée put it, we end up with a "basket of remotes". Today we often see each internet-of-things vendor providing their own app for controlling their devices, but they provide no way to integrate the different devices into doing new smart things in a seamless way. However, home automation standards such as Zigbee or Z-Wave make some of the same design decisions as industrial standards, and specify how different kinds of devices should operate in order to be compliant. While industrial networks have typically been designed with safety and reliability in mind, security is another issue. Features such as authentication, authorization and confidentiality are typically not subjects addressed by industrial standards. If we are going to apply the experience from industrial networked devices to the internet of things, these are issues that need to be addressed. The security expert Bruce Schneier compares the current situation to the general state of computer security in the mid-90s, when the internet first saw widespread adoption, but without software or security practices ready for this revolution. In 2010, security researchers studied the tire pressure sensors of a car. Since it is hard to make wired connections to a rotating wheel, the tire sensors were made wireless, and this was what interested the security researchers. By forging malicious data into the wireless receiver of the car, the researchers were able to take full control of the internal network of the car, and could monitor and control critical subsystems such as engine control and braking. That this breach was possible was mainly due to a design that did not take into account security attacks of this kind, but was instead based on the assumption of security by isolation of the network.
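As an illustration of how exposed a plain CAN bus is, the small C program below uses Linux SocketCAN to dump raw frames from a bus (the interface name "can0" is an assumption). Note what is absent: there is no sender authentication and no encryption, so any node with bus access can observe traffic, and just as easily inject it:

```c
/* Dump raw CAN frames with Linux SocketCAN. A minimal sketch;
 * "can0" must exist (e.g. a real adapter or a vcan test interface). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>

int main(void)
{
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    if (s < 0) { perror("socket"); return 1; }

    struct ifreq ifr;
    strncpy(ifr.ifr_name, "can0", IFNAMSIZ);
    if (ioctl(s, SIOCGIFINDEX, &ifr) < 0) { perror("ioctl"); return 1; }

    struct sockaddr_can addr = {0};
    addr.can_family  = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    if (bind(s, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("bind");
        return 1;
    }

    /* Every frame on the bus is visible to every node: no authentication. */
    struct can_frame frame;
    while (read(s, &frame, sizeof frame) == sizeof frame) {
        printf("id=0x%03X dlc=%d data[0]=0x%02X\n",
               frame.can_id & CAN_SFF_MASK, (int)frame.can_dlc,
               frame.data[0]);
    }
    close(s);
    return 0;
}
```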
A conceptually similar attack was mounted against nuclear enrichment centrifuges in Iran with the internet worm Stuxnet. Even though the enrichment plant was not connected to the internet, this worm also spread on USB thumb drives, and in the end the attack succeeded in spinning the centrifuges into destruction. A basis for making secure software for devices is to have a clear and well-defined communication protocol. Typically, proprietary solutions have had less scrutiny and discussion than openly developed standards. One interesting case is the industrial bus HART, which has been extended to a wireless standard, WirelessHART. In this standard, in addition to the more traditional reliability and safety concerns, security is also addressed, keeping unauthorized devices out of the network and keeping messages confidential and authenticated using encryption. But even a good design can have a buggy implementation. While we have many good practices for achieving high software quality, it is sadly beyond the state of the art to implement perfect software without security holes. A device vendor that ships a product must acknowledge this in order to maintain a satisfactory level of security. Security, like hygiene in a hospital, must be viewed as an ongoing process, embracing the whole lifetime of the product. In 2014, both General Motors and Tesla were ordered to do a recall because of a fire hazard when using damaged charging cables. GM had to physically bring in all the cars for repair, while Tesla performed an over-the-air software update to detect bad cables and then limit the currents to safe levels. While we are seeing a growth of new devices targeted at the consumer market, it is not clear how this myriad of devices should be connected in a meaningful way. The industry has been successful in defining standards and protocols for such use, but there is still work to be done on security before exporting these ideas to the mass market. It is also wise for the industry to learn lessons from the development in the consumer market, where innovation moves at an even faster pace di|Kristoffer Koch, Senior Development Engineer, Data Respons st|BY: h1|Industrial Connected Things What challenges are there, and how can we meet them? h2|In the industry, connecting large networks of sensors and actuators with smart logic is nothing new. While these networks are typically not internet connected, they are however networked "things". Are there lessons to be learned from the industry when we build the internet of things of the future? h3|Network of things Challenges Need for experience pa|MicroDoc (Data Respons company) is a software licensor for Java technology and has been an engineering service provider for the highly innovative EnergyBASE project of EnBW since 2015. Our engineers support the development of complex software infrastructure for management of an energy mix comprising solar power, battery storage and the fixed power grid. Homes with EnergyBase installed produce their own clean solar power and rely less on carbon-based energy sources. st|The system allows you to collect, store and intelligently distribute the self-generated energy throughout the house.
It knows whether the energy should be consumed, stored or fed into the grid. Through the forecasts for the next 24 hours, EnergyBase creates an individual plan for optimal use of the generated electricity. This is completely automatic and can be controlled centrally via an app. Simply an optimal combination. h1|IoT-based solution for innovative energy management h2|The EnergyBase system automatically optimizes energy consumption with its self-learning algorithms and controls the energy flows in your home. pa|In order to increase project success, the project approach must be customized, because it depends on the success criteria, the available resources and the complexity. The chance of success can be increased by involving the people set to carry out the project work in adapting the approach. There are many different causes of complexity, for example a short duration, a large number of people involved, an insufficient budget, uncertainty in estimates, or external dependencies such as other projects. The project approach can also be changed during a project. It always depends on the situation. A project life cycle is all the phases/periods from idea to final delivery. Characteristics of the following life cycle models will be addressed: sequential (e.g. waterfall), agile, iterative, incremental and hybrid. Traditional waterfall and V-model life cycles, with their sequential processes, have a lot of planning upfront. They also have a lot of analysis and design before the build phase. The waterfall life cycle model, shown in figure 1, is most appropriate when the requirements are well known and fixed, in projects with stable teams and low risk. There is a review at the end of each phase to verify the work and to decide if the next phase can start. Testing starts only after the finished build phase, because the phases do not overlap. The waterfall model can be problematic if used for long and complex projects, and for projects with many changing requirements, due to the sequential processes without several iterations. Figure 5 shows four life cycle categories and their characteristics related to degree of change and frequency of delivery. No life cycle is perfect for all projects. Instead, each project should find an optimal balance between the characteristics [2]. Approaches based on a waterfall life cycle take advantage of things that are known and proven. Detailed requirements and plans are created at the beginning of the project. This means that the sequential waterfall life cycle can be suitable for a small project with fixed requirements and low risk. Agile approaches have early and continuous delivery of valuable products or results, and the projects can adapt to high rates of change. This can increase customer satisfaction. Just-in-time requirement analysis means that a project starts with high-level requirements, and that the requirement specification is developed in more detail during the project. Agile project teams should look for early and frequent deliveries to obtain feedback. When teams deliver small increments, they will better understand the true requirements. Software development is normally about learning while delivering value. Hardware development and mechanical development are similar in the design parts of the project.
Therefore, an agile mindset can also be relevant for parts of hardware and mechanical development processes. Magne Jørgensen at Simula Research Laboratory has recently performed a survey of 122 recently completed Norwegian IT projects: "Requirement changes in IT projects: Threat or opportunity?" [3]. The results indicated that it is useful to postpone adding details to the requirement specification if the project is large, has an agile approach, and/or has a time-and-materials contract. For non-agile approaches in IT projects of small or medium size and with a fixed price, a detailed requirement specification can be preferred and perhaps even necessary. The results also indicated that requirement changes during the projects due to learning contributed positively. Requirement changes due to external changes, and imperfect early analysis, were negative for the successfulness of the projects. Half of the projects with a well-functioning agile process, a time-and-materials contract, limited detailed requirements at project start-up, and frequent requirement changes during the project life cycle were successful, and no projects in this group had a worse outcome than acceptable. Some examples of common teamwork success factors are clear objectives, joint responsibility (supporting one another), open and honest communication, mutual respect and trust between everyone, and flexibility (adapting to context and changes). Prioritizing, as well as rapid and transparent feedback, are common success factors for high-uncertainty projects that can imply high rates of change, complexity and risk. A project needs several skills, and a team that has all the skills necessary to complete the work is a cross-functional team. The team members themselves should determine who will perform the work prioritized for the upcoming period. Empowered teams are more accountable and productive. Further, by limiting the work in progress, the cross-functional team members can collaborate more to deliver completed work. If team members are not 100 % allocated to a project, they can experience productivity loss because of task switching. Conversely, when every team member is 100 % allocated to a project, they can continuously collaborate and make the team more effective. The size of an agile team is also of importance. The PMI Agile Practice Guide [2] and the Scrum Guide [5] recommend a development team size of between three and nine members. Based on the project approach needs, a project manager may be desired. A project manager can add significant value in many situations, for example to facilitate a chartering process and collaboration, coach, give direction, and offer help and advice. An effective project manager can also help meet objectives and expectations, help respond to risk in a timely manner, help resolve project issues, help optimize the use of resources, help manage changes and constraints, and help deliver the right products at the right time and cost di|Stig-Helge Larsen, Principal Development Engineer, Data Respons
st|BY: References:
[1] John Hermarij: Better Practices of Project Management – Based on IPMA Competences, 4th revised edition, 2017
[2] PMI: Agile Practice Guide, 2017
[3] Magne Jørgensen (Simula Research Laboratory): Requirement changes in IT projects, Computerworld Norge, week 47, 2017
[4] Manifesto for Agile Software Development, 2001
[5] The Scrum Guide, 2017
h1|How optimal is your approach? h2|A system development project involves different disciplines, and will always have a level of uncertainty. This uncertainty implies a degree of change, complexity, and risk. The chosen project approach will affect the success of the project. Why and how should a project approach be selected? This article addresses agility and various approaches for system development projects. h3|Selection of project approach · Characteristics of project life cycles · Requirement changes · Teamwork success factors · Measurement of performance and progress · Summary sp|The project approach is the way in which project deliverables will be realized [1]. A project approach consists of a set of methods, techniques or tools applied to satisfy expectations and needs. Project characteristics such as the level of uncertainty, the available resources and the project success criteria should influence the development of a successful approach in each system development project. Initially, before a project approach is selected, the relevant stakeholders are identified and their expectations are discussed. Their need for information and involvement during the project is also discussed and analyzed. Measurable project objectives and related success criteria must be identified, discussed and prioritized. All this is documented together with other initial high-level information, such as the project purpose, prioritized high-level requirements, and project exit criteria. A question that should be answered by the customer (or sponsor) concerns project constraint priorities: Is quality, time, functionality or cost the first priority? All the initial communication described above is useful input for finding the approach that best fits all priorities. The prioritized success criteria are used as the guiding principle for the approach to be developed. Factors that may impact success must be identified. Further, through the approach development, the necessary success factors are selected to satisfy the success criteria. The chosen approach should also be based on a life cycle model with characteristics that match the project characteristics. An example of project characteristics is considerable uncertainty, which implies a high rate of change, complexity, and risk of rework. For this example, an appropriate approach can be based on a life cycle model that allows the project to tackle a high amount of uncertainty via small increments of work. Prior to any detailed planning, an initial approach for a project may, among other things, consist of:
• A project life cycle model (e.g. an iterative model)
• Rules with respect to decision-making
• The way of gathering information and reporting
• The different meeting structures
• The responsibilities and authorities
An agile life cycle can be an alternative to the waterfall life cycle. Project approaches based on an agile life cycle model are commonly used. Agile life cycles are both iterative and incremental. This means both repeated activities and frequent small deliveries, as shown in figure 4. The goal of agile approaches is to deliver a continuous flow of value to customers and achieve better business outcomes. Feedback on each delivery is used when planning the next iteration.
Agile approaches follow the principles of the Agile Manifesto [4]. Figure 2 shows an iterative life cycle. An iterative life cycle can be appropriate when the complexity is high and when frequent changes are expected. Figure 3 shows an incremental life cycle. An incremental life cycle can be appropriate when the customer wants frequent smaller deliveries with a subset of the complete solution because of business needs that cannot wait. Further, frequent reviews improve the quality. If an iteration-based agile approach is used, the team collaborates to finish the most important features in each iteration (each time-box). Agile life cycles have several advantages for system development projects, but there are also some potential challenges to be aware of. Quantification of effort, time and cost is difficult at the beginning of an agile project life cycle, because the team does not have all the upfront estimation and planning as in waterfall. However, the team can provide better estimates after a few iterations (sprints), when it has established a reliable velocity (the average amount of work completed in each iteration). Another challenge is the risk of insufficient emphasis on necessary design and documentation. A further risk is having only inexperienced engineers in an agile team – they should be combined with engineers or a project manager with the experience needed to make the required decisions during the development process. An example of a specific life cycle is the use of a model that groups increments and/or iterations into several large phases, where each phase is divided into several smaller time-boxes. This enables high-level planning of one larger phase at a time, and more detailed planning for each time-box. A commonly used hybrid life cycle is a combination of waterfall and agile, for example by using some agile methods such as short iterations (e.g. 2 weeks), a backlog, frequent demonstrations, and retrospectives, while still following other aspects such as considerable upfront estimation, analysis, and progress tracking according to waterfall approaches. The use of both Scrum (including a board to visualize the flow of work) and elements of the eXtreme Programming (XP) method is a common blend of standard agile methodologies [2]. The Scrum framework provides guidance and descriptions of concepts like product owner, scrum master, product backlog, sprint planning, daily scrum, sprint demonstration/review and sprint retrospective. Further, XP inspires engineering practices like continuous integration, refactoring, automated testing and test-driven development. A pragmatic approach can be used together with waterfall, agile and hybrid approaches. A pragmatic approach will only use the practices that make sense for the individual team. The team will remove any unnecessary ritual, and focus on getting the quality and work done as quickly as possible. Agile is not what you do – agility is how you do it. Attention to quality is a premise for releasing anything rapidly if an agile approach is used. Regression testing and testing at all levels are important – from unit testing to system and acceptance testing. This applies to both agile and sequential approaches. Several types of tests may be needed, for example stress, compatibility and usability testing, as well as load and performance testing. In addition, simulations are often useful for interim tests of hardware and mechanical designs.
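As mentioned above, a team's velocity tends to stabilise after a few iterations and can then be used for forecasting. A minimal sketch in C, with invented story-point numbers:

```c
/* Minimal sketch: forecasting remaining sprints from established velocity.
 * All story-point figures are invented for illustration. */
#include <stdio.h>

/* Average of the work completed in the sprints finished so far. */
static double velocity(const int *completed, int n_sprints)
{
    int total = 0;
    for (int i = 0; i < n_sprints; i++)
        total += completed[i];
    return (double)total / n_sprints;
}

int main(void)
{
    int done[] = {21, 25, 23};   /* story points per finished sprint */
    int backlog_left = 120;      /* story points remaining in the backlog */

    double v = velocity(done, 3);
    printf("velocity: %.1f points/sprint\n", v);
    printf("forecast: %.1f sprints remaining\n", backlog_left / v);
    return 0;
}
```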
Project measurement data is essential for improved forecasting, reporting and decision making. Two commonly used and recommended methods for empirical and value-based measurement of project performance and progress are Earned Value and Burndown Charts, as shown in figure 6 and figure 7. These two methods measure finished work. The measurements are based on what the team delivers, not what the team predicts it will deliver. The Earned Value is the value of the work actually completed, accumulated at fixed time intervals and measured in either currency, work hours or story points, while the Burndown Chart shows work left to do (work hours or story points) versus time. An approach should be developed in order to optimize the project processes to achieve a successful project that satisfies expectations and needs. This means that the approach should provide the greatest chance of success. The development of a successful approach should be influenced by project characteristics, such as the level of uncertainty, the available resources and the prioritized project success criteria. The uncertainty implies a degree of change, complexity, and risk. To satisfy the success criteria, the necessary success factors are selected through the approach development. Further, the developed approach should be based on a life cycle model with characteristics that match the project characteristics. Several aspects of working together, such as communication, responsibilities and decision-making, are established through the chosen project approach. The approach is always dependent on the situation, and should be open to changes during a project. It is not something you design on your own. The development of an approach is done in cooperation with important and influential key players, and the approach must be customized for each system development project.
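To make the measurement methods above concrete, here is a small sketch computing the burndown (work left to do) together with the schedule and cost performance indices commonly derived from Earned Value; all figures are invented for illustration and could equally be story points instead of work hours:

```c
/* Minimal sketch of earned-value bookkeeping. Figures are invented. */
#include <stdio.h>

int main(void)
{
    double planned_value = 400.0;  /* hours of work scheduled by now (PV) */
    double earned_value  = 340.0;  /* hours' worth actually completed (EV) */
    double actual_cost   = 380.0;  /* hours actually spent (AC) */
    double total_scope   = 900.0;  /* hours in the whole project */

    /* Schedule and cost performance indices; < 1.0 means behind/over. */
    double spi = earned_value / planned_value;
    double cpi = earned_value / actual_cost;

    /* Burndown: work left to do, plotted against time in a chart. */
    double remaining = total_scope - earned_value;

    printf("SPI = %.2f, CPI = %.2f, remaining work = %.0f hours\n",
           spi, cpi, remaining);
    return 0;
}
```

Both indices are derived purely from finished work, which is what makes the measurement empirical: it reflects what the team has delivered, not what it predicted.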
pa|Frobese consists of software experts with strong competence in the banking and insurance business and long-term customer relationships. Frobese is based in Hanover, further increasing the Data Respons footprint in Germany, as Frobese joins four other Data Respons software engineering companies across the country. According to the European Commission, Germany leads the EU on 5G readiness but is lagging in digitalising public services, and German companies have not made progress in the level of integration of digital technologies. With a group of software niche companies, including Frobese, Data Respons is increasingly well positioned to exploit the future need for digital services. https://ec.europa.eu/digital-single-market/en/scoreboard/germany li|It's with great joy that we welcome Frobese to the Data Respons family. With this latest expansion we are adding more than 90 specialists in cutting-edge software for banking and finance to our portfolio. More importantly, we believe that this is an ideal merger of skill sets, culture and opportunities, says Kenneth Ragnvaldsen, CEO of Data Respons. During Covid-19 the Data Respons business model has proven resilient, as digitalisation continues to be the key to a competitive advantage. Frobese has also experienced a positive development through a challenging year, proving that this is a good match of business models and attractiveness, comments Ragnvaldsen.
• Established in 1998
• Office in Hannover
• Core competence: IT/strategy consulting, transformation of core financial business structures, business process management, organizational consulting, business field development, IT quality management, requirements management, procedures and models, software architectures, implementation, test management and testing
• Main vertical: Financial sector
• Two legal companies: Frobese GmbH and Frobese IT Akademie GmbH
• 95 employees and 27 freelancers
• https://www.frobese.de
st|Key facts about Frobese: h1|Data Respons welcomes Frobese to the family! h2|Data Respons welcomes Frobese GmbH as the latest addition to a growing family of niche tech companies in Europe. Frobese is a cooperative and successful team of experts specialized in software consulting for German banks and insurance companies. pa|Data Respons has extensive know-how and experience with R&D development in this sector, including military standards, environmental stress factors, security and government requirements. Our customers are leading global companies that supply turn-key systems to various sections of the defence industry and armed forces st|SPACE, DEFENCE & SECURITY · APPLICATIONS & EXPERIENCE h1|Sophisticated and secure operations sp|$248 billion – size of the cyber security market worldwide in 2023 · $558 billion – value of the global space industry market · €8.8 billion – value of the European space market h3|The ongoing digitalisation is creating new opportunities across all industries. Robust sensors, secure communication, and advanced video processing and image solutions are enabling the industry to explore new areas and improve operational accuracy without compromising human lives.
• Extreme ruggedised fire control computer (MIL-STD-810F) for armoured military vehicles
• Application software for missile systems
• Simulation, training and digital applications and solutions
• Cyber security systems and solutions
• Electronics and embedded solutions for space applications
• SW, systems and solutions for drones, robots and ROVs
• Safety-critical embedded solutions
• Rugged computers for defence and aerospace
• High-end video processing and camera interfaces for image tracking
• Control, navigation, alarm and security systems and solutions
• Systems and solutions for naval operations and the maritime environment
SELECTED CUSTOMERS pa|The number of commercially available SoC FPGA devices is continuously increasing, giving a large variety of configurations to choose between. If you encounter a project where a processor and an FPGA are needed or preferred, you should know what a SoC FPGA is and understand when such a solution could be useful. The purpose of this article is to highlight the most important questions to resolve when evaluating and selecting SoC FPGA solutions.
A SoC FPGA could be useful for replacing the following traditional configurations:

• A stand-alone processor and a stand-alone FPGA
• An ASIC including a processor
• A stand-alone processor

Compared to using a stand-alone processor and a stand-alone FPGA, a solution using a SoC FPGA is cheaper, consumes less power, and is easier to fit into a design. Two circuits are replaced by one, which means less time for designing and less space on the PCB. If external RAM is used for both the processor and the FPGA, these memory circuits can be consolidated into one RAM chip, saving space and cost and reducing complexity. Communication between processor and FPGA can also be much faster with both units on the same chip. Compared to manufacturing an ASIC, a SoC FPGA is first of all much cheaper and requires much less design time. You will also have a much more flexible design process, as the firmware may be rewritten at any time. Compared to using a stand-alone processor, a SoC FPGA will be more flexible, since hardware structures can be added throughout the whole design process when needed. It also gives the possibility of parallelising the data processing by allocating computing-intensive operations to dedicated FPGA firmware. In the rest of this article, we will go through the most important issues that must be resolved when picking an appropriate SoC FPGA solution for a given application. We will look further into performance, reliability, flexibility, cost, power and software tools.

Performance evaluation

What ultimately constrains your system performance is the communication between processor, memory, FPGA logic, and interconnects. Normally, when selecting the SoC FPGA to use, the processor speed and the FPGA logic are carefully evaluated and checked, but the same is sometimes not done properly for memory and interconnects. In terms of memory, there are different specifications that determine the memory performance – the supported memory frequency is not necessarily what gives the highest performance. Among the first things to check are:

• Does the SoC FPGA have an independent, hard memory controller for both the processor and the FPGA logic?
• Is the memory controller able to extract data at sufficiently high speed, manage priority, reorder commands and data, and schedule pending transactions?

A slow memory controller with smart data handling could perform better than a high-frequency memory controller with simplistic data handling. It is important to make sure that the memory controller can provide the required throughput for mass data transfer and has low enough latency to meet real-time requirements. When it comes to connections between major building blocks inside the SoC FPGA, like the hard CPU and the data processing FPGA blocks, it is important to check the interconnection speed and make sure it supports the required data throughput between FPGA logic and processor. In addition, to prevent blocking the high-speed transfer between the FPGA and the processor, a low-latency path should also exist for simple setup and configuration accesses to the hardware accelerators in the FPGA logic.

Reliability evaluation

In terms of reliability, two things in particular have to be carefully checked: memory protection and software bug handling. With increasing CPU and memory speeds and shrinking semiconductor manufacturing geometries, the probability of memory errors increases. It is therefore important to protect your memory, for example with Error Correction Codes (ECC).
If both the processor and the FPGA share the same memory, you must be sure that the memory is protected from overwriting of data. If there is no protection, one can spend weeks debugging software bugs which may be caused by the FPGA overwriting memory used by the CPU. A memory protection unit can fix this. One thing that is certain is that you are going to encounter bugs during your software development, and it is important to handle these in an appropriate way. Implementing a watchdog timer to reset the CPU in case of a system hang-up is a classical approach. A good architecture lets you choose whether you want to reset the CPU and reconfigure the FPGA, or only reset the CPU.

Flexibility and cost evaluation

We have already discussed the reduced cost of a SoC FPGA compared to separate FPGA and processor devices: component cost, design cost and PCB space can be saved simultaneously. Selecting a sufficiently flexible SoC FPGA can also contribute to a significant cost reduction for your application. An important issue to look into is the question of which hard IP blocks are integrated into the device. Things to look into here could be PLL circuitry, appropriate memory controllers and communication modules (for example SPI, UART, I2C, USB and CAN). Although most required units can be generated by soft IP, it is more efficient in terms of performance, cost and logic utilisation when hard IP blocks can be instantiated directly. Boot setup is another topic to explore: does the SoC FPGA's processor have the option to boot independently of the FPGA configuration, and then configure the FPGA from the CPU? And vice versa, is the FPGA able to boot first, and then boot the CPU through FPGA logic? Even though both the processor and the FPGA are in the same device, it is important that they can operate like two separate chips.

Power evaluation

With increasing clock speeds and higher performance, power consumption has become one of the biggest challenges, if not the top design criterion, for many new designs. By replacing a stand-alone CPU and FPGA solution with a SoC FPGA, one can reduce the power consumption down to 50% of that of the original two-chip system. The SoC FPGA can also save a considerable amount of power if it is able to put the FPGA in a low-power standby mode while keeping the CPU alive and running. We have already mentioned that a memory controller supporting high-frequency RAM is not necessarily better than a memory controller that runs at lower frequencies: it all depends on the way the memory is handled. A slow DRAM controller also has the advantage of lower power consumption.

Evaluating the software tools

When looking at software tools, one of the most important things to explore is debugging capabilities. In a development project, one can assume that 60–70% of the project time is spent on debugging. Compared to CPU debugging on a fixed hardware platform, one has to be aware that the SoC FPGA firmware may be changed at any time throughout the design process. It is therefore important that the software debugging tool is able to adapt to changes in the FPGA logic. Another issue is cross-triggering. When a breakpoint is set in software, you would want the FPGA to freeze at the same time, so that you can perform a proper inspection. Similarly, when setting a logic analyser trigger in the FPGA, you would like to freeze and inspect the CPU software. You would also like to have a debugging tool that not only helps you find the mistakes in the code, but also tells you something about the performance of your code.
For example, you would like to know if it can be optimised, and if there are functions or code that are not being used or are no longer running. For ARM processors with two or more cores, it is desirable to have multicore debugging. Multicore debugging enables you to control and monitor both cores simultaneously. For example, you may put a breakpoint on one of the cores while the other is still running, or stop both cores at a given breakpoint.

Conclusion

When carefully selected, SoC FPGA circuits can perform better than their traditional counterparts, and they have therefore become a highly relevant and competitive alternative. Throughout this article we have highlighted important features to evaluate when selecting an appropriate SoC FPGA for a given application. We have looked into aspects like performance, reliability, flexibility, cost and software tools, and we have explored important issues to resolve before arriving at a particular circuit solution. These are all general guidelines, and you will probably have to do some exploration work yourself to arrive at a SoC FPGA solution which is suited for your particular application.

Sources: https://www.altera.com/products/soc-overview/architecture-matters.html#squaresbox-4

BY: Inge Nikolay Torsvik, Development Engineer, Data Respons

Contract in Sweden of SEK 10 million

Data Respons has signed a contract of SEK 10 million with a customer within Telecom & Media. (05/04/2018) The contract includes software development supporting the customer's next generation platform-independent payment solutions. – The digitalisation trend changes the normal way of doing business. New digital payment solutions must be developed in order to facilitate new business models, and I am glad that a specialist team from Data Respons has been selected to support this important work, says Kenneth Ragnvaldsen, CEO of Data Respons ASA.

Code quality assurance with PMD – an extensible static code analyser for Java and other languages

Developing new software is great. But writing software that is maintainable and sustainable is not so easy. Luckily, there are tools available that can help you achieve better code quality. One of these tools is PMD.

What is PMD

PMD is a static source code analyser. It scans your source code and searches for patterns that indicate problematic or flawed code. Sometimes it is just an overly complex solution which might increase maintenance costs in the long term, and sometimes it is an indication of a real bug.
In that sense, PMD can be seen as another pair of eyes that reviews your code. For example, PMD can be used to find usages of the printStackTrace() method, which is often generated by IDEs when surrounding a statement with a try-catch block. Just printing the stacktrace might result in swallowing the original exception, since the output might end up anywhere. Usually such output should be logged with the appropriate logging framework. PMD provides the rule AvoidPrintStackTrace, which detects such cases. See figure 1 and the example below.

The abbreviation "PMD" does not officially stand for anything; it is actually a backronym. "Programming Mistake Detector" and "Project Mess Detector" are the most logical meanings. However, the tool is usually known and referred to simply as "PMD", sometimes with the tagline "Don't shoot the messenger". See figure 2 for the official logo.

The patterns that PMD is searching for are defined by rules. PMD ships with more than 250 built-in rules that can be used immediately. When a rule detects a problematic piece of code, a rule violation is reported. Furthermore, own rules can be developed in order to adapt PMD to specific project requirements. With so many possible rules, it is clear that one cannot simply enable all of them. Some rules even contradict each other, and some rules just have different coding conventions in mind that might not be suitable for the concrete project at hand.

In the field of code analysers and so-called linters, there are other products available. For Java projects, Checkstyle is often used in order to enforce a common (project- or company-wide) code style. Having a common code style helps a lot if multiple developers are working together on the same project, since each part of the project can then be read and skimmed as easily as any other part – regardless of the author. Checkstyle concentrates on the source code directly, including whitespace checks like correct indentation, and also on documentation via JavaDoc comments. PMD doesn't support whitespace checks, but it has basic support for comments, like enforcing the existence of JavaDoc comments for classes or fields.

Other tools, like FindBugs and its successor SpotBugs, analyse the compiled bytecode of Java projects instead of the source code. They therefore have access to the compiler-optimised code and might see slightly different code. Moreover, SpotBugs can rely on the structure of a class file and does not need to deal with syntax errors. SpotBugs can only be used after the project has been compiled, while Checkstyle could run before. PMD can be seen as sitting in between these two tools: while the starting point for PMD is also the source code, PMD takes advantage of the compiled classes. This feature in PMD is called "type resolution", and it helps PMD understand the analysed source code better in order to avoid false alarms. E.g., if PMD knows the return type of a method call, rules can be written that only apply to a specific type. Otherwise, the rule would need to "guess" and assume the type by looking at the type name only and doing a simple string comparison. If the project has its own class with the same name, then we might mix up the classes. A concrete example can be seen in unit tests: PMD provides several rules for JUnit. But if the project uses a different test framework with the same class names (but obviously different packages), then these rules would find issues that may be irrelevant for the other test framework.
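Figure 1 is not reproduced here, but the pattern the AvoidPrintStackTrace rule flags – and the usual fix – can be sketched as follows. This is an illustrative example only; the class and method names are made up, and only the slf4j API is assumed:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class ConfigLoader {
        private static final Logger LOG = LoggerFactory.getLogger(ConfigLoader.class);

        void load(String path) {
            try {
                read(path);
            } catch (java.io.IOException e) {
                e.printStackTrace(); // flagged: the stacktrace may end up anywhere
            }
        }

        void loadBetter(String path) {
            try {
                read(path);
            } catch (java.io.IOException e) {
                LOG.error("Could not load config from {}", path, e); // exception is kept in the log
            }
        }

        private void read(String path) throws java.io.IOException {
            // placeholder for the actual work
        }
    }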
There are other big players in code quality tools on the market, like SonarQube, that support a more integrated solution and also monitor quality improvements or regressions over time. When PMD is integrated into the build pipeline, it can act as a quality gate. For example, if rule violations are detected, the build can be failed or the commit can be rejected. This can be used to enforce a specific quality goal. The build pipeline could also be configured to only make sure that no new rule violations are introduced, so that the code quality doesn't degrade and hopefully improves over time. There is one other component in PMD that is often overlooked: CPD – the Copy-Paste Detector. This is a separate component that searches for code duplications, in order to follow the DRY principle (Don't Repeat Yourself).

Overview / how does it work?

PMD analyses the source code by first parsing it. The parsing process consists of two steps:

• lexing, which produces a stream of tokens
• parsing, which produces an abstract syntax tree (AST)

This tree is an equivalent representation of the source code and has the root node "Compilation Unit". In Java, you can define multiple types in one source file (as long as only one is public), and classes can be nested. Classes themselves can have methods, which in turn have zero or more statements. Figures 3 and 4 show a simple Java class and the corresponding AST.

If the source code can be parsed into an AST, the syntax is correct. Nowadays, it is recommended to run PMD after the project has been compiled, in order to take advantage of type resolution. This means that PMD can concentrate on valid syntax: if the parsing fails, the analysis of this source file is simply skipped. Technically, an own JavaCC grammar is used to implement the parser for the Java language. Therefore, failing to parse a specific source file might actually indicate a bug in PMD's own Java grammar and does not necessarily mean that the source code is not valid.

After that, the AST is enriched by a couple of visitors. First, the qualified names of the types that are defined in the source code are determined. This is later helpful when referencing the class (and its nested classes and lambdas) itself. Second, the symbol facade visits the AST. It searches for the fields, methods and local variables and looks up their usages within the scope of this source file. The information collected in this step is made available to the rules, e.g. they can easily figure out if a (private) field or method is used or not. The found variables are organised in different, nested scopes. The third visitor is the "Data Flow" facade. Its goal is to follow variable definitions, assignments and reassignments and their accesses throughout the program flow. It makes it possible to detect anomalies such as assigning a new value to a variable after it has been accessed. It is currently limited to a single method. The last visitor is the "Type Resolution" facade. It traverses the AST and resolves the concrete Java types of variable declarations, method parameters and classes whenever a referenced type is used. It uses the compile-time classpath (also known as the auxiliary classpath) of the project that is being analysed.

Now, after the AST has been created and filled with additional information, the rules are executed. While all rules for one file are executed one after another, the analysis of multiple files (and ASTs) is executed multi-threaded. Each rule has the possibility of reporting rule violations, which are collected in reports.
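As an aside, here is a minimal sketch of what such a rule can look like when written in Java against the PMD 6 rule API. The rule and its parameter limit are invented for illustration, and the exact class and method names should be checked against the PMD version in use:

    import net.sourceforge.pmd.lang.java.ast.ASTFormalParameters;
    import net.sourceforge.pmd.lang.java.ast.ASTMethodDeclaration;
    import net.sourceforge.pmd.lang.java.rule.AbstractJavaRule;

    // Reports methods that declare more than five parameters.
    public class TooManyParametersRule extends AbstractJavaRule {

        @Override
        public Object visit(ASTMethodDeclaration node, Object data) {
            ASTFormalParameters params = node.getFirstDescendantOfType(ASTFormalParameters.class);
            if (params != null && params.jjtGetNumChildren() > 5) {
                // Attach a violation to the method node; it ends up in the report.
                addViolation(data, node);
            }
            return super.visit(node, data); // continue into nested classes and methods
        }
    }

Such a rule would then be referenced from a custom ruleset like any built-in rule.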
A violation contains information about the rule, the location (like line and column in the source file) and a message. In the end, the reports are transformed into the desired output format, such as XML or HTML.

When utilising PMD for a project, a few different approaches are possible. For greenfield projects, it's a no-brainer: PMD is active with a basic set of rules from the very beginning, so every piece of code that is added will be checked by PMD. For projects with an existing code base, the situation is most likely different. It can be overwhelming if a whole bunch of rules is activated at once: you might be drowning in violations, and it is not clear which ones to fix first. For this situation, an incremental approach is recommended: prioritising and enabling one rule at a time. Alternatively, all the selected rules can be enabled at once and the current number of violations monitored. The goal is then to reduce the violations with every commit and not introduce new ones. This, however, requires support from the build environment and is not possible with PMD alone, but it can be implemented using a quality gate in SonarQube.

PMD should be integrated into the development process as early as possible: the earlier PMD is used, the fewer issues need to be fixed later on. Therefore there are also IDE plugins that execute PMD while developing code. For Eclipse alone, there are today three different plugin implementations. For other IDEs and editors, there are plugins, too. For the full list, see the PMD documentation. Especially if your project is using Apache Maven as the build tool and you are using Eclipse, you should have a look at the plugins which transform the configuration from your Maven project files and make it available to the PMD, Checkstyle and FindBugs plugins in Eclipse. This means you can configure your code quality tooling within your build tool, and it automatically works in Eclipse.

To compile, build and package software projects, build tools are usually used, such as Apache Maven, Gradle or Ant. For Ant, PMD provides its own task that can be used. For the other build tools, plugins exist that can execute PMD. And most importantly: these plugins can fail the build, acting as a simple gatekeeper. The Maven PMD Plugin can create a report for the project site and also contains a check goal to fail the build if PMD rules are violated. It also supports CPD, the copy-paste detector.

All the previous tools are good if you are building the project locally. But if a whole team is working on the project together, there is usually a central continuous integration server. Basically, such CI servers could just execute the build tool with its configuration for PMD, but they often provide a little bit more support for code quality tools like PMD: since they regularly build the project and can keep a history, they make it possible to compare the reports generated by PMD from build to build. This lets you see the development of the code quality over time, such as newly introduced violations or violations that have been resolved. For Jenkins, for example, a plugin is available which produces a simple graph of violations. Nowadays, such CI servers are available as a service, too. Especially for open source projects they are often free to use, and PMD itself makes use of such services. GitHub as a code hosting platform provides integrations with various third-party services that can be enabled. Some of these services already use PMD as part of their offering, and they can also be integrated for verifying pull requests to get early feedback.
Since these services also keep a history, you can see the results over time.

PMD provides many different built-in rules. Since PMD 6, these rules are organised into 8 categories: Best Practices, Code Style, Design, Documentation, Error Prone, Multithreading, Performance, and Security. The recommended approach is to create an own ruleset which references the rules that should be used for the specific project. This ruleset should be part of the project, so that it can easily be shared between developers and build tools. For Maven projects, often an extra module with the name "build-tools" is created, which can be used as a dependency; this is described in the corresponding plugin documentation. You might also find yourself in a situation where you need a very specific rule which is not available in PMD itself. Since it is very specific to your project, it might not even be useful outside of your project. Therefore you can define own rules, and the code for these custom rules naturally goes into the "build-tools" module as well. The ruleset can also contain project-wide file exclusion patterns, e.g. if you don't want to analyse generated code. While referencing the existing rules in your ruleset, you can configure them exactly to your needs. Many rules can easily be customised via properties. The rules also define the message that appears in the report if a violation is detected; this message can also be overridden and customised. A typical customisation is the priority. You can give each rule a specific priority, and during the build you can decide to fail the build because of an important rule violation but ignore other rules. You can also add own rules. See Figure 6 for an example of a custom ruleset.

Features

It is now time to look at a few selected features that PMD provides. The first feature is the support for XPath-based rules. Since the AST is a tree structure, it can be dealt with like an XML document. The document can then be queried using XPath expressions to find nodes within the AST that match certain criteria. This provides an alternative API for developing rules, if you don't want to implement a rule using the visitor pattern to traverse the AST. It is a very convenient way to create ad-hoc rules. There is even a graphical rule designer to make it easier to develop XPath rules. The designer shows the parsed AST and executes a given XPath query, so you can see the matched nodes directly. In the end, the developed XPath expression can be exported as a custom PMD rule in XML format that you can add to your own ruleset. Since the rule designer displays the AST, it is also a valuable tool for developing rules in Java using the visitor pattern. See Figure 7 for a screenshot of the designer. This way of providing access to the AST and reusing XPath to write custom rules is a unique feature of PMD that does not exist in other static code analysers.

Another feature of PMD is the so-called type resolution. As explained above, type resolution happens as an extra step after parsing the source code. The goal is that the AST is enriched with concrete type information whenever possible. Consider the source code shown below. Via type resolution, the field declaration for LOG is assigned the type Logger, which (through the import) is identified as org.slf4j.Logger. If the library "slf4j-api" is on the auxiliary classpath, then PMD can attach a concrete instance of Class to that node in the AST, and the rule can access it.
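The listing referred to above was lost when this page was extracted. A minimal reconstruction, consistent with the surrounding description (an slf4j logger and a concatenated log message), might look like this:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class Example {
        private static final Logger LOG = LoggerFactory.getLogger(Example.class);

        void process(String arg) {
            // Concatenation builds the message even when the log level is disabled;
            // PMD can suggest the slf4j placeholder syntax instead.
            LOG.info("Processing message: " + arg);  // flagged
            LOG.info("Processing message: {}", arg); // preferred
        }
    }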
The rule can now first verify that this field really is a logger, instead of simply relying on naming conventions for the field name or the simple class name. This helps greatly in reducing false positives in rule violation detection. In the example code snippet, PMD is correct to suggest using the slf4j placeholder syntax ("… message: {}", arg), but PMD would be wrong if the logger were of a different type. Since the rule has access to the concrete class instance, it can even use reflection to gather more information as needed. This type resolution does not only work for third-party libraries; it works in the same way within the project that is being analysed by PMD. That is why it is necessary that the project is compiled before PMD is executed: references to other classes within the same project are resolved in exactly the same way, and the concrete class instances are made available. A couple of rules make use of type resolution, and more rules will in the future, since type resolution is enabled by default for new Java rules. For example, the rule "LooseCoupling" finds usages of concrete collection implementations which should be replaced by the collection interface (e.g. use List<> instead of ArrayList<>). The fairly new rule "MissingOverride" actually uses type resolution and reflection to figure out which methods override methods from the superclass and are missing an @Override annotation. Type resolution has been available for a long time now in PMD. However, it is still under development; there are currently limitations in determining the types of method parameters, especially when overloading is in use and generics come into play.

The next feature is quite new: metrics. It was added in 2017 during a Google Summer of Code project and provides clean access to metrics of the analysed source code, e.g. access to foreign data (ATFD) or weighted method count (WMC). More metrics are already available, and the whole framework is usable by other languages, too. The metrics can be accessed by Java rules as well as by XPath rules. In the easiest case, these metrics can be used to detect overly complex or big classes, as in the rule "CyclomaticComplexity". Multiple metrics can be combined to implement various code smell detectors such as "GodClass". The next step in this area is to support multi-file analysis. Currently, PMD looks at only one file at a time, but for metrics it would be interesting to relate certain numbers of one class to, e.g., the total number of classes within the project. There are also benefits for the symbol table if it has a view of the whole project: this will then make full type resolution possible. Each rule then has access to all information, which makes the rules more robust against false positives and also makes it possible to find otherwise ignored special cases. Implementing this involves sharing data between the different file analysers – possibly involving an additional processing stage. The challenge is, of course, to provide this functionality without negatively affecting the performance of the analysis.

Beyond Java

PMD started as a static code analyser for the Java programming language only. This was the status for PMD versions up to and including 4.3 (except for a little support for JSP). With PMD 5, a big refactoring took place in order to support multiple languages, and with the initial release of PMD 5, three new languages were included: JSP, JavaScript (aka ecmascript) and XML.
Later on, support for PLSQL and a templating language was added, while keeping the Java support up to date. The last big addition was support for Salesforce.com Apex. Now, PMD supports in total 10 different languages with rules. Most rules are for Java, of course. Adding a new language takes quite some effort, but it is described in a step-by-step guide. It involves integrating the language-specific parser, mapping the language AST to the generic PMD interface types and, last but not least, writing new rules. Most of the PMD framework can be reused, so you immediately benefit from the possibility to write XPath-based rules for your language. The Copy-Paste Detector (CPD), on the other hand, supports many more languages. This is because you only need to provide a language-specific tokeniser, which is much simpler than a full language grammar with productions. PMD even provides an "AnyLanguage" mode for CPD, which basically tokenises the source code at whitespace. Language-specific support is needed to improve the results of CPD, e.g. correctly identifying keywords and statement separators. With more effort, there is also the possibility to ignore identifier names during copy-paste detection. This makes it possible to find duplicated code which only differs in variable names but is otherwise structurally the same. This feature, however, is only available for Java at the moment.

The Project

The following is a summary of the history of PMD that Tom Copeland wrote in the book "PMD Applied". It covers the years 2002 till 2005. The PMD project was started in summer 2002. The original founders are David Dixon-Peugh, David Craine and Tom Copeland. The goal was to replace a commercial code checker which these three were using in a government project in the US. They decided to write their own code checker and got approval to open source it. Now PMD was living on SourceForge. In November 2002, PMD version 1.0 was released, with already 39 rules and a copy/paste detector. In March 2003, thanks to Dan Sheppard, XPath rules were introduced with PMD 1.04. Since PMD 1.3 (October 2003), the BSD license is used, which helped adoption a lot; since then it has been integrated into many products. The copy/paste detector has been rewritten a couple of times and improved in performance. With every release of PMD, new rules or report formats have been added and existing rules fixed. With PMD 2.0 (October 2004) the data flow analysis component was added. With PMD 3.0 (March 2005) support for Java 1.5 was added. Java 1.6 support was added with PMD 3.9 (December 2006), Java 1.7 with PMD 4.3 (November 2011), Java 8 with PMD 5.1.0 (February 2014), Java 9 with PMD 6.0.0 (December 2017), Java 10 with PMD 6.4.0 (May 2018), Java 11 with PMD 6.6.0 (July 2018), and Java 12 with PMD 6.13.0 (March 2019).

A big step happened between PMD 4 and 5: a major refactoring took place in order to properly support rules for multiple languages. This introduced many breaking API changes and was released in 2012. Also since PMD 5, Apache Maven is used as the primary build tool instead of Ant. Support for PLSQL was added in February 2014 with PMD 5.1.0. With PMD 5.2.0 (October 2014) the code was completely modularised into a core module and several language modules; this made it easier to add new languages. With PMD 5.5.0 (June 2016) Salesforce.com Apex was added. With PMD 6.0.0 another small but important refactoring took place.
Unfortunately, it had a bigger impact on end users: all the rules have been categorised so that they are easier to find, and they have been moved into different rulesets. However, the old rulesets are kept for backwards compatibility, so that existing custom rulesets still continue to work.

Over the last years, the project gradually moved more and more infrastructure from SourceForge towards GitHub. The complete Subversion repository has been converted to git; it contains the full history back to the year 2002. While at the beginning every sub-project was in the same repository, there are now several separate repositories, e.g. for the Eclipse plugin and other extensions. The move to GitHub was a step forward in terms of presence and attracting new contributors. The GitHub web interface is more user-friendly, easier to use and feels faster than SourceForge. GitHub especially encourages contributions through the concept of pull requests. GitHub is now the primary location for the source code and the issue tracker. On SourceForge, the mailing list is still running, along with a webspace and the archive of old releases. PMD uses other hosted services as well, e.g. a build server. It builds every push and deploys the snapshots via the repository hosting service by Sonatype. For releases, this build server is even able to deploy the final artifacts directly to Maven Central. Also, every pull request is built automatically. Further services are used for test coverage and for hosting the Eclipse plugin update site.

In 2017, PMD participated for the first time in Google Summer of Code. This is a student stipend program offered by Google: students all around the world have the opportunity to work during the semester break on various open source projects. Open source organisations provide projects and mentors, and the students apply for a project with a proposal. In 2017 two students worked on type resolution and metrics. In 2018 PMD is participating again. As of today, the project has 3 active maintainers, about 100 different contributors and 500 merged pull requests. According to cloc, it contains about 100k lines of Java code, surprisingly 88k lines of XML (which probably are the test cases), and many other file types.

The future

What's left to do for PMD? Aside from keeping the support for Java and other languages up to date, fixing bugs, adding new rules and adjusting existing ones, there are a few topics that sound promising. In order to lower the barrier to using PMD, specialised rulesets might be useful. There could be a "Getting Started" ruleset that has just enough generic rules that are useful for any project. This might be the default ruleset and could be a template for creating an own, tailored ruleset for the project. There could also be use-case-based rulesets that group the rules not by category but by another topic, e.g. unit testing, logging, migration of library usages, or Android-specific patterns. Another interesting feature is autofixes. Since PMD knows exactly where a violation is located in the source code, for some rules it is trivial to provide a fix. The goal is that PMD directly provides the fixed source code, which can be confirmed in an IDE plugin and applied automatically. Then, besides type resolution, which is still not completely finished, there is also the data flow analysis (DFA) part. PMD has a good start on DFA, but it is still very limited. A related feature is control flow analysis. With that available, rules could be written which can detect unused code.
Or rules that verify that a specific guarding method must be called before another method. Having the call stack available would make this possible to verify. This requires, similar to the mentioned multi-file analysis, an overview of the complete project that is being analysed. And last, but not least, a possible future feature could be cross-language support. Since PMD already supports multiple languages, this would put multi-language support onto the next level: some languages allow embedding other languages, e.g. JavaScript inside HTML, or PHP+HTML+JavaScript. Or there is Salesforce.com VisualForce with Lightning. When and if these features are implemented is unknown. The project is driven by volunteers and contributors, and all this depends on the available time. New contributors are always welcome to work together and make PMD even better.

BY: Andreas Dangel, Software Engineer, MicroDoc GmbH

A 3D-printed carbon fiber robotic arm

It all began with LEGO. Development engineer Radu Florin Burcea always wanted to build things. Especially robots. And so, he built one for his master thesis: a robotic arm for automated testing of electronics.

We're welcoming Radu Florin Burcea, new development engineer at Data Respons R&D Services. His master thesis project is rather spectacular, and although he graduated a year ago it is well worth presenting. Not least because it illustrates the challenges of embedded technology: making hardware and software work seamlessly together, taking into consideration the constraints of the physical world, and merging materials, mechanics and algorithms.

Robotics engineer from UiO

Radu graduated as a robotics engineer from the University of Oslo in 2020. All the way through his studies he chose to focus on robots, in theory as well as in practice. However, the practical side of robotics turned out to be difficult. Because even when you're studying Robotics and Intelligent Systems at university, that doesn't necessarily mean you'll get to work with real robots and get hands-on experience building them. In fact, initially Radu was a bit disappointed by the lack of opportunities to actually build robots as part of his education. Most of the robotics courses at university were entirely theoretical. Still, he wouldn't give up on his plan to learn as much as possible about constructing robots – and not only the software and programming part of that effort, but getting to know the strengths and limitations of actual materials and of components like servomotors, grippers etc.

3D revelation

He even experienced a kind of revelation when he decided to take a university course in 3D modelling and rapid prototyping. – I liked it so much I worked 10 hours a day. I realized I could build what I wanted. I just had to design it and upload it to the 3D printer. At university they had this very expensive Markforged 3D printer that could print stuff in carbon fibre composite material. You have continuous lines of carbon fibre in the print, and with that material you can print components that are much better than aluminium.
Unlike aluminium, when it bends under stress it just bends back into shape again.

Master thesis

Radu decided to focus his master thesis on customizing an off-the-shelf robotic arm to make it stronger and more precise. – I was inspired by a company that builds robots for NASA. They are using this composite instead of aluminium and they've redesigned their robots, for instance reducing the number of parts by 91%. – I wanted to try to push the boundaries of low-cost robotic arms, and make them cheaper, lighter, and stronger, Radu says. He found the perfect use case at the electronics company he was working for as a student assistant. – They needed to test a chip to see if the software downloaded on it was working correctly. It was dull and time-consuming work. You had to take the chip, put it in a test unit, close the lid, wait for a few minutes for the test to be completed, open the test unit, remove the chip, and start again. A few seconds of work and a lot of waiting. I wanted to have a robot do it instead of a human.

Parts from CrustCrawler

As a starting point he decided to use off-the-shelf robotic arm parts from the manufacturer CrustCrawler. – In my opinion it was not very good quality. It was a bit shaky, not especially precise, and made of cheap aluminium that would bend under stress. Radu chose to redesign the links of the robot and 3D print them using carbon fibre. – I must admit that my instructor wanted me to focus on programming the robot. He wanted me to use a special algorithm called Inverse Dynamics Control, which calculates the exact amount of power needed to perform a specific motion, based on inertia, acceleration, speed and gravity. That was OK, and I did that. But I also wanted to exploit the potential of the Markforged printer and this material. And so I did, and I managed to redesign parts of the robotic arm to make it stronger and more precise than the original. At the same time, I was able to reduce its weight by 50 per cent and its price by 60 per cent.

Hardware based engineering

Radu's interest in hardware has now led him to work as a development engineer for Data Respons. – I chose Data Respons because it allows me to focus on hardware-based engineering and embedded systems. Here we work directly on the hardware, and that is what I want to learn about. I want to learn as much as possible about low-level interaction with the hardware. – When I did my thesis, I realized that I couldn't control the arm the way I wanted. The software was high level, so I couldn't program the robot the way I wanted, due to the lack of tweaking flexibility in the motion planning framework used. I realized I had to learn to write my own software from scratch to make it work properly. So, I needed to learn how to do that.

The dream is alive

Learning the software is part of Radu Florin Burcea's great plan for the future. Sometime in the future he wants to build his own robot. – My biggest dream is to have my own robot company. I'm not talking about expensive industrial robots, but low-cost robots that would make life easier for normal people, in their homes, to automate cleaning or cooking for instance. Currently you can buy small hobby robots online. I want to take that approach a step further and design cheap but strong robots that are good at performing advanced tasks. To keep the price down you could 3D print some of their parts yourself, maybe in carbon fibre composite. But the most important part of that plan is learning to create my own software so I can easily make changes to improve the robot and have better control over what it does.
To program such a robot properly requires a lot of knowledge. However, Radu's first assignment for Data Respons is developing world-class video conferencing systems at Cisco.

Data logging & autonomous vehicles

For the vehicle industry, the expression "self-driving cars" has become a slogan which engages people who have not previously had the slightest interest in traditional automotive technologies. Everyone (almost) can relate to sitting in a car that drives you from one place to another without your involvement; some people imagine it with alarm, others with gratifying enthusiasm.

The answer to the question

What is a self-driving car? Among the general public, there is no clear conception of what a self-driving car is. Within the vehicle industry there are six levels that define how self-driving a vehicle is:

• Level 0, no automation: The driver is in control of the vehicle's forward motion.
• Level 1, driver assistance: The driver is in complete control of the vehicle's forward motion, but can utilise certain assistive functions such as ABS and Cruise Control.
• Level 2, partial automation: The driver can hand over control of the vehicle to the car's system in well-selected scenarios, parking assistance for example. The driver is still responsible for taking over control in critical situations.
• Level 3, conditional automation: The driver can allow the vehicle's system to take over all safety-critical elements, but the driver's attention is still necessary.
• Level 4, high automation: The system can determine itself when it is safe to take over control of the vehicle and then do so. The system is not able to handle all dynamic situations that can arise; it then hands over control to the driver.
• Level 5, full automation: Requires no interaction with the driver in any situation.

Roadmap

Today the majority of vehicle manufacturers have levels 1, 2 and 3 technology on the market. But the higher levels are not far away: Tesla, for example, has already launched self-driving vehicles at level 4. Most other vehicle manufacturers are aiming to offer level 4 vehicles between 2020 and 2025 and to be able to offer vehicles at level 5 from 2025. The sensor systems that are needed to achieve self-driving cars are usually divided into three main groups: camera, radar and lidar based systems. Both camera and radar systems are currently used on cars for levels 1 and 2. Sub-components in these systems are also sufficiently advanced to be used for the higher levels. What is elegant about this is that they can be utilised to collect data to be analysed for the next level's autonomous functions.

Components

The vehicle's current components, which are usually connected in one or a number of CAN networks, will be supplemented by a completely new layer of components and networks which will replace the driver and the driver's choice of actions in the different situations that can arise when driving. The number of sensors and the processing power required will increase markedly with each stage of automation. The traditional network with relatively simple computers is designed to manage the vehicle's transducers and sensors; all interaction with the surroundings requires a driver. Replacing the driver in certain or all situations will require a sharp increase in new components and computing power. A camera and radar system generates a relatively large amount of data which has to be analysed in real time together with data from the vehicle's traditional systems.
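To get a feel for the data volumes involved, here is a back-of-envelope sketch; all figures are illustrative assumptions, not measured values:

    public class LogRateEstimate {
        public static void main(String[] args) {
            // Assumed figures for illustration only.
            long canBitsPerSec = 500_000;                   // classical CAN bus at 500 kbit/s, fully loaded
            long cameraBitsPerSec = 1920L * 1080 * 24 * 30; // one raw 1080p camera, 24-bit colour, 30 fps

            double canGbPerHour = canBitsPerSec / 8.0 * 3600 / 1e9;
            double cameraGbPerHour = cameraBitsPerSec / 8.0 * 3600 / 1e9;

            System.out.printf("CAN log:    %.1f GB/hour%n", canGbPerHour);    // ~0.2 GB/hour
            System.out.printf("Camera log: %.1f GB/hour%n", cameraGbPerHour); // ~670 GB/hour
        }
    }

Even allowing for compression, a single raw camera stream dwarfs a fully loaded classical CAN log by roughly three orders of magnitude, which is why the networks and storage strategies described below become necessary.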
To analyse camera and radar information, today's distributed computers must be supplemented with computers with more advanced processors and greater memory capacity. The CAN bus does not have the capacity to handle these quantities of data; instead it is necessary to introduce elements such as Ethernet.

Data logging

Logging of vehicle data must be developed in a similar way. The vehicle industry has logged CAN communication since it was introduced. The quantities of data have gradually increased, from the first logging of J1939 at 1 Hz, which requires relatively low memory capacity, to logging of CCP/XCP at up to 1 kHz, where the memory requirement has increased to at least the gigabyte level. The next step is the introduction of logging equipment with traditional CAN connections as well as both Ethernet and analogue inputs for film and radar logging. Logging of image and radar data drastically increases the requirements on memory management. It is not just the actual size of the memory that has to be taken into account: the memory's capacity to store and upload in the shortest possible time is of major importance. To facilitate management of film and radar sequences, pre-prepared formats can be used for storage instead of logging the analogue flows. This also facilitates time stamping of logged events, so that, for example, an alarm signal on the CAN bus can be analysed together with a film sequence. The major challenge will be to analyse all data that is logged and to use it effectively in, for example, lab simulations for future autonomous functions.

Raising the level of automation

A prerequisite for raising the level of automation, with all the attendant complexity, is to have, and be able to analyse, the collected data. The data covers a multitude of different situations and types of data collected from the vehicle, the surroundings and driver interaction or AI. As several of the sensors that are needed for full autonomy are already fitted in vehicles at a lower level, it is a perfect situation to start logging data from all systems involved and from the driver's actions. Imagine a level 3 vehicle where the radar system produces an emergency braking call but the driver simply releases the accelerator. The film that has been recorded shows that there was a large black bin bag on the roadway. Using data collected from all components will subsequently enable models to be developed so that the camera can determine what type of obstacle it is seeing. Recorded data can also be used as feedback to functions under development in simulated environments, for regression testing etc.

Environments within the automotive field always place high requirements on environmental endurance. Combined with several interfaces, a high memory capacity and frequently a lack of space in the test object, this makes specifying and developing general logging equipment complicated. All vehicle manufacturers have their own strategy for constructing solutions to log autonomous functions. At Data Respons/Sylog, we have long experience of both data logging and the vehicle industry. Together with customers, we are looking at different solutions to help them raise automation to the next and subsequent levels.

BY: Crister Nilson, Consultant Manager & Automotive Business Area responsible, Sylog AB

Want to know more? Sylog would love to hear from you!
Launching Data Respons in France

Today Data Respons is launching the newest addition to the family – Data Respons France. Located in Paris, the company is able to access much of the European continent and support our parent company, AKKA Technologies, and their customer base in France.

Unlocking French opportunities

The technological complexity is increasing as more sensors and units are connected, enormous amounts of data are collected and analysed, and systems are integrated both at the edge and in cloud-based platforms, whilst maintaining end-to-end security. Data Respons France specialises in solving challenging problems and developing complex software, which requires in-depth knowledge of end-to-end technology and business scenarios. This includes mainframe computers, networks, desktops, mobile devices and embedded systems.

"I like to say that we are enabling a digital future, meaning that through our specialists we can support traditional industries with the transition from traditional to intelligent products, services and business models. It is about doing more with less and getting that necessary competitive advantage in today's turbo-charged world. We have developed a successful operating formula over the years in Data Respons and we believe that our expertise within industrial digitalisation will provide many solutions to the clients in France," says Kenneth Ragnvaldsen, CEO of Data Respons.

Guillaume Wolf takes on the new market

The Managing Director of Data Respons France, Guillaume Wolf, says it is a huge privilege to be responsible for launching the Data Respons brand in France. "Building on the amazing track record and market position of Data Respons enables us to get a running start on an exciting project. My focus for the coming months is to gather a high-performing team that can use the Data Respons culture and competence to build a new set of opportunities in the French market," Guillaume Wolf concludes.

News & Notices
19/05/2020 Last day of trading for DAT
13/05/2020 Received application for delisting
26/03/2020 Changes in the financial calendar
21/02/2020 AKKA Technologies SE completes voluntary offer for all outstanding shares in Data Respons ASA
17/02/2020 AKKA Technologies SE – Final results of the voluntary offer for all outstanding shares in Data Respons ASA
13/02/2020 AKKA Technologies SE – Preliminary results of the voluntary offer for all outstanding shares in Data Respons ASA
12/02/2020 End of offer period for voluntary offer for all outstanding shares in Data Respons ASA
12/02/2020 Update on acceptances – voluntary offer for all outstanding shares in Data Respons ASA
11/02/2020 Notification of major shareholding (flaggemelding) in DAT
10/02/2020 Extension of offer period for voluntary offer for all outstanding shares in Data Respons ASA

GraalVM – the Swiss Army knife of virtual machines

Our subsidiary, MicroDoc, is introducing GraalVM to the embedded world. It is the Swiss Army knife of virtual machines. It can accelerate the startup of applications like a car's rear view camera by more than a factor of 10. Furthermore it hosts multiple programming languages at the same time, can reduce memory usage dramatically, and frees you from taking care of complex software infrastructure issues.

Perfect fit

According to MicroDoc, the GraalVM virtual machine will cure many of the well-known headaches caused by traditional Java virtual machines. And that's not a small thing, because Java is the world's most widely used programming language. In the automotive industry, for instance, many of the telematics services, connectivity services and infotainment systems are programmed in Java. In healthcare, high-end medical machines for e.g. analysis have interfaces written in Java. And while there are numerous good reasons for Java being so popular, it carries some significant problems as well – startup performance being one of them, memory footprint another. GraalVM addresses these problems, according to MicroDoc CEO Dr. Christian Kuka:

– The history of Java is a real success story. It's a rather old language, initially invented by Sun Microsystems for embedded use. Over time it became the mainstream programming language for commercial applications, and it continues to be the most common language for serious programming. MicroDoc alone has sold 40 million Java licenses to the automotive industry.

– Some years ago, Oracle acquired Sun, and Java became an Oracle product. Oracle launched a number of internal projects to advance the technology. GraalVM was one of them, and it was built to be the programming interface of the future for the Oracle database. One of the main ideas was to give the database programmer a multilingual tool, so that he could choose the language he wanted for his programming. The approach for GraalVM was cool and it became successful, and eventually it was decoupled from the database and got a life of its own.

Beautiful convergence

In the words of Dr. Kuka, there is a "beautiful convergence of requirements" when you look at what is currently happening on servers and compare it to the embedded space. On big servers you have many services running in parallel. For this microservices architecture you want something that has a fast startup, uses a minimum of resources, does its service and shuts down again. In the embedded space you have the exact same requirements for fast startup, low footprint and high performance. With MicroDoc now introducing GraalVM to the embedded industry, customers can profit from Oracle's significant investment in this cool, new technology, initially developed to meet the requirements in the cloud for infrastructure that supports microservices.

– Now we can take this interesting new piece of technology, which is normally available from Oracle on servers, and we can put it in your device.
That means you can have the same performance and advantages that you would have in the cloud. We can run an application that has been written for servers, desktop or mobile devices and move it to wherever we want to run it.

Taking care of complexity

Bruno Caballero, head of Virtual Machine Technologies at MicroDoc and member of the GraalVM Advisory Board, explains: – We are working closely with Oracle to develop and enhance GraalVM as a platform for the embedded space. Our first embedded product with GraalVM is planned for release in the first quarter of 2022.

– And there will be many more, because GraalVM solves some important issues. It helps you take care of very complex system interaction. Normally, you have different languages and different virtual machines running simultaneously and communicating together on the same device. But you don't want a lot of complicated infrastructure. What you want is to use one virtual machine that is solid and good, and that will work for every language. Now we can take GraalVM and host everything we want. We can get rid of all these independent components and have everything built on the same infrastructure and on the same virtual machine.

Many languages

It is hard to overestimate the importance of GraalVM being multilingual. It runs applications written in languages like JavaScript, Python, Ruby and R, and it even supports the execution of C and C++ in a safe, virtualized environment. It runs any language with an LLVM compiler, including FORTRAN, together with the entire Java universe, including Scala, Kotlin, and Java itself. Not only does this help you reduce complexity on your device, it also makes GraalVM innovative and attractive for developers, and every CTO or project manager knows how important that is. If you want to attract young programmers, you have to give them the opportunity to do cool stuff. Java has been around for a very long time, and it continues to be the most common language for serious programming. But if you want to attract young developer talent, you should allow them to use languages that are new and innovative, although they might not yet be widely used or well adopted by big companies. The multilingualism of GraalVM makes that possible. You can mix Java with JavaScript and Python, and you can use existing libraries and frameworks available in those languages in one single programme (see the sketch at the end of this article).

Setting developers free

– GraalVM gives developers freedom to use the language they find best suited for the job at hand, says Dr. Kuka. – It allows them to try new and cool stuff, while having the consistency of a mature product. GraalVM is supported by one of the biggest IT companies on the planet, and as part of the Oracle database product it has a life cycle that is appropriate for automotive use cases and other long-term projects. If you're a project manager or CTO, that means you can give your developers the freedom they want. At the same time, you can be certain the technology will be durable and available for a very long time. Please welcome GraalVM – the Swiss Army knife of virtual machines.
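To make the multilingual idea concrete, here is a minimal sketch using the GraalVM polyglot API. It assumes a GraalVM runtime with the JavaScript language installed; the function and values are illustrative:

    import org.graalvm.polyglot.Context;
    import org.graalvm.polyglot.Value;

    public class PolyglotDemo {
        public static void main(String[] args) {
            // One context can host guest languages such as JavaScript.
            try (Context context = Context.create("js")) {
                // Evaluate a JavaScript function from Java ...
                Value doubler = context.eval("js", "(n => n * 2)");
                // ... and call it with a Java value.
                System.out.println(doubler.execute(21).asInt()); // prints 42
            }
        }
    }

The same Context mechanism hosts the other installed guest languages, which is what allows one virtual machine to replace several independent runtimes on a device.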
– GraalVM gives developers the freedom to use the language they find best suited for the job at hand, says Dr. Kuka. – It allows them to try new and cool stuff, while having the consistency of a mature product. GraalVM is supported by one of the biggest IT companies on the planet, and as part of the Oracle database product it has a life cycle that is appropriate for automotive use cases and other long-term projects. If you're a project manager or CTO, that means you can give your developers the freedom they want. At the same time, you can be certain the technology will be durable and available for a very long time. Please welcome GraalVM – the Swiss Army knife of virtual machines.

BY: Arne Vollertsen for Data Respons

h1|GraalVM – the Swiss Army knife of virtual machines

h2|Our subsidiary, MicroDoc, is introducing GraalVM to the embedded world. It is the Swiss Army knife of virtual machines. It can accelerate the startup of applications like a car's rear view camera by more than a factor of 10. Furthermore, it hosts multiple programming languages at the same time, can reduce memory usage dramatically, and frees you from taking care of complex software infrastructure issues.

BY: Isabelle Sarah Borchsenius, Marketing, Communication and Sustainability Manager at Data Respons

st|According to the UN, the effects of the COVID-19 pandemic could reverse the limited progress that has been made on gender equality and women's rights. The coronavirus outbreak exacerbates existing inequalities for women and girls across every sphere – from health and the economy to security and social protection.

h1|Enabling a better future for homeless children in Nepal

h2|For years Data Respons has supported homeless children in Nepal. Through our engagement we aim to develop basic infrastructure and prevent trafficking through education.

sp|Data Respons strongly believes that young people are our future, and we want to be a part of giving coming generations the best starting point possible and the ability to grow and prosper into educated, healthy and valuable individuals. We call it Enabling the Young.

An example of how we approach this vision is how we have supported our daughter company Sylog's engagement in the Swedish non-profit organisation Gatubarn i Nepal (Street Children in Nepal). The long-term mission of the organisation is to develop basic infrastructure in Nepal's villages, send children to school and protect the future of young girls. Developing basic infrastructure also contributes to reaching several of the UN's development goals.

The Society for Street Children in Nepal (Gatubarn i Nepal) is a non-profit fund-raising society working for the provision of permanent accommodation for street children, or those who risk becoming street children, in Nepal. Highest priority is given to girls, since they in particular run the risk of human trafficking. The Society provides education for children and young people who would otherwise have no access to education. In addition, the Society provides food for children still living on the streets.

To get a clearer picture of the impact the organisation has had for street children, we interviewed Ylva Lilja, a consultant manager at Sylog, who has been engaged in Gatubarn for 2.5 years and has been its chairman for the last two years.

1. Can you introduce yourself, who you are and what you do at Sylog?

My name is Ylva Lilja and I am a consultant manager at Sylog, a Data Respons subsidiary in Sweden. I have been employed at Sylog for 15 years. Sylog employs over 250 consultants in Sweden, in Göteborg, Stockholm and Linköping. My key responsibility at Sylog is to take care of and manage consultants and make sure the projects deliver on time and in accordance with the clients' expectations.

2. Can you tell us about the project, why you joined it, and its background?

I had this idea that I wanted to do something for a good cause abroad.
My mother told me about this project and about Eva Holmberg Tedert, the founder of the organisation. Eva was travelling in Nepal when she became aware of all the homeless children on the streets. She quickly realized that there were almost only boys and wondered where all the girls were. A local monk explained to her that girls are very likely to become victims of human trafficking. Shortly after, in 2010, Eva founded Gatubarn i Nepal. Hearing this story made a huge impression on me. So, I contacted Eva, travelled to Nepal and got involved in Gatubarn.

Every time I travel to Nepal, it strikes me: it's total chaos and poverty, but at the same time, people appear to be so thankful and happy. My motivation is to contribute to a good cause. It feels deeply satisfying to see how our work has immediate positive consequences for the children's lives. I was especially convinced to support Gatubarn because there is almost no administration fee on donations, which means that 97% of the money goes directly to the operations in Nepal.

Among other things, we have built an orphanage which now houses 14 girls, one of whom has moved on to a student dorm. The organisation works to give the girls a safe everyday life with good study opportunities and good leisure activities. Furthermore, we enable and educate girls to become nurses and doctors, so they can go back to their villages with medical competence and directly improve local health conditions.

h4|What about the fathers?

Helping the children in Nepal is about starting from the bottom of society. We help mothers survive, because when they die, their children become orphans with no one to protect them. Often, the fathers are no longer around. When the fathers are still part of the families, they often work abroad, for instance in Qatar. We help the mothers by talking about family planning, providing them with contraceptives, supporting them during pregnancy and helping them give birth. This is crucial for giving the next generation a good start in life. When none of the parents are around, children, and mostly girls, are especially vulnerable to becoming victims of trafficking.

The ultimate goal is to stop trafficking and to make the villages self-sufficient and independent of our help. To achieve this, we help villages get fresh water, so the children can go to school instead of fetching water, and we provide goats and beehives, so the villages can breed more goats, hopefully sell some of them, and harvest honey. One village received goats, and with the money they earned they bought tomato plants and a greenhouse. Thanks to that, they could buy hens. That is another reason why this is so important, and it feels fantastic to be a part of giving them the opportunities to develop.

3. How has Covid-19 affected the situation for the children in Nepal?

It's really hard for them. The children in the orphanage are doing relatively well compared to those in the slums. But the slums are still in lockdown, and people there are starving. The situation is horrible. All schools are closed, and the children are not allowed to have lessons online, because far from everyone has digital devices, and it would be unfair if not everyone were granted access. There is no basic infrastructure, which makes the pandemic hit them even harder. There are no roads and no medical help. The pandemic is affecting the children both short term and long term. In the short term, they suffer harm from starvation.
In the long term, they develop learning disabilities because of starvation.

4. How is Sylog supporting you in this engagement?

Sylog is one of the main sponsors, but they also support Gatubarn by giving me time off to travel there. Due to the monsoon season in summertime, I can't travel there during my regular summer vacation, so Sylog makes it possible for me to travel outside of the holiday season.

5. What's the end goal for you?

If we want to reach any of the development goals, we have to get the whole world developed. Enabling education for future generations is the key to reaching this. The contribution from Data Respons assists in developing the next generation on a very basic level. The vision is to stop trafficking by enabling education, and to make sure that the mothers stay alive by improving general healthcare. Every impact we have in Nepal aims at developing the very basic infrastructure. When developing another country, we need to do it on their terms. We won't change Nepal in the next few years, and it would be an illusion to think we are making Nepal a better place any time soon. But we are, hopefully, educating the children so they can do it themselves. They are the future.

pa|ONE MILLION plastic bottles are bought every minute worldwide. 50% is recycled. There is an enormous potential in reducing plastic waste by bringing reverse vending to new parts of the world. TOMRA's reverse vending machines already collect 35 billion bottles annually, reducing greenhouse gas emissions equal to two million cars travelling 10,000 km.

h1|Reducing waste with reverse vending

h2|TOMRA is the world leader in the field of reverse vending, with over 82,000 installations across more than 60 markets. Data Respons R&D Services has assisted TOMRA in developing reverse vending machines for more than 12 years. We are proud to partner up on solutions which reduce plastic waste and greenhouse gas emissions.

sp|Data Respons R&D Services is helping TOMRA develop a reverse vending machine for a new market, mainly with software (steering motor software, camera sensors, graphical user interface) and hardware development.

pa|The technological complexity is increasing as more sensors and units are connected, enormous amounts of data are collected and analysed, and systems are integrated both at the edge and in cloud-based platforms, whilst maintaining end-to-end security. Data Respons France specialises in solving challenging problems and developing complex software, which requires in-depth knowledge of end-to-end technology and business scenarios, including mainframe computers, networks, desktops, mobile devices and embedded systems. Located in Paris, the company is able to support much of the European continent, as well as our parent company, AKKA Technologies, and their customer base in France.

h2|Specialists within advanced software development, digitalisation and IoT

DATA RESPONS FRANCE
MANAGING DIRECTOR: Guillaume Wolf
HQ: France
FOUNDED: 2020
# OF EMPLOYEES: 10 (2020)
JOINED DATA RESPONS: 2020
pa|At the same time, a full 87 per cent of respondents don't want their movement patterns registered so that companies can take a look at them. However, using AI technology it is possible to develop surveillance processes that react only when something out of the ordinary happens. When someone behaves unusually, the police or someone else can investigate in more detail, meaning the rest of us "normals" don't have to have our behaviour registered. With this report, we can confirm that many people are concerned about personal safety, but also worried about personal integrity. This will become an ever more important part of our digital society.

h1|Personal safety vs. personal integrity

h2|A report on the Swedes' opinions regarding technology, progress and safety. Produced by Sylog AB, a Data Respons company.

bo|This investigation shows that nearly eight in ten Swedes want to see more cameras on streets and in town squares.

Critical assignment communication is key

pa|Within operative applications, critical mission communication – including critical machine-type communication (cMTC) – is crucial. This involves ensuring the necessary functionality, a high degree of robustness, security, shielding of data, and high uptimes across existing wireless carriers such as IP-based LTE and 4G. However, in line with increasing digitization, automation and autonomy, it is crucial that the military can also exploit new technologies such as fifth-generation mobile communication. 5G is not a technology in itself but a set of requirements; see 3GPP's 5G NR (New Radio) standard (release 15 and later).

High speeds and low latency on the 5G network

5G's speed of 10 gigabits per second (see eMBB, or Enhanced Mobile Broadband) is estimated to be 100 times faster than 4G. And the technology's theoretical delay of just a few thousandths of a second (1 ms) is 400 times faster than the blink of an eye. 5G terminology likes to talk about URLLC (Ultra-Reliable Low-Latency Communications) in this regard. The low delay is achieved among other things with the help of so-called edge computing, where data processing and data generation (systematic indicators, trends and performance data) are executed as close as possible to the endpoints, including sensors and effectors, which can then exchange data with one another locally with practically zero waiting time.

Separate "defence area" on the 5G network

With the help of Software-Defined Networking (SDN) and Network Function Virtualization (NFV), it is possible to assign private, specially adapted user areas – so-called "network slices" – to different sectors, industries and enterprises on the 5G core network. These areas are built on top of the underlying mobile network. They are central to 5G technology, since it is not possible to combine all the capacities previously mentioned without extreme investments. For example, it is impossible to combine very low delay with massive area coverage (up to 1 million units per square kilometre; ref. massive machine-type communication, mMTC).
The private slices are therefore adapted to critical parameters for each sector or enterprise, or for different defence applications. For example, a private 5G "defence slice" with high, prioritized speed and low latency will simplify heavy end-to-end encryption using keys that can only be read by the recipients. This is what is being tested in the 5G Vertical Innovation Infrastructure (VINNI) project, in which the Armed Forces are participating. These sorts of private areas are also of interest for other key agencies in the public sector, regardless of whether those agencies are part of a national defence structure or not. It is, for example, expected that a dedicated 5G network slice will replace the current emergency network in Norway from 2027 (after the Norwegian government decided back in 2017 that the next-generation emergency network – NGN – should be based on a commercial mobile network). The network then recognizes that a connected unit belongs to the "emergency services slice" and prioritizes it over other network traffic and communication.

5G Autonomous Service

Edge computing nodes in airports, hospitals or municipalities can provide essential services even when the central 5G core is not available.

IoT in the military domain: IoMT

We are already very familiar with the expression "the Internet of Things" – IoT, or IIoT (Industrial Internet of Things) – whilst international military jargon talks of the Internet of Military Things (IoMT) or the Internet of Battlefield Things (IoBT). Whatever term you use, the essence is the same: smart devices that talk to each other and their surroundings in their own cyber domains via the internet. This way, you can collect, process and interpret data, and control devices and sensors remotely. For the military, it is about sensor fusion: merging and analyzing data from devices and sensors such as surveillance cameras, detection sensors, base stations and gateways, smartphones, radios and communication nodes, and not least manned and autonomous vehicles and drones. Based on this information, models and usable real-time data are generated for logistics and area control, intelligence and situational awareness, command and control, and finally active protection of combatant units and bases.

With the properties and capacities offered by 5G technology, we can take a giant leap forward: as previously mentioned, the military can gather their IoMT into one dedicated slice – a "defence slice" – with robust security algorithms and procedures, where the real-time speed of 5G technology (with performance in line with fibre optics) means guaranteed quality of service (QoS). This all means that the military can get the most out of artificial intelligence (AI), virtual reality (VR) and augmented reality (AR). They can also react to emergency situations and control drones and vessels – individually or in swarms – in real time via the mobile network.

Infrastructure, vulnerability and frequency range

There has also been a great deal of debate surrounding 5G and its vulnerability, not least in relation to radio equipment and key components for the 5G core network from the Chinese company Huawei. Several countries have chosen to ban Chinese technology from their critical infrastructure, the digital foundations of the nation. The largest telecommunications companies in Norway have also decided against Huawei since the new Norwegian Security Act came into force in 2019. It is also a question of which frequency range the military will use.
In Europe, there are three so-called 5G pioneer bands, two of which fall under Frequency Range 1 (<6 GHz): the low band (700 MHz–2.3 GHz) and the medium band (3.5 GHz). The third falls under Frequency Range 2 and is often called the millimetre wave or high-frequency band (26 GHz+). There are opportunities and limitations, advantages and disadvantages to the different frequency bands, including with respect to the military's future use. The low band, known in Norway as the national network, is characterized by a high degree of robustness and wide area coverage, but it is not particularly fast compared to the other two bands. The medium band handles bigger volumes of data and is typically built out in suburban areas. The high-frequency band is super-fast but only over short distances, meaning it requires a high cell density and extensive use of repeaters (millimetre waves suffer significant attenuation or are completely blocked by building walls and physical obstacles, and are absorbed in the atmosphere).

The military have applications in all frequency bands

No military can rely solely on borrowing frequencies from commercial operators; it must be able to establish and manage its own coverage where necessary. Military organizations also realize that frequencies within all three ranges are useful, but for different applications. Frequencies in the low band are useful for deployable broadband solutions and tactical radio links (outside of built-up areas, where the risk of WiFi interruptions is lower). In the medium band, the military already holds licences for radar installations and depends on these frequencies being taken into consideration going forward. Parts of this range are also of interest to the military in connection with array-based and direction-defined antenna technology (MIMO/beamforming) and 5G drone detection (multi-static radar). The medium band is also widely used in the USA for radar, missile defence, electronic warfare and monitoring airspace. However, the American Department of Defense (DoD) recently approved the 3.4 and 3.5 GHz frequencies to help national technology companies compete with China. Finally, the high-frequency band is interesting for the Armed Forces for ultra-broadband short-range communication at bases and headquarters, for distributed sensors which require a lot of data communication, and for 5G satellite technology. Regardless of the range, having their own dedicated and harmonized frequencies will allow the military to develop new, robust and secure technology solutions for administrative and operational applications.
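The distance limitation of the millimetre band follows directly from basic radio physics. As a rough back-of-the-envelope illustration (our own, not from the article's sources), the standard free-space path loss formula shows how much more signal is lost at 26 GHz than at 700 MHz over the same distance:

```java
public class PathLossSketch {
    // Free-space path loss in dB for distance d (km) and frequency f (GHz):
    // FSPL = 20*log10(d) + 20*log10(f) + 92.45
    static double fsplDb(double distanceKm, double frequencyGhz) {
        return 20 * Math.log10(distanceKm) + 20 * Math.log10(frequencyGhz) + 92.45;
    }

    public static void main(String[] args) {
        double d = 1.0; // one kilometre
        System.out.printf("700 MHz: %.1f dB%n", fsplDb(d, 0.7));  // ~89.4 dB
        System.out.printf("26 GHz:  %.1f dB%n", fsplDb(d, 26.0)); // ~120.7 dB
    }
}
```

Over one kilometre, the 26 GHz signal loses roughly 31 dB more than the 700 MHz signal – in free space alone, before walls, foliage and atmospheric absorption are counted – which is why the high band needs a much denser cell grid.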
Without 5G, we can't exploit the potential of new technologies to the full

There is also a debate raging internally within NATO regarding vulnerability, infrastructure and range, but what is certain is that without 5G communication it would be close to impossible to fully exploit the possibilities offered by big data, artificial intelligence and cloud processing, in the military as in other sectors. The same goes for getting the full capacity effect out of high-tech platforms such as the multi-role F-35 aircraft in so-called multi-domain operations, where situational information from land, sea, air and space is processed in a fifth domain – the cyber domain – allowing us to react by combining effectors from all these domains.

The current government has stated in various forums that "the military must be the best at utilizing technology" and that this should be achieved through a high degree of independent technological competence, and through cooperation between the military and governmental agencies. In Norway the military is therefore also working on several 5G technology experiments, including experimental and pilot projects such as the 5G-VINNI project, where new and secure speech and data architecture is being integrated and tested in the "defence slice". Many of these projects are conducted in collaboration with commercial stakeholders and businesses, which is important since Norway is home to a highly competitive environment both within and outside of the military in the area of wireless, operational and tactical communication.

Some abbreviations we have used: SDN: Software-Defined Networking. NFV: Network Functions Virtualisation. LTE: Long Term Evolution. cMTC: Critical Machine-Type Communication. mMTC: Massive Machine-Type Communication. QoS: Quality of Service. MIMO: Multiple Input Multiple Output.

Sources: The only external sources used for this article were publicly available: reports and consultations from the Norwegian Defence Research Establishment (FFI), the Norwegian Defence Logistics Organisation (NDLO) and the National Communication Authority (NKOM), as well as information available on the web from NATO, Telenor, defence systems and Wikipedia.

BY: Mikkel Helweg, Business Development Manager, and Terje Jensvik, Technical Manager Solutions, Data Respons Solutions Norway

h1|5G is a game changer for the military

h2|Secure wireless data communication is hugely important for the military, both at home and abroad. Besides the apparent administrative use, this goes not least for the military's tactical communication management systems.

pa|Most unmanned vehicles are remotely controlled by radio, and their position is pinpointed via GPS. This is problematic for unmanned subsea vehicles, as electromagnetic waves do not propagate nearly as far in water as they do in air. Today, transmission speeds can reach a couple of kilobytes per second at a range of about 200 metres. Alas, that is still insufficient for the video feed needed for remote control.

h3|ROVs and AUVs

The most widespread unmanned subsea vehicles are ROVs (Remotely Operated Vehicles). These vehicles have a wired connection to the operating ship. This umbilical cord transmits video as well as the steering signals. To perform a cable survey, the ROV is manually steered along the cable while inspecting it for damage. This is both tedious and expensive. Furthermore, the need for an operating ship is a large drawback, as it causes a significant cost increase. This is especially true for time-consuming operations like cable surveys. The alternative to ROVs are AUVs (Autonomous Underwater Vehicles).
An autonomous system is more independent than an automatic one. In short, it is able to fulfil a mission entirely without human intervention. Ideally, an AUV is self-reliant and does not need an operating ship, which makes it more cost effective. Unfortunately, accurate IMUs (Inertial Measurement Units) are expensive, and the exact coordinates of the object to survey are not always known. This might cause the AUV to search in the wrong area or even get lost entirely. To prevent the latter, ships are often used to follow the AUV. Today, this is one of the greatest criticisms of AUVs: in practice, they also need an operating ship in order to function properly.

h3|Magnetic-based Cable Tracking

By using the magnetic field produced by a power cable, it is possible to track along it with relative ease and high accuracy. Potentially, this can reduce the cost of cable surveys. Additionally, it enables tracking of buried cables, as well as a way to determine how deep they are buried. The latter is important, as cables that are not properly buried are more prone to damage. By mounting a camera on the AUV, an operator can inspect unburied sections post-survey.

There are mainly three types of underwater power cables. The discussion here will be limited to three-phase AC power cables, where there is a separate cable for each phase. This is common for high-power transmission lines crossing relatively small portions of water (<100 km). For longer cables, HVDC (High-Voltage DC) is used, as the inductive loss to the surrounding water outweighs the expense of AC/DC converters.

The procedure for autonomous cable tracking can be roughly divided into three parts: signal processing to extract the magnetic field generated by the cable, localization of the cable from its magnetic field, and steering the AUV along the cable. All these parts are performed live on the AUV, i.e. without any human intervention.

h3|Signal Processing

To use the cable's magnetic field for autonomous tracking, the AUV must be within range and the field must be isolated from other fields. The latter include anomaly fields from magnetized rocks, the earth's geomagnetic field and the field induced by the AUV itself. The range is proportional to the current in the cable and dependent on the accuracy of the magnetometers. As an example, the new 420 kV cable in Oslofjorden will be detectable at about 30 m with the FLC3-70 magnetometer, which costs about 1000 NOK.

All AC currents in Norway are at 50 Hz, so the magnetic field from the cable will oscillate at this frequency. Both the anomaly fields and the geomagnetic field are static, and the field induced by the AUV can be tuned out. Consequently, the only field at 50 Hz is the one produced by the cable. This makes it extractable from the sampled signal by the discrete Fourier transform, a well-known tool in engineering mathematics. The purpose of the Fourier transform is to decompose a signal into its frequency components. Basically, it does this by calculating the covariance between the sampled signal and complex sinusoids at frequencies ranging from 0 to the sampling frequency. If the covariance is high, that frequency is present in the signal. By using a three-axis magnetometer and implementing the Fourier transform, the amplitude and phase shift of the cable-generated field can be isolated in each direction.
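In code, that extraction step amounts to evaluating a single DFT bin at 50 Hz for each magnetometer axis. The sketch below is our own illustration of the principle, not the implementation from the referenced thesis; the buffer length and sample rate are assumed example values:

```java
public class MainsFieldExtractor {
    /**
     * Single-bin discrete Fourier transform at 50 Hz.
     * Returns {amplitude, phase} of the 50 Hz component in one
     * magnetometer axis; call once per axis (x, y, z).
     */
    static double[] extract50Hz(double[] samples, double sampleRateHz) {
        final double targetHz = 50.0;
        double re = 0.0, im = 0.0;
        for (int n = 0; n < samples.length; n++) {
            // Covariance of the signal with a complex sinusoid at 50 Hz.
            double angle = 2.0 * Math.PI * targetHz * n / sampleRateHz;
            re += samples[n] * Math.cos(angle);
            im -= samples[n] * Math.sin(angle);
        }
        double amplitude = 2.0 * Math.hypot(re, im) / samples.length;
        double phase = Math.atan2(im, re);
        return new double[] { amplitude, phase };
    }
}
```

With the 50 Hz amplitude and phase known for all three axes, the static fields are effectively filtered away, and the localization step below can work on the cable's field alone.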
h3|Localization

After extracting the cable's magnetic field, it is possible to determine the relative heading and cross-track distance to the cable. These are the parameters needed to autonomously steer along the cable, and they are denoted ψ̃ and Y in Figure 1.

The magnetometers are three-axis, meaning that they decompose the magnetic signal into x-y-z components relative to the AUV. We know the magnetic field is perpendicular to the cable, so the relative heading ψ̃ between the AUV and the cable can be determined by trigonometric functions. As most AUVs are equipped with a compass, the AUV's own heading ψ is also known. From ψ̃ and ψ, the heading of the cable, ψ_c, can be determined. The distance to the cable is a bit more complicated and requires two triaxial magnetometers. By triangulating the magnetic field at the two magnetometers, the exact distance to the cable can be determined. The equations are rather lengthy and are omitted here; for readers with special interest, a derivation is given in Xiang, X. (2016).

h3|Steering

To track along the cable, the AUV must be sufficiently close and the relative heading must be zero. The now known cable heading ψ_c and cross-track distance Y can be used to implement a steering algorithm. Most AUVs use a small rudder to turn in yaw. Intuitively, it can be understood that the turn rate ψ̇ depends on the AUV's rudder angle δ. A simple heading autopilot can therefore be implemented as

δ = K_p (ψ_d − ψ)

where ψ_d is the desired heading. If the desired heading equals the cable heading, we obtain

δ = K_p (ψ_c − ψ)

This is called a proportional controller or P-controller, where K_p is a constant achieved by tuning. As seen from the equation, the rudder angle will be zero when the actual heading equals the desired one. If there is an ocean current, this controller is insufficient, as it will saturate. Therefore it might be wise to add integral action:

δ = K_p (ψ_d − ψ) + K_i ∫ (ψ_d − ψ) dt

This is called a proportional-integral (PI) controller, which sums the error over time and thus gradually suppresses the effect of a constant opposing force. To limit overshoots, it is common to add a derivative term as well:

δ = K_p (ψ_d − ψ) + K_i ∫ (ψ_d − ψ) dt + K_d d(ψ_d − ψ)/dt

This makes it a PID controller, which is an extremely popular low-level controller. It is used in everything from AUV autopilots to regulating temperatures in ovens. It has the edge over model-based controllers because it does not require a transfer function of the system, which might be difficult or even impossible to derive. Its drawbacks are the required tuning of the K-gains and the fact that the controller will never be optimal.
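As a minimal sketch (our own, with illustrative names and a fixed time step, not code from an actual autopilot), a discrete-time version of that PID law can look like this:

```java
/** Discrete PID heading controller: returns a rudder angle from the heading error. */
public class PidHeadingController {
    private final double kp, ki, kd; // gains found by tuning
    private double integral = 0.0;
    private double previousError = 0.0;

    public PidHeadingController(double kp, double ki, double kd) {
        this.kp = kp;
        this.ki = ki;
        this.kd = kd;
    }

    /** error = desiredHeading - actualHeading (rad); dt = time step (s). */
    public double rudderAngle(double error, double dt) {
        integral += error * dt;                           // integral action
        double derivative = (error - previousError) / dt; // derivative action
        previousError = error;
        return kp * error + ki * integral + kd * derivative;
    }
}
```

A real implementation would at least add anti-windup on the integral term and saturation limits matching the physical rudder, but the structure is the same.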
An example of a model-based heading autopilot is Nomoto's first-order model. It was first presented by Nomoto in 1957, but is still the basis of many heading autopilots used on marine craft today. The transfer function is

ψ(s)/δ(s) = K / (s(1 + Ts))

which is an integrator in cascade with a low-pass filter. Or, put more simply: the heading is the sum of the rudder angles, where too rapid changes are filtered out. K and T are here not achieved by tuning, but from a mathematical model of the craft. For AUVs the mathematical model tends to be very complicated since, compared to ships, they have two extra controllable degrees of freedom (heave and pitch). Therefore, they are usually implemented with PID low-level controllers.

Figure 2 shows the control loop of the system. In the AUV block, the heading is measured and the relative heading is calculated from the magnetometer readings. In the PID block, the rudder angle is calculated from the relative heading. Whether a tuned PID or a derived model-based controller is used, the AUV will be able to steer along the cable with the same heading. However, a heading autopilot alone does not ensure that the AUV moves closer to the cable. Instead of making a new low-level controller, a guidance system can be used. A common guidance law is LOS (Line of Sight).

LOS was originally developed during the Cold War for military operations. It was used to intercept moving targets by predicting the meeting point. Specifically, it was used in surface-to-air missiles by constantly recalculating and adjusting the heading to ensure impact on the target aircraft. Assuming the cable continues in a somewhat straight line, LOS guidance can be used in a less explosive manner to diminish the horizontal distance while holding a smooth course. An illustration of the LOS steering law is shown in Figure 4. The desired heading is the heading of the cable plus the heading to the meeting point, χ_los. The distance to this point is decided by tuning Δ. The desired heading when using LOS is

ψ_d = ψ_c + χ_los, where χ_los = arctan(−Y/Δ)

Note that the meeting point is recalculated at each iteration, making the AUV steer in a gradual curve until the cross-track error is diminished and it is directly above the cable. At this point, a mounted camera can be used to record video along the trajectory for inspection post-survey. Figure 4 shows an illustration of the control loop when including the LOS guidance in cascade with the autopilot.
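Combining the pieces, one iteration of the guidance-and-control loop could look like the following sketch (our own illustrative Java, using the PidHeadingController sketched above and an assumed look-ahead distance):

```java
public class CableTrackingLoop {
    // Illustrative tuning constant (would be found experimentally).
    static final double DELTA = 10.0; // LOS look-ahead distance (m)

    /** LOS law: aim at a point DELTA metres ahead on the cable line. */
    static double desiredHeading(double cableHeading, double crossTrackY) {
        return cableHeading + Math.atan2(-crossTrackY, DELTA);
    }

    /** One control iteration: guidance computes the setpoint, PID the rudder. */
    static double step(PidHeadingController pid, double cableHeading,
                       double crossTrackY, double auvHeading, double dt) {
        double psiD = desiredHeading(cableHeading, crossTrackY);
        return pid.rudderAngle(psiD - auvHeading, dt); // passed on to the actuator
    }
}
```

In practice, the heading error should also be wrapped to [−π, π] before it is fed to the controller, so the AUV always turns the shorter way.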
h3|Conclusion

The subsea environment introduces many challenges for surveying installations. Today, ROVs are the most used solution, and for a relatively small survey area they may be the best one. Yet for cable surveys, which span a large distance, they have severe limitations in both cost and time. A proposal is to use AUVs in conjunction with magnetometers to autonomously track along power cables. This might dramatically reduce the cost of such surveys. A great concern for AUVs is the complicated algorithms needed to find the survey area and to prevent getting lost. By using the magnetic field of power cables, it is possible to determine steering reference points with relative ease. Once this critical information is obtained, many existing control algorithms can be implemented.

References:
Fossen, T. (2011). Handbook of Marine Craft Hydrodynamics and Motion Control.
Kjetså, A. S. (2017). Localisation of Submarine Power Cables by Magnetometers on Remus 100 AUV. Trondheim.
Xiang, X. (2016). Subsea Cable Tracking by Autonomous Underwater Vehicle with Magnetic Sensing Guidance. Sensors.

BY: Aksel Stadler Kjetså, Development Engineer, Data Respons

h1|Autonomous cable survey with magnetometers

h2|An extensive network of conductors is needed to transmit electricity from power stations to the consumers. When these cables cross a large portion of water, they are usually buried beneath the seabed. The cables are there exposed to the marine environment, and regular surveys are required. Unmanned subsea vehicles are usually used to carry out said surveys.

pa|The contract comprises high-end computer solutions embedded in the customer's advanced instruments and high-value analytical and diagnostic solutions. Deliveries will take place in 2020, with further opportunities going forward.

– The ongoing trends of increased automation, industrial digitalisation (Industry 4.0) and the internet of things (IoT) provide great growth opportunities for our company. Germany is the largest market for industrial R&D services and solutions in Europe. We have been present in this market since 2005 and now have a solid platform of 7 locations and 500 employees. Germany accounted for 25% of total group revenues in the first half of 2019, and we expect continued growth going forward, says Kenneth Ragnvaldsen, CEO of Data Respons ASA.

For further information: Kenneth Ragnvaldsen, CEO, Data Respons ASA, tel. +47 913 90 918. Rune Wahl, CFO, Data Respons ASA, tel. +47 950 36 046

About Data Respons: Data Respons is a full-service, independent technology company and a leading player in the IoT, industrial digitalisation and embedded solutions market. We provide R&D services and smarter solutions to OEM companies, system integrators and vertical product suppliers in a range of market segments such as Transport & Automotive, Industrial Automation, Telecom & Media, Space, Defence & Security, Medtech, Energy & Maritime, and Finance & Public Sector. Data Respons ASA is listed on the Oslo Stock Exchange (Ticker: DAT) and is part of the information technology index. The company has offices in Norway, Sweden, Denmark, Germany and Taiwan. This information is subject to the disclosure requirements pursuant to section 5-12 of the Norwegian Securities Trading Act.

h1|Contract in Germany of NOK 17 million

h2|Data Respons has received a contract of NOK 17 million with a German customer in the industrial automation (Smart Factory) segment.

pa|The contracts include specialist R&D services, software development, advanced testing and system integration supporting all phases of the development cycle. The deliveries will be carried out during 2019.

– These contracts confirm a strong structural demand among our customers and strengthen the record-high order intake so far in 2019. The most important drivers are increased R&D investment among our customers combined with an increasingly technology-driven product development strategy. Large investments in new network solutions (5G) enable broader implementation of the Internet of Things (IoT) across all industries and offer great opportunities going forward. Our skilled engineers deliver specialist services in long-term and business-critical development projects, says Kenneth Ragnvaldsen, CEO of Data Respons.

For further information: Kenneth Ragnvaldsen, CEO, Data Respons ASA, tel. +47 913 90 918. Rune Wahl, CFO, Data Respons ASA, tel. +47 950 36 046

About Data Respons: Data Respons is a full-service, independent technology company and a leading player in the IoT, industrial digitalisation and embedded solutions market. We provide R&D services and smarter solutions to OEM companies, system integrators and vertical product suppliers in a range of market segments such as Transport & Automotive, Industrial Automation, Telecom & Media, Space, Defence & Security, Medtech, Energy & Maritime, and Finance & Public Sector. Data Respons ASA is listed on the Oslo Stock Exchange (Ticker: DAT) and is part of the information technology index. The company has offices in Norway, Sweden, Denmark, Germany and Taiwan.
www.datarespons.com

h1|Contracts in Sweden of SEK 35 million

h2|Data Respons has signed contracts of SEK 35 million with a Swedish customer within Telecom & Media.

pa|The contracts include specialist R&D services, software development, advanced testing and system integration supporting all phases of the development cycle. The deliveries will be carried out during 2019.

– These contracts confirm a strong structural demand among our customers and contribute to a record-high order intake so far in 2019. Over the last years, we have enjoyed great success in the Swedish market and significantly increased revenues through strong organic development, says Kenneth Ragnvaldsen, CEO of Data Respons. – The main drivers are increased R&D spend among our customers and the fact that companies are becoming more technology-driven in their product development strategies. Products are becoming more intelligent, service- and software-oriented, and are expected to be securely connected 24/7. Our skilled engineering teams deliver highly specialised R&D services in long-term and business-critical product development projects, Ragnvaldsen concludes.

For further information: Kenneth Ragnvaldsen, CEO, Data Respons ASA, tel. +47 913 90 918. Rune Wahl, CFO, Data Respons ASA, tel. +47 950 36 046

About Data Respons: Data Respons is a full-service, independent technology company and a leading player in the IoT, industrial digitalisation and embedded solutions market. We provide R&D services and smarter solutions to OEM companies, system integrators and vertical product suppliers in a range of market segments such as Transport & Automotive, Industrial Automation, Telecom & Media, Space, Defence & Security, Medtech, Energy & Maritime, and Finance & Public Sector. Data Respons ASA is listed on the Oslo Stock Exchange (Ticker: DAT) and is part of the information technology index. The company has offices in Norway, Sweden, Denmark, Germany and Taiwan. This information is subject to the disclosure requirements pursuant to section 5-12 of the Norwegian Securities Trading Act.

h1|Contracts in Sweden of SEK 40 million

h2|Data Respons has signed contracts of SEK 40 million with a customer within Telecom & Media.

pa|Data Respons Solutions designs, develops and delivers smart embedded and industrial IoT solutions by combining specialist engineering competence with standard embedded components from leading technology partners. Data Respons Solutions is involved throughout the entire process – from specification, system architecture, HW design, software development, secure connections, testing and qualification to volume deliveries. The company also provides value-adding services based on specialist competence, including technical support and lifecycle management services, and is involved in next-generation studies. The demand for increased SW content, more functionality, higher performance and securely connected solutions keeps increasing.
A customised solution for your specific needs will often result in a lower cost of ownership and ensure that your system has appropriate end-to-end security, with a fallback solution to avoid data being compromised.

st|Smart embedded and industrial IoT solutions

DATA RESPONS SOLUTIONS
MANAGING DIRECTOR: Jørn E. Toppe
HQ: Høvik (Norway)
FOUNDED: 1986
# OF EMPLOYEES: 93 (2017)
JOINED DATA RESPONS: Since the beginning

pa|In August 2019 the 3rd generation of the Fairphone was launched. The Fairphone, developed by a Dutch company and manufactured in Taiwan, is the only mobile phone in the world using Fairtrade-certified gold, while the tin and tungsten it needs are sourced in non-conflict zones. It consists of more than 80% recycled materials, and it has a modular design to enable repair. It even comes with a small screwdriver. But as green and likeable as the Fairphone may be, it is still very much a niche product. So far, fewer than 200,000 have been produced, a vanishingly small number compared to the millions and millions of conventional smartphones sold each year. Yet there is no doubt that, looking forward, products like the Fairphone will attract increasing attention. The demand for electronic devices designed with a lighter environmental footprint is growing, and it comes from several places: from consumers as well as lawmakers, but not least from companies looking for ways to reduce the environmental footprint of their business.

Just consider the long-term impact of what happened on the 11th of March 2020, the day the European Commission launched its new Circular Economy Action Plan. Being part of the European Green Deal, it aims at making circular products the new norm in the EU. The Circular Economy Action Plan focuses on the sectors that are the most resource-intensive and have the highest potential for circularity. Among other things, the plan targets electronics and ICT with a "Circular Electronics Initiative" to promote longer product lifetimes through reusability and reparability, as well as upgradeability of components and software to avoid premature obsolescence. Among the initiatives are establishing a new "right to repair", a common charger solution, reward systems for returning old devices, and a new regulatory framework for batteries.

Not only legislators like the European Commission, but also a growing number of consumers are demanding electronics designed and manufactured with a lighter environmental impact in mind. The same goes for businesses. An increasing number of companies are looking for ways to align their business strategies with their carbon footprint strategies. More and more businesses will be looking much more closely and seriously into how to embrace environmental considerations, maybe even exploring how a stronger green focus could boost their overall competitiveness, open doors to new markets, and so on. However, when it comes to electronics and digital solutions, you need to brace yourself for an uncertain journey. Eco-friendliness in electronics development is complex and requires careful consideration. You won't find many straight answers, and there are numerous dilemmas that need to be articulated openly.
To begin with, let us sum up what exactly a green electronics product should try to achieve:

li|The functionality of the device should – one way or the other – contribute to reducing our environmental impact on the planet.
li|The device itself should be designed to last longer, be easier to repair and consume less energy than similar devices.
li|The manufacturing of the device should affect the environment as little as possible: it should be made of as few components as possible, with as much recycled material as possible.
li|The device should have end-of-life potential, meaning that after its initial intended use the product could be used in other ways.

For Data Respons the journey has just begun. As a company delivering R&D engineering services, software and hardware development, and smarter embedded and IoT solutions, we are dedicated to aligning our business and sustainability strategies. As a consequence, we are working on reducing our own environmental footprint and have set a goal of . We also take responsibility for our suppliers. We have established Supplier Conduct Principles, ensuring that business operations are environmentally sound, and we perform regular due diligence reviews of our suppliers. Our Supplier Conduct Principles include guidelines for our suppliers regarding labour standards, hazardous substances, greenhouse gases, waste treatment and conflict minerals.

On top of that, we are exploring how to lower the environmental footprint of our core business – the products and solutions we develop for our customers. We believe that this will increase competitiveness, for us and for our customers alike. However, we can only move forward in close cooperation with our customers, and ultimately it requires re-thinking conventional product development and business strategy. In other words, the two well-known parameters in our business – pricing and time-to-market – need to be supplemented with a third parameter: environmental impact. With more than 30 years of experience in software and hardware development, Data Respons is up to the task, and we are educating key personnel in how an increased focus on eco-design and sustainability creates stronger competitiveness.

But as mentioned earlier, going green in electronics is a complex issue, and there are no easy answers. In fact, it is important to be aware that a number of recent trends in electronics and digital technologies have the potential to put an additional burden on the environment. As an example, if you want to design products that last longer, you'll be going up against a trend that has been gathering momentum over the last 10 to 15 years: the life cycle of electronics is getting shorter and shorter. While component manufacturers have been shifting their focus from e.g. the vehicle industry to computers and telecom, components used in consumer electronics have become cheaper and cheaper. Thus technology for industrial and other non-consumer use is utilizing components developed for consumer electronics. But as the average lifespan of a smartphone is 2-3 years, components are built to that standard. To fight that trend, electronics developers and their customers need to work together to achieve stronger diversification between consumer and industrial products. The customer may even have to accept a higher price, although a longer operational lifetime would still ensure a good return on investment.

Another trend with similar built-in dilemmas is energy consumption. The vision shared by many tech companies is that their technology enables people and businesses to do more with less. Digitalisation increases efficiency, and that in itself is said to have a positive effect on the environment. That may very well be correct, but a significant side effect of digitalisation is an increase in energy consumption. Power-hungry server farms pop up everywhere across the globe, and researchers predict that data centres will soon have a bigger carbon footprint than the entire aviation industry.
Just to pick one example, it is well known that Bitcoin needs vast amounts of computing power. At the height of the Bitcoin speculative frenzy in 2018, the crypto-currency was estimated to produce an annual amount of carbon dioxide equivalent to 1 million transatlantic flights. And there is more to come. Soon a tsunami of data will hit us, coming from 5G, high-res video, IoT, surveillance cameras, etc., and these technologies will need enormous amounts of electricity to function. Some researchers predict that by 2025 data centres will consume 20 per cent of the world's electricity. So, beyond the catchphrase of achieving sustainability through technology, we'll be looking at some significant challenges, e.g. that electronics will need a dramatic rise in energy efficiency to make up for that carbon footprint.

Another thing to consider is that the energy consumption of a device while operational might not be its most significant environmental impact. Instead, it may be the energy used in manufacturing it. While conventional consumer products like a refrigerator or a light bulb consume much more energy during their lifetime than was consumed in their manufacture, the opposite is true when it comes to state-of-the-art electronic equipment. Manufacturing methods for electronic circuitry are energy-intensive, and the energy required to manufacture these devices is much higher than the energy needed to run them.

These are some of the pressing dilemmas and challenges that need to be addressed when it comes to eco-friendliness in electronics development. At Data Respons we may be able to provide some of the answers. With our extensive knowledge about components and materials, we can advise our customers on how to create a product with a reduced environmental footprint. Also, together with our customers and other players in the industry, we can influence component manufacturers to increase the lifetime of their products, thus securing a longer lifetime for the solutions we develop for our customers. But on a higher level, these challenges can only be addressed by collaborating. Component manufacturers, software and hardware consultants, and their customers need to find ways to collectively align their environmental strategy and their business strategy. To quote Fairphone CEO Eva Gouwens in a YouTube video presenting the Fairphone 3: – I don't need the best phone in the world. I want the best of the world in my phone. It is time to get together and talk about how we can combine the ambition of developing the best technology in the world with the ambition of developing the technology that is best for the world.

BY: Managing Director, Data Respons Solutions

h1|Greener electronics, yes please, but how?

h2|The complexity of lowering the environmental impact of electronics

sp|You'll find it fairly easy to buy organic vegetables in your local supermarket or to find eco-cotton t-shirts in your favourite fashion shop.
But when it comes to electronic devices, buying green is much more difficult. Electronics designed and produced to be environmentally friendly are few and far between. However, that will change, in the B2C as well as in the B2B sector.

pa|2020 has taught us how quickly we can adapt to new challenges. From one day to the next, the majority of our employees left the offices and got used to working remotely. People, companies and politics adapted almost overnight. With this experience in mind, we know we have the capability to make quick changes, and we can transfer this experience to other challenges – for instance, the transition from fossil to renewable energy. Covid also reminded us, once again, of the need to increase the speed at which we work towards the UN Sustainable Development Goals. Data Respons has committed itself to enabling a minimum of 100 sustainable technology projects every year, with the added ambition of increasing that number year on year, supporting our goal of facilitating sustainability through technology. Here are six green tech projects from 2020 that enabled more sustainability, realized by our German daughter company IT Sonix through their customers.

1. Online energy trading platform for renewable energy

IT Sonix has developed an energy trading platform for the German market and is now expanding the concept to the whole of Europe. On this platform anyone can sell their own renewable energy – for instance solar, wind, hydro or biogas – from a minimum size of 3000 MWh. As an energy supplier you can thus be sure that your offer is taken to market in the best possible way, and that you will get the correct market price without any delay. The platform also indirectly incentivizes more people to invest in small-scale renewable energy by making it possible and easy to sell their excess energy to the market.

"We developed the platform that enables trading renewable energies for our customer. We have been working on this project for over a year, and until now it was a rather small team. We are delighted that we can contribute to our customer's success. Now we are helping to make it available in all of Europe. This case contributes to the green shift by making it possible to capitalize on sustainable energy production with little effort and energy market knowledge", CTO at IT Sonix, Artur Schiefer, emphasizes.

2. Online solar power platform for all

Another similar project is an online platform for publishing proposals for solar power plants. The owner of any given land area describes the conditions and environment where solar panels are to be installed. Solar energy and energy storage providers can then contact the landowner and propose their individual offers. Another advantage is that the platform also functions as a bridge to the open energy market: as a landowner with a solar energy plant on your property, you can buy the energy back later at a discount or use it at other places, like your vacation home. Through this online platform, the owner of an area that can fit a small solar plant can easily connect with the right companies and become almost self-sufficient with renewable energy. Most importantly, the platform lowers the bar for more people to become providers of renewable energy and take part in the smart energy infrastructure.

3. Smarter and more effective wind turbines

The third project is about making wind turbines more effective and intelligent through automation and smarter connectivity.
Wind turbines need regular maintenance and cannot run under certain circumstances, for instance when endangered birds are passing through the area or when the wind is too strong. IT Sonix developed and implemented a software stack that gathers data on the availability of wind turbines over a given timeframe. This data allows more reliable planning of operational time and predicted downtime. Through better data and operational understanding, it also becomes easier to integrate wind energy into the grid and to have a clear picture of the potential energy mix at any given time.

4. Smart charging network across Germany

If you are one of the pioneers who have purchased an electric car in Germany, you have probably experienced the frustration of not finding a functioning charger, or you got lost in the jungle of varying charging providers. IT Sonix has developed a platform that gathers several charging providers on the same digital platform, making it easier to own an electric car and use it across charging providers. The platform is already being rolled out across Germany, with Europe next in line, and more existing charging providers and new car chargers are being added.

5. Urban electric car sharing

Cars are parked 95% of the time. IT Sonix has also developed a cloud backend and architecture, worked on the frontend user experience, and designed the mobile apps that enable pure electric car sharing in Berlin. The mission is to use cars more efficiently, contribute to electrification, and reduce the number of unused cars in the cities. The platform is planned to roll out internationally in 2021. Electric car sharing provides flexible mobility without the costs, commitment and responsibilities of owning your own car. At the same time, you contribute to a quieter, less polluted and more liveable city by occupying a car only when you really need it and by driving purely electric. Ultimately, this solution helps save money for businesses and individuals alike, as both customer types can rent electric cars on demand and thereby reduce costs.

6. Transporting more goods with fewer trucks

IT Sonix has developed a solution that connects every truck to the cloud, enabling connectivity and data gathering on a whole new level. Better data and connectivity enable greater efficiency and make it possible to transport more goods with fewer trucks on the road. The solution has provided the stepping stone for automated and optimized rides, making it possible to predict which routes save the most emissions and to give drivers feedback on how to drive more economically, to mention a few of the features. Allowing a company to track its vehicles in real time helps avoid delays, simplifies communication and avoids unnecessary rides. Finally, it increases safety by addressing unsafe driving and helps drivers optimize their driving experience through a dedicated driver app.

This is IT Sonix

IT Sonix is located in Leipzig and has 125 employees. The company is a leading niche provider of specialist services and SW technology (Java, Embedded, Cloud and AI) specifically aimed at "Connected Car" solutions, the internet of things, mobile services and embedded applications. They have been active in telematics, communication and project management for more than 15 years, specializing in agile software development for client-server systems, mobile applications and on-board units. The company is deeply involved in the ongoing digital transition for some of the leading automotive brands in Germany, one of the world's most dynamic and R&D-intensive industries.
IT Sonix has been part of the Data Respons group since 2018.
di|Isabelle Sarah Borchsenius | Marketing, Communication & Sustainability Manager
st|BY: 1. Online energy trading platform for renewable energy 2. Online solar power platform for all 3. Smarter and more effective windmills 4. Smart charging network across Germany 5. Urban electric car sharing 6. Transporting more goods with fewer trucks This is IT Sonix
h1|Six sustainable tech projects from 2020
h2|We have an ambition to be directly involved in at least 100 sustainable tech projects every year that make a difference. Here are six examples from 2020.
pa|Consultants from TechPeople worked on the recording unit that communicates with the cap and the smartphone app. Around 1% of the world’s population suffers from epilepsy, yet people living in developing countries do not have access to the scanning equipment necessary to get a diagnosis. In the West African country Guinea, 12 million people share one EEG machine and three neurologists. With solutions like BrainCapture, expensive scanning facilities can be made remote and available.
st|The system consists of a head-cap filled with electrode sensors, a small recording unit that receives and forwards the signals recorded from the cap, and a smartphone app which receives the data and sends it to cloud-based diagnostic software.
h1|Scanning for epilepsy using smartphones
h2|The Danish company BrainCapture has created a solution that can scan the brain using an electrode head-cap, a recording unit and a smartphone app! The Data Respons subsidiary TechPeople assisted BrainCapture with the hardware development.
pa|The company delivers specialist services, development projects and experienced specialists with extensive industry knowledge. Located at four locations in Norway (Høvik, Kongsberg, Stavanger and Bergen), the company makes sure to be close to its customers, enabling efficient collaboration and knowledge transfer. Over the last 30 years, the company has acquired expertise and valuable insight about physical environments and industry standards across several industries, enabling them to deliver high-quality development projects and services. Their specialists cover a broad range of competences and disciplines, enabling them to develop everything from apps and cloud based services to intelligent sensors and IoT solutions. The flexible delivery model can support any customer need – from an R&D specialist to a complete team.
st|A complete technology partner from sensor level to the mobile application. We are hiring!
DATA RESPONS R&D SERVICES
MANAGING DIRECTOR: Ivar A. Melhuus Sehm
HQ: Høvik (Norway)
FOUNDED: 1986
# OF EMPLOYEES: 100 (2020)
JOINED DATA RESPONS: Since the beginning
pa|The solution collects data from the sensors on the ships, enabling top-down situational awareness, a fleet-wide approach, and insight into operational and technical performance. Eniram’s solutions help cut emissions that are harmful to the environment by optimising energy consumption on e.g. cruise ships, tankers and container vessels.
h1|Energy efficiency through digitalisation
h2|Eniram (a Wärtsila company) is a Helsinki-based company providing the maritime industry with energy management technology to reduce fuel consumption and emissions.
h3|Our teams at in Sweden have been helping Wärtsila company Eniram with rugged hardware solutions for both passenger ships and industrial vessels throughout several life cycles, enabling Eniram to reduce fuel consumption and emissions.
pa|Simon Breum Fisker and Jacob Lindeberg launched their Sculpto printer in 2016, and in September 2017 they released a new, improved model called Sculpto+. The new printer is quicker and quieter. Also, a new “print engine” makes it even easier to find models on the internet, e.g. on thingiverse.com, and print them directly from your phone. But although extremely user friendly, the machine is highly sophisticated, as both the arm and the bed move in a polar coordinate system. For the Sculpto+, TechPeople applied advanced printer engine control software, up until now only used in a few expensive high-end printers. TechPeople consultants also improved the original PCB layout. The Sculpto+, though fairly cheap, user friendly and brightly coloured, is a sophisticated little device. Developers from TechPeople have assisted Sculpto, among other things doing a PCB layout review. TechPeople’s review expert made a number of recommendations for improvements. The new layout has eliminated engine noise. Now you can only hear a faint humming from the printer fan. TechPeople also applied new printer engine control software. The control software makes acceleration and deceleration softer, resulting in higher speed and precision. An improved motion control algorithm was developed and implemented for the printer. The algorithm ensures controlled acceleration of the printer’s stepper motors and a smooth print pace. There were two major challenges regarding motion control for the 3D printer that needed to be considered. The first challenge was the physical movement of the print head and print plate. Due to the bipolar nature of the printer, the print plate is required to rotate with a speed that is dependent on the print-head distance from the center of the plate. Otherwise, the print-head will not keep a constant pace when moving across the print plate in a straight line. The constant pace is required to get a good print result.
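To see why this is demanding, a little polar geometry helps. The sketch below is purely illustrative (hypothetical names, not Sculpto’s actual firmware): it computes the plate angular velocity required to keep a constant Cartesian head speed along a straight line passing at distance d from the plate centre. The closer the line passes to the centre, the more violently the plate must turn.

/* Illustrative polar kinematics for a rotating-plate printer - hypothetical, not Sculpto code. */
public final class PolarCrossing {
    public static void main(String[] args) {
        double v = 40.0; // constant Cartesian head speed along the line [mm/s]
        double d = 2.0;  // closest distance between the line and the plate centre [mm]
        // The head moves along the line y = d, with x running from -20 mm to +20 mm.
        for (double x = -20.0; x <= 20.0; x += 5.0) {
            double r = Math.hypot(x, d);            // radial coordinate of the head [mm]
            double omega = d * v / (x * x + d * d); // required |dtheta/dt| [rad/s]
            System.out.printf("x = %6.1f mm   r = %6.2f mm   omega = %7.3f rad/s%n", x, r, omega);
        }
        // The required rate peaks at v/d when x = 0; for a line through the centre
        // (d -> 0) it diverges, i.e. the plate would have to flip 180 degrees instantly.
    }
}

The peak rate v/d at the closest point is exactly the situation described next: a path crossing near the centre forces the plate motor through a violent accelerate-decelerate cycle.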
The challenge is that keeping this constant pace requires the print plate stepper motor to accelerate and decelerate quickly when crossing near the print plate center, in order to turn the plate 180 degrees quickly so that the print head can maintain a constant pace all the way across the plate. Another issue related to physical movement of the print head is that when it needs to change direction, this should be done as quickly as possible, to avoid stopping too long at the same location and thus leaving too much print material in the edges of the print. Meanwhile, it is critical not to accelerate too fast, because otherwise the stepper motors risk skipping steps, with the result that the coordinate system of the 3D print becomes shifted in the middle and the print cannot be used. The second challenge was motion planning based on a stream of print commands known as CNC g-codes. This means that the motion controller at any time only knows about a small part of the complete path for the 3D print. Even with this limited information, the printer is required to keep a smooth pace by constantly updating its planned route ahead, in order for print material to be extruded in smooth layers. If the print-head at any time stops even for a short while, it will leave too much print material at the stopping location, resulting in bumpy prints. The challenge here was that a lot of mathematics needed to be defined, so that print speed can be maximised while the motion is kept within several different constraints on speed and acceleration. The solution for improved motion control consists of two major updates as well as a number of minor updates for stability, configuration, test and debug possibilities. The major updates are an updated motion planner algorithm that is based on constant jerk motion (jerk is the derivative of acceleration) and a stepper motor control that supports constant jerk motion and the microstepping step drivers used in the Sculpto printer. The motion planner algorithm implements constant jerk motion governed by the classical constant acceleration motion equations expanded to the constant jerk case. The equation that forms the basis for all the optimisation formulas derived and implemented in the motion planner is s = v·t + ½·a·t² + ⅙·j·t³, where s = distance, v = velocity, a = acceleration, j = jerk and t = time. Movement using constant jerk involves up to 7 steps (phases) in the motion. Furthermore, the planner needs to handle cases where it is not possible to achieve the maximum defined acceleration or velocity for each axis. These cases arise when the motion is limited either by the total distance of movement or by the dynamic velocity constraint. The dynamic velocity constraint is caused by 4-axis motion on the printer, where each axis has an independent maximum velocity and acceleration, since the overall movement needs to follow point-to-point print commands. The updated stepper motor control supports constant jerk motion instead of the previously used constant acceleration motion. This pushed the microcontroller used in Sculpto closer to its limits, since the requirement for the solution was to use the existing microcontroller due to hardware cost considerations. Both memory and CPU resources were restricted. A solution was implemented that found a compromise between memory usage by the pre-calculated stepper timings during motion planning and CPU load in the on-the-fly calculations during the 50 kHz stepper update interrupt routine.
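To make the constant jerk kinematics concrete, here is a minimal sketch (illustrative only, not the Sculpto firmware; all names are hypothetical) that evaluates one constant-jerk segment, the building block a 7-phase S-curve planner chains together by carrying (a, v, s) across each phase boundary:

/* Illustrative constant-jerk segment evaluation - hypothetical, not Sculpto firmware. */
public final class JerkSegment {
    final double j;            // constant jerk for this segment [mm/s^3]
    final double a0, v0, s0;   // acceleration, velocity, position at segment start

    JerkSegment(double j, double a0, double v0, double s0) {
        this.j = j; this.a0 = a0; this.v0 = v0; this.s0 = s0;
    }

    // Standard constant-jerk kinematics, matching s = v*t + 1/2*a*t^2 + 1/6*j*t^3
    double acceleration(double t) { return a0 + j * t; }
    double velocity(double t)     { return v0 + a0 * t + 0.5 * j * t * t; }
    double position(double t)     { return s0 + v0 * t + 0.5 * a0 * t * t + j * t * t * t / 6.0; }

    public static void main(String[] args) {
        // Ramp from zero acceleration: j = 20000 mm/s^3 reaches a = 2000 mm/s^2 in 0.1 s.
        JerkSegment ramp = new JerkSegment(20_000, 0, 10, 0);
        double t = 0.1;
        System.out.printf("a = %.0f mm/s^2, v = %.0f mm/s, s = %.3f mm%n",
                ramp.acceleration(t), ramp.velocity(t), ramp.position(t));
        // A 7-phase S-curve chains such segments with jerk in {+J, 0, -J},
        // carrying (a, v, s) at each phase boundary into the next segment.
    }
}

Closed-form expressions like these are what a fixed-point implementation has to reproduce, which is why the precision work described next mattered.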
Furthermore, some of the calculations were converted from 32-bit fixed point to 64-bit fixed point in order to achieve the computation precision required by the constant jerk motion equations. All in all, the software update enabled the Sculpto printer to use smoother and faster motions during printing, while keeping within the limits of the existing microcontroller. As a bonus, Sculpto got access to and knowledge about more commands and settings in the internal motion controller. This enabled them to continue optimising the printer settings and constraints as they see fit, even after the software update was completed. – 3D printing has a huge potential. But the typical printer is big, heavy and difficult to operate, say Simon Breum Fisker and Jacob Lindeberg, who founded Sculpto in 2015. – We decided to develop a printer that was cheap, compact and user friendly. In particular we wanted to simplify the steps you have to go through from designing a model to actually printing it. That was really complicated and required a lot of technical knowledge. We wanted to automate that as much as possible and open up 3D printing to a much bigger group of users. When Simon Breum Fisker and Jacob Lindeberg set out to develop the Sculpto printer, they decided to focus on the two aspects that they saw as the major barriers to 3D printing becoming a commodity: price and user friendliness. – We took the printer control board and moved it to the screen the user carries in his pocket – his smartphone. We developed an app for controlling the printer wirelessly. In that way we can continuously add value for our customers through the app and through online updates of the printer. – Also, we chose bipolar printing because it makes the printer look nicer and more accessible. A bipolar printer extrudes the plastic at the intersection point of two circles, as opposed to a Cartesian coordinate system, where the plastic is extruded at the intersection point of two straight lines.
di|Brian Obel Manager Aarhus, TechPeople
st|BY: Affordability and user friendliness
h1|Improving motion control in a bipolar printer
h2|Sculpto – user friendly 3D printing for kids, school and leisure. No expert knowledge is needed to use the Sculpto printer. You can control it with your smartphone. Danish start-up company Sculpto makes 3D printing available for children, teachers and others. Developers from the Danish consulting company TechPeople, owned by Data Respons, have assisted Sculpto in developing a new, improved version of the printer.
h3|Sophisticated device Physical movement Motion planning Improved motion control Updated stepper motor control Smoother and faster About Sculpto
pa|The OMG Data-Distribution Service for Real-Time Systems (DDS) is the first open international middleware standard directly addressing publish-subscribe communications for distributed real-time and embedded systems. It offers abstracted communication schemes, where the different systems and applications can cooperate without a typical client/server architecture. Currently more than 10 companies or groups provide DDS middleware / products. The goal of DDS is to provide the right data, at the right place and at the right time, providing a global data space for systems ranging from the machine domain (Edge) to the Cloud. This is done via middleware, providing a portable application API and an underlying reliable and real time interoperability protocol (RTPS). It uses quality (Quality of Service – QoS) schemas to ensure that data transfer between participants is done according to mutually agreed standards. Control system data should often be limited or filtered to be the right data, based on rate, content, etc. Being a data centric solution, DDS understands the schema of the shared data, allowing for such advanced filtering (for instance, filtering on a publisher to send ONLY temperature data when it is above 300°C is possible). DDS dynamically discovers publishers and subscribers, the data they want to share and how they want to do so. Its self-forming nature then ensures that data is delivered to the right place even if consumers arrive late. It also detects loss of data or data producers (it implements a QoS-enforced logical channel between each publisher–subscriber pair). The balance of scarce system resources is needed to deliver the data at the right time. DDS middleware utilises QoS policies, for instance set by applications at runtime, to balance efficiency and determinism (for example, if a subscriber requires an update every 10 ms and its matched publisher does not deliver, the system declares an error, enabling remedial action). QoS covers many characteristics such as urgency, importance, reliability, persistence and liveliness. DDS provides a self-forming, scalable and distributed middleware, which gives the applications a global shared data space, and when you add characteristics such as deterministic performance, low latency / high throughput and high fault tolerance, it seems ideal for mission critical IoT and distributed control systems. Also, due to the dynamic and loosely coupled nature of these systems, DDS significantly reduces maintenance cost, since individual systems may be modified, added or upgraded without impact on the existing system. Relational data modelling: DDS addresses data in a manner similar to relational databases. It can manage data both by structuring related topics (by key-fields) and by allowing ad-hoc queries and filters on content and time, so applications can extract specific data as needed. Pub-sub messaging: DDS uses the publish/subscribe paradigm for dynamic discovery and primary management of data-flows between relevant DDS entities, including publishers, subscribers, durability services, recording and replay-services, and connected databases. Request-reply and other patterns are built on this powerful substrate. Reliable multicast: The DDS standard wire protocol implements reliable multicast over plain UDP sockets, allowing systems to efficiently benefit from modern networking infrastructures. Life cycle awareness: Unlike message-centric products, DDS offers explicit application support for information life cycle awareness. For instance, it detects, communicates, and informs applications about first and last appearances of data (topic instance) updates.
This facilitates timely responses to new and outdated information. For large control systems with 10,000+ I/O points (sensors and the like), data exchange needs to be smart, reliable and efficient. DDS has been tested for this purpose in several mission critical systems within industries and domains such as power, medical, aviation and space, and the US Navy has used this standard for more than 10 years. DDS can easily merge today’s trends with yesterday’s standards. Interfaces, tools and libraries can easily convert data to and from DDS and other fieldbus types, for instance Modbus, OPC (DA, UA), etc. Using it from your application code is easy, and done via a standardised API. Thinking in a data centric way, one starts off by defining a set of Topics (holding data types, structures, etc.) that you want to have on your data bus; then you create a Participant (to listen to data within a domain / your separate data space), which in turn holds DataReaders and/or DataWriters to read or write those Topics.
Rune Volden, R&D Manager, Ulstein Power & Control AS: Ulstein’s experience with DDS started in 2013. On May 14 that year, I got an email from a colleague regarding an alternative middleware. By June 18th we had requested pricing on OpenSplice/RTI Connext. We then used PrismTech’s OpenSplice DDS for the first months, for the graphical user interface (GUI). On November 5th 2013 we purchased RTI Connext licenses. My colleague then worked with RTI Connext DDS to implement the communication between the GUI and the control system throughout 2014. During 2014 we developed our IAS based on a 3rd party control system middleware (CDP), and only used DDS for communication towards the GUI. Earlier we used a Modbus communication based on JSON, but this approach required much development work, not to mention testing, to get a good result. In 2014 our IAS project met great challenges regarding system scaling and handling the number of signals required by our customers. After repeated attempts with our former middleware, this approach was eventually cancelled. We made a thorough technical investigation from November 2014 to January 2015 as to how to build our future control system. The conclusion was to develop our new control system in-house, in cooperation with Data Respons AS, using DDS as the fundamental building block in the communication layer. This work started in February 2015. The IAS project was divided into teams working with documentation, Graphical User Interface design, Graphical User Interface implementation, the Graphical editor, the Control system kernel, the Control system application, the IO controller application and the Configurator tool. Our automated systems experts started the documentation work in 2014, and all is done according to the guidelines and structure of DNV GL’s ISDS standard. In cooperation with Eggs Design we used the work from an earlier Ulstein Bridge Vision project as a starting point for the realisation of a graphical user interface, starting in January 2015. In March the implementation of the Graphical User Interface started in cooperation with The Qt Company. They also started developing a Graphical editor for us, which makes it possible to get the complete control system, including all graphics, into one readable configuration. Alert Lab image (Ulstein Power & Control AS). The control system kernel was made from scratch, with the ability to create and configure all internal components from XML as a requirement.
This was done in close cooperation with Data Respons, who contributed greatly to a strong and well tested kernel. The kernel then offers a communication layer towards the fieldbus layer (IO Controller) and the graphical user interface. The communication mostly utilises DDS (Data Distribution Services). We have evaluated several versions of DDS, but currently we use RTI Connext 5.2 in our systems. This applies to the control system, the IO controller (Fieldbus ++) and the graphical user interface. DDS acts as the glue between all the different applications on various controllers, PCs and workstations. The control system application is also made in cooperation with Data Respons. In this case, legacy code is ported to the new system kernel, in addition to adding the new code and functionality needed. To make the delivery of control systems to the end customers as efficient as possible, with the highest possible flexibility, we have developed a configurator tool. This enables the application engineer to set up a configuration of the control system in an easy, safe and understandable way, according to the customer’s requirements. For an automation system on a ship this typically means adding pumps, tanks, valves, pipes, switches, generator sets, propellers, motors etc., where each component has a control/remote control, control logic, mimic and user interface design. In existing SCADA platforms this is very time consuming and comprehensive work. One of our great challenges is that changes must be executed fast and efficiently in the final phase of large projects. Typical “last minute changes” can introduce human errors that everybody wants to avoid. The configurator tool we are developing will minimise this risk in an efficient way. Ballast control system. (Ulstein Power & Control AS) Data Respons has also been involved in the development of the I/O controller application. This application processes all I/O on that I/O controller, which for instance can have serial lines (RS422), CAN, Modbus RTU, analog and digital IO, and sends/receives data via DDS. Via the Configuration tool we can download the configuration of serial buses and CAN to the I/O controller. The configuration can include everything required, down to the node level, on a local fieldbus. With this automation, only limited changes are required via vendor specific tools. The I/O Controller application reads from and writes to the IO via the controller’s I/O API. All maritime approved controllers with a C++ API are basically of interest. Our control system is not vendor specific. Currently we have at least three different suppliers of I/O and I/O controllers. Initially we start delivering systems with I/O controllers from Bachmann, then Phoenix and Wago. Eventually, all I/O Controllers with an environment and an API to create the I/O-to-DDS data transitions can be used. This gives our product flexibility, since often one supplier can’t offer a complete solution, but together they can. Data Respons recommended that we use test driven development, continuous integration, build servers and analysis tools at an early stage. We can now see that this saves us much time in rework and testing during both development and integration phases. With DDS, replacing components with “bots” or “mocks” for system tests is much easier, since they all use DDS to communicate. We have also started using Docker to run and simulate a multi-controller network environment, and can thus quite easily run large scale system and integration tests.
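As a purely conceptual illustration of how such “bots” or “mocks” can stand in for real controllers, the following sketch models the Topic/Participant/DataWriter pattern described earlier with plain in-memory Java types. It is not a vendor DDS API (a real DDS adds QoS, discovery and the RTPS wire protocol); it only shows how a mock publisher can feed a reader exactly the way a real I/O controller would:

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

/* Conceptual in-memory model of the DDS Topic/Participant/Writer/Reader pattern.
   Hypothetical types for illustration only - not a vendor DDS API. */
public final class MockControllerDemo {

    // A Topic pairs a name with a data type, giving raw values context on the bus.
    record Topic<T>(String name, Class<T> type) {}

    // The Participant owns the shared data space and matches writers to readers.
    static final class Participant {
        private final Map<String, List<Consumer<Object>>> readers = new ConcurrentHashMap<>();

        <T> Consumer<T> createDataWriter(Topic<T> topic) {
            return sample -> readers.getOrDefault(topic.name(), List.of())
                                    .forEach(r -> r.accept(sample));
        }

        <T> void createDataReader(Topic<T> topic, Consumer<T> onData) {
            readers.computeIfAbsent(topic.name(), k -> new CopyOnWriteArrayList<>())
                   .add(sample -> onData.accept(topic.type().cast(sample)));
        }
    }

    public static void main(String[] args) {
        Participant participant = new Participant();
        Topic<Double> engineTemp = new Topic<>("EngineTemperature", Double.class);

        // The system under test only sees the topic, not who publishes it.
        participant.createDataReader(engineTemp, t ->
                System.out.println("Alarm check, engine temperature: " + t + " C"));

        // A mock "bot" stands in for the real I/O controller during system test.
        Consumer<Double> mockIoController = participant.createDataWriter(engineTemp);
        mockIoController.accept(87.5);
    }
}

Because applications only depend on topics, swapping the mock for the real controller is a deployment decision rather than a code change, which is what makes the automated test setups described next practical.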
Due to the number of applications and controllers making up the system, continuous test and integration would have been a lot of work if automated, thorough unit and system tests had not been applied continuously. Ulstein traditionally makes control systems for large ships, but by using DDS and having the flexibility of our new kernel, the ideas and the actual system can easily be adapted to most mission critical and normal control systems. The underlying data model of control systems often consists of transporting digital and analog data, in addition to some business specific types. The framework to do so and the knowledge of how DDS works already exist at Ulstein. We think we are in a position to offer control system expertise and software solutions to customers outside of our traditional domain as well.
di|Rune Volden, R&D Manager Ulstein Power & Control | Preben Myrvoll, Principal Development Engineer, Data Respons
h1|Distributed monitoring & control using DDS Ulstein’s experience with DDS
h2|Ulstein has developed control systems for the maritime sector for decades, and is continuously seeking to improve its solutions and products to solve the demanding challenges its customers face. Recently, their search for an improved control system platform led them towards the Data Distribution Service (DDS). Find out how their use of DDS simplifies system architecture, development and testability. Technical DDS Some underlying technical concepts In use Highly adaptable for various markets
h3|Example The DDS glue Graphical User Interface Control system Configurator tool / Configure to order I/O controller Testing
em|I want to send and have someone receive the price of my delicious homemade strawberry ice cream. A «double» can hold the price of such an item, but it would make no sense if I just sent that double on the wire, without any contextual information. So a Topic is created (I call it «IceCreamPrice»), using «double» as the data type, and thus enabling me to send, and someone to receive, this price. Then I create Data-Writers and -Readers (with some QoS settings) to send and receive that Topic. A simple setup, and now I’m ready to open my ice cream store.
pa|Texas Instruments OMAPL137-HT processor in a downhole modem rated for 175°C. Most high temperature processors are based on old architectures. Often, an old architecture is taken as a starting point and improved in regards to temperature. The documentation is then updated to some extent, but it can vary quite a bit in quality depending on the manufacturer. High temperature processors are not necessarily supported in the manufacturer’s tool set. For instance, Texas Instruments (TI) does not recommend their own Code Composer Studio IDE when developing for their SM470 high temperature microcontroller. Instead, TI recommends Embedded Workbench from IAR Systems. In the last few years some companies have been moving towards current microcontroller architectures, specifically ARM Cortex-M. Vorago Technologies has developed a device with a Cortex-M0 core.
Another company, e2v, provides a device with a Cortex-M4 core. In general, with higher temperatures device cost goes up and the level of functionality goes down. In particular, 150°C devices can be very affordable, but when your design requirement passes 150°C you’ll see a significant price jump. Don’t be surprised if your production cost multiplies by a factor of 5 to 10. In the 150°C temperature range, there are several processor alternatives. As mentioned, e2v is focusing on the ARM Cortex-M4 architecture. Currently e2v has a device rated at 150°C for 1000 hours based on a Freescale microcontroller. The device has both Analog-to-Digital and Digital-to-Analog converters. e2v is hoping to secure development funding to extend the temperature rating for this device to 175°C. The Texas Instruments (TI) lineup includes three devices in this temperature range, all with plastic packaging: SM470R1B1M-HT, SM320F28335-HT and MSP430F2619S-HT. The first two are also available in a ceramic package for higher temperatures. The first device, SM470R1B1M-HT, is a microcontroller with an ARM7TDMI core (32-bit). This device has an Analog-to-Digital converter, but no Digital-to-Analog converter. Recommended development tools are from IAR Systems: Embedded Workbench and the I-jet debug probe. The second device, SM320F28335-HT, is a 32-bit DSP with an Analog-to-Digital converter. Recommended development tools are TI Code Composer Studio v6 (based on Eclipse) and a TI XDS100/200/560 debug probe. The third and last device is MSP430F2619S-HT, a 16-bit microcontroller architecture with Analog-to-Digital and Digital-to-Analog converters. Suggested development tools are Code Composer Studio v6 and the TI MSP-FET debug probe. Atmel has several devices based on their 8-bit AVR architecture for applications up to 150°C. These devices also have Analog-to-Digital and Digital-to-Analog converters. Suggested development tools are the Atmel Studio IDE (based on Microsoft Visual Studio) and Atmel-ICE or AVR ONE! debug probes. Microchip provides devices for this temperature based on 8- and 16-bit architectures. Their 8-bit architecture devices for 150°C can be found in the PIC12, PIC16 and PIC18 families, and their 16-bit architecture devices for 150°C are part of the PIC24 and dsPIC33 families. Recommended development tools are the MPLAB X IDE (based on the NetBeans IDE), the XC8 (8-bit) or XC16 (16-bit) compiler, and an MPLAB ICD 3 or REAL ICE debug probe. Texas Instruments provides one device for this temperature, the OMAPL137-HT. This is a dual-core device with one ARM core and one DSP core, and one of very few devices suitable for embedded Linux. Recommended development tools are the TI Code Composer Studio v6 IDE and a TI XDS100/200/560 debug probe. This device doesn’t have onboard flash for the application image. If you decide to boot the DSP from an external flash, it’s important to note that TI’s own high temperature flash is not compatible. We’ve used TT Semiconductor TTZ2564 flash for booting the application image with the OMAPL137 with no issues. A quick tip: if you have issues booting, check the BOOTCFG register during debug, which contains the state of the boot configuration pins during booting. This can be an issue, as the boot pins are often used for other functions, such as SPI, after booting. For this range, Texas Instruments offers 3 devices with ceramic packaging.
A common use for these is in equipment for downhole oil well services. They are obviously not suited for permanent downhole installation at this temperature. Two of these devices, SM470R1B1M-HT and SM320F28335-HT, are also available in plastic packaging rated for 150°C and are described earlier in this article. There are a few development kits that use the SM470R1B1M-HT with ceramic packaging. The one typically used is the SM470 development kit manufactured by IAR, available for 470 euros. No temperature testing can be performed with this kit, as the microcontroller is the only part rated for temperature. This kit also includes an IAR J-Link Lite debug probe. Texas Instruments does not provide an affordable development kit for this part or the equivalent non-high temperature part. The only offer from TI is the HEATEVM development kit at 5749 USD. This kit can be used for evaluating the SM470 (in ceramic packaging) microcontroller, but also several other TI high temperature parts such as operational amplifiers, ADCs and transceivers. The kit can be used for high temperature testing. The TI device not discussed previously is the SM320F2812-HT. This is a 32-bit DSP with an Analog-to-Digital converter. Recommended development tools for this device are TI Code Composer Studio v6 and a TI XDS100/200/560 debug probe. Vorago Technologies, formerly Silicon Space Technologies, has developed a microcontroller with an ARM Cortex-M0 core. The device, PA32KASA, is rated to 200°C for 1000 hours, but it has survived testing at 250°C for 2500 hours. Note that this device is quite large physically compared to other microcontrollers. Processors operating beyond 200°C are not in frequent use, but there are some options. One is Honeywell’s HT83C51 8-bit microcontroller. It is designed for 225°C operation. However, Honeywell claims «… parts will operate up to 300°C for a year, with derated performance». Tekmos also provides alternatives, the TK8xH51x family. It is designed for 250°C and this is also an 8-bit architecture. Frequently, high temperature parts are specified for 1000 hours at a certain temperature. If this lifetime is not sufficient for your application, you can select a device rated at a higher temperature, as it will survive longer at a lower temperature. TI’s derating data for the SM470R1B1M-HT in ceramic packaging shows how lifetime depends on temperature: at 220°C, the device lifetime is 1 000 hours (1 month, 10 days), at 175°C lifetime is 4 000 hours (5.5 months), at 150°C lifetime is 12 000 hours (1 year, 4 months) and at 125°C lifetime is 40 000 hours (4.5 years). In conclusion, during specification gathering pay particular attention to the temperature requirement of your application. If the temperature requirement can be reduced, you’ll benefit from significant production cost savings. Development efforts are also significantly affected by the temperature requirement, and this mainly affects the hardware development. The driving factors are time consuming temperature testing and documentation that is very limited in specifying performance at maximum temperature. If this is your first high temperature design, keep in mind that the tool support and documentation might not be at the same level as for non-high temperature processors.
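As a footnote to the lifetime figures above: the quoted derating numbers follow an Arrhenius-type curve quite closely. The sketch below is my own approximation for illustration, not manufacturer data; it fits L(T) = A·exp(B/T) to the two end points and reproduces the mid-range values reasonably well:

/* Rough Arrhenius-style fit to the SM470R1B1M-HT lifetime figures quoted above.
   An approximation for illustration only - not manufacturer data. */
public final class LifetimeDerating {
    public static void main(String[] args) {
        // Quoted end points: 1 000 h at 220 C and 40 000 h at 125 C.
        double t1 = 220 + 273.15, life1 = 1_000;
        double t2 = 125 + 273.15, life2 = 40_000;

        // Fit L(T) = A * exp(B / T); two points determine B and then A.
        double b = Math.log(life2 / life1) / (1 / t2 - 1 / t1);
        double a = life1 / Math.exp(b / t1);

        for (double celsius : new double[] {220, 175, 150, 125}) {
            double kelvin = celsius + 273.15;
            System.out.printf("%5.0f C -> ~%6.0f h%n", celsius, a * Math.exp(b / kelvin));
        }
        // Prints roughly 1 000 / 4 700 / 12 900 / 40 000 h, close to the quoted
        // 1 000 / 4 000 / 12 000 / 40 000 h figures.
    }
}

For a real design, always use the manufacturer's own derating curve; the point here is only that a modest reduction in operating temperature buys a large increase in lifetime.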
As always, discuss your application with other engineers with relevant experience.
di|Henning Sjurseth, Senior Development Engineer, Data Respons
st|BY:
h1|Processors for high temperature applications
h2|In this article we will go through the most commonly used microcontrollers and Digital Signal Processors (DSP) for high temperature applications from 150°C and upwards. This article will not discuss all processor alternatives, but focus on the most common and relevant options. Also, this article does not cover processors available in die form, often referred to as «known good die» (KGD), as this product group is usually avoided because it complicates product manufacturing considerably.
h3|Old architectures 150°C 175°C 200–220°C Alternatives at 225°C and beyond Extending device lifetime Conclusion
pa|IT Sonix is working on the software development which will allow any energy producer of a certain size to sell their energy to the international market. As energy technology becomes cheaper, it becomes easier for anyone to install a wind, water, biogas or solar based system that creates electricity for a household, a farm, an office building and so on. Hence, developing a platform where excess energy can be bought and sold is an important step towards a more sustainable world. And it’s an important piece in enabling the smart grid concept, which makes much better use of the energy infrastructure. This project is especially environmentally friendly as only sustainable energy sources are accepted on the platform. IT Sonix is situated in Leipzig with 150 employees. The company is a leading niche provider of specialist services and SW technology (Java, C#, Embedded, Cloud and AI) specifically aimed at “Connected Car” and energy solutions, internet of things, mobile services and embedded applications. They have been active in telematics, communication, energy and project management for more than 15 years, specialising in agile software development for client-server, mobile applications and on-board units. The company is deeply involved in the ongoing digital transition for some of the leading automotive brands in Germany, one of the world’s most dynamic and R&D intensive industries. IT Sonix has been part of the Data Respons Group since 2018. Data Respons is a pure-play digital leader with in-depth expertise in software development, R&D services, advanced embedded systems and IoT solutions. The number of blue-chip customers is increasing, and Data Respons expects this trend to continue going forward. The trends of increased automation, digitalisation and ‘everything connected’ (IoT) fit well with both Data Respons’ business units and competence map. We develop everything from the sensor level to the mobile app, making the company an ideal partner for its customers in their digital transition. The company has a highly diversified customer portfolio in industries such as the Mobility sector, Telecom & Media, MedTech, Security, Space & Defence, Energy & Maritime, Finance & Public and Industrial Automation. Data Respons is headquartered in Oslo (Norway) and has a strong portfolio of clients in the Nordic region and in Germany, supported by 1,400 software & digital specialists.
Data Respons has achieved a 17% annual growth over the last 20 years.
li|This is a perfect example of how software is conquering the world and at the same time making it a better place. This project enables more efficient use of sustainable energy production, which is an important step to limit global warming, says Kenneth Ragnvaldsen, CEO in Data Respons. This is a project that lets 15 of our specialists use every tool in our toolbox to deliver a state-of-the-art platform with integrated security and a great user experience, says Andreas Lassmann, managing director in IT Sonix. Furthermore, it’s a project that serves a higher purpose, which is to connect all small sustainable energy producers to the international energy market, concludes Lassmann.
st|About IT Sonix About Data Respons
h1|IT Sonix, a Data Respons company, wins contract to develop platform for renewable energy trading
h2|IT Sonix has been awarded a contract worth 2,5 million euros for developing the software for an international online platform that enables B2B energy trading.
pa|eWave is an energy display whose goal is to raise awareness of the amount of energy spent in a household, given in kWh, currency and CO2 emissions. The original eWave display was a passive display that only displayed the consumption of the main meter, but the latest version of the device can also actively control user-selected devices in a home to reduce unnecessary consumption. eWave is a product of the Sandefjord based company eWave Solutions, previously known as Miljøvakt. eWave started its life as an idea by entrepreneur and founder Gunnar Skalberg. Norway is currently among the top ten in the world when it comes to energy consumption per capita. The main reason for this is the cold weather and the relatively cheap power prices. The main source of the power consumption in Norway is arguably heating of houses and water, and lighting. Households have been reported to be responsible for almost half of the CO2 emissions in the world, and eWave Solutions’ goal is to reduce this significantly. The eWave tablet started out as a touch display, targeted at a scientific test project. The decision to go against the mainstream app-based energy displays combined with a gateway was a conscious choice. Previous studies conducted in the UK concluded that a physical device greatly increases awareness compared with a more abstract application on a smartphone. Since the main instrument for energy reduction in the original eWave was increased awareness of the issue, a concrete and visible tablet-based display would prove important. However, the eWave tablet has gone through several iterations, evolving from an awareness-based display to an active power saving home control device. The latest version of the eWave tablet is currently capable of controlling switches wirelessly, and reading temperatures from wireless thermometers. It is also possible to use the eWave display to keep track of power consumption on several circuits, which makes the device perfect for tracking how much energy one single oven is using over time, with associated costs in both currency and CO2 emissions.
eWave can also display the current consumption of any circuit with a high refresh rate. This feature gives the user instant feedback when they turn an electric device on or off. The eWave tablet includes many features targeted at reducing energy consumption, including regular savings tips and an overview of current and historical power prices from the energy providers. The tablet can also keep track of the household consumption. A savings account application is available in the tablet, which lets the user set up a saving goal per day and keeps track of how much money the user saves over time. The user also sets up a yearly saving goal, which the tablet uses for feedback and status reports every day. Furthermore, eWave Solutions and Data Respons are currently working on extending the eWave functionality further into the world of home automation, with more smart control of household power consumption. In early 2014, eWave took part in a research project in Hvaler organised by Smart Energi Hvaler. The goal of the project was to test out new energy reducing technology and see the effects it had on the consumers. eWave proved to be one of the most influential devices participating in the project, resulting in a general consumption reduction of up to 20%. Some users were also able to use the eWave tablet to find electrical faults that increased energy consumption in their homes. The tablet used for the eWave project is a custom Android based tablet running on a dual core ARM Cortex-A9 CPU. The eWave application was developed using Qt for Android, which was in its early stages when the project started. Qt was chosen to ensure platform interoperability if another OS is chosen at a later stage. Using Qt with minimal Android support was somewhat of a gamble, but as the support grew better, the gamble paid off significantly. Qt now supports Linux, Android, iOS, OS X, Windows, WinRT, BlackBerry, Sailfish OS and more, which makes the eWave application highly portable. For wireless applications, the eWave devices support both Z-Wave and ZigBee. The Z-Wave API is currently only targeted at energy readings. For ZigBee, the tablet supports clusters for switches, thermometers, energy readings and a few others, but more clusters will be supported when needed. The tablet sets itself up as a ZigBee network coordinator, and automatically binds to all previously known devices that are nearby. This simplifies the user experience and makes the eWave easier to use. All energy consumption data is stored locally on the tablet, but also synchronised to a server maintained by eWave Solutions. This way, all historical data is safely stored and available if a device needs to be replaced. Having the consumption data stored on a remote server also enables large scale observation of consumer energy consumption habits. This data can be used as an indication for future improvements in the power infrastructure, which can be a major asset for the power company. The consumption data can also be used for research and commercial purposes. Data Respons has been responsible for the development of the application since December 2012, and is currently working closely with eWave Solutions’ CTO on the home automation extensions for the product. The eWave project has been an important project for the R&D department in Asker over the past two years, with innovative development and exciting new technology.
With the increasing focus on reduced energy consumption and CO2 emissions, the eWave product projects a brighter future that R&D Services is proud to be a part of.
di|Andre Firing, Data Respons Alumni
st|BY:
h1|Device specific power consumption control
h2|The eWave tablet has gone through several iterations, evolving from an awareness-based display to an active power saving home control device.
h3|From awareness to controlled power saving System and functions Hvaler project Software and hardware Contributions
pa|DATA RESPONS AS – KENNETH RAGNVALDSEN, Chief Executive Officer (CEO): Ragnvaldsen trained as a business economist and has an MBA from BI Norwegian School of Management. He worked in finance, sales and marketing for three years before joining Data Respons in 1995. He was previously the Sales & Marketing Director of Data Respons ASA and was appointed CEO in 2003.
DATA RESPONS AS – RUNE WAHL, Chief Financial Officer (CFO): Wahl holds a Siviløkonom degree (four-year program in economics and business administration) from BI Norwegian School of Management and has an MBA degree from INSEAD in France. He joined Data Respons as CFO in 2005. Previously, he was CFO in Tandberg Storage ASA and has had various management positions within finance in Atea, Orkla and DNV GL.
DATA RESPONS AS – SEBASTIAN EIDEM, Chief Communications Officer (CCO): Eidem holds a bachelor in Political Science from the Norwegian University of Science and Technology (NTNU), and a Master in Political Economy from the Norwegian Business School (BI). Eidem headed a political office working for student rights and welfare (2007-2008), worked with intelligence in the Norwegian armed forces (2008-2010), worked as an Account Director in the PR agency Gambit H+K (2012-2018), as Head of Communication, PR and Media in Get and as Communication Manager in Telia Norway (2018-2019), before joining Data Respons in 2019.
DATA RESPONS SOLUTIONS – JØRN E. TOPPE, Managing Director: Toppe has an MSc in cybernetics from the Norwegian University of Science and Technology (NTNU). He worked with seismic exploration in various positions at GECO and was one of the founders of Data Respons back in 1986. Toppe was responsible for R&D Services before being appointed Managing Director of Data Respons Norway in 2002. He is now Managing Director for the Solutions business unit in Data Respons.
DATA RESPONS R&D SERVICES – IVAR A. MELHUUS SEHM, Managing Director: Sehm holds an MSc in Electronic & Electrical Engineering from Heriot-Watt University in Scotland and a BSc in Electronic & Electrical Engineering from Oslo University College (HiO). Sehm has an extensive background from the embedded industry and has previously worked at the Norwegian Army Material Command, Sysdeco AS, and Geoteam Exploration.
SYLOG – JOHAN JACOBSSON, Managing Director: Jacobsson is one of the founders of Sylog and Profinder. He has a background in consulting, sales and business administration. He was previously Sales and Consultant Manager of Sylog and was appointed Managing Director in 2009.
TECHPEOPLE – GILAD MIZRAHI, Managing Director: Mizrahi is a trained hardware engineer from Tel Aviv University. In 2005 he started as a hardware consultant in TechPeople’s predecessor, Embeddit. Embeddit then merged into Data Respons Solutions Denmark in 2007, where he later took over the managing director’s office in 2012. In 2017 Data Respons bought TechPeople, and in 2021 Gilad Mizrahi took over the position as Managing Director.
MICRODOC – DR. CHRISTIAN KUKA, Managing Director: Kuka graduated from the Carl von Ossietzky University of Oldenburg with a diploma in computer science and holds a doctorate of natural sciences with focus on data stream processing. He worked as a researcher at OFFIS – Institute for Information Technology before joining MicroDoc as a software engineer in 2016. Kuka has been Managing Director at MicroDoc GmbH since 2019.
MICRODOC – FLORIAN ÖHLSCHLEGEL, Managing Director: Öhlschlegel holds an MBE from Steinbeis University Berlin and has a diploma in Business Informatics. Before he joined MicroDoc in 2019, he had several positions as Finance Manager and was Managing Director of a subsidiary of the Käfer Group.
EPOS CAT – DR. HEIDI SAUER, Managing Director: Sauer holds a Doctorate Degree of Law from the Paris-Lodron-University Salzburg. She joined EPOS CAT in 2006 as director of finance and controlling, and was previously a team member of an auditing firm in Ingolstadt. Sauer has been Managing Director at EPOS CAT GmbH since 2014.
IT SONIX & XPURE – DR. ANDREAS LASSMANN, Managing Director: Lassmann has a Master in Computer Science and Economics from the University of Leipzig. His dissertation, titled “Co-Browsing for internet based service and support processes”, was finished in 2008 at the University of Leipzig. Lassmann was one of the founders and CEO of ITCampus from 1999 until it was sold to Software AG in 2009 – employing over 100 software developers. In 2011, he co-founded IT Sonix with his brother. Lassmann is today the Managing Director of IT Sonix and Xpure.
INCONTEXT – MARTIN LAMPINEN, Managing Director: Lampinen holds a BSc in Electronic & Electrical Engineering from the Högskolan Dalarna University in Sweden. He has an extensive background within the automotive industry and has previously worked at the truck manufacturer Scania in Södertälje, Sweden.
DONAT IT – EDIBA HASTOR, Managing Director: Hastor has a degree in mathematics from the University of Regensburg. After studying, Hastor started working in IT as a developer. In 2008 she came to Donat IT GmbH and went through various departments. Hastor has been Managing Director of Donat IT GmbH since 2015.
DATA RESPONS FRANCE – GUILLAUME WOLF, Managing Director: Wolf has a Master in Finance from the Business School in Grenoble. He started as an intern in AKKA Detroit while studying. After finishing school, he worked in AKKA for two years as a member of the Merger & Acquisition team. Prior to the launch of Data Respons France, he was executive assistant & project manager for Mauro Ricci (CEO of AKKA). Wolf has been Managing Director of Data Respons France since 2020.
FROBESE – DR. DIRK FROBESE, Managing Director: Dirk Frobese founded Frobese GmbH, based in Hanover, in 1998 and is still the managing director. He studied electrical engineering at the University of Hanover and received his doctorate in computer science from the University of Hildesheim. He built his skills in various companies, including as a department manager at DVG (now Finanz Informatik). Today, he focuses on project management of large-scale projects and programs.
FROBESE – NICK STÖCKER, Managing Director: Nick Stöcker studied business administration in Hamburg and has worked in IT consulting since the beginning of his career, focusing on projects in the banking sector. His career path finally led him to Frobese GmbH in 2014 after several intermediate stations (including as an executive at Capgemini). As a managing consultant, he was responsible for the software development area and worked as a key account manager for various customers. Today he is responsible for the operational business of the Frobese group.
h1|Corporate Management
pa|The JCP Executive Committee (EC) is the group of members guiding the evolution of Java technology in the Java Community Process (JCP). The EC represents both major stakeholders and a representative cross-section of the Java community. It is composed of 16 JCP members plus a non-voting chair. The chair of the EC is a member of the Process Management Office (PMO). The 16 voting members are selected from JCP members. The EC is responsible for approving the passage of specifications through key points of the JCP and for reconciling discrepancies between specifications and their associated test suites. MicroDoc is a software business serving an international customer base. Since 1991 MicroDoc has grown into a technology-oriented software engineering and professional services company. Our focus on complex software technology and software infrastructure made us a well-respected partner for large corporations and even for other software businesses in their digital transformation. Operating from three offices in Germany (Munich, Berlin, Stuttgart), the company serves leading corporations from a variety of business domains including connectivity, automotive, self-service systems, telecommunication, utilities and financial services. MicroDoc has specialised in solving challenging software problems which require in-depth knowledge of end-to-end technology and business scenarios. Since 2016 MicroDoc has been part of the Data Respons Group. Data Respons is a pure-play digital leader with in-depth expertise in software development, R&D services, advanced embedded systems and IoT solutions. The number of blue-chip customers is increasing, and Data Respons expects this trend to continue going forward. The trends of increased automation, digitalisation and ‘everything connected’ (IoT) fit well with both Data Respons’ business units and competence map. We develop everything from the sensor level to the mobile app, making the company an ideal partner for its customers in their digital transition. The company has a highly diversified customer portfolio in industries such as the Mobility sector, Telecom & Media, MedTech, Security, Space & Defence, Energy & Maritime, Finance & Public and Industrial Automation. Data Respons is headquartered in Oslo (Norway) and has a strong portfolio of clients in the Nordic region and in Germany, supported by 1,400 software & digital specialists. Data Respons has achieved a 17% annual growth over the last 20 years.
li|Java is one of the world’s most popular programming languages. Java is the #1 developer choice for any cloud solution and has a strong presence in the embedded industry. Java is used by 95% of enterprises as their primary language, far more than C and the other languages.
In one year, Java gets downloaded one billion times. Today, Java runs on more than 1 billion devices, as Google’s Android operating system uses Java APIs. Java was created way back in 1995, making it one of the oldest programming languages out there. Java is widely used and has even been the software platform in the onboard computer in NASA’s .
st|Key facts about Java and the JCP Executive Committee: Committee members: About MicroDoc About Data Respons
h1|MicroDoc, a Data Respons company, is re-elected to the Java Executive Committee
h2|MicroDoc, a Data Respons company, has been re-elected to sit on the Executive Committee for the Java Community Process. Together with 18 other companies like Oracle, Alibaba, IBM, Intel, SAP and Twitter, MicroDoc will have a key role in shaping one of the most used software languages in the world.
em|Being part of an exclusive committee alongside the world’s largest tech companies, shaping the software language the entire world uses, is a testament to how far we and MicroDoc have come. And this accomplishment is purely based on dedicated specialists being the best in the business. It’s a perfect example of what Data Respons represents and I am very proud of the standing MicroDoc has built in the software community, says Kenneth Ragnvaldsen, CEO in Data Respons. Being a small operation in the company of giants is a privilege we highly appreciate. And it is also a privilege to participate in the continuous development of Java. Our seat in the Executive Committee allows us to provide our expertise in future software developments, but it also allows us to represent the needs and requirements of Java embedded that is used by our automotive and telecommunication customers, comments Dr. Christian Kuka, managing director in MicroDoc.
pa|A significant share of telematics services, connectivity services, and infotainment systems in the automotive industry is programmed in Java. But while there are many good reasons for Java being the most widely used programming language in the world, it has a few shortcomings as well, startup performance being one of them, memory footprint another. GraalVM remedies these shortcomings. It accelerates startup time by a factor of up to 10, can reduce resource consumption, and can host multiple programming languages and run different software on the same infrastructure. Initially developed by Oracle to be the programming interface of the future for the Oracle database, GraalVM is now being introduced to the embedded world. Especially in the automotive industry GraalVM will make a huge difference, says MicroDoc CEO Dr. Christian Kuka. – It’s well known that the largest part of development costs for a new model goes into software. Essentially, a modern car is a big rolling smartphone, and that is a huge challenge to the auto industry. Why? Because, on one hand you have your customers. They expect to be offered new features as fast as they’re used to from smartphones and other consumer electronics. On the other hand, car manufacturers have strict safety and warranty obligations. That means you want software that’s extremely stable and reliable, and therefore you have to focus very much on certifications, testing etc.
Especially in the automotive industry GraalVM will make a huge difference, says MicroDoc CEO Dr. Christian Kuka.

– It's well known that the largest part of development costs for a new model goes into software. Essentially, a modern car is a big rolling smartphone, and that is a huge challenge to the auto industry. Why? Because, on the one hand, you have your customers. They expect to be offered new features as fast as they're used to from smartphones and other consumer electronics. On the other hand, car manufacturers have strict safety and warranty obligations. That means you want software that's extremely stable and reliable, and therefore you have to focus very much on certifications, testing etc. Thus, you can end up with very long development cycles for new complex applications.

According to Dr. Christian Kuka, GraalVM can help narrow this gap between customer expectation and industry requirements. It allows you to reuse existing components and legacy code already tested and approved. Also, as GraalVM is hardware independent, you can use your existing infrastructure instead of having to introduce a new one.

In addition to that, GraalVM fits the automotive life cycle. It's supported by one of the biggest IT companies on the planet, and as part of the Oracle database it has a life cycle that is appropriate for automotive use cases. Accordingly, MicroDoc offers its customers long-term contracts, so that they can still get GraalVM updates and security fixes during the usual automotive product lifecycle. This means that car manufacturers using GraalVM will be able to quickly integrate new features into their platform, while at the same time guaranteeing the availability of those features throughout the car's lifetime. And GraalVM allows manufacturers to use the same infrastructure for new features while also using it for long-running, stable functionality without the need for frequent updates.

As mentioned, GraalVM was initially developed to meet the requirements in the cloud for infrastructure supporting microservices. In the automotive industry you have similar restrictions of resources with regard to memory, CPU power etc. GraalVM addresses these restrictions and allows developers to do much more with the limited resources at hand. Instead of having different languages and different virtual machines running simultaneously and interacting on the same device, GraalVM can run everything. It works for every language, and allows you to get rid of independent components and have everything built on the same infrastructure, and on the same virtual machine.

GraalVM runs applications written in languages like JavaScript, Python, Ruby, and R, and it even supports the execution of C and C++ in a safe, virtualized environment. It runs any language with an LLVM compiler, including Swift and Rust, together with the entire Java universe, including Scala, Kotlin, and Java itself. Moreover, you can mix Java with JavaScript and Python, and you can use existing libraries and frameworks available in those languages in one single programme.
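What this mixing looks like in practice can be shown with a minimal sketch against GraalVM's polyglot API (org.graalvm.polyglot); the JavaScript doubling function is just a stand-in, and the sketch assumes a GraalVM runtime with the JavaScript language installed:

    import org.graalvm.polyglot.Context;
    import org.graalvm.polyglot.Value;

    public class PolyglotSketch {
        public static void main(String[] args) {
            // One GraalVM context hosts guest languages next to Java.
            try (Context context = Context.create()) {
                // Evaluate a JavaScript function and call it from Java.
                Value doubler = context.eval("js", "(n => n * 2)");
                System.out.println(doubler.execute(21).asInt()); // prints 42
            }
        }
    }

The same Context can evaluate e.g. "python" or "ruby" sources if those GraalVM languages are installed, which is what lets libraries from several ecosystems meet in one single programme.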
According to Dr. Christian Kuka, these features allow GraalVM to function as a general-purpose backbone that can host basically everything in a car, with the exception of features with hard real-time requirements.

– GraalVM will make a significant difference with regard to everything that relates to interaction with users, infrastructure, network and cloud services. It allows for faster start-up time, and quicker response to any kind of user input. As an example, in today's telematic applications you have to wait until the application has loaded all resources and is completely up and running before it can operate and, for example, transmit your current position to a backend service. By that time you're already back on the street and the first kilometres are missing in the records. With GraalVM, the application is up and running nearly instantly and able to record your position from the beginning of your trip.

– Due to the fact that GraalVM supports different programming models and languages, it is suited for many different types of applications in a car. That goes in particular for applications relying on connectivity with backends. These backends can be run by the OEMs themselves, for instance for predictive maintenance, or they can be connections to 3rd party applications like insurance apps. For instance, in Italy you can save a lot of money if you install an application that gives you pay-as-you-drive auto insurance. That's big business in Italy, and you can basically cut your insurance costs in half if you have this feature in your car.

Looking into the future, cars will connect to a great number of services, be it advanced navigation services, special points of interest, weather services, radar control warnings and the like. That trend has started already. As an example, the head unit in a state-of-the-art car has up to 50 concurrent web connections open to all kinds of services that are not hosted by the OEM. And that number will increase. Just like a smartphone, a car will connect to any number of services, and GraalVM will be its crucial switchboard.

GraalVM can host not only OEM applications. It offers a standardized programming model for any kind of 3rd party application in a car. This allows 3rd parties to add software and to rely on a proven programming model to do so, be it Java, JavaScript or something else. Just write the code, and with GraalVM it's encapsulated and put in the car. Execution of 3rd party code in a VM also separates it from vital internal functions and enhances the overall system robustness and security.

Apart from future-proofing, GraalVM also allows for updating of existing systems. Its ability to reduce memory footprint and resource consumption makes it possible to add new features to older systems currently in the field, despite their limitations. Furthermore, you can update a car during runtime, which is important when you need to quickly address emerging vulnerabilities by installing software updates while the car is operating.

In addition to this, as Dr. Christian Kuka points out, GraalVM has the advantage of coming in both an open source version and within a commercial licensing model.

– If you're a developer it gives you the freedom to try out the technology and get familiar with it without up-front investment. Afterwards, when you're ready to integrate it into a vehicle's system you can choose the security of a commercial model. And while the open source world is full of IP traps, a license shields you from e.g. patent trolls on the Virgin Islands, who make a living suing companies that use open source software. MicroDoc has a commercial model that gives you all the necessary IP rights, and it's done under EU legislation, which is very different from buying software from the US.

So, to sum it all up: while our cars are quickly converting into giant smartphones on wheels, GraalVM will be their new virtual engine, taking care of the increasing complexity while at the same time narrowing the gap between demanding customers and the auto industry's own demand for stable and reliable systems.

di|BY: Arne Vollertsen for Data Respons
st|Solving a challenge, Long lifecycle, Addressing restrictions, General-purpose backbone, The connected car, License and open source
h1|New virtual machine for the cars of tomorrow
h2|Cars are quickly converting into cyber centres on wheels, and buyers expect new features to be introduced just as fast as in their smartphones and consumer electronics. That puts tremendous pressure on car manufacturers.
To relieve some of the pressure MicroDoc is now introducing GraalVM embedded, a virtual machine allowing for faster development cycles while retaining the stability and longevity required by the auto industry.

pa|Just like passenger jets and other manned aircraft, drones run into trouble when ice builds up on wings and propellers. Manned or unmanned, they need to be aware of what frozen water can do: ice accumulated during flight increases an aircraft's weight, reduces lift and maneuverability, and can ultimately cause it to stall and crash.

When it comes to fixed-wing drones, power will do the trick. Similar to the basic concept of an oven, you can send an electric current through a resistive material – in this case ultra-thin sheets – to heat up the wings and fix the problem, according to Kasper Borup and Kim Sørensen, founders of the Trondheim-based startup UBIQ Aerospace. Building on their PhD studies and research at the Center for Autonomous Marine Operations at the Norwegian University of Science and Technology, the UBIQ team is now turning their research into a commercial product. Named D•ICE, it is the world's first autonomous drone de-icing system. It is designed for medium-sized fixed-wing drones with a wingspan of 3 to 5 metres but will also be applicable to large unmanned aircraft, the most valuable of which can cost up to 500 million dollars.

D•ICE is a completely autonomous system, requiring no outside operator to manage. To detect icing hazards, a sensor package monitors atmospheric conditions. The data is analysed by a set of algorithms, which also monitor the behaviour of the drone to detect any changes due to icing. A control unit then channels the appropriate amount of power from the aircraft's battery to the thermoelectric panels mounted on the wings and tail of the drone.

As Kim Sørensen and Kasper Borup point out, when developing such a complex product there is a long way from prototype to finished product ready for volume production. Not least when the technologies involved come straight out of the research laboratory, and the system is required to function in a harsh environment with strict requirements regarding stability and safety.

– We've worked on this for 7 years, starting out in research and then beginning to commercialize the technology in 2017. Primarily our competences lie in software and development of autonomous systems. We are a small team, and we can't do everything ourselves, so when we needed to improve some of the hardware we decided to look for a partner. We did a thorough survey to find the right company to collaborate with, and Data Respons just stood out. They were extremely responsive and dedicated, and after meeting with them we just felt relieved. We had found the right people for the job, and we're going to work a lot more with them in the future.

UBIQ wanted to tap into Data Respons' broad experience in preparing prototypes for large-scale production as well as designing hardware for harsh environments, such as aviation, subsea and military applications. On top of that, they had a tight deadline, with only a few months to get a new version of the de-icing system ready for a number of important – and expensive – wind tunnel and flight tests.
– We asked them to design a new version of the control unit that processes the sensor data and controls the flow of energy from the drone's battery to the thermoelectric panels. They managed to significantly improve the controller. Now it is much smaller than the previous one, less error-prone, more sleek and functional, and designed to meet the industry standard in this domain.

– Just to mention one thing, now we've got much better control of the powerful current that goes to the panels. That may not sound super sexy, but when you're sending high current through a small aircraft that can cost millions you need to be able to control it precisely, to avoid the risk of melting panels or a burning battery.

Furthermore, the Data Respons team was able to meet the tight deadline of the project. Starting in the beginning of June, it had to be completed by early September. The team was able to speed up the project by collaborating with a Data Respons sub-supplier in Shanghai that has worked with the company for more than 10 years.

– We are impressed by what the team has done. For us it is really comforting to have people with that level of expertise contribute to the project. Now there is one thing less for us to worry about, and that allows us to concentrate on what we are good at: developing autonomous systems.

– And we haven't finished partnering with Data Respons. We are very satisfied with the collaboration and with the support we got. They have experience in developing robust hardware solutions that meet the tough requirements in our domain, and they know how to bring prototypes up to industry standard and prepare them for batch production. We'll definitely make use of that expertise moving forward.

di|BY: Arne Vollertsen for Data Respons
h1|Controlling the power needed to de-ice drones
h2|In its effort to bring the first drone de-icing system to market, Trondheim-based startup UBIQ Aerospace reached out to Data Respons R&D Services for hardware expertise to control the energy needed for setting drone wings on "defrost": a challenging assignment with a tight deadline.
st|From research to business, Harsh environment, Hardware expertise, Tight deadline
em|Animation of the current going from the drone's battery to de-icing panels in the wings.
em|The control unit for the UBIQ de-icing system has been designed by Lyder (left side) and Ole (right side), drawing on Data Respons' vast experience in developing hardware solutions for challenging environments such as subsea and defence. Furthermore, on top of being a highly experienced hardware engineer, Ole is also a drone enthusiast. He designs his own drones and is very well informed about hardware and software controlling drones, battery usage, motors etc. Among other things he uses on-board cameras and VR goggles to view the world from the drone's perspective.

pa|Whenever it is possible and reasonable, users prefer web applications to desktop applications because of their easy accessibility in web browsers on desktop or mobile devices without the installation of any additional software. Furthermore, the amount of data in companies has increased over the years, and database systems have at all times been the preferred way to store and handle data efficiently.
The database manufacturer Oracle [1, 2] is well known for its relational database system "Oracle Database", which provides many efficient features to read and write large amounts of data. To cope with the growing demand for developing web applications very fast, Oracle has created the online development environment "Oracle APEX" [3], which comes as a no-cost plugin for Oracle Database and is already included in "Oracle XE". "APEX" stands for "APPLICATION EXPRESS", and that is precisely what it is.

Oracle APEX is fully supported and available for Oracle Database. After its installation, Oracle APEX provides a powerful development environment which is accessible online via web browsers. Oracle APEX is independent of the operating system underneath; just a web browser is required. Because Oracle APEX is installed on Oracle Database, the corresponding database can be accessed directly through the online development environment shown in figure 1. Every part of an Oracle Database, e.g. tables, views, triggers etc., can be accessed in this way by using the SQL Workshop. Thus no persistence layer is needed to exchange data between the developed application and the database. However, this also means that Oracle Database is required in order to use Oracle APEX. In view of the operating costs of Oracle Database, which can be very high depending on the preferred license model, Oracle APEX is more suitable for companies that already operate Oracle Database.

Oracle Database is a powerful relational database system which can handle huge amounts of data and supports tables, views, sequences, triggers etc. In addition to those database items, Oracle Database provides packages and functions that enable developers to extend their databases by writing source code in PL/SQL. PL/SQL was invented by Oracle and extends the ordinary SQL functionality with features known from other programming languages. "PL/SQL" stands for "Procedural Language/Structured Query Language" and makes it feasible to use, for example, variables, arrays, if-queries and loops directly in an Oracle Database. Even object orientation can be applied. As mentioned before, no persistence layer is needed to gain direct access via the online development environment to the database underneath. In the same way, PL/SQL does not require such a persistence layer to gain access to tables or views. In packages and functions, PL/SQL can be mixed with ordinary SQL. This enables a very easy, fast and lightweight way to write powerful PL/SQL scripts. It is feasible to use prepared statements as well. Except for PL/SQL, which extends ordinary SQL with powerful features known from programming languages, there is no additional programming or scripting language to learn.

Oracle APEX is an online development environment which supports developers in creating web applications. Thus all the established technologies in web development, e.g. HTML, CSS, JavaScript or jQuery, can be used to design web applications and implement the required behaviour. Furthermore, other technologies such as Java or Jasper Reports can be applied as well. In addition to the off-the-shelf set of items like buttons, text fields or select lists, Oracle APEX can be extended with other powerful third-party items in the form of plugins. One of the most common plugin items is the "Select2 APEX plugin" [4], which is based on "Select2" [5] and improves the functionality as well as the user-friendliness of ordinary Oracle APEX select lists.
Furthermore, it is also very simple to create one's own APEX plugins.

Besides the advantages mentioned regarding software development itself, Oracle APEX supports developers at an earlier stage as well. As a default layout theme for the graphical user interface (hereafter GUI), Oracle APEX comes with the Universal Theme, and with the Theme Roller as an easy-to-use tool to adapt the Universal Theme individually. There is no need to spend time on the GUI at the very beginning; the developer can start directly with implementing the business logic. This is the reason why Oracle APEX is well suited to creating rapid GUI prototypes without logic. Thus prospective customers can get an idea of how their future application will look.

One of the most efficient features of Oracle APEX is tabular reports of data, which are used in so-called Master-Detail pages. Master-Detail pages consist of a tabular master page and an item-based detail page. The master page provides an overview of the corresponding data in the form of data sets, while the detail page provides the possibility to create and edit a single data set. To create such a tabular report, an ordinary SQL SELECT statement is sufficient. According to the selected columns, Oracle APEX generates a tabular report including the same columns. The appearance of this report is based on the current theme used for the application in question. Figure 2 shows an SQL SELECT statement as well as the corresponding tabular report; a minimal sketch of such a statement follows below.
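Figure 2 itself is not reproduced in this text, but the idea is easy to illustrate; the EMPLOYEES table and its columns in this sketch are hypothetical stand-ins, not taken from the article:

    -- APEX renders one report column per selected column,
    -- styled by the application's current theme.
    SELECT employee_id,
           first_name,
           last_name,
           hire_date
      FROM employees
     ORDER BY last_name

Pointing a Master-Detail page at a query like this yields the tabular master report, while the detail page is generated from the same columns for creating and editing a single data set.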
The world is getting more and more connected, and the demand for multilingual applications is growing rapidly as well. Oracle has taken this fact into consideration and has equipped Oracle APEX with powerful translation utilities to translate whole applications. The language in which the APEX application is initially developed serves as the default language. All the labels, buttons, region titles, main and sub menu entries can be translated with no additional effort. There is no need to think about translating the application at the very beginning; each APEX application can be translated rapidly at any time. If an application is to be translated, Oracle APEX will store all translatable texts in a translation repository. This repository can be exported as an XLIFF file. "XLIFF" stands for "XML Localisation Interchange File Format" [7] and is an XML-based format to store and exchange translation data. Once an XLIFF file is created, it can easily be extended with the translated texts and imported into the translation repository. As a final step, the updated translation repository must be published before the translated application can be used. However, the translation utilities of Oracle APEX just accumulate all translatable texts without any duplicate checks. This means that e.g. every OK button of the application appears a number of times within the XLIFF file, so some texts have to be translated more than once. This concession must be made to keep the flexibility of translating an application at any time during the development process.

"APEX has allowed us to migrate several disparate Excel and MS Access applications to a consistent, secure, web based environment. The speed and concurrency offered by APEX have been exceptionally valuable." (Eric Brandenburg, Senior Applications Architect, Brunswick Corporation [6])

Although web applications can be accessed very easily by anyone using a web browser, it is not always intended that everyone can read or edit all data within the application. Different users should get different permissions to access data. Oracle has thus developed a comparably powerful and easy-to-use approach to secure web applications developed with Oracle APEX.

To allow or deny users to read or edit data, the application has to know the identity of the user who requests access. This process is called authentication. Oracle APEX provides the possibility to define and apply different authentication schemes. Dedicated authentication servers can also be used to implement SSO, which stands for "Single Sign-On". An authentication scheme needs to know where to find authentication information about the users and what to do with new users. This can be achieved with PL/SQL. Once a web application contains more than one authentication scheme, it is very easy to switch. Even for the APEX workspace users of the development environment there is a built-in authentication scheme available off the shelf.

As soon as a user is authenticated successfully, the next process, called authorisation, comes into play. Analogous to the authentication schemes mentioned, Oracle APEX provides authorisation schemes to manage what permissions a user has. Such an authorisation scheme needs to know where to find the permissions the users have. This can be achieved with PL/SQL as well. The following example works with a table "USER" containing the user identity, a table "ROLE" containing the existing application roles and a table "USER_ROLE" containing the roles the users have. Figure 3 shows a PL/SQL function, located in the package "APP_SEC", which returns a value indicating whether or not the given user has administrator permissions. In addition, the figure shows the corresponding content of the authorisation scheme which calls the PL/SQL function; a reconstruction of the idea is sketched below.
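Figure 3 is likewise not reproduced, so the following is a hedged reconstruction from the description above; the column names (USERNAME, ROLE_ID, NAME) and the role label 'ADMIN' are assumptions made for illustration:

    -- Hypothetical reconstruction of the APP_SEC package described in the article
    CREATE OR REPLACE PACKAGE app_sec AS
      FUNCTION is_admin(p_user IN VARCHAR2) RETURN BOOLEAN;
    END app_sec;
    /
    CREATE OR REPLACE PACKAGE BODY app_sec AS
      -- Returns TRUE if the given user has been granted the administrator role
      FUNCTION is_admin(p_user IN VARCHAR2) RETURN BOOLEAN IS
        l_count NUMBER;
      BEGIN
        SELECT COUNT(*)
          INTO l_count
          FROM user_role ur
          JOIN role r ON r.role_id = ur.role_id
         WHERE ur.username = p_user
           AND r.name = 'ADMIN';
        RETURN l_count > 0;
      END is_admin;
    END app_sec;
    /

An authorisation scheme of the type "PL/SQL Function Returning Boolean" would then contain little more than RETURN app_sec.is_admin(:APP_USER); where :APP_USER is the APEX built-in holding the name of the authenticated user.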
Almost everything in Oracle APEX (e.g. pages, regions, items, buttons, validations, processes etc.) can be restricted with an authorisation scheme.

In addition to authentication and authorisation, Oracle provides a further functionality called Oracle VPD. VPD stands for "Virtual Private Database" and offers the possibility to implement multi-client capability in APEX web applications. With Oracle VPD and PL/SQL, specific columns of tables can be declared as conditions to separate data between different clients. An active Oracle VPD automatically adds an SQL WHERE clause to an SQL SELECT statement. This WHERE clause contains the declared columns and thus delivers only data sets that match (row-level security).

"For close to 20 years, the Université du Québec à Trois-Rivières (UQTR) has used the Oracle PL/SQL technology to develop most of its internal and public systems on the Web platform (for example, the student portal). Moreover, we have integrated Oracle Application Express (APEX) to our development, and we are completely satisfied with it. Oracle Application Express is a quick, powerful, and mature development tool that allowed us to improve our productivity level." (Georges-Martin Caron, IT and Technology Project Manager, Coordinator of the Information Systems, Université du Québec à Trois-Rivières [6])

Oracle has created a powerful development environment in the form of a no-cost plugin for Oracle Databases called Oracle APEX. It can be used for both rapid development and rapid prototyping. Oracle APEX provides easy-to-use and at the same time powerful support for authentication, authorisation and internationalisation. In addition, with PL/SQL it is feasible to create efficient, multilingual, secure and future-proof web applications, independent of their dimensions. The German IT service provider EPOS CAT GmbH has been working with Oracle Databases and Oracle APEX for more than 10 years. Since 2005, more than 80 web applications for over 170,000 users have been developed. This shows the great success of Oracle APEX.

di|BY: Markus Kaml, Project Manager and Instructor, EPOS CAT GmbH

References
li|[1] Oracle (07.03.2014). About Oracle. Retrieved 13.10.2018 from https://www.oracle.com/corporate/#info
[2] Oracle: Oracle Fact Sheet – The Complete Cloud and Next-Generation Platform For Business. September 2017.
[3] Oracle (03.10.2018). Oracle APEX. Retrieved 13.10.2018 from https://apex.oracle.com/en/
[4] APEX-PLUGIN.COM (12.08.2013). Select2. Retrieved 13.10.2018 from http://www.apex-plugin.com/oracle-apex-plugins/item-plugin/select2_344.html
[5] SELECT2 (2015). Select2. Retrieved 13.10.2018 from https://select2.org/
[6] Oracle (03.11.2015). Oracle Application Express Customer Quotes. Retrieved 13.10.2018 from https://www.oracle.com/technetwork/developer-tools/apex/learnmore/apex-quotes-1863317.html
[7] OASIS (01.02.2008). XLIFF Version 1.2. Retrieved 13.10.2018 from http://docs.oasis-open.org/xliff/v1.2/os/xliff-core.html

h1|Rapid development and beyond with Oracle APEX
h2|Over the years, the requirements of software and its development have changed a lot. On the one hand there is an increasing customer demand for web applications which can be accessed easily via a web browser and can handle huge amounts of data. On the other hand there is growing demand on the part of software developers for faster development including faster prototyping. The well-known database manufacturer Oracle has found a way to combine both demands.
st|Combined development environment and database, Programming directly within the database, No additional programming language, Rapid GUI prototypes, Internationalisation, Authentication and authorisation, Conclusion

di|Data Respons, 25.01.2019
h1|Data Respons results for the 4th quarter 2018
h2|Data Respons will release its financial results for the 4th quarter 2018 on January 31st. Please see below for details about the presentation and webcast.
h3|The presentation will be held in English. If you are unable to attend the presentation, you can follow it live on our webcast or stream it at a later time.
st|When: Thursday 31 January, 08.30. Where: Hotel Continental, Stortingsgaten 24/26, OSLO

pa|No more wifi and cables?
To be fair, a tiny disclaimer might be appropriate before we begin elaborating on how 5G will change the world as we know it: actually, wifi and cables won't disappear entirely. Wifi will still be used for indoor connectivity, where high speed and maximum reliability are less important. And cables will obviously still be there for power supply and fibre backbone, although 5G will replace ethernet cables in many places, even when it comes to real-time safety-critical systems.

That said, the sky is the limit when it comes to imagining how 5G will change our daily lives. 5G will secure instant access to high-speed internet everywhere, all the time, allowing for an explosion of new applications. In short – a shift in paradigm if there ever was one.

– Very interesting times lie ahead of us, says Tore Levin, project manager at Sylog with extensive experience within telecom. – 5G will give us the possibility to develop so many new applications, integrating AI, sensors, ultra-fast data collection and much more. The only limit is our own imagination. You can compare 5G to electricity. Look at all the things we have achieved with electricity that we couldn't even dream of in the beginning.

CEO of Sylog Johan Jacobsson agrees. He began working in the telecom industry in the late 90s, pioneering machine-to-machine communication and connecting refrigerators, drilling machines, and coffee machines to the internet. Since then machine-to-machine communication has been renamed IoT, and moreover, it is finally starting to work. Why? Because now we have the bandwidth, the smartphones, the cloud applications, and not least the customer demand for it.

– Now, with 5G coming, we're on the verge of something really big. I like to put it this way: if 4G is about connecting people, then 5G is about connecting things. Compared to people, cars, ships, factories, healthcare etc. are more demanding with regard to quality of service, bandwidth, and latency. For instance, when you're a heart surgeon doing remote surgery halfway across the globe, you need a 100 per cent stable connection, as close to real-time as possible. 5G can provide that.

Sylog has worked within telecom for more than 20 years, all the way from first-generation GSM to 5G. Johan Jacobsson and his team have contributed on all levels of the telecom industry, developing radio base stations and other equipment for suppliers, working with operators to integrate hardware into their infrastructure, providing systems for provisioning, billing, and roaming and much more. Not least, Sylog has helped several international industry companies connect their machines and devices to the network.

– That versatility is the real edge we have here at Sylog. We've been on all sides: on the telco side, with the manufacturers, and with the industry. We've contributed to the whole value chain, so we have a lot of experience to bring to the table.

Patrik Veräjä is a member of the Sylog in-house team focusing on 4G and 5G. According to him, the speed and the low latency of 5G will make a world of difference. At top speed, 5G will be around 100 times faster than 4G. That means you can develop applications in which ultra-low latency and high bandwidth are essential: think self-driving vehicles, automated harbours, remote surgery etc.

Another unique feature of 5G is the possibility to divide the network into different spectrums. Called Network Slicing, this enables you to guarantee a certain level of connectivity for specific applications, e.g.
dedicated frequencies for the blue-light industries (emergency services), or special hotspots at charging stations for electric cars, with high-speed connectivity for software updates.

Here is another example of what Network Slicing can do: you might want to provide connectivity to a specific area inhabited by 10,000 people. In that same area there is a factory that needs ultra-high bandwidth for automated production lines, self-driving robots, and safety-critical operations performed remotely. You can offer a dedicated slice of the network with ultra-high performance to the factory, limited to the factory area. Meanwhile the people living nearby will be using another slice of the network, with ordinary run-of-the-mill bandwidth sufficient for streaming music and watching Netflix.

Patrik Veräjä elaborates: – There are different parts of the spectrum where 5G can be used. Operators are talking about low, mid and high frequencies. When you have a need for high bandwidth, but only for a few applications and a limited number of users, you could get a certain frequency which offers that precise level of service. If you have many users who only need low bandwidth, you can allocate another part of the network to them. 4G is different; it's more or less "here's your bandwidth, handle with care".

– With the high-speed connectivity and the network slicing within 5G, there is so much potential. There are so many areas in which 5G can be used: AI and gaming, IoT and automotive, you could go on and on. One part of it is to put the infrastructure in place and scale up the 5G networks. The next step is to develop software that enables the different parts of larger systems to connect and interact with one another.

Talking about the paradigm shift caused by 5G, Tore Levin points to the gaming industry for a glimpse of the future. For game developers there is only a small step from what they do now to new 5G-enabled applications utilizing enhanced video, augmented and virtual reality etc.

Taking it all a bit further still, Tore Levin thinks that not far into the future we might reach a whole new level of connectedness. Instead of connecting through computers and smartphones we might connect directly through our bodies. That is a bit scary, Tore admits, but nevertheless he sees it coming.

But before that happens, 5G will already have changed the world. In logistics it can optimize the flow of goods around the world, save energy and reduce waiting times, automate harbours, streamline container traffic, and increase security for harbour workers. In manufacturing it can optimize production lines, enable remote control of time-critical processes, and allow robots to become faster and safer. In healthcare new levels of remote monitoring and treatment can be achieved, not to mention what 5G can do in the entertainment industry: imagine a football game with hundreds of cameras covering every possible point of view. Which ones to choose, that's for you to decide.

st|Interesting times ahead, Connecting things, Speed makes the difference, Slicing the spectrum, Gaming leads the way
h1|Three software specialists on 5G opportunities
h2|Goodbye wifi, goodbye cables. You'll be left standing in the corner. 5G has arrived and it will take your place almost everywhere.
But most importantly, 5G will enable software developers to design new experiences, services, and business opportunities harnessing the high bandwidth, low latency and virtually unlimited access provided by the fifth-generation technology standard for broadband cellular networks.

pa|Tools to manage software development involving various developers are often expensive. As an alternative, the Open Source world offers similar tools, but in the past they have been standalone or difficult to integrate. That has changed, according to Thomas:

– In this guide I've tried to pull these tools together into a well-functioning, fully integrated and free set of tools which provide state-of-the-art support of C++ development on Linux. Many of these tools may also be used in other situations.

Thomas Arnbjerg's guide addresses the well-known challenges in coding:
li|Ensuring that code written by different developers is uniform
Ensuring that many developers may work on the same program at the same time
Ensuring that each developer is able to QA his or her code before it's integrated into the system
Ensuring that the overall system will function after changes are implemented

The guide details a process in which the individual developer creates a branch in the source code to develop a new feature without compatibility issues in relation to the work done by the other developers. The following tools are used to support the process – all installed under Linux:
li|Jenkins as Continuous Integration server with a wide variety of installed plugins
Eclipse IDE for C/C++ development with an extensive range of installed plugins
Git versioning system

Developers will see how, at the start of a new feature, they get their own sandbox for the development. They receive tool support for development with code completion and formatting. On the build server, their code is controlled using static code analysis and unit tests. Code coverage is measured and quality criteria/goals can be set out. Once a feature is completed, 'develop' is merged into the feature branch and it is ensured that everything runs in Jenkins. After that, there is a merge back to 'develop' and the feature is finished. All things considered, the integration task becomes so much simpler and most of it happens in the feature branches; the sketch below spells out the branch flow in plain git commands.
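As a hedged illustration of that branching process (branch and feature names are hypothetical; the guide itself may use different conventions):

    # Start a new feature from an up-to-date 'develop'
    git checkout develop && git pull
    git checkout -b feature/new-sensor-driver

    # ...develop and commit; pushing lets Jenkins build and test the branch
    git push -u origin feature/new-sensor-driver

    # When the feature is done, merge 'develop' INTO the feature branch first,
    # and let Jenkins verify that everything still builds and passes the tests
    git merge develop

    # Only then is the verified feature branch merged back to 'develop'
    git checkout develop
    git merge feature/new-sensor-driver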
Furthermore, Thomas Arnbjerg drills below the surface and looks at the configuration and use in a Continuous Integration (CI) scenario in which Jenkins supports the development in feature branches. According to Thomas, the only annoying aspect about Jenkins is that (like all other open source projects) Jenkins does not have a large marketing department which highlights its excellent qualities. Another thing is that all the functionalities available today, as well as future improvements, have required and will continue to require that a person who understands the problems and has the skills and the time to solve them (could be sponsored) works out an extension which will make life easier for everyone. These are the mechanisms which have created the super tool that can be installed today free of charge.

Everything in this guide is open source-based without costs for licenses, and all the projects go several years back and are actively maintained. Following up on development and upgrading on an ongoing basis is thus manageable. Last but not least, it is considered good style to give something back to those who have made it possible to create a professional development setup that does not require investment of large amounts of money. So, while you benefit from these tools you might perhaps consider donating part of the savings to the open source projects, or contribute with developer resources to make the experience even better.

di|BY: TechPeople A/S
h1|The optimal toolbox for Open Source Development
h2|Are you looking for a fully integrated set of tools for state-of-the-art support of C++ development on Linux? Then look no further. Thomas Arnbjerg, software developer at TechPeople, the Danish subsidiary of Data Respons, has done the job for you, putting together the optimal Open Source development setup, and describing it in an extensive step-by-step guide, free for you to download from our website. Here is an introduction to the paper.

pa|Instead of a direct link, a steady stream of proprietary gateways of many shapes and forms are used to connect to IoT devices or to collect data from them. This holds true for low-power radio systems like Bluetooth LE, ZigBee, Z-Wave and similar technologies. This is in contrast to other technologies like WiFi (IEEE 802.11), which are often accompanied by a TCP/IP stack that can make them a part of a local or global network.

The problem with the current approach is that you need proprietary software on an intermediate device like a phone, server or router to be able to talk to your IoT device. This makes sense if your device is one of many in a larger farm of devices, where only the combined data from all the devices is of interest. However, if your device is a low-power, self-contained device, you might want to simply talk to the device directly. That way you can avoid all the other data pit-stops, which make your product more complex and require constant maintenance. Keeping iOS, Android or Linux server applications working over time is far from free, and can lead to unnecessary complexity. Security can also become more tricky, and peer-to-peer security can be harder to achieve, as all the pit-stops might need some kind of awareness of the data being transmitted. Even if this is not the case, it would be optimal if existing and proven security protocols like TLS and SSL (which are used for e.g. secure web browsing) could be used directly to connect to the device over IPv6. Bluetooth LE is a fairly young technology, while the fundamentals that make up the security on e.g. the Internet are old, mature and well proven.

So what would a typical example look like in the real world?
Let's take an example where a user wants to query a specific Bluetooth IoT device for some data via a browser. Currently, doing this would involve a web server to process the requests and a proprietary application on e.g. a mobile phone which has a Bluetooth link to the device. But what if the IoT device was the web server? This would eliminate the need for an external web server altogether, and would reduce the mobile phone to a simple TCP/IP router with no knowledge of the device's application or the data. You could also keep the web server, and only eliminate the phone application, if the server needed to process the data.

This topology is something we already take for granted when it comes to other technologies. People would e.g. find it very odd if they needed a special app on their WiFi router for their TV to work. Normally you would assume that any WiFi device that is connected directly to your home network is on the "internet". This is not empirically true, but merely a result of standards and protocols having evolved this way. To help Bluetooth and other protocols evolve to this step, 6LowPan enters the arena.

6LowPan stands for "IPv6 over Low-Power Wireless Personal Area Networks" and is, like other internet standards, defined by the Internet Engineering Task Force via the "RFC 6282" document. "RFC 7668" further details how specifically TCP/IPv6 is mapped on top of Bluetooth Low Energy with regard to addressing and such. Further, to incorporate it into the world of Bluetooth, bluetooth.org defines the "IPSP" standard, which maps the 6LowPan protocol into L2CAP on a fixed port number. With all this in place, the IoT device can now talk directly to a TCP/IPv6 network, such as the "Internet" as we know it. Combining all the mentioned standards and protocols makes up a complete protocol stack from application to physical transport. The protocol stack ends up looking like this:
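The original stack figure is not reproduced here; the following is a plausible reconstruction from the standards named above:

    Application      e.g. HTTP, FTP, Telnet
    Transport        TCP / UDP
    Network          IPv6
    Adaptation       6LowPan header compression (RFC 6282 / RFC 7668)
    Profile          IPSP, mapping 6LowPan into L2CAP on a fixed port
    Link             L2CAP
    Physical         Bluetooth Low Energy radio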
To avoid having to introduce new addresses to the system, the Bluetooth address (which is a 6-byte MAC address issued by the IEEE) is mapped directly into a reserved range of IPv6 addresses, which will therefore not collide with any existing IPv6 addresses. This is a clever trick, as it means that you can deduce the address of your remote IPv6 device simply by knowing its Bluetooth address.

In the end, the whole point of all this is the application. Existing TCP/IPv6 applications and protocols can now be placed directly on top of the protocol stack. This could give life to older TCP-based protocols like "FTP" and "Telnet", which are fairly bandwidth-effective, but could also be used with newer protocols like "HTTP". Looking at the entire protocol stack, one might deduce that this is all a bit too much for a small microcontroller to handle, as it requires processing of additional packet checksums and protocol handling. However, in practice, an efficient and minimal implementation of this can be done in a couple of kilobytes of code on a microcontroller like the ARM Cortex M0, which is an entry-level controller at the time of writing. It goes without saying that for high data throughput, larger controllers will be needed. Bluetooth 5 introduces new, faster packet types that can hold more data and transmit faster.

One of the largest challenges of using TCP/IPv6 on devices with limited resources is the fact that IPv6 was initially designed for large systems with large payload sizes. An original IPv6 and TCP header is usually around 48 bytes. As the hardcoded payload area for a Bluetooth LE packet can be as low as 27 bytes, this is a challenge. Furthermore, you would not like to see all your battery power and bandwidth go up in smoke because of excessive headers. To remedy this, 6LowPan utilizes something called "header compression" to vastly reduce the size of the problematic headers. The compression of 6LowPan datagrams is specified in standards such as "RFC 6282". One of the most significant compression mechanisms is the "LOWPAN IPHC", which defines how IPv6 headers can be reduced to a small number of bytes by using some carefully defined logic that causes much of the information to become implicit. Other schemes define how TCP/UDP and other internet protocols can be compressed in a similar fashion. If you have a bandwidth-limited link, this will make a world of difference. For a 10-byte payload sent over a physical link with a 27-byte MTU (Max Transmission Unit), it would look like this without any compression: roughly 48 bytes of headers plus the 10-byte payload, i.e. about 58 bytes that have to be fragmented across three link-layer packets. Via header compression this can be reduced to a handful of header bytes plus the 10-byte payload, fitting into a single packet. Great Scott! This means 3 times less power, and 3 times less bandwidth is used. That is a significant reduction and makes the technology much more appealing.

The things that are compressed are usually addresses and port numbers. As an example, because the Bluetooth address is mapped directly into the IPv6 address, the remote address can be elided completely if the receiver of the IPv6 packet is the same entity as the device receiving the Bluetooth LE packet. In a similar fashion, other fields can be elided by using implicit knowledge to deduce the values.

One thing to always worry about is power. Contrary to the name, Bluetooth Low Energy is not very low energy if you are sending a lot of data frequently. So you really want to keep your IoT device in a sleeping state as much as possible. One way to do this is simply to apply reasonable values for Bluetooth LE-related parameters such as the "connection interval", which limits the number of slots used for detecting incoming traffic. But when IPv6 is introduced into the mix, a new problem arises. Apart from the information you want to send, the gateway itself could be sending vast amounts of broadcast information which is not necessarily useful for your device. As an example, many networks utilize discovery protocols to detect which devices and what services are available on the network, such as the IPv6 "neighbor discovery". To battle one of these problems, "RFC 6775" introduces optimizations to which "neighbor discovery" requests are sent to the 6LowPan-enabled device. Another, simpler approach is to make sure that the network devices on your network are only set up to route the most necessary packets to the device. This can typically be controlled via firewall and routing rules. However, as the gateway might be an off-the-shelf phone, that might not be an option. Power consumption will always be in focus, and 6LowPan will not change that. However, we might see new challenges in existing protocols along the transition towards IPv6 (if we are going that way), as many of them were simply not designed with such power-critical devices in mind. Still, standard extensions like "RFC 6775" seem to remedy this rather nicely. And it will likely not be the last.

When 6LowPan was developed, an open implementation was needed to test and verify it with. As Linux is widely accepted as the de facto standard for embedded open source development, implementing 6LowPan as a module for Linux was the obvious choice.
So since kernel 3.17 (from 2014) this has been possible to some extent. As the standard evolved, so did the kernel module, so a newer version of the kernel is recommended. Additionally, the kernel module will need to be enabled, and the documentation on how to do this is available in several places online. Searching for the keywords "Linux", "Bluetooth" and "6LowPan" should get you started; a sketch of the typical steps follows below.
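As a minimal sketch of those steps (the debugfs paths below match the mainline kernel module, but details vary by kernel version, and the peer address is of course hypothetical):

    # Load the Bluetooth 6LowPan kernel module
    modprobe bluetooth_6lowpan

    # Enable 6LowPan support via debugfs
    echo 1 > /sys/kernel/debug/bluetooth/6lowpan_enable

    # Connect to a peer by its Bluetooth address (trailing 1 = public address type)
    echo "connect 00:AA:BB:CC:DD:EE 1" > /sys/kernel/debug/bluetooth/6lowpan_control

    # A bt0 network interface appears; the peer's link-local IPv6 address is
    # derived from its Bluetooth address, so it can be pinged directly
    ping6 -I bt0 fe80::2aa:bbff:fecc:ddee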
For embedded or Windows usage, a proprietary stack is needed, as Windows does not currently support 6LowPan out of the box. This can be obtained from various sources, including the author's own "DTBT – DonTech BlueTooth", which has support for 6LowPan on Windows, macOS and embedded targets. Some embedded targets, like the Texas Instruments CC2630, natively support 6LowPan and IPv6. To help developers along, the sniffers and packet tracers used for Bluetooth and networks have adopted support for these new protocols as well. Sniffers from Wireshark, Frontline and Ellisys have gained support for parts of the 6LowPan stack. An example trace of the classical "IP ping" being sent from a Linux device to a Bluetooth-connected Windows implementation with DTBT accompanied the original article (the trace itself is not reproduced here).

There are many backers of the 6LowPan standard, including the biggest producers of Bluetooth controllers, such as "Texas Instruments" and "Nordic Semiconductor". Other companies are also pushing 6LowPan for other low-power network types. The thing they have in common is that they are chipset manufacturers and not end-product manufacturers.

The biggest challenge with 6LowPan is not the standard, the producers of chipsets or the specific implementations. The big hurdle is adoption. IoT is currently a very hot subject, but various companies want to make their own IoT ecosystem, where they can sell more of their own products. This is a classical problem. In the early days of the Internet, many companies like Microsoft, AOL and CompuServe tried to push their own version of the "Internet", so they would have complete control over devices and services sold on it. Today, people would hardly accept a solution where e.g. your TV only worked with a specific Internet connection. Companies like Apple and Google still seem to be interested in this philosophy. On one hand they present things as an open-ish platform, so other manufacturers can participate, but on the other hand they do not wish to employ open standards to accomplish their goals. They still want control, and this bait-and-switch tactic can be something that keeps consumers and companies from fully embracing the technology. However, if history is to repeat itself, technologies like 6LowPan could change all that.

So for now, there is no 6LowPan on Android or iOS. Actually, Linux is the only platform supporting it out of the box. As Android is based on Linux, they could of course suddenly announce its arrival, if they felt it did not get in the way of any of their own visions of deploying devices like "Google Home". One thing is for sure, though: 6LowPan over Bluetooth is not standing still. The recently released "Bluetooth Mesh Profile" for Bluetooth LE has already gotten a new sister RFC called "draft-ietf-6lo-blemesh-00". As the name indicates, it is still in draft, but the goal is to extend 6LowPan to cover mesh networks using Bluetooth. So companies that are looking into Bluetooth mesh might also want to take a look at topping it off with IPv6 connectivity.

Bluetooth™, Linux™, Windows™, iOS™ and macOS™ are trademarks of their respective owners.

di|BY: Peter Dons Tychsen, TechPeople
h1|Bringing the Internet to the Internet of Things
h2|All is not what you think when IoT devices get connected: one of the more curious things about IoT devices is that most of them are not actually a part of what we today loosely describe as the "Internet". The "Internet" is usually defined as a global network consisting of TCP (Transmission Control Protocol) combined with protocols like IPv4 and IPv6, and various other supporting protocols. Despite the name "Internet of Things", IoT devices rarely have support for any of these protocols, making a direct link to (or from) them impossible. Based on the idea that the Internet Protocol could and should be applied even to the smallest devices, 6LowPan is now entering the arena. This article offers a look into the inner workings of 6LowPan, together with assessing the current state of adoption of the standard. And although many difficulties lie ahead, one thing is for sure: 6LowPan over Bluetooth is not standing still.
h3|Gateways provide connectivity, Bluetooth IoT simplified, What protocols make the magic work?, Making large packets fit inside small containers, With great protocols comes great power, Playing around with 6LowPan in the Penguin's sandbox, Who is on the 6LowPan train, and where is it going?

pa|– Initially I decided on buying an electric car to save money. In Norway electric cars are exempt from VAT, and they enjoy a number of other benefits. I studied the cars on offer and realised that Tesla is ahead of the competition when it comes to range and charging infrastructure, just to mention a few things. On top of that, everything works together seamlessly. A Tesla appears smarter than other cars.

– I find it interesting to see what a company can achieve that has no history and is not bound by any kind of legacy. They started out with a blank sheet of paper.

– As I see it, Tesla is a great example of a truly disruptive business case, similar to when Apple launched the iPhone. Apple was the frontrunner, and afterwards all the Android phones produced in Asia followed.

– Now new car manufacturers are doing like Tesla. If I remember correctly, there are 10 new electric car brands emerging out of China. And just like Tesla they are starting from scratch. Also, the old car manufacturers are investing heavily in electric cars. Soon the electric car market will become fiercely competitive, and it's hard to say if Tesla will be able to maintain its position. Who knows, maybe Tesla will continue as the flagship of electric cars. It's still ahead of the competition, and that must be the reason why its value is so hysterically high, even though it hasn't made much money yet.

– If you look at a conventional car, it consists of a lot of subsystems, many of them manufactured by subcontractors. That concept worked well as long as these components were isolated subsystems without the need for coherent communication and update mechanisms. But slowly everything became more and more dependent on communication between components (east/west) and to cloud platforms (north/south).
A car may employ 25 or more computer modules, and without a coherent software stack tying it together you'll never be able to build a truly modern car.

– They are investing heavily to develop a software stack and equipment configuration, and we're already seeing some results. Volkswagen for instance is launching a series of electric cars based on the same platform. Some of the German software companies that are part of the Data Respons group are contributing to this new way of constructing a car, working for Audi, Mercedes, and others.

– But parts of the industry have had difficulties embracing that new approach to designing a car. About 4 or 5 years ago I attended a talk given by the head of development of Volvo. He told us that 70 per cent of development costs for a new model go into software, and only 30 per cent into mechanical components. That trend came as a shock to some vehicle industry executives, and now conventional manufacturers are investing enormous amounts in developing a state-of-the-art software stack and platform.

– An electric car is actually very simple. There are very few moving parts. Anyone could make an electric car. But the software required is where things get complicated. Here Tesla has a leading edge, and now other manufacturers are working hard to develop similar systems.

– However, I find it a bit strange that everybody is developing their own system. I wouldn't be surprised if in 10 years' time we'll have an open source software stack that may be employed as a baseline for car manufacturers to license.

– With the exception of the screen going black on occasion (you have a Ctrl-Alt-Del on the steering wheel), the car has proven to be reliable. It also appears well built, although not quite up to German premium car standards.

– On top of that it's fun to drive. My car is a performance model and it's very powerful. You would have to pay 5 times the amount for a petrol car to get similar performance.

– And moreover, I'm looking forward to what Tesla has to offer when it comes to self-driving.

– I'm interested in how self-driving technology is developing. When I bought my car I paid a premium for the upcoming "Full Self-Driving" package, which is said to enable the car to find its way to a destination without any driver intervention. A beta version has already been distributed to a select number of Tesla owners in the US, but I'm not sure the package will even be available in Europe. We'll see about that.

– I don't think so. For years autonomous cars have been touted as the next big thing in automotive, but in reality many automakers are backing off on this promise and focusing on lower-hanging fruit such as driver assistance systems. In general, I'm sceptical of the idea of full self-driving, at least when not confined to strictly regulated and provisioned environments. The general urban traffic scenario is highly complex, and it is unlikely that machine learning can accomplish fluid traffic given the complexity of the task. Remember, these algorithms depend on statistical confidence in order to make the right choice. Whenever a decision with potential safety implications is to be made, this confidence level must be very high – otherwise the car will probably have to halt. This is likely to be a recipe for a traffic jam.

– Tesla employs sensors such as cameras, ultrasound and radar to establish the situational awareness required for autonomous driving.
Cameras may however be blinded or confused by lack of contrast, as is easily observed driving in winter conditions in Norway. – There are also ethical and legal aspects to the whole concept of autonomous cars. Who is responsible in the event of an accident? The non-driving driver or the car manufacturer? The algorithm supplier? What will the insurance cost become – if you can get insurance at all? – A part of the screen will at all times show objects recognised by the car and thus give an indication of the situational awareness as perceived by the car. The deviation from my own understanding of the current traffic situation tells me something about the car’s ability to navigate traffic – as a side note, I have driven 40 years without accidents. In general, my experience is that the car definitely does not get the full picture. It is also dependent on lane markings to stay on track when auto steer is activated. – One funny observation is that updates may lead to worse performance, for instance in assisted braking, probably due to stricter statistical-confidence requirements being put on the car makers. The car will generally appear more “nervous”. – For the car to receive updates and transmit data it has to connect to WiFi. In my house I’ve installed a sophisticated WiFi network, which allows me to see all clients and how much data they transmit. It tells me that when parked in my garage the car transmits considerable amounts of data after being taken for a drive. Its many sensors collect a lot of data, and Tesla is very good at using its fleet of cars to channel data back into their machine learning systems to improve them. – That is one of the reasons why Tesla is ahead of the competition. They extract data to train their machine learning models. It is likely to be some kind of reinforcement learning, where they pick real-world data related to situations when something unforeseen occurs. For instance, the car drives on autopilot and suddenly the driver grabs the wheel or steps on the brakes. I assume they want to analyse sensor data related to such an incident, and that makes good sense. But I’m guessing here, because the only thing I can see is that the car is uploading a lot of data. Exactly how it’s done is something Tesla keeps as a business secret. di|Arne Vollertsen for Data Respons st|BY: h1|Man vs. machine – a software engineer and his Tesla h2|Meet Hans Christian Lønstad, CTO of Data Respons Solutions. A software engineer with 20+ years experience working at Data Respons, Hans Christian knows a thing or two about technology, and he is the proud owner of a Tesla Model 3. So, what would be more obvious than to ask him how that relationship is going – is that much-hyped car brand delivering on its promise? What are the upsides and downsides of owning a Tesla? And what are his thoughts on the current state of the automotive industry? em|Hans Christian, why a Tesla? What do you think of Tesla as a car manufacturer? What does owning a Tesla tell you about the current state of the automotive industry? What are the legacy car manufacturers doing to get past that barrier? In your opinion, what can legacy carmakers learn from Tesla? You’ve had your car for about a year. Are you satisfied with it? How so?
Are we going to see autonomous vehicles in the near future? How about your own car, how does it behave in traffic? Like all other Teslas your car is communicating with the Tesla headquarters. Have you noticed anything peculiar in that regard? Hans Christian, thank you for your time, and I hope you continue having fun with your car! pa|It may look like a flying saucer straight out of a 50s sci-fi movie, but the Heimdall Power Neuron is pure hi-tech. Its sensors spy on the wire it’s sitting on in a multitude of ways: the flow of current in the line, wire angle, vibration and temperature, snow load, short circuit detection and much more. Neurons distributed all over the grid send their intel to the Heimdall headquarters in the cloud. Through the Heimdall Cloud, grid owners can access monitoring data in real time and harvest valuable information to predict line faults before they occur, streamline maintenance and minimise blackouts. The technology promises to optimize energy distribution and may increase the capacity of the power infrastructure by a staggering 25 per cent. Although the Neuron is designed to be mounted on a wire and is used for monitoring wires, much of what goes on inside it doesn’t need any wires whatsoever. The device gets its energy from the magnetic field surrounding the power cable, and it sends its measurements wirelessly to the cloud via mobile phone networks. However, many power cables run through sparsely populated areas, where cellular coverage may be weak or even non-existent. This is where Data Respons R&D Services can help. – In areas without cellular coverage the Neuron switches to an auxiliary communication technology, development engineer Monica Lapadatu explains. – The current version of the Neuron uses a radio technology called LoRa, but for the next-generation Neuron the Heimdall development team wanted something else, so they turned to us for advice. – They had several reasons for switching technologies. The most important ones were that LoRa communication is restricted, in the sense that you are only allowed to send a specific amount of data over a specific time period. This restriction limits the real-time performance and the amount of data Heimdall can transmit between the Neuron and the Cloud. Another issue with LoRa is that it requires base stations placed around the network. This requires both hardware and software, and is thus an extra point of possible failure, as well as another part of the system that needs to be maintained and updated over time. But what to use instead of LoRa? To answer that question, Monica was tasked with surveying the radio technology landscape and picking the one best suited for connecting Neuron and Brain. – I came back with a recommendation to choose Bluetooth Mesh. For various reasons it’s a good fit for this application. We needed a long line of Neurons to send data from one to the other. Bluetooth can do the long line, while other mesh networks are star-shaped and need concentrator nodes to function. Also, Bluetooth can provide the range we need. Bluetooth Version 5.0 has a feature called “long-range” mode. That gives you a range of around 1.3 km between each network node, which is more than sufficient. While equipping the upcoming new version of the Heimdall Neuron with Bluetooth communication features, and testing it to be sure it met requirements, Monica also assisted the Heimdall development team in testing the previous version, which is equipped with LoRa, and fixing minor bugs.
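As a rough illustration of what that 1.3 km figure means for a chain of Neurons relaying data along a power line, here is a back-of-envelope sketch in Java. The 40 km line length is a made-up example, not a Heimdall figure, and a real link budget would also have to account for terrain, antennas and fading margins.

// Back-of-envelope estimate: how many Neurons does a straight power line need
// if each Bluetooth 5.0 long-range hop covers roughly 1.3 km? Illustrative only.
public class MeshRangeEstimate {
    public static void main(String[] args) {
        double hopRangeKm = 1.3;  // long-range mode figure quoted in the article
        double lineKm = 40.0;     // hypothetical line length (assumption)
        int hops = (int) Math.ceil(lineKm / hopRangeKm);  // 31 hops for this example
        int nodes = hops + 1;                             // endpoints included
        System.out.printf("%d hops -> at least %d nodes along the line%n", hops, nodes);
    }
}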
– I am specialized in cybernetics and robotics. That means I have good testing skills, because I understand all the parts of a device, both electronics and software. In this case we did two very different kinds of testing. Regarding the current version of the Neuron, we had little time before product release, and I just had to fix a few things that weren’t working properly. – With the Bluetooth version we were right at the beginning of development, so I was tasked to find out what the possibilities were and what the technology could do. We did a lot of ground tests to measure transmission range etc. Everything went well, so now we’re ready to implement Bluetooth in the next version of the Power Neuron. When that version is launched, customers who run older versions of the Neuron can upgrade or keep the old version, depending on their specific needs and requirements. With the emergence of the Internet of Things, non-cellular communication technologies like Bluetooth Mesh are gaining traction. Data Respons has extensive experience in analysing concrete use cases and choosing the wireless protocol best suited for each specific application. Below we are presenting four technologies and their main characteristics, together with a number of relevant use scenarios. If you are interested in learning more, please feel free to reach out to our wireless experts. di|Arne Vollertsen for Data Respons td|LTE-M/NB-IoT: Two new technologies, both based on mobile (cellular) technology, created to be particularly suitable for enabling global IoT connectivity. LTE-M and NB-IoT are both good connectivity options for industries looking to take advantage of LPWAN (Low Power Wide Area Networks) technology that enhances the battery life of devices and connects devices that have previously been hard to reach. They are both available today, standardized and built on the 4G network, which means they are future-proof, have global network coverage and are backed up by GSMA and telecom standards. Seldom communicated: when applying this technology you are dependent on third-party infrastructure (mobile network operators), and with 10+ years of operation this starts to become a risk. Bluetooth Mesh: A computer mesh networking standard based on Bluetooth Low Energy that allows for many-to-many communication over Bluetooth radio. Enables the creation of large-scale device networks, making it ideally suited for control, monitoring and automation systems where tens, hundreds, or thousands of devices need to reliably and securely communicate with one another. Thread: A standards-based, IPv6-based mesh networking protocol developed for directly and securely connecting products around the home to each other, to the internet, and to the cloud. Simpler to set up than Bluetooth and also benchmarked to be faster (larger bandwidth, shorter latency). SmartMesh IP: Combines reliability and ultra-low power with a native Internet Protocol (IP) layer for a robust, standards-based offering perfect for a broad range of industrial applications. SmartMesh IP provides robust wire-free connectivity for applications where low power, reliability, and ease of deployment matter. li|LTE-M/NB-IoT characteristics: LTE-M is the better alternative with respect to handling firmware and software updates that are expected during the lifecycle of the devices. LTE-M is built for roaming and has the best support for international deployments using a single point of contact and subscription for enterprises. Both LTE-M and NB-IoT have significantly improved indoor coverage compared with LTE.
LTE-M is a better alternative for moving devices, as it will not lose ongoing data transfers. LTE-M is prepared for voice technology and Voice over LTE. With LTE-M, devices can react in milliseconds if required, enabling use cases where a fast response is needed, which is relevant for the usability of human-machine interactions. Bluetooth Mesh characteristics: Coverage of very large areas. Self-organizing many-to-many network. The ability to monitor and control large numbers of devices. Optimized, low energy consumption. Compatibility with currently available smartphone, tablet and personal computer products. Industry-standard, government-grade security. Simple network installation, start-up, and operation. Thread characteristics: Self-organizing many-to-many network. Secure. Large commercial networks. No single point of failure. Low power. Cost effective. Market ready. SmartMesh IP characteristics: Ultra-low power consumption. Deterministic power management and optimization. Auto-forming mesh technology for a self-healing and self-sustaining network. Dynamic bandwidth support, load balancing and optimization. Network management and configuration. Zero-collision low-power packet exchange. Scalability to large, dense, deep networks. High network reliability. st|BY: h1|Monitoring the electric grid for a greener future h2|The Norwegian startup Heimdall Power is aiming to catapult the electric grid – designed 100 years ago – into the 21st century. How? By inventing the Power Neuron, a robust metal ball the size of a football, mounted on live wires. Inside the Neuron a sensor package is monitoring the wires and sending out early warnings, if line faults are about to happen. This unique real-time monitoring system enables grid owners to optimize their infrastructure and increase its capacity significantly. h3|Wires and wireless Reasons for switching Surveying technologies Two ways of testing Four wireless technologies – characteristics and use scenarios em|Ref: https://www.silabs.com/documents/public/user-guides/ug103-11-fundamentals-thread.pdf Ref: https://advantech-bb.com/what-is-smartmesh-ip/ Ref: https://www.analog.com/media/en/technical-documentation/user-guides/smartmesh_ip_user_s_guide.pdf Ref: https://blog.nordicsemi.com/getconnected/an-introduction-to-thread Ref: https://www.telenorconnexion.com/iot-insights/lte-m-vs-nb-iot-guide-differences/ Ref: https://www.bluetooth.com/blog/an-intro-to-bluetooth-mesh-part1/ pa|An arm, reaching out from a boat, is submerged into the water and down to the sea bottom, “looking” for Pacific Oysters while leaving all other subsea fauna alone. Data Respons R&D Services has helped Oystercatch with subsea expertise, with mechanics and with the software controlling the robot.
– The challenge is to create subsea engines that can withstand the salty conditions, says R&D Manager Øyvind Milvang at Data Respons R&D Services. h1|Fighting the Pacific Oysters with an optical robot h2|The Norwegian company Oystercatch has developed an optical oyster-catching robot in order to help stop the mass growth of the Pacific Oyster, which is threatening beaches in Europe and around the Pacific Ocean. pa|The partnership with Dansk Retursystem started back in 2009 and has since then resulted in the delivery of several thousand systems to be used in shops and malls all over Denmark. They have gone on to deliver multiple generations of the solution. Sweden is aiming for a zero-waste society. 1,850 million items are recycled through reverse vending (pant) each year, i.e. where people get money back for empty cans and bottles. (Source: Dansk Retur and Swedish Waste Management Association, Swedish EPA) h1|Reliable control system for Danish recycling system h2|Dansk Retursystem manages a world class recycling system which retrieves, counts and sorts empty cans and bottles. An impressive 90 % is sent for recycling. They have since 2002 had the sole responsibility for running the Danish deposit and return system. Data Respons Solutions deliver a customised and trustworthy central control system that ensures reliable operations 24/7. pa|– So far 2019 has been a very good year for us, but this quarter has really hit the mark with 35% growth, says Kenneth Ragnvaldsen, CEO of Data Respons. The strong performance is a mix of successful acquisitions and strong organic growth. In July, the German company Donat Group and the Swedish company inContext became part of the Data Respons group. In addition, the company can point to 11% organic growth for the group. – Our long-term strategy, where we have cultivated the specialist role, combined with a strong digitalisation trend, has yielded very good returns this quarter, Ragnvaldsen continues. It was not only financially that Data Respons experienced growth. In Q3 the company headcount grew by 270 engineers and developers across the group. However, the demand for more people is still high, and several of the subsidiaries would like to recruit a developer every week in order to meet the increasing demand in the market. The company now counts 1 386 specialists in offices in Norway, Sweden, Denmark, Germany and Taiwan. – Our employees represent the core value of the company, and as the number of specialists increases, so does the opportunity for the entire company to grow and take on larger, exciting projects. We have gone from being a small Norwegian tech company to becoming an international technology group with 35 nationalities and many very smart and talented people, says Ragnvaldsen.
Data Respons’ revenue has grown by 28% this year, and much of that growth is credited to the company being strategically positioned in response to big technology drivers, according to Ragnvaldsen. – We have become a business-critical supplier to many large European companies that are at the very beginning of extensive digital evolutions. In Germany, we are part of the development of tomorrow’s banking industry and mobility segment. In Sweden, we have large teams working on the roll-out of 5G and IoT technologies, and in Denmark and Norway we are creating digital platforms for the energy and MedTech sectors, just to name a few, comments Ragnvaldsen. McKinsey, PWC and Deloitte have all produced reports this year that point to how supplier value is shifting from hardware to software within mobility. Specifically, it is autonomous platforms, connected vehicles, electrification and shared user platforms that are behind the shift towards software. – Mobility is just one example of the massive effects digitalization can have in a single sector. We have already seen how electric cars, connected concepts and new car sharing services create many opportunities for a business like ours. – In addition, we have good speed into the fourth quarter with a strong order intake, and across the group we are involved in long-term projects that will continue to fuel our growth. We are still determined to continue the development of the company through a combination of organic development and selective bolt-on acquisitions, concludes Ragnvaldsen. li|Quarterly: Revenue in the third quarter was NOK 460.0 million (341.8), a growth of 35%. EBITA was NOK 55.6 million (33.2), a growth of 68%. The underlying EBITA in the third quarter, adjusted for expensed transaction cost of NOK 5.9 million related to the acquisition of DONAT Group GmbH and InContext AB, was NOK 61.5 million. EBIT was NOK 47.7 million (29.8), a growth of 60%. The profit for the period was NOK 29.9 million (18.9). EPS was 0.38 (0.32). Data Respons had a net operating cash flow of NOK 63.4 million (3.1) in the third quarter. Year to date: Revenue for the first nine months was NOK 1 344.9 million (1 051.3), a growth of 28%. EBITA was NOK 151.5 million (95.0), resulting in an EBITA margin of 11.3% (9.0%). Data Respons had a cash flow from operating activities of NOK 129.5 million (9.2). The total number of employees on 30 September 2019 was 1 006 (644), and including subcontractors, the company had 1 385 (961) employees. h1|Data Respons delivers record high quarterly results h2|Data Respons presented figures for Q3 showing both solid revenue growth and a good margin increase. The company reports 35% growth in operating income and 68% growth in the operating profit, reaching an EBITA margin of 12.1% in the third quarter. Growing number of specialists Growth driven by global megatrends Q3 facts pa|In the beginning of the 90s, the slowness of memory compared to the processor began to be felt, a problem known as the memory gap. This led to the introduction of cache memories: small amounts of fast but expensive memory used to increase the performance of load operations by keeping required data closer to the processor than the main memory.
However, because cache memories, the L1 DC in particular, are optimized for performance rather than energy consumption, the energy consumed by cache memories can account for a significant amount of the total energy consumed by microprocessor-based architectures. Techniques such as Speculative Halt-Tag Access (SHA) and Early Load Data Dependence Detection (ELD³), based on way-halting and sequential loads respectively, can be used to reduce the energy dissipation without sacrificing the performance of the L1 DC. To provide the Central Processing Unit (CPU) with the necessary data as quickly as possible, the most frequently used data is stored in caches placed close to the CPU. Figure 1a shows a typical memory hierarchy in today’s computers. The L1 cache is usually placed on-chip, so that it is possible to exploit locality by keeping data likely to be reused as close as possible to the CPU. If there is a cache miss in the L1 DC, the search request continues to the L2 DC, which is often larger than the L1 DC and thus has higher latency. With each cache miss, the search proceeds to the next memory level until the requested data is found. In order to reduce the search time for data requests, caches often have a restricted placement policy, known as cache associativity. Cache hits are then detected through an associative search of all tags in the indexed set, instead of searching through the entire cache. Conventional L1 DCs are often set-associative caches with low associativity, where the latency of load operations is optimized by accessing all ways with the same tag address in parallel, as shown in Figure 1b. However, this results in a significant amount of wasted energy, as only data from one way is used. To reduce the energy consumption, numerous cache architectures, such as way-prediction, way-shutdown and highly-associative caches, have been proposed. However, these optimization techniques lead to increased latency and complexity, which makes them unattractive for L1 DCs. Practical way-halting by speculatively accessing halt tags is a cache architecture that can reduce the energy dissipation without increasing the latency and complexity. That is accomplished by halting cache ways that cannot possibly contain the requested data, thus avoiding accessing all ways with the same index unnecessarily. The technique is based on the observation that the displacement address often is small and usually only changes the offset of the relative memory address, see Figure 2. This makes it possible to read the halt tags, the low-order bits of the tag, using the base address, in parallel with the memory address calculation in the address generation stage. Since the base address and the displacement address are available in the address generation stage, a comparison of the tag and index bits of the base address and the effective address can be done before the data access stage to determine whether the displacement is small. When the displacement check succeeds, the halt tags can be accessed from the halt-tag cache, such that a halt-tag check is performed before each L1 DC load operation. The halt-tag bits of the base address are compared with the halt-tag bits of each cache way accessed. If there is a match between the halt tag from the base address and the halt tag from a way, the bit corresponding to that way in the response vector is enabled to indicate that there is a halt-tag match, see Figure 3. Only the ways with an enabled bit are then accessed in the next pipeline stage.
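To make the way-halting mechanism concrete, here is a minimal software model in Java of the two checks described above. The class name, constants and field widths are our own illustrative assumptions; in a real design this logic is combinational hardware inside the address generation stage, not software.

// Minimal software model of the SHA displacement check and halt-tag filtering.
public class ShaModel {
    static final int WAYS = 4;                 // example: 4-way set-associative L1 DC
    static final int OFFSET_BITS = 5;          // example: 32-byte cache lines
    static final int HALT_TAG_BITS = 4;        // low-order bits of the tag
    static final int HALT_TAG_MASK = (1 << HALT_TAG_BITS) - 1;

    // SHA only applies when base and effective address agree on all tag and
    // index bits, i.e. the displacement only changed the line offset.
    static boolean displacementIsSmall(int baseAddress, int effectiveAddress) {
        return (baseAddress >>> OFFSET_BITS) == (effectiveAddress >>> OFFSET_BITS);
    }

    // Compare the halt tag derived from the base address with the halt tag stored
    // for each way in the indexed set. The returned vector enables only the ways
    // that can possibly hold the data; halted ways are never accessed.
    static boolean[] haltTagCheck(int baseTag, int[] haltTagsInSet) {
        boolean[] enable = new boolean[WAYS];
        for (int way = 0; way < WAYS; way++) {
            enable[way] = (haltTagsInSet[way] & HALT_TAG_MASK) == (baseTag & HALT_TAG_MASK);
        }
        return enable;
    }
}

The point to notice is that the enable vector is ready before the data access stage begins, so the halted ways never dissipate tag or data array energy.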
The technique has no performance penalty and adds very little complexity to a conventional processor core design. Although the displacement address often is small, the SHA technique cannot be used for displacement addresses that change the tag or index bits during the address calculation. When the displacement address is too large for the SHA technique, Early Load Data Dependence Detection (ELD³) can be used to reduce the energy dissipation. ELD³ is an approach that can detect whether the load operation has a data dependency with a following instruction that would cause a pipeline stall. If there is no data dependency between the load instruction and the following instructions, the load operation is performed sequentially: all tag ways are accessed, but only the one data way in which the data resides is accessed in the next clock cycle. However, if there is a data dependency, the data access is performed in parallel, where both tag and data ways are accessed in the tag/data access stage, shown in Figure 4. In order to decide whether the data ways should be accessed sequentially or in parallel, the information about data dependency between the load instruction and the following instructions must be available at the time of the load operation. In a conventional in-order pipeline processor, the information must be available before the end of the address generation stage. Commonly, it is possible to check for data dependency by comparing the destination register of the load instruction with the source registers of the instruction that immediately follows it. However, it is not directly possible to check for data dependency between the load instruction in the address generation stage and the second and third upcoming instructions, which is required by the ELD³ technique. Therefore, a Data Dependency Bit (DDB) memory is implemented in the address generation stage that holds the dependency status for each instruction in the level-one instruction cache (L1 IC). When a load instruction is detected after instruction fetch, the data dependency bit is accessed from the DDB memory for the corresponding instruction. Figure 5 illustrates the DDB memory for a two-way L1 IC. The dependency bit will be correct as long as the cache line is not evicted from the L1 IC. Should a cache line be evicted from the L1 IC, the load operation will still be executed correctly, at the expense of an additional stall cycle. Moreover, the dependency bit for the load instruction will be updated during writeback, such that the dependency bit is correct the next time the load instruction is executed. By combining SHA and ELD³, the ELD³ technique can be used when the displacement address is too large for the SHA technique. A load operation is then performed like this: When the displacement is small, the halt tags are accessed but the DDB memory is not. The tag and data ways are accessed in parallel, but SHA will halt both tag and data ways using the hit vector from the halt-tag access. When the displacement is too large for SHA, the halt tags are not accessed, but the DDB memory is. The outcome of the DDB memory decides the next step. If the DDB memory returns a dependency bit which is cleared, all tag ways are accessed in parallel, but the data way is accessed sequentially. If the DDB memory returns a dependency bit which is set, the tag and data ways are accessed in parallel, such that the data can be forwarded to the following instruction and a stall cycle is avoided.
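The combined decision procedure lends itself to a compact sketch. The following Java fragment is a hedged illustration of the selection logic just described, with invented names; the article itself describes hardware, not code.

// Access policy for one load operation, following the SHA + ELD³ combination above.
public class LoadAccessPolicy {
    enum Access {
        SHA_HALTED_PARALLEL,      // halt tags read; only matching ways accessed
        SEQUENTIAL_TAG_THEN_DATA, // all tag ways now, single data way next cycle
        FULL_PARALLEL             // all tag and data ways at once (costly)
    }

    static Access decide(boolean smallDisplacement, boolean dependencyBitSet) {
        if (smallDisplacement) {
            // SHA case: the DDB memory is not consulted at all.
            return Access.SHA_HALTED_PARALLEL;
        }
        if (!dependencyBitSet) {
            // No following instruction needs the value immediately, so the extra
            // cycle of a sequential data access causes no pipeline stall.
            return Access.SEQUENTIAL_TAG_THEN_DATA;
        }
        // A following instruction depends on the load: pay the energy cost of a
        // parallel access so the result can be forwarded without a stall.
        return Access.FULL_PARALLEL;
    }
}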
The effectiveness of the SHA and ELD³ implementations was evaluated by running MiBench benchmark applications on a four-stage RISC-V RV32I 32-bit processor, implemented on the Single-ISA Heterogeneous MAny-core Computer (SHMAC) framework. SHMAC is a research project initiated by the Energy Efficient Computing Systems (EECS) department at the Norwegian University of Science and Technology (NTNU), and it uses a tile-based architecture. The MiBench applications were compiled using the RISC-V GCC toolchain and analyzed using SHMACsim, a cycle-accurate simulator for the SHMAC framework. Figure 6 shows the average number of way accesses when using the SHA and ELD³ techniques in combination, relative to a conventional cache implementation. When the displacement is small and the SHA technique is used, only one tag and data way is accessed for most load instructions, shown as S:1. In addition, when there is a cache miss, zero tag and data ways are accessed. As we can see from S:0, load instructions that result in cache misses occur quite frequently. When the displacement is too large for SHA and there is no data dependency, we can see that the number of data ways accessed is reduced significantly by accessing the data ways sequentially using ELD³, shown as E:S. Improving the energy efficiency of computing is an important area of research, and there is a potential for reducing the energy dissipation of caches. This article shows that, using the concepts of practical way-halting and data dependency detection, it is possible to reduce the energy dissipation of the L1 DC without reducing performance. di|Salahuddin Asjad Development Engineer Data Respons st|BY: h1|Performance-aware energy-efficient data cache accesses h2|With the increased growth of Internet of Things (IoT) devices, the need for energy-efficient computing systems is more important than ever. Many of these systems are battery operated and often in places where recharging or replacing the batteries is difficult. This is why researchers and the semiconductor industry are using a significant amount of resources on increasing energy efficiency, by developing embedded systems that consume as little power as possible while still increasing performance. h3|Cache Memories Speculative Halt-Tag Access (SHA) Early Load Data Dependence Detection (ELD³) SHA+ELD³ Results Conclusion pa|Data Respons’ principles for good corporate governance will form the basis of long-term value creation in the best interests of shareholders, employees, customers, suppliers and other interested parties. Data Respons will, at all times, meet the requirements made of the company by the Norwegian Companies Act and the Norwegian Accounting Act. In addition, the company will aim to achieve transparency in respect of financial and other matters, so that the capital market, shareholders, customers and suppliers are able to assess the company’s situation and future potential. The Board of Directors focuses on maintaining a high standard of corporate governance in line with Norwegian and international standards of best practice.
The foundation for the Data Respons group’s corporate governance structure is Norwegian law, and Data Respons ASA is a Norwegian-registered public limited liability company listed on the Oslo stock exchange. h1|Declaration on Corporate Governance h4|Download the full declaration here: pa|Sylog has been awarded a framework agreement for consulting services at the Swedish Defence Materiel Administration in the area of ILS (Integrated Logistics Support). The agreement, including options, extends over seven years, and all branches of the Armed Forces can use the agreement. – This is a large and important agreement for Sylog and especially for us at Sylog Systems. For many years, we have built up competence in the defence sector. The agreement is central to the Swedish Armed Forces’ growth, and it is therefore extra stimulating that we have now been approved as a trusted supplier, says Erik Westman, Head of Sylog Systems. – As Sylog has grown, it has become evident that we have a lot of expertise to offer the public sector in Sweden, including the armed forces. We started Sylog Systems in 2019 in order to become an established supplier to the Swedish armed forces. One year later we can state that we have had a fantastic start, which this framework agreement confirms. It gives us great pleasure to help the Swedish armed forces with their really interesting projects, thus helping Sweden stay up to date and safe, says Johan Jacobsson, CEO of Sylog. st|About Data Respons Data Respons is a pure-play digital leader with in-depth expertise in software development, R&D services, advanced embedded systems and IoT solutions. The number of blue-chip customers is increasing, and Data Respons expects this trend to continue going forward. The trends of increased automation, digitalisation and ‘everything connected’ (IoT) fit well with both Data Respons’ business units and competence map. The company can develop everything, from the sensor level to the mobile app, making it an ideal partner for its customers in their digital transition. The company has a highly diversified customer portfolio in industries such as the Mobility sector, Telecom & Media, MedTech, Security, Space & Defence, Energy & Maritime, Finance & Public and Industrial Automation. Data Respons is headquartered in Oslo (Norway) and has a strong portfolio of clients in the Nordic region and in Germany, supported by 1,400 software & digital specialists. Data Respons has achieved a 17% annual growth rate over the last 20 years. AKKA Technologies acquired Data Respons in 2020. The acquisition creates Europe’s largest digital solutions powerhouse, able to address the high-volume and fast-paced growth in the digital market. Data Respons is part of AKKA Technologies, the European leader in digital engineering consulting and R&D services in the mobility segment. h1|Data Respons subsidiary captures Swedish defence contract h2|The Swedish Data Respons subsidiary Sylog has won a contract in the integrated logistics support program for the Swedish Defence Materiel Administration.
pa|The EnergyBASE system comprises the intelligent edge component (the EnergyBASE hardware and software stack), an optional backend that integrates a variety of services, and a modern web UI provided by the local webserver on the EnergyBASE device. This customer frontend is optimized for desktop and mobile devices. As shown in Illustration 2, the frontend is accessible through the local network and (if activated) also remotely. The EnergyBASE device contains a 450 MHz ARM9 processor, 128 MB RAM and 4 GB of flash storage. It provides ethernet and serial interfaces and also a polyphase electric meter. The EnergyBASE software stack is made up of a MicroDoc port of the Oracle Java™ SE Embedded 8 JRE and a Smart Home OSGi framework. The EnergyBASE applications are written in Java. This allows for portable code for a variety of target platforms. While the project teams develop on Windows, Linux and Mac systems, the same Java code can be used on cloud-based test and demo instances as well as on the actual target hardware of the EnergyBASE without any code changes. The Java promise “write once, run anywhere” is real. The EnergyBASE architecture is also based on the OSGi component model. By choosing this technology, we are able to provision, deploy, start, stop and remove software components (called “bundles”) on the fly on remote devices without interrupting operation or other services on the device. Bundles can be updated individually or within groups, which gives us the ability to react quickly and effectively to new requirements and potential problems in the production environment. The component model is used to assemble customer-specific applications depending on parameters like hardware release, customer contracts, configuration, stage or use case scenario. As shown in Illustration 3, there are many possibilities to combine the application bundles: multiple adapters to handle different kinds of devices from different manufacturers, implementations of protocols for communication purposes, external service connectors, selectable forecast algorithms, optimization methods to use the energy in an efficient way, and much more. There are a few preliminary decisions you’ll want to make while defining an OSGi bundle. One of these decisions is the dependency on other bundles. Each bundle can be defined independently or in conjunction with others. For example, let us assume Bundle B is dependent on Bundle A. In this case, it is guaranteed that the startup process of Bundle B will only be initialized after Bundle A is already in the correct state (started). In more concrete terms: it will not happen that one of the device adapter bundles gets started while the necessary protocol implementation bundle is not available. Furthermore, we developed a mechanism to define relationships between services and the ability to inject them. According to the inversion-of-control pattern, our ServiceMonitor (or more specifically the OSGi BundleContext) observes and manages the complete lifecycle of each service and provides the requested instance. At this point the relationship between dependencies on the bundle and service layers becomes much more important. In Illustration 4 we can see that Service X is injected into Service Z.
The instance of Service X can only be created when Bundle B is running. Due to the relationship between Bundle B and Bundle A, the running state of Bundle A is also necessary. This small example shows that this technique gives us a handy way to control dependencies, but it can also grow into a complex construct really fast. In practice we keep the dependencies as small as possible. Despite the loose coupling of the components, the ability to communicate with each other by means of an event-based publish/subscribe mechanism is still present. In addition to the general properties of the OSGi-based platform, we also benefit from various add-ons created by the OSGi engine. It provides several features to monitor and manage external devices. The EnergyBASE behaves in a very performant and smooth way, notwithstanding the huge amount of functionality of the engine and the complexity (see Illustration 3) of our application. The ability to communicate with our backend through an (SSL-encrypted) TCP socket connection is already provided by the OSGi engine used. The EnergyBASE is obviously completely usable without an active connection to the internet or our backend. But there are some handy features, also used by most of our customers, like remote access through the web (https://energybase.enbw.com), mail sending in case of malfunction, or weather data consumption, which require an active connection. Our backend system is based on the “mPower Remote Management” (mPRM). It is built using the same Java/OSGi software stack as the EnergyBASE, which brings us many advantages. It provides some essential features out of the box, like monitoring external devices, configuration, remote software updates and the internal repository to handle different versions of software components. We are able to extend the existing set of functionalities by providing self-developed bundles. We use this technique, for example, to consume weather data or electricity prices from external service providers, to send emails and push notifications, or to activate and deactivate EnergyBASEs. The mPRM provides a generic RESTful API which allows executing its functions via HTTPS service calls. This feature is very convenient for developing automated test cases which implement complete test scenarios over all system components. Due to the use of Java and OSGi on both the server and device side, it is easy to implement distributed services for both components. For example, when the EnergyBASE is connected to a server, weather data can be consumed and prepared on the server side, while the device only collects the relevant regional data from the backend. Furthermore, it is possible to shift functions from the EnergyBASE device up to the server to process computationally intensive operations. Further aspects of the general software development process are affected by the homogeneous choice of technology. We can use the same IDE, with the same set of plugins, and also the same testing framework to develop client and server bundles. Also, the build and publishing process on our CI system does not need any changes. This may not sound very important, but when you have already worked with totally different technology stacks on the client and server side, you will be very pleased with the simplicity of this approach. The frontend applications are implemented using modern web-app technologies and are provided to allow end customers access to statistics and process control via desktop and mobile browsers.
Besides the browser-based access there are also hybrid, HTML5-based mobile web applications. The set of functionality is not as large as in the default web application, but it contains all the important data needed to get a broad overview of the current energy production, the battery’s state of charge and more (see Illustration 6). Each UI-related bundle contains three directories to provide its content for different environments: one folder just for mobile application-related files, one for web browser files, and one shared folder for both cases. The CI system chooses the right files according to the target environment while building the software. The mobile applications are currently available for both Android and iOS. We are able to virtualize every component of the whole system, including the devices and the device adapters. This technology is very useful when it comes to testing individual device configurations as well as performing integration testing. Our continuous build process makes use of automated testing during nightly builds. It is also possible to model complete sample installations (virtual households) that can be used for training the system maintenance staff or to support the sales process: it is always more convincing to demonstrate a live system than to show off a slide deck. Creating a “white label” solution from a branded product: Our customer EnBW decided to offer a “white label” variant of the EnergyBASE product due to market demand. The main challenge for an OEM product provider is to allow for flexible extensions, customization and customer-specific skinning of the applications. Our aim was to provide mechanisms for customizing the EnergyBASE software and to offer customers a limited or changed set of functions and UIs compared with the original software. Since our software is implemented and structured in OSGi bundles, it is relatively easy to add, replace or remove functionality by deploying or removing bundles. So adding “white label” capabilities to our system was not really a technical challenge, since the underlying architecture directly supports the necessary configurability. To extend the backend for OEM use, we had to extend our system database for multi-client capabilities. This was done by extending the data model to include contract data for the OEM customers. The contract types are used to configure which parts of the software, e.g. which OSGi bundles, are included in the runtime environment for which particular contract. The customer can also order (or implement) some kind of extra functionality besides the preexisting bundles and include it in their contract configuration. In addition, it was also necessary to provide a way to set specific contract information for any particular EnergyBASE device and to integrate the dynamically configured extensions into the system’s management. mPRM provides an appropriate technology (called “control units”) which we used to monitor custom extensions on the backend. As part of the implementation of the EnergyBASE device software, we developed the possibility to mark a bundle within its “Manifest.mf” configuration file as “ManagedByContract”. Such a bundle will only be loaded when the currently applicable contract calls for it. The contract information is managed by a software component called ContractService. This service receives every change in the contract from the backend instantly through an event system and begins to start/stop different bundles according to the new configuration.
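As a hedged sketch of how such a ContractService could look on top of the standard OSGi API, consider the following Java fragment. The “ManagedByContract” manifest header comes from the article; the class name, the contract model as a set of allowed bundle names and the method name are our own assumptions, not MicroDoc’s actual implementation.

import java.util.Set;
import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.BundleException;

// Illustrative sketch only: toggles "ManagedByContract" bundles whenever a new
// contract configuration arrives from the backend.
public class ContractServiceSketch {
    private final BundleContext context;

    public ContractServiceSketch(BundleContext context) {
        this.context = context;
    }

    /** Invoked by the event system whenever the backend pushes a contract change. */
    public void onContractChanged(Set<String> allowedBundles) throws BundleException {
        for (Bundle bundle : context.getBundles()) {
            // Only bundles that opted in via their manifest are touched.
            if (!"true".equalsIgnoreCase(bundle.getHeaders().get("ManagedByContract"))) {
                continue;
            }
            boolean allowed = allowedBundles.contains(bundle.getSymbolicName());
            if (allowed && bundle.getState() != Bundle.ACTIVE) {
                bundle.start();   // hot deploy: no reboot, no manual refresh needed
            } else if (!allowed && bundle.getState() == Bundle.ACTIVE) {
                bundle.stop();    // hot undeploy when the contract no longer calls for it
            }
        }
    }
}

Because OSGi already handles the bundle lifecycle, the sketch needs no restart logic of its own: starting or stopping a bundle immediately adds or removes its functionality from the running system.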
Additionally, we made many changes to the local UI layer and separated these bundles into smaller pieces. From now on, there is one bundle with the standard UI and one additional bundle for each customer. This customer bundle contains not only UI elements but also arbitrary code to implement the required features or change existing behavior. In the case of the UI, it contains just the difference between the standard and the customer-specific CSS/HTML/assets. Illustration 7 demonstrates this kind of change in the web UI, triggered just by changing the contract. Everything happens without any reboot or manual browser refresh. Given the dependencies between bundles shown in Illustration 4, it is easy to imagine that there are several tricky situations to think about while developing this part. The Java/OSGi software platform used for the EnergyBASE project has proven to be stable, flexible and extensible. Homogeneous runtime environments on all system components allow us to distribute code and functionality as needed. In particular, the OSGi component model and its hot-deploy/undeploy capabilities helped us to quickly implement the customer requirement to expand the system from a proprietary offering into an open multi-client platform. Even new requirements, like opening up the system as a hosting platform for domain-specific third-party apps, would not be a significant challenge for this robust architecture. Let’s have a quick look at the requirements of our first real-world OEM customer and the resulting development efforts. di|Thimo Koenig, Senior Software Developer, MicroDoc GmbH | Christine Mitterbauer, Senior Engineer & Project Lead, MicroDoc GmbH li|The EnergyBASE web app should not be available anymore after the installation is done. => Easy. The UI bundle will be marked as “ManagedByContract” and not included in the related contract configuration. The customer wants to develop its own mobile application which displays the data collected by the EnergyBASE every five minutes. => Just implement a mechanism in the customer-specific bundle to send the needed data to the mPRM every five minutes. A kind of observer mechanism is triggered after receiving the data on the mPRM side. After that, just forward it to the customer. Based on a cooperation contract, only two manufacturers should be available in this configuration: a specific solar inverter and one type of battery. Other devices and manufacturers should not be supported. => The same procedure as in point 1. The device adapters are already split into separate bundles for each manufacturer. Just add the allowed devices in the related configuration. The remote support access should be activated all the time. => The original version of the EnergyBASE software does not force the user to enable the option for remote access. In this case, we just have to override this option within the customer-specific bundle. Additional function: Each device should consume and store the stock prices for electricity by using a defined service once per day. => Create a new “ManagedByContract” bundle and add it to the contract configuration. Additional function: The customer should consume energy for free to charge its battery in the time periods with negative prices (based on the data from the previous point) until a specific amount is reached. => The same procedure as described in point five. This real-world example clearly shows that significant deviations between customer requirements and the default implementation can be managed in a very clean way.
We don’t need complex and error-prone if/then/else code to solve the challenge. The split of functionality and responsibility across our bundles, and the overall modular approach, help us manage most of the requirements with very low effort on the development side. All complex enhancements were separated into custom bundles without any need to change the existing code. st|BY: h1|EnergyBASE: IoT-based solution for innovative energy management h2|The market for electrical power has changed substantially during the last couple of years. Central power plants are getting less important, while decentralised power production of regenerative energy becomes more common. The so-called “Energiewende” in Germany, a political program to shut down nuclear power plants and foster solar power and wind farms, has fueled this development and allows private households to sell energy into the public power grid. EnBW, one of Germany’s biggest producers of electrical energy, is driving a project which helps the typical homeowner to migrate from mere energy consumption to becoming a “prosumer” (producer-consumer) with local power production and storage by solar panels and stationary batteries. “EnergyBASE” represents a paradigm shift for EnBW. Rather than just selling power to their customers, EnBW is now offering technology and process know-how for energy management. h3|EnergyBASE system components EnergyBASE device EnergyBASE device system architecture EnergyBASE backend Frontend Applications Virtualization & Testing Backend changes Device changes Conclusion The first OEM customer References: EnBW Energie Baden-Württemberg AG – https://enbw.com | EnergyBASE – https://energybase.com | Oracle Java SE 8 Embedded – https://oracle.com/technetwork/java/embedded | ProSyst mBS Smart Home – https://dz.prosyst.com/pdoc/mBS_SDK_7.5 | OSGi – https://www.osgi.org/ h1|Board of Directors pa|The shareholder-elected members of the board are independent of the company’s senior management and material business contacts. MAURO RICCI – CHAIRMAN OF THE BOARD (FOUNDER & CEO, AKKA TECHNOLOGIES) After a successful career at Renault Automation, Mauro Ricci founded HYSYS in 1984. The company provided industrialization and production technology support to manufacturers as well as productivity improvement consulting services. Mauro founded 3 additional companies to complete the HYSYS suite of services between 1984 and 1999. Anticipating market developments, Ricci merged these four companies to establish the AKKA Group in 1999, offering a holistic R&D service to its clients. NATHALIE BUHNEMANN – MEMBER OF THE BOARD (GROUP CFO, AKKA TECHNOLOGIES) Nathalie joined the AKKA Group in late 2013 after having assisted the Group as an external consultant during the acquisition of MBTech in 2012. Prior to that, for nearly 13 years Nathalie worked at PricewaterhouseCoopers in auditing, consulting and M&A transactions concerning companies of all sizes, spanning every different business sector and geographical region. LARS PETERS – MEMBER OF THE BOARD (DIRECTOR MERGERS & ACQUISITIONS / CORPORATE DEVELOPMENT, AKKA TECHNOLOGIES) Lars (born 1971) joined AKKA Technologies in 2007 and runs the AKKA Group’s operations of Mergers & Acquisitions as well as Corporate Development worldwide. He previously ran international development and M&A for technology-driven, internationally positioned German consulting and innovative product companies. He trained as a business economist at the German University of Passau and INSEAD. ERIK LANGAKER – MEMBER OF THE BOARD Erik Langaker (born 1963) is a full-time technology investor and entrepreneur. He served as a member of the board of Data Respons from November 2011 to April 2015 and was re-elected as Chairman in April 2016. He has extensive experience in building international technology companies through a combination of organic growth and targeted M&A. His experience includes well-known names like StormGeo Group, LINK Mobility and Talkmore Mobile. He currently serves as Chairman of CMR Surgical (UK) Ltd., CAMO Analytics, Brandmaster and Kezzler, and as non-executive director of HitecVision and Resoptima. ULLA-BRITT FRÄJDIN-HELLQVIST – MEMBER OF THE BOARD Ulla-Britt Fräjdin-Hellqvist (born 1954) was elected to the Board in November 2011. She holds an MSc in Engineering Physics from Chalmers and has held leading positions at Volvo Cars and the Swedish Confederation of Enterprise. She has extensive board experience and is currently Chairman of the Board at Karlstad Innovation Park and a board member at several public, private and state-owned companies. Fräjdin-Hellqvist works as an independent contractor and partner. pa|– What we do is so much more than programming some functionality, says Dirk Frobese. We are right at the heart of our customers’ business, working as their trusted partner in a long-term digital transition. What he experienced in the banking sector ran contrary to Dirk Frobese’s education as an electronics engineer. At university he had been trained to logically structure everything he did, in the same way you would analyse the flow of current when designing a piece of electronics. But when he looked at his customers in the banking and insurance world, he said to himself: “What a mess!” – You would think that people specializing in numbers and finance would do their job in a very systematic and logical way, but no. It seemed as if their work had grown by itself over many years with nobody ever asking if this was the right way to do things. There were a lot of people working, but they didn’t necessarily know what the others were doing, and some of them were doing the same thing twice. And of course everything was paper-based. Gradually Dirk Frobese built a company specializing in analysing and improving workflows and processes in the industry. Because, as he points out, what’s the point in replacing an outdated system with new technology unless you optimize the workflow you want to digitalize as well? The Frobese team includes the full bandwidth of skills for that task. You have software developers, you have people with university degrees in economics, and you have people from the banking and insurance world. This diverse skill set ensures that Frobese can deliver on all three of the company’s main business areas: software development, analysis of workflows and processes, and management of large-scale transition projects.
– Typically our customers are well-established and mature companies that want to update their infrastructure and offer new services and applications to their customers. Some of them are inspired by the many emerging Fintech companies that have brought a lot of innovation to the sector regarding automation and delivery of financial services. According to its founder, the strength of Frobese lies in its understanding of both technology and banking. The Frobese team knows how to talk to upper management, which typically consists of people with a background in finance and banking, and it can talk with the IT people of the bank as well. – In fact, we are right in the middle between the two. We consider ourselves translators between them, because often we find that the top managers have difficulties understanding their own IT people. – What makes us special is that we can navigate in both worlds. We can do nice PowerPoint presentations in front of the board of directors, just like business consultants from Deloitte or KPMG. But what sets us apart from them is that we can actually build what we are showing on those PowerPoint slides. Nowadays, when it comes to software development, Frobese is primarily focusing on integration, adding new functionality and developing new APIs for data exchange between different systems. The company also has a broad team of experts in data warehouse solutions and specializes in compliance. The banking and insurance sector is intensely regulated, and Frobese is offering a framework called G2C – governance to compliance – for customers to handle complex compliance issues, for instance regarding identity access management. Regarding workflows and processes, the Frobese team is looking at how banks are handling different tasks. Based on that workflow knowledge, Frobese improves and streamlines them with technology, always acknowledging that digitalization has to start at the beginning of a workflow, not at the end of it. In recent years the project management leg of Frobese has grown significantly. The company is handling large transition projects, many of them with a multi-million-euro budget. These projects are “mission critical” to customers, and Frobese provides all the necessary management skills, be it change management, project management or test management, and in all shapes, be it agile or traditional, whatever is feasible for the task at hand. As Dirk Frobese puts it: – It’s like performing heart surgery. The systems are all interconnected in complex structures and it’s difficult to change anything, because it has repercussions throughout the entire system. However, you must do it to move forward. – On top of that, in this sector transition is difficult and very costly. Just as an example, let’s look at the test management side of a project. When we are building a new system, we are required to build a test system similar to the production system, to be absolutely sure that on D-day everything works exactly as it’s supposed to. We have to prove that the numbers are and will be correct, end of week, end of month and end of year. It’s very complex and costly to build such a shadow production. It takes a lot of effort, but there is no way around it. If anything goes wrong you can’t just say to the authorities in charge of banking oversight that you have, for instance, 20 million Euros you can’t account for. It’s in there somewhere, you just can’t find it right now.
Although Dirk Frobese has built up a thriving business as a technology specialist catering to the banking and insurance sector, primarily in Northern Germany, he feels that part of the industry is falling behind, being too cautious and conservative. In his opinion, too many executives still see IT as a cost instead of a new business opportunity. Therefore they're reluctant to invest in the digital transition necessary to secure the long-term success of their business. And thus you'll find ancient systems still running out there, or as Dirk Frobese puts it: "A lot of old iron". – There is a lot of legacy IT running in basements in some places. We're talking relational databases, we're talking Cobol code. But by now, the guys who are able to maintain these old systems may be in their 70s, and it is close to impossible to find anybody else willing to keep the old code running. Good luck to you if you're trying to find a young programmer to maintain your Cobol code. – I know there are a lot of managers out there ignoring this problem. You still see banks in which upper management doesn't understand IT, and some banks are big and old-fashioned and unwilling to change. But they'll slowly go out of business if they're unable to reinvent themselves. – Banks and insurance companies are IT driven and can't exist without it. Luckily more and more companies in the sector are realizing this, and I'm glad to see a change, because what the sector needs is digital transition. According to Dirk Frobese, the banking and insurance sector started out being quite innovative. That was a few decades ago, when mainframes and personal computers revolutionized data management and workplaces. Since then, many organisations have failed to keep up with technological development and to utilize what technology has to offer. In his view, banking has to become more like the auto industry, with platforms, standard components and well-defined workflows. Banking executives have to learn how to make a VW Golf, meaning a standardized, high-volume product, efficiently produced. Only when they've redesigned their workflows and processes to achieve that can they begin thinking about a Bentley or a Ferrari. Then they can offer handmade and expensive products and make a profit from them, because they have a solid base with standard components, automatic workflows and so on. – Banks need to become more transparent and open to their customers. One of our customers, a large German savings bank, has realized that and has laid out a nice vision for their future business: they want to enable their customers to do almost everything from their sofa at home. Let's say you want a loan to buy a house. You can fill in all the numbers at home, anytime it suits you. Only when you get to the point where regulations require you to meet with your banker do you go to the bank. At that point he already has all the data you've submitted, and together you make a decision. Then you get the money and you can buy your house. That's it. You did most of the work yourself, which reduces cost for the bank as well as being more convenient for you as a customer. – We're developing technology and processes to support this kind of lean, fast, and convenient services. The banks that are successful know that this is the way forward.
The others will slowly vanish if they don't make that transition.

Developing an emergency communication device for disaster relief work

In the immediate aftermath of a natural disaster, with all communication infrastructure destroyed, emergency workers need to send damage reports from their local communities to relief coordinators and other authorities. To do that, the Danish company LinkAiders is developing the Reachi device, a communication device for use under extreme conditions. LinkAiders cooperates with the Danish Red Cross, and the Reachi device will be pilot tested in the Philippines during 2018. The Danish tech consulting company TechPeople, owned by Data Respons, is helping LinkAiders design a solution that will function under the toughest conditions.

BY: Ole Larsen, Software Development Engineer, TechPeople & Thomas Halkier, CEO, NeoCortec

Being both shockproof and watertight, the Reachi device will stand up to extreme weather conditions. On top of being robust, the Reachi device is designed with user friendliness in mind. And the device is extremely energy efficient, so that its battery can last as long as possible in areas where the power supply is shaky or even completely absent. TechPeople has assisted LinkAiders in choosing the right components for the Reachi device, including the rechargeable battery, microcontroller etc., and in stitching it all together to achieve the robustness needed. The project has come quite far, with prototypes having been tested twice in the Philippines already. Also in the Philippines, a pilot with 1,000 devices is scheduled for sometime during 2018. At that time one of the key features of the Reachi device will be tested in a real-life setting: the devices functioning together to form a dynamic and flexible mesh network. As mentioned, the Reachi device has to function even when the communication infrastructure in an area has been destroyed. That requirement, combined with the need for ultra-low power consumption, made LinkAiders turn to the Danish wireless mesh network company NeoCortec and their NeoMesh technology. TechPeople took up the challenge of integrating NeoMesh into the Reachi device. NeoMesh is developed for IoT applications and allows for up to 65K mobile nodes. Moreover, it is able to handle dynamic topologies in real time. As opposed to other technologies in the IoT space, NeoMesh allows any node in the network to dynamically change position. This feature makes NeoMesh ideal for the Reachi use case, as emergency workers equipped with a Reachi device move about and change position constantly, while the devices can still play their part in the network, receiving and transmitting data. This flexibility is possible due to the NeoMesh Speed Routing Protocol. It replaces a central network manager with autonomous intelligent nodes, enabling all network nodes to link to each other automatically and dynamically, forming one single network that works even if nodes change position or are replaced. The NeoMesh routing protocol routes data seamlessly through the network and eliminates the performance concerns created by obstacles in the RF path, nodes being blocked or simply moving around within the network. Weak spots in a real-life network can easily be fixed by just adding another node. Given it has the right network ID, it automatically becomes a part of the network.
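The core idea is simple enough to sketch in a few lines of code. The following is a minimal illustration of the "best next hop" principle behind this kind of dynamic mesh routing; all names and the scoring rule are hypothetical, since NeoCortec's actual protocol and firmware interfaces are proprietary.

```c
/* Sketch: the "best next hop" idea behind dynamic mesh routing.
   All names are hypothetical - each node scores its neighbours and
   forwards a packet to the currently best-looking one. */
#include <stdio.h>

typedef struct {
    int id;
    int link_quality;   /* e.g. derived from RSSI, 0-100 */
    int hops_to_sink;   /* neighbour's advertised distance to the gateway */
    int reachable;      /* updated as beacons come and go */
} neighbor_t;

/* Prefer reachable neighbours closer to the sink; break ties on
   link quality. Returns an index into the table, or -1. */
int best_next_hop(const neighbor_t *nb, int n) {
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (!nb[i].reachable) continue;
        if (best < 0 ||
            nb[i].hops_to_sink < nb[best].hops_to_sink ||
            (nb[i].hops_to_sink == nb[best].hops_to_sink &&
             nb[i].link_quality > nb[best].link_quality))
            best = i;
    }
    return best;
}

int main(void) {
    neighbor_t table[] = {
        { 11, 80, 3, 1 },
        { 12, 55, 2, 1 },   /* closer to the gateway: wins */
        { 13, 95, 2, 0 },   /* best radio link, but currently out of range */
    };
    int i = best_next_hop(table, 3);
    printf("forward via node %d\n", i >= 0 ? table[i].id : -1);
    return 0;
}
```

In a real network the neighbour table would be refreshed continuously from received beacons and link-quality measurements, which is what lets the route adapt as nodes move, disappear or are replaced.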
Unlike other routing protocols, the Speed Routing Protocol does not create the exact route from A to B in advance. Each NeoMesh node maintains knowledge of which of its nearest nodes would be the best choice for the next hop. While data travels through the network, this knowledge of the best next hop is used to decide the route of the data. The knowledge is kept up to date in real time, and the route is adjusted dynamically according to changes in topology and link quality. Utilizing these network features, the developers have been able to design an extremely flexible communication infrastructure. The network is divided into subsections, each consisting of 1,000 Reachi device-equipped emergency workers. For this dedicated long-range deployment, NeoCortec has developed a special radio module with a line-of-sight range of more than 2.5 km. Each sub-network hosts a NeoMesh gateway transmitting data via a satellite uplink. Thus vital information can be sent to relief coordinators even with a country's communication infrastructure completely damaged. To cover the whole of the Philippines, the Reachi deployment plan envisions 1,000 sub-networks, each with its own satellite-enabled gateway. This means a million devices all in all, as each sub-network consists of 1,000 Reachi devices. As the Philippines is one of the world's most disaster-prone countries, LinkAiders have chosen a tough environment to test their device. If successful, it will not only provide help where help is desperately needed: the Reachi system will also set new standards for dynamic and flexible IoT network solutions. TechPeople is a consultancy house within the Data Respons group. The company is based in Copenhagen and specialises in embedded solutions and IT business systems. TechPeople has specialists within hardware, software, mechanical development, project management and product testing. TechPeople's innovative customers range from large international companies to creative start-ups.
Kenneth Ragnvaldsen – Enabling a Digital future with Data Respons

Could you please introduce yourself and describe your role at Data Respons?

My name is Kenneth Ragnvaldsen, I'm a 53-year-old family man and sports lover from Norway. I've been working with tech, software and digitalization since I graduated many years ago. When it comes to Data Respons, I have been the CEO for the last 18 years. Together with fantastic people we have transformed the company from a small national player to one of the fastest growing digital companies in Europe, with 20% annual growth for the last 20 years. Data Respons has been working with IoT – the internet of things – since long before it even had a name. Internally we call ourselves "the change agents", assisting our customers in their digital transformation.

What does "Enabling a digital future" mean for you?

For me, it means that we are supporting the evolution where everything is getting automated, connected, smarter and digitalized. As everything around us becomes more and more focused on data, we are enabling new products, processes and business models that are truly digital. By connecting everything and using data more intelligently (IoT), building smarter products and systems, we can create a more efficient, productive, and sustainable world. For instance, in the future we will most likely not own our own car; we will share it, using our phone just to pick it up and then go wherever we want. Sharing platforms in an interconnected world are going to be the future everywhere around us. In addition to connectivity, I believe processes are another key topic within digitalization. Most processes can be automated and done smarter: digitalizing whatever we're doing manually in the factory, in the office, or even in the car. We have the technology to automate and digitalize almost everything. But to create real value, digitalization of processes requires substantial investments, new ways of working and a set of new internal processes that ensure that your new digital process is up to date and in sync with a dynamic world. Finally, every product and service we have around us needs to be data driven. Every product generates a lot of data, and until recently the world has been more focused on gathering all this data than on understanding and using it. Big data has been a buzzword for years, but I believe that it's the years ahead that will demonstrate what big data entails. As companies better understand the potential and value their data streams can provide, things will change. And they must. As consumers we increasingly expect tailor-made offers and experiences, because we know it's possible with the technology and data available today.

Could you give us a few examples of some of the most successful digital projects that Data Respons has worked on?

Data Respons is involved in all the mega trends that are changing the game in every industry you can imagine. Making data-driven products and services means you need to have expertise and experience from the sensor level to the final app on your mobile. There are lots of examples to draw upon, but let's talk about a few big industries that are embarking on huge digital transformations. A good example is the future of the car. Most cars are stationary 90% of the time, which is not very sustainable, and owning a car is becoming increasingly difficult in big cities. Last but not least, the next generation of urban young people will not want to own a car; they just want the flexibility and the freedom of being able to use a car whenever they need it. For this purpose, we have built an e-mobility solution for one of our biggest clients.
We developed a new cloud-based car-sharing platform where the user can locate an available electric car on their phone, drive it wherever they want and, when they are done, leave it for the next user. The platform impresses with its rapid, automated registration process, app-driven locking and unlocking of vehicles and automated billing of parking fees without any user effort. When talking about mobility, we have a long track record in digitalizing the transportation industry – the future goal is of course to make transportation more like a service. Together with our client, we have built a complete digital fleet management system. Today, between 30 and 50% of truck capacity runs empty. If that number can be reduced, it will generate enormous efficiency gains and enable more sustainability, and you can achieve that with intelligent systems interacting. The platform allows real-time re-routing of trucks and more efficient use of the entire fleet, thus saving cost, protecting investments, and extending the life cycle of hardware components with connectivity and software updates. Last but not least, we have an example of how software and digitalization are making a difference. In Germany we are working on an online energy trading platform for renewable energy. On this platform anyone can sell their own renewable energy – solar, wind, water or biogas – from a minimum quantity of 3,000 MWh. As an energy supplier you can thus be sure that your offer is taken to market in the best possible way, and that you will get the correct market price, without any delay. This platform also indirectly incentivizes more people to invest in small-scale renewable energy by making it possible and easy to sell their excess energy to the market. I could give a thousand more examples!

In what way(s) is Data Respons supporting AKKA in driving the digital transformation of its clients?

Externally, I strongly believe that with Data Respons as a part of AKKA, the Group has become a leading player in industrial digitalization. Hardware and software specialists across industries can support each other and help our customers gain the competitive edge they need in a digitalized world. Internally, we are sharing our 30 years of thinking and working on digitalization. Also, as Data Respons has grown, we have become quite skilled at building agile digital companies, and we are sharing those experiences with the rest of the Group. Sharing best practices on a culture that embraces digital opportunities is valuable for everyone. To succeed in becoming a trusted digital specialist you need to be the best at what you do, by having a lot of highly skilled experts. By offering our culture and know-how, we are contributing to making AKKA a digital powerhouse across every industry.

How do you see AKKA and Data Respons evolving over the next few years?

Our goal, for AKKA and Data Respons, is to be a global and leading player within industrial digitalization. To achieve that goal, it's not enough to have only the digital expertise, nor to have only product engineering know-how. Combining these two skill sets in every dimension is where we're going. We are strengthening our role as the best partner for our customers and bringing real added value to their digital transformation. Read the original interview!
The “One VM” Concept – towards

BY: Arne Vollertsen for Data Respons

It was designed to be the programming interface of the future for the Oracle database, and much effort and vast resources have therefore been put into the development of the Graal virtual machine. According to MicroDoc's managing director Dr. Christian Kuka, this not only ensures an array of interesting features that will make life easier for developers and project managers; it also makes GraalVM future proof. As part of the Oracle database product it has a life cycle of 10+ years. – The polyglot features of the GraalVM give you a whole range of advantages. It can execute software written in different languages in the same controlled environment. It allows you to run Java code and everything that is based on Java byte code, including Kotlin, Scala etc. It also provides a runtime for Python, Ruby, and JavaScript, and has the ability to host C, C++ and even Rust code. – This allows for easy integration of machine learning, neural networks and deep learning applications into your usual business applications. Most AI algorithms are written in Python, but currently, if you want to integrate them into some kind of mainstream software product, you have to rewrite them in Java or C/C++ to use them in your code. So, normally you would have one developer write the prototype application in Python and another rewrite it in whatever language is running on the target platform or device. With GraalVM there's no need for rewriting, because you can run Python, Java and C not only on the same virtual machine but also at the same time in the same process. Moreover, developers can choose the programming tools they are most comfortable with and which fit best for the task at hand. They can even use different languages in one single program, and run everything in the safe environment of the GraalVM. Also, they can debug from one language to another.
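To give a feel for what this looks like in practice, here is a minimal sketch of evaluating other languages from C on GraalVM. It assumes GraalVM's LLVM runtime (Sulong) and the polyglot.h header it ships, with the program compiled by the toolchain's clang wrapper and run with lli --polyglot; treat it as a sketch of the concept rather than a definitive recipe.

```c
/* Sketch: one C program evaluating code from other languages in the
   same process, via GraalVM's LLVM runtime (Sulong) and its polyglot
   API. Assumes compilation with GraalVM's LLVM toolchain and execution
   with `lli --polyglot`. */
#include <stdio.h>
#include <graalvm/llvm/polyglot.h>

int main(void) {
    /* Evaluate a JavaScript expression in-process and use the result in C */
    void *js = polyglot_eval("js", "6 * 7");
    printf("JavaScript says: %d\n", polyglot_as_i32(js));

    /* With the Python language component installed, the same works there */
    void *py = polyglot_eval("python", "2 ** 10");
    printf("Python says: %d\n", polyglot_as_i32(py));
    return 0;
}
```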
– Typically, the most common and well-established languages like Java or JavaScript have the better tools, says Bruno Caballero, head of Virtual Machine Technologies at MicroDoc. – Many people have put a lot of effort into the infrastructure of these languages, and that gives you more quality and stability compared to newer and less mature languages like R or Ruby. – If we take a look at the Java ecosystem, for instance, you have code review tooling, static code analysers, and many other tools that have been there for years. It's all really stable and high quality. So, if you're a young programmer who has used JavaScript to develop a library to do some mathematical processing, you may not even have tested that library properly. Using the GraalVM you can still write code in JavaScript, but use the math libraries from the Java world because they are of proven quality. GraalVM gives you that possibility, and you can do so in every language. You can code in Python but use the libraries from the Java world, or the other way around if you want. – Furthermore, you can use GraalVM as the runtime for your own application-specific language by extending it. You can choose to extend it yourself or let MicroDoc do it for you. Another strong point of the GraalVM is that it allows developers to work together even if their skill sets are very different. You can have a diverse team of experts in Python, Ruby, Java, JavaScript etc. working together on the same product. And even when it comes to long-term projects running over several years, the polyglot features of the GraalVM mean that senior developers can write in their preferred coding language, while new developers are free to choose something else. Senior developers are not restricted to just maintaining old code while the youngsters have a go at the new and fancy stuff; both parties can work in the same world, on equal terms. Also, the GraalVM creates a new possibility for handling legacy code. Currently, if you want to update legacy applications, you can choose to rewrite them from scratch or write software around them to add additional APIs. Rewriting part of the legacy software in a different language while keeping the remaining part unchanged has been very complicated in the past, because you had to integrate code across language and runtime system barriers, for instance code running in a VM with native code running directly on the OS. Now you can just run the legacy code in the GraalVM together with new code. For instance, you can take out one class and write it in Java, then take another piece of functionality and rewrite it in Python. Everything will be running on the same VM, communicating inside the virtual environment. At the same time, using the GraalVM makes decision-making easier when it comes to new projects. – When you start a new project you always discuss which language and which libraries to use, says Dr. Christian Kuka. – That can be tricky, as one project is always connected somehow to a lot of other projects. The polyglot feature of the GraalVM allows you to choose much more freely. So, all in all, the GraalVM has a lot to offer, to developers as well as project managers. It will make embedded development significantly more agile, efficient and future-proof.

Atlassian Suite: tools for every team and more agility in projects

A report on how our specialists at EPOS CAT GmbH use the collaboration tools of the Australian software manufacturer Atlassian in the automotive industry, and what experience they have gained over the years.

BY: Alexander Sowatsch, Solutions Architect, EPOS CAT GmbH

Compared with 20 years ago, we are now seeing entirely different business models emerge, developed by quick start-ups which turn new ideas into marketable products with astonishing speed. Organizations benefiting most from this success are those that are versatile, flexible or agile enough to be capable of uncompromising customer orientation.
A start-up is usually established by the decision makers from scratch, allowing them extensive creative freedom. But what about the many organizations whose decision-makers see the need for agility, but whose structures and culture have developed far from the market? With a collaboration team of about ten people, EPOS CAT GmbH looks after both such customer groups, and over the last few years has implemented a wide variety of projects in which extensive experience has been gained. For the most part, software developers do not need to be told about Atlassian tools and their benefits – they generally use them as a matter of course and without major difficulties. However, for business teams in the fields of design, IT service management (ITSM) or human resources, the implementation requires more explanation and is more challenging. And even here there is a strong culture gap. While young employees often demand up-to-date work tools – and are thus often the drivers behind the introduction of Jira or Confluence – older employees are frequently afraid of, or at least have strong reservations about, these modern tools. Functions we know from social media, which enable the mentioning of colleagues in a text or the sharing of articles without sending an e-mail, require explanation. Needless to say, project teams which are forced to use collaborative tools are doomed to fail. Lack of interest or acceptance, and in the worst case a boycott, all prevent progress. This applies more than ever to the introduction of new working procedures. We have thus found that the success of a project depends entirely on bringing the entire team on board, with all its different roles and diverse (cultural) prerequisites, and on sparking enthusiasm for the implementation in each individual. After all, the product owner's requirements for an application differ from those of the colleague on the support desk, or the tester. The Atlassian suite has the right tool, specific extensions and plug-ins for all parties. In our projects, Confluence has proven itself as an entry point. Depending on the requirements of the project, additional tools from the suite complement the enterprise wiki. Easy to use, it can replace Word and, thanks to its central location, complement or in some cases replace the classic e-mail. The collaboration team at EPOS uses this wiki for working together on documents, creating and maintaining a manual, setting up FAQs or drafting offers and exporting them with one click as a PDF; these are just a few examples of the diverse application possibilities. It may sound simple at first, but there are still hidden risks. The EPOS team has therefore created an overview with several rules for successful project implementation. For example, new accounts require new passwords, or an authorization structure that ensures that only your own team members have access to specific areas. Once again it is important right from the start to reduce inhibitions and minimize hurdles. Why not ask new colleagues to introduce themselves to the team in the wiki instead of writing a portrait for the intranet? Or encourage new colleagues to organize their entire training in Confluence in order to become familiar with the tool and its features. According to Atlassian, Jira is the most widely used project management add-on for Confluence users. While it is well suited to agile processes in software development, Jira is more of an organizational tool to most business teams.
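Much of this organizational use can also be automated: Jira exposes a REST interface that lets surrounding systems create and query tickets. Below is a minimal sketch of creating an issue through Jira's REST API v2 with libcurl; the host, credentials and project key are placeholders, not taken from the projects described here.

```c
/* Sketch: creating a Jira issue via the REST API (v2) with libcurl.
   Host, credentials and project key below are placeholders. */
#include <stdio.h>
#include <curl/curl.h>

int main(void) {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    /* Minimal issue: project, summary and issue type are required fields */
    const char *payload =
        "{\"fields\":{"
        "\"project\":{\"key\":\"DEMO\"},"
        "\"summary\":\"Approval requested for external service provider\","
        "\"issuetype\":{\"name\":\"Task\"}}}";

    struct curl_slist *headers =
        curl_slist_append(NULL, "Content-Type: application/json");

    curl_easy_setopt(curl, CURLOPT_URL,
                     "https://jira.example.com/rest/api/2/issue");
    curl_easy_setopt(curl, CURLOPT_USERPWD, "automation-bot:secret");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, payload);

    CURLcode res = curl_easy_perform(curl);   /* POSTs the JSON body */
    if (res != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
```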
The tool lends itself to restrictive processes that require traceability and transparency or that need to be evaluated; budget approvals or decision-making processes are examples of possible applications. It is also obvious that tools are only put to everyday use if they are simple to operate and their functions are mastered. Accordingly, training and coaching, as well as the subsequent support of the user, are important. The training courses are mostly short, standardized introductions that convey the features by demonstrating best practices and use cases. In addition to these basic training courses, there is also individually tailored coaching on Scrum, Kanban and plug-ins for software development and test management. That was the human side of things. Now let us turn to our experiences with the methodological and technical requirements and hurdles. Last year our collaboration team had the task of equipping a young company specialized in autonomous driving with the complete Atlassian suite. The team consists of approximately 200 people and wants to use the tools to develop software for autonomous driving at levels 1-5. From the beginning, the applications were organized uniformly, with no configuration or customizing. Jira was set up with exactly three different project types and remarkably simple authorization structures, so that each team can look at everything, apart from areas with explicit data protection restrictions. It is kept quite simple and is thus very sustainable, as it can be maintained cyclically. The use of Atlassian applications has also proven itself in companies and corporations that have grown over decades or that have a hierarchical organization, as the next project example shows. For almost ten years, EPOS has been operating a Jira instance in automotive R&D. It was introduced in version 3.14 and is currently maintained by our experts in version 7.3. By March 2018, it had around 600,000 tickets with 1.7 million comments and around 4,000 active users per month. This Jira is controlled centrally and, as a corporate service, is designed so that project-specific configurations can take place. The ratio of projects by business teams to software development is estimated at 80% to 20%, and there are therefore numerous project workflows which map out request evaluation processes or approval processes for the commissioning of external service providers. One of our team members has the sole task of training users and adjusting configurations. In order to keep this technically sophisticated system up to date, an internal billing model has been established by our customer. The research and development departments therefore use their Jira in their corporation as a paid service. This makes it possible to finance lifecycle, support and documentation, and to offer a technically and methodically clean system. Financing through an internal billing model has proven itself many times and is therefore often standard in the enterprise environment. For strategically placed business services, it is particularly important that products are up to date, which is why we are principally concerned with keeping customizing low. It pays off to take into account the follow-up costs for licenses right from the start of the project, to stay as close to the product as possible and to keep up with product updates cyclically. The dangers of customizing include translations into your own language.
In our experience this often leads to confusion and can also lead to amusing or misleading errors. We therefore urgently advise using the tools in the English original. The original wording can be found in the introductions, making it easier to seek help from support or in further workshops. The use of consistent specialist terminology ensures that all team members worldwide can communicate in a comprehensible way. After ten years of Atlassian projects, our experience in short is quite simple: take the whole project team through training and continuous support, motivate them in their individual roles to use the new tools to assist them in their daily work, and keep the applications technically and methodically simple and lean. The EPOS team would be pleased to hear about other experiences and to answer your questions. We will be at the next Atlassian Summit in autumn in Barcelona. If Ingolstadt or Spain are not on your travel plan, local user groups are a good alternative. In German-speaking countries, especially in Germany, there are Atlassian user groups in many cities, listed online at https://aug.atlassian.com/. In Scandinavia there are also local groups and partners with great expertise in Oslo, Copenhagen and Stockholm.

The Internet of Insured Things – IoT platform for preventive monitoring

Topdanmark, Denmark's second largest insurance company, is developing an IoT platform to process and enrich sensor data coming from its customers. Based on the processed data the platform generates reports and trigger warnings, thus adding preventive monitoring to Topdanmark's service portfolio. Data Respons subsidiary TechPeople contributed to the project with expertise and project management.

BY: Kjetil Kræmer, TechPeople & Peter Reetz, Topdanmark

Traditionally, the business of insurance is based on statistical risk models: the price of your car insurance depends on your risk profile – age, job, where you live, etc. But now, with the emergence of the Internet of Things, comes a possibility to add a new layer of proactive prevention to the traditional insurance business model of reactive damage compensation. Together with machine learning algorithms and powerful cloud services, the Internet of Things creates new opportunities: it can unearth valuable information that enables insurance companies to focus much more on prevention. A continuous feed of live information allows them to prevent incidents rather than compensating for them when they have occurred. This trend of combining known risk parameters with new data streams has the potential to change the insurance business significantly. Topdanmark has responded to that challenge by going all the way, developing its own IoT platform.
It is a generic end-to-end visualization platform based on Amazon Web Services and designed with scalability, robustness and security in mind. The platform integrates different types of sensors, like temperature, humidity etc., to process and enrich the sensor data and give meaningful insights based on the sensor type. TechPeople has contributed to the platform in various ways. Initially, Topdanmark's management trusted one of TechPeople's IoT experts with launching IoT as a new business area in the company. He was commissioned to investigate the potential of emerging IoT technologies and how they could be harnessed to meet the requirements of the insurance business. He then gathered a small team of five developers to build the platform, among them another TechPeople expert, this one with a proven track record in Amazon Web Services technology. AWS experts are a rare species in Denmark, but TechPeople managed to attract an AWS specialist from Sri Lanka. The three main components of the framework are taken from the Amazon Web Services portfolio. The edge component of the platform handles the two dominant protocols used when devices push data into a system, MQTT and CoAP. Both protocols are implemented by Amazon in its IoT infrastructure, which means that if a device speaks one of these languages the platform can receive data from it. Traditional M2M communication and REST API calls can be processed as well. This enables the infrastructure to handle a very large number of incoming data streams from IoT devices, while receiving data from many different sources and radio technologies, like NB-IoT, LoRa, Sigfox etc. The second component secures the ability to perform real-time analysis of the data received, regardless of the amount of data coming in. Amazon Kinesis, a technology similar to Apache Kafka, is the component that secures real-time processing of large streams of data. This enables the platform to generate incidents in real time based on data transmitted from the connected devices. As the Amazon cloud infrastructure is extremely scalable, it can perform real-time analysis on a very large scale, e.g. if you have 10 million devices streaming data to the system. The third component is storage. After data has been analysed in real time it has to be stored. Again, this can mean very large amounts of data, and the system uses the Amazon S3 object storage service to secure the scalability and performance needed. After being stored, the data can be used for machine learning or other forms of analysis. Furthermore, the platform can do provisioning: it can register a device, tie it to a specific customer and supply it with a certificate for the system to identify it correctly. One of the advantages of AWS is that it is cloud-only, so Topdanmark avoids having to build a new data centre. The server room is replaced by a configuration file. Everything is virtual. The entire IoT platform is defined in code, and using the CloudFormation tool you can describe and provision all the infrastructure resources in your cloud environment. As an example, if Topdanmark decides to run its IoT infrastructure out of an Amazon datacentre in Ireland, that can be done via CloudFormation. If the development team decides to move the platform to a datacentre in Sweden, it can be done within 20 minutes. This cloud-only setup gives extreme flexibility and scalability, allowing the platform to grow very large if needed.
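To give an impression of the device side of such an edge component, here is a minimal sketch of a sensor pushing a reading over MQTT using the open-source libmosquitto client. The broker address, topic and payload are placeholders; a production connection to AWS IoT would additionally use TLS on port 8883 and the device certificate mentioned above.

```c
/* Sketch: a device publishing a temperature reading over MQTT.
   Broker, topic and JSON fields are placeholders; AWS IoT would
   require TLS (port 8883) and a device certificate. */
#include <stdio.h>
#include <string.h>
#include <mosquitto.h>

int main(void) {
    mosquitto_lib_init();
    struct mosquitto *m = mosquitto_new("cooler-0042", true, NULL);
    if (!m) return 1;

    if (mosquitto_connect(m, "broker.example.com", 1883, 60) != MOSQ_ERR_SUCCESS) {
        fprintf(stderr, "could not connect to broker\n");
        return 1;
    }

    const char *payload = "{\"deviceId\":\"cooler-0042\",\"tempC\":-19.5}";
    /* QoS 1: the broker must acknowledge receipt of the message */
    mosquitto_publish(m, NULL, "sensors/cooling/temperature",
                      (int)strlen(payload), payload, 1, false);
    mosquitto_loop(m, 1000, 1);   /* run the network loop so the message is flushed */

    mosquitto_disconnect(m);
    mosquitto_destroy(m);
    mosquitto_lib_cleanup();
    return 0;
}
```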
Regarding security, data is protected from the moment it enters the gateway and travels further into the system. In addition to AWS' strong layer of security, Topdanmark has its own authentication and authorization protection, ensuring that only known and well-defined data sources are allowed into the system. Everything is encrypted, in transit as well as at rest. On top of that, Topdanmark is GDPR compliant, which among other things means guaranteeing the ability to erase data if a customer requests it. The Topdanmark development team works in Java and Python with a continuous integration / continuous delivery architecture. All code is stored in Git, and the team has a Jenkins infrastructure and a Docker platform that produce their APIs and deploy them to the Amazon cloud. The development team is now working on a number of use cases. One of them is deploying monitoring devices in industrial cooling environments, to ensure correct temperatures and avoid the expensive damage to goods that can result if the cooling fails. The use case has been selected because the owner of the facility as well as the insurance company want to ensure that all the stored goods are uncompromised, so it is a win-win situation for both parties. The team started out experimenting with temperature sensors and hooking them up to the infrastructure to make sure the data could fit into the system. The data comes in two categories. The first is master data, which is static metadata about the customer, devices, addresses, types of sensors, their location, what certificate they are equipped with etc. With that data in place, the system is ready to receive actual measurement data from the sensors. When the sensor data hits the gateway, it goes into the Kinesis infrastructure and from there to a database, ready to be processed and analysed. The analysis shows if it is necessary to raise an event, e.g. when the temperature rises to a critical level. These events can then be handled in various ways, e.g. sending a text message directly to the customer or alerting customer service. On top of that, measurement data can be fed into other systems and used for dynamic pricing, statistics etc. As Topdanmark is a big insurance player in agriculture, smart farming concepts are being developed as well. As an example, the development team has integrated slurry sensors, which enable farmers to monitor the slurry level and additional metrics of their slurry tanks through the portal. Each integrated slurry tank is equipped with a sensor reporting the slurry level and identifying which farmer, farm and unit the sensor is associated with. The platform enables farmers to detect slurry level anomalies by triggering actionable alerts in real time. Also, the Topdanmark development team is collaborating with the people behind the LeakBot device, a sensor designed to detect water leaks in private homes, to integrate LeakBot data into the platform. With the framework in place, the Topdanmark developers are now focusing on adapting it to various use cases presented by the company's business developers. Every new use case demands a number of customizations, like analysis of which events need to be detected, integration of new device types, new protocols etc. Also, the data collected is passed on to the Topdanmark data scientists for them to design machine learning algorithms that extract new knowledge from the data. Looking into the future there are a number of challenges ahead, some of them technical, others not so much. The platform is able to receive data, process it and generate trigger warnings.
But it is not yet able to control different kinds of machinery, like closing a valve to shut down a leaking heating system. Actions like that require a whole other level of validation and verification and will be a future challenge to consider. Also, when insurance companies go into the business of sensor networks and IoT platforms, they need to rethink their customer service and the skill sets required for call centre personnel, and they need to adjust their logistics to handle physical products. All this is far from the core task of developing an IoT platform, yet essential to transforming the promise of IoT into good business.

Software-driven cost cutting and performance optimisation of wind turbines

Wind turbines are fascinating, not only due to their size, but also because of their hi-tech combination of large-scale mechanics, power engineering, sensors, and sophisticated software. Yet the wind turbine business is no different from any other industry, with fierce competition and a strong focus on optimisation and cost cutting. Software plays an important part in this game.

BY: Morten Fogtmann and Anthony Roberts, TechPeople

With all the buzz surrounding sustainable energy, you would think being in the wind turbine business could be compared to winning the lottery. Far from it: the global wind turbine industry is under considerable pressure, with only a handful of manufacturers making a profit. Why? Because governments are gradually reducing their subsidies, expecting renewable energy to become competitive on its own. Consequently, competition is fierce, and development engineers are working tirelessly to find new ways to cut production costs while increasing the output and the durability of each new generation of wind turbines. So, although software developers come at a comparatively high price for wind turbine manufacturers, their work is crucial for cost savings in the long run. They continuously find ways to reduce what the energy sector refers to as LCoE, Levelised Cost of Energy, the summary measure of the overall competitiveness of different energy generating technologies. Software is a key component in this effort. Software is an important tool for optimising cost in the wind industry, on many levels and touching on all parts of a wind turbine: tower, nacelle, hub, rotor, and power electronics. TechPeople is a long-standing partner of the Danish wind industry, and TechPeople software engineers are contributing to numerous projects using software to optimize the design and the output of wind turbines.
However, due to the competitive situation in the sector, many of these projects are subject to non-disclosure agreements. That is why this article focuses not so much on specific projects as on presenting a high-level view of the challenges and achievements in using software as a cost-optimising tool in the wind industry. Eliminating a piece of hardware and replacing it with software is a well-known cost-saving measure in many industries, and it is done in the wind industry as well. As an example, a hardware counter module monitoring the toothed ring to measure the speed and angle of the hub can be replaced by transferring the hardware functionality into FPGA code. In building wind turbine towers, the amount of steel needed is an important cost factor. The tower must be able to cope with the pressure on the rotor blades, which depends on wind speeds in the specific area where the wind turbine is deployed. The blades can be pitched to manage the pressure against the tower, adjusting to different wind speeds and changing the angle of the rotors towards the wind. This reduces wind pressure and subsequently the amount of steel needed in the tower. Furthermore, the pitch system allows the turbine to operate in conditions where a turbine with fixed wings would be forced to shut down to protect itself. The pitch system of a state-of-the-art wind turbine is controlled by a distributed real-time system enabling the wings to pitch very quickly. You can even pitch each rotor blade separately and make it pitch automatically when it passes the tower, to reduce the stress on the tower resulting from the change in wind characteristics as the blade passes. This increases the lifespan of the wind turbine. A distributed real-time system consists of a number of computer nodes interconnected by a real-time communication network. Most distributed real-time systems are embedded in larger systems, like a mobile phone, a car or a wind turbine, interacting closely with their physical environment. The performance of such a system depends not only on the logical results of the computation but also on the exact time at which these results are produced. Many applications are safety or mission critical, so fault tolerance and reliability are crucial features. A distributed real-time system can contribute to optimization and cost saving by enabling a device to react immediately to outside input and thus achieve a higher degree of efficiency and performance. A modern wind turbine is equipped with a vast number of sensors measuring e.g. speed, temperature, vibration, light etc. Without sensors, wind turbines would be less safe, more costly to operate, and would have lifetimes shorter than the 25 years they are expected to run. Furthermore, wind turbine operators rely on accurate data about every turbine and its components to secure operational safety and efficient maintenance. As an example, dedicated sensors can detect sparks produced by faulty machinery, to prevent fire. Also, increased processing capabilities lead to new ways of using sensors, including using them for other tasks than they were designed for. For instance, data from a wind speed sensor can be used to detect ice on the rotor blades. With multi-sensor data fusion you can design sophisticated fault detection systems with a higher diagnostic accuracy than individual sensors can provide, using an array of vibration, acoustic, temperature and other sensors to monitor gearboxes, blades, and other mission-critical parts of the wind turbine.
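To make the fusion idea concrete, here is a minimal sketch of one common technique, inverse-variance weighting, in which noisier sensors contribute less to the fused estimate. The sensor values and variances are invented for illustration and are not taken from any of the NDA-covered projects mentioned above.

```c
/* Sketch: fusing redundant sensor readings with inverse-variance
   weighting - noisier sensors contribute less to the fused estimate. */
#include <stdio.h>

typedef struct {
    double value;     /* measured value, e.g. bearing temperature in C */
    double variance;  /* noise variance of this sensor */
} reading_t;

double fuse(const reading_t *r, int n) {
    double num = 0.0, den = 0.0;
    for (int i = 0; i < n; i++) {
        double w = 1.0 / r[i].variance;  /* weight = inverse variance */
        num += w * r[i].value;
        den += w;
    }
    return num / den;  /* weighted average of all readings */
}

int main(void) {
    reading_t gearbox[] = {
        { 71.8, 0.25 },  /* precise contact probe */
        { 74.0, 4.00 },  /* noisy infrared estimate */
    };
    /* The fused value lands much closer to the precise probe's reading */
    printf("fused: %.2f C\n", fuse(gearbox, 2));
    return 0;
}
```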
Multi-sensor data fusion refers to combining observations from a number of different sensor types to monitor complex machinery, e.g. self-driving vehicles, based on the assumption that evaluating data from disparate sources leads to a more precise result than if the sources were used individually. In a sense, multi-sensor data fusion tries to replicate the work performed by the human brain, weaving diverse input together to form a complex picture, taking advantage of different "points of view". Multi-sensor data fusion is widely used in robotics and can utilize techniques like pattern recognition, artificial intelligence and statistical estimation. It may come as a surprise to many, but wind turbines need electricity to run. Not only do they produce energy, they consume energy as well, so they need a back-up power supply, for instance for starting up again after a shutdown due to strong winds. Restarting a modern wind turbine is a complex task: you have to re-calibrate the wind turbine and synchronize it to the grid before releasing the brake. Offshore turbines in particular have to be designed to use as little power from their diesel generators as possible, to make the fuel supply last as long as possible and keep expensive re-filling to a minimum. Software is used to optimise the energy consumption of the turbines. Manufacturing wind turbines is a global business, so manufacturers spend considerable manpower adjusting their products to local legal and environmental requirements. This also goes for the lights on top of the turbines. They have to be adapted to local requirements regarding light intensity, colour and frequency. In some places the lights have to blink 24/7, elsewhere only at night or only when airplanes are approaching. Detecting airplanes requires a radar system, and similar measures are taken for "bat detection". Bats and wind turbines don't go well together, and in many countries bats are protected animals. To prevent them from colliding with the rotor blades, a bat detecting system stops the wind turbine when bats are detected in the vicinity. The reason for choosing these seemingly extreme measures is that a bat detection system makes it possible to deploy wind turbines in areas where it would otherwise be prohibited. Wind turbines can be a lethal threat to bats. Not only do they risk direct collision, but the high air pressure differences in the area surrounding the turning blades can also cause internal injuries. One form of bat protection strategy is to limit the operating period of the turbine based on the time of day and year, as research shows that bats are most active within two hours of sunset and in temperatures between 19 and 21 degrees. The disadvantage is a reduction in operating time and thus power production. Another approach is to place hyper-sensitive microphones around the turbines to detect the ultrasound signals bats use to orient themselves and forage. The ultrasound signals are then analysed, and according to the specific bat species identified and its behavioural pattern, the operation of the wind turbine shifts to bat mode, e.g. changing the rotor speed, changing the pitch angle of the rotor blades etc. In this way the turbine can still produce energy while reducing the risk of bat encounters. Also, bat deterrent systems are being developed that use ultrasonic speakers to discourage bats from approaching operating wind turbines. The speakers produce ultrasonic sound in a range of frequencies that negate the bats' own signals.
Bats send out ultrasound signals and use the reflections of these signals to navigate and find insects etc. The deterrent system sends out a signal that masks the bat's return signal, so that it cannot locate any prey in the airspace surrounding the turbine rotors. Nothing indicates that the competition in the wind sector will diminish in the coming years. So the world will probably run out of fossil fuels before software engineers in the wind energy business run out of challenges. Software will continue to play a crucial role in cost cutting and optimisation, including utilizing machine learning and artificial intelligence, together with an ever-increasing number of sensors for control and monitoring. Also, with wind turbines getting larger and larger, and offshore wind farms moving further away from land, much remains to be done.

Smart farming: automated precision feeding station promotes sustainable livestock production

The Data Respons subsidiary, Data Respons R&D Services, has assisted TKS Agri, a Norwegian producer of agricultural solutions, in optimising their precision feeding solution, FeedStation™.

This type of feeding machine is one of many important innovations in sustainable farming. Agricultural precision feeding systems can reduce the amount of methane and nitrogen lost in production, directly reducing the impact of farming operations on the environment. By neither overfeeding nor underfeeding the stock, the farmer uses the correct amount of feed, less feed goes to waste, and the animals stay healthy. TKS Agri is a complete provider of feeding systems for livestock, ranging from manually operated machines to fully automated systems. Specialists from Data Respons R&D Services helped TKS Agri with software, hardware, FPGA and connectivity expertise to ensure uninterrupted operation and to renew the solution to make it more user friendly.
Cube is a great tool for narrowing down the possible choices when selecting a microcontroller. It lets the user choose a microcontroller based on the required peripherals, family of microcontrollers, package type, flash size, RAM size, minimum number of input/output pins and so on. For getting started quickly and easily with prototyping, a board support package and a lot of example projects are readily available for the three IDEs Keil uVision, IAR Embedded Workbench and Atollic TrueStudio. After selecting the correct microcontroller for the application, the user interface presents four main views. The following is a brief explanation of these four views. The pinout view is shown in figure 1. It includes a visualization of the microcontroller and its pins and has a vertical side toolbar. This is where the required peripherals are selected. Peripherals can be selected either by choosing a pin that supports the peripheral directly or by selecting the particular peripheral from the toolbar. The tool will automatically assign the peripheral to the appropriate pins. When using the toolbar, it will resolve pin conflicts by moving a conflicting peripheral to unused pins that also support it. Sometimes this automatic conflict resolution might not be wanted, and it is therefore possible to lock a peripheral to a pin if necessary. Neither way of selecting peripherals allows a combination that is not supported by the selected microcontroller. This view focuses on the pinout, and therefore only settings related to each pin's possible configuration are set. For example, one can choose to enable SPI in either full duplex, receive only or transmit only. All three selections enable the SPI peripheral, but they also affect which pins are utilized. In this view, it is not possible to set the baud rate, data size, endianness, prescaler, clock polarity, etc., as they do not affect the pinout. Another example is enabling an ADC and an ADC channel in the pinout view: this affects the pinout, but the selected sampling rate, data conversion mode, resolution, etc., do not. The pinout view lets the developer enable and configure peripherals that affect the pinout. The only exception is that some middleware libraries can be enabled from this view even though they do not change the pinout; an example is FreeRTOS, which is a real-time operating system. More settings related to the peripherals can be chosen in the configuration view. The configuration view shows all the enabled peripherals and middleware libraries. In addition, it is possible to configure watchdog functionality and DMA transfers, enable the different interrupts and set additional clock and reset behavior. In this view, the configuration of the peripherals is done. It is possible to set the baud rate, data size, endianness, prescaler, clock polarity, etc.
Clock configuration view

All clock configuration should be done in the clock configuration view. This view is shown in figure 3 and provides a good overview of the clock tree. It enables the developer to choose between external and internal clock sources. Clock frequencies in the clock hierarchy are automatically calculated by setting the oscillator frequency used and adjusting the many prescalers and PLLs. Invalid clock configurations are clearly shown in red, which makes it easy to discover and fix any incompatibilities.

Power consumption calculator view

This view is used to calculate approximately how much current the microcontroller, with the selected peripherals and settings, will draw in different modes and on average. Control of the current consumption is important for low-power applications, and this tool greatly helps the developer. The view lets the developer set parameters such as supply voltage, clock frequency, run/sleep/standby mode, RAM voltage, enabled peripherals, etc. In low-power applications the microcontroller will often sleep most of the time, only waking up periodically to check for events or when an interrupt occurs. This means that the microcontroller most likely has at least two modes with very different parameter values. The power consumption view therefore makes it possible to add a sequence of steps, where one step equals a mode held for a set time interval. When all the steps corresponding to the different modes have been added, a graphical representation is created. An example of this is shown in figure 4. In addition, most common batteries can be selected from a list, and the approximate battery life on a full charge can be estimated.

Generate project reports

Reports can be created with a single click and contain much useful information, such as:

- The selected microcontroller.
- The version of Cube and of the firmware package used to generate the code.
- The compiler used and its version.
- An overview of the microcontroller pins as shown in the pinout view.
- A pin list with mappings to package pin number, internal pin number/port, the peripheral active on each pin and a user-selected label.
- All power consumption calculations done in the power consumption calculator view.

Code generation

After selecting the required peripherals in the pinout view and configuring the clocks and peripherals, it is possible to generate the initialization code. Not only is the code generated, but all necessary project files for a chosen IDE as well. In addition, template files are provided in the project, which give some guidance on how one might structure the code. In the generated files there are commented sections where the custom code should be inserted, as illustrated in the sketch below. It is very important that the code the developer writes in generated files is placed within these sections; otherwise it will be gone when the project is regenerated. Even a minor change, like changing the baud rate of a peripheral, is best done in Cube and not in the generated source file itself. Not because this is any faster (or especially slower), but because it prevents unnecessary errors and keeps the generated documentation updated. In addition, if more major changes are to be done at a later time and the Cube project is not up to date with the generated code, one must remember to re-apply the changes made directly to the source files. Always keep Cube updated!

Cube uses the STM32 hardware abstraction layer (HAL) library to create the initialization code, which makes it a lot easier to migrate between STM32 microcontrollers if needed. By default, all generated code is put in one header and one source file. By adjusting the settings in the top toolbar, the generated code can instead be separated into headers and source files for each type of peripheral to get a better overview. This does not include middleware libraries.
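To make the regeneration rule concrete, here is a sketch of how the protected sections look in a Cube-generated main.c. The exact marker labels and init function names vary with the Cube and firmware package versions, so treat this as an illustration rather than verbatim Cube output:

    /* Excerpt of a Cube-generated main.c. Only code placed between the
       USER CODE markers survives when the project is regenerated. */
    int main(void)
    {
      HAL_Init();               /* generated: resets peripherals, initializes the HAL */
      SystemClock_Config();     /* generated from the clock configuration view */
      MX_GPIO_Init();           /* generated from the pinout view */
      MX_SPI1_Init();           /* generated from the configuration view */

      /* USER CODE BEGIN 2 */
      /* application setup goes here; this block is kept on regeneration */
      /* USER CODE END 2 */

      while (1)
      {
        /* USER CODE BEGIN 3 */
        /* application loop goes here */
        /* USER CODE END 3 */
      }
    }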
How to use the generated code and HAL

As mentioned, the STM32 HAL library is used, so after the code is generated everything should be ready to use the HAL library to control the peripherals. The syntax of the HAL library is shown in the table below (PPP stands for a peripheral name such as TIM, UART or ADC):

HAL_PPP_Function()            Function calls that control peripheral behaviour
__HAL_PPP_MACRO()             Macros that read or change peripheral register values
__HAL_RCC_PPP_CLK_ENABLE()    Macros for peripheral clock gating and reset control

It is the function calls, shown first in the table, that should be used to control the behaviour of the peripherals. To start a basic timer, HAL_TIM_Base_Start() can be called, and to send data over UART with DMA one could call HAL_UART_Transmit_DMA(). The second line in the table shows macros that help the developer change register values. The reason to use these macros is that they are more portable and reduce the chance of setting the wrong bit. Depending on the application, these macros must sometimes be used, for example when changing the ADC sampling rate between two frequencies while the application is running. The macros on the last line can typically only enable and disable clocks to peripherals and control functionality related to reset.
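The following sketch ties the three rows of the table together and also shows a callback definition. The handles htim3 and huart2, the receive buffer and the echo logic are illustrative assumptions; the HAL functions, macros and the callback name itself come from the HAL library:

    #include "stm32f4xx_hal.h"        /* device family header; STM32F4 assumed */

    extern TIM_HandleTypeDef  htim3;  /* handles assumed to be set up by the */
    extern UART_HandleTypeDef huart2; /* Cube-generated MX_..._Init() functions */

    static uint8_t rx_byte;

    void start_peripherals(void)
    {
      __HAL_RCC_GPIOA_CLK_ENABLE();              /* last table row: clock gating macro */
      HAL_TIM_Base_Start(&htim3);                /* first row: start a basic timer */
      __HAL_TIM_SET_PRESCALER(&htim3, 47);       /* second row: register-level macro */
      HAL_UART_Receive_IT(&huart2, &rx_byte, 1); /* first row: interrupt-driven receive */
    }

    /* The HAL prototypes this callback as a weak function, so defining it is
       enough; the generated interrupt handler dispatches to it automatically. */
    void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
    {
      if (huart->Instance == USART2)
      {
        /* echo the byte back, assuming a UART TX DMA channel was enabled in Cube */
        HAL_UART_Transmit_DMA(huart, &rx_byte, 1);
        HAL_UART_Receive_IT(huart, &rx_byte, 1);  /* re-arm reception */
      }
    }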
Pros, cons and experiences

Cube is a young piece of software, with its initial release in February 2014, and it still has some bugs. A bug can be something as trivial as a missing or repeated line in the generated code, or an error in the user interface preventing the use of an actually valid setting. Despite this, it saves a lot of time, and with six minor version updates released in one year it is rapidly getting better.

It is easy to get a good overview of the peripherals used and which ones are still available. Making changes in the configuration is fast and easy, but keeping the default structure generated by Cube will probably save a lot of time when the code has to be regenerated. When new code is generated, one should check the initialization code that has been changed or added; a quick look can reveal missing lines or wrong settings that might otherwise be hard to detect. It is possible to add custom code to the initialization code that will not be removed when the code is regenerated, which makes it possible to correct many code generation errors. All necessary callback functions are already prototyped, and only the definitions have to be written. If more than one peripheral or similar source can trigger an interrupt, there is already a handler determining where the interrupt came from, and the appropriate callback function is called. If the macros provided by the HAL library are used, the data sheet and reference manual are your friends to avoid errors.

It should be easy to set up the project in IDEs other than the three officially supported ones; Atollic TrueStudio uses the GCC compiler, which is also supported by several other IDEs. Cube currently does not support generation of flash initialization code for enabling reading and writing to flash, but this might be supported in the future. The pinout view automatically checks for conflicts and resolves them if possible, and a pin list can be generated that is useful for hardware developers designing the custom hardware. Even if the tool is not used for code generation, it is useful for setting the pinout and determining whether the selected combination of peripherals is valid.

The normal workflow is to set the required peripherals, configure the clocks and then configure the peripherals. The power consumption calculator is optional, but if used it should be used when everything else is set and configured. Cube uses the HAL library and therefore ensures that the code can easily be ported to any other STM32 with minor effort, as long as the required hardware functionality is present.

BY: Patrick Hisni Brataas, Development Engineer, Data Respons