Initiatives such as digital transformation and becoming a data-driven organization are increasing the importance of data within organizations. Organizations want to do more with data, and their existing IT landscape is often inadequate, so something needs to change. Many are looking for solutions based on data lakes, data hubs and data factories, but a data architecture that is also well worth considering is the data mesh. While data warehouses, data lakes and data hubs are primarily centralized, monolithic solutions, the data mesh is a distributed solution. The data architecture is not broken down based on the nature of the application, but based on business domains. The division is no longer transactional systems versus analytical systems. As a result, traditional responsibilities within an IT organization will shift dramatically. For example, single-domain engineers responsible for transactional systems will also become responsible for the interfaces that provide analytical capabilities to the organization.
Streaming Analytics (or Fast Data processing) is becoming an increasingly popular subject in financial services, marketing, the Internet of Things and healthcare. Organizations want to respond in real time to events such as clickstreams, transactions, logs and sensor data. A typical streaming analytics solution follows a ‘pipes and filters’ pattern that consists of three main steps: detecting patterns in raw event data (Complex Event Processing), evaluating the outcomes with the aid of business rules and machine learning algorithms, and deciding on the next action. At the core of this architecture is the execution of predictive models that operate on enormous amounts of never-ending data streams.
But with these opportunities comes complexity. When you switch from batch to streaming, time-related aspects of the data suddenly become important. Do you want to preserve the order of events and have a guarantee that each event is processed exactly once? In this talk, I will present an architecture for streaming analytics solutions that covers many use cases, such as actionable insights in retail, fraud detection in finance, log parsing, traffic analysis, factory data, the IoT, and others. I will go through a few architectural challenges that arise when dealing with streaming data, such as latency issues, event time versus server time, and exactly-once processing. Finally, I will discuss some technology options as possible implementations of the architecture.
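To make the ‘pipes and filters’ pattern and the event-time aspect concrete, here is a minimal, framework-free sketch in Python of the three steps (detect, evaluate, decide). The fraud-style use case, the three-events-per-window rule and all names (Event, detect_pattern, score, decide) are illustrative assumptions and do not represent any specific product or the exact architecture presented in the talk.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Iterable, Iterator, Optional

@dataclass
class Event:
    key: str                # e.g. an account or device identifier
    event_time: datetime    # when the event happened at the source (not when it arrived)
    payload: dict           # raw attributes (amount, location, ...)

def detect_pattern(events: Iterable[Event], window: timedelta) -> Iterator[list[Event]]:
    """Filter 1 (CEP): group events per key and emit bursts that fall within one event-time window."""
    buffer: dict[str, list[Event]] = {}
    for ev in events:
        bucket = buffer.setdefault(ev.key, [])
        # keep only events that are still inside the event-time window
        bucket[:] = [e for e in bucket if ev.event_time - e.event_time <= window]
        bucket.append(ev)
        if len(bucket) >= 3:          # illustrative rule: three events in one window is a burst
            yield list(bucket)

def score(burst: list[Event]) -> float:
    """Filter 2: evaluate the detected pattern with a business rule (a trained model could be called here instead)."""
    total = sum(e.payload.get("amount", 0) for e in burst)
    return min(1.0, total / 10_000)

def decide(risk: float) -> Optional[str]:
    """Filter 3: turn the score into a next best action."""
    if risk > 0.8:
        return "block_and_alert"
    if risk > 0.5:
        return "request_verification"
    return None

# Wiring the filters together (in practice a stream processor does this):
# for burst in detect_pattern(event_stream, window=timedelta(minutes=5)):
#     action = decide(score(burst))
```

In a real deployment these filters would run inside a stream processing engine that also takes care of ordering, watermarks and exactly-once guarantees.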
• Their journey to a better, faster, more structured and scalable data and information management environment using Datavault Builder
• How the model-driven development platform of Datavault Builder led to outstanding time-to-market results
• How the client increased transparency with Data Lineage and how the deployment module enabled a flawless deployment pipeline
Virtually all organizations have experience in developing traditional BI applications, such as dashboards and reports for employees. However, the development of Embedded BI applications that are used by customers and suppliers as part of online applications is still unknown territory. Customer-facing BI applications can be used, for example, to speed up time to market, increase customer satisfaction and achieve greater reach.
These types of applications require a different development approach and the use of different technologies. In this session, the various building blocks are discussed, such as web embedding, secure custom portals, SaaS/COTS embedding, embedding of real-time and interactive decision points, and action-oriented dashboards. The importance of scalable cloud-based database servers such as Google BigQuery, Amazon Redshift, Snowflake and Starburst will also be discussed.
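As an illustration of the ‘secure custom portal’ building block, the sketch below shows one common pattern: the portal backend generates a short-lived, customer-scoped embed URL so no BI credentials ever reach the browser. The endpoint, token claims and shared secret are hypothetical assumptions; commercial BI platforms each provide their own signing or embed-token mechanism.

```python
import time
import jwt   # PyJWT; assumption: the BI platform accepts JWT-signed embed URLs

EMBED_SECRET = "replace-with-shared-secret"     # hypothetical secret shared with the BI platform
PORTAL_BASE = "https://bi.example.com/embed"    # hypothetical embed endpoint

def build_embed_url(dashboard_id: str, customer_id: str, ttl_seconds: int = 300) -> str:
    """Create a short-lived, customer-scoped embed URL for a customer-facing portal."""
    claims = {
        "dashboard": dashboard_id,
        "customer": customer_id,                 # used for row-level filtering on the BI side
        "exp": int(time.time()) + ttl_seconds,   # token expires after a few minutes
    }
    token = jwt.encode(claims, EMBED_SECRET, algorithm="HS256")
    return f"{PORTAL_BASE}/{dashboard_id}?token={token}"

# The returned URL is typically placed in an <iframe> inside the customer portal.
print(build_embed_url("sales-overview", "customer-42"))
```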
Topics:
Attempts to set up a data warehouse within the pension insurance company have had varying degrees of success. A large-scale quality survey of the relevant administrations in 2009 created a new urgency to work with data, with data integration taking centre stage.
Since, from a business point of view, such an investment must have a long lifespan – there were no (large-scale) cloud solutions yet – sustainability was one of the design principles. Besides this principle, flexibility, reliability, availability and repeatability were also important design principles. The design was created by the team that had to realise the environment. In a period of six weeks, a prototype was built using various methods and techniques. This resulted in a ‘grand high level design’ for the data model and the technical solution for the environment, in which an iterative development strategy was chosen.
After the realisation of the quality survey and the associated in-control statement, the environment was further expanded. This was important for executing portfolio analyses and in-depth quality analyses, for operational control, and as the foundation for the migration process and data pipeline used to select and (commercially) migrate customers to the new product propositions. In 2018, the same data environment was further expanded for the analysis and implementation of new legislation. Now this environment is being used for data science activities. Thus, this environment has celebrated its ten-year anniversary and has been able to provide the data needed to achieve many strategic, tactical and operational goals.
During the session, Mark van der Veen will share his experiences on how to get value from the initial set-up of the data environment.
In the past few years many organizations invested in experimenting with Data Science, predictive models and Analytics. Often we see these models used as point-in-time solutions in the business, with little attention to support and a lot of manual work to be done. The next challenge is to move from experimenting to operationalization. How do you move the models and related data science activities to a governed IT data and application landscape? We do this to get an even more widely distributed and, more importantly, operationalized data environment within the organization. In addition, this will help to address the ambition to become even more data-driven.
In this session we will show you the journey from ambition to operationalization, the architecture, a few important decisions to make, and the way to migrate to such an environment. We will address the unique challenges, share plenty of hands-on experience, and discuss our lessons learned.
Many companies today are looking to migrate their existing data warehouse to the cloud as part of a data warehouse modernisation programme. There are many reasons for doing this, including the fact that many transactional data sources have now moved to the cloud, or that the capacity of an on-premises data warehouse has been reached with another project looming. Whatever the reason, data warehouse migration can be a daunting task because these systems are often five or ten years old. A lot may have happened in that timeframe and so there is a lot to think about. There are also a lot of temptations and decisions you can make that increase the risk of failure. This session looks at what is involved in migrating data warehouses to the cloud environment, what options you have and how a migration can cause changes to the data architecture.
AI is everywhere. Its early invasion of everyday life – from dating to policing – has succeeded beyond its proponents’ wildest dreams. Analytics and machine learning built on “big data” feature daily in the mainstream media.
In IT, BI and analytics vendors are adding artificial intelligence to enhance their offerings and tempt managers with the promise of better or faster decisions. So, how far could AI go? Will it take on a significant proportion of decision making across the entire enterprise, from operational actions to strategic management? What would be the consequences if it did?
In this session, Dr. Barry Devlin explores the challenges and potential benefits of moving from BI to AI. We explore its use in data management; its relationship to data warehouses, marts, and lakes; its emerging role in BI; its strengths and weaknesses at all levels of decision-making support; and the opportunities and threats inherent in its two main modes of deployment: automation and augmentation.
What you will learn:
In September 2018, the development of the OSDU Data Platform was started by The Open Group. The OSDU Forum started as a standard data platform for the oil and gas industry, which will reduce silos and put data at the center of the subsurface community. All types of data (structured and unstructured) from oil & gas exploration, development, and wells are loaded into this single OSDU Data Platform. The data is accessible via one set of APIs; some data-type-optimized APIs will be added later. The platform enables secure, reliable, global, and performant access to all subsurface and wells data. It acts as an open, standards-based ecosystem that drives innovation.
On March 24, 2021, the first operational release was launched on the public cloud platforms from Amazon, Google, and Microsoft. Later in 2021, oil & gas production data and data from new energy sources, such as wind and solar farms, hydrogen, geothermal, and CCUS, will be added to this single, open-source-based energy data platform. The OSDU Data Platform acts as a system of record and is therefore the master of that data. This session discusses the challenges involved in setting up such a challenging project and platform and the lessons learned along the way.
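Purely as an illustration of what ‘one set of APIs’ can mean in practice, the sketch below posts a free-text search to a hypothetical query endpoint and lists the returned records. The URL, payload fields and token handling are assumptions made for this example only and do not reflect the actual OSDU API contract.

```python
import requests  # assumption: a standard HTTPS/JSON API secured with a bearer token

PLATFORM_URL = "https://osdu.example.com"      # hypothetical deployment URL
ACCESS_TOKEN = "replace-with-oauth-token"      # obtained via the platform's identity provider

def search_records(free_text: str, limit: int = 10) -> list[dict]:
    """Illustrative search call: one generic query endpoint for all data types."""
    response = requests.post(
        f"{PLATFORM_URL}/api/search/query",     # hypothetical endpoint, not the real contract
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"query": free_text, "limit": limit},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("results", [])

for record in search_records("well logs North Sea"):
    print(record.get("id"), record.get("kind"))
```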
Topics covered include:
1. Automating an Oracle to Snowflake migration project.
2. Managing a Data Vault architecture that is growing in size and complexity.
3. How WhereScape Data Warehouse Automation performs in comparison to Allianz’s homegrown solution.
In an increasingly complex and interconnected world, there is a growing need for autonomous systems that go beyond the capabilities of human operators. Swarm Intelligence systems rely on emergent intelligence to solve problems. Decisions flowing from these intelligent systems depend on the data in your organization. Implementing data quality leads to better data. But do you know whether the data is fit for purpose? Is the data used in the appropriate context within your BI systems?
A data strategy is needed to enable your organization to make fact-based decisions through data-literate employees supported by intelligent systems. Gamification and data literacy are the means to explain your data strategy. Peter Vieveen will guide you through the process of defining such a data strategy using the Data Management Body of Knowledge and will explain how to use gamification and data literacy to explain the data strategy to your organization.
Around 2015, companies in the Netherlands started migrating their on-premises data warehouses to the public cloud. When doing this, it is important to realise that it is not always logical for the deployed physical data models to remain the same as they were in the on-premises systems. The new technological possibilities not only allow new approaches, but can also turn familiar choices within existing physical modelling techniques such as Dimensional modelling (Kimball) or Data Vault into anti-patterns, or simply require a slightly different approach to implement these techniques. The goal of this session is to give insight into the (im)possibilities in this area, looking at how this can be practically tackled within solutions like Snowflake or Google BigQuery.
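A minimal sketch of one such shift, assuming a hypothetical fact_sales table: classic index-driven tuning has no direct equivalent on platforms like BigQuery or Snowflake, where partitioning and clustering take over that role. The table, columns and the generic DB-API cursor are illustrative assumptions, not an excerpt from the session.

```python
# On-premises style: a fact table tuned with indexes, which have no equivalent
# in BigQuery or Snowflake (the index statement simply cannot be ported).
ON_PREM_STYLE = """
CREATE TABLE fact_sales (sale_id INT, customer_key INT, sale_date DATE, amount NUMERIC);
CREATE INDEX ix_fact_sales_date ON fact_sales (sale_date)
"""

# Cloud-native alternative (BigQuery syntax): let the engine prune data
# via partitioning and clustering instead of maintaining indexes.
BIGQUERY_STYLE = """
CREATE TABLE sales.fact_sales (
  sale_id INT64, customer_key INT64, sale_date DATE, amount NUMERIC
)
PARTITION BY sale_date          -- partitions pruned automatically on date filters
CLUSTER BY customer_key         -- co-locates rows on the most common join/filter key
"""

def deploy(cursor, ddl: str) -> None:
    """Run each statement through any DB-API compatible cursor (illustrative helper)."""
    for statement in ddl.split(";"):
        if statement.strip():
            cursor.execute(statement)
```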
Examples of physical data modelling topics we will cover:
Session highlights
On 27 February 2020, the SARS-CoV-2 virus was detected for the first time in a patient in the Netherlands. The importance of high-quality data from the entire care chain in fighting the pandemic quickly became clear. Every organization in the Dutch healthcare chain is involved: GGD, VWS, RIVM, laboratories, hospitals, care institutions, GPs, patient federations, ICT suppliers, and so on.
This presentation provides a glimpse into the RIVM’s data kitchen. What were the challenges in collecting all the ingredients, seasoning them and serving them on the (dash)plate? An important piece of kitchen equipment was the pressure cooker. This session focuses on the experiences gained while developing the Corona dashboard and the required systems under high pressure, with the whole population of the Netherlands watching.
Sustainable data architectures are needed to cope with the changing role of data within organizations and to take advantage of new technologies and insights. A sustainable data architecture is not a data architecture that only supports current and upcoming requirements, but one that can survive for a long time because it is easy to adapt and expand. As requirements for data usage change, a sustainable data architecture should be able to adapt without the need for major redevelopment and rebuilding exercises.
No magical products exist for developing sustainable architectures. Several product types are required to achieve this. Other design principles will also have to be applied and certain firm beliefs will have to be sacrificed. This session examines the requirements for sustainable data architectures and how these can be designed and developed.
As the pandemic has proven, digital transformation is possible—and at speed. Many more aspects of business operations have moved online or have enabled remote or no-touch access. This evolution has generated another growth spurt of “big data”, from websites, social media, and the Internet of Things (IoT). With new customer behaviour likely to stick after the pandemic and working from home remaining an important factor, novel approaches to decision-making support are an increasingly important consideration for many organisations.
In this context, the recent growth in interest in and focus on the use of artificial intelligence (AI) and machine learning (ML) across all aspects of business in every industry and government raises important questions. How can AI/ML be applied at management levels in support of decision making? What new possibilities or problems does it present? How far and how fast can businesses move to benefit? What are the downsides?
The seminar
AI, combined with big data, IoT and automation, offers both the threat and the promise of revolutionising all aspects of IT, business and, indeed, society. In this half-day session, Dr Barry Devlin explores what will enable you to take full advantage of emerging AI technology in your decision-making environment. Starting from the familiar worlds of BI and analytics, we position traditional and emerging BI and analytics tools and techniques in the practical application of AI in the business world. Extrapolating from the rapid growth of AI and IoT in the consumer world, we see where and how it will drive business decision making and how it is likely to impact IT. Based on new models of decision making at the organisational and personal levels, we examine where to apply augmentation and automation in the roll-out of AI. Finally, we address the ethical, economic and social implications of widespread adoption of artificial intelligence.
Learning objectives
Intended for you
This seminar is of interest to all IT professionals and tech-savvy businesspeople directly or indirectly involved in the design, delivery, and innovative use of decision-making support systems, including:
We will send the course materials and meeting instructions well in advance, as well as the invitation with a hyperlink to join us online. The seminar starts at 09:00 and lasts until 13:00. The online meeting will be open at least half an hour earlier, so please log in early to check your sound and video settings beforehand.
Limited time? Join one day & conference recordings
Can you only attend one day? It is possible to attend only the first or only the second conference day, or of course the full conference. The presentations by our speakers have been selected in such a way that they can stand on their own. This enables you to attend the second conference day even if you did not attend the first (or the other way around). Delegates also gain four months’ access to the conference recordings, so there is no need to miss out on any session.
“Longer sessions created room for more depth and dialogue. That is what I appreciate about this summit.”
“Inspiring summit with excellent speakers, covering the topics well and from different angles. Organization and venue: very good!”
“Inspiring and well-organized conference. Present-day topics with many practical guidelines, best practices and do's and don'ts regarding information architecture such as big data, data lakes, data virtualisation and a logical data warehouse.”
“A fun event and you learn a lot!”
“As a BI Consultant I feel inspired to recommend this conference to everyone looking for practical tools to implement a long term BI Customer Service.”
“Very good, as usual!”