Mark Harman 

Research Scientist, Meta Research


Bio

Mark Harman is a full-time Software Engineer at Meta Platforms, working in the Simulation-Based Testing team. The team has developed and deployed both the Sapienz and WW platforms for client- and server-side testing. Simulation-based testing is helping to tackle challenging technical problems in software reliability, performance, safety, and privacy. These simulation-based testing technologies have been deployed to test systems of over 100 million lines of code, relied upon daily by over 2.9 billion people for communications, business, social media, and community building. Sapienz grew out of Majicke, a startup Mark co-founded that was acquired by Facebook (now Meta Platforms) in 2017. Prior to working at Facebook, Mark was Head of Software Engineering at UCL and Director of its CREST centre, where he remains a part-time professor. In his more purely scientific work, he co-founded the field of Search Based Software Engineering (SBSE) in 2001, now the subject of active research in over 40 countries worldwide. He received the IEEE Harlan Mills Award and the ACM Outstanding Research Award in 2019 for his work, and was awarded a fellowship of the Royal Academy of Engineering in 2020.

Topic: Simulation-based Testing

Abstract
This talk will cover simulation-based testing, drawing on experience from the development and deployment of client- and server-side testing platforms at Meta (formerly Facebook). The talk will review client-side testing using Sapienz, an automated test generation tool for Android and iOS, and server-side testing using the cyber-cyber digital twin WW. These technologies are in daily use at Meta Platforms to test the back-end systems, infrastructure, and apps that enable Meta to provide software such as Facebook, Instagram, and WhatsApp, some of the most widely used systems in the history of software engineering. This is joint work with Nadia Alshahwan, John Ahlgren, Johannes Bader, Maria Eugenia Berezin, Kinga Bojarczuk, Satish Chandra, Andrea Cinacone, Rafael Lopez Diez, Sophia Drossopoulou, Inna Dvortsova, Xinbo Gao, Johann George, Natalija Gucevska, Yue Jia, Michal Krolikowski, Will Lewis, Maria Lomeli, Simon Lucas, Ke Mao, Alexandru Marginean, Alexander Mols, Killian Murphy, Steve Omohundro, Erik Meijer, Rubmary Rojas, Silvia Sapora, Dave Soria Para, Andrew Scott, Federica Sarro, Teijin Tei, and Jie Zhang.



Heiko Ludwig

Senior Manager, AI Platforms at IBM Research


Bio

Heiko Ludwig is a Principal Research Scientist and Senior Manager of the AI Platforms department at IBM's Almaden Research Center in San Jose, CA. Heiko leads research on computational platforms for AI, focusing on security, privacy, performance, and resilience. The results of this work contribute to various IBM lines of business and open-source projects. Heiko is currently leading the initiative on federated machine learning and distributed AI at IBM Research. He has more than 100 peer-reviewed publications with more than 8,000 citations, and more than 50 patents and patent applications. Heiko has been recognized for his work in various ways, for example as an ACM Distinguished Engineer. Prior to the Almaden Research Center, Heiko held different positions at other IBM Research labs. He holds a Master's degree and a PhD in information systems from Otto-Friedrich University Bamberg, Germany.

Topic: Learning-as-a-Service: Data and learning as part of a learning and evolving service-oriented system

Abstract
Machine learning has become an important component of applications, and continuous learning enables us to adapt the behavior of a system throughout its life cycle. However, in distributed and service-oriented systems, data privacy often becomes an issue if services and associated data are owned by different organizational entities and combining the data in one location infringes on privacy regulation. Federated learning enables us to train a machine learning model in a distributed way, such that a common model can be trained while the data stays with its respective owners. In this way, learning services collocated with data can complement an SOA architecture and help an SOA solution incorporate model adaptation throughout its life cycle.
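The core idea can be illustrated with a minimal federated averaging sketch: a few data owners each refine the global model on their private data and only send updated weights back to an aggregator, which averages them. This is an illustrative outline under simplifying assumptions (a toy linear model, synthetic data, and hypothetical function names), not the IBM federated learning platform itself.

```python
"""Minimal federated averaging sketch: parties share model weights, never raw data."""
import numpy as np


def local_update(weights, X, y, lr=0.01, epochs=5):
    """One party refines the global weights on its private data (least squares)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of the mean squared error
        w -= lr * grad
    return w


def federated_round(global_w, parties):
    """Each party trains locally; the aggregator averages the returned weights."""
    local_weights = [local_update(global_w, X, y) for X, y in parties]
    sizes = np.array([len(y) for _, y in parties], dtype=float)
    # Weighted average: parties with more data contribute proportionally more.
    return np.average(local_weights, axis=0, weights=sizes)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    # Three data owners, each holding a private shard of synthetic data.
    parties = []
    for n in (50, 80, 30):
        X = rng.normal(size=(n, 2))
        y = X @ true_w + rng.normal(scale=0.1, size=n)
        parties.append((X, y))

    w = np.zeros(2)
    for _ in range(200):
        w = federated_round(w, parties)
    print("learned weights:", w)   # approaches [2, -1] without pooling the data
```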



Janet George

Corporate Vice President and General Manager of Cloud and Enterprise Solutions Group, Intel

Bio

Janet George currently serves as CVP and GM (Senior Executive) of the Cloud and Enterprise Strategic Customer Group in the Data Center and Artificial Intelligence Business Unit at Intel Corporation. In this role she is responsible for leading and driving co-engineering, co-innovation, and co-creation of new software revenue streams to drive Intel affinity with top cloud, enterprise, and strategic customers. Prior to Intel she served as GVP of Autonomous Enterprise at Oracle, driving Autonomous Enterprise transformation and solving for business outcomes with Advanced Analytics, Machine Learning, and conversational Artificial Intelligence, including deep neural networks and Cognitive Automation, for key customers.

Janet came to Oracle from a Chief Data Officer/Scientist and Fellow role at Western Digital, where for the past five years she was responsible for the company's Big Data, Artificial Intelligence, and Cognitive Sciences transformational journey.
Prior to that, Janet enjoyed a distinguished career serving at Accenture Technology Labs as Managing Partner, as Head of Yahoo Research Engineering and Cloud Infrastructure, and at eBay and Apple Computer.

Janet holds an advanced Master's degree in Computer Science, with a thesis focus on Artificial Intelligence. She is deeply engaged with Stanford University and UC Berkeley to advance the frontiers of mind, brain, computation, and technology, and cross-disciplinary innovation at the intersection of Computer Science and Neuroscience. She is also engaged with the USC Gould School of Law on new copyright and IP patent law in the field of Artificial Intelligence and on innovations with generative adversarial neural networks.


Topic: Autonomous Enterprises driven by AI and ML

Abstract
What is an Autonomous Enterprise? An Autonomous Enterprise is an enterprise that is prepared to tap into the power of ML and AI. In other words, these enterprises are AI-ready, and they are powered by AI. They know how to win using AI technologies. In the age of AI, which has become an existential threat for enterprises, winning with AI has become a necessity. Delayed adoption increases the risk of an enterprise being left behind by its competition and the industry at large, conceding to business erosion and a slow, painful, long-drawn-out eventual acquisition or potential death.

Like every major epoch before it, the Internet age, the social media age, and the big data age, the AI age brings profound technological advances and major seismic shifts in the operational aspects of the business. In this talk we will explore how pioneers combine investment choices, strategy, organizational behaviors, and technological adoption as essential ingredients for winning with AI.




Stefan Tai

Professor, Head of Chair Information Systems Engineering, TU Berlin


Bio

Stefan Tai is Professor and Head of Chair Information Systems Engineering at TU Berlin, Faculty of Computer Science. Stefan has over 25 years of experience in cutting-edge IT research and development, having led and been involved in numerous industrial and scientific projects in the US, Europe, and Germany. His research interests center on creating quality-driven enterprise software systems, especially cloud- and blockchain-based systems.

Topic: If Sokrates only knew... Enhancing Privacy in dApps through Zero-Knowledge Proofs

Abstract
Zero-knowledge proofs (ZKPs), and zk-SNARKs in particular, have been receiving increased interest in the blockchain community and beyond, mostly for reasons of both improving scalability and enhancing privacy in decentralized applications. Higher-level languages and tooling like ZoKrates make creating verifiable off-chain programs and linking them to smart contracts possible even for the non-crypto-expert, hiding some of the complexities associated with programming ZKPs. First-of-a-kind applications demonstrate the enormous power and benefits of combining blockchains and ZKPs, but many more applications are still to be developed and learned from. We call for focused attention on practical applications of ZKPs and invite everyone interested to participate in a new validation initiative aimed at identifying general guidelines and best practices for using ZKPs. With ZKPs, the possibilities of dApps, both in what they can do and in what qualities they can ensure in hybrid on-/off-chain environments, will be significantly enhanced.
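To give a flavor of the underlying idea of proving a statement without revealing the secret behind it, the sketch below implements a classic Schnorr-style zero-knowledge proof of knowledge of a discrete logarithm, made non-interactive with the Fiat-Shamir heuristic. It is deliberately not a zk-SNARK and does not use ZoKrates; the tiny group parameters are assumptions chosen only for readability, not production use.

```python
"""Toy Schnorr-style zero-knowledge proof of knowledge of a discrete logarithm."""
import hashlib
import secrets

# Toy group: P = 2*Q + 1 with Q prime; G generates the order-Q subgroup mod P.
P, Q, G = 23, 11, 4


def fiat_shamir_challenge(*values: int) -> int:
    """Derive the verifier's challenge by hashing the proof transcript."""
    data = b"|".join(str(v).encode() for v in values)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q


def prove(secret_x: int) -> tuple[int, int, int]:
    """Prove knowledge of x such that y = G^x mod P, without revealing x."""
    y = pow(G, secret_x, P)
    r = secrets.randbelow(Q)            # prover's random nonce
    t = pow(G, r, P)                    # commitment
    c = fiat_shamir_challenge(G, y, t)  # non-interactive challenge
    s = (r + c * secret_x) % Q          # response
    return y, t, s


def verify(y: int, t: int, s: int) -> bool:
    """Accept iff G^s == t * y^c (mod P), which holds exactly when s = r + c*x."""
    c = fiat_shamir_challenge(G, y, t)
    return pow(G, s, P) == (t * pow(y, c, P)) % P


if __name__ == "__main__":
    x = 7                                # the prover's secret
    public_y, commitment, response = prove(x)
    print("proof verifies:", verify(public_y, commitment, response))
```

In a dApp setting, the analogous division of labor is that the expensive proving step runs off-chain while only the short proof and public inputs are checked by an on-chain verifier contract.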



Karl-Erik Årzén

Professor, Department of Automatic Control, Lund University
Co-director, WASP - the Wallenberg AI, Autonomous Systems and Software Program


Bio

Prof. Karl-Erik Årzén received his M.Sc. in Electrical Engineering and Ph.D. in Automatic Control from Lund University in 1981 and 1987, respectively. He worked for ABB Corporate Research during 1992-1994, and was appointed Full Professor in Automatic Control in 2000. His research interests include cyber-physical systems, real-time systems, real-time and embedded control, control of computer systems, and cloud control. He is co-director of WASP - the Wallenberg AI, Autonomous Systems and Software Program - the single largest individual research grant ever within Engineering Sciences in Sweden, with a total budget of $600 million. Karl-Erik is an elected member of the Royal Swedish Academy of Engineering Sciences.

Topic: Modeling, Control and Learning for Improved Cloud Predictability

Abstract
It is still not commonplace to deploy mission- and time-critical applications, e.g., control applications, in the cloud. There are several reasons for this, e.g., long and highly variable delays, connectivity issues, a lack of guarantees, and cyber-security issues. There are two complementary research directions that help overcome parts of this. The first is to make the control applications more resilient towards the temporal non-determinism caused by the cloud, and the second is to make the cloud more temporally predictable by developing real-time support for the involved system components, e.g., OSs, networks, and hypervisors, and for the associated resource management and orchestration methods. One approach towards the latter is to use feedback-based resource management, i.e., control of the cloud. The first part of this keynote focuses on how to make control applications, closed over the cloud, more resilient. The second part investigates different feedback-based methods for resource management. The work presented has been done as part of WASP - the Wallenberg AI, Autonomous Systems and Software Program - the single largest individual research grant ever within Engineering Sciences in Sweden, with a total budget of $600 million.
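As a minimal illustration of feedback-based resource management, the sketch below uses a simple PI controller to adjust a replica count so that the measured response time tracks a target. The service model, gains, and numbers are hypothetical assumptions made for the example; it is not the resource-management machinery presented in the talk.

```python
"""Toy feedback loop: a PI controller sizes service replicas to meet a latency target."""


def simulate(rounds=30, target_latency=100.0, load=400.0):
    kp, ki = 0.02, 0.01            # controller gains, tuned for this toy model
    replicas, integral = 2.0, 0.0  # initial allocation and integrator state

    for step in range(rounds):
        # Hypothetical service model: latency grows with the per-replica load.
        latency = 20.0 + load / max(replicas, 1.0)

        # PI control law: the latency error drives the replica count up or down;
        # the integral term removes steady-state error.
        error = latency - target_latency
        integral += error
        replicas = max(1.0, replicas + kp * error + ki * integral)

        print(f"step {step:2d}: replicas={replicas:5.2f}  latency={latency:6.1f} ms")


if __name__ == "__main__":
    simulate()  # the replica count settles near the value that meets the target
```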