Agenda

Sep 27, 2023
10:30am - 4:30pm (Eastern)
Exhibitor Hall open

Your opportunity to visit our solution vendor partners, whose sponsorship makes SecureWorld possible! Booths have staff ready to answer your questions. Look for participating Dash For Prizes sponsors to be entered to win prizes.

Sep 27, 2023
11:00am - 12:09pm (Eastern)
Safeguarding Ethical Development in AI and Other LLMs
A Comprehensive Approach to Integrating Security, Psychological Considerations, and Governance
Sep 27, 2023
11:45am - 12:00pm (Eastern)
Networking Break

Visit the Exhibitor Hall for vendor displays or connect with attendees in the Networking Lounge.

Sep 27, 2023
12:00pm - 12:58pm (Eastern)
Risk and Rewards of Deploying AI/ML Technologies in Your Organization
Sep 27, 2023
12:00pm - 12:52pm (Eastern)
Examining the Impact of AI on Your Cybersecurity Program
Sep 27, 2023
12:45pm - 1:00pm (Eastern)
Networking Break

Visit the Exhibitor Hall for vendor displays or connect with attendees in the Networking Lounge.

Sep 27, 2023
1:00pm - 1:31pm (Eastern)
Protecting High-Value AI Assets: A Comprehensive Security Framework for In-Use Protection, Ownership, and Data Privacy Across Diverse Domains

High-value AI/ML models are increasingly deployed across a wide range of use cases (surveillance, industrial, retail, medical, financial, etc.). The cost of training these models is very high because of the high-quality training data sets and the time required to train and optimize them. Once deployed, these assets are subject to various attacks, including tampering and theft. Additionally, the data used for inference is sensitive not only because of its business value (industrial, retail) but also because it is subject to regulations such as GDPR and HIPAA (surveillance, medical, financial).

AI/ML models must be protected from tampering and theft while at rest, while in use (run time), and while in transit. Additionally, functionality that gives the model developer control over the use of the model, along the lines of traditional software licensing, is an added benefit and could deliver critical capabilities such as model revocation if required.

This AI/ML Security Framework is designed to be used by AI/ML domain specialists (data scientists, etc.) who have limited security expertise. The framework ensures model protection at rest, in transit, and at run time. It also introduces the concept of model ‘ownership’ and, thereby, licensing of the model. Controls allow the model developer to track model deployment and potentially revoke the use of a model that is found to be misbehaving or to have some other critical flaw. A license to an improved version of the model can then be issued to the customer.

Cryptographic techniques are used to ensure the integrity and confidentiality of the model, protecting it in transit and at rest. Intel Trusted Execution Environments (TEEs) of varying strengths (VT-x, SGX, Containers) protect the model at run time. Attestation reports the run-time environment via a licensing protocol that determines whether the model can be used in that environment. This forms the basis of giving the model developer a level of control over the usage of the model.
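
As an illustration only (not the framework's actual tooling), the sketch below shows how a serialized model could be sealed with AES-256-GCM so that both confidentiality and integrity hold at rest and in transit; the protect_model helper, manifest layout, and key handling are assumptions for this example.

```python
# Minimal sketch, assuming a file-based model and the Python "cryptography" package.
# The manifest layout, model_id binding, and key handling are illustrative only.
import json
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def protect_model(model_path: str, out_path: str, model_id: str) -> bytes:
    """Encrypt a serialized model; the returned key would be held by the license server."""
    key = AESGCM.generate_key(bit_length=256)          # per-model wrapping key
    nonce = os.urandom(12)
    with open(model_path, "rb") as f:
        plaintext = f.read()
    # Binding model_id as associated data means a swapped model identity fails authentication.
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, model_id.encode())
    with open(out_path, "wb") as f:
        f.write(json.dumps({"model_id": model_id, "nonce": nonce.hex()}).encode() + b"\n")
        f.write(ciphertext)
    return key
```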

From a data protection standpoint, the cryptographic and run-time protections are extended to the data streams used as input to various AI/ML-based analytics use cases. The output analytics results are protected with the same scheme. Data is processed by a model within one of the aforementioned TEEs. Like AI/ML models, data is protected at rest, in transit, and when used for inference operations.
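
A rough sketch of that inference-side dataflow appears below, assuming the model has already been decrypted and compiled inside the TEE and that the data key was delivered by the licensing protocol; the function name, tensor shape, and the particular OpenVINO Python API calls shown are illustrative, not the framework's actual interface.

```python
# Illustrative only: encrypted frames are decrypted inside the TEE, run through an
# OpenVINO model, and the results are re-encrypted before leaving the protected environment.
import os
import numpy as np
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from openvino.runtime import Core, CompiledModel

def infer_protected(enc_frame: bytes, nonce: bytes, data_key: bytes,
                    compiled: CompiledModel) -> tuple[bytes, bytes]:
    aead = AESGCM(data_key)
    frame = np.frombuffer(aead.decrypt(nonce, enc_frame, None), dtype=np.float32)
    results = compiled([frame.reshape(1, -1)])      # plaintext exists only inside the TEE
    output = results[compiled.output(0)]
    out_nonce = os.urandom(12)
    return out_nonce, aead.encrypt(out_nonce, output.tobytes(), None)

core = Core()
compiled = core.compile_model("model.xml", "CPU")   # model decrypted and loaded inside the TEE
```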

Intel TEE attestation during licensing relies on a combination of Secure Boot and Intel Platform Trust Technology (firmware TPM) or SGX-based DCAP. The framework also provides a transparent key-store mechanism for securely generating and using cryptographic keys for identity, confidentiality, and integrity. The key store is bound to hardware via TPM/PTT-based sealing or SGX sealing.
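
The snippet below is a software stand-in for that key-store binding, assuming a platform_secret placeholder in place of the hardware-held value that TPM/PTT or SGX sealing would actually provide; seal_key and the wrapped-blob layout are illustrative assumptions.

```python
# Illustrative stand-in for hardware sealing: in the real framework the wrapping key
# never leaves the TPM/PTT or SGX enclave; here platform_secret models that bound value.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def seal_key(model_key: bytes, platform_secret: bytes) -> dict:
    """Wrap a model/data key under a key derived from a platform-bound secret."""
    salt = os.urandom(16)
    kek = HKDF(algorithm=hashes.SHA256(), length=32, salt=salt,
               info=b"ai-model-keystore").derive(platform_secret)
    nonce = os.urandom(12)
    return {"salt": salt, "nonce": nonce,
            "wrapped_key": AESGCM(kek).encrypt(nonce, model_key, None)}
```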

All of the above functionality is wrapped in a set of easy-to-use tools for asset protection, a reference license server implementation, and a TEE-based run-time inference environment currently based on, but not limited to, Intel’s OpenVINO framework. The entire suite of tools and components is open source and available for Linux KVM (VT-x), Intel SGX (with Gramine), and Kubernetes containers.

Sep 27, 2023
1:00pm - 1:49pm (Eastern)
I Can See Clearly Now, the Threats Are Gone

Zero Trust is considered by many to be a marketing buzzword, but what it really alludes to is good, basic cybersecurity hygiene, the kind any cybersecurity professional worth their salt practices daily. Ransomware, phishing, and BEC grab the headlines, but run-of-the-mill cyberattacks can’t be ignored just because a shiny new thing is garnering all the attention.

The CISO is like a musical conductor who must pay attention to all the resources at his or her disposal: people, tools, technologies, systems, and more. How is the organization handling security awareness training? What about staffing shortages affecting the organization, or even the vendors with which CISOs and their teams work?

Join this session to hear insights and takeaways on the state of the information security profession today, including tips for seeing clearly and staying ahead of threats.

Sep 27, 2023
1:45pm - 2:00pm (Eastern)
Networking Break

Visit the Exhibitor Hall for vendor displays or connect with attendees in the Networking Lounge.

Sep 27, 2023
2:00pm - 3:04pm (Eastern)
Believe the Hype: The Robots Are Coming!
Sep 27, 2023
2:00pm - 2:48pm (Eastern)
AI Confidential: Behind the Scenes of Legal Ethics in the Digital Age

Rapid advancements in AI technology have spurred its integration into the legal landscape, raising the imperative to establish ethical guidelines. This abstract explores the symbiotic relationship between AI and law, emphasizing ethical dimensions. It defines AI in a legal context, showcasing its potential in tasks like research and predictions while addressing challenges like bias and accountability. Urgency in AI ethics is discussed alongside real-world ethical dilemmas. Principles for AI and legal ethics are outlined, including transparency, fairness, accountability, and human-AI collaboration. The abstract emphasizes translating ethics into practice through standards, technological safeguards, and continuous learning. Case studies delve into AI’s role in legal advice, sentencing, and policing. Stakeholder collaboration involving legal communities and the public is stressed, as is the enduring significance of AI ethics in an ever-evolving technological and legal landscape.

Sep 27, 2023
2:45pm - 3:00pm (Eastern)
Networking Break

Visit the Exhibitor Hall for vendor displays or connect with attendees in the Networking Lounge.

Sep 27, 2023
3:00pm - 3:49pm (Eastern)
Strengthening Cybersecurity with Generative AI: A Guide for Improving Team Effectiveness

Join us for this session on how generative AI can be used to strengthen the cybersecurity workforce. Robert Loy will discuss the latest trends in generative AI, how it can generate realistic cybersecurity scenarios, incident response plans, and policies, and how it can maximize your team’s potential. Take advantage of this opportunity to learn how to leverage generative AI to make cybersecurity even more of a strategic asset through education and training.

Sep 27, 2023
3:00pm - 3:57pm (Eastern)
Cyber-Enabled Fraud and Business Email Compromise in 2023
Sep 27, 2023
3:45pm - 4:00pm (Eastern)
Networking Break

Visit the Exhibitor Hall for vendor displays or connect with attendees in the Networking Lounge.

Sep 27, 2023
4:00pm - 5:03pm (Eastern)
Cyber Briefing: Artificial Intelligence