
Information Security: Vertex AI

Privacy is often the primary concern for enterprises using LLMs. While we are still searching for a locally run LLM powerful enough to deliver the quality and accuracy we need for generated text, we use the cloud-based Gemini Pro model, which is provided through the Google Vertex AI service. We fully trust Google, as many large institutions in the financial industry do, to keep its data privacy and protection promises. For more information on the model, please refer to the original Google resources.
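
As a rough, non-authoritative sketch, the snippet below shows how a Gemini model can be called through the Vertex AI Python SDK; the project ID, region, model ID, and prompt are illustrative placeholders rather than a description of our actual setup.

    # Minimal sketch: calling a Gemini model through the Vertex AI SDK.
    # Project ID, region, model ID, and prompt are illustrative placeholders.
    import vertexai
    from vertexai.generative_models import GenerativeModel

    # Initialize the SDK against a specific Google Cloud project and region.
    vertexai.init(project="my-gcp-project", location="europe-west3")

    # Load the hosted Gemini foundation model (the exact model ID depends on the release).
    model = GenerativeModel("gemini-1.0-pro")

    # Send a prompt; the request and response are handled by the Vertex AI service.
    response = model.generate_content("Summarize our data-retention policy in one sentence.")
    print(response.text)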

Data Governance and Privacy

Vertex AI is built on Google Cloud’s core data-handling principles.

Data Isolation

Your data is your data. Your prompts, responses, and any data you use for fine-tuning are considered “Customer Data.”

No Model Training

Google does not use your Customer Data (like prompts or fine-tuning datasets) to train or improve its foundation models (like Gemini) for other customers.

Data Control

Your fine-tuning data and the resulting adapted model layer are stored within your project. You control their lifecycle and deletion.
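
To illustrate that control, the sketch below deletes a model resource from a project with the Vertex AI Python SDK; the project ID, region, and model resource name are placeholders, assuming the tuned model is registered in the Vertex AI Model Registry.

    # Sketch: removing a tuned model you no longer need from your project.
    # Project ID, region, and model resource name are placeholders.
    from google.cloud import aiplatform

    aiplatform.init(project="my-gcp-project", location="europe-west3")

    # Look up the model by its resource name and delete it from the Model Registry.
    model = aiplatform.Model(
        model_name="projects/my-gcp-project/locations/europe-west3/models/1234567890"
    )
    model.delete()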

Data Residency

You can specify the region where your data is processed and stored at rest, helping you meet data-locality requirements.
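
Besides the location argument used when initializing the SDK (see the first snippet above), the region can also be pinned explicitly by addressing a regional API endpoint; in this hedged sketch the region and endpoint are placeholders.

    # Sketch: sending requests to a specific regional Vertex AI endpoint.
    # The region below is a placeholder; pick one that satisfies your locality rules.
    from google.cloud import aiplatform_v1

    region = "europe-west3"
    client = aiplatform_v1.PredictionServiceClient(
        client_options={"api_endpoint": f"{region}-aiplatform.googleapis.com"}
    )
    # Calls made through this client are processed by the europe-west3 endpoint,
    # and resources created in that location are stored at rest in that region.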

Compliance

Vertex AI services are covered by Google Cloud’s compliance certifications, such as SOC 1/2/3, ISO 27001, and HIPAA.

Data Processing

All data handling is governed by the Cloud Data Processing Addendum (CDPA), which legally defines Google’s role as a data processor.

Encryption and Data Protection

Vertex AI protects your data both in transit and at rest.

Encryption in Transit

All data sent to and from the Vertex AI generative AI services is encrypted in transit using TLS.

Encryption at Rest

All your data (like custom-trained models or evaluation datasets) is automatically encrypted at rest using Google-managed encryption keys.

Identity and Access Management (IAM)

IAM controls who (which user, group, or service account) can do what (which action) on which resource (which Vertex AI service or asset).

Predefined Roles

Vertex AI provides standard roles like roles/aiplatform.user (to use models) and roles/aiplatform.admin (to manage all resources).
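
As a hedged example, the snippet below binds roles/aiplatform.user to a service account at the project level using the Resource Manager API; the project ID and service account address are placeholders.

    # Sketch: granting the predefined Vertex AI user role to a service account.
    # Project ID and service account email are placeholders.
    from google.cloud import resourcemanager_v3
    from google.iam.v1 import policy_pb2

    project = "projects/my-gcp-project"
    member = "serviceAccount:genai-app@my-gcp-project.iam.gserviceaccount.com"

    client = resourcemanager_v3.ProjectsClient()

    # Read the current IAM policy, append the binding, and write the policy back.
    policy = client.get_iam_policy(request={"resource": project})
    policy.bindings.append(
        policy_pb2.Binding(role="roles/aiplatform.user", members=[member])
    )
    client.set_iam_policy(request={"resource": project, "policy": policy})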

Principle of Least Privilege

For granular control, you can create IAM custom roles. For example, you can create a role that only contains the aiplatform.endpoints.predict permission and assign it to a service account for an application that only needs to get predictions from an LLM, but not manage or delete it.
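
A minimal sketch of such a predict-only custom role, created through the IAM API; the project ID, role ID, title, and description are illustrative assumptions.

    # Sketch: a custom role that only allows online predictions against endpoints.
    # Project ID, role ID, title, and description are placeholders.
    import googleapiclient.discovery

    iam = googleapiclient.discovery.build("iam", "v1")

    iam.projects().roles().create(
        parent="projects/my-gcp-project",
        body={
            "roleId": "vertexPredictOnly",
            "role": {
                "title": "Vertex AI predict-only",
                "description": "May call aiplatform.endpoints.predict and nothing else.",
                "includedPermissions": ["aiplatform.endpoints.predict"],
                "stage": "GA",
            },
        },
    ).execute()

The new role can then be bound to the application's service account in the same way as the predefined role shown above.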

For additional information on Vertex AI security features, please refer to Google's Overview of Generative AI on Vertex AI.
