HIGH
The severity is rated HIGH due to the potential for sensitive data exposure without user consent. The issue affects both homelab and production environments where developers deploy on Vercel's free or hobby plans and do not opt out within the stated window.

Vercel, a popular serverless deployment platform for web applications, has introduced new terms that allow the company to train machine learning models on user code by default. The change affects users on hobby or free plans, whose code is included in Vercel's training datasets unless they opt out within 10 days of notification. The risk is that sensitive material, including proprietary algorithms and personal information, could be exposed through these training processes. Engineers and sysadmins should review their privacy settings promptly and decide whether to continue using Vercel under the new terms or migrate to a platform that offers more control over how code is used.

Affected Systems
  • Vercel serverless deployment platform
Affected Versions
  • All users under hobby or free plans
Remediation
  • Log into your Vercel account and navigate to settings to opt out of model training if you are on a hobby or free plan.
  • If opting out is not feasible, review the code deployed on Vercel to ensure no sensitive data is exposed, and consider obfuscating or removing critical information.
  • Consider migrating to an alternative deployment service that does not run third-party machine learning processes on your code.
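As part of the second remediation step, it can help to scan a project for hardcoded secrets before (or instead of) trusting a platform's data-handling terms. The sketch below is a minimal, hypothetical example: the regex patterns and skip list are illustrative assumptions, not an exhaustive secret-detection ruleset, and a dedicated scanner would be more thorough.

```python
import re
from pathlib import Path

# Illustrative patterns only; real scanners ship far larger rulesets.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic API key assignment": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_tree(root: str) -> list[tuple[str, int, str]]:
    """Return (path, line_number, pattern_name) for each suspicious match under root."""
    findings = []
    for path in Path(root).rglob("*"):
        # Skip directories and vendored dependencies.
        if not path.is_file() or "node_modules" in path.parts:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            for name, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, name))
    return findings

if __name__ == "__main__":
    for path, lineno, name in scan_tree("."):
        print(f"{path}:{lineno}: possible {name}")
```

A scan like this is a triage aid, not a guarantee: anything it flags should be rotated and moved into environment variables rather than committed source.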
Stack Impact

This impacts users who rely on Vercel's free or hobby plans for deploying applications. Any project hosted under these terms could see sensitive data exposed, especially projects containing proprietary algorithms or personal user information.
