Meta, the parent company of Facebook and Instagram, announced that beginning in January it will label all political advertisements on its platforms containing images, video, or audio generated or altered with AI, including deepfakes. The new policy is intended to help voters determine whether the media in a political ad has been digitally altered, as advances in AI have made it easier to produce synthetic yet realistic content that could mislead them. Similarly, Microsoft announced initiatives to embed digital watermarks in political ads created by campaigns on its platforms, to help identify an ad's creator and verify whether the content has been altered by others.