This morning, the FBI posted two blurry photos on X of a person of interest in the shooting of conservative activist Charlie Kirk. Almost immediately, social media users began replying with AI-enhanced versions that turn the grainy surveillance stills into sharper, higher-definition images. But AI enhancement doesn't uncover hidden details; it guesses at what might plausibly be there, and those guesses can be wrong.
Numerous AI-generated variations appeared in the replies to the original posts. Some were made with X's Grok bot, others with tools like ChatGPT. They range widely in plausibility, and some diverge sharply from the source images, including one rendition with a distinctly different shirt and an exaggerated, "Gigachad-level chin." While these upscaled images are ostensibly meant to help identify the person of interest, they also double as eye-catching content for harvesting likes and shares.
Despite the creativity involved, it's unlikely that any of these enhancements reveal more than the FBI's originals. Past examples show how badly low-resolution images can be transformed: in one case, a "depixelated" photo of former President Barack Obama resolved into the face of a white man, and in another, an enhancement added a nonexistent lump to President Donald Trump's head. AI models extrapolate from the pixels they're given to fill in the blanks, which can occasionally be useful, but the output is not reliable evidence in a criminal investigation.
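To make the distinction concrete, here is a minimal Python sketch (the file name and scale factor are hypothetical) of classical interpolation, the non-generative kind of upscaling, which only averages pixels that already exist. Anything beyond this, as generative "enhancers" do, is synthesized detail rather than recovered detail:

```python
# Minimal sketch, assuming a hypothetical low-resolution crop on disk.
# Classical upscaling can only interpolate between existing pixels,
# so no hidden detail is recovered.
from PIL import Image

# Load a low-resolution surveillance-style crop (hypothetical path).
low_res = Image.open("suspect_crop.jpg")

# Bicubic interpolation: every output pixel is a weighted average of
# nearby input pixels. The result is smoother, not more informative.
upscaled = low_res.resize(
    (low_res.width * 8, low_res.height * 8),
    resample=Image.Resampling.BICUBIC,
)
upscaled.save("suspect_crop_8x.png")

# A diffusion- or GAN-based upscaler would instead predict the missing
# high-frequency detail from its training data, i.e. faces it has seen
# before, which is why its output can drift toward a different person.
```

The bicubic result will look soft rather than sharp; the "crisp" AI versions circulating on X achieve their sharpness by inventing detail, which is exactly why they can't be trusted for identification.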
Below is the original post from the FBI for reference:
Additionally, here are some examples of the attempted enhancements: