July 25: Memories.ai has officially released what it claims is the world's first Large Visual Memory Model (LVMM), described as an underlying capability product that gives AI "human-like memory".

According to the announcement, LVMM does not merely process video: it compresses, indexes, and stores massive amounts of visual content as retrievable semantic memory, allowing AI to "remember" what it has seen in the past, much as a human does, and to quickly recall, understand, and reason over it when needed.
Whether pinpointing anomalous events in months of surveillance footage or tracking brand trends across millions of social videos, LVMM helps AI build persistent visual context, the company says, so that it can not only see but also remember, understand, and act on what it sees, ultimately becoming an agent that truly understands the world.
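Memories.ai has not published LVMM's implementation details, but the "compress, index, retrieve" description maps onto a familiar pattern: embed video segments, store the embeddings in an index, and answer queries by semantic similarity. The sketch below is a minimal, hypothetical illustration of that pattern only; the `VisualMemory` class, the `embed` stand-in, and all names are assumptions for illustration, not Memories.ai's API.

```python
import numpy as np
from dataclasses import dataclass, field

DIM = 128  # embedding dimensionality (arbitrary for this sketch)

def embed(text: str) -> np.ndarray:
    """Stand-in embedder: a deterministic pseudo-embedding derived from a hash.
    It has no real semantic structure and exists only to make the sketch
    runnable; a real system would use a trained video/text encoder."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(DIM)
    return v / np.linalg.norm(v)

@dataclass
class VisualMemory:
    """Toy 'visual memory layer': stores one embedding per video segment,
    alongside metadata, and recalls segments by cosine similarity."""
    vectors: list = field(default_factory=list)
    metadata: list = field(default_factory=list)

    def ingest(self, segment_description: str, source: str, timestamp: float):
        # In a real pipeline the input would be the video segment itself,
        # with compression/summarization happening before indexing.
        self.vectors.append(embed(segment_description))
        self.metadata.append(
            {"source": source, "t": timestamp, "desc": segment_description}
        )

    def recall(self, query: str, k: int = 3):
        # Brute-force cosine similarity over all stored memories
        # (vectors are unit-norm, so a dot product suffices).
        q = embed(query)
        sims = np.stack(self.vectors) @ q
        top = np.argsort(sims)[::-1][:k]
        return [(float(sims[i]), self.metadata[i]) for i in top]

mem = VisualMemory()
mem.ingest("person climbing fence at loading dock", "cam_07", 1721900000.0)
mem.ingest("delivery truck parked at gate", "cam_07", 1721903600.0)
for score, meta in mem.recall("fence intrusion"):
    print(f"{score:.3f}", meta["source"], meta["desc"])
```

At the scale the company describes (months of footage, millions of videos), a production system would replace the brute-force scan with an approximate nearest-neighbor index and a real multimodal encoder, but the memory-as-retrievable-embeddings shape would be the same.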
Memories.ai's founder, Junxia Shen, holds a PhD in computer science from the University of Cambridge and previously worked as a researcher at Meta. The company has raised an $8 million seed round led by venture capital firm Susa Ventures to build a visual memory layer for AI.