Meta has launched OpenEQA, a benchmark dataset that aims to measure AI agents' understanding of physical environments through episodic-memory and active-exploration tasks. OpenEQA contains more than 1,600 questions covering categories such as attribute recognition and spatial understanding, built from scans of real environments and simulated video. Experiments found that multimodal vision-language models (e.g., GPT-4V) outperform text-only models on EQA tasks, but there is still considerable room for improvement. (Xiu Xiaoyao Tech Talk)
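For a concrete sense of how a question-answering benchmark like this might be consumed, below is a minimal evaluation-loop sketch. It assumes the questions ship as a JSON list of records with `question`, `answer`, and `category` fields, and `answer_question` is a stand-in for the model under test; these names are hypothetical illustrations, not OpenEQA's actual file layout or API.

```python
import json
from collections import defaultdict

def answer_question(question: str) -> str:
    """Hypothetical stand-in for the model under test (e.g., a
    vision-language model queried with the question plus frames
    from the environment scan). Replace with a real model call."""
    return "unknown"

def evaluate(dataset_path: str) -> dict:
    # Assumed layout: a JSON list of records such as
    # {"question": ..., "answer": ..., "category": "spatial understanding"}
    with open(dataset_path) as f:
        records = json.load(f)

    correct = defaultdict(int)
    total = defaultdict(int)
    for rec in records:
        prediction = answer_question(rec["question"])
        total[rec["category"]] += 1
        # Naive exact-match scoring for illustration; open-vocabulary
        # answers would need fuzzier matching (e.g., an LLM-based judge).
        if prediction.strip().lower() == rec["answer"].strip().lower():
            correct[rec["category"]] += 1

    # Per-category accuracy, so weak areas (e.g., spatial
    # understanding) show up separately.
    return {cat: correct[cat] / total[cat] for cat in total}
```

Reporting accuracy per category rather than a single aggregate score is what lets a benchmark like this localize where models fall short.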