OmniParser: an AI tool for parsing screens

OmniParser is Microsoft's pure-vision screen parsing tool that converts UI screenshots into structured elements, improving GPT-4V's ability to generate accurately grounded actions. It supports local trajectory logging and OmniTool-enabled Windows 11 automation, and integrates with multiple vision models for efficient interface control. Developed in Python and open-sourced on GitHub under the CC-BY-4.0 license, OmniParser has gained 22.1k stars and is actively maintained, with recent v2 improvements in accuracy and latency.


Source code: https://github.com/microsoft/OmniParser

Install

First clone the repo, then install the environment:

git clone https://github.com/microsoft/OmniParser.git
cd OmniParser
conda create -n "omni" python==3.12
conda activate omni
pip install -r requirements.txt

Ensure you have the V2 weights downloaded into the weights folder (the caption weights folder must be named icon_caption_florence). If not, download them with:

# download the model checkpoints to local directory OmniParser/weights/
for f in icon_detect/{train_args.yaml,model.pt,model.yaml} \
         icon_caption/{config.json,generation_config.json,model.safetensors}; do
    huggingface-cli download microsoft/OmniParser-v2.0 "$f" --local-dir weights
done
mv weights/icon_caption weights/icon_caption_florence
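To sanity-check the folder layout produced by the download loop and the rename above, a small helper (an assumption of this article, not part of the repo) can report which expected files are missing:

```python
from pathlib import Path

# Expected files, mirroring the huggingface-cli download loop above
# (icon_caption is renamed to icon_caption_florence by the mv step).
EXPECTED = [
    "icon_detect/train_args.yaml",
    "icon_detect/model.pt",
    "icon_detect/model.yaml",
    "icon_caption_florence/config.json",
    "icon_caption_florence/generation_config.json",
    "icon_caption_florence/model.safetensors",
]

def missing_weight_files(weights_dir: str) -> list[str]:
    """Return the expected weight files not present under weights_dir."""
    base = Path(weights_dir)
    return [rel for rel in EXPECTED if not (base / rel).exists()]
```

After a successful download and rename, `missing_weight_files("weights")` should return an empty list.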

Examples:

We put together a few simple examples in demo.ipynb.

Gradio Demo

To run the Gradio demo, simply run:

python gradio_demo.py
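The demo overlays detected elements as labeled bounding boxes on the screenshot. If, as in many detectors, boxes come back in normalized coordinates, an automation step must convert them to pixels to synthesize a click. A minimal sketch, assuming that normalized (x1, y1, x2, y2) format:

```python
def bbox_center_px(bbox, width, height):
    """Convert a normalized (x1, y1, x2, y2) box to the pixel
    coordinates of its center, e.g. as a click target.
    The normalized box format is an assumption of this sketch."""
    x1, y1, x2, y2 = bbox
    cx = (x1 + x2) / 2 * width
    cy = (y1 + y2) / 2 * height
    return round(cx), round(cy)

# On a 1920x1080 screenshot, a box covering the top-left 10% strip
# yields a click target at (96, 54).
center = bbox_center_px((0.0, 0.0, 0.1, 0.1), 1920, 1080)
```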

Model Weights License

For the model checkpoints on the Hugging Face model hub, please note that the icon_detect model is under the AGPL license, inherited from the original YOLO model, while the icon_caption_blip2 and icon_caption_florence models are under the MIT license. Please refer to the LICENSE file in each model's folder: https://huggingface.co/microsoft/OmniParser.

📚 Citation

Our technical report is available on arXiv. If you find our work useful, please consider citing it:

@misc{lu2024omniparserpurevisionbased,
      title={OmniParser for Pure Vision Based GUI Agent}, 
      author={Yadong Lu and Jianwei Yang and Yelong Shen and Ahmed Awadallah},
      year={2024},
      eprint={2408.00203},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2408.00203}, 
}

Libre Depot original article. Publisher: Libre Depot. Please indicate the source when reprinting: https://www.libredepot.top/5587.html

