Source: Raycast
Total Downloads: 6,955,643
Last updated: May 20, 2025
Entries: 1,853 total records

Downloads | Name | Description
221 | Bring! | Add items to your Bring! shopping lists
220 | Open Laravel Herd Site | Lists all your Laravel Herd sites and lets you open them in Visual Studio Code, Finder, or your browser.
218 | Folder Cleaner | Organize files in a folder by moving them to designated locations based on file extensions
217 | Homepage | Homepage services and bookmarks in Raycast
216 | Ethereum Price | See the current price of Ethereum in various currencies.
215 | Fake Financial Data | Generate fake financial data
215 | Time | Show the current time.
215 | Rename Images with AI | AI-powered image and screenshot renaming extension that intelligently names files based on their content
215 | NOS Nieuws | Displays the latest NOS news from nos.nl
214 | Cheetah | Search for a local Git project and open it with the specified application.
214 | Laravel Tips | Get or search Laravel tips in Raycast
212 | Bazinga Tools | A shortcut to open tools on Bazinga.tools
211 | Mastodon Search | Search for people or hashtags on Mastodon.
211 | Static Marks - Bookmark Search | Search and launch websites from your Static Marks bookmark YAML file.
209 | Kaleidoscope | Opens selected files in Kaleidoscope
208 | Cache-Control Builder | Build an HTTP Cache-Control response header (see the sketch after this table)
208 | MacRumors | Browse MacRumors headlines from the comfort of Raycast.
207 | Sequel Ace | Search and connect to databases in Sequel Ace
205 | .NET Documentation Search | Search .NET API documentation.
205 | Just Breathe | An instrument for relaxation through breathing
205 | Base64 to File | Convert Base64 data to a file (see the sketch after this table)
203 | OpenStreetMap Search | Quickly open OpenStreetMap directions, for example from your current location to your home.
202 | Homey | Homey Flows & Devices
202 | Badges - shields.io | Concise, consistent, and legible badges.
201 | Not Diamond | Not Diamond is an AI model router that automatically determines which LLM is best suited to respond to a query, improving output quality by combining multiple LLMs into a meta-model that learns when to call each one. This extension always answers with the best model for the prompt.
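
For context on the Cache-Control Builder entry above: a Cache-Control value is simply a comma-separated list of directives. The TypeScript sketch below is illustrative only; the option names are assumptions for this example, not the extension's actual fields.

```typescript
// Minimal sketch: compose an HTTP Cache-Control response header value.
// Option names here are illustrative, not the extension's actual fields.
interface CacheControlOptions {
  public?: boolean;
  private?: boolean;
  noStore?: boolean;
  maxAge?: number;                // seconds
  staleWhileRevalidate?: number;  // seconds
  mustRevalidate?: boolean;
}

function buildCacheControl(opts: CacheControlOptions): string {
  // no-store overrides all other caching directives
  if (opts.noStore) return "no-store";

  const directives: string[] = [];
  if (opts.public) directives.push("public");
  if (opts.private) directives.push("private");
  if (opts.maxAge !== undefined) directives.push(`max-age=${opts.maxAge}`);
  if (opts.staleWhileRevalidate !== undefined)
    directives.push(`stale-while-revalidate=${opts.staleWhileRevalidate}`);
  if (opts.mustRevalidate) directives.push("must-revalidate");
  return directives.join(", ");
}

// Prints: "public, max-age=3600, stale-while-revalidate=60"
console.log(buildCacheControl({ public: true, maxAge: 3600, staleWhileRevalidate: 60 }));
```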
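
Similarly, for the Base64 to File entry, the core conversion is a one-liner in Node.js. This sketch is not the extension's code; the input string and output path are placeholders.

```typescript
// Minimal sketch: decode a Base64 string and write it to disk (Node.js).
import { writeFileSync } from "node:fs";

const base64Input = "SGVsbG8sIFJheWNhc3Qh"; // decodes to "Hello, Raycast!"
const outputPath = "/tmp/decoded.txt";      // hypothetical destination

writeFileSync(outputPath, Buffer.from(base64Input, "base64"));
```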