add deployment content to florence-2 notebook
capjamesg authored Aug 9, 2024
1 parent 8f6efe7 commit dfd4e10
Showing 1 changed file with 116 additions and 0 deletions.
notebooks/how-to-finetune-florence-2-on-detection-dataset.ipynb (+116, -0)
@@ -5177,6 +5177,122 @@
}
]
},
{
"cell_type": "markdown",
"source": [
"## Upload model to Roboflow (optional)\n",
"\n",
"You can deploy your Florence-2 object detection model on your own hardware (i.e. a cloud GPu server or an NVIDIA Jetson) with Roboflow Inference, an open source computer vision inference server.\n",
"\n",
"To deploy your model, you will need a [free Roboflow account](https://app.roboflow.com).\n",
"\n",
"To get started, [create a new Project in Roboflow](https://docs.roboflow.com/datasets/create-a-project) if you don't already have one. Then, upload the dataset you used to train your model. Then, create a dataset Version, which is a snapshot of your dataset with which your model will be associated in Roboflow.\n",
"\n",
"You can read our full [Deploy Florence-2 with Roboflow](https://blog.roboflow.com/deploy-florence-2-with-roboflow/) guide for step-by-step instructions of these steps.\n",
"\n",
"Once you have trained your model A, you can upload it to Roboflow using the following code:"
],
"metadata": {
"id": "ZZ9q1Wa8FOg9"
}
},
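{
"cell_type": "markdown",
"source": [
"If you would like to upload your dataset to your Roboflow project programmatically, the cell below is a minimal sketch. It assumes the `upload_dataset` helper available in recent versions of the `roboflow` package, a local dataset folder at `/content/dataset`, and placeholder workspace and project IDs; adjust all of these to match your setup, or upload your dataset through the Roboflow web interface instead."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"import roboflow\n",
"\n",
"# Authenticate with your Roboflow API key (replace API_KEY with your own key)\n",
"rf = roboflow.Roboflow(api_key=\"API_KEY\")\n",
"\n",
"# Upload a local dataset folder to an existing project.\n",
"# \"/content/dataset\" and the workspace/project IDs are placeholders.\n",
"rf.workspace(\"workspace-id\").upload_dataset(\"/content/dataset\", \"project-id\")\n",
"\n",
"# After uploading, create a dataset Version in the Roboflow web interface;\n",
"# the Version is what your uploaded model weights will be attached to."
],
"metadata": {},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Once your dataset Version is ready and your model is trained, upload your fine-tuned weights to Roboflow with the following code:"
],
"metadata": {}
},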
{
"cell_type": "code",
"source": [
"import roboflow\n",
"\n",
"rf = Roboflow(api_key=\"API_KEY\")\n",
"project = rf.workspace(\"workspace-id\").project(\"project-id\")\n",
"version = project.version(VERSION)\n",
"\n",
"version.deploy(model_type=\"florence-2\", model_path=\"/content/florence2-lora\")"
],
"metadata": {
"id": "e1WLCEEBF2jk"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Above, replace:\n",
"\n",
"- API_KEY with your [Roboflow API key](https://docs.roboflow.com/api-reference/authentication#retrieve-an-api-key).\n",
"- workspace-id and project-id with your [workspace and project IDs](https://docs.roboflow.com/api-reference/workspace-and-project-ids).\n",
"- VERSION with your project version.\n",
"\n",
"If you are not using our notebook, replace /content/florence2-lora with the directory where you saved your model weights.\n",
"\n",
"When you run the code above, the model will be uploaded to Roboflow. It will take a few minutes for the model to be processed before it is ready for use.\n",
"\n",
"Your model will be uploaded to Roboflow.\n",
"\n",
"## Deploy to your hardware\n",
"\n",
"Once your model has been processed, you can download it to any device on which you want to deploy your model. Deployment is supported through Roboflow Inference, our open source computer vision inference server.\n",
"\n",
"Inference can be run as a microservice with Docker, ideal for large deployments where you may need a centralized server on which to run inference, or when you want to run Inference in an isolated container. You can also directly integrate Inference into your project through the Inference Python SDK.\n",
"\n",
"For this guide, we will show how to deploy the model with the Python SDK.\n",
"\n",
"First, install inference:"
],
"metadata": {
"id": "CLSODWkmGANb"
}
},
{
"cell_type": "code",
"source": [
"!pip install inference"
],
"metadata": {
"id": "jPDfMo8DGSsO"
},
"execution_count": null,
"outputs": []
},
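{
"cell_type": "markdown",
"source": [
"If you would rather run Inference as the Docker microservice mentioned above instead of using the Python SDK, the cell below sketches how to start a local Inference server. The commands assume Docker is installed and use the CPU server image as an example; check the [Roboflow Inference documentation](https://inference.roboflow.com) for the image and flags that match your hardware and for details on querying the server over HTTP. Otherwise, continue with the Python SDK steps below."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Option 1: start the server with the inference CLI\n",
"!pip install inference-cli\n",
"!inference server start\n",
"\n",
"# Option 2: start the server directly with Docker (CPU image shown as an example)\n",
"# !docker run -it --rm -p 9001:9001 roboflow/roboflow-inference-server-cpu"
],
"metadata": {},
"execution_count": null,
"outputs": []
},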
{
"cell_type": "markdown",
"source": [
"Then, create a new Python file and add the following code:"
],
"metadata": {
"id": "7hShwaXwGXyi"
}
},
{
"cell_type": "code",
"source": [
"import os\n",
"from inference import get_model\n",
"from PIL import Image\n",
"import json\n",
"\n",
"lora_model = get_model(\"model-id/version-id\", api_key=\"KEY\")\n",
"\n",
"image = Image.open(\"containers.png\")\n",
"response = lora_model.infer(image)\n",
"print(response)"
],
"metadata": {
"id": "Qocv_VtQGVK1"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"In the code avove, we load our model, run it on an image, then plot the predictions with the supervision Python package.\n",
"\n",
"When you first run the code, your model weights will be downloaded and cached to your device for subsequent runs. This process may take a few minutes depending on the strength of your internet connection."
],
"metadata": {
"id": "o6knGuTaGZ-j"
}
},
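{
"cell_type": "markdown",
"source": [
"If you want to visualize the predictions with the supervision Python package rather than print them, the cell below is a minimal sketch. It assumes you have extracted the raw Florence-2 output dictionary (for example `{\"<OD>\": {\"bboxes\": [...], \"labels\": [...]}}`) from the inference response; inspect the printed response above to see where that dictionary lives, and replace the dummy `result` with it."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"import supervision as sv\n",
"from PIL import Image\n",
"\n",
"image = Image.open(\"containers.png\")\n",
"\n",
"# Dummy Florence-2 style output used as a placeholder; replace with the\n",
"# dictionary extracted from your inference response.\n",
"result = {\"<OD>\": {\"bboxes\": [[10, 10, 200, 150]], \"labels\": [\"container\"]}}\n",
"\n",
"# Convert the Florence-2 output into a supervision Detections object\n",
"detections = sv.Detections.from_lmm(\n",
"    sv.LMM.FLORENCE_2, result, resolution_wh=image.size\n",
")\n",
"\n",
"# Draw bounding boxes and labels on the image\n",
"box_annotator = sv.BoxAnnotator(color_lookup=sv.ColorLookup.INDEX)\n",
"label_annotator = sv.LabelAnnotator(color_lookup=sv.ColorLookup.INDEX)\n",
"annotated = box_annotator.annotate(scene=image.copy(), detections=detections)\n",
"annotated = label_annotator.annotate(scene=annotated, detections=detections)\n",
"annotated.save(\"annotated.png\")"
],
"metadata": {},
"execution_count": null,
"outputs": []
},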
{
"cell_type": "markdown",
"source": [