Updated the README. #89

Open · wants to merge 1 commit into main

64 changes: 30 additions & 34 deletions README.md
# Django ASV

This repository contains the benchmarks for measuring Django's performance over time.

The benchmarking process is carried out by the benchmarking tool [Airspeed Velocity](https://asv.readthedocs.io/en/stable/), and the results can be viewed on the [Django ASV results page](https://django.github.io/django-asv/).

## Running the benchmarks

### Using Conda or Miniconda

**Conda** is used to run the benchmarks against different versions of **Python**.

If you already have **Conda** or **Miniconda** installed, run the following commands:

```console
python -m pip install asv
python -m asv run
```

This runs the benchmarks against the latest commit.
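
While iterating on a benchmark, ASV's standard command-line options can narrow a run and render the results locally; these are generic `asv` flags rather than anything specific to this repository, and the `Query` pattern below is only an illustrative regular expression:

```console
# Run a single quick sample of benchmarks whose names match a regex,
# for the latest commit only.
python -m asv run --quick --bench Query HEAD^!

# Build the HTML report from the recorded results and serve it locally.
python -m asv publish
python -m asv preview
```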

### Using virtualenv

To use `virtualenv` to run the benchmarks _(e.g. if you do not have **Conda** or **Miniconda** installed)_, change the contents of the file `asv.conf.json` as follows:

```json
{
    ...
}
```

and run the benchmarks using the commands:

```console
python -m pip install asv
python -m asv run
```
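
For reference, the `asv.conf.json` key that selects the environment backend is `environment_type`; the snippet below is a minimal sketch of that single setting (an assumption about what the collapsed block above changes, not a copy of this repository's file):

```json
{
    "environment_type": "virtualenv"
}
```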

> [!NOTE]
> `ASV` prompts you to set a machine name on the first run. Please do not set it to `ubuntu-22.04`, `ubuntu-latest`, `windows-2022`, or `macos-12`, as results for machines with these names are already stored in the repository.
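
The machine name can be set, or corrected later, by re-running ASV's interactive machine setup (a standard ASV subcommand):

```console
# Re-runs the interactive prompts that record the machine name and hardware details.
python -m asv machine
```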

## Comparing benchmark results

Benchmarking results of different commits or branches can be compared as follows:

```console
python -m asv run <commit1 SHA or branch1 name>
python -m asv run <commit2 SHA or branch2 name>
python -m asv compare <commit1 SHA or branch1 name> <commit2 SHA or branch2 name>
```
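
As a concrete instance of the commands above, with `my-feature-branch` standing in as a hypothetical local branch compared against `main`:

```console
python -m asv run main
python -m asv run my-feature-branch
python -m asv compare main my-feature-branch
```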

## Writing new benchmarks and contributing

1. Fork this repository and create a new branch.
2. Install **pre-commit** and run `python -m pre_commit install` to install the hooks that format the code.
3. Create a new directory with the name `benchmark_name` under the appropriate category of benchmarks.
4. Add the files `__init__.py` and `benchmark.py` to the directory.
5. Add the directory to the list of `INSTALLED_APPS` in `settings.py` (see the sketch after this list).
6. Use the following format to write your benchmark in the file `benchmark.py`:

```python
from ...utils import bench_setup

# ... (collapsed in the diff view) ...

def time_benchmark_name():
    ...
```
7. Commit changes and create a pull request.
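
As an illustration of step 5, here is a minimal sketch of how a new benchmark app is registered in `settings.py`; the dotted path is a hypothetical placeholder, so follow the pattern of the entries already listed in that file:

```python
# settings.py -- sketch only; the exact dotted path depends on where
# your benchmark_name directory lives within this repository.
INSTALLED_APPS = [
    # ... existing benchmark apps ...
    "model_benchmarks.benchmark_name",  # hypothetical new entry
]
```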