Less than a .04 difference between classes for each metric.
<img alt="Precision-Confidence Graph" src="https://huggingface.co/cvtechniques/DogTypeDetection/resolve/main/BoxP_curve.png" width="700"></img>
### *Performance Analysis*

This model had high metrics across each of the classes, meeting the success threshold in precision, recall, and F1 score. The confusion matrix shows some slight overguessing: each class was predicted at a 25% to 40% rate in areas that were actually background. The model also predicted small dogs as large dogs 10% of the time, which was right at the limit set before training. That said, the matrix still shows high values of 80%-85% along the true-positive diagonal. The 100% precision peak at 100% confidence does raise some red flags; this is addressed in the *Known Failure Cases* section.
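The per-class precision, recall, and F1 figures above come directly from the confusion matrix. A minimal sketch of that computation (the class names and counts below are illustrative placeholders, not the model's real numbers, and a full detection confusion matrix would also include a background row and column):

```python
# Per-class precision, recall, and F1 from a confusion matrix.
# Rows = true class, columns = predicted class.
# Values are illustrative, not the model's actual numbers.
classes = ["small_dog", "large_dog"]
confusion = [
    [85, 10],  # true small_dog: 85 correct, 10 predicted as large_dog
    [5, 80],   # true large_dog: 5 predicted as small_dog, 80 correct
]

def per_class_metrics(confusion):
    """Return {class index: (precision, recall, f1)}."""
    n = len(confusion)
    metrics = {}
    for c in range(n):
        tp = confusion[c][c]
        fn = sum(confusion[c]) - tp                       # row minus diagonal
        fp = sum(confusion[r][c] for r in range(n)) - tp  # column minus diagonal
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        metrics[c] = (precision, recall, f1)
    return metrics

for c, (p, r, f) in per_class_metrics(confusion).items():
    print(f"{classes[c]}: precision={p:.2f} recall={r:.2f} f1={f:.2f}")
```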
***
# Limitations and Biases
### *Known Failure Cases*
<img alt="Failure Cases" src="https://huggingface.co/cvtechniques/DogTypeDetection/resolve/main/Screenshot%202026-03-15%20160209.png" height="500"></img>

After testing with images outside the training set, a pattern emerged: the model would perform well overall but consistently made mistakes on the same breeds (seen in the image above). This is because these breeds were not in the original training data.

### *Poor performing classes*
There was less than a 5% difference in performance between classes. The class imbalance, with small breeds having over 1k fewer images than the other classes, may have contributed to the relatively high (10%) confusion rate between large and small breeds.

### *Data biases & Environmental/contextual limitations*
The images in the dataset varied across different conditions and environments.
### *Inappropriate use cases*
This model has high accuracy and precision across the breeds found in the dataset but performs poorly on those not found in the set. Additionally, it produces a relatively high rate of false positives. With this in mind, the model should not be used when exact counts are required or when it is unknown which breeds will be present. Instead, it should be used to get general counts, especially relative comparisons between classes, when the desired breeds are covered by the dataset.
### *Sample size limitations*
The main limitations come from the absence of some breeds in the original set, as well as the imbalance between the classes. The former could be fixed by adding more images of the missing breeds, while the latter could be fixed by adjusting the class weights to even out the classes.
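The class-weight adjustment mentioned above is commonly done with inverse-frequency weighting, where under-represented classes receive proportionally larger weights during training. A minimal sketch (the image counts are made up, not the dataset's real numbers):

```python
# Inverse-frequency class weights: weight = total / (n_classes * count),
# so the under-represented class is upweighted above 1.0.
# The image counts below are hypothetical.
image_counts = {"large_dog": 3000, "small_dog": 2000}

def inverse_frequency_weights(counts):
    """Return a per-class weight dict inversely proportional to class size."""
    total = sum(counts.values())
    n = len(counts)
    return {cls: total / (n * c) for cls, c in counts.items()}

weights = inverse_frequency_weights(image_counts)
print(weights)  # large_dog weight < 1, small_dog weight > 1
```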