repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count
|---|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers
|
tensorflow
| 36,352
|
Implement Titans Architecture with GRPO Fine-Tuning
|
### Model description
It would be highly valuable to extend the Transformers library with an implementation of the Titans model, a hybrid architecture that combines traditional attention-based processing with a dedicated long-term memory module (for test-time memorization), together with fine-tuning via the Group Relative Policy Optimization (GRPO) method. This approach would allow LLMs to better handle extremely long contexts and improve chain-of-thought reasoning by dynamically adapting their memory during inference while being fine-tuned with reinforcement learning techniques.
Motivation and Rationale:
Enhanced Long-Context Modeling:
The Titans architecture integrates a neural long-term memory module that learns to store, update, and selectively forget information based on a “surprise” metric (e.g., gradient magnitude). This mimics human long-term memory and overcomes the quadratic complexity limitation of traditional attention for long sequences.
Adaptive Test-Time Learning:
By learning to memorize at test time, the model can update its context representation on the fly, allowing for more robust reasoning in tasks with millions of tokens.
Reinforcement Learning Fine-Tuning via GRPO:
The GRPO method, a variant of PPO, uses group-based advantage estimates and ratio clipping to stabilize policy updates. Incorporating this into Transformers would allow for more efficient fine-tuning, reducing reliance on extensive supervised datasets and improving chain-of-thought outputs.
Proposed Implementation:
Titans Architecture:
Introduce a new model class (e.g., TitansForCausalLM) that wraps a standard Transformer with an additional long-term memory module.
The module should accept token embeddings and update a memory vector using an MLP with momentum-based updates and an adaptive forgetting gate.
Incorporate a set of persistent memory tokens (learnable parameters) that are concatenated with the Transformer’s output before the final prediction layer.
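The memory-update rule described above can be sketched in a few lines. This is a loose plain-Python illustration of the proposal (a momentum-accumulated "surprise" signal plus an adaptive forgetting gate), not the Titans paper's exact equations; all names and hyperparameter values are illustrative assumptions.

```python
def update_memory(memory, momentum, surprise, lr=0.1, beta=0.9, forget_gate=0.05):
    """One memory-update step: momentum accumulates the surprise signal,
    and the forgetting gate decays old memory content before adding the update."""
    new_momentum = [beta * m + (1 - beta) * s for m, s in zip(momentum, surprise)]
    new_memory = [(1 - forget_gate) * mem + lr * mo
                  for mem, mo in zip(memory, new_momentum)]
    return new_memory, new_momentum

# Starting from empty memory, one step with a surprise signal:
memory, momentum = update_memory([0.0] * 4, [0.0] * 4, [1.0, 0.0, -1.0, 0.5])
```

Called once per token (or per chunk) at inference time, a rule of this shape would let the memory adapt while the attention weights stay frozen.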
GRPO Fine-Tuning:
Create a custom trainer (e.g., subclassing TRL’s PPOTrainer) that overrides the loss computation to implement GRPO.
The loss should compute token-level log probabilities from both a reference (old) policy and the updated policy, compute the probability ratio, and then apply clipping based on a configurable epsilon value.
Integrate a dummy or real KL penalty term to control deviations between policies.
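The clipped, group-relative objective described above might look like the following sketch: group-normalized advantages combined with a PPO-style clipped probability ratio. It is a plain-Python illustration of the loss shape only, omitting the KL term and token-level batching; all names are assumptions.

```python
import math

def grpo_clipped_objective(logp_new, logp_old, rewards, eps=0.2):
    """Group-relative advantages: normalize rewards within the group, then
    apply the clipped-ratio objective per sample and average the losses."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = math.sqrt(var) or 1.0  # avoid division by zero for constant rewards
    advantages = [(r - mean) / std for r in rewards]
    losses = []
    for lp_new, lp_old, adv in zip(logp_new, logp_old, advantages):
        ratio = math.exp(lp_new - lp_old)
        clipped = max(min(ratio, 1 + eps), 1 - eps)
        losses.append(-min(ratio * adv, clipped * adv))
    return sum(losses) / len(losses)
```

When the updated policy equals the reference policy, all ratios are 1 and the loss collapses to the negative mean of the normalized advantages, which is zero.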
Integration with TRL:
Provide example scripts demonstrating the fine-tuning loop using TRL’s APIs with the custom GRPO loss function.
Update documentation and examples to guide users on how to apply this technique to long-context reasoning tasks.
Environment:
Transformers version: (latest)
Python version: 3.8+
Additional libraries: TRL (for PPOTrainer extension), PyTorch
Implementing Titans with GRPO fine-tuning can potentially revolutionize how we approach long-context learning and chain-of-thought reasoning. Several recent research efforts (e.g., the Titans paper [arXiv:2501.00663] and DeepSeek's work) have demonstrated promising results with these techniques. An open implementation in Transformers would help the community experiment with these ideas and possibly drive further research in scalable and adaptive LLMs.
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
https://arxiv.org/html/2501.00663v1
https://github.com/rajveer43/titan_transformer
|
open
|
2025-02-23T09:32:17Z
|
2025-02-24T14:42:43Z
|
https://github.com/huggingface/transformers/issues/36352
|
[
"New model"
] |
rajveer43
| 2
|
paulbrodersen/netgraph
|
matplotlib
| 88
|
Add multiple edges simultaneously
|
Hi guys, I hope you're well.
I work on a project called [GraphFilter](https://github.com/GraphFilter), where we use your library (we've even opened some issues here). Recently, a user asked us about the possibility of adding [multiple edges simultaneously](https://github.com/GraphFilter/GraphFilter/issues/453). I'd like to know if it's possible to implement this functionality.
We even tried to implement something similar, replacing the `_on_key_press` method, so that we could select all the desired vertices by dragging the left mouse button, and then press the button to add a new node, but we got the following traceback:
```
Traceback (most recent call last):
File "C:\Users\Fernando Pimenta\Documents\Github\GraphFilter\venv\lib\site-packages\matplotlib\cbook\__init__.py", line 307, in process
func(*args, **kwargs)
File "C:\Users\Fernando Pimenta\Documents\Github\GraphFilter\source\view\project\docks\visualize_graph_dock.py", line 116, in _on_key_press
self._add_node(event)
File "C:\Users\Fernando Pimenta\Documents\Github\GraphFilter\venv\lib\site-packages\netgraph\_interactive_variants.py", line 227, in _add_node
node_properties = self._extract_node_properties(self._selected_artists[-1])
File "C:\Users\Fernando Pimenta\Documents\Github\GraphFilter\venv\lib\site-packages\netgraph\_interactive_variants.py", line 178, in _extract_node_properties
radius = node_artist.radius,
AttributeError: 'EdgeArtist' object has no attribute 'radius'.
```
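The traceback suggests `self._selected_artists` can contain `EdgeArtist`s as well as nodes, so `_extract_node_properties` is handed the wrong artist type. A minimal sketch of one possible fix, filtering the selection by type before extracting node properties, using stand-in classes rather than netgraph's real implementation:

```python
# Stand-in classes for illustration; the traceback above shows netgraph has
# an EdgeArtist type, and node artists carry attributes such as `radius`.
class NodeArtist:
    def __init__(self, radius):
        self.radius = radius

class EdgeArtist:
    pass

def last_selected_node(selected_artists):
    """Return the most recently selected NodeArtist, skipping any edges."""
    for artist in reversed(selected_artists):
        if isinstance(artist, NodeArtist):
            return artist
    return None

# The EdgeArtist at the end of the selection is skipped instead of crashing:
selection = [NodeArtist(radius=0.05), EdgeArtist()]
node = last_selected_node(selection)
```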
We are willing to contribute if you find it interesting.
Thank you in advance for all your support!
|
open
|
2024-04-09T21:09:23Z
|
2024-06-19T10:16:59Z
|
https://github.com/paulbrodersen/netgraph/issues/88
|
[
"enhancement"
] |
fsoupimenta
| 14
|
Guovin/iptv-api
|
api
| 329
|
The m3u contains addresses that return errors
|
The following address appears under the CCTV-5+ channel; the address itself responds, but the returned content contains a 404 error.
Name: CCTV-5+, URL: http://220.179.68.222:9901/tsfile/live/0016_1.m3u8?key=txiptv&playlive=1&authid=0, Date: None, Resolution: None, Response Time: 34 ms
`{"timestamp":"2024-09-20T17:08:33.979+0800","status":404,"error":"Not Found","message":"Not Found","path":"/tsfile/live/0016_1.m3u8"}`
Is there a way to filter out such channels? The generated m3u defaults to the address with the shortest Response Time.
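One way to filter such sources is to reject any address whose 200 response body is itself a JSON error document, as in the example above. A hedged sketch of that filtering logic as a pure function; in practice you would fetch the URL first (e.g. with `requests`) and pass in the status code and body:

```python
import json

def is_bad_source(status_code, body):
    """Reject a source when the HTTP status is not 200, or when the body is
    a JSON error document reporting a non-200 status (as in the example)."""
    if status_code != 200:
        return True
    try:
        payload = json.loads(body)
    except (ValueError, TypeError):
        return False  # not JSON: assume it is real playlist content
    return isinstance(payload, dict) and payload.get("status", 200) != 200

# The 200 response whose body carries a 404 error document is filtered out:
body = ('{"timestamp":"2024-09-20T17:08:33.979+0800","status":404,'
        '"error":"Not Found","message":"Not Found",'
        '"path":"/tsfile/live/0016_1.m3u8"}')
```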
|
closed
|
2024-09-20T09:12:18Z
|
2024-09-23T01:43:24Z
|
https://github.com/Guovin/iptv-api/issues/329
|
[
"enhancement"
] |
zid99825
| 1
|
schemathesis/schemathesis
|
graphql
| 2,713
|
[BUG] "ignored_auth" check causes SSLError exception when verify is set to False in the case
|
### Checklist
- [x] I checked the [FAQ section](https://schemathesis.readthedocs.io/en/stable/faq.html#frequently-asked-questions) of the documentation
- [x] I looked for similar issues in the [issue tracker](https://github.com/schemathesis/schemathesis/issues)
- [x] I am using the latest version of Schemathesis
### Describe the bug
When attempting to test an OpenAPI spec with stateful testing, if TLS verification is set to False (i.e. defining `get_call_kwargs` to return `{"verify": False}`), an SSLError exception is raised even though the response received from the API is 200.
The bug manifests at L436 of `schemathesis/specs/openapi/checks.py`, but I think the issue starts at L174 of `schemathesis/stateful/state_machine.py`, where the `**kwargs` are not passed to `validate_response` (unlike at L171, where they are passed to `self.call()` and contain the verify=False information).
### To Reproduce
🚨 **Mandatory** 🚨: Steps to reproduce the behavior:
Start a stateful test in any way you prefer, but make sure to enforce `verify=False` for TLS verification. Something along these lines
```python
@pytest.fixture(scope="class")
def generated_schema_local(my_endpoint):
return schemathesis.from_path(
path="my-spec.yaml",
base_url=f"https://{my_endpoint}/",
)
@pytest.fixture
def state_machine(generated_schema_local, current_client_token):
class APIWorkflow(generated_schema_local.as_state_machine()):
headers: dict
def setup(self):
self.headers = {"Authorization": f"Bearer {current_client_token}", "Content-Type": "application/json"}
# these kwargs are passed to requests.request()
def get_call_kwargs(self, case):
return {"verify": False, "headers": self.headers}
return APIWorkflow
def test_stateful_api(state_machine):
state_machine.run()
```
Please include a minimal API schema causing this issue:
I think any schema triggering the check would cause this; it just has to declare authentication as a requirement.
### Expected behavior
If TLS verification is set to False, any requests that are made should honor this setting, or checks which enforce verification should be skipped.
### Environment
```
- OS: MacOS 15.2
- Python version: 3.11
- Schemathesis version: 3.39.8
- Spec version: 3.0.3
```
### Additional context
Excluding the check "ignored_auth" with e.g.
```python
def validate_response(self, response, case, additional_checks):
case.validate_response(response, excluded_checks=(schemathesis.checks.ignored_auth,), additional_checks=additional_checks)
```
allows the test to progress further.
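The fix suggested earlier, forwarding the call kwargs into response validation, can be sketched with stand-in classes. The real code lives in `schemathesis/stateful/state_machine.py`; nothing below is Schemathesis's actual API, it only demonstrates the forwarding pattern:

```python
# Stand-ins illustrating the kwargs-forwarding fix, not Schemathesis internals.
class StateMachine:
    def get_call_kwargs(self, case):
        return {}

    def call(self, case):
        return {"case": case, "kwargs": self.get_call_kwargs(case)}

    def step(self, case):
        kwargs = self.get_call_kwargs(case)
        response = self.call(case)
        # The fix: pass the same kwargs (e.g. verify=False) on to validation,
        # so checks that replay requests honor the TLS settings.
        return self.validate_response(response, case, **kwargs)

    def validate_response(self, response, case, **kwargs):
        return kwargs  # a real implementation would run the checks here

class InsecureWorkflow(StateMachine):
    def get_call_kwargs(self, case):
        return {"verify": False}
```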
|
closed
|
2025-01-30T13:31:42Z
|
2025-02-03T10:07:30Z
|
https://github.com/schemathesis/schemathesis/issues/2713
|
[
"Type: Bug",
"Status: Needs Triage"
] |
lugi0
| 5
|
CorentinJ/Real-Time-Voice-Cloning
|
deep-learning
| 719
|
Replacing synthesizer from Tacotron to Non Attentive Tacotron
|
Working on it!
|
closed
|
2021-04-02T07:59:16Z
|
2021-04-20T03:01:11Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/719
|
[] |
Garvit-32
| 2
|
coqui-ai/TTS
|
pytorch
| 3,264
|
[Bug] XTTS v2 keeps downloading model
|
### Describe the bug
model = 'tts_models/multilingual/multi-dataset/xtts_v2'
tts = TTS(model).to(device)
Every time I call this, it re-downloads the model.
### To Reproduce
Running TTS generation
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
{
"CUDA": {
"GPU": [
"NVIDIA GeForce GTX 1650"
],
"available": true,
"version": "12.1"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.1.0+cu121",
"TTS": "0.20.4",
"numpy": "1.22.0"
},
"System": {
"OS": "Windows",
"architecture": [
"64bit",
"WindowsPE"
],
"processor": "Intel64 Family 6 Model 165 Stepping 5, GenuineIntel",
"python": "3.9.16",
"version": "10.0.19045"
}
}
```
### Additional context
_No response_
|
closed
|
2023-11-18T20:57:06Z
|
2023-11-20T08:41:36Z
|
https://github.com/coqui-ai/TTS/issues/3264
|
[
"bug"
] |
darkzbaron
| 2
|
dgtlmoon/changedetection.io
|
web-scraping
| 1,672
|
(changed) and (into) string
|
How can I get rid of the "(changed)" and "(into)" strings so that only the changed content remains?
|
closed
|
2023-07-04T17:36:05Z
|
2023-07-04T19:34:36Z
|
https://github.com/dgtlmoon/changedetection.io/issues/1672
|
[] |
lukaskrol7
| 0
|
Morizeyao/GPT2-Chinese
|
nlp
| 35
|
Fail to run train_single
|
Great repo. However, the train_single script seems to be broken.
```
Traceback (most recent call last):
File "train_single.py", line 223, in <module>
main()
File "train_single.py", line 74, in main
full_tokenizer = tokenization_bert.BertTokenizer(vocab_file=args.tokenizer_path)
UnboundLocalError: local variable 'tokenization_bert' referenced before assignment
```
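An `UnboundLocalError` of this shape usually means the module was imported inside one branch of a conditional that was not taken, so the local name is never bound when it is used later. A minimal reproduction of the pattern (the module name is kept only for illustration; the fix is to import unconditionally at module scope):

```python
# Buggy shape (as in train_single.py): the import sits inside a conditional
# branch, so when that branch is skipped the local name is never bound.
def build_tokenizer_buggy(use_bert, vocab_file):
    if use_bert:
        import tokenization_bert  # only bound when this branch runs
    return tokenization_bert.BertTokenizer(vocab_file=vocab_file)

# Calling it with use_bert=False reproduces the error:
try:
    build_tokenizer_buggy(False, "vocab.txt")
    error = ""
except UnboundLocalError as exc:
    error = str(exc)
```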
|
closed
|
2019-08-24T12:30:35Z
|
2019-08-25T14:58:49Z
|
https://github.com/Morizeyao/GPT2-Chinese/issues/35
|
[] |
diansheng
| 1
|
quokkaproject/quokka
|
flask
| 270
|
Filter problem at the post list admin page
|
The following filter options aren't working correctly: Title, Summary, Created At, Available At. The first time the page is loaded and a filter is added, the filter doesn't work.
|
closed
|
2015-07-20T02:51:21Z
|
2016-03-02T15:18:06Z
|
https://github.com/quokkaproject/quokka/issues/270
|
[
"bug",
"EASY"
] |
felipevolpone
| 2
|
noirbizarre/flask-restplus
|
api
| 233
|
Swagger doesn't support converters with optional arguments
|
If one defines a route whose converter takes arguments (see http://werkzeug.pocoo.org/docs/0.11/routing/#rule-format), e.g., `@api.route('/my-resource/<string(length=2):id>')`, Swagger raises a `ValueError` (swagger.py#L82) because it deems the type converter unsupported, i.e.,
```python
from flask import Flask
from flask_restplus import Api, Resource
app = Flask(__name__)
api = Api(app)
@api.route('/my-resource/<string(length=2):id>')
class MyResource(Resource):
def get(self, id):
return id
@api.response(403, 'Not Authorized')
def post(self, id):
api.abort(403)
if __name__ == '__main__':
app.run(debug=True)
```
The workaround seems to be to explicitly register the converter via,
```python
from werkzeug.routing import UnicodeConverter
app.url_map.converters['string(length=2)'] = UnicodeConverter
```
which seems somewhat kludgy, especially when one has multiple route signatures that use converter arguments.
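A slightly less kludgy variant of that workaround is to register the converter once under a short alias and use the alias in routes, so Swagger only ever sees a simple converter name. A sketch using werkzeug's routing directly (the alias `len2` and the two-character regex are assumptions, standing in for `string(length=2)`):

```python
from werkzeug.routing import BaseConverter, Map, Rule

class TwoCharConverter(BaseConverter):
    """Matches exactly two non-slash characters (like string(length=2))."""
    regex = "[^/]{2}"

# Register the converter under a plain alias and use the alias in the rule.
url_map = Map(
    [Rule("/my-resource/<len2:id>", endpoint="my_resource")],
    converters={"len2": TwoCharConverter},
)
adapter = url_map.bind("example.com")
endpoint, args = adapter.match("/my-resource/ab")
```

With Flask the equivalent registration would be `app.url_map.converters['len2'] = TwoCharConverter` before declaring `@api.route('/my-resource/<len2:id>')`.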
|
closed
|
2017-02-03T23:43:52Z
|
2017-03-04T20:49:48Z
|
https://github.com/noirbizarre/flask-restplus/issues/233
|
[] |
john-bodley
| 0
|
piskvorky/gensim
|
data-science
| 2,608
|
AttributeError in Doc2vec when compute_loss=True
|
#### Description
Gensim's Doc2Vec includes an initialization parameter compute_loss which, if True, causes the model to keep a running total of the loss during training; this total can then be requested via get_training_loss(). But I get 'Doc2Vec' object has no attribute 'get_training_loss'. A quick look at doc2vec.py verifies that the method isn't implemented there.
What are you trying to achieve? What is the expected result? What are you seeing instead?
I am trying to get the loss so that I can figure out how many epochs to run my model for.
#### Steps/code/corpus to reproduce
Include full tracebacks, logs and datasets if necessary. Please keep the examples minimal ("minimal reproducible example").
#############################################################################
```
AttributeError Traceback (most recent call last)
<ipython-input-9-8c794e318315> in <module>
4 else:
5 print('Model does not exists, creating new one. This will take some time...')
----> 6 create_doc2vec_model(data['content'])
7 model_doc = Doc2Vec.load("/home/ubuntu/Jupyter_Notebook/Akash_testing/d2v_testing.model")
8 print("Model Loaded")
<ipython-input-5-f3af8a0c71ca> in create_doc2vec_model(X)
20 model_doc.train(tagged_data,
21 total_examples=model_doc.corpus_count,
---> 22 epochs=model_doc.iter)
23 model_doc.alpha -= 0.0002
24 model_doc.min_alpha = model_doc.alpha
~/anaconda3/lib/python3.7/site-packages/gensim/models/doc2vec.py in train(self, documents, corpus_file, total_examples, total_words, epochs, start_alpha, end_alpha, word_count, queue_factor, report_delay, callbacks)
811 sentences=documents, corpus_file=corpus_file, total_examples=total_examples, total_words=total_words,
812 epochs=epochs, start_alpha=start_alpha, end_alpha=end_alpha, word_count=word_count,
--> 813 queue_factor=queue_factor, report_delay=report_delay, callbacks=callbacks, **kwargs)
814
815 @classmethod
~/anaconda3/lib/python3.7/site-packages/gensim/models/base_any2vec.py in train(self, sentences, corpus_file, total_examples, total_words, epochs, start_alpha, end_alpha, word_count, queue_factor, report_delay, compute_loss, callbacks, **kwargs)
1079 total_words=total_words, epochs=epochs, start_alpha=start_alpha, end_alpha=end_alpha, word_count=word_count,
1080 queue_factor=queue_factor, report_delay=report_delay, compute_loss=compute_loss, callbacks=callbacks,
-> 1081 **kwargs)
1082
1083 def _get_job_params(self, cur_epoch):
~/anaconda3/lib/python3.7/site-packages/gensim/models/base_any2vec.py in train(self, data_iterable, corpus_file, epochs, total_examples, total_words, queue_factor, report_delay, callbacks, **kwargs)
537
538 for callback in self.callbacks:
--> 539 callback.on_train_begin(self)
540
541 trained_word_count = 0
~/anaconda3/lib/python3.7/site-packages/keras/callbacks.py in on_train_begin(self, logs)
293
294 def on_train_begin(self, logs=None):
--> 295 self.verbose = self.params['verbose']
296 self.epochs = self.params['epochs']
297
AttributeError: 'ProgbarLogger' object has no attribute 'params'
```
#############################################################################
#### Versions
Please provide the output of:
```python
import platform; print(platform.platform())
import sys; print("Python", sys.version)
import numpy; print("NumPy", numpy.__version__)
import scipy; print("SciPy", scipy.__version__)
import gensim; print("gensim", gensim.__version__)
from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
Linux-4.4.0-1094-aws-x86_64-with-debian-stretch-sid
Python 3.7.3 (default, Mar 27 2019, 22:11:17)
[GCC 7.3.0]
NumPy 1.16.4
SciPy 1.3.0
gensim 3.8.0
FAST_VERSION 1
```
|
closed
|
2019-09-25T11:03:29Z
|
2019-09-28T13:37:40Z
|
https://github.com/piskvorky/gensim/issues/2608
|
[] |
Infinity1008
| 2
|
deezer/spleeter
|
tensorflow
| 626
|
[Bug] Spleeter Separate on custom trained model tries to download another model
|
## Description
I have trained a custom model, but the separate function does not work as intended: it tries to download the model from a nonexistent URL.
## Step to reproduce
1. Created custom model training data / specs annotated in the custom_model_config.json
2. Trained the model (successfully, apparently) using `!spleeter train -p "custom_model_config.json" -d "PathToCustomDataset"`. I checked that the trained model folder was created.
3. When trying to separate a file using `!spleeter separate -o sep_out -p "custom_model_config.json" "test.mp3"`, the function tries to download the model from a URL which obviously does not exist. Why is this happening?
## Output
```
tcmalloc: large alloc 1694474240 bytes == 0x556df36f0000 @ 0x7f9e527bf1e7 0x7f9e4ef5f631 0x7f9e4efc3cc8 0x7f9e4efc3de3 0x7f9e4f061ed8 0x7f9e4f062734 0x7f9e4f062882 0x556d45843f68 0x7f9e4efaf53d 0x556d45841c47 0x556d45841a50 0x556d458b5453 0x556d458b04ae 0x556d458433ea 0x556d458b232a 0x556d458b04ae 0x556d458433ea 0x556d458b160e 0x556d4584330a 0x556d458b160e 0x556d458b04ae 0x556d458433ea 0x556d458b160e 0x556d458b04ae 0x556d458433ea 0x556d458b232a 0x556d458b04ae 0x556d45782e2c 0x556d458b2bb5 0x556d458b07ad 0x556d45782e2c
INFO:spleeter:Downloading model archive https://github.com/deezer/spleeter/releases/download/v1.4.0/bach10_model_2.tar.gz
Traceback (most recent call last):
File "/usr/local/bin/spleeter", line 8, in <module>
sys.exit(entrypoint())
File "/usr/local/lib/python3.7/dist-packages/spleeter/__main__.py", line 256, in entrypoint
spleeter()
File "/usr/local/lib/python3.7/dist-packages/typer/main.py", line 214, in _call_
return get_command(self)(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 829, in _call_
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/typer/main.py", line 497, in wrapper
return callback(**use_params) # type: ignore
File "/usr/local/lib/python3.7/dist-packages/spleeter/__main__.py", line 137, in separate
synchronous=False,
File "/usr/local/lib/python3.7/dist-packages/spleeter/separator.py", line 382, in separate_to_file
sources = self.separate(waveform, audio_descriptor)
File "/usr/local/lib/python3.7/dist-packages/spleeter/separator.py", line 325, in separate
return self._separate_librosa(waveform, audio_descriptor)
File "/usr/local/lib/python3.7/dist-packages/spleeter/separator.py", line 269, in _separate_librosa
sess = self._get_session()
File "/usr/local/lib/python3.7/dist-packages/spleeter/separator.py", line 241, in _get_session
model_directory: str = provider.get(self._params["model_dir"])
File "/usr/local/lib/python3.7/dist-packages/spleeter/model/provider/__init__.py", line 80, in get
self.download(model_directory.split(sep)[-1], model_directory)
File "/usr/local/lib/python3.7/dist-packages/spleeter/model/provider/github.py", line 141, in download
response.raise_for_status()
File "/usr/local/lib/python3.7/dist-packages/httpx/_models.py", line 1103, in raise_for_status
raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: 404 Client Error: Not Found for url: https://github.com/deezer/spleeter/releases/download/v1.4.0/bach10_model_2.tar.gz
For more information check: https://httpstatuses.com/404
```
## Environment
| | |
| ----------------- | ------------------------------- |
| OS | Linux (Colab environment) |
| Installation type | pip |
| RAM available | 16GB |
|
closed
|
2021-05-29T16:33:08Z
|
2021-05-31T09:36:24Z
|
https://github.com/deezer/spleeter/issues/626
|
[
"bug",
"invalid"
] |
andresC98
| 4
|
litestar-org/litestar
|
pydantic
| 3,396
|
Enhancement: SQLAdmin Support
|
### Summary
SQLAdmin has a fork for Litestar, but it isn't properly supported and there are no issues or discussions around its limitations. Is there interest in the Litestar community in making this better integrated?
### Basic Example
When using both litestar sqlalchemy plugin and the sqladmin fork https://github.com/cemrehancavdar/sqladmin-litestar, I get errors about pickling a psycopg2 module or `AttributeError: Can't pickle local object 'create_engine.<locals>.connect'`
This seems to be due to the fact that Litestar pickles each router mounted to the app, while sqladmin relies on passing the SQL engine or session maker around, which ends up getting pickled. If I wanted to implement a fix for this, I wouldn't know the right place to look. The examples in the fork only show SQLite, and I don't see any similar discussions, so I suppose this is not fully tested behavior.
### Drawbacks and Impact
Unfortunately sqladmin itself does not seem to be open to being framework agnostic, so any fork suffers the problems of forking and future updates.
### Unresolved questions
_No response_
|
closed
|
2024-04-16T16:34:28Z
|
2025-03-20T15:54:36Z
|
https://github.com/litestar-org/litestar/issues/3396
|
[
"Enhancement"
] |
colebaileygit
| 5
|
AutoGPTQ/AutoGPTQ
|
nlp
| 521
|
[BUG] Loading Saved Marlin Quantized Models Fails
|
**Describe the bug**
After saving a marlin model to disk with `save_pretrained`, reloading the model fails since the quantization config still has gptq in it.
**Hardware details**
A100
**Software version**
Current main
**To Reproduce**
1. Load a model in marlin format and save to disk
```python
from auto_gptq import AutoGPTQForCausalLM
model = AutoGPTQForCausalLM.from_quantized("TheBloke/Llama-2-7B-Chat-GPTQ", use_marlin=True)
model.save_pretrained("/network/rshaw/llama_marlin")
```
2. Restart python and try to reload the saved model:
```python
from auto_gptq import AutoGPTQForCausalLM
model = AutoGPTQForCausalLM.from_quantized("/network/rshaw/llama_marlin", use_marlin=True)
```
Fails with:
```bash
{
"name": "ValueError",
"message": "QuantLinear() does not have a parameter or a buffer named B.",
"stack": "---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[2], line 3
1 from auto_gptq import AutoGPTQForCausalLM
----> 3 model = AutoGPTQForCausalLM.from_quantized(\"/network/rshaw/llama_marlin\", use_marlin=True)
File /network/rshaw/gptq-benchmarking/AutoGPTQ/auto_gptq/modeling/auto.py:129, in AutoGPTQForCausalLM.from_quantized(cls, model_name_or_path, device_map, max_memory, device, low_cpu_mem_usage, use_triton, inject_fused_attention, inject_fused_mlp, use_cuda_fp16, quantize_config, model_basename, use_safetensors, trust_remote_code, warmup_triton, trainable, disable_exllama, disable_exllamav2, **kwargs)
123 # TODO: do we need this filtering of kwargs? @PanQiWei is there a reason we can't just pass all kwargs?
124 keywords = {
125 key: kwargs[key]
126 for key in list(signature(quant_func).parameters.keys()) + huggingface_kwargs
127 if key in kwargs
128 }
--> 129 return quant_func(
130 model_name_or_path=model_name_or_path,
131 device_map=device_map,
132 max_memory=max_memory,
133 device=device,
134 low_cpu_mem_usage=low_cpu_mem_usage,
135 use_triton=use_triton,
136 inject_fused_attention=inject_fused_attention,
137 inject_fused_mlp=inject_fused_mlp,
138 use_cuda_fp16=use_cuda_fp16,
139 quantize_config=quantize_config,
140 model_basename=model_basename,
141 use_safetensors=use_safetensors,
142 trust_remote_code=trust_remote_code,
143 warmup_triton=warmup_triton,
144 trainable=trainable,
145 disable_exllama=disable_exllama,
146 disable_exllamav2=disable_exllamav2,
147 **keywords
148 )
File /network/rshaw/gptq-benchmarking/AutoGPTQ/auto_gptq/modeling/_base.py:1109, in BaseGPTQForCausalLM.from_quantized(cls, model_name_or_path, device_map, max_memory, device, low_cpu_mem_usage, use_triton, use_qigen, use_marlin, torch_dtype, inject_fused_attention, inject_fused_mlp, use_cuda_fp16, quantize_config, model_basename, use_safetensors, trust_remote_code, warmup_triton, trainable, disable_exllama, disable_exllamav2, **kwargs)
1103 model = convert_to_marlin(model, quant_linear_class, quantize_config, repack=False)
1104 else:
1105 # Loading the GPTQ checkpoint to do the conversion.
1106 # TODO: Avoid loading the model with wrong QuantLinear, and directly use
1107 # Marlin ones. The repacking can be done directly on the safetensors, just
1108 # as for AWQ checkpoints.
-> 1109 accelerate.utils.modeling.load_checkpoint_in_model(
1110 model,
1111 dtype=torch_dtype, # This is very hacky but works due to https://github.com/huggingface/accelerate/blob/bd72a5f1a80d5146554458823f8aeda0a9db5297/src/accelerate/utils/modeling.py#L292
1112 checkpoint=model_save_name,
1113 device_map=device_map,
1114 offload_state_dict=True,
1115 offload_buffers=True
1116 )
1118 model = convert_to_marlin(model, quant_linear_class, quantize_config, repack=True)
1120 # Cache the converted model.
File ~/.conda/envs/autogptq-env/lib/python3.10/site-packages/accelerate/utils/modeling.py:1550, in load_checkpoint_in_model(model, checkpoint, device_map, offload_folder, dtype, offload_state_dict, offload_buffers, keep_in_fp32_modules, offload_8bit_bnb)
1548 offload_weight(param, param_name, state_dict_folder, index=state_dict_index)
1549 else:
-> 1550 set_module_tensor_to_device(
1551 model,
1552 param_name,
1553 param_device,
1554 value=param,
1555 dtype=new_dtype,
1556 fp16_statistics=fp16_statistics,
1557 )
1559 # Force Python to clean up.
1560 del checkpoint
File ~/.conda/envs/autogptq-env/lib/python3.10/site-packages/accelerate/utils/modeling.py:301, in set_module_tensor_to_device(module, tensor_name, device, value, dtype, fp16_statistics)
298 tensor_name = splits[-1]
300 if tensor_name not in module._parameters and tensor_name not in module._buffers:
--> 301 raise ValueError(f\"{module} does not have a parameter or a buffer named {tensor_name}.\")
302 is_buffer = tensor_name in module._buffers
303 old_value = getattr(module, tensor_name)
ValueError: QuantLinear() does not have a parameter or a buffer named B."
}
```
I believe this occurs because the safetensors file has the marlin formatted model, but the quantization config is still gptq.
**Expected behavior**
It would be nice if there were a way to serialize marlin models cleanly, such that they can be reloaded.
I am targeting the vLLM case right now and I need a serialized format that I can reload cleanly by iterating the safetensors file to make the integration with vLLM work nicely. I would rather rely on HF/AutoGPTQ's serialization format rather than making my own if possible.
Is there any way we can create a quantization config that will allow for reloading these models?
|
closed
|
2024-01-24T18:51:57Z
|
2024-02-12T13:51:07Z
|
https://github.com/AutoGPTQ/AutoGPTQ/issues/521
|
[
"bug"
] |
robertgshaw2-redhat
| 4
|
ivy-llc/ivy
|
tensorflow
| 27,868
|
Fix Frontend Failing Test: paddle - tensor.torch.Tensor.__gt__
|
closed
|
2024-01-07T23:37:49Z
|
2024-01-07T23:48:41Z
|
https://github.com/ivy-llc/ivy/issues/27868
|
[
"Sub Task"
] |
NripeshN
| 0
|
|
python-restx/flask-restx
|
flask
| 377
|
Basic Auth for Swagger UI
|
Any suggestions on how to password-protect the Swagger UI with basic auth? I was going to use something like flask-httpauth, or maybe just write my own wrapper, but the Swagger UI route isn't exposed in an obvious way. If you're wondering why: I may have to make my API available publicly, and I don't want anyone who isn't supposed to use the API to have any more information about it than necessary, though it could be useful for some users. I may restrict it by IP, but I'm unsure of this yet.
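One approach is to guard requests to the docs path in a `before_request` hook. The framework-independent part, validating an HTTP Basic `Authorization` header, can be sketched with only the standard library; the Flask wiring around it (which path to guard, returning a 401 with `WWW-Authenticate: Basic`) is left as an assumption:

```python
import base64
import hmac

def basic_auth_ok(auth_header, username, password):
    """Validate an 'Authorization: Basic <base64(user:pass)>' header value.
    In a Flask app this would be called from a before_request hook that
    guards the Swagger UI path."""
    if not auth_header or not auth_header.startswith("Basic "):
        return False
    try:
        decoded = base64.b64decode(auth_header[6:]).decode("utf-8")
    except (ValueError, UnicodeDecodeError):
        return False
    user, _, pwd = decoded.partition(":")
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(user, username) and hmac.compare_digest(pwd, password)

header = "Basic " + base64.b64encode(b"docs:s3cret").decode()
```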
|
open
|
2021-09-22T14:56:42Z
|
2023-06-26T14:33:10Z
|
https://github.com/python-restx/flask-restx/issues/377
|
[
"question"
] |
Bxmnx
| 2
|
paperless-ngx/paperless-ngx
|
machine-learning
| 8,289
|
[BUG] File permissions are not set correctly after e.g. deleting a page from a PDF
|
### Description
When deleting a page from a scanned PDF, the newly created PDF does not inherit the permissions from the folder. After the page is deleted, other processes are not able to work with the new file because of access issues.
### Steps to reproduce
1. Create a job / cron job which regularly syncs the files from the output directory to a different location
--> this is the job which seems to fail due to incorrect permissions
2. Take a PDF with multiple pages
3. Delete a page via the Paperless UI
(screenshot) The permissions for normal files
(screenshot) The permissions of the file where a page has been removed
### Webserver logs
```bash
[2024-11-15 08:05:03,698] [DEBUG] [paperless.tasks] Training data unchanged.
[2024-11-15 08:25:52,316] [DEBUG] [paperless.handlers] Deleted file /usr/src/paperless/media/documents/originals/2024/2024-10-18 - none - Dexcom - 20241103_113340_BRN3C2AF4DFAC1D_002264.pdf.
[2024-11-15 08:25:52,317] [DEBUG] [paperless.handlers] Deleted file /usr/src/paperless/media/documents/archive/2024/2024-10-18 - none - Dexcom - 20241103_113340_BRN3C2AF4DFAC1D_002264.pdf.
[2024-11-15 08:25:52,320] [DEBUG] [paperless.handlers] Deleted file /usr/src/paperless/media/documents/thumbnails/0003781.webp.
[2024-11-15 08:42:55,345] [DEBUG] [paperless.matching] Correspondent Deutsche Bank matched on document 2018-11-15 Deutsche Bank photoTAN-Aktivierungsgrafik because it contains all of these words: Deutsche Bank
[2024-11-15 08:43:10,770] [DEBUG] [paperless.matching] Correspondent Deutsche Bank matched on document 2018-11-15 Deutsche Bank photoTAN-Aktivierungsbrief because it contains all of these words: Deutsche Bank
[2024-11-15 08:54:59,693] [INFO] [paperless.bulk_edit] Attempting to delete pages [2] from 1 documents
[2024-11-15 08:54:59,956] [DEBUG] [paperless.filehandling] Document has storage_path 1 ({created_year}/{created} - {asn} - {correspondent} - {title}) set
[2024-11-15 08:55:00,353] [INFO] [paperless.bulk_edit] Deleted pages [2] from document 1033
[2024-11-15 08:55:00,957] [INFO] [paperless.parsing.tesseract] pdftotext exited 0
[2024-11-15 08:55:01,326] [DEBUG] [paperless.parsing.tesseract] Calling OCRmyPDF with args: {'input_file': PosixPath('/usr/src/paperless/media/documents/originals/2018/2018-11-15 - 379 - Deutsche Bank - photoTAN-Aktivierungsbrief.pdf'), 'output_file': PosixPath('/tmp/paperless/paperless-fkh6aute/archive.pdf'), 'use_threads': True, 'jobs': 4, 'language': 'deu+eng', 'output_type': 'pdfa', 'progress_bar': False, 'color_conversion_strategy': 'RGB', 'skip_text': True, 'clean': True, 'deskew': True, 'rotate_pages': True, 'rotate_pages_threshold': 12.0, 'sidecar': PosixPath('/tmp/paperless/paperless-fkh6aute/sidecar.txt'), 'invalidate_digital_signatures': True, 'continue_on_soft_render_error': True}
[2024-11-15 08:55:01,936] [INFO] [ocrmypdf._pipeline] skipping all processing on this page
[2024-11-15 08:55:01,941] [INFO] [ocrmypdf._pipelines.ocr] Postprocessing...
[2024-11-15 08:55:02,329] [ERROR] [ocrmypdf._exec.ghostscript] GPL Ghostscript 10.03.1 (2024-05-02)
Copyright (C) 2024 Artifex Software, Inc. All rights reserved.
This software is supplied under the GNU AGPLv3 and comes with NO WARRANTY:
see the file COPYING for details.
Processing pages 1 through 1.
Page 1
Loading font Helvetica (or substitute) from /usr/share/ghostscript/10.03.1/Resource/Font/NimbusSans-Regular
Loading font Times-Roman (or substitute) from /usr/share/ghostscript/10.03.1/Resource/Font/NimbusRoman-Regular
The following warnings were encountered at least once while processing this file:
A problem was encountered trying to preserve the Outlines
[2024-11-15 08:55:02,329] [ERROR] [ocrmypdf._exec.ghostscript] This file had errors that were repaired or ignored.
[2024-11-15 08:55:02,329] [ERROR] [ocrmypdf._exec.ghostscript] The file was produced by:
[2024-11-15 08:55:02,329] [ERROR] [ocrmypdf._exec.ghostscript] >>>> Adobe Acrobat Pro 11.0.23 Paper Capture Plug-in <<<<
[2024-11-15 08:55:02,330] [ERROR] [ocrmypdf._exec.ghostscript] Please notify the author of the software that produced this
[2024-11-15 08:55:02,330] [ERROR] [ocrmypdf._exec.ghostscript] file that it does not conform to Adobe's published PDF
[2024-11-15 08:55:02,330] [ERROR] [ocrmypdf._exec.ghostscript] specification.
[2024-11-15 08:55:02,353] [WARNING] [ocrmypdf._metadata] Some input metadata could not be copied because it is not permitted in PDF/A. You may wish to examine the output PDF's XMP metadata.
[2024-11-15 08:55:03,141] [INFO] [ocrmypdf._pipeline] Image optimization ratio: 1.27 savings: 21.1%
[2024-11-15 08:55:03,141] [INFO] [ocrmypdf._pipeline] Total file size ratio: 2.07 savings: 51.6%
[2024-11-15 08:55:03,144] [INFO] [ocrmypdf._pipelines._common] Output file is a PDF/A-2B (as expected)
[2024-11-15 08:55:03,570] [DEBUG] [paperless.parsing.tesseract] Incomplete sidecar file: discarding.
[2024-11-15 08:55:03,598] [INFO] [paperless.parsing.tesseract] pdftotext exited 0
[2024-11-15 08:55:03,600] [DEBUG] [paperless.parsing] Execute: convert -density 300 -scale 500x5000> -alpha remove -strip -auto-orient -define pdf:use-cropbox=true /tmp/paperless/paperless-fkh6aute/archive.pdf[0] /tmp/paperless/paperless-fkh6aute/convert.webp
[2024-11-15 08:55:04,983] [INFO] [paperless.parsing] convert exited 0
[2024-11-15 08:55:05,109] [INFO] [paperless.tasks] Updating index for document 1033 (77a22606b7bf3ca81a1586ae4745bf81)
[2024-11-15 08:55:05,230] [DEBUG] [paperless.parsing.tesseract] Deleting directory /tmp/paperless/paperless-fkh6aute
[2024-11-15 08:55:13,799] [DEBUG] [paperless.matching] Correspondent Deutsche Bank matched on document 2018-11-15 Deutsche Bank photoTAN-Aktivierungsbrief because it contains all of these words: Deutsche Bank
[2024-11-15 08:55:26,600] [DEBUG] [paperless.matching] Correspondent Deutsche Bank matched on document 2018-11-15 Deutsche Bank photoTAN-Aktivierungsgrafik because it contains all of these words: Deutsche Bank
[2024-11-15 08:55:31,959] [DEBUG] [paperless.matching] Correspondent Deutsche Bank matched on document 2018-12-31 Deutsche Bank Anlagen zum Kontoauszug because it contains all of these words: Deutsche Bank
[2024-11-15 08:55:31,971] [DEBUG] [paperless.matching] DocumentType Kontoauszug matched on document 2018-12-31 Deutsche Bank Anlagen zum Kontoauszug because it contains this string: "Kontoauszug"
[2024-11-15 08:55:33,325] [DEBUG] [paperless.matching] Correspondent Deutsche Bank matched on document 2018-11-15 Deutsche Bank photoTAN-Aktivierungsgrafik because it contains all of these words: Deutsche Bank
[2024-11-15 08:55:34,579] [DEBUG] [paperless.matching] Correspondent Deutsche Bank matched on document 2018-11-15 Deutsche Bank photoTAN-Aktivierungsbrief because it contains all of these words: Deutsche Bank
[2024-11-15 08:56:51,487] [DEBUG] [paperless.matching] Correspondent Deutsche Bank matched on document 2018-06-29 Deutsche Bank Kontoauszug because it contains all of these words: Deutsche Bank
[2024-11-15 08:56:51,556] [DEBUG] [paperless.matching] DocumentType Kontoauszug matched on document 2018-06-29 Deutsche Bank Kontoauszug because it contains this string: "Kontoauszug"
[2024-11-15 08:56:52,098] [DEBUG] [paperless.matching] Correspondent Deutsche Bank matched on document 2018-11-15 Deutsche Bank photoTAN-Aktivierungsbrief because it contains all of these words: Deutsche Bank
[2024-11-15 08:56:53,020] [DEBUG] [paperless.matching] Correspondent Deutsche Bank matched on document 2018-11-15 Deutsche Bank photoTAN-Aktivierungsgrafik because it contains all of these words: Deutsche Bank
[2024-11-15 08:57:31,656] [DEBUG] [paperless.filehandling] Document has storage_path 1 ({created_year}/{created} - {asn} - {correspondent} - {title}) set
[2024-11-15 08:57:48,464] [DEBUG] [paperless.filehandling] Document has storage_path 1 ({created_year}/{created} - {asn} - {correspondent} - {title}) set
[2024-11-15 08:57:51,950] [DEBUG] [paperless.filehandling] Document has storage_path 1 ({created_year}/{created} - {asn} - {correspondent} - {title}) set
[2024-11-15 09:00:02,462] [INFO] [paperless.management.consumer] Adding /usr/src/paperless/consume/20241114_SerienanschreibenDepot-Kontoinformationallgemein_18872298_436725975.pdf to the task queue.
[2024-11-15 09:00:02,587] [INFO] [paperless.management.consumer] Adding /usr/src/paperless/consume/Abrechnung_240753339100EUR_2024-11-01_KK_240753339100KD401H06110200342210879.pdf to the task queue.
[2024-11-15 09:00:02,825] [DEBUG] [paperless.tasks] Skipping plugin CollatePlugin
[2024-11-15 09:00:02,825] [DEBUG] [paperless.tasks] Executing plugin BarcodePlugin
[2024-11-15 09:00:02,825] [DEBUG] [paperless.barcodes] Scanning for barcodes using PYZBAR
[2024-11-15 09:00:02,828] [DEBUG] [paperless.barcodes] PDF has 2 pages
[2024-11-15 09:00:02,828] [DEBUG] [paperless.barcodes] Processing page 0
```
### Browser logs
```bash
Log which the scheduler sends afterwards (translated from German):
The Task Scheduler has completed a scheduled task.
Task: rsync Dokumente
Start: Fri, 15 Nov 2024 15:00:01 +0100
End: Fri, 15 Nov 2024 15:00:01 +0100
Current status: 23 (Interrupted)
Standard output/error:
rsync: send_files failed to open "/volume1/Dokumente/documents/archive/2018/2018-11-15 - 379 - Deutsche Bank - photoTAN-Aktivierungsbrief.pdf": Permission denied (13)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1464) [sender=3.1.2]
```
### Paperless-ngx version
2.12.0
### Host OS
Linux-4.4.302+-x86_64-with-glibc2.36
### Installation method
Docker - official image
### System status
```json
{
"pngx_version": "2.12.0",
"server_os": "Linux-4.4.302+-x86_64-with-glibc2.36",
"install_type": "docker",
"storage": {
"total": 2848078716928,
"available": 2106884804608
},
"database": {
"type": "sqlite",
"url": "/usr/src/paperless/data/db.sqlite3",
"status": "OK",
"error": null,
"migration_status": {
"latest_migration": "documents.1052_document_transaction_id",
"unapplied_migrations": []
}
},
"tasks": {
"redis_url": "redis://broker:6379",
"redis_status": "OK",
"redis_error": null,
"celery_status": "OK",
"index_status": "OK",
"index_last_modified": "2024-11-15T14:00:05.589408Z",
"index_error": null,
"classifier_status": "OK",
"classifier_last_trained": "2024-11-15T13:05:03.671148Z",
"classifier_error": null
}
}
```
### Browser
Chrome
### Configuration changes
I used a YouTube video to create the container; the only thing that I remember is that I changed some volumes:
/volume1/docker/paperless-ngx/consume >> /usr/src/paperless/consume
/volume1/docker/paperless-ngx/data >> /usr/src/paperless/data
/volume1/docker/paperless-ngx/export >> /usr/src/paperless/export
/volume1/Dokumente >> /usr/src/paperless/media
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description.
|
closed
|
2024-11-15T14:16:19Z
|
2024-12-16T03:19:27Z
|
https://github.com/paperless-ngx/paperless-ngx/issues/8289
|
[
"not a bug"
] |
Kopierwichtel
| 3
|
xinntao/Real-ESRGAN
|
pytorch
| 87
|
Sample images
|
input

output

Of course the quality of this picture is very low, but the three circled areas should have room for improvement.

|
open
|
2021-09-21T10:56:50Z
|
2021-09-26T02:52:17Z
|
https://github.com/xinntao/Real-ESRGAN/issues/87
|
[] |
tumuyan
| 4
|
FactoryBoy/factory_boy
|
sqlalchemy
| 1,044
|
Unusable password generator for Django
|
#### The problem
The recently added `Password` generator for Django is helpful, but it's not clear how to use it to create an unusable password (similar to calling `set_unusable_password` on the generated user).
#### Proposed solution
Django's `set_unusable_password` is a call to `make_password` with `None` as the password argument:
https://github.com/django/django/blob/0b506bfe1ab9f1c38e439c77b3c3f81c8ac663ea/django/contrib/auth/base_user.py#L118-L120
Using `password = factory.django.Password(None)` will actually work (and will allow factory users to override the password if desired). However, currently the password argument to this factory is documented as a string and this option is not mentioned.
#### Extra notes
The default value of the `password` argument to `factory.django.Password` could also be set to `None`. This would make that factory generate unusable passwords by default, which may or may not be desired.
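For readers wondering why `Password(None)` works, here is a simplified, hypothetical sketch of the relevant branch of Django's `make_password` (the real implementation lives in `django.contrib.auth.hashers`; the `!` prefix and 40-character suffix mirror Django's constants, while the actual hashing branch is elided):

```python
import secrets
import string

UNUSABLE_PASSWORD_PREFIX = "!"           # same prefix Django uses
UNUSABLE_PASSWORD_SUFFIX_LENGTH = 40     # same suffix length Django uses

def make_password(password):
    """Simplified sketch: a None password becomes an unusable one."""
    if password is None:
        alphabet = string.ascii_letters + string.digits
        suffix = "".join(
            secrets.choice(alphabet) for _ in range(UNUSABLE_PASSWORD_SUFFIX_LENGTH)
        )
        return UNUSABLE_PASSWORD_PREFIX + suffix
    raise NotImplementedError("real hashing elided in this sketch")

unusable = make_password(None)
print(unusable.startswith(UNUSABLE_PASSWORD_PREFIX))  # True
```

Because the stored value starts with `!`, `check_password` rejects every candidate, which is exactly the `set_unusable_password` behavior the issue asks about.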
|
closed
|
2023-09-20T07:47:12Z
|
2023-09-26T06:50:38Z
|
https://github.com/FactoryBoy/factory_boy/issues/1044
|
[
"Doc",
"BeginnerFriendly"
] |
jaap3
| 1
|
sktime/sktime
|
data-science
| 7,224
|
[BUG] DartsLinearRegression fails instead of giving warning message
|
**Describe the bug**
`DartsLinearRegressionModel` fails when a warning should be raised
**To Reproduce**
```python
from sktime.datasets import load_airline
from sktime.forecasting.darts import DartsLinearRegressionModel
y = load_airline()
forecaster = DartsLinearRegressionModel(output_chunk_length=6,likelihood="quantile",quantiles=[0.33,0.5,0.67])
forecaster.fit(y=y)
```
output:
```
TypeError: warn() got an unexpected keyword argument 'message'
```
**Expected behavior**
A python warning message saying:
```
"Setting multi_models=True with quantile regression may cause issues. Consider using multi_models=False."
```
**Additional context**
It looks like the wrong keyword argument was passed into warn (`message` instead of `msg`) at [line 595](https://github.com/sktime/sktime/blob/0f75b7ad0dce8b722c81fe49bb9624de20cc4923/sktime/forecasting/darts.py#L595) and [line 369](https://github.com/sktime/sktime/blob/0f75b7ad0dce8b722c81fe49bb9624de20cc4923/sktime/forecasting/darts.py#L369).
If you are okay with it, I could give a PR for this.
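The failure mode is easy to reproduce in isolation. Below is a minimal stand-alone sketch; the `warn` signature here is an assumption modelled on the internal utility, whose parameter is named `msg` rather than `message`:

```python
import warnings

def warn(msg, category=UserWarning, stacklevel=2):
    """Stand-in for the internal warn utility: note the parameter is `msg`."""
    warnings.warn(msg, category=category, stacklevel=stacklevel)

# Passing `message=` (as darts.py currently does) raises TypeError:
try:
    warn(message="Setting multi_models=True with quantile regression may cause issues.")
except TypeError as exc:
    error_text = str(exc)  # "warn() got an unexpected keyword argument 'message'"

# Passing `msg=` emits the warning as intended:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    warn(msg="Setting multi_models=True with quantile regression may cause issues.")
```

So the fix is simply renaming the keyword at the two call sites linked above.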
**Versions**
Python dependencies:
pip: 23.2.1
sktime: 0.33.1
sklearn: 1.5.2
skbase: 0.8.3
numpy: 1.26.4
scipy: 1.14.1
pandas: 2.1.2
matplotlib: 3.8.1
joblib: 1.4.2
numba: 0.60.0
statsmodels: 0.14.4
pmdarima: 2.0.4
statsforecast: 1.7.8
tsfresh: 0.20.3
tslearn: None
torch: None
tensorflow: 2.16.2
|
closed
|
2024-10-04T16:56:02Z
|
2024-10-11T09:21:09Z
|
https://github.com/sktime/sktime/issues/7224
|
[
"bug",
"module:forecasting"
] |
wilsonnater
| 2
|
davidsandberg/facenet
|
computer-vision
| 1,001
|
classifier problem
|
I trained the model with my own images (10k images, 1k classes) using train_softmax.py.
My settings:
```
--max_nrof_epochs 100 \
--epoch_size 100 \
--batch_size 30 \
```
Other settings are default (embedding size is 128).
I got accuracy ~0.98 at epoch 80, loss ~1.7.
But when I use this trained model to calculate the 128 features of the same images and then use these features with SVC, I get 0.00149 accuracy. I've tried other classifier models and got ~0.65 accuracy on the training images but ~0.0003 on the public test set (17k images, 1k classes).
Another problem is that I can't use my 8GB RTX 2070 to train this model. I think my GPU does not have enough memory, but I'm not sure, so I trained the model on my CPU and it took ~4 hours to run 80 epochs.
|
open
|
2019-03-30T08:02:17Z
|
2019-03-30T08:02:17Z
|
https://github.com/davidsandberg/facenet/issues/1001
|
[] |
ducnguyen96
| 0
|
rthalley/dnspython
|
asyncio
| 277
|
In macOS, dnspython cannot resolve some TLD domains.
|
In macos Sierra 10.12.4
pip list |grep dns
dnspython (1.15.0)

In Kali linux
pip list |grep dns
dnspython (1.15.0)

|
closed
|
2017-09-07T18:58:38Z
|
2017-09-08T08:38:12Z
|
https://github.com/rthalley/dnspython/issues/277
|
[] |
eldraco
| 2
|
google-research/bert
|
nlp
| 1,195
|
How can we fine-tune BERT using multiple GPUs?
|
open
|
2021-01-20T15:14:49Z
|
2021-01-20T15:14:49Z
|
https://github.com/google-research/bert/issues/1195
|
[] |
FatmaSayedAhmed
| 0
|
|
flasgger/flasgger
|
flask
| 141
|
The validation of the API payload does not pass if the optional fields are of a type other than string
|
I wrote this in a YAML file. The validation passes only if there is no optional field (i.e. a field not listed under `required` in the definitions) of type object or array, such as `approval_list` in the code below.
```
Create new Deployment Unit
---
tags:
- Deployment Unit
parameters:
- name: Token
in: header
description: API key
required: true
type: string
format: string
- name: input
in: body
description: Deployment Unit data
required: true
type: string
format: string
schema:
$ref: "#/definitions/DeploymentUnit" # <---------
responses:
200:
description: New Deployment Unit has been added successfully
schema:
$ref: "#/definitions/Data" # <---------
examples:
result: "success"
message: "DeploymentUnit and deployment fields created successfully"
404:
description: exception in adding Deployment Unit
examples:
result: "failed"
message: "Throws an exception"
definitions:
DeploymentUnit: # <----------
type: object
required:
- name
- type
properties:
name:
type: string
type:
type: string
release_notes:
type: string
branch:
type: string
approval_status:
type: string
approval_list:
type: array
items:
type: object
properties:
approval_status:
type: string
approved_by:
type: string
approved_date:
type: string
Data:
type: object
properties:
data:
type: string
result:
type: string
message:
type: string
```
|
open
|
2017-08-03T10:12:24Z
|
2018-10-01T17:31:30Z
|
https://github.com/flasgger/flasgger/issues/141
|
[
"hacktoberfest"
] |
VjSng
| 2
|
xinntao/Real-ESRGAN
|
pytorch
| 905
|
After executing the "python inference_realesrgan.py -n RealESRGAN_x4plus -i inputs" command, it gets stuck with no further output
|
**The environment is as follows:**
(ml-env) PS C:\workspace\ml\Real-ESRGAN\Real-ESRGAN> pip list
Package Version Editable project location
----------------------- --------------- ---------------------------------------
absl-py 2.1.0
addict 2.4.0
autocommand 2.2.2
backports.tarfile 1.2.0
basicsr 1.4.2
certifi 2025.1.31
charset-normalizer 3.4.1
colorama 0.4.6
contourpy 1.3.1
cycler 0.12.1
facexlib 0.3.0
filelock 3.18.0
filterpy 1.4.5
fonttools 4.56.0
fsspec 2025.3.0
future 1.0.0
gfpgan 1.3.8
grpcio 1.71.0
idna 3.10
imageio 2.37.0
importlib_metadata 8.0.0
inflect 7.3.1
jaraco.collections 5.1.0
jaraco.context 5.3.0
jaraco.functools 4.0.1
jaraco.text 3.12.1
Jinja2 3.1.6
kiwisolver 1.4.8
lazy_loader 0.4
llvmlite 0.44.0
lmdb 1.6.2
Markdown 3.7
MarkupSafe 3.0.2
matplotlib 3.10.1
more-itertools 10.3.0
mpmath 1.3.0
networkx 3.4.2
numba 0.61.0
numpy 2.1.3
opencv-python 4.11.0.86
packaging 24.2
pillow 11.1.0
pip 25.0.1
platformdirs 4.3.6
protobuf 6.30.1
pyparsing 3.2.1
python-dateutil 2.9.0.post0
PyYAML 6.0.2
realesrgan 0.3.0 C:\workspace\ml\Real-ESRGAN\Real-ESRGAN
requests 2.32.3
scikit-image 0.25.2
scipy 1.15.2
setuptools 76.1.0
six 1.17.0
sympy 1.13.1
tb-nightly 2.20.0a20250318
tensorboard-data-server 0.7.2
tifffile 2025.3.13
tomli 2.0.1
torch 2.6.0
torchvision 0.21.0
tqdm 4.67.1
typeguard 4.3.0
typing_extensions 4.12.2
urllib3 2.3.0
Werkzeug 3.1.3
wheel 0.43.0
yapf 0.43.0
zipp 3.19.2
**Execute the following command:**
python inference_realesrgan.py -n RealESRGAN_x4plus -i inputs
Testing 0 00017_gray
**Output after manual cancellation via ctrl+c:**
Traceback (most recent call last):
File "C:\workspace\ml\Real-ESRGAN\Real-ESRGAN\inference_realesrgan.py", line 166, in <module>
main()
File "C:\workspace\ml\Real-ESRGAN\Real-ESRGAN\inference_realesrgan.py", line 147, in main
output, _ = upsampler.enhance(img, outscale=args.outscale) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\workspace\ml\ml-env\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\workspace\ml\Real-ESRGAN\Real-ESRGAN\realesrgan\utils.py", line 223, in enhance
self.process()
File "C:\workspace\ml\Real-ESRGAN\Real-ESRGAN\realesrgan\utils.py", line 115, in process
self.output = self.model(self.img)
^^^^^^^^^^^^^^^^^^^^
File "C:\workspace\ml\ml-env\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\workspace\ml\ml-env\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\workspace\ml\ml-env\Lib\site-packages\basicsr\archs\rrdbnet_arch.py", line 113, in forward
body_feat = self.conv_body(self.body(feat))
^^^^^^^^^^^^^^^
File "C:\workspace\ml\ml-env\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\workspace\ml\ml-env\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\workspace\ml\ml-env\Lib\site-packages\torch\nn\modules\container.py", line 250, in forward
input = module(input)
^^^^^^^^^^^^^
File "C:\workspace\ml\ml-env\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\workspace\ml\ml-env\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\workspace\ml\ml-env\Lib\site-packages\basicsr\archs\rrdbnet_arch.py", line 60, in forward
out = self.rdb2(out)
^^^^^^^^^^^^^^
File "C:\workspace\ml\ml-env\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\workspace\ml\ml-env\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\workspace\ml\ml-env\Lib\site-packages\basicsr\archs\rrdbnet_arch.py", line 35, in forward
x3 = self.lrelu(self.conv3(torch.cat((x, x1, x2), 1)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\workspace\ml\ml-env\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\workspace\ml\ml-env\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\workspace\ml\ml-env\Lib\site-packages\torch\nn\modules\conv.py", line 554, in forward
return self._conv_forward(input, self.weight, self.bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\workspace\ml\ml-env\Lib\site-packages\torch\nn\modules\conv.py", line 549, in _conv_forward
return F.conv2d(
^^^^^^^^^
KeyboardInterrupt
|
open
|
2025-03-19T06:08:30Z
|
2025-03-19T08:29:04Z
|
https://github.com/xinntao/Real-ESRGAN/issues/905
|
[] |
Le1q
| 1
|
encode/databases
|
sqlalchemy
| 161
|
Getting row count from an update
|
I'm aware of e.g. #108 but I'm wondering what the best way to get the row count for an update or delete is for now? On Postgres if it makes a difference.
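For now, the row count lives on the underlying DB-API cursor as `rowcount`, which `databases` doesn't expose directly (on Postgres, fetching the rows of an `UPDATE ... RETURNING` clause and counting them is, as far as I know, a common workaround). The underlying mechanism, sketched with the stdlib `sqlite3` driver just to show the DB-API attribute in question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, qty INTEGER)")
conn.executemany("INSERT INTO items (qty) VALUES (?)", [(1,), (2,), (3,)])

# DB-API cursors report how many rows an UPDATE/DELETE touched:
cur = conn.execute("UPDATE items SET qty = qty + 1 WHERE qty >= 2")
print(cur.rowcount)  # 2
```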
|
open
|
2019-11-15T15:00:25Z
|
2019-12-11T11:11:26Z
|
https://github.com/encode/databases/issues/161
|
[] |
knyghty
| 6
|
Lightning-AI/LitServe
|
fastapi
| 121
|
during manual local testing, the processes are not killed if the test fails
|
We need to terminate the processes if the test fails for whatever reason:
## Current
```python
def test_e2e_default_batching(killall):
process = subprocess.Popen(
["python", "tests/e2e/default_batching.py"],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
stdin=subprocess.DEVNULL,
)
time.sleep(5)
resp = requests.post("http://127.0.0.1:8000/predict", json={"input": 4.0}, headers=None)
assert resp.status_code == 200, f"Expected response to be 200 but got {resp.status_code}"
assert resp.json() == {"output": 16.0}, "tests/simple_server.py didn't return expected output"
killall(process)
```
## Proposed
```py
def test_e2e_default_batching(killall):
process = subprocess.Popen(
["python", "tests/e2e/default_batching.py"],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
stdin=subprocess.DEVNULL,
)
time.sleep(5)
try:
resp = requests.post("http://127.0.0.1:8000/predict", json={"input": 4.0}, headers=None)
assert resp.status_code == 200, f"Expected response to be 200 but got {resp.status_code}"
assert resp.json() == {"output": 16.0}, "tests/simple_server.py didn't return expected output"
except Exception as e:
raise e
finally: # kill the process before raising the exception
killall(process)
```
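A side note on the proposal (not part of the original post): the `except Exception as e: raise e` clause is redundant, since a `try`/`finally` alone already runs the cleanup before the exception propagates. A minimal stdlib demonstration:

```python
cleaned_up = []

def run():
    try:
        raise RuntimeError("request failed")
    finally:                     # runs even though the exception propagates
        cleaned_up.append(True)  # stands in for killall(process)

try:
    run()
except RuntimeError:
    pass

print(cleaned_up)  # [True]
```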
---
_Originally posted by @bhimrazy in https://github.com/Lightning-AI/LitServe/issues/119#issuecomment-2141734005_
|
closed
|
2024-05-31T12:02:02Z
|
2024-06-03T18:46:25Z
|
https://github.com/Lightning-AI/LitServe/issues/121
|
[
"bug",
"good first issue",
"ci / tests"
] |
aniketmaurya
| 0
|
donnemartin/system-design-primer
|
python
| 691
|
Can you make it into a book?
|
open
|
2022-07-25T03:27:19Z
|
2023-10-02T12:12:55Z
|
https://github.com/donnemartin/system-design-primer/issues/691
|
[
"needs-review"
] |
aexftf
| 2
|
|
albumentations-team/albumentations
|
deep-learning
| 1,973
|
Supported mask formats with Albumentations
|
## Your Question
From the documentation, both the API reference and [user guide](https://albumentations.ai/docs/getting_started/mask_augmentation/) sections, it's not straightforward to understand which mask formats are supported and, more importantly, whether different mask formats can lead to different transformation outputs due to internal implementation details.
Take for example a semantic segmentation task with 3 classes: A, B, and C, each class has an associated mask Ma, Mb, Mc stored as a different file. Besides RLE encoding and similar sparse formats, the most basic ways to encode a dense mask, and augment a sample are:
* Read Ma, Mb, and Mc as an np array and store them in a Python list, eg `masks`. The transform API allows to call `transformed = transform(image=image, masks=masks)` and gets the augmented image and mask pair.
* Read Ma, Mb, and Mc as a np array and stack them in a `mask` np array of shape (H, W, C), where C=3 and each array's element is True or False. Let's refer to this as _one-hot boolean encoding_. The transform API allows to call `transformed = transform(image=image, mask=mask)` and gets the augmented image and mask pair.
* Read Ma, Mb, and Mc as a np array and encode them in a `mask` array of shape (H, W), where each array's item represents the class index (0, 1, 2). Let's refer to this as _integer tensor encoding_. Then I can call `transformed = transform(image=image, mask=mask)` and get the augmented image and mask pair.
Now, my questions are:
* Does Albumentations support all of the 3 types of encodings for every transform?
* Does the encoding type affect the output of a given transformation?
* Is one approach better than another in terms of performance?
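To make the three encodings concrete, here is a tiny pure-Python sketch (a 2x2 image with 3 classes; class indices 0=A, 1=B, 2=C) showing that, when classes are mutually exclusive, they carry the same information and convert losslessly into each other:

```python
H, W, C = 2, 2, 3

# 1) list of per-class masks (the format accepted by `masks=`)
masks = [
    [[1, 0], [0, 0]],  # Ma
    [[0, 1], [0, 0]],  # Mb
    [[0, 0], [1, 1]],  # Mc
]

# 2) one-hot boolean encoding, shape (H, W, C)
one_hot = [[[bool(masks[c][i][j]) for c in range(C)] for j in range(W)]
           for i in range(H)]

# 3) integer tensor encoding, shape (H, W)
integer_mask = [[one_hot[i][j].index(True) for j in range(W)] for i in range(H)]

print(integer_mask)  # [[0, 1], [2, 2]]
```

Note that the conversion to the integer encoding assumes the classes do not overlap; overlapping masks can only be represented by the first two encodings.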
|
open
|
2024-10-07T16:07:00Z
|
2024-10-08T20:13:27Z
|
https://github.com/albumentations-team/albumentations/issues/1973
|
[
"question"
] |
PRFina
| 1
|
FlareSolverr/FlareSolverr
|
api
| 1,085
|
Error solving the challenge. Timeout after 60.0 seconds
|
### Have you checked our README?
- [X] I have checked the README
### Have you followed our Troubleshooting?
- [X] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Environment
```markdown
- FlareSolverr version: 3.3.14
- Last working FlareSolverr version: 3.3.13
- Operating system: Unraid
- Are you using Docker: [yes/no] yes
- FlareSolverr User-Agent (see log traces or / endpoint): Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36
- Are you using a VPN: [yes/no] no
- Are you using a Proxy: [yes/no] no
- Are you using Captcha Solver: [yes/no] no
- If using captcha solver, which one:
- URL to test this issue: https://1337x.to/cat/Movies/time/desc/1/
```
### Description
Since updating to 3.3.14 I have been noticing FlareSolverr is not working for ANY of the indexers I use it with (1337x, ExtraTorrent.st, iDope).
Communication test between Prowlarr and FlareSolverr is successful, but tests from an indexer through FlareSolverr fail.
I have also tried ensuring Prowlarr is up to date, and tried rebooting containers / host to no avail.
### Logged Error Messages
```text
2024-02-20 19:43:34 INFO Incoming request => POST /v1 body: {'maxTimeout': 60000, 'cmd': 'request.get', 'url': 'https://1337x.to/cat/Movies/time/desc/1/', 'proxy': {}}
version_main cannot be converted to an integer
2024-02-20 19:43:35 INFO Challenge detected. Title found: Just a moment...
2024-02-20 19:44:35 ERROR Error: Error solving the challenge. Timeout after 60.0 seconds.
2024-02-20 19:44:35 INFO Response in 60.708 s
2024-02-20 19:44:35 INFO 192.168.1.61 POST http://192.168.1.59:8191/v1 500 Internal Server Error
```
### Screenshots
_No response_
|
closed
|
2024-02-20T19:48:46Z
|
2024-02-20T19:50:46Z
|
https://github.com/FlareSolverr/FlareSolverr/issues/1085
|
[] |
nickydd9
| 1
|
deepset-ai/haystack
|
nlp
| 8,366
|
Remove deprecated `Pipeline` init argument `debug_path`
|
The argument `debug_path` has been deprecated in PR #8364; the deprecation will be released with Haystack `2.6.0`.
We need to remove it before releasing version `2.7.0`.
|
closed
|
2024-09-16T07:52:13Z
|
2024-09-30T15:11:50Z
|
https://github.com/deepset-ai/haystack/issues/8366
|
[
"breaking change",
"P3"
] |
silvanocerza
| 0
|
fugue-project/fugue
|
pandas
| 377
|
[FEATURE] Create bag
|
Fugue has been built on top of the DataFrame concept. Although a collection of arbitrary objects can be converted to a DataFrame to be distributed in Fugue, it is not always efficient or intuitive to do so. Looking at Spark (RDD), Dask (Bag) and even Ray, they all have separate ways to handle a distributed collection of arbitrary objects. So Fugue should have the corresponding concept. An immediate benefit is distributing a collection of tasks: we would no longer need to model it in a dataframe way.
Regarding the name, `bag` is a really nice choice and a well-defined term in mathematics, see https://en.wikipedia.org/wiki/Multiset It is unordered and platform/scale agnostic, matching Fugue's design philosophy. This is also why Dask uses the name.
For an initial version, we don't plan to add many of the features RDD has. One major feature NOT to include in v1 is partitioning and shuffling; in order to do these, DataFrame is required.
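For readers unfamiliar with the term: a bag (multiset, per the Wikipedia link above) is an unordered collection in which duplicates count. Python's stdlib `collections.Counter` is the closest built-in analogue and illustrates the semantics:

```python
from collections import Counter

# A bag: unordered, duplicates are significant
bag = Counter(["a", "b", "a", "c"])

print(bag["a"])                              # 2 -- duplicates are counted
print(Counter(["b", "a", "a", "c"]) == bag)  # True -- order is irrelevant
```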
|
closed
|
2022-10-22T05:14:59Z
|
2022-11-17T05:32:59Z
|
https://github.com/fugue-project/fugue/issues/377
|
[
"enhancement",
"high priority",
"programming interface",
"core feature",
"bag"
] |
goodwanghan
| 0
|
zappa/Zappa
|
flask
| 1,111
|
“python_requires” should be set with “>=3.6, <3.10”, as zappa 0.54.1 is not compatible with all Python versions.
|
Currently, the keyword argument **python_requires** of **setup()** is not set, and thus it is assumed that this distribution is compatible with all Python versions.
However, I found the following code checking Python compatibility locally in **zappa/\_\_init\_\_.py**
```python
SUPPORTED_VERSIONS = [(3, 6), (3, 7), (3, 8), (3, 9)]
if sys.version_info[:2] not in SUPPORTED_VERSIONS:
……
raise RuntimeError(err_msg)
```
I think declaring Python compatibility via the keyword argument **python_requires** is better than checking compatibility locally, for several reasons:
* Descriptions in **python_requires** will be reflected in the metadata.
* "pip install" can check such metadata on the fly during distribution selection and avoid downloading and installing incompatible package versions.
* If the user does not specify any version constraint, pip can automatically choose the latest compatible package version.
Way to improve:
modify **setup()** in **setup.py,** add **python_requires** keyword argument:
```python
setup(…
python_requires=">=3.6, <3.10",
…)
```
Thanks for your attention.
Best regards,
PyVCEchecker
|
closed
|
2022-02-22T03:46:22Z
|
2022-08-05T10:36:23Z
|
https://github.com/zappa/Zappa/issues/1111
|
[] |
PyVCEchecker
| 2
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
deep-learning
| 789
|
conda install dependencies: downgrade python to 2.x
|
I am a conda user.
When I run './scripts/conda_deps.sh' to install dependencies,
it tries to downgrade my Python from 3.x to 2.x:
`The following packages will be DOWNGRADED:
python: 3.6.5-hc3d631a_2 --> 2.7.16-h9bab390_7 `
|
open
|
2019-10-11T09:03:11Z
|
2019-10-12T08:54:54Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/789
|
[] |
H0icky
| 2
|
Evil0ctal/Douyin_TikTok_Download_API
|
api
| 284
|
[BUG] /tiktok_profile_videos does not define a get_tiktok_user_profile_videos method
|
***On which platform did the error occur?***
TikTok: /tiktok_profile_videos/ does not define a get_tiktok_user_profile_videos method
***On which endpoint did the error occur?***
e.g.: API-V1/API-V2/Web APP
***What input value was submitted?***
A short-video link
***Did you try again?***
e.g.: Yes, the error still persisted X time after it occurred.
***Have you checked this project's README or API documentation?***
e.g.: Yes, and I am quite sure the problem is caused by the program.
|
closed
|
2023-09-28T07:14:00Z
|
2023-09-29T10:46:02Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/284
|
[
"BUG"
] |
wahahababaozhou
| 1
|
littlecodersh/ItChat
|
api
| 22
|
Is there a way to forward a personal contact card, or the card of an official account I follow?
|
This issue records the discussion about forwarding personal contact cards, official account cards, and articles.
|
closed
|
2016-06-18T13:48:56Z
|
2016-11-13T12:13:35Z
|
https://github.com/littlecodersh/ItChat/issues/22
|
[
"enhancement",
"help wanted"
] |
jireh-he
| 12
|
AntonOsika/gpt-engineer
|
python
| 134
|
[windows] File system "permission denied"
|
I downloaded the new repo today, and when running a prompt I receive the following error.
It worked fine last night on the previous day's repo.
I have full administrative rights.
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\Admin\Github\gpt-engineer\gpt_engineer\main.py", line 49, in <module>
app()
File "C:\Users\Admin\Github\gpt-engineer\gpt_engineer\main.py", line 45, in chat
messages = step(ai, dbs)
^^^^^^^^^^^^^
File "C:\Users\Admin\Github\gpt-engineer\gpt_engineer\steps.py", line 129, in gen_code
to_files(messages[-1]["content"], dbs.workspace)
File "C:\Users\Admin\Github\gpt-engineer\gpt_engineer\chat_to_files.py", line 29, in to_files
workspace[file_name] = file_content
~~~~~~~~~^^^^^^^^^^^
File "C:\Users\Admin\Github\gpt-engineer\gpt_engineer\db.py", line 20, in __setitem__
with open(self.path / key, 'w', encoding='utf-8') as f:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
PermissionError: [Errno 13] Permission denied: 'C:\\Users\\Admin\\Github\\gpt-engineer\\my-new-project\\workspace'
|
closed
|
2023-06-18T01:23:25Z
|
2023-07-12T12:00:32Z
|
https://github.com/AntonOsika/gpt-engineer/issues/134
|
[] |
DoLife
| 4
|
biolab/orange3
|
scikit-learn
| 6,769
|
Unable to run when there is a logging.py in the directory Orange is launched in
|
Similar issue to https://github.com/jupyter/notebook/issues/4892
|
closed
|
2024-03-21T09:09:14Z
|
2024-04-12T07:20:43Z
|
https://github.com/biolab/orange3/issues/6769
|
[
"bug report"
] |
zactionn
| 3
|
coqui-ai/TTS
|
python
| 3,360
|
[Bug] Cannot restore from checkpoint
|
### Describe the bug
I'm training a VITS model. When continuing a training run with
````
python TTS/bin/train_tts.py --continue_path path/to/training/model/output/checkpoint/
````
the code in the function _restore_best_loss (trainer.py, line 1720) does not check the type of ch["model_loss"].
When restoring from best_model_xxx.pth or best_model.pth, ch["model_loss"] is a float, but when restoring from checkpoint_xxx.pth it is a dict.
At the end of an epoch the trainer compares loss values, raising an error that a 'dict' cannot be compared with a real number, and the training process exits.
For now I modified the code as follows to avoid the problem:
````
def _restore_best_loss(self):
    """Restore the best loss from the args.best_path if provided else
    from the model (`args.continue_path`) used for resuming the training"""
    if self.args.continue_path and (self.restore_step != 0 or self.args.best_path):
        logger.info(" > Restoring best loss from %s ...", os.path.basename(self.args.best_path))
        ch = load_fsspec(self.args.restore_path, map_location="cpu")
        if "model_loss" in ch:
            theLoss = ch["model_loss"]
            if isinstance(theLoss, dict):
                self.best_loss = theLoss["train_loss"]
            else:
                self.best_loss = theLoss
        logger.info(" > Starting with loaded last best loss %f", self.best_loss)
````
### To Reproduce
Restore a training run from a checkpoint_xxx.pth file (saved after Ctrl-C or via 'save_best_after').
### Expected behavior
Training continues without the process exiting.
### Logs
_No response_
### Environment
```shell
Windows 10 with RTX3060
Colab With T4
Git Branch : Dev (11ec9f7471620ebaa57db7ff5705254829ffe516)
In both environment I encounter the issue.
```
### Additional context
_No response_
|
closed
|
2023-12-04T02:53:38Z
|
2023-12-07T13:21:33Z
|
https://github.com/coqui-ai/TTS/issues/3360
|
[
"bug"
] |
YuboLong
| 2
|
JoeanAmier/XHS-Downloader
|
api
| 9
|
Can I enter an author's profile link and download all of their posts?
|
You can put the author's profile link in xhs.txt for batch downloading.
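The batch workflow can be sketched in a few lines. The file name `xhs.txt` comes from the maintainer's reply; the parsing rules (skip blank lines and `#` comments) are an assumption for illustration, not XHS-Downloader's actual code:

```python
from pathlib import Path


def load_author_links(path="xhs.txt"):
    """Read author profile links, one per line, from the batch file,
    skipping blank lines and '#' comment lines."""
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    return [ln.strip() for ln in lines if ln.strip() and not ln.lstrip().startswith("#")]
```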
|
open
|
2023-11-04T08:12:59Z
|
2023-11-05T06:24:16Z
|
https://github.com/JoeanAmier/XHS-Downloader/issues/9
|
[] |
wwkk2580
| 3
|
cvat-ai/cvat
|
computer-vision
| 8,899
|
Problem with exporting annotations to Datumaro
|
### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
When exporting my annotations to datumaro format I get the following error: ValueError: could not broadcast input array from shape (13,11) into shape (0,11).
It is possible to export the annotations with the cvat format, but not with COCO or Datumaro. I also tried to perform the conversion via the CLI with datumaro, but without success. I get the same error message.
### Expected Behavior
The annotations to be correctly exported in datumaro format.
### Possible Solution
I was looking into the downloaded annotations in cvat format for empty annotations, but not sure if this is the right approach and what the best way is to search for these.
### Context
I uploaded some of the annotations a week ago and had already successfully exported them then.
### Environment
```Markdown
- I'm using cvat.ai online
- my username is toon5
```
|
closed
|
2025-01-06T15:50:04Z
|
2025-01-06T22:10:28Z
|
https://github.com/cvat-ai/cvat/issues/8899
|
[
"bug"
] |
toolambr
| 1
|
ludwig-ai/ludwig
|
computer-vision
| 3,915
|
Ray parallelization does not work
|
**Describe the bug**
Model parallelization does not work with Ray and a custom model from Hugging Face.
**To Reproduce**
I want to train a neural network using Ludwig and a molecular encoder from Hugging Face. My config is:
```
model_type: ecd
input_features:
- name: Smiles
type: text
preprocessing:
tokenizer: molecules
encoder: auto_transformer
pretrained_model_name_or_path: ibm/MoLFormer-XL-both-10pct
trainable: false
output_features:
- name: Measured
type: number
decoder:
num_fc_layers: 1
output_size: 64
trainer:
epochs: 20
optimizer:
type: adam
beta1: 0.9 # Corrected 'beat1' to 'beta1'
learning_rate: 0.001
```
It works perfectly with the local backend; however, when I try to run multi-GPU training with Ray, it fails:
```
ModuleNotFoundError: No module named 'transformers_modules'
(dask:('map-3ad119a87f1f9eca9ea3cfc5d1963787', 0) pid=790622) No module named 'transformers_modules'
(dask:('map-3ad119a87f1f9eca9ea3cfc5d1963787', 0) pid=790622) Traceback (most recent call last):
(dask:('map-3ad119a87f1f9eca9ea3cfc5d1963787', 0) pid=790622) File "/home/sergeys/miniconda3/lib/python3.11/site-packages/ray/_private/serialization.py", line 404, in deserialize_objects
(dask:('map-3ad119a87f1f9eca9ea3cfc5d1963787', 0) pid=790622) obj = self._deserialize_object(data, metadata, object_ref)
(dask:('map-3ad119a87f1f9eca9ea3cfc5d1963787', 0) pid=790622) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(dask:('map-3ad119a87f1f9eca9ea3cfc5d1963787', 0) pid=790622) File "/home/sergeys/miniconda3/lib/python3.11/site-packages/ray/_private/serialization.py", line 270, in _deserialize_object
(dask:('map-3ad119a87f1f9eca9ea3cfc5d1963787', 0) pid=790622) return self._deserialize_msgpack_data(data, metadata_fields)
(dask:('map-3ad119a87f1f9eca9ea3cfc5d1963787', 0) pid=790622) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(dask:('map-3ad119a87f1f9eca9ea3cfc5d1963787', 0) pid=790622) File "/home/sergeys/miniconda3/lib/python3.11/site-packages/ray/_private/serialization.py", line 225, in _deserialize_msgpack_data
(dask:('map-3ad119a87f1f9eca9ea3cfc5d1963787', 0) pid=790622) python_objects = self._deserialize_pickle5_data(pickle5_data)
(dask:('map-3ad119a87f1f9eca9ea3cfc5d1963787', 0) pid=790622) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(dask:('map-3ad119a87f1f9eca9ea3cfc5d1963787', 0) pid=790622) File "/home/sergeys/miniconda3/lib/python3.11/site-packages/ray/_private/serialization.py", line 215, in _deserialize_pickle5_data
(dask:('map-3ad119a87f1f9eca9ea3cfc5d1963787', 0) pid=790622) obj = pickle.loads(in_band)
(dask:('map-3ad119a87f1f9eca9ea3cfc5d1963787', 0) pid=790622) ^^^^^^^^^^^^^^^^^^^^^
(dask:('map-3ad119a87f1f9eca9ea3cfc5d1963787', 0) pid=790622) ModuleNotFoundError: No module named 'transformers_modules'
```
- OS: Linux
- Version Ubuntu 20.04.6 LTS
- Python 3.11.4
- ludwig v0.9.2
- transformers 4.36.2
|
open
|
2024-01-24T16:41:35Z
|
2024-10-21T18:51:06Z
|
https://github.com/ludwig-ai/ludwig/issues/3915
|
[
"bug",
"ray",
"dependency"
] |
sergsb
| 1
|
graphql-python/graphene-django
|
graphql
| 970
|
ManyToMany through model handling via edges
|
Reopening with reference to: https://github.com/graphql-python/graphene/issues/83
To quote @adamcharnock from https://github.com/graphql-python/graphene/issues/83
> When a DjangoConnectionField traverses a many-to-many field it would be nice to have the option to expose the fields of any through-table on the edges of the relationship.
|
open
|
2020-05-23T11:05:55Z
|
2024-06-23T09:07:23Z
|
https://github.com/graphql-python/graphene-django/issues/970
|
[
"✨enhancement",
"help wanted"
] |
Eraldo
| 8
|
stanfordnlp/stanza
|
nlp
| 473
|
To what degree is Stanford Stanza case sensitive?
|
In most languages, upper-case letters can sometimes be used as indicators about the part-of-speech of a word (e.g. proper names). In German particularly, "Gehen Sie?" is second person formal, and "Gehen sie?" is third person plural - the only way to know what is meant (apart from context) is the casing.
NLP tools like part-of-speech taggers could thus benefit from the information in upper-case letters. However, I could not find any information in the documentation about Stanza on this topic.
**Is Stanza designed to be case-sensitive?**
If it is: how is that done exactly? When training or evaluating the models, is there some connection between the German words "Sie" and "sie"? How about "SIe" or "SIE" or "sIE", is that a completely different word (which would cause huge amounts of words), or is the information about the casing somehow encoded separately, in the model input?
I tried to test it with some German sentences (using the pre-built `de_core_news_lg` model), and it seems that it **is** case-sensitive, however not in a usable way:
* `Gehen sie?` is correctly tagged, but in `Gehen Sie?`, `Gehen` is tagged as `VerbForm=Inf`, and `Sie?` is tagged as `upos="PUNCT"`.
* `Du musst dir merken was ich sage!` is correctly tagged, but in `Du musst Dir merken was ich sage!`, only the first two words are tagged at all, with the second word in the wrong grammatical number.
Are these just flaws of the model, or is Stanza not really case-sensitive?
|
closed
|
2020-09-24T21:20:59Z
|
2020-11-06T16:45:56Z
|
https://github.com/stanfordnlp/stanza/issues/473
|
[
"question"
] |
yolpsoftware
| 2
|
ionelmc/pytest-benchmark
|
pytest
| 3
|
Warn if benchmarks in the same group have different options
|
It's quite a bad idea to compare tests that don't at least have the same `disable_gc` settings.
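The proposed check can be sketched as follows (the data shapes are assumptions for illustration, not pytest-benchmark's internals): group collected benchmarks by group name and warn when two members were run with different options such as `disable_gc`.

```python
import warnings
from collections import defaultdict


def warn_on_mixed_options(benchmarks):
    """Each benchmark is assumed to be a dict with 'group' and 'options'
    keys; emit a warning per group whose members ran with differing
    options, and return the list of affected group names."""
    by_group = defaultdict(list)
    for bench in benchmarks:
        by_group[bench["group"]].append(bench)
    mixed = []
    for group, members in by_group.items():
        option_sets = {tuple(sorted(m["options"].items())) for m in members}
        if len(option_sets) > 1:
            mixed.append(group)
            warnings.warn(f"Benchmarks in group {group!r} ran with different options; "
                          "comparing them is unreliable.")
    return mixed
```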
|
open
|
2014-12-15T00:39:13Z
|
2015-08-17T22:42:01Z
|
https://github.com/ionelmc/pytest-benchmark/issues/3
|
[] |
ionelmc
| 0
|
globaleaks/globaleaks-whistleblowing-software
|
sqlalchemy
| 3,098
|
Date selection in "Postpone expiration date" is not working
|
I'm in the process of creating a user manual, and have discovered that the date selection feature in "Postpone expiration date" only allows postponing by one day

It is not possible to select another date in the calendar
Using GL 4.5.1
|
closed
|
2021-11-11T05:59:26Z
|
2021-11-11T14:58:00Z
|
https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3098
|
[] |
schris-dk
| 8
|
marshmallow-code/flask-smorest
|
rest-api
| 543
|
Pagination documentation example is incorrect
|
The pagination docs example for cursor pagination is incorrect as written: https://flask-smorest.readthedocs.io/en/latest/pagination.html#cursor-pager
The example used will always raise exceptions using SQLAlchemy or Mongoengine (the two examples provided):
```python
from flask_smorest import Page
class CursorPage(Page):
@property
def item_count(self):
return self.collection.count()
@blp.route("/")
class Pets(MethodView):
@blp.response(200, PetSchema(many=True))
@blp.paginate(CursorPage)
def get(self):
return Pet.get()
```
Both libraries have queries that implement a `.get()` method, but that is for single item lookup only, which is not something that makes sense with pagination:
- [Mongoengine QuerySet.get()](https://docs.mongoengine.org/apireference.html#mongoengine.queryset.QuerySet.get): Raises exception on multiple results
- [SQLAlchemy Query.get()](https://docs.sqlalchemy.org/en/20/orm/session_api.html#sqlalchemy.orm.Session.get): Deprecated in favor of `Session.get`; both return a single object, not useful for multiple results where pagination would apply
I understand flask-smorest doesn't concern itself with the ORM, but since e.g. SQLAlchemy is so widely used with Flask, it would be nice to see a more fully fleshed-out, functioning example. In particular, because the documentation suggests "it is generally good practice to paginate the resource", it follows that showing how to do so in practice would be an overall beneficial improvement to the documentation.
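The reporter's point can be illustrated without the framework: a `Page` needs a collection it can count and slice, so the view should return a query or collection (e.g. `Pet.query` with Flask-SQLAlchemy), never the single-row `.get()`. The stand-in below mimics the shape of `flask_smorest.Page` on a plain list; it is a simplified assumption for illustration (the real class also pulls page parameters from the request):

```python
class Page:
    """Simplified stand-in for flask_smorest.Page: wrap a countable,
    sliceable collection and expose one page of items."""

    def __init__(self, collection, page=1, page_size=10):
        self.collection = collection
        self.page = page
        self.page_size = page_size

    @property
    def item_count(self):
        # With SQLAlchemy this would be `self.collection.count()` on a
        # Query -- not `.get()`, which fetches a single row by primary key.
        return len(self.collection)

    @property
    def items(self):
        start = (self.page - 1) * self.page_size
        return self.collection[start:start + self.page_size]
```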
|
closed
|
2023-08-18T18:20:14Z
|
2024-03-11T23:30:34Z
|
https://github.com/marshmallow-code/flask-smorest/issues/543
|
[
"documentation"
] |
brendan-morin
| 3
|
flavors/django-graphql-jwt
|
graphql
| 3
|
Protecting Mutation or Queries
|
This is a helpful project. I just have one question: is there a way to protect mutations and queries from unauthorized use?
|
closed
|
2018-01-27T07:16:04Z
|
2023-10-24T06:38:24Z
|
https://github.com/flavors/django-graphql-jwt/issues/3
|
[
"enhancement",
"question"
] |
CBinyenya
| 11
|
huggingface/transformers
|
nlp
| 36,745
|
Gemma 3 1B - TypeError: 'NoneType' object is not callable
|
### System Info
I'm trying to run Gemma 3 using `pipeline`. After updating Transformers to the latest version and making sure my token is set up to work with gated repositories, I still can't run Gemma 3.
Environment:
```
- `transformers` version: 4.50.0.dev0
- Platform: Linux-4.4.0-19041-Microsoft-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 0.29.3
- Safetensors version: 0.5.3
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.6.0+cu124 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
```
The error message:
```
Traceback (most recent call last):
File "MyFolder/Test.py", line 80, in <module>
output = generator(prompt, do_sample=False)
File "/MyFolder/env/lib/python3.12/site-packages/transformers/pipelines/text_generation.py", line 287, in __call__
return super().__call__(text_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/MyFolder/env/lib/python3.12/site-packages/transformers/pipelines/base.py", line 1371, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "MyFolder/env/lib/python3.12/site-packages/transformers/pipelines/base.py", line 1377, in run_single
model_inputs = self.preprocess(inputs, **preprocess_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/MyFolder/env/lib/python3.12/site-packages/transformers/pipelines/text_generation.py", line 325, in preprocess
inputs = self.tokenizer(prefix + prompt_text, return_tensors=self.framework, **tokenizer_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'apply_chat_template'
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Code to replicate the error:
```
from transformers import pipeline

# (not in the original snippet: the report never shows how `generator` was
# created; presumably something like the following)
generator = pipeline("text-generation", model="google/gemma-3-1b-it")
msg = """Blah blah = Ok and Foo Foo = Not ok. """
prompt = (
"Extract the following information from the message below:\n"
"1. Blah blah\n"
"2. Foo Foo\n"
"Message:\n"
f"{msg}\n\n"
"Provide your answer in JSON format with keys 'Blah blah', 'Foo Foo'."
)
# Tried providing various lengths - didn't help
output = generator(prompt, do_sample=False)
# Print the generated output
print(output[0]['generated_text'])
```
### Expected behavior
Expected output:
`$ { "Ok", "Not ok" }`
|
open
|
2025-03-15T23:41:51Z
|
2025-03-18T13:14:21Z
|
https://github.com/huggingface/transformers/issues/36745
|
[
"bug"
] |
amemov
| 8
|
inventree/InvenTree
|
django
| 8,649
|
Access project information on stock item labels.
|
### Body of the issue
Hi,
Hope my question is posted at the right place.
I am responsible for implementing InvenTree at the company I work for. Currently I am developing label templates.
Our operation includes installing equipment in racks. These racks are serialized stock items. They are the output of a build order. Each build order is associated with a project.
I would like to create a label that has the following information:
1. project.id (serialised project number)
2. project.description (we use this as project name)
3. part.name
4. part.IPN
5. item.serial
1 and 2 are not part of the stock item context variable list. I would like to get the project id and description on the labels without extra user input. Can you suggest a workaround for this? Can something be done in the background?
Many thanks!
|
closed
|
2024-12-10T08:29:04Z
|
2024-12-10T12:23:31Z
|
https://github.com/inventree/InvenTree/issues/8649
|
[
"question",
"report"
] |
akrly
| 2
|
plotly/dash
|
data-visualization
| 2,907
|
Differences in dcc.Store storage_type performance with dash 2.17.1
|
A background callback runs correctly (loading animations appear, print statements appear, etc.), but when it finishes it errors out (no output is returned) with this error in the console:
```
Failed to execute 'setItem' on 'Storage': Setting the value of 'flag_storage' exceeded the quota.
```
It was resolved by changing the `storage_type` to `'memory'` as per: https://community.plotly.com/t/error-the-quota-has-been-exceeded/26944
**Description by the user:**
> When tested with `storage_type = 'memory'` instead of `'session'`, we don’t get the issue, so I tried to understand more why the issue happened only in the past weeks while the storage as `session` was used for one year on server (Dash Enterprise) and still works in our local machine.
> The only difference is that on the server we recently switched from the 2.16.1 version (that we still use on local machine) to the 2.17.1; if we specify `dash==2.16.1` even with `storage_type='session'` we get no issue, but with 2.17.1 we have it.
I don't have additional information and haven't had the opportunity to try to replicate this.
|
open
|
2024-06-28T08:29:16Z
|
2024-08-13T14:19:36Z
|
https://github.com/plotly/dash/issues/2907
|
[
"feature",
"P3"
] |
celia-lm
| 0
|
ivy-llc/ivy
|
numpy
| 27,966
|
Fix Ivy Failing Test: jax - elementwise.maximum
|
closed
|
2024-01-20T16:16:20Z
|
2024-01-25T09:54:26Z
|
https://github.com/ivy-llc/ivy/issues/27966
|
[
"Sub Task"
] |
samthakur587
| 0
|
|
torchbox/wagtail-grapple
|
graphql
| 63
|
Issue with GraphQLCollection model when required
|
# 🐛 Bug Report
I'm having an issue with the `GraphQLCollection` grapple model. When `required=True` is passed, all `QuerySetList` arguments disappear.
## 💻 Code Sample
When we add a `GraphQLCollection` to the `graphql_fields` list, we get something like this on the GraphQL schema by default:
```python
class Article(Page):
graphql_fields = [
GraphQLCollection(GraphQLForeignKey, "tags", "taxonomies.Tag"),
]
```
`tags` field inside `Article` type:
```graphql
tags(
limit: PositiveInt
offset: PositiveInt
order: String
searchQuery: String
id: ID
): [Tag]
```
## 😯 Current Behavior
If I add `required=True` to the `GraphQLCollection` on the `graphql_fields` list:
```python
class Article(Page):
graphql_fields = [
GraphQLCollection(GraphQLForeignKey, "tags", "taxonomies.Tag", required=True),
]
```
I get this on the GraphQL schema:
```graphql
tags: [Tag]!
```
All `QuerySetList` arguments just disappear 😱.
## 🤔 Expected Behavior
I was expecting something like this:
```graphql
tags(
limit: PositiveInt
offset: PositiveInt
order: String
searchQuery: String
id: ID
): [Tag]!
```
**I already found the issue and I'm going to open a PR to fix this**
|
closed
|
2020-04-08T21:24:00Z
|
2020-04-17T09:58:11Z
|
https://github.com/torchbox/wagtail-grapple/issues/63
|
[] |
ruisaraiva19
| 0
|
streamlit/streamlit
|
deep-learning
| 10,126
|
`st.close()` or `st.end()` to close or mark the end of a container and remove stale elements (like a frontend version of `st.stop()`)
|
### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.
### Summary
Call `st.close()` to immediately discard all existing elements that follow (within whatever container context the command was called). e.g. Call `st.close()` within `st.chat_message()` to discard any remaining (usually stale) elements beyond that point.
### Why?
With dynamic pages and long computation times come stale elements. Especially in LLM apps that include streamed content, app developers need to be mindful of transient elements that may linger in stale form while a script run completes. When you reduce the total number of elements in a container from one script run to the next, the extra elements do not get discarded until the end of the script run. This necessitates extra boilerplate code to handle a decrease in total elements, especially when run times are longer than a second (the fade time on stale elements).
### How?
MVP: `st.close()` could discard all following frontend elements within whatever container context it's called in (instead of Streamlit waiting for the end of the script run to know it's done writing to the container).
Optional: A parameter could modify how widgets are discarded if they exist (to fully clean them up and make the next call "like new" or merely discard the frontend as if it was replaced by something else, holding off until the end of the script run to discard its state).
I imagine you could still "reopen" the container by writing to it again. Also, a configuration option could be set to implicitly utilize `st.close()` when leaving a container's context. (e.g. When using `with` notation, Streamlit will "close" the container when you leave the context manager.)
### Additional Context
Related: #5044, #2820, #9239
|
open
|
2025-01-07T20:23:29Z
|
2025-01-07T21:18:40Z
|
https://github.com/streamlit/streamlit/issues/10126
|
[
"type:enhancement",
"area:utilities"
] |
sfc-gh-dmatthews
| 4
|
Yorko/mlcourse.ai
|
numpy
| 764
|
topic01 - small section of python code doesn't run - out of sync with main content
|
Seems like the latest mlcourse.ai/mlcourse_ai_jupyter_book/book/topic01/topic01_pandas_data_analysis.md already has this change but the Jupyter notebook doesn't.
Here is what is on the website and in the .md file:
```
What are the average values of numerical features for churned users?
Here we’ll resort to an additional method select_dtypes to select all numeric columns.
df.select_dtypes(include=np.number)[df["Churn"] == 1].mean()
```
Here is what is in the Jupyter notebook:
```
What are the average values of numerical features for churned users?
df[df["Churn"] == 1].mean()
```
The latter results in this error:

|
closed
|
2024-08-03T19:10:26Z
|
2024-08-19T15:09:15Z
|
https://github.com/Yorko/mlcourse.ai/issues/764
|
[] |
j-silv
| 0
|
matterport/Mask_RCNN
|
tensorflow
| 2,183
|
How to display result using OpenCV?
|
Hello everyone, does anybody know how I can display the detections using OpenCV?
|
open
|
2020-05-14T05:09:32Z
|
2020-06-01T04:28:37Z
|
https://github.com/matterport/Mask_RCNN/issues/2183
|
[] |
sgbatman
| 4
|
chaoss/augur
|
data-visualization
| 2,894
|
Celery : Handle Task Error (issue in `dev`)
|
It looks as though an array of integers is being passed for comparison to a `repo_git` parameter in a query, and that parameter is a string:
```bash
File "/home/ubuntu/github/augur/augur/tasks/init/celery_app.py", line 105, in on_failure
self.augur_handle_task_failure(exc, task_id, repo_git, "core_task_failure")
File "/home/ubuntu/github/augur/augur/tasks/init/celery_app.py", line 88, in augur_handle_task_failure
repo = session.query(Repo).filter(Repo.repo_git == repo_git).one()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/orm/query.py", line 2798, in one
return self._iter().one() # type: ignore
```
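One way to harden the handler is a type guard before the repo lookup. The sketch below is an assumption for illustration, not augur's actual fix: `on_failure` receives the task's raw args, so `repo_git` may arrive as a list of repo ids rather than the URL string the query expects.

```python
def repo_git_or_none(repo_git):
    """Return repo_git only if it is safe to use in a
    `Repo.repo_git == repo_git` filter.

    Passing a list of ints through makes Postgres compare varchar with
    integer[] ("operator does not exist: character varying = integer[]"),
    so anything that is not a string is rejected up front.
    """
    return repo_git if isinstance(repo_git, str) else None
```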
The full stack trace is:
```bash
Traceback (most recent call last):
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/celery/app/trace.py", line 451, in trace_task
R = retval = fun(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/celery/app/trace.py", line 734, in __protected_call__
return self.run(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/github/augur/augur/tasks/gitlab/issues_task.py", line 224, in collect_gitlab_issue_comments
process_gitlab_issue_messages(comments, f"{owner}/{repo}: Gitlab issue messages task", repo_id, logger, session)
File "/home/ubuntu/github/augur/augur/tasks/gitlab/issues_task.py", line 287, in process_gitlab_issue_messages
issues = session.session.query(Issue).filter(Issue.repo_id == repo_id).all()
^^^^^^^^^^^^^^^
AttributeError: 'Session' object has no attribute 'session'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1969, in _exec_single_context
self.dialect.do_execute(
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.UndefinedFunction: operator does not exist: character varying = integer[]
LINE 3: WHERE augur_data.repo.repo_git = ARRAY[59,58,57,56,55,54,53,...
^
HINT: No operator matches the given name and argument types. You might need to add explicit type casts.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/celery/app/trace.py", line 468, in trace_task
I, R, state, retval = on_error(task_request, exc, uuid)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/celery/app/trace.py", line 379, in on_error
R = I.handle_error_state(
^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/celery/app/trace.py", line 178, in handle_error_state
return {
^
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/celery/app/trace.py", line 231, in handle_failure
task.on_failure(exc, req.id, req.args, req.kwargs, einfo)
File "/home/ubuntu/github/augur/augur/tasks/init/celery_app.py", line 105, in on_failure
self.augur_handle_task_failure(exc, task_id, repo_git, "core_task_failure")
File "/home/ubuntu/github/augur/augur/tasks/init/celery_app.py", line 88, in augur_handle_task_failure
repo = session.query(Repo).filter(Repo.repo_git == repo_git).one()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/orm/query.py", line 2798, in one
return self._iter().one() # type: ignore
^^^^^^^^^^^^
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/orm/query.py", line 2847, in _iter
result: Union[ScalarResult[_T], Result[_T]] = self.session.execute(
^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/orm/session.py", line 2306, in execute
return self._execute_internal(
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/orm/session.py", line 2188, in _execute_internal
result: Result[Any] = compile_state_cls.orm_execute_statement(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/orm/context.py", line 293, in orm_execute_statement
result = conn.execute(
^^^^^^^^^^^^^
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1416, in execute
return meth(
^^^^^
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/sql/elements.py", line 516, in _execute_on_connection
return connection._execute_clauseelement(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1639, in _execute_clauseelement
ret = self._execute_context(
^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1848, in _execute_context
return self._exec_single_context(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1988, in _exec_single_context
self._handle_dbapi_exception(
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 2343, in _handle_dbapi_exception
raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1969, in _exec_single_context
self.dialect.do_execute(
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedFunction) operator does not exist: character varying = integer[]
LINE 3: WHERE augur_data.repo.repo_git = ARRAY[59,58,57,56,55,54,53,...
^
HINT: No operator matches the given name and argument types. You might need to add explicit type casts.
[SQL: SELECT augur_data.repo.repo_id AS augur_data_repo_repo_id, augur_data.repo.repo_group_id AS augur_data_repo_repo_group_id, augur_data.repo.repo_git AS augur_data_repo_repo_git, augur_data.repo.repo_path AS augur_data_repo_repo_path, augur_data.repo.repo_name AS augur_data_repo_repo_name, augur_data.repo.repo_added AS augur_data_repo_repo_added, augur_data.repo.repo_type AS augur_data_repo_repo_type, augur_data.repo.url AS augur_data_repo_url, augur_data.repo.owner_id AS augur_data_repo_owner_id, augur_data.repo.description AS augur_data_repo_description, augur_data.repo.primary_language AS augur_data_repo_primary_language, augur_data.repo.created_at AS augur_data_repo_created_at, augur_data.repo.forked_from AS augur_data_repo_forked_from, augur_data.repo.updated_at AS augur_data_repo_updated_at, augur_data.repo.repo_archived_date_collected AS augur_data_repo_repo_archived_date_collected, augur_data.repo.repo_archived AS augur_data_repo_repo_archived, augur_data.repo.tool_source AS augur_data_repo_tool_source, augur_data.repo.tool_version AS augur_data_repo_tool_version, augur_data.repo.data_source AS augur_data_repo_data_source, augur_data.repo.data_collection_date AS augur_data_repo_data_collection_date
FROM augur_data.repo
WHERE augur_data.repo.repo_git = %(repo_git_1)s]
[parameters: {'repo_git_1': [59, 58, 57, 56, 55, 54, 53, 52, 51, 50, 49, 48, 47, 46, 45, 44, 43, 42, 41, 40, 39, 38, 37, 36, 35, 34, 33, 32, 31, 30, 29, 28, 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1]}]
(Background on this error at: https://sqlalche.me/e/20/f405)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1969, in _exec_single_context
self.dialect.do_execute(
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.UndefinedFunction: operator does not exist: character varying = integer[]
LINE 3: WHERE augur_data.repo.repo_git = ARRAY[59,58,57,56,55,54,53,...
^
HINT: No operator matches the given name and argument types. You might need to add explicit type casts.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/billiard/pool.py", line 362, in workloop
result = (True, prepare_result(fun(*args, **kwargs)))
^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/celery/app/trace.py", line 649, in fast_trace_task
R, I, T, Rstr = tasks[task].__trace__(
^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/celery/app/trace.py", line 572, in trace_task
I, _, _, _ = on_error(task_request, exc, uuid)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/celery/app/trace.py", line 379, in on_error
R = I.handle_error_state(
^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/celery/app/trace.py", line 178, in handle_error_state
return {
^
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/celery/app/trace.py", line 231, in handle_failure
task.on_failure(exc, req.id, req.args, req.kwargs, einfo)
File "/home/ubuntu/github/augur/augur/tasks/init/celery_app.py", line 105, in on_failure
self.augur_handle_task_failure(exc, task_id, repo_git, "core_task_failure")
File "/home/ubuntu/github/augur/augur/tasks/init/celery_app.py", line 88, in augur_handle_task_failure
repo = session.query(Repo).filter(Repo.repo_git == repo_git).one()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/orm/query.py", line 2798, in one
return self._iter().one() # type: ignore
^^^^^^^^^^^^
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/orm/query.py", line 2847, in _iter
result: Union[ScalarResult[_T], Result[_T]] = self.session.execute(
^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/orm/session.py", line 2306, in execute
return self._execute_internal(
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/orm/session.py", line 2188, in _execute_internal
result: Result[Any] = compile_state_cls.orm_execute_statement(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/orm/context.py", line 293, in orm_execute_statement
result = conn.execute(
^^^^^^^^^^^^^
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1416, in execute
return meth(
^^^^^
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/sql/elements.py", line 516, in _execute_on_connection
return connection._execute_clauseelement(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1639, in _execute_clauseelement
ret = self._execute_context(
^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1848, in _execute_context
return self._exec_single_context(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1988, in _exec_single_context
self._handle_dbapi_exception(
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 2343, in _handle_dbapi_exception
raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1969, in _exec_single_context
self.dialect.do_execute(
File "/home/ubuntu/github/virtualenvs/hosted/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedFunction) operator does not exist: character varying = integer[]
LINE 3: WHERE augur_data.repo.repo_git = ARRAY[59,58,57,56,55,54,53,...
^
HINT: No operator matches the given name and argument types. You might need to add explicit type casts.
[SQL: SELECT augur_data.repo.repo_id AS augur_data_repo_repo_id, augur_data.repo.repo_group_id AS augur_data_repo_repo_group_id, augur_data.repo.repo_git AS augur_data_repo_repo_git, augur_data.repo.repo_path AS augur_data_repo_repo_path, augur_data.repo.repo_name AS augur_data_repo_repo_name, augur_data.repo.repo_added AS augur_data_repo_repo_added, augur_data.repo.repo_type AS augur_data_repo_repo_type, augur_data.repo.url AS augur_data_repo_url, augur_data.repo.owner_id AS augur_data_repo_owner_id, augur_data.repo.description AS augur_data_repo_description, augur_data.repo.primary_language AS augur_data_repo_primary_language, augur_data.repo.created_at AS augur_data_repo_created_at, augur_data.repo.forked_from AS augur_data_repo_forked_from, augur_data.repo.updated_at AS augur_data_repo_updated_at, augur_data.repo.repo_archived_date_collected AS augur_data_repo_repo_archived_date_collected, augur_data.repo.repo_archived AS augur_data_repo_repo_archived, augur_data.repo.tool_source AS augur_data_repo_tool_source, augur_data.repo.tool_version AS augur_data_repo_tool_version, augur_data.repo.data_source AS augur_data_repo_data_source, augur_data.repo.data_collection_date AS augur_data_repo_data_collection_date
FROM augur_data.repo
WHERE augur_data.repo.repo_git = %(repo_git_1)s]
[parameters: {'repo_git_1': [59, 58, 57, 56, 55, 54, 53, 52, 51, 50, 49, 48, 47, 46, 45, 44, 43, 42, 41, 40, 39, 38, 37, 36, 35, 34, 33, 32, 31, 30, 29, 28, 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1]}]
(Background on this error at: https://sqlalche.me/e/20/f405)
```
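The comparison that fails is `Repo.repo_git == repo_git` with `repo_git` bound to a whole list of IDs, which PostgreSQL compiles to `varchar = integer[]`. Below is a minimal, self-contained sketch (SQLite in-memory, with a hypothetical `Repo` model mirroring the columns above) of a defensive fix that accepts either a single URL or a list:

```python
# Sketch only: reproduces the failing pattern and guards against it.
# `Repo.repo_git` is a VARCHAR column; comparing it to a Python list is what
# produces "operator does not exist: character varying = integer[]" on Postgres.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Repo(Base):
    __tablename__ = "repo"
    repo_id = Column(Integer, primary_key=True)
    repo_git = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([
        Repo(repo_id=1, repo_git="https://github.com/chaoss/augur"),
        Repo(repo_id=2, repo_git="https://github.com/chaoss/grimoirelab"),
    ])
    session.commit()

    repo_git = ["https://github.com/chaoss/augur"]  # a list sneaks in

    # Broken: column == list renders as `repo_git = ARRAY[...]` on PostgreSQL.
    # session.query(Repo).filter(Repo.repo_git == repo_git).one()

    # Defensive fix: branch on the argument type and use IN for sequences.
    query = session.query(Repo)
    if isinstance(repo_git, (list, tuple)):
        repos = query.filter(Repo.repo_git.in_(repo_git)).all()
    else:
        repos = [query.filter(Repo.repo_git == repo_git).one()]

print([r.repo_id for r in repos])
```

On PostgreSQL the same guard avoids the `UndefinedFunction` error, though the deeper fix is ensuring `augur_handle_task_failure` receives a single `repo_git` string rather than the task's whole argument list.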
|
open
|
2024-08-14T19:47:25Z
|
2024-08-14T19:49:44Z
|
https://github.com/chaoss/augur/issues/2894
|
[
"bug",
"server"
] |
sgoggins
| 0
|
wkentaro/labelme
|
computer-vision
| 1,040
|
Not working in Ubuntu 22.04
|
Right after I installed it from the GNOME store, it opens and closes immediately. Any chance of exporting the project to Flatpak?
|
closed
|
2022-06-22T23:56:29Z
|
2022-09-26T14:53:33Z
|
https://github.com/wkentaro/labelme/issues/1040
|
[
"issue::bug"
] |
ffreitas-dev
| 2
|
facebookresearch/fairseq
|
pytorch
| 4,691
|
Unable to load Wav2Vec 2.0 models - wav2vec2_vox_960h_new.pt
|
## 🐛 Bug
Hello.
Firstly, thank you for sharing all of the work, results, and code. It's no small task.
I am attempting to load `wav2vec2_vox_960h_new.pt` but am getting the following error:
`TypeError: object of type 'NoneType' has no len()`
after calling
`model, cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task(['wav2vec2_vox_960h_new.pt'])`
### To Reproduce
install torch for cuda 11.6 via website docs:
`conda install pytorch torchvision torchaudio cudatoolkit=11.6 -c pytorch -c conda-forge`
install dev fairseq:
```
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable ./
```
in python notebook or wherever:
```
import torch
import fairseq
print(torch.__version__)
print(fairseq.__version__)
# I see
# 1.12.1
# 0.12.2
use_cuda = torch.cuda.is_available()
print(use_cuda)
# True for me
# load model
model, cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task(['wav2vec2_vox_960h_new.pt'])
```
I am then greeted with the following error
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [5], in <cell line: 1>()
----> 1 model, cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task(['wav2vec2_vox_960h_new.pt'])
File ~/miniconda3/envs/pyav-wav2vec/lib/python3.9/site-packages/fairseq/checkpoint_utils.py:473, in load_model_ensemble_and_task(filenames, arg_overrides, task, strict, suffix, num_shards, state)
471 argspec = inspect.getfullargspec(task.build_model)
472 if "from_checkpoint" in argspec.args:
--> 473 model = task.build_model(cfg.model, from_checkpoint=True)
474 else:
475 model = task.build_model(cfg.model)
File ~/miniconda3/envs/pyav-wav2vec/lib/python3.9/site-packages/fairseq/tasks/audio_pretraining.py:197, in AudioPretrainingTask.build_model(self, model_cfg, from_checkpoint)
196 def build_model(self, model_cfg: FairseqDataclass, from_checkpoint=False):
--> 197 model = super().build_model(model_cfg, from_checkpoint)
199 actualized_cfg = getattr(model, "cfg", None)
200 if actualized_cfg is not None:
201 # if "w2v_args" in actualized_cfg:
File ~/miniconda3/envs/pyav-wav2vec/lib/python3.9/site-packages/fairseq/tasks/fairseq_task.py:338, in FairseqTask.build_model(self, cfg, from_checkpoint)
326 """
327 Build the :class:`~fairseq.models.BaseFairseqModel` instance for this
328 task.
(...)
334 a :class:`~fairseq.models.BaseFairseqModel` instance
335 """
336 from fairseq import models, quantization_utils
--> 338 model = models.build_model(cfg, self, from_checkpoint)
339 model = quantization_utils.quantize_model_scalar(model, cfg)
340 return model
File ~/miniconda3/envs/pyav-wav2vec/lib/python3.9/site-packages/fairseq/models/__init__.py:106, in build_model(cfg, task, from_checkpoint)
98 ARCH_CONFIG_REGISTRY[model_type](cfg)
100 assert model is not None, (
101 f"Could not infer model type from {cfg}. "
102 "Available models: {}".format(MODEL_DATACLASS_REGISTRY.keys())
103 + f" Requested model type: {model_type}"
104 )
--> 106 return model.build_model(cfg, task)
File ~/miniconda3/envs/pyav-wav2vec/lib/python3.9/site-packages/fairseq/models/wav2vec/wav2vec2_asr.py:208, in Wav2VecCtc.build_model(cls, cfg, task)
205 @classmethod
206 def build_model(cls, cfg: Wav2Vec2CtcConfig, task: FairseqTask):
207 """Build a new model instance."""
--> 208 w2v_encoder = Wav2VecEncoder(cfg, len(task.target_dictionary))
209 return cls(cfg, w2v_encoder)
TypeError: object of type 'NoneType' has no len()
```
#### Code sample
See above
### Expected behavior
a properly loaded model.
### Environment
- fairseq Version 0.12.2
- PyTorch Version 1.12.1
- OS (e.g., Linux): `Linux frank-exchange-of-views 5.15.0-43-generic #46~20.04.1-Ubuntu SMP Thu Jul 14 15:20:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux`
- How you installed fairseq (`pip`, source): pip install --editable ./
- Build command you used (if compiling from source):
- Python version: 3.8.10
- CUDA/cuDNN version: 11.6 / 510.85.02
- GPU models and configuration: 2x 3090
- Any other relevant information:
It seems almost all wav2vec2 models don't load properly. I've tried a variety of calls and looked through the Git history. Documentation for properly loading these models is *sorely* lacking.
I understand HuggingFace Transformers may be the preferred way to use these models these days, but it seems very odd to me that there is such a variety of model-loading methods, quirks, and special sauce, none of which seems properly documented, reproducible, or available.
Is there a resource I have perhaps missed that properly documents how to use these models?
Thank you in advance
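For what it's worth, the `TypeError` comes from `len(task.target_dictionary)` when that dictionary is `None`, i.e. the checkpoint's stored task config does not point at a directory containing the letter dictionary (`dict.ltr.txt`). A hedged sketch of the commonly suggested workaround (the paths here are placeholders, not real locations): override the task's `data` argument so the target dictionary can be built.

```python
# Hypothetical workaround sketch: point fairseq at a directory that contains
# dict.ltr.txt (shipped alongside the labeled wav2vec2 checkpoints).
ckpt = "wav2vec2_vox_960h_new.pt"             # assumption: checkpoint in cwd
data_dir = "/path/to/dir/with/dict.ltr.txt"   # assumption: dictionary location

# arg_overrides patches the checkpoint's stored task config before the task
# (and hence task.target_dictionary) is built.
overrides = {"data": data_dir}

try:
    import fairseq
    models, cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task(
        [ckpt], arg_overrides=overrides
    )
except Exception:
    # fairseq not installed, or the checkpoint/dictionary is not present
    # locally; the call above still shows the intended pattern.
    models = None
```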
|
open
|
2022-09-01T21:00:08Z
|
2022-10-25T09:26:59Z
|
https://github.com/facebookresearch/fairseq/issues/4691
|
[
"bug",
"needs triage"
] |
vade
| 5
|
tortoise/tortoise-orm
|
asyncio
| 1,445
|
database connection not established after calling Tortoise.init
|
**Describe the bug**
I configure the ORM with `Tortoise.init`. I can specify any connection data there (host, port, user, password), even invalid data, and no exception will be thrown.
**To Reproduce**
```
host = "wrong_host"
port = 1234
user = "wrong_user"
password = "wrong_password"
sslmode = "require"
con_str = f"postgres://" \
f"{user}:{password}" \
f"@{host}:{port}" \
f"/{db_name}" \
f"?ssl={sslmode}"
await Tortoise.init(
db_url=con_str,
modules={"models": model_paths}
)
```
**Expected behavior**
I'm expecting some kind of exception to be thrown so that I can tell I entered the wrong connection data. As it stands, I have to run actual database queries to discover that something is wrong. Can I somehow force a connection to the database without making additional queries?
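A minimal sketch of a workaround, assuming `tortoise` is importable: `Tortoise.init` only stores the configuration, so issuing one trivial query right after it forces a round-trip and makes a bad DSN raise immediately. `check_connection` is a hypothetical helper name, not a Tortoise API.

```python
import asyncio

async def check_connection(db_url: str) -> bool:
    """Initialize Tortoise and force one round-trip so bad DSNs fail fast."""
    from tortoise import Tortoise
    await Tortoise.init(db_url=db_url, modules={"models": []})
    try:
        # Raises (e.g. an OperationalError) right here if host/credentials
        # are wrong, instead of on the first real query much later.
        await Tortoise.get_connection("default").execute_query("SELECT 1")
        return True
    finally:
        await Tortoise.close_connections()

try:
    healthy = asyncio.run(check_connection("sqlite://:memory:"))
except ImportError:
    healthy = None  # tortoise-orm not installed; the pattern still applies
```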
|
open
|
2023-07-31T21:35:50Z
|
2023-09-14T10:34:11Z
|
https://github.com/tortoise/tortoise-orm/issues/1445
|
[] |
Prof1-web
| 1
|
miguelgrinberg/Flask-Migrate
|
flask
| 75
|
Add edit command
|
Alembic added an edit command, which seems very useful. It opens a migration (by default the last one) in the default editor. It would be cool if this command were added to Flask-Migrate as well.
|
closed
|
2015-08-20T09:57:07Z
|
2015-09-17T18:39:13Z
|
https://github.com/miguelgrinberg/Flask-Migrate/issues/75
|
[] |
JelteF
| 0
|
BlinkDL/RWKV-LM
|
pytorch
| 232
|
Does RWKV 5 support vLLM / LMDeploy / TGI / Fastllm / FasterTransformer?
|
Does RWKV 5 support vLLM, LMDeploy, TGI, Fastllm, or FasterTransformer?
What should I do to measure inference performance, such as throughput and token latency?
|
open
|
2024-03-19T10:44:12Z
|
2024-09-25T01:23:23Z
|
https://github.com/BlinkDL/RWKV-LM/issues/232
|
[] |
lanzhoushaobing
| 2
|
autogluon/autogluon
|
data-science
| 4,806
|
[BUG] Some unit tests cannot be run externally
|
**Bug Report Checklist**
<!-- Please ensure at least one of the following to help the developers troubleshoot the problem: -->
- [x] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install -->
- [x] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred -->
- [ ] I confirmed bug exists on the latest stable version of AutoGluon. <!-- Unnecessary if prior items are checked -->
**Describe the bug**
When running multimodal unit tests, I'm getting below failure:
```
============================================================================== test session starts ===============================================================================
platform linux -- Python 3.11.11, pytest-8.3.4, pluggy-1.5.0
Fugue tests will be initialized with options:
rootdir: /home/sagemaker-user/autogluon
configfile: pyproject.toml
plugins: dash-2.18.1, anyio-4.8.0, fugue-0.9.1
collected 308 items / 9 errors / 2 skipped
===================================================================================== ERRORS =====================================================================================
___________________________________________________ ERROR collecting multimodal/tests/unittests/others/test_deployment_onnx.py ___________________________________________________
tests/unittests/others/test_deployment_onnx.py:19: in <module>
"petfinder": PetFinderDataset(),
tests/unittests/utils/unittest_datasets.py:30: in __init__
download(
/opt/conda/lib/python3.11/site-packages/autogluon/multimodal/utils/download.py:266: in download
raise e
/opt/conda/lib/python3.11/site-packages/autogluon/multimodal/utils/download.py:208: in download
response = s3.meta.client.head_object(Bucket=s3_bucket_name, Key=s3_key)
/opt/conda/lib/python3.11/site-packages/botocore/client.py:565: in _api_call
return self._make_api_call(operation_name, kwargs)
/opt/conda/lib/python3.11/site-packages/botocore/client.py:1017: in _make_api_call
raise error_class(parsed_response, operation_name)
E botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
-------------------------------------------------------------------------------- Captured stdout ---------------------------------------------------------------------------------
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 4 attempts left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 3 attempts left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 2 attempts left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 1 attempt left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
_____________________________________________________ ERROR collecting multimodal/tests/unittests/others/test_dump_model.py ______________________________________________________
tests/unittests/others/test_dump_model.py:13: in <module>
ALL_DATASETS = {"petfinder": PetFinderDataset(), "ae": AEDataset()}
tests/unittests/utils/unittest_datasets.py:30: in __init__
download(
/opt/conda/lib/python3.11/site-packages/autogluon/multimodal/utils/download.py:266: in download
raise e
/opt/conda/lib/python3.11/site-packages/autogluon/multimodal/utils/download.py:208: in download
response = s3.meta.client.head_object(Bucket=s3_bucket_name, Key=s3_key)
/opt/conda/lib/python3.11/site-packages/botocore/client.py:565: in _api_call
return self._make_api_call(operation_name, kwargs)
/opt/conda/lib/python3.11/site-packages/botocore/client.py:1017: in _make_api_call
raise error_class(parsed_response, operation_name)
E botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
-------------------------------------------------------------------------------- Captured stdout ---------------------------------------------------------------------------------
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 4 attempts left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 3 attempts left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 2 attempts left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 1 attempt left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
______________________________________________ ERROR collecting multimodal/tests/unittests/others_2/test_backward_compatibility.py _______________________________________________
tests/unittests/others_2/test_backward_compatibility.py:8: in <module>
from ..predictor.test_predictor import verify_predictor_save_load
<frozen importlib._bootstrap>:1176: in _find_and_load
???
<frozen importlib._bootstrap>:1147: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:690: in _load_unlocked
???
/opt/conda/lib/python3.11/site-packages/_pytest/assertion/rewrite.py:184: in exec_module
exec(co, module.__dict__)
tests/unittests/predictor/test_predictor.py:40: in <module>
"petfinder": PetFinderDataset(),
tests/unittests/utils/unittest_datasets.py:30: in __init__
download(
/opt/conda/lib/python3.11/site-packages/autogluon/multimodal/utils/download.py:266: in download
raise e
/opt/conda/lib/python3.11/site-packages/autogluon/multimodal/utils/download.py:208: in download
response = s3.meta.client.head_object(Bucket=s3_bucket_name, Key=s3_key)
/opt/conda/lib/python3.11/site-packages/botocore/client.py:565: in _api_call
return self._make_api_call(operation_name, kwargs)
/opt/conda/lib/python3.11/site-packages/botocore/client.py:1017: in _make_api_call
raise error_class(parsed_response, operation_name)
E botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
-------------------------------------------------------------------------------- Captured stdout ---------------------------------------------------------------------------------
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 4 attempts left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 3 attempts left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 2 attempts left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 1 attempt left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
_________________________________________________ ERROR collecting multimodal/tests/unittests/others_2/test_data_augmentation.py _________________________________________________
tests/unittests/others_2/test_data_augmentation.py:30: in <module>
from ..predictor.test_predictor import verify_predictor_save_load
<frozen importlib._bootstrap>:1176: in _find_and_load
???
<frozen importlib._bootstrap>:1147: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:690: in _load_unlocked
???
/opt/conda/lib/python3.11/site-packages/_pytest/assertion/rewrite.py:184: in exec_module
exec(co, module.__dict__)
tests/unittests/predictor/test_predictor.py:40: in <module>
"petfinder": PetFinderDataset(),
tests/unittests/utils/unittest_datasets.py:30: in __init__
download(
/opt/conda/lib/python3.11/site-packages/autogluon/multimodal/utils/download.py:266: in download
raise e
/opt/conda/lib/python3.11/site-packages/autogluon/multimodal/utils/download.py:208: in download
response = s3.meta.client.head_object(Bucket=s3_bucket_name, Key=s3_key)
/opt/conda/lib/python3.11/site-packages/botocore/client.py:565: in _api_call
return self._make_api_call(operation_name, kwargs)
/opt/conda/lib/python3.11/site-packages/botocore/client.py:1017: in _make_api_call
raise error_class(parsed_response, operation_name)
E botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
-------------------------------------------------------------------------------- Captured stdout ---------------------------------------------------------------------------------
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 4 attempts left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 3 attempts left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 2 attempts left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 1 attempt left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
_____________________________________________________ ERROR collecting multimodal/tests/unittests/others_2/test_distiller.py _____________________________________________________
tests/unittests/others_2/test_distiller.py:6: in <module>
from ..predictor.test_predictor import verify_predictor_save_load
<frozen importlib._bootstrap>:1176: in _find_and_load
???
<frozen importlib._bootstrap>:1147: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:690: in _load_unlocked
???
/opt/conda/lib/python3.11/site-packages/_pytest/assertion/rewrite.py:184: in exec_module
exec(co, module.__dict__)
tests/unittests/predictor/test_predictor.py:40: in <module>
"petfinder": PetFinderDataset(),
tests/unittests/utils/unittest_datasets.py:30: in __init__
download(
/opt/conda/lib/python3.11/site-packages/autogluon/multimodal/utils/download.py:266: in download
raise e
/opt/conda/lib/python3.11/site-packages/autogluon/multimodal/utils/download.py:208: in download
response = s3.meta.client.head_object(Bucket=s3_bucket_name, Key=s3_key)
/opt/conda/lib/python3.11/site-packages/botocore/client.py:565: in _api_call
return self._make_api_call(operation_name, kwargs)
/opt/conda/lib/python3.11/site-packages/botocore/client.py:1017: in _make_api_call
raise error_class(parsed_response, operation_name)
E botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
-------------------------------------------------------------------------------- Captured stdout ---------------------------------------------------------------------------------
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 4 attempts left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 3 attempts left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 2 attempts left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 1 attempt left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
_____________________________________________________ ERROR collecting multimodal/tests/unittests/others_2/test_few_shot.py ______________________________________________________
tests/unittests/others_2/test_few_shot.py:17: in <module>
from ..predictor.test_predictor import verify_predictor_save_load, verify_realtime_inference
<frozen importlib._bootstrap>:1176: in _find_and_load
???
<frozen importlib._bootstrap>:1147: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:690: in _load_unlocked
???
/opt/conda/lib/python3.11/site-packages/_pytest/assertion/rewrite.py:184: in exec_module
exec(co, module.__dict__)
tests/unittests/predictor/test_predictor.py:40: in <module>
"petfinder": PetFinderDataset(),
tests/unittests/utils/unittest_datasets.py:30: in __init__
download(
/opt/conda/lib/python3.11/site-packages/autogluon/multimodal/utils/download.py:266: in download
raise e
/opt/conda/lib/python3.11/site-packages/autogluon/multimodal/utils/download.py:208: in download
response = s3.meta.client.head_object(Bucket=s3_bucket_name, Key=s3_key)
/opt/conda/lib/python3.11/site-packages/botocore/client.py:565: in _api_call
return self._make_api_call(operation_name, kwargs)
/opt/conda/lib/python3.11/site-packages/botocore/client.py:1017: in _make_api_call
raise error_class(parsed_response, operation_name)
E botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
-------------------------------------------------------------------------------- Captured stdout ---------------------------------------------------------------------------------
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 4 attempts left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 3 attempts left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 2 attempts left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 1 attempt left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
________________________________________________________ ERROR collecting multimodal/tests/unittests/others_2/test_hpo.py ________________________________________________________
tests/unittests/others_2/test_hpo.py:12: in <module>
from ..predictor.test_predictor import verify_predictor_save_load
<frozen importlib._bootstrap>:1176: in _find_and_load
???
<frozen importlib._bootstrap>:1147: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:690: in _load_unlocked
???
/opt/conda/lib/python3.11/site-packages/_pytest/assertion/rewrite.py:184: in exec_module
exec(co, module.__dict__)
tests/unittests/predictor/test_predictor.py:40: in <module>
"petfinder": PetFinderDataset(),
tests/unittests/utils/unittest_datasets.py:30: in __init__
download(
/opt/conda/lib/python3.11/site-packages/autogluon/multimodal/utils/download.py:266: in download
raise e
/opt/conda/lib/python3.11/site-packages/autogluon/multimodal/utils/download.py:208: in download
response = s3.meta.client.head_object(Bucket=s3_bucket_name, Key=s3_key)
/opt/conda/lib/python3.11/site-packages/botocore/client.py:565: in _api_call
return self._make_api_call(operation_name, kwargs)
/opt/conda/lib/python3.11/site-packages/botocore/client.py:1017: in _make_api_call
raise error_class(parsed_response, operation_name)
E botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
-------------------------------------------------------------------------------- Captured stdout ---------------------------------------------------------------------------------
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 4 attempts left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 3 attempts left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 2 attempts left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 1 attempt left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
______________________________________________________ ERROR collecting multimodal/tests/unittests/others_2/test_images.py _______________________________________________________
tests/unittests/others_2/test_images.py:11: in <module>
from ..predictor.test_predictor import verify_predictor_save_load
<frozen importlib._bootstrap>:1176: in _find_and_load
???
<frozen importlib._bootstrap>:1147: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:690: in _load_unlocked
???
/opt/conda/lib/python3.11/site-packages/_pytest/assertion/rewrite.py:184: in exec_module
exec(co, module.__dict__)
tests/unittests/predictor/test_predictor.py:40: in <module>
"petfinder": PetFinderDataset(),
tests/unittests/utils/unittest_datasets.py:30: in __init__
download(
/opt/conda/lib/python3.11/site-packages/autogluon/multimodal/utils/download.py:266: in download
raise e
/opt/conda/lib/python3.11/site-packages/autogluon/multimodal/utils/download.py:208: in download
response = s3.meta.client.head_object(Bucket=s3_bucket_name, Key=s3_key)
/opt/conda/lib/python3.11/site-packages/botocore/client.py:565: in _api_call
return self._make_api_call(operation_name, kwargs)
/opt/conda/lib/python3.11/site-packages/botocore/client.py:1017: in _make_api_call
raise error_class(parsed_response, operation_name)
E botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
-------------------------------------------------------------------------------- Captured stdout ---------------------------------------------------------------------------------
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 4 attempts left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 3 attempts left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 2 attempts left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 1 attempt left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
____________________________________________________ ERROR collecting multimodal/tests/unittests/predictor/test_predictor.py _____________________________________________________
tests/unittests/predictor/test_predictor.py:40: in <module>
"petfinder": PetFinderDataset(),
tests/unittests/utils/unittest_datasets.py:30: in __init__
download(
/opt/conda/lib/python3.11/site-packages/autogluon/multimodal/utils/download.py:266: in download
raise e
/opt/conda/lib/python3.11/site-packages/autogluon/multimodal/utils/download.py:208: in download
response = s3.meta.client.head_object(Bucket=s3_bucket_name, Key=s3_key)
/opt/conda/lib/python3.11/site-packages/botocore/client.py:565: in _api_call
return self._make_api_call(operation_name, kwargs)
/opt/conda/lib/python3.11/site-packages/botocore/client.py:1017: in _make_api_call
raise error_class(parsed_response, operation_name)
E botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
-------------------------------------------------------------------------------- Captured stdout ---------------------------------------------------------------------------------
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 4 attempts left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 3 attempts left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 2 attempts left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
download failed due to ClientError('An error occurred (403) when calling the HeadObject operation: Forbidden'), retrying, 1 attempt left
Downloading /home/sagemaker-user/.automm_unit_tests/datasets/petfinder_for_unit_tests.zip from s3://automl-mm-bench/unit-tests-0.4/datasets/petfinder_for_unit_tests.zip...
================================================================================ warnings summary ================================================================================
../../../../opt/conda/lib/python3.11/site-packages/pip/_vendor/pkg_resources/__init__.py:3116
../../../../opt/conda/lib/python3.11/site-packages/pip/_vendor/pkg_resources/__init__.py:3116
../../../../opt/conda/lib/python3.11/site-packages/pip/_vendor/pkg_resources/__init__.py:3116
../../../../opt/conda/lib/python3.11/site-packages/pip/_vendor/pkg_resources/__init__.py:3116
../../../../opt/conda/lib/python3.11/site-packages/pip/_vendor/pkg_resources/__init__.py:3116
../../../../opt/conda/lib/python3.11/site-packages/pip/_vendor/pkg_resources/__init__.py:3116
../../../../opt/conda/lib/python3.11/site-packages/pip/_vendor/pkg_resources/__init__.py:3116
/opt/conda/lib/python3.11/site-packages/pip/_vendor/pkg_resources/__init__.py:3116: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('autogluon')`.
Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
declare_namespace(pkg)
../../../../opt/conda/lib/python3.11/site-packages/pip/_vendor/pkg_resources/__init__.py:3116
/opt/conda/lib/python3.11/site-packages/pip/_vendor/pkg_resources/__init__.py:3116: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('google')`.
Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
declare_namespace(pkg)
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
============================================================================ short test summary info =============================================================================
ERROR tests/unittests/others/test_deployment_onnx.py - botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
ERROR tests/unittests/others/test_dump_model.py - botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
ERROR tests/unittests/others_2/test_backward_compatibility.py - botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
ERROR tests/unittests/others_2/test_data_augmentation.py - botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
ERROR tests/unittests/others_2/test_distiller.py - botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
ERROR tests/unittests/others_2/test_few_shot.py - botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
ERROR tests/unittests/others_2/test_hpo.py - botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
ERROR tests/unittests/others_2/test_images.py - botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
ERROR tests/unittests/predictor/test_predictor.py - botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 9 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
=================================================================== 2 skipped, 8 warnings, 9 errors in 24.96s ====================================================================
```
Some test cases require access to `s3://automl-mm-bench`, but this is not accessible externally.
**Expected behavior**
All unit tests should rely on public resources only
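One way the test suite could avoid hard collection failures when the private bucket is unreachable (a sketch; the `AUTOMM_PRIVATE_DATA` variable name is an assumption, not an existing AutoGluon flag):

```python
import os


def needs_private_s3_data() -> bool:
    """True when the private automl-mm-bench bucket has not been made
    available, so tests depending on it should be skipped up front
    instead of erroring out with a 403 during collection."""
    return os.environ.get("AUTOMM_PRIVATE_DATA", "") != "1"
```

A test module could then be guarded with `pytest.mark.skipif(needs_private_s3_data(), reason="private S3 test data unavailable")`.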
**To Reproduce**
Pull autogluon into a Docker container with autogluon.multimodal installed, then run:
```
~ cd autogluon/multimodal
~ python -m pytest tests/unittests/
```
**Screenshots / Logs**
As attached above
**Installed Versions**
<details>
```python
# Replace this code with the output of the following:
from autogluon.core.utils import show_versions
show_versions()
```
</details>
```
INSTALLED VERSIONS
------------------
date : 2025-01-17
time : 00:01:52.034616
python : 3.11.11.final.0
OS : Linux
OS-release : 5.10.230-202.885.amzn2int.x86_64
Version : #1 SMP Tue Dec 3 16:44:20 UTC 2024
machine : x86_64
processor : x86_64
num_cores : 96
cpu_ram_mb : 382645.20703125
cuda version : None
num_gpus : 0
gpu_ram_mb : []
avail_disk_size_mb : 580551
accelerate : 0.34.2
autogluon : 1.2
autogluon.common : 1.2
autogluon.core : 1.2
autogluon.features : 1.2
autogluon.multimodal : 1.2
autogluon.tabular : 1.2
autogluon.timeseries : 1.2
boto3 : 1.34.162
catboost : 1.2.7
coreforecast : 0.0.12
defusedxml : 0.7.1
einops : None
evaluate : 0.4.1
fastai : 2.7.18
fugue : 0.9.1
gluonts : 0.16.0
huggingface_hub : 0.27.1
hyperopt : 0.2.7
imodels : None
jinja2 : 3.1.5
joblib : 1.4.2
jsonschema : 4.23.0
lightgbm : 4.5.0
lightning : 2.5.0.post0
matplotlib : 3.10.0
mlforecast : 0.13.4
networkx : 3.4.2
nlpaug : 1.1.11
nltk : 3.9.1
numpy : 1.26.4
nvidia-ml-py3 : None
omegaconf : 2.3.0
onnx : None
onnxruntime : None
onnxruntime-gpu : None
openmim : 0.3.7
optimum : None
optimum-intel : None
orjson : 3.10.14
pandas : 2.2.3
pdf2image : 1.17.0
Pillow : 11.1.0
psutil : 5.9.8
pyarrow : 17.0.0
pytesseract : 0.3.10
pytorch-metric-learning: 2.3.0
pytorch_lightning : 2.5.0.post0
ray : 2.37.0
requests : 2.32.3
scikit-image : 0.20.0
scikit-learn : 1.5.2
scikit-learn-intelex : None
scipy : 1.15.1
seqeval : 1.2.2
skl2onnx : None
spacy : 3.8.2
statsforecast : 1.7.8
tabpfn : None
tensorboard : 2.17.1
text-unidecode : 1.3
timm : 1.0.3
torch : 2.4.1.post100
torchmetrics : 1.2.1
torchvision : 0.19.1
tqdm : 4.67.1
transformers : 4.48.0
utilsforecast : 0.2.3
vowpalwabbit : None
xgboost : 2.1.3
```
|
closed
|
2025-01-16T23:28:43Z
|
2025-01-17T20:34:41Z
|
https://github.com/autogluon/autogluon/issues/4806
|
[
"bug",
"module: multimodal"
] |
TRNWWZ
| 3
|
axnsan12/drf-yasg
|
rest-api
| 556
|
[BUG] Missing endpoints
|
file: drf_yasg/generators.py
func: EndpointEnumerator.get_api_endpoints
lines: 102-110
code:
    path = self.replace_version(path, callback)
    # avoid adding endpoints that have already been seen,
    # as Django resolves urls in top-down order
    if path in ignored_endpoints:
        continue
    ignored_endpoints.add(path)
    for method in self.get_allowed_methods(callback):
        endpoint = (path, method, callback)
        api_endpoints.append(endpoint)
error:
An endpoint consists of a path, a method, and a callback, not only a path! Deduplicating on the path alone drops every method after the first URL pattern that resolves to that path.
fix:
    path = self.replace_version(path, callback)
    for method in self.get_allowed_methods(callback):
        endpoint = (path, method, callback)
        # avoid adding endpoints that have already been seen,
        # as Django resolves urls in top-down order
        if endpoint in ignored_endpoints:
            continue
        ignored_endpoints.add(endpoint)
        api_endpoints.append(endpoint)
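The effect of keying the seen-set on the path alone can be shown with a minimal, self-contained simulation (the routes here are made up for illustration):

```python
def enumerate_endpoints(routes, dedupe_by_path):
    """Simulate the endpoint enumeration with the two dedup strategies.

    routes: list of (path, allowed_methods) pairs, resolved top-down.
    """
    seen = set()
    endpoints = []
    for path, methods in routes:
        if dedupe_by_path:
            # Buggy variant: a path seen once hides *all* later methods on it.
            if path in seen:
                continue
            seen.add(path)
            for method in methods:
                endpoints.append((path, method))
        else:
            # Fixed variant: deduplicate full (path, method) endpoints.
            for method in methods:
                if (path, method) in seen:
                    continue
                seen.add((path, method))
                endpoints.append((path, method))
    return endpoints


# Two URL patterns resolve to the same path but expose different methods.
routes = [("/items/", ["GET"]), ("/items/", ["POST"])]
```

With `dedupe_by_path=True` the second pattern's POST endpoint is silently dropped; with the fixed strategy both survive.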
|
open
|
2020-03-10T06:06:35Z
|
2025-03-07T12:15:25Z
|
https://github.com/axnsan12/drf-yasg/issues/556
|
[
"triage"
] |
daleeg
| 0
|
PaddlePaddle/ERNIE
|
nlp
| 687
|
Pretraining data for ERNIE-Doc
|
Could you share how the CC-NEWS and STORIES pretraining corpora for ERNIE-Doc were obtained and processed?
|
closed
|
2021-05-30T09:32:32Z
|
2021-08-06T08:26:42Z
|
https://github.com/PaddlePaddle/ERNIE/issues/687
|
[
"wontfix"
] |
xyltt
| 3
|
jacobgil/pytorch-grad-cam
|
computer-vision
| 134
|
AttributeError: 'GradCAM' object has no attribute 'activations_and_grads'
|
I used this code with the ConViT model and my own dataset, but ran into a problem.
The problem is:
cam = methods[args.method](model=model,
File "D:\anaconda\anaconda\envs\ViT\lib\site-packages\grad_cam-1.3.2-py3.8.egg\pytorch_grad_cam\grad_cam.py", line 8, in __init__
File "D:\anaconda\anaconda\envs\ViT\lib\site-packages\grad_cam-1.3.2-py3.8.egg\pytorch_grad_cam\base_cam.py", line 25, in __init__
File "D:\anaconda\anaconda\envs\ViT\lib\site-packages\grad_cam-1.3.2-py3.8.egg\pytorch_grad_cam\activations_and_gradients.py", line 11, in __init__
TypeError: 'LayerNorm' object is not iterable
Exception ignored in: <function BaseCAM.__del__ at 0x000002595B6CF550>
Traceback (most recent call last):
File "D:\anaconda\anaconda\envs\ViT\lib\site-packages\grad_cam-1.3.2-py3.8.egg\pytorch_grad_cam\base_cam.py", line 191, in __del__
AttributeError: 'GradCAM' object has no attribute 'activations_and_grads'

|
closed
|
2021-09-13T15:09:54Z
|
2023-01-03T11:18:36Z
|
https://github.com/jacobgil/pytorch-grad-cam/issues/134
|
[] |
Joker-ZXR
| 7
|
gevent/gevent
|
asyncio
| 1,510
|
gevent.subprocess communicate suppresses UnicodeDecodeError and returns empty string instead
|
* gevent version: 1.4.0
* Python version: cPython 3.7.5 (default, Dec 18 2019, 12:57:24) \n[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
* Operating System: CentOS Linux release 7.7.1908 (Core)
### Description:
Running `gevent.subprocess.Popen.communicate()`, where text is expected but fails to decode, results in a traceback from the greenlet that is **not** propagated to the caller. Instead, the call returns an empty string:
```python
import gevent
import gevent.monkey
gevent.monkey.patch_subprocess()
from gevent import subprocess
out, _ = subprocess.Popen(['printf', r'\xff'],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE).communicate()
assert out == b'\xff'
out, _ = subprocess.Popen(['printf', r'\xff'],
universal_newlines=True,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE).communicate()
assert out, "Why is there no output?"
```
the console shows this traceback, which the Hub writes on greenlets that raised unhandled exceptions:
```python traceback
Traceback (most recent call last):
File "src/gevent/greenlet.py", line 766, in gevent._greenlet.Greenlet.run
File "/usr/local/lib/python3.7/site-packages/gevent/subprocess.py", line 725, in _read
data = pipe.read()
File "/usr/local/lib/python3.7/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode bytes in position 0-1: invalid continuation byte
2020-01-14T10:08:13Z <Greenlet at 0x7ff576becdd0: _read> failed with UnicodeDecodeError
```
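The decode failure itself is easy to reproduce without gevent (`printf '\xff'` emits the single byte 0xFF, which is not valid UTF-8); the bug is that this exception dies in the reader greenlet instead of being re-raised from `communicate()`:

```python
def decode_pipe_output(data: bytes) -> str:
    # What the universal_newlines pipe reader effectively does before
    # handing output back to communicate().
    return data.decode("utf-8")


try:
    decode_pipe_output(b"\xff")
    raised = False
except UnicodeDecodeError:
    raised = True
```

With the stdlib `subprocess` module this error propagates to the `communicate()` caller; the report here is that gevent's pipe-reading greenlet prints the traceback and the caller just gets `''`.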
|
closed
|
2020-01-14T10:15:42Z
|
2020-01-14T20:29:51Z
|
https://github.com/gevent/gevent/issues/1510
|
[
"Type: Bug"
] |
koreno
| 0
|
openapi-generators/openapi-python-client
|
rest-api
| 984
|
Timeout issues due to client.beta.threads.runs.retrieve()
|
~ Snip ~
Had the wrong repo.
|
closed
|
2024-02-29T23:02:23Z
|
2024-02-29T23:11:46Z
|
https://github.com/openapi-generators/openapi-python-client/issues/984
|
[] |
JeretSB
| 1
|
gee-community/geemap
|
jupyter
| 854
|
Feature export problem with Map.draw_features
|
<!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
- geemap version: the latest version
- Python version: 3.8
- Operating System: Windows
### Description
I used the following code:
"Map = geemap.Map()
Map"
to show a map, and selected some points on the map.
But when I used the code "print(len(Map.draw_features))" to show the number of points, the result was 0.
"Map.draw_features" also shows that there are no points on the map.
### What I Did
```
import geemap
import ee
import os
os.environ['HTTP_PROXY'] = 'http://127.0.0.1:7890'
os.environ['HTTPS_PROXY'] = 'http://127.0.0.1:7890'
Map = geemap.Map()
Map
dataset = ee.ImageCollection('USDA/NASS/CDL').filter(ee.Filter.date('2020-01-01', '2020-12-31')).first();
cropLandcover = dataset.select('cropland');
Map.setCenter(-100.55, 40.71, 4);
Map.addLayer(cropLandcover, {}, 'Crop Landcover');
area1 = ee.FeatureCollection('users/gaoliaoran2020/addition/area4')
Map.addLayer(area1, {}, "area4")
area1_CDL = cropLandcover.clip(area1)
area1_Geometry = area1.geometry()
Map.addLayer(area1_CDL, {}, 'area1_CDL');
Map.add_legend(builtin_legend='USDA/NASS/CDL')
print(len(Map.draw_features))
```
|
closed
|
2022-01-07T08:43:44Z
|
2022-01-07T13:42:10Z
|
https://github.com/gee-community/geemap/issues/854
|
[
"bug"
] |
Godjobgerry
| 1
|
developmentseed/lonboard
|
jupyter
| 312
|
[EPIC] Optimize user notebook experience
|
## Context
Keeping our dependencies and development environment trimmed to what is necessary can keep our project tidy and help load times and execution times.
## Issue
Let's remove unnecessary dependencies and optimize the notebook experience.
## Acceptance-Criteria
List the tasks that need to be completed or artifacts that need to be produced
- [x] https://github.com/developmentseed/lonboard/issues/101
- [x] https://github.com/developmentseed/lonboard/issues/236
|
closed
|
2024-01-11T15:42:12Z
|
2024-09-24T19:43:08Z
|
https://github.com/developmentseed/lonboard/issues/312
|
[
"python"
] |
emmalu
| 1
|
yt-dlp/yt-dlp
|
python
| 12,013
|
[BiliBili] Circumvent 412 Error (Request is blocked by server) & Possible fix for no-account downloads.
|
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Singapore
### Provide a description that is worded well enough to be understood
Exceeding the request limit gets you blocked from bilibili for what seems to be anywhere from minutes to hours. Even with cookies, you're blocked with a 412 error as shown below.
Unlike YouTube, being blocked prevents you from parsing a channel's videos, but doesn't stop you from downloading individual videos by BV-id. **Important fact here.**
However, even when blocked, you can still enumerate a channel's videos via /dynamic, which lists all posts from the uploader: videos, but also text and image posts. This endpoint is not blocked. In fact, it does not even need an account.
Eg https://space.bilibili.com/287143274/dynamic
Eg https://api.bilibili.com/x/polymer/web-dynamic/v1/feed/space?offset=965506324593901573&host_mid=287143274
At the bottom of the response is offset=930111056702865432, which can be used to load the next page, and so on.
This can be used to recursively enumerate videos, which can still be downloaded while "blocked".
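The offset walk described above can be sketched generically; `fetch_page` is a stand-in for a real client of the `feed/space` endpoint, and the response keys (`items`, `offset`, `has_more`) are assumptions modeled on the API's shape, not actual yt-dlp code:

```python
def walk_dynamic_feed(fetch_page, first_offset=""):
    """Follow offset-based pagination until the feed reports no next page."""
    offset = first_offset
    while True:
        page = fetch_page(offset)
        yield from page["items"]
        offset = page.get("offset", "")
        if not page.get("has_more") or not offset:
            break


# Fake two-page feed standing in for the web-dynamic API.
pages = {
    "": {"items": ["BV1", "BV2"], "offset": "next", "has_more": True},
    "next": {"items": ["BV3"], "offset": "", "has_more": False},
}
videos = list(walk_dynamic_feed(pages.__getitem__))
```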
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
yt-dlp https://space.bilibili.com/287143274 --cookies-from-browser firefox --ignore-config --verbose
[debug] Command-line config: ['https://space.bilibili.com/287143274', '--cookies-from-browser', 'firefox', '--ignore-config', '--verbose']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2024.12.26.232815 from yt-dlp/yt-dlp-nightly-builds [0b6b7742c] (pip)
[debug] Python 3.12.6 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 3.0.15 3 Sep 2024)
[debug] exe versions: ffmpeg N-117825-g970d57988d-20241118 (setts), ffprobe N-117825-g970d57988d-20241118, phantomjs 2.1.1
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-14.1
[debug] Proxy map: {}
Extracting cookies from firefox
[debug] Extracting cookies from: "C:\Users\USER\AppData\Roaming\Mozilla\Firefox\Profiles\\cookies.sqlite"
Extracted cookies from firefox
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Plugin directories: ['C:\\Users\\USER\\AppData\\Roaming\\yt-dlp\\plugins\\yt-dlp-ChromeCookieUnlock\\yt_dlp_plugins']
[debug] Loaded 1837 extractors
[BilibiliSpaceVideo] Extracting URL: https://space.bilibili.com/287143274
[BilibiliSpaceVideo] A channel URL was given. Only the channel's videos will be downloaded. To download audios, add a "/audio" to the URL
[BilibiliSpaceVideo] 287143274: Downloading wbi sign
[BilibiliSpaceVideo] 287143274: Downloading space page 0
ERROR: [BilibiliSpaceVideo] 287143274: Request is blocked by server (412), please add cookies, wait and try later.
File "C:\Users\USER\AppData\Local\Programs\Python\Python312\Lib\site-packages\yt_dlp\extractor\common.py", line 742, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\AppData\Local\Programs\Python\Python312\Lib\site-packages\yt_dlp\extractor\bilibili.py", line 1248, in _real_extract
metadata, paged_list = self._extract_playlist(fetch_page, get_metadata, get_entries)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USER\AppData\Local\Programs\Python\Python312\Lib\site-packages\yt_dlp\extractor\bilibili.py", line 1169, in _extract_playlist
first_page = fetch_page(0)
^^^^^^^^^^^^^
File "C:\Users\USER\AppData\Local\Programs\Python\Python312\Lib\site-packages\yt_dlp\extractor\bilibili.py", line 1223, in fetch_page
raise ExtractorError(
```
|
closed
|
2025-01-06T19:28:45Z
|
2025-01-26T00:54:34Z
|
https://github.com/yt-dlp/yt-dlp/issues/12013
|
[
"duplicate",
"site-bug"
] |
pxssy
| 3
|
autokey/autokey
|
automation
| 920
|
Keyboard keys not importing or loading in.
|
### AutoKey is a Xorg application and will not function in a Wayland session. Do you use Xorg (X11) or Wayland?
Xorg
### Has this issue already been reported?
- [X] I have searched through the existing issues.
### Is this a question rather than an issue?
- [X] This is not a question.
### What type of issue is this?
Crash/Hang/Data loss
### Choose one or more terms that describe this issue:
- [ ] autokey triggers
- [X] autokey-gtk
- [ ] autokey-qt
- [ ] beta
- [ ] bug
- [ ] critical
- [ ] development
- [ ] documentation
- [ ] enhancement
- [X] installation/configuration
- [ ] phrase expansion
- [X] scripting
- [ ] technical debt
- [ ] user interface
### Other terms that describe this issue if not provided above:
_No response_
### Which Linux distribution did you use?
Linux 6.5.9-arch2-1
Laptop Ryzen chip with amdgpu drivers
### Which AutoKey GUI did you use?
GTK
### Which AutoKey version did you use?
0.96.0
### How did you install AutoKey?
git
### Can you briefly describe the issue?
I don't exactly know what is going on, but nothing keyboard-related works properly. I've made 2 scripts using mouse and cursor only; those worked fine. I think something goes wrong at startup, but I cannot decipher the logs.
### Can the issue be reproduced?
Always
### What are the steps to reproduce the issue?
1. Launch autokey-gtk
### What should have happened?
I hoped it would work properly.
### What actually happened?
My keyboard keys have not been imported or configured correctly.
### Do you have screenshots?
[logs.txt](https://github.com/autokey/autokey/files/13291682/logs.txt)
These are the startup logs. I hope I am not very stupid.
### Can you provide the output of the AutoKey command?
```bash
[logs.txt](https://github.com/autokey/autokey/files/13291682/logs.txt)
```
### Anything else?
No rush, I really like your program. I like making these simple scripts. Keep up the good work!
|
open
|
2023-11-08T03:26:48Z
|
2023-11-18T09:21:46Z
|
https://github.com/autokey/autokey/issues/920
|
[] |
ArcSpammer
| 5
|
iperov/DeepFaceLab
|
machine-learning
| 5,341
|
Issue with quick96 training model
|
Hello, I'm having problems with my training model! For some reason DFL is not remembering my trained model, even though I saved it and I am currently at around 71,000 iterations.
It says no saved model was found.
Please help me out.


|
open
|
2021-05-29T15:38:13Z
|
2023-06-08T22:40:45Z
|
https://github.com/iperov/DeepFaceLab/issues/5341
|
[] |
nabjit
| 2
|
TencentARC/GFPGAN
|
deep-learning
| 399
|
Allow defining model_rootpath for FaceRestoreHelper
|
Hey, it would be great if all weights could be stored in one directory or set to None:
```
gfpgan.GFPGANer(
model_path='weights/GFPGANv1.3.pth',
model_rootpath='weights'
)
```
Related code that enforces `gfpgan/weights`:
```
self.face_helper = FaceRestoreHelper(
upscale,
face_size=512,
crop_ratio=(1, 1),
det_model='retinaface_resnet50',
save_ext='png',
use_parse=True,
device=self.device,
model_rootpath='gfpgan/weights'
)
```
Generally speaking, consider giving callers more control over the `FaceRestoreHelper` instance.
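One way to hand that control to callers without changing every call site (a sketch; `FaceRestoreHelperStub` stands in for facexlib's real class, and the defaults mirror the snippet above):

```python
class FaceRestoreHelperStub:
    # Stand-in for facexlib's FaceRestoreHelper, recording its kwargs.
    def __init__(self, upscale, **kwargs):
        self.upscale = upscale
        self.kwargs = kwargs


def make_face_helper(upscale, **overrides):
    """Build the helper from defaults, letting callers override any of them
    (e.g. model_rootpath) instead of hard-coding 'gfpgan/weights'."""
    defaults = {
        "face_size": 512,
        "crop_ratio": (1, 1),
        "det_model": "retinaface_resnet50",
        "save_ext": "png",
        "use_parse": True,
        "model_rootpath": "gfpgan/weights",
    }
    defaults.update(overrides)
    return FaceRestoreHelperStub(upscale, **defaults)


helper = make_face_helper(2, model_rootpath="weights")
```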
|
closed
|
2023-06-15T11:41:38Z
|
2025-01-22T23:52:35Z
|
https://github.com/TencentARC/GFPGAN/issues/399
|
[] |
henryruhs
| 2
|
explosion/spaCy
|
deep-learning
| 12,854
|
Token morph Reading is not merged properly when using the merge_entities pipeline
|
<!-- NOTE: For questions or install related issues, please open a Discussion instead. -->
## How to reproduce the behaviour
```
import spacy
import json
import fileinput
from pprint import pprint
# returns start and end index, end not inclusive
def process(nlp, texts):
docs = list(nlp.pipe(texts, n_process=1, batch_size=2000))
for doc in docs:
for sent in doc.sents:
for token in sent:
tokenInfo = {
"idx": token.i,
"orth": token.orth_,
"pos": token.pos_,
"lemma": token.lemma_,
"norm": token.norm_,
"dep": token.dep_,
"morph": token.morph.to_json(),
}
print(json.dumps(tokenInfo, ensure_ascii=False))
nlp = spacy.load('ja_core_news_lg')
nlp.add_pipe("merge_subtokens")
nlp.add_pipe("merge_entities")
texts = []
for line in fileinput.input():
texts.append(line.strip())
process(nlp, texts)
```
## Command to test
`echo "4月1日に試験があるので" | python parse-jap.py`
returns
```
{"idx": 0, "orth": "4月1日", "pos": "NOUN", "lemma": "4月1日", "norm": "4月1日", "dep": "obl", "morph": "Reading=ツイタチ"}
{"idx": 1, "orth": "に", "pos": "ADP", "lemma": "に", "norm": "に", "dep": "case", "morph": "Reading=ニ"}
{"idx": 2, "orth": "試験", "pos": "NOUN", "lemma": "試験", "norm": "試験", "dep": "nsubj", "morph": "Reading=シケン"}
{"idx": 3, "orth": "が", "pos": "ADP", "lemma": "が", "norm": "が", "dep": "case", "morph": "Reading=ガ"}
{"idx": 4, "orth": "ある", "pos": "VERB", "lemma": "ある", "norm": "有る", "dep": "ROOT", "morph": "Inflection=五段-ラ行;連体形-一般|Reading=アル"}
{"idx": 5, "orth": "の", "pos": "SCONJ", "lemma": "の", "norm": "の", "dep": "mark", "morph": "Reading=ノ"}
{"idx": 6, "orth": "で", "pos": "AUX", "lemma": "だ", "norm": "だ", "dep": "fixed", "morph": "Inflection=助動詞-ダ;連用形-一般|Reading=デ"}
```
Note how, for 4月1日, the output shows "morph": "Reading=ツイタチ"; the reading of 4月 was removed.
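The expected behavior can be illustrated in plain Python: when tokens are merged, concatenate each token's `Reading` value instead of keeping only the last one (シガツ and ツイタチ are the UniDic readings of 4月 and 1日; this sketches the desired result, not spaCy's internals):

```python
def merge_readings(morphs):
    """Combine per-token 'Key=Value' morph strings for a merged span,
    concatenating the Reading values rather than keeping only the last."""
    readings = []
    for morph in morphs:
        for feat in morph.split("|"):
            key, _, value = feat.partition("=")
            if key == "Reading":
                readings.append(value)
    return "Reading=" + "".join(readings)


# 4月 + 1日 -> the merged entity should keep both readings
merged = merge_readings(["Reading=シガツ", "Reading=ツイタチ"])
```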
## Your Environment
- **spaCy version:** 3.5.3
- **Platform:** macOS-12.5-arm64-arm-64bit
- **Python version:** 3.10.10
- **Pipelines:** ja_core_news_sm (3.2.0), ja_ginza (5.1.2), ja_core_news_trf (3.2.0), ja_ginza_electra (5.1.2), ja_core_news_lg (3.2.0)
|
closed
|
2023-07-24T19:06:54Z
|
2023-07-25T06:24:01Z
|
https://github.com/explosion/spaCy/issues/12854
|
[
"feat / doc",
"feat / morphology"
] |
lawctan
| 2
|
tflearn/tflearn
|
tensorflow
| 255
|
fix accuracy for binary_crossentropy
|
There is a bug when calculating the 'accuracy' metric along with binary_crossentropy: 'accuracy' should behave differently depending on whether the incoming tensor is 1-D or 2-D.
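The 1-D vs 2-D distinction can be sketched in plain Python (the 0.5 threshold and argmax comparison are the conventional choices, not lifted from the actual patch):

```python
def accuracy(preds, targets):
    """Accuracy for binary_crossentropy outputs.

    1-D input: per-example probabilities, thresholded at 0.5.
    2-D input: per-class scores, compared by argmax (one-hot targets).
    """
    if preds and isinstance(preds[0], list):  # 2-D: argmax per row
        hits = sum(
            p.index(max(p)) == t.index(max(t)) for p, t in zip(preds, targets)
        )
    else:  # 1-D: threshold per element
        hits = sum((p > 0.5) == (t > 0.5) for p, t in zip(preds, targets))
    return hits / len(preds)
```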
|
closed
|
2016-08-03T00:10:18Z
|
2016-08-31T19:41:41Z
|
https://github.com/tflearn/tflearn/issues/255
|
[
"bug"
] |
aymericdamien
| 2
|
PaddlePaddle/ERNIE
|
nlp
| 222
|
Performance comparison of BERT, ERNIE, and TextCNN on text classification
|
All figures below (inference speed, memory usage, etc.) were measured on CPU.
[Performance comparison of TextCNN, pytorch_bert, tensorflow_bert, and ERNIE on a text classification task]
[All results below were verified over multiple test runs]
The inference data contains texts of up to 750,000 Chinese characters; 100 texts were used for testing.
In terms of memory usage and inference speed, TextCNN is the best of the four algorithms.
Since BERT and ERNIE reach good generalization without many rounds of fine-tuning, their generalization ability can be considered better than TextCNN's.
Comparing pytorch_bert, tensorflow_bert, and ERNIE, memory usage does not differ much, but ERNIE's inference speed is somewhat worse (**this matters a lot**); on the other hand, ERNIE is updated quickly and you can contact its team directly with questions.
**For long-text prediction**, I tried truncating the text into chunks (e.g., for a text of length 100000, predict n = 100000 // 510 times), because the first 510 characters of a long text may contain no important information. (For example, with a binary [entertainment, gambling] classification split into n chunks, if any single chunk is predicted as 'gambling', the whole text is labeled gambling.)
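The chunk-voting strategy above can be sketched as follows (the classifier is a stand-in; 510 is the per-chunk limit from the post, and the label names follow the example):

```python
def predict_long_text(text, predict_chunk, chunk_len=510,
                      positive="gambling", negative="entertainment"):
    """Split a long text into fixed-size chunks; label the whole text
    `positive` as soon as any chunk is predicted positive."""
    for start in range(0, len(text), chunk_len):
        if predict_chunk(text[start:start + chunk_len]) == positive:
            return positive
    return negative


# Stand-in classifier: flags chunks containing a marker word.
label = predict_long_text(
    "x" * 600 + "bet" + "x" * 600,
    lambda chunk: "gambling" if "bet" in chunk else "entertainment",
)
```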
【TextCNN】
推理时模型占用内存大约:546M(稳定)
推理时预测一个文件(完整长文本)平均所需时间:0.095s
多次训练,保存泛化效果最好模型,其在测试集上准确率:95.312%
【Pytorch_bert】
推理时模型占用内存:942M(峰值)
推理时预测一个文本(前128+后382字符)平均所需时间:1.149S
推理时预测一个文本(前510 * 1长度+尾部数据;相当于预测截断成n个510文本)平均所需时间:2.658s
推理时预测一个文本(前510 * 2长度+尾部数据)平均所需时间:3.529s
推理时预测一个文本(前510 * 5长度+尾部数据)平均所需时间:5.233s
推理时预测一个文本(完整长文本)平均所需时间:38.77s
fine-tune模型,其在测试集上准确率:98.82%
【tensorflow_bert】
推理时模型占用内存:988M(峰值)
推理时预测一个文本(前128+后382字符)平均所需时间:1.332S
推理时预测一个文本(前510 * 1长度+尾部数据)平均所需时间:1.485s
推理时预测一个文本(前510 * 2长度+尾部数据)平均所需时间:3.570s
推理时预测一个文本(前510 * 5长度+尾部数据)平均所需时间:7.033s
推理时预测一个文本(完整长文本)平均所需时间:56.18s
fine-tune模型(调节的参数与pytorch_bert一致),其在测试集上准确率:98.90%
【ERNIE】
推理时模型占用内存:1072M(峰值)
推理时预测一个文本(前128+后382字符)平均所需时间:2.227s
推理时预测一个文本(前510 * 1长度+尾部数据)平均所需时间:3.934s
推理时预测一个文本(前510 * 2长度+尾部数据)平均所需时间:6.001s
推理时预测一个文本(前510 * 5长度+尾部数据)平均所需时间:9.835s
推理时预测一个文本(完整长文本)平均所需时间:
fine-tune模型,其在测试集上准确率:98.74%
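The chunked long-text strategy described above can be sketched as follows (a minimal illustration; `predict` stands in for any fine-tuned classifier such as BERT, ERNIE, or TextCNN):

```python
def classify_long_text(text, predict, chunk_size=510):
    """Split a long text into chunk_size-character windows and label the whole
    document positive if any chunk is predicted positive (illustrative sketch)."""
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)] or [""]
    return any(predict(chunk) for chunk in chunks)
```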
|
closed
|
2019-07-24T03:48:11Z
|
2020-05-28T09:52:45Z
|
https://github.com/PaddlePaddle/ERNIE/issues/222
|
[
"wontfix"
] |
Biaocsu
| 17
|
aio-libs/aiomysql
|
sqlalchemy
| 605
|
Sometimes it broke by concurrent.futures._base.CancelledError
|
When using aiomysql in a high-concurrency Sanic web server:
``` python
async with self._pool.acquire() as conn:
async with conn.cursor() as cur:
await cur.execute(query, param)
if is_all:
res = await cur.fetchall()
else:
res = await cur.fetchone()
```
Sometimes it breaks with this error:
``` text
Traceback (most recent call last):
File "/usr/local/mycodes/amysql.py", line 249, in query
res = await cur.fetchone()
File "/usr/local/lib/python3.6/site-packages/aiomysql/utils.py", line 103, in __aexit__
await self._pool.release(self._conn)
concurrent.futures._base.CancelledError
```
or
``` text
Traceback (most recent call last):
File "/usr/local/mycodes/amysql.py", line 243, in query
async with self._pool.acquire() as conn:
File "/usr/local/lib/python3.6/site-packages/aiomysql/utils.py", line 98, in __aenter__
self._conn = await self._coro
File "/usr/local/lib/python3.6/site-packages/aiomysql/pool.py", line 133, in _acquire
async with self._cond:
File "/usr/lib64/python3.6/asyncio/locks.py", line 79, in __aenter__
yield from self.acquire()
File "/usr/lib64/python3.6/asyncio/locks.py", line 181, in acquire
yield from fut
concurrent.futures._base.CancelledError
```
About 1,000 requests in 10 seconds will trigger this error.
What can I do to avoid it?
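One common workaround is to shield the cleanup step from cancellation with `asyncio.shield`, so that a cancelled request cannot interrupt releasing the connection back to the pool. A minimal self-contained sketch of the pattern (stand-in coroutines, not aiomysql's actual code):

```python
import asyncio

async def release_conn(results):
    # Stand-in for pool.release(conn): cleanup that must finish
    # even if the surrounding request handler is cancelled.
    await asyncio.sleep(0.01)
    results.append("released")

async def query(results):
    try:
        await asyncio.sleep(1)  # stand-in for cur.execute()/fetchone()
    finally:
        # Shield the release so an outer cancellation cannot interrupt it.
        await asyncio.shield(release_conn(results))

async def main():
    results = []
    task = asyncio.create_task(query(results))
    await asyncio.sleep(0.05)
    task.cancel()  # simulate the client disconnect / server-side cancellation
    try:
        await task
    except asyncio.CancelledError:
        pass
    return results
```

Even though the task is cancelled mid-query, the shielded release still completes.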
|
open
|
2021-08-09T02:27:00Z
|
2023-04-12T02:22:44Z
|
https://github.com/aio-libs/aiomysql/issues/605
|
[
"bug"
] |
yanjieee
| 5
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
deep-learning
| 1,252
|
Define Dropout Value for Cycle & Pix2Pix
|
I have managed to turn dropout on for both pix2pix and cycle gan during training and in inference.
I would now like to explore the impact different dropout values have on the predictions drawn from each of the two models when running inference on the same dataset multiple times. How can I determine the dropout value used? The only parameter I see within the options is a binary True or False.
|
open
|
2021-03-12T15:58:05Z
|
2021-04-14T15:56:44Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1252
|
[] |
Tonks684
| 1
|
graphql-python/gql
|
graphql
| 391
|
Is there a reason TransportQueryError doesn't extend TransportError?
|
I expected all exceptions to be captured by `gql.exceptions.TransportError`:
```python
from gql.exceptions import TransportError
try:
...
except TransportError as e:
...
```
But `TransportQueryError` extends from `Exception`:
https://github.com/graphql-python/gql/blob/2827d887db4c6951899a8e242af55863328f68a2/gql/transport/exceptions.py#L30
Is this by design?
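Until the hierarchy changes, a workaround is to catch both exception types explicitly. A minimal sketch with stand-in classes mirroring the current hierarchy (not gql's actual code):

```python
class TransportError(Exception):
    """Stand-in for gql.transport.exceptions.TransportError."""

class TransportQueryError(Exception):
    """Stand-in: note it does NOT inherit from TransportError."""

def run_query(query_error: bool):
    if query_error:
        raise TransportQueryError("server returned errors")
    raise TransportError("connection failed")

def safe_run(query_error: bool) -> str:
    try:
        run_query(query_error)
    except (TransportError, TransportQueryError) as exc:
        # catching both covers the current hierarchy
        return type(exc).__name__
```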
|
closed
|
2023-02-23T13:19:11Z
|
2023-02-23T17:09:56Z
|
https://github.com/graphql-python/gql/issues/391
|
[
"type: bug"
] |
helderco
| 2
|
deepset-ai/haystack
|
nlp
| 8,437
|
Support Claude Sonnet 3.5 for AmazonBedrockGenerator
|
**Is your feature request related to a problem? Please describe.**
We'd like to use Sonnet 3.5 in Bedrock for some of our projects but need Haystack to support it (if it doesn't already)
**Describe the solution you'd like**
Haystack supports Sonnet 3 in Bedrock but we'd like support for Sonnet 3.5
**Describe alternatives you've considered**
N/A
**Additional context**
N/A
|
closed
|
2024-10-02T22:00:14Z
|
2024-10-10T14:21:08Z
|
https://github.com/deepset-ai/haystack/issues/8437
|
[] |
jkruzek
| 3
|
deepset-ai/haystack
|
nlp
| 8,649
|
Importing `FileTypeRouter` imports all converters
|
**Describe the bug**
When using/importing `FileTypeRouter`, all converters are imported as well. This makes it a heavier operation than necessary and increases the risk of further issues (e.g. cyclic dependencies, load time, import deadlocks when used in a multithreaded environment). For example, importing `AzureOCRDocumentConverter` loads additional external dependencies.
Line causing this:
https://github.com/deepset-ai/haystack/blob/78292422f00592bb0a6b5d58bbbb679f4b8718da/haystack/components/routers/file_type_router.py#L12
**Error message**
-
**Expected behavior**
Using/importing `FileTypeRouter` does not load all converters / has no dependency to converters.
E.g. the two methods in question could be moved to the `haystack.utils` module.
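The deferral could look roughly like this (an illustrative sketch, not Haystack's actual API): resolve a converter class by dotted path only when it is first requested, so importing the router stays cheap.

```python
import importlib

def get_converter(dotted_path: str):
    """Resolve a class by dotted path on first use instead of importing
    every converter at module import time (illustrative, not Haystack's API)."""
    module_name, _, class_name = dotted_path.rpartition(".")
    module = importlib.import_module(module_name)
    return getattr(module, class_name)
```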
**Additional context**
-
**To Reproduce**
-
**FAQ Check**
- [x] Have you had a look at [our new FAQ page](https://docs.haystack.deepset.ai/docs/faq)?
**System:**
- OS:
- GPU/CPU:
- Haystack version (commit or version number):
- DocumentStore:
- Reader:
- Retriever:
|
closed
|
2024-12-17T11:56:49Z
|
2025-02-17T07:50:00Z
|
https://github.com/deepset-ai/haystack/issues/8649
|
[
"P2"
] |
tstadel
| 0
|
NullArray/AutoSploit
|
automation
| 339
|
Unhandled Exception (3b02048a5)
|
Autosploit version: `3.0`
OS information: `Linux-4.18.0-kali3-amd64-x86_64-with-Kali-kali-rolling-kali-rolling`
Running context: `autosploit.py`
Error message: `global name 'Except' is not defined`
Error traceback:
```
Traceback (most recent call):
File "/root/Github/AutoSploit/autosploit/main.py", line 113, in main
loaded_exploits = load_exploits(EXPLOIT_FILES_PATH)
File "/root/Github/AutoSploit/lib/jsonize.py", line 61, in load_exploits
except Except:
NameError: global name 'Except' is not defined
```
Metasploit launched: `False`
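The traceback points at a typo in `lib/jsonize.py`: the handler names an undefined class `Except`. A hedged sketch of the intended pattern (`safe_load` and its body are hypothetical; only the except clause mirrors the reported bug):

```python
def safe_load(loader):
    try:
        return loader()
    except Exception:  # was `except Except:`, which itself raises NameError
        return []
```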
|
closed
|
2019-01-06T09:54:33Z
|
2019-01-14T18:06:36Z
|
https://github.com/NullArray/AutoSploit/issues/339
|
[] |
AutosploitReporter
| 0
|
ivy-llc/ivy
|
tensorflow
| 28,066
|
Fix Frontend Failing Test: torch - tensor.torch.Tensor.__mul__
|
ToDo: https://github.com/unifyai/ivy/issues/27498
Type: Priority
|
closed
|
2024-01-27T08:02:17Z
|
2024-01-27T09:01:23Z
|
https://github.com/ivy-llc/ivy/issues/28066
|
[
"Sub Task"
] |
Aryan8912
| 1
|
awtkns/fastapi-crudrouter
|
fastapi
| 134
|
Question: Model with different look than the Input SCHEMA
|
I have an API that, for various reasons, has a structure different from what the route receives from external actors (they send their own schema, a JSON document from MongoDB). How do I handle the conversion from this external schema to my model?
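One approach is to give the internal model a converter classmethod that maps the external payload onto it. A sketch with a plain dataclass and made-up field names (with FastAPI you would more likely put this mapping in a pydantic model or validator):

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    price: float

    @classmethod
    def from_external(cls, payload: dict) -> "Item":
        # Map the external (MongoDB-shaped) JSON onto the internal model.
        # The field names here are illustrative, not from the actual API.
        return cls(name=payload["item_name"], price=float(payload["cost"]))
```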
|
open
|
2022-01-06T18:44:09Z
|
2022-01-06T18:44:09Z
|
https://github.com/awtkns/fastapi-crudrouter/issues/134
|
[] |
jeanlst
| 0
|
ageitgey/face_recognition
|
python
| 929
|
Can we make face_recognition.face_encodings a bit faster
|
* face_recognition version:
* Python version 3.6:
* Windows 10
GTX 1060
16 gb ram
I have noticed `face_recognition.face_encodings` takes a lot of time; is there a way to make it a bit faster?
|
open
|
2019-09-13T20:28:05Z
|
2019-10-15T01:45:11Z
|
https://github.com/ageitgey/face_recognition/issues/929
|
[] |
talhaanwarch
| 2
|
google-research/bert
|
nlp
| 461
|
Reduce prediction time for question answering
|
Hi,
I am running the BERT question-answering solution on a machine with a GPU (Tesla K80, 12 GB). Prediction for a single question takes more than 5 seconds. Can we reduce it to below 1 second?
Do we need to configure anything to make this possible?
Thank you
|
open
|
2019-02-28T09:28:09Z
|
2019-09-19T04:36:36Z
|
https://github.com/google-research/bert/issues/461
|
[] |
shivamani-ans
| 9
|
pydantic/pydantic
|
pydantic
| 10,787
|
`TypeAdapter.json_schema()` unable to render schema for custom `Annotated` type having a pydantic type in `PlainValidator.json_schema_input_type`
|
### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
I define a custom Pydantic type with the `typing.Annotated` + `pydantic.PlainValidator(func, json_schema_input_type=OtherType)` syntax:
- Validation of this custom type works as expected
- but I am unable to render its **JSON schema** with `TypeAdapter.json_schema`, as soon as **`OtherType` is a pydantic type** (e.g. a pydantic dataclass in my below example).
### Full stack trace from example
```python
{'properties': {'x': {'title': 'X', 'type': 'integer'}}, 'required': ['x'], 'title': 'MyNestedData', 'type': 'object'} # OK
Traceback (most recent call last):
File "test_pydantic_bug.py", line 29, in <module>
print(TypeAdapter(MyRootData).json_schema()) # KeyError: '__main____MyNestedData-Input__1'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/pydantic/type_adapter.py", line 135, in wrapped
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/pydantic/type_adapter.py", line 542, in json_schema
return schema_generator_instance.generate(self.core_schema, mode=mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/pydantic/json_schema.py", line 416, in generate
json_ref_counts = self.get_json_ref_counts(json_schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/pydantic/json_schema.py", line 2181, in get_json_ref_counts
_add_json_refs(json_schema)
File ".venv/lib/python3.11/site-packages/pydantic/json_schema.py", line 2170, in _add_json_refs
_add_json_refs(self.definitions[defs_ref])
~~~~~~~~~~~~~~~~^^^^^^^^^^
KeyError: '__main____MyNestedData-Input__1'
```
### Example Code
```Python
from typing import Annotated, Self
from pydantic import TypeAdapter, PlainValidator
from pydantic.dataclasses import dataclass
@dataclass
class MyNestedData:
x: int
print(TypeAdapter(MyNestedData).json_schema()) # OK
class _MyRootData:
@classmethod
def from_unsafe(cls, xxx) -> Self: ...
MyRootData = Annotated[
_MyRootData,
PlainValidator(_MyRootData.from_unsafe, json_schema_input_type=MyNestedData),
]
print(TypeAdapter(MyRootData).json_schema()) # KeyError: '__main____MyNestedData-Input__1'
```
### Python, Pydantic & OS Version
```Text
pydantic version: 2.9.2
pydantic-core version: 2.23.4
pydantic-core build: profile=release pgo=false
install path: .venv/lib/python3.11/site-packages/pydantic
python version: 3.11.9 (main, May 2 2024, 10:11:35) [GCC 12.2.0]
platform: Linux-6.1.0-26-amd64-x86_64-with-glibc2.36
related packages: typing_extensions-4.12.2
```
|
closed
|
2024-11-07T16:35:22Z
|
2024-12-05T19:47:46Z
|
https://github.com/pydantic/pydantic/issues/10787
|
[
"bug V2"
] |
emaheuxPEREN
| 5
|
OWASP/Nettacker
|
automation
| 991
|
test coverage for `api/core.py`
|

|
open
|
2025-01-20T16:54:41Z
|
2025-01-20T16:58:46Z
|
https://github.com/OWASP/Nettacker/issues/991
|
[] |
nitinawari
| 1
|
ray-project/ray
|
deep-learning
| 51,071
|
[core] Only one of the threads in a thread pool will be initialized as a long-running Python thread
|
### What happened + What you expected to happen
Currently, only one of the threads in a thread pool will be initialized as a long-running Python thread. I should also investigate whether it's possible to call `PyGILState_Release` on a thread other than the one that calls `PyGILState_Ensure` in the thread pool.
### Versions / Dependencies
TODO
### Reproduction script
TODO
### Issue Severity
None
|
open
|
2025-03-04T22:03:14Z
|
2025-03-04T23:02:20Z
|
https://github.com/ray-project/ray/issues/51071
|
[
"bug",
"core"
] |
kevin85421
| 0
|
vastsa/FileCodeBox
|
fastapi
| 290
|
Cannot read properties of undefined reading ‘digest‘
|
Because the internal site has no domain name and is accessed by plain IP, it is served over HTTP.
Could the HTTP case be supported?


|
closed
|
2025-03-13T08:44:23Z
|
2025-03-15T15:42:39Z
|
https://github.com/vastsa/FileCodeBox/issues/290
|
[] |
BlackWhite2000
| 1
|
serengil/deepface
|
deep-learning
| 542
|
What is the target_size = (224, 224) for each of the models?
|
What value of target_size should I use for each of the models?
[
"VGG-Face",
"Facenet",
"Facenet512",
"OpenFace",
"DeepFace",
"DeepID",
"ArcFace",
"Dlib",
"SFace",
]
|
closed
|
2022-08-20T08:28:51Z
|
2022-08-20T09:25:38Z
|
https://github.com/serengil/deepface/issues/542
|
[
"question"
] |
martinenkoEduard
| 1
|
tiangolo/uwsgi-nginx-flask-docker
|
flask
| 140
|
Image failing on start
|
Since the recent change to nginx.conf, we added the following line to our Dockerfile:
`COPY ./nginx.conf /app/nginx.conf`
but now on startup we get the following errors. Any ideas?
6/14/2019 4:06:30 PMworker 1 buried after 1 seconds
6/14/2019 4:06:30 PMworker 2 buried after 1 seconds
6/14/2019 4:06:30 PMgoodbye to uWSGI.
6/14/2019 4:06:32 PM/usr/lib/python2.7/dist-packages/supervisor/options.py:298: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
6/14/2019 4:06:32 PM 'Supervisord is running as root and it is searching '
6/14/2019 4:06:33 PM[uWSGI] getting INI configuration from /app/uwsgi.ini
6/14/2019 4:06:33 PM[uWSGI] getting INI configuration from /etc/uwsgi/uwsgi.ini
6/14/2019 4:06:33 PM
6/14/2019 4:06:33 PM;uWSGI instance configuration
6/14/2019 4:06:33 PM[uwsgi]
6/14/2019 4:06:33 PMcheaper = 2
6/14/2019 4:06:33 PMprocesses = 16
6/14/2019 4:06:33 PMini = /app/uwsgi.ini
6/14/2019 4:06:33 PMmodule = app.main
6/14/2019 4:06:33 PMcallable = app
6/14/2019 4:06:33 PMenable-threads = true
6/14/2019 4:06:33 PMini = /etc/uwsgi/uwsgi.ini
6/14/2019 4:06:33 PMsocket = /tmp/uwsgi.sock
6/14/2019 4:06:33 PMchown-socket = nginx:nginx
6/14/2019 4:06:33 PMchmod-socket = 664
6/14/2019 4:06:33 PMhook-master-start = unix_signal:15 gracefully_kill_them_all
6/14/2019 4:06:33 PMneed-app = true
6/14/2019 4:06:33 PMdie-on-term = true
6/14/2019 4:06:33 PMshow-config = true
6/14/2019 4:06:33 PM;end of configuration
6/14/2019 4:06:33 PM
6/14/2019 4:06:33 PM*** Starting uWSGI 2.0.18 (64bit) on [Fri Jun 14 14:06:33 2019] ***
6/14/2019 4:06:33 PMcompiled with version: 6.3.0 20170516 on 16 May 2019 03:07:24
6/14/2019 4:06:33 PMos: Linux-4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 07:31:43 UTC 2018
6/14/2019 4:06:33 PMnodename: 32988a72b7a3
6/14/2019 4:06:33 PMmachine: x86_64
6/14/2019 4:06:33 PMclock source: unix
6/14/2019 4:06:33 PMpcre jit disabled
6/14/2019 4:06:33 PMdetected number of CPU cores: 4
6/14/2019 4:06:33 PMcurrent working directory: /app
6/14/2019 4:06:33 PMdetected binary path: /usr/local/bin/uwsgi
6/14/2019 4:06:33 PMyour memory page size is 4096 bytes
6/14/2019 4:06:33 PMdetected max file descriptor number: 1048576
6/14/2019 4:06:33 PMlock engine: pthread robust mutexes
6/14/2019 4:06:33 PMthunder lock: disabled (you can enable it with --thunder-lock)
6/14/2019 4:06:33 PMuwsgi socket 0 bound to UNIX address /tmp/uwsgi.sock fd 3
6/14/2019 4:06:33 PMuWSGI running as root, you can use --uid/--gid/--chroot options
6/14/2019 4:06:33 PM*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
6/14/2019 4:06:33 PMPython version: 3.6.8 (default, May 8 2019, 05:35:00) [GCC 6.3.0 20170516]
6/14/2019 4:06:33 PMPython main interpreter initialized at 0x5585518b4390
6/14/2019 4:06:33 PMuWSGI running as root, you can use --uid/--gid/--chroot options
6/14/2019 4:06:33 PM*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
6/14/2019 4:06:33 PMpython threads support enabled
6/14/2019 4:06:33 PMyour server socket listen backlog is limited to 100 connections
6/14/2019 4:06:33 PMyour mercy for graceful operations on workers is 60 seconds
6/14/2019 4:06:33 PMmapped 1239640 bytes (1210 KB) for 16 cores
6/14/2019 4:06:33 PM*** Operational MODE: preforking ***
6/14/2019 4:06:51 PM/usr/local/lib/python3.6/site-packages/torch/serialization.py:425: SourceChangeWarning: source code of class 'acceptability.models.elmo_classifier.ELMOClassifier' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
6/14/2019 4:06:51 PM warnings.warn(msg, SourceChangeWarning)
6/14/2019 4:06:51 PM/usr/local/lib/python3.6/site-packages/torch/serialization.py:425: SourceChangeWarning: source code of class 'torch.nn.modules.dropout.Dropout' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
6/14/2019 4:06:51 PM warnings.warn(msg, SourceChangeWarning)
6/14/2019 4:06:51 PM/usr/local/lib/python3.6/site-packages/torch/serialization.py:425: SourceChangeWarning: source code of class 'torch.nn.modules.sparse.Embedding' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
6/14/2019 4:06:51 PM warnings.warn(msg, SourceChangeWarning)
6/14/2019 4:06:51 PM/usr/local/lib/python3.6/site-packages/torch/serialization.py:425: SourceChangeWarning: source code of class 'torch.nn.modules.rnn.LSTM' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
6/14/2019 4:06:51 PM warnings.warn(msg, SourceChangeWarning)
6/14/2019 4:06:51 PM/usr/local/lib/python3.6/site-packages/torch/serialization.py:425: SourceChangeWarning: source code of class 'torch.nn.modules.linear.Linear' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
6/14/2019 4:06:51 PM warnings.warn(msg, SourceChangeWarning)
6/14/2019 4:06:51 PM/usr/local/lib/python3.6/site-packages/torch/serialization.py:425: SourceChangeWarning: source code of class 'torch.nn.modules.container.ModuleList' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
6/14/2019 4:06:51 PM warnings.warn(msg, SourceChangeWarning)
6/14/2019 4:06:51 PM/usr/local/lib/python3.6/site-packages/torch/serialization.py:425: SourceChangeWarning: source code of class 'torch.nn.modules.activation.ReLU' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
6/14/2019 4:06:51 PM warnings.warn(msg, SourceChangeWarning)
6/14/2019 4:06:51 PM/usr/local/lib/python3.6/site-packages/torch/serialization.py:425: SourceChangeWarning: source code of class 'torch.nn.modules.activation.Sigmoid' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
6/14/2019 4:06:51 PM warnings.warn(msg, SourceChangeWarning)
6/14/2019 4:06:51 PM/usr/local/lib/python3.6/site-packages/torch/serialization.py:425: SourceChangeWarning: source code of class 'torch.nn.modules.activation.Softmax' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
6/14/2019 4:06:51 PM warnings.warn(msg, SourceChangeWarning)
6/14/2019 4:06:51 PMWSGI app 0 (mountpoint='') ready in 18 seconds on interpreter 0x5585518b4390 pid: 16 (default app)
6/14/2019 4:06:51 PMuWSGI running as root, you can use --uid/--gid/--chroot options
6/14/2019 4:06:51 PM*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
6/14/2019 4:06:51 PM*** uWSGI is running in multiple interpreter mode ***
6/14/2019 4:06:51 PMspawned uWSGI master process (pid: 16)
6/14/2019 4:06:51 PMspawned uWSGI worker 1 (pid: 24, cores: 1)
6/14/2019 4:06:51 PMspawned uWSGI worker 2 (pid: 25, cores: 1)
6/14/2019 4:06:51 PMrunning "unix_signal:15 gracefully_kill_them_all" (master-start)...
|
closed
|
2019-06-14T14:21:10Z
|
2020-04-10T20:01:27Z
|
https://github.com/tiangolo/uwsgi-nginx-flask-docker/issues/140
|
[] |
sepharg
| 1
|
MODSetter/SurfSense
|
fastapi
| 11
|
Problem
|
uvicorn server:app --host 0.0.0.0 --port 8000
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\forte\AppData\Local\Programs\Python\Python311\Scripts\uvicorn.exe\__main__.py", line 7, in <module>
File "C:\Users\forte\AppData\Local\Programs\Python\Python311\Lib\site-packages\click\core.py", line 1157, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\forte\AppData\Local\Programs\Python\Python311\Lib\site-packages\click\core.py", line 1078, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "C:\Users\forte\AppData\Local\Programs\Python\Python311\Lib\site-packages\click\core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\forte\AppData\Local\Programs\Python\Python311\Lib\site-packages\click\core.py", line 783, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\forte\AppData\Local\Programs\Python\Python311\Lib\site-packages\uvicorn\main.py", line 410, in main
run(
File "C:\Users\forte\AppData\Local\Programs\Python\Python311\Lib\site-packages\uvicorn\main.py", line 577, in run
server.run()
File "C:\Users\forte\AppData\Local\Programs\Python\Python311\Lib\site-packages\uvicorn\server.py", line 65, in run
return asyncio.run(self.serve(sockets=sockets))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\forte\AppData\Local\Programs\Python\Python311\Lib\asyncio\runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "C:\Users\forte\AppData\Local\Programs\Python\Python311\Lib\asyncio\runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\forte\AppData\Local\Programs\Python\Python311\Lib\asyncio\base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "C:\Users\forte\AppData\Local\Programs\Python\Python311\Lib\site-packages\uvicorn\server.py", line 69, in serve
await self._serve(sockets)
File "C:\Users\forte\AppData\Local\Programs\Python\Python311\Lib\site-packages\uvicorn\server.py", line 76, in _serve
config.load()
File "C:\Users\forte\AppData\Local\Programs\Python\Python311\Lib\site-packages\uvicorn\config.py", line 434, in load
self.loaded_app = import_from_string(self.app)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\forte\AppData\Local\Programs\Python\Python311\Lib\site-packages\uvicorn\importer.py", line 22, in import_from_string
raise exc from None
File "C:\Users\forte\AppData\Local\Programs\Python\Python311\Lib\site-packages\uvicorn\importer.py", line 19, in import_from_string
module = importlib.import_module(module_str)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\forte\AppData\Local\Programs\Python\Python311\Lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\forte\Desktop\clone\SurfSense\backend\server.py", line 8, in <module>
from langchain_ollama import OllamaLLM
ModuleNotFoundError: No module named 'langchain_ollama'
(venv) PS C:\Users\forte\Desktop\clone\SurfSense\backend>
|
closed
|
2024-11-13T18:07:37Z
|
2024-11-16T19:08:30Z
|
https://github.com/MODSetter/SurfSense/issues/11
|
[] |
Claudioappassionato
| 1
|
chainer/chainer
|
numpy
| 7,726
|
`chainerx.flip` returns incorrect value for non-contiguous inputs
|
`chainerx.flip` (supported in #7065) sometimes returns incorrect value for non-contiguous inputs.
```py
>>> import chainerx
>>> chainerx.flip(chainerx.array([1, 2, 3, 4], dtype='int32')[::-1])
array([32534, 33, 0, 1], shape=(4,), dtype=int32, device='native:0')
```
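For reference, the expected semantics in plain Python: flipping an already-reversed (in chainerx, non-contiguous) sequence should return the original order, not garbage values.

```python
def flip(seq):
    # reference semantics of a 1-D flip
    return seq[::-1]
```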
|
closed
|
2019-07-09T04:11:49Z
|
2019-07-10T08:46:18Z
|
https://github.com/chainer/chainer/issues/7726
|
[
"cat:bug",
"pr-ongoing",
"ChainerX"
] |
asi1024
| 1
|
matplotlib/matplotlib
|
data-science
| 29,224
|
[Bug]: Matplotlib doesn't take savefig.pad_inches into account with plt.plot(..., transform=fig.dpi_scale_trans)
|
### Bug summary
Hello! When I draw a line from the bottom-left figure corner to the top-right figure corner, I see that there is some figure padding:
```python
fig = plt.figure(facecolor='#ccc')
ax = fig.gca()
ax.set_axis_off()
line_1, = plt.plot([0, 1], [0, 1], transform=fig.transFigure, clip_on=False, lw=2, c='blue')
plt.show()
```

`fig.get_size_inches()` gives me 6.4×4.8
When I use `plt.plot` with `transform=fig.dpi_scale_trans`, I see that the line starts from the bottom-left corner:
```python
fig = plt.figure(facecolor='#ccc')
ax = fig.gca()
ax.set_axis_off()
line_1, = plt.plot([0, 1], [0, 1], transform=fig.transFigure, clip_on=False, lw=2, c='blue')
line_2, = plt.plot([0,6.4], [0,4.8], transform=fig.dpi_scale_trans, clip_on=False, lw=2, c='black')
plt.show()
```

`plt.rcParams['savefig.pad_inches']` gives me 0.1. When I add 0.1 to line_2 coordinates, line_2 will be placed the same as line_1:
```python
fig = plt.figure(facecolor='#ccc')
ax = fig.gca()
ax.set_axis_off()
line_1, = plt.plot([0, 1], [0, 1], transform=fig.transFigure, clip_on=False, lw=2, c='blue')
ofs = plt.rcParams['savefig.pad_inches']
line_2, = plt.plot([0+ofs,6.4+ofs], [0+ofs,4.8+ofs], transform=fig.dpi_scale_trans, clip_on=False, lw=2, c='black')
plt.show()
```

It seems to me this is a bug.
### Code for reproduction
```Python
fig = plt.figure(facecolor='#ccc')
ax = fig.gca()
ax.set_axis_off()
line_1, = plt.plot([0, 1], [0, 1], transform=fig.transFigure, clip_on=False, lw=2, c='blue')
ofs = 0
# uncomment line below for workaround:
# ofs = plt.rcParams['savefig.pad_inches']
fig_w, fig_h = fig.get_size_inches()
line_2, = plt.plot([0+ofs,fig_w+ofs], [0+ofs,fig_h+ofs], transform=fig.dpi_scale_trans, clip_on=False, lw=2, c='black')
plt.show()
```
### Actual outcome

### Expected outcome

### Additional information
_No response_
### Operating system
Windows 10
### Matplotlib Version
3.9.3
### Matplotlib Backend
inline
### Python version
3.12.5
### Jupyter version
ms-toolsai.jupyter v2024.10.0
### Installation
pip
|
open
|
2024-12-03T18:42:04Z
|
2024-12-05T17:48:34Z
|
https://github.com/matplotlib/matplotlib/issues/29224
|
[] |
sindzicat
| 12
|
clovaai/donut
|
computer-vision
| 237
|
ValueError: `num_beams` is set to 1
|
Hi,
Thank you for your work.
I tried to use this demo on [CORD](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/Donut/CORD/Fine_tune_Donut_on_a_custom_dataset_(CORD)_with_PyTorch_Lightning.ipynb) from Niels Rogge, but during the training, it says that:
ValueError: `num_beams` is set to 1. However, early_stopping is set to True -- this flag is only used in beam-based generation modes. Set `num_beams>1` or unset early_stopping to continue.
When I changed `num_beams` from 1 to 2 at the point where I define the Lightning Module, it worked. It worked without this change before, but this time it started giving this error.
|
open
|
2023-08-09T12:03:20Z
|
2024-03-07T19:30:56Z
|
https://github.com/clovaai/donut/issues/237
|
[] |
yonlas
| 1
|
jina-ai/clip-as-service
|
pytorch
| 878
|
how to transform CLIP to TensorRT, ONNX, TorchScript?
|
Could you please share the code for converting the original CLIP model to TensorRT, ONNX, and TorchScript?
|
closed
|
2022-12-20T03:28:59Z
|
2023-03-02T08:22:12Z
|
https://github.com/jina-ai/clip-as-service/issues/878
|
[] |
FD-Liekkas
| 1
|
mage-ai/mage-ai
|
data-science
| 5,183
|
Allow for registration of custom pipeline notification listeners
|
**Is your feature request related to a problem? Please describe.**
Mage-AI allows for pipeline failure/success/etc. events on Slack, Discord, Teams, etc. Sometimes, however, we would like to be able to react in different ways to a pipeline event: perhaps dropping a message on a queue, talking to some other API, etc. Currently there is no extension point for this.
**Describe the solution you'd like**
I see that in source there is a class called `NotificationSender` ... might be nice to break this into separate subclasses, one each for the different supported channels... then, to allow user-supplied subclasses of the same to accomplish whatever the notification handler should. I'd recommend a rename to `NotificationHandler` also.
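The proposed extension point might look roughly like this (all names are illustrative, not Mage-AI's actual API): an abstract handler base class with built-in subclasses per channel, plus room for user-supplied subclasses, e.g. one that drops messages on a queue.

```python
from abc import ABC, abstractmethod

class NotificationHandler(ABC):
    """Abstract extension point; users register their own subclasses."""
    @abstractmethod
    def send(self, message: str) -> None: ...

class SlackHandler(NotificationHandler):
    def send(self, message: str) -> None:
        pass  # would post to a Slack webhook

class QueueHandler(NotificationHandler):
    """User-supplied handler that drops pipeline events on a queue."""
    def __init__(self):
        self.queue = []
    def send(self, message: str) -> None:
        self.queue.append(message)

def notify(handlers, message: str) -> None:
    for handler in handlers:
        handler.send(message)
```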
**Describe alternatives you've considered**
Thought about cracking open source code and changing what I want, but that introduces other problems.
**Additional context**
None.
|
open
|
2024-06-10T20:09:51Z
|
2024-06-10T20:09:51Z
|
https://github.com/mage-ai/mage-ai/issues/5183
|
[] |
pholser
| 0
|
wandb/wandb
|
data-science
| 9,022
|
[Bug]: Error 403 When Using Wandb in Accelerator
|
Apologies for the oversight; I’m a beginner. When I saw the example provided:
```python
accelerator = Accelerator(
    kwargs_handlers=[ddp_kwargs],
    deepspeed_plugin=deepspeed_plugin,
    log_with="wandb",
)
accelerator.init_trackers(
    "Accelerator",
    config=hps,
    init_kwargs={
        "wandb": {
            "notes": "testing accelerate pipeline",
            "tags": ["tag_a", "tag_b"],
            "entity": "gladiator",
        }
    },
)
```
I forgot to modify the entity field. I hope others can avoid making the same mistake.
|
closed
|
2024-12-05T01:12:49Z
|
2024-12-05T02:03:15Z
|
https://github.com/wandb/wandb/issues/9022
|
[
"ty:bug",
"a:sdk"
] |
MstarLioning
| 0
|