| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
flaskbb/flaskbb
|
flask
| 189
|
make: *** [install] Error 1
|
When I ran `make install`, it immediately showed the following:
InsecurePlatformWarning
> /home/xzp/.virtualenvs/flaskbb/local/lib/python2.7/site-packages/requests/packages/urllib3/util/ssl_.py:120: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
I tried to solve this with `pip install pyopenssl ndg-httpsclient pyasn1`, but the error message kept appearing, so I stopped it with `Ctrl+C`. When I ran `make install` again, it returned:
> pip install -r requirements.txt
> Requirement already satisfied (use --upgrade to upgrade): alembic==0.8.4 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 1))
> Requirement already satisfied (use --upgrade to upgrade): Babel==2.2.0 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 2))
> Requirement already satisfied (use --upgrade to upgrade): blinker==1.3 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 3))
> Requirement already satisfied (use --upgrade to upgrade): cov-core==1.15.0 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 4))
> Requirement already satisfied (use --upgrade to upgrade): coverage==4.0.3 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 5))
> Requirement already satisfied (use --upgrade to upgrade): Flask==0.10.1 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 6))
> Requirement already satisfied (use --upgrade to upgrade): flask-allows==0.1.0 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 7))
> Requirement already satisfied (use --upgrade to upgrade): Flask-BabelPlus==1.0.1 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 8))
> Requirement already satisfied (use --upgrade to upgrade): Flask-Cache==0.13.1 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 9))
> Requirement already satisfied (use --upgrade to upgrade): Flask-DebugToolbar==0.10.0 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 10))
> Requirement already satisfied (use --upgrade to upgrade): Flask-Login==0.3.2 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 11))
> Requirement already satisfied (use --upgrade to upgrade): Flask-Mail==0.9.1 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 12))
> Requirement already satisfied (use --upgrade to upgrade): Flask-Migrate==1.7.0 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 13))
> Requirement already satisfied (use --upgrade to upgrade): Flask-Plugins==1.6.1 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 14))
> Requirement already satisfied (use --upgrade to upgrade): Flask-Redis==0.1.0 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 15))
> Requirement already satisfied (use --upgrade to upgrade): Flask-Script==2.0.5 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 16))
> Requirement already satisfied (use --upgrade to upgrade): Flask-SQLAlchemy==2.1 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 17))
> Requirement already satisfied (use --upgrade to upgrade): Flask-Themes2==0.1.4 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 18))
> Requirement already satisfied (use --upgrade to upgrade): Flask-WTF==0.12 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 19))
> Requirement already satisfied (use --upgrade to upgrade): itsdangerous==0.24 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 20))
> Requirement already satisfied (use --upgrade to upgrade): Jinja2==2.8 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 21))
> Requirement already satisfied (use --upgrade to upgrade): Mako==1.0.3 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 22))
> Requirement already satisfied (use --upgrade to upgrade): MarkupSafe==0.23 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 23))
> Requirement already satisfied (use --upgrade to upgrade): mistune==0.7.1 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 24))
> Requirement already satisfied (use --upgrade to upgrade): Pygments==2.1 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 25))
> Requirement already satisfied (use --upgrade to upgrade): pytz==2015.7 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 26))
> Requirement already satisfied (use --upgrade to upgrade): redis==2.10.5 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 27))
> Requirement already satisfied (use --upgrade to upgrade): requests==2.9.1 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 28))
> Requirement already satisfied (use --upgrade to upgrade): simplejson==3.8.1 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 29))
> Requirement already satisfied (use --upgrade to upgrade): six==1.10.0 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 30))
> Requirement already satisfied (use --upgrade to upgrade): speaklater==1.3 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 31))
> Requirement already satisfied (use --upgrade to upgrade): SQLAlchemy==1.0.11 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 32))
> Requirement already satisfied (use --upgrade to upgrade): SQLAlchemy-Utils==0.31.6 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 33))
> Requirement already satisfied (use --upgrade to upgrade): Unidecode==0.04.19 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 34))
> Requirement already satisfied (use --upgrade to upgrade): Werkzeug==0.11.3 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 35))
> Requirement already satisfied (use --upgrade to upgrade): Whoosh==2.7.0 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 36))
> Requirement already satisfied (use --upgrade to upgrade): WTForms==2.1 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 37))
> Requirement already satisfied (use --upgrade to upgrade): Flask-Whooshalchemy from https://github.com/jshipley/Flask-WhooshAlchemy/archive/master.zip in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from -r requirements.txt (line 38))
> Requirement already satisfied (use --upgrade to upgrade): python-editor>=0.3 in /home/xzp/.virtualenvs/flaskbb/lib/python2.7/site-packages (from alembic==0.8.4->-r requirements.txt (line 1))
> Cleaning up...
> clear
>
> python manage.py install
> Creating default data...
> 2016-03-23 05:52:01,852 INFO sqlalchemy.engine.base.Engine SELECT CAST('test plain returns' AS VARCHAR(60)) AS anon_1
> 2016-03-23 05:52:01,852 INFO sqlalchemy.engine.base.Engine ()
> 2016-03-23 05:52:01,853 INFO sqlalchemy.engine.base.Engine SELECT CAST('test unicode returns' AS VARCHAR(60)) AS anon_1
> 2016-03-23 05:52:01,853 INFO sqlalchemy.engine.base.Engine ()
> 2016-03-23 05:52:01,853 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
> 2016-03-23 05:52:01,854 INFO sqlalchemy.engine.base.Engine INSERT INTO groups (name, description, admin, super_mod, mod, guest, banned, mod_edituser, mod_banuser, editpost, deletepost, deletetopic, posttopic, postreply) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
> 2016-03-23 05:52:01,854 INFO sqlalchemy.engine.base.Engine ('Administrator', 'The Administrator Group', 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1)
> 2016-03-23 05:52:01,854 INFO sqlalchemy.engine.base.Engine ROLLBACK
> No database found.
> Do you want to create the database now? (y/n) [n]: y
> INFO [alembic.runtime.migration] Context impl SQLiteImpl.
> INFO [alembic.runtime.migration] Will assume non-transactional DDL.
> INFO [sqlalchemy.engine.base.Engine] BEGIN (implicit)
> INFO [sqlalchemy.engine.base.Engine] INSERT INTO groups (name, description, admin, super_mod, mod, guest, banned, mod_edituser, mod_banuser, editpost, deletepost, deletetopic, posttopic, postreply) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
> INFO [sqlalchemy.engine.base.Engine]('Administrator', 'The Administrator Group', 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1)
> INFO [sqlalchemy.engine.base.Engine] ROLLBACK
> Traceback (most recent call last):
> File "manage.py", line 317, in <module>
> manager.run()
> File "/home/xzp/.virtualenvs/flaskbb/local/lib/python2.7/site-packages/flask_script/**init**.py", line 412, in run
> result = self.handle(sys.argv[0], sys.argv[1:])
> File "/home/xzp/.virtualenvs/flaskbb/local/lib/python2.7/site-packages/flask_script/**init**.py", line 383, in handle
> res = handle(_args, *_config)
> File "/home/xzp/.virtualenvs/flaskbb/local/lib/python2.7/site-packages/flask_script/commands.py", line 216, in **call**
> return self.run(_args, *_kwargs)
> File "manage.py", line 137, in install
> create_default_groups()
> File "/home/xzp/PycharmProjects/flaskbb/flaskbb/utils/populate.py", line 155, in create_default_groups
> group.save()
> File "/home/xzp/PycharmProjects/flaskbb/flaskbb/utils/database.py", line 21, in save
> db.session.commit()
> File "/home/xzp/.virtualenvs/flaskbb/local/lib/python2.7/site-packages/sqlalchemy/orm/scoping.py", line 150, in do
> return getattr(self.registry(), name)(_args, *_kwargs)
> File "/home/xzp/.virtualenvs/flaskbb/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 813, in commit
> self.transaction.commit()
> File "/home/xzp/.virtualenvs/flaskbb/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 392, in commit
> self._prepare_impl()
> File "/home/xzp/.virtualenvs/flaskbb/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 372, in _prepare_impl
> self.session.flush()
> File "/home/xzp/.virtualenvs/flaskbb/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2027, in flush
> self._flush(objects)
> File "/home/xzp/.virtualenvs/flaskbb/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2145, in _flush
> transaction.rollback(_capture_exception=True)
> File "/home/xzp/.virtualenvs/flaskbb/local/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__
> compat.reraise(exc_type, exc_value, exc_tb)
> File "/home/xzp/.virtualenvs/flaskbb/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2109, in _flush
> flush_context.execute()
> File "/home/xzp/.virtualenvs/flaskbb/local/lib/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 373, in execute
> rec.execute(self)
> File "/home/xzp/.virtualenvs/flaskbb/local/lib/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 532, in execute
> uow
> File "/home/xzp/.virtualenvs/flaskbb/local/lib/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 174, in save_obj
> mapper, table, insert)
> File "/home/xzp/.virtualenvs/flaskbb/local/lib/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 800, in _emit_insert_statements
> execute(statement, params)
> File "/home/xzp/.virtualenvs/flaskbb/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 914, in execute
> return meth(self, multiparams, params)
> File "/home/xzp/.virtualenvs/flaskbb/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py", line 323, in _execute_on_connection
> return connection._execute_clauseelement(self, multiparams, params)
> File "/home/xzp/.virtualenvs/flaskbb/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1010, in _execute_clauseelement
> compiled_sql, distilled_params
> File "/home/xzp/.virtualenvs/flaskbb/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1146, in _execute_context
> context)
> File "/home/xzp/.virtualenvs/flaskbb/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1341, in _handle_dbapi_exception
> exc_info
> File "/home/xzp/.virtualenvs/flaskbb/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 200, in raise_from_cause
> reraise(type(exception), exception, tb=exc_tb)
> File "/home/xzp/.virtualenvs/flaskbb/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
> context)
> File "/home/xzp/.virtualenvs/flaskbb/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 450, in do_execute
> cursor.execute(statement, parameters)
> sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such table: groups [SQL: u'INSERT INTO groups (name, description, admin, super_mod, mod, guest, banned, mod_edituser, mod_banuser, editpost, deletepost, deletetopic, posttopic, postreply) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)'] [parameters: ('Administrator', 'The Administrator Group', 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1)]
> make: *** [install] Error 1
How can I solve this?
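Editor's note, hedged: the traceback shows `create_default_groups()` inserting into `groups` before the SQLite schema exists. A minimal recovery sketch, assuming the database is the default SQLite file (the `flaskbb.sqlite` name below is an assumption; check `SQLALCHEMY_DATABASE_URI` in your config for the actual path):
```
# Remove the half-initialized database so the installer starts clean
# (the file name is an assumption, see note above).
rm -f flaskbb.sqlite
# Re-run the installer and answer "y" when asked to create the database,
# so the tables exist before the default groups are inserted.
make install
```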
|
closed
|
2016-03-23T06:01:13Z
|
2018-04-15T07:47:38Z
|
https://github.com/flaskbb/flaskbb/issues/189
|
[] |
XzAmrzs
| 3
|
deepspeedai/DeepSpeed
|
pytorch
| 6,522
|
[BUG] error: past_key, past_value = layer_past, how to solve this?
|
**Describe the bug**
When I run training for RLHF step 3:
```
Actor_Lr=9.65e-6
Critic_Lr=5e-6
#--data_path Dahoas/rm-static \
#--offload_reference_model \
deepspeed --master_port 12346 main_step3.py \
--data_path ${data_path}/beyond/rlhf-reward-single-round-trans_chinese_step3 \
--data_split 2,4,4 \
--actor_model_name_or_path $ACTOR_MODEL_PATH \
--critic_model_name_or_path $CRITIC_MODEL_PATH \
--data_output_path ${data_path}/train_data_file_step3 \
--num_padding_at_beginning 1 \
--per_device_generation_batch_size 1 \
--per_device_training_batch_size 1 \
--generation_batches 1 \
--ppo_epochs 1 \
--max_answer_seq_len 256 \
--max_prompt_seq_len 256 \
--actor_learning_rate ${Actor_Lr} \
--critic_learning_rate ${Critic_Lr} \
--actor_weight_decay 0.1 \
--critic_weight_decay 0.1 \
--num_train_epochs 1 \
--lr_scheduler_type cosine \
--gradient_accumulation_steps 1 \
--actor_gradient_checkpointing \
--critic_gradient_checkpointing \
--actor_dropout 0.0 \
--num_warmup_steps 100 \
--deepspeed --seed 1234 \
--enable_hybrid_engine \
--actor_zero_stage $ACTOR_ZERO_STAGE \
--critic_zero_stage $CRITIC_ZERO_STAGE \
--enable_ema \
--output_dir $output_path \
```
**Log output**
I got this error:
```
[rank3]: ValueError: not enough values to unpack (expected 2, got 0)
[rank1]: Traceback (most recent call last):
[rank1]: File "/home/deepspeed/DeepSpeedExamples/applications/DeepSpeed-Chat/main_step3.py", line 673, in <module>
[rank1]: main()
[rank1]: File "/home/deepspeed/DeepSpeedExamples/applications/DeepSpeed-Chat/main_step3.py", line 527, in main
[rank1]: out = trainer.generate_experience(batch_prompt['prompt'],
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/deepspeed/DeepSpeedExamples/applications/DeepSpeed-Chat/dschat/rlhf/ppo_trainer.py", line 140, in generate_experience
[rank1]: seq = self._generate_sequence(prompts, mask, step)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/deepspeed/DeepSpeedExamples/applications/DeepSpeed-Chat/dschat/rlhf/ppo_trainer.py", line 87, in _generate_sequence
[rank1]: seq = self.actor_model.module.generate(
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/tools/anaconda3/envs/deepspeed/lib/python3.12/site-packages/deepspeed/runtime/hybrid_engine.py", line 253, in generate
[rank1]: generate_ret_vals = self._generate(*inputs, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/tools/anaconda3/envs/deepspeed/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
[rank1]: return func(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/tools/anaconda3/envs/deepspeed/lib/python3.12/site-packages/transformers/generation/utils.py", line 2024, in generate
[rank1]: result = self._sample(
[rank1]: ^^^^^^^^^^^^^
[rank1]: File "/home/tools/anaconda3/envs/deepspeed/lib/python3.12/site-packages/transformers/generation/utils.py", line 2982, in _sample
[rank1]: outputs = self(**model_inputs, return_dict=True)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/tools/anaconda3/envs/deepspeed/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
[rank1]: return self._call_impl(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/tools/anaconda3/envs/deepspeed/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1609, in _call_impl
[rank1]: result = forward_call(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/tools/anaconda3/envs/deepspeed/lib/python3.12/site-packages/transformers/models/bloom/modeling_bloom.py", line 955, in forward
[rank1]: transformer_outputs = self.transformer(
[rank1]: ^^^^^^^^^^^^^^^^^
[rank1]: File "/home/tools/anaconda3/envs/deepspeed/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
[rank1]: return self._call_impl(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/tools/anaconda3/envs/deepspeed/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1609, in _call_impl
[rank1]: result = forward_call(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/tools/anaconda3/envs/deepspeed/lib/python3.12/site-packages/transformers/models/bloom/modeling_bloom.py", line 744, in forward
[rank1]: outputs = block(
[rank1]: ^^^^^^
[rank1]: File "/home/tools/anaconda3/envs/deepspeed/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
[rank1]: return self._call_impl(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/tools/anaconda3/envs/deepspeed/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1609, in _call_impl
[rank1]: result = forward_call(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/tools/anaconda3/envs/deepspeed/lib/python3.12/site-packages/deepspeed/model_implementations/transformers/ds_transformer.py", line 171, in forward
[rank1]: self.attention(input,
[rank1]: File "/home/tools/anaconda3/envs/deepspeed/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
[rank1]: return self._call_impl(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/tools/anaconda3/envs/deepspeed/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1568, in _call_impl
[rank1]: return forward_call(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/tools/anaconda3/envs/deepspeed/lib/python3.12/site-packages/deepspeed/ops/transformer/inference/ds_attention.py", line 160, in forward
[rank1]: context_layer, key_layer, value_layer = self.compute_attention(qkv_out=qkv_out,
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/tools/anaconda3/envs/deepspeed/lib/python3.12/site-packages/deepspeed/ops/transformer/inference/ds_attention.py", line 239, in compute_attention
[rank1]: past_key, past_value = layer_past
[rank1]: ^^^^^^^^^^^^^^^^^^^^
[rank1]: ValueError: not enough values to unpack (expected 2, got 0)
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
```
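An editing sketch, not a confirmed fix: the traceback ends at `compute_attention` unpacking `layer_past` into two values, while the error shows it arriving empty. A defensive variant of the failing line at ds_attention.py:239 (names taken from the traceback above) would treat an empty past as "no KV cache":
```
# Hypothetical guard: avoids "not enough values to unpack (expected 2,
# got 0)" when newer transformers releases pass an empty layer_past.
if layer_past is not None and len(layer_past) == 2:
    past_key, past_value = layer_past
else:
    past_key, past_value = None, None
```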
**ds_report output**
# ds_report
```
[2024-09-11 19:27:52,618] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
async_io ............... [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
fp_quantizer ........... [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
gds .................... [NO] ....... [OKAY]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.4
[WARNING] using untested triton version (3.0.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/home/tools/anaconda3/envs/deepspeed/lib/python3.12/site-packages/torch']
torch version .................... 2.4.0+cu121
deepspeed install path ........... ['/home/tools/anaconda3/envs/deepspeed/lib/python3.12/site-packages/deepspeed']
deepspeed info ................... 0.15.1, unknown, unknown
torch cuda version ............... 12.1
torch hip version ................ None
nvcc version ..................... 12.1
deepspeed wheel compiled w. ...... torch 2.4, cuda 12.1
shared memory (/dev/shm) size .... 503.77 GB
```
**System info (please complete the following information):**
```
- OS: Ubuntu 20.04.6 LTS
- GPU: NVIDIA L20 x4 (46 GB)
- DeepSpeed-MII (https://github.com/microsoft/deepspeed-mii): 0.15.1
- Python: 3.12.0
- transformers: 4.44.2
- CUDA: 12.1
- torch: 2.4.0
- deepspeed: 0.15.1
- accelerate: 0.33.0
```
**Additional context**
```
home/deepspeed/DeepSpeedExamples/applications/DeepSpeed-Chat/dschat/utils/model/model_utils.py:155: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
model_ckpt_state_dict = torch.load(model_ckpt_path, map_location='cpu')
```
|
open
|
2024-09-11T11:25:48Z
|
2024-10-08T19:47:54Z
|
https://github.com/deepspeedai/DeepSpeed/issues/6522
|
[
"bug",
"deepspeed-chat"
] |
lovychen
| 2
|
huggingface/text-generation-inference
|
nlp
| 2,819
|
Failure when starting the model using TGI 3
|
### System Info
I tried to serve Llama 3.1 8B using TGI on an A10 (24 GB) with a 4k context length.
Command:
```
docker run --gpus all -it --rm -p 8000:80 ghcr.io/huggingface/text-generation-inference:3.0.0 --model-id NousResearch/Meta-Llama-3.1-8B-Instruct --max-total-tokens 4096 --dtype bfloat16
```
However, the same command works with the image `ghcr.io/huggingface/text-generation-inference:2.2.0`.
With 3.0.0 I got the following error:
```
2024-12-10T21:24:12.674619Z INFO text_generation_launcher: Starting Webserver
2024-12-10T21:24:12.849356Z INFO text_generation_router_v3: backends/v3/src/lib.rs:125: Warming up model
2024-12-10T21:25:42.531534Z ERROR warmup{max_input_length=None max_prefill_tokens=8192 max_total_tokens=Some(4096) max_batch_size=None}:warmup: text_generation_router_v3::client: backends/v3/src/client/mod.rs:45: Server error: transport error
Error: Backend(Warmup(Generation("transport error")))
2024-12-10T21:25:42.679824Z ERROR text_generation_launcher: Webserver Crashed
2024-12-10T21:25:42.684321Z INFO text_generation_launcher: Shutting down shards
2024-12-10T21:25:42.698301Z ERROR shard-manager: text_generation_launcher: Shard complete standard error output:
2024-12-10 21:23:52.620 | INFO | text_generation_server.utils.import_utils:<module>:80 - Detected system cuda
/opt/conda/lib/python3.11/site-packages/text_generation_server/layers/gptq/triton.py:242: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
@custom_fwd(cast_inputs=torch.float16)
/opt/conda/lib/python3.11/site-packages/mamba_ssm/ops/selective_scan_interface.py:158: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
@custom_fwd
/opt/conda/lib/python3.11/site-packages/mamba_ssm/ops/selective_scan_interface.py:231: FutureWarning: `torch.cuda.amp.custom_bwd(args...)` is deprecated. Please use `torch.amp.custom_bwd(args..., device_type='cuda')` instead.
@custom_bwd
/opt/conda/lib/python3.11/site-packages/mamba_ssm/ops/triton/layernorm.py:507: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
@custom_fwd
/opt/conda/lib/python3.11/site-packages/mamba_ssm/ops/triton/layernorm.py:566: FutureWarning: `torch.cuda.amp.custom_bwd(args...)` is deprecated. Please use `torch.amp.custom_bwd(args..., device_type='cuda')` instead.
@custom_bwd
/opt/conda/lib/python3.11/site-packages/torch/distributed/c10d_logger.py:79: FutureWarning: You are using a Backend <class 'text_generation_server.utils.dist.FakeGroup'> as a ProcessGroup. This usage is deprecated since PyTorch 2.0. Please use a public API of PyTorch Distributed instead.
return func(*args, **kwargs) rank=0
2024-12-10T21:25:42.700830Z ERROR shard-manager: text_generation_launcher: Shard process was signaled to shutdown with signal 9 rank=0
```
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
```
docker run --gpus all -it --rm -p 8000:80 ghcr.io/huggingface/text-generation-inference:3.0.0 --model-id NousResearch/Meta-Llama-3.1-8B-Instruct --max-total-tokens 4096 --dtype bfloat16
```
### Expected behavior
The model should be served successfully.
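Editor's note, offered as a hedged guess rather than a confirmed fix: the warmup log shows `max_prefill_tokens=8192` against `max_total_tokens=4096`, and the shard dies with signal 9 (typically an out-of-memory kill). Capping the warmup prefill budget with the launcher's `--max-batch-prefill-tokens` flag may be worth trying:
```
docker run --gpus all -it --rm -p 8000:80 \
  ghcr.io/huggingface/text-generation-inference:3.0.0 \
  --model-id NousResearch/Meta-Llama-3.1-8B-Instruct \
  --max-total-tokens 4096 \
  --max-batch-prefill-tokens 4096 \
  --dtype bfloat16
```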
|
open
|
2024-12-10T21:36:23Z
|
2024-12-11T09:05:01Z
|
https://github.com/huggingface/text-generation-inference/issues/2819
|
[] |
hahmad2008
| 0
|
PablocFonseca/streamlit-aggrid
|
streamlit
| 108
|
Customize headers and hover behavior
|
Hey @PablocFonseca, thanks for this amazing Streamlit component. Is there a way to customize the following items?
1. Header rows: background color, font color, font size, etc. I tried custom CSS, but it doesn't seem to be working (see the hedged sketch after this list):
```
AgGrid(
final_df,
fit_columns_on_grid_load=True,
custom_css={
"header-background-color": "#7FB56C",
"background-color": "#3B506C",
},
)
```
2. Change the hover and selection behavior for rows and/or columns?
3. Change the default size of the AgGrid table?
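Editor's sketch for item 1 (an assumption about intent, not an official answer): `custom_css` in streamlit-aggrid is keyed by CSS selector, with a dict of CSS properties as each value, so header styling is usually targeted through AG Grid's own class names:
```
from st_aggrid import AgGrid

# Hypothetical reworking of the snippet above: keys are CSS selectors
# (AG Grid class names), values are dicts of CSS properties.
AgGrid(
    final_df,  # the DataFrame from the original snippet
    fit_columns_on_grid_load=True,
    custom_css={
        ".ag-header": {"background-color": "#7FB56C"},
        ".ag-header-cell-label": {"color": "#FFFFFF", "font-size": "14px"},
        ".ag-row-hover": {"background-color": "#3B506C"},
    },
)
```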
|
closed
|
2022-06-23T22:04:06Z
|
2024-04-04T17:53:58Z
|
https://github.com/PablocFonseca/streamlit-aggrid/issues/108
|
[] |
hummingbird1989
| 1
|
iperov/DeepFaceLab
|
machine-learning
| 5,468
|
Can't access earlier backups
|
## Expected behavior
"start over" from an earlier backup of the model
## Actual behavior
When I delete the backup folders and start to train, it just keeps on training as if I hadn't deleted anything.
So how am I supposed to access an earlier backup when, for example, I'm not happy with the result?
## Steps to reproduce
Open the model folder, go to "new_SAEHD_autobackups"
|
closed
|
2022-02-01T13:53:19Z
|
2022-03-19T07:15:56Z
|
https://github.com/iperov/DeepFaceLab/issues/5468
|
[] |
bioheater
| 0
|
ansible/ansible
|
python
| 84,636
|
Data Tagging PR Merge Blocking Tracker
|
This is an omnibus issue to track items blocking merge of #84621.
@ansibot bot_skip
|
open
|
2025-01-30T00:49:17Z
|
2025-01-30T01:04:01Z
|
https://github.com/ansible/ansible/issues/84636
|
[] |
nitzmahone
| 0
|
wagtail/wagtail
|
django
| 12,937
|
CSP style-src refactorings to avoid unsafe-inline
|
### Issue Summary
Part of [CSP compatibility issues #1288](https://github.com/wagtail/wagtail/issues/1288). There are a few places in Wagtail where styling can be refactored to avoid inline styles.
- Half seem like refactorings that can be done with HTML-only changes: either removing the inline styles altogether, replacing them with Tailwind, or refactoring to use an existing CSS class / component.
- The other half are similar but more likely to also require JS changes.
#### HTML & CSS refactorings
- [ ] [wagtailadmin/pages/add_subpage.html#L32](https://github.com/wagtail/wagtail/blob/main/wagtail/admin/templates/wagtailadmin/pages/add_subpage.html#L32)
- [ ] [wagtailadmin/pages/confirm_delete.html#L95](https://github.com/wagtail/wagtail/blob/main/wagtail/admin/templates/wagtailadmin/pages/confirm_delete.html#L95)
- [ ] [wagtailadmin/pages/edit_alias.html#L5](https://github.com/wagtail/wagtail/blob/main/wagtail/admin/templates/wagtailadmin/pages/edit_alias.html#L5)
- [ ] [wagtailstyleguide/base.html#L115](https://github.com/wagtail/wagtail/blob/main/wagtail/contrib/styleguide/templates/wagtailstyleguide/base.html#L115)
- [ ] [wagtailstyleguide/base.html#L124](https://github.com/wagtail/wagtail/blob/main/wagtail/contrib/styleguide/templates/wagtailstyleguide/base.html#L124)
- [ ] [wagtailstyleguide/base.html#L133](https://github.com/wagtail/wagtail/blob/main/wagtail/contrib/styleguide/templates/wagtailstyleguide/base.html#L133)
- [ ] [wagtailstyleguide/base.html#L470](https://github.com/wagtail/wagtail/blob/main/wagtail/contrib/styleguide/templates/wagtailstyleguide/base.html#L470)
- [ ] [wagtailstyleguide/base.html#L496](https://github.com/wagtail/wagtail/blob/main/wagtail/contrib/styleguide/templates/wagtailstyleguide/base.html#L496)
- [ ] [wagtaildocs/multiple/add.html#L52](https://github.com/wagtail/wagtail/blob/main/wagtail/documents/templates/wagtaildocs/multiple/add.html#L52)
- [ ] [wagtaildocs/multiple/add.html#L62](https://github.com/wagtail/wagtail/blob/main/wagtail/documents/templates/wagtaildocs/multiple/add.html#L62)
- [ ] [wagtailimages/images/url_generator.html#L5](https://github.com/wagtail/wagtail/blob/main/wagtail/images/templates/wagtailimages/images/url_generator.html#L5)
### JS changes possibly required
For those, there’s more of a need to confirm what the correct change is and to integrate with existing JS code. The `display: none` ones might be refactorable to the `hidden` attribute or a "hidden" / `hidden!` utility class, checking specificity.
- [ ] [wagtailadmin/shared/icons.html#L2](https://github.com/wagtail/wagtail/blob/main/wagtail/admin/templates/wagtailadmin/shared/icons.html#L2)
- [ ] [wagtailsearchpromotions/includes/searchpromotion_form.html#L5](https://github.com/wagtail/wagtail/blob/main/wagtail/contrib/search_promotions/templates/wagtailsearchpromotions/includes/searchpromotion_form.html#L5)
- [ ] [wagtailstyleguide/base.html#L423](https://github.com/wagtail/wagtail/blob/main/wagtail/contrib/styleguide/templates/wagtailstyleguide/base.html#L423)
- [ ] [wagtailimages/images/edit.html#L32](https://github.com/wagtail/wagtail/blob/main/wagtail/images/templates/wagtailimages/images/edit.html#L32)
### Steps to Reproduce
Search for `style=` in the Wagtail code (ignoring email templates, tests, docs, and developer tools) or run a [CSP scanner](https://github.com/thibaudcolas/wagtail-tooling/tree/main/csp)
### Working on this
See [CSP compatibility issues #1288](https://github.com/wagtail/wagtail/issues/1288). View our [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html), and add a comment to the issue once you’re ready to start. Consider picking only some of the items on this list.
|
open
|
2025-03-04T12:23:00Z
|
2025-03-04T14:12:56Z
|
https://github.com/wagtail/wagtail/issues/12937
|
[
"type:Cleanup/Optimisation",
"component:Security"
] |
thibaudcolas
| 1
|
brightmart/text_classification
|
tensorflow
| 112
|
suggest upgrade to support python3
|
open
|
2019-03-13T12:34:27Z
|
2023-11-13T10:05:56Z
|
https://github.com/brightmart/text_classification/issues/112
|
[] |
kevinew
| 1
|
|
amidaware/tacticalrmm
|
django
| 2,095
|
[Feature Request] Advanced Detailed Logging and Features to accommodate
|
**Feature Request: Advanced Detailed Logging**

**Description:**
Requesting an option for Advanced Detailed Logging to enhance the system's logging capabilities for device connectivity and remote session events. This feature would provide deeper insights and traceability, particularly for actions that are currently logged only when specific alerts are enabled.

**Key Features:**
- Device connectivity logging:
  - Log disconnection and re-connection events for all devices, regardless of whether alerts for these events are enabled.
  - Include timestamps and device identifiers to accurately track offline and online durations.
- Remote session logging:
  - Disconnection events: log when a remote session ends, capturing details about the session duration.
  - Session type: log the type of remote session (Take Control, Remote Background).
- Idle timeout auto-disconnect:
  - Introduce an optional feature to automatically terminate remote sessions after a specified period of inactivity (e.g., 15, 30, or 60 minutes).
- Enhanced details:
  - Provide a clear distinction between user-initiated disconnections and system-triggered (e.g., auto-disconnect) disconnections.
  - Include IP addresses or user identifiers (if available) associated with the session for audit purposes.
- Optional settings for advanced logging:
  - Allow administrators to toggle Advanced Detailed Logging on or off per site, client, or globally.
  - Include filters to specify which event types are logged (e.g., device disconnections, session auto-disconnects, or specific session types).

**Benefits:**
- Enhanced auditability: provides comprehensive logs for compliance and troubleshooting; tracks exact periods of device downtime and remote session usage.
- Improved security: automatically terminates idle sessions to reduce unauthorized access risks; maintains detailed session activity logs for forensic purposes.
- Operational efficiency: enables better monitoring of device and session uptime/downtime; offers clear insights into inactive session trends to optimize resource usage.

This feature would significantly enhance the system's logging capabilities, making it a more powerful tool for administrators and auditors. Please consider this addition to improve traceability and operational oversight.
|
open
|
2024-12-06T00:23:42Z
|
2025-02-18T16:09:57Z
|
https://github.com/amidaware/tacticalrmm/issues/2095
|
[] |
NavCC
| 6
|
miguelgrinberg/python-socketio
|
asyncio
| 614
|
Code which was working on socketio version 4.6.0 is not working now in version 5.0.4
|
`python-socketio version 4.6.0 & engineio version 3.13.1`
Connects and works perfectly without any problem.
Note: `Namespace / is connected` and no rejection from the server side.
Version information of socketio & engineio
```
>pip show python-socketio
Name: python-socketio
Version: 4.6.0
Summary: Socket.IO server
Home-page: http://github.com/miguelgrinberg/python-socketio/
Author: Miguel Grinberg
Author-email: miguelgrinberg50@gmail.com
License: MIT
Location: c:\python38\lib\site-packages
Requires: six, python-engineio
Required-by:
>pip show python-engineio
Name: python-engineio
Version: 3.13.1
Summary: Engine.IO server
Home-page: http://github.com/miguelgrinberg/python-engineio/
Author: Miguel Grinberg
Author-email: miguelgrinberg50@gmail.com
License: MIT
Location: c:\python38\lib\site-packages
Requires: six
Required-by: python-socketio
```
Logging enabled in both libraries:
```
>python main.py
23:37:40.891.263, client, INFO, Attempting WebSocket connection to wss://ws.upstox.com/socket.io/? information removed &transport=websocket&EIO=3
23:37:41.931.590, client, INFO, WebSocket connection accepted with {'sid': 'AKfyGHnlpuv_91ZTAJFS', 'upgrades': [], 'pingInterval': 2000, 'pingTimeout': 60000}
23:37:41.931.590, client, INFO, Engine.IO connection established
23:37:41.932.587, client, INFO, Sending packet PING data None
23:37:41.942.563, client, INFO, Received packet MESSAGE data 0
23:37:41.943.559, client, INFO, Namespace / is connected
23:37:42.160.441, client, INFO, Received packet PONG data None
23:37:43.934.515, client, INFO, Sending packet PING data None
23:37:44.163.661, client, INFO, Received packet PONG data None
23:37:45.935.990, client, INFO, Sending packet PING data None
23:37:46.164.545, client, INFO, Received packet PONG data None
23:37:47.936.101, client, INFO, Sending packet PING data None
23:37:48.168.060, client, INFO, Received packet PONG data None
23:37:49.936.557, client, INFO, Sending packet PING data None
23:37:50.165.909, client, INFO, Received packet PONG data None
23:37:51.936.624, client, INFO, Sending packet PING data None
23:37:51.944.657, broker, INFO, Logging in to upstox server
23:37:51.944.657, client, INFO, Emitting event "message" [/]
23:37:51.944.657, client, INFO, Sending packet MESSAGE data 2["message",{"method":"client_login","type":"interactive","data":{"client_id":"","password":""}}]
23:37:52.170.459, client, INFO, Received packet PONG data None
23:37:52.222.219, client, INFO, Received packet MESSAGE data 2["message",{"timestamp":1610734070921,"response_type":"client_login","guid":null,"data":{"success":true,"statusCode":1,"connected_server":"ip-172-31-21-200"}}]
23:37:52.223.191, client, INFO, Received event "message" [/]
23:37:52.223.191, client, INFO, Emitting event "message" [/]
23:37:56.175.874, client, INFO, Received packet PONG data None
23:37:57.939.303, client, INFO, Sending packet PING data None
23:37:58.180.645, client, INFO, Received packet PONG data None
23:37:59.939.735, client, INFO, Sending packet PING data None
23:38:00.166.902, client, INFO, Received packet PONG data None
23:38:01.940.612, client, INFO, Sending packet PING data None
23:38:02.169.766, client, INFO, Received packet PONG data None
23:38:03.940.732, client, INFO, Sending packet PING data None
23:38:04.180.604, client, INFO, Received packet PONG data None
23:38:05.941.043, client, INFO, Sending packet PING data None
23:38:06.169.315, client, INFO, Received packet PONG data None
23:38:07.941.815, client, INFO, Sending packet PING data None
23:38:08.170.819, client, INFO, Received packet PONG data None
23:38:09.942.222, client, INFO, Sending packet PING data None
23:38:10.171.609, client, INFO, Received packet PONG data None
23:38:11.942.829, client, INFO, Sending packet PING data None
23:38:12.172.043, client, INFO, Received packet PONG data None
23:38:13.943.691, client, INFO, Sending packet PING data None
```
I uninstalled both and installed the latest version of each: `python-socketio = 5.0.4`, `python-engineio = 4.0.0`.
```
>pip uninstall python-engineio
Found existing installation: python-engineio 3.13.1
Uninstalling python-engineio-3.13.1:
Would remove:
c:\python38\lib\site-packages\engineio\*
c:\python38\lib\site-packages\python_engineio-3.13.1.dist-info\*
Proceed (y/n)? y
Successfully uninstalled python-engineio-3.13.1
>pip uninstall python-socketio
Found existing installation: python-socketio 4.6.0
Uninstalling python-socketio-4.6.0:
Would remove:
c:\python38\lib\site-packages\python_socketio-4.6.0.dist-info\*
c:\python38\lib\site-packages\socketio\*
Proceed (y/n)? y
Successfully uninstalled python-socketio-4.6.0
>pip install python-socketio
Collecting python-socketio
Using cached python_socketio-5.0.4-py2.py3-none-any.whl (52 kB)
Requirement already satisfied: bidict>=0.21.0 in c:\python38\lib\site-packages (from python-socketio) (0.21.2)
Collecting python-engineio>=4
Using cached python_engineio-4.0.0-py2.py3-none-any.whl (50 kB)
Installing collected packages: python-engineio, python-socketio
Successfully installed python-engineio-4.0.0 python-socketio-5.0.4
```
The latest libraries connect, but the server sends a `namespace / was rejected` message, eventually leading to a `BadNamespaceError`:
`python-socketio = 5.0.4` `python-engineio = 4.0.0`
```
>python main.py
23:42:35.111.527, client, INFO, Attempting WebSocket connection to wss://ws.upstox.com/socket.io/? Information removed &transport=websocket&EIO=4
23:42:36.158.799, client, INFO, WebSocket connection accepted with {'sid': 'xzIaRZIKYz1BwKKkAJEw', 'upgrades': [], 'pingInterval': 2000, 'pingTimeout': 60000}
23:42:36.158.799, client, INFO, Engine.IO connection established
23:42:36.158.799, client, INFO, Sending packet MESSAGE data 0
23:42:36.184.729, client, INFO, Received packet MESSAGE data 0
23:42:36.185.752, client, INFO, Namespace / is connected
23:42:36.390.209, client, INFO, Received packet MESSAGE data 4"{\"timestamp\":1610734355081,\"response_type\":\"server_error\",\"error\":{\"message\":\"You have not been authorized for this action\",\"statusCode\":500}}"
23:42:36.395.195, client, INFO, Connection to namespace / was rejected
Exception in thread Thread-3:
Traceback (most recent call last):
File "C:\Python38\lib\threading.py", line 932, in _bootstrap_inner
self.run()
File "C:\Python38\lib\threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "C:\Python38\lib\site-packages\socketio\client.py", line 611, in _handle_eio_message
self._handle_connect(pkt.namespace, pkt.data)
File "C:\Python38\lib\site-packages\socketio\client.py", line 485, in _handle_connect
self._trigger_event('connect', namespace=namespace)
File "C:\Python38\lib\site-packages\socketio\client.py", line 547, in _trigger_event
return self.handlers[namespace][event](*args)
File "D:\projects\PTrade\broker_upstox_hacked\broker.py", line 205, in __login_to_upstox_server
self.__sio.send(loginJSON)
File "C:\Python38\lib\site-packages\socketio\client.py", line 364, in send
self.emit('message', data=data, namespace=namespace,
File "C:\Python38\lib\site-packages\socketio\client.py", line 328, in emit
raise exceptions.BadNamespaceError(
socketio.exceptions.BadNamespaceError: / is not a connected namespace.
23:43:36.157.881, client, WARNING, WebSocket connection was closed, aborting
23:43:36.158.879, client, INFO, Waiting for write loop task to end
23:43:36.158.879, client, INFO, Exiting write loop task
23:43:36.160.875, client, INFO, Engine.IO connection dropped
23:43:36.161.870, client, INFO, Exiting read loop task
```
In both cases the client code is the same and the server is the same; the only difference is the socketio and engineio library versions.
Please let me know what I'm doing wrong or how to tackle this.
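Editor's note, hedged: the two logs differ in the `EIO=` query parameter (3 vs. 4), i.e. the Engine.IO protocol revision; python-socketio 5.x speaks the newer Socket.IO protocol, which servers built for the old protocol reject. If the server cannot be upgraded, pinning the client back to the versions that worked is one workaround:
```
pip install "python-socketio==4.6.0" "python-engineio==3.13.1"
```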
|
closed
|
2021-01-15T21:24:19Z
|
2021-06-27T19:44:59Z
|
https://github.com/miguelgrinberg/python-socketio/issues/614
|
[
"question"
] |
krishnavelu
| 10
|
katanaml/sparrow
|
computer-vision
| 52
|
When running Unstructured: ModuleNotFoundError: No module named 'backoff._typing'
|
(.env_unstructured) root@testvm:/home/testvmadmin/main/sparrow/sparrow-ml/llm# pip install backoff==1.11.1
Collecting backoff==1.11.1
Using cached backoff-1.11.1-py2.py3-none-any.whl (13 kB)
Installing collected packages: backoff
Attempting uninstall: backoff
Found existing installation: backoff 2.2.1
Uninstalling backoff-2.2.1:
Successfully uninstalled backoff-2.2.1
Successfully installed backoff-1.11.1
WARNING: You are using pip version 22.0.4; however, version 24.0 is available.
You should consider upgrading via the '/home/testvmadmin/main/sparrow/sparrow-ml/llm/.env_unstructured/bin/python -m pip install --upgrade pip' command.
(.env_unstructured) root@testvm:/home/testvmadmin/main/sparrow/sparrow-ml/llm# ./sparrow.sh "invoice_number, invoice_date, total_gross_worth" "int, str, str" --agent unstructured --file-path ./data/invoice_1.pdf
Detected Python version: Python 3.10.4
Running pipeline with unstructured
⠸ Processing file with unstructured...Traceback (most recent call last):
File "/home/testvmadmin/main/sparrow/sparrow-ml/llm/.env_unstructured/bin/unstructured-ingest", line 5, in <module>
from unstructured.ingest.main import main
File "/home/testvmadmin/main/sparrow/sparrow-ml/llm/.env_unstructured/lib/python3.10/site-packages/unstructured/ingest/main.py", line 2, in <module>
from unstructured.ingest.cli.cli import get_cmd
File "/home/testvmadmin/main/sparrow/sparrow-ml/llm/.env_unstructured/lib/python3.10/site-packages/unstructured/ingest/cli/__init__.py", line 5, in <module>
from unstructured.ingest.cli.cmds import base_dest_cmd_fns, base_src_cmd_fns
File "/home/testvmadmin/main/sparrow/sparrow-ml/llm/.env_unstructured/lib/python3.10/site-packages/unstructured/ingest/cli/cmds/__init__.py", line 6, in <module>
from unstructured.ingest.cli.base.src import BaseSrcCmd
File "/home/testvmadmin/main/sparrow/sparrow-ml/llm/.env_unstructured/lib/python3.10/site-packages/unstructured/ingest/cli/base/src.py", line 13, in <module>
from unstructured.ingest.runner import runner_map
File "/home/testvmadmin/main/sparrow/sparrow-ml/llm/.env_unstructured/lib/python3.10/site-packages/unstructured/ingest/runner/__init__.py", line 4, in <module>
from .airtable import AirtableRunner
File "/home/testvmadmin/main/sparrow/sparrow-ml/llm/.env_unstructured/lib/python3.10/site-packages/unstructured/ingest/runner/airtable.py", line 7, in <module>
from unstructured.ingest.runner.base_runner import Runner
File "/home/testvmadmin/main/sparrow/sparrow-ml/llm/.env_unstructured/lib/python3.10/site-packages/unstructured/ingest/runner/base_runner.py", line 20, in <module>
from unstructured.ingest.processor import process_documents
File "/home/testvmadmin/main/sparrow/sparrow-ml/llm/.env_unstructured/lib/python3.10/site-packages/unstructured/ingest/processor.py", line 15, in <module>
from unstructured.ingest.pipeline import (
File "/home/testvmadmin/main/sparrow/sparrow-ml/llm/.env_unstructured/lib/python3.10/site-packages/unstructured/ingest/pipeline/__init__.py", line 1, in <module>
from .doc_factory import DocFactory
File "/home/testvmadmin/main/sparrow/sparrow-ml/llm/.env_unstructured/lib/python3.10/site-packages/unstructured/ingest/pipeline/doc_factory.py", line 4, in <module>
from unstructured.ingest.pipeline.interfaces import DocFactoryNode
File "/home/testvmadmin/main/sparrow/sparrow-ml/llm/.env_unstructured/lib/python3.10/site-packages/unstructured/ingest/pipeline/interfaces.py", line 15, in <module>
from unstructured.ingest.ingest_backoff import RetryHandler
File "/home/testvmadmin/main/sparrow/sparrow-ml/llm/.env_unstructured/lib/python3.10/site-packages/unstructured/ingest/ingest_backoff/__init__.py", line 1, in <module>
from ._wrapper import RetryHandler
File "/home/testvmadmin/main/sparrow/sparrow-ml/llm/.env_unstructured/lib/python3.10/site-packages/unstructured/ingest/ingest_backoff/_wrapper.py", line 9, in <module>
from backoff._typing import (
ModuleNotFoundError: No module named 'backoff._typing'
Command failed. Error:
⠴ Processing file with unstructured...
╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /home/testvmadmin/main/sparrow/sparrow-ml/llm/engine.py:31 in run │
│ │
│ 28 │ │
│ 29 │ try: │
│ 30 │ │ rag = get_pipeline(user_selected_agent) │
│ ❱ 31 │ │ rag.run_pipeline(user_selected_agent, query_inputs_arr, query_t │
│ 32 │ │ │ │ │ │ debug) │
│ 33 │ except ValueError as e: │
│ 34 │ │ print(f"Caught an exception: {e}") │
│ │
│ ╭───────────────────────────────── locals ─────────────────────────────────╮ │
│ │ agent = 'unstructured' │ │
│ │ debug = False │ │
│ │ file_path = './data/invoice_1.pdf' │ │
│ │ index_name = None │ │
│ │ inputs = 'invoice_number, invoice_date, total_gross_worth' │ │
│ │ options = None │ │
│ │ query = 'retrieve invoice_number, invoice_date, │ │
│ │ total_gross_worth' │ │
│ │ query_inputs_arr = [ │ │
│ │ │ 'invoice_number', │ │
│ │ │ 'invoice_date', │ │
│ │ │ 'total_gross_worth' │ │
│ │ ] │ │
│ │ query_types = 'int, str, str' │ │
│ │ query_types_arr = ['int', 'str', 'str'] │ │
│ │ rag = <rag.agents.unstructured.unstructured.Unstructure… │ │
│ │ object at 0x7f1d83b82f80> │ │
│ │ types = 'int, str, str' │ │
│ │ user_selected_agent = 'unstructured' │ │
│ ╰──────────────────────────────────────────────────────────────────────────╯ │
│ │
│ /home/testvmadmin/main/sparrow/sparrow-ml/llm/rag/agents/unstructured/unstru │
│ ctured.py:71 in run_pipeline │
│ │
│ 68 │ │ │ │
│ 69 │ │ │ os.makedirs(temp_output_dir, exist_ok=True) │
│ 70 │ │ │ │
│ ❱ 71 │ │ │ files = self.invoke_pipeline_step( │
│ 72 │ │ │ │ lambda: self.process_files(temp_output_dir, temp_input │
│ 73 │ │ │ │ "Processing file with unstructured...", │
│ 74 │ │ │ │ local │
│ │
│ ╭───────────────────────────────── locals ─────────────────────────────────╮ │
│ │ debug = False │ │
│ │ device = 'cpu' │ │
│ │ embedding_model_name = 'all-MiniLM-L6-v2' │ │
│ │ file_path = './data/invoice_1.pdf' │ │
│ │ index_name = None │ │
│ │ input_dir = 'data/pdf' │ │
│ │ local = True │ │
│ │ options = None │ │
│ │ output_dir = 'data/json' │ │
│ │ payload = 'unstructured' │ │
│ │ query = 'retrieve invoice_number, invoice_date, │ │
│ │ total_gross_worth' │ │
│ │ query_inputs = [ │ │
│ │ │ 'invoice_number', │ │
│ │ │ 'invoice_date', │ │
│ │ │ 'total_gross_worth' │ │
│ │ ] │ │
│ │ query_types = ['int', 'str', 'str'] │ │
│ │ self = <rag.agents.unstructured.unstructured.Unstructur… │ │
│ │ object at 0x7f1d83b82f80> │ │
│ │ start = 6444.234209002 │ │
│ │ temp_dir = '/tmp/tmpf7ym66qi' │ │
│ │ temp_input_dir = '/tmp/tmpf7ym66qi/data/pdf' │ │
│ │ temp_output_dir = '/tmp/tmpf7ym66qi/data/json' │ │
│ │ weaviate_url = 'http://localhost:8080' │ │
│ ╰──────────────────────────────────────────────────────────────────────────╯ │
│ │
│ /home/testvmadmin/main/sparrow/sparrow-ml/llm/rag/agents/unstructured/unstru │
│ ctured.py:364 in invoke_pipeline_step │
│ │
│ 361 │ │ │ │ │ transient=False, │
│ 362 │ │ │ ) as progress: │
│ 363 │ │ │ │ progress.add_task(description=task_description, total= │
│ ❱ 364 │ │ │ │ ret = task_call() │
│ 365 │ │ else: │
│ 366 │ │ │ print(task_description) │
│ 367 │ │ │ ret = task_call() │
│ │
│ ╭───────────────────────────────── locals ─────────────────────────────────╮ │
│ │ local = True │ │
│ │ progress = <rich.progress.Progress object at 0x7f1ca9cb5c00> │ │
│ │ self = <rag.agents.unstructured.unstructured.UnstructuredPi… │ │
│ │ object at 0x7f1d83b82f80> │ │
│ │ task_call = <function │ │
│ │ UnstructuredPipeline.run_pipeline.<locals>.<lambda> │ │
│ │ at 0x7f1d83b8cee0> │ │
│ │ task_description = 'Processing file with unstructured...' │ │
│ ╰──────────────────────────────────────────────────────────────────────────╯ │
│ │
│ /home/testvmadmin/main/sparrow/sparrow-ml/llm/rag/agents/unstructured/unstru │
│ ctured.py:72 in <lambda> │
│ │
│ 69 │ │ │ os.makedirs(temp_output_dir, exist_ok=True) │
│ 70 │ │ │ │
│ 71 │ │ │ files = self.invoke_pipeline_step( │
│ ❱ 72 │ │ │ │ lambda: self.process_files(temp_output_dir, temp_input │
│ 73 │ │ │ │ "Processing file with unstructured...", │
│ 74 │ │ │ │ local │
│ 75 │ │ │ ) │
│ │
│ ╭───────────────────────────────── locals ─────────────────────────────────╮ │
│ │ self = <rag.agents.unstructured.unstructured.UnstructuredPip… │ │
│ │ object at 0x7f1d83b82f80> │ │
│ │ temp_input_dir = '/tmp/tmpf7ym66qi/data/pdf' │ │
│ │ temp_output_dir = '/tmp/tmpf7ym66qi/data/json' │ │
│ ╰──────────────────────────────────────────────────────────────────────────╯ │
│ │
│ /home/testvmadmin/main/sparrow/sparrow-ml/llm/rag/agents/unstructured/unstru │
│ ctured.py:123 in process_files │
│ │
│ 120 │ │ return answer │
│ 121 │ │
│ 122 │ def process_files(self, temp_output_dir, temp_input_dir): │
│ ❱ 123 │ │ self.process_local(output_dir=temp_output_dir, num_processes=2 │
│ 124 │ │ files = self.get_result_files(temp_output_dir) │
│ 125 │ │ return files │
│ 126 │
│ │
│ ╭───────────────────────────────── locals ─────────────────────────────────╮ │
│ │ self = <rag.agents.unstructured.unstructured.UnstructuredPip… │ │
│ │ object at 0x7f1d83b82f80> │ │
│ │ temp_input_dir = '/tmp/tmpf7ym66qi/data/pdf' │ │
│ │ temp_output_dir = '/tmp/tmpf7ym66qi/data/json' │ │
│ ╰──────────────────────────────────────────────────────────────────────────╯ │
│ │
│ /home/testvmadmin/main/sparrow/sparrow-ml/llm/rag/agents/unstructured/unstru │
│ ctured.py:171 in process_local │
│ │
│ 168 │ │ │ print(output.decode()) │
│ 169 │ │ else: │
│ 170 │ │ │ print('Command failed. Error:') │
│ ❱ 171 │ │ │ print(error.decode()) │
│ 172 │ │
│ 173 │ def get_result_files(self, folder_path) -> List[Dict]: │
│ 174 │ │ file_list = [] │
│ │
│ ╭───────────────────────────────── locals ─────────────────────────────────╮ │
│ │ command = [ │ │
│ │ │ 'unstructured-ingest', │ │
│ │ │ 'local', │ │
│ │ │ '--input-path', │ │
│ │ │ '/tmp/tmpf7ym66qi/data/pdf', │ │
│ │ │ '--output-dir', │ │
│ │ │ '/tmp/tmpf7ym66qi/data/json', │ │
│ │ │ '--num-processes', │ │
│ │ │ '2', │ │
│ │ │ '--recursive', │ │
│ │ │ '--verbose' │ │
│ │ ] │ │
│ │ error = None │ │
│ │ input_path = '/tmp/tmpf7ym66qi/data/pdf' │ │
│ │ num_processes = 2 │ │
│ │ output = b'' │ │
│ │ output_dir = '/tmp/tmpf7ym66qi/data/json' │ │
│ │ process = <Popen: returncode: 1 args: ['unstructured-ingest', │ │
│ │ 'local', '--input-path',...> │ │
│ │ self = <rag.agents.unstructured.unstructured.UnstructuredPipel… │ │
│ │ object at 0x7f1d83b82f80> │ │
│ ╰──────────────────────────────────────────────────────────────────────────╯ │
╰──────────────────────────────────────────────────────────────────────────────╯
AttributeError: 'NoneType' object has no attribute 'decode'
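An editing observation, hedged: the log above shows `backoff 2.2.1` being replaced with `1.11.1` immediately before the failure, and the missing module (`backoff._typing`) belongs to the newer 2.x layout. Restoring the version that unstructured was installed against may resolve it:
```
pip install backoff==2.2.1
```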
|
closed
|
2024-05-13T07:30:42Z
|
2024-07-14T12:00:49Z
|
https://github.com/katanaml/sparrow/issues/52
|
[] |
pitbuk101
| 2
|
twopirllc/pandas-ta
|
pandas
| 291
|
New indicators issue
|
**Expected behavior**
I was keen to try the new indicators added to this beautiful library; however, calling one of them raises an attribute error:
`AttributeError: module 'pandas_ta' has no attribute 'stc'`
**Screenshots**

|
closed
|
2021-05-20T02:41:59Z
|
2021-05-26T21:11:44Z
|
https://github.com/twopirllc/pandas-ta/issues/291
|
[
"question",
"info"
] |
satishchaudhary382
| 2
|
Lightning-AI/pytorch-lightning
|
deep-learning
| 19,813
|
Existing metric keys not moved to device after LearningRateFinder
|
### Bug description
Running `LearningRateFinder` leads to `teardown()` moving the training epoch loop's results to "cpu" [here](https://github.com/Lightning-AI/pytorch-lightning/blob/master/src/lightning/pytorch/loops/training_epoch_loop.py#L314).
The problem is that loop results are only moved to the device when they are registered for the first time [here](https://github.com/Lightning-AI/pytorch-lightning/blob/b9680a364da4e875b237ec3c03e67a9c32ef475b/src/lightning/pytorch/trainer/connectors/logger_connector/result.py#L423). This causes an issue for the `cumulated_batch_size` reduction, which uses the device of the original `value` tensor from when it was first created. So the tensor is still on `cpu` when training starts for real after `lr_find`, and we face `RuntimeError('No backend type associated with device type cpu')`.
E.g. the issue happens when using 2 GPU devices (see logs below).
I'll submit a fix for review shortly.
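Since the "How to reproduce" field below is empty, here is a hedged minimal sketch of the kind of run that can hit this; the module and data are illustrative (not from the report), assuming 2 GPUs and current master:
```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import lightning.pytorch as pl
from lightning.pytorch.callbacks import LearningRateFinder


class BoringModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.lr = 1e-3  # attribute the LR finder tunes
        self.layer = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        loss = self.layer(batch[0]).sum()
        # Logging a synced metric creates the cumulated_batch_size tensor
        # whose device goes stale after lr_find's teardown().
        self.log("train_loss", loss, sync_dist=True)
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=self.lr)


if __name__ == "__main__":
    data = DataLoader(TensorDataset(torch.randn(64, 32)), batch_size=8)
    trainer = pl.Trainer(
        accelerator="gpu",
        devices=2,
        strategy="ddp",
        max_epochs=2,
        callbacks=[LearningRateFinder()],  # runs lr_find before the real fit
    )
    trainer.fit(BoringModel(), data)
```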
### What version are you seeing the problem on?
master
### How to reproduce the bug
_No response_
### Error messages and logs
```
train/0 [1]:-> s.trainer.fit(s.model, **kwargs)
train/0 [1]: /opt/conda/envs/pytorch/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py(543)fit()
train/0 [1]:-> call._call_and_handle_interrupt(
train/0 [1]: /opt/conda/envs/pytorch/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py(43)_call_and_handle_interrupt()
train/0 [1]:-> return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
train/0 [1]: /opt/conda/envs/pytorch/lib/python3.10/site-packages/pytorch_lightning/strategies/launchers/subprocess_script.py(105)launch()
train/0 [1]:-> return function(*args, **kwargs)
train/0 [1]: /opt/conda/envs/pytorch/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py(579)_fit_impl()
train/0 [1]:-> self._run(model, ckpt_path=ckpt_path)
train/0 [1]: /opt/conda/envs/pytorch/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py(986)_run()
train/0 [1]:-> results = self._run_stage()
train/0 [1]: /opt/conda/envs/pytorch/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py(1032)_run_stage()
train/0 [1]:-> self.fit_loop.run()
train/0 [1]: /opt/conda/envs/pytorch/lib/python3.10/site-packages/pytorch_lightning/loops/fit_loop.py(205)run()
train/0 [1]:-> self.advance()
train/0 [1]: /opt/conda/envs/pytorch/lib/python3.10/site-packages/pytorch_lightning/loops/fit_loop.py(363)advance()
train/0 [1]:-> self.epoch_loop.run(self._data_fetcher)
train/0 [1]: /opt/conda/envs/pytorch/lib/python3.10/site-packages/pytorch_lightning/loops/training_epoch_loop.py(139)run()
train/0 [1]:-> self.on_advance_end(data_fetcher)
train/0 [1]: /opt/conda/envs/pytorch/lib/python3.10/site-packages/pytorch_lightning/loops/training_epoch_loop.py(287)on_advance_end()
train/0 [1]:-> self.val_loop.run()
train/0 [1]: /opt/conda/envs/pytorch/lib/python3.10/site-packages/pytorch_lightning/loops/utilities.py(182)_decorator()
train/0 [1]:-> return loop_run(self, *args, **kwargs)
train/0 [1]: /opt/conda/envs/pytorch/lib/python3.10/site-packages/pytorch_lightning/loops/evaluation_loop.py(142)run()
train/0 [1]:-> return self.on_run_end()
train/0 [1]: /opt/conda/envs/pytorch/lib/python3.10/site-packages/pytorch_lightning/loops/evaluation_loop.py(254)on_run_end()
train/0 [1]:-> self._on_evaluation_epoch_end()
train/0 [1]: /opt/conda/envs/pytorch/lib/python3.10/site-packages/pytorch_lightning/loops/evaluation_loop.py(336)_on_evaluation_epoch_end()
train/0 [1]:-> trainer._logger_connector.on_epoch_end()
train/0 [1]: /opt/conda/envs/pytorch/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py(195)on_epoch_end()
train/0 [1]:-> metrics = self.metrics
train/0 [1]: /opt/conda/envs/pytorch/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py(234)metrics()
train/0 [1]:-> return self.trainer._results.metrics(on_step)
train/0 [1]: /opt/conda/envs/pytorch/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py(483)metrics()
train/0 [1]:-> value = self._get_cache(result_metric, on_step)
train/0 [1]: /opt/conda/envs/pytorch/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py(447)_get_cache()
train/0 [1]:-> result_metric.compute()
train/0 [1]: /opt/conda/envs/pytorch/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py(289)wrapped_func()
train/0 [1]:-> self._computed = compute(*args, **kwargs)
train/0 [1]: /opt/conda/envs/pytorch/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py(251)compute()
train/0 [1]:-> cumulated_batch_size = self.meta.sync(self.cumulated_batch_size)
train/0 [1]: /opt/conda/envs/pytorch/lib/python3.10/site-packages/pytorch_lightning/strategies/ddp.py(342)reduce()
train/0 [1]:-> return _sync_ddp_if_available(tensor, group, reduce_op=reduce_op)
train/0 [1]: /opt/conda/envs/pytorch/lib/python3.10/site-packages/lightning_fabric/utilities/distributed.py(172)_sync_ddp_if_available()
train/0 [1]:-> return _sync_ddp(result, group=group, reduce_op=reduce_op)
train/0 [1]: /opt/conda/envs/pytorch/lib/python3.10/site-packages/lightning_fabric/utilities/distributed.py(222)_sync_ddp()
train/0 [1]:-> torch.distributed.all_reduce(result, op=op, group=group, async_op=False)
train/0 [1]: /opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/distributed/c10d_logger.py(72)wrapper()
train/0 [1]:-> return func(*args, **kwargs)
train/0 [1]:> /opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py(1996)all_reduce()
train/0 [0]:RuntimeError('No backend type associated with device type cpu')
```
### Environment
<details>
<summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0):
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 2.0):
#- Python version (e.g., 3.9):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
#- Running environment of LightningApp (e.g. local, cloud):
```
</details>
### More info
_No response_
cc @carmocca
|
closed
|
2024-04-25T14:31:17Z
|
2024-07-26T18:03:19Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/19813
|
[
"bug",
"tuner",
"logging",
"ver: 2.2.x"
] |
clumsy
| 0
|
jina-ai/serve
|
deep-learning
| 6,226
|
Add support for mamba, alternative to transformers
|
**Describe the feature**
Is it possible to add support for Mamba, a deep learning architecture focused on long-sequence modeling? For details, please see https://en.wikipedia.org/wiki/Deep_learning
**Your proposal**
Just asking
|
open
|
2025-01-25T09:54:39Z
|
2025-01-27T07:42:20Z
|
https://github.com/jina-ai/serve/issues/6226
|
[] |
geoman2
| 1
|
zappa/Zappa
|
flask
| 1,291
|
About Python 3.12 support
|
Hello, can you please tell me when to expect Zappa support for Python 3.12?
|
closed
|
2024-01-06T12:52:44Z
|
2024-01-10T17:07:23Z
|
https://github.com/zappa/Zappa/issues/1291
|
[
"enhancement",
"python"
] |
jagadeesh32
| 1
|
Gozargah/Marzban
|
api
| 719
|
environment: line 46: lsb_release: command not found
|
environment: line 46: lsb_release: command not found
|
closed
|
2023-12-28T08:34:14Z
|
2024-01-11T20:13:56Z
|
https://github.com/Gozargah/Marzban/issues/719
|
[
"Bug"
] |
saleh2323
| 2
|
scikit-learn/scikit-learn
|
python
| 30,138
|
How do I ensure IsolationForest detects only statistical outliers?
|
Hello everyone! I am starting to learn how to use IsolationForest to detect outliers/anomalies. When I input a dataset of y = x with x going from 1 to 100 and contamination='auto' as the only argument, roughly the 20 lowest values and the 20 highest values are identified as outliers. I don't want these points to appear as outliers, since they fall along a perfect straight-line fit with none of the x-values being outliers. Am I using this correctly? What arguments do I insert to ensure the model reports no outliers in this case, as expected?
```python
import pandas as pd
import numpy as np
from sklearn.ensemble import IsolationForest
import matplotlib.pyplot as plt
import seaborn as sns

data = {
    'x': range(1, 101),
    'y': range(1, 101)
}
df = pd.DataFrame(data)

model = IsolationForest(contamination='auto')  # Expecting 20% anomalies
df['anomaly'] = model.fit_predict(df[['x', 'y']])

plt.figure(figsize=(12, 6))
sns.scatterplot(x='x', y='y', hue='anomaly', palette={-1: 'red', 1: 'blue'}, data=df)
plt.title('Y=X')
plt.xlabel('X')
plt.ylabel('Y')
plt.legend(title='Anomaly', loc='upper right')
plt.show()
```

|
closed
|
2024-10-23T15:17:36Z
|
2024-10-29T09:12:26Z
|
https://github.com/scikit-learn/scikit-learn/issues/30138
|
[
"Needs Triage"
] |
BradBroo
| 0
|
hankcs/HanLP
|
nlp
| 1,464
|
Some traditional-to-simplified Chinese conversions are wrong
|
<!--
Thank you for reporting a possible bug in HanLP.
Please fill in the template below to bypass our spam filter.
The following fields are required; otherwise the issue will be closed directly.
-->
- Java Code: `String simplified=HanLP.convertToSimplifiedChinese(tradition);`
- HanLP version: 1.7.7
For example, "陷阱" is converted to "猫腻",
"猛烈" is converted to "勐烈",
"顺口溜" is converted to "顺口熘",
"脊梁" is converted to "嵴梁",
and "通道" is converted to "信道".
None of these conversions is necessary; in each pair, the two words are not actually traditional/simplified variants of each other.
|
closed
|
2020-04-22T08:48:44Z
|
2020-04-24T19:28:39Z
|
https://github.com/hankcs/HanLP/issues/1464
|
[
"auto-replied"
] |
yangxudong
| 2
|
laughingman7743/PyAthena
|
sqlalchemy
| 130
|
Workgroup setting of _build_list_query_executions_request method is wrong
|
https://github.com/laughingman7743/PyAthena/commit/f91bf97e59e6d220eac6bc2400747157a9a80090
|
closed
|
2020-03-25T07:56:53Z
|
2020-03-26T15:04:46Z
|
https://github.com/laughingman7743/PyAthena/issues/130
|
[] |
laughingman7743
| 0
|
yt-dlp/yt-dlp
|
python
| 12,064
|
How to download video from Telegram?
|
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm asking a question and **not** reporting a bug or requesting a feature
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Please make sure the question is worded well enough to be understood
I cannot download a video from Telegram. Please help me.
```
yt-dlp -vU https://t.me/asiadrama99/3983
[debug] Command-line config: ['-vU', 'https://t.me/asiadrama99/3983']
[debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.12.23 from yt-dlp/yt-dlp [65cf46cdd] (win_exe)
[debug] Python 3.10.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1q 5 Jul 2022)
[debug] exe versions: ffmpeg 2024-12-19-git-494c961379-full_build-www.gyan.dev (setts), ffprobe 2024-12-19-git-494c961379-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, mutagen-1.47.0, requests-2.32.3, sqlite3-3.39.4, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1838 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.12.23 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.12.23 from yt-dlp/yt-dlp)
[telegram:embed] Extracting URL: https://t.me/asiadrama99/3983
[telegram:embed] 3983: Downloading embed frame
WARNING: Extractor telegram:embed returned nothing; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using  yt-dlp -U
```
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
_No response_
|
closed
|
2025-01-12T12:19:17Z
|
2025-01-21T13:54:22Z
|
https://github.com/yt-dlp/yt-dlp/issues/12064
|
[
"question"
] |
k15fb-mmo
| 5
|
explosion/spaCy
|
deep-learning
| 13,139
|
spacy.load error: 'You are decorating a non function'
|
<!-- NOTE: For questions or install related issues, please open a Discussion instead. -->
## How to reproduce the behaviour
<!-- Include a code example or the steps that led to the problem. Please try to be as specific as possible. -->
!python3 -m spacy download en_core_web_sm
import spacy
nlp = spacy.load("en_core_web_sm")
## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
2023-11-20 22:52:39.399591: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
## Info about spaCy
- **spaCy version:** 3.5.0
- **Platform:** Linux-6.2.0-36-generic-x86_64-with-glibc2.35
- **Python version:** 3.10.12
- **Pipelines:** fr_core_news_sm (3.5.0), en_core_web_sm (3.5.0), fr_core_news_md (3.5.0)
* Python Version Used: 3.10.12
* spaCy Version Used: 3.5.0
TypeError Traceback (most recent call last)
Cell In[33], line 1
----> 1 nlp = spacy.load("en_core_web_sm")
File [~/.local/lib/python3.10/site-packages/spacy/__init__.py:54](https://file+.vscode-resource.vscode-cdn.net/home/amine/Documents/S9/PFEE/PFEE_project2/research/~/.local/lib/python3.10/site-packages/spacy/__init__.py:54), in load(name, vocab, disable, enable, exclude, config)
30 def load(
31 name: Union[str, Path],
32 *,
(...)
37 config: Union[Dict[str, Any], Config] = util.SimpleFrozenDict(),
38 ) -> Language:
39 """Load a spaCy model from an installed package or a local path.
40
41 name (str): Package name or model path.
(...)
52 RETURNS (Language): The loaded nlp object.
53 """
---> 54 return util.load_model(
55 name,
56 vocab=vocab,
57 disable=disable,
58 enable=enable,
59 exclude=exclude,
60 config=config,
...
141 assert hasattr(self, 'name')
142 if not hasattr(self, 'signature'):
--> 143 raise TypeError('You are decorating a non function: %s' % func)
TypeError: You are decorating a non function: that is odd
|
closed
|
2023-11-20T22:01:25Z
|
2023-11-22T20:57:48Z
|
https://github.com/explosion/spaCy/issues/13139
|
[
"install"
] |
AMS-L
| 1
|
apify/crawlee-python
|
automation
| 635
|
Add support for `preNavigationHooks` in crawlers other than `PlaywrightCrawler`
|
- This is an extension of #427 - `ParselCrawler`, `BeautifulSoupCrawler` and basically everything should support `preNavigationHooks` as well.
- It might be a good idea to wait for #350 to be resolved before going for this.
|
closed
|
2024-10-30T10:36:15Z
|
2024-12-12T07:00:31Z
|
https://github.com/apify/crawlee-python/issues/635
|
[
"enhancement",
"t-tooling"
] |
janbuchar
| 0
|
axnsan12/drf-yasg
|
django
| 245
|
There should be a way to override serializer fields in the generated ``Schema`` objects
|
I have a model where a few fields are auto-generated. How do I hide those fields from the Swagger UI during a POST request? Here is an example:
```python
class ModelX(models.Model):
    a = models.CharField()
    b = models.CharField()
    c = models.CharField()
    d = models.CharField()
```
Below is my serializer:
```python
class Serializerx(serializers.Serializer):
    class Meta:
        model = ModelX
        fields = '__all__'
```
In the above model, fields `b` and `d` are auto-generated by my code, which means these fields are not required as input from the user.
If I add `b` and `d` as read-only fields, then I won't be able to create an object with these values.
How do I hide some attributes from the request payload?
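A hedged sketch of one common workaround (the view and serializer names here are illustrative, not from the question): declare a separate write serializer without the auto-generated fields and point drf-yasg at it with `swagger_auto_schema`:
```python
from drf_yasg.utils import swagger_auto_schema
from rest_framework import serializers, viewsets


class ModelXWriteSerializer(serializers.ModelSerializer):
    class Meta:
        model = ModelX           # the model defined above
        fields = ('a', 'c')      # only the user-supplied fields


class ModelXViewSet(viewsets.ModelViewSet):
    queryset = ModelX.objects.all()
    serializer_class = Serializerx

    # Swagger documents the POST body with the write serializer only.
    @swagger_auto_schema(request_body=ModelXWriteSerializer)
    def create(self, request, *args, **kwargs):
        return super().create(request, *args, **kwargs)
```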
|
open
|
2018-11-05T09:01:18Z
|
2025-03-07T12:16:43Z
|
https://github.com/axnsan12/drf-yasg/issues/245
|
[
"triage"
] |
prafulbagai
| 4
|
open-mmlab/mmdetection
|
pytorch
| 11,278
|
I get an error when I try to test the number of parameters of DiffusionDet.
|
I want to research DiffusionDet in the mmdetection project.
When I run `python tools/analysis_tools/get_flops.py D:\BaiduSyncdisk\paper2\experiments\diffusiondet\visdrone\1-base\visdrone_diffusiondet_base\visdrone_diffusiondet_base.py`,
I get `TypeError: forward() missing 2 required positional arguments: 'init_bboxes' and 'init_t'`.
Could you help me solve it? Thanks.
The mmdetection version is 3.2.0.
|
open
|
2023-12-13T02:27:09Z
|
2024-08-12T08:02:51Z
|
https://github.com/open-mmlab/mmdetection/issues/11278
|
[] |
Edenmm
| 5
|
home-assistant/core
|
python
| 140,456
|
Unable to install Z-wave USB Stick ZST39 LR
|
### The problem
New to HA. I bought the Zooz 800 Series Z-Wave stick (ZST39 LR) to integrate my Z-Wave devices, but I am unable to install the driver even though I followed the instructions online. Under the logs I am seeing the error below. I am not hands-on enough to modify the code, and I did look up the documentation but did not know what to do. *Please help.*
s/src/lib/controller/Controller.ts:1144:37)
at Driver.initializeControllerAndNodes (file:///usr/src/node_modules/zwave
-js/src/lib/driver/Driver.ts:1665:46)
at Immediate.<anonymous> (file:///usr/src/node_modules/zwave-js/src/lib/dr
iver/Driver.ts:1466:16)
Error in driver ZWaveError: Failed to initialize the driver: ZWaveError: Timeout while waiting for an ACK from the controller (ZW0200)
at Driver.sendMessage (file:///usr/src/node_modules/zwave-js/src/lib/driver/Driver.ts:6059:23)
at ZWaveController.queryCapabilities (file:///usr/src/node_modules/zwave-js/src/lib/controller/Controller.ts:1144:37)
at Driver.initializeControllerAndNodes (file:///usr/src/node_modules/zwave-js/src/lib/driver/Driver.ts:1665:46)
at Immediate.<anonymous> (file:///usr/src/node_modules/zwave-js/src/lib/driver/Driver.ts:1466:16) (ZW0100)
at Immediate.<anonymous> (file:///usr/src/node_modules/zwave-js/src/lib/driver/Driver.ts:1486:6) {
code: 100,
context: undefined,
transactionSource: undefined
}
Shutting down
[23:15:01] WARNING: Halt add-on
s6-rc: info: service legacy-services: stopping
s6-rc: info: service legacy-services successfully stopped
s6-rc: info: service legacy-cont-init: stopping
s6-rc: info: service legacy-cont-init successfully stopped
s6-rc: info: service fix-attrs: stopping
s6-rc: info: service fix-attrs successfully stopped
s6-rc: info: service s6rc-oneshot-runner: stopping
s6-rc: info: service s6rc-oneshot-runner successfully stopped
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
cont-init: info: running /etc/cont-init.d/config.sh
[09:55:15] INFO: No s0_legacy_key is set, generating one...
[09:55:16] INFO: No 'network_key' detected, setting it to 's0_legacy_key' for backwards compatibility
[09:55:16] INFO: No s2_access_control_key is set, generating one...
[09:55:17] INFO: No s2_authenticated_key is set, generating one...
[09:55:17] INFO: No s2_unauthenticated_key is set, generating one...
[09:55:18] INFO: No lr_s2_access_control_key is set, generating one...
[09:55:18] INFO: No lr_s2_authenticated_key is set, generating one...
[09:55:19] INFO: Flushing config to disk due to creation of new key(s)...
[09:55:19] INFO: Soft-reset set to automatic
[09:55:19] INFO: Virtual Machine not detected, enabling soft-reset
cont-init: info: /etc/cont-init.d/config.sh exited 0
cont-init: info: running /etc/cont-init.d/structure.sh
cont-init: info: /etc/cont-init.d/structure.sh exited 0
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun zwave_js (no readiness notification)
s6-rc: info: service legacy-services successfully started
[09:55:20] INFO: Successfully send discovery information to Home Assistant.
2025-03-12T14:55:22.013Z DRIVER ███████╗ ██╗ ██╗ █████╗ ██╗ ██╗ ███████╗ ██╗ ███████╗
╚══███╔╝ ██║ ██║ ██╔══██╗ ██║ ██║ ██╔════╝ ██║ ██╔════╝
███╔╝ █████╗ ██║ █╗ ██║ ███████║ ██║ ██║ █████╗ ██║ ███████╗
███╔╝ ╚════╝ ██║███╗██║ ██╔══██║ ╚██╗ ██╔╝ ██╔══╝ ██ ██║ ╚════██║
███████╗ ╚███╔███╔╝ ██║ ██║ ╚████╔╝ ███████╗ ╚█████╔╝ ███████║
╚══════╝ ╚══╝╚══╝ ╚═╝ ╚═╝ ╚═══╝ ╚══════╝ ╚════╝ ╚══════╝
2025-03-12T14:55:22.015Z DRIVER version 14.3.8
2025-03-12T14:55:22.016Z DRIVER
2025-03-12T14:55:23.659Z CONFIG version 14.3.8
2025-03-12T14:55:27.331Z CNTRLR querying Serial API capabilities...
2025-03-12T14:55:28.497Z CNTRLR Failed to execute controller command after 1/3 attempts. Scheduling next try i
n 100 ms.
2025-03-12T14:55:29.602Z CNTRLR Failed to execute controller command after 2/3 attempts. Scheduling next try i
n 1100 ms.
2025-03-12T14:55:31.711Z DRIVER Failed to initialize the driver: ZWaveError: Timeout while waiting for an ACK
from the controller (ZW0200)
at Driver.sendMessage (file:///usr/src/node_modules/zwave-js/src/lib/drive
r/Driver.ts:6059:23)
at ZWaveController.queryCapabilities (file:///usr/src/node_modules/zwave-j
s/src/lib/controller/Controller.ts:1144:37)
at Driver.initializeControllerAndNodes (file:///usr/src/node_modules/zwave
-js/src/lib/driver/Driver.ts:1665:46)
at Immediate.<anonymous> (file:///usr/src/node_modules/zwave-js/src/lib/dr
iver/Driver.ts:1466:16)
Error in driver ZWaveError: Failed to initialize the driver: ZWaveError: Timeout while waiting for an ACK from the controller (ZW0200)
at Driver.sendMessage (file:///usr/src/node_modules/zwave-js/src/lib/driver/Driver.ts:6059:23)
at ZWaveController.queryCapabilities (file:///usr/src/node_modules/zwave-js/src/lib/controller/Controller.ts:1144:37)
at Driver.initializeControllerAndNodes (file:///usr/src/node_modules/zwave-js/src/lib/driver/Driver.ts:1665:46)
at Immediate.<anonymous> (file:///usr/src/node_modules/zwave-js/src/lib/driver/Driver.ts:1466:16) (ZW0100)
at Immediate.<anonymous> (file:///usr/src/node_modules/zwave-js/src/lib/driver/Driver.ts:1486:6) {
code: 100,
context: undefined,
transactionSource: undefined
}
Shutting down
[14:55:31] WARNING: Halt add-on
s6-rc: info: service legacy-services: stopping
s6-rc: info: service legacy-services successfully stopped
s6-rc: info: service legacy-cont-init: stopping
s6-rc: info: service legacy-cont-init successfully stopped
s6-rc: info: service fix-attrs: stopping
s6-rc: info: service fix-attrs successfully stopped
s6-rc: info: service s6rc-oneshot-runner: stopping
s6-rc: info: service s6rc-oneshot-runner successfully stopped
### What version of Home Assistant Core has the issue?
core-2025.3.2
### What was the last working version of Home Assistant Core?
core-2025.3.2
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
z-wave JS
### Link to integration documentation on our website
_No response_
### Diagnostics information
(The diagnostics log is identical to the log already pasted in the problem description above.)
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_
|
open
|
2025-03-12T15:52:51Z
|
2025-03-12T19:33:00Z
|
https://github.com/home-assistant/core/issues/140456
|
[
"needs-more-information"
] |
bagavaga
| 1
|
ymcui/Chinese-LLaMA-Alpaca
|
nlp
| 400
|
LoRA training still errors after removing `--modules_to_save ${modules_to_save} \` and `--gradient_checkpointing \` to save GPU memory
|
*Hint: put an x inside [ ] to check an item. Delete this line when asking. Keep only the options that apply.*
### Describe the problem in detail
In the second training stage, when fine-tuning the model,
during LoRA training I removed `--modules_to_save ${modules_to_save} \` and `--gradient_checkpointing \` to save GPU memory, but it still errors out.
*Please describe the problem you encountered as specifically as possible, **including the command you ran** if necessary. This helps us locate the issue faster.*
<img width="319" alt="image" src="https://github.com/ymcui/Chinese-LLaMA-Alpaca/assets/108610753/46dcb2b5-1f18-4ef0-b5c0-c196b9dafcf6">
### Screenshots or logs
<img width="1067" alt="image" src="https://github.com/ymcui/Chinese-LLaMA-Alpaca/assets/108610753/17717ea1-ed07-44f7-b8be-f8f865baeae8">
### Required checklist (for the first three items, keep only the one you are asking about)
- [ ] **Base model**: Alpaca-Plus
- [ ] **Operating system**: Linux
- [ ] **Issue category**: model training and fine-tuning
- [ ] (Required) Since the related dependencies are updated frequently, please make sure you followed the relevant steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki)
- [ ] (Required) I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题) and searched the existing issues without finding a similar problem or solution
|
closed
|
2023-05-21T13:17:20Z
|
2023-05-31T22:02:01Z
|
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/400
|
[
"stale"
] |
jjyu-ustc
| 3
|
onnx/onnx
|
scikit-learn
| 6,811
|
failed to convert opset to 17
|
# Bug report
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from torch.export import export
import onnx
model_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
class Qwen2(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.qwen = model

    def forward(self, x):
        result = self.qwen(x)
        result.past_key_values = ()
        return result
qwen2 = Qwen2()
# Define a prompt for the model
prompt = "What are the benefits of using AI in healthcare?"
# Encode the prompt
input_ids = tokenizer.encode(prompt, return_tensors="pt")
# Get the model's output (logits)
with torch.no_grad():
    outputs = qwen2(input_ids)
# Extract the logits from the output
logits = outputs.logits
# Get the predicted token (the last token in the sequence)
predicted_token_id = torch.argmax(logits[:, -1, :], dim=-1)
# Decode the predicted token to get the text
predicted_token = tokenizer.decode(predicted_token_id)
# Print the result
print("Response from DeepSeek-R1-Distill-Qwen-1.5B:")
print(predicted_token)
exported_program: torch.export.ExportedProgram = export(
    qwen2, (input_ids,)
)
torch.onnx.export(
    exported_program,
    input_ids,
    "qwen-1.5b.onnx",
    # input_names=["input"],
    opset_version=17,
    dynamo=True,
)
original_model = onnx.load_model("qwen-1.5b.onnx")
converted_model = onnx.version_converter.convert_version(original_model, 17)
onnx.save(converted_model, "model_17.onnx")
```
The above script fails with the following message.
```shell
Traceback (most recent call last):
File "/home/hmsjwzb/work/models/QWEN/./qwen4.py", line 67, in <module>
converted_model = onnx.version_converter.convert_version(original_model, 17)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/onnx/version_converter.py", line 37, in convert_version
converted_model_str = C.convert_version(model_str, target_version)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: /github/workspace/onnx/common/ir_pb_converter.cc:715: assertNonNull: Assertion `g.get() != nullptr` failed: Warning: onnx version converter is unable to parse input model. (The IR version of the ONNX model may be too old.)
```
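A hedged debugging step (not from the report): since the converter complains about the model's IR version, it can be inspected directly on the loaded `ModelProto` before attempting conversion:
```python
import onnx

model = onnx.load_model("qwen-1.5b.onnx")
# ModelProto carries the IR version and the opsets the model imports.
print("ir_version:", model.ir_version)
for opset in model.opset_import:
    print("domain:", opset.domain or "ai.onnx", "version:", opset.version)
```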
|
open
|
2025-03-13T08:27:02Z
|
2025-03-14T14:50:31Z
|
https://github.com/onnx/onnx/issues/6811
|
[
"bug",
"module: version converter"
] |
FlintWangacc
| 3
|
huggingface/peft
|
pytorch
| 1,890
|
ValueError: Trying to set a tensor of shape torch.Size([43176, 8192]) in "weight" (which has shape torch.Size([0])), this look incorrect.
|
### System Info
bitsandbytes==0.43.1
peft==0.11.0
accelerate==0.31.0
transformers==4.38.2
trl==0.9.4
### Who can help?
@BenjaminBossan @sayakpaul
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
Hello
I have 8 Nvidia H100 GPUs and am trying to do some training with QLoRA and DeepSpeed ZeRO-3.
I'm using the code in examples/sft/train.py but have had no luck.
Script:
```bash
accelerate launch --config_file "configs/deepspeed_config_z3_qlora.yaml" train.py \
--seed 100 \
--model_name_or_path "tokyotech-llm/Swallow-70b-hf" \
--dataset_name "smangrul/ultrachat-10k-chatml" \
--chat_template_format "chatml" \
--add_special_tokens False \
--append_concat_token False \
--splits "train,test" \
--max_seq_len 4096 \
--num_train_epochs 3 \
--logging_steps 1 \
--log_level "info" \
--logging_strategy "steps" \
--evaluation_strategy "epoch" \
--save_strategy "epoch" \
--bf16 True \
--packing True \
--learning_rate 1e-4 \
--lr_scheduler_type "cosine" \
--weight_decay 1e-4 \
--warmup_ratio 0.0 \
--max_grad_norm 1.0 \
--output_dir "mistral-sft-lora-multigpu" \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 4 \
--gradient_checkpointing True \
--report_to "tensorboard" \
--use_reentrant False \
--dataset_text_field "content" \
--use_peft_lora True \
--lora_r 8 \
--lora_alpha 16 \
--lora_dropout 0.1 \
--lora_target_modules "all-linear" \
--use_4bit_quantization True \
--use_nested_quant True \
--bnb_4bit_compute_dtype "bfloat16" \
--use_flash_attn False
```
I got the error below (printed once per rank, interleaved in the original output):
ValueError: Trying to set a tensor of shape torch.Size([43176, 8192]) in "weight" (which has shape torch.Size([0])), this look incorrect.
### Expected behavior
I'm not sure, but the train.py should work without flash_attention.
|
closed
|
2024-06-27T05:47:16Z
|
2024-07-02T04:12:35Z
|
https://github.com/huggingface/peft/issues/1890
|
[] |
KarasZhang
| 15
|
pytorch/vision
|
computer-vision
| 8,292
|
All CI jobs are failing
|
e.g. https://github.com/pytorch/vision/actions/runs/8141135591/job/22247761309
```
Traceback (most recent call last):
File "/pytorch/vision/test/smoke_test.py", line 7, in <module>
import torchvision
File "/pytorch/vision/torchvision/__init__.py", line 6, in <module>
from torchvision import _meta_registrations, datasets, io, models, ops, transforms, utils
File "/pytorch/vision/torchvision/models/__init__.py", line 2, in <module>
from .convnext import *
File "/pytorch/vision/torchvision/models/convnext.py", line 8, in <module>
from ..ops.misc import Conv2dNormActivation, Permute
File "/pytorch/vision/torchvision/ops/__init__.py", line 23, in <module>
from .poolers import MultiScaleRoIAlign
File "/pytorch/vision/torchvision/ops/poolers.py", line 10, in <module>
from .roi_align import roi_align
File "/pytorch/vision/torchvision/ops/roi_align.py", line 4, in <module>
import torch._dynamo
File "/opt/conda/envs/ci/lib/python3.9/site-packages/torch/_dynamo/__init__.py", line 2, in <module>
from . import convert_frame, eval_frame, resume_execution
File "/opt/conda/envs/ci/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 31, in <module>
from torch.fx.experimental.symbolic_shapes import (
File "/opt/conda/envs/ci/lib/python3.9/site-packages/torch/fx/experimental/symbolic_shapes.py", line 63, in <module>
from torch.utils._sympy.functions import FloorDiv, Mod, IsNonOverlappingAndDenseIndicator
File "/opt/conda/envs/ci/lib/python3.9/site-packages/torch/utils/_sympy/functions.py", line 1, in <module>
import sympy
File "/opt/conda/envs/ci/lib/python3.9/site-packages/sympy/__init__.py", line 30, in <module>
from sympy.core.cache import lazy_function
File "/opt/conda/envs/ci/lib/python3.9/site-packages/sympy/core/__init__.py", line 9, in <module>
Traceback (most recent call last):
File "/home/ec2-user/actions-runner/_work/vision/vision/test-infra/.github/scripts/run_with_env_secrets.py", line 100, in <module>
main()
File "/home/ec2-user/actions-runner/_work/vision/vision/test-infra/.github/scripts/run_with_env_secrets.py", line 96, in main
run_cmd_or_die(f"docker exec -t {container_name} /exec")
File "/home/ec2-user/actions-runner/_work/vision/vision/test-infra/.github/scripts/run_with_env_secrets.py", line 38, in run_cmd_or_die
raise RuntimeError(f"Command {cmd} failed with exit code {exit_code}")
RuntimeError: Command docker exec -t a54479f4bb64811255ff0c9e7db93fbc369ef516bfc5be032a430222e6c2499a /exec failed with exit code 1
from .expr import Expr, AtomicExpr, UnevaluatedExpr
File "/opt/conda/envs/ci/lib/python3.9/site-packages/sympy/core/expr.py", line 4159, in <module>
from .mul import Mul
File "/opt/conda/envs/ci/lib/python3.9/site-packages/sympy/core/mul.py", line 2193, in <module>
from .numbers import Rational
File "/opt/conda/envs/ci/lib/python3.9/site-packages/sympy/core/numbers.py", line 4567, in <module>
_sympy_converter[type(mpmath.rational.mpq(1, 2))] = sympify_mpmath_mpq
AttributeError: module 'mpmath' has no attribute 'rational'
```
@vmoens's investigations suggest that the `--pre` flag in our `pip install` command also installs the newly pre-released `mpmath-1.4.0a0`, which leads to the above error.
(PR in torchRL: https://github.com/pytorch/rl/pull/1988)
|
closed
|
2024-03-04T13:51:53Z
|
2024-03-05T12:47:19Z
|
https://github.com/pytorch/vision/issues/8292
|
[] |
NicolasHug
| 0
|
strawberry-graphql/strawberry
|
django
| 3,514
|
Subscriptions fail to create WebSocket connections
|
<!-- Provide a general summary of the bug in the title above. -->
<!--- This template is entirely optional and can be removed, but is here to help both you and us. -->
<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->
## Describe the Bug
I have a FastAPI server with Strawberry. It requires a WebSocket system for chat functionality, and hence it uses subscriptions.
While the server is up, the WebSocket connection gets created in the playground in one window but not in another.
The developer tools console gives the following error:
```
WebSocket connection to 'wss://my_url.com/api/graphql' failed:
```
In the network tab, everything except the request URL is empty. Further, there are no errors visible in the server logs.
The setup is as below:
```
@strawberry.type
class Subscription:
@strawberry.subscription
async def test(self, data: str) -> AsyncGenerator[str, None]:
print("Test Subscription")
yield "Hello World"
schema = strawberry.Schema(query=Query, subscription=Subscription, mutation=Mutation)
graphql_app = GraphQLRouter(schema, graphiql=None)
app = FastAPI(docs_url="/docs", redoc_url=None)
audio_router = APIRouter()
app.add_middleware(
CORSMiddleware,
allow_origins=["*"], # Allows all origins
allow_credentials=True,
allow_methods=["*"], # Allows all methods
allow_headers=["*"], # Allows all headers
)
app.include_router(graphql_app, prefix="/api/graphql")
```
In the graphql playground i get:
```
{
"errors": [
{
"isTrusted": true
}
]
}
```
Now, I have tried it in multiple browsers and windows, on multiple operating systems as well.
The problem doesn't occur in Chrome on Windows or in Safari on Mac,
though it is intermittently seen in Chrome on macOS.
The weird part is that in Chrome it works fine in one window while it doesn't in another.
I faced this initially on the deployed server, which has TLS, but it is reproducible locally as well (again, it happens about 50% of the time).
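One hedged thing to check (an assumption, not a confirmed diagnosis): the two windows may be negotiating different WebSocket subprotocols, and the router only speaks one of them. Strawberry lets you enable both explicitly:
```python
from strawberry.fastapi import GraphQLRouter
from strawberry.subscriptions import (
    GRAPHQL_TRANSPORT_WS_PROTOCOL,
    GRAPHQL_WS_PROTOCOL,
)

# Accept both the newer graphql-transport-ws and the legacy graphql-ws
# subprotocols so either client flavour can connect.
graphql_app = GraphQLRouter(
    schema,  # the schema defined above
    subscription_protocols=[
        GRAPHQL_TRANSPORT_WS_PROTOCOL,
        GRAPHQL_WS_PROTOCOL,
    ],
)
```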
<!-- A clear and concise description of what the bug is. -->
## System Information
- Operating system: Mac OS Sonoma
- Strawberry version (if applicable): 0.215.1
## Additional Context
<!-- Add any other relevant information about the problem here. -->
|
closed
|
2024-05-26T11:56:27Z
|
2025-03-20T15:56:45Z
|
https://github.com/strawberry-graphql/strawberry/issues/3514
|
[
"bug",
"info-needed"
] |
amoghjalan
| 2
|
oegedijk/explainerdashboard
|
dash
| 46
|
hide pdp on What if... tab
|
Thanks for working on this (https://github.com/oegedijk/explainerdashboard/issues/41).
Just tested and the pdp plot remained in the What if... tab. Is they a way to remove it similar to removing it from the Individual Predictions tab?
```
from explainerdashboard.datasets import (
titanic_fare,
titanic_names,
feature_descriptions,
)
from sklearn.ensemble import RandomForestRegressor
from explainerdashboard import RegressionExplainer, ExplainerDashboard
X_train, y_train, X_test, y_test = titanic_fare()
model = RandomForestRegressor(n_estimators=50, max_depth=5)
model.fit(X_train, y_train)
train_names, test_names = titanic_names()
explainer = RegressionExplainer(
model,
X_test,
y_test,
cats=["Sex", "Deck", "Embarked"],
idxs=test_names,
target="Fare",
descriptions=feature_descriptions,
units="$",
)
db = ExplainerDashboard(
explainer,
importances=False,
model_summary=False,
decision_trees=False,
no_permutations=True,
hide_depth=True,
hide_pdp=True,
)
db.run()
```
|
closed
|
2020-12-10T15:31:45Z
|
2020-12-10T20:45:05Z
|
https://github.com/oegedijk/explainerdashboard/issues/46
|
[] |
raybellwaves
| 2
|
litestar-org/litestar
|
api
| 3,211
|
Docs: Code block line length
|
### Summary
For documentation, and only documentation, an overly long code block becomes a scrollable window.
We should set the `blacken-docs` configuration (and eventually `ruff`, when https://github.com/astral-sh/ruff/issues/8237 happens) to a line length somewhere on the lower end (maybe 80?); this goes together with manually ensuring `.. code-block::` directives are not overly long.
|
open
|
2024-03-16T05:56:46Z
|
2025-03-24T19:17:57Z
|
https://github.com/litestar-org/litestar/issues/3211
|
[
"Documentation :books:",
"Help Wanted :sos:",
"Good First Issue"
] |
JacobCoffee
| 1
|
AirtestProject/Airtest
|
automation
| 1,121
|
Minicap screenshots unavailable on some Android 13 phones
|
(Please fill in the template below as much as possible; it helps us locate and solve the problem quickly. Thanks for your cooperation. Otherwise the issue will be closed directly.)
**(Important! Issue category)**
* AirtestIDE usage problems in the test/dev environment -> https://github.com/AirtestProject/AirtestIDE/issues
* Widget recognition, UI tree, poco library errors -> https://github.com/AirtestProject/Poco/issues
* Image recognition and device control problems -> follow the steps below
**Describe the bug**
(Briefly and clearly summarize the problem, or paste the error traceback.)
[20:34:33][ERROR]<airtest.core.android.cap_methods.screen_proxy> b''
[20:34:33][ERROR]<airtest.core.android.cap_methods.screen_proxy> b'PID: 26453\r\nINFO: Using projection 1060x2376@1060x2376/0\r\nINFO: (external/minicap/src/minicap_33.cpp:245) Creating SurfaceComposerClient\r\nINFO: (external/minicap/src/minicap_33.cpp:248) Performing SurfaceComposerClient init check\r\nINFO: (external/minicap/src/minicap_33.cpp:259) Creating virtual display\r\nINFO: (external/minicap/src/minicap_33.cpp:265) Creating buffer queue\r\nINFO: (external/minicap/src/minicap_33.cpp:268) Setting buffer options\r\nINFO: (external/minicap/src/minicap_33.cpp:272) Creating CPU consumer\r\nINFO: (external/minicap/src/minicap_33.cpp:276) Creating frame waiter\r\nINFO: (external/minicap/src/minicap_33.cpp:280) Publishing virtual display\r\nSegmentation fault \r\n'
[20:34:33][ERROR]<airtest.core.android.cap_methods.screen_proxy> Minicap setup up failed!
There are also Android 13 phones on which it works.
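As a hedged workaround (based on Airtest's documented capture methods; not a confirmed fix for this crash): when minicap fails, the screenshot method can be forced to the slower but more compatible JAVACAP at connection time:
```python
from airtest.core.api import connect_device

# Force the JAVACAP screenshot method instead of minicap;
# the serial number below is a placeholder.
dev = connect_device("Android://127.0.0.1:5037/SERIALNO?cap_method=JAVACAP")
```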
```
(Paste the traceback or other error messages here)
```
**Related screenshots**

(Attach screenshots of the problem, if any)
(For image- and device-related problems produced inside AirtestIDE, please paste the relevant errors from the AirtestIDE console window)
**复现步骤**
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
(What you expected to get or see)
**Python version:** `python3.9`
python3.9
**airtest version:** `1.2.10`
> The airtest version can be found via `pip freeze`
1.2.10
**Device:**
- Model: [e.g. google pixel 2]
- OS: [e.g. Android 8.1] Android 13
- (other info)
**Other environment info**
(Other runtime environments, e.g. fails on Linux Ubuntu 16.04 but works fine on Windows.)
|
open
|
2023-04-04T12:38:29Z
|
2024-04-17T04:54:05Z
|
https://github.com/AirtestProject/Airtest/issues/1121
|
[] |
suchxyz
| 2
|
google-deepmind/graph_nets
|
tensorflow
| 120
|
What is the most efficient way to create a GraphTuple from batched tensors
|
My use case is that I would like to use graph_nets to perform logical reasoning on the output of some prior batched TF operations. Let me present an example:
``` python
B,W,H,C = 10, 200,200,3
data = tf.random.normal(shape=(B, W,H,C))
#[B,W,H,8]
A = tf.keras.layers.Conv2D(8, (3,3), padding='same')(data)
# I would now like to now turn each "pixel" into its own node and add fully connected
# edges. Thus I would like a GraphTuple representing a batch of B graphs with W*H
# nodes with node attributes of shape [8], and (W*H)**2 edges (including self loops)
```
What I can imagine doing is creating a dynamic fully connected graph for the nodes of a single graph and then adding offsets to the senders and receivers for the batches (see the sketch below). Is there a prebuilt function for this use case (which I imagine would be quite common)?
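A hedged sketch of that offset approach, built directly from the `GraphsTuple` definition rather than any prebuilt helper (note that at W = H = 200 the (W*H)**2 edge count is enormous; small sizes are used here so the structure is what matters):
```python
import numpy as np
import tensorflow as tf
from graph_nets import graphs

B, N, F = 10, 4, 8  # batch, nodes per graph, node feature size (small for illustration)
nodes = tf.random.normal(shape=(B * N, F))  # stands in for the reshaped conv output

# Fully connected senders/receivers (with self loops) for a single graph...
s, r = np.meshgrid(np.arange(N), np.arange(N))
senders_one, receivers_one = s.reshape(-1), r.reshape(-1)

# ...tiled across the batch with a per-graph offset of N.
offsets = np.repeat(np.arange(B) * N, N * N)
senders = tf.constant(np.tile(senders_one, B) + offsets, dtype=tf.int32)
receivers = tf.constant(np.tile(receivers_one, B) + offsets, dtype=tf.int32)

graph = graphs.GraphsTuple(
    nodes=nodes,
    edges=None,
    receivers=receivers,
    senders=senders,
    globals=None,
    n_node=tf.fill([B], N),
    n_edge=tf.fill([B], N * N),
)
```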
|
closed
|
2020-06-02T17:00:16Z
|
2020-06-04T11:00:57Z
|
https://github.com/google-deepmind/graph_nets/issues/120
|
[] |
Joshuaalbert
| 5
|
psf/black
|
python
| 3,638
|
Black adds unexpected trailing comma in return type wrapped in parenthesis
|
Code before:
```python
def asdf_asdf_asdf_asdf_asdf() -> my_module.Asdf | my_module.AnotherType | my_module.YetAnotherType | None:
pass
```
Black formatted to:
```python
def asdf_asdf_asdf_asdf_asdf() -> (
my_module.Asdf | my_module.AnotherType | my_module.YetAnotherType | None,
):
pass
```
The trailing comma is incorrect, see [playground](https://black.vercel.app/?version=stable&state=_Td6WFoAAATm1rRGAgAhARYAAAB0L-Wj4ADcAH1dAD2IimZxl1N_WlbvK5V8JNI78iDZsHigDVyjE4tQWha-38JzxuA6NBKsp2kEtoZDrhhVy-9l-YBts85jfOzh4b304nwAUyJ1g-2SirojMNIYrAMgd5VvDgyl5O8GOXrT088SOzYVPBPYBGEPOKU2W0UtnaTHwP-n62bl9QsAAAAAAM5q0vB1zdcRAAGZAd0BAACVsaARscRn-wIAAAAABFla).
|
closed
|
2023-04-03T16:25:48Z
|
2023-06-16T00:08:28Z
|
https://github.com/psf/black/issues/3638
|
[
"T: bug"
] |
yilei
| 0
|
slackapi/bolt-python
|
fastapi
| 539
|
not_allowed_token_type when calling client.search_messages
|
(Describe your issue and goal here)
Trying to implement a feature where the app (bot) does a historical search in a channel for a given search string.
This API call requires a user token, and I'm not sure how to implement it with Bolt.
Any examples or advice on how to do this, since the app is instantiated with only a bot token and signing secret:
```
APP = App(
token=SLACK_BOT_TOKEN,
signing_secret=SLACK_SIGNING_SECRET,
process_before_response=True)
```
Thanks in advance!
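A hedged sketch (assuming a user token with the `search:read` scope is available, e.g. via an environment variable; the `/search-history` command name is hypothetical): `search.messages` rejects bot tokens, so a second client authed with the user token can issue the call while the Bolt app keeps its bot token:
```python
import os
from slack_sdk import WebClient

# Separate client authed with a user token (xoxp-...), since
# search.messages does not accept bot tokens.
user_client = WebClient(token=os.environ["SLACK_USER_TOKEN"])

@APP.command("/search-history")  # hypothetical slash command
def search_history(ack, command, respond):
    ack()
    result = user_client.search_messages(query=command["text"], count=20)
    matches = result["messages"]["matches"]
    respond(f"Found {len(matches)} matching messages.")
```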
### Reproducible in:
```bash
pip freeze | grep slack
python --version
sw_vers && uname -v # or `ver`
```
#### The `slack_bolt` version
(Paste the output of `pip freeze | grep slack`)
#### Python runtime version
(Paste the output of `python --version`)
#### OS info
(Paste the output of `sw_vers && uname -v` on macOS/Linux or `ver` on Windows OS)
#### Steps to reproduce:
(Share the commands to run, source code, and project settings (e.g., setup.py))
1.
2.
3.
### Expected result:
(Tell what you expected to happen)
### Actual result:
(Tell what actually happened with logs, screenshots)
## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
|
closed
|
2021-12-08T04:39:41Z
|
2023-10-27T07:10:30Z
|
https://github.com/slackapi/bolt-python/issues/539
|
[
"question"
] |
joshight
| 5
|
ultrafunkamsterdam/undetected-chromedriver
|
automation
| 1,702
|
Could not find a version that satisfies the requirement selenium>=4.9.0 (from undetected-chromedriver)
|
Installed Python: 3.6.15
Run: pip install undetected-chromedriver
Can you help me? Thanks.

|
open
|
2023-12-19T22:07:03Z
|
2024-01-23T04:11:36Z
|
https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1702
|
[] |
devhi0000
| 2
|
gradio-app/gradio
|
deep-learning
| 10,143
|
image.no_webcam_support
|
### Describe the bug
input_image = gr.Image(type='pil', label='图像', sources=['webcam'], interactive=True, show_fullscreen_button=True)
This code is deployed on the 192.168.0.3 server. When I access the project at 192.168.0.3:5019 from the 192.168.0.5 machine and click the webcam, it raises image.no_webcam_support. Why is that, and how should I fix it?
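One hedged explanation (an assumption, not confirmed from these logs): browsers only expose the webcam (`getUserMedia`) on secure origins, so `http://192.168.0.3:5019` viewed from another machine is blocked while `localhost` would work. A sketch of serving Gradio over HTTPS with a self-signed certificate (the certificate file names are placeholders):
```python
import gradio as gr

with gr.Blocks() as demo:
    input_image = gr.Image(type='pil', label='图像', sources=['webcam'],
                           interactive=True, show_fullscreen_button=True)

demo.launch(
    server_name="0.0.0.0",
    server_port=5019,
    ssl_certfile="cert.pem",  # placeholder self-signed certificate
    ssl_keyfile="key.pem",
    ssl_verify=False,         # accept the self-signed cert
)
```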
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr

input_image = gr.Image(type='pil', label='图像', sources=['webcam'], interactive=True, show_fullscreen_button=True)
```
### Screenshot
input_image = gr.Image(type='pil', label='图像', sources=['webcam'], interactive=True, show_fullscreen_button=True)
### Logs
```shell
input_image = gr.Image(type='pil', label='图像', sources=['webcam'], interactive=True, show_fullscreen_button=True)
```
### System Info
```shell
input_image = gr.Image(type='pil', label='图像', sources=['webcam'], interactive=True, show_fullscreen_button=True)
```
### Severity
Blocking usage of gradio
|
closed
|
2024-12-06T04:04:52Z
|
2024-12-06T14:41:23Z
|
https://github.com/gradio-app/gradio/issues/10143
|
[
"bug"
] |
geekplusaa
| 1
|
arogozhnikov/einops
|
tensorflow
| 60
|
Introduce anonymous axes into rearrange/reduce/repeat
|
closed
|
2020-08-23T17:41:39Z
|
2020-08-24T08:10:39Z
|
https://github.com/arogozhnikov/einops/issues/60
|
[] |
arogozhnikov
| 0
|
|
OpenInterpreter/open-interpreter
|
python
| 1,525
|
Code execution never comes back
|
### Describe the bug
I am trying to have my Python project execute code dynamically via OI, and every time it tries to run some shell script as described below, it never comes back. It gets stuck inside the method `for line in interpreter.computer.run(language, code, stream=True):`
This is the description generated.
1. First, I'll use SSH to connect to the server and run the disk usage command:
Code:
```ssh -i /root/blabla.pem root@192.168.1.53 "df -h"```
Filesystem Size Used Avail Use% Mounted on
tmpfs 778M 2.0M 776M 1% /run
/dev/sda2 440G 37G 380G 9% /
tmpfs 3.8G 0 3.8G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 778M 4.0K 778M 1% /run/user/1000
I think it gets stuck inside this loop because `self.done` never gets set.
```
while True:
    if not self.output_queue.empty():
        yield self.output_queue.get()
    else:
        time.sleep(0.1)
    try:
        output = self.output_queue.get(timeout=0.3)  # Waits for 0.3 seconds
        yield output
    except queue.Empty:
        if self.done.is_set():
            # Try to yank 3 more times from it... maybe there's something in there...
            # (I don't know if this actually helps. Maybe we just need to yank 1 more time)
            for _ in range(3):
                if not self.output_queue.empty():
                    yield self.output_queue.get()
                time.sleep(0.2)
            break
```
I am using version `open-interpreter = "^0.4.3"`
Can you please advise?
### Reproduce
I am using something like this:
```python
for chunk in interpreter.chat(
    f"{task_description}\n\n",
    stream=True,
    display=False
):
    print(chunk)  # print out the chunks
```
### Expected behavior
I would expect it to return, get out of the method.
### Screenshots
_No response_
### Open Interpreter version
0.4.3
### Python version
3.11.5
### Operating System name and version
mac 14
### Additional context
_No response_
|
open
|
2024-11-07T04:53:36Z
|
2024-11-07T04:53:36Z
|
https://github.com/OpenInterpreter/open-interpreter/issues/1525
|
[] |
guiramos
| 0
|
marshmallow-code/apispec
|
rest-api
| 201
|
Why doesn't Marshmallow Dict field resolve to additionalProperties?
|
Any Dict field in a schema seems to resolve to just 'type': 'object' in the spec output. Why not to additionalProperties?
```
class ASchema(MM.Schema):
    data = MM.fields.Dict(keys=MM.fields.String(), values=MM.fields.Nested(OtherSchema), required=True)
```
The YAML output for
```
spec.definition('A', ASchema, schema=ASchema)
```
is this:
```
data: {type: object}
```
Why not this:
```
data: {
type: object,
additionalProperties: {
$ref: '#/definitions/A'
}
}
```
|
closed
|
2018-05-04T10:43:18Z
|
2018-05-07T10:16:40Z
|
https://github.com/marshmallow-code/apispec/issues/201
|
[] |
UrKr
| 4
|
graphdeco-inria/gaussian-splatting
|
computer-vision
| 406
|
Just some advice about the install requirements
|
Hi there, I tried the conda env install, and I found I cannot install it normally by just running
```bash
conda env create --file environment.yml
```
At first, I thought the version was not compatible with my version of CUDA (yours in the file is 11.6, and mine is 11.7 on the system), so I changed the file to 11.7 and torch to 1.13.1, etc.
Then I got an error like "NO RUNTIME CUDA in /usr/local/cuda-11.7"; I believe many people have met this problem.
So I changed the procedure: I first create a conda env with Python 3.9, install torch 1.13.1 built for CUDA 11.7, then install the other packages respectively, and finally install the submodules. Everything works fine on my computer.
Here is my commands:
```bash
conda create -n gs python=3.9
conda activate gs
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.7 -c pytorch -c nvidia
pip install plyfile tqdm
cd THEDIROF3DGS
cd submodules/
cd diff-gaussian-rasterization/
pip install .
cd ../simple-knn
pip install .
```
Just some advice for those who have met the same problem.
|
closed
|
2023-10-30T02:14:25Z
|
2024-08-06T02:15:27Z
|
https://github.com/graphdeco-inria/gaussian-splatting/issues/406
|
[] |
StarsTesla
| 7
|
scikit-learn/scikit-learn
|
machine-learning
| 30,139
|
The input_tags.sparse flag is often incorrect
|
### Describe the bug
If I understood the developer API for tags correctly, `input_tags.sparse` tells us whether an estimator can accept sparse data or not. For many estimators it seems that `input_tags.sparse` is False but should be True.
### Steps/Code to Reproduce
```python
from sklearn.linear_model import LinearRegression
from sklearn.utils import get_tags
reg = LinearRegression()
tags = get_tags(reg)
tags.input_tags.sparse
```
### Expected Results
`True` as `LinearRegression` accepts sparse input data.
### Actual Results
`False`
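For reference, a hedged sketch of how an estimator opts in under this developer API (assuming the 1.6 `__sklearn_tags__` mechanism):
```python
from sklearn.base import BaseEstimator


class SparseFriendlyEstimator(BaseEstimator):
    def __sklearn_tags__(self):
        tags = super().__sklearn_tags__()
        tags.input_tags.sparse = True  # declare sparse input support
        return tags
```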
### Versions
```shell
System:
python: 3.12.5 | packaged by conda-forge | (main, Aug 8 2024, 18:32:50) [Clang 16.0.6 ]
executable: /Users/abaker/miniforge3/envs/sklearn-dev/bin/python
machine: macOS-14.5-arm64-arm-64bit
Python dependencies:
sklearn: 1.6.dev0
pip: 24.2
setuptools: 73.0.1
numpy: 2.1.0
scipy: 1.14.1
Cython: 3.0.11
pandas: 2.2.2
matplotlib: 3.9.2
joblib: 1.4.2
threadpoolctl: 3.5.0
Built with OpenMP: True
threadpoolctl info:
user_api: blas
internal_api: openblas
num_threads: 8
prefix: libopenblas
filepath: /Users/abaker/miniforge3/envs/sklearn-dev/lib/libopenblas.0.dylib
version: 0.3.27
threading_layer: openmp
architecture: VORTEX
user_api: openmp
internal_api: openmp
num_threads: 8
prefix: libomp
filepath: /Users/abaker/miniforge3/envs/sklearn-dev/lib/libomp.dylib
version: None
```
|
closed
|
2024-10-23T16:03:08Z
|
2025-01-02T12:06:20Z
|
https://github.com/scikit-learn/scikit-learn/issues/30139
|
[
"Bug",
"Developer API"
] |
antoinebaker
| 3
|
CorentinJ/Real-Time-Voice-Cloning
|
tensorflow
| 709
|
Advice Needed for Improving Naturalness
|
Hello, I am studying this repo as a part of my research project.
Thank you for sharing this amazing repo.
I have read some instructions in #437.
I have collected around 0.3 hours of training data (not clean, with background music) from a speaker that is not seen in the pretrained model.
The speaker is an Indian American. His voice cannot be cloned well by the pretrained models. (Maybe because of his accent?)
Thus I tried the single-speaker finetune.
I uploaded some audio samples:
[Sample_360K.zip](https://github.com/CorentinJ/Real-Time-Voice-Cloning/files/6187014/Sample_360K.zip)
I also uploaded some of the real audio clips for your reference:
[Data.zip](https://github.com/CorentinJ/Real-Time-Voice-Cloning/files/6187015/Data.zip)
I fine-tuned the synthesizer up to 360k steps, and at this point the average loss is around 0.31.
I feel it is near a local minimum.
I am quite satisfied with the similarity during my subjective listening.
As an objective measure, the similarity score measured by this repo is also quite good:
https://github.com/resemble-ai/Resemblyzer
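For reference, a minimal sketch of how that similarity score can be computed with Resemblyzer (the wav paths are hypothetical):
```python
import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()
# Embeddings are L2-normalized, so the dot product is the cosine similarity.
real = encoder.embed_utterance(preprocess_wav("real_clip.wav"))
cloned = encoder.embed_utterance(preprocess_wav("cloned_clip.wav"))
print(f"similarity: {np.dot(real, cloned):.3f}")
```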
But I notice that the audio generated is a little bit unnatural.
I tried finetuning the vocoder, but that did not seem practical, as the loss was not decreasing.
Is there any way I can improve the naturalness?
So far I can only think of starting a new training run to see if it can reach a better minimum.
May I also ask whether there is any suggested metric for measuring speech naturalness?
I do not have the budget to conduct a MOS test.
|
closed
|
2021-03-23T05:36:18Z
|
2021-03-31T10:26:50Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/709
|
[] |
chankl3579
| 1
|
erdewit/ib_insync
|
asyncio
| 442
|
positions() and portfolio() inconsistent
|
For our live trading account, ib.positions() and ib.portfolio() return inconsistent results, per below:
```python
>>> from ib_insync import *
>>> util.startLoop()
>>> ib = IB()
>>> ib.connect('127.0.0.1', 7496, clientId=1)
<IB connected to 127.0.0.1:7496 clientId=1>
>>> len(ib.positions())
5
>>> len(ib.portfolio())
0
```
Why is len(ib.portfolio()) 0, while len(ib.positions()) is 5?
The true number of portfolio items is 5.
Thanks for any advice!

|
closed
|
2022-02-11T22:20:55Z
|
2022-04-28T11:42:55Z
|
https://github.com/erdewit/ib_insync/issues/442
|
[] |
marksandstrom
| 1
|
jupyter/nbgrader
|
jupyter
| 1,428
|
Proposal/Request/Help - Step-by-step guide to NbGrader for multiple courses
|
It's taking me months to deploy Nbgrader for multiple courses and instructors on a cloud server (AWS in my case).
I started this journey in January, and I've learned a lot. I had no previous knowledge of Linux, of managing users with PAM, or of creating a JupyterHub.
Today I have successfully achieved the One-Class-One-Instructor example, with GitHub OAuth. I have also configured the log file option for Nbgrader. But looking at the One Class, Multiple Graders example, there are just too many things I don't know how to do.
The example of creating a "shared notebook server as a JupyterHub service" has some steps I'm unable to follow.
Can you shed some light on this? I'm feeling overwhelmed.
|
open
|
2021-04-01T13:59:22Z
|
2021-05-05T23:19:29Z
|
https://github.com/jupyter/nbgrader/issues/1428
|
[
"question"
] |
Tebinski
| 4
|
mirumee/ariadne
|
api
| 952
|
GraphQL websocket handler not working on Ariadne/Starlette (object is not callable)
|
I'm using Uvicorn and Starlette with Ariadne for GraphQL queries, but the GraphQL websocket handler does not seem to be working.
I'm using poetry so this is what my pyproject.toml file looks like:
```toml
[tool.poetry]
name = "app-backend"
version = "0.1.0"
description = ""
authors = ["user <email@gmail.com>"]
license = "BSD-2"
packages = [
{ include="api", from="." }
]
[tool.poetry.dependencies]
python = "^3.10"
ariadne = "=0.16.1"
alembic = "1.8.1"
psycopg2-binary = "2.9.4"
PyJWT = "^2.3.0"
starlette = "0.20.4"
"marrow.mailer" = "^4.0.3"
gunicorn = "^20.1.0"
python-multipart = "^0.0.5"
SQLAlchemy = "1.4.42"
rauth = "^0.7.3"
aiohttp = "3.8.3"
aiofiles = "22.1.0"
SQLAlchemy-Utils = "^0.38.3"
BroadCast = "^1.1.2"
broadcaster = {extras = ["postgres"], version = "^0.2.0"}
websockets = "^10.3"
graphql-core = "3.2.3"
anyio = "3.6.2"
uvicorn = {extras = ["standard"], version = "^0.19.0"}
pytest = "^7.1.3"
[tool.poetry.scripts]
init_app = "api.scripts.initial_config:main"
[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
```
This is my code. I've used GraphQLTransportWSHandler in the manner documented [here](https://ariadnegraphql.org/docs/subscriptions#graphql-ws).
```python
import asyncio

import uvicorn
from ariadne import MutationType, make_executable_schema
from ariadne.asgi import GraphQL
from ariadne.asgi.handlers import GraphQLTransportWSHandler
from starlette.applications import Starlette
from starlette.routing import Route, WebSocketRoute

type_defs = """
type Query {
    _unused: Boolean
}

type Message {
    sender: String
    message: String
}

type Mutation {
    send(sender: String!, message: String!): Boolean
}

type Subscription {
    message: Message
}
"""

mutation = MutationType()
schema = make_executable_schema(type_defs, mutation)

graphql_handler = GraphQL(
    schema=schema,
    debug=True,
    introspection=True,
    websocket_handler=GraphQLTransportWSHandler(),
    logger="admin.graphql"
)

routes = [
    Route("/api/graphql", graphql_handler, name="graphql"),
    WebSocketRoute("/api/subscriptions", endpoint=graphql_handler.websocket_handler, name="graphqlws"),
]

app = Starlette(routes=routes)

async def main():
    config = uvicorn.Config("main:app", port=8000, log_level="info", log_config="log.json", ws="websockets")
    server = uvicorn.Server(config)
    await server.serve()

if __name__ == "__main__":
    asyncio.run(main())
```
Which I'm running like this:
`poetry run python3 main2.py`
But trying to open a new websocket connection seems to fail:
`new WebSocket('ws://localhost:8000/api/subscriptions');`
I get the following error:
```
Traceback (most recent call last):
File "<...>/lib/python3.10/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 230, in run_asgi
result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
File "<...>/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "<...>/lib/python3.10/site-packages/starlette/applications.py", line 124, in __call__
await self.middleware_stack(scope, receive, send)
File "<...>/lib/python3.10/site-packages/starlette/middleware/errors.py", line 149, in __call__
await self.app(scope, receive, send)
File "<...>/lib/python3.10/site-packages/starlette/middleware/cors.py", line 76, in __call__
await self.app(scope, receive, send)
File "<...>/lib/python3.10/site-packages/starlette/middleware/authentication.py", line 48, in __call__
await self.app(scope, receive, send)
File "<...>/lib/python3.10/site-packages/starlette/middleware/base.py", line 24, in __call__
await self.app(scope, receive, send)
File "<...>/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 51, in __call__
await self.app(scope, receive, send)
File "<...>/lib/python3.10/site-packages/starlette/routing.py", line 680, in __call__
await route.handle(scope, receive, send)
File "<...>/lib/python3.10/site-packages/starlette/routing.py", line 334, in handle
await self.app(scope, receive, send)
TypeError: 'GraphQLTransportWSHandler' object is not callable
```
The error seems to come from this line:
```python
async def handle(self, scope: Scope, receive: Receive, send: Send) -> None:
    await self.app(scope, receive, send)
```
Which attempts to call an `ariadne.asgi.handlers.graphql_transport_ws.GraphQLTransportWSHandler` object, but fails because the object is not callable.
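One plausible fix (hedged; it relies on the `GraphQL` instance being a full ASGI app that dispatches websocket scopes to its configured `websocket_handler` internally, per the Ariadne docs linked above) is to route the websocket endpoint to the app itself rather than to the handler attribute:
```python
# Sketch: let the GraphQL ASGI app handle the websocket scope directly.
routes = [
    Route("/api/graphql", graphql_handler, name="graphql"),
    WebSocketRoute("/api/subscriptions", endpoint=graphql_handler, name="graphqlws"),
]
```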
|
closed
|
2022-10-21T15:57:19Z
|
2022-12-05T17:31:55Z
|
https://github.com/mirumee/ariadne/issues/952
|
[] |
fdemian
| 5
|
nvbn/thefuck
|
python
| 1,443
|
[Suggestion]
|
Please add another option for recursive:
`--itall` or
`--everything`
|
open
|
2024-04-29T03:23:14Z
|
2024-04-29T03:23:14Z
|
https://github.com/nvbn/thefuck/issues/1443
|
[] |
celtboy
| 0
|
sktime/pytorch-forecasting
|
pandas
| 1,663
|
[MNT] `conda-forge` releases - 0.10.3, 1.0.0, 1.1.0 or 1.1.1
|
The conda feedstock is years out of date; the last release there is 0.10.2. We should get the newest releases available on `conda-forge` as well: 0.10.3, 1.0.0, 1.1.0, 1.1.1.
https://github.com/conda-forge/pytorch-forecasting-feedstock
Re 1.1.0, we may want to avoid this due to the package name bug.
|
closed
|
2024-09-08T19:38:23Z
|
2025-01-12T16:54:21Z
|
https://github.com/sktime/pytorch-forecasting/issues/1663
|
[
"maintenance",
"release"
] |
fkiraly
| 4
|
sepandhaghighi/samila
|
matplotlib
| 66
|
Discord Channel
|
#### Description
A Discord channel would be a nice place for `Samila`'s users to share their experiences using Samila, request new features, report issues, etc.
We can make a Discord badge linking to our channel using [this](https://github.com/coderplex-org/coderplex/issues/150) tutorial.
|
closed
|
2021-10-29T14:47:29Z
|
2021-11-10T13:57:17Z
|
https://github.com/sepandhaghighi/samila/issues/66
|
[
"enhancement"
] |
sadrasabouri
| 0
|
tableau/server-client-python
|
rest-api
| 763
|
Urgent: Filter doesn't recognize multi-word option
|
Hi!
That's urgent. Please take a look!
I have many automated tasks using TSC, created back in 2019. After a recent package update (downloaded new TSC and updated anaconda3/pkgs), filtering by a name containing whitespace doesn't work.
For example,
My Tableau site contains the projects 'SignWorks' and 'Labor Statistics'.
Use these values for the variable ind_prj in the filter option below:
```python
req_option = TSC.RequestOptions()
req_option.filter.add(TSC.Filter(TSC.RequestOptions.Field.Name,
                                 TSC.RequestOptions.Operator.Equals,
                                 ind_prj))
pub_projects, pagination_item = server_pub.projects.get(req_option)
if pub_projects:
    project_name = pub_projects.pop().name
    print(project_name)
else:
    error = "No project named '{}' found".format(ind_prj)
    print(error)
```
Only the project 'SignWorks' is returned.
'Labor Statistics' is returned only when compared explicitly, like here:
```python
all_projects, pagination_item = server_pub.projects.get()
for project in all_projects:
    if project.name == ind_prj:
        print(f'\tProject found: {project.name}')
```
Will greatly appreciate your help with this!
Thank you,
Elena
|
closed
|
2020-12-17T16:45:52Z
|
2022-06-02T03:46:07Z
|
https://github.com/tableau/server-client-python/issues/763
|
[
"bug"
] |
ElenaSemMD
| 6
|
apache/airflow
|
data-science
| 47,579
|
Calling a BigQuery Stored Procedure aggregates all inputs & outputs at the Airflow Task level
|
### Apache Airflow Provider(s)
openlineage, google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==10.26.0
apache-airflow-providers-openlineage==2.1.0
### Apache Airflow version
2.10.5
### Operating System
macOS Sequoia Version 15.2 (24C101)
### Deployment
Astronomer
### Deployment details
FROM quay.io/astronomer/astro-runtime:12.7.1
### What happened
When I call a BigQuery routine (stored procedure) that contains multiple CTAS statements using `BigQueryInsertJobOperator`, OpenLineage produces a COMPLETE Task event with aggregated inputs and outputs. The problem with this is that you don't get lineage at the "sub-task" CTAS level. For example,
CTAS 1: creates output_table1 from input_table1
CTAS 2: creates output_table2 from input_table2
mock OL event:
```json
{
"inputs": [
{
"namespace": "bigquery",
"name": "input_table1"
},
{
"namespace": "bigquery",
"name": "input_table2"
}
],
"outputs": [
{
"namespace": "bigquery",
"name": "output_table1"
},
{
"namespace": "bigquery",
"name": "output_table2"
}
]
}
```
Now it looks like input_table1 and 2 went to both output tables, but that isn't accurate from the "sub-task" perspective.
### What you think should happen instead
I'm not certain if this issue constitutes a bug or rather a potential design or implementation gap. The stored procedure operates at a 'sub-task' level, but OpenLineage appears to only support events at the DAG and Task levels. I'm unaware if there have been prior discussions on this topic, but ideally, it would be beneficial to obtain sub-task events from OpenLineage that provide a breakdown of inputs and outputs, aligning with the read and write operations performed by the CTAS statements in the stored procedure.
### How to reproduce
DAG:
```python
from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryInsertJobOperator
)
from datetime import datetime

dag = DAG(
    dag_id="dag_execute_bq_stored_proc",
    schedule_interval=None,
    start_date=datetime(2025, 3, 4),  # Start date
)

stored_proc = (
    "<bq-project-id>.<bq-dataset>.<bq-routine>"
)

task1 = BigQueryInsertJobOperator(
    task_id="task1",
    gcp_conn_id="bq_conn",
    configuration={
        "query": {
            "query": f"CALL `{stored_proc}`();",
            "useLegacySql": False,
            "priority": "BATCH",
        }
    },
    dag=dag,
)

task1
```
BigQuery Routine/Stored Procedure
```sql
BEGIN
  -- Create a dummy table
  CREATE TEMP TABLE dummy_table AS
  SELECT 1 AS id, 'row1' AS value
  UNION ALL
  SELECT 2 AS id, 'row2' AS value
  UNION ALL
  SELECT 3 AS id, 'row3' AS value;

  -- Create a dummy table
  CREATE TEMP TABLE dummy_table2 AS
  SELECT 1 AS id, 'row1' AS value
  UNION ALL
  SELECT 2 AS id, 'row2' AS value
  UNION ALL
  SELECT 3 AS id, 'row3' AS value;

  -- Read the dummy table and create another table via CTAS
  CREATE OR REPLACE TABLE <bq-dataset>.another_table AS
  SELECT * FROM dummy_table;

  CREATE OR REPLACE TABLE <bq-dataset>.another_table2 AS
  SELECT * FROM dummy_table2;
END
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
|
open
|
2025-03-10T16:59:11Z
|
2025-03-19T16:36:02Z
|
https://github.com/apache/airflow/issues/47579
|
[
"kind:bug",
"provider:google",
"area:providers",
"needs-triage",
"provider:openlineage"
] |
luke-hoffman1
| 5
|
aminalaee/sqladmin
|
sqlalchemy
| 480
|
Update Fontawesome Version
|
### Checklist
- [X] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
Version 6.0 was published over a year ago
### Describe the solution you would like.
Update to recent Version 6.4
### Describe alternatives you considered
_No response_
### Additional context
_No response_
|
closed
|
2023-04-27T08:31:01Z
|
2023-04-29T09:15:00Z
|
https://github.com/aminalaee/sqladmin/issues/480
|
[] |
KreppelKlatscher
| 1
|
mwouts/itables
|
jupyter
| 299
|
`OverflowError: can't convert negative int to unsigned`
|
After converting a lazy frame's epoch column to a datetime, an exception is thrown by `itables` v2.1.3:
```python
lf = ...  # contains a ts_event column with UInt64 type
lf.collect()  # displays the data frame nicely
lf.with_columns(polars.from_epoch("ts_event", "ns")).collect()  # problematic
```
Traceback:
```python
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
File ~/mambaforge-pypy3/envs/proj/lib/python3.11/site-packages/itables/datatables_format.py:98, in datatables_rows(df, count, warn_on_unexpected_types, pure_json)
94 try:
95 # Pandas DataFrame
96 data = list(
97 zip(
---> 98 *(empty_columns + [_format_column(x, pure_json) for _, x in df.items()])
99 )
100 )
101 has_bigints = any(
102 x.dtype.kind == "i"
103 and ((x > JS_MAX_SAFE_INTEGER).any() or (x < JS_MIN_SAFE_INTEGER).any())
104 for _, x in df.items()
105 )
AttributeError: 'DataFrame' object has no attribute 'items'
During handling of the above exception, another exception occurred:
OverflowError Traceback (most recent call last)
File ~/mambaforge-pypy3/envs/proj/lib/python3.11/site-packages/IPython/core/formatters.py:347, in BaseFormatter.__call__(self, obj)
345 method = get_real_method(obj, self.print_method)
346 if method is not None:
--> 347 return method()
348 return None
349 else:
File ~/mambaforge-pypy3/envs/proj/lib/python3.11/site-packages/itables/javascript.py:305, in _datatables_repr_(df)
304 def _datatables_repr_(df):
--> 305 return to_html_datatable(df, connected=_CONNECTED)
File ~/mambaforge-pypy3/envs/proj/lib/python3.11/site-packages/itables/javascript.py:433, in to_html_datatable(df, caption, tableId, connected, use_to_html, **kwargs)
430 # When the header has an extra column, we add
431 # an extra empty column in the table data #141
432 column_count = _column_count_in_header(table_header)
--> 433 dt_data = datatables_rows(
434 df,
435 column_count,
436 warn_on_unexpected_types=warn_on_unexpected_types,
437 )
439 return html_table_from_template(
440 table_header,
441 table_id=tableId,
(...)
445 column_filters=column_filters,
446 )
File ~/mambaforge-pypy3/envs/proj/lib/python3.11/site-packages/itables/datatables_format.py:116, in datatables_rows(df, count, warn_on_unexpected_types, pure_json)
113 data = list(df.iter_rows())
114 import polars as pl
--> 116 has_bigints = any(
117 x.dtype in [pl.Int64, pl.UInt64]
118 and ((x > JS_MAX_SAFE_INTEGER).any() or (x < JS_MIN_SAFE_INTEGER).any())
119 for x in (df[col] for col in df.columns)
120 )
121 js = json.dumps(data, cls=generate_encoder(False), allow_nan=not pure_json)
123 if has_bigints:
File ~/mambaforge-pypy3/envs/proj/lib/python3.11/site-packages/itables/datatables_format.py:118, in <genexpr>(.0)
113 data = list(df.iter_rows())
114 import polars as pl
116 has_bigints = any(
117 x.dtype in [pl.Int64, pl.UInt64]
--> 118 and ((x > JS_MAX_SAFE_INTEGER).any() or (x < JS_MIN_SAFE_INTEGER).any())
119 for x in (df[col] for col in df.columns)
120 )
121 js = json.dumps(data, cls=generate_encoder(False), allow_nan=not pure_json)
123 if has_bigints:
File ~/mambaforge-pypy3/envs/proj/lib/python3.11/site-packages/polars/series/series.py:800, in Series.__lt__(self, other)
798 if isinstance(other, pl.Expr):
799 return F.lit(self).__lt__(other)
--> 800 return self._comp(other, "lt")
File ~/mambaforge-pypy3/envs/proj/lib/python3.11/site-packages/polars/series/series.py:749, in Series._comp(self, other, op)
746 if f is None:
747 return NotImplemented
--> 749 return self._from_pyseries(f(other))
OverflowError: can't convert negative int to unsigned
```
|
closed
|
2024-06-25T14:35:02Z
|
2024-07-03T22:27:20Z
|
https://github.com/mwouts/itables/issues/299
|
[] |
jmakov
| 2
|
wkentaro/labelme
|
deep-learning
| 381
|
how to get white masks?
|
Thank you for your contribution. I'm using this tool to get the masks. However, when I run `labelme_json_to_dataset 000001.json`, the dataset I get contains a red mask on a black background. How do I get a white mask on a black background? Please help me, thank you.
By the way, I have tried to change from
```python
colormap = label_colormap(255)
lbl_pil.putpalette((colormap * 255).astype(np.uint8).flatten())
```
to
```python
colormap = np.ones((255, 3), dtype=float)
colormap[0] = [0, 0, 0]
lbl_pil.putpalette((colormap * 255).astype(np.uint8).flatten())
```
but it does not work.
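For what it's worth, here is a minimal post-processing sketch (not part of labelme; the file names are hypothetical) that turns the indexed label image into a white-on-black mask:
```python
import numpy as np
from PIL import Image

# Read the indexed label PNG produced by labelme_json_to_dataset.
lbl = np.asarray(Image.open("label.png"))
# Any labeled pixel (index > 0) becomes white; the background stays black.
mask = (lbl > 0).astype(np.uint8) * 255
Image.fromarray(mask).save("label_white.png")
```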
|
closed
|
2019-04-22T12:37:35Z
|
2021-08-17T09:36:13Z
|
https://github.com/wkentaro/labelme/issues/381
|
[] |
SonOfCoding
| 6
|
pydantic/logfire
|
pydantic
| 210
|
Incompatible with newrelic-admin CLI: "Couldn't build proto file into descriptor pool: duplicate file name opentelemetry/proto/common/v1/common.proto"
|
### Description
Hey all, thanks for your work and the fantastic product.
I was trying out logfire, and it works fine locally, but sadly it breaks as soon as I put it on a GKE / Kubernetes pod (using the same Docker container). I have a fairly standard FastAPI application on Python 3.12.
EDIT: See first comment for the probable cause
The error is the following:
```
Traceback (most recent call last):
File "/usr/local/bin/uvicorn", line 8, in <module>
sys.exit(main())
^^^^^^
File "/usr/local/lib/python3.12/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/uvicorn/main.py", line 409, in main
run(
File "/usr/local/lib/python3.12/site-packages/uvicorn/main.py", line 575, in run
server.run()
File "/usr/local/lib/python3.12/site-packages/uvicorn/server.py", line 65, in run
return asyncio.run(self.serve(sockets=sockets))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/asyncio/runners.py", line 194, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete
File "/usr/local/lib/python3.12/site-packages/uvicorn/server.py", line 69, in serve
await self._serve(sockets)
File "/usr/local/lib/python3.12/site-packages/uvicorn/server.py", line 76, in _serve
config.load()
File "/usr/local/lib/python3.12/site-packages/uvicorn/config.py", line 433, in load
self.loaded_app = import_from_string(self.app)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/uvicorn/importer.py", line 19, in import_from_string
module = importlib.import_module(module_str)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/importlib/__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 995, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/home/xxx/xxx/xxx/app.py", line 3, in <module>
from fastapi import Depends, FastAPI, Request
File "/usr/local/lib/python3.12/site-packages/fastapi/__init__.py", line 7, in <module>
from .applications import FastAPI as FastAPI
File "/usr/local/lib/python3.12/site-packages/fastapi/applications.py", line 16, in <module>
from fastapi import routing
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "/usr/local/lib/python3.12/site-packages/newrelic/api/import_hook.py", line 174, in exec_module
self.loader.exec_module(module)
File "/usr/local/lib/python3.12/site-packages/fastapi/routing.py", line 22, in <module>
from fastapi import params
File "/usr/local/lib/python3.12/site-packages/fastapi/params.py", line 5, in <module>
from fastapi.openapi.models import Example
File "/usr/local/lib/python3.12/site-packages/fastapi/openapi/models.py", line 4, in <module>
from fastapi._compat import (
File "/usr/local/lib/python3.12/site-packages/fastapi/_compat.py", line 20, in <module>
from fastapi.exceptions import RequestErrorModel
File "/usr/local/lib/python3.12/site-packages/fastapi/exceptions.py", line 139, in <module>
RequestErrorModel: Type[BaseModel] = create_model("Request")
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/pydantic/main.py", line 1531, in create_model
return meta(
^^^^^
File "/usr/local/lib/python3.12/site-packages/pydantic/_internal/_model_construction.py", line 202, in __new__
complete_model_class(
File "/usr/local/lib/python3.12/site-packages/pydantic/_internal/_model_construction.py", line 557, in complete_model_class
cls.__pydantic_validator__ = create_schema_validator(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/pydantic/plugin/_schema_validator.py", line 37, in create_schema_validator
plugins = get_plugins()
^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/pydantic/plugin/_loader.py", line 47, in get_plugins
_plugins[entry_point.value] = entry_point.load()
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/importlib/metadata/__init__.py", line 205, in load
module = import_module(match.group('module'))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/importlib/__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/logfire/__init__.py", line 9, in <module>
from ._internal.config import (
File "/usr/local/lib/python3.12/site-packages/logfire/_internal/config.py", line 22, in <module>
from opentelemetry.exporter.otlp.proto.http.metric_exporter import OTLPMetricExporter
File "/usr/local/lib/python3.12/site-packages/opentelemetry/exporter/otlp/proto/http/metric_exporter/__init__.py", line 25, in <module>
from opentelemetry.exporter.otlp.proto.common._internal import (
File "/usr/local/lib/python3.12/site-packages/opentelemetry/exporter/otlp/proto/common/_internal/__init__.py", line 31, in <module>
from opentelemetry.proto.common.v1.common_pb2 import (
File "/usr/local/lib/python3.12/site-packages/opentelemetry/proto/common/v1/common_pb2.py", line 17, in <module>
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n*opentelemetry/proto/common/v1/common.proto\x12\x1dopentelemetry.proto.common.v1\"\x8c\x02\n\x08\x41nyValue\x12\x16\n\x0cstring_value\x18\x01 \x01(\tH\x00\x12\x14\n\nbool_value\x18\x02 \x01(\x08H\x00\x12\x13\n\tint_value\x18\x03 \x01(\x03H\x00\x12\x16\n\x0c\x64ouble_value\x18\x04 \x01(\x01H\x00\x12@\n\x0b\x61rray_value\x18\x05 \x01(\x0b\x32).opentelemetry.proto.common.v1.ArrayValueH\x00\x12\x43\n\x0ckvlist_value\x18\x06 \x01(\x0b\x32+.opentelemetry.proto.common.v1.KeyValueListH\x00\x12\x15\n\x0b\x62ytes_value\x18\x07 \x01(\x0cH\x00\x42\x07\n\x05value\"E\n\nArrayValue\x12\x37\n\x06values\x18\x01 \x03(\x0b\x32\'.opentelemetry.proto.common.v1.AnyValue\"G\n\x0cKeyValueList\x12\x37\n\x06values\x18\x01 \x03(\x0b\x32\'.opentelemetry.proto.common.v1.KeyValue\"O\n\x08KeyValue\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\x36\n\x05value\x18\x02 \x01(\x0b\x32\'.opentelemetry.proto.common.v1.AnyValue\"\x94\x01\n\x14InstrumentationScope\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0f\n\x07version\x18\x02 \x01(\t\x12;\n\nattributes\x18\x03 \x03(\x0b\x32\'.opentelemetry.proto.common.v1.KeyValue\x12 \n\x18\x64ropped_attributes_count\x18\x04 \x01(\rB{\n io.opentelemetry.proto.common.v1B\x0b\x43ommonProtoP\x01Z(go.opentelemetry.io/proto/otlp/common/v1\xaa\x02\x1dOpenTelemetry.Proto.Common.V1b\x06proto3')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Couldn't build proto file into descriptor pool: duplicate file name opentelemetry/proto/common/v1/common.proto
```
I don't have opentelemetry among my standard dependencies, the opentelemetry version is automatically resolved by poetry when installing logfire with `poetry add logfire[fastapi, psycopg]`. (I use poetry as a package manager & export versions to requirements. The container only sees the exported requirements.txt).
~~A possible issue may be a conflicting protobuf version installed automatically from google cloud dependencies (protobuf==4.25.3)?~~
As an additional info: the same error happens on a similar configuration where I have installed logfire on a Flask app (as opposed to fastapi): as soon as I import any pydantic model it crashes. (In this case, I'm not even importing logfire)
### Python, Logfire & OS Versions, related packages (not required)
```TOML
logfire="0.35.0"
platform="Linux-5.14.0-1054-oem-x86_64-with-glibc2.31"
python="3.12.3 (main, May 14 2024, 07:54:30) [GCC 10.2.1 20210110]"
[related_packages]
requests="2.32.2"
pydantic="2.7.0"
protobuf="4.25.3"
rich="13.7.1"
executing="2.0.1"
opentelemetry-api="1.24.0"
opentelemetry-exporter-otlp-proto-common="1.24.0"
opentelemetry-exporter-otlp-proto-http="1.24.0"
opentelemetry-instrumentation="0.45b0"
opentelemetry-instrumentation-asgi="0.45b0"
opentelemetry-instrumentation-dbapi="0.45b0"
opentelemetry-instrumentation-fastapi="0.45b0"
opentelemetry-instrumentation-psycopg="0.45b0"
opentelemetry-proto="1.24.0"
opentelemetry-sdk="1.24.0"
opentelemetry-semantic-conventions="0.45b0"
opentelemetry-util-http="0.45b0"
```
|
open
|
2024-05-24T14:54:48Z
|
2025-03-19T14:30:08Z
|
https://github.com/pydantic/logfire/issues/210
|
[
"bug"
] |
Giuzzilla
| 11
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
deep-learning
| 1,629
|
What can I do about the low quality of the generated images?
|
The generated images are of very low quality; they even look a bit like abstract art, so to speak. What could be going on?
|
open
|
2024-03-09T08:02:07Z
|
2024-03-09T08:02:07Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1629
|
[] |
LauraABCD
| 0
|
KevinMusgrave/pytorch-metric-learning
|
computer-vision
| 737
|
Allow string for `device` argument of AccuracyCalculator
|
Calling `AccuracyCalculator(device="cpu")` instead of `AccuracyCalculator(device=torch.device("cpu"))` crashes with an unclear exception (`AttributeError: 'str' object has no attribute 'type'`).
It would be better to check `isinstance(device, str)` and, if so, do `device = torch.device(device)`.
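A minimal sketch of the suggested coercion (placement and names are illustrative, not the library's actual code):
```python
import torch

def coerce_device(device):
    # Accept both "cpu" / "cuda:0" strings and torch.device objects.
    if isinstance(device, str):
        device = torch.device(device)
    return device

assert coerce_device("cpu") == torch.device("cpu")
```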
|
open
|
2024-12-21T13:34:06Z
|
2024-12-21T13:39:49Z
|
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/737
|
[
"enhancement"
] |
KevinMusgrave
| 1
|
jackzhenguo/python-small-examples
|
tensorflow
| 33
|
Example 23 (is str1 a permutation of str2): the for loop here is wrong
|
The key part is as follows:
```python
for c1 in str1:
    unq_s1[c1] += 1
for c2 in str2:
    unq_s2[c2] += 1
```
It should be changed to this, right?
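For comparison, an equivalent check using `collections.Counter` (not from the original example; just a concise alternative):
```python
from collections import Counter

def is_permutation(str1: str, str2: str) -> bool:
    # Two strings are permutations of each other iff their character counts match.
    return Counter(str1) == Counter(str2)

print(is_permutation("abc", "cab"))  # True
print(is_permutation("abc", "abd"))  # False
```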
|
closed
|
2020-04-01T07:34:04Z
|
2020-04-01T07:55:24Z
|
https://github.com/jackzhenguo/python-small-examples/issues/33
|
[] |
xin7c
| 1
|
plotly/dash-core-components
|
dash
| 385
|
Split up integration tests per component
|
Something I've wanted to do for a while is split up the integration tests per component: have all integration tests related to the Tabs component in a separate file, for example, so that you can run just the integration tests related to Tabs. It's a small improvement that will help with development and maintenance. Right now, if you want to check whether the integration of the Tabs component has changed, you have to either run all integration tests or manually pick out and run the ones related to the Tabs component.
|
open
|
2018-11-15T15:25:24Z
|
2018-11-15T15:25:24Z
|
https://github.com/plotly/dash-core-components/issues/385
|
[
"dash-type-enhancement"
] |
valentijnnieman
| 0
|
ranaroussi/yfinance
|
pandas
| 2,033
|
Calculating the yf.download function in a pandas column
|
Hello!
The script downloads stock quotes from Yahoo Finance (yfinance module). It then downloads the quarterly option chain. Next, the script calculates and adds the lower and upper strike columns and the lower and upper option ticker columns. In the penultimate line, the script adds the option Close price column, looked up by the ticker in the 'Call_lower_ticker' column as of the dates in Stocks.index (using the yf.download function).
Everything works correctly until the penultimate line: `Stocks['Call_lower_price'] = Stocks['Call_lower_ticker'].apply(yf.download(['Call_lower_ticker'], start=Stocks.index)['Close'].iloc[0])`
1. How do I make the function work?
2. And is it possible to download historical data for yf.option.chain?
**Code:**
```python
%%time
import yfinance as yf
import pandas as pd
import warnings
import datetime

warnings.filterwarnings("ignore", message="The 'unit' keyword in TimedeltaIndex construction is deprecated and will be removed in a future version. Use pd.to_timedelta instead.", category=FutureWarning, module="yfinance.utils")

Stocks = yf.download('SPY', period="1y", interval="1d", group_by='ticker')
stock = yf.Ticker('SPY')
Expirations = stock.options
options_chain = stock.option_chain('2024-09-20')
Calls_desk = options_chain.calls
Stocks['110_for_Call'] = Stocks['Close']*1.1
Stocks['Call_lower_strike'] = Stocks['110_for_Call'].apply(lambda x: Calls_desk.iloc[Calls_desk[Calls_desk['strike'] < x]['strike'].idxmax()]['strike'])
Stocks['Call_upper_strike'] = Stocks['110_for_Call'].apply(lambda x: Calls_desk.iloc[Calls_desk[Calls_desk['strike'] > x]['strike'].idxmin()]['strike'])
Stocks['Call_lower_ticker'] = Stocks['110_for_Call'].apply(lambda x: Calls_desk.iloc[Calls_desk[Calls_desk['strike'] < x]['strike'].idxmax()]['contractSymbol'])
Stocks['Call_upper_ticker'] = Stocks['110_for_Call'].apply(lambda x: Calls_desk.iloc[Calls_desk[Calls_desk['strike'] > x]['strike'].idxmin()]['contractSymbol'])
Stocks['Call_lower_price'] = Stocks['Call_lower_ticker'].apply(yf.download(['Call_lower_ticker'], start=Stocks.index)['Close'].iloc[0])
Stocks
```
**Error:**
```
[*********************100%%**********************]  1 of 1 completed
[*********************100%%**********************]  1 of 1 completed

1 Failed download:
['CALL_LOWER_TICKER']: ValueError('The truth value of a DatetimeIndex is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().')
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
File <timed exec>:19

File C:\ProgramData\anaconda3\Lib\site-packages\pandas\core\indexing.py:1191, in _LocationIndexer.__getitem__(self, key)
   1189 maybe_callable = com.apply_if_callable(key, self.obj)
   1190 maybe_callable = self._check_deprecated_callable_usage(key, maybe_callable)
-> 1191 return self._getitem_axis(maybe_callable, axis=axis)

File C:\ProgramData\anaconda3\Lib\site-packages\pandas\core\indexing.py:1752, in _iLocIndexer._getitem_axis(self, key, axis)
   1749     raise TypeError("Cannot index by location index with a non-integer key")
   1751 # validate the location
-> 1752 self._validate_integer(key, axis)
   1754 return self.obj._ixs(key, axis=axis)

File C:\ProgramData\anaconda3\Lib\site-packages\pandas\core\indexing.py:1685, in _iLocIndexer._validate_integer(self, key, axis)
   1683 len_axis = len(self.obj._get_axis(axis))
   1684 if key >= len_axis or key < -len_axis:
-> 1685     raise IndexError("single positional indexer is out-of-bounds")

IndexError: single positional indexer is out-of-bounds
```
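For reference, a hedged sketch of what the `apply` call was probably meant to do; the original passes the literal string `'Call_lower_ticker'` to `yf.download` instead of each row's contract symbol. This is illustrative only, and note that Yahoo exposes only limited history for option contracts:
```python
# Download each option contract's Close by its ticker value (sketch, untested).
Stocks['Call_lower_price'] = Stocks['Call_lower_ticker'].apply(
    lambda t: yf.download(t, period="1d")['Close'].iloc[0]
)
```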
|
closed
|
2024-08-22T06:06:07Z
|
2025-02-16T18:47:03Z
|
https://github.com/ranaroussi/yfinance/issues/2033
|
[] |
iiivasyaiii
| 3
|
pydantic/pydantic-ai
|
pydantic
| 1,086
|
Anthropic system messages are joined without separation
|
### Initial Checks
- [x] I confirm that I'm using the latest version of Pydantic AI
### Description
When multiple system messages are provided for an anthropic model, they are joined back to back without any sort of separation (e.g. whitespace or newline). This concatenation can cause the instructions to become slightly malformed.
### Example Code
https://github.com/pydantic/pydantic-ai/blob/0a37989704c927ec72e3c3b667237ce60505a557/pydantic_ai_slim/pydantic_ai/models/anthropic.py#L294
### Python, Pydantic AI & LLM client version
```Text
Python 3.12.4
pydantic-ai 0.0.36
```
### Extra
I would submit a pull request, but I'm not sure how you would prefer to address it, and it's a really trivial fix anyway. Personally, I would probably go with ensuring a blank line (double newline) between each concatenated message.
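For illustration, the joining behavior I have in mind (a sketch, not the library's code):
```python
# Join system messages with a blank line so instructions stay separated.
system_parts = ["You are concise.", "Answer in French."]
system_prompt = "\n\n".join(system_parts)
```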
|
open
|
2025-03-09T20:40:39Z
|
2025-03-09T20:42:19Z
|
https://github.com/pydantic/pydantic-ai/issues/1086
|
[
"need confirmation"
] |
phemmer
| 0
|
pyeve/eve
|
flask
| 681
|
[0.6-dev] current_mongo_prefix doesn't work well with test_request_context()
|
I'm using the following construct in some of my tests:
``` python
url = self.resolve_resource('items', item['_id'])
with self.app.test_request_context(url):
    items = self.app.data.driver.db['items']
    item = items.find_one(ObjectId(item['_id']))
    items.update({'_id': ObjectId(item['_id'])}, {'$set': {'_updated': date}})
```
With 0.6-dev, I get the following error:
```
../../.virtualenvs/.../src/eve/eve/io/mongo/mongo.py:865: in db
return self.mongo.pymongo().db
../../.virtualenvs/.../src/eve/eve/io/mongo/mongo.py:833: in pymongo
px = prefix if prefix else self.current_mongo_prefix(resource=resource)
../../.virtualenvs/.../src/eve/eve/io/mongo/mongo.py:808: in current_mongo_prefix
resource = request.endpoint[:request.endpoint.index('|')]
E AttributeError: 'NoneType' object has no attribute 'index'
```
Looks like the hack to get the resource name from the request doesn't work in this case as the request endpoint is `None` here.
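Based on the `pymongo()` signature visible in the traceback, a possible workaround (hedged and untested) is to pass the resource name explicitly so the endpoint lookup is bypassed:
```python
# Sketch: name the resource directly instead of inferring it from request.endpoint.
items = self.app.data.pymongo(resource='items').db['items']
```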
|
closed
|
2015-08-05T17:00:22Z
|
2015-08-12T14:20:22Z
|
https://github.com/pyeve/eve/issues/681
|
[
"bug"
] |
rs
| 1
|
falconry/falcon
|
api
| 2,016
|
Throw an exception into req.stream's generator upon disconnect?
|
When `req.stream` is set to a generator, but the client disconnects prior to exhausting iteration, we could communicate this by throwing a specialized exception via `.throw()` or `.athrow` (`async`), respectively.
Also, decide whether this is a good idea™ at all.
For WSGI, this would probably require wrapping the generator, and checking whether the generator was exhausted prior to calling its `.close()` method by the WSGI server.
For ASGI, this would require implementing a mechanism to detect disconnect events while streaming, for instance, as proposed in #2015.
This is a breaking change if enabled by default; but it could also be made configurable by [`resp_options`](https://falcon.readthedocs.io/en/stable/api/app.html#falcon.ResponseOptions).
|
open
|
2022-02-07T20:57:07Z
|
2022-02-07T22:03:30Z
|
https://github.com/falconry/falcon/issues/2016
|
[
"enhancement",
"needs-decision",
"proposal",
"breaking-change"
] |
vytas7
| 0
|
huggingface/datasets
|
pandas
| 7,222
|
TypeError: Couldn't cast array of type string to null in long json
|
### Describe the bug
In general, changing the type from string to null is allowed within a dataset — there are even examples of this in the documentation.
However, if the dataset is large and unevenly distributed, this allowance stops working. The schema gets locked in after reading a chunk.
Consequently, if all values in the first chunk of a field are, for example, null, the field will be locked as type null, and if a string appears in that field in the second chunk, it will trigger this error:
<details>
<summary>Traceback </summary>
```
TypeError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1868 try:
-> 1869 writer.write_table(table)
1870 except CastError as cast_error:
14 frames
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_writer.py](https://localhost:8080/#) in write_table(self, pa_table, writer_batch_size)
579 pa_table = pa_table.combine_chunks()
--> 580 pa_table = table_cast(pa_table, self._schema)
581 if self.embed_local_files:
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in table_cast(table, schema)
2291 if table.schema != schema:
-> 2292 return cast_table_to_schema(table, schema)
2293 elif table.schema.metadata != schema.metadata:
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in cast_table_to_schema(table, schema)
2244 )
-> 2245 arrays = [
2246 cast_array_to_feature(
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in <listcomp>(.0)
2245 arrays = [
-> 2246 cast_array_to_feature(
2247 table[name] if name in table_column_names else pa.array([None] * len(table), type=schema.field(name).type),
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in wrapper(array, *args, **kwargs)
1794 if isinstance(array, pa.ChunkedArray):
-> 1795 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
1796 else:
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in <listcomp>(.0)
1794 if isinstance(array, pa.ChunkedArray):
-> 1795 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
1796 else:
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in cast_array_to_feature(array, feature, allow_primitive_to_str, allow_decimal_to_str)
2101 elif not isinstance(feature, (Sequence, dict, list, tuple)):
-> 2102 return array_cast(
2103 array,
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in wrapper(array, *args, **kwargs)
1796 else:
-> 1797 return func(array, *args, **kwargs)
1798
[/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in array_cast(array, pa_type, allow_primitive_to_str, allow_decimal_to_str)
1947 if pa.types.is_null(pa_type) and not pa.types.is_null(array.type):
-> 1948 raise TypeError(f"Couldn't cast array of type {_short_str(array.type)} to {_short_str(pa_type)}")
1949 return array.cast(pa_type)
TypeError: Couldn't cast array of type string to null
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
[<ipython-input-353-e02f83980611>](https://localhost:8080/#) in <cell line: 1>()
----> 1 dd = load_dataset("json", data_files=["TEST.json"])
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
2094
2095 # Download and prepare data
-> 2096 builder_instance.download_and_prepare(
2097 download_config=download_config,
2098 download_mode=download_mode,
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, dl_manager, base_path, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
922 if num_proc is not None:
923 prepare_split_kwargs["num_proc"] = num_proc
--> 924 self._download_and_prepare(
925 dl_manager=dl_manager,
926 verification_mode=verification_mode,
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
997 try:
998 # Prepare split will record examples associated to the split
--> 999 self._prepare_split(split_generator, **prepare_split_kwargs)
1000 except OSError as e:
1001 raise OSError(
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split(self, split_generator, file_format, num_proc, max_shard_size)
1738 job_id = 0
1739 with pbar:
-> 1740 for job_id, done, content in self._prepare_split_single(
1741 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
1742 ):
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1894 if isinstance(e, DatasetGenerationError):
1895 raise
-> 1896 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1897
1898 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
</details>
### Steps to reproduce the bug
```python
import json
from datasets import load_dataset

with open("TEST.json", "w") as f:
    row = {"ballast": "qwerty" * 1000, "b": None}
    row_str = json.dumps(row) + "\n"
    line_size = len(row_str)
    chunk_size = 10 << 20
    lines_in_chunk = chunk_size // line_size + 1
    print(f"Writing {lines_in_chunk} lines")
    for i in range(lines_in_chunk):
        f.write(row_str)
    null_row = {"ballast": "Gotcha", "b": "Not Null"}
    f.write(json.dumps(null_row) + "\n")

load_dataset("json", data_files=["TEST.json"])
```
### Expected behavior
Concatenation of the chunks without errors
### Environment info
- `datasets` version: 3.0.1
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.24.7
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1
|
open
|
2024-10-12T08:14:59Z
|
2025-02-23T13:01:47Z
|
https://github.com/huggingface/datasets/issues/7222
|
[] |
nokados
| 4
|
pytorch/pytorch
|
numpy
| 149,100
|
Memory leak when using get_model_state_dict with FSDP-sharded models
|
### 🐛 Describe the bug
I'm attempting to use the FSDP2 API to shard a model, extract its state dictionary (for potential future use), and then completely remove the model from memory. Extracting the state dict somehow leaves references to the underlying model around, and there ends up being a memory leak. Below, I'll reuse the test [here](https://github.com/pytorch/pytorch/blob/420a9be743f8dd5d6296a32a1351c1baced12f1f/test/distributed/_composable/fsdp/test_fully_shard_memory.py#L198) to demonstrate the issue.
When I add the step of using get_model_state_dict to extract the state dictionary (marked by `DIFF STARTS HERE` below), the model continues to occupy memory even after both the model and the state dictionary are explicitly deleted. This differs from the behavior of the original test, where memory is properly released.
This functionality is especially important in cases where we'd like to iteratively load a model, perform computation, offload it to CPU, then reload it when necessary. If this procedure is repeated, it blows up the GPU memory.
Below is a code snippet to reproduce the behavior. You will see that the test fails as written, but passes if you simply comment out the part between `DIFF STARTS HERE` and `DIFF ENDS HERE`.
```python
import gc
import os
from datetime import timedelta

import torch
from torch.distributed import init_process_group
from torch.distributed.checkpoint.state_dict import get_model_state_dict, StateDictOptions
from torch.distributed.fsdp import fully_shard
from torch.testing._internal.common_fsdp import FSDPTest
from torch.testing._internal.common_utils import run_tests
from torch.testing._internal.distributed._tensor.common_dtensor import (
    ModelArgs,
    Transformer,
    TransformerBlock,
)


class TestFullyShardMemory(FSDPTest):
    @property
    def world_size(self) -> int:
        return min(2, torch.cuda.device_count())

    def _get_peak_active_memory_mb(self) -> int:
        mem_stats = torch.cuda.memory_stats()
        return round(mem_stats["active_bytes.all.peak"] / 1e6)

    def _get_curr_active_memory_mb(self) -> int:
        mem_stats = torch.cuda.memory_stats()
        return round(mem_stats["active_bytes.all.current"] / 1e6)

    def test_fully_shard_del_memory(self):
        base_mem_mb = self._get_peak_active_memory_mb()
        vocab_size = 32
        model_args = ModelArgs(
            vocab_size=vocab_size, n_layers=3, dim=768, n_heads=12, weight_tying=False
        )
        model = Transformer(model_args)
        # Initializing the model on CPU should not change the GPU memory usage
        post_model_init_mem_mb = self._get_peak_active_memory_mb()
        self.assertEqual(base_mem_mb, post_model_init_mem_mb)
        for module in model.modules():
            if isinstance(module, TransformerBlock):
                fully_shard(module)
        fully_shard(model)
        unsharded_numel = sum(p.numel() for p in model.parameters())
        sharded_numel = unsharded_numel // self.world_size
        buffer_mb = 4
        mem_mb = self._get_curr_active_memory_mb()
        expected_mb = sharded_numel * 4 / 1e6 + buffer_mb
        self.assertLessEqual(mem_mb - base_mem_mb, expected_mb)

        ### DIFF STARTS HERE ###
        sdo = StateDictOptions(full_state_dict=True, cpu_offload=True, broadcast_from_rank0=True)
        state_dict = get_model_state_dict(model, options=sdo)
        del state_dict
        ### DIFF ENDS HERE ###

        # Deleting the model should free all of the FSDP-managed GPU memory
        del model
        # Manually call garbage collection since there are ref cycles in FSDP
        gc.collect()
        torch.cuda.empty_cache()
        mem_mb = self._get_curr_active_memory_mb()
        print(f"Mem MB: {mem_mb}")
        print(f"Base Mem MB: {base_mem_mb}")
        self.assertEqual(mem_mb, base_mem_mb)


if __name__ == "__main__":
    init_process_group(backend="nccl", timeout=timedelta(hours=24))
    dst_rank = int(os.environ['RANK'])
    dst_local_rank = int(os.environ['LOCAL_RANK'])
    dst_world_size = int(os.environ['WORLD_SIZE'])
    device = f'cuda:{dst_local_rank}'
    run_tests()
```
### Versions
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-163-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 535.54.03
cuDNN version: Probably one of the following:
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn.so.8.9.2
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.2
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.2
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.2
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.2
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.2
/usr/local/cuda-12.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7543 32-Core Processor
Stepping: 1
Frequency boost: enabled
CPU MHz: 1499.953
CPU max MHz: 2800.0000
CPU min MHz: 1500.0000
BogoMIPS: 5600.18
Virtualization: AMD-V
L1d cache: 2 MiB
L1i cache: 2 MiB
L2 cache: 32 MiB
L3 cache: 512 MiB
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] torchaudio==2.5.1
[pip3] triton==3.2.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchaudio 2.5.1 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @zhaojuanmao @mrshenli @rohan-varma @chauhang
|
open
|
2025-03-13T03:37:40Z
|
2025-03-20T04:12:06Z
|
https://github.com/pytorch/pytorch/issues/149100
|
[
"oncall: distributed",
"module: fsdp"
] |
mertyg
| 10
|
microsoft/nni
|
pytorch
| 5,451
|
Exported ONNX model size does not become smaller after quantization
|
open
|
2023-03-16T10:08:20Z
|
2023-03-27T02:54:28Z
|
https://github.com/microsoft/nni/issues/5451
|
[
"feature request"
] |
dlml
| 2
|
|
django-import-export/django-import-export
|
django
| 1,522
|
'list' object is not callable
|
**Describe the bug**
I tried to implement an admin class for the Books table along with ImportExportModelAdmin and ImportExportActionModelAdmin. While exporting selected records from the admin panel, I got the error.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to admin.py and implement a custom model admin along with ImportExportModelAdmin and ImportExportActionModelAdmin
2. Go to the respective table in the admin panel. Select a few records and export them as a CSV file
3. See error
```python
class Base(models.Model):
    slug = models.SlugField(max_length=200)

    class Meta:
        abstract = True

class Books(Base):
    name = models.CharField('Book name', max_length=100)
    author = models.ForeignKey(Author, blank=True, null=True)
    author_email = models.EmailField('Author email', max_length=75, blank=True)
    imported = models.BooleanField(default=False)
    published = models.DateField('Published', blank=True, null=True)
    price = models.DecimalField(max_digits=10, decimal_places=2, null=True, blank=True)
    categories = models.ManyToManyField(Category, blank=True)

    def __str__(self):
        return self.name

class BaseAdmin(admin.ModelAdmin):
    list_display = ('slug', )

class BooksResource(ModelResource):
    class Meta:
        model = Books

class BooksAdmin(ImportExportModelAdmin, ImportExportActionModelAdmin, BaseAdmin):
    # A list here (rather than a resource class) is the likely trigger
    # for "'list' object is not callable".
    resource_class = [BooksResource]
    list_display = BaseAdmin.list_display + ('name', 'author', 'published', 'price', 'categories', )

admin.site.register(Books, BooksAdmin)
```
**Versions (please complete the following information):**
- Django Import Export: [e.g. 3.0.1]
- Python [e.g. 3.9]
- Django [e.g. 4.1]
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here.
|
closed
|
2022-12-06T06:17:43Z
|
2023-09-12T14:39:47Z
|
https://github.com/django-import-export/django-import-export/issues/1522
|
[
"bug"
] |
kumar-student
| 8
|
tiangolo/uvicorn-gunicorn-fastapi-docker
|
fastapi
| 95
|
Docker images missing from DockerHub
|
Hey @tiangolo, sorry to bother you, but your [DockerHub repo](https://hub.docker.com/u/tiangolo) seems to be empty.
We can build the images locally and push them to other repos, but a trusted and common source of the most recent, updated images was a nice thing to have.
Is there any way we can help restore this?
|
closed
|
2021-06-23T15:14:14Z
|
2021-06-24T15:00:08Z
|
https://github.com/tiangolo/uvicorn-gunicorn-fastapi-docker/issues/95
|
[] |
fakegermano
| 1
|
Lightning-AI/LitServe
|
api
| 237
|
Create Makefile to run common development tasks locally
|
## 🚀 Feature
Hi! I just found this project. Really cool!
I wanted to propose a feature to create a `Makefile` in the repo.
The objective is to simplify local development and contribution by creating a `Makefile` that encapsulates and standardizes common development tasks such as running tests, building docs, etc. It is inspired by pydantic's Makefile:
https://github.com/pydantic/pydantic/blob/main/Makefile
I would be happy to contribute.
A first stab at the problem would be to create a `Makefile` to run the following tasks:
- Install dependencies `make install`
- Run tests: `make test`
- Run linter/formatter: `make lint` / `make format`
In the future, other checks could be run and integrated into the CI:
- Run benchmarks
- Run mypy.
- etc.
### Motivation
The motivation is to provide a toolkit for developers in order to simplify the process of contributing to this cool project.
### Pitch
Add a Makefile to run common development tasks locally.
### Alternatives
### Additional context
|
closed
|
2024-08-28T06:43:39Z
|
2024-08-28T23:30:21Z
|
https://github.com/Lightning-AI/LitServe/issues/237
|
[
"enhancement",
"help wanted"
] |
AdolfoVillalobos
| 2
|
sloria/TextBlob
|
nlp
| 230
|
The `correct()` method in Python 3.7 gives "RuntimeError: generator raised StopIteration"
|
In Python 3.7 the following gives `RuntimeError: generator raised StopIteration`
````
>>> b = TextBlob("I havv goood speling!")
>>> print(b.correct())
````
I believe [this](https://stackoverflow.com/a/51371879/2445273) SO post describes what needs to be done to address this problem.
Edit: same problem with `spellcheck()` method.
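A hedged illustration of the PEP 479 behavior the linked answer addresses (the generator below is illustrative, not TextBlob's actual code): from Python 3.7 on, a `StopIteration` escaping a generator body is converted into exactly this `RuntimeError`, and the fix is to catch it and `return`:
```python
def first_items(iterables):
    for it in iterables:
        try:
            yield next(iter(it))
        except StopIteration:  # PEP 479: must not leak out of a generator body
            return             # pre-3.7 code often let it propagate instead

print(list(first_items([[1, 2], [], [3]])))  # [1] — ends cleanly at the empty iterable
```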
|
closed
|
2018-10-03T19:01:17Z
|
2018-10-11T12:12:00Z
|
https://github.com/sloria/TextBlob/issues/230
|
[] |
vvaezian
| 1
|
recommenders-team/recommenders
|
data-science
| 1,613
|
[FEATURE] do DRY in def data_process_with_time in the notebook of examples/00_quick_start/sasrec_amazon.ipynb
|
### Description
The function `data_process_with_time` is very similar to https://github.com/microsoft/recommenders/blob/60033231b9167438032843c23158c0c776856e0e/recommenders/datasets/split_utils.py#L49, so we can refactor to remove the duplication.
See more details at https://github.com/microsoft/recommenders/pull/1530#discussion_r785934030
|
open
|
2022-01-18T10:36:55Z
|
2022-01-18T10:37:05Z
|
https://github.com/recommenders-team/recommenders/issues/1613
|
[
"enhancement"
] |
miguelgfierro
| 0
|
pydantic/pydantic-settings
|
pydantic
| 455
|
BaseSettings overrides on instantiation do not override values
|
Hi
I have some trouble overwriting config values on instantiation.
```python
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_prefix='my_prefix_')
    auth_key: str = 'xxx'  # will be read from `my_prefix_auth_key`

a = Settings(_env_prefix="my_prefix")
a.model_config["env_prefix"]
# returns 'my_prefix_' as it remains the default value and my override is ignored
```
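A hedged aside, not from the report: my reading is that `_env_prefix` overrides which environment variables are consulted for that one instance, while `model_config` stays the class-level dict, so inspecting `model_config["env_prefix"]` can never observe the override. A minimal sketch of probing the behavior itself (whether the override is honored at all is the open question here):
```python
import os
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_prefix='my_prefix_')
    auth_key: str = 'xxx'

os.environ['other_auth_key'] = 'from-other-prefix'

s = Settings(_env_prefix='other_')
print(s.auth_key)                           # 'from-other-prefix' if the override is honored
print(Settings.model_config['env_prefix'])  # still 'my_prefix_' either way
```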
|
closed
|
2024-10-22T12:57:38Z
|
2024-10-22T14:35:17Z
|
https://github.com/pydantic/pydantic-settings/issues/455
|
[
"unconfirmed"
] |
timonviola
| 1
|
apachecn/ailearning
|
nlp
| 582
|
Machine learning
|
NULL
|
closed
|
2020-04-09T04:03:19Z
|
2020-04-09T04:07:10Z
|
https://github.com/apachecn/ailearning/issues/582
|
[] |
dabaozizhang
| 0
|
dbfixtures/pytest-postgresql
|
pytest
| 1,063
|
Drop linters replaced by pre-commit hooks
|
Drop them from CI and Pipfile.
|
closed
|
2025-01-17T12:20:41Z
|
2025-01-17T16:24:50Z
|
https://github.com/dbfixtures/pytest-postgresql/issues/1063
|
[] |
fizyk
| 0
|
laughingman7743/PyAthena
|
sqlalchemy
| 354
|
Extra libraries are included in basic install
|
Hello and thanks for a very useful library! We updated from 2.10.0 to 2.11.0 yesterday and noticed we are now pulling in numpy and pandas as dependencies, even though we are only using a basic install (`pip install PyAthena`). Is this expected? It is pushing us over the Lambda maximum package size limit.
|
closed
|
2022-07-27T19:12:40Z
|
2022-07-28T12:25:50Z
|
https://github.com/laughingman7743/PyAthena/issues/354
|
[] |
JayFields
| 3
|
bendichter/brokenaxes
|
matplotlib
| 62
|
`constrained_layout=True` seems to cause the slash marks to be misplaced
|
Hi! `brokenaxes` seems amazing. I think I may have run into one corner-case which causes it to glitch a little bit.
I was following along [this GridSpec tutorial](https://matplotlib.org/stable/tutorials/intermediate/gridspec.html#gridspec-using-subplotspec) and was curious to see what would happen if I subbed in some `brokenaxes` instances for normal subplots. When I called `plt.figure(constrained_layout=True)`, I noticed that the "//" symbols end up floating in the wrong place.
Here is a screenshot from my Jupyter notebook:

[I've pasted the code into this gist](https://gist.github.com/tomr-stargazer/ac40cef643d608efa9da5eaa511b8524).
I wouldn't blame you if you flagged this as WONTFIX but I figured I'd pass along the issue just in case.
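A hedged workaround sketch, not from the thread: skipping `constrained_layout=True` and sizing the figure manually sidesteps the misplaced slash marks, at the cost of manual spacing (the limits below are illustrative):
```python
import matplotlib.pyplot as plt
from brokenaxes import brokenaxes

fig = plt.figure(figsize=(6, 4))  # note: no constrained_layout=True
bax = brokenaxes(xlims=((0, 1), (2, 3)), fig=fig)
bax.plot([0, 0.5, 2.5, 3], [0, 1, 2, 3])
fig.savefig("broken.png", dpi=150)
```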
|
open
|
2021-03-17T00:55:29Z
|
2025-03-03T14:23:47Z
|
https://github.com/bendichter/brokenaxes/issues/62
|
[] |
tomr-stargazer
| 5
|
xinntao/Real-ESRGAN
|
pytorch
| 568
|
Confusing nine-grid (3×3 tiled) output image when deploying with libtorch
|
I saved the model with the following code:
```
# imports reconstructed for completeness
import os
import torch
from torch.autograd import Variable
from basicsr.archs.srvgg_arch import SRVGGNetCompact  # import path as used in Real-ESRGAN's scripts

loadnet = torch.load("weights/realesr-animevideov3.pth")
# prefer to use params_ema
if 'params_ema' in loadnet:
    keyname = 'params_ema'
else:
    keyname = 'params'
model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16,
                        upscale=4, act_type='prelu')
netscale = 4
model.load_state_dict(loadnet[keyname], strict=True)
use_gpu = torch.cuda.is_available()  # check whether GPU acceleration is available
if use_gpu:
    model = model.cuda()
model.eval()  # disable dropout, etc.
torch.no_grad()  # note: a no-op as written; use `with torch.no_grad():` around the tracing to take effect
# An example input you would normally provide to your model's forward() method.
example = torch.rand(1, 3, 64, 64)
if use_gpu:
    example = Variable(example).cuda()
    # label = Variable(label, volatile=True).cuda()
else:
    example = Variable(example)
    # label = Variable(label)
# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, example)
os.makedirs("./model", exist_ok=True)
traced_script_module.save("./model/animevideov3.pt")
```
Then I loaded the model in both Python and C++ and ran inference with what I believe are identical pre- and post-processing steps. This is the Python result, which looks normal:

This is the C++ result, which is baffling:

Here is the Python code:
```
# imports reconstructed for completeness
import cv2
import numpy as np
import torch
from torch.nn import functional as F

class JitModelTester():
    def __init__(self,
                 scale,
                 model_path,
                 half=False,
                 device=None,):
        self.scale = scale
        self.mod_scale = None
        self.half = half
        # initialize model
        self.device = torch.device(
            'cuda' if torch.cuda.is_available() else 'cpu') if device is None else device
        # if the model_path starts with https, it will first download models to the folder: weights
        model = torch.jit.load(model_path)
        model.eval()
        self.model = model.to(self.device)
        if self.half:
            self.model = self.model.half()

    def pre_process(self, img):
        """Pre-process, such as pre-pad and mod pad, so that the images can be divisible
        """
        img = torch.from_numpy(np.transpose(img, (2, 0, 1))).float()
        self.img = img.unsqueeze(0).to(self.device)
        if self.half:
            self.img = self.img.half()
        # pre_pad
        # if self.pre_pad != 0:
        #     self.img = F.pad(self.img, (0, self.pre_pad, 0, self.pre_pad), 'reflect')
        # # mod pad for divisible borders
        if self.scale == 2:
            self.mod_scale = 2
        elif self.scale == 1:
            self.mod_scale = 4
        if self.mod_scale is not None:
            self.mod_pad_h, self.mod_pad_w = 0, 0
            _, _, h, w = self.img.size()
            if (h % self.mod_scale != 0):
                self.mod_pad_h = (self.mod_scale - h % self.mod_scale)
            if (w % self.mod_scale != 0):
                self.mod_pad_w = (self.mod_scale - w % self.mod_scale)
            self.img = F.pad(self.img, (0, self.mod_pad_w, 0, self.mod_pad_h),
                             'reflect')

    def process(self):
        # model inference
        print("input size", self.img.size())
        self.output = self.model(self.img)
        print("output size", self.output.size())

    def post_process(self):
        # remove extra pad
        if self.mod_scale is not None:
            _, _, h, w = self.output.size()
            self.output = self.output[:, :, 0:h - self.mod_pad_h * self.scale,
                                      0:w - self.mod_pad_w * self.scale]
        return self.output

    @torch.no_grad()
    def enhance(self, img, outscale=None, alpha_upsampler='realesrgan'):
        h_input, w_input = img.shape[0:2]
        # img: numpy
        img = img.astype(np.float32)
        img = img / 255
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        # ------------------- process image (without the alpha channel) ------------------- #
        self.pre_process(img)
        # if self.tile_size > 0:
        #     self.tile_process()
        # else:
        self.process()
        output_img = self.post_process()
        output_img = output_img.data.squeeze().float().cpu().clamp_(0, 1).numpy()
        output_img = np.transpose(output_img[[2, 1, 0], :, :], (1, 2, 0))
        output = (output_img * 255.0).round().astype(np.uint8)
        if outscale is not None and outscale != float(self.scale):
            output = cv2.resize(
                output, (
                    int(w_input * outscale),
                    int(h_input * outscale),
                ), interpolation=cv2.INTER_LANCZOS4)
        return output
```
Here is the C++ code that I believe is semantically equivalent:
```
// includes reconstructed for completeness
#include <cstring>
#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>
#include <torch/script.h>

class RealESRGANer {
private:
    int scale = 4;
    int modScale = NULL;
    bool half = false;
    torch::jit::script::Module model;
    torch::Device device;
    torch::Tensor imgTensor;
    torch::Tensor output;
    int modPadH = NULL;
    int modPadW = NULL;

    bool shouldPadForDivisibleBorders() {
        return this->modScale != NULL;
    }

    void preProcess(cv::Mat img) {
        /*cv::imshow("input:",img);
        cv::waitKey(0);*/
        torch::Tensor input_tensor = torch::from_blob(img.data, { img.rows, img.cols, 3 }, torch::kU8);
        this->imgTensor = input_tensor.permute({ 2,0,1 }) // (H,W,C)->(C,H,W)
            .to(torch::kFloat32).div(255).unsqueeze(0)    // (C,H,W)->(1,C,H,W)
            .to(this->device);
        if (this->half) {
            this->imgTensor = this->imgTensor.to(torch::kF16);
        }
        if (this->scale == 2) {
            this->modScale = 2;
        }
        else if (this->scale == 1) {
            this->modScale = 4;
        }
        if (shouldPadForDivisibleBorders()) {
            this->modPadH = 0, this->modPadW = 0;
            int h = this->imgTensor.size(2), w = this->imgTensor.size(3);
            if (h % this->modScale != 0) {
                this->modPadH = this->modScale - h % this->modScale;
            }
            if (w % this->modScale != 0) {
                this->modPadW = this->modScale - w % this->modScale;
            }
            this->imgTensor = torch::pad(this->imgTensor,
                { 0,this->modPadW,0,this->modPadH },
                "reflect");
        }
    }

    void process() {
        std::vector<torch::jit::IValue> inputs;
        inputs.push_back(this->imgTensor);
        this->output = this->model.forward(inputs).toTensor();
    }

    cv::Mat postProcess() {
        if (shouldPadForDivisibleBorders()) {
            int h = this->output.size(2), w = this->output.size(3);
            this->output = this->output.slice(2, 0, h - this->modPadH * this->scale).slice(3, 0, w - this->modPadW * this->scale);
        }
        this->output = this->output.data().squeeze(0) // (1,C,H,W)->(C,H,W)
            .to(torch::kFloat32).to(at::kCPU);
        this->output = this->output.clamp(0, 1).permute({ 1,2,0 }) // (C,H,W)->(H,W,C)
            .mul(255).round().to(torch::kUInt8);
        cv::Mat resultImg(this->output.size(0), this->output.size(1), CV_8UC3);
        std::memcpy((void*)resultImg.data, this->output.data_ptr(), sizeof(torch::kU8) * this->output.numel());
        return resultImg;
    }

public:
    RealESRGANer(
        int scale,
        torch::jit::script::Module model,
        bool half,
        torch::Device device
    ) : scale(scale), half(half), device(device), model(model) {
        this->model.eval();
        this->model.to(device);
        if (this->half) {
            this->model.to(torch::kF16);
        }
    }

    cv::Mat enhance(cv::Mat img, float outScale) {
        int hInput = img.rows, wInput = img.cols;
        /*img.convertTo(img, CV_32FC3);
        img /= cv::Scalar(255, 255, 255);*/
        cv::cvtColor(img, img, cv::COLOR_BGR2RGB);
        this->preProcess(img);
        this->process();
        cv::Mat outputImg = this->postProcess();
        cv::cvtColor(outputImg, outputImg, cv::COLOR_RGB2BGR);
        if (outScale != NULL && outScale != float(this->scale)) {
            std::cout << "w" << wInput << std::endl;
            std::cout << "h" << hInput << std::endl;
            cv::resize(outputImg, outputImg,
                cv::Size(int(wInput * outScale), int(hInput * outScale)),
                cv::INTER_LANCZOS4);
        }
        return outputImg;
    }

    cv::Mat test_enhance_without_module(cv::Mat img, float outScale) {
        int hInput = img.rows, wInput = img.cols;
        /*img.convertTo(img, CV_32FC3);
        img /= cv::Scalar(255, 255, 255);*/
        cv::cvtColor(img, img, cv::COLOR_BGR2RGB);
        this->preProcess(img);
        //this->process();
        this->output = this->imgTensor.clone().detach();
        cv::Mat outputImg = this->postProcess();
        cv::cvtColor(outputImg, outputImg, cv::COLOR_RGB2BGR);
        if (outScale != NULL && outScale != float(this->scale)) {
            cv::resize(outputImg, outputImg,
                cv::Size(int(wInput * outScale), int(hInput * outScale)),
                cv::INTER_LANCZOS4);
        }
        return outputImg;
    }
};
```
Running inference through the `enhance` method in C++ produces this baffling nine-grid image, while `test_enhance_without_module`, which skips the model call (the `process` method) and just copies the tensor, produces a normal image. Does anyone know the cause? What is wrong in my `process` method?
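A hedged debugging sketch, not from the thread: dump a deterministic input/output pair from the traced module in Python, then feed the exact same data through the C++ path and compare. If the tensors diverge only after `forward`, the mismatch is in how the input tensor reaches the model (layout/contiguity), not in the post-processing:
```python
import numpy as np
import torch

# parity check, assuming the traced module was saved as above
model = torch.jit.load("./model/animevideov3.pt").eval()

x = torch.arange(1 * 3 * 64 * 64, dtype=torch.float32).reshape(1, 3, 64, 64)
x = x / x.numel()  # deterministic values in [0, 1)
with torch.no_grad():
    y = model(x)

# save both sides so the C++ program can load identical data and diff the outputs
np.save("parity_input.npy", x.numpy())
np.save("parity_output.npy", y.cpu().numpy())
```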
|
open
|
2023-02-09T04:08:33Z
|
2023-11-17T11:13:46Z
|
https://github.com/xinntao/Real-ESRGAN/issues/568
|
[] |
Dong148
| 1
|
reloadware/reloadium
|
flask
| 2
|
Reloadium not installing
|
Pycharm Version:
PyCharm 2021.3.1 (Community Edition)
After the installation progress bar completes, PyCharm doesn't show the plugin as installed when the plugin dialog is reopened.
|
closed
|
2022-04-13T09:59:57Z
|
2022-04-14T04:14:32Z
|
https://github.com/reloadware/reloadium/issues/2
|
[] |
rahulroxxx
| 1
|
marimo-team/marimo
|
data-science
| 3,738
|
New dataframe transform: Unique
|
### Description
I would like to be able to get the unique values of a column
### Suggested solution
Using the DISTINCT operator in SQL or the unique operators in both polars and pandas should do the trick.
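A hedged sketch of the underlying operations such a transform would wrap (names taken from the pandas/polars APIs, not from marimo):
```python
import pandas as pd
import polars as pl

pdf = pd.DataFrame({"city": ["Oslo", "Bergen", "Oslo"]})
pldf = pl.DataFrame({"city": ["Oslo", "Bergen", "Oslo"]})

print(pdf["city"].unique())   # pandas: array of distinct values
print(pldf["city"].unique())  # polars: Series of distinct values
# SQL equivalent: SELECT DISTINCT city FROM t;
```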
### Alternative
_No response_
### Additional context
_No response_
|
open
|
2025-02-10T08:12:06Z
|
2025-02-20T07:18:33Z
|
https://github.com/marimo-team/marimo/issues/3738
|
[
"enhancement",
"good first issue",
"help wanted"
] |
aglucky
| 7
|
igorbenav/fastcrud
|
sqlalchemy
| 121
|
Join list of objects
|
**Is your feature request related to a problem? Please describe.**
For OneToMany relationships, `get_joined` or `get_multi_joined` returns a single nested object, while in some cases I'd like to return all related objects.
**Describe the solution you'd like**
fastcrud.get_joined_list(...) ->
```
{
    "id": 1,
    "joined_objects": [
        {
            "id": 1,
            "name": "Donald"
        },
        {
            "id": 2,
            "name": "Joe"
        }
    ]
}
```
|
open
|
2024-07-04T13:18:33Z
|
2024-09-17T06:25:49Z
|
https://github.com/igorbenav/fastcrud/issues/121
|
[
"enhancement"
] |
JakNowy
| 3
|
microsoft/UFO
|
automation
| 166
|
Error if RAG_OFFLINE_DOCS is configured to true
|
Welcome to use UFO🛸, A UI-focused Agent for Windows OS Interaction.
[ASCII-art UFO banner]
Please enter your request to be completed🛸:
请提醒我每天学习 (Please remind me to study every day)
Round 1, Step 1, HostAgent: Analyzing the user intent and decomposing the request...
{
"Observation": "I observe that the Microsoft To Do application is visible in the screenshot, which is suitable for setting reminders. The application appears to be open and ready for use, allowing me to create a new task for the user.",
"Thought": "The user request is to set a daily reminder for studying. This can be accomplished within the Microsoft To Do application. I will create a new task that specifies the reminder for daily study sessions.",
"CurrentSubtask": "Add a task in Microsoft To Do to remind the user to study every day.",
"Message": [
"(1) Please ensure that the task is set to repeat daily.",
"(2) You may want to specify a time for the reminder if needed."
],
"ControlLabel": "9",
"ControlText": "Microsoft To Do",
"Status": "ASSIGN",
"Plan": [],
"Bash": "",
"Questions": [],
"Comment": "I will proceed to add a daily study reminder in the Microsoft To Do application."
}
Observations👀: I observe that the Microsoft To Do application is visible in the screenshot, which is suitable for setting reminders. The application appears to be open and ready for use, allowing me to create a new task for the user.
Thoughts💡: The user request is to set a daily reminder for studying. This can be accomplished within the Microsoft To Do application. I will create a new task that specifies the reminder for daily study sessions.
Plans📚: (1) Add a task in Microsoft To Do to remind the user to study every day.
Next Selected application📲: Microsoft To Do
Messages to AppAgent📩: (1) Please ensure that the task is set to repeat daily.
(2) You may want to specify a time for the reminder if needed.
Status📊: ASSIGN
Comment💬: I will proceed to add a daily study reminder in the Microsoft To Do application.
9.419333934783936
Loading offline help document indexer for Microsoft To Do...
Creating an experience indexer...
Creating an demonstration indexer...
Round 1, Step 2, AppAgent: Completing the subtask [Add a task in Microsoft To Do to remind the user to study every day.] on application [Microsoft To Do].
Error Occurs at get_prompt_message
```
Traceback (most recent call last):
  File "H:\LiProject\UFO - 副本\ufo\agents\processors\basic.py", line 196, in wrapper
    func(self, *args, **kwargs)
  File "H:\LiProject\UFO - 副本\ufo\agents\processors\basic.py", line 178, in wrapper
    result = func(self, *args, **kwargs)
  File "H:\LiProject\UFO - 副本\ufo\agents\processors\app_agent_processor.py", line 219, in get_prompt_message
    external_knowledge_prompt = self.app_agent.external_knowledge_prompt_helper(
  File "H:\LiProject\UFO - 副本\ufo\agents\agent\app_agent.py", line 225, in external_knowledge_prompt_helper
    [doc.metadata["text"] for doc in offline_docs],
TypeError: 'NoneType' object is not iterable
```

|
open
|
2024-12-28T13:00:35Z
|
2024-12-28T13:00:35Z
|
https://github.com/microsoft/UFO/issues/166
|
[] |
lishaozheng
| 0
|
getsentry/sentry
|
python
| 86,821
|
Add Amplitude analytics for clicking on schema hint
|
open
|
2025-03-11T18:10:42Z
|
2025-03-12T18:03:50Z
|
https://github.com/getsentry/sentry/issues/86821
|
[] |
nikkikapadia
| 0
|
|
horovod/horovod
|
tensorflow
| 3,781
|
Stalled ranks and deadlock when Horovod distributed training finishes and evaluation starts
|
**Environment:**
1. Framework: (TensorFlow2.1.0, Keras, PyTorch1.4.0, MXNet1.6.0)
2. Framework version:
3. Horovod version:0.19.2
4. MPI version:
5. CUDA version:
6. NCCL version:
7. Python version:3.6
8. Spark / PySpark version:
9. Ray version:
10. OS and version:
11. GCC version:
12. CMake version:
**Checklist:**
1. Did you search issues to find if somebody asked this question before? Yes
2. If your question is about hang, did you read this doc? Yes
3. If your question is about docker, did you read this doc? Yes
4. Did you check if your question is answered in the troubleshooting guide? Yes
**Bug report:**
I use the following command to run the job:
```
mpirun -np 32 --allow-run-as-root -H ${ip[0]}:4,${ip[1]}:4,${ip[2]}:4,${ip[3]}:4,${ip[4]}:4,${ip[5]}:4,${ip[6]}:4,${ip[7]}:4 python ./run.py
```
Train and evaluation:
```python
if is_train:
    random.shuffle(train_files)
    estimator.train(input_fn=make_input_fn(train_files, batch_size=Configs['batch_size'], num_epochs=Configs['epochs'], shuffle=Configs['shuffle'], num_workers=hvd.size(), hvd_index=hvd.rank()), steps=None, hooks=[bcast_hook])
if is_eval:
    random.shuffle(eval_files)
    results = estimator.evaluate(input_fn=make_input_fn(eval_files, batch_size=Configs['batch_size'], num_epochs=1, shuffle=Configs['shuffle'], num_workers=hvd.size(), hvd_index=hvd.rank()), steps=None, hooks=[bcast_hook])
```
When training finishes and evaluation starts, the job deadlocks and shows this:
W horovod/common/stall_inspector.cc:105] One or more tensors were submitted to be reduced, gathered or broadcasted by subset of ranks and are waiting for remainder of ranks for more than 60 seconds. This may indicate that different ranks are trying to submit different tensors or that only subset of ranks is submitting tensors, which will cause deadlock.
Stalled ranks:
[DistributedAdamOptimizer_Allreduce/HorovodAllgather_gradients_feature_shallow_tower_1_input_layer_item_duration_bucketized_embedding_item_duration_bucketized_embedding_weights_embedding_lookup_sparse_embedding_lookup_grad_Reshape_0, DistributedAdamOptimizer_Allreduce/HorovodAllgather_gradients_feature_shallow_tower_1_input_layer_item_duration_bucketized_embedding_item_duration_bucketized_embedding_weights_embedding_lookup_sparse_embedding_lookup_grad_Reshape_1_0, DistributedAdamOptimizer_Allreduce/HorovodAllgather_gradients_feature_shallow_tower_1_input_layer_item_rtype_stay_duration_15d_bucketized_embedding_item_rtype_stay_duration_15d_bucketized_embedding_weights_embedding_lookup_sparse_embedding_lookup_grad_Reshape_0, DistributedAdamOptimizer_Allreduce/HorovodAllgather_gradients_feature_shallow_tower_1_input_layer_item_rtype_stay_duration_15d_bucketized_embedding_item_rtype_stay_duration_15d_bucketized_embedding_weights_embedding_lookup_sparse_embedding_lookup_grad_Reshape_1_0, DistributedAdamOptimizer_Allreduce/HorovodAllgather_gradients_feature_shallow_tower_1_input_layer_item_rtype_stay_duration_7d_bucketized_embedding_item_rtype_stay_duration_7d_bucketized_embedding_weights_embedding_lookup_sparse_embedding_lookup_grad_Reshape_0, DistributedAdamOptimizer_Allreduce/HorovodAllgather_gradients_feature_shallow_tower_1_input_layer_item_rtype_stay_duration_7d_bucketized_embedding_item_rtype_stay_duration_7d_bucketized_embedding_weights_embedding_lookup_sparse_embedding_lookup_grad_Reshape_1_0 ...]
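A hedged note, general Horovod guidance rather than a confirmed diagnosis for this job: stalls at the train/eval boundary often mean the ranks ran different numbers of batches, so the collective ops fall out of step. One sketch is to pin `steps` to the same value on every rank (`total_eval_examples` is a placeholder; `estimator`, `make_input_fn`, `Configs`, and `bcast_hook` come from the report above):
```python
import horovod.tensorflow as hvd

# identical on all ranks, so allreduce/allgather calls stay aligned
steps_per_rank = total_eval_examples // (Configs['batch_size'] * hvd.size())

results = estimator.evaluate(
    input_fn=make_input_fn(eval_files, batch_size=Configs['batch_size'], num_epochs=1,
                           shuffle=Configs['shuffle'], num_workers=hvd.size(),
                           hvd_index=hvd.rank()),
    steps=steps_per_rank,
    hooks=[bcast_hook])
```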
|
closed
|
2022-11-22T01:34:15Z
|
2023-08-30T03:44:13Z
|
https://github.com/horovod/horovod/issues/3781
|
[
"question",
"wontfix"
] |
tiandongtao
| 2
|
LibrePhotos/librephotos
|
django
| 969
|
Allow to upload photos directly to user albums
|
**Describe the enhancement you'd like**
Allow the user to upload photos directly to user albums instead of uploading them to the main page and sorting them later. It would be a much easier way to build albums: for example, uploading 1000 photos from a vacation trip currently means creating the album afterwards and selecting those 1000 photos manually to add them to it 😄
**Describe why this will benefit the LibrePhotos**
Would improve user album management a lot.
|
open
|
2023-07-25T17:02:17Z
|
2023-09-04T11:43:43Z
|
https://github.com/LibrePhotos/librephotos/issues/969
|
[
"enhancement"
] |
hardwareadictos
| 3
|
datapane/datapane
|
data-visualization
| 325
|
The table is not shown in Select
|
### System Information
- OS: Windows
- Python version: 3.9.13
- Python environment: pip
- Using jupyter: true
- Datapane version: 0.15.4
### Bug / Issue
Hi. I have a problem. When I embed the iframe code on my site (Tilda), the table is not shown (only the plot is shown). When I save my app as HTML, everything is fine.
Here is the code I use for my app:
```
import pandas as pd
import json
import plotly.express as px
import plotly.figure_factory as ff
import plotly.graph_objects as go
import datapane as dp
restaurants_text = '[{"Media Count":424,"Name":"Bennigans","Category":"Family Style Restaurant","Price Range":"$$","lat":35.1646552703,"lng":33.3266263658,"Address":"","media_count_str":"Media count: 424","media_count_log":8.7279204546},{"Media Count":408,"Name":"Sushila Nicosia","Category":"Sushi Restaurant","Price Range":"$$","lat":35.1666571901,"lng":33.3670368218,"Address":"27 Pindarou Street, Ayios Antonios,","media_count_str":"Media count: 408","media_count_log":8.672425342},{"Media Count":104,"Name":"Karvounomageiremata Ledras","Category":"Greek Restaurant","Price Range":"$","lat":35.17228,"lng":33.36128,"Address":"","media_count_str":"Media count: 104","media_count_log":6.7004397181},{"Media Count":582,"Name":"\\u0396\\u03b1\\u03bd\\u03ad\\u03c4\\u03c4\\u03bf\\u03c2 \\u039a\\u03c5\\u03c0\\u03c1\\u03b9\\u03b1\\u03ba\\u03ae \\u03a4\\u03b1\\u03b2\\u03ad\\u03c1\\u03bd\\u03b1 (Zanettos Cyprus)","Category":"Greek Restaurant","Price Range":"$$","lat":35.17274,"lng":33.36475,"Address":"65 Trikoupi Str","media_count_str":"Media count: 582","media_count_log":9.1848753429},{"Media Count":371,"Name":"Winerys at Limoncello","Category":"Restaurant","Price Range":"$$","lat":35.167923972,"lng":33.371453497,"Address":"","media_count_str":"Media count: 371","media_count_log":8.5352753766},{"Media Count":410,"Name":"Cook Shop","Category":"Modern European Restaurant","Price Range":"$$","lat":35.1685316486,"lng":33.3693250871,"Address":"Pindarou Street 6A","media_count_str":"Media count: 410","media_count_log":8.6794800995},{"Media Count":11,"Name":"Giros Stavros O Salonikios Aglantzia","Category":"Fast food restaurant","Price Range":"$","lat":35.1541649122,"lng":33.4015935715,"Address":" Larnakos 110 Aglantzia , Nicosia, Cyprus","media_count_str":"Media count: 11","media_count_log":3.4594316186},{"Media Count":28,"Name":"Alakiko","Category":"Japanese Restaurant","Price Range":"","lat":35.1640041438,"lng":33.3264316146,"Address":"","media_count_str":"Media count: 28","media_count_log":4.8073549221},{"Media Count":60,"Name":"Kalamaki Bar Nicosia","Category":"Restaurant","Price Range":"$$","lat":35.1739563979,"lng":33.3614381389,"Address":"2109","media_count_str":"Media count: 60","media_count_log":5.9068905956},{"Media Count":40,"Name":"Nozomi","Category":"Japanese Restaurant","Price Range":"","lat":35.1602946,"lng":33.3733961167,"Address":"","media_count_str":"Media count: 40","media_count_log":5.3219280949}]'
restaurants_json = json.loads(restaurants_text)
restaurants = pd.DataFrame(restaurants_json)
cafe_text = '[{"Media Count":335,"Name":"\\u03a4\\u03bf \\u039a\\u03b1\\u03c6\\u03b5\\u03bd\\u03b5\\u03af\\u03bf \\u03a4\\u03b7\\u03c2 \\u03a3\\u03c4\\u03ac\\u03bb\\u03c9\\u03c2","Category":"Cafeteria","Price Range":"$","lat":35.1718,"lng":33.36223,"Address":"\\u03a0\\u03cd\\u03b8\\u03c9\\u03bd\\u03bf\\u03c2 6","media_count_str":"Media count: 335","media_count_log":8.3880172853},{"Media Count":323,"Name":"\\u03a7\\u03b1\\u03c1\\u03ac\\u03c4\\u03c3\\u03b9 \\u039a\\u03b1\\u03c6\\u03b5\\u03bd\\u03b5\\u03af\\u03bf\\u03bd","Category":"Cafe","Price Range":"$","lat":35.1746101817,"lng":33.365651602,"Address":"","media_count_str":"Media count: 323","media_count_log":8.3353903547},{"Media Count":17,"Name":"Koukounari Nicosia","Category":"Cafe","Price Range":"","lat":35.162347544,"lng":33.3828429371,"Address":"","media_count_str":"Media count: 17","media_count_log":4.0874628413},{"Media Count":5,"Name":"Byzantiou Cafe","Category":"Cafe","Price Range":"","lat":35.1584720333,"lng":33.342387708,"Address":"","media_count_str":"Media count: 5","media_count_log":2.3219280949},{"Media Count":104,"Name":"Momento Deoro","Category":"Cafe","Price Range":"$","lat":35.1402050357,"lng":33.337757,"Address":"145C Strovolos Avenue","media_count_str":"Media count: 104","media_count_log":6.7004397181},{"Media Count":657,"Name":"Giagia Viktoria","Category":"Dessert cafe","Price Range":"$$","lat":35.1744002635,"lng":33.3616326018,"Address":"","media_count_str":"Media count: 657","media_count_log":9.3597495603},{"Media Count":60,"Name":"Agios Demetrios Park - Cafe","Category":"Cafeteria","Price Range":"$","lat":35.1530952286,"lng":33.3551007186,"Address":"Agathonos","media_count_str":"Media count: 60","media_count_log":5.9068905956},{"Media Count":137,"Name":"Ermou 300 Kafeneio","Category":"Cafe","Price Range":"$","lat":35.1760169306,"lng":33.3687038731,"Address":"\\u0395\\u03c1\\u03bc\\u03bf\\u03cd","media_count_str":"Media count: 137","media_count_log":7.098032083},{"Media Count":45,"Name":"\\u039f\\u03c4\\u03b9 \\u039d\\u03b1\\u03bd\\u03b1\\u03b9","Category":"Cafe","Price Range":"$","lat":35.1730670736,"lng":33.3620823722,"Address":"","media_count_str":"Media count: 45","media_count_log":5.4918530963},{"Media Count":18,"Name":"\\u039a\\u03b1\\u03c6\\u03b5\\u03bd\\u03b5\\u03af\\u03bf \\u03b7 \\u03bc\\u03b9\\u03ba\\u03c1\\u03ae \\u03a1\\u03bf\\u03b4\\u03b9\\u03ac","Category":"Cafeteria","Price Range":"","lat":35.15318,"lng":33.39669,"Address":"\\u03a0\\u03bb\\u03b1\\u03c4\\u03b5\\u03af\\u03b1 \\u039a\\u03c5\\u03c1\\u03b9\\u03ac\\u03ba\\u03bf\\u03c5 \\u039a\\u03b1\\u03c1\\u03b1\\u03bf\\u03bb\\u03ae 5","media_count_str":"Media count: 18","media_count_log":4.1699250014}]'
cafe_json = json.loads(cafe_text)
cafe = pd.DataFrame(cafe_json)
def maps(df):
    fig_rest = go.Figure()
    fig_rest.add_trace(
        go.Scattermapbox(
            lat=df['lat'],
            lon=df['lng'],
            mode='markers',
            marker=go.scattermapbox.Marker(
                size=df['media_count_log']+3,
                color='black',
                opacity=1,
            ),
            hoverinfo='none',
            showlegend=False
        )
    )
    fig_rest.add_trace(
        go.Scattermapbox(
            lat=df['lat'],
            lon=df['lng'],
            mode='markers',
            marker=go.scattermapbox.Marker(
                size=df['media_count_log']+3-1.5,
                color='#ffe928',
                opacity=1,
            ),
            hoverinfo='text',
            hovertext=df[['Name', 'media_count_str']],
            showlegend=False
        )
    )
    fig_rest.update_layout(
        margin={"r":0,"t":0,"l":10,"b":50},
        hovermode='closest',
        mapbox=dict(
            bearing=0,
            center=go.layout.mapbox.Center(
                lat=35.17,
                lon=33.36
            ),
            pitch=0,
            zoom=12
        ),
        mapbox_style='carto-positron'
    )
    return fig_rest
rest_fig = maps(restaurants)
cafe_fig = maps(cafe)
app = dp.App(
    dp.Select(
        blocks=[
            dp.Group(blocks=[dp.Plot(rest_fig), dp.Table(restaurants)], label='Restaurants'),
            dp.Group(blocks=[dp.Plot(cafe_fig), dp.Table(cafe)], label='Cafe')
        ]
    ),
)
app.upload(name='nikosia_6')
```
Embed link - https://cloud.datapane.com/apps/M38DVR3/nikosia-6/embed/
|
closed
|
2022-11-07T20:10:05Z
|
2022-11-29T08:53:01Z
|
https://github.com/datapane/datapane/issues/325
|
[
"triage",
"release pending"
] |
DanyarYusupov
| 3
|
sherlock-project/sherlock
|
python
| 2,148
|
Heavy-R F+ / APClips F+
|
### Site name
EyeEm, APClips, Heavy-R
### Additional info
No additional information provided
___
***Edited by reviewer for clarity***
|
open
|
2024-06-02T03:50:04Z
|
2024-11-21T10:22:43Z
|
https://github.com/sherlock-project/sherlock/issues/2148
|
[
"false positive"
] |
tf7software
| 9
|
opengeos/leafmap
|
plotly
| 401
|
Add bbox parameter to create_timelapse function
|
https://twitter.com/marcpfister/status/1639324207191560198
|
closed
|
2023-03-24T17:57:00Z
|
2023-04-23T04:13:19Z
|
https://github.com/opengeos/leafmap/issues/401
|
[
"Feature Request"
] |
giswqs
| 0
|
deezer/spleeter
|
tensorflow
| 784
|
[Discussion] how does changing F from 1024 to 1536 affect the upper frequency processed (since the model was trained at 1024)?
|
Hey guys, so after spending a LOT of time reading the other issues/posts here, the source code, the research papers, and viewing the model, I really can't understand how you can process more than 11025 Hz just by modifying the F parameter from 1024 to 1536 (or 2048 to process all of them).
The network is based on a U-Net, which in turn is based on convolutions - which by definition are FIXED. More specifically, the inputs are 512 (time steps/hops - the T) x 1024 (bins in the FFT - the F) x 2 (channels), and the output is the same shape (the masks). So, even if you set F to 2048, you can only input a 1024-bin spectrogram to the model and you also get a 1024-bin spectrogram back. Is the model somehow dynamic in this regard, so that it also accepts/outputs 2048? If so, how/where? (In the source code I could not find it.) I have read something about the model accepting multiples of 64 (or 128) for T (the time step) but can't figure out how you can achieve something similar using F.
So, while I am familiar with both DSP (including STFT, windowing, reconstruction, ratio-masks, etc) and ML/DNN stuff this is still something I can't understand and I have the feeling that it's either something simple I'm missing or I may be getting more stupid with age :)
If someone can enlighten me on this I would really appreciate it
P.S. I know (and understand) about the mask extending (btw, it sounds a lot better if you use the average from the last/top 25% bins instead of all of them - it greatly reduces the artefacts/interferences) but I only care about the other solution in the FAQ - the changing of the F parameter
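A hedged back-of-the-envelope check (the 4096-sample STFT frame and 44.1 kHz sample rate are assumptions based on Spleeter's published defaults, not restated in this thread): keeping only the first `F` bins caps the processed bandwidth at roughly `F / (n_fft / 2) * (sr / 2)`:
```python
sr, n_fft = 44100, 4096   # assumed Spleeter defaults
nyquist = sr / 2          # 22050.0 Hz
usable_bins = n_fft // 2  # 2048

for F in (1024, 1536, 2048):
    print(F, 'bins ->', F / usable_bins * nyquist, 'Hz')
# 1024 -> 11025.0, 1536 -> 16537.5, 2048 -> 22050.0
```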
|
closed
|
2022-08-31T15:31:09Z
|
2022-09-02T13:08:47Z
|
https://github.com/deezer/spleeter/issues/784
|
[
"question"
] |
netv1
| 1
|
mkhorasani/Streamlit-Authenticator
|
streamlit
| 53
|
Unsafe characters allowed in username creation
|
Currently there is no restriction on characters in a username, which can result in security issues if the value of the username is not handled properly post authentication. Example:
<img width="953" alt="Screen Shot 2023-03-15 at 8 43 48 AM" src="https://user-images.githubusercontent.com/5022772/225327155-f7b15f1e-a0fb-4258-8663-ac2979fc88de.png">
Would you be open to allowing only alphanumeric characters + `_`, or alternatively to letting the auth module take a username validator as an optional parameter to decide the allowed character set?
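A hedged sketch of the kind of validator the second option describes (the function name and the exact policy are illustrative, not part of the library):
```python
import re

USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,30}$")  # alphanumeric + underscore, illustrative bounds

def is_valid_username(username: str) -> bool:
    """Reject usernames containing unsafe characters before registration."""
    return bool(USERNAME_RE.fullmatch(username))

assert is_valid_username("alice_01")
assert not is_valid_username("<script>alert(1)</script>")
```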
|
closed
|
2023-03-15T13:47:27Z
|
2023-04-29T11:08:03Z
|
https://github.com/mkhorasani/Streamlit-Authenticator/issues/53
|
[] |
velicanu
| 1
|
deepspeedai/DeepSpeed
|
pytorch
| 6,736
|
nv-nightly CI test failure
|
The Nightly CI for https://github.com/microsoft/DeepSpeed/actions/runs/11760579904 failed.
|
closed
|
2024-11-10T00:59:49Z
|
2024-11-11T23:39:09Z
|
https://github.com/deepspeedai/DeepSpeed/issues/6736
|
[
"ci-failure"
] |
github-actions[bot]
| 0
|
jupyter/nbviewer
|
jupyter
| 101
|
`%who` in notebook displays all names in `numpy`
|
I have a notebook where I was trying to explain (among other things) the use of `%who` and why `from numpy import *` is a bad idea:
https://raw.github.com/computo-fc/metodos_rigurosos/master/clases/02%20El%20paquete%20numpy%20para%20vectores%20y%20matrices
[This is the 2nd class from a course on numerical methods in Spanish that I am teaching. Each computational class will have its own notebook. It is a stream-of-consciousness-type dump of everything that I wrote in the class. Criticism is welcome :) ]
However, when rendered in NbViewer, even without `from numpy import *` explicitly in the notebook, `%who` returns all names in `numpy`.
[Although `from numpy import *` does appear _later_ in the notebook, to illustrate its bad effect].
This seems to be a bug. (Could it be to do with `%pylab` or something like that?)
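A hedged guess consistent with the closing question (that `%pylab` is the culprit is my assumption, not a confirmed diagnosis): `%pylab` performs a wildcard import of numpy/pylab names into the interactive namespace, so `%who` would list all of them even before any explicit `from numpy import *`:
```python
# inside an IPython/Jupyter session — a sketch of the suspected effect
%pylab inline   # roughly: from pylab import *  (which pulls in the numpy names)
%who            # now lists the numpy/pylab names alongside your own variables
```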
|
closed
|
2013-08-15T04:19:27Z
|
2013-08-15T12:48:27Z
|
https://github.com/jupyter/nbviewer/issues/101
|
[] |
computo-fc
| 4
|
serengil/deepface
|
machine-learning
| 807
|
The real-time video process speed
|
thanks for your great work!
Maybe I have hit a small problem.
When I use real-time video to detect gender, every frame costs 1.4 s.
I found the cause: every time an image is read, the model (DeepFace.analyze with the retinaface detector) is reloaded.
Is there a good solution?
The method I can think of is to load the model in advance and directly analyze each image as it is read. But the DeepFace library is tightly packaged, so making changes is very troublesome.
best wishes!
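A hedged sketch of the pre-loading idea (that DeepFace caches built models after the first call is my assumption about its internals):
```python
import cv2
from deepface import DeepFace

cap = cv2.VideoCapture(0)
_, warmup = cap.read()
# first call pays the model-loading cost once
DeepFace.analyze(img_path=warmup, actions=["gender"],
                 detector_backend="retinaface", enforce_detection=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # subsequent calls should reuse the cached models instead of reloading
    result = DeepFace.analyze(img_path=frame, actions=["gender"],
                              detector_backend="retinaface", enforce_detection=False)
```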
|
closed
|
2023-07-25T03:52:12Z
|
2023-07-25T05:54:00Z
|
https://github.com/serengil/deepface/issues/807
|
[
"question"
] |
wuli66ly
| 1
|
awesto/django-shop
|
django
| 618
|
Customer order list
|
The order list for the customer is limited to 12. This causes the customer to be unable to access older orders.
|
closed
|
2017-07-19T07:33:06Z
|
2017-08-11T23:06:06Z
|
https://github.com/awesto/django-shop/issues/618
|
[
"bug"
] |
maltitco
| 5
|
raphaelvallat/pingouin
|
pandas
| 7
|
qqplot() should allow for NaN removal
|
Currently, if the passed iterable contains one or more `NaN` values, `qqplot()` will return a basically empty plot. There should be the option to automatically remove `NaN` values before plotting (might even be the default).
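A hedged sketch of the manual workaround in the meantime (the NaN filtering is standard numpy; that `qqplot` accepts a plain array follows its documented signature):
```python
import numpy as np
import pingouin as pg

x = np.array([1.2, 0.7, np.nan, 1.9, np.nan, 0.3])
ax = pg.qqplot(x[~np.isnan(x)], dist="norm")  # drop NaNs before plotting
```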
|
closed
|
2019-01-09T15:16:29Z
|
2019-01-15T01:03:30Z
|
https://github.com/raphaelvallat/pingouin/issues/7
|
[
"feature request :construction:"
] |
hoechenberger
| 1
|
Yorko/mlcourse.ai
|
seaborn
| 740
|
Proofread the prereqs section
|
- Fix issues
- Fix typos
- Correct the translation where needed
- Add images where necessary
|
closed
|
2023-02-04T13:55:59Z
|
2023-05-17T13:32:49Z
|
https://github.com/Yorko/mlcourse.ai/issues/740
|
[
"enhancement"
] |
Yorko
| 0
|
scikit-learn-contrib/metric-learn
|
scikit-learn
| 81
|
LMNN - performance with shogun 6.1.3
|
I am trying the metric-learn library, and the docs say that if
> a recent version of the Shogun Python modular (modshogun) library is available, the LMNN implementation will use the fast C++ version from there
I installed shogun 6.1.3 with conda.
I saw in the code that you try to `import modshogun`, but with shogun 6.1.3 the import is `import shogun`.
I was wondering if it will still use shogun or if it will fall back to the pure Python code.
I have not noticed any change in runtime performance in my case.
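A hedged way to check which import actually succeeds in your environment (a standalone probe, not metric-learn's own detection code):
```python
try:
    import modshogun  # the name metric-learn's LMNN looks for
    backend = "modshogun"
except ImportError:
    try:
        import shogun  # the import name shipped with Shogun 6.1.3
        backend = "shogun"
    except ImportError:
        backend = None

# if this prints "shogun" (or None), metric-learn's `import modshogun`
# check would fail and LMNN would fall back to the pure-Python path
print(backend)
```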
|
closed
|
2018-01-16T14:31:39Z
|
2018-01-17T09:44:38Z
|
https://github.com/scikit-learn-contrib/metric-learn/issues/81
|
[] |
laurazh
| 2
|
httpie/cli
|
rest-api
| 622
|
"!!" in a parameter value does wierd things...
|
when i put "!!" somewhere in a parameter value i get a wierd repeated command line passed in as a parameter instead, and then of course httpie errors out
is there some significance to "!!" in the parameter value?
thanks
CLOSED - discovered it is a bash shell thing - sorry
|
closed
|
2017-10-14T14:40:54Z
|
2017-10-14T14:44:07Z
|
https://github.com/httpie/cli/issues/622
|
[] |
fake-fur
| 0
|
Gerapy/Gerapy
|
django
| 250
|
Installation fails on Python 3.7
|
Gerapy installation failed. Error message:
pip subprocess to install build dependencies did not run successfully.
|
open
|
2022-07-28T00:21:09Z
|
2022-07-28T00:21:09Z
|
https://github.com/Gerapy/Gerapy/issues/250
|
[
"bug"
] |
songsh
| 0
|
google-research/bert
|
nlp
| 845
|
How to set weight decay for layers other than BERT?
|
## ❓ Questions & Help
I notice that we should set the weight decay of bias and LayerNorm.weight to zero and the weight decay of the other parameters in BERT to 0.01. But how do I set the weight decay of other layers, such as the classifier after BERT? Thanks
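A hedged PyTorch-style sketch of extending the usual two-group pattern to a head on top of BERT (the `classifier` attribute name follows a common convention and is an assumption; your head's name may differ):
```python
import torch

no_decay = ("bias", "LayerNorm.weight")

def param_groups(model, bert_wd=0.01, head_wd=0.01):
    """Split parameters into BERT decay / no-decay / classifier-head groups."""
    decay, nodecay, head = [], [], []
    for name, p in model.named_parameters():
        if name.startswith("classifier"):      # assumed attribute name for the head
            head.append(p)
        elif any(nd in name for nd in no_decay):
            nodecay.append(p)
        else:
            decay.append(p)
    return [
        {"params": decay, "weight_decay": bert_wd},
        {"params": nodecay, "weight_decay": 0.0},
        {"params": head, "weight_decay": head_wd},  # tune independently of BERT
    ]

# optimizer = torch.optim.AdamW(param_groups(model), lr=2e-5)
```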
|
open
|
2019-09-07T00:05:21Z
|
2019-09-07T00:05:21Z
|
https://github.com/google-research/bert/issues/845
|
[] |
g-jing
| 0
|