repo_name (string, 9–75 chars) | topic (string, 30 classes) | issue_number (int64, 1–203k) | title (string, 1–976 chars) | body (string, 0–254k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38–105 chars) | labels (list, 0–9 items) | user_login (string, 1–39 chars) | comments_count (int64, 0–452)
|---|---|---|---|---|---|---|---|---|---|---|---|
voila-dashboards/voila
|
jupyter
| 761
|
Distribute the JupyterLab preview extension with the Python package
|
This will be a follow-up to #732.
For JupyterLab 3.0 we should consider distributing the preview extension with the `voila` Python package using the federated extension system.
We should however still publish the extension to npm for those who want to depend on `@jupyter-voila/jupyterlab-preview` in their own extension.
|
closed
|
2020-11-11T11:56:06Z
|
2020-12-23T13:39:21Z
|
https://github.com/voila-dashboards/voila/issues/761
|
[
"jupyterlab-preview"
] |
jtpio
| 0
|
sgl-project/sglang
|
pytorch
| 3,996
|
Performance issue comparing on MI210x4
|
Hello developers,
I'm benchmarking sglang and vllm on my server with 4 AMD Instinct MI210 GPUs. Here are some configuration details and benchmark results:
vllm server with command:
```
python -m vllm.entrypoints.openai.api_server \
    --model /app/wdxu/models/DeepSeek-R1-Distill-Qwen-32B \
    -tp 4 \
    --port 10000
```
configs:
```
INFO 03-02 15:20:30 [api_server.py:912] args: Namespace(host=None, port=10000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, enable_ssl_refresh=False, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=False, enable_reasoning=False, reasoning_parser=None, tool_call_parser=None, tool_parser_plugin='', model='/app/wdxu/models/DeepSeek-R1-Distill-Qwen-32B', task='auto', tokenizer=None, hf_config_path=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, allowed_local_media_path=None, download_dir=None, load_format='auto', config_format=<ConfigFormat.AUTO: 'auto'>, dtype='auto', kv_cache_dtype='auto', max_model_len=None, guided_decoding_backend='xgrammar', logits_processor_pattern=None, model_impl='auto', distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=4, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=None, enable_prefix_caching=None, disable_sliding_window=False, use_v2_block_manager=True, num_lookahead_slots=0, seed=0, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_partial_prefills=1, max_long_partial_prefills=1, long_prefill_token_threshold=0, max_num_seqs=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=False, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, scheduling_policy='fcfs', scheduler_cls='vllm.core.scheduler.Scheduler', override_neuron_config=None, override_pooler_config=None, compilation_config=None, kv_transfer_config=None, worker_cls='auto', generation_config=None, override_generation_config=None, enable_sleep_mode=False, calculate_kv_scales=False, 
additional_config=None, disable_log_requests=False, max_log_len=None, disable_fastapi_docs=False, enable_prompt_tokens_details=False)
```
sglang server with command:
```
python -m sglang.launch_server \
    --model-path /app/wdxu/models/DeepSeek-R1-Distill-Qwen-32B \
    --host 0.0.0.0 \
    --port 10000 \
    --tp-size 4
```
configs:
```
[2025-03-02 15:19:44] server_args=ServerArgs(model_path='/app/wdxu/models/DeepSeek-R1-Distill-Qwen-32B', tokenizer_path='/app/wdxu/models/DeepSeek-R1-Distill-Qwen-32B', tokenizer_mode='auto', load_format='auto', trust_remote_code=False, dtype='auto', kv_cache_dtype='auto', quantization_param_path=None, quantization=None, context_length=None, device='cuda', served_model_name='/app/wdxu/models/DeepSeek-R1-Distill-Qwen-32B', chat_template=None, is_embedding=False, revision=None, skip_tokenizer_init=False, host='0.0.0.0', port=10000, mem_fraction_static=0.85, max_running_requests=None, max_total_tokens=None, chunked_prefill_size=8192, max_prefill_tokens=16384, schedule_policy='lpm', schedule_conservativeness=1.0, cpu_offload_gb=0, prefill_only_one_req=False, tp_size=4, stream_interval=1, stream_output=False, random_seed=751863663, constrained_json_whitespace_pattern=None, watchdog_timeout=300, download_dir=None, base_gpu_id=0, log_level='info', log_level_http=None, log_requests=False, show_time_cost=False, enable_metrics=False, decode_log_interval=40, api_key=None, file_storage_pth='sglang_storage', enable_cache_report=False, dp_size=1, load_balance_method='round_robin', ep_size=1, dist_init_addr=None, nnodes=1, node_rank=0, json_model_override_args='{}', lora_paths=None, max_loras_per_batch=8, lora_backend='triton', attention_backend='triton', sampling_backend='pytorch', grammar_backend='outlines', speculative_draft_model_path=None, speculative_algorithm=None, speculative_num_steps=5, speculative_num_draft_tokens=64, speculative_eagle_topk=8, enable_double_sparsity=False, ds_channel_config_path=None, ds_heavy_channel_num=32, ds_heavy_token_num=256, ds_heavy_channel_type='qk', ds_sparse_decode_threshold=4096, disable_radix_cache=False, disable_jump_forward=False, disable_cuda_graph=False, disable_cuda_graph_padding=False, enable_nccl_nvls=False, disable_outlines_disk_cache=False, disable_custom_all_reduce=False, disable_mla=False, disable_overlap_schedule=False, enable_mixed_chunk=False, enable_dp_attention=False, enable_ep_moe=False, enable_torch_compile=False, torch_compile_max_bs=32, cuda_graph_max_bs=160, cuda_graph_bs=None, torchao_config='', enable_nan_detection=False, enable_p2p_check=False, triton_attention_reduce_in_fp32=False, triton_attention_num_kv_splits=16, num_continuous_decode_steps=1, delete_ckpt_after_loading=False, enable_memory_saver=False, allow_auto_truncate=False, return_hidden_states=False, enable_custom_logit_processor=False, tool_call_parser=None, enable_hierarchical_cache=False, enable_flashinfer_mla=False)
```
vllm benchmark
```
Namespace(backend='vllm', base_url=None, host='0.0.0.0', port=10000, dataset_name='sharegpt', dataset_path='', model='/app/wdxu/models/DeepSeek-R1-Distill-Qwen-32B', tokenizer=None, num_prompts=1000, sharegpt_output_len=None, sharegpt_context_len=None, random_input_len=1024, random_output_len=1024, random_range_ratio=0.0, request_rate=inf, max_concurrency=250, multi=False, request_rate_range='2,34,2', output_file=None, disable_tqdm=False, disable_stream=False, return_logprob=False, seed=1, disable_ignore_eos=False, extra_request_body=None, apply_chat_template=False, profile=False, lora_name=None, gsp_num_groups=64, gsp_prompts_per_group=16, gsp_system_prompt_len=2048, gsp_question_len=128, gsp_output_len=256)
```
sglang benchmark
```
Namespace(backend='sglang', base_url=None, host='0.0.0.0', port=10000, dataset_name='sharegpt', dataset_path='', model='/app/wdxu/models/DeepSeek-R1-Distill-Qwen-32B', tokenizer=None, num_prompts=1000, sharegpt_output_len=None, sharegpt_context_len=None, random_input_len=1024, random_output_len=1024, random_range_ratio=0.0, request_rate=inf, max_concurrency=250, multi=False, request_rate_range='2,34,2', output_file=None, disable_tqdm=False, disable_stream=False, return_logprob=False, seed=1, disable_ignore_eos=False, extra_request_body=None, apply_chat_template=False, profile=False, lora_name=None, gsp_num_groups=64, gsp_prompts_per_group=16, gsp_system_prompt_len=2048, gsp_question_len=128, gsp_output_len=256)
```
Results:
vllm: (benchmark screenshot)
sglang: (benchmark screenshot)
I see a significant performance gap between the two inference frameworks and I'm wondering whether I made some mistakes when benchmarking.
Thanks!
|
open
|
2025-03-02T15:35:24Z
|
2025-03-03T06:33:03Z
|
https://github.com/sgl-project/sglang/issues/3996
|
[] |
VincentXWD
| 1
|
google-research/bert
|
tensorflow
| 1,025
|
BERT pretraining num_train_steps questions
|
Hello,
I would like to confirm how the number of training steps and hence the number of epochs used in the paper for pretraining BERT is calculated.
From the paper, I deduced (kindly correct me if I am mistaken):
num_training_steps = num_desired_epochs * (num_words_in_input_corpus / tokens_per_batch), counting words (not sentences) across the whole input corpus,
where tokens_per_batch = batch_size * max_seq_len
So using the numbers in the paper:
1,000,000 ~ 40 * (3,300,000,000 / (256*512))
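A quick numeric check of this deduction (a sketch; the variable names are mine, the numbers are from the paper):
```python
# Sanity check of the step-count deduction above (numbers from the BERT paper).
batch_size = 256
max_seq_len = 512
corpus_words = 3_300_000_000  # ~3.3B words
epochs = 40

tokens_per_batch = batch_size * max_seq_len       # 131,072
steps = epochs * corpus_words / tokens_per_batch  # ~1,007,080
print(f"{steps:,.0f} steps")                      # close to the paper's 1,000,000
```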
**Question # 1) Is this deduction correct?**
---
Also, to my understanding, batch_size represents the "number of training instances consumed in 1 batch".
If we assume:
- the original number of sequences in my original dataset is 100 (a simple number for sake of easing the explanation) and
- we set the dupe_factor in "create_pretraining_data.py" to 5, resulting in a total of approximately 5x100=500 training instances for BERT.
**Question # 2) Is the "number of training instances consumed in 1 batch" concerned with original 100 or duped 500 instances?**
Thank you!
|
open
|
2020-03-07T14:02:32Z
|
2020-06-05T19:17:44Z
|
https://github.com/google-research/bert/issues/1025
|
[] |
MarwahAhmadHelaly
| 3
|
deepinsight/insightface
|
pytorch
| 2,262
|
Why Fp16 Grad Scale is 0?
|
Training: 2023-03-14 09:08:43,293-Speed 2178.40 samples/sec Loss nan LearningRate 0.001000 Epoch: 0 Global Step: 20 Fp16 Grad Scale: 0 Required: 194 hours
Training: 2023-03-14 09:08:47,984-Speed 2183.37 samples/sec Loss nan LearningRate 0.001000 Epoch: 0 Global Step: 30 Fp16 Grad Scale: 0 Required: 144 hours
Training: 2023-03-14 09:08:52,689-Speed 2177.01 samples/sec Loss nan LearningRate 0.001000 Epoch: 0 Global Step: 40 Fp16 Grad Scale: 0 Required: 120 hours
|
closed
|
2023-03-14T01:11:43Z
|
2023-03-27T02:18:13Z
|
https://github.com/deepinsight/insightface/issues/2262
|
[] |
deep-practice
| 0
|
AUTOMATIC1111/stable-diffusion-webui
|
pytorch
| 15,308
|
[Bug]: sdxl model with api generates simple/geometric shapes images
|
### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
I downloaded an SDXL model as well as sdxl_vae.safetensors, and put both in their respective folders.
The model works fine in the web UI, but when using the API the output images are, for lack of a better word, simplistic: almost like the drawings of a 3-year-old, full of geometric shapes or random artifacts.
I initially thought it had to do with the vae, so I edited
`sd_vae = "Automatic"`
and posted to the /sdapi/v1/options endpoint, but nothing changed.
The thing is, this issue only arises when using the API.
Can someone tell me what could be the reason? I could not find anything online.
### Steps to reproduce the problem
1. load "sd_model" and "sd_vae" info on the options POST request
2. do a txt2img post request as normal, set width and height to 1024px.
3. get the output image, which looks overly simplistic and boring.
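For reference, a minimal sketch of these two requests (the option keys, checkpoint filename, prompt, and other payload values here are illustrative assumptions, not the reporter's exact request):
```python
import requests

base = "http://127.0.0.1:7860"  # assumed local webui address

# 1. Select the SDXL checkpoint and VAE via the options endpoint.
requests.post(f"{base}/sdapi/v1/options", json={
    "sd_model_checkpoint": "sd_xl_base_1.0.safetensors",  # hypothetical filename
    "sd_vae": "sdxl_vae.safetensors",
})

# 2. Plain txt2img request at the SDXL-native 1024x1024 resolution.
r = requests.post(f"{base}/sdapi/v1/txt2img", json={
    "prompt": "a photo of a cat",  # placeholder prompt
    "width": 1024,
    "height": 1024,
    "steps": 30,
})
images = r.json()["images"]  # base64-encoded PNGs
```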
### What should have happened?
the output image generated via the API should look exactly like the images coming out when using the web UI => higher quality, better in general, no random artifacts
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
version: [v1.7.0](https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/cf2772fab0af5573da775e7437e6acdca424f26e) • python: 3.10.13 • torch: 2.0.1+cu118 • xformers: 0.0.20 • gradio: 3.41.2 • checkpoint: [67ab2fd8ec](https://google.com/search?q=67ab2fd8ec439a89b3fedb15cc65f54336af163c7eb5e4f2acc98f090a29b0b3)
### Console logs
```Shell
there are no error messages or bugs.
```
### Additional information
_No response_
|
closed
|
2024-03-18T17:57:02Z
|
2024-03-19T12:04:48Z
|
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15308
|
[
"bug-report"
] |
victorbianconi
| 0
|
CorentinJ/Real-Time-Voice-Cloning
|
tensorflow
| 1,223
|
FFMPEG error, works most of the time but random error
|
Some voice files give me an error when I attempt to use my training on them. I have changed the format of the files, edited and saved them in Audition, and so on, but I still occasionally get this output and it won't work:
```
Traceback (most recent call last):
File "L:\RCV_voiceclone\my_utils.py", line 14, in load_audio
ffmpeg.input(file, threads=0)
File "L:\RCV_voiceclone\runtime\lib\site-packages\ffmpeg\_run.py", line 325, in run
raise Error('ffmpeg', out, err)
ffmpeg._run.Error: ffmpeg error (see stderr output for detail)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "L:\RCV_voiceclone\infer-web.py", line 161, in vc_single
audio = load_audio(input_audio_path, 16000)
File "L:\RCV_voiceclone\my_utils.py", line 19, in load_audio
raise RuntimeError(f"Failed to load audio: {e}")
RuntimeError: Failed to load audio: ffmpeg error (see stderr output for detail)
Traceback (most recent call last):
File "L:\RCV_voiceclone\runtime\lib\site-packages\gradio\routes.py", line 321, in run_predict
output = await app.blocks.process_api(
File "L:\RCV_voiceclone\runtime\lib\site-packages\gradio\blocks.py", line 1007, in process_api
data = self.postprocess_data(fn_index, result["prediction"], state)
File "L:\RCV_voiceclone\runtime\lib\site-packages\gradio\blocks.py", line 953, in postprocess_data
prediction_value = block.postprocess(prediction_value)
File "L:\RCV_voiceclone\runtime\lib\site-packages\gradio\components.py", line 2076, in postprocess
processing_utils.audio_to_file(sample_rate, data, file.name)
File "L:\RCV_voiceclone\runtime\lib\site-packages\gradio\processing_utils.py", line 206, in audio_to_file
data = convert_to_16_bit_wav(data)
File "L:\RCV_voiceclone\runtime\lib\site-packages\gradio\processing_utils.py", line 219, in convert_to_16_bit_wav
if data.dtype in [np.float64, np.float32, np.float16]:
AttributeError: 'NoneType' object has no attribute 'dtype'
```
This is not a constant thing, as there have been many times where files DO work. I'm not sure if you have a stderr capture in the code for the ffmpeg errors; if so, I can include those logs as well if told where they are saved.
|
open
|
2023-06-04T19:54:44Z
|
2023-06-04T19:54:44Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1223
|
[] |
hobolyra
| 0
|
deepspeedai/DeepSpeed
|
deep-learning
| 6,006
|
[BUG] Bad compatibility check by testing the existence of a CHANGELOG.md file which is not always available depending on the way of CUTLASS library installation
|
In this code, if the nvidia-cutlass package is installed with pip, there is no CHANGELOG.md file in the installed site-packages directory, and hence the check returns a false negative.


https://github.com/microsoft/DeepSpeed/blob/4d4ff0edddb28a4d53d40dffad9ba16e59b83aec/op_builder/evoformer_attn.py#L55C41-L55C53
Please fix this by checking for files that are always available; or maybe the best check is to see whether the following Python code returns a valid result without throwing an exception:
```python
import os, cutlass
cutlass_path = os.path.dirname(cutlass.__file__)
cutlass_version = cutlass.__version__
```
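For reference, a sketch of how that suggestion could be wrapped into a check (it assumes only what the snippet above assumes: that the pip-installed `cutlass` module exposes `__file__` and `__version__`):
```python
def is_cutlass_available() -> bool:
    # Returns False instead of raising when cutlass is missing or unusual.
    try:
        import os
        import cutlass
        cutlass_path = os.path.dirname(cutlass.__file__)
        cutlass_version = cutlass.__version__
        return bool(cutlass_path) and bool(cutlass_version)
    except Exception:
        return False
```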

|
closed
|
2024-08-16T00:33:29Z
|
2024-08-22T17:36:14Z
|
https://github.com/deepspeedai/DeepSpeed/issues/6006
|
[] |
zhangwei217245
| 0
|
K3D-tools/K3D-jupyter
|
jupyter
| 393
|
Minor grid lines are not visible
|
* K3D version: 2.14.5
* ipywidgets: 7.7.1
* notebook: 6.5.2
* Python version: 3.10.6
* Operating System: Ubuntu 22.04
* Nvidia Driver: 515.86.01
### Description
I've just started using a new machine that comes with an NVIDIA RTX 3060. It seems like the minor grid lines are not rendered.

However, if I lower the opacity of the mesh, then they become visible underneath the mesh:

Note that if I take a PNG screenshot, they will be visible.

I'm wondering if this is a driver-related issue, whether anyone has experienced something similar, and if so, how to fix it?
### Web console log / python logs
```
K3D: (UNMASKED_VENDOR_WEBGL) NVIDIA Corporation [Renderer.js:64:12](webpack://k3d/src/providers/threejs/initializers/Renderer.js)
K3D: (UNMASKED_RENDERER_WEBGL) NVIDIA GeForce GTX 980/PCIe/SSE2 [Renderer.js:65:12](webpack://k3d/src/providers/threejs/initializers/Renderer.js)
K3D: (depth bits) 24 [Renderer.js:66:12](webpack://k3d/src/providers/threejs/initializers/Renderer.js)
K3D: (stencil bits) 8 [Renderer.js:67:12](webpack://k3d/src/providers/threejs/initializers/Renderer.js)
K3D: Object type "Mesh" loaded in: 0.287 s [Loader.js:61:32](webpack://k3d/src/core/lib/Loader.js)
```
|
closed
|
2022-12-12T22:24:16Z
|
2022-12-20T08:10:52Z
|
https://github.com/K3D-tools/K3D-jupyter/issues/393
|
[] |
Davide-sd
| 2
|
yt-dlp/yt-dlp
|
python
| 11,764
|
Sign the Mac executable file in the release
|
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a feature unrelated to a specific site
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
Currently the released executable for macOS is unsigned, so it must be manually allowed in the security settings before it can be used.
Signing it would remove the warning. yt-dlp is now a big project and is worth signing :)
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
_No response_
|
closed
|
2024-12-08T12:10:00Z
|
2024-12-10T23:25:21Z
|
https://github.com/yt-dlp/yt-dlp/issues/11764
|
[
"docs/meta/cleanup",
"wontfix"
] |
asdf2adsfad
| 1
|
OpenBB-finance/OpenBB
|
machine-learning
| 6,680
|
[Bug] OpenBB Platform hijacks default logger
|
**Describe the bug**
Importing openbb manipulates the default (root) logger, impacting host applications.
**To Reproduce**
Run this code:
```py
import logging
logging.basicConfig(level=logging.INFO)
logging.info("This message is shown")
import openbb
logging.info("This message is hidden")
```
_Notice that the host application's messages printed after importing openbb no longer behave the same._
At the very least it would be nice to be able to disable this behavior with a user/system setting so that the host application can control logging. Yes, the application can override, but because this happens when openbb is imported, it can happen at any time while the application is running. This design effectively forces the user to import openbb at a controlled location and then override these settings, which is of course not ideal.
**Desktop (please complete the following information):**
- OS: Mac affects all OSes
- Python version 3.11 affects all versions
**Additional context**
I believe this is due to the `LoggingService` changing the default logging configuration:
https://github.com/OpenBB-finance/OpenBB/blob/develop/openbb_platform/core/openbb_core/app/logs/logging_service.py#L122-L128
```py
logging.basicConfig(
    level=self._logging_settings.verbosity,
    format=FormatterWithExceptions.LOGFORMAT,
    datefmt=FormatterWithExceptions.DATEFORMAT,
    handlers=[],
    force=True,
)
```
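For reference, a host application can re-assert its own configuration after the import, as the reporter notes (a sketch; `force=True` replaces the handlers that openbb installed, and only helps once openbb has already been imported):
```python
import logging

import openbb  # noqa: F401  (importing this rewires the root logger)

# Re-assert the host application's logging configuration.
logging.basicConfig(level=logging.INFO, force=True)
logging.info("This message is shown again")
```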
|
closed
|
2024-09-19T21:06:50Z
|
2024-09-23T12:55:32Z
|
https://github.com/OpenBB-finance/OpenBB/issues/6680
|
[
"bug"
] |
ncimino
| 4
|
huggingface/datasets
|
pytorch
| 7,249
|
How to debug
|
### Describe the bug
I wanted to use my own script to handle the processing, and followed the tutorial documentation by writing the MyDatasetConfig and MyDataset builder classes (the latter containing the _info, _split_generators and _generate_examples methods). Testing with simple data produced the expected results, but when I wanted to do more complex processing I found that I was unable to debug (even the simple samples were inaccessible). No errors are reported, and I can see the prints from _info, _split_generators and _generate_examples, but my breakpoints are never hit.
### Steps to reproduce the bug
```python
# my_dataset.py
import json

import datasets


class MyDatasetConfig(datasets.BuilderConfig):
    def __init__(self, **kwargs):
        super(MyDatasetConfig, self).__init__(**kwargs)


class MyDataset(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("1.0.0")
    BUILDER_CONFIGS = [
        MyDatasetConfig(
            name="default",
            version=VERSION,
            description="myDATASET"
        ),
    ]

    def _info(self):
        print("info")  # breakpoints
        return datasets.DatasetInfo(
            description="myDATASET",
            features=datasets.Features(
                {
                    "id": datasets.Value("int32"),
                    "text": datasets.Value("string"),
                    "label": datasets.ClassLabel(names=["negative", "positive"]),
                }
            ),
            supervised_keys=("text", "label"),
        )

    def _split_generators(self, dl_manager):
        print("generate")  # breakpoints
        data_file = "data.json"
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"filepath": data_file}
            ),
        ]

    def _generate_examples(self, filepath):
        print("example")  # breakpoints
        with open(filepath, encoding="utf-8") as f:
            data = json.load(f)
        for idx, sample in enumerate(data):
            yield idx, {
                "id": sample["id"],
                "text": sample["text"],
                "label": sample["label"],
            }
```

```python
# main.py
import os
os.environ["TRANSFORMERS_NO_MULTIPROCESSING"] = "1"
from datasets import load_dataset

dataset = load_dataset("my_dataset.py", split="train", cache_dir=None)
print(dataset[:5])
```
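For what it's worth, one way to keep breakpoints in-process is to drive the builder directly instead of going through load_dataset (a sketch under that assumption, not from the tutorial):
```python
# debug_my_dataset.py -- run this under the debugger instead of main.py.
# Driving the builder directly keeps _info/_split_generators/_generate_examples
# in the current process, so IDE breakpoints have a chance to bind.
from my_dataset import MyDataset

builder = MyDataset()
builder.download_and_prepare()
dataset = builder.as_dataset(split="train")
print(dataset[:5])
```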
### Expected behavior
Pause at breakpoints while running debugging
### Environment info
pycharm
|
open
|
2024-10-24T01:03:51Z
|
2024-10-24T01:03:51Z
|
https://github.com/huggingface/datasets/issues/7249
|
[] |
ShDdu
| 0
|
ydataai/ydata-profiling
|
jupyter
| 779
|
'Heat Map tabs not clickable'
|
**Warning / Heat Map tabs not clickable**
The warnings tab in the pandas-profiling report is not clickable despite multiple attempts.
[New PowerPoint.pptx](https://github.com/pandas-profiling/pandas-profiling/files/6448197/New.PowerPoint.pptx)
|
closed
|
2021-05-09T18:47:06Z
|
2021-05-11T23:34:55Z
|
https://github.com/ydataai/ydata-profiling/issues/779
|
[
"information requested ❔"
] |
Leneet603
| 1
|
openapi-generators/openapi-python-client
|
rest-api
| 211
|
Exception when running with Python 3.9
|
**Describe the bug**
Traceback when trying to run `openapi-python-client generate --url https://api.va.gov/services/va_facilities/docs/v0/api`
**To Reproduce**
Steps to reproduce the behavior:
1. Install via pipx
2. run `openapi-python-client generate --url https://api.va.gov/services/va_facilities/docs/v0/api`
**Expected behavior**
I'm not sure yet, this is my first interaction with the tool.
**OpenAPI Spec File**
n/a
**Desktop (please complete the following information):**
- OS: MacOS 10.14.6
- Python Version: 3.7.6
- openapi-python-client version - not sure
**Additional context**
```
Traceback (most recent call last):
File "/Users/tommy/.local/bin/openapi-python-client", line 5, in <module>
from openapi_python_client.cli import app
File "/Users/tommy/.local/pipx/venvs/openapi-python-client/lib/python3.9/site-packages/openapi_python_client/__init__.py", line 16, in <module>
from .parser import GeneratorData, import_string_from_reference
File "/Users/tommy/.local/pipx/venvs/openapi-python-client/lib/python3.9/site-packages/openapi_python_client/parser/__init__.py", line 5, in <module>
from .openapi import GeneratorData, import_string_from_reference
File "/Users/tommy/.local/pipx/venvs/openapi-python-client/lib/python3.9/site-packages/openapi_python_client/parser/openapi.py", line 8, in <module>
from .. import schema as oai
File "/Users/tommy/.local/pipx/venvs/openapi-python-client/lib/python3.9/site-packages/openapi_python_client/schema/__init__.py", line 41, in <module>
from .components import Components
File "/Users/tommy/.local/pipx/venvs/openapi-python-client/lib/python3.9/site-packages/openapi_python_client/schema/components.py", line 6, in <module>
from .header import Header
File "/Users/tommy/.local/pipx/venvs/openapi-python-client/lib/python3.9/site-packages/openapi_python_client/schema/header.py", line 3, in <module>
from .parameter import Parameter
File "/Users/tommy/.local/pipx/venvs/openapi-python-client/lib/python3.9/site-packages/openapi_python_client/schema/parameter.py", line 6, in <module>
from .media_type import MediaType
File "/Users/tommy/.local/pipx/venvs/openapi-python-client/lib/python3.9/site-packages/openapi_python_client/schema/media_type.py", line 8, in <module>
from .schema import Schema
File "/Users/tommy/.local/pipx/venvs/openapi-python-client/lib/python3.9/site-packages/openapi_python_client/schema/schema.py", line 559, in <module>
Schema.update_forward_refs()
File "/Users/tommy/.local/pipx/venvs/openapi-python-client/lib/python3.9/site-packages/pydantic/main.py", line 677, in update_forward_refs
update_field_forward_refs(f, globalns=globalns, localns=localns)
File "/Users/tommy/.local/pipx/venvs/openapi-python-client/lib/python3.9/site-packages/pydantic/typing.py", line 237, in update_field_forward_refs
update_field_forward_refs(sub_f, globalns=globalns, localns=localns)
File "/Users/tommy/.local/pipx/venvs/openapi-python-client/lib/python3.9/site-packages/pydantic/typing.py", line 237, in update_field_forward_refs
update_field_forward_refs(sub_f, globalns=globalns, localns=localns)
File "/Users/tommy/.local/pipx/venvs/openapi-python-client/lib/python3.9/site-packages/pydantic/typing.py", line 233, in update_field_forward_refs
field.type_ = evaluate_forwardref(field.type_, globalns, localns or None)
File "/Users/tommy/.local/pipx/venvs/openapi-python-client/lib/python3.9/site-packages/pydantic/typing.py", line 50, in evaluate_forwardref
return type_._evaluate(globalns, localns)
TypeError: _evaluate() missing 1 required positional argument: 'recursive_guard'
```
|
closed
|
2020-10-10T19:59:08Z
|
2020-11-03T16:38:16Z
|
https://github.com/openapi-generators/openapi-python-client/issues/211
|
[
"🐞bug"
] |
duganth
| 3
|
sqlalchemy/alembic
|
sqlalchemy
| 310
|
_iterate_related_revisions gets stuck with duplicate revisions
|
**Migrated issue, originally created by Michael Bayer ([@zzzeek](https://github.com/zzzeek))**
The attached test illustrates a complex revision tree where _iterate_related_revisions() gets bogged down processing the same nodes over and over again. Since it only needs to emit unique nodes, we need to put a seen set into it:
```diff
index e9958b1..40cfbb2 100644
--- a/alembic/script/revision.py
+++ b/alembic/script/revision.py
@@ -544,17 +544,24 @@ class RevisionMap(object):
if map_ is None:
map_ = self._revision_map
+ seen = set()
todo = collections.deque()
for target in targets:
+
todo.append(target)
if check:
per_target = set()
+
while todo:
rev = todo.pop()
- todo.extend(
- map_[rev_id] for rev_id in fn(rev))
if check:
per_target.add(rev)
+
+ if rev in seen:
+ continue
+ seen.add(rev)
+ todo.extend(
+ map_[rev_id] for rev_id in fn(rev))
yield rev
if check and per_target.intersection(targets).difference([target]):
raise RevisionError(
```
----------------------------------------
Attachments: [runtest.py](../wiki/imported_issue_attachments/310/runtest.py)
|
closed
|
2015-07-22T16:12:16Z
|
2015-07-22T16:38:22Z
|
https://github.com/sqlalchemy/alembic/issues/310
|
[
"bug",
"versioning model"
] |
sqlalchemy-bot
| 3
|
s3rius/FastAPI-template
|
asyncio
| 50
|
Template uses docker-compose v2 "depends_on" syntax, but the compose file declares version: '3.7'
|
Issue:
```bash
$ docker-compose -f deploy/docker-compose.yml --project-directory . build
ERROR: The Compose file './deploy/docker-compose.yml' is invalid because:
services.api.depends_on contains an invalid type, it should be an array
services.migrator.depends_on contains an invalid type, it should be an array
```
In docker-compose file:
```
depends_on:
  db:
    condition: service_healthy
  redis:
    condition: service_healthy
```
Should be:
```
depends_on:
  - db
  - redis
```
Reference: https://docs.docker.com/compose/compose-file/compose-file-v3/#depends_on
|
closed
|
2021-11-17T22:55:41Z
|
2022-04-12T22:09:04Z
|
https://github.com/s3rius/FastAPI-template/issues/50
|
[] |
yevhen-kalyna
| 4
|
Avaiga/taipy
|
automation
| 1,535
|
Improve style of metric control
|
### Description
Having two metrics takes up my whole page; each metric takes 100% of my screen. This is unusable as it stands.

The size must be configurable.
This should look like this by default:

### Acceptance Criteria
- [ ] Ensure new code is unit tested, and check code coverage is at least 90%.
- [ ] Propagate any change on the demos and run all of them to ensure there is no breaking change.
- [ ] Ensure any change is well documented.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional)
|
closed
|
2024-07-17T09:08:10Z
|
2024-08-23T07:32:21Z
|
https://github.com/Avaiga/taipy/issues/1535
|
[
"📈 Improvement",
"🖰 GUI",
"🆘 Help wanted",
"🟧 Priority: High"
] |
FlorianJacta
| 4
|
pydantic/pydantic-ai
|
pydantic
| 552
|
Longer text returns may seem to be missing at the end.
|
```py
class QaResult(BaseModel):
    answer: str = Field(description="Answer to question, Markdown format")
    mermaid: str = Field(description="")
```
The returned answer may not be finished yet.
|
closed
|
2024-12-27T06:24:56Z
|
2024-12-27T15:43:04Z
|
https://github.com/pydantic/pydantic-ai/issues/552
|
[
"question",
"more info"
] |
blackwhite084
| 1
|
laughingman7743/PyAthena
|
sqlalchemy
| 214
|
DurationSeconds in SQLAlchemy
|
Hi,
```
conn_str = f"awsathena+rest://{aws_access_key_id}:{aws_secret_access_key}@athena.{region_name}.amazonaws.com:443/?"\
f"s3_staging_dir={s3_staging_dir}&role_arn={role_arn}&role_session_name={role_session_name}"\
f"&serial_number={serial_number}&duration_seconds={duration_seconds}"
```
adding the argument duration_seconds to the SQLAlchemy connection string gives the following error:
```
Invalid type for parameter DurationSeconds, value: 10800, type: <class 'str'>, valid types: <class 'int'>
```
This is not surprising, because the option expects an int, while everything parsed from the connection string arrives as a str, which I don't think the caller can avoid.
I guess there should either be a check that parses the string to an int, or the argument should accept a str in the end.
Hope you understand the problem.
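A possible workaround (a sketch; it assumes SQLAlchemy's connect_args is forwarded to pyathena.connect() and that duration_seconds is accepted there as a keyword, with placeholder region/schema/bucket):
```python
from sqlalchemy import create_engine

# Pass the int-typed option via connect_args instead of the URL query string,
# so it reaches the DBAPI connect() without being coerced to str.
engine = create_engine(
    "awsathena+rest://@athena.eu-west-1.amazonaws.com:443/default",
    connect_args={
        "s3_staging_dir": "s3://my-bucket/staging/",
        "duration_seconds": 10800,  # a real int, not a str parsed from the URL
    },
)
```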
|
closed
|
2021-02-15T15:11:28Z
|
2021-02-21T14:49:48Z
|
https://github.com/laughingman7743/PyAthena/issues/214
|
[] |
acountrec
| 2
|
python-restx/flask-restx
|
flask
| 129
|
How to enable version switcher?
|
Hi team, I went through the docs but didn't find a way to enable the version switcher in Swagger. What I want to achieve is to present a different swagger.json based on the version.
I've already used namespaces for the API:
foo/v1/bar
foo/v1.1/bar
but the JSON contains the whole API, both v1 and v1.1, and it's kind of messy.
Thanks in advance
|
open
|
2020-05-06T20:09:25Z
|
2020-09-13T06:32:47Z
|
https://github.com/python-restx/flask-restx/issues/129
|
[
"question"
] |
rqiu25
| 2
|
sangaline/wayback-machine-scraper
|
web-scraping
| 4
|
Crashes (includes fix)
|
Since I can't commit to your project, here are two fixes that I had to make in order to get the scraper to run:
In mirror_spider.py, line 50, there is no check whether the output path is valid. The URL can contain '?' characters, which causes the script to crash.
Here's my solution; it's just a quick fix and may require elaboration for other characters and for Linux/Windows compatibility:
```
url_parts = response.url.split('://')[1].split('/')
parent_directory = os.path.join(self.directory, *url_parts)
parent_directory = parent_directory.replace("?", "_q_") # normalize path
os.makedirs(parent_directory, exist_ok=True)
```
There is another bug, in your other project scrapy_wayback_machine (which is imported here), that causes a crash.
It's in __init__.py, line 91:
`cdx_url = self.cdx_url_template.format(url=pathname2url(request.url))`
At this point, request.url is something like http://website.com.
But pathname2url will look for a colon ':' and requires that anything before it is only one letter (since it is meant for regular paths, like C:\mypath).
When I removed the call to pathname2url it worked for me, but I don't know which other cases may break:
`cdx_url = self.cdx_url_template.format(url=request.url)`
|
closed
|
2018-04-29T09:07:56Z
|
2021-02-15T19:01:29Z
|
https://github.com/sangaline/wayback-machine-scraper/issues/4
|
[] |
Cerno-b
| 1
|
labmlai/annotated_deep_learning_paper_implementations
|
deep-learning
| 148
|
Guideline on setting "n_z" parameter of hyperLSTM
|
How to set "n_z" parameter of hyperLSTM? Are there any guidelines?
|
closed
|
2022-09-15T17:50:07Z
|
2022-12-18T15:20:42Z
|
https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/148
|
[
"question"
] |
Simply-Adi
| 1
|
viewflow/viewflow
|
django
| 458
|
Redirect to the End node details on a process finish
|
https://stackoverflow.com/questions/56349628/how-to-specify-what-django-view-to-display-when-a-process-ends
|
open
|
2024-07-18T07:33:09Z
|
2024-07-18T07:33:19Z
|
https://github.com/viewflow/viewflow/issues/458
|
[
"request/enhancement",
"dev/flow"
] |
kmmbvnr
| 0
|
apachecn/ailearning
|
python
| 624
|
M
|
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
|
closed
|
2021-08-06T08:02:03Z
|
2021-09-07T17:41:51Z
|
https://github.com/apachecn/ailearning/issues/624
|
[] |
king-005
| 0
|
ageitgey/face_recognition
|
machine-learning
| 1,147
|
Question: delivering the requirements of face_recognition on Windows to end users
|
Hey,
my name is Christopher Proß. Thank you for this great project. My question is the following:
I am currently writing an add-on for a screen reader called NVDA, a program which reads out screen elements to the blind on Windows systems. The program can be extended with add-ons written in Python, which are executed at runtime. I would like to use this project for an add-on, but my problem is delivering the requirements listed in #175 to the end users. Is there a way to precompile or prepackage all these requirements, or bundle them together in an installer?
An executable file is not my first choice, because I am writing an add-on and need this library for some features, so I need to import it. Calling it via subprocess seems inefficient and not very clean to me.
Maybe someone can help me here.
All the best.
|
open
|
2020-05-18T15:21:18Z
|
2021-05-02T12:22:23Z
|
https://github.com/ageitgey/face_recognition/issues/1147
|
[] |
christopherpross
| 2
|
nonebot/nonebot2
|
fastapi
| 2,832
|
Plugin: nonebot-plugin-autopush
|
### PyPI project name
nonebot-plugin-autopush
### Plugin import package name
nonebot_plugin_autopush
### Tags
[]
### Plugin configuration
```dotenv
AUTOPUSH_SERVERS=[]
```
|
closed
|
2024-07-21T06:50:04Z
|
2024-07-21T12:35:18Z
|
https://github.com/nonebot/nonebot2/issues/2832
|
[
"Plugin"
] |
This-is-XiaoDeng
| 1
|
Anjok07/ultimatevocalremovergui
|
pytorch
| 965
|
I have this error with my GPU
|
Last Error Received:
Process: Ensemble Mode
The application was unable to allocate enough GPU memory to use this model. Please close any GPU intensive applications and try again.
If the error persists, your GPU might not be supported.
Raw Error Details:
OutOfMemoryError: "CUDA out of memory. Tried to allocate 3.91 GiB (GPU 0; 12.00 GiB total capacity; 16.84 GiB already allocated; 0 bytes free; 16.86 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF"
Traceback Error: "
File "UVR.py", line 6638, in process_start
File "separate.py", line 652, in seperate
File "separate.py", line 771, in demix
File "torch\nn\modules\module.py", line 1190, in _call_impl
File "lib_v5\tfc_tdf_v3.py", line 226, in forward
File "torch\nn\modules\module.py", line 1190, in _call_impl
File "lib_v5\tfc_tdf_v3.py", line 143, in forward
File "torch\nn\modules\module.py", line 1190, in _call_impl
File "torch\nn\modules\container.py", line 204, in forward
File "torch\nn\modules\module.py", line 1190, in _call_impl
File "torch\nn\modules\activation.py", line 684, in forward
"
Error Time Stamp [2023-11-10 23:10:21]
Full Application Settings:
vr_model: Choose Model
aggression_setting: 5
window_size: 512
mdx_segment_size: 8000
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: v4 | htdemucs
segment: Default
overlap: 0.25
overlap_mdx: Default
overlap_mdx23: 8
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
is_mdx23_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: MDX23C-InstVoc HQ
chunks: Auto
margin: 44100
compensate: Auto
denoise_option: None
is_match_frequency_pitch: True
phase_option: Automatic
phase_shifts: None
is_save_align: False
is_match_silence: True
is_spec_match: False
is_mdx_c_seg_def: False
is_invert_spec: False
is_deverb_vocals: False
deverb_vocal_opt: Main Vocals Only
voc_split_save_opt: Lead Only
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_time_correction: True
is_gpu_conversion: True
is_primary_stem_only: False
is_secondary_stem_only: False
is_testing_audio: False
is_auto_update_model_params: True
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_use_opencl: False
is_wav_ensemble: False
is_create_model_folder: False
mp3_bit_set: 320k
semitone_shift: 0
save_format: WAV
wav_type_set: PCM_16
device_set: Default
help_hints_var: True
set_vocal_splitter: No Model Selected
is_set_vocal_splitter: False
is_save_inst_set_vocal_splitter: False
model_sample_mode: True
model_sample_mode_duration: 30
demucs_stems: Vocals
mdx_stems: Vocals
|
open
|
2023-11-10T15:12:01Z
|
2023-11-10T15:16:51Z
|
https://github.com/Anjok07/ultimatevocalremovergui/issues/965
|
[] |
cwj7
| 0
|
marimo-team/marimo
|
data-science
| 3,200
|
Vim keybindings taking precedence over text area keybindings
|
### Describe the bug
- Go to https://marimo.app/?slug=ugvgap
- Enable vim keybindings
- Try typing `k/j` in `Prompt`, won't be able to
### Environment
playground
### Code to reproduce
_No response_
|
closed
|
2024-12-17T18:45:47Z
|
2024-12-18T23:34:05Z
|
https://github.com/marimo-team/marimo/issues/3200
|
[
"bug"
] |
akshayka
| 0
|
TracecatHQ/tracecat
|
automation
| 133
|
Add contextual information for ASGI requests
|
# Motivation
FastAPI calls don't include any contextual information, and it would be nice to include this for each request.
# Note
- Make sure not to include any PII (like IP addresses, etc.)
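A minimal sketch of one way to do this (hypothetical logger name and fields; standard FastAPI/Starlette middleware, with the client IP deliberately omitted per the note above):
```python
import logging
import uuid

from fastapi import FastAPI, Request

app = FastAPI()
logger = logging.getLogger("tracecat.requests")

@app.middleware("http")
async def add_request_context(request: Request, call_next):
    # Attach a correlation id plus basic route info; no client IP (PII).
    request_id = str(uuid.uuid4())
    response = await call_next(request)
    logger.info(
        "request handled",
        extra={
            "request_id": request_id,
            "method": request.method,
            "path": request.url.path,
            "status_code": response.status_code,
        },
    )
    return response
```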
|
closed
|
2024-05-06T00:50:17Z
|
2024-05-08T18:26:59Z
|
https://github.com/TracecatHQ/tracecat/issues/133
|
[
"enhancement",
"engine"
] |
daryllimyt
| 0
|
Evil0ctal/Douyin_TikTok_Download_API
|
api
| 15
|
html = requests.get(url=original_url, headers=tiktok_headers) raises "Object of type SSLError is not JSON serializable"
|
This is the error that scraper.py raises when I call the API with http://127.0.0.1:2333/api?url=https://www.tiktok.com/@gabyluvsskz_/video/7075120105608203563?is_from_webapp=1&sender_device=pc. Is this a problem with how I'm calling it, or something else?
|
closed
|
2022-04-18T14:03:05Z
|
2022-04-23T22:06:41Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/15
|
[] |
qqiuqingx
| 6
|
frol/flask-restplus-server-example
|
rest-api
| 99
|
ValueError: too many values to unpack (expected 2) when add_columns in resources
|
I have one model which is related to another model by a one-to-one relation; here are my models:
```
class User(db.Model):
    """
    User database model.
    """
    __tablename__ = 'simpati_mst_user'

    user_id = db.Column(db.Integer, primary_key=True)  # pylint: disable=invalid-name
    nip = db.Column(db.String(length=255), unique=True, nullable=False)
    user_name = db.Column(db.String(length=255), unique=True, nullable=False)
    user_pass = db.Column(
        db.String(length=255)
    )

    def validate_password(self, password):
        return bcrypt.verify(password, self.password)

    def __repr__(self):
        return (
            "<{class_name}("
            "user_id={self.user_id}, "
            "user_name=\"{self.user_name}\", "
            "user_pass=\"{self.user_pass}\", "
            ")>".format(
                class_name=self.__class__.__name__,
                self=self
            )
        )


class Employe(db.Model):
    """
    Employes Database Model
    """
    __tablename__ = 'v_pegawai'

    nip = db.Column(db.String(length=255), primary_key=True, unique=True, nullable=False)
    nama = db.Column(db.String(length=255))

    def __repr__(self):
        return (
            "<{class_name}("
            "nip={self.nip}, "
            "nama=\"{self.nama}\", "
            ")>".format(
                class_name=self.__class__.__name__,
                self=self
            )
        )
```
In the get method of my resource.py, this is my query:
```
sql = User.query.join(Employe, User.nip==Employe.nip).add_columns(Employe.nama, Employe.nip).first()
log.debug(sql)
return sql
```
this is the value returned by the sql variable:
` (<User(user_id=88, user_name="197810212009021002", user_pass="69584243de0848597be32548a55216cd", )>, 'WINDU AGUNG ASMORO, S.H, M.H.', '197810212009021002')`
this is my schema
```
class DetailEmploye(Schema):
    nama = base_fields.String()

    class Meta:
        field = ('nip',)
```
but I got this error:
```
Traceback (most recent call last):
File "c:\users\user\appdata\local\programs\python\python36-32\lib\site-packages\flask\app.py", line 1612, in full_dispatch_request
rv = self.dispatch_request()
File "c:\users\user\appdata\local\programs\python\python36-32\lib\site-packages\flask\app.py", line 1598, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "c:\users\user\appdata\local\programs\python\python36-32\lib\site-packages\flask_restplus\api.py", line 313, in wrapper
resp = resource(*args, **kwargs)
File "c:\users\user\appdata\local\programs\python\python36-32\lib\site-packages\flask\views.py", line 84, in view
return self.dispatch_request(*args, **kwargs)
File "c:\users\user\appdata\local\programs\python\python36-32\lib\site-packages\flask_restplus\resource.py", line 44, in dispatch_request
resp = meth(*args, **kwargs)
File "F:\Projects\Tugas Akhir\restful-api\app\extensions\auth\oauth2.py", line 156, in wrapper
return origin_decorated_func(*args, **kwargs)
File "c:\users\user\appdata\local\programs\python\python36-32\lib\site-packages\flask_oauthlib\provider\oauth2.py", line 565, in decorated
return f(*args, **kwargs)
File "c:\users\user\appdata\local\programs\python\python36-32\lib\site-packages\permission\permission.py", line 23, in decorator
return func(*args, **kwargs)
File "F:\Projects\Tugas Akhir\restful-api\flask_restplus_patched\namespace.py", line 148, in dump_wrapper
response, _code = response
ValueError: too many values to unpack (expected 2)
```
This happens when I try to add columns or custom alias fields to the query.
|
closed
|
2018-03-18T06:57:39Z
|
2018-03-18T07:16:54Z
|
https://github.com/frol/flask-restplus-server-example/issues/99
|
[] |
tarikhagustia
| 1
|
blacklanternsecurity/bbot
|
automation
| 1,616
|
Engine Occasionally Hangs
|
```
[DBUG] dnsresolve.finished: False
[DBUG] running: True
[DBUG] tasks:
[DBUG] - dnsresolve.handle_event((DNS_NAME("us1.dev.emm.dell.com", module=subdomaincenter, tags={'in-scope', 'subdomain'}), {'abort_if': <bound method subdomain_enum.abort_if of
<bbot.modules.subdomaincenter.subdomaincenter object at 0x721cebfd6540>>})) running for 10 hours, 36 minutes, 39 seconds:
[DBUG] - dnsresolve.handle_event((DNS_NAME("securemail.vsu.edu", module=sslcert, tags={'distance-2', 'subdomain'}), {})) running for 9 hours, 56 minutes, 54 seconds:
[DBUG] - dnsresolve.handle_event((DNS_NAME("dell.cz", module=speculate, tags={'domain', 'distance-2'}), {})) running for 9 hours, 49 minutes, 25 seconds:
```
```
[DBUG] EngineClient DNSHelper: Timeout waiting for response for waiting for return value from is_wildcard((), {'query': 'us1.dev.emm.dell.com', 'ips': None, 'rdtype': None}), retrying...
[DBUG] EngineClient DNSHelper: Timeout waiting for response for waiting for new iteration from resolve_raw_batch((), {'queries': [('dell.cz', 'A'), ('dell.cz', 'AAAA'), ('dell.cz', 'SRV'), ('dell.cz', 'MX'), ('dell.cz', 'NS'), ('dell.cz', 'SOA'), ('dell.cz', 'CNAME'), ('dell.cz', 'TXT')]}), retrying...
[DBUG] EngineClient DNSHelper: Timeout waiting for response for waiting for new iteration from resolve_raw_batch((), {'queries': [('securemail.vsu.edu', 'A'), ('securemail.vsu.edu', 'AAAA'), ('securemail.vsu.edu', 'SRV'), ('securemail.vsu.edu', 'MX'), ('securemail.vsu.edu', 'NS'), ('securemail.vsu.edu', 'SOA'), ('securemail.vsu.edu', 'CNAME'), ('securemail.vsu.edu', 'TXT')]}), retrying...
```
|
closed
|
2024-08-02T14:28:02Z
|
2024-09-30T00:41:59Z
|
https://github.com/blacklanternsecurity/bbot/issues/1616
|
[
"bug",
"high-priority"
] |
TheTechromancer
| 21
|
streamlit/streamlit
|
machine-learning
| 9,977
|
Add a min_selections parameter to st.multiselect
|
### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
I wanted to do a st.multiselect where users could select up to (and only) 2. I checked the [API Reference](https://docs.streamlit.io/develop/api-reference/widgets/st.multiselect) from docs.streamlit.io but I haven't found such a thing.
### Why?
I am doing a case study project using Streamlit. I wanted to use a st.multiselect, but I need to limit the minimum number of selections, or my exploratory graph won't work. There's no such argument as `min_selections` in st.multiselect, I'm pretty sure.
### How?
I want a min_selections argument for st.multiselect
It should be easy, something like:
```python
if len(selections) < min_selection:
raise ValueError("The number of selections is smaller than the minimum allowed selections")
```
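Until such a parameter exists, a possible user-side sketch of the same check (hypothetical options and threshold):
```python
import streamlit as st

selections = st.multiselect("Pick at least two options", ["a", "b", "c", "d"])
MIN_SELECTIONS = 2  # hypothetical threshold

if len(selections) < MIN_SELECTIONS:
    st.warning(f"Please select at least {MIN_SELECTIONS} options.")
    st.stop()  # don't render the exploratory graph with too few selections
```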
Thanks!
### Additional Context
_No response_
|
open
|
2024-12-09T04:17:42Z
|
2024-12-20T14:46:44Z
|
https://github.com/streamlit/streamlit/issues/9977
|
[
"type:enhancement",
"feature:st.multiselect"
] |
Unknownuserfrommars
| 3
|
psf/black
|
python
| 3,665
|
string_processing: duplicates comments when reformatting a multi-line call with comments in the middle
|
**Describe the bug**
The following code
```python
a = b.c(
d, # comment
("e")
)
```
is formatted to:
```python
a = b.c(d, "e") # comment # comment
```
with the comment duplicated.
This only happens in preview style, and the parentheses in `("e")` are needed to reproduce, so this is likely related to the removal of those parentheses.
See [playground](https://black.vercel.app/?version=stable&state=_Td6WFoAAATm1rRGAgAhARYAAAB0L-Wj4ACPAGRdAD2IimZxl1N_WlOfrjrwLQ97dt3yCueH1hmYrl5aQFIsytphjn9cFnCpEP7801EqsEDiwP6AZK318iW2MayBXp_YZhdVQOu6-K8Lew01US-bdWed2tQxHmSNAOf0Sm1_fbC98gAAcEw4wdLZ5PYAAYABkAEAACZiB1yxxGf7AgAAAAAEWVo=).
|
open
|
2023-04-27T21:42:41Z
|
2024-01-30T18:27:48Z
|
https://github.com/psf/black/issues/3665
|
[
"T: bug",
"F: strings",
"C: preview style"
] |
yilei
| 1
|
FactoryBoy/factory_boy
|
django
| 704
|
Latest Doc version is always 0.1.0
|
#### Description
The latest doc version is 0.1.0 : see here https://factoryboy.readthedocs.io/_/downloads/en/latest/pdf/
This is probably a consequence of https://github.com/FactoryBoy/factory_boy/commit/93bbd0317092c8e804d039ed60413f4b75fd7e77
I guess we only need to fix this piece of code : https://github.com/FactoryBoy/factory_boy/blob/79161b0409842b5fc8a43d5844669f233403e14c/docs/conf.py#L63-L73
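A possible direction (a sketch; it assumes docs/conf.py can read the version the package itself exposes at runtime, rather than a hardcoded fallback):
```python
# docs/conf.py (sketch)
import factory

release = factory.__version__                 # full version string
version = ".".join(release.split(".")[:2])    # short X.Y version for Sphinx
```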
|
closed
|
2020-02-20T09:51:27Z
|
2020-03-09T17:39:27Z
|
https://github.com/FactoryBoy/factory_boy/issues/704
|
[
"Bug",
"Doc"
] |
tonial
| 2
|
custom-components/pyscript
|
jupyter
| 564
|
not implemented ast ast_generatorexp
|
I'm guessing someone hasn't gotten around to it yet; just parking this here for myself, or in case someone else has the time 😄
```
majo-hast-1 | 2024-01-05 15:23:09.477 ERROR (MainThread) [custom_components.pyscript.scripts.energy_price.fetch_prices] Exception in <scripts.energy_price.fetch_prices> line 49:
majo-hast-1 | hour['is_avarage'] = not any(value for key, value in hour.items() if key.endswith(('lowest', 'highest')))
majo-hast-1 | ^
majo-hast-1 | NotImplementedError: scripts.energy_price.fetch_prices: not implemented ast ast_generatorexp
```
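If list comprehensions are implemented (the error names only `ast_generatorexp`), a possible workaround is to materialize the generator into a list (a sketch reusing the key name from the log):
```python
# Hypothetical workaround: replace the generator expression with a list
# comprehension, which avoids the unimplemented ast_generatorexp node.
hour['is_avarage'] = not any(
    [value for key, value in hour.items() if key.endswith(('lowest', 'highest'))]
)
```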
|
open
|
2024-01-05T14:27:20Z
|
2024-01-05T17:28:55Z
|
https://github.com/custom-components/pyscript/issues/564
|
[] |
jkaberg
| 1
|
supabase/supabase-py
|
flask
| 1
|
create project base structure
|
Use [postgrest-py](https://github.com/supabase/postgrest-py) and [supabase-js](https://github.com/supabase/supabase-js) as reference implementations
|
closed
|
2020-08-28T06:38:31Z
|
2021-04-01T18:44:49Z
|
https://github.com/supabase/supabase-py/issues/1
|
[
"help wanted"
] |
awalias
| 0
|
SYSTRAN/faster-whisper
|
deep-learning
| 129
|
Open source speech translate stacks
|
Nowadays I am using open-source models to build a speech-to-speech translator. Because I only have a 1070 Ti, I have to use CTranslate2 models. I use faster-whisper (really amazingly fast) as the ASR, nllb-200-3.3b-ct2 as the text translator, and gTTS for the TTS. I found NLLB-200 is not very precise, so I had to switch to the DeepL API. For the TTS, I tried Coqui TTS, but their models are scattered and not easy to use, so I use gTTS directly. This is the stack I use now: faster-whisper + DeepL API + gTTS.
Can anyone give me some suggestions for this stack? Thank you very much.
|
closed
|
2023-04-09T11:11:25Z
|
2023-04-09T13:52:09Z
|
https://github.com/SYSTRAN/faster-whisper/issues/129
|
[] |
ILG2021
| 0
|
WZMIAOMIAO/deep-learning-for-image-processing
|
deep-learning
| 599
|
Environment configuration problem
|
**System information**
* Have I written custom code: pip install -r .\requirements.txt
* OS Platform(e.g., window10 or Linux Ubuntu 16.04): window10
* Python version: Python 3.7.13
**Describe the current behavior**
Hello Wz,
While setting up the environment for yolov3spp, I ran pip install -r .\requirements.txt and got an error that no torchvision version 0.8.2 could be found. How can I solve this problem?
**Error info / logs**
ERROR: Ignored the following versions that require a different python version: 1.22.0 Requires-Python >=3.8; 1.22.0rc1 Requires-Python >=3.8; 1.22.0rc2 Requires-Python >=3.8; 1.22.0rc3 Requires-Python >=3.8; 1.22.1 Requires-Python >=3.8; 1.22.2 Requires-Python >=3.8; 1.22.3 Requires-Python >=3.8; 1.22.4 Requires-Python >=3.8; 1.23.0 Requires-Python >=3.8; 1.23.0rc1 Requires-Python >=3.8; 1.23.0rc2 Requires-Python >=3.8; 1.23.0rc3 Requires-Python >=3.8; 1.23.1 Requires-Python >=3.8
ERROR: Could not find a version that satisfies the requirement torchvision==0.8.2 (from versions: 0.1.6, 0.1.7, 0.1.8, 0.1.9, 0.2.0, 0.2.1, 0.2.2, 0.2.2.post2, 0.2.2.post3, 0.3.0, 0.4.1, 0.5.0, 0.9.0, 0.9.1, 0.10.0, 0.10.1, 0.11.0, 0.11.1, 0.11.2, 0.11.3, 0.12.0, 0.13.0)
ERROR: No matching distribution found for torchvision==0.8.2
|
closed
|
2022-07-21T01:12:33Z
|
2022-07-22T08:43:40Z
|
https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/599
|
[] |
asapple
| 4
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
deep-learning
| 1,297
|
Size of ONNX
|
Hi guys,
I'm just curious about the final size of exported onnx file. Regardless of number of input channels, the size is always 212 MB. Is this expected?
|
open
|
2021-07-12T08:12:16Z
|
2023-07-18T12:58:29Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1297
|
[] |
synthetica3d
| 1
|
ageitgey/face_recognition
|
machine-learning
| 1,578
|
File "C:\Users\hp\PycharmProjects\face-reco-attendence\.venv\Lib\site-packages\face_recognition\api.py", line 105, in _raw_face_locations return face_detector(img, number_of_times_to_upsample) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RuntimeError: Unsupported image type, must be 8bit gray or RGB image.
|
* face_recognition version:
* Python version:
* Operating System:
### Description
### What I Did
```python
import os
import cv2
import face_recognition
# Function to load images from a folder and encode them
def load_and_encode_images(folderPath):
PathList = os.listdir(folderPath)
print("Found images:", PathList)
imgList = []
studentIds = []
for path in PathList:
img_path = os.path.join(folderPath, path)
img = cv2.imread(img_path)
if img is None:
print(f"Error loading image: {img_path}")
continue
# Convert image to RGB format
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
imgList.append(img_rgb)
studentIds.append(os.path.splitext(path)[0])
encodeList = []
for img in imgList:
try:
# Ensure image is 8-bit RGB or grayscale
if img.shape[2] != 3:
print(f"Skipping image {img}: not 8-bit RGB")
continue
encode = face_recognition.face_encodings(img)[0]
encodeList.append(encode)
except IndexError:
print(f"No face found in image: {img}")
# You can choose to skip this image or handle the error as needed
return encodeList, studentIds
# Example usage
folderPath = 'Images'
print("Encoding started .......")
encodeListKnown, studentIds = load_and_encode_images(folderPath)
print("Encoding complete")
# Print the results
print("Encoded faces:", encodeListKnown)
print("Student IDs:", studentIds)
```
|
open
|
2024-07-30T14:09:26Z
|
2024-08-26T07:40:55Z
|
https://github.com/ageitgey/face_recognition/issues/1578
|
[] |
Ananya221203
| 1
|
gee-community/geemap
|
streamlit
| 1,812
|
Bug in displaying images in geemap
|
Hi,
Many thanks for solving the other bug with the drawing feature. I have now come across another bug which has appeared from the recent updates.
set up the map
```
geom = ee.Geometry.Point([-2.98333, 58.93078])
Map = geemap.Map()
Map.centerObject(geom, 13)
img_params = {'bands':'VV', 'min':-16, 'max':-6}
image = ee.Image('COPERNICUS/S1_GRD/S1A_IW_GRDH_1SDV_20231010T063002_20231010T063027_050699_061BCD_88FE')
Map.addLayer(image, img_params, 'Satellite Image',True)
Map
```
Now if you remove the layer and then add the same image using the code below, it throws the error below. However, if you run the last line `Map.addLayer(image, img_params, 'Satellite Image',True)` again, it adds the image to the map. I think there may be a bug in the remove_layer function.
```
for layer in Map.layers:
if layer.name == 'Satellite Image':
Map.remove_layer(layer)
Map.addLayer(image, img_params, 'Satellite Image',True)
```
Error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-109-2d2be8686fdc>](https://localhost:8080/#) in <cell line: 4>()
2 if layer.name == 'Satellite Image':
3 Map.remove_layer(layer)
----> 4 Map.addLayer(image, img_params, 'Satellite Image',True)
5 Map.addLayer(image, img_params, 'Satellite Image',True)
6 frames
[/usr/local/lib/python3.10/dist-packages/geemap/geemap.py](https://localhost:8080/#) in add_ee_layer(self, ee_object, vis_params, name, shown, opacity)
337 )
338
--> 339 super().add_layer(ee_object, vis_params, name, shown, opacity)
340
341 if isinstance(ee_object, (ee.Image, ee.ImageCollection)):
[/usr/local/lib/python3.10/dist-packages/geemap/core.py](https://localhost:8080/#) in add_layer(self, ee_object, vis_params, name, shown, opacity)
754
755 # Remove the layer if it already exists.
--> 756 self.remove(name)
757
758 self.ee_layers[name] = {
[/usr/local/lib/python3.10/dist-packages/geemap/core.py](https://localhost:8080/#) in remove(self, widget)
719 tile_layer = ee_layer.get("ee_layer", None)
720 if tile_layer is not None:
--> 721 self.remove_layer(tile_layer)
722 if legend := ee_layer.get("legend", None):
723 self.remove(legend)
[/usr/local/lib/python3.10/dist-packages/ipyleaflet/leaflet.py](https://localhost:8080/#) in remove_layer(self, rm_layer)
2539 warnings.warn("remove_layer is deprecated, use remove instead", DeprecationWarning)
2540
-> 2541 self.remove(rm_layer)
2542
2543 def substitute_layer(self, old, new):
[/usr/local/lib/python3.10/dist-packages/geemap/core.py](https://localhost:8080/#) in remove(self, widget)
726 return
727
--> 728 super().remove(widget)
729 if isinstance(widget, ipywidgets.Widget):
730 widget.close()
[/usr/local/lib/python3.10/dist-packages/ipyleaflet/leaflet.py](https://localhost:8080/#) in remove(self, item)
2682 """
2683 if isinstance(item, Layer):
-> 2684 if item.model_id not in self._layer_ids:
2685 raise LayerException('layer not on map: %r' % item)
2686 self.layers = tuple([layer for layer in self.layers if layer.model_id != item.model_id])
[/usr/local/lib/python3.10/dist-packages/ipywidgets/widgets/widget.py](https://localhost:8080/#) in model_id(self)
518
519 If a Comm doesn't exist yet, a Comm will be created automagically."""
--> 520 return self.comm.comm_id
521
522 #-------------------------------------------------------------------------
AttributeError: 'NoneType' object has no attribute 'comm_id'
```
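A possible interim workaround (an untested sketch on my part) is to guard the removal, since the failure seems to come from a layer widget whose comm has already been closed:
```python
# untested sketch: skip layers that ipyleaflet has already closed
for layer in list(Map.layers):
    if layer.name == 'Satellite Image':
        try:
            Map.remove_layer(layer)
        except AttributeError:
            pass  # widget comm already gone; nothing left to remove
Map.addLayer(image, img_params, 'Satellite Image', True)
```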
|
closed
|
2023-11-02T13:56:00Z
|
2023-11-02T14:46:10Z
|
https://github.com/gee-community/geemap/issues/1812
|
[
"bug"
] |
haydenclose
| 1
|
piskvorky/gensim
|
machine-learning
| 3,158
|
doc2vec's infer_vector has `epochs` and `steps` input parameters - `steps` not in use
|
Referring to doc2vec.py, the `infer_vector` function seems to use `epochs` for the number of iterations, while `steps` is not used.
However, the `similarity_unseen_docs` function passes `steps` when calling `infer_vector`.

|
closed
|
2021-05-28T03:40:01Z
|
2021-06-29T00:55:42Z
|
https://github.com/piskvorky/gensim/issues/3158
|
[
"bug",
"difficulty easy",
"good first issue"
] |
gohjunlin
| 1
|
pydantic/pydantic-core
|
pydantic
| 963
|
ValidationError.from_exception_data doesn't work with error type: 'ip_v4_address'"
|
**Describe the bug**
When constructing validation errors using `ValidationError.from_exception_data`, this fails when the error type is `ip_v4_address`. It raises:
```
KeyError: "Invalid error type: 'ip_v4_address'
```
**To Reproduce**
Run the code below. It uses a custom model validator that manually constructs the validation error using `ValidationError.from_exception_data`
```python3
from pydantic import BaseModel, Field, model_validator, ValidationError
from ipaddress import IPv4Address
from pydantic_core import InitErrorDetails
class OutputStream(BaseModel):
destination_ip: IPv4Address = Field(
...,
)
class StreamingDevice(BaseModel):
output_stream: OutputStream = Field(
...,
)
@model_validator(mode="before")
@classmethod
def validate_model(cls, data: dict) -> dict:
validation_errors: list[InitErrorDetails] = []
if data.get("output_stream") is not None:
try:
OutputStream.model_validate(data["output_stream"])
except ValidationError as e:
validation_errors.extend(e.errors())
if validation_errors:
raise ValidationError.from_exception_data(title=cls.__name__, line_errors=validation_errors)
return data
streaming_device_payload = {"output_stream": {"destination_ip": "123"}}
streaming_device = StreamingDevice.model_validate(streaming_device_payload)
```
**Expected behavior**
Pydantic is capable of generating 'ip_v4_address' errors when using model validation without the custom model validator. Therefore, we should be able to manually construct the validation error using the error dictionary that contains the ip_v4_address error type as input into `ValidationError.from_exception_data`.
If you were to remove the custom model validator in the example above, Pydantic would successfully throw that error type:
```
pydantic_core._pydantic_core.ValidationError: 1 validation error for StreamingDevice
output_stream.destination_ip
Input is not a valid IPv4 address [type=ip_v4_address, input_value='123', input_type=str
```
Haven't tested it, but should ip_v4_address be in this error list here?
https://github.com/pydantic/pydantic-core/blob/main/python/pydantic_core/core_schema.py#L3900
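A possible interim workaround (an untested sketch; `PydanticCustomError` is pydantic_core API, but whether this is the intended pattern here is my assumption) is to re-raise the offending entries as custom errors so `from_exception_data` never sees the unknown type:
```python
from pydantic_core import PydanticCustomError

# untested sketch: surface the unsupported error type as a custom error
for err in validation_errors:
    if err["type"] == "ip_v4_address":
        raise PydanticCustomError(
            "ip_v4_address", "Input is not a valid IPv4 address"
        )
```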
**Version:**
- OS: Ubuntu 20.04.4 LTS
- Python version: 3.9.16
- pydantic version: 2.3.0
|
open
|
2023-09-15T13:57:29Z
|
2024-08-28T17:56:35Z
|
https://github.com/pydantic/pydantic-core/issues/963
|
[
"documentation"
] |
mikedavidson-evertz
| 9
|
jupyter-incubator/sparkmagic
|
jupyter
| 619
|
connect to pre-created livy session?
|
Hi Folks,
Here we have a use case like following:
1. John posts a request of provisioning a spark cluster to an in-house REST API
2. the REST API actually writes this into a database
3. another process constantly polls the database and finds the new request
4. it does some administrative work with this request(for example quota check), and then start a livy session in the mesos cluster with the requested resources(assume it passes the check)
5. John starts a jupyter notebook locally, and wants to connect to this livy session so he could use the cluster nodes to run his spark code
I searched around and didn't see any documentation about this. The closest I saw is this:
https://github.com/jupyter-incubator/sparkmagic/issues/286
The comments there were valid and I understand the concerns; still, this use case seems valid too. With the admin tasks in the middle, it makes sense to decouple creation and usage. I am not sure if, in the sparkmagic config, I could directly point it to a session instead of a Livy URL?
I also found this link:
https://www.qubole.com/blog/connecting-jupyter-remote-qubole-spark-cluster-aws-ms-azure-oracle-bmc/
where they probably use multiple Livy servers by using different URLs:
```
{
    "kernel_python_credentials" : {
        "username": "",
        "password": "",
        "url": "https://api.qubole.com/livy-spark-<cluster_id>"
    },
    "kernel_scala_credentials" : {
        "username": "",
        "password": "",
        "url": "https://api.qubole.com/livy-spark-<cluster_id>"
    }
}
```
Finally, I saw that sparkmagic can use a name to access a session, but the session has to be created with sparkmagic. I may dig further to see how sparkmagic manages Livy sessions and whether it can handle sessions not created by itself. In the meantime, if someone could share some information, I'd really appreciate it. :-)
|
open
|
2020-01-20T20:24:21Z
|
2020-01-20T20:24:21Z
|
https://github.com/jupyter-incubator/sparkmagic/issues/619
|
[] |
rui-wang-codebase
| 0
|
horovod/horovod
|
deep-learning
| 3,711
|
TF DataService example fails with tf 2.11
|
Starting with Tensorflow 2.11, the DataService example fails for Gloo and MPI (but not Spark!) on GPU using CUDA (not reproducible locally with GPU but without CUDA). The identical MNIST example without DataService passes.
|
open
|
2022-09-22T06:02:31Z
|
2022-09-22T06:06:26Z
|
https://github.com/horovod/horovod/issues/3711
|
[
"bug"
] |
EnricoMi
| 0
|
ExpDev07/coronavirus-tracker-api
|
fastapi
| 163
|
Consider ulklc/covid19-timeseries as optional source
|
Because my app relies on recovered data to estimate "active" cases, I'm going to have to switch from this API (which is lovely, btw) to this data, unless you want to add it as an optional source here?
https://github.com/ulklc/covid19-timeseries
|
open
|
2020-03-24T13:02:22Z
|
2020-03-24T16:40:58Z
|
https://github.com/ExpDev07/coronavirus-tracker-api/issues/163
|
[
"enhancement"
] |
imjoshellis
| 1
|
open-mmlab/mmdetection
|
pytorch
| 11,720
|
[mm grounding dino]
|
Are there any plans to support mm grounding dino **using image as prompt**?
|
open
|
2024-05-16T01:01:07Z
|
2024-05-16T01:01:23Z
|
https://github.com/open-mmlab/mmdetection/issues/11720
|
[] |
SHX9610
| 0
|
pytorch/pytorch
|
machine-learning
| 148,951
|
DISABLED test_wrap_kwarg_default_dynamic_shapes (__main__.DynamicShapesHigherOrderOpTests)
|
Platforms: linux, rocm, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_wrap_kwarg_default_dynamic_shapes&suite=DynamicShapesHigherOrderOpTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38534736647).
Over the past 3 hours, it has been determined flaky in 17 workflow(s) with 34 failures and 17 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_wrap_kwarg_default_dynamic_shapes`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/dynamo/test_higher_order_ops.py", line 1667, in test_wrap_kwarg_default
self._test_wrap_simple(f, default_args_generator((x, y)), arg_count)
File "/var/lib/jenkins/pytorch/test/dynamo/test_higher_order_ops.py", line 191, in _test_wrap_simple
self.assertEqual(len(wrap_node.args), expected_num_wrap_args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4091, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 4 but got 7.
Absolute difference: 3
Relative difference: 0.75
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/dynamo/test_dynamic_shapes.py DynamicShapesHigherOrderOpTests.test_wrap_kwarg_default_dynamic_shapes
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_dynamic_shapes.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
|
open
|
2025-03-11T06:44:50Z
|
2025-03-12T09:42:35Z
|
https://github.com/pytorch/pytorch/issues/148951
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] |
pytorch-bot[bot]
| 4
|
ranaroussi/yfinance
|
pandas
| 2,245
|
`returnOnAssets` might be returning faulty data
|
### Describe bug
I am using yfinance 0.2.51, and `returnOnAssets` might be returning faulty data, or the ROA is calculated with a method unknown to me.
I noticed this because my script was calculating the ROA itself using the yfinance income statement and balance sheet data, and I was comparing it to the yfinance values.
If we take the TSM ticker for example:
Annual Net Income 2024 = 1,173,268,000
Annual Total Assets 2024 = 6,691,938,000
ROA = Net Income / Total Assets = 0.1753 = 17.53 %
However, if you use the yfinance call `stock_info.get("returnOnAssets", None)` directly, it returns an ROA of 0.12409.
I am also encountering this issue for other tickers:
2025-01-27 17:32:56,955 - INFO - ROA - yFinance (Ticker: ASML): 12.71 %
2025-01-27 17:32:56,956 - INFO - ROA - Calc (Ticker: ASML): 19.62 %
2025-01-27 17:32:59,345 - INFO - ROA - yFinance (Ticker: AWI): 9.84 %
2025-01-27 17:32:59,346 - INFO - ROA - Calc (Ticker: AWI): 13.38 %
2025-01-27 17:33:00,438 - INFO - ROA - yFinance (Ticker: RHI): 5.08 %
2025-01-27 17:33:00,439 - INFO - ROA - Calc (Ticker: RHI): 13.66 %
What is strange is that the difference between the yFinance and my calculated values is quite big.
### Simple code that reproduces your problem
```python
# Fetch the stock data from Yahoo Finance
stock = yf.Ticker(ticker)  # Ticker e.g. TSM
stock_info = stock.info
# Fetch ROA from the Yahoo Finance data
roa_yfinance = stock_info.get("returnOnAssets", None)  # Returned ROA is 0.12409
```
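For reference, this is roughly how my manual calculation works (a sketch; the row labels "Net Income" and "Total Assets" are what I see in the statements returned by yfinance):
```python
tsm = yf.Ticker("TSM")
net_income = tsm.income_stmt.loc["Net Income"].iloc[0]        # latest annual value
total_assets = tsm.balance_sheet.loc["Total Assets"].iloc[0]
print(net_income / total_assets)  # ~0.1753, vs. 0.12409 from stock_info
```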
### Debug log
DEBUG get_raw_json(): https://query2.finance.yahoo.com/v10/finance/quoteSummary/TSM
DEBUG Entering get()
DEBUG Entering _make_request()
DEBUG url=https://query2.finance.yahoo.com/v10/finance/quoteSummary/TSM
DEBUG params={'modules': 'financialData,quoteType,defaultKeyStatistics,assetProfile,summaryDetail', 'corsDomain': 'finance.yahoo.com', 'formatted': 'false', 'symbol': 'TSM'}
DEBUG Entering _get_cookie_and_crumb()
DEBUG cookie_mode = 'basic'
DEBUG Entering _get_cookie_and_crumb_basic()
DEBUG loaded persistent cookie
DEBUG reusing cookie
DEBUG crumb = 'c3P895az14W'
DEBUG Exiting _get_cookie_and_crumb_basic()
DEBUG Exiting _get_cookie_and_crumb()
DEBUG response code=200
DEBUG Exiting _make_request()
DEBUG Exiting get()
DEBUG Entering get()
DEBUG Entering _make_request()
DEBUG url=https://query1.finance.yahoo.com/ws/fundamentals-timeseries/v1/finance/timeseries/TSM?symbol=TSM&type=trailingPegRatio&period1=1722211200&period2=1738022400
DEBUG params=None
DEBUG Entering _get_cookie_and_crumb()
DEBUG cookie_mode = 'basic'
DEBUG Entering _get_cookie_and_crumb_basic()
DEBUG reusing cookie
DEBUG reusing crumb
DEBUG Exiting _get_cookie_and_crumb_basic()
DEBUG Exiting _get_cookie_and_crumb()
DEBUG response code=200
DEBUG Exiting _make_request()
DEBUG Exiting get()
2025-01-27 17:37:57,605 - INFO - ---------------------- TSM ----------------------
DEBUG Entering _fetch_time_series()
DEBUG Entering get()
DEBUG Entering _make_request()
DEBUG url=https://query2.finance.yahoo.com/ws/fundamentals-timeseries/v1/finance/timeseries/TSM?symbol=TSM&type=annualTaxEffectOfUnusualItems,annualTaxRateForCalcs,annualNormalizedEBITDA,annualNormalizedDiluted...
DEBUG params=None
DEBUG Entering _get_cookie_and_crumb()
DEBUG cookie_mode = 'basic'
DEBUG Entering _get_cookie_and_crumb_basic()
DEBUG reusing cookie
DEBUG reusing crumb
DEBUG Exiting _get_cookie_and_crumb_basic()
DEBUG Exiting _get_cookie_and_crumb()
DEBUG response code=200
DEBUG Exiting _make_request()
DEBUG Exiting get()
DEBUG Exiting _fetch_time_series()
2025-01-27 17:37:57,772 - INFO - Fetched Earnings data (EPS) for 4 years period (Ticker: TSM).
2025-01-27 17:37:57,772 - INFO - EPS (latest) (Ticker: TSM): 226.21
2025-01-27 17:37:57,773 - INFO - Fetched Earnings data (Net Income) for 4 years period (Ticker: TSM).
2025-01-27 17:37:57,773 - INFO - Calculating Earnings Variability (EVAR) over 3 year period (Ticker: TSM).
2025-01-27 17:37:57,774 - INFO - EVAR 4Y (EPS) (Ticker: TSM): 44.62%
2025-01-27 17:37:57,774 - INFO - Calculating Earnings Variability (EVAR) over 3 year period (Ticker: TSM).
2025-01-27 17:37:57,774 - INFO - EVAR 4Y (Net Income) (Ticker: TSM): 42.35%
2025-01-27 17:37:57,774 - INFO - Calculating Compound Annual Growth Rate (CAGR) (Ticker: TSM).
2025-01-27 17:37:57,775 - INFO - CAGR 4Y (EPS) (Ticker: TSM): 25.28 %
2025-01-27 17:37:57,775 - INFO - Calculating Compound Annual Growth Rate (CAGR) (Ticker: TSM).
2025-01-27 17:37:57,775 - INFO - CAGR 4Y (Net Income) (Ticker: TSM): 25.58 %
2025-01-27 17:37:57,775 - INFO - Dividend Yield (Ticker: TSM): 1.34 %
2025-01-27 17:37:57,775 - INFO - ROE - yFinance (Ticker: TSM): 28.03 %
DEBUG Entering _fetch_time_series()
DEBUG Entering get()
DEBUG Entering _make_request()
DEBUG url=https://query2.finance.yahoo.com/ws/fundamentals-timeseries/v1/finance/timeseries/TSM?symbol=TSM&type=annualTreasurySharesNumber,annualPreferredSharesNumber,annualOrdinarySharesNumber,annualShareIssue...
DEBUG params=None
DEBUG Entering _get_cookie_and_crumb()
DEBUG cookie_mode = 'basic'
DEBUG Entering _get_cookie_and_crumb_basic()
DEBUG reusing cookie
DEBUG reusing crumb
DEBUG Exiting _get_cookie_and_crumb_basic()
DEBUG Exiting _get_cookie_and_crumb()
DEBUG response code=200
DEBUG Exiting _make_request()
DEBUG Exiting get()
DEBUG Exiting _fetch_time_series()
DEBUG Entering _fetch_time_series()
DEBUG Entering get()
DEBUG Entering _make_request()
DEBUG url=https://query2.finance.yahoo.com/ws/fundamentals-timeseries/v1/finance/timeseries/TSM?symbol=TSM&type=quarterlyTaxEffectOfUnusualItems,quarterlyTaxRateForCalcs,quarterlyNormalizedEBITDA,quarterlyNorma...
DEBUG params=None
DEBUG Entering _get_cookie_and_crumb()
DEBUG cookie_mode = 'basic'
DEBUG Entering _get_cookie_and_crumb_basic()
DEBUG reusing cookie
DEBUG reusing crumb
DEBUG Exiting _get_cookie_and_crumb_basic()
DEBUG Exiting _get_cookie_and_crumb()
DEBUG response code=200
DEBUG Exiting _make_request()
DEBUG Exiting get()
DEBUG Exiting _fetch_time_series()
DEBUG Entering _fetch_time_series()
DEBUG Entering get()
DEBUG Entering _make_request()
DEBUG url=https://query2.finance.yahoo.com/ws/fundamentals-timeseries/v1/finance/timeseries/TSM?symbol=TSM&type=quarterlyTreasurySharesNumber,quarterlyPreferredSharesNumber,quarterlyOrdinarySharesNumber,quarter...
DEBUG params=None
DEBUG Entering _get_cookie_and_crumb()
DEBUG cookie_mode = 'basic'
DEBUG Entering _get_cookie_and_crumb_basic()
DEBUG reusing cookie
DEBUG reusing crumb
DEBUG Exiting _get_cookie_and_crumb_basic()
DEBUG Exiting _get_cookie_and_crumb()
DEBUG response code=200
DEBUG Exiting _make_request()
DEBUG Exiting get()
DEBUG Exiting _fetch_time_series()
2025-01-27 17:37:58,417 - INFO - ROE - Calc (Ticker: TSM): 27.36 %
2025-01-27 17:37:58,417 - INFO - ROA - yFinance (Ticker: TSM): 12.41 %
2025-01-27 17:37:58,418 - INFO - ROA - Calc (Ticker: TSM): 17.53 %
DEBUG Entering _fetch_time_series()
DEBUG Entering get()
DEBUG Entering _make_request()
DEBUG url=https://query2.finance.yahoo.com/ws/fundamentals-timeseries/v1/finance/timeseries/TSM?symbol=TSM&type=annualForeignSales,annualDomesticSales,annualAdjustedGeographySegmentData,annualFreeCashFlow,annua...
DEBUG params=None
DEBUG Entering _get_cookie_and_crumb()
DEBUG cookie_mode = 'basic'
DEBUG Entering _get_cookie_and_crumb_basic()
DEBUG reusing cookie
DEBUG reusing crumb
DEBUG Exiting _get_cookie_and_crumb_basic()
DEBUG Exiting _get_cookie_and_crumb()
DEBUG response code=200
DEBUG Exiting _make_request()
DEBUG Exiting get()
DEBUG Exiting _fetch_time_series()
2025-01-27 17:37:58,594 - INFO - CFOA (Ticker: TSM): 27.29 %
2025-01-27 17:37:58,595 - INFO - GPOA (Ticker: TSM): 24.27 %
2025-01-27 17:37:58,596 - INFO - GPMAR (Ticker: TSM): 56.12 %
2025-01-27 17:37:58,596 - INFO - Profit Margin (Ticker: TSM): 39.12 %
2025-01-27 17:37:58,596 - INFO - P/E (Forward) (Ticker: TSM): 17.94
2025-01-27 17:37:58,597 - INFO - P/E (Trailing) (Ticker: TSM): 27.59
2025-01-27 17:37:58,597 - INFO - P/B (Ticker: TSM): 1.24
2025-01-27 17:37:58,597 - INFO - Sector: Technology
2025-01-27 17:37:58,597 - INFO - Industry: Semiconductors
### Bad data proof
https://finance.yahoo.com/quote/TSM/financials/
https://finance.yahoo.com/quote/TSM/balance-sheet/
### `yfinance` version
0.2.51
### Python version
3.13.1
### Operating system
Windows 11 Enterprise
|
closed
|
2025-01-27T16:40:41Z
|
2025-01-27T21:58:21Z
|
https://github.com/ranaroussi/yfinance/issues/2245
|
[] |
SimpleThings07
| 3
|
deepspeedai/DeepSpeed
|
machine-learning
| 6,817
|
deepspeed installation problem
|
Hi,
During the installation, I get the following error:
```python
Collecting deepspeed
Using cached deepspeed-0.16.0.tar.gz (1.4 MB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [32 lines of output]
[2024-12-04 11:05:58,747] [WARNING] [real_accelerator.py:174:get_accelerator] Setting accelerator to CPU. If you have GPU or other accelerator, we were unable to detect it.
[2024-12-04 11:05:58,767] [INFO] [real_accelerator.py:219:get_accelerator] Setting ds_accelerator to cpu (auto detect)
Warning: The cache directory for DeepSpeed Triton autotune, /home/sagie.dekel/.triton/autotune, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
[2024-12-04 11:06:00,385] [WARNING] [real_accelerator.py:174:get_accelerator] Setting accelerator to CPU. If you have GPU or other accelerator, we were unable to detect it.
[2024-12-04 11:06:00,391] [INFO] [real_accelerator.py:219:get_accelerator] Setting ds_accelerator to cpu (auto detect)
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/home/sagie.dekel/tmp/pip-install-c2oi7t19/deepspeed_6339861123c84ed3856f1feb571e3473/setup.py", line 40, in <module>
from op_builder import get_default_compute_capabilities, OpBuilder
File "/home/sagie.dekel/tmp/pip-install-c2oi7t19/deepspeed_6339861123c84ed3856f1feb571e3473/op_builder/__init__.py", line 18, in <module>
import deepspeed.ops.op_builder # noqa: F401 # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sagie.dekel/tmp/pip-install-c2oi7t19/deepspeed_6339861123c84ed3856f1feb571e3473/deepspeed/__init__.py", line 25, in <module>
from . import ops
File "/home/sagie.dekel/tmp/pip-install-c2oi7t19/deepspeed_6339861123c84ed3856f1feb571e3473/deepspeed/ops/__init__.py", line 15, in <module>
from ..git_version_info import compatible_ops as __compatible_ops__
File "/home/sagie.dekel/tmp/pip-install-c2oi7t19/deepspeed_6339861123c84ed3856f1feb571e3473/deepspeed/git_version_info.py", line 29, in <module>
op_compatible = builder.is_compatible()
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sagie.dekel/tmp/pip-install-c2oi7t19/deepspeed_6339861123c84ed3856f1feb571e3473/op_builder/cpu/async_io.py", line 80, in is_compatible
aio_compatible = self.has_function('io_submit', ('aio', ))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sagie.dekel/tmp/pip-install-c2oi7t19/deepspeed_6339861123c84ed3856f1feb571e3473/op_builder/builder.py", line 388, in has_function
shutil.rmtree(tempdir)
File "/home/sagie.dekel/PROGS/anaconda3/envs/RLRF_env/lib/python3.12/shutil.py", line 759, in rmtree
_rmtree_safe_fd(stack, onexc)
File "/home/sagie.dekel/PROGS/anaconda3/envs/RLRF_env/lib/python3.12/shutil.py", line 703, in _rmtree_safe_fd
onexc(func, path, err)
File "/home/sagie.dekel/PROGS/anaconda3/envs/RLRF_env/lib/python3.12/shutil.py", line 662, in _rmtree_safe_fd
os.rmdir(name, dir_fd=dirfd)
OSError: [Errno 39] Directory not empty: '/home/sagie.dekel/tmp/tmp4yqce7ut'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
**My environment:**
- OS: Linux
- Python: 3.12.7 (also tried to downgrade without success)
- PyTorch: 2.4.1
- Transformers: 4.46.1
- CUDA (python -c 'import torch; print(torch.version.cuda)'): 11.8
**Packages Version:**
```python
# Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 5.1 1_gnu
absl-py 2.1.0 py312h06a4308_0
accelerate 1.0.0 pypi_0 pypi
aiohappyeyeballs 2.4.3 pypi_0 pypi
aiohttp 3.10.10 pypi_0 pypi
aiosignal 1.3.1 pypi_0 pypi
annotated-types 0.7.0 pypi_0 pypi
argparse 1.4.0 pypi_0 pypi
attrs 24.2.0 pypi_0 pypi
blas 1.0 mkl
bottleneck 1.4.2 py312ha883a20_0
brotli-python 1.0.9 py312h6a678d5_8
bzip2 1.0.8 h5eee18b_6
c-ares 1.19.1 h5eee18b_0
ca-certificates 2024.11.26 h06a4308_0
certifi 2024.8.30 py312h06a4308_0
charset-normalizer 3.4.0 pypi_0 pypi
click 8.1.7 pypi_0 pypi
cuda-cudart 11.8.89 0 nvidia
cuda-cupti 11.8.87 0 nvidia
cuda-libraries 11.8.0 0 nvidia
cuda-nvrtc 11.8.89 0 nvidia
cuda-nvtx 11.8.86 0 nvidia
cuda-runtime 11.8.0 0 nvidia
cuda-version 12.4 hbda6634_3
datasets 3.0.1 pypi_0 pypi
dill 0.3.8 pypi_0 pypi
docstring-parser 0.16 pypi_0 pypi
expat 2.6.3 h6a678d5_0
ez-setup 0.9 pypi_0 pypi
ffmpeg 4.3 hf484d3e_0 pytorch
filelock 3.16.1 pypi_0 pypi
freetype 2.12.1 h4a9f257_0
frozenlist 1.4.1 pypi_0 pypi
fsspec 2024.6.1 pypi_0 pypi
gmp 6.2.1 h295c915_3
gnutls 3.6.15 he1e5248_0
grpcio 1.62.2 py312h6a678d5_0
hjson 3.1.0 pypi_0 pypi
huggingface-hub 0.25.2 pypi_0 pypi
idna 3.10 pypi_0 pypi
intel-openmp 2023.1.0 hdb19cb5_46306
jinja2 3.1.4 py312h06a4308_1
jpeg 9e h5eee18b_3
lame 3.100 h7b6447c_0
lcms2 2.12 h3be6417_0
ld_impl_linux-64 2.40 h12ee557_0
lerc 3.0 h295c915_0
libabseil 20240116.2 cxx17_h6a678d5_0
libcublas 11.11.3.6 0 nvidia
libcufft 10.9.0.58 0 nvidia
libcufile 1.9.1.3 h99ab3db_1
libcurand 10.3.5.147 h99ab3db_1
libcusolver 11.4.1.48 0 nvidia
libcusparse 11.7.5.86 0 nvidia
libdeflate 1.17 h5eee18b_1
libffi 3.4.4 h6a678d5_1
libgcc-ng 11.2.0 h1234567_1
libgomp 11.2.0 h1234567_1
libgrpc 1.62.2 h2d74bed_0
libiconv 1.16 h5eee18b_3
libidn2 2.3.4 h5eee18b_0
libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
libnpp 11.8.0.86 0 nvidia
libnvjpeg 11.9.0.86 0 nvidia
libpng 1.6.39 h5eee18b_0
libprotobuf 4.25.3 he621ea3_0
libstdcxx-ng 11.2.0 h1234567_1
libtasn1 4.19.0 h5eee18b_0
libtiff 4.5.1 h6a678d5_0
libunistring 0.9.10 h27cfd23_0
libuuid 1.41.5 h5eee18b_0
libwebp-base 1.3.2 h5eee18b_1
llvm-openmp 14.0.6 h9e868ea_0
lz4-c 1.9.4 h6a678d5_1
markdown 3.4.1 py312h06a4308_0
markdown-it-py 3.0.0 pypi_0 pypi
markupsafe 3.0.1 pypi_0 pypi
mdurl 0.1.2 pypi_0 pypi
mkl 2023.1.0 h213fc3f_46344
mkl-service 2.4.0 py312h5eee18b_1
mkl_fft 1.3.11 py312h5eee18b_0
mkl_random 1.2.8 py312h526ad5a_0
mpmath 1.3.0 py312h06a4308_0
msgpack 1.1.0 pypi_0 pypi
multidict 6.1.0 pypi_0 pypi
multiprocess 0.70.16 pypi_0 pypi
ncurses 6.4 h6a678d5_0
nettle 3.7.3 hbbd107a_1
networkx 3.3 py312h06a4308_0
ninja 1.11.1.2 pypi_0 pypi
numexpr 2.10.1 py312h3c60e43_0
numpy 1.26.4 py312hc5e2394_0
numpy-base 1.26.4 py312h0da6c21_0
nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
nvidia-nccl-cu12 2.20.5 pypi_0 pypi
nvidia-nvjitlink-cu12 12.6.77 pypi_0 pypi
nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
openh264 2.1.1 h4ff587b_0
openjpeg 2.5.2 he7f1fd0_0
openssl 3.0.15 h5eee18b_0
packaging 24.1 pypi_0 pypi
pandas 2.2.3 pypi_0 pypi
pillow 8.2.0 pypi_0 pypi
pip 24.3.1 pypi_0 pypi
propcache 0.2.0 pypi_0 pypi
protobuf 3.20.0 pypi_0 pypi
psutil 6.0.0 pypi_0 pypi
py-cpuinfo 9.0.0 pypi_0 pypi
pyarrow 17.0.0 pypi_0 pypi
pydantic 2.10.1 pypi_0 pypi
pydantic-core 2.27.1 pypi_0 pypi
pygments 2.18.0 pypi_0 pypi
pysocks 1.7.1 py312h06a4308_0
python 3.12.7 h5148396_0
python-dateutil 2.9.0post0 py312h06a4308_2
python-tzdata 2023.3 pyhd3eb1b0_0
pytorch 2.4.1 py3.12_cuda11.8_cudnn9.1.0_0 pytorch
pytorch-cuda 11.8 h7e8668a_5 pytorch
pytorch-mutex 1.0 cuda pytorch
pytz 2024.1 py312h06a4308_0
pyyaml 6.0.2 py312h5eee18b_0
rbo 0.1.3 pypi_0 pypi
re2 2022.04.01 h295c915_0
readline 8.2 h5eee18b_0
regex 2024.9.11 pypi_0 pypi
requests 2.32.3 py312h06a4308_1
rich 13.9.2 pypi_0 pypi
safetensors 0.4.5 pypi_0 pypi
scipy 1.14.1 pypi_0 pypi
setuptools 69.5.0 pypi_0 pypi
shtab 1.7.1 pypi_0 pypi
six 1.16.0 pyhd3eb1b0_1
sqlite 3.45.3 h5eee18b_0
sympy 1.13.3 pypi_0 pypi
tbb 2021.8.0 hdb19cb5_0
tensorboard 2.17.0 py312h06a4308_0
tensorboard-data-server 0.7.0 py312h52d8a92_1
tk 8.6.14 h39e8969_0
tokenizers 0.20.0 pypi_0 pypi
torch 2.4.1 pypi_0 pypi
torchaudio 2.4.1 py312_cu118 pytorch
torchtriton 3.0.0 py312 pytorch
torchvision 0.19.1 py312_cu118 pytorch
tqdm 4.66.5 py312he106c6f_0
transformers 4.46.1 pypi_0 pypi
triton 3.0.0 pypi_0 pypi
trl 0.12.0 pypi_0 pypi
typing-extensions 4.12.2 pypi_0 pypi
typing_extensions 4.11.0 py312h06a4308_0
tyro 0.8.12 pypi_0 pypi
tzdata 2024b h04d1e81_0
urllib3 2.2.3 py312h06a4308_0
werkzeug 3.0.6 py312h06a4308_0
wheel 0.44.0 py312h06a4308_0
xxhash 3.5.0 pypi_0 pypi
xz 5.4.6 h5eee18b_1
yaml 0.2.5 h7b6447c_0
yarl 1.14.0 pypi_0 pypi
zlib 1.2.13 h5eee18b_1
zstd 1.5.6 hc292b87_0
```
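One thing I will try next (an assumption based on the NFS warning in the log above, not a verified fix): the failing `shutil.rmtree` looks like the usual NFS silly-rename problem, since my home directory's `tmp` is on NFS. Pointing the temp directory at a local, non-NFS path before installing, e.g. `TMPDIR=/tmp pip install deepspeed`, might avoid it.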
Thanks for helping :)
|
closed
|
2024-12-04T09:21:11Z
|
2024-12-11T06:01:56Z
|
https://github.com/deepspeedai/DeepSpeed/issues/6817
|
[] |
sagie-dekel
| 8
|
marcomusy/vedo
|
numpy
| 802
|
Plot surface into variable with out showing result
|
Hi @marcomusy ,
I'm trying to render a surface mesh into a variable / Pillow image / numpy array that I can post-process. I saw `vedo.Plotter()` has two functions, `.show()` and `.screenshot()`. When calling `show()` from a Jupyter notebook, the result is shown immediately. I was wondering whether it is possible to draw a plot of a surface without showing it, and retrieve the plot as a numpy array or Pillow Image instead.
Background: I would like to have the screenshot in a variable and build some stuff around before showing it, similar to [`stackview.insight()`](https://github.com/haesleinhuepf/stackview#static-insight-views) but with surfaces instead of images...
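To make it concrete, this is roughly what I am hoping for (an untested sketch on my part; I am assuming `Plotter(offscreen=True)` and `screenshot(asarray=True)` behave the way their names suggest):
```python
import vedo

# untested sketch: render off-screen and grab the frame as a numpy array
plt = vedo.Plotter(offscreen=True)
plt.show(vedo.Mesh(vedo.dataurl + "bunny.obj"))   # any example mesh
arr = plt.screenshot(asarray=True)                # numpy array, no file written
plt.close()
```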
Any hint is welcome. Thanks!
Best,
Robert
|
closed
|
2023-02-04T13:31:48Z
|
2023-02-04T15:44:28Z
|
https://github.com/marcomusy/vedo/issues/802
|
[] |
haesleinhuepf
| 4
|
ydataai/ydata-profiling
|
pandas
| 966
|
latest version 3.2.0 is broken in google colab
|
**Describe the bug**
Cannot import ProfileReport.
**To Reproduce**
```
profile = ProfileReport(df, title="Pandas Profiling Report")
profile.to_notebook_iframe()
```
This will throw an error.
For anyone who wants a dirty fix, install the previous release instead:
```
!pip install pandas-profiling==3.1.0
```
|
closed
|
2022-05-06T17:00:23Z
|
2022-05-07T15:31:59Z
|
https://github.com/ydataai/ydata-profiling/issues/966
|
[
"bug 🐛"
] |
athulkrishna2015
| 3
|
globaleaks/globaleaks-whistleblowing-software
|
sqlalchemy
| 3,069
|
"Multimedia content" disappears after globaleaks update
|
**Describe the bug**
In a questionnaire step there is only a group of questions. I have checked "add multimedia content" and added an mp4 video locally uploaded. No other question has been added. The video is educational on the specific whistleblowing case.
When updating the platform to the latest version, the checkbox becomes unchecked and the video no longer appears.
Checking the box again and re-entering the file path restores it.
This has happened at least twice.
|
closed
|
2021-10-12T09:08:40Z
|
2021-10-21T07:50:09Z
|
https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3069
|
[
"T: Bug",
"C: Backend"
] |
elbill
| 2
|
JoeanAmier/TikTokDownloader
|
api
| 351
|
How can I change the default download folder in terminal interactive mode?
|
### Discussed in https://github.com/JoeanAmier/TikTokDownloader/discussions/350
<div type='discussions-op-text'>
<sup>Originally posted by **imzhanglibo** December 12, 2024</sup>
After entering the link and confirming, files are downloaded into the _internal folder by default. Can this be changed to another folder?</div>
|
open
|
2024-12-12T15:04:12Z
|
2025-01-12T10:05:02Z
|
https://github.com/JoeanAmier/TikTokDownloader/issues/351
|
[] |
imzhanglibo
| 1
|
autogluon/autogluon
|
computer-vision
| 3,919
|
Not able to install AutoGluon on my Windows system. It is not able to find numpy==1.21.3
|
Hi,
I am not able to install AutoGluon on my Windows 10 system.
I am using the `pip install autogluon` command in the command prompt. I have Python 3.11.0 installed and can't downgrade it due to other dependencies.
The installation gives the error below. I tried manually installing numpy==1.21.3, but I guess it does not support Python 3.11.0.
Please help.
ERROR: Could not find a version that satisfies the requirement numpy==1.21.3 (from versions: 1.3.0, 1.4.1, 1.5.0, 1.5.1, 1.6.0, 1.6.1, 1.6.2, 1.7.0, 1.7.1, 1.7.2, 1.8.0, 1.8.1, 1.8.2, 1.9.0, 1.9.1, 1.9.2, 1.9.3, 1.10.0.post2, 1.10.1, 1.10.2, 1.10.4, 1.11.0, 1.11.1, 1.11.2, 1.11.3, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 1.13.3, 1.14.0, 1.14.1, 1.14.2, 1.14.3, 1.14.4, 1.14.5, 1.14.6, 1.15.0, 1.15.1, 1.15.2, 1.15.3, 1.15.4, 1.16.0, 1.16.1, 1.16.2, 1.16.3, 1.16.4, 1.16.5, 1.16.6, 1.17.0, 1.17.1, 1.17.2, 1.17.3, 1.17.4, 1.17.5, 1.18.0, 1.18.1, 1.18.2, 1.18.3, 1.18.4, 1.18.5, 1.19.0, 1.19.1, 1.19.2, 1.19.3, 1.19.4, 1.19.5, 1.20.0, 1.20.1, 1.20.2, 1.20.3, 1.21.0, 1.21.1, 1.22.0, 1.22.1, 1.22.2, 1.22.3, 1.22.4, 1.23.0rc1, 1.23.0rc2, 1.23.0rc3, 1.23.0, 1.23.1, 1.23.2, 1.23.3, 1.23.4, 1.23.5, 1.24.0rc1, 1.24.0rc2, 1.24.0, 1.24.1, 1.24.2, 1.24.3, 1.24.4, 1.25.0rc1, 1.25.0, 1.25.1, 1.25.2, 1.26.0b1, 1.26.0rc1, 1.26.0, 1.26.1, 1.26.2, 1.26.3, 1.26.4)
ERROR: No matching distribution found for numpy==1.21.3
Thanks
|
closed
|
2024-02-14T04:08:26Z
|
2024-02-14T08:32:45Z
|
https://github.com/autogluon/autogluon/issues/3919
|
[] |
rahulhiware
| 1
|
litestar-org/polyfactory
|
pydantic
| 531
|
Docs: Document functions/classes/methods that have no docstrings and are missing from the built docs
|
We should document these. I don't think any are exposed in the reference documentation because of lack of docstrings
_Originally posted by @JacobCoffee in https://github.com/litestar-org/polyfactory/pull/530#discussion_r1585191572_
|
open
|
2024-04-30T16:47:18Z
|
2025-03-20T15:53:16Z
|
https://github.com/litestar-org/polyfactory/issues/531
|
[
"documentation"
] |
JacobCoffee
| 0
|
google-research/bert
|
tensorflow
| 878
|
how to realize the tokenization of BERT model in c++
|
Thanks for your work.
If I want to use the TensorFlow C++ API to import the pretrained BERT model, how can I process the text data in C++, including BERT's tokenization? Is there a C++ wrapper for BERT? Does the TensorFlow C++ API provide BERT's tokenization, or do I need to reimplement tokenization.py in C++?
Thanks for any information.
|
closed
|
2019-10-14T15:45:49Z
|
2019-11-22T08:29:58Z
|
https://github.com/google-research/bert/issues/878
|
[] |
lytum
| 14
|
marcomusy/vedo
|
numpy
| 93
|
migrating: Actor -> Mesh
|
Hi Marco,
My code was still using .Actor and getActors(), which are deprecated in the latest version.
Is updating my code as easy as replacing the Actor() class with Mesh and renaming getActors to getMeshes?
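For what it's worth, this is the one-to-one rename I have in mind (a sketch of my own assumption, not verified against the changelog):
```python
from vedo import Mesh, Plotter

plt = Plotter()
m = Mesh("mymesh.ply")        # was: Actor("mymesh.ply")
plt.show(m)
meshes = plt.getMeshes()      # was: plt.getActors()
```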
|
closed
|
2020-01-09T16:37:59Z
|
2020-01-09T16:42:08Z
|
https://github.com/marcomusy/vedo/issues/93
|
[] |
RubendeBruin
| 2
|
mckinsey/vizro
|
plotly
| 776
|
Add other elements to Filter/Control sidebar
|
### Question
Is it possible to add any component to the Filter/Parameter sidebar?
As far as I understand, only elements added as children of controls will appear in the sidebar,
but controls only accepts either vm.Filter() or vm.Parameter().
I want to add a dropdown in the sidebar, which I want to control using dash callbacks.
I hope the code example helps to make clear what I want.
### Code/Examples
```
import dash_core_components as dcc
import vizro.models as vm
import vizro.plotly.express as px
from vizro import Vizro
iris = px.data.iris()
page = vm.Page(
title="My first page",
components=[
vm.Graph(
id="scatter_chart",
figure=px.scatter(iris, title="My scatter chart", x="sepal_length", y="petal_width", color="species"),
),
],
controls=[
#Here I want to add any component, could also be dcc.Dropdown or any other custom element.
vm.Dropdown(
id="my-dropdown",
...
),
vm.Parameter(
targets=["scatter_chart.title"],
selector=vm.Dropdown(
options=["My scatter chart", "A better title!", "Another title..."],
multi=False,
),
),
],
)
dashboard = vm.Dashboard(pages=[page])
Vizro().build(dashboard).run()
```
### Which package?
vizro
### Code of Conduct
- [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md).
|
closed
|
2024-10-04T09:54:58Z
|
2024-10-04T10:15:51Z
|
https://github.com/mckinsey/vizro/issues/776
|
[
"General Question :question:"
] |
FSaal
| 2
|
pytorch/vision
|
machine-learning
| 8,509
|
[Documentation] Improve Clarity on `torchvision.io.write_video` options
|
### 📚 The doc issue
### tl;dr `options` parameter under `torchvision.io.write_video` is inadequately documented, challenging for end-users to tweak videos to their liking. same goes for `audio_options`
In [`torchvision.io.write_video`](https://pytorch.org/vision/stable/generated/torchvision.io.write_video.html#torchvision.io.write_video), `options` and `audio_options` parameters are not sufficiently elaborated upon.
At first glance, this is okay as the function is ultimately just a wrapper around `PyAV`'s video recording capabilities.
In practice, however, **it is nontrivial to find relevant documentation to finetune the output.**
This is especially so if the end-user, like me, is not familiar with `PyAV` or video recording in general.
[Here is an example](https://github.com/pytorch/rl/issues/2258) of the potential issues this may cause.
### Suggest a potential alternative/fix
This is an issue we are working on as well in [TorchRL](https://github.com/pytorch/rl/tree/4e60774dedb6d5847c163c985b71928598123b46).
As suggested by @vmoens, it would be nice if a link could be provided to the [FFmpeg documentation](https://trac.ffmpeg.org/wiki), which is ultimately behind the `options` exposed by `PyAV`.
A potential extra step could then be to translate the wiki information into *specific code examples* that end-users can easily read and apply. This will probably benefit more people than if such information were only being documented in downstream libraries. [Here's what a "specific code example" could look like](https://github.com/pytorch/rl/blob/4e60774dedb6d5847c163c985b71928598123b46/knowledge_base/VIDEO_CUSTOMISATION.md),
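As a concrete illustration of the gap, here is the kind of snippet that currently requires digging through FFmpeg/PyAV docs to discover (the option values are illustrative assumptions, not recommendations):
```python
import torch
from torchvision.io import write_video

frames = torch.randint(0, 256, (30, 240, 320, 3), dtype=torch.uint8)  # T,H,W,C

# `options` keys/values are passed through to FFmpeg via PyAV as strings;
# "crf" and "preset" are common libx264 knobs (illustrative values)
write_video("out.mp4", frames, fps=30, options={"crf": "17", "preset": "slow"})
```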
|
open
|
2024-07-02T14:31:10Z
|
2024-07-04T10:32:11Z
|
https://github.com/pytorch/vision/issues/8509
|
[] |
N00bcak
| 1
|
ansible/awx
|
django
| 15,725
|
A project update is triggered every time a task related to this project is started
|
### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [X] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)
### Bug Summary
I have a project that uses Git as a source, with the branch name specified there. I also have an inventory that lists this project as its source.
The "Update on Launch" and "Update Revision on Launch" options are disabled.
When I start an inventory sync, a project update starts. I don't need the project to be updated; it is already up to date.
I tried using a corrected branch name, pinning the branch, and recreating the project with other branches.
The problem remains: the project is updated every time an inventory sync is started.
### AWX version
24.6.1
### Select the relevant components
- [X] UI
- [ ] UI (tech preview)
- [X] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [ ] Other
### Installation method
kubernetes
### Modifications
no
### Ansible version
_No response_
### Operating system
_No response_
### Web browser
Chrome
### Steps to reproduce
1) create a project.
1.1 Select Source control type - GIT and source control branch
1.2 sync project
2) create an inventory. Select project and inventory_file
3) sync inventory
### Expected results
I expect that the project will not be updated. If you click on the details, you can see the update status of the project and the messages say that the project was updated.
### Actual results
1) create a project.
1.1 Select Source control type - GIT and source control branch

1.2 sync project

2) create an inventory. Select project and inventory_file

3) sync inventory


Select details:

Click Project Update Status:

### Additional information
_No response_
|
open
|
2024-12-24T14:43:38Z
|
2025-01-22T18:46:57Z
|
https://github.com/ansible/awx/issues/15725
|
[
"type:bug",
"component:api",
"component:ui",
"needs_triage",
"community"
] |
nikiforova13
| 0
|
jowilf/starlette-admin
|
sqlalchemy
| 319
|
Question: How to password protect the admin page?
|
closed
|
2023-10-02T22:13:03Z
|
2023-10-04T15:31:48Z
|
https://github.com/jowilf/starlette-admin/issues/319
|
[] |
magedhelmy1
| 1
|
|
pennersr/django-allauth
|
django
| 3,603
|
Amazon Cognito - Where to define my CustomAmazonCognitoProvider
|
Hi all, thanks for this wonderful package. I have a small question.
I've completed setting up AWS Cognito account sign-in. Now, I need a small thing as well. It is to get custom user attributes from Cognito.
### Question in short
Where can I add my `CustomAmazonCognitoProvider` after extending?
### Brief overview
In [`allauth/socialaccount/providers/amazon_cognito/provider.py`](https://github.com/pennersr/django-allauth/blob/105aace58fe767595c61e3a43563af7e1c277797/allauth/socialaccount/providers/amazon_cognito/provider.py#L52) , we see the class `AmazonCognitoProvider`. Inside this class, there is `extract_extra_data` method. This method is responsible for getting attributes and it does the job well.
Now, I want a custom attribute, say `foo`, from Cognito. I've monkey-patched the method: inside the `return` I just added `"zoneinfo": data.get("custom:foo")`. It works as well.
### What I've tried
Now that I know it can get the info, and since it's always better to extend the class, I did the below;
```
from allauth.socialaccount.providers.amazon_cognito.provider import (
AmazonCognitoProvider as BaseAmazonCognitoProvider,
)
class CustomAmazonCognitoProvider(BaseAmazonCognitoProvider):
def extract_extra_data(self, data):
extra_data = super(CustomAmazonCognitoProvider, self).extract_extra_data(data)
# Add your custom field extraction logic here
custom_field = data.get("custom:foo")
extra_data["foo"] = custom_field
print(f">>>>>>>>> hung:{extra_data}")
return extra_data
```
I've added this `CustomAmazonCognitoProvider` in settings as:
```
SOCIALACCOUNT_PROVIDERS = {
"amazon_cognito": {
# "SOCIALACCOUNT_ADAPTER": "myapp.views.CustomAmazonCognitoProvider",
"DOMAIN": "https://cogauth.auth.ap-south-1.amazoncognito.com",
"APP": {"class": "polls.views.CustomAmazonCognitoProvider"},
},
}
```
This caused:
```
raise MultipleObjectsReturned
django.core.exceptions.MultipleObjectsReturned
```
### Question
What I posted above might also be wrong; I'm not sure where to put `CustomAmazonCognitoProvider`. The question is: where do I put `CustomAmazonCognitoProvider`, and does it seem okay?
Thanks for reading!
|
closed
|
2024-01-19T11:18:28Z
|
2024-04-16T07:15:19Z
|
https://github.com/pennersr/django-allauth/issues/3603
|
[] |
b3nsh4
| 3
|
indico/indico
|
sqlalchemy
| 6,058
|
[A11Y] File field in the registration form is not accessible
|
**Describe the bug**
The field name is not mentioned when the file upload button is focused.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to a registration form that includes a file field using a screen reader
2. Tab through the fields and focus the upload button
3. Observe that the field name is not announced
**Expected behavior**
The field name should be included in the announcement.
**Additional context**
https://www.w3.org/WAI/WCAG21/Understanding/labels-or-instructions.html
|
open
|
2023-11-24T09:31:02Z
|
2023-11-24T09:31:02Z
|
https://github.com/indico/indico/issues/6058
|
[
"bug"
] |
foxbunny
| 0
|
STVIR/pysot
|
computer-vision
| 450
|
When I set __C.TRAIN.EXEMPLAR_SIZE=512, my loc loss is always 0
|
When I set __C.TRAIN.EXEMPLAR_SIZE=512, __C.TRAIN.SEARCH_SIZE=800, and __C.TRAIN.OUTPUT_SIZE=37, my loc loss is always 0. What other parameters do I need to change?
|
closed
|
2020-10-15T02:07:53Z
|
2020-11-10T09:21:08Z
|
https://github.com/STVIR/pysot/issues/450
|
[
"invalid"
] |
XuShoweR
| 2
|
plotly/dash
|
data-science
| 2,519
|
[BUG] `dash.get_relative_path()` docstring out of date
|
Docstrings for `dash.get_relative_path()` and `dash.strip_relative_path()` still refer to the `app` way of accessing those functions, which creates inconsistency in the docs:

https://dash.plotly.com/reference#dash.get_relative_path
|
closed
|
2023-05-02T18:57:09Z
|
2023-05-15T20:29:16Z
|
https://github.com/plotly/dash/issues/2519
|
[] |
emilykl
| 0
|
modelscope/modelscope
|
nlp
| 543
|
Differences between the various fine-tuning scripts
|
To the experts and developers: can both https://github.com/modelscope/modelscope/blob/master/examples/pytorch/llama/run_train_lora.sh and https://github.com/modelscope/swift/blob/main/examples/pytorch/llm/scripts/llama2_70b_chat/qlora/sft.sh do model-parallel (sharded) training? Besides the torchrun vs. swift and qlora vs. lora differences, how do they differ in terms of data parallelism? The first one lets you set per_device_train_batch_size; would running a batch on every GPU make it faster? Many thanks. I'm hoping to combine model parallelism and data parallelism to run faster.
|
closed
|
2023-09-16T13:32:36Z
|
2023-09-18T10:24:33Z
|
https://github.com/modelscope/modelscope/issues/543
|
[] |
yzyz-77
| 3
|
pydantic/logfire
|
fastapi
| 313
|
Anthropic Instrumentation errors
|
### Description
Received the following error on a `instrument_anthopic` call.
```
File ~/an-env/lib/python3.11/site-packages/logfire/_internal/main.py:1012, in Logfire.instrument_anthropic(self, anthropic_client, suppress_other_instrumentation)
962 """Instrument an Anthropic client so that spans are automatically created for each request.
963
964 The following methods are instrumented for both the sync and the async clients:
(...)
1008 Use of this context manager is optional.
1009 """
1010 import anthropic
-> 1012 from .integrations.llm_providers.anthropic import get_endpoint_config, is_async_client, on_response
1013 from .integrations.llm_providers.llm_provider import instrument_llm_provider
1015 self._warn_if_not_initialized_for_instrumentation()
File ~/an-env/lib/python3.11/site-packages/logfire/_internal/integrations/llm_providers/anthropic.py:6
3 from typing import TYPE_CHECKING, Any
5 import anthropic
----> 6 from anthropic.types import Message, RawContentBlockDeltaEvent, RawContentBlockStartEvent, TextBlock, TextDelta
8 from .types import EndpointConfig
10 if TYPE_CHECKING:
ImportError: cannot import name 'RawContentBlockDeltaEvent' from 'anthropic.types' (/Users/someuser/an-env/lib/python3.11/site-packages/anthropic/types/__init__.py)
```
### Python, Logfire & OS Versions, related packages (not required)
```TOML
logfire="0.46.1"
platform="macOS-10.16-x86_64-i386-64bit"
python="3.11.5 (main, Sep 11 2023, 08:19:27) [Clang 14.0.6 ]"
anthropic==0.25.9
```
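For what it's worth, the traceback points at `anthropic.types` missing `RawContentBlockDeltaEvent`, so my guess (an assumption, not verified) is that anthropic 0.25.9 predates that type and upgrading the `anthropic` package would resolve the import.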
|
closed
|
2024-07-13T16:06:48Z
|
2024-07-17T15:13:11Z
|
https://github.com/pydantic/logfire/issues/313
|
[
"bug",
"P1"
] |
bllchmbrs
| 2
|
tflearn/tflearn
|
data-science
| 878
|
TensorBoard: how to plot custom variable?
|
How can I plot the changes of a custom variable at each epoch with tflearn and TensorBoard? I found a way to show my custom variable in the console with callbacks, but I want to visualize those changes with TensorBoard.
```python
NR = 3500 #Amount of first class data
NS = 3500 #Amount of second class data
class CustomCallback(tflearn.callbacks.Callback):
def __init__(self):
pass
def on_epoch_end(self, training_state):
TP = 0
TN = 0
FP = 0
FN = 0
predictions = model.predict(X)
for prediction, actual in zip(predictions, Y):
predicted_class = np.argmax(prediction)
actual_class = np.argmax(actual)
if predicted_class == 1:
if actual_class == 1:
TP+=1
else:
FP+=1
else:
if actual_class == 0:
TN+=1
else:
FN+=1
Metric = (TP / NR) - (FP / NS)
print ("Metric: " + str(Metric))
print("--")
callback = CustomCallback()
model = tflearn.DNN(net, tensorboard_verbose=0)
model.fit(X, Y, n_epoch=3, show_metric=True, callbacks=callback)
```
Like this

|
open
|
2017-08-20T09:31:59Z
|
2017-08-20T09:37:27Z
|
https://github.com/tflearn/tflearn/issues/878
|
[] |
kulsemig
| 0
|
bauerji/flask-pydantic
|
pydantic
| 49
|
Using flask-pydantic with Python Dependency Injection Framework
|
Hi! I am using this module with the Python Dependency Injector framework, and when I try to use a Provider in the kwargs of a view I get:
`RuntimeError: no validator found for <class '...'>, see arbitrary_types_allowed in Config`
The reason for this error is that the @validate decorator only ignores kwargs named "query", "body", and "return".
It would be better if the @validate decorator also ignored kwargs whose default is a Provide instance (see the sketch after the sample code).
Sample code:
```
@bp.route('/api/posts', methods=['GET'])
@inject
@validate(response_many=True)
def get_posts(
service: PostService = Provide[Container.services.post_service]
) -> List[ReadPost]:
...
```
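A rough sketch of the proposed check (my assumption about how the decorator could detect injected parameters; I believe `Provide[...]` markers are instances of `Provide` in dependency-injector's wiring module, but I haven't verified this):
```python
import inspect
from dependency_injector.wiring import Provide

def is_injected(param: inspect.Parameter) -> bool:
    # sketch: kwargs whose default is a wiring marker should be skipped
    # by @validate instead of being routed to pydantic
    return isinstance(param.default, Provide)
```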
|
open
|
2022-03-30T11:55:59Z
|
2024-03-21T21:18:34Z
|
https://github.com/bauerji/flask-pydantic/issues/49
|
[] |
akhundMurad
| 7
|
pennersr/django-allauth
|
django
| 3,282
|
`OpenIDConnectProvider` generates slug using `_server_id` and does not fallback to inherited implementation
|
KeycloakProvider started using `openid_connect` as its base URL once `OpenIDConnectProvider` implemented `get_slug`.
|
closed
|
2023-03-15T08:00:58Z
|
2023-03-16T15:43:36Z
|
https://github.com/pennersr/django-allauth/issues/3282
|
[] |
ManiacMaxo
| 1
|
biolab/orange3
|
pandas
| 6,664
|
orange-canvas does not work
|
**What's wrong?**
orange-canvas does not work: it says "illegal instruction (core dumped)" after the splash screen, so it is impossible to open it.
(Same with python -m Orange.canvas.)
**How can we reproduce the problem?**
By running orange-canvas after installing it with pip3 install orange3.
**What's your environment?**
- Operating system: Linux Mint 21.2
- Orange version: 3.36.1
- How you installed Orange: pip3 install orange3
- I use Python 3.10.12 with numpy, scipy, PyQt5 (version 5.15.6) and PyQtWebEngine (version 5.15.5)
Thank you.
|
open
|
2023-12-03T10:03:53Z
|
2024-01-01T19:12:27Z
|
https://github.com/biolab/orange3/issues/6664
|
[
"bug report"
] |
pgr123
| 6
|
gee-community/geemap
|
streamlit
| 1,928
|
Release geemap
|
Test
|
closed
|
2024-02-29T21:39:15Z
|
2024-05-24T20:43:28Z
|
https://github.com/gee-community/geemap/issues/1928
|
[
"release"
] |
jdbcode
| 0
|
adbar/trafilatura
|
web-scraping
| 560
|
Use `with_metadata` parameter to decide whether to run metadata extraction
|
So far this parameter is pending deprecation. It could be re-used to do what most users expect: decide manually (not only based on the output format) whether to run metadata extraction. Focusing on the main text only speeds things up.
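A sketch of the intended usage once the parameter takes on this role (the flag exists today but is pending deprecation, so treat this as the proposed behavior rather than the current one):
```python
from trafilatura import fetch_url, extract

html = fetch_url("https://example.org")
# skip metadata extraction entirely to speed up main-text extraction
text = extract(html, with_metadata=False)
```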
|
closed
|
2024-04-15T13:20:09Z
|
2024-06-07T14:40:11Z
|
https://github.com/adbar/trafilatura/issues/560
|
[
"enhancement"
] |
adbar
| 0
|
AirtestProject/Airtest
|
automation
| 715
|
Why add the extra step of "computing a confidence score from the matched region"?
|
For an algorithm like SURF, the computation already produces a result, so why compute a confidence score separately?
I don't think it's necessary. Cases like resolution changes or second screenshots obviously make the confidence computation very poor; in my tests, with a second screenshot the confidence computed in step 4 fails to pass. Would it be feasible to drop step 4?
I don't understand why this step is needed, please let me know!!!
https://github.com/AirtestProject/Airtest/blob/master/airtest/aircv/keypoint_base.py#L86
|
closed
|
2020-04-05T07:46:37Z
|
2020-05-25T06:03:27Z
|
https://github.com/AirtestProject/Airtest/issues/715
|
[] |
enlangs792
| 3
|
zama-ai/concrete-ml
|
scikit-learn
| 789
|
Adding encrypted training for other ML models and DL models
|
## Feature request
From the [doc of encrypted training](https://docs.zama.ai/concrete-ml/built-in-models/training), only `SGDClassifier` is mentioned. I would like to train other ML/DL models on encrypted data and also would love to contribute.
Here are some questions after reading some of the codes related to encrypted training :
1. What are the reasons that there is no encrypted training for other ML/DL models? Is it because there is some limitation in either concrete or concrete-ml that blocks this development? If so, what are those limitations?
2. Some potential constraints I observed from the code that does encrypted learning on `SGDClassifier`:
(1) [Parameter range has to be preset](https://github.com/zama-ai/concrete-ml/blob/77cbdd61d0e44ab5875f8ee1c57d0831577042da/src/concrete/ml/sklearn/linear_model.py#L136)
$~~~~~$ * Is this inevitable due to overflowing during FHE computation?
(2) [Floating point distribution of input has to be similar](https://github.com/zama-ai/concrete-ml/blob/main/docs/advanced_examples/LogisticRegressionTraining.ipynb)
$~~~~~$ * Could you elaborate more on this?
(3) [Learning rate == 1](https://github.com/zama-ai/concrete-ml/blob/77cbdd61d0e44ab5875f8ee1c57d0831577042da/src/concrete/ml/sklearn/linear_model.py#L190)
$~~~~~$ * Does it mean we cannot have arbitrary learning rates?
It would be much appreciated if you could explain the reasons behind these constraints.
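For reference, a minimal sketch of the one encrypted-training path the docs describe, reconstructed from the linked documentation; argument names and values are my assumptions and may drift between releases:
```python
from sklearn.datasets import make_classification
from concrete.ml.sklearn import SGDClassifier

X, y = make_classification(n_samples=100, n_features=8, random_state=0)

# fit_encrypted=True selects FHE training; parameters_range is the preset
# weight range referred to in question 2(1).
model = SGDClassifier(
    random_state=42,
    max_iter=15,
    fit_encrypted=True,
    parameters_range=(-1.0, 1.0),
)
model.fit(X, y, fhe="simulate")  # fhe="execute" runs actual FHE training
```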
## Motivation
I would like to contribute to encrypted learning for other models.
\
\
Thanks a lot in advance.
|
open
|
2024-07-09T05:52:09Z
|
2024-07-26T22:13:51Z
|
https://github.com/zama-ai/concrete-ml/issues/789
|
[] |
riemanli
| 5
|
ranaroussi/yfinance
|
pandas
| 2,166
|
yfinance earnings date error
|
### Describe bug
Earnings dates were working 3 weeks ago. I have been trying to debug, but it looks like something has changed.
### Simple code that reproduces your problem
Here's a simple test script:
import yfinance as yf
dat = yf.Ticker("MSFT")
dat.earnings_dates
### Debug log
DEBUG Entering get_earnings_dates()
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\indexes\base.py:3802, in Index.get_loc(self, key, method, tolerance)
3801 try:
-> 3802 return self._engine.get_loc(casted_key)
3803 except KeyError as err:
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\_libs\index.pyx:138, in pandas._libs.index.IndexEngine.get_loc()
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\_libs\index.pyx:165, in pandas._libs.index.IndexEngine.get_loc()
File pandas\_libs\hashtable_class_helper.pxi:5745, in pandas._libs.hashtable.PyObjectHashTable.get_item()
File pandas\_libs\hashtable_class_helper.pxi:5753, in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 'Earnings Date'
The above exception was the direct cause of the following exception:
KeyError Traceback (most recent call last)
Cell In[27], line 4
2 yf.enable_debug_mode()
3 dat = yf.Ticker("MSFT")
----> 4 dat.earnings_dates
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\yfinance\ticker.py:295, in Ticker.earnings_dates(self)
293 @property
294 def earnings_dates(self) -> _pd.DataFrame:
--> 295 return self.get_earnings_dates()
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\yfinance\utils.py:104, in log_indent_decorator.<locals>.wrapper(*args, **kwargs)
101 logger.debug(f'Entering {func.__name__}()')
103 with IndentationContext():
--> 104 result = func(*args, **kwargs)
106 logger.debug(f'Exiting {func.__name__}()')
107 return result
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\yfinance\base.py:629, in TickerBase.get_earnings_dates(self, limit, proxy)
627 cn = "Earnings Date"
628 # - remove AM/PM and timezone from date string
--> 629 tzinfo = dates[cn].str.extract('([AP]M[a-zA-Z]*)$')
630 dates[cn] = dates[cn].replace(' [AP]M[a-zA-Z]*$', '', regex=True)
631 # - split AM/PM from timezone
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\frame.py:3807, in DataFrame.__getitem__(self, key)
3805 if self.columns.nlevels > 1:
3806 return self._getitem_multilevel(key)
-> 3807 indexer = self.columns.get_loc(key)
3808 if is_integer(indexer):
3809 indexer = [indexer]
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\indexes\base.py:3804, in Index.get_loc(self, key, method, tolerance)
3802 return self._engine.get_loc(casted_key)
3803 except KeyError as err:
-> 3804 raise KeyError(key) from err
3805 except TypeError:
3806 # If we have a listlike key, _check_indexing_error will raise
3807 # InvalidIndexError. Otherwise we fall through and re-raise
3808 # the TypeError.
3809 self._check_indexing_error(key)
KeyError: 'Earnings Date'
### Bad data proof
Error = 'Earnings Date'
### `yfinance` version
latest
### Python version
3.1
### Operating system
Windows
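In case it helps others while this is being looked at, a small defensive wrapper (my own sketch, not an official workaround):
```python
import yfinance as yf

try:
    dates = yf.Ticker("MSFT").get_earnings_dates(limit=8)
except KeyError as missing:
    # Yahoo renamed or dropped the column the parser expects.
    print(f"Earnings table layout changed; missing column: {missing}")
    dates = None
```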
|
closed
|
2024-12-04T00:02:55Z
|
2025-01-24T11:07:36Z
|
https://github.com/ranaroussi/yfinance/issues/2166
|
[] |
rhnagpal
| 2
|
ivy-llc/ivy
|
tensorflow
| 28,430
|
fix `ivy.not_equal` to support the float and numeric dtypes
|
closed
|
2024-02-26T20:12:21Z
|
2024-02-29T13:17:36Z
|
https://github.com/ivy-llc/ivy/issues/28430
|
[
"Sub Task"
] |
samthakur587
| 0
|
|
newpanjing/simpleui
|
django
| 268
|
Hoping a menu generator can be added.
|
**What features do you wish to add?**
1. When dynamically loading menus and routes, allow passing a loader function that receives the logged-in user, so that permission-controlled data tables can be customized, instead of only being able to load a dict.
**Leave your contact information so we can get in touch with you**
QQ:531189371
E-mail:chise123@live.com
|
closed
|
2020-06-02T05:53:01Z
|
2020-06-22T07:08:19Z
|
https://github.com/newpanjing/simpleui/issues/268
|
[
"enhancement"
] |
Chise1
| 1
|
jumpserver/jumpserver
|
django
| 14,284
|
[Feature] Token expiration time cannot be customized after launching a database client connection
|
### Product Version
v3.10.13
### Version Type
- [ ] Community Edition
- [ ] Enterprise Edition
- [X] Enterprise Trial Edition
### Installation Method
- [ ] Online installation (one-click command install)
- [X] Offline package installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source installation
### ⭐️ Feature Description
When accessing a database via the client method, the connection expires if it is left idle after being established.
<img width="725" alt="b62db472b7c8cda119221670c418e55" src="https://github.com/user-attachments/assets/3a85b0df-fd17-4797-ad21-19fefc47975e">
<img width="365" alt="a7afd57157ac8beeacd5dad7a0cacb7" src="https://github.com/user-attachments/assets/398dd76c-6ca5-4986-b594-288d6f511c0d">
### Solution
Could the database client method be made a configurable option, like the connection wizard, so users can set their own expiration time?
### Additional Information
_No response_
|
closed
|
2024-10-12T02:41:03Z
|
2024-11-28T03:22:31Z
|
https://github.com/jumpserver/jumpserver/issues/14284
|
[
"⭐️ Feature Request"
] |
zhenzhendong
| 1
|
deezer/spleeter
|
tensorflow
| 922
|
How to export onnx model
|
How can I export spleeter to an ONNX model and call it from C# via onnxruntime?
|
open
|
2024-12-29T06:16:13Z
|
2024-12-29T06:16:13Z
|
https://github.com/deezer/spleeter/issues/922
|
[
"question"
] |
dfengpo
| 0
|
coqui-ai/TTS
|
deep-learning
| 3,427
|
[Bug] when i run print(TTS().list_models()) I get <TTS.utils.manage.ModelManager object at 0x7fa9d4c5a7a0>
|
### Describe the bug
This was found in coqui TTS v0.22.0; I'm not sure why it's happening, but when I run print(TTS().list_models()) I get <TTS.utils.manage.ModelManager object at 0x7fa9d4c5a7a0>
### To Reproduce
When I run print(TTS().list_models()) I get <TTS.utils.manage.ModelManager object at 0x7fa9d4c5a7a0>
### Expected behavior
It should be giving me a list of all of the models but it doesn't
### Logs
_No response_
### Environment
```shell
Seems to occur in Python 3.11 and 3.10, based on what I've tested
```
### Additional context
I found a fix, though.
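For anyone else hitting this, a sketch of a workaround consistent with the v0.22 return type; this is my guess, assuming ModelManager still exposes a list_models() method of its own:
```python
from TTS.api import TTS

# In v0.22.0, TTS().list_models() returns a ModelManager rather than a list,
# so ask the manager itself to enumerate the model names.
manager = TTS().list_models()
print(manager.list_models())
```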
|
closed
|
2023-12-14T01:13:58Z
|
2024-01-28T22:40:23Z
|
https://github.com/coqui-ai/TTS/issues/3427
|
[
"bug",
"wontfix"
] |
DrewThomasson
| 2
|
d2l-ai/d2l-en
|
data-science
| 2,573
|
Incorrect Use of torch.no_grad() in fit_epoch Method in d2l/torch.py::Trainer::fit_epoch
|
Hello,
I noticed a potential issue in the fit_epoch method in https://github.com/d2l-ai/d2l-en/blob/master/d2l/torch.py, where loss.backward() is called within a torch.no_grad() block:
```
self.optim.zero_grad()
with torch.no_grad():
loss.backward()
...
```
This usage likely prevents the calculation of gradients, as loss.backward() should not be inside a torch.no_grad() block. The correct approach would be:
```
self.optim.zero_grad()
loss.backward()
...
```
Here is the original code:
```
def fit_epoch(self):
"""Defined in :numref:`sec_linear_scratch`"""
self.model.train()
for batch in self.train_dataloader:
loss = self.model.training_step(self.prepare_batch(batch))
self.optim.zero_grad()
with torch.no_grad():
loss.backward()
if self.gradient_clip_val > 0: # To be discussed later
self.clip_gradients(self.gradient_clip_val, self.model)
self.optim.step()
self.train_batch_idx += 1
if self.val_dataloader is None:
return
self.model.eval()
for batch in self.val_dataloader:
with torch.no_grad():
self.model.validation_step(self.prepare_batch(batch))
self.val_batch_idx += 1
```
|
open
|
2023-12-19T16:21:48Z
|
2024-10-29T17:35:14Z
|
https://github.com/d2l-ai/d2l-en/issues/2573
|
[] |
caydenwei
| 3
|
uriyyo/fastapi-pagination
|
fastapi
| 857
|
Display total number of rows when using Cursor pagination
|
The regular pagination function returns the total number of rows available. I would like to have that information available while using cursor pagination as well; is that currently possible?
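For comparison, the classic limit/offset setup where `total` is returned, following the library's quickstart; the request is to surface the same count from the cursor-based page type:
```python
from fastapi import FastAPI
from fastapi_pagination import Page, add_pagination, paginate
from pydantic import BaseModel

class User(BaseModel):
    name: str

app = FastAPI()
users = [User(name=f"user{i}") for i in range(100)]

@app.get("/users", response_model=Page[User])
def get_users():
    # The Page envelope carries items, page, size and `total`.
    return paginate(users)

add_pagination(app)
```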
|
closed
|
2023-10-04T22:35:14Z
|
2023-10-30T17:34:20Z
|
https://github.com/uriyyo/fastapi-pagination/issues/857
|
[
"enhancement"
] |
graham-atom
| 2
|
matplotlib/mplfinance
|
matplotlib
| 63
|
Get renko values
|
Hi, how can I get the renko values that are calculated for the plot?
And also the associated moving average values?
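A sketch of the shape of API that would solve this, assuming a kwarg along the lines of return_calculated_values=True; the name and the returned dict keys are my guesses, so check the release notes for the final spelling:
```python
import mplfinance as mpf
import pandas as pd

df = pd.read_csv("ohlc.csv", index_col=0, parse_dates=True)  # OHLC data

# Hand back the values computed for the plot, including the renko bricks
# and any requested moving averages, instead of only drawing them.
values = mpf.plot(df, type="renko", mav=4, return_calculated_values=True)
print(values.keys())
```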
|
closed
|
2020-03-22T21:00:44Z
|
2020-05-06T01:32:51Z
|
https://github.com/matplotlib/mplfinance/issues/63
|
[
"enhancement",
"released"
] |
haybb
| 6
|
ageitgey/face_recognition
|
machine-learning
| 1,550
|
Facial
|
open
|
2024-01-14T22:17:30Z
|
2024-01-14T22:17:30Z
|
https://github.com/ageitgey/face_recognition/issues/1550
|
[] |
Kenforme300
| 0
|
|
mouredev/Hello-Python
|
fastapi
| 33
|
Learning Python
|
closed
|
2024-03-21T22:16:03Z
|
2024-05-11T08:47:21Z
|
https://github.com/mouredev/Hello-Python/issues/33
|
[] |
bryanjm2
| 0
|
|
httpie/cli
|
api
| 1,608
|
Installing the httpie-edgegrid plugin fails
|
## Checklist
- [x] I've searched for similar issues.
- [x] I'm using the latest version of HTTPie.
---
## Minimal reproduction code and steps
1. Execute `httpie cli plugins install httpie-edgegrid`
## Current result
```
Installing httpie-edgegrid...
Collecting httpie-edgegrid
Using cached httpie_edgegrid-2.1.4-py3-none-any.whl.metadata (3.5 kB)
Collecting httpie==3.2.2 (from httpie-edgegrid)
Using cached httpie-3.2.2-py3-none-any.whl.metadata (7.6 kB)
Collecting edgegrid-python==1.3.1 (from httpie-edgegrid)
Using cached edgegrid_python-1.3.1-py3-none-any.whl.metadata (754 bytes)
Collecting pyOpenSSL==24.1.0 (from httpie-edgegrid)
Using cached pyOpenSSL-24.1.0-py3-none-any.whl.metadata (12 kB)
Requirement already satisfied: urllib3<3.0.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from httpie-edgegrid) (2.2.3)
Requirement already satisfied: requests>=2.3.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from edgegrid-python==1.3.1->httpie-edgegrid) (2.32.3)
Requirement already satisfied: requests-toolbelt>=0.9.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from edgegrid-python==1.3.1->httpie-edgegrid) (1.0.0)
Collecting ndg-httpsclient (from edgegrid-python==1.3.1->httpie-edgegrid)
Using cached ndg_httpsclient-0.5.1-py3-none-any.whl.metadata (6.2 kB)
Collecting pyasn1 (from edgegrid-python==1.3.1->httpie-edgegrid)
Using cached pyasn1-0.6.1-py3-none-any.whl.metadata (8.4 kB)
Requirement already satisfied: pip in /opt/homebrew/lib/python3.13/site-packages (from httpie==3.2.2->httpie-edgegrid) (24.2)
Requirement already satisfied: charset-normalizer>=2.0.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from httpie==3.2.2->httpie-edgegrid) (3.4.0)
Requirement already satisfied: defusedxml>=0.6.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from httpie==3.2.2->httpie-edgegrid) (0.7.1)
Requirement already satisfied: Pygments>=2.5.2 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from httpie==3.2.2->httpie-edgegrid) (2.18.0)
Requirement already satisfied: multidict>=4.7.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from httpie==3.2.2->httpie-edgegrid) (6.1.0)
Requirement already satisfied: setuptools in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from httpie==3.2.2->httpie-edgegrid) (75.3.0)
Requirement already satisfied: rich>=9.10.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from httpie==3.2.2->httpie-edgegrid) (13.9.4)
Collecting cryptography<43,>=41.0.5 (from pyOpenSSL==24.1.0->httpie-edgegrid)
Using cached cryptography-42.0.8-cp39-abi3-macosx_10_12_universal2.whl.metadata (5.3 kB)
Collecting cffi>=1.12 (from cryptography<43,>=41.0.5->pyOpenSSL==24.1.0->httpie-edgegrid)
Using cached cffi-1.17.1-cp313-cp313-macosx_11_0_arm64.whl.metadata (1.5 kB)
Requirement already satisfied: idna<4,>=2.5 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from requests>=2.3.0->edgegrid-python==1.3.1->httpie-edgegrid) (3.10)
Requirement already satisfied: certifi>=2017.4.17 in /opt/homebrew/opt/certifi/lib/python3.13/site-packages (from requests>=2.3.0->edgegrid-python==1.3.1->httpie-edgegrid) (2024.8.30)
Requirement already satisfied: PySocks!=1.5.7,>=1.5.6 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from requests[socks]>=2.22.0->httpie==3.2.2->httpie-edgegrid) (1.7.1)
Requirement already satisfied: markdown-it-py>=2.2.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from rich>=9.10.0->httpie==3.2.2->httpie-edgegrid) (3.0.0)
Collecting pycparser (from cffi>=1.12->cryptography<43,>=41.0.5->pyOpenSSL==24.1.0->httpie-edgegrid)
Using cached pycparser-2.22-py3-none-any.whl.metadata (943 bytes)
Requirement already satisfied: mdurl~=0.1 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from markdown-it-py>=2.2.0->rich>=9.10.0->httpie==3.2.2->httpie-edgegrid) (0.1.2)
Using cached httpie_edgegrid-2.1.4-py3-none-any.whl (9.2 kB)
Using cached edgegrid_python-1.3.1-py3-none-any.whl (17 kB)
Using cached httpie-3.2.2-py3-none-any.whl (127 kB)
Using cached pyOpenSSL-24.1.0-py3-none-any.whl (56 kB)
Using cached cryptography-42.0.8-cp39-abi3-macosx_10_12_universal2.whl (5.9 MB)
Using cached ndg_httpsclient-0.5.1-py3-none-any.whl (34 kB)
Using cached pyasn1-0.6.1-py3-none-any.whl (83 kB)
Using cached cffi-1.17.1-cp313-cp313-macosx_11_0_arm64.whl (178 kB)
Using cached pycparser-2.22-py3-none-any.whl (117 kB)
Installing collected packages: pycparser, pyasn1, cffi, httpie, cryptography, pyOpenSSL, ndg-httpsclient, edgegrid-python, httpie-edgegrid
Attempting uninstall: httpie
Found existing installation: httpie 3.2.4
Can't install 'httpie-edgegrid'
```
## Expected result
Successful installation
---
## Debug output
Please re-run the command with `--debug`, then copy the entire command & output and paste both below:
Command: `httpie cli plugins install httpie-edgegrid --debug`
Output:
```bash
HTTPie 3.2.4
Requests 2.32.3
Pygments 2.18.0
Python 3.13.0 (main, Oct 7 2024, 05:02:14) [Clang 15.0.0 (clang-1500.3.9.4)]
/opt/homebrew/Cellar/httpie/3.2.4/libexec/bin/python
Darwin 23.5.0
<Environment {'apply_warnings_filter': <function Environment.apply_warnings_filter at 0x1019dcc20>,
'args': Namespace(),
'as_silent': <function Environment.as_silent at 0x1019dcae0>,
'colors': 256,
'config': {'default_options': []},
'config_dir': PosixPath('/Users/glen.thomas/.config/httpie'),
'devnull': <property object at 0x1019cd260>,
'is_windows': False,
'log_error': <function Environment.log_error at 0x1019dcb80>,
'program_name': 'httpie',
'quiet': 0,
'rich_console': <functools.cached_property object at 0x10196d350>,
'rich_error_console': <functools.cached_property object at 0x1019d03b0>,
'show_displays': True,
'stderr': <_io.TextIOWrapper name='<stderr>' mode='w' encoding='utf-8'>,
'stderr_isatty': True,
'stdin': <_io.TextIOWrapper name='<stdin>' mode='r' encoding='utf-8'>,
'stdin_encoding': 'utf-8',
'stdin_isatty': True,
'stdout': <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>,
'stdout_encoding': 'utf-8',
'stdout_isatty': True}>
<PluginManager {'adapters': [],
'auth': [<class 'httpie.plugins.builtin.BasicAuthPlugin'>,
<class 'httpie.plugins.builtin.DigestAuthPlugin'>,
<class 'httpie.plugins.builtin.BearerAuthPlugin'>],
'converters': [],
'formatters': [<class 'httpie.output.formatters.headers.HeadersFormatter'>,
<class 'httpie.output.formatters.json.JSONFormatter'>,
<class 'httpie.output.formatters.xml.XMLFormatter'>,
<class 'httpie.output.formatters.colors.ColorFormatter'>]}>
Installing httpie-edgegrid...
Collecting httpie-edgegrid
Using cached httpie_edgegrid-2.1.4-py3-none-any.whl.metadata (3.5 kB)
Collecting httpie==3.2.2 (from httpie-edgegrid)
Using cached httpie-3.2.2-py3-none-any.whl.metadata (7.6 kB)
Collecting edgegrid-python==1.3.1 (from httpie-edgegrid)
Using cached edgegrid_python-1.3.1-py3-none-any.whl.metadata (754 bytes)
Collecting pyOpenSSL==24.1.0 (from httpie-edgegrid)
Using cached pyOpenSSL-24.1.0-py3-none-any.whl.metadata (12 kB)
Requirement already satisfied: urllib3<3.0.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from httpie-edgegrid) (2.2.3)
Requirement already satisfied: requests>=2.3.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from edgegrid-python==1.3.1->httpie-edgegrid) (2.32.3)
Requirement already satisfied: requests-toolbelt>=0.9.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from edgegrid-python==1.3.1->httpie-edgegrid) (1.0.0)
Collecting ndg-httpsclient (from edgegrid-python==1.3.1->httpie-edgegrid)
Using cached ndg_httpsclient-0.5.1-py3-none-any.whl.metadata (6.2 kB)
Collecting pyasn1 (from edgegrid-python==1.3.1->httpie-edgegrid)
Using cached pyasn1-0.6.1-py3-none-any.whl.metadata (8.4 kB)
Requirement already satisfied: pip in /opt/homebrew/lib/python3.13/site-packages (from httpie==3.2.2->httpie-edgegrid) (24.2)
Requirement already satisfied: charset-normalizer>=2.0.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from httpie==3.2.2->httpie-edgegrid) (3.4.0)
Requirement already satisfied: defusedxml>=0.6.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from httpie==3.2.2->httpie-edgegrid) (0.7.1)
Requirement already satisfied: Pygments>=2.5.2 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from httpie==3.2.2->httpie-edgegrid) (2.18.0)
Requirement already satisfied: multidict>=4.7.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from httpie==3.2.2->httpie-edgegrid) (6.1.0)
Requirement already satisfied: setuptools in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from httpie==3.2.2->httpie-edgegrid) (75.3.0)
Requirement already satisfied: rich>=9.10.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from httpie==3.2.2->httpie-edgegrid) (13.9.4)
Collecting cryptography<43,>=41.0.5 (from pyOpenSSL==24.1.0->httpie-edgegrid)
Using cached cryptography-42.0.8-cp39-abi3-macosx_10_12_universal2.whl.metadata (5.3 kB)
Collecting cffi>=1.12 (from cryptography<43,>=41.0.5->pyOpenSSL==24.1.0->httpie-edgegrid)
Using cached cffi-1.17.1-cp313-cp313-macosx_11_0_arm64.whl.metadata (1.5 kB)
Requirement already satisfied: idna<4,>=2.5 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from requests>=2.3.0->edgegrid-python==1.3.1->httpie-edgegrid) (3.10)
Requirement already satisfied: certifi>=2017.4.17 in /opt/homebrew/opt/certifi/lib/python3.13/site-packages (from requests>=2.3.0->edgegrid-python==1.3.1->httpie-edgegrid) (2024.8.30)
Requirement already satisfied: PySocks!=1.5.7,>=1.5.6 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from requests[socks]>=2.22.0->httpie==3.2.2->httpie-edgegrid) (1.7.1)
Requirement already satisfied: markdown-it-py>=2.2.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from rich>=9.10.0->httpie==3.2.2->httpie-edgegrid) (3.0.0)
Collecting pycparser (from cffi>=1.12->cryptography<43,>=41.0.5->pyOpenSSL==24.1.0->httpie-edgegrid)
Using cached pycparser-2.22-py3-none-any.whl.metadata (943 bytes)
Requirement already satisfied: mdurl~=0.1 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from markdown-it-py>=2.2.0->rich>=9.10.0->httpie==3.2.2->httpie-edgegrid) (0.1.2)
Using cached httpie_edgegrid-2.1.4-py3-none-any.whl (9.2 kB)
Using cached edgegrid_python-1.3.1-py3-none-any.whl (17 kB)
Using cached httpie-3.2.2-py3-none-any.whl (127 kB)
Using cached pyOpenSSL-24.1.0-py3-none-any.whl (56 kB)
Using cached cryptography-42.0.8-cp39-abi3-macosx_10_12_universal2.whl (5.9 MB)
Using cached ndg_httpsclient-0.5.1-py3-none-any.whl (34 kB)
Using cached pyasn1-0.6.1-py3-none-any.whl (83 kB)
Using cached cffi-1.17.1-cp313-cp313-macosx_11_0_arm64.whl (178 kB)
Using cached pycparser-2.22-py3-none-any.whl (117 kB)
Installing collected packages: pycparser, pyasn1, cffi, httpie, cryptography, pyOpenSSL, ndg-httpsclient, edgegrid-python, httpie-edgegrid
Attempting uninstall: httpie
Found existing installation: httpie 3.2.4
Can't install 'httpie-edgegrid'
```
## Additional information, screenshots, or code examples
…
|
closed
|
2024-11-04T17:36:34Z
|
2024-11-04T20:42:15Z
|
https://github.com/httpie/cli/issues/1608
|
[
"bug",
"new"
] |
glenthomas
| 2
|
serengil/deepface
|
deep-learning
| 666
|
Problem with Deepface.stream
|
After I run this line:
DeepFace.stream(db_path = "./faces", enable_face_analysis=False)
It's showing AttributeError: module 'deepface.commons.functions' has no attribute 'preprocess_face'.
How do I solve this issue?
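For anyone hitting this after me: the helper was removed from deepface.commons.functions in newer releases. A sketch of the replacement entry point, assuming a recent deepface version where extract_faces is the supported API:
```python
from deepface import DeepFace

# preprocess_face is gone; detection and alignment are exposed through
# DeepFace.extract_faces in recent releases.
faces = DeepFace.extract_faces(img_path="img.jpg", detector_backend="opencv")
```
Upgrading deepface (or pinning the version your tutorial was written against) should resolve the mismatch.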
|
closed
|
2023-02-09T05:22:54Z
|
2023-02-09T13:21:15Z
|
https://github.com/serengil/deepface/issues/666
|
[
"duplicate"
] |
LaThanhTrong
| 1
|
ageitgey/face_recognition
|
python
| 767
|
problems with using the library with celery
|
Hi, I have a Flask app with the following packages:
```
Python 3.7.2
Flask 1.0.2
Celery 4.3.0rc2
Face-recognition 1.2.3
```
And when I'm running this:
`face_recognition.face_locations(image, model='cnn')`
my worker fails with this output:
```
[2019-03-06 19:51:15,454: ERROR/MainProcess] Process 'ForkPoolWorker-8' pid:93763 exited with 'signal 11 (SIGSEGV)'
[2019-03-06 19:51:15,472: ERROR/MainProcess] Task handler raised error: WorkerLostError('Worker exited prematurely: signal 11 (SIGSEGV).')
```
I know that celery workers are not fork-safe, but I still hope there's a solution for my case.
Any help appreciated.
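One workaround worth trying (my suggestion, not verified on this exact stack): dlib/CUDA state does not survive fork(), so run the worker with a non-forking pool and import the library inside the task. A minimal sketch, with the broker URL as a placeholder:
```python
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")  # placeholder broker
app.conf.worker_pool = "solo"  # same effect as `celery worker --pool solo`

@app.task
def locate_faces(path):
    # Import inside the task so dlib initializes in the worker process itself.
    import face_recognition
    image = face_recognition.load_image_file(path)
    return face_recognition.face_locations(image, model="cnn")
```
The threads pool is another option if you need concurrency without forking.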
|
open
|
2019-03-06T17:13:36Z
|
2019-10-28T09:41:39Z
|
https://github.com/ageitgey/face_recognition/issues/767
|
[] |
chickenfresh
| 2
|
jmcnamara/XlsxWriter
|
pandas
| 152
|
list format for sheet range entry for error bars not supported
|
Hello,
Usually, Excel ranges can be specified in XlsxWriter by giving a list [sheetname, startRow, startCol, endRow, endCol]. However, when trying to enter a range in that format for the 'minus_values' of 'y_error_bars', it appears to be incompatible: the output Excel file has weird error bars that do not correspond to the correct values.
If I change the code so that I specify the sheet name using the regular excel format of '=SheetName!', then everything works perfectly. I believe I am using the list method of entering ranges correctly because in the lines specifying the category names and values it works fine.
Below is my code; there are two alternate versions for setting the 'minus_values'. The first is currently commented out, but I believe it should be supported. The second works correctly.
Awesome Python package, hugely helpful.
Thanks,
Eli
```
workbook = xlsxwriter.Workbook('testBook.xlsx')
spikeInBoxWhisker = workbook.add_chart({'type': 'column', 'subtype': 'stacked'})
currentChart = spikeInBoxWhisker
trim5PrimeWorksheetName = 'Trim5Prime_Summary'
trim5PrimeWorksheet = workbook.add_worksheet(trim5PrimeWorksheetName)
thisWorksheet = trim5PrimeWorksheet
thisWorksheet.write_column('A1', ['a', 'b', 'c', 'd'])
thisWorksheet.write_column('B1', [1, 2, 1.5, 3])
thisWorksheet.write_column('C1', [.3, .35, .25, .15])
currentChart.add_series({
'name': 'First Quartile',
'categories': [trim5PrimeWorksheetName, 0, 0, 3, 0],
'values': [trim5PrimeWorksheetName, 0, 1, 3, 1],
'fill': {'none': True},
'border': {'none': True},
'y_error_bars': {
'type': 'custom',
'direction': 'minus',
#'minus_values': [trim5PrimeWorksheetName, 0, 2, 3, 2]
'minus_values': '=Trim5Prime_Summary!' + xl_range(0, 2, 3, 2)
}
})
thisWorksheet.insert_chart(0, 6, spikeInBoxWhisker)
workbook.close()
```
|
closed
|
2014-08-27T17:39:08Z
|
2014-11-01T17:11:00Z
|
https://github.com/jmcnamara/XlsxWriter/issues/152
|
[
"question",
"ready to close"
] |
ejfine
| 4
|
serengil/deepface
|
machine-learning
| 1,012
|
Can't run deepface api on ubuntu
|
I'm trying to deploy this on an AWS Ubuntu instance. I cloned the repository and installed deepface (via pip install deepface) and flask, but it can't run api.py.
I tried this locally on macOS M1 and it works fine.
Error:
Could not find cuda drivers on your machine, GPU will not be used.
File "/home/ubuntu/deepface/deepface/api/src/api.py", line 2, in <module>
import app
File "/home/ubuntu/deepface/deepface/api/src/app.py", line 3, in <module>
from deepface.api.src.modules.core.routes import blueprint
ImportError: libGL.so.1: cannot open shared object file: No such file or directory
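The missing library comes from OpenCV, which deepface pulls in; a sketch of the usual fix on a headless Ubuntu box (my suggestion, not from this thread; `libgl1` is the package name on recent Ubuntu):
```python
import subprocess

# cv2 links against libGL.so.1, which headless Ubuntu images do not ship.
subprocess.run(["sudo", "apt-get", "install", "-y", "libgl1"], check=True)
# Alternative: `pip install opencv-python-headless` drops the GL dependency.
```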
|
closed
|
2024-02-07T11:53:34Z
|
2024-02-07T12:36:16Z
|
https://github.com/serengil/deepface/issues/1012
|
[
"dependencies"
] |
sana2024
| 8
|
roboflow/supervision
|
deep-learning
| 1,073
|
The line is not being displayed
|
### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
that's my code:
```python
import cv2
from ultralytics import YOLO
import supervision as sv
import sys
def main(VIDEO_SOURCE):
# Points to draw trigger lines
# Because at the current there is no implementation from supervision
# to switch counter for IN/OUT result, and the LineZoneAnnotator results
# depends on results from LineZone then we have to swap START
# and END points to swap IN/OUT result
# Road from EAST
END_E = sv.Point(1750, 800)
START_E = sv.Point(1280, 520)
# Road from WEST
START_W = sv.Point(500, 1000)
END_W = sv.Point(200, 580)
# Road from NORTH
END_N = sv.Point(1280, 520)
START_N = sv.Point(200, 580)
# Road from SOUTH
START_S = sv.Point(1750, 800)
END_S = sv.Point(500, 1000)
# car, motocycle, bus, truck
CLASS_ID = [0,2, 3, 5, 7]
# Load Model
model = YOLO("yolov8x.pt")
# Get video size to export to video file
video_info = sv.VideoInfo.from_video_path(VIDEO_SOURCE)
video_size = (video_info.width, video_info.height)
out = cv2.VideoWriter("output.avi", cv2.VideoWriter_fourcc(*'MJPG'), 20, (video_size))
# Init trigger lines
line_zone_E = sv.LineZone(start=START_E, end=END_E)
line_zone_W = sv.LineZone(start=START_W, end=END_W)
line_zone_N = sv.LineZone(start=START_N, end=END_N)
line_zone_S = sv.LineZone(start=START_S, end=END_S)
# Init annotator to draw trigger lines
line_zone_annotator = sv.LineZoneAnnotator(thickness=5, color=sv.Color.from_hex(color_hex="#00ff00"), text_thickness=2, text_scale=2)
# Init box to draw objects
box_annotator = sv.BoxAnnotator()
for result in model.track(VIDEO_SOURCE, show=True, stream=True, classes=CLASS_ID):
# Get frame
frame = result.orig_img
detections = sv.Detections.from_ultralytics(result)
# Set box ID so that we can count object when it cross trigger lines
if result.boxes.id is not None:
detections.tracker_id = result.boxes.id.cpu().numpy().astype(int)
# Draw box around objects
frame = box_annotator.annotate(scene=frame, detections=detections)
# Set triggers
line_zone_E.trigger(detections=detections)
line_zone_W.trigger(detections=detections)
line_zone_N.trigger(detections=detections)
line_zone_S.trigger(detections=detections)
# Draw trigger lines
line_zone_annotator.annotate(frame=frame, line_counter=line_zone_E)
line_zone_annotator.annotate(frame=frame, line_counter=line_zone_W)
line_zone_annotator.annotate(frame=frame, line_counter=line_zone_N)
line_zone_annotator.annotate(frame=frame, line_counter=line_zone_S)
# Write to file
out.write(frame)
# Show results as processing
cv2.imshow("result", frame)
if(cv2.waitKey(30) == 27):
break
if len(sys.argv) == 2:
    main(sys.argv[1])
```
I'm using supervision==0.17.0 and ultralytics==8.1.34. I'm using this "old code" because I need the line-counter solution without depending on a center ID or something like that; I need the counter triggered by the bounding box.
The counting lines are not being displayed in the frame...
### Additional
_No response_
|
closed
|
2024-03-29T07:38:19Z
|
2024-03-29T12:17:42Z
|
https://github.com/roboflow/supervision/issues/1073
|
[
"question"
] |
Rasantis
| 1
|
NullArray/AutoSploit
|
automation
| 864
|
Divided by zero exception134
|
Error: Attempted to divide by zero.134
|
closed
|
2019-04-19T16:01:40Z
|
2019-04-19T16:37:27Z
|
https://github.com/NullArray/AutoSploit/issues/864
|
[] |
AutosploitReporter
| 0
|
widgetti/solara
|
jupyter
| 704
|
Documentation request: elaborate on proper use of `Reactive.subscribe()`
|
Use case: I want to add a listener to a reactive variable's changes, but unlike `use_effect`, I do _not_ want the listener to fire on the initial value -- only on changes.
I can do this:
```python
_unused = solara.use_reactive(variable, on_change=listener)
```
but I was wondering if I can simply leverage `subscribe()` directly. The mechanics around `forward_on_change` and `update` are confusing to me, so it would be helpful to understand how to leverage subscription hooks.
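From reading the source, I think direct subscription looks roughly like this; that subscribe() fires only on changes and returns a cleanup callable is my reading, not a documented guarantee:
```python
import solara

count = solara.reactive(0)

def on_change(new_value):
    print("count changed to", new_value)

# Registers a change listener; it does not fire for the current value.
unsubscribe = count.subscribe(on_change)
count.value += 1   # prints: count changed to 1
unsubscribe()
count.value += 1   # no output
```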
|
closed
|
2024-07-03T18:21:20Z
|
2024-07-13T22:25:50Z
|
https://github.com/widgetti/solara/issues/704
|
[] |
ntjess
| 4
|
pytorch/pytorch
|
python
| 149,550
|
Remove pre-cxx11 from the documentation and tutorials
|
### 🐛 Describe the bug
Please see: https://github.com/pytorch/pytorch/issues/123649
and https://dev-discuss.pytorch.org/t/pytorch-linux-wheels-switching-to-new-wheel-build-platform-manylinux-2-28-on-november-12-2024/2581/2
Pytorch is using D_GLIBCXX_USE_CXX11_ABI=1 and Manylinux 2.28
Hence we should remove the usage of PRE_CXX11_ABI from the documents
Example: https://pytorch.org/cppdocs/installing.html#system-requirements
### Versions
2.7.0
cc @svekars @sekyondaMeta @AlannaBurke
|
open
|
2025-03-19T20:14:31Z
|
2025-03-19T20:15:07Z
|
https://github.com/pytorch/pytorch/issues/149550
|
[
"module: docs",
"triaged",
"topic: docs"
] |
atalman
| 0
|
tqdm/tqdm
|
pandas
| 683
|
tqdm.write not working as expected
|
Both on Debian and Ubuntu, tqdm.write() is not working as expected.
Messages are written, but the progress bar behaves as if I were using print. This used to work correctly with a previous version of tqdm (4.19.4), but after installing a new server and using version 4.31.1 it now behaves like this:
-- sir.uylp -> no data for requested time window
-- sir.uyni -> no data for requested time window
-- sir.uypt -> no data for requested time window
-- sir.uyri -> no data for requested time window
99%|██████████████████████████████████████▊| 1120/1127 [01:16<00:00, 20.45it/s] -- sir.uyrv -> no data for requested time window
-- sir.uysj -> no data for requested time window
-- sir.uyta -> no data for requested time window
-- sir.valp -> no data for requested time window
100%|██████████████████████████████████████▉| 1124/1127 [01:16<00:00, 22.86it/s] -- sir.varg -> no data for requested time window
-- sir.vesl -> adding...
-- sir.vico -> no data for requested time window
100%|███████████████████████████████████████| 1127/1127 [01:18<00:00, 14.41it/s]
Note that the text starts at the end of the progress bar. I've searched the issues page but could not find anything similar. Not sure if it is relevant, but I'm running the program over ssh. Again, this worked fine with the previous version I had. The code is very simple:
```python
for Stn in tqdm(sorted(stations), ncols=80):
NetworkCode = Stn['NetworkCode']
StationCode = Stn['StationCode']
rs = cnn.query(
'SELECT * FROM rinex_proc WHERE "NetworkCode" = \'%s\' AND "StationCode" = \'%s\' AND '
'"ObservationSTime" >= \'%s\' AND "ObservationETime" <= \'%s\''
% (NetworkCode, StationCode, (dates[0] - 1).first_epoch(), (dates[1] + 1).last_epoch()))
if rs.ntuples() > 0:
tqdm.write(' -- %s.%s -> adding...' % (NetworkCode, StationCode))
try:
stn_obj.append(Station(cnn, NetworkCode, StationCode, dates))
except pyETMException:
tqdm.write(' %s.%s -> station exists, but there was a problem initializing ETM.'
% (NetworkCode, StationCode))
else:
tqdm.write(' -- %s.%s -> no data for requested time window' % (NetworkCode, StationCode))
```
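One thing that may be worth testing (a guess on my side, not a confirmed fix): give the bar and tqdm.write the same output stream, so the bar is cleared before each message is printed. A self-contained sketch:
```python
import sys
import time
from tqdm import tqdm

# Both the bar and the messages target stdout, so tqdm can move the cursor
# above the bar before writing each message.
for i in tqdm(range(20), ncols=80, file=sys.stdout):
    time.sleep(0.05)
    if i % 5 == 0:
        tqdm.write(f" -- station {i} -> no data", file=sys.stdout)
```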
|
open
|
2019-02-27T19:07:41Z
|
2024-06-19T17:15:57Z
|
https://github.com/tqdm/tqdm/issues/683
|
[
"question/docs ‽",
"to-fix ⌛",
"p2-bug-warning ⚠"
] |
demiangomez
| 3
|
holoviz/panel
|
matplotlib
| 6,996
|
ValueError when using inclusive_bounds
|
I'm working on adding support for Pydantic dataclasses to Panel, and I stumbled upon this bug.
```python
import param
import panel as pn
pn.extension()
class SomeModel(param.Parameterized):
int_field = param.Integer(default=1, bounds=(0,10), inclusive_bounds=(False, False))
float_field = param.Integer(default=1, bounds=(0, 10), inclusive_bounds=(False, False))
model = SomeModel()
pn.Param(model).servable()
```
If you drag either of the sliders to the end you will get one of
```bash
ValueError: Integer parameter 'SomeModel.float_field' must be less than 10, not 10.
ValueError: Integer parameter 'SomeModel.float_field' must be less than 10, not 10.
```
|
open
|
2024-07-17T05:18:23Z
|
2025-01-20T19:18:52Z
|
https://github.com/holoviz/panel/issues/6996
|
[] |
MarcSkovMadsen
| 1
|
deepspeedai/DeepSpeed
|
deep-learning
| 6,598
|
ZeRO Stage 2 with Offload: Unexpectedly large optimizer state files
|
I was running Stage 2 training of Llama 3.1 8B model on one 80GB GPU.
**Setup Info**
* OS: Ubuntu 22.04
* DeepSpeed: 0.15.1 (installed with `DS_BUILD_CPU_ADAM=1`)
* PyTorch: 2.4.0
* Transformers: 4.44.2
* CUDA: 12.1
* Model: Llama 3.1 8B Instruct
* DeepSpeed config: ZeRO Stage 2 with optimizer offloaded to CPU, bf16
* GPU: Single 80GB H100 GPU
The `global_step[xxx]` directory from checkpoints contains very large optimizer state files (> 105 GB):
```
90G bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt
15G mp_rank_00_model_states.pt
```
This is significantly larger than the actual model parameters, which take up about 15GB:
```
4.7G pytorch_model-00001-of-00004.bin
4.7G pytorch_model-00002-of-00004.bin
4.6G pytorch_model-00003-of-00004.bin
1.1G pytorch_model-00004-of-00004.bin
```
I'm surprised by the large size of the optimizer state files, even with Stage 2 and offload. Is this expected behavior, or is there a way to reduce the storage requirements?
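For what it's worth, a back-of-the-envelope check (my own arithmetic, not from the DeepSpeed docs): with CPU offload, the optimizer keeps fp32 master weights plus Adam's two moment buffers, which for 8B parameters lands almost exactly on the observed file size:
```python
params = 8.03e9              # Llama 3.1 8B
bytes_per_param = 4 + 4 + 4  # fp32 master weights + exp_avg + exp_avg_sq
print(f"{params * bytes_per_param / 2**30:.0f} GiB")  # -> 90 GiB
```
If that holds, the checkpoint size reflects full fp32 optimizer state rather than a leak.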
---
**DeepSpeed configuration**
```python
deepspeed_config = {
"fp16": {
"enabled": False,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": True,
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": 0.01,
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": True,
},
"allgather_partitions": True,
"allgather_bucket_size": 5e8,
"overlap_comm": True,
"reduce_scatter": True,
"reduce_bucket_size": 5e8,
"contiguous_gradients": True
},
"steps_per_print": 100,
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"activation_checkpointing": {
"partition_activations": True,
"contiguous_memory_optimization": True
},
"wall_clock_breakdown": False
}
```
**TrainingArguments**
```python
train_args=transformers.TrainingArguments(
deepspeed=deepspeed_config,
per_device_train_batch_size=micro_batch_size,
per_device_eval_batch_size=micro_batch_size,
num_train_epochs=_num_epochs,
learning_rate=learning_rate,
fp16=deepspeed_config['fp16']['enabled'],
bf16=deepspeed_config['bf16']['enabled'],
save_safetensors=False,
logging_steps=50,
logging_dir=os.path.join(stage_output_dir, 'logs'),
# optim="adamw_torch",
weight_decay=deepspeed_config['optimizer']['params']['weight_decay'],
evaluation_strategy=eval_strategy if val_set_size > 0 else "no",
save_strategy=save_strategy,
eval_steps=eval_steps if val_set_size > 0 else None,
save_steps=save_steps,
output_dir=stage_output_dir,
save_total_limit=1,
load_best_model_at_end=True if val_set_size > 0 else False,
group_by_length=group_by_length,
# save_only_model=True,
)
```
|
closed
|
2024-10-03T04:40:15Z
|
2024-10-16T09:09:39Z
|
https://github.com/deepspeedai/DeepSpeed/issues/6598
|
[] |
joycey97
| 4
|
LibreTranslate/LibreTranslate
|
api
| 602
|
Docker Swarm "OSError: [Errno 97] Address family not supported by protocol"
|
Hello
I'm trying to run the libretranslate docker image in a Docker Swarm environment.
The Linux OS (Ubuntu 20.04.02 LTS) has **IPv6 disabled**.
The docker version is :
```
Client: Docker Engine - Community
Version: 23.0.1
API version: 1.42
Go version: go1.19.5
Git commit: a5ee5b1
Built: Thu Feb 9 19:46:56 2023
OS/Arch: linux/amd64
Context: default
Server: Docker Engine - Community
Engine:
Version: 23.0.1
API version: 1.42 (minimum version 1.12)
Go version: go1.19.5
Git commit: bc3805a
Built: Thu Feb 9 19:46:56 2023
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.6.16
GitCommit: 31aa4358a36870b21a992d3ad2bef29e1d693bec
runc:
Version: 1.1.4
GitCommit: v1.1.4-0-g5fd4c4d
docker-init:
Version: 0.19.0
GitCommit: de40ad0
```
When running the LibreTranslate image from Docker Hub we get this error:
```
Updating language models
Found 88 models
Keep 2 models
Loaded support for 2 languages (2 models total)!
Running on http://*:5000
Traceback (most recent call last):
File "/app/./venv/bin/libretranslate", line 8, in <module>
sys.exit(main())
File "/app/venv/lib/python3.10/site-packages/libretranslate/main.py", line 230, in main
serve(
File "/app/venv/lib/python3.10/site-packages/waitress/__init__.py", line 13, in serve
server = _server(app, **kw)
File "/app/venv/lib/python3.10/site-packages/waitress/server.py", line 78, in create_server
last_serv = TcpWSGIServer(
File "/app/venv/lib/python3.10/site-packages/waitress/server.py", line 237, in __init__
self.create_socket(self.family, self.socktype)
File "/app/venv/lib/python3.10/site-packages/waitress/wasyncore.py", line 352, in create_socket
sock = socket.socket(family, type)
File "/usr/local/lib/python3.10/socket.py", line 232, in __init__
_socket.socket.__init__(self, family, type, proto, fileno)
OSError: [Errno 97] Address family not supported by protocol
```
We tried different configurations in the docker compose file, with the same result in every case:
- Tried -LT_HOST=127.0.0.1, expecting to force an IPv4-only bind of the container
- Also tried (with or without the above setting) ports: "127.0.0.1:5000:5000"
(docker swarm reports that 127.0.0.1 is ignored in the ports value with this parameter)
```
version: '3.7'
services:
libretranslate-nginx:
image: libretranslate/libretranslate:latest
# build:
# context: .
# dockerfile: docker/Dockerfile
ports:
- "5000:5000"
## Uncomment this for logging in docker compose logs
# tty: true
healthcheck:
test: ['CMD-SHELL', './venv/bin/python scripts/healthcheck.py']
## Uncomment above command and define your args if necessary
# command: --ssl --ga-id MY-GA-ID --req-limit 100 --char-limit 500
## Uncomment this section and the libretranslate_api_keys volume if you want to backup your API keys
environment:
# - LT_API_KEYS=true
# - LT_API_KEYS_DB_PATH=/app/db/api_keys.db # Same result as `db/api_keys.db` or `./db/api_keys.db`
## Uncomment these vars and libretranslate_models volume to optimize loading time.
- LT_UPDATE_MODELS=true
- LT_LOAD_ONLY=en,fr
volumes:
# - libretranslate_api_keys:/app/db
# Keep the models in a docker volume, to avoid re-downloading on startup
- libretranslate_models:/home/libretranslate/.local:rw
networks:
- traefik-public
- libretranslate-network
# depends_on:
# - mastack-phpfpm
deploy:
placement:
constraints:
- node.labels.production.worker == true
labels:
(here comes only traefik config, removed)
volumes:
# libretranslate_api_keys:
libretranslate_models:
networks:
libretranslate-network:
driver: overlay
attachable: true
traefik-public:
external: true
```
We also tried to run the container as simply as possible outside the swarm context (no docker stack deploy ...), with the same result:
docker run -ti --rm -p 5000:5000 libretranslate/libretranslate --load-only en,fr
It seems the container tries to bind to IPv6 even though IPv6 is not available on the system.
We have other docker stacks and containers (elasticsearch, nginx, php-fpm, ...) running perfectly on this system in docker swarm mode, with no specific configuration related to IPv6.
Can you help? Is this a bug in container initialisation when IPv6 is disabled?
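A sketch of the bind behaviour I suspect (my reading of waitress, not confirmed): with a wildcard host, waitress creates a socket per resolved address family, including AF_INET6, so binding explicitly to an IPv4 address should sidestep the failure:
```python
from waitress import serve

def app(environ, start_response):
    # Trivial WSGI app, only here to make the example self-contained.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

# An explicit IPv4 host avoids the AF_INET6 socket that fails when the
# kernel has IPv6 disabled.
serve(app, host="0.0.0.0", port=5000)
```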
Francis
Nalta Systems
|
open
|
2024-03-20T13:18:21Z
|
2025-01-07T20:31:09Z
|
https://github.com/LibreTranslate/LibreTranslate/issues/602
|
[
"enhancement"
] |
Francis-Nalta
| 9
|