| repo_name (string, len 9-75) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, len 1-976) | body (string, len 0-254k) | state (string, 2 classes) | created_at (string, len 20) | updated_at (string, len 20) | url (string, len 38-105) | labels (list, len 0-9) | user_login (string, len 1-39) | comments_count (int64, 0-452) |
|---|---|---|---|---|---|---|---|---|---|---|---|
Nemo2011/bilibili-api
|
api
| 214
|
comment: fetching comments
|
The parameters I am passing in are:
Video: AV number: av{170001}.
Article: cv{9762979}.
Dynamic (album type): {116859542}.
Dynamic (plain text): {497080393649439253}.
How do I pass in a BV number?
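For reference, a hedged sketch: the bilibili-api package ships `aid2bvid`/`bvid2aid` helpers, so a BV id can be converted to the numeric av/oid before being passed on (the BV string below is only a placeholder):
```python
from bilibili_api import bvid2aid

oid = bvid2aid("BV17x411w7KC")  # placeholder BV id -> numeric aid/oid
print(oid)
```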
|
closed
|
2023-02-24T09:56:08Z
|
2023-03-01T09:37:01Z
|
https://github.com/Nemo2011/bilibili-api/issues/214
|
[
"question"
] |
poppy3333
| 1
|
Kludex/mangum
|
fastapi
| 260
|
Multiple event loops in custom handlers
|
### Discussed in https://github.com/jordaneremieff/mangum/discussions/256
<sup>Originally posted by **jrobbins-LiveData** March 20, 2022</sup>
The Mangum [doc](https://mangum.io/adapter/) shows this example of how one might handle a custom event:
```python
def handler(event, context):
if event.get("some-key"):
# Do something or return, etc.
return
asgi_handler = Mangum(app)
response = asgi_handler(event, context) # Call the instance with the event arguments
return response
```
I need to handle an incoming AWS EventBridge event. I want to invoke the same method that my HTTP API handler is also invoking -- and it is an async method.
Here's what I am using
```python
def handler(event: LambdaEvent, context: LambdaContext) -> dict:
if "requestContext" in event:
logger.debug("HTTP API")
response = fastapi_handler(event, context)
else:
logger.debug("Not HTTP API")
loop = asyncio.get_event_loop()
response = loop.run_until_complete(app_logic())
return response
```
My question(s): Is this the correct pattern to use? In the Mangum [source](https://github.com/jordaneremieff/mangum/blob/7ee6846f2c90815dd7d1e12607e8226bbb0058d9/mangum/protocols/http.py#L52), I see code like this, so I am trying to fit in. My code seems to work, but I read in the Python [docs](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.get_event_loop) that `get_event_loop()` is
> Deprecated since version 3.10: Deprecation warning is emitted if there is no running event loop. In future Python releases, this function will be an alias of [get_running_loop()](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.get_running_loop).
This made me wonder whether I am doing this correctly.
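A minimal sketch (an assumption, not Mangum's documented guidance) of a deprecation-safe variant: `asyncio.run()` creates and closes a fresh event loop per call, which avoids the `get_event_loop()` deprecation warning on Python 3.10+. Here `fastapi_handler` is the Mangum instance from the snippet above, and `app_logic` stands in for the async method.
```python
import asyncio

async def app_logic():                      # stand-in for the user's coroutine
    return {"statusCode": 200}

def handler(event, context):
    if "requestContext" in event:
        return fastapi_handler(event, context)  # Mangum instance, as above
    # One fresh loop per invocation; sidesteps the deprecated get_event_loop().
    return asyncio.run(app_logic())
```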
|
open
|
2022-04-26T04:36:37Z
|
2022-04-26T18:25:11Z
|
https://github.com/Kludex/mangum/issues/260
|
[
"help wanted",
"improvement"
] |
jordaneremieff
| 1
|
ClimbsRocks/auto_ml
|
scikit-learn
| 263
|
New ML Idea: learning rate schedules for gradient boosting trees
|
learning rate schedules work really well for neural nets. might they work well for other types of sequentially-updated problems as well?
this one might be a bit tricker to implement. it might require us to build our own gbdt implementation.
|
open
|
2017-06-30T14:51:10Z
|
2017-06-30T14:51:10Z
|
https://github.com/ClimbsRocks/auto_ml/issues/263
|
[] |
ClimbsRocks
| 0
|
PaddlePaddle/ERNIE
|
nlp
| 160
|
Can BERT run on fluid 1.2?
|
The docs state that paddle 1.3+ is required for training, but if I only run inference, can I use fluid 1.2?
For example, using paddle 1.2 to load a pretrained BERT or a fine-tuned BERT model.
If this is not supported, I'd appreciate a concrete explanation of why, e.g., which ops are missing in 1.2.
Thanks.
|
closed
|
2019-06-10T12:40:59Z
|
2019-06-11T06:34:22Z
|
https://github.com/PaddlePaddle/ERNIE/issues/160
|
[] |
xiehanlin
| 1
|
ansible/awx
|
automation
| 15,779
|
vCenter source in inventory defaults to 'community.vmware.vmware_vm_inventory' even though I state "plugin: vmware.vmware.vms"
|
### Please confirm the following
- [x] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [x] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [x] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [x] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)
### Bug Summary
The community.vmware inventory plugin is about to be deprecated in favor of the vmware.vmware inventory plugin (available starting in v1.9.0).
I installed vmware.vmware v1.9.0 on my awx-ee and added `plugin: vmware.vmware.vms` at the top of the source variables, but AWX still defaults to 'community.vmware.vmware_vm_inventory'.
The source is of type "VMware vCenter".
When running `ansible-inventory -i hosts.vmware-vms.yml --list` inside my awx-ee, it works as expected.
hosts.vmware-vms.yml has the same content as the AWX inventory source.
### AWX version
24.6.1
### Select the relevant components
- [ ] UI
- [ ] UI (tech preview)
- [ ] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [x] Other
### Installation method
kubernetes
### Modifications
yes
### Ansible version
2.18.1
### Operating system
MAC
### Web browser
_No response_
### Steps to reproduce
Inventory source:
```
plugin: vmware.vmware.vms
properties:
- all
compose:
ansible_host: guest.ipAddress
```
Sync error:
```
redirecting (type: inventory) ansible.builtin.vmware_vm_inventory to community.vmware.vmware_vm_inventory
[WARNING]: * Failed to parse /runner/inventory/vmware_vm_inventory.yml with
auto plugin: inventory config '/runner/inventory/vmware_vm_inventory.yml'
```
### Expected results
Populated inventory.
### Actual results
Sync error:
```
redirecting (type: inventory) ansible.builtin.vmware_vm_inventory to community.vmware.vmware_vm_inventory
[WARNING]: * Failed to parse /runner/inventory/vmware_vm_inventory.yml with
auto plugin: inventory config '/runner/inventory/vmware_vm_inventory.yml'
```
### Additional information
_No response_
|
closed
|
2025-01-28T10:23:12Z
|
2025-01-28T15:50:12Z
|
https://github.com/ansible/awx/issues/15779
|
[
"type:bug",
"needs_triage",
"community"
] |
devidebyzero
| 0
|
Anjok07/ultimatevocalremovergui
|
pytorch
| 1,705
|
HAVE NVIDIA, BUT GPU CONVERSION NOT WORKING.
|
**_HELLO, I NEED HELP. I HAVE AN NVIDIA GPU, BUT WHEN I TRY TO USE GPU CONVERSION I ALWAYS GET THIS ERROR AND THE PROCESS DOESN'T START._**
Last Error Received:
Process: Ensemble Mode
If this error persists, please contact the developers with the error details.
Raw Error Details:
RuntimeError: ""
Traceback Error: "
File "UVR.py", line 6667, in process_start
File "separate.py", line 503, in seperate
File "torch\nn\modules\module.py", line 1145, in to
File "torch\nn\modules\module.py", line 797, in _apply
File "torch\nn\modules\module.py", line 797, in _apply
File "torch\nn\modules\module.py", line 844, in _apply
File "torch\nn\modules\module.py", line 1143, in convert
"
Error Time Stamp [2025-01-17 21:04:13]
Full Application Settings:
vr_model: Choose Model
aggression_setting: 5
window_size: 512
mdx_segment_size: 256
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: Default
overlap: 0.25
overlap_mdx: Default
overlap_mdx23: 8
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
is_mdx23_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: Choose Model
chunks: Auto
margin: 44100
compensate: Auto
denoise_option: None
is_match_frequency_pitch: True
phase_option: Automatic
phase_shifts: None
is_save_align: False
is_match_silence: True
is_spec_match: False
is_mdx_c_seg_def: True
is_invert_spec: False
is_deverb_vocals: False
deverb_vocal_opt: Main Vocals Only
voc_split_save_opt: Lead Only
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_time_correction: True
is_gpu_conversion: True
is_primary_stem_only: False
is_secondary_stem_only: False
is_testing_audio: False
is_auto_update_model_params: True
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_use_opencl: True
is_wav_ensemble: False
is_create_model_folder: False
mp3_bit_set: 320k
semitone_shift: 0
save_format: MP3
wav_type_set: PCM_16
device_set: Default
help_hints_var: True
set_vocal_splitter: No Model Selected
is_set_vocal_splitter: False
is_save_inst_set_vocal_splitter: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems
mdx_stems: All Stems
|
open
|
2025-01-17T18:13:29Z
|
2025-02-02T21:21:08Z
|
https://github.com/Anjok07/ultimatevocalremovergui/issues/1705
|
[] |
Jose213-cmd
| 3
|
hyperspy/hyperspy
|
data-visualization
| 3,128
|
Up-streaming Map-overlap Functions
|
#### Describe the functionality you would like to see.
There are a couple of functions using map-overlap in packages that extend hyperspy, and I was thinking of upstreaming their implementation to hyperspy. I figured we could give hyperspy the right of first refusal before implementing downstream.
Adding support for the [scipy.ndimage](https://docs.scipy.org/doc/scipy/reference/ndimage.html) functions in particular would be quite beneficial. These are already implemented lazily in [dask-image](https://image.dask.org/en/latest/), but I would also like to consider a slightly different option that lets users reload data from disk rather than holding it in cache. This is helpful when you have many cores but are limited by the amount of memory per core, as map_overlap functions are notorious for loading the entire dataset into RAM.
Some ideas of functions that might be universally useful:
- Gaussian Filtering in all dimensions
- Difference of Gaussian in n-dimensions to reduce noise
- Laplacian of Gaussian to find edges in n-dimensions
- Maximum/ Minimum filter to find peaks in n-dimensions
- Filter with a custom filter
#### Describe the context
I propose we add a generic `map_overlap` function that applies some function to some or all of the axes.
```python
def _map_overlap(self, func, overlaps, boundary, reload_tasks=False, **kwargs):
data_overlapped = da.overlap.overlap(self.data,
depth=overlaps,
boundary=boundary)
if reload_tasks:
# Clone original overlapped dask blocks to reload data from disk and stop spilling to disk
data_clones = da.concatenate(
[clone(b, omit=data_overlapped) for b in data_overlapped.blocks]
)
else:
data_clones = data_overlapped
mapped = data_clones.map_blocks(func, **kwargs)
return self.deep_copy_with_new_data(mapped)
```
#### Additional information
The `reload_tasks` option will only reload tasks along one of the chunked dimensions. This means that for a 4-dimensional dataset it will load chunks a, b, e and f, then operate on that chunk. Then it will load c and g while holding a, b, e and f in memory. Then it will drop a and e from memory while loading d and h into memory.
This means that each core will only ever have 9 chunks in memory at a time. This differs from normal `dask` operation in that the total RAM won't spike when using an overlapping function. Normally all of the chunks are loaded into RAM during an overlapping computation, especially when using multiple cores, because dask sees that each chunk is needed for some other operation and as a result holds it in RAM until then.
```
| a | b | c | d |
+---+---+---+---+
| e | f | g | h |
+---+---+---+---+
| i | j | k | l |
```
The argument against this workflow is:
1. It gets really slow if some operation is applied to the data besides just loading it. That operation would have to be repeated 3x, as the entire task tree for the block is cloned.
2. It can potentially be slower when the entire dataset fits in RAM, since then you don't have the extra overhead of reloading the data from disk.
3. Ideally you would be able to spill loaded chunks to disk in such a way as to make this workflow unnecessary. In practice, spilling to disk has some unintended consequences and causes operations to crash.
Another possibility is to force only one axis to be chunked. This reduces the complexity but the size of chunks can get very large and cause additional issues.
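To make the intent concrete, a hedged usage sketch (hypothetical, building on the proposed `_map_overlap`; `s` is assumed to be a lazy signal):
```python
import scipy.ndimage

# Lazy Gaussian filter; the chunk halo must cover the kernel support.
filtered = s._map_overlap(
    scipy.ndimage.gaussian_filter,
    overlaps=4,            # 4-pixel halo on every chunked axis
    boundary="reflect",
    sigma=2,               # forwarded to gaussian_filter via **kwargs
)
```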
|
open
|
2023-04-14T21:34:43Z
|
2023-04-15T11:58:43Z
|
https://github.com/hyperspy/hyperspy/issues/3128
|
[] |
CSSFrancis
| 1
|
Gozargah/Marzban
|
api
| 863
|
xray core 1.8.9
|
Greetings,
Thanks for your hard work.
Please update Marzban: HTTPUpgrade support has been added to the core, which leaves a smaller fingerprint when used behind a CDN, so please add it to Marzban. Thanks.
A number of bugs have also been fixed, and a security patch for vmess landed in this same version:
https://github.com/XTLS/Xray-core/releases/tag/v1.8.9
Please also add the ability to use mux and set its values in the panel:
tcp
xudp
xudp allow
Thanks.
|
closed
|
2024-03-11T14:55:23Z
|
2024-03-20T22:54:21Z
|
https://github.com/Gozargah/Marzban/issues/863
|
[
"Feature"
] |
w0l4i
| 3
|
dsdanielpark/Bard-API
|
nlp
| 88
|
why keyerror "image" happens
|
<img width="830" alt="image" src="https://github.com/dsdanielpark/Bard-API/assets/82095274/b9333272-0083-4bc4-8268-df068e3c0bb4">
I'm wondering why this error pops up. Even though we can use try/except to handle it, nothing changes. If I input normal text like "hi", it returns nothing, since it is still recognized as an error.
Thanks a lot.
|
closed
|
2023-07-01T12:41:25Z
|
2023-07-03T06:30:21Z
|
https://github.com/dsdanielpark/Bard-API/issues/88
|
[] |
Xiansssss
| 2
|
onnx/onnx
|
machine-learning
| 6,103
|
Spec for ReduceSumSquare is incorrect when noop_with_empty_axes == 1
|
# Bug Report
### Is the issue related to model conversion?
No
### Describe the bug
When `noop_with_empty_axes == 1` and `axes` is empty, the ONNX spec says the input tensor is returned directly.
But the ONNX reference implementation does not match: it returns `np.square` of the input tensor.

### System information
### Reproduction instructions
### Expected behavior
return input tensor directly
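A hedged numpy sketch of the behavior the spec describes, as read in this report (not the reference implementation itself):
```python
import numpy as np

def reduce_sum_square(x, axes=None, noop_with_empty_axes=0):
    if axes is None and noop_with_empty_axes:
        return x  # identity: return the input tensor directly
    return np.sum(np.square(x), axis=axes, keepdims=True)
```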
|
open
|
2024-04-29T02:43:06Z
|
2025-03-21T00:18:10Z
|
https://github.com/onnx/onnx/issues/6103
|
[
"bug",
"topic: documentation",
"topic: spec clarification",
"contributions welcome"
] |
RunnerZhong
| 8
|
prkumar/uplink
|
rest-api
| 289
|
Type hinting Optional parameters with the Field annotation
|
**Is your feature request related to a problem? Please describe.**
Imagine the following scenario: you want to make a `POST` request with a body that contains only a few fields, and creating a separate `Pydantic` (or Marshmallow) model would just be more boilerplate (I'm not considering passing a dict or kwargs directly, as it is not that user-friendly). Right now, you can always specify the `JSON` body fields as parameters, but this doesn't seem to play nicely with the `Optional` type hint (which really means the `Union` type).
```python
import uplink
from typing import Optional
@uplink.json
@uplink.post("/foo/{id}", args={
"id": Path,
"bar": Field,
"optional_bar": Field
})
def make_request(self, id: str, bar: str, optional_bar: Optional[str] = None) -> None:
pass
```
What I am seeing in the stack trace is:
<pre><code>
File "/python3.10/site-packages/uplink/builder.py", line 100, in __call__
self._request_definition.define_request(request_builder, args, kwargs)
File "/python3.10/site-packages/uplink/commands.py", line 284, in define_request
self._argument_handler.handle_call(
File "/python3.10/site-packages/uplink/arguments.py", line 154, in handle_call
self.handle_call_args(request_builder, call_args)
File "/python3.10/site-packages/uplink/arguments.py", line 159, in handle_call_args
annotation.modify_request(request_builder, call_args[name])
File "/python3.10/site-packages/uplink/arguments.py", line 182, in modify_request
converter = request_builder.get_converter(converter_key, argument_type)
File "/python3.10/site-packages/uplink/helpers.py", line 96, in get_converter
return self._converter_registry[converter_key](*args, **kwargs)
File "/python3.10/site-packages/uplink/converters/__init__.py", line 52, in __call__
converter = self._converter_factory(*args, **kwargs)
File "/python3.10/site-packages/uplink/converters/__init__.py", line 112, in chain
converter = func(factory)(*args, **kwargs)
File "/python3.10/site-packages/uplink/converters/typing_.py", line 130, in create_request_body_converter
return self._base_converter(type_)
File "/python3.10/site-packages/uplink/converters/typing_.py", line 121, in _base_converter
if issubclass(type_.__origin__, self.typing.Sequence):
File "/python3.10/typing.py", line 1158, in __subclasscheck__
return issubclass(cls, self.__origin__)
File "/python3.10/abc.py", line 123, in __subclasscheck__
return _abc_subclasscheck(cls, subclass)
TypeError: issubclass() arg 1 must be a class
</code></pre>
**Describe the solution you'd like**
Parameters should work with the `Union` type hint.
P.S.: I haven't looked into the library, hence I don't even know if that's possible. That's why I am posting it as a feature
request and not a bug. Thanks for the great library :).
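A hedged sketch (my assumption of a possible guard, not uplink's actual code) of how the crash could be avoided: `typing.get_origin(Optional[str])` returns `typing.Union`, which is not a class, so the `issubclass()` probe has to be skipped for it.
```python
import typing
from collections.abc import Sequence

def is_sequence_annotation(type_):
    origin = typing.get_origin(type_)
    # Only probe issubclass() when the origin is an actual class.
    return isinstance(origin, type) and issubclass(origin, Sequence)

assert is_sequence_annotation(typing.List[str])
assert not is_sequence_annotation(typing.Optional[str])  # no TypeError
```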
|
open
|
2023-03-20T09:32:35Z
|
2023-03-20T09:32:53Z
|
https://github.com/prkumar/uplink/issues/289
|
[] |
koioannis
| 0
|
serengil/deepface
|
machine-learning
| 1,075
|
Preprocessing ascii check is unsupported for Python 3.6
|
## Issues
I am using Python 3.6 to run DeepFace version 0.0.85 (which supports Python >= 3.5.5 per PyPI). However, I got the error below.
My code:
```python
from deepface import DeepFace
import deepface
import sys
print(deepface.__version__)
print(sys.version)
print(DeepFace.analyze('image1.png'))
```
Trace
```
0.0.85
3.6.9 (default, Mar 10 2023, 16:46:00)
[GCC 8.4.0]
Traceback (most recent call last):
File "/home/ubuntu/.../index.py", line 4, in <module>
print(DeepFace.analyze('image1.png'))
File "/usr/local/lib/python3.6/dist-packages/deepface/DeepFace.py", line 222, in analyze
silent=silent,
File "/usr/local/lib/python3.6/dist-packages/deepface/modules/demography.py", line 126, in analyze
expand_percentage=expand_percentage,
File "/usr/local/lib/python3.6/dist-packages/deepface/modules/detection.py", line 79, in extract_faces
img, img_name = preprocessing.load_image(img_path)
File "/usr/local/lib/python3.6/dist-packages/deepface/modules/preprocessing.py", line 47, in load_image
if img.isascii() is False:
AttributeError: 'str' object has no attribute 'isascii'
```
After a quick search, I found that `str.isascii()` is only available on Python >= 3.7, so it is missing on Python 3.6.
## Environment
Running on
- **Ubuntu 18.04 (arch64)**
- **Python 3.6.9**
- **DeepFace 0.0.85**
## Solution
- Re-implement the ASCII check on strings instead of using the built-in function (or raise the required Python version to >= 3.7); a sketch follows below.
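A hedged sketch of a Python 3.6 compatible replacement (an assumption, not the fix that actually shipped):
```python
def is_ascii(text: str) -> bool:
    # Mirrors str.isascii() (3.7+) by probing the ASCII codec directly.
    try:
        text.encode("ascii")
        return True
    except UnicodeEncodeError:
        return False

assert is_ascii("image1.png") and not is_ascii("图片.png")
```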
|
closed
|
2024-03-09T05:59:08Z
|
2024-03-10T09:17:52Z
|
https://github.com/serengil/deepface/issues/1075
|
[
"bug"
] |
PlayerNguyen
| 3
|
rougier/numpy-100
|
numpy
| 74
|
Hide answers
|
It would be nice if the answers were hidden so one wouldn't accidentally read the answers.
|
open
|
2018-12-04T22:32:33Z
|
2018-12-05T17:28:32Z
|
https://github.com/rougier/numpy-100/issues/74
|
[] |
tkwon
| 1
|
pallets-eco/flask-wtf
|
flask
| 417
|
validate_on_submit() expects a tuple list for SelectField even though a normal list is a valid argument
|
### Actual Behavior
`form.validate_on_submit()` evaluates to `False` when `choices` in `SelectField` is a list of plain values. The same form validation evaluates to `True` when using `(value, label)` tuples, as was done with version 2.2.1.
```python
class RegistrationForm(FlaskForm):
language = SelectField('Language', choices=['python', 'ruby', 'rust'], validators=[DataRequired()])
submit = SubmitField('Submit')
@app.route('/', methods=['GET', 'POST'])
def index():
form = RegistrationForm()
if form.validate_on_submit():
return redirect(url_for('test'))
return render_template('index.html', form=form)
```
The `SelectField()` is rendered properly in the HTML form and its value can be accessed with `form.language.data`, but `form.validate_on_submit()` still evaluates to `False`.
### Expected Behavior
`form.validate_on_submit()` should evaluate to `True` when `choices` in `SelectField` is a list of plain values as well (see the workaround sketch below).
### Environment
* Python version: 3.8.2
* Flask-wtf version: 0.14.3
* wtforms version: 2.3.1
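A hedged workaround sketch: spelling the choices as `(value, label)` tuples, the form that validated correctly on wtforms 2.3.x according to this report:
```python
from flask_wtf import FlaskForm
from wtforms import SelectField, SubmitField
from wtforms.validators import DataRequired

class RegistrationForm(FlaskForm):
    language = SelectField(
        'Language',
        choices=[(c, c) for c in ('python', 'ruby', 'rust')],  # (value, label)
        validators=[DataRequired()],
    )
    submit = SubmitField('Submit')
```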
|
closed
|
2020-07-29T13:46:52Z
|
2021-05-26T00:54:52Z
|
https://github.com/pallets-eco/flask-wtf/issues/417
|
[] |
ghost
| 1
|
graphql-python/graphene-django
|
graphql
| 1,356
|
Check dependent libraries: https://github.com/tfoxy/graphene-django-optimizer and https://github.com/eamigo86/graphene-django-extras
|
Check dependent libraries: https://github.com/tfoxy/graphene-django-optimizer and https://github.com/eamigo86/graphene-django-extras
|
closed
|
2022-09-26T10:49:08Z
|
2023-06-11T20:07:08Z
|
https://github.com/graphql-python/graphene-django/issues/1356
|
[
"✨enhancement"
] |
firaskafri
| 3
|
dmlc/gluon-cv
|
computer-vision
| 1,060
|
Faster-RCNN-custom cannot transfer pretrained weight from coco or voc
|
To reproduce:
```python
import gluoncv as gcv
net1 = gcv.model_zoo.get_model('yolo3_mobilenet1.0_custom',
pretrained_base=False, transfer='coco',
classes=['a', 'b', 'c'])
net2 = gcv.model_zoo.get_model('faster_rcnn_resnet50_v1b_custom',
pretrained_base=False, transfer='coco',
classes=['a', 'b', 'c'])
```
The error message:
```
AssertionError: Parameter '_target_generator._box_encoder.means' is missing in file '/home/ubuntu/.mxnet/models/faster_rcnn_resnet50_v1b_coco-5b4690fb.params', which contains parameters: 'features.5.3.bn3.gamma', 'top_features.0.0.bn1.gamma', 'top_features.0.2.bn1.beta', ..., 'top_features.0.2.bn2.beta', 'features.6.5.conv1.weight', 'top_features.0.1.bn3.gamma', 'features.6.1.bn1.running_var'. Set allow_missing=True to ignore missing parameters.
```
Notice that Yolo/SSD support this customization, but Faster RCNN breaks it.
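A hedged workaround sketch, prompted by the error text rather than a confirmed upstream fix: build the custom net without `transfer`, then copy the COCO weights while skipping parameters absent from the checkpoint.
```python
import gluoncv as gcv
from gluoncv.model_zoo.model_store import get_model_file

net = gcv.model_zoo.get_model('faster_rcnn_resnet50_v1b_custom',
                              pretrained_base=False,
                              classes=['a', 'b', 'c'])
params = get_model_file('faster_rcnn_resnet50_v1b_coco')  # cached .params file
net.load_parameters(params, allow_missing=True, ignore_extra=True)
```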
|
closed
|
2019-11-27T09:06:44Z
|
2019-11-28T01:49:55Z
|
https://github.com/dmlc/gluon-cv/issues/1060
|
[] |
hetong007
| 2
|
mwaskom/seaborn
|
data-visualization
| 3,632
|
Palette does not support the use of defaultdict with missing values
|
Currently, Seaborn does not permit the use of defaultdict with missing values as a palette. A minimal example that reproduces this issue is:
```python
import seaborn as sns
import pandas as pd
from collections import defaultdict
data = pd.DataFrame({
"values": [1, 2, 3],
"hues": ["foo", "bar", "baz"],
})
palette = defaultdict(lambda: "#000000", {
"foo": "#ff0000",
"bar": "#00ff00",
})
sns.histplot(
x="values",
data=data,
hue="hues",
palette=palette,
)
```
My expectation is that this should use the default value of `#000000` for `baz`, which is missing from the palette. Instead, this raises an exception:
```python-traceback
Traceback (most recent call last):
File "/home/ehermes/test/seaborn_defaultdict.py", line 15, in <module>
sns.histplot(
File "/home/ehermes/venvs/seaborn/lib/python3.10/site-packages/seaborn/distributions.py", line 1384, in histplot
p.map_hue(palette=palette, order=hue_order, norm=hue_norm)
File "/home/ehermes/venvs/seaborn/lib/python3.10/site-packages/seaborn/_base.py", line 838, in map_hue
mapping = HueMapping(self, palette, order, norm, saturation)
File "/home/ehermes/venvs/seaborn/lib/python3.10/site-packages/seaborn/_base.py", line 150, in __init__
levels, lookup_table = self.categorical_mapping(
File "/home/ehermes/venvs/seaborn/lib/python3.10/site-packages/seaborn/_base.py", line 234, in categorical_mapping
raise ValueError(err.format(missing))
ValueError: The palette dictionary is missing keys: {'baz'}
```
For this test, I have used `seaborn-0.13.2` and `matplotlib-3.8.2`.
I have a fix for this problem in a personal branch (https://github.com/ehermes/seaborn/tree/palette_defaultdict), but per your contribution guidelines, I have opened a bug report first. With permission, I can also create a PR for my fix.
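In the meantime, a hedged workaround sketch (not the linked fix): materialize the defaultdict into a plain dict covering every hue level, so the missing key picks up its default before seaborn validates the palette.
```python
# Reading palette[hue] on a defaultdict fills in the default for 'baz'.
palette = {hue: palette[hue] for hue in data["hues"].unique()}
```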
|
open
|
2024-02-07T15:40:34Z
|
2024-02-10T19:02:10Z
|
https://github.com/mwaskom/seaborn/issues/3632
|
[] |
ehermes
| 4
|
litestar-org/litestar
|
asyncio
| 3,248
|
build: `make docs-serve` watching polyfactory dir in litestar project
|
https://github.com/litestar-org/litestar/blob/042468cd8af6a852d1758735507ed9305daf9734/Makefile#L149
|
closed
|
2024-03-23T23:20:00Z
|
2025-03-20T15:54:31Z
|
https://github.com/litestar-org/litestar/issues/3248
|
[
"Bug :bug:",
"Good First Issue"
] |
peterschutt
| 2
|
kymatio/kymatio
|
numpy
| 292
|
BUG 1D scattering does not respect `max_order`
|
Specifically, if we set `max_order = 1`, it creates the same size output as for `max_order = 2`, but fills the remaining elements with zeros.
|
closed
|
2019-01-15T04:22:03Z
|
2019-01-15T15:02:25Z
|
https://github.com/kymatio/kymatio/issues/292
|
[] |
janden
| 0
|
collerek/ormar
|
fastapi
| 435
|
Many-to-many fields have strange types
|
**Describe the bug**
Consider the following contrived example:
```python
class Actor(ormar.Model):
class Meta(MainMeta):
pass
id = ormar.Integer(primary_key=True)
name = ormar.String(max_length=255)
class Film(ormar.Model):
class Meta(MainMeta):
pass
id = ormar.Integer(primary_key=True)
cast = ormar.ManyToMany(to=Actor, skip_reverse=True)
```
I would expect the type of `film.cast` to be `List[Actor]`, however, inspecting `Film.schema()` it looks like it has type `Union[int, Actor, List[Actor]]`:
```python
>>> Film.schema()
>>> {'title': 'Film',
'description': 'Schema for the `Film` model.',
'type': 'object',
'properties': {'id': {'title': 'Id',
'type': 'integer'},
'cast': {'title': 'Cast',
'anyOf': [{'type': 'integer'},
{'$ref': '#/definitions/Actor'},
{'type': 'array', 'items': {'$ref': '#/definitions/Actor'}}]}},
...
```
Note that if I cast to a `pydantic` model it appears the field has the correct type:
```python
>>> PydanticFilm = Film.get_pydantic()
>>> PydanticFilm.schema()
>>> {'title': 'Film_FNN',
'type': 'object',
'properties': {'id': {'title': 'Id',
'minimum': None,
'maximum': None,
'type': 'integer'},
'cast': {'title': 'Cast',
'type': 'array',
'items': {'$ref': '#/definitions/Actor_IAY'}},
...
```
This is causing issues with the OpenAPI schema generated by FastAPI with ormar; a workaround sketch follows at the end of this report.
**Expected behavior**
A many-to-many field should have the type `List[Relation]` (unless I'm missing something?)
**Versions (please complete the following information):**
- Database backend used (mysql/sqlite/postgress): postgres
- Python version: 3.9.0
- `ormar` version: 0.10.22
- `pydantic` version: 1.8.2
- if applicable `fastapi` version: 0.70.0
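A hedged workaround sketch (assumed usage, not from this report): expose the flattened pydantic model as the FastAPI `response_model`, so the generated OpenAPI schema types `cast` as an array of `Actor`.
```python
from fastapi import FastAPI

app = FastAPI()
PydanticFilm = Film.get_pydantic()  # the flattened model shown above

@app.get("/films/{film_id}", response_model=PydanticFilm)
async def get_film(film_id: int):
    return await Film.objects.select_related("cast").get(id=film_id)
```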
|
closed
|
2021-11-17T10:18:37Z
|
2021-12-06T18:56:17Z
|
https://github.com/collerek/ormar/issues/435
|
[
"bug"
] |
dbatten5
| 3
|
harry0703/MoneyPrinterTurbo
|
automation
| 99
|
Subtitles stop halfway through
|
A 56-second video was generated, but the subtitles only run to the 29-second mark; after that there are no subtitles.
|
closed
|
2024-03-28T14:18:32Z
|
2024-03-31T15:34:26Z
|
https://github.com/harry0703/MoneyPrinterTurbo/issues/99
|
[
"bug"
] |
duffercn
| 4
|
Yorko/mlcourse.ai
|
plotly
| 608
|
Potentially incorrect statement about .map vs .replace in topic1_pandas_data_analysis.ipynb
|
In the **Applying Functions to Cells, Columns and Rows** section of the topic1_pandas_data_analysis.ipynb exercise, when explaining how to replace values in a column, it is stated that `.replace` does the same thing as `.map`, which, I think, is only partially correct.
While `.map`, applied to a column, produces NaN values for keys not found in the mapping, `.replace` only updates values that match keys in the mapping.
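A minimal demo of the difference (assumed data, not from the notebook):
```python
import pandas as pd

s = pd.Series(["yes", "no", "maybe"])
mapping = {"yes": 1, "no": 0}
print(s.map(mapping).tolist())      # [1.0, 0.0, nan]: unmatched keys -> NaN
print(s.replace(mapping).tolist())  # [1, 0, 'maybe']: unmatched values kept
```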
|
closed
|
2019-09-04T19:06:22Z
|
2019-09-08T13:59:36Z
|
https://github.com/Yorko/mlcourse.ai/issues/608
|
[] |
andrei-khveras
| 1
|
ets-labs/python-dependency-injector
|
asyncio
| 394
|
Context manager for the providers.Configuration
|
My proposal is to add the `__enter__` and `__exit__` methods to the `providers.Configuration` class to reduce code duplication and improve the visualization of configuration blocks.
Example of such case:
```python
from dependency_injector import providers
from environs import Env
env = Env()
config = providers.Configuration()
config.some_plugin_name.some_interval_ms.override(
env.int(
"SOME_INTERVAL_MS",
default=30000
)
)
config.some_plugin_name.kafka.bootstrap_servers.override(
env.list(
"KAFKA_BOOTSTRAP_SERVERS"
)
)
config.some_plugin_name.kafka.security_protocol.override(
env.str(
"KAFKA_SECURITY_PROTOCOL",
default="SASL_SSL"
)
)
# ...
```
```python
from dependency_injector import providers
from environs import Env
env = Env()
config = providers.Configuration()
with config.some_plugin_name as plugin:
plugin.some_interval_ms.override(
env.int(
"SOME_INTERVAL_MS",
default=30000
)
)
with plugin.kafka as kafka:
kafka.bootstrap_servers.override(
env.list(
"KAFKA_BOOTSTRAP_SERVERS"
)
)
kafka.security_protocol.override(
env.str(
"KAFKA_SECURITY_PROTOCOL",
default="SASL_SSL"
)
)
# ...
```
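A minimal sketch (an assumption, not the library's implementation) of the protocol this proposal would need: entering a configuration section simply yields that section, so nested overrides can be grouped in `with` blocks.
```python
class ConfigurationSection:
    """Toy stand-in for a providers.Configuration section."""

    def __enter__(self):
        return self          # expose the section for block-scoped overrides

    def __exit__(self, exc_type, exc_val, exc_tb):
        return False         # nothing to clean up; let exceptions propagate
```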
|
closed
|
2021-02-10T09:57:30Z
|
2021-02-15T22:09:02Z
|
https://github.com/ets-labs/python-dependency-injector/issues/394
|
[
"feature"
] |
gtors
| 5
|
coqui-ai/TTS
|
pytorch
| 3,077
|
[Feat] : Multilingual Support for Tacotron2
|
**🚀 Feature Description**
The Tacotron2 model is a powerful tool for text-to-speech synthesis, but it currently lacks built-in support for multiple languages. Adding multilingual capabilities would significantly enhance its utility for a global user base.
**Solution**
Implement a mechanism to support multiple languages in the Tacotron2 model. This could involve adapting the model architecture, incorporating language embeddings, and providing tools for data preprocessing in different languages.
**Alternative Solutions**
One alternative approach to achieving multilingual support is to explore the possibility of using a language-agnostic pre-trained model as a starting point. This could involve fine-tuning the model with data from different languages to adapt it to various linguistic contexts.
**Additional context**
Adding multilingual support to Tacotron2 is an important step towards making the TTS library more versatile and globally accessible. It will empower users from diverse linguistic backgrounds to leverage the capabilities of Tacotron2 for speech synthesis in their respective languages. Your contributions to this enhancement will play a significant role in expanding the reach and impact of the TTS library. We look forward to your insights and expertise!
|
closed
|
2023-10-17T16:37:42Z
|
2023-11-13T12:44:23Z
|
https://github.com/coqui-ai/TTS/issues/3077
|
[
"feature request"
] |
Parvezkhan0
| 1
|
approximatelabs/sketch
|
pandas
| 8
|
Not working
|
Hello. I tried to use sketch with a simple use case, but I'm not getting results. I'm working in the Spyder IDE on Anaconda; the pip install of sketch worked fine, but my console output is always: "<IPython.core.display.HTML object>"
This happens regardless of the type of question I ask, following the documentation.
Thank you
|
closed
|
2023-01-26T02:01:14Z
|
2023-01-26T16:40:35Z
|
https://github.com/approximatelabs/sketch/issues/8
|
[] |
pdavis01
| 2
|
coqui-ai/TTS
|
python
| 3,515
|
Design improvements in the bash entry point
|
There are a few design weaknesses in the bash wrapper and the project in general that, if fixed, could make this more useful.
1. If CUDA exists, it should be used by default.
2. Arguments differ per model, and it isn't clear why unless one digs into the models.
3. There are too many similarly named files and folders.
Other than that, thanks for putting all this together.
4. It would be a boon to have scripts that point out the strengths and weaknesses of each model.
I wrote such a script only to find issue (2) and an issue with the bark model lacking a config file, therefore defaulting to saving weights in /root/.local on a docker container instead of in TTS_HOME or XDG_CACHE_HOME (which the bark repo, if one follows its installation and example, defaults to).
I haven't gone through all models. Again, it'd be great if the bolts were tightened a bit on this project; it could be very useful to have a project one can use as an evaluation and training starting point.
|
closed
|
2024-01-13T15:15:44Z
|
2024-02-23T03:28:15Z
|
https://github.com/coqui-ai/TTS/issues/3515
|
[
"wontfix",
"feature request"
] |
xvdp
| 1
|
hzwer/ECCV2022-RIFE
|
computer-vision
| 62
|
Pre-trained LiteFlowNet model
|
Thanks for your wonderful work!
As stated in `training strategy: A LiteFlowNet [14] pre-trained on the FlyingChairs [9] dataset is used as the overpowered teacher in the leakage distillation`, can you release the pre-trained LiteFlowNet model? I can only find the model (released in [LiteFlowNet](https://github.com/twhui/LiteFlowNet)) trained on FlyingChairs+FlyingThings.
|
closed
|
2020-12-10T03:40:46Z
|
2020-12-10T07:09:37Z
|
https://github.com/hzwer/ECCV2022-RIFE/issues/62
|
[] |
lhao0301
| 1
|
predict-idlab/plotly-resampler
|
plotly
| 44
|
Problem with pyinstaller
|
Hi,
First thing - thanks for your work 👍 Plotly-Resampler lets me handle 30bln-datapoint plots. Really great job!
Unfortunately, I have a problem with your library combined with PyInstaller. The code works properly inside PyCharm, but when I add `from plotly_resampler import FigureResampler` to the code, build a standalone .exe file, and run it, it crashes without any error.
Do you know what the root cause could be, or how I can work around this issue?
Of course, I added all needed dependencies to the *.spec file.
Update: when I removed Jupyter and the show_dash() function, the .exe file started working, so maybe there is some routing conflict between two local servers?

Thanks,
Piotr
|
closed
|
2022-04-23T16:09:36Z
|
2022-08-12T07:39:04Z
|
https://github.com/predict-idlab/plotly-resampler/issues/44
|
[
"installation"
] |
Piotrkow
| 4
|
graphistry/pygraphistry
|
jupyter
| 633
|
[BUG] scene settings for point_size not always setting to correct value
|
**Describe the bug**
Customer reports that trying to set the point_size via scene_settings doesn't always set it correctly.
Comment from user:
> mismatch in the scale of the UI input and what is available on the API. Either update documentation or make the UI and API like-for-like
**To Reproduce**
Use for_analysis.ipynb, set the following scene settings, and see that point_size doesn't match in the UI:
```python
g2 = g.scene_settings(
point_size=0.2,
edge_curvature=0.0,
edge_opacity=0.5,
point_opacity=0.9
)
g2.plot()
```
**Expected behavior**
point size should be 20
**Actual behavior**
point size showing 15, sometimes other values
**Screenshots**


|
open
|
2025-01-07T03:46:50Z
|
2025-01-15T19:30:58Z
|
https://github.com/graphistry/pygraphistry/issues/633
|
[
"bug"
] |
DataBoyTX
| 0
|
WZMIAOMIAO/deep-learning-for-image-processing
|
deep-learning
| 668
|
About https://github.com/dosafe
|
Hi,
I'm sorry to bother you, but are you the owner of https://github.com/dosafe? We would like to use this username for our brand.
|
closed
|
2022-10-24T21:24:40Z
|
2022-11-06T03:14:53Z
|
https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/668
|
[] |
JOY
| 1
|
pyg-team/pytorch_geometric
|
pytorch
| 8,957
|
Graph Transformer implementation
|
### 🚀 The feature, motivation and pitch
DGL has graph transformer implementation here: https://docs.dgl.ai/en/1.1.x/notebooks/sparse/graph_transformer.html
Is it possible to include this architecture in pytorch geometric?
### Alternatives
We may already have an implementation of this, but I cannot find it.
### Additional context
_No response_
|
closed
|
2024-02-23T05:20:09Z
|
2024-02-23T15:36:21Z
|
https://github.com/pyg-team/pytorch_geometric/issues/8957
|
[
"feature"
] |
ck-amrahd
| 2
|
seleniumbase/SeleniumBase
|
pytest
| 2,138
|
Add new examples that make use of https://seleniumbase.io/simple/login
|
## Add new examples that make use of https://seleniumbase.io/simple/login
There's a new website just for testing basic login and showing off SeleniumBase methods and features.
We still have the existing MFA (Multi-factor auth / 2FA) site here: https://seleniumbase.io/realworld/login.
(The sites are nearly identical, but the new one removed the MFA part to make some examples simpler.)
|
closed
|
2023-09-24T15:53:26Z
|
2023-09-26T01:36:01Z
|
https://github.com/seleniumbase/SeleniumBase/issues/2138
|
[
"documentation",
"tests"
] |
mdmintz
| 1
|
nschloe/matplotx
|
matplotlib
| 1
|
meet at center
|
If a quad has one edge with a contour, draw a line from that segment to the quad center. This avoids situations like:

|
open
|
2021-10-25T20:37:13Z
|
2021-10-25T20:37:13Z
|
https://github.com/nschloe/matplotx/issues/1
|
[] |
nschloe
| 0
|
holoviz/panel
|
jupyter
| 7,386
|
Cannot sync date and time widgets to url
|
panel 1.5.2
URL syncing does not work for date and time widgets.
```python
import datetime as dt
import panel as pn
pn.extension()
test_widget = pn.widgets.DatetimePicker(
name='Datetime Picker', value=dt.datetime(2021, 3, 2, 12, 10)
)
pn.state.location.sync(test_widget, parameters=["value"])
pn.template.FastListTemplate(
title="Sync a Time-based widget and the url parameters",
main=[test_widget]
).servable()
```
Does not update the URL when the widget value changes.
```python
import datetime as dt
import panel as pn
pn.extension()
test_widget = pn.widgets.DatetimeRangeInput(
name='Datetime Range Input',
start=dt.datetime(2017, 1, 1), end=dt.datetime(2019, 1, 1),
value=(dt.datetime(2017, 1, 1), dt.datetime(2018, 1, 10)),
width=300
)
pn.state.location.sync(test_widget, parameters=["value"])
pn.template.FastListTemplate(
title="Sync a Time-based widget and the url parameters",
main=[test_widget]
).servable()
```
causes an exception:
```python
2024-10-09 18:28:58,404 Error running application handler <panel.io.handlers.ScriptHandler object at 0x7f94d4365350>: Object of type datetime is not JSON serializable
File 'encoder.py', line 180, in default:
raise TypeError(f'Object of type {o.__class__.__name__} ' Traceback (most recent call last):
File "/usr/local/python/3.11.7/lib/python3.11/site-packages/panel/io/handlers.py", line 405, in run
exec(self._code, module.__dict__)
File "/workspaces/stratolaunch-data-eng-pipelines/ui/test.py", line 13, in <module>
pn.state.location.sync(test_widget, parameters=["value"])
File "/usr/local/python/3.11.7/lib/python3.11/site-packages/panel/io/location.py", line 267, in sync
v = json.dumps(v)
^^^^^^^^^^^^^
File "/usr/local/python/3.11.7/lib/python3.11/json/__init__.py", line 231, in dumps
return _default_encoder.encode(obj)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/python/3.11.7/lib/python3.11/json/encoder.py", line 200, in encode
chunks = self.iterencode(o, _one_shot=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/python/3.11.7/lib/python3.11/json/encoder.py", line 258, in iterencode
return _iterencode(o, 0)
^^^^^^^^^^^^^^^^^
File "/usr/local/python/3.11.7/lib/python3.11/json/encoder.py", line 180, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type datetime is not JSON serializable
```
- [x] I may be interested in making a pull request to address this
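A hedged demo of the underlying failure mode (stdlib behavior, not panel code): `json.dumps()` rejects `datetime` objects, so any synced value has to be reduced to a JSON-serializable form, such as an ISO string, first.
```python
import datetime as dt
import json

value = dt.datetime(2021, 3, 2, 12, 10)
print(json.dumps(value.isoformat()))  # '"2021-03-02T12:10:00"' works
try:
    json.dumps(value)                 # raises, matching the traceback above
except TypeError as exc:
    print(exc)
```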
|
open
|
2024-10-09T18:50:04Z
|
2024-10-09T18:50:04Z
|
https://github.com/holoviz/panel/issues/7386
|
[] |
electrophys
| 0
|
lepture/authlib
|
django
| 284
|
Some tests fail if all tests are run together
|
**Describe the bug**
Some tests in `tests/core/test_oauth2/test_rfc8414.py` and `tests/core/test_oidc/test_discovery.py` fail if they are run with Django or Flask tests.
**Error Stacks**
<details>
<summary>pytest logs</summary>
```
============================= test session starts ==============================
platform linux -- Python 3.8.6, pytest-6.1.1, py-1.9.0, pluggy-0.13.1
django: settings: tests.django.settings (from ini)
rootdir: /build/python-authlib/src/authlib, configfile: setup.cfg
plugins: asyncio-0.14.0, django-3.10.0
collected 503 items
tests/django/test_client/test_oauth_client.py ............. [ 2%]
tests/django/test_oauth1/test_authorize.py .... [ 3%]
tests/django/test_oauth1/test_resource_protector.py .... [ 4%]
tests/django/test_oauth1/test_token_credentials.py ..... [ 5%]
tests/django/test_oauth2/test_authorization_code_grant.py ....... [ 6%]
tests/django/test_oauth2/test_client_credentials_grant.py ..... [ 7%]
tests/django/test_oauth2/test_implicit_grant.py ... [ 8%]
tests/django/test_oauth2/test_password_grant.py ..... [ 9%]
tests/django/test_oauth2/test_refresh_token.py ...... [ 10%]
tests/django/test_oauth2/test_resource_protector.py ..... [ 11%]
tests/django/test_oauth2/test_revocation_endpoint.py .... [ 12%]
tests/core/test_jose/test_jwe.py ............... [ 15%]
tests/core/test_jose/test_jwk.py ................... [ 18%]
tests/core/test_jose/test_jws.py .............. [ 21%]
tests/core/test_jose/test_jwt.py ............... [ 24%]
tests/core/test_oauth2/test_rfc6749_misc.py ..... [ 25%]
tests/core/test_oauth2/test_rfc7591.py ...... [ 26%]
tests/core/test_oauth2/test_rfc7662.py ...... [ 28%]
tests/core/test_oauth2/test_rfc8414.py ....F..F..FF..F..F....F... [ 33%]
tests/core/test_oidc/test_core.py .......... [ 35%]
tests/core/test_oidc/test_discovery.py ............F.......... [ 39%]
tests/core/test_requests_client/test_assertion_session.py .. [ 40%]
tests/core/test_requests_client/test_oauth1_session.py ................ [ 43%]
tests/core/test_requests_client/test_oauth2_session.py ................. [ 46%]
........... [ 48%]
tests/flask/test_client/test_oauth_client.py ................... [ 52%]
tests/flask/test_client/test_user_mixin.py ...... [ 53%]
tests/flask/test_oauth1/test_authorize.py ...... [ 55%]
tests/flask/test_oauth1/test_resource_protector.py ........ [ 56%]
tests/flask/test_oauth1/test_temporary_credentials.py ................ [ 59%]
tests/flask/test_oauth1/test_token_credentials.py ...... [ 61%]
tests/flask/test_oauth2/test_authorization_code_grant.py ............ [ 63%]
tests/flask/test_oauth2/test_client_credentials_grant.py ..... [ 64%]
tests/flask/test_oauth2/test_client_registration_endpoint.py ......... [ 66%]
tests/flask/test_oauth2/test_code_challenge.py ............ [ 68%]
tests/flask/test_oauth2/test_device_code_grant.py ......... [ 70%]
tests/flask/test_oauth2/test_implicit_grant.py ...... [ 71%]
tests/flask/test_oauth2/test_introspection_endpoint.py .... [ 72%]
tests/flask/test_oauth2/test_jwt_bearer_client_auth.py ....... [ 73%]
tests/flask/test_oauth2/test_jwt_bearer_grant.py ..... [ 74%]
tests/flask/test_oauth2/test_oauth2_server.py ....... [ 76%]
tests/flask/test_oauth2/test_openid_code_grant.py ........ [ 77%]
tests/flask/test_oauth2/test_openid_hybrid_grant.py .......... [ 79%]
tests/flask/test_oauth2/test_openid_implict_grant.py ........ [ 81%]
tests/flask/test_oauth2/test_password_grant.py ....... [ 82%]
tests/flask/test_oauth2/test_refresh_token.py .......... [ 84%]
tests/flask/test_oauth2/test_revocation_endpoint.py .... [ 85%]
tests/py3/test_httpx_client/test_assertion_client.py .. [ 85%]
tests/py3/test_httpx_client/test_async_assertion_client.py .. [ 86%]
tests/py3/test_httpx_client/test_async_oauth1_client.py ....... [ 87%]
tests/py3/test_httpx_client/test_async_oauth2_client.py ................ [ 90%]
.... [ 91%]
tests/py3/test_httpx_client/test_oauth1_client.py ....... [ 93%]
tests/py3/test_httpx_client/test_oauth2_client.py ................... [ 96%]
tests/py3/test_starlette_client/test_oauth_client.py ........... [ 99%]
tests/py3/test_starlette_client/test_user_mixin.py ..... [100%]
=================================== FAILURES ===================================
_____ AuthorizationServerMetadataTest.test_validate_authorization_endpoint _____
self = <test_oauth2.test_rfc8414.AuthorizationServerMetadataTest testMethod=test_validate_authorization_endpoint>
def test_validate_authorization_endpoint(self):
# https
metadata = AuthorizationServerMetadata({
'authorization_endpoint': 'http://authlib.org/'
})
with self.assertRaises(ValueError) as cm:
metadata.validate_authorization_endpoint()
> self.assertIn('https', str(cm.exception))
E AttributeError: '_AssertRaisesContext' object has no attribute 'exception'
tests/core/test_oauth2/test_rfc8414.py:93: AttributeError
_____ AuthorizationServerMetadataTest.test_validate_introspection_endpoint _____
self = <test_oauth2.test_rfc8414.AuthorizationServerMetadataTest testMethod=test_validate_introspection_endpoint>
def test_validate_introspection_endpoint(self):
metadata = AuthorizationServerMetadata()
metadata.validate_introspection_endpoint()
# https
metadata = AuthorizationServerMetadata({
'introspection_endpoint': 'http://authlib.org/'
})
with self.assertRaises(ValueError) as cm:
metadata.validate_introspection_endpoint()
> self.assertIn('https', str(cm.exception))
E AttributeError: '_AssertRaisesContext' object has no attribute 'exception'
tests/core/test_oauth2/test_rfc8414.py:430: AttributeError
_____________ AuthorizationServerMetadataTest.test_validate_issuer _____________
self = <test_oauth2.test_rfc8414.AuthorizationServerMetadataTest testMethod=test_validate_issuer>
def test_validate_issuer(self):
#: missing
metadata = AuthorizationServerMetadata({})
with self.assertRaises(ValueError) as cm:
metadata.validate()
self.assertEqual('"issuer" is required', str(cm.exception))
#: https
metadata = AuthorizationServerMetadata({
'issuer': 'http://authlib.org/'
})
with self.assertRaises(ValueError) as cm:
metadata.validate_issuer()
> self.assertIn('https', str(cm.exception))
E AttributeError: '_AssertRaisesContext' object has no attribute 'exception'
tests/core/test_oauth2/test_rfc8414.py:63: AttributeError
____________ AuthorizationServerMetadataTest.test_validate_jwks_uri ____________
self = <test_oauth2.test_rfc8414.AuthorizationServerMetadataTest testMethod=test_validate_jwks_uri>
def test_validate_jwks_uri(self):
# can missing
metadata = AuthorizationServerMetadata()
metadata.validate_jwks_uri()
metadata = AuthorizationServerMetadata({
'jwks_uri': 'http://authlib.org/jwks.json'
})
with self.assertRaises(ValueError) as cm:
metadata.validate_jwks_uri()
> self.assertIn('https', str(cm.exception))
E AttributeError: '_AssertRaisesContext' object has no attribute 'exception'
tests/core/test_oauth2/test_rfc8414.py:150: AttributeError
_____ AuthorizationServerMetadataTest.test_validate_registration_endpoint ______
self = <test_oauth2.test_rfc8414.AuthorizationServerMetadataTest testMethod=test_validate_registration_endpoint>
def test_validate_registration_endpoint(self):
metadata = AuthorizationServerMetadata()
metadata.validate_registration_endpoint()
metadata = AuthorizationServerMetadata({
'registration_endpoint': 'http://authlib.org/'
})
with self.assertRaises(ValueError) as cm:
metadata.validate_registration_endpoint()
> self.assertIn('https', str(cm.exception))
E AttributeError: '_AssertRaisesContext' object has no attribute 'exception'
tests/core/test_oauth2/test_rfc8414.py:166: AttributeError
______ AuthorizationServerMetadataTest.test_validate_revocation_endpoint _______
self = <test_oauth2.test_rfc8414.AuthorizationServerMetadataTest testMethod=test_validate_revocation_endpoint>
def test_validate_revocation_endpoint(self):
metadata = AuthorizationServerMetadata()
metadata.validate_revocation_endpoint()
# https
metadata = AuthorizationServerMetadata({
'revocation_endpoint': 'http://authlib.org/'
})
with self.assertRaises(ValueError) as cm:
metadata.validate_revocation_endpoint()
> self.assertIn('https', str(cm.exception))
E AttributeError: '_AssertRaisesContext' object has no attribute 'exception'
tests/core/test_oauth2/test_rfc8414.py:368: AttributeError
_________ AuthorizationServerMetadataTest.test_validate_token_endpoint _________
self = <test_oauth2.test_rfc8414.AuthorizationServerMetadataTest testMethod=test_validate_token_endpoint>
def test_validate_token_endpoint(self):
# implicit
metadata = AuthorizationServerMetadata({
'grant_types_supported': ['implicit']
})
metadata.validate_token_endpoint()
# missing
metadata = AuthorizationServerMetadata()
with self.assertRaises(ValueError) as cm:
metadata.validate_token_endpoint()
self.assertIn('required', str(cm.exception))
# https
metadata = AuthorizationServerMetadata({
'token_endpoint': 'http://authlib.org/'
})
with self.assertRaises(ValueError) as cm:
metadata.validate_token_endpoint()
> self.assertIn('https', str(cm.exception))
E AttributeError: '_AssertRaisesContext' object has no attribute 'exception'
tests/core/test_oauth2/test_rfc8414.py:132: AttributeError
______________ OpenIDProviderMetadataTest.test_validate_jwks_uri _______________
self = <test_oidc.test_discovery.OpenIDProviderMetadataTest testMethod=test_validate_jwks_uri>
def test_validate_jwks_uri(self):
# required
metadata = OpenIDProviderMetadata()
with self.assertRaises(ValueError) as cm:
metadata.validate_jwks_uri()
self.assertEqual('"jwks_uri" is required', str(cm.exception))
metadata = OpenIDProviderMetadata({
'jwks_uri': 'http://authlib.org/jwks.json'
})
with self.assertRaises(ValueError) as cm:
metadata.validate_jwks_uri()
> self.assertIn('https', str(cm.exception))
E AttributeError: '_AssertRaisesContext' object has no attribute 'exception'
tests/core/test_oidc/test_discovery.py:48: AttributeError
=============================== warnings summary ===============================
../../../../usr/lib/python3.8/site-packages/_pytest/config/__init__.py:1230
/usr/lib/python3.8/site-packages/_pytest/config/__init__.py:1230: PytestConfigWarning: Unknown config option: python_paths
self._warn_or_fail_if_strict("Unknown config option: {}\n".format(key))
tests/flask/test_oauth2/test_openid_code_grant.py::OpenIDCodeTest::test_authorize_token
tests/flask/test_oauth2/test_openid_code_grant.py::OpenIDCodeTest::test_nonce_replay
tests/flask/test_oauth2/test_openid_code_grant.py::OpenIDCodeTest::test_prompt
tests/flask/test_oauth2/test_openid_code_grant.py::OpenIDCodeTest::test_pure_code_flow
tests/flask/test_oauth2/test_openid_code_grant.py::RSAOpenIDCodeTest::test_authorize_token
tests/flask/test_oauth2/test_openid_code_grant.py::JWKSOpenIDCodeTest::test_authorize_token
tests/flask/test_oauth2/test_openid_code_grant.py::ECOpenIDCodeTest::test_authorize_token
tests/flask/test_oauth2/test_openid_code_grant.py::PEMOpenIDCodeTest::test_authorize_token
/build/python-authlib/src/authlib/authlib/integrations/flask_oauth2/authorization_server.py:73: AuthlibDeprecationWarning: Define "get_jwt_config" in OpenID Connect grants
It will be compatible before version 1.0.
deprecate('Define "get_jwt_config" in OpenID Connect grants', '1.0')
-- Docs: https://docs.pytest.org/en/stable/warnings.html
=========================== short test summary info ============================
FAILED tests/core/test_oauth2/test_rfc8414.py::AuthorizationServerMetadataTest::test_validate_authorization_endpoint
FAILED tests/core/test_oauth2/test_rfc8414.py::AuthorizationServerMetadataTest::test_validate_introspection_endpoint
FAILED tests/core/test_oauth2/test_rfc8414.py::AuthorizationServerMetadataTest::test_validate_issuer
FAILED tests/core/test_oauth2/test_rfc8414.py::AuthorizationServerMetadataTest::test_validate_jwks_uri
FAILED tests/core/test_oauth2/test_rfc8414.py::AuthorizationServerMetadataTest::test_validate_registration_endpoint
FAILED tests/core/test_oauth2/test_rfc8414.py::AuthorizationServerMetadataTest::test_validate_revocation_endpoint
FAILED tests/core/test_oauth2/test_rfc8414.py::AuthorizationServerMetadataTest::test_validate_token_endpoint
FAILED tests/core/test_oidc/test_discovery.py::OpenIDProviderMetadataTest::test_validate_jwks_uri
================== 8 failed, 495 passed, 9 warnings in 6.37s ===================
```
</details>
**To Reproduce**
1. Clone 0.15.1
2. python -m pytest
**Expected behavior**
All tests pass
**Environment:**
- OS: Arch Linux
- Python Version: 3.8.6
- Authlib Version: 0.15.1
**Additional context**
I'm interested in updating the python-authlib package for Arch Linux for compatibility with newer httpx. That package runs all tests together [1], which worked for authlib 0.14.3.
Do you suggest that packagers for Linux distributions should test different parts separately, as the official GitHub workflows [2] do?
[1] https://github.com/archlinux/svntogit-community/blob/2b480fcd73e2ef5fa6ad6d42bd2fdda8fa9a5606/trunk/PKGBUILD#L42
[2] https://github.com/lepture/authlib/blob/84ba75ab04606bbf5d7cc783b9b09f5b00291d9c/.github/workflows/python.yml#L42-L44
|
closed
|
2020-10-16T07:41:36Z
|
2020-10-20T15:56:09Z
|
https://github.com/lepture/authlib/issues/284
|
[
"bug"
] |
yan12125
| 7
|
netbox-community/netbox
|
django
| 18,004
|
Job scheduling is not working
|
### Deployment Type
Self-hosted
### Triage priority
I volunteer to perform this work (if approved)
### NetBox Version
v4.1.6
### Python Version
3.11
### Steps to Reproduce
1. Create any script
```python
from extras.scripts import Script
class TestScript(Script):
def run(self, data, commit):
self.log_info("This is test script")
```
2. Run the script with any interval set

### Expected Behavior
The script successfully runs and creates a new scheduled job
### Observed Behavior
A new job is not scheduled, traceback from worker
```
13:48:59 default: handle(commit=True, data={}, job=<Job: 086a5aae-5c60-4746-bde4-f77800c795ea>, request=<utilities.request.NetBoxFakeRequest object at 0x7fba0c3b7ef0>) (086a5aae-5c60-4746-bde4-f77800c795ea)
13:48:59 [Job 086a5aae-5c60-4746-bde4-f77800c795ea]: exception raised while executing (handle)
Traceback (most recent call last):
File "/home/miaow/work/netbox/venv/lib/python3.12/site-packages/rq/worker.py", line 1430, in perform_job
rv = job.perform()
^^^^^^^^^^^^^
File "/home/miaow/work/netbox/venv/lib/python3.12/site-packages/rq/job.py", line 1280, in perform
self._result = self._execute()
^^^^^^^^^^^^^^^
File "/home/miaow/work/netbox/venv/lib/python3.12/site-packages/rq/job.py", line 1317, in _execute
result = self.func(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/miaow/work/netbox/netbox/netbox/jobs.py", line 74, in handle
cls.enqueue(
File "/home/miaow/work/netbox/netbox/netbox/jobs.py", line 107, in enqueue
return Job.enqueue(cls.handle, name=name, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/miaow/work/netbox/netbox/core/models/jobs.py", line 237, in enqueue
job.full_clean()
File "/home/miaow/work/netbox/venv/lib/python3.12/site-packages/django/db/models/base.py", line 1502, in full_clean
raise ValidationError(errors)
django.core.exceptions.ValidationError: {'name': ['This field cannot be blank.']}
```
This happens because the rescheduling mechanism does not pass `name` to the Job enqueue, and the `name` field is mandatory on the Job model.
This can be fixed by passing `job.name` to `cls.enqueue` [here](https://github.com/netbox-community/netbox/blob/develop/netbox/netbox/jobs.py#L73); see the sketch below.
Feel free to assign this issue to me; I will create a PR to fix this issue.
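A hedged, self-contained toy (heavily simplified, not NetBox's code) of the bug and the proposed one-line fix:
```python
class Job:
    def __init__(self, name, interval=None):
        if not name:  # mirrors Django's "This field cannot be blank."
            raise ValueError({'name': ['This field cannot be blank.']})
        self.name, self.interval = name, interval

def reschedule(job):
    # The fix: forward job.name instead of omitting it (which raised above).
    return Job(name=job.name, interval=job.interval)

print(reschedule(Job("TestScript", interval=60)).name)
```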
|
closed
|
2024-11-13T14:08:45Z
|
2025-02-20T03:04:59Z
|
https://github.com/netbox-community/netbox/issues/18004
|
[
"type: bug",
"status: duplicate",
"netbox"
] |
miaow2
| 2
|
deepset-ai/haystack
|
machine-learning
| 8,704
|
Imports inside init causing extremely slow load times
|
**Describe the bug**
Any import of a component causes unnecessary packages to be loaded, increasing load times significantly.
The design of the `__init__.py` files in the component directories is not optimized, because importing a single component will bring all the others in, even when using the full path to the file. Users of this framework have no control over this.
Here's a simple example.
`from haystack.components.routers.conditional_router import ConditionalRouter`
The above line of code will actually still import all the other components defined in the router `__init__.py`, which includes `TransformersZeroShotTextRouter`, and that loads in the ML libraries `transformers` and `sklearn`. This is because of how Python works: parent packages are initialized completely when a child submodule is referenced.
This is problematic because load times are much higher due to components and packages that are not being used. In a complete RAG pipeline, I have seen load times spike all the way up to 5-7 seconds, with several ML libraries such as `torch` being loaded from `/utils/__init__.py`, which on its own takes a few seconds.
**Expected behavior**
The expected behavior is that imports are loaded lazily, only when they are accessed, and `__init__.py` should not automatically load everything. This will significantly improve load times.
I have put together a pull request with suggested changes for how to lazily import efficiently while still maintaining type checking for IDEs. Please prioritize reviewing this as soon as possible, as the performance is an issue for our users.
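For illustration, a minimal sketch of the PEP 562 lazy-import pattern (the submodule names are assumptions for the router example above, not Haystack's actual layout):

```python
# haystack/components/routers/__init__.py (sketch)
import importlib
from typing import TYPE_CHECKING

if TYPE_CHECKING:  # keeps IDE/type-checker support without the runtime import cost
    from .conditional_router import ConditionalRouter
    from .zero_shot_text_router import TransformersZeroShotTextRouter

_lazy_imports = {
    "ConditionalRouter": ".conditional_router",
    "TransformersZeroShotTextRouter": ".zero_shot_text_router",
}

def __getattr__(name):
    if name in _lazy_imports:
        module = importlib.import_module(_lazy_imports[name], __package__)
        return getattr(module, name)
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```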
Other related issues/PRS
https://github.com/deepset-ai/haystack/issues/8650
https://github.com/deepset-ai/haystack/pull/8706
https://github.com/deepset-ai/haystack/pull/8655
**FAQ Check**
- [X] Have you had a look at [our new FAQ page](https://docs.haystack.deepset.ai/docs/faq)?
|
closed
|
2025-01-10T15:43:24Z
|
2025-02-21T08:55:21Z
|
https://github.com/deepset-ai/haystack/issues/8704
|
[
"P2"
] |
lohit8846
| 0
|
autokey/autokey
|
automation
| 675
|
Inconsistency of "New..." button behavior in the two front ends
|
### Has this issue already been reported?
- [X] I have searched through the existing issues.
### Is this a question rather than an issue?
- [X] This is not a question.
### What type of issue is this?
UI/Usability
### Which Linux distribution did you use?
Any.
### Which AutoKey GUI did you use?
Both
### Which AutoKey version did you use?
Any.
### How did you install AutoKey?
Operating system repository for standard versions and pip for the beta.
### Can you briefly describe the issue?
In any version of AutoKey, the GTK and Qt front ends handle the behavior of the "New..." button in different ways.
### Can the issue be reproduced?
Always
### What are the steps to reproduce the issue?
1. Open one of the AutoKey front ends.
2. Note whether a phrase, script, or folder is currently selected.
3. Click the "New..." button.
4. Examine the context menu and note the result.
5. If a phrase or script was selected, select a folder. If a folder was selected, select a phrase or script.
6. Repeat steps 3 and 4.
7. Close AutoKey.
8. Open the other AutoKey front end.
9. Repeat steps 2 through 7.
10. Compare all of your results with each other.
### What should have happened?
I'm not sure which of the two behaviors would be considered desirable, but the behavior should be consistent regardless of which front end is used.
### What actually happened?
In the GTK front end, you can click the "New..." button in the toolbar at any time to create a new folder, subfolder, phrase, or script. The GTK front end doesn't care if you currently have a folder or a phrase or a script currently selected.
In the Qt front end, you can click the "New..." button in the toolbar at any time to create a new folder. If, however, you want to create a subfolder, phrase, or script, you must select a folder before clicking the "New..." button. If you have a phrase or script currently selected, the subfolder, phrase, and script options will be greyed-out and unselectable in the "New.." button's context menu.
### Do you have screenshots?
_No response_
### Can you provide the output of the AutoKey command?
_No response_
### Anything else?
_No response_
|
open
|
2022-02-24T20:11:11Z
|
2022-02-27T22:37:23Z
|
https://github.com/autokey/autokey/issues/675
|
[
"bug",
"autokey-qt",
"autokey-gtk",
"user interface"
] |
Elliria
| 2
|
scrapy/scrapy
|
python
| 5,747
|
How to enable certificate verification when Scrapy accesses https?
|
Checking the documentation, I found that you need to add the following configuration in settings.py:
```
DOWNLOADER_CLIENTCONTEXTFACTORY = "scrapy.core.downloader.contextfactory.BrowserLikeContextFactory"
```
spider.py
```
class GoogleSpider(scrapy.Spider):
    name = 'google'
    allowed_domains = ['google.com']
    start_urls = ['https://www.google.com/']

    def parse(self, response):
        yield scrapy.Request("https://www.google.com/", callback=self.xxx, dont_filter=True)
```
Running the crawler after adding it reports an error; the following is the complete log:
```
2022-12-07 18:09:03 [scrapy.core.engine] INFO: Spider opened
2022-12-07 18:09:03 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2022-12-07 18:09:03 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2022-12-07 18:09:04 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://www.google.com/> (failed 1 times): [<twisted.python.failure.Failure OpenSSL.SSL.Error: [('STORE routines', '', 'unregistered scheme'), ('STORE routines', '', 'unsupported'), ('STORE routines', '', 'unregistered scheme'), ('system library', '', ''), ('STORE routines', '', 'unregistered scheme'), ('STORE routines', '', 'unsupported'), ('STORE routines', '', 'unregistered scheme'), ('system library', '', ''), ('STORE routines', '', 'unregistered scheme'), ('STORE routines', '', 'unsupported'), ('STORE routines', '', 'unregistered scheme'), ('system library', '', ''), ('STORE routines', '', 'unregistered scheme'), ('STORE routines', '', 'unsupported'), ('SSL routines', '', 'certificate verify failed')]>]
2022-12-07 18:09:05 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://www.google.com/> (failed 2 times): [<twisted.python.failure.Failure OpenSSL.SSL.Error: [('STORE routines', '', 'unregistered scheme'), ('STORE routines', '', 'unsupported'), ('STORE routines', '', 'unregistered scheme'), ('system library', '', ''), ('STORE routines', '', 'unregistered scheme'), ('STORE routines', '', 'unsupported'), ('STORE routines', '', 'unregistered scheme'), ('system library', '', ''), ('STORE routines', '', 'unregistered scheme'), ('STORE routines', '', 'unsupported'), ('STORE routines', '', 'unregistered scheme'), ('system library', '', ''), ('STORE routines', '', 'unregistered scheme'), ('STORE routines', '', 'unsupported'), ('SSL routines', '', 'certificate verify failed')]>]
2022-12-07 18:09:06 [scrapy.downloadermiddlewares.retry] ERROR: Gave up retrying <GET https://www.google.com/> (failed 3 times): [<twisted.python.failure.Failure OpenSSL.SSL.Error: [('STORE routines', '', 'unregistered scheme'), ('STORE routines', '', 'unsupported'), ('STORE routines', '', 'unregistered scheme'), ('system library', '', ''), ('STORE routines', '', 'unregistered scheme'), ('STORE routines', '', 'unsupported'), ('STORE routines', '', 'unregistered scheme'), ('system library', '', ''), ('STORE routines', '', 'unregistered scheme'), ('STORE routines', '', 'unsupported'), ('STORE routines', '', 'unregistered scheme'), ('system library', '', ''), ('STORE routines', '', 'unregistered scheme'), ('STORE routines', '', 'unsupported'), ('SSL routines', '', 'certificate verify failed')]>]
2022-12-07 18:09:06 [scrapy.core.scraper] ERROR: Error downloading <GET https://www.google.com/>
Traceback (most recent call last):
File "C:\Users\00000\AppData\Local\Programs\Python\Python39\lib\site-packages\scrapy\core\downloader\middleware.py", line 49, in process_request
return (yield download_func(request=request, spider=spider))
twisted.web._newclient.ResponseNeverReceived: [<twisted.python.failure.Failure OpenSSL.SSL.Error: [('STORE routines', '', 'unregistered scheme'), ('STORE routines', '', 'unsupported'), ('STORE routines', '', 'unregistered scheme'), ('system library', '', ''), ('STORE routines', '', 'unregistered scheme'), ('STORE routines', '', 'unsupported'), ('STORE routines', '', 'unregistered scheme'), ('system library', '', ''), ('STORE routines', '', 'unregistered scheme'), ('STORE routines', '', 'unsupported'), ('STORE routines', '', 'unregistered scheme'), ('system library', '', ''), ('STORE routines', '', 'unregistered scheme'), ('STORE routines', '', 'unsupported'), ('SSL routines', '', 'certificate verify failed')]>]
2022-12-07 18:09:06 [scrapy.core.engine] INFO: Closing spider (finished)
```
|
closed
|
2022-12-07T12:29:00Z
|
2023-10-25T15:49:42Z
|
https://github.com/scrapy/scrapy/issues/5747
|
[] |
Akise
| 8
|
d2l-ai/d2l-en
|
tensorflow
| 2,306
|
potential issue in bi-rnn
|
```python
@d2l.add_to_class(BiRNNScratch)
def forward(self, inputs, Hs=None):
    f_H, b_H = Hs if Hs is not None else (None, None)
    f_outputs, f_H = self.f_rnn(inputs, f_H)
    b_outputs, b_H = self.b_rnn(reversed(inputs), b_H)
    outputs = [torch.cat((f, b), -1) for f, b in zip(f_outputs, b_outputs)]  # should b_outputs be reversed?
    return outputs, (f_H, b_H)
```
Here, should we also reverse b_outputs?
Reasoning:
```python
f_outputs = [ t ... T ]
b_outputs = [ T ... t ]
```
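If that reasoning holds, a minimal fix sketch (mine, not the book's code) is to re-reverse the backward outputs before concatenating, so both sequences run from t to T:

```python
outputs = [torch.cat((f, b), -1)
           for f, b in zip(f_outputs, reversed(b_outputs))]
```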
|
closed
|
2022-09-15T01:55:25Z
|
2022-09-15T18:34:19Z
|
https://github.com/d2l-ai/d2l-en/issues/2306
|
[] |
ronnieqt
| 1
|
xorbitsai/xorbits
|
numpy
| 500
|
DOC: supported python version
|
https://doc.xorbits.io/en/latest/getting_started/installation.html
Since we have dropped py 3.7, the doc should be updated as well.
|
closed
|
2023-06-06T09:59:47Z
|
2023-06-21T02:43:18Z
|
https://github.com/xorbitsai/xorbits/issues/500
|
[
"documentation",
"good first issue"
] |
UranusSeven
| 3
|
betodealmeida/shillelagh
|
sqlalchemy
| 58
|
Move `Order` enum to `types`
|
closed
|
2021-07-03T20:25:22Z
|
2021-07-03T20:43:59Z
|
https://github.com/betodealmeida/shillelagh/issues/58
|
[
"good first issue"
] |
betodealmeida
| 0
|
|
tortoise/tortoise-orm
|
asyncio
| 1,768
|
When I initialize the database for the first time, if I have a field in my model with unique=True, then when I set this field's unique to False, it throws a tortoise.exceptions.OperationalError: (1091, "Can't DROP 'idx_tb_tasks_name_d55a80'; check that column/key exists") error
|
**Describe the bug**
When I initialize the database for the first time, if I have a field in my model with unique=True, then when I set this field's unique to False, it throws a `tortoise.exceptions.OperationalError: (1091, "Can't DROP 'idx_tb_tasks_name_d55a80'; check that column/key exists")` error.
I located the problem and it is due to these two reasons:
1. a misjudgment of the unique field in aerich, which has been fixed in this commit https://github.com/tortoise/aerich/commit/e9716538519c07d46a2dc6cba87fd096758c9471 in the dev branch of aerich.
2. the prefixes of the unique index in aerich and tortoise-orm were not consistent, and there were some problems with the timing of the unique handling in the generated sql.
**To Reproduce**
Versions:
MySQL 8.0, tortoise-orm==0.21.7, aerich==0.7.2
Steps to reproduce:
1. Manually modify the aerich source code to apply the fix from the commit above: add the following line below line 462 in aerich.migrate:
`unique = old_data_field.get("unique")`
2. This is my model:
```
class Tasks(Model):
    """Task table"""

    id = fields.IntField(pk=True)
    name = fields.CharField(max_length=100, unique=True, description="Task name")
    status = fields.IntEnumField(enum_type=TaskStatus, description="Task status")

    class Meta:
        table = "tb_tasks"
```
3. Init the db:
```
aerich init -t utils.config.TORTOISE_CONFIG
aerich init-db
```
4. Remove `unique=True` from the `name` field of the model from step 2:
```
class Tasks(Model):
    """Task table"""

    id = fields.IntField(pk=True)
    name = fields.CharField(max_length=100, description="Task name")
    status = fields.IntEnumField(enum_type=TaskStatus, description="Task status")

    class Meta:
        table = "tb_tasks"
```
5. Migrate and upgrade:
```
aerich migrate
aerich upgrade
```
6. It raises an exception:
`tortoise.exceptions.OperationalError: (1091, "Can't DROP 'uid_tb_tasks_name_d55a80'; check that column/key exists")`
|
closed
|
2024-11-14T07:22:51Z
|
2024-11-14T08:32:17Z
|
https://github.com/tortoise/tortoise-orm/issues/1768
|
[] |
gck123
| 0
|
autokey/autokey
|
automation
| 594
|
(CI) don't test the version number when running CI tests on merges
|
I think the reason [this commit](https://github.com/autokey/autokey/runs/3335287095) fails is because merge commits don't have the rest of the git history, so there is no recent git tag to check the version against.
We should add a `skipif` to the pytest and find a way to check if we are testing a merge commit
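A rough sketch of what that could look like on GitHub Actions (the test name is illustrative; `GITHUB_EVENT_NAME` is set by Actions and reports "pull_request" for merge/PR builds):

```python
import os

import pytest

IS_MERGE_BUILD = os.environ.get("GITHUB_EVENT_NAME") == "pull_request"

@pytest.mark.skipif(IS_MERGE_BUILD, reason="merge commits lack the git history/tags the version check needs")
def test_version_matches_git_tag():
    ...
```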
|
open
|
2021-08-15T23:04:38Z
|
2023-08-24T14:41:23Z
|
https://github.com/autokey/autokey/issues/594
|
[
"bug",
"easy fix",
"development"
] |
BlueDrink9
| 1
|
alteryx/featuretools
|
data-science
| 1,794
|
RollingMean does not compute features for rows whose dates are in the future.
|
### Bug/Feature Request Title
RollingMean does not compute features for rows whose dates are in the future.
#### Bug/Feature Request Description
```python
import pandas as pd
import featuretools as ft

dates = pd.concat([pd.Series(pd.date_range("2022-02-11", "2022-02-16")),
                   pd.Series(pd.date_range("2022-02-16", "2022-02-21"))])
df = pd.DataFrame({"date": pd.date_range("2021-10-01", periods=200),
                   "feature": range(200)})
df.index = list(range(20, 20 + df.shape[0]))
df.ww.init()
es = ft.EntitySet()
ltypes = df.ww.logical_types
del df.ww
es.add_dataframe(
    dataframe_name="X",
    dataframe=df,
    index="index",
    make_index=True,
    time_index="date",
    logical_types=ltypes)
features = ft.dfs(entityset=es, target_dataframe_name="X",
                  trans_primitives=[ft.primitives.RollingMean(window_length=3, gap=1, min_periods=3)],
                  max_depth=1, features_only=True)
matrix = ft.calculate_feature_matrix(features, entityset=es)
assert matrix.shape[0] != df.shape[0]
```
If the dates are way off into the future, an error is raised instead:
```python
import pandas as pd
import featuretools as ft
import pytest

dates = pd.concat([pd.Series(pd.date_range("2022-02-11", "2022-02-16")),
                   pd.Series(pd.date_range("2022-02-16", "2022-02-21"))])
df = pd.DataFrame({"date": pd.date_range("2022-10-01", periods=200),
                   "feature": range(200)})
df.index = list(range(20, 20 + df.shape[0]))
df.ww.init()
es = ft.EntitySet()
ltypes = df.ww.logical_types
del df.ww
es.add_dataframe(
    dataframe_name="X",
    dataframe=df,
    index="index",
    make_index=True,
    time_index="date",
    logical_types=ltypes)
features = ft.dfs(entityset=es, target_dataframe_name="X",
                  trans_primitives=[ft.primitives.RollingMean(window_length=3, gap=1, min_periods=3)],
                  max_depth=1, features_only=True)
with pytest.raises(AssertionError, match="0 instance ids provided"):
    matrix = ft.calculate_feature_matrix(features, entityset=es)
```
#### Expected Output
`matrix` should be the same size as `df`.
I think for most problems the dates will not extend really far into the future, but given that the mathematical computation for this primitive is defined for any date range regardless of when that date is, I would expect DFS to always be able to compute features.
Not sure if the other rolling window primitives are affected by this as well.
#### Output of ``featuretools.show_info()``
<details>
Featuretools version: 1.2.0
Featuretools installation directory: /Users/freddy.boulton/opt/miniconda3/envs/evalml/lib/python3.8/site-packages/featuretools
SYSTEM INFO
-----------
python: 3.8.12.final.0
python-bits: 64
OS: Darwin
OS-release: 19.6.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
INSTALLED VERSIONS
------------------
numpy: 1.21.4
pandas: 1.3.4
tqdm: 4.62.3
PyYAML: 5.4
cloudpickle: 2.0.0
dask: 2021.11.1
distributed: 2021.11.1
psutil: 5.8.0
pip: 21.3.1
setuptools: 58.5.3
</details>
|
closed
|
2021-11-22T19:15:39Z
|
2021-11-22T20:53:46Z
|
https://github.com/alteryx/featuretools/issues/1794
|
[] |
freddyaboulton
| 5
|
sqlalchemy/sqlalchemy
|
sqlalchemy
| 11,776
|
Add sqlalchemy.orm.Session.merge_all / delete_all
|
### Describe the use case
`add` has `add_all`, but `merge` doesn't seem to have `merge_all`:
```
self.db_session.merge_all(users)
^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'Session' object has no attribute 'merge_all'
```
### Databases / Backends / Drivers targeted
Postgresql/psycopg2 (but shouldn't be specific to a db)
### Example Use
`session.merge_all([model1, model2])`
### Additional context
The workaround is to loop over the models list, but it might be less efficient (because the `UPDATE`/`INSERT` statements might not be batched enough).
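For reference, the loop workaround looks roughly like this (`users` and `session` are assumed from the snippet above):

```python
# Merge each instance individually -- one round trip of SELECT/UPDATE per object.
for user in users:
    session.merge(user)
session.commit()
```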
|
closed
|
2024-08-22T11:16:15Z
|
2025-02-05T19:06:03Z
|
https://github.com/sqlalchemy/sqlalchemy/issues/11776
|
[
"orm",
"use case"
] |
Et7f3
| 9
|
dropbox/PyHive
|
sqlalchemy
| 339
|
I am trying to add HTTP capabilities with a signed auth header for my team in Oracle Cloud. Can I get some pointers?
|
open
|
2020-06-25T05:19:47Z
|
2020-06-25T05:20:12Z
|
https://github.com/dropbox/PyHive/issues/339
|
[] |
satyamsah
| 0
|
|
slackapi/bolt-python
|
fastapi
| 269
|
None values from Slack event
|
I have an event listener:
```
@app.event("message")
def message(payload):
event = payload.get("event", {})
channel_id = event.get("channel")
user_id = event.get("user")
text = event.get("text")
user_id = event.get("user")
print(user_id,text,channel_id)
```
When I type something in Slack, I'm getting None values from the event. Why?
|
closed
|
2021-03-25T18:15:20Z
|
2021-03-25T20:05:46Z
|
https://github.com/slackapi/bolt-python/issues/269
|
[
"question"
] |
vasiliy-grinko
| 1
|
ultralytics/ultralytics
|
machine-learning
| 19,685
|
How to change bounding box color?
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I need to change the bbox color. It seems there is no argument for color when performing a detection.
Is there a built-in function to do so?
I want to be able to see both the labels and the bbox in the desired color.
Thanks a lot.
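In case it helps: a sketch (not an official Ultralytics API) that draws the boxes and labels manually in a custom color using the documented `Results` fields; the model and image paths are placeholders:

```python
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")        # placeholder weights
results = model("image.jpg")      # placeholder image

img = results[0].orig_img.copy()  # original image as a BGR numpy array
color = (0, 0, 255)               # BGR: red
for box in results[0].boxes:
    x1, y1, x2, y2 = map(int, box.xyxy[0])
    label = f"{model.names[int(box.cls)]} {float(box.conf):.2f}"
    cv2.rectangle(img, (x1, y1), (x2, y2), color, 2)
    cv2.putText(img, label, (x1, y1 - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 1)
cv2.imwrite("out.jpg", img)
```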
### Additional
_No response_
|
open
|
2025-03-13T17:51:17Z
|
2025-03-15T10:38:20Z
|
https://github.com/ultralytics/ultralytics/issues/19685
|
[
"question",
"detect"
] |
sipov
| 8
|
hootnot/oanda-api-v20
|
rest-api
| 96
|
Jupyter exampleAuth error
|
I'm getting an error on the `exampleauth` import in this code from accounts.ipynb:
```
import json
import oandapyV20
import oandapyV20.endpoints.accounts as accounts
from exampleauth import exampleauth
accountID, access_token = exampleauth.exampleAuth()
client = oandapyV20.API(access_token=access_token)
r = accounts.AccountList()
response = client.request(r)
print(json.dumps(response, indent=2))
```
> ---------------------------------------------------------------------------
> ModuleNotFoundError Traceback (most recent call last)
> <ipython-input-1-ad842502edc9> in <module>()
> 2 import oandapyV20
> 3 import oandapyV20.endpoints.accounts as accounts
> ----> 4 from exampleauth import exampleauth
> 5
> 6 accountID, access_token = exampleauth.exampleAuth()
>
> ModuleNotFoundError: No module named 'exampleauth'
>
|
closed
|
2017-09-10T01:59:52Z
|
2017-09-10T09:31:24Z
|
https://github.com/hootnot/oanda-api-v20/issues/96
|
[] |
alexismanalo
| 1
|
ageitgey/face_recognition
|
python
| 623
|
Error while running "pip install dlib"
|
* face_recognition version:
* Python version:
* Operating System:
### Description
Describe what you were trying to get done.
Tell us what happened, what went wrong, and what you expected to happen.
IMPORTANT: If your issue is related to a specific picture, include it so others can reproduce the issue.
### What I Did
```
Paste the command(s) you ran and the output.
If there was a crash, please include the traceback here.
```
|
closed
|
2018-09-19T05:14:29Z
|
2018-10-08T16:39:04Z
|
https://github.com/ageitgey/face_recognition/issues/623
|
[] |
Keshri99
| 2
|
littlecodersh/ItChat
|
api
| 136
|
Syntax error reported when running the README code
|
```
#coding=utf8
import itchat, time
from itchat.content import *
@itchat.msg_register([TEXT, MAP, CARD, NOTE, SHARING])
def text_reply(msg):
    itchat.send('%s: %s' % (msg['Type'], msg['Text']), msg['FromUserName'])

@itchat.msg_register([PICTURE, RECORDING, ATTACHMENT, VIDEO])
def download_files(msg):
    msg['Text'](msg['FileName'])
    return '@%s@%s' % ({'Picture': 'img', 'Video': 'vid'}.get(msg['Type'], 'fil'), msg['FileName'])

@itchat.msg_register(FRIENDS)
def add_friend(msg):
    itchat.add_friend(**msg['Text'])  # This operation automatically records the new friend's messages; no need to reload the contact list
    itchat.send_msg('Nice to meet you!', msg['RecommendInfo']['UserName'])

@itchat.msg_register(TEXT, isGroupChat=True)
def text_reply(msg):
    if msg['isAt']:
        itchat.send(u'@%s\u2005I received: %s' % (msg['ActualNickName'], msg['Content']), msg['FromUserName'])

itchat.auto_login(True)
itchat.run()
```
```
Traceback (most recent call last):
File "wx.py", line 2, in <module>
import itchat, time
File "/usr/lib/python2.6/site-packages/itchat/__init__.py", line 3, in <module>
from .client import client
File "/usr/lib/python2.6/site-packages/itchat/client.py", line 434
cookiesList = {name:data for name,data in self.s.cookies.items()}
^
SyntaxError: invalid syntax
```
|
closed
|
2016-11-01T13:37:49Z
|
2016-11-02T02:56:01Z
|
https://github.com/littlecodersh/ItChat/issues/136
|
[] |
chijiao
| 1
|
flairNLP/flair
|
pytorch
| 3,428
|
[Bug]: Error message: "learning rate too small - quitting training!"
|
### Describe the bug
Model training quits after epoch 1 with a "learning rate too small - quitting training!" error message even though the "patience" parameter is set to 10.
### To Reproduce
```python
# In Google Colab:
!pip install flair -qq

import os
from os import mkdir, listdir
from os.path import join, exists
import re
from torch.optim.adam import Adam
from flair.datasets import CSVClassificationCorpus
from flair.data import Corpus, Sentence
from flair.embeddings import TransformerDocumentEmbeddings
from flair.models import TextClassifier
from flair.trainers import ModelTrainer

for embedding in ["distilbert-base-uncased"]:
    print("Training on", embedding)
    # 1a. define the column format indicating which columns contain the text and labels
    column_name_map = {1: "text", 2: "label"}
    # 1b. load the preprocessed training, development, and test sets
    corpus: Corpus = CSVClassificationCorpus(processed_dir,
                                             column_name_map,
                                             label_type="label",
                                             skip_header=True,
                                             delimiter='\t')
    # 2. create the label dictionary
    label_dict = corpus.make_label_dictionary(label_type="label")
    # 3. initialize the transformer document embeddings
    document_embeddings = TransformerDocumentEmbeddings(embedding,
                                                        fine_tune=True,
                                                        layers="all")
    #document_embeddings.tokenizer.pad_token = document_embeddings.tokenizer.eos_token
    # 4. create the text classifier
    classifier = TextClassifier(document_embeddings,
                                label_dictionary=label_dict,
                                label_type="label")
    # 5. initialize the trainer
    trainer = ModelTrainer(classifier, corpus)
    # 6. start the training
    trainer.train('model/' + embedding,
                  learning_rate=1e-5,
                  mini_batch_size=8,
                  max_epochs=3,
                  patience=10,
                  optimizer=Adam,
                  train_with_dev=False,
                  save_final_model=False)
```
### Expected behavior
In this case, the model should be trained for 3 epochs without reducing the learning rate. In prior cases, even when a learning rate of 1e-5 was reduced by an anneal factor of 0.5, I did not receive a "learning rate too small - quitting training!" error message.
### Logs and Stack traces
```stacktrace
2024-03-18 14:11:51,783 ----------------------------------------------------------------------------------------------------
2024-03-18 14:11:51,786 Model: "TextClassifier(
(embeddings): TransformerDocumentEmbeddings(
(model): DistilBertModel(
(embeddings): Embeddings(
(word_embeddings): Embedding(30523, 768)
(position_embeddings): Embedding(512, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(transformer): Transformer(
(layer): ModuleList(
(0-5): 6 x TransformerBlock(
(attention): MultiHeadSelfAttention(
(dropout): Dropout(p=0.1, inplace=False)
(q_lin): Linear(in_features=768, out_features=768, bias=True)
(k_lin): Linear(in_features=768, out_features=768, bias=True)
(v_lin): Linear(in_features=768, out_features=768, bias=True)
(out_lin): Linear(in_features=768, out_features=768, bias=True)
)
(sa_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(ffn): FFN(
(dropout): Dropout(p=0.1, inplace=False)
(lin1): Linear(in_features=768, out_features=3072, bias=True)
(lin2): Linear(in_features=3072, out_features=768, bias=True)
(activation): GELUActivation()
)
(output_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
)
)
)
)
(decoder): Linear(in_features=5376, out_features=2, bias=True)
(dropout): Dropout(p=0.0, inplace=False)
(locked_dropout): LockedDropout(p=0.0)
(word_dropout): WordDropout(p=0.0)
(loss_function): CrossEntropyLoss()
(weights): None
(weight_tensor) None
)"
2024-03-18 14:11:51,787 ----------------------------------------------------------------------------------------------------
2024-03-18 14:11:51,789 Corpus: 8800 train + 2200 dev + 2200 test sentences
2024-03-18 14:11:51,793 ----------------------------------------------------------------------------------------------------
2024-03-18 14:11:51,794 Train: 8800 sentences
2024-03-18 14:11:51,795 (train_with_dev=False, train_with_test=False)
2024-03-18 14:11:51,799 ----------------------------------------------------------------------------------------------------
2024-03-18 14:11:51,802 Training Params:
2024-03-18 14:11:51,804 - learning_rate: "1e-05"
2024-03-18 14:11:51,806 - mini_batch_size: "8"
2024-03-18 14:11:51,807 - max_epochs: "3"
2024-03-18 14:11:51,812 - shuffle: "True"
2024-03-18 14:11:51,813 ----------------------------------------------------------------------------------------------------
2024-03-18 14:11:51,814 Plugins:
2024-03-18 14:11:51,816 - AnnealOnPlateau | patience: '10', anneal_factor: '0.5', min_learning_rate: '0.0001'
2024-03-18 14:11:51,817 ----------------------------------------------------------------------------------------------------
2024-03-18 14:11:51,818 Final evaluation on model from best epoch (best-model.pt)
2024-03-18 14:11:51,820 - metric: "('micro avg', 'f1-score')"
2024-03-18 14:11:51,821 ----------------------------------------------------------------------------------------------------
2024-03-18 14:11:51,823 Computation:
2024-03-18 14:11:51,825 - compute on device: cuda:0
2024-03-18 14:11:51,835 - embedding storage: cpu
2024-03-18 14:11:51,836 ----------------------------------------------------------------------------------------------------
2024-03-18 14:11:51,837 Model training base path: "model/distilbert-base-uncased"
2024-03-18 14:11:51,840 ----------------------------------------------------------------------------------------------------
2024-03-18 14:11:51,846 ----------------------------------------------------------------------------------------------------
2024-03-18 14:11:55,845 epoch 1 - iter 110/1100 - loss 0.57600509 - time (sec): 4.00 - samples/sec: 220.19 - lr: 0.000010 - momentum: 0.000000
2024-03-18 14:11:58,978 epoch 1 - iter 220/1100 - loss 0.50393908 - time (sec): 7.13 - samples/sec: 246.84 - lr: 0.000010 - momentum: 0.000000
2024-03-18 14:12:01,876 epoch 1 - iter 330/1100 - loss 0.46954644 - time (sec): 10.03 - samples/sec: 263.27 - lr: 0.000010 - momentum: 0.000000
2024-03-18 14:12:05,276 epoch 1 - iter 440/1100 - loss 0.44181235 - time (sec): 13.43 - samples/sec: 262.14 - lr: 0.000010 - momentum: 0.000000
2024-03-18 14:12:08,456 epoch 1 - iter 550/1100 - loss 0.41807515 - time (sec): 16.61 - samples/sec: 264.93 - lr: 0.000010 - momentum: 0.000000
2024-03-18 14:12:11,447 epoch 1 - iter 660/1100 - loss 0.40403758 - time (sec): 19.60 - samples/sec: 269.41 - lr: 0.000010 - momentum: 0.000000
2024-03-18 14:12:14,420 epoch 1 - iter 770/1100 - loss 0.38948912 - time (sec): 22.57 - samples/sec: 272.91 - lr: 0.000010 - momentum: 0.000000
2024-03-18 14:12:17,914 epoch 1 - iter 880/1100 - loss 0.38118810 - time (sec): 26.07 - samples/sec: 270.09 - lr: 0.000010 - momentum: 0.000000
2024-03-18 14:12:21,085 epoch 1 - iter 990/1100 - loss 0.37110791 - time (sec): 29.24 - samples/sec: 270.89 - lr: 0.000010 - momentum: 0.000000
2024-03-18 14:12:24,027 epoch 1 - iter 1100/1100 - loss 0.36139164 - time (sec): 32.18 - samples/sec: 273.47 - lr: 0.000010 - momentum: 0.000000
2024-03-18 14:12:24,030 ----------------------------------------------------------------------------------------------------
2024-03-18 14:12:24,032 EPOCH 1 done: loss 0.3614 - lr: 0.000010
2024-03-18 14:12:28,158 DEV : loss 0.28874295949935913 - f1-score (micro avg) 0.9095
2024-03-18 14:12:29,719 - 0 epochs without improvement
2024-03-18 14:12:29,721 ----------------------------------------------------------------------------------------------------
2024-03-18 14:12:29,723 learning rate too small - quitting training!
2024-03-18 14:12:29,725 ----------------------------------------------------------------------------------------------------
2024-03-18 14:12:29,727 Done.
2024-03-18 14:12:29,729 ----------------------------------------------------------------------------------------------------
2024-03-18 14:12:29,733 Testing using last state of model ...
2024-03-18 14:12:33,651
Results:
- F-score (micro) 0.9132
- F-score (macro) 0.9029
- Accuracy 0.9132
By class:
precision recall f1-score support
0 0.9184 0.9511 0.9345 1432
1 0.9024 0.8424 0.8714 768
accuracy 0.9132 2200
macro avg 0.9104 0.8968 0.9029 2200
weighted avg 0.9128 0.9132 0.9125 2200
2024-03-18 14:12:33,653 ----------------------------------------------------------------------------------------------------
```
### Screenshots
_No response_
### Additional Context
_No response_
### Environment
#### Versions:
##### Flair
0.13.1
##### Pytorch
2.2.1+cu121
##### Transformers
4.38.2
#### GPU
True
|
closed
|
2024-03-18T14:58:03Z
|
2024-03-18T16:14:55Z
|
https://github.com/flairNLP/flair/issues/3428
|
[
"bug"
] |
azkgit
| 1
|
s3rius/FastAPI-template
|
asyncio
| 22
|
Fix postgres error in docker-compose.
|
We need to change the healthcheck command.
Because the healthcheck command is incorrect, postgres constantly writes "FATAL: role "root" does not exist" to the log.
It must be fixed by providing a different healthcheck command.
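A plausible fix (an assumption on my part, not verified against this template) is to run the check as the configured database user, e.g. `pg_isready -U $POSTGRES_USER -d $POSTGRES_DB`, so that it doesn't default to `root`.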
|
closed
|
2021-09-30T22:43:15Z
|
2021-10-01T10:25:29Z
|
https://github.com/s3rius/FastAPI-template/issues/22
|
[] |
s3rius
| 0
|
joeyespo/grip
|
flask
| 298
|
TypeError: required field "type_ignores" missing from Module
|
I have tried uninstalling and re-installing grip. I believe this may have to do with a bug in [Python 3.8](https://github.com/beetbox/beets/pull/3202), but I don't see a way to change what version of Python grip will use (and I have the latest version of Python that Cygwin can give me: `python3.8 --version: Python 3.8.0a3`; default python: `python --version: Python 2.7.16`).
```
$ grip temp.md --export temp.html
Exporting to temp.html
Traceback (most recent call last):
File "/usr/bin/grip", line 10, in <module>
sys.exit(main())
File "/usr/lib/python3.8/site-packages/grip/command.py", line 107, in main
export(args['<path>'], args['--user-content'], args['--context'],
File "/usr/lib/python3.8/site-packages/grip/api.py", line 116, in export
page = render_page(path, user_content, context, username, password,
File "/usr/lib/python3.8/site-packages/grip/api.py", line 80, in render_page
return create_app(path, user_content, context, username, password,
File "/usr/lib/python3.8/site-packages/grip/api.py", line 45, in create_app
return grip_class(source, auth, renderer, None, render_wide,
File "/usr/lib/python3.8/site-packages/grip/app.py", line 70, in __init__
super(Grip, self).__init__(
File "/usr/lib/python3.8/site-packages/flask/app.py", line 558, in __init__
self.add_url_rule(
File "/usr/lib/python3.8/site-packages/flask/app.py", line 66, in wrapper_func
return f(self, *args, **kwargs)
File "/usr/lib/python3.8/site-packages/flask/app.py", line 1216, in add_url_rule
self.url_map.add(rule)
File "/usr/lib/python3.8/site-packages/werkzeug/routing.py", line 1388, in add
rule.bind(self)
File "/usr/lib/python3.8/site-packages/werkzeug/routing.py", line 730, in bind
self.compile()
File "/usr/lib/python3.8/site-packages/werkzeug/routing.py", line 794, in compile
self._build = self._compile_builder(False).__get__(self, None)
File "/usr/lib/python3.8/site-packages/werkzeug/routing.py", line 951, in _compile_builder
code = compile(module, "<werkzeug routing>", "exec")
TypeError: required field "type_ignores" missing from Module
```
|
open
|
2019-05-18T05:21:22Z
|
2019-10-25T08:11:45Z
|
https://github.com/joeyespo/grip/issues/298
|
[
"bug"
] |
robertmarkbram
| 2
|
qubvel-org/segmentation_models.pytorch
|
computer-vision
| 116
|
How to use the legendary Unet?
|
Hello, first of all thanks for an awesome library.
I'm confused about how to use the legendary Unet.
The default encoder_name of the class `smp.Unet` is "resnet34", which means when I create a model with `smp.Unet()`, the backbone will be resnet34. So how can I create the legendary (original) Unet model?

|
closed
|
2019-12-11T07:56:50Z
|
2019-12-14T12:13:49Z
|
https://github.com/qubvel-org/segmentation_models.pytorch/issues/116
|
[] |
Legolas970424
| 1
|
arnaudmiribel/streamlit-extras
|
streamlit
| 220
|
🐛 [BUG] - Hardcoded main script path in switch_page_button extra
|
### Description
The main script path is hardcoded in `switch_page_button/__init__.py`
<img width="677" alt="Screenshot 2024-02-21 at 14 18 08" src="https://github.com/arnaudmiribel/streamlit-extras/assets/33152751/2bc0e22a-b3d8-4a6c-a4fe-fb881623413b">
It will always lead to a `RerunException` if the main script is renamed. Page switching still works; however, the exception raising can lead to problems in error handling. For example, it will not work if the `switch_page` call is part of a try/except block.
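A minimal sketch of the failure mode described (the page name is illustrative):

```python
from streamlit_extras.switch_page_button import switch_page

try:
    switch_page("home")
except Exception:
    # switch_page raises RerunException internally to trigger the page switch;
    # a broad except swallows it, so the switch silently never happens.
    pass
```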
#### I already have a proposal to fix this and I'd love to contribute, however I'm not sure how to create a pull request 😄
### Reproduction steps
```bash
1. Rename script
2. wrap switch_page in try/except block
3. Press switch page button
4. Nothing will happen
```
### Version of streamlit
1.30.0
### Version of streamlit-extras
0.4.0
|
closed
|
2024-02-21T13:25:32Z
|
2024-07-31T12:20:31Z
|
https://github.com/arnaudmiribel/streamlit-extras/issues/220
|
[
"bug"
] |
ivaniliash
| 3
|
ni1o1/transbigdata
|
data-visualization
| 11
|
[JOSS] Language of status and error messages
|
I've been going through `Example 1-Taxi GPS data processing.ipynb` and found that some messages are in Chinese:


Considering a potentially global user base, I think the messages should be in English.
xref: [openjournals/joss-reviews#4021](https://github.com/openjournals/joss-reviews/issues/4021)
|
closed
|
2022-01-08T19:16:23Z
|
2022-02-20T16:12:22Z
|
https://github.com/ni1o1/transbigdata/issues/11
|
[] |
anitagraser
| 3
|
mljar/mljar-supervised
|
scikit-learn
| 569
|
Cross validation on imbalanced small dataset causes error
|
Hi.
I am using a small dataset of 120 rows for binary classification, which is also imbalanced (103:17).
When I use cross-validation, the confusion matrix and out-of-fold predictions show that there are 123 data points instead of 120.
It seems to think there are 20 cases of the minority class.
I have been able to reproduce this issue with this attached dummy dataset.
Interestingly, if I change the distribution to, say, 90:30, this issue doesn't happen.
What do you think is going on?
[dummy.csv](https://github.com/mljar/mljar-supervised/files/9566565/dummy.csv)
|
closed
|
2022-09-14T13:11:46Z
|
2022-09-14T13:55:23Z
|
https://github.com/mljar/mljar-supervised/issues/569
|
[] |
dracar2s
| 5
|
vaexio/vaex
|
data-science
| 2,455
|
[BUG-REPORT]: 'TypeError: Expected Array, got <class 'pyarrow.lib.ChunkedArray'>'
|
I have code in which I am using applyInPandas along with a UDF function that processes two dataframes: one on which the groupby is applied, and another that is passed as a parameter to the UDF function.
Whenever I run the function on a smaller dataset, say around ~200k records, it runs smoothly and finishes within an hour.
But when the data size increases to >800k records, it throws the following error.
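For context, the pattern described is roughly the following (a sketch with illustrative names -- `big_df`, `"key"`, and the schema are assumptions, not the actual code):

```python
import pandas as pd

def process_group(pdf: pd.DataFrame) -> pd.DataFrame:
    # per-group logic that also consults a second dataframe passed to the UDF
    return pdf

result = big_df.groupBy("key").applyInPandas(process_group, schema=big_df.schema)
```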
**Error Description**
"Caused by: org.apache.spark.api.python.PythonException: 'TypeError: Expected Array, got <class 'pyarrow.lib.ChunkedArray'>'. Full traceback below: Traceback (most recent call last): File "pyarrow/array.pxi", line 2377, in pyarrow.lib.StructArray.from_arrays TypeError: Expected Array, got <class 'pyarrow.lib.ChunkedArray'>"
**Description**
I am running the above code on Databricks
**Software information**
- Databricks (15.4 ML, included Apache Spark 3.5.0)
- Python version - 3.11
**Additional information**
Below are the spark configurations in the cluster
spark.databricks.service.server.enabled true
spark.databricks.service.port 15001
spark.databricks.delta.preview.enabled true
There is not much content available on this issue. However, something similar was reported on the Apache Spark website, and that issue is still open:
https://statics.teams.cdn.office.net/evergreen-assets/safelinks/1/atp-safelinks.html
|
open
|
2025-02-06T14:18:53Z
|
2025-02-06T14:20:23Z
|
https://github.com/vaexio/vaex/issues/2455
|
[] |
Piyush23Rai
| 0
|
biolab/orange3
|
data-visualization
| 6,667
|
Feature Statistics widget: count, count distinct, same for categorical, etc.
|
**What's your use case?**
Hello. I would like to quickly analyze a dataset before working on it. The Feature Statistics widget would do the job, but sadly I'm missing some information.

**What's your proposed solution?**
- new calculations such as: count, count distinct, standard deviation, shortest and longest values
- apply to all fields when possible (so also categorical)
**Are there any alternative solutions?**
None I know
Best regards,
Simon
|
closed
|
2023-12-06T06:09:08Z
|
2024-11-23T08:51:07Z
|
https://github.com/biolab/orange3/issues/6667
|
[] |
simonaubertbd
| 4
|
marimo-team/marimo
|
data-visualization
| 3,771
|
`mo.ui.table()` renders dataframes and dictionaries differently
|
### Describe the bug
Maybe this isn't technically a bug, but it caught me off guard. In order to hide data types in the table header, I converted my Polars dataframe to a dictionary as instructed [here](https://github.com/marimo-team/marimo/pull/2907). This did hide the data types, but non-primitive data types like structs and lists are not rendered.
<img width="800" alt="Image" src="https://github.com/user-attachments/assets/b0042ba0-4908-4dae-8513-1952fc5186d7" />
### Environment
<details>
```
{
"marimo": "0.11.2",
"OS": "Darwin",
"OS Version": "22.6.0",
"Processor": "i386",
"Python Version": "3.13.1",
"Binaries": {
"Browser": "132.0.6834.160",
"Node": "v21.6.1"
},
"Dependencies": {
"click": "8.1.8",
"docutils": "0.21.2",
"itsdangerous": "2.2.0",
"jedi": "0.19.2",
"markdown": "3.7",
"narwhals": "1.26.0",
"packaging": "24.2",
"psutil": "6.1.1",
"pygments": "2.19.1",
"pymdown-extensions": "10.14.3",
"pyyaml": "6.0.2",
"ruff": "0.9.6",
"starlette": "0.45.3",
"tomlkit": "0.13.2",
"typing-extensions": "missing",
"uvicorn": "0.34.0",
"websockets": "14.2"
},
"Optional Dependencies": {},
"Experimental Flags": {}
}
```
</details>
### Code to reproduce
```python
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "marimo==0.11.2",
#     "polars==1.22.0",
#     "ruff==0.9.6",
# ]
# ///
import marimo

__generated_with = "0.11.2"
app = marimo.App(width="medium")


@app.cell
def _():
    import marimo as mo
    return (mo,)


@app.cell
def _():
    import polars as pl
    return (pl,)


@app.cell
def _(pl):
    test_df = pl.DataFrame(
        {
            "str": ["a", "c"],
            "num": [1, 2],
            "list": [["a, b"], ["c"]],
            "struct": [{"a": 0}, {"a": 1}],
        }
    )
    return (test_df,)


@app.cell
def _(mo, test_df):
    mo.ui.table(test_df)
    return


@app.cell
def _(mo, test_df):
    mo.ui.table(test_df.to_dicts())
    return


if __name__ == "__main__":
    app.run()
```
|
closed
|
2025-02-12T16:40:45Z
|
2025-02-12T19:06:25Z
|
https://github.com/marimo-team/marimo/issues/3771
|
[
"bug"
] |
tare
| 1
|
sktime/sktime
|
data-science
| 8,008
|
[ENH] Add param to `evaluate` to calculate each error metric across cv folds
|
**Is your feature request related to a problem? Please describe.**
`sktime.forecasting.model_evaluation.evaluate` calculates error metrics separately for each cv window. This adds an extra step for the user if we want to calculate error metrics across cv windows.
**Describe the solution you'd like**
`evaluate` can have an additional parameter called `score_across_windows` that defaults to `False`. If `False`, evaluate would have the current behavior. If `True`, then it would add an additional row to the returned dataframe with the true value of each error metric calculated across all cv windows.
**Describe alternatives you've considered**
Some metrics like `MeanAbsolutePercentageError` can be calculated across all windows correctly by averaging the `MeanAbsolutePercentageError` of each window. But other metrics like `RMSE` can't be calculated this way.
**Additional context**
Here is a bit of code that illustrates the problem, and shows what the output might look like for `score_across_windows=True`.
```python
import pandas as pd
from sktime.datasets import load_airline
from sktime.forecasting.model_evaluation import evaluate
from sktime.split import ExpandingWindowSplitter
from sktime.forecasting.naive import NaiveForecaster
from sktime.performance_metrics.forecasting import MeanAbsolutePercentageError, MeanSquaredError
# Load data
y = load_airline()[:24]
forecaster = NaiveForecaster(strategy="mean", sp=3)
cv = ExpandingWindowSplitter(initial_window=12, step_length=6, fh=[1, 2, 3])
######################### MAPE #########################
loss_mape = MeanAbsolutePercentageError()
results_mape = evaluate(forecaster=forecaster, y=y, cv=cv, scoring=loss_mape, return_data=True)
avg_of_mape = results_mape['test_MeanAbsolutePercentageError'].mean()
# Calculate the true MAPE across all cv folds.
y_test = pd.concat([i for i in results_mape['y_test']])
y_pred = pd.concat([i for i in results_mape['y_pred']])
true_mape = loss_mape(y_test, y_pred)
print(f'Average of MAPE: {avg_of_mape:.6f}') # NOTE: These are the same
print(f'True MAPE: {true_mape:.6f}') # NOTE: These are the same
######################### RMSE #########################
loss_rmse = MeanSquaredError(square_root=True)
results_rmse = evaluate(forecaster=forecaster, y=y, cv=cv, scoring=loss_rmse, return_data=True)
avg_of_rmse = results_rmse['test_MeanSquaredError'].mean()
# Calculate the true RMSE across all cv folds.
y_test = pd.concat([i for i in results_rmse['y_test']])
y_pred = pd.concat([i for i in results_rmse['y_pred']])
true_rmse = loss_rmse(y_test, y_pred)
print(f'Average of RMSE: {avg_of_rmse:.6f}') # NOTE: These are not the same
print(f'True RMSE: {true_rmse:.6f}') # NOTE: These are not the same
######################### Suggested Improvement #########################
# This is an example of what the returned dataframe might look like if
# score_across_windows=True
new_row = pd.DataFrame({
    'test_MeanSquaredError': true_rmse,
}, index=['score_across_windows'])
results_suggestion = pd.concat([results_rmse, new_row])
```
|
open
|
2025-03-18T14:21:08Z
|
2025-03-18T19:28:31Z
|
https://github.com/sktime/sktime/issues/8008
|
[
"enhancement"
] |
gbilleyPeco
| 0
|
django-import-export/django-import-export
|
django
| 1,741
|
`get_resource_kwargs` should return kwargs passed
|
**Describe the bug**
```
def get_resource_kwargs(self, request, *args, **kwargs):
    return {}
```
This block will remove any values currently present in `kwargs`, which means data is lost and it is not easy to override.
[source](https://github.com/django-import-export/django-import-export/blob/eabba74bbfe0128411612c064b8e65900ab377ee/import_export/mixins.py#L65)
The correct implementation should be:
```
def get_resource_kwargs(self, request, *args, **kwargs):
    return kwargs
```
There should be tests to prove this fix.
**Versions (please complete the following information):**
- Django Import Export: 3.3.6
|
closed
|
2024-01-19T13:40:38Z
|
2024-01-31T16:17:13Z
|
https://github.com/django-import-export/django-import-export/issues/1741
|
[
"bug"
] |
matthewhegarty
| 4
|
PokeAPI/pokeapi
|
graphql
| 708
|
No area data in Generation 7
|
There are no areas in Gen 7; even the encounter data for Gen 7 Pokémon is missing...
|
open
|
2022-04-04T12:58:14Z
|
2022-04-04T18:56:17Z
|
https://github.com/PokeAPI/pokeapi/issues/708
|
[] |
PolarsBear
| 1
|
ultralytics/ultralytics
|
pytorch
| 19,797
|
Why was the YOLOv8n-cls summary called twice?
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Why is the YOLOv8n-cls summary printed twice? Which one should I take as authoritative?
D:\anaconda\envs\YOLO\python.exe F:/HAM/YOLO+HAM.py
**_YOLOv8n-cls summary: 99 layers, 2,719,288 parameters, 2,719,288 gradients, 4.4 GFLOPs_**
New https://pypi.org/project/ultralytics/8.3.93 available 😃 Update with 'pip install -U ultralytics'
Ultralytics YOLOv8.2.81 🚀 Python-3.8.19 torch-2.0.1 CUDA:0 (NVIDIA GeForce RTX 4090, 24563MiB)
engine\trainer: task=classify, mode=train, model=yolov8n-cls.yaml, data=cifar100, epochs=200, time=None, patience=100, batch=200, imgsz=32, save=True, save_period=-1, cache=False, device=None, workers=8, project=None, name=train22, exist_ok=False, pretrained=True, optimizer=auto, verbose=True, seed=0, deterministic=True, single_cls=False, rect=False, cos_lr=False, close_mosaic=10, resume=False, amp=True, fraction=1.0, profile=False, freeze=None, multi_scale=False, overlap_mask=True, mask_ratio=4, dropout=0.0, val=True, split=val, save_json=False, save_hybrid=False, conf=None, iou=0.7, max_det=300, half=False, dnn=False, plots=True, source=None, vid_stride=1, stream_buffer=False, visualize=False, augment=False, agnostic_nms=False, classes=None, retina_masks=False, embed=None, show=False, save_frames=False, save_txt=False, save_conf=False, save_crop=False, show_labels=True, show_conf=True, show_boxes=True, line_width=None, format=torchscript, keras=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=None, workspace=4, nms=False, lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=7.5, cls=0.5, dfl=1.5, pose=12.0, kobj=1.0, label_smoothing=0.0, nbs=64, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, bgr=0.0, mosaic=1.0, mixup=0.0, copy_paste=0.0, auto_augment=randaugment, erasing=0.4, crop_fraction=1.0, cfg=None, tracker=botsort.yaml, save_dir=runs\classify\train22
train: F:\知识蒸馏\datasets\cifar100\train... found 50000 images in 100 classes ✅
val: None...
test: F:\知识蒸馏\datasets\cifar100\test... found 10000 images in 100 classes ✅
Overriding model.yaml nc=1000 with nc=100
from n params module arguments
0 -1 1 464 ultralytics.nn.modules.conv.Conv [3, 16, 3, 2]
1 -1 1 4672 ultralytics.nn.modules.conv.Conv [16, 32, 3, 2]
2 -1 1 7360 ultralytics.nn.modules.block.C2f [32, 32, 1, True]
3 -1 1 18560 ultralytics.nn.modules.conv.Conv [32, 64, 3, 2]
4 -1 2 49664 ultralytics.nn.modules.block.C2f [64, 64, 2, True]
5 -1 1 73984 ultralytics.nn.modules.conv.Conv [64, 128, 3, 2]
6 -1 2 197632 ultralytics.nn.modules.block.C2f [128, 128, 2, True]
7 -1 1 295424 ultralytics.nn.modules.conv.Conv [128, 256, 3, 2]
8 -1 1 460288 ultralytics.nn.modules.block.C2f [256, 256, 1, True]
9 -1 1 458340 ultralytics.nn.modules.head.Classify [256, 100]
**_YOLOv8n-cls summary: 99 layers, 1,566,388 parameters, 1,566,388 gradients, 3.5 GFLOPs_**
### Additional
_No response_
|
open
|
2025-03-20T11:59:30Z
|
2025-03-20T13:30:46Z
|
https://github.com/ultralytics/ultralytics/issues/19797
|
[
"question",
"classify"
] |
cainiao123s
| 2
|
pydantic/pydantic-ai
|
pydantic
| 1,126
|
Support for model Gemini Flash 2.0 Image Generation
|
### Description
model name: "gemini-2.0-flash-exp-image-generation"
gemini api doc: https://ai.google.dev/gemini-api/docs/image-generation#gemini
### References
_No response_
|
open
|
2025-03-15T06:40:35Z
|
2025-03-16T12:22:10Z
|
https://github.com/pydantic/pydantic-ai/issues/1126
|
[
"Feature request"
] |
tranhoangnguyen03
| 1
|
sinaptik-ai/pandas-ai
|
data-visualization
| 1,050
|
No module named 'pygwalker'
|
### System Info
Python version: 3.10
### 🐛 Describe the bug
I’ve got my app working perfectly on my local machine, but when trying to host it on the cloud here on streamlit I’m getting:
ModuleNotFoundError: No module named 'pygwalker'
I absolutely have ‘**pygwalker**’ included in my requirements.txt

Many thanks for your help.
|
closed
|
2024-03-19T05:15:22Z
|
2024-07-05T16:06:46Z
|
https://github.com/sinaptik-ai/pandas-ai/issues/1050
|
[] |
yifvs
| 1
|
pytorch/vision
|
machine-learning
| 8,699
|
[Feature Request] PadToSquare: Square Padding to Preserve Aspect Ratios When Resizing Images with Varied Shapes in torchvision.transforms.v2
|
### 🚀 The feature
A new transform class, PadToSquare, that pads non-square images to make them square by adding padding to the shorter side. Configuration is inspired by `torchvision.transforms.v2.Pad`. Note that positional argument `size` is dropped since we calculate the target size based on the non-square image we want to square pad. This feature would be beneficial in situations where square inputs are required for downstream models or processes, and it simplifies the pipeline by embedding this transformation within torchvision.transforms.v2.
Case 1 (Width > Height):


Case 2: Height > Width:


Case 3: Height == Width:
Nothing changes :-)
Image Sources: [VOC2012](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/index.html#data)
### Motivation, pitch
I’m working on a multi-label classification project that requires images to be square, but the input dataset has a variety of shapes and aspect ratios. PadSquare would streamline the preprocessing pipeline by automatically resizing these images to square while allowing flexible padding modes. This avoids distortions when resizing further and simplifies handling various image shapes. This feature request is based on the need to make square inputs straightforward and robust with consistent padding.
### Alternatives
I have considered using existing padding methods within torchvision, but they require additional logic to conditionally apply padding only to the shorter side, making the code less modular, e.g. [as demonstrated in this discussion](https://discuss.pytorch.org/t/how-to-resize-and-pad-in-a-torchvision-transforms-compose/71850). Current alternatives involve manually calculating padding and applying it to achieve square shapes. By having a dedicated PadSquare transform, it would streamline this common operation into a more reusable and convenient utility.
### Additional context
The PadSquare class uses the _get_params method to calculate the necessary padding values, ensuring the padded image is centered. It also supports multiple padding modes and allows for a specified fill value when using 'constant' mode. It would enhance the versatility of torchvision.transforms.v2 by providing a reusable utility for data preprocessing. Let me know what you think of it! :-)
### Initial Implementation
My initial implementation of `PadSquare` is inspired by the implementation of [Pad](https://github.com/pytorch/vision/blob/main/torchvision/transforms/v2/_geometry.py#L419).
```python3
from typing import Any, Dict, List, Literal, Union, Type

import torchvision.transforms.v2.functional as F
from torchvision.transforms import v2
from torchvision.transforms.v2._utils import (
    _check_padding_mode_arg,
    _get_fill,
    _setup_fill_arg,
    _FillType,
)


class PadSquare(v2.Transform):
    """Pad a non-square input to make it square by padding the shorter side to match the longer side.

    Args:
        fill (number or tuple or dict, optional): Pixel fill value used when the ``padding_mode`` is constant.
            Default is 0. If a tuple of length 3, it is used to fill R, G, B channels respectively.
            Fill value can be also a dictionary mapping data type to the fill value, e.g.
            ``fill={tv_tensors.Image: 127, tv_tensors.Mask: 0}`` where ``Image`` will be filled with 127 and
            ``Mask`` will be filled with 0.
        padding_mode (str, optional): Type of padding. Should be: constant, edge, reflect or symmetric.
            Default is "constant".

            - constant: pads with a constant value, this value is specified with fill
            - edge: pads with the last value at the edge of the image.
            - reflect: pads with reflection of image without repeating the last value on the edge.
              For example, padding [1, 2, 3, 4] with 2 elements on both sides in reflect mode
              will result in [3, 2, 1, 2, 3, 4, 3, 2]
            - symmetric: pads with reflection of image repeating the last value on the edge.
              For example, padding [1, 2, 3, 4] with 2 elements on both sides in symmetric mode
              will result in [2, 1, 1, 2, 3, 4, 4, 3]

    Example:
        >>> import torch
        >>> from torchvision.transforms.v2 import PadSquare
        >>> rectangular_image = torch.randint(0, 255, (3, 224, 168), dtype=torch.uint8)
        >>> transform = PadSquare(padding_mode='constant', fill=0)
        >>> square_image = transform(rectangular_image)
        >>> print(square_image.size())
        torch.Size([3, 224, 224])
    """

    def __init__(
        self,
        fill: Union[_FillType, Dict[Union[Type, str], _FillType]] = 0,
        padding_mode: Literal["constant", "edge", "reflect", "symmetric"] = "constant",
    ):
        super().__init__()
        _check_padding_mode_arg(padding_mode)
        if padding_mode not in ["constant", "edge", "reflect", "symmetric"]:
            raise ValueError(
                "`padding_mode` must be one of 'constant', 'edge', 'reflect' or 'symmetric'."
            )
        self.padding_mode = padding_mode
        self.fill = _setup_fill_arg(fill)

    def _get_params(self, flat_inputs: List[Any]) -> Dict[str, Any]:
        # Get the original height and width from the inputs
        orig_height, orig_width = v2.query_size(flat_inputs)
        # Find the target size (maximum of height and width)
        target_size = max(orig_height, orig_width)

        if orig_height < target_size:
            # Need to pad height
            pad_height = target_size - orig_height
            pad_top = pad_height // 2
            pad_bottom = pad_height - pad_top
            pad_left = 0
            pad_right = 0
        else:
            # Need to pad width
            pad_width = target_size - orig_width
            pad_left = pad_width // 2
            pad_right = pad_width - pad_left
            pad_top = 0
            pad_bottom = 0

        # The padding needs to be in the format [left, top, right, bottom]
        return dict(padding=[pad_left, pad_top, pad_right, pad_bottom])

    def _transform(self, inpt: Any, params: Dict[str, Any]) -> Any:
        fill = _get_fill(self.fill, type(inpt))
        return self._call_kernel(
            F.pad,
            inpt,
            padding=params["padding"],
            padding_mode=self.padding_mode,
            fill=fill,
        )
```
|
open
|
2024-10-26T14:59:26Z
|
2024-10-28T18:23:59Z
|
https://github.com/pytorch/vision/issues/8699
|
[] |
geezah
| 4
|
biolab/orange3
|
pandas
| 6,787
|
Delete/rename features
|
### What's your use case?
I need to change the name of some features and remove some. A frequent question for which I could not find a specialised widget.
My data is a _Qty_ by _Date_ which needed to be grouped by week. Using a **Formula** I created a _week_ number, followed by a **Group by** to get 3 columns: _week_, _Date-Max. value_ and _Qty-Sum_. For some reason, I had to revert it to the columns of the original table: _Qty_, _Date_.
As I did not find any widget to do this, I used the following workaround:
1) A **Formula** to create _Qty_ := _Qty-Sum_ and _Date_ := _Date-Max. value_;
2) A **Select Columns** to remove the three unwanted columns.
Two steps (and a lot of reasoning and testing the first time) just to rearrange the column labels.
### What's your proposed solution?
I think you could adapt the **Formula** widget so as to only keep wanted columns.
For example:
- Copying by default all the existing features into the report frame;
- Giving each of them a tick box, to say whether it should appear in the output;
- The current _remove_ button can be deleted, because its logic will become unmanageable with the number of columns.
- And to remain nicely compatible with usage in previous versions, replace the said _remove_ button with a check box labeled "Keep all features", checked by default.
### Are there any alternative solutions?
- A totally new widget.
- Adapting the **Select Columns** widget.
- Implementing the functionality on the **Formula** with a different UI.
Although adapting the **Formula** (which after all is a widget for specifying the features) seems the most intuitive from a user's point of view.
|
closed
|
2024-04-23T13:43:17Z
|
2024-04-23T15:17:01Z
|
https://github.com/biolab/orange3/issues/6787
|
[] |
alaindebecker
| 1
|
jofpin/trape
|
flask
| 362
|
Gt
|
open
|
2022-07-11T00:35:05Z
|
2022-07-11T00:46:04Z
|
https://github.com/jofpin/trape/issues/362
|
[] |
Sergo2106
| 1
|
|
unit8co/darts
|
data-science
| 2,689
|
`to_dataframe(backend)` and `to_series(backend)` extension
|
**Is your feature request related to a current problem? Please describe.**
[#2661](https://github.com/unit8co/darts/pull/2661/) makes Darts' user inputs dataframe-agnostic. It would be nice to close the loop and extend the current `TimeSeries.pd_dataframe()` and `TimeSeries.pd_series()` to any dataframe library (pandas, polars, arrow, ...).
**Describe proposed solution**
Two new methods, `to_dataframe(backend)` and `to_series(backend)` where the `backend` is in ["pandas", "polars", "arrow"]. The most elegant solution would be using narwhals (https://github.com/narwhals-dev/narwhals) but handling the `pd.DatetimeIndex` may get a bit tricky!
**Describe potential alternatives**
A first naive approach would be to keep the whole processing in pandas and convert the dataframe to the desired backend.
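A rough sketch of that naive approach (the function name and backend handling are illustrative, not a proposed API):
```python
import pandas as pd


def convert_dataframe(df: pd.DataFrame, backend: str = "pandas"):
    """Convert a pandas frame (as returned today) to the requested backend."""
    if backend == "pandas":
        return df
    if backend == "polars":
        import polars as pl

        # polars has no index, so materialize the pd.DatetimeIndex as a column
        return pl.from_pandas(df.reset_index())
    if backend == "arrow":
        import pyarrow as pa

        return pa.Table.from_pandas(df.reset_index())
    raise ValueError(f"Unsupported backend: {backend}")
```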
|
closed
|
2025-02-20T14:17:12Z
|
2025-03-08T11:25:42Z
|
https://github.com/unit8co/darts/issues/2689
|
[
"improvement"
] |
authierj
| 0
|
TheKevJames/coveralls-python
|
pytest
| 66
|
Subprocess coverage missing
|
With the right configuration and some [monkey patching](https://bitbucket.org/ned/coveragepy/issue/117/enable-coverage-measurement-of-code-run-by), coverage.py supports line coverage in subprocesses and with multiprocessing. However, coveralls-python doesn't seem to report lines covered in subprocesses.
I made the necessary modifications to capture subprocesses in spotify/luigi here: spotify/luigi#827. I'm getting the right coverage with `coverage.py` but not with `coveralls`.
I included the result of `coveralls debug` and `coverage report`. We can see that `coverage` reports higher coverage for `luigi/server.py`, `mrrunner.py`, `luigi/process`.
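(For context, the subprocess measurement itself follows the standard coverage.py recipe: roughly, `parallel = True` in the rc file, `COVERAGE_PROCESS_START` set in the environment, and a `sitecustomize.py` like this on the child interpreter's path.)
```python
# sitecustomize.py -- importable by the subprocess's interpreter.
# coverage.process_startup() begins measurement in every child process
# whose environment has COVERAGE_PROCESS_START pointing at the rc file.
import coverage

coverage.process_startup()
```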
``` bash
$ coveralls debug
Testing coveralls-python...
{500 kb json}
==
Reporting 40 files
==
bin/luigi - 3/7
luigi/configuration.py - 46/114
luigi/contrib/esindex.py - 108/441
luigi/contrib/hive.py - 154/454
luigi/contrib/pig.py - 78/197
luigi/contrib/rdbms.py - 27/125
luigi/contrib/redshift.py - 57/424
luigi/contrib/spark.py - 179/400
luigi/contrib/sqla.py - 106/376
luigi/contrib/ssh.py - 124/264
luigi/contrib/target.py - 31/75
luigi/date_interval.py - 113/272
luigi/db_task_history.py - 96/199
luigi/deprecate_kwarg.py - 13/55
luigi/event.py - 10/31
luigi/file.py - 73/147
luigi/format.py - 275/518
luigi/hadoop.py - 387/940
luigi/hdfs.py - 373/964
luigi/hive.py - 4/28
luigi/interface.py - 195/448
luigi/lock.py - 34/97
luigi/mock.py - 84/170
luigi/notifications.py - 72/193
luigi/parameter.py - 204/556
luigi/postgres.py - 115/346
luigi/process.py - 18/183
luigi/rpc.py - 70/163
luigi/s3.py - 238/561
luigi/scheduler.py - 514/908
luigi/server.py - 82/195
luigi/target.py - 56/241
luigi/task.py - 251/749
luigi/task_history.py - 24/75
luigi/task_status.py - 8/28
luigi/tools/parse_task.py - 18/69
luigi/tools/range.py - 205/510
luigi/util.py - 101/243
luigi/worker.py - 439/787
mrrunner.py - 37/97
```
``` bash
$ coverage report
Name Stmts Miss Cover
--------------------------------------------
bin/luigi 3 0 100%
luigi/configuration 52 6 88%
luigi/contrib/esindex 129 21 84%
luigi/contrib/hive 244 90 63%
luigi/contrib/pig 107 29 73%
luigi/contrib/rdbms 36 9 75%
luigi/contrib/redshift 134 77 43%
luigi/contrib/spark 276 97 65%
luigi/contrib/sqla 112 6 95%
luigi/contrib/ssh 136 12 91%
luigi/contrib/target 31 0 100%
luigi/date_interval 135 22 84%
luigi/db_task_history 109 13 88%
luigi/deprecate_kwarg 15 2 87%
luigi/event 10 0 100%
luigi/file 83 10 88%
luigi/format 314 39 88%
luigi/hadoop 532 145 73%
luigi/hdfs 489 116 76%
luigi/hive 4 0 100%
luigi/interface 202 7 97%
luigi/lock 38 4 89%
luigi/mock 95 11 88%
luigi/notifications 100 28 72%
luigi/parameter 214 10 95%
luigi/postgres 135 20 85%
luigi/process 103 85 17%
luigi/rpc 78 8 90%
luigi/s3 286 48 83%
luigi/scheduler 536 22 96%
luigi/server 98 16 84%
luigi/target 65 9 86%
luigi/task 272 21 92%
luigi/task_history 27 3 89%
luigi/task_status 8 0 100%
luigi/tools/parse_task 18 0 100%
luigi/tools/range 215 10 95%
luigi/util 102 1 99%
luigi/worker 473 34 93%
mrrunner 46 9 80%
--------------------------------------------
TOTAL 6062 1040 83%
```
|
closed
|
2015-03-08T04:43:56Z
|
2015-03-08T11:13:41Z
|
https://github.com/TheKevJames/coveralls-python/issues/66
|
[] |
gpoulin
| 2
|
pydata/bottleneck
|
numpy
| 461
|
[BUG] Version 1.4.1 fails to build with Python 3.8
|
**Describe the bug**
I'm seeing the following error when installing Bottleneck as part of a `pandas[performance]` install:
```
Collecting bottleneck>=1.3.2 (from pandas[performance]~=2.0->-r requirements/core.txt (line 2))
Using cached bottleneck-1.4.1.tar.gz (103 kB)
Installing build dependencies ... error
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> [9 lines of output]
Collecting setuptools
Using cached setuptools-75.1.0-py3-none-any.whl.metadata (6.9 kB)
Collecting versioneer
Using cached versioneer-0.29-py3-none-any.whl.metadata (16 kB)
Collecting wheel
Using cached wheel-0.44.0-py3-none-any.whl.metadata (2.3 kB)
ERROR: Ignored the following versions that require a different python version: 1.25.0 Requires-Python >=3.9; 1.25.1 Requires-Python >=3.9; 1.25.2 Requires-Python >=3.9; 1.26.0 Requires-Python <3.13,>=3.9; 1.26.1 Requires-Python <3.13,>=3.9; 1.26.2 Requires-Python >=3.9; 1.26.3 Requires-Python >=3.9; 1.26.4 Requires-Python >=3.9; 2.0.0 Requires-Python >=3.9; 2.0.1 Requires-Python >=3.9; 2.0.2 Requires-Python >=3.9; 2.1.0 Requires-Python >=3.10; 2.1.0rc1 Requires-Python >=3.10; 2.1.1 Requires-Python >=3.10; 2.1.2 Requires-Python >=3.10
ERROR: Could not find a version that satisfies the requirement numpy<2.3,>=2 (from versions: 1.3.0, 1.4.1, 1.5.0, 1.5.1, 1.6.0, 1.6.1, 1.6.2, 1.7.0, 1.7.1, 1.7.2, 1.8.0, 1.8.1, 1.8.2, 1.9.0, 1.9.1, 1.9.2, 1.9.3, 1.10.0.post2, 1.10.1, 1.10.2, 1.10.4, 1.11.0, 1.11.1, 1.11.2, 1.11.3, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 1.13.3, 1.14.0, 1.14.1, 1.14.2, 1.14.3, 1.14.4, 1.14.5, 1.14.6, 1.15.0, 1.15.1, 1.15.2, 1.15.3, 1.15.4, 1.16.0, 1.16.1, 1.16.2, 1.16.3, 1.16.4, 1.16.5, 1.16.6, 1.17.0, 1.17.1, 1.17.2, 1.17.3, 1.17.4, 1.17.5, 1.18.0, 1.18.1, 1.18.2, 1.18.3, 1.18.4, 1.18.5, 1.19.0, 1.19.1, 1.19.2, 1.19.3, 1.19.4, 1.19.5, 1.20.0, 1.20.1, 1.20.2, 1.20.3, 1.21.0, 1.21.1, 1.21.2, 1.21.3, 1.21.4, 1.21.5, 1.21.6, 1.22.0, 1.22.1, 1.22.2, 1.22.3, 1.22.4, 1.23.0, 1.23.1, 1.23.2, 1.23.3, 1.23.4, 1.23.5, 1.24.0, 1.24.1, 1.24.2, 1.24.3, 1.24.4)
ERROR: No matching distribution found for numpy<2.3,>=2
[end of output]
```
**To Reproduce**
Try executing `pip install pandas[performance]~=2.0` with a system running with Python 3.8. If you pin Bottleneck as `Bottleneck==1.4.0`, then it installs properly.
**Expected behavior**
Bottleneck should either install, or pip should pick a version of Bottleneck that installs.
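A hedged workaround for affected environments is to pin the last compatible release explicitly, e.g. in requirements.txt:
```
# Hypothetical pin for Python 3.8 environments, per the workaround above:
pandas[performance]~=2.0
Bottleneck==1.4.0  # 1.4.1's build requires numpy>=2, which needs Python >=3.9
```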
|
closed
|
2024-10-14T18:00:45Z
|
2024-10-18T11:19:05Z
|
https://github.com/pydata/bottleneck/issues/461
|
[
"bug"
] |
GergelyKalmar
| 3
|
521xueweihan/HelloGitHub
|
python
| 2,722
|
GitHub
|
closed
|
2024-03-31T18:43:44Z
|
2024-04-01T02:32:12Z
|
https://github.com/521xueweihan/HelloGitHub/issues/2722
|
[] |
Dimitrios69
| 0
|
|
FactoryBoy/factory_boy
|
django
| 183
|
Question about @mute_signals with @post_generation function
|
Hi,
I'm experiencing some troublesome issue when using @mute_signals decorator and custom code in a @post_generation function.
That's my current base factory; all other factories inherit from this one so that signals are muted in every one of them:
```
import factory
from django.db.models import signals
from my_app.models import MyModel


@factory.django.mute_signals(signals.pre_save, signals.post_save)
class BaseFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = MyModel
```
Then I have created other factories that are subclasses of BaseFactory, and everything works fine: signals are perfectly muted/restored.
Except for this particular factory:
```
class MyClassWithPostGenerationFactory(BaseFactory):
    @factory.post_generation
    def my_post_generation_function(self, create, extracted, **kwargs):
        AnotherFactory.create(fkey=self)


class AnotherFactory(BaseFactory):
    pass
```
Signals are correctly muted, but are not restored after the `MyClassWithPostGenerationFactory.create()` call.
I have tried to work out what happens in `@mute_signals`, and I discovered that signal receivers are saved in an attribute (`self.paused`), which can be a problem in this nested-factory situation:
1. Entering `MyClassWithPostGenerationFactory`
   - Calling `__enter__()` on `@mute_signals`
   - signals.receivers are saved in `self.paused` => OK (self.paused = {signal: receivers_list, ...})
   - setting `signal.receivers` to [] to mute them => OK (self.paused = {signal: receivers_list}, signal.receivers = [])
2. Entering `AnotherFactory` from `@post_generation`
   - Calling `__enter__()` on `@mute_signals`
   - `signal.receivers` is empty for each signal (from the last step of 1.), and saved in `self.paused` => NOK (self.paused = {})
   - setting `signal.receivers` to [] to mute them => NOK (self.paused = {}, signal.receivers = [])
3. Getting out of `AnotherFactory`
   - Calling `__exit__()` on `@mute_signals`
   - Setting back `signal.receivers` from the saved `self.paused`, which is empty => NOK (self.paused = {}, signal.receivers = [])
4. Getting out of `MyClassWithPostGenerationFactory`
   - Calling `__exit__()` on `@mute_signals`
   - Setting back `signal.receivers` from the saved `self.paused`, which is empty => NOK (self.paused = {}, signal.receivers = [])
But maybe I'm doing something wrong?
Is it a bad idea to call factories in a `@post_generation`?
Currently, I have subclassed the `@mute_signals` decorator and made `self.paused` handle saving depending on the nesting depth, to be sure we are not erasing the parent's saved receivers.
It's working as expected now, but I'm not sure that's the most elegant way to do it. A rough sketch of what I mean is below.
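The subclass looks roughly like this (naming is mine, and it relies on `mute_signals` internals, so treat it as illustrative only; `BaseFactory` would then be decorated with it instead of `@factory.django.mute_signals`):
```python
import factory


class nested_mute_signals(factory.django.mute_signals):
    """Only save/restore receivers at the outermost nesting level."""

    def __init__(self, *muted_signals):
        super().__init__(*muted_signals)
        self._depth = 0

    def __enter__(self):
        if self._depth == 0:
            super().__enter__()  # save receivers and mute
        self._depth += 1

    def __exit__(self, exc_type, exc_value, traceback):
        self._depth -= 1
        if self._depth == 0:
            super().__exit__(exc_type, exc_value, traceback)  # restore receivers
```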
Any advice on this?
|
closed
|
2015-02-10T10:28:12Z
|
2019-01-26T06:39:47Z
|
https://github.com/FactoryBoy/factory_boy/issues/183
|
[] |
romgar
| 1
|
tfranzel/drf-spectacular
|
rest-api
| 428
|
Default Response Code for post is 200?
|
**Describe the bug**
When annotating a `generics.CreateAPIView` or `generics.ListCreateAPIView`, the default response code for POST responses seems to be 200. Should it be 201? I am using the drf-spectacular auto schema and generator.
**To Reproduce**
Create a class that derives from `generics.ListCreateAPIView` and look at the POST method response.
**Expected behavior**
The default response should be 201.
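Until the default changes, a per-view workaround can force the code via `extend_schema`. A sketch (the serializer and model names are hypothetical):
```python
from drf_spectacular.utils import extend_schema
from rest_framework import generics


class ThingListCreateView(generics.ListCreateAPIView):
    serializer_class = ThingSerializer  # hypothetical serializer
    queryset = Thing.objects.all()  # hypothetical model

    @extend_schema(responses={201: ThingSerializer})  # force 201 for POST
    def post(self, request, *args, **kwargs):
        return super().post(request, *args, **kwargs)
```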
|
closed
|
2021-06-11T15:12:53Z
|
2021-07-08T07:25:27Z
|
https://github.com/tfranzel/drf-spectacular/issues/428
|
[
"bug",
"fix confirmation pending"
] |
li-darren
| 4
|
tortoise/tortoise-orm
|
asyncio
| 1,703
|
🐛 Issue: `Model.annotate(...).update(...)` not working
|
### Description
`Model.annotate(...).update(...)` does not work, giving an error.
```python-traceback
tortoise.exceptions.FieldError: There is no non-virtual field bonus_salary on Model Employee
```
### Example code
```python
from tortoise import Tortoise, Model, fields, run_async
from datetime import datetime, timedelta
from tortoise.expressions import Case, When, Q, F


class Employee(Model):
    id = fields.IntField(pk=True)
    name = fields.TextField()
    employed_at = fields.DatetimeField(auto_now_add=True)
    salary = fields.BigIntField()


async def main():
    await Tortoise.init(
        db_url='postgres://username:password@localhost:5432/postgres',
        modules={'models': ['__main__']},
    )
    await Tortoise.generate_schemas()

    bob = await Employee.create(name='Bob', employed_at=datetime.now() - timedelta(days=30), salary=1000)

    # If Bob was employed before yesterday, he gets a 10% bonus
    # Doesn't work
    await bob.filter(id=bob.id).select_for_update().annotate(
        bonus_salary=Case(
            When(Q(employed_at__lt=datetime.now() - timedelta(days=1)), then=F('salary') * 1.1),
            default=F('salary'),
        )
    ).update(salary=F('bonus_salary'))

    await bob.refresh_from_db()
    print(f"NEW SALARY:", bob.salary)  # Bob's salary is now 110% of the original value


run_async(main())
```
#### Traceback
```python-traceback
Traceback (most recent call last):
File "/root/.local/lib/python3.12/site-packages/tortoise/expressions.py", line 53, in resolver_arithmetic_expression
arithmetic_expression_or_field.name = model._meta.fields_db_projection[name]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
KeyError: 'bonus_salary'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/root/tortoise_bug/test.py", line 66, in <module>
run_async(main())
File "/root/.local/lib/python3.12/site-packages/tortoise/__init__.py", line 624, in run_async
loop.run_until_complete(coro)
File "/usr/lib64/python3.12/asyncio/base_events.py", line 687, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/root/tortoise_bug/test.py", line 34, in main
await bob.filter(id=bob.id).select_for_update().annotate(
File "/root/.local/lib/python3.12/site-packages/tortoise/queryset.py", line 1159, in __await__
self._make_query()
File "/root/.local/lib/python3.12/site-packages/tortoise/queryset.py", line 1144, in _make_query
value = F.resolver_arithmetic_expression(self.model, value)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.local/lib/python3.12/site-packages/tortoise/expressions.py", line 65, in resolver_arithmetic_expression
raise FieldError(f"There is no non-virtual field {name} on Model {model.__name__}")
tortoise.exceptions.FieldError: There is no non-virtual field bonus_salary on Model Employee
```
### Workaround (using raw sql with pypika)
<details>
<summary>Click to expand</summary>
```python
import pypika
from pypika.queries import QueryBuilder
from tortoise import Tortoise, Model, fields, run_async
from tortoise.transactions import in_transaction
from datetime import datetime, timedelta
from tortoise.expressions import Case, When, Q, F


class Employee(Model):
    id = fields.IntField(pk=True)
    name = fields.TextField()
    employed_at = fields.DatetimeField(auto_now_add=True)
    salary = fields.BigIntField()


async def main():
    await Tortoise.init(
        db_url='postgres://username:password@localhost:5432/postgres',
        modules={'models': ['__main__']},
    )
    await Tortoise.generate_schemas()

    bob = await Employee.create(name='Bob', employed_at=datetime.now() - timedelta(days=30), salary=1000)

    EmployeePypika = pypika.Table(Employee._meta.db_table)
    where_condition = EmployeePypika.id == bob.id
    select_query = pypika.Query.from_(EmployeePypika).select(EmployeePypika.id).where(where_condition).for_update()
    update_query: QueryBuilder = pypika.Query.update(EmployeePypika).where(where_condition)
    update_query = update_query.set(
        EmployeePypika.salary,
        pypika.Case().when(EmployeePypika.employed_at < datetime.now() - timedelta(days=1), EmployeePypika.salary * 1.1)
        .else_(EmployeePypika.salary),
    )

    print("SELECT:", select_query.get_sql())
    print("UPDATE:", update_query.get_sql())

    async with in_transaction() as conn:
        await conn.execute_query(str(select_query))
        await conn.execute_query(str(update_query))

    await bob.refresh_from_db()
    print(f"NEW SALARY:", bob.salary)  # Bob's salary is now 110% of the original value


run_async(main())
```
</details>
### Expected behavior
Either `update()` should accept `tortoise.expressions.Case()` or it should consider annotated values.
Also, this is an issue with `.annotate().update()` directly, which in itself caused my `Case(...)` use case to fail.
### Additional context
The following snippet also gives an error:
```python
await bob.filter(id=bob.id).select_for_update().update(salary=Case(
    When(Q(employed_at__lt=datetime.now() - timedelta(days=1)), then=F('salary') * 1.1),
    default=F('salary'),
))
```
```python-traceback
TypeError: int() argument must be a string, a bytes-like object or a real number, not 'Case'
```
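Until this is fixed, another workaround is to skip `annotate().update()` entirely and compute the value in Python inside a transaction. A sketch (it ignores timezone subtleties of `employed_at`):
```python
from datetime import datetime, timedelta

from tortoise.transactions import in_transaction


async def apply_bonus(employee_id: int) -> None:
    async with in_transaction():
        # Lock the row, compute the bonus in Python, write it back.
        emp = await Employee.filter(id=employee_id).select_for_update().get()
        if emp.employed_at < datetime.now() - timedelta(days=1):
            emp.salary = int(emp.salary * 1.1)
        await emp.save(update_fields=["salary"])
```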
|
closed
|
2024-09-03T19:15:03Z
|
2024-11-10T09:40:55Z
|
https://github.com/tortoise/tortoise-orm/issues/1703
|
[] |
DavideGalilei
| 0
|
google-research/bert
|
tensorflow
| 1,007
|
Is it possible feed BERT to seq2seq encoder for NMT (for low resource language)?
|
I'm working on an NMT model in which the input and the target sentences are from the same language (but the grammar differs). I'm planning to pre-train and use BERT since I'm working on a small dataset and a low-resource language. So is it possible to feed BERT to the seq2seq encoder/decoder?
|
open
|
2020-02-22T20:52:10Z
|
2020-02-25T19:00:16Z
|
https://github.com/google-research/bert/issues/1007
|
[] |
JohnasSolomon
| 2
|
thtrieu/darkflow
|
tensorflow
| 738
|
GCloud ML Engine
|
I have a custom darknet model and converted it to a .pb model using darkflow. I got an error in gcloud ml engine when I tried to create a model. Any suggestions?
> Create Version failed. Model validation failed: SavedModel must contain exactly one metagraph with tag: serve For more information on how to export Tensorflow SavedModel, see https://www.tensorflow.org/api_docs/python/tf/saved_model.
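A possible fix, sketched for TF 1.x (the paths are hypothetical), is to re-export the frozen graph as a SavedModel carrying the single `serve` tag that ML Engine expects:
```python
import tensorflow as tf

# Load the frozen graph produced by darkflow (path is hypothetical).
graph_def = tf.GraphDef()
with tf.gfile.GFile("built_graph/yolo.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Re-export it as a SavedModel with exactly one metagraph tagged "serve".
with tf.Session(graph=tf.Graph()) as sess:
    tf.import_graph_def(graph_def, name="")
    builder = tf.saved_model.builder.SavedModelBuilder("saved_model/1")
    builder.add_meta_graph_and_variables(sess, [tf.saved_model.tag_constants.SERVING])
    builder.save()
```
ML Engine will additionally want a serving signature (`signature_def_map`) naming the input/output tensors, which depends on the model.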
|
open
|
2018-04-26T11:48:55Z
|
2019-09-05T17:15:06Z
|
https://github.com/thtrieu/darkflow/issues/738
|
[] |
bduman
| 2
|
pyeve/eve
|
flask
| 673
|
Resources resolution does not work if _id is not of type ObjectId
|
If _id is stored as a String, resource lookup does not work.
Description:
http://127.0.0.1:5000/items/item1 works fine if _id is of type ObjectId("item1")
http://127.0.0.1:5000/items/item1 fails with 404, resource not found, if the same item is defined with _id of type String("item1")
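For reference, a sketch of the resource settings one might expect to need for string ids (the resource name and regex are illustrative):
```python
# settings.py -- hypothetical Eve domain config for string _id values.
DOMAIN = {
    'items': {
        # let the item endpoint match arbitrary string ids instead of ObjectIds
        'item_url': 'regex("[a-zA-Z0-9_-]+")',
        'schema': {
            '_id': {'type': 'string'},
        },
    },
}
```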
|
closed
|
2015-07-18T23:16:08Z
|
2015-08-24T07:54:41Z
|
https://github.com/pyeve/eve/issues/673
|
[] |
afedulov
| 1
|
vitalik/django-ninja
|
rest-api
| 1,061
|
[testing] Add `resolver_match` to the mocked request object in the Ninja's TestClient
|
# The problem
Currently, the Ninja `TestClient` does not include `resolver_match` in the mocked request object. That causes an issue with the following setup:
```python
# in schemas.py
from ninja import ModelSchema  # Meta-based model schemas need ModelSchema, not Schema


class EntityUpdateSchema(ModelSchema):
    id: int

    class Meta:
        exclude = ("id",)
        fields_optional = "__all__"
        model = Entity

    @staticmethod
    def resolve_id(obj, context):
        return context["request"].resolver_match.kwargs.get("id")
```
The above schema will infer the `id` property from the path. So, assuming you have the following path:
```python
@router.patch("/entities/{id}/")
def patch_entity(request: HttpRequest, id: int, partial_entity_data: EntityUpdateSchema):
    ...
```
the `EntityUpdateSchema` will be populated with the `id` received from the path. I find this useful for when we need to do some validation against the object and we need to query the object that we're currently working with.
And the good news is - this works perfectly! When I call the endpoint, everything is behaving as it should :chef-kiss:
## So, what's the problem?
When writing tests, though, and using the `TestClient` from Ninja, this is not working at all. Doing `client.patch("/entities/123")` will never populate the schema field, and we will get a validation error back, saying the id field is missing.
That is because `resolver_match` is never set as the property of the mocked request. So, doing `request.resolver_match.kwargs.get("id")` will **always** result in `None`.
# The solution
The solution for this is quite simple - when we build the mock request object, we need to set the `resolver_match` to the ninja resolver that we get back from the URL. By doing that, the schema is again able to infer the id and everything works as it should, even in tests.
I went ahead and created a PR for this: **https://github.com/vitalik/django-ninja/pull/1060**
Using the solution that I've introduced in this PR, everything works as expected when testing views. While I could just create my own TestClient that would inherit from Ninja's test client, and override the _resolve method (which I'm happy to do if for some reason we don't want this in the framework overall), I assume this would be beneficial to others as well, hence the PR 🙂
|
open
|
2024-01-22T08:42:49Z
|
2024-03-07T08:00:50Z
|
https://github.com/vitalik/django-ninja/issues/1061
|
[] |
zigcccc
| 2
|
jupyterhub/repo2docker
|
jupyter
| 1,206
|
Aim for general template variables for the Dockerfile template rendering
|
I think both of these PRs are suggesting we pass a value to repo2docker via a dedicated flag that can override config, which can then influence how a Dockerfile template renders.
- #909
- #957
I'd love to see a more general pattern where we support passing template variables of all kinds, instead of needing to add dedicated flags for each one.
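Purely to illustrate the proposal (these flags do not exist today and the names are hypothetical), the general pattern could look like:
```bash
# Hypothetical, non-existent interface sketching the proposal: arbitrary
# key=value pairs exposed to the Dockerfile template as Jinja variables.
repo2docker --template-var base_image=ubuntu:22.04 \
            --template-var extra_apt_packages=git \
            https://github.com/some/repo
```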
|
open
|
2022-10-31T00:20:05Z
|
2022-10-31T00:20:10Z
|
https://github.com/jupyterhub/repo2docker/issues/1206
|
[
"new"
] |
consideRatio
| 0
|
AutoGPTQ/AutoGPTQ
|
nlp
| 568
|
CUDA Out of Memory when quantizing mistralai/Mistral-7B-Instruct-v0.2
|
## I fine-tuned my model on Mistral-7B-Instruct-v0.2 using QLoRA, then merged it back into the base model (I need to use vLLM). But I always get CUDA out of memory, even on an instance that has 48GB CPU and 48GB GPU. I wonder why the model is loaded on the CPU by default, yet it still says the GPU is out of memory?
```python
import random

import numpy as np
import torch
from datasets import load_dataset
from transformers import TextGenerationPipeline
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

pretrained_model_dir = merge_model_path
quantized_model_dir = "quantized_model"
datapath = "hieunguyenminh/roleplay"


def get_data(nsamples, seed, seqlen, tokenizer):
    # set seed
    random.seed(seed)
    np.random.seed(seed)
    torch.random.manual_seed(seed)

    # load dataset and preprocess
    traindata = load_dataset(datapath, split="train")
    trainenc = tokenizer("\n\n".join(traindata["text"]), return_tensors="pt")

    traindataset = []
    for _ in range(nsamples):
        i = random.randint(0, trainenc.input_ids.shape[1] - seqlen - 1)
        j = i + seqlen
        inp = trainenc.input_ids[:, i:j]
        attention_mask = torch.ones_like(inp)
        traindataset.append({"input_ids": inp, "attention_mask": attention_mask})
    return traindataset


def main():
    from transformers import AutoTokenizer

    try:
        tokenizer = AutoTokenizer.from_pretrained(pretrained_model_dir, use_fast=True, trust_remote_code=True)
    except Exception as e:
        print(e)

    # load un-quantized model; the model will always be force-loaded into cpu
    quantize_config = BaseQuantizeConfig(
        bits=4,  # quantize model to 4-bit
        group_size=128,  # it is recommended to set the value to 128
        desc_act=False,  # desc_act and groupsize only works on triton
    )

    # get model maximum sequence length
    model = AutoGPTQForCausalLM.from_pretrained(pretrained_model_dir, quantize_config, low_cpu_mem_usage=True, trust_remote_code=True, torch_dtype="auto")
    model_config = model.config.to_dict()
    seq_len_keys = ["max_position_embeddings", "seq_length", "n_positions"]
    if any(k in model_config for k in seq_len_keys):
        for key in seq_len_keys:
            if key in model_config:
                model.seqlen = model_config[key]
                break
    else:
        print("can't get model's sequence length from model config, will set to 2048.")
        model.seqlen = 2048

    # load train dataset for quantize
    traindataset = get_data(128, 0, model.seqlen, tokenizer)

    # quantize model, the examples should be list of dict whose keys contains "input_ids" and "attention_mask"
    # with value under torch.LongTensor type.
    model.quantize(traindataset, use_triton=False)

    # save quantized model
    # model.save_quantized(quantized_model_dir)
    # save quantized model using safetensors
    model.save_quantized(quantized_model_dir, use_safetensors=True)


if __name__ == "__main__":
    main()
```
Error:
An error occurred: CUDA out of memory. Tried to allocate 4.00 GiB. GPU 0 has a total capacty of 44.35 GiB of which 107.38 MiB is free. Process 2034875 has 44.23 GiB memory in use. Of the allocated memory 35.59 GiB is allocated by PyTorch, and 8.34 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
|
open
|
2024-02-27T03:54:44Z
|
2024-02-29T19:30:28Z
|
https://github.com/AutoGPTQ/AutoGPTQ/issues/568
|
[
"bug"
] |
hieuminh65
| 1
|
holoviz/panel
|
plotly
| 7,362
|
Functions vs. Classes doc page is blank
|
The [Functions vs. Classes](https://panel.holoviz.org/explanation/api/functions_vs_classes.html) page in the documentation is currently blank.
|
open
|
2024-10-05T19:03:50Z
|
2024-10-05T19:03:50Z
|
https://github.com/holoviz/panel/issues/7362
|
[] |
bskubi
| 0
|
ets-labs/python-dependency-injector
|
flask
| 202
|
Installing in Amazon EC2 Ubuntu fails for version after 3.3.7
|
I have an EC2 machine running Ubuntu with a Python 3.5 virtual environment. When I install the latest version of dependency-injector using pip (or pip3, it doesn't matter), it stops at "Running setup.py install for dependency-injector" and doesn't continue. Version 3.3.7 is the latest I can install (after trying all the rest).
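To see where it stalls, it may help to run pip with verbose output (one possibility, unconfirmed: the C-extension compilation step is slow or memory-starved on a small instance):
```bash
# Show build output instead of a silent "Running setup.py install" step:
pip install --verbose dependency-injector
```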
|
closed
|
2018-08-08T20:06:33Z
|
2020-07-02T03:51:44Z
|
https://github.com/ets-labs/python-dependency-injector/issues/202
|
[
"question"
] |
arturoribes
| 12
|
airtai/faststream
|
asyncio
| 1,879
|
Bug: CLI can't resolve `psycopg` import
|
**How to reproduce**
Include source code:
```python
from sqlalchemy.ext.asyncio import create_async_engine
from faststream import FastStream
from faststream.nats import NatsBroker
broker = NatsBroker()
app = FastStream(broker)
engine = create_async_engine("postgresql+psycopg://user:pass@localhost:5432")
```
And/Or steps to reproduce the behavior:
```cmd
faststream run serve:app
```
**Screenshots**
<img width="946" alt="Снимок экрана 2024-10-29 в 19 33 07" src="https://github.com/user-attachments/assets/15d64a37-f11c-42ee-98af-3ad36c61fe2e">
**Environment**
Include the output of the `faststream -v` command to display your current project and system environment.
```txt
faststream==0.5.28
psycopg==3.2.3
```
|
closed
|
2024-10-29T16:34:16Z
|
2024-10-29T16:40:23Z
|
https://github.com/airtai/faststream/issues/1879
|
[
"bug",
"good first issue"
] |
Lancetnik
| 0
|
pyro-ppl/numpyro
|
numpy
| 1,341
|
Error with log_likelihood with respect to random_flax_module
|
I am getting an error while using `log_likelihood` with a model that uses `random_flax_module`. Is this a bug, or am I making a mistake?
my model:
```python
import numpyro
import numpyro.distributions as dist
from numpyro.contrib.module import random_flax_module
from flax import linen as nn


class Net(nn.Module):
    n_units: int

    @nn.compact
    def __call__(self, x):
        x = nn.Dense(8)(x)  # First layer
        x = nn.relu(x)
        x = nn.Dense(8)(x)  # Middle layer
        x = nn.relu(x)
        x = nn.Dense(4)(x)  # Last layer
        return x


def model(X, Y=None, D_H=None, D_Y=1):
    N, D_X = X.shape
    lclass = 4
    module = Net(n_units=D_H)
    net = random_flax_module("net", module, prior=dist.Normal(0, 1), input_shape=(N, D_X))

    # observe data
    lvals = net(X)
    with numpyro.plate("batch", X.shape[0]):
        numpyro.sample("Y", dist.Categorical(logits=lvals), obs=Y)
```
Inference:
```python
from jax import random
from numpyro import optim
from numpyro.infer import SVI, Trace_ELBO
from numpyro.infer.autoguide import AutoDelta


def run_inference_DELTA(model, args, rng_key, X, Y, D_H):
    print("Starting DELTA inference")
    guide = AutoDelta(model)
    svi = SVI(model, guide, optim.Adam(0.001), Trace_ELBO())
    svi_result = svi.run(random.PRNGKey(100), args.svi_steps, X, Y, D_H)
    print("AutoDelta inference done")
    params = svi_result.params
    samples_1 = guide.sample_posterior(
        random.PRNGKey(1), params, sample_shape=(args.num_samples,)
    )
    return samples_1


def run_inference(model, args, rng_key, X, Y, D_H):
    return run_inference_DELTA(model, args, rng_key, X, Y, D_H)
```
Unfortunately I am getting an error while using `log_likelihood` from `numpyro.infer`.
```python
from numpyro.infer import log_likelihood

rng_key, rng_key_predict = random.split(random.PRNGKey(0))
samples_1 = run_inference(model, args, rng_key, X_train, Y_train, D_H)
ll = log_likelihood(model, samples_1, X=X_test, Y=Y_test, D_H=D_H)
```
The error is as follows:
```
File "main.py", line 221, in main
ll = log_likelihood(model, samples_1, X = X_test, Y = Y_test, D_H = D_H)
File "/home/bandyopn/local/lib/python3.8/site-packages/numpyro/infer/util.py", line 1027, in log_likelihood
return soft_vmap(single_loglik, posterior_samples, len(batch_shape), chunk_size)
File "/home/bandyopn/local/lib/python3.8/site-packages/numpyro/util.py", line 410, in soft_vmap
ys = lax.map(fn, xs) if num_chunks > 1 else fn(xs)
File "/home/bandyopn/local/lib/python3.8/site-packages/jax/_src/traceback_util.py", line 165, in reraise_with_filtered_traceback
return fun(*args, **kwargs)
File "/home/bandyopn/local/lib/python3.8/site-packages/jax/_src/lax/control_flow.py", line 2263, in map
_, ys = scan(g, (), xs)
File "/home/bandyopn/local/lib/python3.8/site-packages/jax/_src/traceback_util.py", line 165, in reraise_with_filtered_traceback
return fun(*args, **kwargs)
File "/home/bandyopn/local/lib/python3.8/site-packages/jax/_src/lax/control_flow.py", line 1623, in scan
init_flat, carry_avals, carry_avals_out, init_tree, *rest = _create_jaxpr(init)
File "/home/bandyopn/local/lib/python3.8/site-packages/jax/_src/lax/control_flow.py", line 1609, in _create_jaxpr
jaxpr, consts, out_tree = _initial_style_jaxpr(
File "/home/bandyopn/local/lib/python3.8/site-packages/jax/_src/util.py", line 210, in wrapper
return cached(config._trace_context(), *args, **kwargs)
File "/home/bandyopn/local/lib/python3.8/site-packages/jax/_src/util.py", line 203, in cached
return f(*args, **kwargs)
File "/home/bandyopn/local/lib/python3.8/site-packages/jax/_src/lax/control_flow.py", line 83, in _initial_style_jaxpr
jaxpr, consts, out_tree = _initial_style_open_jaxpr(
File "/home/bandyopn/local/lib/python3.8/site-packages/jax/_src/util.py", line 210, in wrapper
return cached(config._trace_context(), *args, **kwargs)
File "/home/bandyopn/local/lib/python3.8/site-packages/jax/_src/util.py", line 203, in cached
return f(*args, **kwargs)
File "/home/bandyopn/local/lib/python3.8/site-packages/jax/_src/lax/control_flow.py", line 77, in _initial_style_open_jaxpr
jaxpr, _, consts = pe.trace_to_jaxpr_dynamic(wrapped_fun, in_avals, debug)
File "/home/bandyopn/local/lib/python3.8/site-packages/jax/_src/profiler.py", line 206, in wrapper
return func(*args, **kwargs)
File "/home/bandyopn/local/lib/python3.8/site-packages/jax/interpreters/partial_eval.py", line 1666, in trace_to_jaxpr_dynamic
jaxpr, out_avals, consts = trace_to_subjaxpr_dynamic(
File "/home/bandyopn/local/lib/python3.8/site-packages/jax/interpreters/partial_eval.py", line 1703, in trace_to_subjaxpr_dynamic
ans = fun.call_wrapped(*in_tracers_)
File "/home/bandyopn/local/lib/python3.8/site-packages/jax/linear_util.py", line 166, in call_wrapped
ans = self.f(*args, **dict(self.params, **kwargs))
File "/home/bandyopn/local/lib/python3.8/site-packages/jax/_src/lax/control_flow.py", line 2262, in <lambda>
g = lambda _, x: ((), f(x))
File "/home/bandyopn/local/lib/python3.8/site-packages/numpyro/infer/util.py", line 1002, in single_loglik
model_trace = trace(substituted_model).get_trace(*args, **kwargs)
File "/home/bandyopn/local/lib/python3.8/site-packages/numpyro/handlers.py", line 171, in get_trace
self(*args, **kwargs)
File "/home/bandyopn/local/lib/python3.8/site-packages/numpyro/primitives.py", line 87, in __call__
return self.fn(*args, **kwargs)
File "/home/bandyopn/local/lib/python3.8/site-packages/numpyro/primitives.py", line 87, in __call__
return self.fn(*args, **kwargs)
File "main.py", line 68, in model
net = random_flax_module("net", module, prior = dist.Normal(0, 1), input_shape = (N, D_X))
File "/home/bandyopn/local/lib/python3.8/site-packages/numpyro/contrib/module.py", line 355, in random_flax_module
nn = flax_module(
File "/home/bandyopn/local/lib/python3.8/site-packages/numpyro/contrib/module.py", line 94, in flax_module
nn_vars = flax.core.unfreeze(nn_module.init(rngs, *args, **kwargs))
File "/home/bandyopn/local/lib/python3.8/site-packages/jax/_src/traceback_util.py", line 165, in reraise_with_filtered_traceback
return fun(*args, **kwargs)
File "/home/bandyopn/local/lib/python3.8/site-packages/flax/linen/module.py", line 1183, in init
_, v_out = self.init_with_output(
File "/home/bandyopn/local/lib/python3.8/site-packages/jax/_src/traceback_util.py", line 165, in reraise_with_filtered_traceback
return fun(*args, **kwargs)
File "/home/bandyopn/local/lib/python3.8/site-packages/flax/linen/module.py", line 1152, in init_with_output
return self.apply(
File "/home/bandyopn/local/lib/python3.8/site-packages/jax/_src/traceback_util.py", line 165, in reraise_with_filtered_traceback
return fun(*args, **kwargs)
File "/home/bandyopn/local/lib/python3.8/site-packages/flax/linen/module.py", line 1119, in apply
return apply(
File "/home/bandyopn/local/lib/python3.8/site-packages/flax/core/scope.py", line 803, in wrapper
with bind(variables, rngs=rngs, mutable=mutable).temporary() as root:
File "/home/bandyopn/local/lib/python3.8/site-packages/flax/core/scope.py", line 774, in bind
raise errors.InvalidRngError(
jax._src.traceback_util.UnfilteredStackTrace: flax.errors.InvalidRngError: rngs should be a dictionary mapping strings to `jax.PRNGKey`. (https://flax.readthedocs.io/en/latest/flax.errors.html#flax.errors.InvalidRngError)
The stack trace below excludes JAX-internal frames.
The preceding is the original exception that occurred, unmodified.
--------------------
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "main.py", line 271, in <module>
main(args)
File "main.py", line 221, in main
ll = log_likelihood(model, samples_1, X = X_test, Y = Y_test, D_H = D_H)
File "/home/bandyopn/local/lib/python3.8/site-packages/numpyro/infer/util.py", line 1027, in log_likelihood
return soft_vmap(single_loglik, posterior_samples, len(batch_shape), chunk_size)
File "/home/bandyopn/local/lib/python3.8/site-packages/numpyro/util.py", line 410, in soft_vmap
ys = lax.map(fn, xs) if num_chunks > 1 else fn(xs)
File "/home/bandyopn/local/lib/python3.8/site-packages/numpyro/infer/util.py", line 1002, in single_loglik
model_trace = trace(substituted_model).get_trace(*args, **kwargs)
File "/home/bandyopn/local/lib/python3.8/site-packages/numpyro/handlers.py", line 171, in get_trace
self(*args, **kwargs)
File "/home/bandyopn/local/lib/python3.8/site-packages/numpyro/primitives.py", line 87, in __call__
return self.fn(*args, **kwargs)
File "/home/bandyopn/local/lib/python3.8/site-packages/numpyro/primitives.py", line 87, in __call__
return self.fn(*args, **kwargs)
File "main.py", line 68, in model
net = random_flax_module("net", module, prior = dist.Normal(0, 1), input_shape = (N, D_X))
File "/home/bandyopn/local/lib/python3.8/site-packages/numpyro/contrib/module.py", line 355, in random_flax_module
nn = flax_module(
File "/home/bandyopn/local/lib/python3.8/site-packages/numpyro/contrib/module.py", line 94, in flax_module
nn_vars = flax.core.unfreeze(nn_module.init(rngs, *args, **kwargs))
flax.errors.InvalidRngError: rngs should be a dictionary mapping strings to `jax.PRNGKey`. (https://flax.readthedocs.io/en/latest/flax.errors.html#flax.errors.InvalidRngError)
```
|
closed
|
2022-02-17T23:50:18Z
|
2022-02-23T12:24:51Z
|
https://github.com/pyro-ppl/numpyro/issues/1341
|
[
"enhancement",
"question"
] |
nirmalya-broad
| 4
|
serengil/deepface
|
deep-learning
| 947
|
Custom model/detector backend support
|
I noticed that DeepFace uses a dictionary to describe the model/backend. Would DeepFace be able to accept custom models/backends via a custom loadModel/build_model? Thank you!
|
closed
|
2024-01-08T14:05:33Z
|
2024-01-08T14:06:48Z
|
https://github.com/serengil/deepface/issues/947
|
[
"question"
] |
xfqwdsj
| 1
|
gevent/gevent
|
asyncio
| 1,551
|
Fails to install on msys2, unless cffi is installed first
|
* gevent version: 1.4.0
* Python version: 3.8 from msys2 repo
* Operating System: Windows 10 x64
### Description:
Installing a Python module that requires gevent, or installing gevent on its own, fails unless cffi is installed first.
### What I've run:
`pip install gevent`
```python
ERROR: Command errored out with exit status 1:
command: C:/msys64/mingw64/bin/python.exe -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:/Users/George/AppData/Local/Temp/pip-install-at7msw5e/gevent/setup.py'"'"'; __file__='"'"'C:/Users/George/AppData/Local/Temp/pip-install-at7msw5e/gevent/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base C:/Users/George/AppData/Local/Temp/pip-install-at7msw5e/gevent/pip-egg-info
cwd: C:/Users/George/AppData/Local/Temp/pip-install-at7msw5e/gevent/
Complete output (56 lines):
Compiling src/gevent/libev/corecext.pyx because it depends on C:/msys64/mingw64/lib/python3.8/site-packages/Cython/Includes/libc/string.pxd.
[1/1] Cythonizing src/gevent/libev/corecext.pyx
Compiling src/gevent/resolver/cares.pyx because it depends on C:/msys64/mingw64/lib/python3.8/site-packages/Cython/Includes/libc/string.pxd.
[1/1] Cythonizing src/gevent/resolver/cares.pyx
WARNING: The wheel package is not available.
ERROR: Command errored out with exit status 1:
command: C:/msys64/mingw64/bin/python.exe -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:/Users/George/AppData/Local/Temp/pip-wheel-1jhyp1oh/cffi/setup.py'"'"'; __file__='"'"'C:/Users/George/AppData/Local/Temp/pip-wheel-1jhyp1oh/cffi/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d C:/Users/George/AppData/Local/Temp/pip-wheel-8ey8vprz
cwd: C:/Users/George/AppData/Local/Temp/pip-wheel-1jhyp1oh/cffi/
Complete output (12 lines):
_configtest.c:2:2: error: #error "not MSVC"
2 | #error "not MSVC"
| ^~~~~
Note: will not use '__thread' in the C code
***** The above error message can be safely ignored.
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: invalid command 'bdist_wheel'
----------------------------------------
ERROR: Failed building wheel for cffi
ERROR: Failed to build one or more wheels
Traceback (most recent call last):
File "C:/msys64/mingw64/lib/python3.8/site-packages/setuptools/installer.py", line 128, in fetch_build_egg
subprocess.check_call(cmd)
File "C:/msys64/mingw64/lib/python3.8/subprocess.py", line 364, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['C:/msys64/mingw64/bin/python.exe', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', 'C:/Users/George/AppData/Local/Temp/tmpv8c49tqb', '--quiet', 'cffi>=1.11.5']' returned non-zero exit status 1.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:/Users/George/AppData/Local/Temp/pip-install-at7msw5e/gevent/setup.py", line 427, in <module>
run_setup(EXT_MODULES, run_make=_BUILDING)
File "C:/Users/George/AppData/Local/Temp/pip-install-at7msw5e/gevent/setup.py", line 328, in run_setup
setup(
File "C:/msys64/mingw64/lib/python3.8/site-packages/setuptools/__init__.py", line 143, in setup
_install_setup_requires(attrs)
File "C:/msys64/mingw64/lib/python3.8/site-packages/setuptools/__init__.py", line 138, in _install_setup_requires
dist.fetch_build_eggs(dist.setup_requires)
File "C:/msys64/mingw64/lib/python3.8/site-packages/setuptools/dist.py", line 695, in fetch_build_eggs
resolved_dists = pkg_resources.working_set.resolve(
File "C:/msys64/mingw64/lib/python3.8/site-packages/pkg_resources/__init__.py", line 781, in resolve
dist = best[req.key] = env.best_match(
File "C:/msys64/mingw64/lib/python3.8/site-packages/pkg_resources/__init__.py", line 1066, in best_match
return self.obtain(req, installer)
File "C:/msys64/mingw64/lib/python3.8/site-packages/pkg_resources/__init__.py", line 1078, in obtain
return installer(requirement)
File "C:/msys64/mingw64/lib/python3.8/site-packages/setuptools/dist.py", line 754, in fetch_build_egg
return fetch_build_egg(self, req)
File "C:/msys64/mingw64/lib/python3.8/site-packages/setuptools/installer.py", line 130, in fetch_build_egg
raise DistutilsError(str(e))
distutils.errors.DistutilsError: Command '['C:/msys64/mingw64/bin/python.exe', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', 'C:/Users/George/AppData/Local/Temp/tmpv8c49tqb', '--quiet', 'cffi>=1.11.5']' returned non-zero exit status 1.
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
```
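As the title says, a workaround that appears to work is installing the build prerequisites first, so gevent's `setup_requires` step no longer has to build cffi itself:
```bash
pip install wheel cffi
pip install gevent
```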
|
closed
|
2020-04-05T05:38:21Z
|
2020-04-05T13:15:03Z
|
https://github.com/gevent/gevent/issues/1551
|
[] |
hgdagon
| 1
|
MaartenGr/BERTopic
|
nlp
| 1,207
|
Mismatched labels in visualize_hierarchy
|
Hi Maarten
Thanks for a super great tool and all the hard work that has clearly been put into BERTopic. I am currently working with data from the Danish parliament, but I am seeing some unexpected behavior with the labels of the visualize_hierarchy function. Perhaps I am missing something, but it seems like the topics shown on the left side of the visualization are shifted: they don't make sense in combination with the topics with which they are merged:
Example, on the left top side I have these 2 topics (screenshot 1)
44_oliefyr_energinet_varme
33_energiaftalen_energinet

But the supposed merge (screenshot 2) is per_clausen_per_skat_regning, which makes no sense. However, if I instead hover over the horizontal line next to the topics, I believe I see the correct topics, e.g. skatten_afgiften_skatter_no_skatteministeren (screenshot 1).

The topics seem to be correct if I create the tree and print that (hierachy.txt), so it just seems to be a matter of the labels on the y-axis of visualize_hierarchy having been shifted.
[hierachy.txt](https://github.com/MaartenGr/BERTopic/files/11304823/hierachy.txt)
```python
import torch
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer
from umap import UMAP

sbert_model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2", device='cpu')
q_sbert_model = torch.quantization.quantize_dynamic(sbert_model, {torch.nn.Linear}, dtype=torch.qint8)
q_clim_statement_emb= q_sbert_model.encode(statements_climate, show_progress_bar=True)
from sklearn.feature_extraction.text import CountVectorizer
from hdbscan import HDBSCAN
hdbscan_model_cs1 = HDBSCAN(min_cluster_size=15, metric='euclidean', prediction_data=True)
umap_model_cs1 = UMAP(n_neighbors=10, n_components=5, min_dist=0.0, metric='cosine', angular_rp_forest=True, random_state=42)
vectorizer_model_cs1 = CountVectorizer(ngram_range=(1, 3), stop_words=dk_stop_words, encoding='utf-8', min_df=10, max_df=0.5)
topic_model_cs1 = BERTopic(calculate_probabilities=False, umap_model=umap_model_cs1, low_memory=True, hdbscan_model=hdbscan_model_cs1, min_topic_size=30, vectorizer_model=vectorizer_model_cs1, verbose=True, language=None)
topic_cs1,_ = topic_model_cs1.fit_transform(statements_climate, q_clim_statement_emb)
hierarchical_topics_cs1 = topic_model_cs1.hierarchical_topics(statements_climate)
topic_model_cs1.visualize_hierarchy(hierarchical_topics=hierarchical_topics_cs1)
```
(FYI, I have also tried with the regular SBERT, not quantized, just to check, and it makes no difference.)
Thanks a lot for the help
Kind regards,
Jesper
|
closed
|
2023-04-23T20:23:59Z
|
2023-05-24T17:45:12Z
|
https://github.com/MaartenGr/BERTopic/issues/1207
|
[] |
jalkestrup
| 4
|
Lightning-AI/pytorch-lightning
|
machine-learning
| 19,558
|
Restoring a checkpoint (from model checkpoint) via `trainer.test(model_module, ckpt_path='best')` doesn't restore associated `current_epoch` within the trainer
|
### Bug description
When a checkpoint (presumably the 'best' one saved by the model checkpoint monitor) is restored to be used for testing via `trainer.test(model_module, ckpt_path='best')`, the `current_epoch` member of the trainer is not restored to the one associated with the checkpoint, even though the checkpoint file stores the correct epoch value.
I am not sure whether this is intended (seems rather incorrect to me). I asked a question about this problem at the Discord forum before, but got no response, so I decided to open an issue here.
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
```python
import os

import torch
from lightning.pytorch import LightningModule, Trainer, callbacks, seed_everything
from torch.utils.data import DataLoader, Dataset


class RandomDataset(Dataset):
    def __init__(self, size, length, offset=0):
        self.len = length
        self.data = torch.randn(length, size) + offset

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len


class BoringModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("train_loss", loss)
        return {"loss": loss}

    def validation_step(self, batch, batch_idx):
        loss = self(batch).sum()
        self.log("valid_loss", loss)

    def test_step(self, batch, batch_idx):
        loss = self(batch).sum()
        print(f'CURRENT EPOCH: {self.current_epoch}')
        self.log("test_loss", loss)

    def configure_optimizers(self):
        return torch.optim.SGD(self.layer.parameters(), lr=0.1)


def run():
    seed_everything(10)

    train_data = DataLoader(RandomDataset(32, 64), batch_size=2)
    val_data = DataLoader(RandomDataset(32, 64, offset=16), batch_size=2)
    test_data = DataLoader(RandomDataset(32, 64), batch_size=2)

    model = BoringModel()
    trainer = Trainer(
        default_root_dir=os.getcwd(),
        num_sanity_val_steps=0,
        limit_train_batches=1,
        limit_val_batches=1,
        limit_test_batches=1,
        max_epochs=2,
        enable_model_summary=False,
        callbacks=[(mc := callbacks.ModelCheckpoint(monitor="valid_loss", verbose=True))],
    )
    trainer.fit(model, train_dataloaders=train_data, val_dataloaders=val_data)
    trainer.test(model, dataloaders=test_data, ckpt_path="best")
    print(f'CKPT EPOCH: {torch.load(mc.best_model_path, map_location="cpu")["epoch"]}')


if __name__ == "__main__":
    run()
```
### Error messages and logs
```
Epoch 0: 100%|███████████████████████████████████████████| 32/32 [00:00<00:00, 277.12it/s, v_num=7]
Epoch 0, global step 32: 'valid_loss' reached 183.69585 (best 183.69585), saving model to '/Users/maciej/Desktop/lightning_logs/version_7/checkpoints/epoch=0-step=32.ckpt' as top 1
Epoch 1: 100%|███████████████████████████████████████████| 32/32 [00:00<00:00, 432.74it/s, v_num=7]
Epoch 1, global step 64: 'valid_loss' was not in top 1
`Trainer.fit` stopped: `max_epochs=2` reached.
Epoch 1: 100%|███████████████████████████████████████████| 32/32 [00:00<00:00, 410.05it/s, v_num=7]
Restoring states from the checkpoint path at /Users/maciej/Desktop/lightning_logs/version_7/checkpoints/epoch=0-step=32.ckpt
Loaded model weights from the checkpoint at /Users/maciej/Desktop/lightning_logs/version_7/checkpoints/epoch=0-step=32.ckpt
/Users/maciej/.local/share/pyenv/versions/3.11.7/lib/python3.11/site-packages/lightning/pytorch/trainer/connectors/data_connector.py:441: The 'test_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=9` in the `DataLoader` to improve performance.
Testing DataLoader 0: 0%| | 0/1 [00:00<?, ?it/s]
CURRENT EPOCH: 2
Testing DataLoader 0: 100%|█████████████████████████████████████████| 1/1 [00:00<00:00, 380.95it/s]
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Test metric ┃ DataLoader 0 ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ test_loss │ -40.08610534667969 │
└───────────────────────────┴───────────────────────────┘
CKPT EPOCH: 0
```
Notice that current epoch printed by the `test_step` after loading "best" does not match the one inside the checkpoint. This indicates that loading the 'best' model doesn't restore the epoch number. Should it?
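For reference, a workaround sketch that reads the epoch straight from the checkpoint file (using the `mc` callback from the script above) before testing:
```python
# The trainer does not restore the epoch, so fetch it from the checkpoint
# ourselves and stash it on the module for use in test_step.
best_ckpt = torch.load(mc.best_model_path, map_location="cpu")
model.best_ckpt_epoch = best_ckpt["epoch"]  # hypothetical attribute name
trainer.test(model, dataloaders=test_data, ckpt_path="best")
```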
### Environment
<details>
<summary>Current environment</summary>
```
* CUDA:
- GPU: None
- available: False
- version: None
* Lightning:
- lightning: 2.2.0.post0
- lightning-utilities: 0.10.1
- pytorch-lightning: 2.2.0.post0
- torch: 2.2.1
- torchmetrics: 1.3.1
- torchvision: 0.17.1
* Packages:
- aiohttp: 3.9.3
- aiohttp-retry: 2.8.3
- aiosignal: 1.3.1
- amqp: 5.2.0
- annotated-types: 0.6.0
- antlr4-python3-runtime: 4.9.3
- anyio: 4.3.0
- appdirs: 1.4.4
- appnope: 0.1.4
- argon2-cffi: 23.1.0
- argon2-cffi-bindings: 21.2.0
- arrow: 1.3.0
- astroid: 3.1.0
- asttokens: 2.4.1
- async-lru: 2.0.4
- asyncssh: 2.14.2
- atpublic: 4.0
- attrs: 23.2.0
- autopep8: 2.0.4
- babel: 2.14.0
- backcall: 0.2.0
- beautifulsoup4: 4.12.3
- billiard: 4.2.0
- black: 24.2.0
- bleach: 6.1.0
- blinker: 1.7.0
- celery: 5.3.6
- certifi: 2024.2.2
- cffi: 1.16.0
- charset-normalizer: 3.3.2
- click: 8.1.7
- click-didyoumean: 0.3.0
- click-plugins: 1.1.1
- click-repl: 0.3.0
- colorama: 0.4.6
- comm: 0.2.1
- configobj: 5.0.8
- contourpy: 1.2.0
- cryptography: 42.0.5
- cycler: 0.12.1
- dash: 2.15.0
- dash-core-components: 2.0.0
- dash-html-components: 2.0.0
- dash-table: 5.0.0
- debugpy: 1.8.1
- decorator: 5.1.1
- defusedxml: 0.7.1
- dictdiffer: 0.9.0
- dill: 0.3.8
- diskcache: 5.6.3
- distlib: 0.3.8
- distro: 1.9.0
- docopt: 0.6.2
- dpath: 2.1.6
- dulwich: 0.21.7
- dvc: 3.48.0
- dvc-data: 3.13.0
- dvc-http: 2.32.0
- dvc-objects: 5.0.0
- dvc-render: 1.0.1
- dvc-studio-client: 0.20.0
- dvc-task: 0.3.0
- einops: 0.7.0
- entrypoints: 0.4
- executing: 2.0.1
- fastjsonschema: 2.19.1
- filelock: 3.13.1
- flake8: 7.0.0
- flask: 3.0.2
- flatten-dict: 0.4.2
- flufl.lock: 7.1.1
- fonttools: 4.49.0
- fqdn: 1.5.1
- frozenlist: 1.4.1
- fsspec: 2024.2.0
- funcy: 2.0
- gitdb: 4.0.11
- gitpython: 3.1.42
- grandalf: 0.8
- greenlet: 3.0.3
- gto: 1.7.0
- h11: 0.14.0
- h5py: 3.10.0
- httpcore: 1.0.4
- httpx: 0.27.0
- hydra-core: 1.3.2
- idna: 3.6
- imageio: 2.34.0
- importlib-metadata: 7.0.1
- iniconfig: 2.0.0
- ipdb: 0.13.13
- ipykernel: 6.29.3
- ipython: 8.12.3
- isoduration: 20.11.0
- isort: 5.13.2
- iterative-telemetry: 0.0.8
- itsdangerous: 2.1.2
- jedi: 0.19.1
- jinja2: 3.1.3
- joblib: 1.3.2
- json5: 0.9.17
- jsonpointer: 2.4
- jsonschema: 4.21.1
- jsonschema-specifications: 2023.12.1
- jupyter-client: 8.6.0
- jupyter-core: 5.7.1
- jupyter-events: 0.9.0
- jupyter-lsp: 2.2.3
- jupyter-server: 2.12.5
- jupyter-server-mathjax: 0.2.6
- jupyter-server-terminals: 0.5.2
- jupyterlab: 4.1.2
- jupyterlab-pygments: 0.3.0
- jupyterlab-server: 2.25.3
- kiwisolver: 1.4.5
- kombu: 5.3.5
- lazy-loader: 0.3
- lightning: 2.2.0.post0
- lightning-utilities: 0.10.1
- markdown-it-py: 3.0.0
- markupsafe: 2.1.5
- matplotlib: 3.8.3
- matplotlib-inline: 0.1.6
- mccabe: 0.7.0
- mdurl: 0.1.2
- mistune: 3.0.2
- mpmath: 1.3.0
- msgpack: 1.0.7
- multidict: 6.0.5
- mypy: 1.8.0
- mypy-extensions: 1.0.0
- nbclient: 0.9.0
- nbconvert: 7.16.1
- nbdime: 4.0.1
- nbformat: 5.9.2
- neovim: 0.3.1
- nest-asyncio: 1.6.0
- networkx: 3.2.1
- notebook-shim: 0.2.4
- numpy: 1.26.4
- omegaconf: 2.3.0
- orjson: 3.9.15
- overrides: 7.7.0
- packaging: 23.2
- pandas: 2.2.1
- pandocfilters: 1.5.1
- parso: 0.8.3
- pathspec: 0.12.1
- pexpect: 4.9.0
- pickleshare: 0.7.5
- pillow: 10.2.0
- pip: 23.2.1
- pipreqs: 0.5.0
- platformdirs: 3.11.0
- plotly: 5.19.0
- pluggy: 1.4.0
- prometheus-client: 0.20.0
- prompt-toolkit: 3.0.43
- psutil: 5.9.8
- ptyprocess: 0.7.0
- pure-eval: 0.2.2
- pycodestyle: 2.11.1
- pycparser: 2.21
- pydantic: 2.6.3
- pydantic-core: 2.16.3
- pydocstyle: 6.3.0
- pydot: 2.0.0
- pyflakes: 3.2.0
- pygit2: 1.14.1
- pygments: 2.17.2
- pygtrie: 2.5.0
- pylint: 3.1.0
- pynvim: 0.5.0
- pyparsing: 3.1.1
- pytest: 8.0.2
- pytest-mock: 3.12.0
- python-dateutil: 2.8.2
- python-json-logger: 2.0.7
- pytoolconfig: 1.3.1
- pytorch-lightning: 2.2.0.post0
- pytz: 2024.1
- pyyaml: 6.0.1
- pyzmq: 25.1.2
- referencing: 0.33.0
- rentry: 1.0.1
- requests: 2.31.0
- retrying: 1.3.4
- rfc3339-validator: 0.1.4
- rfc3986-validator: 0.1.1
- rich: 13.7.0
- rope: 1.12.0
- rpds-py: 0.18.0
- ruamel.yaml: 0.18.6
- ruamel.yaml.clib: 0.2.8
- scikit-image: 0.22.0
- scikit-learn: 1.4.1.post1
- scipy: 1.12.0
- scmrepo: 3.1.0
- semver: 3.0.2
- send2trash: 1.8.2
- setuptools: 65.5.0
- shortuuid: 1.0.11
- shtab: 1.7.0
- six: 1.16.0
- smmap: 5.0.1
- sniffio: 1.3.1
- snowballstemmer: 2.2.0
- soupsieve: 2.5
- sqltrie: 0.11.0
- stack-data: 0.6.3
- sympy: 1.12
- tabulate: 0.9.0
- tenacity: 8.2.3
- terminado: 0.18.0
- threadpoolctl: 3.3.0
- tifffile: 2024.2.12
- tinycss2: 1.2.1
- tomlkit: 0.12.4
- torch: 2.2.1
- torchmetrics: 1.3.1
- torchvision: 0.17.1
- tornado: 6.4
- tqdm: 4.66.2
- traitlets: 5.14.1
- typer: 0.9.0
- types-python-dateutil: 2.8.19.20240106
- typing-extensions: 4.10.0
- tzdata: 2024.1
- uri-template: 1.3.0
- urllib3: 2.2.1
- vine: 5.1.0
- virtualenv: 20.25.1
- voluptuous: 0.14.2
- wcwidth: 0.2.13
- webcolors: 1.13
- webencodings: 0.5.1
- websocket-client: 1.7.0
- werkzeug: 3.0.1
- wheel: 0.42.0
- yarg: 0.1.9
- yarl: 1.9.4
- zc.lockfile: 3.0.post1
- zipp: 3.17.0
* System:
- OS: Darwin
- architecture:
- 64bit
-
- processor: arm
- python: 3.11.7
- release: 23.2.0
- version: Darwin Kernel Version 23.2.0: Wed Nov 15 21:53:18 PST 2023; root:xnu-10002.61.3~2/RELEASE_ARM64_T6000
```
</details>
### More info
_No response_
|
open
|
2024-03-02T09:16:36Z
|
2024-10-10T17:15:43Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/19558
|
[
"bug",
"needs triage"
] |
maciejzj
| 1
|
microsoft/unilm
|
nlp
| 1,351
|
[Beit3] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 65181 closing signal SIGTERM.
|
I want to finetune Beit3 on COCO Caption
When computing Bleu score , the error emerge
The code is
`python -m torch.distributed.launch --nproc_per_node=4 run_beit3_finetuning.py \
--model beit3_base_patch16_480 \
--input_size 480 \
--task coco_captioning \
--batch_size 32 \
--layer_decay 1.0 \
--lr 4e-5 \
--randaug \
--epochs 10 \
--warmup_epochs 1 \
--drop_path 0.1 \
--sentencepiece_model ./beit3.spm \
--finetune ./checkpoint/beit3_base_patch16_224.pth \
--data_path ./dataset \
--output_dir ./save \
--log_dir ./save/log \
--weight_decay 0.05 \
--seed 42 \
--save_ckpt_freq 5 \
--num_max_bpe_tokens 32 \
--captioning_mask_prob 0.7 \
--drop_worst_after 12000 \
--dist_eval \
--checkpoint_activations \
--enable_deepspeed`
The environment is
python 3.9
torch 2.1.0
Part of the log is:
Test: Total time:0:01:24 (2.1691 s / it)
Infer 4992 examples into ./save/submit_coco_captioning_val_e0.json
Using downloaded and verified file: ./save/coco_karpathy_val_gt.json
loading annotations into memory...
Done (t=0.04s)
creating index...
index created!
Loading and preparing results...
DONE (t=0.01s)
creating index...
index created!
tokenization...
PTBTokenizer tokenized 307821 tokens at 1151316.74 tokens per second.
PTBTokenizer tokenized 60431 tokens at 410147.53 tokens per second.
setting up scorers...
computing Bleu score...
Traceback (most recent call last):
File "/home/y/python/projects/beit3/run_beit3_finetuning.py", line 451, in <module>
main(opts, ds_init)
File "/home/y/python/projects/beit3/run_beit3_finetuning.py", line 408, in main
test_stats = utils.coco_caption_eval(args.output_dir, prediction_file, "{}_val".format(args.task))
File "/home/y/python/projects/beit3/utils.py", line 907, in coco_caption_eval
coco_eval.evaluate()
File "/home/y/miniconda3/envs/Beit3/lib/python3.9/site-packages/pycocoevalcap/eval.py", line 53, in evaluate
score, scores = scorer.compute_score(gts, res)
File "/home/y/miniconda3/envs/Beit3/lib/python3.9/site-packages/pycocoevalcap/bleu/bleu.py", line 23, in compute_score
assert(gts.keys() == res.keys())
AssertionError
[2023-11-03 14:50:46,818] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 65181 closing signal SIGTERM
[2023-11-03 14:50:46,819] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 65182 closing signal SIGTERM
[2023-11-03 14:50:46,820] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 65183 closing signal SIGTERM
[2023-11-03 14:50:48,701] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 0 (pid: 65180) of binary: /home/y/miniconda3/envs/Beit3/bin/python
Traceback (most recent call last):
File "/home/y/miniconda3/envs/Beit3/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/y/miniconda3/envs/Beit3/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/y/miniconda3/envs/Beit3/lib/python3.9/site-packages/torch/distributed/launch.py", line 196, in <module>
main()
File "/home/y/miniconda3/envs/Beit3/lib/python3.9/site-packages/torch/distributed/launch.py", line 192, in main
launch(args)
File "/home/y/miniconda3/envs/Beit3/lib/python3.9/site-packages/torch/distributed/launch.py", line 177, in launch
run(args)
File "/home/y/miniconda3/envs/Beit3/lib/python3.9/site-packages/torch/distributed/run.py", line 797, in run
elastic_launch(
File "/home/y/miniconda3/envs/Beit3/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/y/miniconda3/envs/Beit3/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
**Please tell me what I should do to finetune successfully**
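The assertion is raised by pycocoevalcap, which requires the prediction file and the ground-truth annotations to cover exactly the same set of image ids; with `--dist_eval`, padding or dropping examples across ranks is one plausible cause. A minimal diagnostic sketch, assuming the standard COCO caption result format (a list of `{"image_id", "caption"}` dicts) and using the file paths from the log above:
```python
import json

# Hypothetical diagnostic: compare the image ids on both sides of the
# failing `gts.keys() == res.keys()` assertion.
with open("./save/submit_coco_captioning_val_e0.json") as f:
    preds = json.load(f)
with open("./save/coco_karpathy_val_gt.json") as f:
    gt = json.load(f)

pred_ids = {p["image_id"] for p in preds}
gt_ids = {a["image_id"] for a in gt["annotations"]}

print("in predictions only:", sorted(pred_ids - gt_ids)[:10])
print("in ground truth only:", sorted(gt_ids - pred_ids)[:10])
```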
|
closed
|
2023-11-02T04:29:52Z
|
2024-01-20T15:43:03Z
|
https://github.com/microsoft/unilm/issues/1351
|
[] |
jinqin-99
| 5
|
pytorch/vision
|
machine-learning
| 8,971
|
pytorch 2.6 support
|
### 🚀 The feature
Since PyTorch 2.6 has been out for a while, has anyone validated support? It would be great to have this included in a release soon.
### Motivation, pitch
We run a large conda env that includes torchvision, and would like to upgrade it to pt2.6, but ideally without dropping torchvision.
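For reference, torchvision releases are built against one specific torch release; 0.21.x is the series that shipped alongside torch 2.6 (pairing taken from the release history). A small guard sketch, with a partial pairing table as an assumption, that could run before rolling an upgrade into a shared env:
```python
import torch
import torchvision

# Partial pairing table (assumption based on release history); extend as needed.
known_pairs = {"2.6": "0.21", "2.5": "0.20"}

# Strip local version suffixes like "+cu124" before comparing minor versions.
torch_minor = ".".join(torch.__version__.split("+")[0].split(".")[:2])
tv_minor = ".".join(torchvision.__version__.split("+")[0].split(".")[:2])

if known_pairs.get(torch_minor) != tv_minor:
    raise RuntimeError(
        f"torch {torch.__version__} / torchvision {torchvision.__version__} "
        "is not a known compatible pair"
    )
print("compatible:", torch.__version__, torchvision.__version__)
```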
### Alternatives
_No response_
### Additional context
_No response_
|
closed
|
2025-03-14T18:11:30Z
|
2025-03-14T19:06:05Z
|
https://github.com/pytorch/vision/issues/8971
|
[] |
das-intensity
| 1
|
OpenBB-finance/OpenBB
|
python
| 6,894
|
[🕹️]Starry-eyed supporter
|
### What side quest or challenge are you solving?
Get five friends to star the repository
### Points
150 points
### Description
_No response_
### Provide proof that you've completed the task
Proof: Ask your friends to go to their GitHub Profile → Stars and take a screenshot where you can see their Github name and the last few starred repos:





|
closed
|
2024-10-27T16:20:55Z
|
2024-10-30T20:49:44Z
|
https://github.com/OpenBB-finance/OpenBB/issues/6894
|
[] |
sangram20
| 8
|
graphql-python/graphene
|
graphql
| 677
|
Recommended way to do Subscriptions in Flask.
|
I see a lot of open issues talking about Subscriptions in 2.0 and Subscriptions in Django. I cannot wrap my head around what the current status is and how everyone has implemented Subscriptions in their apps.
Reading through the threads, it looks like a lot of people have made significant progress and implemented their own solutions.
#430 #500 https://github.com/graphql-python/graphql-core/issues/149
For someone just starting now, would it be possible to give me pointers on how I should implement this in Flask?
Also, the Roadmap to 2.0 listed supporting Subscriptions as a goal, but it looks like it was punted. Does Graphene still plan to support Subscriptions?
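A minimal sketch of the subscription API that later shipped in graphene 3 (async-generator `subscribe_*` resolvers); this was not available in 2.0 at the time of this issue, and exposing it from Flask still needs a separate WebSocket transport on top:
```python
import asyncio

import graphene


class Query(graphene.ObjectType):
    hello = graphene.String()

    def resolve_hello(root, info):
        return "world"


class Subscription(graphene.ObjectType):
    count = graphene.Int(up_to=graphene.Int())

    async def subscribe_count(root, info, up_to):
        # each yielded value is pushed to the subscriber
        for i in range(up_to):
            yield i
            await asyncio.sleep(1)


schema = graphene.Schema(query=Query, subscription=Subscription)
```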
|
closed
|
2018-02-23T19:04:39Z
|
2024-12-27T15:29:42Z
|
https://github.com/graphql-python/graphene/issues/677
|
[
"wontfix"
] |
kavink
| 3
|
vllm-project/vllm
|
pytorch
| 15,300
|
[Bug]: OPEA/Mistral-Small-3.1-24B-Instruct-2503-int4-AutoRound-awq-sym error
|
### Your current environment
<details>
<summary>The output of `python collect_env.py`</summary>
```text
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-5)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.34
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:16:10) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.14.0-570.el9.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
GPU 2: NVIDIA GeForce RTX 4090
GPU 3: NVIDIA GeForce RTX 4090
GPU 4: NVIDIA GeForce RTX 4090
GPU 5: NVIDIA GeForce RTX 4090
GPU 6: NVIDIA GeForce RTX 4090
GPU 7: NVIDIA GeForce RTX 4090
Nvidia driver version: 570.86.16
cuDNN version: Probably one of the following:
/usr/local/lib/python3.9/site-packages/nvidia/cudnn/lib/libcudnn.so.9
/usr/local/lib/python3.9/site-packages/nvidia/cudnn/lib/libcudnn_adv.so.9
/usr/local/lib/python3.9/site-packages/nvidia/cudnn/lib/libcudnn_cnn.so.9
/usr/local/lib/python3.9/site-packages/nvidia/cudnn/lib/libcudnn_engines_precompiled.so.9
/usr/local/lib/python3.9/site-packages/nvidia/cudnn/lib/libcudnn_engines_runtime_compiled.so.9
/usr/local/lib/python3.9/site-packages/nvidia/cudnn/lib/libcudnn_graph.so.9
/usr/local/lib/python3.9/site-packages/nvidia/cudnn/lib/libcudnn_heuristic.so.9
/usr/local/lib/python3.9/site-packages/nvidia/cudnn/lib/libcudnn_ops.so.9
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
BIOS Vendor ID: Intel(R) Corporation
Model name: Intel(R) Xeon(R) Gold 6430
BIOS Model name: Intel(R) Xeon(R) Gold 6430
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
CPU(s) scaling MHz: 26%
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] jj-pytorchvideo==0.1.5
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-ml-py==12.570.86
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnxruntime==1.21.0
[pip3] pynvml==12.0.0
[pip3] pytorch-lightning==2.5.1
[pip3] pytorch-wpe==0.0.1
[pip3] pyzmq==26.3.0
[pip3] sentence-transformers==3.4.1
[pip3] torch==2.6.0
[pip3] torch-complex==0.4.4
[pip3] torchao==0.9.0
[pip3] torchaudio==2.6.0
[pip3] torchdiffeq==0.2.5
[pip3] torchmetrics==1.7.0
[pip3] torchvision==0.21.0
[pip3] transformers==4.50.0.dev0
[pip3] transformers-stream-generator==0.0.5
[pip3] triton==3.2.0
[pip3] vector-quantize-pytorch==1.17.3
[pip3] x-transformers==2.1.37
[conda] jj-pytorchvideo 0.1.5 pypi_0 pypi
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-ml-py 12.570.86 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pynvml 12.0.0 pypi_0 pypi
[conda] pytorch-lightning 2.5.1 pypi_0 pypi
[conda] pytorch-wpe 0.0.1 pypi_0 pypi
[conda] pyzmq 26.3.0 pypi_0 pypi
[conda] sentence-transformers 3.4.1 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torch-complex 0.4.4 pypi_0 pypi
[conda] torchao 0.9.0 pypi_0 pypi
[conda] torchaudio 2.6.0 pypi_0 pypi
[conda] torchdiffeq 0.2.5 pypi_0 pypi
[conda] torchmetrics 1.7.0 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi
[conda] transformers 4.50.0.dev0 pypi_0 pypi
[conda] transformers-stream-generator 0.0.5 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
[conda] vector-quantize-pytorch 1.17.3 pypi_0 pypi
[conda] x-transformers 2.1.37 pypi_0 pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.8.1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 NIC1 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NODE NODE NODE SYS SYS SYS SYS NODE NODE 0-31,64-95 0 N/A
GPU1 NODE X NODE NODE SYS SYS SYS SYS NODE NODE 0-31,64-95 0 N/A
GPU2 NODE NODE X NODE SYS SYS SYS SYS NODE NODE 0-31,64-95 0 N/A
GPU3 NODE NODE NODE X SYS SYS SYS SYS NODE NODE 0-31,64-95 0 N/A
GPU4 SYS SYS SYS SYS X NODE NODE NODE SYS SYS 32-63,96-127 1 N/A
GPU5 SYS SYS SYS SYS NODE X NODE NODE SYS SYS 32-63,96-127 1 N/A
GPU6 SYS SYS SYS SYS NODE NODE X NODE SYS SYS 32-63,96-127 1 N/A
GPU7 SYS SYS SYS SYS NODE NODE NODE X SYS SYS 32-63,96-127 1 N/A
NIC0 NODE NODE NODE NODE SYS SYS SYS SYS X PIX
NIC1 NODE NODE NODE NODE SYS SYS SYS SYS PIX X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_0
NIC1: mlx5_1
CUDA_MODULE_LOADING=LAZY
NCCL_CUMEM_ENABLE=0
TORCHINDUCTOR_COMPILE_THREADS=1
```
</details>
### 🐛 Describe the bug
CUDA_VISIBLE_DEVICES=0,1,4,7 vllm serve OPEA/Mistral-Small-3.1-24B-Instruct-2503-int4-AutoRound-awq-sym --tensor-parallel-size 4 --config-format mistral --load-format mistral --limit-mm-per-prompt 'image=4' --max-model-len 8192 --port 11435 --quantization awq --tokenizer-mode mistral
ERROR:
```
 warnings.warn(
INFO 03-22 02:03:39 [__init__.py:256] Automatically detected platform cuda.
INFO 03-22 02:03:40 [api_server.py:977] vLLM API server version 0.8.1
INFO 03-22 02:03:40 [api_server.py:978] args: Namespace(subparser='serve', model_tag='OPEA/Mistral-Small-3.1-24B-Instruct-2503-int4-AutoRound-awq-sym', config='', host=None, port=11435, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, enable_ssl_refresh=False, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=False, tool_call_parser=None, tool_parser_plugin='', model='OPEA/Mistral-Small-3.1-24B-Instruct-2503-int4-AutoRound-awq-sym', task='auto', tokenizer=None, hf_config_path=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, allowed_local_media_path=None, download_dir=None, load_format='mistral', config_format='mistral', dtype='auto', kv_cache_dtype='auto', max_model_len=8192, guided_decoding_backend='xgrammar', logits_processor_pattern=None, model_impl='auto', distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=4, enable_expert_parallel=False, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=None, enable_prefix_caching=None, disable_sliding_window=False, use_v2_block_manager=True, num_lookahead_slots=0, seed=None, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_partial_prefills=1, max_long_partial_prefills=1, long_prefill_token_threshold=0, max_num_seqs=None, max_logprobs=20, disable_log_stats=False, quantization='awq', rope_scaling=None, rope_theta=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt={'image': 4}, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=False, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, use_tqdm_on_load=True, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, scheduling_policy='fcfs', scheduler_cls='vllm.core.scheduler.Scheduler', override_neuron_config=None, override_pooler_config=None, compilation_config=None, kv_transfer_config=None, 
worker_cls='auto', worker_extension_cls='', generation_config='auto', override_generation_config=None, enable_sleep_mode=False, calculate_kv_scales=False, additional_config=None, enable_reasoning=False, reasoning_parser=None, disable_log_requests=False, max_log_len=None, disable_fastapi_docs=False, enable_prompt_tokens_details=False, enable_server_load_tracking=False, dispatch_function=<function ServeSubcommand.cmd at 0x7f730df39240>)
Traceback (most recent call last):
File "/home/anaconda3/envs/xinference14/bin/vllm", line 8, in <module>
sys.exit(main())
File "/home/anaconda3/envs/xinference14/lib/python3.10/site-packages/vllm/entrypoints/cli/main.py", line 75, in main
args.dispatch_function(args)
File "/home/anaconda3/envs/xinference14/lib/python3.10/site-packages/vllm/entrypoints/cli/serve.py", line 33, in cmd
uvloop.run(run_server(args))
File "/home/anaconda3/envs/xinference14/lib/python3.10/site-packages/uvloop/__init__.py", line 82, in run
return loop.run_until_complete(wrapper())
File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
File "/home/anaconda3/envs/xinference14/lib/python3.10/site-packages/uvloop/__init__.py", line 61, in wrapper
return await main
File "/home/anaconda3/envs/xinference14/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 1012, in run_server
async with build_async_engine_client(args) as engine_client:
File "/home/anaconda3/envs/xinference14/lib/python3.10/contextlib.py", line 199, in __aenter__
return await anext(self.gen)
File "/home/anaconda3/envs/xinference14/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 141, in build_async_engine_client
async with build_async_engine_client_from_engine_args(
File "/home/anaconda3/envs/xinference14/lib/python3.10/contextlib.py", line 199, in __aenter__
return await anext(self.gen)
File "/home/anaconda3/envs/xinference14/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 161, in build_async_engine_client_from_engine_args
vllm_config = engine_args.create_engine_config(usage_context=usage_context)
File "/home/anaconda3/envs/xinference14/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 1206, in create_engine_config
model_config = self.create_model_config()
File "/home/anaconda3/envs/xinference14/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 1121, in create_model_config
return ModelConfig(
File "/home/anaconda3/envs/xinference14/lib/python3.10/site-packages/vllm/config.py", line 333, in __init__
hf_config = get_config(self.hf_config_path or self.model,
File "/home/anaconda3/envs/xinference14/lib/python3.10/site-packages/vllm/transformers_utils/config.py", line 324, in get_config
config = load_params_config(model, revision, token=HF_TOKEN, **kwargs)
File "/home/anaconda3/envs/xinference14/lib/python3.10/site-packages/vllm/transformers_utils/config.py", line 623, in load_params_config
assert isinstance(config_dict, dict)
AssertionError`
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
open
|
2025-03-21T18:14:47Z
|
2025-03-24T08:18:34Z
|
https://github.com/vllm-project/vllm/issues/15300
|
[
"bug"
] |
moshilangzi
| 2
|
ymcui/Chinese-LLaMA-Alpaca-2
|
nlp
| 158
|
Inference problems after pre-training
|
### The following items must be checked before submitting
- [X] Make sure you are using the latest code from the repository (git pull); some issues have already been resolved and fixed.
- [X] I have read the [project documentation](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki) and the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/常见问题), and I have searched the existing issues without finding a similar problem or solution.
- [X] Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), etc.; it is recommended to look for solutions in the corresponding projects as well.
### Issue type
Model inference
### Base model
Alpaca-2-7B
### Operating system
Linux
### Detailed description of the problem
After training finished and the model was merged, I ran inference with `scripts/inference/inference_hf.py`, but the output is completely garbled.
**Pre-training script**:
```
wandb disabled
lr=1e-4
lora_rank=64
lora_alpha=128
lora_trainable="q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj"
modules_to_save="embed_tokens,lm_head"
lora_dropout=0.05
dir_date=$(date +%m%d)
train_version=v0.1
NUM_NODES=1
SEQ_LEN=2048
GC_SCALE=4
SKYPILOT_NUM_GPUS_PER_NODE=4
PER_DEVICE_BATCH_SIZE=64
GRADIENT_ACCUMULATION_STEPS=64
pretrained_model=/data/alpaca/chinese-llama-2-13b
chinese_tokenizer_path=/data/alpaca/chinese-alpaca-2-lora-13b
dataset_dir=/data/train-data/
# the data cache needs to be cleared every run, otherwise previous data gets included
data_cache=/data/cache/${dir_date}-${train_version}
rm -rf ${data_cache}*
per_device_train_batch_size=${PER_DEVICE_BATCH_SIZE}
per_device_eval_batch_size=${PER_DEVICE_BATCH_SIZE}
training_steps=1500
gradient_accumulation_steps=${GRADIENT_ACCUMULATION_STEPS}
output_dir=/data/output/pt-chinese-alpaca-2-13b-lora-${dir_date}-${train_version}
block_size=1024
max_seq_length=1024
deepspeed_config_file=ds_zero2_no_offload.json
run_clm_sft_with_peft=run_clm_pt_with_peft.py
torchrun --nnodes 1 --nproc_per_node ${SKYPILOT_NUM_GPUS_PER_NODE} ${run_clm_sft_with_peft} \
--deepspeed ${deepspeed_config_file} \
--model_name_or_path ${pretrained_model} \
--tokenizer_name_or_path ${chinese_tokenizer_path} \
--peft_path ${chinese_tokenizer_path} \
--dataset_dir ${dataset_dir} \
--per_device_train_batch_size ${per_device_train_batch_size} \
--do_train \
--seed $RANDOM \
--fp16 \
--num_train_epochs 10 \
--lr_scheduler_type cosine \
--learning_rate ${lr} \
--warmup_ratio 0.05 \
--weight_decay 0.01 \
--logging_strategy steps \
--logging_steps 10 \
--save_strategy steps \
--save_total_limit 3 \
--save_steps 500 \
--gradient_accumulation_steps ${gradient_accumulation_steps} \
--preprocessing_num_workers 8 \
--block_size ${block_size} \
--output_dir ${output_dir} \
--overwrite_output_dir \
--ddp_timeout 30000 \
--logging_first_step True \
--lora_rank ${lora_rank} \
--lora_alpha ${lora_alpha} \
--trainable ${lora_trainable} \
--modules_to_save ${modules_to_save} \
--lora_dropout ${lora_dropout} \
--torch_dtype float16 \
--gradient_checkpointing \
--ddp_find_unused_parameters False \
--flash_attn \
--data_cache_dir ${data_cache}
```
Inference script:
```
$ python scripts/inference/inference_hf.py \
--base_model /data/output/merged-chinese-alpaca-2-13b-lora-0820-v0.2 \
--with_prompt \
--interactive
```
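One common cause of garbled output after merging an extended-vocabulary LoRA is a mismatch between the merged tokenizer's vocabulary size and the model's embedding size. A hypothetical sanity check, using the merged directory from the inference command above:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

merged_dir = "/data/output/merged-chinese-alpaca-2-13b-lora-0820-v0.2"

# With an extended Chinese vocabulary, the merged model's input-embedding
# rows must match the merged tokenizer's vocab size, otherwise decoding
# produces garbage.
tok = AutoTokenizer.from_pretrained(merged_dir)
model = AutoModelForCausalLM.from_pretrained(merged_dir)

emb_rows = model.get_input_embeddings().weight.shape[0]
print("tokenizer vocab:", len(tok), "embedding rows:", emb_rows)
assert emb_rows == len(tok), "tokenizer / embedding size mismatch after merge"
```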
### Dependencies (must be provided for code-related issues)
```
peft 0.3.0.dev0
torch 2.0.1
transformers 4.31.0
```
### Run log or screenshots
<img width="1099" alt="image" src="https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/assets/6229526/fe748ae8-0201-4fdc-b2e0-8aa825061036">
|
closed
|
2023-08-21T01:58:11Z
|
2023-08-22T01:20:24Z
|
https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/issues/158
|
[] |
icowan
| 4
|
streamlit/streamlit
|
python
| 9,922
|
min_value for st.date_input greys out the whole year if min_value is in November or December
|
### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
Somewhat related to https://github.com/streamlit/streamlit/issues/9667 but under different conditions.
I noticed that if the min_value parameter falls in November or December of a year, then that whole year is disabled in the year dropdown.
For example, when the min_value is set to a date in October, `min_date = datetime(2018, 10, 15)` (or earlier), the year 2018 is selectable:

When the min_value is set to a date in November or December `min_date = datetime(2018, 11, 1)`, then the year 2018 is disabled.

### Reproducible Code Example
```Python
import streamlit as st
from datetime import timedelta, datetime
min_date = datetime(2018, 11, 15)
max_date = datetime(2024, 11, 25)
if "filter_date" not in st.session_state:
st.session_state["filter_date"] = max(
max_date - timedelta(days=13), min_date
)
st.date_input(
"Start date",
key="filter_date",
min_value=min_date,
max_value=max_date
)
# The year 2018 will be disabled in the dropdown.
```
### Steps To Reproduce
1. Set a minimum date in any year where the month is November or December
2. The max date can be any date after the minimum date
3. The filter date value can be in between or equal to the max date
4. When you open the dropdown to navigate to a date in the same year as your minimum date, the whole year is disabled.
### Expected Behavior
Expected behavior is to be able to navigate to the year that my minimum date lives in.
### Current Behavior
There is a workaround to navigate to your minimum date. If your date is 2018-11-15 and 2018 is disabled on the year dropdown, you can navigate to 2019-01-01 and then use the month arrow on the top left to still navigate back to your minimum date:

### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.40.1
- Python version: 3.11.9
- Operating System: Mac OS 13.7.1
- Browser: Chrome Version 131.0.6778.69 (Official Build) (arm64)
### Additional Information
_No response_
|
closed
|
2024-11-25T17:46:29Z
|
2024-11-25T20:49:35Z
|
https://github.com/streamlit/streamlit/issues/9922
|
[
"type:bug",
"feature:st.date_input",
"upstream"
] |
blee-tetrascience
| 6
|
fbdesignpro/sweetviz
|
pandas
| 146
|
sweetviz installed but won't run: module 'numpy' has no attribute 'warnings'
|
Sweetviz is installed, version 2.1.4 (on Windows)

But when I try to use it I get an error

Numpy version is
numpy 1.24.3 py311hdab7c0b_1
numpy-base 1.24.3 py311hd01c5d8_1
I tried downgrading numpy as per a previous post, but it impacts other packages that I need. Please advise, thanks!
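NumPy 1.24 removed the long-deprecated `np.warnings` alias that sweetviz 2.1.4 still references. Besides upgrading to a newer sweetviz release (if one is available for your environment), a shim that restores the alias before importing sweetviz may unblock you without downgrading numpy; a workaround sketch, not an official fix:
```python
import warnings

import numpy as np

# numpy >= 1.24 removed the deprecated np.warnings alias that older sweetviz
# releases reference; restore it as a shim before sweetviz is imported.
if not hasattr(np, "warnings"):
    np.warnings = warnings

import pandas as pd
import sweetviz as sv

report = sv.analyze(pd.DataFrame({"x": [1, 2, 3]}))
report.show_html("report.html", open_browser=False)
```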
|
closed
|
2023-06-12T01:17:39Z
|
2023-11-15T12:13:33Z
|
https://github.com/fbdesignpro/sweetviz/issues/146
|
[] |
madisondevin
| 3
|