| column | dtype | stats |
|---|---|---|
| repo_name | string | lengths 9–75 |
| topic | string | 30 classes |
| issue_number | int64 | 1 – 203k |
| title | string | lengths 1–976 |
| body | string | lengths 0–254k |
| state | string | 2 classes |
| created_at | string | lengths 20–20 |
| updated_at | string | lengths 20–20 |
| url | string | lengths 38–105 |
| labels | list | lengths 0–9 |
| user_login | string | lengths 1–39 |
| comments_count | int64 | 0 – 452 |
open-mmlab/mmdetection
pytorch
11,805
Support for YOLO Dataset format?
Hello, I've noticed that MMDetection doesn't seem to support the YOLO dataset format:

```
<object-class> <x_center> <y_center> <width> <height>
```

Is it possible that I might have missed this functionality, or is it indeed not supported yet? If this feature is currently not available, I'd be happy to contribute by making a PR to add this support.
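For concreteness, a minimal sketch (my own illustration, not MMDetection code) of the conversion a YOLO-format dataset loader has to perform: each line stores a class id plus a center/size box normalized by the image dimensions.

```python
def parse_yolo_line(line: str, img_w: int, img_h: int):
    """Convert one YOLO annotation line '<class> <cx> <cy> <w> <h>'
    (box values normalized to [0, 1]) into (class_id, x1, y1, x2, y2) pixels."""
    cls, cx, cy, w, h = line.split()
    cx, w = float(cx) * img_w, float(w) * img_w
    cy, h = float(cy) * img_h, float(h) * img_h
    return int(cls), cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2

print(parse_yolo_line("0 0.5 0.5 0.2 0.4", 640, 480))
# -> (0, 256.0, 144.0, 384.0, 336.0)
```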
open
2024-06-20T14:49:10Z
2025-02-24T12:10:53Z
https://github.com/open-mmlab/mmdetection/issues/11805
[]
bhyun-kim
1
bmoscon/cryptofeed
asyncio
984
Liquidation quantity?
Is the liquidation quantity in USD or base currency? If the latter, is there an option to convert it to USD right away?
open
2023-07-06T07:51:46Z
2023-07-06T07:51:46Z
https://github.com/bmoscon/cryptofeed/issues/984
[ "question" ]
gigitalz
0
custom-components/pyscript
jupyter
27
Feature Request: access entity attributes as object attributes
I have a pyscript app YAML like this:

```yaml
- app: calc_setpoint
  sensor_temp: climate.downstairs
  sensor_temp_attr: current_temperature
```

Notice `sensor_temp_attr`, which is optional as consumed by this code:

```python
sensor_temp_sensor = data.get('sensor_temp')
if sensor_temp_sensor is None:
    log.error(f'{climate_entity}: sensor_temp is required')
    return
sensor_temp_attr = data.get('sensor_temp_attr')
if sensor_temp_attr is None:
    sensor_temp = float(state.get(sensor_temp_sensor))
else:
    sensor_temp = float(state.get_attr(sensor_temp_sensor)[sensor_temp_attr])
```

If entity attributes were accessible as object attributes my code would be simpler:

```yaml
- app: calc_setpoint
  sensor_temp: climate.downstairs.current_temperature
```

```python
sensor_temp_sensor = data.get('sensor_temp')
if sensor_temp_sensor is None:
    log.error(f'{climate_entity}: sensor_temp is required')
    return
sensor_temp = float(state.get(sensor_temp_sensor))
```

For other uses, it would also make state_trigger expressions easier:

```python
@state_trigger('climate.downstairs.current_temperature > 75')
```

vs (and I don't even know if this code works):

```python
@state_trigger('state.get_attr("climate.downstairs")["current_temperature"] > 75')
```

If there is a namespace collision issue at the root of the state variable, this syntax would be second best:

```
climate.downstairs.attributes.current_temperature
```
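In the meantime, a hedged workaround sketch built only on pyscript's documented `state.get` / `state.get_attr`; the helper name `get_state_value` is my own, not part of pyscript:

```python
def get_state_value(path):
    """Resolve 'domain.entity' or 'domain.entity.attribute' to a value.

    Hypothetical helper built on pyscript's existing state.get / state.get_attr.
    """
    parts = path.split('.')
    entity_id = '.'.join(parts[:2])   # e.g. 'climate.downstairs'
    if len(parts) == 2:
        return state.get(entity_id)
    return state.get_attr(entity_id)[parts[2]]  # e.g. 'current_temperature'

# sensor_temp = float(get_state_value(data.get('sensor_temp')))
```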
closed
2020-10-01T12:26:40Z
2020-10-01T21:17:14Z
https://github.com/custom-components/pyscript/issues/27
[]
dlashua
3
pydantic/pydantic-ai
pydantic
1,185
Responses API: openai 4o search preview model
### Description

I've found that OpenAI's new search model performs better than just giving the models a search tool, as it was trained by them specifically for search capabilities. I think it should be fairly easy to support: all we need is to add it to the `KnownModelName` list and make the system aware that it does not support any other tools at the moment (raise an error if tools are given for OpenAI's search models). I think this will be helpful, as this model is very good at searching the web and gives results similar to what OpenAI's UI gives (which makes life easier for prompt engineers). I'll be happy to make my first contribution with this PR if you think it's relevant and don't see other issues with it. Cheers.

### References

https://platform.openai.com/docs/models/gpt-4o-search-preview
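A minimal sketch of the guard being proposed; the set name and function below are illustrative, not pydantic-ai's actual API:

```python
# Hypothetical names, for illustration only; not pydantic-ai's actual API.
OPENAI_SEARCH_MODELS = {"gpt-4o-search-preview", "gpt-4o-mini-search-preview"}

def check_tools_supported(model_name: str, tools: list) -> None:
    """Fail fast when tools are passed to a search-preview model, which
    (per this proposal) does not support additional tools."""
    if model_name in OPENAI_SEARCH_MODELS and tools:
        raise ValueError(
            f"{model_name} does not support tools; remove them or pick another model."
        )
```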
closed
2025-03-20T08:37:20Z
2025-03-20T09:28:35Z
https://github.com/pydantic/pydantic-ai/issues/1185
[]
NotSoShaby
2
coqui-ai/TTS
deep-learning
4,045
A portable version is great
Hello admin and everyone. For the many people like me who aren't comfortable with code, a portable version on Windows would be great. Can anyone make a portable version for the community? Thank you very much.
closed
2024-11-03T23:56:17Z
2024-12-28T11:58:23Z
https://github.com/coqui-ai/TTS/issues/4045
[ "wontfix", "feature request" ]
kerlynla
2
alteryx/featuretools
data-science
2,042
"WindowExec: No Partition Defined for Window operation!" warnings on Spark EntitySets
Both when I add Spark DataFrames to my EntitySet and when I call `.dfs()` on the Spark EntitySet, I see a flood of warnings:

```
22/04/26 16:41:09 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
```

It's even shown in the official documentation: https://featuretools.alteryx.com/en/stable/guides/using_spark_entitysets.html#Running-DFS

Is this an indication of some fundamental scaling issues when using Spark, or can I safely ignore it? What is the root cause of the warning?

#### Output of ``featuretools.show_info()``

<details>

Featuretools version: 1.8.0

SYSTEM INFO
-----------
python: 3.9.11.final.0
python-bits: 64
OS: Darwin
OS-release: 21.4.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8

INSTALLED VERSIONS
------------------
numpy: 1.22.3
pandas: 1.3.5
tqdm: 4.64.0
cloudpickle: 2.0.0
dask: 2022.4.1
distributed: 2022.4.1
psutil: 5.9.0
pip: 22.0.3
setuptools: 60.6.0

</details>
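For context, Spark emits this warning whenever a window function runs without a `partitionBy`, since all rows must then be collected into a single partition; a standalone PySpark illustration (not Featuretools internals):

```python
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, "a"), (2, "b"), (3, "a")], ["id", "key"])

# Unpartitioned window: triggers "WindowExec: No Partition Defined" and
# shuffles everything to a single partition.
w_all = Window.orderBy("id")
df.withColumn("rn", F.row_number().over(w_all)).collect()

# Partitioned window: no warning, work is spread across partitions.
w_part = Window.partitionBy("key").orderBy("id")
df.withColumn("rn", F.row_number().over(w_part)).collect()
```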
closed
2022-04-27T17:54:00Z
2022-08-26T20:21:42Z
https://github.com/alteryx/featuretools/issues/2042
[ "bug" ]
nicodv
1
ploomber/ploomber
jupyter
273
Task.debug not working when passing unserializer to PythonCallable
closed
2020-10-09T17:44:04Z
2020-12-30T22:43:51Z
https://github.com/ploomber/ploomber/issues/273
[]
edublancas
0
microsoft/nni
data-science
5,555
TypeError: Invalid shape (64, 64, 1, 1) for image data
**Environment**: VScode - NNI version: 2.10 - Training service:remote - Client OS: MacOS - Server OS (for remote mode only): Ubuntu - Python version:3.9 - PyTorch version: 1.12 - Is conda/virtualenv/venv used?: conda - Is running in Docker?: No Hi, I am trying to prune a Face detector with this architecture: ``` EXTD( (base): ModuleList( (0): Sequential( (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): PReLU(num_parameters=1) ) (1): InvertedResidual_dwc( (conv): Sequential( (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=64) (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): PReLU(num_parameters=1) (3): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (2): InvertedResidual_dwc( (conv): Sequential( (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): PReLU(num_parameters=1) (3): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=128) (4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): PReLU(num_parameters=1) (6): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (7): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (3): InvertedResidual_dwc( (conv): Sequential( (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): PReLU(num_parameters=1) (3): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=128) (4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): PReLU(num_parameters=1) (6): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (7): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (4): InvertedResidual_dwc( (conv): Sequential( (0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): PReLU(num_parameters=1) (3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256) (4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): PReLU(num_parameters=1) (6): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (7): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (5): InvertedResidual_dwc( (conv): Sequential( (0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): PReLU(num_parameters=1) (3): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=256) (4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): PReLU(num_parameters=1) (6): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (7): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) ) (upfeat): ModuleList( (0): Sequential( (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=64, bias=False) (1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (2): BatchNorm2d(64, eps=1e-05, momentum=0.1, 
affine=True, track_running_stats=True) (3): ReLU() ) (1): Sequential( (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=64, bias=False) (1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): ReLU() ) (2): Sequential( (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=64, bias=False) (1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): ReLU() ) (3): Sequential( (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=64, bias=False) (1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): ReLU() ) (4): Sequential( (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=64, bias=False) (1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): ReLU() ) ) (loc): ModuleList( (0): Conv2d(64, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): Conv2d(64, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): Conv2d(64, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (3): Conv2d(64, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (4): Conv2d(64, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (5): Conv2d(64, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) ) (conf): ModuleList( (0): Conv2d(64, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): Conv2d(64, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (2): Conv2d(64, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (3): Conv2d(64, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (4): Conv2d(64, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (5): Conv2d(64, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) ) (softmax): Softmax(dim=-1) ) ```

I am using this config_list:

```python
config_list = [{
    'sparsity_per_layer': 0.2,
    'op_types': ['Conv2d'],
}, {
    'exclude': True,
    'op_names': ['loc.0', 'loc.1', 'loc.2', 'loc.3', 'loc.4', 'loc.5',
                 'conf.0', 'conf.1', 'conf.2', 'conf.3', 'conf.4', 'conf.5']
}]
```

and when I apply the pruner and try to visualize the mask I get the following error:

```
sparsity: 0.8125
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[5], line 7
      4 mask = mask['weight'].detach().cpu().numpy()
      6 print("sparsity: {}".format(mask.sum() / mask.size))
----> 7 plt.imshow(mask)

File ~/anaconda3/envs/gpu/lib/python3.9/site-packages/matplotlib/pyplot.py:2695, in imshow(X, cmap, norm, aspect, interpolation, alpha, vmin, vmax, origin, extent, interpolation_stage, filternorm, filterrad, resample, url, data, **kwargs)
   2689 @_copy_docstring_and_deprecators(Axes.imshow)
   2690 def imshow(
   2691         X, cmap=None, norm=None, *, aspect=None, interpolation=None,
   2692         alpha=None, vmin=None, vmax=None, origin=None, extent=None,
   2693         interpolation_stage=None, filternorm=True, filterrad=4.0,
   2694         resample=None, url=None, data=None, **kwargs):
-> 2695     __ret = gca().imshow(
   2696         X, cmap=cmap, norm=norm, aspect=aspect,
   2697         interpolation=interpolation, alpha=alpha, vmin=vmin,
   2698         vmax=vmax, origin=origin, extent=extent,
   2699         interpolation_stage=interpolation_stage,
   2700         filternorm=filternorm, filterrad=filterrad, resample=resample,
   2701         url=url, **({"data": data} if data is not None else {}),
   2702         **kwargs)
   2703 sci(__ret)
   2704 return __ret
...
    716 # - otherwise casting wraps extreme values, hiding outliers and
    717 #   making reliable interpretation impossible.
    718 high = 255 if np.issubdtype(self._A.dtype, np.integer) else 1

TypeError: Invalid shape (64, 64, 1, 1) for image data
```

The code I used is this:

```python
from nni.compression.pytorch.pruning import L1NormPruner
import matplotlib.pyplot as plt

pruner = L1NormPruner(model, config_list)
_, masks = pruner.compress()  # compress() returns (model, masks); this assignment was missing above

for _, mask in masks.items():
    mask = mask['weight'].detach().cpu().numpy()
    print("sparsity: {}".format(mask.sum() / mask.size))
    plt.imshow(mask)
```

It is also worth noting that even though I set `'sparsity_per_layer': 0.2`, when I try to visualize the masks it prints `sparsity: 0.8125` as you see above. Do you know why, and how I can fix this issue?
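Two hedged notes on the symptoms above, as a sketch rather than a confirmed fix: an NNI conv mask has shape `(out_channels, in_channels, kH, kW)`, so it must be reshaped to 2-D before `plt.imshow`; and `mask.sum() / mask.size` measures the fraction of *retained* weights, so the printed 0.8125 corresponds to a pruned fraction of 1 - 0.8125 = 0.1875, i.e. roughly the requested 0.2 sparsity:

```python
import matplotlib.pyplot as plt

for name, mask in masks.items():
    m = mask['weight'].detach().cpu().numpy()
    kept = m.sum() / m.size                # fraction of ones = weights kept
    print(f"{name}: pruned = {1 - kept:.4f}, kept = {kept:.4f}")
    plt.imshow(m.reshape(m.shape[0], -1))  # flatten (in, kH, kW) into one axis
    plt.show()
```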
closed
2023-05-10T12:30:29Z
2023-05-27T12:28:55Z
https://github.com/microsoft/nni/issues/5555
[]
gkrisp98
6
ray-project/ray
machine-learning
51,094
[Data] Extend Ray Data with read/write Hive
**Description**

Currently, Ray does not support a Hive catalog natively. The only way is `read_sql`/`write_sql`, whose internal steps are the following:

1. Connect to HiveServer through JDBC
2. HiveServer parses the SQL and translates it into an MR or Spark program
3. Run MR/Spark on the Hadoop cluster

This approach requires deploying HiveServer, plus extra resources from the Hadoop cluster to run MR/Spark. Having Ray Data read/write Hive directly would be more efficient. The file formats behind Hive tables are still Parquet, CSV, text, etc., and Ray has already implemented the related datasources and sinks for these formats, so it seems there is not much work to do.

Proposed solution

read_hive:
1. Create a Hive client (can reuse the HiveCatalog module from pyiceberg)
2. Get the Hive table and parse the table's file format
3. Construct the underlying datasource (ParquetDatasource/JSONDatasource/CSVDatasource/TextDatasource etc.) based on the file format; see the sketch below

write_hive:
1. Create a Hive client (can reuse the HiveCatalog module from pyiceberg)
2. Get the Hive table and parse the table's file format
3. Call write_parquet/write_json/write_csv etc. based on the file format

**Use case**
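A rough sketch of the dispatch described in step 3 of `read_hive`, using readers Ray Data already ships; the two metastore helpers are placeholders for whatever pyiceberg/Hive-client calls a real implementation would use:

```python
import ray

# Placeholder helpers: a real implementation would query the Hive metastore
# (e.g. via pyiceberg's HiveCatalog) for the table's location and file format.
def get_table_location(table: str) -> str:
    return f"hdfs://warehouse/{table}"   # illustrative path

def get_table_format(table: str) -> str:
    return "parquet"                     # illustrative format

def read_hive(table: str) -> "ray.data.Dataset":
    """Dispatch to an existing Ray Data reader based on the table's format."""
    path, fmt = get_table_location(table), get_table_format(table)
    readers = {
        "parquet": ray.data.read_parquet,
        "csv": ray.data.read_csv,
        "json": ray.data.read_json,
        "text": ray.data.read_text,
    }
    return readers[fmt](path)
```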
open
2025-03-05T07:43:43Z
2025-03-05T07:43:43Z
https://github.com/ray-project/ray/issues/51094
[]
laysfire
0
ckan/ckan
api
8,008
CKAN should not assume the database schema is always 'public'
## CKAN version
2.10

## Describe the bug
The file ckan/model/group.py contains queries which explicitly reference the 'public' schema. This breaks if CKAN was installed in a different schema.

### Steps to reproduce
Steps to reproduce the behavior:
- Install CKAN in a database schema that is not 'public';
- Log in;
- Create an organization;
- Log out and log in again;
- Check the error in the log (the 'member' table is not found).

### Expected behavior
CKAN should work when not installed in the public schema.

### Additional details
Error: "(psycopg2.errors.UndefinedTable) relation "public.member" does not exist"
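To illustrate the fix direction, a small SQLAlchemy sketch (the DSN and schema name are hypothetical): an unqualified table name is resolved through the connection's `search_path`, while a hardcoded `public.` prefix fails when the tables live in another schema:

```python
from sqlalchemy import create_engine, text

engine = create_engine("postgresql:///ckan")  # hypothetical DSN

with engine.connect() as conn:
    conn.execute(text("SET search_path TO ckan_schema"))  # hypothetical schema name
    conn.execute(text("SELECT * FROM member"))            # OK: resolved via search_path
    conn.execute(text("SELECT * FROM public.member"))     # fails: UndefinedTable
```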
open
2024-01-09T18:11:07Z
2024-06-20T03:37:13Z
https://github.com/ckan/ckan/issues/8008
[ "Good for Contribution" ]
temporarywork
2
nonebot/nonebot2
fastapi
2,668
Plugin: Punishing: Gray Raven meme stickers
### PyPI project name

nonebot-plugin-zsmeme

### Plugin import package name

nonebot_plugin_zsmeme

### Tags

[{"label":"帕弥什","color":"#ea5252"}]

### Plugin configuration options

_No response_
closed
2024-04-20T09:13:59Z
2024-04-22T06:33:59Z
https://github.com/nonebot/nonebot2/issues/2668
[ "Plugin" ]
shi-yingyingjiang
5
miguelgrinberg/Flask-SocketIO
flask
788
webrtc in flask with socketio
```python
import os
import requests
from flask import Flask, jsonify, render_template, request
from flask_socketio import SocketIO, emit, join_room, leave_room, send
from werkzeug import secure_filename

app = Flask(__name__)
app.config["SECRET_KEY"] = os.getenv("SECRET_KEY")
socketio = SocketIO(app)

@app.route("/")
def index():
    return render_template("index.html")

# 1st function
# 2nd function
@socketio.on('message')
def on_message():
    emit('message', {'message': message}, broadcast=True)

numClients = 0

# 3rd function
@socketio.on('join')
def on_join(data):
    username = data['user']
    room = data['room']
    if numClients == 0:
        join_room(room)
        emit('chat', {'username': username}, room=room)
    elif numClients == 1:
        join_room(room)
        emit('chat', {'username': username}, room=room)
    else:
        emit('full', room=room)
    numClients = numClients + 1

@socketio.on('ipaddr')
def ipaddr():
    ifaces = os.networkInterfaces()
    for dev in ifaces:
        # error here; if I comment this out the website runs
        # if (details.family === 'IPv4' && details.address !== '127.0.0.1'):
        emit('ipaddr', {'details.address': details.address})
```

https://github.com/googlecodelabs/webrtc-web/blob/master/step-04/index.js - this code is written in Node.js and I am trying to convert it to Python. Is the Python code above, corresponding to that link, correct?

This is the main.js code (https://github.com/googlecodelabs/webrtc-web/blob/master/step-04/js/main.js):

```javascript
'use strict';

var isInitiator;
window.room = prompt("Enter room name:");
var socket = io.connect(location.protocol + '//' + document.domain + ':' + location.port);

if (room !== "") {
  console.log('Message from client: Asking to join room ' + room);
  socket.emit('create or join', room);
}

socket.on('created', function(room, clientId) {
  isInitiator = true;
});

socket.on('full', function(room) {
  console.log('Message from client: Room ' + room + ' is full :^(');
});

socket.on('ipaddr', function(ipaddr) {
  console.log('Message from client: Server IP address is ' + ipaddr);
});

socket.on('joined', function(room, clientId) {
  isInitiator = false;
});

socket.on('log', function(array) {
  console.log.apply(console, array);
});
```

If there is any documentation or link, please provide it.
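On the `ipaddr` handler specifically: `os.networkInterfaces()` is a Node.js API with no equivalent in Python's `os` module. A hedged rewrite using `psutil` (assumptions: psutil is installed, and `socketio` is the object created in the code above):

```python
import socket

import psutil
from flask_socketio import emit

@socketio.on('ipaddr')
def ipaddr():
    # psutil.net_if_addrs() maps interface name -> address entries, roughly
    # what Node's os.networkInterfaces() returns.
    for dev, addrs in psutil.net_if_addrs().items():
        for details in addrs:
            if details.family == socket.AF_INET and details.address != '127.0.0.1':
                emit('ipaddr', {'address': details.address})
```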
closed
2018-09-10T17:39:53Z
2018-09-29T09:48:53Z
https://github.com/miguelgrinberg/Flask-SocketIO/issues/788
[ "question" ]
rupesh2017
1
tatsu-lab/stanford_alpaca
deep-learning
306
NotImplementedError: offload_to_cpu=True and NO_SHARD is not supported yet
I am using a single GPU (A10) to fine-tune the Bloom-560m model and hit this error. How can I solve it? I found similar problems in other projects, but I didn't know how to solve them in alpaca: https://github.com/Alpha-VLLM/LLaMA2-Accessory/issues/76
open
2023-11-28T08:10:12Z
2023-11-28T08:16:30Z
https://github.com/tatsu-lab/stanford_alpaca/issues/306
[]
mechigonft
1
graphql-python/flask-graphql
graphql
67
Upgrade to graphql-core v3
Hi, I'm currently looking at upgrading an application to graphql-core v3 (graphql-core-next) and I was wondering if there are any plans to create a version that would be compatible. Besides this package there's `graphene` as the main dependency and they have released a pre-release that is compatible with graphql-core v3
closed
2020-01-13T15:08:36Z
2020-09-29T09:11:00Z
https://github.com/graphql-python/flask-graphql/issues/67
[]
fhennig
3
scrapy/scrapy
web-scraping
6,060
Explicitly mention PythonItemExporter output changes in 2.11
We removed the binary mode of `PythonItemExporter` in 2.11, but it was the default one, so we should have mentioned a backwards-incompatible change ("`PythonItemExporter` used binary output by default but it no longer does") in the "Backward-incompatible changes" entry for 2.11.
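For the docs entry, a hedged before/after illustration of the change (my own example, based on this issue's description of the removed default binary mode):

```python
from scrapy.exporters import PythonItemExporter

exporter = PythonItemExporter()
result = exporter.export_item({"name": "example"})

# Scrapy < 2.11 (binary mode on by default): {b'name': b'example'}
# Scrapy >= 2.11 (binary mode removed):      {'name': 'example'}
```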
closed
2023-09-21T12:11:06Z
2023-10-02T10:14:06Z
https://github.com/scrapy/scrapy/issues/6060
[ "enhancement", "good first issue", "docs" ]
wRAR
2
lanpa/tensorboardX
numpy
372
Make blocking operations asynchronous
Hello! I've recently integrated tensorboardX with the core fastai library. One thing I needed to do to get acceptable performance was to put blocking operations behind a separate thread so that the rest of training would not be impeded. The two big operations causing the need for this were histograms (acknowledged already, I can see) and writing images. Writing images appears to block on this line in `def make_image` in summary.py: `image.save(output, format='PNG')`. I determined this using cProfile and snakeviz. Histograms appear to be bottlenecked at numpy construction. Would it be possible for tensorboardX to do the work of queuing up these sorts of bottlenecking requests and hiding the processing via another thread or process? It seems like this would be a broadly useful thing to do, and it would definitely make an impact on the user experience.
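The pattern being requested, as a minimal generic sketch (my own code, not tensorboardX internals): a daemon thread drains a queue of blocking calls, so PNG encoding and histogram construction happen off the training hot path:

```python
import queue
import threading

class AsyncWriter:
    """Run blocking write operations on a background thread."""

    def __init__(self):
        self._q = queue.Queue()
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def submit(self, fn, *args, **kwargs):
        self._q.put((fn, args, kwargs))  # returns immediately

    def _drain(self):
        while True:
            fn, args, kwargs = self._q.get()
            fn(*args, **kwargs)          # blocking work happens off the hot path
            self._q.task_done()

    def flush(self):
        self._q.join()                   # wait until all queued writes finish

# usage sketch: writer.submit(image.save, output, format='PNG')
```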
open
2019-03-04T22:08:32Z
2020-02-03T10:20:39Z
https://github.com/lanpa/tensorboardX/issues/372
[ "enhancement" ]
jantic
3
robinhood/faust
asyncio
167
Run faust without web server
I want to run multiple faust workers with Docker, but I couldn't find how to start a worker without the web server.
closed
2018-09-18T13:46:35Z
2021-08-17T07:11:33Z
https://github.com/robinhood/faust/issues/167
[]
denizkaan
6
sunscrapers/djoser
rest-api
429
Should we remove LOGIN_FIELD feature
The `LOGIN_FIELD` feature allows djoser users to override the login field without creating a custom user model. I think this is a bad idea for the following reasons:

1. It is not a good idea to keep django unaware of what the real login field is. If someone decides to add the admin to the application, it could be hard for them to figure out what is going on.
1. There should be one way to do it. The right way to override the login field is by using a custom user model (see the sketch below).
1. Overriding the user model is not hard. It's just creating a new model and changing one parameter in the settings. There is really no rationale to avoid it.
1. Overriding the user model is a good practice *even* if you don't change anything *yet*, as it facilitates migrating the application later when you find some reason for this.
1. This feature is largely untested and we are not sure if it even works correctly. Writing tests and fixing bugs would be an expensive task that is not worth the time.
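For reference, the custom-user-model route recommended above is about this much code in a standard Django project (app and class names are illustrative):

```python
# users/models.py
from django.contrib.auth.models import AbstractUser
from django.db import models

class User(AbstractUser):
    email = models.EmailField(unique=True)

    USERNAME_FIELD = "email"        # the real login field, visible to all of Django
    REQUIRED_FIELDS = ["username"]  # prompted for by createsuperuser

# settings.py
# AUTH_USER_MODEL = "users.User"   # the one settings change
```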
open
2019-09-13T09:15:45Z
2020-02-05T13:19:00Z
https://github.com/sunscrapers/djoser/issues/429
[ "enhancement" ]
zefciu
2
napari/napari
numpy
6,852
Layer.bounding_box.line_color resets to the default red color if `ndisplay = 3` is not set in advance
### 🐛 Bug Report

It was first posted in the image.sc forum. The layer bounding box shows the required effect in 2D viewing mode. However, as soon as we switch to 3D mode, the bounding box goes back to the default color [red] and default thickness. Furthermore, if we specify `ndisplay=3` either when initializing the viewer as `viewer = napari.Viewer(ndisplay=3)` or later by putting the `viewer.dims.ndisplay = 3` line before the `img_layer.bounding_box.line_color = 'magenta'` line, we can see the desired effect the first time. But if we switch to 2D mode and then to 3D mode, the bounding box will regain its default color and thickness.

### 💡 Steps to Reproduce

A simple code snippet to generate this effect is shown below.

```python
import numpy as np
import napari

# Create a simple 4D image (axis0 = time, axis1 = Z, axis2 = Y, axis3 = X)
img = np.random.random((5, 25, 100, 100))

viewer = napari.Viewer()
img_layer = viewer.add_image(img)
img_layer.bounding_box.visible = True
img_layer.bounding_box.points = False
img_layer.bounding_box.line_color = 'magenta'  # Set bbox line_color to magenta
img_layer.bounding_box.line_thickness = 2      # Adjust bbox line_thickness

napari.run()
```

# Expected behavior in 2D viewing mode
![2d_view](https://github.com/napari/napari/assets/107076601/2a62ee98-e5c9-4266-b2a0-3c8d123683e2)

# Unexpected behavior in 3D viewing mode
![3d_view](https://github.com/napari/napari/assets/107076601/013ea016-4c90-4c68-81c3-4588a4dff735)

### 💡 Expected Behavior

_No response_

### 🌎 Environment

napari: 0.4.19.post1
Platform: Windows-10-10.0.22631-SP0
Python: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Qt: 5.15.2
PyQt5: 5.15.10
NumPy: 1.24.3
SciPy: 1.10.1
Dask: 2024.3.1
VisPy: 0.14.2
magicgui: 0.8.2
superqt: 0.6.2
in-n-out: 0.2.0
app-model: 0.2.5
npe2: 0.7.4
OpenGL:
- GL version: 4.6.0 - Build 27.20.100.9664
- MAX_TEXTURE_SIZE: 16384
Screens:
- screen 1: resolution 960x540, scale 2.0
Settings path:
- C:\Users\ganes\AppData\Local\napari\Spille-env_37971cbecf9a2e8d7253a2dbc0abe906acec97a0\settings.yaml

### 💡 Additional Context

_No response_
closed
2024-04-17T05:37:02Z
2024-04-18T09:38:38Z
https://github.com/napari/napari/issues/6852
[ "bug" ]
pganes
2
ansible/awx
django
14,904
Job stdout blank after reload
### Please confirm the following - [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html). - [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates. - [X] I understand that AWX is open source software provided for free and that I might not receive a timely response. - [X] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.) ### Bug Summary Since upgrading to 23.8.1 the job output doesn't load after a reload. This seems to be on jobs that have more than 4000 lines, as smaller jobs load their stdout no problem and can be reloaded as well. To be safe, we changed the setting to 8000 rather than the default 4000 but to no avail. ``` - setting: MAX_UI_JOB_EVENTS value: "8000" ``` ### AWX version 23.8.1 ### Select the relevant components - [X] UI - [ ] UI (tech preview) - [ ] API - [ ] Docs - [ ] Collection - [ ] CLI - [ ] Other ### Installation method kubernetes ### Modifications no ### Ansible version _No response_ ### Operating system rocky 8.8 (K8s cluster RKE2 v1.26.8+rke2r1) ### Web browser Firefox, Chrome, Safari ### Steps to reproduce Run a job that has a stdout of over 4000 lines. This job we are running calls multiple roles against about 40 hosts. Clean installation of AWX through the awx-operator helm chart ### Expected results Output that can be used for troubleshooting by other teams ![Screenshot 2024-02-20 at 9 44 04 AM](https://github.com/ansible/awx/assets/47154121/8a81687e-09be-45bf-8e69-5cbe3cb07a13) ### Actual results No output other than hostname is given ![Screenshot 2024-02-15 at 4 01 29 PM](https://github.com/ansible/awx/assets/47154121/ed64fff0-36b8-43df-8947-1af87a8f5d48) ![Screenshot 2024-02-20 at 9 54 12 AM](https://github.com/ansible/awx/assets/47154121/ce1711ed-66e2-4e45-aae2-a0f05d15382c) ### Additional information Reading through older posts, I found someone was able to temporarily resolve this by running awx-manage run_callback_receiver but my output is as follows if I try ``` sh-5.1# awx-manage run_callback_receiver Traceback (most recent call last): File "/usr/bin/awx-manage", line 8, in <module> sys.exit(manage()) File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/awx/__init__.py", line 175, in manage execute_from_command_line(sys.argv) File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/django/core/management/__init__.py", line 442, in execute_from_command_line utility.execute() File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/django/core/management/__init__.py", line 436, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/django/core/management/base.py", line 412, in run_from_argv self.execute(*args, **cmd_options) File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/django/core/management/base.py", line 458, in execute output = self.handle(*args, **options) File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/awx/main/management/commands/run_callback_receiver.py", line 30, in handle CallbackReceiverMetricsServer().start() File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/awx/main/analytics/subsystem_metrics.py", line 455, in __init__ registry.register(CustomToPrometheusMetricsCollector(DispatcherMetrics(metrics_have_changed=False))) File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/prometheus_client/registry.py", line 40, in register names = 
self._get_names(collector) File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/prometheus_client/registry.py", line 80, in _get_names for metric in desc_func(): File "/var/lib/awx/venv/awx/lib64/python3.9/site-packages/awx/main/analytics/subsystem_metrics.py", line 440, in collect entry = host_metrics.get(metric.field) AttributeError: 'NoneType' object has no attribute 'get' ```
open
2024-02-20T16:54:35Z
2024-02-27T22:03:35Z
https://github.com/ansible/awx/issues/14904
[ "type:bug", "component:ui", "needs_triage", "community" ]
Tfinn92
4
fastapi/fastapi
api
13,440
Validations in `Annotated` like `AfterValidator` do not work in FastAPI 0.115.10
### Discussed in https://github.com/fastapi/fastapi/discussions/13431

<div type='discussions-op-text'>

<sup>Originally posted by **amacfie-tc** February 28, 2025</sup>

### First Check

- [X] I added a very descriptive title here.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I searched the FastAPI documentation, with the integrated search.
- [X] I already searched in Google "How to X in FastAPI" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/pydantic/pydantic).
- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).
- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).

### Commit to Help

- [X] I commit to help with one of those options 👆

### Example Code

```python
from typing import Annotated

from pydantic import AfterValidator
from fastapi import FastAPI

app = FastAPI()

def validator(v):
    raise ValueError()

Ints = Annotated[list[int], AfterValidator(validator)]

@app.post("/")
def post(ints: Ints) -> None:
    return None
```

### Description

If we run the code and send a request to the endpoint, e.g.

```
echo -n '[2,3,4]' | http POST http://localhost:8000
```

on version 0.115.9 we get a 422, but on 0.115.10 we get 200. Is this a bug?

### Operating System

Linux

### Operating System Details

_No response_

### FastAPI Version

0.115.10

### Pydantic Version

2.9.2, 2.10.6

### Python Version

3.12

### Additional Context

_No response_</div>

---

@tiangolo writes:

This was introduced here: https://github.com/fastapi/fastapi/pull/13314

I'm currently investigating and a fix will be released shortly.

The problem is only when using `Annotated` directly in FastAPI parameters; when used inside of Pydantic models the validators work (raise) as expected:

```python
from typing import Annotated

from fastapi import FastAPI
from pydantic import AfterValidator, BaseModel

app = FastAPI()

def validator(v):
    raise ValueError()

Ints = Annotated[list[int], AfterValidator(validator)]

class Model(BaseModel):
    ints: Ints

@app.post("/")
def post(ints: Model) -> None:
    return None
```
closed
2025-03-01T17:19:44Z
2025-03-01T22:40:52Z
https://github.com/fastapi/fastapi/issues/13440
[ "bug" ]
tiangolo
2
PaddlePaddle/models
nlp
5,165
Large custom dataset causes an error; how should the code be modified?
Project link: https://github.com/PaddlePaddle/models/tree/4d87afd6480737b64b5974c9c40a5b1c5a4600b3/PaddleNLP/examples/text_classification/rnn

I replaced train.tsv, dev.tsv and test.tsv in the C:\Users\Administrator\.paddlenlp\datasets\chnsenticorp directory with my own training set and started training. I found that when the total number of training samples is around 30,000 there is no error and a model can be trained, but above that the error below occurs. How should I modify the code? Please be as detailed as possible. Piecing together a dataset of 320,000 samples was not easy for me! Please help!

The error output is as follows:

```
step 30/47 - loss: 0.3494 - acc: 0.9693 - 290ms/step
step 40/47 - loss: 0.3437 - acc: 0.9691 - 301ms/step
Traceback (most recent call last):
  File "train.py", line 193, in <module>
    save_dir=args.save_dir)
  File "F:\aanaa\lib\site-packages\paddle\hapi\model.py", line 1503, in fit
    eval_logs = self._run_one_epoch(eval_loader, cbks, 'eval')
  File "F:\aanaa\lib\site-packages\paddle\hapi\model.py", line 1799, in _run_one_epoch
    data[len(self._inputs):])
  File "F:\aanaa\lib\site-packages\paddle\hapi\model.py", line 991, in eval_batch
    loss = self._adapter.eval_batch(inputs, labels)
  File "F:\aanaa\lib\site-packages\paddle\hapi\model.py", line 681, in eval_batch
    outputs = self.model.network.forward(* [to_variable(x) for x in inputs])
  File "F:\aanaa\lib\site-packages\paddlenlp\models\senta.py", line 104, in forward
    logits = self.model(text, seq_len)
  File "F:\aanaa\lib\site-packages\paddle\fluid\dygraph\layers.py", line 884, in __call__
    outputs = self.forward(*inputs, **kwargs)
  File "F:\aanaa\lib\site-packages\paddlenlp\models\senta.py", line 186, in forward
    embedded_text = self.embedder(text)
  File "F:\aanaa\lib\site-packages\paddle\fluid\dygraph\layers.py", line 884, in __call__
    outputs = self.forward(*inputs, **kwargs)
  File "F:\aanaa\lib\site-packages\paddle\nn\layer\common.py", line 1289, in forward
    name=self._name)
  File "F:\aanaa\lib\site-packages\paddle\nn\functional\input.py", line 202, in embedding
    'remote_prefetch', False, 'padding_idx', padding_idx)
ValueError: (InvalidArgument) Variable value (input) of OP(fluid.layers.embedding) expected >= 0 and < 857580, but got 858325. Please check input value.
  [Hint: Expected ids[i] < row_number, but received ids[i]:858325 >= row_number:857580.] (at D:\2.0.0rc1\paddle\paddle/fluid/operators/lookup_table_v2_op.h:81)
  [Hint: If you need C++ stacktraces for debugging, please set `FLAGS_call_stack_level=2`.]
  [operator < lookup_table_v2 > error]
```
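The error says a token id (858325) is greater than or equal to the embedding table size (857580), i.e. the vocabulary no longer matches the enlarged dataset. A hedged sanity-check sketch (`vocab.txt` and `tokenized_dataset` are placeholders for the example's vocab file and your encoded samples):

```python
# Hypothetical sanity check: every token id fed to the embedding must be
# smaller than the vocabulary size used to build it.
vocab = {}
with open("vocab.txt", encoding="utf-8") as f:   # placeholder for the example's vocab file
    for i, line in enumerate(f):
        vocab[line.rstrip("\n")] = i

vocab_size = len(vocab)
max_id = max(max(ids) for ids in tokenized_dataset)  # placeholder: your encoded samples
assert max_id < vocab_size, (
    f"token id {max_id} >= vocab size {vocab_size}: "
    "rebuild the vocabulary from the new 320k-sample dataset"
)
```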
open
2020-12-30T16:53:28Z
2024-02-26T05:09:33Z
https://github.com/PaddlePaddle/models/issues/5165
[]
yizhipipixia
6
KevinMusgrave/pytorch-metric-learning
computer-vision
195
Make it easier to access recordable attributes of nested objects
One option could involve overriding `__getattribute__()`.
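A minimal sketch of that option (illustrative only, not the library's implementation): fall back to a nested object's attributes when normal lookup fails:

```python
class RecordableProxy:
    """Expose recordable attributes of a nested object as top-level attributes."""

    def __init__(self, inner):
        object.__setattr__(self, "_inner", inner)

    def __getattribute__(self, name):
        try:
            return object.__getattribute__(self, name)   # own attributes first
        except AttributeError:
            inner = object.__getattribute__(self, "_inner")
            return getattr(inner, name)                  # fall back to nested object

class Loss:
    def __init__(self):
        self.avg_embedding_norm = 1.0   # a "recordable" attribute

proxy = RecordableProxy(Loss())
print(proxy.avg_embedding_norm)  # 1.0, looked up on the nested Loss
```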
closed
2020-09-12T01:47:46Z
2023-01-21T04:52:24Z
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/195
[ "enhancement" ]
KevinMusgrave
0
DistrictDataLabs/yellowbrick
matplotlib
967
ModelError produced from code sample in Manifold Docs
A ModelError is raised when trying to run the sample code for the Manifold visualizer. I included the solution below. @Kautumn06 and I both tested it.

Error:

```
---------------------------------------------------------------------------
ModelError                                Traceback (most recent call last)
<ipython-input-9-092c217b7f01> in <module>
      8 visualizer = Manifold(manifold="tsne")
      9
---> 10 visualizer.fit(X, y)  # Fit the data
     11 visualizer.poof()     # Draw/show/poof the data

~/anaconda3/envs/ais/lib/python3.7/site-packages/yellowbrick/features/manifold.py in fit(self, X, y, **kwargs)
    361             "{} requires data to be simultaneously fit and transformed, "
    362             "use fit_transform instead"
--> 363             ).format(name)
    364         )
    365

ModelError: TSNE requires data to be simultaneously fit and transformed, use fit_transform instead
```

CODE:

```python
from yellowbrick.features import Manifold
from yellowbrick.datasets import load_occupancy

# Load the classification dataset
X, y = load_occupancy()

# Instantiate the visualizer
visualizer = Manifold(manifold="tsne")

visualizer.fit(X, y)  # Fit the data
visualizer.poof()     # Draw/show/poof the data
```

SOLUTION: change `fit` to `fit_transform`

@DistrictDataLabs/team-oz-maintainers
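Applied to the docs sample, the fix is just the last two lines, matching the error message's suggestion:

```python
visualizer = Manifold(manifold="tsne")
visualizer.fit_transform(X, y)  # fit and transform in one step, as TSNE requires
visualizer.poof()               # Draw/show/poof the data
```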
closed
2019-09-09T14:20:30Z
2019-10-06T13:12:39Z
https://github.com/DistrictDataLabs/yellowbrick/issues/967
[ "type: documentation" ]
lwgray
0
tensorly/tensorly
numpy
155
Tensor Classification
#### Is your feature request related to a problem? Please describe.

I'm always frustrated when I want to predict the type of an object. Usually, the information about the object is high dimensional or can be summarized in a tensor plus some covariates.

#### Describe the solution you'd like

The classification problem can be transformed into multinomial logistic regression, i.e. the softmax regression problem.

#### Additional context

If this feature is easy to achieve by changing some parts of the current code, could you please provide some guidance? I think the main part is how to change the loss function and whether the current algorithm is suitable.
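For concreteness, the objective being discussed, in my own notation (not tensorly code): with sample tensor \(\mathcal{X}_i\), class-\(k\) coefficient tensor \(\mathcal{B}_k\) (which could be constrained to low rank, e.g. a CP decomposition) and \(\langle\cdot,\cdot\rangle\) the tensor inner product, softmax regression minimizes the negative log-likelihood:

```latex
% Softmax (multinomial logistic) regression with tensor coefficients.
P(y_i = k \mid \mathcal{X}_i)
  = \frac{\exp\bigl(\langle \mathcal{B}_k, \mathcal{X}_i \rangle + b_k\bigr)}
         {\sum_{j=1}^{K} \exp\bigl(\langle \mathcal{B}_j, \mathcal{X}_i \rangle + b_j\bigr)},
\qquad
\mathcal{L}\bigl(\{\mathcal{B}_k, b_k\}\bigr)
  = -\sum_{i=1}^{n} \log P(y_i \mid \mathcal{X}_i).
```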
closed
2020-03-11T08:06:04Z
2020-03-24T11:59:12Z
https://github.com/tensorly/tensorly/issues/155
[]
Haobo1108
3
polyaxon/traceml
plotly
13
Patch level went down?
Hi, I am trying to get `pandas-summary` working, and had version 0.0.41 installed. I ran into the issue fixed by #11/#12, so I tried to upgrade. It seems like the version was changed to 0.0.5 with https://github.com/mouradmourafiq/pandas-summary/commit/42227d51d8d458ebce7090db971259565fb6ccdf

When I try to upgrade to 0.0.5, pip picks up the 0.0.41 version since its patch level is considered greater than 0.0.5's. I am somewhat new to the Python world, so I could be missing an easy way to do this, but I am wondering if the new version should be 0.0.42 or 0.1.0 to better comply with [semver](https://semver.org/).

```
pip install pandas-summary==
Collecting pandas-summary==
  Could not find a version that satisfies the requirement pandas-summary== (from versions: 0.0.3, 0.0.4, 0.0.5, 0.0.41)
No matching distribution found for pandas-summary==
```

For now my workaround is to use `pip install pandas-summary==0.0.5` to install the exact version that I want to use.
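The comparison can be verified with the `packaging` library that pip uses: version segments compare numerically, so 41 beats 5 in the patch position:

```python
from packaging.version import Version

print(Version("0.0.41") > Version("0.0.5"))   # True: 41 > 5 numerically
print(max(["0.0.3", "0.0.4", "0.0.5", "0.0.41"], key=Version))  # '0.0.41'
```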
closed
2018-06-21T20:42:02Z
2018-06-21T20:42:13Z
https://github.com/polyaxon/traceml/issues/13
[]
panozzaj
0
ultralytics/ultralytics
python
19,419
Small class data surprisingly has the highest mAP
### Search before asking

- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.

### Question

When I'm fine-tuning yolov11n on my custom dataset with custom classes, the aluminum foil class gets a relatively high score even though it's not one of the original COCO classes and the number of instances and images for that class is small. How does this happen?

### Additional

<img width="851" alt="Image" src="https://github.com/user-attachments/assets/4c13797e-1cb0-4485-9d85-4ca7f2afbb94" />

<img width="692" alt="Image" src="https://github.com/user-attachments/assets/a41b55a4-347b-4d99-bc6e-51be897d7129" />
closed
2025-02-25T12:49:54Z
2025-02-25T19:38:33Z
https://github.com/ultralytics/ultralytics/issues/19419
[ "question", "detect" ]
nisrinaam29
4
streamlit/streamlit
machine-learning
10,385
Pinned columns (column config) do not work when hide_index=True
### Checklist

- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.

### Summary

As the title describes - this is a simple one

### Reproducible Code Example

[![Open in Streamlit Cloud](https://static.streamlit.io/badges/streamlit_badge_black_white.svg)](https://issues.streamlitapp.com/?issue=gh-10385)

```python
import streamlit as st
import pandas as pd

test_df = pd.DataFrame({
    'Date': pd.date_range(start='2024-01-01', periods=10),
    'Value1': [100, 101, 102, 103, 104, 105, 106, 107, 108, 109],
    'Value2': [100, 101, 102, 103, 104, 105, 106, 107, 108, 109],
    'Value3': [100, 101, 102, 103, 104, 105, 106, 107, 108, 109],
    'Value4': [100, 101, 102, 103, 104, 105, 106, 107, 108, 109],
    'Value5': [100, 101, 102, 103, 104, 105, 106, 107, 108, 109],
    'Value6': [100, 101, 102, 103, 104, 105, 106, 107, 108, 109],
    'Value7': [100, 101, 102, 103, 104, 105, 106, 107, 108, 109],
    'Value8': [100, 101, 102, 103, 104, 105, 106, 107, 108, 109]
})

col_config = {
    'Date': st.column_config.DateColumn(width="medium", pinned=True),
    'Value1': st.column_config.NumberColumn('Value1 Pinned', width="medium", format='%.2f', pinned=True),
    'Value2': st.column_config.NumberColumn(width="medium", format='%.2f', pinned=False),
    'Value3': st.column_config.NumberColumn(width="medium", format='%.2f', pinned=False),
    'Value4': st.column_config.NumberColumn(width="medium", format='%.2f', pinned=False),
    'Value5': st.column_config.NumberColumn(width="medium", format='%.2f', pinned=False),
    'Value6': st.column_config.NumberColumn(width="medium", format='%.2f', pinned=False),
    'Value7': st.column_config.NumberColumn(width="medium", format='%.2f', pinned=False),
    'Value8': st.column_config.NumberColumn(width="medium", format='%.2f', pinned=False)
}

st.write('hide index = True')
st.dataframe(test_df, column_config=col_config, hide_index=True)

st.write('hide index = False')
st.dataframe(test_df, column_config=col_config, hide_index=False)
```

### Steps To Reproduce

Run code. Using data_editor instead of dataframe gets the same result btw.

### Expected Behavior

If *any* columns are pinned, and hide_index=False, then pin the index column as well. Or, make the index column configurable (example below) so the user can decide:

col_config = { 'idx': st.column_config.Column('', width="medium", pinned=True), ...

### Current Behavior

Doesn't pin with hide_index=False

### Is this a regression?

- [ ] Yes, this used to work in a previous version.

### Debug info

- Streamlit version: 1.41.1
- Python version: latest 3 something
- Operating System: mac
- Browser: chrome

### Additional Information

_No response_
open
2025-02-12T23:34:26Z
2025-02-21T21:15:15Z
https://github.com/streamlit/streamlit/issues/10385
[ "type:enhancement", "type:docs", "feature:st.column_config" ]
nickgreengithub
2
pydantic/pydantic-core
pydantic
700
Test hooky assignment round robin
test Selected Assignee: @dmontagu
closed
2023-06-27T10:09:19Z
2023-06-27T10:26:30Z
https://github.com/pydantic/pydantic-core/issues/700
[ "unconfirmed" ]
lig
0
tox-dev/tox
automation
3,238
Tox v4.14.1 is no longer expanding {envtmpdir} (and potentially other variables)
## Issue We are using `package = external` and `package_env = build-metatensor-core` in our tox setup, and build the wheels with `pip wheel python/metatensor-core {[testenv]build_single_wheel_flags} --wheel-dir {envtmpdir}/dist` On tox 4.14.0, everything is fine, on 4.14.1 tox creates a directory literally named `{envtmpdir}/dist` (instead of expanding this to `.tox/build-metatensor-core/tmp/dist`. ```console $ ls \{envtmpdir\}/dist metatensor_core-0.2.0.dev7-py3-none-macosx_14_0_arm64.whl ``` ## Environment Provide at least: - OS: macOS 14.3.1 <details open> <summary>Output of <code>pip list</code> of the host Python, where <code>tox</code> is installed</summary> ```console $ pip list Package Version ----------------------- -------- archspec 0.2.3 boltons 23.1.1 Brotli 1.1.0 build 1.0.3 cachetools 5.3.3 certifi 2024.2.2 cffi 1.16.0 chardet 5.2.0 charset-normalizer 3.3.2 colorama 0.4.6 conda 24.1.2 conda-libmamba-solver 24.1.0 conda-package-handling 2.2.0 conda_package_streaming 0.9.0 distlib 0.3.8 distro 1.9.0 filelock 3.13.1 fsspec 2024.2.0 idna 3.6 importlib-metadata 7.0.1 Jinja2 3.1.3 jsonpatch 1.33 jsonpointer 2.4 libmambapy 1.5.7 mamba 1.5.7 MarkupSafe 2.1.5 menuinst 2.0.2 mpmath 1.3.0 networkx 3.2.1 numpy 1.26.4 packaging 23.2 pip 24.0 platformdirs 4.2.0 pluggy 1.4.0 pycosat 0.6.6 pycparser 2.21 pyproject-api 1.6.1 pyproject_hooks 1.0.0 PySocks 1.7.1 requests 2.31.0 ruamel.yaml 0.18.6 ruamel.yaml.clib 0.2.8 setuptools 69.1.1 sympy 1.12 tomli 2.0.1 torch 2.2.1 tox 4.14.1 tqdm 4.66.2 truststore 0.8.0 typing_extensions 4.10.0 urllib3 2.2.1 virtualenv 20.25.1 wheel 0.42.0 zipp 3.17.0 zstandard 0.22.0 ``` </details> ## Output of running tox <details open> <summary>Output of <code>tox -rvv</code></summary> ```console $ tox -rvv -e core-tests build-metatensor-core: 111 W remove tox env folder /Users/guillaume/code/metatensor/.tox/build-metatensor-core [tox/tox_env/api.py:323] build-metatensor-core_sdist_meta: 111 W remove tox env folder /Users/guillaume/code/metatensor/.tox/build-metatensor-core_sdist_meta [tox/tox_env/api.py:323] core-tests: 115 I find interpreter for spec PythonSpec(path=/opt/miniforge3/bin/python3.11) [virtualenv/discovery/builtin.py:58] core-tests: 115 I proposed PythonInfo(spec=CPython3.11.7.final.0-64, exe=/opt/miniforge3/bin/python3.11, platform=darwin, version='3.11.7 | packaged by conda-forge | (main, Dec 23 2023, 14:38:07) [Clang 16.0.6 ]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:65] core-tests: 115 D accepted PythonInfo(spec=CPython3.11.7.final.0-64, exe=/opt/miniforge3/bin/python3.11, platform=darwin, version='3.11.7 | packaged by conda-forge | (main, Dec 23 2023, 14:38:07) [Clang 16.0.6 ]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:67] core-tests: 116 D filesystem is not case-sensitive [virtualenv/info.py:25] core-tests: 130 I create virtual environment via CPython3Posix(dest=/Users/guillaume/code/metatensor/.tox/core-tests, clear=False, no_vcs_ignore=False, global=False) [virtualenv/run/session.py:50] core-tests: 130 D create folder /Users/guillaume/code/metatensor/.tox/core-tests/bin [virtualenv/util/path/_sync.py:12] core-tests: 131 D create folder /Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages [virtualenv/util/path/_sync.py:12] core-tests: 131 D write /Users/guillaume/code/metatensor/.tox/core-tests/pyvenv.cfg [virtualenv/create/pyenv_cfg.py:33] core-tests: 131 D home = /opt/miniforge3/bin [virtualenv/create/pyenv_cfg.py:38] core-tests: 131 D implementation = CPython 
[virtualenv/create/pyenv_cfg.py:38] core-tests: 131 D version_info = 3.11.7.final.0 [virtualenv/create/pyenv_cfg.py:38] core-tests: 131 D virtualenv = 20.25.1 [virtualenv/create/pyenv_cfg.py:38] core-tests: 131 D include-system-site-packages = false [virtualenv/create/pyenv_cfg.py:38] core-tests: 131 D base-prefix = /opt/miniforge3 [virtualenv/create/pyenv_cfg.py:38] core-tests: 131 D base-exec-prefix = /opt/miniforge3 [virtualenv/create/pyenv_cfg.py:38] core-tests: 131 D base-executable = /opt/miniforge3/bin/python3.11 [virtualenv/create/pyenv_cfg.py:38] core-tests: 131 D symlink /opt/miniforge3/bin/python3.11 to /Users/guillaume/code/metatensor/.tox/core-tests/bin/python [virtualenv/util/path/_sync.py:32] core-tests: 131 D create virtualenv import hook file /Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages/_virtualenv.pth [virtualenv/create/via_global_ref/api.py:91] core-tests: 131 D create /Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages/_virtualenv.py [virtualenv/create/via_global_ref/api.py:94] core-tests: 131 D ============================== target debug ============================== [virtualenv/run/session.py:52] core-tests: 132 D debug via /Users/guillaume/code/metatensor/.tox/core-tests/bin/python /opt/miniforge3/lib/python3.11/site-packages/virtualenv/create/debug.py [virtualenv/create/creator.py:200] core-tests: 131 D { "sys": { "executable": "/Users/guillaume/code/metatensor/.tox/core-tests/bin/python", "_base_executable": "/opt/miniforge3/bin/python3.11", "prefix": "/Users/guillaume/code/metatensor/.tox/core-tests", "base_prefix": "/opt/miniforge3", "real_prefix": null, "exec_prefix": "/Users/guillaume/code/metatensor/.tox/core-tests", "base_exec_prefix": "/opt/miniforge3", "path": [ "/opt/miniforge3/lib/python311.zip", "/opt/miniforge3/lib/python3.11", "/opt/miniforge3/lib/python3.11/lib-dynload", "/Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages" ], "meta_path": [ "<class '_virtualenv._Finder'>", "<class '_frozen_importlib.BuiltinImporter'>", "<class '_frozen_importlib.FrozenImporter'>", "<class '_frozen_importlib_external.PathFinder'>" ], "fs_encoding": "utf-8", "io_encoding": "utf-8" }, "version": "3.11.7 | packaged by conda-forge | (main, Dec 23 2023, 14:38:07) [Clang 16.0.6 ]", "makefile_filename": "/opt/miniforge3/lib/python3.11/config-3.11-darwin/Makefile", "os": "<module 'os' (frozen)>", "site": "<module 'site' (frozen)>", "datetime": "<module 'datetime' from '/opt/miniforge3/lib/python3.11/datetime.py'>", "math": "<module 'math' from '/opt/miniforge3/lib/python3.11/lib-dynload/math.cpython-311-darwin.so'>", "json": "<module 'json' from '/opt/miniforge3/lib/python3.11/json/__init__.py'>" } [virtualenv/run/session.py:53] core-tests: 151 I add seed packages via FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/Users/guillaume/Library/Application Support/virtualenv) [virtualenv/run/session.py:57] core-tests: 152 D got embed update of distribution %s from ('pip', PosixPath('/Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/embed/3/pip.json')) [virtualenv/app_data/via_disk_folder.py:131] core-tests: 154 D install wheel from wheel /opt/miniforge3/lib/python3.11/site-packages/virtualenv/seed/wheels/embed/wheel-0.42.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] core-tests: 154 D install setuptools from wheel 
/opt/miniforge3/lib/python3.11/site-packages/virtualenv/seed/wheels/embed/setuptools-69.1.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] core-tests: 154 D install pip from wheel /opt/miniforge3/lib/python3.11/site-packages/virtualenv/seed/wheels/embed/pip-24.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] core-tests: 154 D copy directory /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-69.1.0-py3-none-any/setuptools-69.1.0.dist-info to /Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages/setuptools-69.1.0.dist-info [virtualenv/util/path/_sync.py:40] core-tests: 155 D copy directory /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.42.0-py3-none-any/wheel-0.42.0.dist-info to /Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages/wheel-0.42.0.dist-info [virtualenv/util/path/_sync.py:40] core-tests: 155 D copy directory /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-24.0-py3-none-any/pip-24.0.dist-info to /Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages/pip-24.0.dist-info [virtualenv/util/path/_sync.py:40] core-tests: 156 D copy directory /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.42.0-py3-none-any/wheel to /Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages/wheel [virtualenv/util/path/_sync.py:40] core-tests: 156 D copy /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-69.1.0-py3-none-any/distutils-precedence.pth to /Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages/distutils-precedence.pth [virtualenv/util/path/_sync.py:40] core-tests: 157 D copy directory /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-24.0-py3-none-any/pip to /Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages/pip [virtualenv/util/path/_sync.py:40] core-tests: 157 D copy directory /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-69.1.0-py3-none-any/setuptools to /Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages/setuptools [virtualenv/util/path/_sync.py:40] core-tests: 162 D copy /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.42.0-py3-none-any/wheel-0.42.0.virtualenv to /Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages/wheel-0.42.0.virtualenv [virtualenv/util/path/_sync.py:40] core-tests: 163 D generated console scripts wheel wheel-3.11 wheel3 wheel3.11 [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] core-tests: 191 D copy directory /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-69.1.0-py3-none-any/pkg_resources to /Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages/pkg_resources [virtualenv/util/path/_sync.py:40] core-tests: 200 D copy directory /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-69.1.0-py3-none-any/_distutils_hack to /Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages/_distutils_hack [virtualenv/util/path/_sync.py:40] core-tests: 200 D copy 
/Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-69.1.0-py3-none-any/setuptools-69.1.0.virtualenv to /Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages/setuptools-69.1.0.virtualenv [virtualenv/util/path/_sync.py:40] core-tests: 201 D generated console scripts [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] core-tests: 234 D copy /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-24.0-py3-none-any/pip-24.0.virtualenv to /Users/guillaume/code/metatensor/.tox/core-tests/lib/python3.11/site-packages/pip-24.0.virtualenv [virtualenv/util/path/_sync.py:40] core-tests: 234 D generated console scripts pip3 pip3.11 pip-3.11 pip [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] core-tests: 234 I add activators for Bash, CShell, Fish, Nushell, PowerShell, Python [virtualenv/run/session.py:63] core-tests: 236 D write /Users/guillaume/code/metatensor/.tox/core-tests/pyvenv.cfg [virtualenv/create/pyenv_cfg.py:33] core-tests: 236 D home = /opt/miniforge3/bin [virtualenv/create/pyenv_cfg.py:38] core-tests: 236 D implementation = CPython [virtualenv/create/pyenv_cfg.py:38] core-tests: 236 D version_info = 3.11.7.final.0 [virtualenv/create/pyenv_cfg.py:38] core-tests: 236 D virtualenv = 20.25.1 [virtualenv/create/pyenv_cfg.py:38] core-tests: 236 D include-system-site-packages = false [virtualenv/create/pyenv_cfg.py:38] core-tests: 236 D base-prefix = /opt/miniforge3 [virtualenv/create/pyenv_cfg.py:38] core-tests: 236 D base-exec-prefix = /opt/miniforge3 [virtualenv/create/pyenv_cfg.py:38] core-tests: 236 D base-executable = /opt/miniforge3/bin/python3.11 [virtualenv/create/pyenv_cfg.py:38] core-tests: 238 W install_deps> python -I -m pip install numpy pytest pytest-cov toml 'torch==2.2.*' [tox/tox_env/api.py:425] Collecting numpy Using cached numpy-1.26.4-cp311-cp311-macosx_11_0_arm64.whl.metadata (114 kB) Collecting pytest Using cached pytest-8.0.2-py3-none-any.whl.metadata (7.7 kB) Collecting pytest-cov Using cached pytest_cov-4.1.0-py3-none-any.whl.metadata (26 kB) Collecting toml Using cached toml-0.10.2-py2.py3-none-any.whl.metadata (7.1 kB) Collecting torch==2.2.* Using cached torch-2.2.1-cp311-none-macosx_11_0_arm64.whl.metadata (25 kB) Collecting filelock (from torch==2.2.*) Using cached filelock-3.13.1-py3-none-any.whl.metadata (2.8 kB) Collecting typing-extensions>=4.8.0 (from torch==2.2.*) Using cached typing_extensions-4.10.0-py3-none-any.whl.metadata (3.0 kB) Collecting sympy (from torch==2.2.*) Using cached sympy-1.12-py3-none-any.whl.metadata (12 kB) Collecting networkx (from torch==2.2.*) Using cached networkx-3.2.1-py3-none-any.whl.metadata (5.2 kB) Collecting jinja2 (from torch==2.2.*) Using cached Jinja2-3.1.3-py3-none-any.whl.metadata (3.3 kB) Collecting fsspec (from torch==2.2.*) Using cached fsspec-2024.2.0-py3-none-any.whl.metadata (6.8 kB) Collecting iniconfig (from pytest) Using cached iniconfig-2.0.0-py3-none-any.whl.metadata (2.6 kB) Collecting packaging (from pytest) Using cached packaging-23.2-py3-none-any.whl.metadata (3.2 kB) Collecting pluggy<2.0,>=1.3.0 (from pytest) Using cached pluggy-1.4.0-py3-none-any.whl.metadata (4.3 kB) Collecting coverage>=5.2.1 (from coverage[toml]>=5.2.1->pytest-cov) Using cached coverage-7.4.3-cp311-cp311-macosx_11_0_arm64.whl.metadata (8.2 kB) Collecting MarkupSafe>=2.0 (from jinja2->torch==2.2.*) Using cached MarkupSafe-2.1.5-cp311-cp311-macosx_10_9_universal2.whl.metadata (3.0 kB) Collecting mpmath>=0.19 
(from sympy->torch==2.2.*) Using cached mpmath-1.3.0-py3-none-any.whl.metadata (8.6 kB) Using cached torch-2.2.1-cp311-none-macosx_11_0_arm64.whl (59.7 MB) Using cached numpy-1.26.4-cp311-cp311-macosx_11_0_arm64.whl (14.0 MB) Using cached pytest-8.0.2-py3-none-any.whl (333 kB) Using cached pytest_cov-4.1.0-py3-none-any.whl (21 kB) Using cached toml-0.10.2-py2.py3-none-any.whl (16 kB) Using cached coverage-7.4.3-cp311-cp311-macosx_11_0_arm64.whl (207 kB) Using cached pluggy-1.4.0-py3-none-any.whl (20 kB) Using cached typing_extensions-4.10.0-py3-none-any.whl (33 kB) Using cached filelock-3.13.1-py3-none-any.whl (11 kB) Using cached fsspec-2024.2.0-py3-none-any.whl (170 kB) Using cached iniconfig-2.0.0-py3-none-any.whl (5.9 kB) Using cached Jinja2-3.1.3-py3-none-any.whl (133 kB) Using cached networkx-3.2.1-py3-none-any.whl (1.6 MB) Using cached packaging-23.2-py3-none-any.whl (53 kB) Using cached sympy-1.12-py3-none-any.whl (5.7 MB) Using cached MarkupSafe-2.1.5-cp311-cp311-macosx_10_9_universal2.whl (18 kB) Using cached mpmath-1.3.0-py3-none-any.whl (536 kB) Installing collected packages: mpmath, typing-extensions, toml, sympy, pluggy, packaging, numpy, networkx, MarkupSafe, iniconfig, fsspec, filelock, coverage, pytest, jinja2, torch, pytest-cov Successfully installed MarkupSafe-2.1.5 coverage-7.4.3 filelock-3.13.1 fsspec-2024.2.0 iniconfig-2.0.0 jinja2-3.1.3 mpmath-1.3.0 networkx-3.2.1 numpy-1.26.4 packaging-23.2 pluggy-1.4.0 pytest-8.0.2 pytest-cov-4.1.0 sympy-1.12 toml-0.10.2 torch-2.2.1 typing-extensions-4.10.0 core-tests: 11493 I exit 0 (11.25 seconds) /Users/guillaume/code/metatensor> python -I -m pip install numpy pytest pytest-cov toml 'torch==2.2.*' pid=44644 [tox/execute/api.py:280] build-metatensor-core: 11495 I find interpreter for spec PythonSpec(path=/opt/miniforge3/bin/python3.11) [virtualenv/discovery/builtin.py:58] build-metatensor-core: 11495 I proposed PythonInfo(spec=CPython3.11.7.final.0-64, exe=/opt/miniforge3/bin/python3.11, platform=darwin, version='3.11.7 | packaged by conda-forge | (main, Dec 23 2023, 14:38:07) [Clang 16.0.6 ]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:65] build-metatensor-core: 11495 D accepted PythonInfo(spec=CPython3.11.7.final.0-64, exe=/opt/miniforge3/bin/python3.11, platform=darwin, version='3.11.7 | packaged by conda-forge | (main, Dec 23 2023, 14:38:07) [Clang 16.0.6 ]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:67] build-metatensor-core: 11496 I create virtual environment via CPython3Posix(dest=/Users/guillaume/code/metatensor/.tox/build-metatensor-core, clear=False, no_vcs_ignore=False, global=False) [virtualenv/run/session.py:50] build-metatensor-core: 11496 D create folder /Users/guillaume/code/metatensor/.tox/build-metatensor-core/bin [virtualenv/util/path/_sync.py:12] build-metatensor-core: 11496 D create folder /Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages [virtualenv/util/path/_sync.py:12] build-metatensor-core: 11496 D write /Users/guillaume/code/metatensor/.tox/build-metatensor-core/pyvenv.cfg [virtualenv/create/pyenv_cfg.py:33] build-metatensor-core: 11496 D home = /opt/miniforge3/bin [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11496 D implementation = CPython [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11496 D version_info = 3.11.7.final.0 [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11496 D virtualenv = 20.25.1 [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11496 D 
include-system-site-packages = false [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11496 D base-prefix = /opt/miniforge3 [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11496 D base-exec-prefix = /opt/miniforge3 [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11496 D base-executable = /opt/miniforge3/bin/python3.11 [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11497 D symlink /opt/miniforge3/bin/python3.11 to /Users/guillaume/code/metatensor/.tox/build-metatensor-core/bin/python [virtualenv/util/path/_sync.py:32] build-metatensor-core: 11497 D create virtualenv import hook file /Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages/_virtualenv.pth [virtualenv/create/via_global_ref/api.py:91] build-metatensor-core: 11497 D create /Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages/_virtualenv.py [virtualenv/create/via_global_ref/api.py:94] build-metatensor-core: 11497 D ============================== target debug ============================== [virtualenv/run/session.py:52] build-metatensor-core: 11497 D debug via /Users/guillaume/code/metatensor/.tox/build-metatensor-core/bin/python /opt/miniforge3/lib/python3.11/site-packages/virtualenv/create/debug.py [virtualenv/create/creator.py:200] build-metatensor-core: 11497 D { "sys": { "executable": "/Users/guillaume/code/metatensor/.tox/build-metatensor-core/bin/python", "_base_executable": "/opt/miniforge3/bin/python3.11", "prefix": "/Users/guillaume/code/metatensor/.tox/build-metatensor-core", "base_prefix": "/opt/miniforge3", "real_prefix": null, "exec_prefix": "/Users/guillaume/code/metatensor/.tox/build-metatensor-core", "base_exec_prefix": "/opt/miniforge3", "path": [ "/opt/miniforge3/lib/python311.zip", "/opt/miniforge3/lib/python3.11", "/opt/miniforge3/lib/python3.11/lib-dynload", "/Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages" ], "meta_path": [ "<class '_virtualenv._Finder'>", "<class '_frozen_importlib.BuiltinImporter'>", "<class '_frozen_importlib.FrozenImporter'>", "<class '_frozen_importlib_external.PathFinder'>" ], "fs_encoding": "utf-8", "io_encoding": "utf-8" }, "version": "3.11.7 | packaged by conda-forge | (main, Dec 23 2023, 14:38:07) [Clang 16.0.6 ]", "makefile_filename": "/opt/miniforge3/lib/python3.11/config-3.11-darwin/Makefile", "os": "<module 'os' (frozen)>", "site": "<module 'site' (frozen)>", "datetime": "<module 'datetime' from '/opt/miniforge3/lib/python3.11/datetime.py'>", "math": "<module 'math' from '/opt/miniforge3/lib/python3.11/lib-dynload/math.cpython-311-darwin.so'>", "json": "<module 'json' from '/opt/miniforge3/lib/python3.11/json/__init__.py'>" } [virtualenv/run/session.py:53] build-metatensor-core: 11517 I add seed packages via FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/Users/guillaume/Library/Application Support/virtualenv) [virtualenv/run/session.py:57] build-metatensor-core: 11518 D got embed update of distribution %s from ('pip', PosixPath('/Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/embed/3/pip.json')) [virtualenv/app_data/via_disk_folder.py:131] build-metatensor-core: 11518 D install setuptools from wheel /opt/miniforge3/lib/python3.11/site-packages/virtualenv/seed/wheels/embed/setuptools-69.1.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] build-metatensor-core: 11518 D install wheel from wheel 
/opt/miniforge3/lib/python3.11/site-packages/virtualenv/seed/wheels/embed/wheel-0.42.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] build-metatensor-core: 11518 D install pip from wheel /opt/miniforge3/lib/python3.11/site-packages/virtualenv/seed/wheels/embed/pip-24.0-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:49] build-metatensor-core: 11519 D copy directory /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-24.0-py3-none-any/pip-24.0.dist-info to /Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages/pip-24.0.dist-info [virtualenv/util/path/_sync.py:40] build-metatensor-core: 11519 D copy directory /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-69.1.0-py3-none-any/setuptools-69.1.0.dist-info to /Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages/setuptools-69.1.0.dist-info [virtualenv/util/path/_sync.py:40] build-metatensor-core: 11519 D copy directory /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.42.0-py3-none-any/wheel-0.42.0.dist-info to /Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages/wheel-0.42.0.dist-info [virtualenv/util/path/_sync.py:40] build-metatensor-core: 11521 D copy directory /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.42.0-py3-none-any/wheel to /Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages/wheel [virtualenv/util/path/_sync.py:40] build-metatensor-core: 11521 D copy /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-69.1.0-py3-none-any/distutils-precedence.pth to /Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages/distutils-precedence.pth [virtualenv/util/path/_sync.py:40] build-metatensor-core: 11522 D copy directory /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-69.1.0-py3-none-any/setuptools to /Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages/setuptools [virtualenv/util/path/_sync.py:40] build-metatensor-core: 11522 D copy directory /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-24.0-py3-none-any/pip to /Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages/pip [virtualenv/util/path/_sync.py:40] build-metatensor-core: 11528 D copy /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/wheel-0.42.0-py3-none-any/wheel-0.42.0.virtualenv to /Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages/wheel-0.42.0.virtualenv [virtualenv/util/path/_sync.py:40] build-metatensor-core: 11529 D generated console scripts wheel wheel3 wheel-3.11 wheel3.11 [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] build-metatensor-core: 11558 D copy directory /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-69.1.0-py3-none-any/pkg_resources to /Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages/pkg_resources [virtualenv/util/path/_sync.py:40] build-metatensor-core: 11567 D copy directory /Users/guillaume/Library/Application 
Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-69.1.0-py3-none-any/_distutils_hack to /Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages/_distutils_hack [virtualenv/util/path/_sync.py:40] build-metatensor-core: 11568 D copy /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/setuptools-69.1.0-py3-none-any/setuptools-69.1.0.virtualenv to /Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages/setuptools-69.1.0.virtualenv [virtualenv/util/path/_sync.py:40] build-metatensor-core: 11568 D generated console scripts [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] build-metatensor-core: 11604 D copy /Users/guillaume/Library/Application Support/virtualenv/wheel/3.11/image/1/CopyPipInstall/pip-24.0-py3-none-any/pip-24.0.virtualenv to /Users/guillaume/code/metatensor/.tox/build-metatensor-core/lib/python3.11/site-packages/pip-24.0.virtualenv [virtualenv/util/path/_sync.py:40] build-metatensor-core: 11604 D generated console scripts pip3 pip3.11 pip-3.11 pip [virtualenv/seed/embed/via_app_data/pip_install/base.py:43] build-metatensor-core: 11604 I add activators for Bash, CShell, Fish, Nushell, PowerShell, Python [virtualenv/run/session.py:63] build-metatensor-core: 11605 D write /Users/guillaume/code/metatensor/.tox/build-metatensor-core/pyvenv.cfg [virtualenv/create/pyenv_cfg.py:33] build-metatensor-core: 11605 D home = /opt/miniforge3/bin [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11605 D implementation = CPython [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11605 D version_info = 3.11.7.final.0 [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11605 D virtualenv = 20.25.1 [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11605 D include-system-site-packages = false [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11605 D base-prefix = /opt/miniforge3 [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11605 D base-exec-prefix = /opt/miniforge3 [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11606 D base-executable = /opt/miniforge3/bin/python3.11 [virtualenv/create/pyenv_cfg.py:38] build-metatensor-core: 11607 W install_requires> python -I -m pip install cmake packaging setuptools wheel [tox/tox_env/api.py:425] Collecting cmake Using cached cmake-3.28.3-py2.py3-none-macosx_10_10_universal2.macosx_10_10_x86_64.macosx_11_0_arm64.macosx_11_0_universal2.whl.metadata (6.3 kB) Collecting packaging Using cached packaging-23.2-py3-none-any.whl.metadata (3.2 kB) Requirement already satisfied: setuptools in ./.tox/build-metatensor-core/lib/python3.11/site-packages (69.1.0) Requirement already satisfied: wheel in ./.tox/build-metatensor-core/lib/python3.11/site-packages (0.42.0) Using cached cmake-3.28.3-py2.py3-none-macosx_10_10_universal2.macosx_10_10_x86_64.macosx_11_0_arm64.macosx_11_0_universal2.whl (48.5 MB) Using cached packaging-23.2-py3-none-any.whl (53 kB) Installing collected packages: cmake, packaging Successfully installed cmake-3.28.3 packaging-23.2 build-metatensor-core: 13373 I exit 0 (1.77 seconds) /Users/guillaume/code/metatensor> python -I -m pip install cmake packaging setuptools wheel pid=44648 [tox/execute/api.py:280] build-metatensor-core: 13374 W install_deps> python -I -m pip install cmake packaging setuptools wheel [tox/tox_env/api.py:425] Requirement already satisfied: cmake in ./.tox/build-metatensor-core/lib/python3.11/site-packages (3.28.3) Requirement already 
satisfied: packaging in ./.tox/build-metatensor-core/lib/python3.11/site-packages (23.2) Requirement already satisfied: setuptools in ./.tox/build-metatensor-core/lib/python3.11/site-packages (69.1.0) Requirement already satisfied: wheel in ./.tox/build-metatensor-core/lib/python3.11/site-packages (0.42.0) build-metatensor-core: 13647 I exit 0 (0.27 seconds) /Users/guillaume/code/metatensor> python -I -m pip install cmake packaging setuptools wheel pid=44650 [tox/execute/api.py:280] build-metatensor-core: 13648 W commands[0]> pip wheel python/metatensor-core --no-deps --no-build-isolation --check-build-dependencies --wheel-dir '{env_tmp_dir}/dist' [tox/tox_env/api.py:425] Processing ./python/metatensor-core Preparing metadata (pyproject.toml) ... done Building wheels for collected packages: metatensor-core Building wheel for metatensor-core (pyproject.toml) ... done Created wheel for metatensor-core: filename=metatensor_core-0.2.0.dev7-py3-none-macosx_14_0_arm64.whl size=393337 sha256=3ef52bf49aeeab3cb26abb2f37e70fa66a4e087d0bea399fe5138888d440b34f Stored in directory: /Users/guillaume/Library/Caches/pip/wheels/51/2c/1e/776d763cc8f4fe85ef01b2aa554b8f88005d759914ef385ec8 Successfully built metatensor-core build-metatensor-core: 14924 I exit 0 (1.28 seconds) /Users/guillaume/code/metatensor> pip wheel python/metatensor-core --no-deps --no-build-isolation --check-build-dependencies --wheel-dir '{env_tmp_dir}/dist' pid=44652 [tox/execute/api.py:280] core-tests: 14925 E failed with no package found in /Users/guillaume/code/metatensor/.tox/build-metatensor-core/tmp/dist/* [tox/session/cmd/run/single.py:57] core-tests: FAIL code 1 (14.82 seconds) evaluation failed :( (14.85 seconds) ``` </details> ---- Ping @gaborbernat, this seems to be a fallout of #3237
open
2024-03-07T11:23:48Z
2024-08-20T19:00:49Z
https://github.com/tox-dev/tox/issues/3238
[ "bug:minor", "level:hard", "help:wanted" ]
Luthaf
26
pyro-ppl/numpyro
numpy
1,253
Raise an exception (or warning) when calling the inverse of a non-invertible transform.
The `Transform` class in `numpyro.distributions.transforms` provides an `inv` property which is meant to return the inverse transform. However, this is available even for functions that are not invertible. In those cases, we silently return the identity function which seems weird. ### Example: ```python import numpyro.distributions as dist f1 = dist.transforms.AbsTransform() print(f1.inv(-1)) # Not defined, but returns -1 print(f1.inv(f1(-1))) # Returns +1 ``` Reformatted for readability:

```python
import numpyro.distributions as dist

f1 = dist.transforms.AbsTransform()
print(f1.inv(-1))      # Not defined, but returns -1
print(f1.inv(f1(-1)))  # Returns +1
```

The issue is also present in transformations that are obtained via `ComposeTransform`. When the transformation is invertible, it is very easy to obtain the CDF of the transformed distribution (which is something I often need). So I think providing an 'automatic' inverse function is a good idea, but maybe it'd be good to raise an error (or at least a warning) when trying to access the inverse of a non-invertible transform.
closed
2021-12-11T14:47:13Z
2022-01-02T00:16:29Z
https://github.com/pyro-ppl/numpyro/issues/1253
[ "warnings & errors" ]
omarfsosa
2
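A minimal sketch of the guard proposed in the numpyro issue above: failing loudly instead of silently returning the identity. `GuardedTransform` and its `bijective` flag are illustrative stand-ins, not numpyro API.

```python
# Sketch: raise instead of silently returning an identity inverse.
class NotInvertibleError(RuntimeError):
    pass

class GuardedTransform:
    def __init__(self, forward, inverse=None, bijective=False):
        self.forward = forward
        self._inverse = inverse
        self.bijective = bijective

    def __call__(self, x):
        return self.forward(x)

    @property
    def inv(self):
        # Refuse to hand back anything when no true inverse exists.
        if not self.bijective or self._inverse is None:
            raise NotInvertibleError(
                "This transform is not invertible; refusing to act as identity."
            )
        return self._inverse

abs_t = GuardedTransform(abs, bijective=False)
print(abs_t(-1))  # 1, forward still works
try:
    abs_t.inv
except NotInvertibleError as e:
    print(e)      # raised instead of silently returning identity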
glumpy/glumpy
numpy
90
Possible to draw into Glumpy's OpenGL context from Python-bound C native code?
Currently, I use Glumpy more or less as a black box: Data is computed "somewhere", and then sent to Glumpy for visualisation. As a next step in my experimentation, I plan to do some computation with OpenCL. This will probably happen in a C/C++ program that will be bound to Python by pybind11. That program has to compute the vertex positions, and it would be nice to use them directly for the visualisation with OpenGL/OpenCL interoperability. In general, this could also be an option for that case: https://github.com/glumpy/glumpy/issues/89 I wonder: Can I get the OpenGL context of Glumpy and use it as a parameter to such a (native) function? Basically I'd assume it's possible, but I'm not sure whether I'm overlooking something.
open
2016-10-07T09:56:36Z
2016-10-07T17:13:05Z
https://github.com/glumpy/glumpy/issues/90
[]
mibieri
5
supabase/supabase-py
flask
261
Extra / in storage get_public_url() method
**Describe the bug** There is an extra forward slash returned in the get_public_url method. **To Reproduce** Steps to reproduce the behavior: 1. Instantiate SupabaseStorageClient 2. Use client.from_({bucket}).get_public_url({filename}) 3. Result is 'https://{database}.supabase.co/storage/v1//object/public/{bucket}/{filename}' **Expected behavior** 'https://{database}.supabase.co/storage/v1/object/public/{bucket}/{filename}' which matches what you get from the supabase website console when clicking the button for public_url. **Desktop (please complete the following information):** - OS: Windows11 - Python: 3.9.9 - Supabase version: 0.5.8 **Additional context** Deleting the extra / between v1 and object does give the correct url.
closed
2022-08-27T15:19:06Z
2022-10-10T15:02:40Z
https://github.com/supabase/supabase-py/issues/261
[ "bug", "storage" ]
justinbarak
7
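For the supabase-py issue above, a client-side normalization is an easy stopgap until the doubled slash is fixed upstream; this is generic string handling, not supabase-py API.

```python
def normalize_public_url(url: str) -> str:
    # Collapse doubled slashes in the path while leaving the
    # "https://" scheme separator untouched.
    scheme, _, rest = url.partition("://")
    while "//" in rest:
        rest = rest.replace("//", "/")
    return f"{scheme}://{rest}"

print(normalize_public_url(
    "https://db.supabase.co/storage/v1//object/public/bucket/file.png"
))
# -> https://db.supabase.co/storage/v1/object/public/bucket/file.png
```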
suitenumerique/docs
django
393
Add an onboarding template doc for new users
## Feature Request **Is your feature request related to a problem or unsupported use case? Please describe.** Some users are not super familiar with block-based structured editors, slash commands, markdown, etc. Docs is super confusing for them. **Describe the solution you'd like** Instead of new users starting with no docs, why not display a doc whose content can serve as a tutorial, with examples of all text blocks, instructions to play around, etc.? **Describe alternatives you've considered** Adding a tooltip on the docs editing page: when you click on it, you get a modal that explains the features of the editor, like in the pad. But I feel it's less interactive. ![Image](https://github.com/user-attachments/assets/469099ec-be8a-473d-844b-b8bdb0418faf) **Discovery, Documentation, Adoption, Migration Strategy** Notion isn't empty when you join. You can play with templates to get started. ![Image](https://github.com/user-attachments/assets/74ac7d7a-a4bd-4f95-9d4d-84c88be5237b)
open
2024-10-28T10:16:49Z
2025-01-30T17:05:36Z
https://github.com/suitenumerique/docs/issues/393
[]
virgile-dev
8
ndleah/python-mini-project
data-visualization
149
Which version of Python does this repo use: one version or multiple?
closed
2023-09-12T04:43:09Z
2024-06-02T06:03:32Z
https://github.com/ndleah/python-mini-project/issues/149
[]
11aaditya11
4
quantumlib/Cirq
api
7,072
Support more OpenQASM 2.0 gates from qelib1.inc
**Is your feature request related to a use case or problem? Please describe.** This is a follow-up to #7007, which reported qasm parser failure for identifiers starting with `gate`. After the fix in #7018 the example code still fails because of unknown `rzz` gate. **Describe the solution you'd like** Support parsing of more QASM gates from "qelib1.inc". At the very least we should add gates that have a direct counterpart in cirq. **What is the urgency from your perspective for this issue? Is it blocking important work?** P2 - we should do it in the next couple of quarters
open
2025-02-19T00:16:24Z
2025-03-05T19:12:49Z
https://github.com/quantumlib/Cirq/issues/7072
[ "kind/feature-request", "triage/accepted", "area/qasm" ]
pavoljuhas
1
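For the Cirq issue above, `rzz` can be appended by hand using qelib1.inc's own definition (`cx a,b; u1(theta) b; cx a,b`), since `u1(theta)` is `Z**(theta/pi)` in Cirq. This is a sketch of the workaround, not of the parser change the issue requests.

```python
import cirq
import numpy as np

def rzz(theta, a, b):
    """qelib1.inc defines rzz(theta) a,b as: cx a,b; u1(theta) b; cx a,b."""
    # u1(theta) is the phase gate diag(1, e^{i*theta}), i.e. Z**(theta/pi).
    return [cirq.CNOT(a, b), cirq.Z(b) ** (theta / np.pi), cirq.CNOT(a, b)]

q0, q1 = cirq.LineQubit.range(2)
circuit = cirq.Circuit(rzz(np.pi / 4, q0, q1))
print(circuit)
```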
sanic-org/sanic
asyncio
2,266
A way to globally set `ignore_body` to False/True
I'm in the process of upgrading our API from a 19.x.y version of Sanic to a recent one. The clients are not just browsers but API-driven ones like `requests` et al., which means helper routes that send a `DELETE /path\r\n\r\n{"json": "body"}` exist. I'm finding I have to annotate **every single** `.delete` route with `, ignore_body=False`. Is there a way to set this globally?
closed
2021-10-04T19:34:36Z
2022-03-02T08:49:17Z
https://github.com/sanic-org/sanic/issues/2266
[ "stale" ]
autumnjolitz
3
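For the Sanic question above, I'm not aware of a documented global switch, so a thin wrapper that defaults `ignore_body=False` on DELETE routes is one stopgap. A sketch, not Sanic API:

```python
def delete_with_body(app, uri, **kwargs):
    # Default ignore_body to False unless the caller overrides it,
    # so DELETE handlers receive the request body.
    kwargs.setdefault("ignore_body", False)
    return app.delete(uri, **kwargs)

# Usage with a hypothetical app:
# app = Sanic("api")
# @delete_with_body(app, "/items")
# async def delete_items(request): ...
```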
Avaiga/taipy
automation
2,104
[OTHER] Using taipy==3.0.0 , how can markdown be rendered in table?
### I need to render markdown in a table for Taipy==3.0.0

### Please help, here is the code:

```python
from taipy.gui import Gui
import markdown

def md_to_html(md_text):
    return markdown.markdown(md_text)

md_data = [
    "**Bold Text**",
    "- List Item 1\n- List Item 2",
    "[Link Example](https://www.example.com)",
    "*Italic* and **Bold**",
]

html_data = [md_to_html(item) for item in md_data]

page = """
# Taipy Table with HTML (converted from Markdown)

<|{html_data}|table|show_all|markdown|>
"""

Gui(page=page).run(port=8501)
```

### Code of Conduct

- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [X] I am willing to work on this issue (optional)
open
2024-10-20T17:22:07Z
2024-10-25T13:09:34Z
https://github.com/Avaiga/taipy/issues/2104
[ "📄 Documentation", "🖰 GUI", "🆘 Help wanted", "🟨 Priority: Medium", "✨New feature", "📝Release Notes" ]
IshanRattan
4
biolab/orange3
data-visualization
6,299
"CN2Classifier object has no attribute params" shows if I press "report" menu of an OWRuleViewer
Here is my environment: Python 3.9 PyQt5 Orange3 here is my code, you can run this code directly ```from PyQt5 import QtWidgets, QtGui, QtCore from PyQt5.QtCore import * from PyQt5.QtWidgets import * from PyQt5.QtGui import * import Orange from Orange.widgets.visualize.owruleviewer import OWRuleViewer from Orange.classification import CN2Learner class Ui_MainWindow(object): def setupUi(self, MainWindow): MainWindow.setObjectName("MainWindow") MainWindow.resize(1200, 800) self.MainWindow = MainWindow self.centralwidget = QtWidgets.QWidget(MainWindow) self.centralwidget.setObjectName("centralwidget") self.gridLayout = QtWidgets.QGridLayout(self.centralwidget) self.gridLayout.setObjectName("gridLayout") self.verticalLayout = QtWidgets.QVBoxLayout() self.verticalLayout.setObjectName("verticalLayout") self.horizontalLayout = QtWidgets.QHBoxLayout() self.horizontalLayout.setObjectName("horizontalLayout") spacerItem = QtWidgets.QSpacerItem(20, 20, QtWidgets.QSizePolicy.MinimumExpanding, QtWidgets.QSizePolicy.Minimum) self.horizontalLayout.addItem(spacerItem) self.pushButton_showOrange = QtWidgets.QPushButton(self.centralwidget) self.pushButton_showOrange.setObjectName("pushButton_showOrange") self.horizontalLayout.addWidget(self.pushButton_showOrange) spacerItem1 = QtWidgets.QSpacerItem(40, 20, QtWidgets.QSizePolicy.MinimumExpanding, QtWidgets.QSizePolicy.Minimum) self.horizontalLayout.addItem(spacerItem1) self.pushButton_closeOrange = QtWidgets.QPushButton(self.centralwidget) self.pushButton_closeOrange.setObjectName("pushButton_closeOrange") self.horizontalLayout.addWidget(self.pushButton_closeOrange) spacerItem2 = QtWidgets.QSpacerItem(20, 20, QtWidgets.QSizePolicy.MinimumExpanding, QtWidgets.QSizePolicy.Minimum) self.horizontalLayout.addItem(spacerItem2) self.verticalLayout.addLayout(self.horizontalLayout) spacerItem3 = QtWidgets.QSpacerItem(20, 0, QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Minimum) self.verticalLayout.addItem(spacerItem3) self.horizontalLayout_2 = QtWidgets.QHBoxLayout() self.horizontalLayout_2.setObjectName("horizontalLayout_2") spacerItem4 = QtWidgets.QSpacerItem(10, 20, QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Minimum) self.horizontalLayout_2.addItem(spacerItem4) self.tabWidget = QtWidgets.QTabWidget(self.centralwidget) self.tabWidget.setObjectName("tabWidget") self.tab_added = QtWidgets.QWidget() self.tab_added.setObjectName("tab_added") current_verticalLayout = QtWidgets.QVBoxLayout(self.tab_added) current_verticalLayout.setObjectName("current_verticalLayout") spacerItem2 = QtWidgets.QSpacerItem(20, 0, QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Minimum) current_verticalLayout.addItem(spacerItem2) ###############################Python 3.9 + PyQt5 + Orange 3 ###################################### data = Orange.data.Table(r"D:\Software\Orange3\Orange\Lib\site-packages\Orange\datasets\heart_disease.tab") learner = Orange.classification.CN2Learner() model = learner(data) model.instances = data self.ow = OWRuleViewer() # 1. create an instance self.ow.set_classifier(model) self.ow.show() #################################################################################################### self.ow.setParent(self.tab_added) # 2. 
add "ow" to the "tab" of the QTabWidget #################################################################################################### spacerItem3 = QtWidgets.QSpacerItem(20, 0, QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Minimum) current_verticalLayout.addItem(spacerItem3) current_verticalLayout.addWidget(self.ow) # 3. add "ow" to the vertical layout #################################################################################################### self.tabWidget.addTab(self.tab_added, "") self.horizontalLayout_2.addWidget(self.tabWidget) spacerItem5 = QtWidgets.QSpacerItem(10, 20, QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Minimum) self.horizontalLayout_2.addItem(spacerItem5) self.verticalLayout.addLayout(self.horizontalLayout_2) self.gridLayout.addLayout(self.verticalLayout, 0, 0, 1, 1) MainWindow.setCentralWidget(self.centralwidget) self.menubar = QtWidgets.QMenuBar(MainWindow) self.menubar.setGeometry(QtCore.QRect(0, 0, 1200, 23)) self.menubar.setObjectName("menubar") MainWindow.setMenuBar(self.menubar) self.statusbar = QtWidgets.QStatusBar(MainWindow) self.statusbar.setObjectName("statusbar") MainWindow.setStatusBar(self.statusbar) self.retranslateUi(MainWindow) self.tabWidget.setCurrentIndex(0) QtCore.QMetaObject.connectSlotsByName(MainWindow) self.pushButton_closeOrange.clicked.connect(self.close_orange) self.pushButton_showOrange.clicked.connect(self.show_orange) def retranslateUi(self, MainWindow): _translate = QtCore.QCoreApplication.translate MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow")) self.pushButton_showOrange.setText(_translate("MainWindow", "Open")) self.pushButton_closeOrange.setText(_translate("MainWindow", "Close")) self.tabWidget.setTabText(self.tabWidget.indexOf(self.tab_added), _translate("MainWindow", "tab_added")) def close_orange(self): self.ow.close() def show_orange(self): self.ow.show() if __name__ == "__main__": import sys app = QtWidgets.QApplication(sys.argv) MainWindow = QtWidgets.QMainWindow() ui = Ui_MainWindow() ui.setupUi(MainWindow) MainWindow.show() sys.exit(app.exec_()) ``` Then, click the "report" at the bottom left of the OWRuleViewer ![1 (1)](https://user-images.githubusercontent.com/28619306/212644758-ec5533d9-fe40-4ff7-b0e9-4b1f102c0eba.png) then "CN2Classifier object has no attribute params" shows ![1 (2)](https://user-images.githubusercontent.com/28619306/212644802-5e283358-0810-4b06-b244-4b03cb3a5898.png) but, if I comment line 177 in Python\Lib\site-packages\Orange\widgets\visualize\owruleviewer.py as below ![1 (3)](https://user-images.githubusercontent.com/28619306/212644830-a8d0ef0b-e13e-41f4-a9ca-e80818191f95.png) ``` def send_report(self): if self.classifier is not None: self. report_domain("Data domain", self.classifier.original_domain) #self. report_items("Rule induction algorithm", self.classifier.params) self. report_table("Induced rules", self.view) ``` then error will not occur and it works correctly: ![1 (4)](https://user-images.githubusercontent.com/28619306/212644652-59825add-1d0d-4bbf-8cf8-128da2fc39dd.png) so, is this a bug? or do I forget a neccesary step (call some method or set some attribute?) after creating an instance of OWRuleViewer? ``` data = Orange.data.Table(r"D:\Software\Orange3\Orange\Lib\site-packages\Orange\datasets\heart_disease.tab") learner = Orange.classification.CN2Learner() model = learner(data) model.instances = data self.ow = OWRuleViewer() # 1. create an instance self.ow.set_classifier(model) self.ow.show() ```
closed
2023-01-16T09:30:46Z
2023-01-20T18:41:05Z
https://github.com/biolab/orange3/issues/6299
[ "bug", "snack" ]
madainigun14
1
blb-ventures/strawberry-django-plus
graphql
198
Support standard property fields
Using `@property` fields raises an error at this line: https://github.com/blb-ventures/strawberry-django-plus/blob/af55c4c3681a4c1de95fd64c512a02aa14a344fb/strawberry_django_plus/type.py#L167 Many times we need to keep using normal property fields, and instead add the optimization hints to the schema field. This also breaks other libraries like https://github.com/W1ldPo1nter/django-queryable-properties Example use (`is_alpha` field, from https://django-queryable-properties.readthedocs.io/en/stable/common.html):

```python
class ApplicationVersion(models.Model):
    ALPHA = 'a'
    BETA = 'b'
    STABLE = 's'
    RELEASE_TYPE_CHOICES = (
        (ALPHA, _('Alpha')),
        (BETA, _('Beta')),
        (STABLE, _('Stable')),
    )
    release_type = models.CharField(max_length=1, choices=RELEASE_TYPE_CHOICES)
    objects = QueryablePropertiesManager()
    is_alpha = ValueCheckProperty('release_type', ALPHA)
```
closed
2023-04-22T20:30:55Z
2023-05-01T18:07:55Z
https://github.com/blb-ventures/strawberry-django-plus/issues/198
[]
SafaAlFulaijLumofy
2
randyzwitch/streamlit-folium
streamlit
70
Map width cannot be increased when using st_folium
Hi there, I noticed that the map width doesn't change in my web app when the width argument in st_folium is modified. The height argument works well though. You may find my code to reproduce the web app below. Thank you. ``` import pandas as pd import numpy as np import json from geopy.geocoders import Nominatim import requests import folium import streamlit as st from streamlit_folium import folium_static, st_folium def center(): address = 'Singapore' geolocator = Nominatim(user_agent="id_explorer") location = geolocator.geocode(address) latitude = location.latitude longitude = location.longitude return latitude, longitude def tiles_func(input): mapping = { "OpenStreetMap" : "OpenStreetMap", "https://maps-a.onemap.sg/v3/Grey/{z}/{x}/{y}.png" : "OneMap" } output = mapping[input] return output centers = center() options = ("https://maps-a.onemap.sg/v3/Grey/{z}/{x}/{y}.png", "OpenStreetMap") add_select = st.sidebar.selectbox(label="Choose your base map", options=options, format_func=tiles_func) m = folium.Map(tiles=add_select, location=[centers[0], centers[1]], zoom_start=11, attr="map") m.add_child(folium.LatLngPopup()) st.title('Map of Singapore') data = st_folium(m, width=2000) if data["last_clicked"] is None: data = "" else: st.text(f'Latitude: {data["last_clicked"]["lat"]}\nLongitude: {data["last_clicked"]["lng"]}') ``` ![image](https://user-images.githubusercontent.com/100036474/171595305-d33e96bb-bafa-4019-aa7b-2ccbbf784b2c.png)
closed
2022-06-02T09:02:33Z
2022-06-06T02:52:45Z
https://github.com/randyzwitch/streamlit-folium/issues/70
[]
leonseet
2
xzkostyan/clickhouse-sqlalchemy
sqlalchemy
56
Why does SQLAlchemy require a primary key, and why does it say a table I never defined already exists?
py3.7, clickhouse-driver 0.0.19, clickhouse-sqlalchemy 0.0.10

# Primary key

```py
from clickhouse_sqlalchemy.ext.declarative import declarative_base

class BaseModel(declarative_base()):
    __abstract__ = True
    __table_args__ = {'extend_existing': True}

class MachineOp(BaseModel):
    __tablename__ = 'machine_op'
    date = Column(Date)
    time = Column(UInt32)
    ip = Column(String)
    behavior_type = Column(UInt8)
    file_name = Column(String)
    file_path = Column(String)
    detail = Column(String)
```

ERROR:

```
sqlalchemy.exc.ArgumentError: Mapper mapped class MachineOp->machine_op could not assemble any primary key columns for mapped table 'machine_op'
```

# Table already exists

```py
class BaseModel(declarative_base()):
    __abstract__ = True
    # __table_args__ = {'extend_existing': True}

class MachineOp(BaseModel):
    __tablename__ = 'xxxxdfaf'
    date = Column(Date)
    time = Column(UInt32)
    ip = Column(String)
    behavior_type = Column(UInt8)
    file_name = Column(String)
    file_path = Column(String)
    detail = Column(String)
```

I don't have any table named 'xxxxdfaf', which is strange:

```
ERROR: Failure: InvalidRequestError (Table 'xxxxdfaf' is already defined for this MetaData instance. Specify 'extend_existing=True' to redefine options and columns on an existing Table object.)
```

My table definition:

```
CREATE TABLE machine_op
(
    `date` Date,
    `time` UInt32,
    `ip` String,
    `behavior_type` UInt8,
    `file_name` String,
    `file_path` String,
    `detail` String
)
ENGINE = MergeTree()
PARTITION BY date
ORDER BY date;
```
closed
2019-05-21T09:38:31Z
2019-05-21T14:40:26Z
https://github.com/xzkostyan/clickhouse-sqlalchemy/issues/56
[]
631068264
2
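For the clickhouse-sqlalchemy issue above: SQLAlchemy's ORM refuses to map a class without a primary key, so the usual workaround is to mark the ORDER BY column as a nominal primary key (ClickHouse has no real primary-key constraint; the flag only satisfies the mapper). The "already defined" error typically means the class was declared twice against the same MetaData, which `extend_existing=True` papers over. A sketch follows; note that engine argument names vary across clickhouse-sqlalchemy versions, so treat them as assumptions.

```python
from sqlalchemy import Column
from clickhouse_sqlalchemy import types, engines
from clickhouse_sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class MachineOp(Base):
    __tablename__ = 'machine_op'

    # Nominal primary key: satisfies the SQLAlchemy mapper only;
    # ClickHouse itself enforces no primary-key constraint.
    date = Column(types.Date, primary_key=True)
    time = Column(types.UInt32)
    ip = Column(types.String)

    __table_args__ = (
        engines.MergeTree(partition_by=date, order_by=(date,)),
    )
```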
holoviz/panel
jupyter
6,888
VS Code + notebook setup, Panel + Param, no reactivity when using `pn.template`
#### Versions

bokeh==3.4.1, ipykernel==6.29.4, jupyter-bokeh==4.0.4, panel==1.4.4, param==2.1.0

#### Expected and observed behavior

```python
import panel as pn
import param

pn.extension()

text = pn.widgets.TextInput()

@param.depends(text.param.value, watch=True)
def _update(value):
    print(value)

text
# text.servable();
# pn.template.VanillaTemplate(main=[text])
```

When rendering `text`, everything is fine. Typing "1" and moving focus away from the input results in `_update` being called and the value being printed below the input. ![text](https://github.com/holoviz/panel/assets/52021/6ab42b56-4b70-4f13-9dfc-2e74b7c70db3) When rendering `pn.template` (after commenting out `text` and uncommenting `pn.template.VanillaTemplate(main=[text])`), the `_update` function is never called. ![template](https://github.com/holoviz/panel/assets/52021/62d7324e-fd0a-4c06-9a95-bf506da8207c) #### Example code See above or use the repro repo: https://github.com/earshinov/repro-panel-param-vscode-notebook-pn-template-no-reactivity #### Screenshots or screencasts of the bug in action — #### Notes A beginner Panel and Param user here. Am I expecting them to support something that they weren't intended to support? - [ ] I may be interested in making a pull request to address this
open
2024-06-02T11:45:58Z
2024-06-27T09:48:47Z
https://github.com/holoviz/panel/issues/6888
[]
earshinov
1
aimhubio/aim
data-visualization
3,038
Passing a bearer token
## ❓ Hello! In 3.17.4 there was a way to pass an auth bearer token using the environment variable `AIM_RT_BEARER_TOKEN` and I see that in the 4.0+ releases that variable is no longer part of the source code. Is it still possible to pass a bearer token and if so, would you be able to provide an example on how to do that?
open
2023-10-04T16:35:37Z
2025-02-03T16:51:16Z
https://github.com/aimhubio/aim/issues/3038
[ "type / question" ]
jennifer12121
1
dynaconf/dynaconf
fastapi
658
Override config file from env variable using nested path
**Describe the bug** Not sure if it's working as expected but there is a difference between uppercase and lowercase when we try to override the config from a file `settings.yaml` when there is at least three nested levels **To Reproduce** 1. Having the following folder structure <details> <summary> Project structure </summary> ```bash . ├── requirements.txt ├── settings.yaml └── test_config.py ``` </details> 2. Having the following `requirements.txt`: <details> <summary> Project structure </summary> ``` python==3.9.6 dynaconf==3.1.5 pytest==6 ``` </details> 3. Having the following config files: <!-- Please adjust if you are using different files and formats! --> <details> <summary> Config files </summary> **./settings.yaml** ```yaml a: b: foo c: d: hello ``` </details> 4. Having the following app code: <details> <summary> Code </summary> **./test_config.py** ```python from dynaconf import Dynaconf def new_settings(): settings = Dynaconf( envvar_prefix="DYNACONF", settings_files=['settings.yaml'], ) print(f"{settings.a=}") return settings def test_nested_one_uppercase(monkeypatch): monkeypatch.setenv("DYNACONF_A__B", "OK") assert new_settings().a.b == "OK" def test_nested_two_lowercase(monkeypatch): monkeypatch.setenv("DYNACONF_a__b__c", "OK") assert new_settings().a.b.c == "OK" def test_nested_two_uppercase(monkeypatch): monkeypatch.setenv("DYNACONF_A__B__C", "OK") assert new_settings().A.B.C == "OK" assert new_settings().a.b.c == "OK" ``` </details> 5. Executing under the following environment <details> <summary> Execution </summary> ```bash python3 -m venv .venv source .venv/bin/activate pip install -r requirements.txt pytest ``` </details> I used `pytest` with `monkeypatch` fixture to play with the lib but the issue is still there outside tests. 
I'm using MacOS but I also tried to run the script in a linux docker image The last test is not working because the path is in uppercase and there are many nested levels ``` ============================================================================================================================= test session starts ============================================================================================================================== platform darwin -- Python 3.9.6, pytest-6.0.0, py-1.10.0, pluggy-0.13.1 collected 3 items test_config.py ..F [100%] =================================================================================================================================== FAILURES =================================================================================================================================== __________________________________________________________________________________________________________________________ test_nested_two_uppercase ___________________________________________________________________________________________________________________________ monkeypatch = <_pytest.monkeypatch.MonkeyPatch object at 0x102334a00> def test_nested_two_uppercase(monkeypatch): monkeypatch.setenv("DYNACONF_A__B__C", "OK") assert new_settings().A.B.C == "OK" > assert new_settings().a.b.c == "OK" E AttributeError: 'str' object has no attribute 'c' test_config.py:23: AttributeError ----------------------------------------------------------------------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------------------------------------------------------------------- settings.a=<Box: {'B': {'C': 'OK'}, 'b': 'foo', 'c': {'d': 'hello'}}> settings.a=<Box: {'B': {'C': 'OK'}, 'b': 'foo', 'c': {'d': 'hello'}}> =========================================================================================================================== short test summary info ============================================================================================================================ FAILED test_config.py::test_nested_two_uppercase - AttributeError: 'str' object has no attribute 'c' ========================================================================================================================= 1 failed, 2 passed in 0.33s ========================================================================================================================== ```
closed
2021-09-09T09:14:40Z
2023-07-13T19:11:01Z
https://github.com/dynaconf/dynaconf/issues/658
[ "bug", "LazyIssue" ]
harscoet
0
marimo-team/marimo
data-science
3,700
`\verb` command (sometimes) not rendering
### Describe the bug The $\verb|\verb|$ command in Tex does not render correctly, but this happens _sometimes_. I've been able to reproduce the issue minimally as [follows](https://marimo.io/p/@aliphys/notebook-q9hkn9): ![Image](https://github.com/user-attachments/assets/d756d3fa-fb58-484e-b55e-bc6a06506a5a) The issue replicates on both the Cloud instance, as well as the locally installed instance of Marimo. I would expect that the render of the third cell is also the render of the second cell. ### Environment For the local environment: <details> ``` { "marimo": "0.11.0", "OS": "Windows", "OS Version": "11", "Processor": "Intel64 Family 6 Model 167 Stepping 1, GenuineIntel", "Python Version": "3.12.8", "Binaries": { "Browser": "--", "Node": "--" }, "Dependencies": { "click": "8.1.8", "docutils": "0.21.2", "itsdangerous": "2.2.0", "jedi": "0.19.2", "markdown": "3.7", "narwhals": "1.25.1", "packaging": "24.2", "psutil": "6.1.1", "pygments": "2.19.1", "pymdown-extensions": "10.14.3", "pyyaml": "6.0.2", "ruff": "0.9.4", "starlette": "0.45.3", "tomlkit": "0.13.2", "typing-extensions": "4.12.2", "uvicorn": "0.34.0", "websockets": "14.2" }, "Optional Dependencies": { "pandas": "2.2.3" }, "Experimental Flags": {} } ``` </details> ### Code to reproduce Code below (generated via the UI, not manually) is accessible here: https://marimo.io/p/@aliphys/notebook-q9hkn9 ``` import marimo __generated_with = "0.10.12" app = marimo.App() @app.cell def _(): import marimo as mo return (mo,) @app.cell def _(mo): mo.md("""$\verb|Hello World|$""") return @app.cell def _(mo): mo.md(r"""$\verb|Hello World|$""") return @app.cell def _(mo): print(mo.__version__) return if __name__ == "__main__": app.run() ```
closed
2025-02-05T14:45:03Z
2025-02-05T17:22:08Z
https://github.com/marimo-team/marimo/issues/3700
[ "bug" ]
aliphys
3
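A likely explanation for the marimo report above, independent of marimo itself: in a non-raw Python string, the `\v` in `\verb` is the vertical-tab escape, so the LaTeX source is mangled before any renderer sees it; the raw string preserves the backslash. A quick check:

```python
s1 = "$\verb|Hello|$"   # "\v" becomes a vertical tab (0x0b)
s2 = r"$\verb|Hello|$"  # raw string keeps the backslash intact

print(list(s1[:3]))  # ['$', '\x0b', 'e'] -- '\verb' is already mangled
print(list(s2[:3]))  # ['$', '\\', 'v']
```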
nalepae/pandarallel
pandas
98
df.parallel_apply raises AttributeError: Can't pickle local object 'prepare_worker.<locals>.closure.<locals>.wrapper'
When I use `df.parallel_apply(lambda x: assinLabel(x), axis=1)`, it raises `AttributeError: Can't pickle local object 'prepare_worker.<locals>.closure.<locals>.wrapper'`.
closed
2020-06-19T10:40:12Z
2022-03-14T20:30:44Z
https://github.com/nalepae/pandarallel/issues/98
[]
mliwang
7
gto76/python-cheatsheet
python
90
Pythons
closed
2021-03-30T05:21:43Z
2021-03-31T04:42:22Z
https://github.com/gto76/python-cheatsheet/issues/90
[]
ghost
0
sunscrapers/djoser
rest-api
708
Is it possible to add an expiration time to email activation tokens?
Would someone who needs this change have to override some structure, like a view or serializer? Or can it be set through a parameter? REF: https://stackoverflow.com/questions/71628282/add-expire-time-for-validation-and-verification-in-djoser
open
2022-12-13T03:24:53Z
2022-12-31T17:45:36Z
https://github.com/sunscrapers/djoser/issues/708
[]
albjoaov
1
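For the djoser question above: activation links are built with Django's `default_token_generator` by default, so (assuming that default is in use) their lifetime can usually be bounded with Django's own setting rather than by overriding views. A sketch:

```python
# settings.py -- bounds tokens made by django.contrib.auth.tokens,
# which djoser uses by default for activation emails (Django >= 3.1).
PASSWORD_RESET_TIMEOUT = 60 * 60 * 24  # 24 hours, in seconds
```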
charlesq34/pointnet
tensorflow
270
After running train.py for classification, how can I visualize the result?
After running train.py for classification, how can I visualize the result?
open
2021-03-17T15:41:25Z
2021-07-14T09:46:06Z
https://github.com/charlesq34/pointnet/issues/270
[]
noridayu1998
2
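The pointnet training script above only logs accuracy; visualization is left to the user. A minimal sketch for classification results is a confusion matrix built from saved predictions; the file names and array shapes below are assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical arrays: ground-truth and predicted class ids per shape.
y_true = np.load("labels.npy")
y_pred = np.load("predictions.npy")

n = int(max(y_true.max(), y_pred.max())) + 1
cm = np.zeros((n, n), dtype=int)
for t, p in zip(y_true, y_pred):
    cm[t, p] += 1  # rows: true class, columns: predicted class

plt.imshow(cm, cmap="Blues")
plt.xlabel("predicted class")
plt.ylabel("true class")
plt.colorbar()
plt.show()
```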
biolab/orange3
scikit-learn
6,452
Concatenate from position
**What's your use case?** For my budget data stored in several files (one per year), the column names can vary (such as oct2018/10-2018/2018 October...), even though the fifth column is always the one corresponding to January, the sixth to February, etc. I would like to combine all these files with the Concatenate tool. **What's your proposed solution?** I would like an option on the Concatenate tool to allow concatenation by field position instead of field name. **Are there any alternative solutions?** Renaming fields beforehand... but that's a manual task.
closed
2023-05-23T19:14:10Z
2023-10-20T12:31:48Z
https://github.com/biolab/orange3/issues/6452
[]
simonaubertbd
1
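Until the Orange Concatenate widget grows such an option, headers can be normalized by position before the files reach Orange, e.g. with a small pandas preprocessing step; the file names and column layout below are illustrative.

```python
import pandas as pd

MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

def normalize(path: str) -> pd.DataFrame:
    df = pd.read_csv(path)
    # Columns 5..16 are the months, whatever they were called in the file.
    cols = list(df.columns)
    cols[4:16] = MONTHS
    df.columns = cols
    return df

frames = [normalize(p) for p in ["budget2018.csv", "budget2019.csv"]]
merged = pd.concat(frames, ignore_index=True)
```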
blb-ventures/strawberry-django-plus
graphql
31
Optimizer doesn't merge doubled up sub-queries
If you create a query like the following:

```gql
query TestQuery {
  items {
    id
    nestedItems {
      id
    }
    nestedItems {
      testField
    }
  }
}
```

The optimizer will only flag the second of these subqueries as needed for prefetch, creating an N+1 problem for the `id` field on `nestedItems`. The response object properly merges these sub-queries, so it would make sense for the optimizer to be able to handle them as well. Obviously the example query above can easily be rewritten, but in the case that the front end generates a more complicated query, that won't be possible. A more realistic example might be:

```
fragment NestedItemsId on NestedItemType {
  id
}

fragment NestedItemsField on NestedItemType {
  testField
}

query TestQuery {
  items {
    id
    ...NestedItemsId
    ...NestedItemsField
  }
}
```
closed
2022-04-08T19:24:34Z
2022-04-08T22:43:38Z
https://github.com/blb-ventures/strawberry-django-plus/issues/31
[]
hiporox
1
Urinx/WeixinBot
api
160
Error on startup
Environment: Windows 10 64-bit, Python 3.5.2. Error output:

```
> python weixin.py
Traceback (most recent call last):
  File "weixin.py", line 1182, in <module>
    webwx.start()
  File "weixin.py", line 33, in wrapper
    return fn(*args)
  File "weixin.py", line 940, in start
    self._echo('[*] 微信网页版 ... 开动')
  File "weixin.py", line 1037, in _echo
    sys.stdout.write(str)
  File "weixin.py", line 1164, in write
    s = s.decode('utf-8')
AttributeError: 'str' object has no attribute 'decode'
```
open
2017-03-01T01:08:01Z
2017-03-02T02:34:44Z
https://github.com/Urinx/WeixinBot/issues/160
[]
guotie
3
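The WeixinBot traceback above is a classic Python 2 to 3 issue: under Python 3, `sys.stdout` already receives `str`, which has no `.decode`. A defensive sketch of the write path (the project's actual class is not reproduced here):

```python
import sys

def safe_write(s):
    # Python 3: str has no .decode; only decode when we really got bytes.
    if isinstance(s, bytes):
        s = s.decode('utf-8')
    sys.stdout.write(s)

safe_write('微信网页版 ... 开动\n')          # str passes through untouched
safe_write('already text\n'.encode('utf-8'))  # bytes get decoded first
```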
mwaskom/seaborn
data-science
3,778
Feature request: Dumbell plots
![image](https://github.com/user-attachments/assets/d3cdb53e-83c9-41fd-80b5-954133abf81d) These can be quite common in politics or sociology in order to represent not only the data itself but also the difference between two groups. Although this kind of plot can be built by using `matplotlib` and `seaborn` combined, having a dedicated method that simplifies this work could be helpful. If this feature is interesting to more people, I can contribute to it (both the interface and the implementation itself). More examples: https://datavizcatalogue.com/blog/chart-snapshot-dumbbell-plot/
closed
2024-11-07T16:18:06Z
2024-11-07T23:24:11Z
https://github.com/mwaskom/seaborn/issues/3778
[]
mflova
1
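Pending a dedicated seaborn API for the request above, a dumbbell plot is a few lines of matplotlib on top of seaborn styling. A sketch with made-up data:

```python
import matplotlib.pyplot as plt
import seaborn as sns

sns.set_theme(style="whitegrid")

groups = ["A", "B", "C"]
before = [10, 25, 18]
after = [22, 30, 15]
ys = list(range(len(groups)))

fig, ax = plt.subplots()
for y, (x0, x1) in enumerate(zip(before, after)):
    ax.plot([x0, x1], [y, y], color="grey", zorder=1)  # connecting segment
ax.scatter(before, ys, color="tab:blue", zorder=2, label="before")
ax.scatter(after, ys, color="tab:orange", zorder=2, label="after")
ax.set_yticks(ys, groups)
ax.legend()
plt.show()
```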
allenai/allennlp
nlp
5,203
'from allennlp.commands.elmo import ElmoEmbedder' does not work even in 0.9.0
> > We removed the `elmo` command in v1.0. If you want to use the `elmo` command then you need to check out v0.9.0. We removed this command because `elmo` is becoming a bit old and we didn't think it made sense to continue supporting it as a top-level command. > > version 0.9.0 worked fine. thx. version 0.9.0 not work for me, any advice is welcome ``` Traceback (most recent call last): File "/content/sow-reap-paraphrasing/processing/get_elmo_embeds.py", line 1, in <module> from allennlp.commands.elmo import ElmoEmbedder File "/usr/local/lib/python3.7/dist-packages/allennlp/commands/__init__.py", line 8, in <module> from allennlp.commands.configure import Configure File "/usr/local/lib/python3.7/dist-packages/allennlp/commands/configure.py", line 26, in <module> from allennlp.service.config_explorer import make_app File "/usr/local/lib/python3.7/dist-packages/allennlp/service/config_explorer.py", line 24, in <module> from allennlp.common.configuration import configure, choices File "/usr/local/lib/python3.7/dist-packages/allennlp/common/configuration.py", line 17, in <module> from allennlp.data.dataset_readers import DatasetReader File "/usr/local/lib/python3.7/dist-packages/allennlp/data/__init__.py", line 1, in <module> from allennlp.data.dataset_readers.dataset_reader import DatasetReader File "/usr/local/lib/python3.7/dist-packages/allennlp/data/dataset_readers/__init__.py", line 10, in <module> from allennlp.data.dataset_readers.ccgbank import CcgBankDatasetReader File "/usr/local/lib/python3.7/dist-packages/allennlp/data/dataset_readers/ccgbank.py", line 9, in <module> from allennlp.data.dataset_readers.dataset_reader import DatasetReader File "/usr/local/lib/python3.7/dist-packages/allennlp/data/dataset_readers/dataset_reader.py", line 8, in <module> from allennlp.data.instance import Instance File "/usr/local/lib/python3.7/dist-packages/allennlp/data/instance.py", line 3, in <module> from allennlp.data.fields.field import DataArray, Field File "/usr/local/lib/python3.7/dist-packages/allennlp/data/fields/__init__.py", line 7, in <module> from allennlp.data.fields.array_field import ArrayField File "/usr/local/lib/python3.7/dist-packages/allennlp/data/fields/array_field.py", line 10, in <module> class ArrayField(Field[numpy.ndarray]): File "/usr/local/lib/python3.7/dist-packages/allennlp/data/fields/array_field.py", line 50, in ArrayField @overrides File "/usr/local/lib/python3.7/dist-packages/overrides/overrides.py", line 88, in overrides return _overrides(method, check_signature, check_at_runtime) File "/usr/local/lib/python3.7/dist-packages/overrides/overrides.py", line 114, in _overrides _validate_method(method, super_class, check_signature) File "/usr/local/lib/python3.7/dist-packages/overrides/overrides.py", line 135, in _validate_method ensure_signature_is_compatible(super_method, method, is_static) File "/usr/local/lib/python3.7/dist-packages/overrides/signature.py", line 93, in ensure_signature_is_compatible ensure_return_type_compatibility(super_type_hints, sub_type_hints, method_name) File "/usr/local/lib/python3.7/dist-packages/overrides/signature.py", line 288, in ensure_return_type_compatibility f"{method_name}: return type `{sub_return}` is not a `{super_return}`." TypeError: ArrayField.empty_field: return type `None` is not a `<class 'allennlp.data.fields.field.Field'>`. ``` _Originally posted by @TITC in https://github.com/allenai/allennlp/issues/4384#issuecomment-841781842_
closed
2021-05-16T07:54:29Z
2021-05-21T22:53:03Z
https://github.com/allenai/allennlp/issues/5203
[]
TITC
4
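The allennlp traceback above originates in the third-party `overrides` package, which added strict signature checks in later releases and broke older allennlp code. The fix commonly reported for v0.9.0 installs is pinning it back to a pre-check release, e.g. `pip install overrides==3.1.0`; that version number comes from community reports, so it's worth verifying against your environment.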
deepfakes/faceswap
deep-learning
1,388
bash ./faceswap_setup_x64.sh
closed
2024-05-10T08:37:18Z
2024-05-10T11:44:38Z
https://github.com/deepfakes/faceswap/issues/1388
[]
geedee301
0
explosion/spaCy
nlp
12,221
RuntimeError: from_dlpack received an invalid capsule.
## How to reproduce the behaviour

```
import spacy
nlp = spacy.load('en_core_web_trf')
tokens1 = nlp("My string")
```

## Your Environment

* Operating System: Ubuntu 20.04 LTS
* Python Version Used: Python 3.8.10
* spaCy Version Used: 3.5.0
* Environment Information:

Full Error:

> RuntimeError: from_dlpack received an invalid capsule. Note that DLTensor capsules can be consumed only once, so you might have already constructed a tensor from it once.
closed
2023-02-03T09:50:39Z
2023-03-17T00:02:28Z
https://github.com/explosion/spaCy/issues/12221
[ "third-party", "🔮 thinc" ]
tassosblackg
3
koxudaxi/datamodel-code-generator
fastapi
1,990
JSON Schema Discriminator With External Reference Support
**Is your feature request related to a problem? Please describe.** While I love the project for most use cases, I'm always frustrated when trying to generate Pydantic models from JSON Schemas that utilize field discriminators with external references, as it results in errors and hinders my ability to work efficiently. **Describe the solution you'd like** I would like the `datamodel-codegen` tool to support parsing and generating Pydantic models for JSON Schemas that contain field discriminators referencing external schema files (e.g., `$ref`: "./type1.json" or likewise references in the `discriminator.mapping`). This feature would allow me to effectively utilize JSON Schema's external reference mechanism, enabling more flexible and reusable schema designs. **Describe alternatives you've considered** I've considered alternative solutions such as manually parsing and preprocessing the JSON Schema to construct a schema where all external references are resolved internally within a `$def` object. However, this approach would be time-consuming, error-prone for a widespread application, and may not provide the same level of performance and maintainability as a dedicated tool `datamodel-codegen`. **Additional context** Showcase of a breaking data model generation: `datamodel-codegen --input schema.json` `schema.json: ```json { "properties": { "inner": { "discriminator": { "mapping": { "a": "./type1.json", "b": "./type2.json" }, "propertyName": "type_" }, "oneOf": [ { "$ref": "./type1.json" }, { "$ref": "./type2.json" } ], "title": "Inner" } }, "required": [ "inner" ], "title": "Response", "type": "object" } ``` `type1.json`: ```json { "properties": { "type_": { "const": "a", "default": "a", "title": "Type " } }, "title": "Type1", "type": "object" } ``` `type2.json`: ```json { "properties": { "type_": { "const": "b", "default": "b", "title": "Type " } }, "title": "Type2", "type": "object" } ```
closed
2024-06-04T10:32:59Z
2024-06-10T19:19:31Z
https://github.com/koxudaxi/datamodel-code-generator/issues/1990
[]
luca-knaack-webcom
0
influxdata/influxdb-client-python
jupyter
633
warnings returned when using example program on https://influxdb-client.readthedocs.io/en/latest/
### Specifications * Client Version: '1.39.0' * InfluxDB Version: v2.7.4 * Platform: 20.04.6 LTS ### Code sample to reproduce problem ```python ``` see https://influxdb-client.readthedocs.io/en/latest/ at "Getting Started" ### Expected behavior I expected no warnings ### Actual behavior /home/ubuntu/radioconda/lib/python3.10/site-packages/influxdb_client/client/flux_csv_parser.py:334: UserWarning: The response contains columns with duplicated names: '['', 'result', 'table', '_start', '_stop', '_time', '_value', '_field', '_measurement', 'org', 'result']'. You should use the 'record.row' to access your data instead of 'record.values' dictionary. warnings.warn(message, UserWarning) The response contains columns with duplicated names: '['', 'result', 'table', '_start', '_stop', '_time', '_value', '_field', '_measurement', 'org', 'result']'. You should use the 'record.row' to access your data instead of 'record.values' dictionary ### Additional info _No response_
closed
2024-01-19T14:55:43Z
2025-03-22T14:57:57Z
https://github.com/influxdata/influxdb-client-python/issues/633
[ "bug" ]
schaefer01
2
cupy/cupy
numpy
9,020
tf.experimental.dlpack.to_dlpack() becomes performance bottleneck
Hi, my code uses CuPy and TensorFlow, where CuPy is used for data preprocessing and postprocessing, and TensorFlow is responsible for loading models and inference. For the conversion between CuPy arrays and TF tensors, we use DLpack: https://docs.cupy.dev/en/stable/user_guide/interoperability.html#dlpack. However, it seems that `tf.experimental.dlpack.to_dlpack()` has become a performance bottleneck because I need to convert the inference result (TF tensor) into a CuPy array for post-processing. And a single conversion takes about **0.13 seconds** but converting from DLpack to TF tensor (`tf.experimental.dlpack.from_dlpack()`) takes **less than 0.01 seconds**. The entire calculation process needs to be carried out in a loop, which usually requires hundreds of iterations, so the data conversion time will be magnified. So I want to know how to reduce the time taken by `tf.experimental.dlpack.to_dlpack()` or data conversion process if there are any suggestions. Thanks in advance :)
open
2025-03-06T09:37:25Z
2025-03-10T13:00:34Z
https://github.com/cupy/cupy/issues/9020
[ "issue-checked" ]
JiahuaZhao
3
microsoft/nlp-recipes
nlp
44
[Scenario] Engineering Work
closed
2019-05-07T06:37:32Z
2019-07-11T05:18:06Z
https://github.com/microsoft/nlp-recipes/issues/44
[ "engineering" ]
nikhilrj
0
django-import-export/django-import-export
django
1,885
`Resource.get_fields()` no longer called outside of tests
**Describe the bug** #1849 removed the last in-package call to `get_fields()`, which even seemed to be left accidentally in #1626. All remaining usage is within tests: ``` $ rg get_fields import_export/resources.py 142: def get_fields(self, **kwargs): tests/core/tests/test_model_resource_fields_generate_widgets.py 35: expected_has_default_widget = self._get_fields_with_expected_default_widget() 78: def _get_fields_with_expected_default_widget(self): tests/core/tests/test_resources/test_modelresource/test_relationship.py 45: full_title = resource.export_field(resource.get_fields()[0], self.book) 67: full_title = resource.export_field(resource.get_fields()[0], self.book) tests/core/tests/test_resources/test_import_export.py 202: field.name for field in self.resource._meta.model._meta.get_fields() tests/core/tests/test_resources/test_modelresource/test_resource_transactions.py 26: fields = resource.get_fields() ``` Perhaps it can be removed? Maybe it also deserves a mention in the v4 release notes, or at least the changelog. I found this method was unused because I have a resource class with an overridden `get_fields()` that was no longer being called.
closed
2024-06-21T15:30:47Z
2024-06-21T16:48:13Z
https://github.com/django-import-export/django-import-export/issues/1885
[ "bug" ]
adamchainz
1
pydata/xarray
numpy
9,869
Changes to assignment warning in sel/isel methods
### What is your issue? I have a few suggestions on the warning against value assignment in the [DataArray.sel documentation](https://docs.xarray.dev/en/latest/generated/xarray.DataArray.sel.html#xarray.DataArray.sel). I wanted to get a maintainer's opinion on if these changes are helpful before I submit a PR. **1. Similar functions should have this warning** DataArray.isel, Dataset.sel and Dataset.isel also can't be used for value assignment and should have a similar warning on their documentation page. **2. Warning should be specific to the function** Each warning should only warn against using a specific function. Currently the warning on DataArray.sel warns against using either DataArray.sel or DataArray.isel. The code example in the warning should use a specific function. The example in the DataArray.sel warning currently uses the isel method. **3. Warning should tell user how to properly assign values** The warning currently tells the user not to use isel/sel for value assignment but doesn't direct the user towards an alternative. The code example currently tells the user what not to do. The example can be extended by a line to show the correct way (i.e. direct/loc indexing)
open
2024-12-10T01:04:30Z
2024-12-10T02:02:49Z
https://github.com/pydata/xarray/issues/9869
[ "topic-documentation" ]
JanukanS
1
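To make the xarray suggestion above concrete, point 3 could show the assignment that does work: `.sel`/`.isel` return copies, while `.loc` and positional indexing assign in place. A short sketch:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.zeros(3), dims="x", coords={"x": [10, 20, 30]})

da.sel(x=20)  # selection only; assigning to this copy has no effect on da

# Label-based assignment that actually mutates the array:
da.loc[{"x": 20}] = 1.0

# Positional assignment:
da[0] = 2.0

print(da.values)  # [2. 1. 0.]
```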
mlfoundations/open_clip
computer-vision
37
Can CLIP be trained on Windows?
Hi, Thanks for the tremendous effort! Is it possible to set up this training code, for fine-tuning CLIP on a custom dataset, on a Windows 10 machine?
closed
2021-12-31T14:31:26Z
2022-01-03T07:53:53Z
https://github.com/mlfoundations/open_clip/issues/37
[]
romesa-khan
2
nok/sklearn-porter
scikit-learn
5
Single feature RandomForestClassifier throws index out of range exception
I've built a very simple single feature RandomForestClassifier: ``` from sklearn.ensemble import RandomForestClassifier import numpy as np from sklearn_porter import Porter rf = RandomForestClassifier() features = [[i] for i in xrange(0, 10)] labels = [i > 5 for i in xrange(0, 10)] rf.fit(features, labels) for feature in xrange(-20, 20): print feature, '->', rf.predict(np.array([feature]).reshape(1, -1)) result = Porter(language='java').port(rf) print result ``` which gives the following stack trace: ``` Traceback (most recent call last): File "generateModel.py", line 21, in <module> result = Porter(language='java').port(rf) File "/usr/local/lib/python2.7/dist-packages/sklearn_porter/__init__.py", line 72, in port ported_model = instance.port(model) File "/usr/local/lib/python2.7/dist-packages/sklearn_porter/classifier/RandomForestClassifier/__init__.py", line 84, in port return self.predict() File "/usr/local/lib/python2.7/dist-packages/sklearn_porter/classifier/RandomForestClassifier/__init__.py", line 95, in predict return self.create_class(self.create_method()) File "/usr/local/lib/python2.7/dist-packages/sklearn_porter/classifier/RandomForestClassifier/__init__.py", line 198, in create_method tree = self.create_single_method(idx, model) File "/usr/local/lib/python2.7/dist-packages/sklearn_porter/classifier/RandomForestClassifier/__init__.py", line 162, in create_single_method indices.append([str(j) for j in range(model.n_features_)][i]) IndexError: list index out of range ``` The line in question involves indexing into the feature vector, but sometimes the index is negative, which is fine except when it wraps around the list twice. In this case, `model.n_features_` is 1 but `i` (the index) is -2, giving the list out of range exception. What is the best solution for this? Would simply taking the modulus of the index by the length of list be correct? Thanks!
closed
2017-01-23T20:02:48Z
2017-01-29T04:39:13Z
https://github.com/nok/sklearn-porter/issues/5
[ "bug" ]
lichard49
3
python-arq/arq
asyncio
262
Question: Order of jobs
On our system we noticed the following behaviour: when a lot of jobs are created in a short period of time (a few hundred) and they are relatively long-running, jobs that were added last are executed sooner than ones that were added first. That made us think that **arq** uses a LIFO algorithm. Is this true? And if so, is it possible to make it work as FIFO?
closed
2021-08-23T09:48:02Z
2021-08-25T11:43:06Z
https://github.com/python-arq/arq/issues/262
[]
Minstel
3
dnouri/nolearn
scikit-learn
7
train_test_split does not work for Pandas Series with non Dense Index
If using a Pandas Series where the Index values are not dense, then `train_test_split` will select index values that do not exist. This is because [this line](https://github.com/dnouri/nolearn/blob/a9ac3e7310a7208ddf178c95dd7ac9cedf3e8df1/nolearn/lasagne.py#L257):
```
train_indices, valid_indices = iter(kf).next()
```
returns row numbers, but the indexing is then done with index values.
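A small illustration of the mismatch, with a toy Series whose integer index is sparse; positional selection via `.iloc` is what the row numbers from KFold actually require:

```python
import pandas as pd

y = pd.Series([0, 1, 0, 1], index=[10, 20, 30, 40])  # non-dense integer index

train_rows = [0, 2]           # row numbers, as returned by KFold
# y[train_rows]               # label-based lookup: no labels 0 or 2 -> fails
y_train = y.iloc[train_rows]  # positional lookup: selects the 1st and 3rd rows
```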
closed
2014-12-26T21:01:58Z
2015-02-20T12:09:53Z
https://github.com/dnouri/nolearn/issues/7
[]
cancan101
3
graphql-python/graphene-django
graphql
625
Generic relationship type
I've used django quite a bit, and am hoping to check out graphene-django as an introduction to using Graphene. The graph I'm starting with is based on schema.org, which means that a relationship can be of one or more types. I looked quickly through some of the example models but I didn't see how to easily do this. Does graphene-django have some integration / something that looks like the [django generic relation](https://docs.djangoproject.com/en/2.2/ref/contrib/contenttypes/#generic-relations)? Many thanks!
closed
2019-05-05T17:21:44Z
2019-05-06T23:48:49Z
https://github.com/graphql-python/graphene-django/issues/625
[]
vsoch
3
deeppavlov/DeepPavlov
nlp
1,219
Is there any chance you are going to upgrade to tensorflow==2.2.0?
Is there any chance you are going to upgrade to tensorflow==2.2.0?
closed
2020-05-15T20:50:09Z
2020-05-17T18:21:28Z
https://github.com/deeppavlov/DeepPavlov/issues/1219
[]
shyamalschandra
1
cvat-ai/cvat
pytorch
9,211
Migration of backend data to external storage services (e.g. AWS S3)
Hello everyone! We are currently using CVAT `v2.7.4` in Kubernetes. In this installation, all the backend server pods share a single PV with `AccessMode: ReadWriteMany`. This is a very specific type of volume in our infrastructure, and we try not to use it unless absolutely necessary. I would like to clarify two points: - is there a way to replace this PV with an external storage service (for example, AWS S3)? - if so, how can I migrate the data from the PV to something like S3? I would be very grateful for any recommendations.
closed
2025-03-14T11:20:54Z
2025-03-17T19:10:16Z
https://github.com/cvat-ai/cvat/issues/9211
[]
obervinov
2
polarsource/polar
fastapi
4,646
When a User account is created, link Customers with the same email
closed
2024-12-16T09:38:46Z
2024-12-16T16:40:23Z
https://github.com/polarsource/polar/issues/4646
[]
frankie567
0
horovod/horovod
deep-learning
3,284
What's the difference between tf.distribute.experimental.MultiWorkerMirroredStrategy and horovod ?
Sorry, I have a question to ask... I have tried tf.distribute.experimental.MultiWorkerMirroredStrategy on one subnetwork with 2 machines, each with 2 GPU cards, to train an ALBERT text classification model, but found that training is slow. The training communication goes over gRPC. If I use Horovod for distributed training, will it be faster than tf.distribute.experimental.MultiWorkerMirroredStrategy in raw TensorFlow? Does Horovod do more optimization than native TensorFlow? Thanks.
closed
2021-11-22T11:39:28Z
2021-11-22T12:47:57Z
https://github.com/horovod/horovod/issues/3284
[]
Yazooliu
0
automl/auto-sklearn
scikit-learn
861
Custom classifier metafeature calculation breaks when going from 0.6 to 0.7
## Describe the bug ## When going from version 0.6 to 0.7, the metafeature calculation for this particular custom classifier is broken. I also have an XGBoost classifier, which is not affected. ## To Reproduce ## The following classifier works on version 0.6, but in using it for metafeature calculation in 0.7 throws an error: ``` import torch from torch.utils.data import DataLoader, Dataset from torch.optim import Adam import torch.nn.functional as F import torch.nn as nn from ConfigSpace.configuration_space import ConfigurationSpace from ConfigSpace.hyperparameters import UniformFloatHyperparameter, \ UniformIntegerHyperparameter, CategoricalHyperparameter from autosklearn.pipeline.constants import DENSE, SIGNED_DATA, \ UNSIGNED_DATA, PREDICTIONS from autosklearn.pipeline.components.base \ import AutoSklearnClassificationAlgorithm import numpy as np class ClassNNWrapper(AutoSklearnClassificationAlgorithm): class ClassNet(nn.Module): def __init__( self, input_size, output_size, hidden_dim, dropout_prob, activation, layers=7): super().__init__() self.input_size = input_size self.output_size = output_size self.hidden_dim = hidden_dim self.dropout_prob = dropout_prob self.layers = layers self.activation = activation self.init_model() def init_model(self): self.dense = [] self.batch_norm = [] self.dropout = [] #self.first_batch_norm = nn.BatchNorm1d(self.input_size, self.input_size) self.first_dense = nn.Linear(self.input_size, self.hidden_dim) self.last_dense = nn.Linear(self.hidden_dim, self.output_size) for i in range(self.layers): self.dense.append(nn.Linear(self.hidden_dim, self.hidden_dim)) self.batch_norm.append(nn.BatchNorm1d(self.hidden_dim, self.hidden_dim)) self.dropout.append(nn.Dropout(self.dropout_prob)) self.dense = nn.ModuleList(self.dense) self.batch_norm = nn.ModuleList(self.batch_norm) self.dropout = nn.ModuleList(self.dropout) def forward(self, x): #x = self.first_batch_norm(x) x = self.activation(self.first_dense(x)) for dense, bn, drop in zip(self.dense, self.batch_norm, self.dropout): x = self.activation(dense(x)) x = bn(x) x = drop(x) x = F.softmax(self.last_dense(x)) return x class AutoSkDataset(Dataset): def __init__(self, X, Y): super().__init__() self.X = X self.Y = Y def __len__(self): return len(self.X) def __getitem__(self, ixs): if torch.is_tensor(ixs): ixs = ixs.tolist() Y_ret = self.Y[ixs, :].ravel() if self.Y.ndim > 1 else self.Y[ixs] return self.X[ixs, :], Y_ret def __init__(self, hidden_dim_factor, dropout_prob, activation, epochs, layers=7, random_state=None): torch.backends.cudnn.enabled = False self.hidden_dim_factor = hidden_dim_factor self.dropout_prob = dropout_prob self.activation = activation self.epochs = epochs self.layers = layers if random_state is not None: torch.manual_seed(random_state.randint(0, 50000)) def fit(self, X, Y, sample_weight=None): self.input_size = X.shape[1] self.output_size = len(np.unique(Y.ravel())) self.device = torch.device('cuda' if torch.cuda.is_available() else "cpu") self.model = self.ClassNet( self.input_size, self.output_size, max(1, int(self.input_size * self.hidden_dim_factor)), self.dropout_prob, self.activation, self.layers ).to(self.device).float() self.data = DataLoader( self.AutoSkDataset(X, Y), batch_size=128, shuffle=True ) self.param_optim = Adam(self.model.parameters()) self.criterion = nn.CrossEntropyLoss().to(self.device) self._fit() return self def _fit(self): self.model.train() for i in range(self.epochs): for x, y in self.data: self.param_optim.zero_grad() x = x.to(self.device).float() y = 
y.to(self.device).long() #x.to(self.device) #y.to(self.device) out = self.model(x) loss = self.criterion(out, y) loss.backward() self.param_optim.step() def predict(self, X): with torch.no_grad(): self.model.eval() pred = np.argmax(self.model(torch.Tensor(X).to(self.device).float()).cpu().numpy(), axis=1) return pred def predict_proba(self, X): with torch.no_grad(): self.model.eval() pred = self.model(torch.Tensor(X).to(self.device).float()).cpu().numpy() return pred @staticmethod def get_properties(dataset_properties=None): return { 'shortname': 'ClassNN', 'name': 'Neural Network Classifier', 'handles_regression': False, 'handles_classification': True, 'handles_multiclass': True, 'handles_multilabel': False, 'is_deterministic': True, 'input': (DENSE, SIGNED_DATA), 'output': (PREDICTIONS,) } @staticmethod def get_hyperparameter_search_space(dataset_properties=None): cs = ConfigurationSpace() hidden_dim_factor = UniformFloatHyperparameter( name='hidden_dim_factor', lower=0.5, upper=1.2, default_value=1 ) dropout_prob = UniformFloatHyperparameter( name='dropout_prob', lower=0.01, upper=0.5, default_value=0.05 ) activation = CategoricalHyperparameter( name='activation', choices=[F.leaky_relu, F.tanh] ) layers = UniformIntegerHyperparameter( 'layers', 3, 8, default_value=5) epochs = UniformIntegerHyperparameter( 'epochs', 20, 100, default_value=50) cs.add_hyperparameters([hidden_dim_factor, dropout_prob, activation, layers, epochs]) return cs ``` ## Expected behavior ## The classifier should also be able to be used for metafeature calculation in version 0.7. ## Actual behavior, stacktrace or logfile ## The classifier is ignored in the metafeature calculation. When all classifiers except this one are removed, all datasets fail with an error. This did not happen in version 0.6 (also in the case where this classifier is the only one available to auto-sklearn). 
``` Traceback (most recent call last): File "/home/ubuntu/autosk_clf/scripts/scripts/run_auto-sklearn_for_metadata_generation.py", line 74, in <module> feat_type=cat) File "/home/ubuntu/.local/lib/python3.6/site-packages/autosklearn/estimators.py", line 681, in fit dataset_name=dataset_name, File "/home/ubuntu/.local/lib/python3.6/site-packages/autosklearn/estimators.py", line 351, in fit self._automl[0].fit(**kwargs) File "/home/ubuntu/.local/lib/python3.6/site-packages/autosklearn/automl.py", line 1026, in fit load_models=load_models, File "/home/ubuntu/.local/lib/python3.6/site-packages/autosklearn/automl.py", line 213, in fit only_return_configuration_space=only_return_configuration_space, File "/home/ubuntu/.local/lib/python3.6/site-packages/autosklearn/automl.py", line 413, in _fit exclude_preprocessors=self._exclude_preprocessors) File "/home/ubuntu/.local/lib/python3.6/site-packages/autosklearn/automl.py", line 892, in _create_search_space exclude_preprocessors=exclude_preprocessors) File "/home/ubuntu/.local/lib/python3.6/site-packages/autosklearn/util/pipeline.py", line 52, in get_configuration_space return _get_classification_configuration_space(info, include, exclude) File "/home/ubuntu/.local/lib/python3.6/site-packages/autosklearn/util/pipeline.py", line 94, in _get_classification_configuration_space include=include, exclude=exclude).\ File "/home/ubuntu/.local/lib/python3.6/site-packages/autosklearn/pipeline/classification.py", line 77, in __init__ random_state, init_params) File "/home/ubuntu/.local/lib/python3.6/site-packages/autosklearn/pipeline/base.py", line 35, in __init__ self.config_space = self.get_hyperparameter_search_space() File "/home/ubuntu/.local/lib/python3.6/site-packages/autosklearn/pipeline/base.py", line 224, in get_hyperparameter_search_space dataset_properties=self.dataset_properties) File "/home/ubuntu/.local/lib/python3.6/site-packages/autosklearn/pipeline/classification.py", line 172, in _get_hyperparameter_search_space exclude=exclude, include=include, pipeline=self.steps) File "/home/ubuntu/.local/lib/python3.6/site-packages/autosklearn/pipeline/base.py", line 306, in _get_base_search_space assert np.sum(matches) != 0, "No valid pipeline found." AssertionError: No valid pipeline found. ``` ## Environment and installation: ## Please give details about your installation: OS: Ubuntu 18.04 Installation is on a VM without a venv. Python version: 3.6 Auto-sklearn version: 0.7
closed
2020-05-26T16:45:28Z
2020-07-02T11:49:40Z
https://github.com/automl/auto-sklearn/issues/861
[]
RitterHannah
5
ludwig-ai/ludwig
computer-vision
3,583
Out of Memory Error Running llama2_7b_finetuning_4bit Example
**Describe the bug** Running the "Llama2-7b Fine-Tuning with 4bit Quantization" example notebook in Colab fails with the following error: `OutOfMemoryError: CUDA out of memory. Tried to allocate 86.00 MiB (GPU 0; 14.75 GiB total capacity; 13.37 GiB already allocated; 80.81 MiB free; 13.79 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF` I am new to Ludwig and AI in general so this may be user error. **To Reproduce** Steps to reproduce the behavior: 1. Open the notebook in Colab - https://colab.research.google.com/drive/1c3AO8l_H6V_x37RwQ8V7M6A-RmcBf2tG?usp=sharing 2. Add my HUGGING_FACE_HUB_TOKEN to code frame 2 3. Run the notebook 4. Run ends in error ``` INFO:ludwig.trainers.trainer:Starting with step 0, epoch: 0 Training: 18%|█▊ | 1935/10920 [17:45<1:13:32, 2.04it/s, loss=0.081] --------------------------------------------------------------------------- OutOfMemoryError Traceback (most recent call last) <ipython-input-4-f24794477812> in <cell line: 6>() 4 5 model = LudwigModel(config=config, logging_level=logging.INFO) ----> 6 results = model.train(dataset="ludwig://alpaca") 7 print(results) 27 frames /usr/local/lib/python3.10/dist-packages/bitsandbytes/functional.py in dequantize_4bit(A, quant_state, absmax, out, blocksize, quant_type) 906 907 if out is None: --> 908 out = torch.empty(shape, dtype=dtype, device=A.device) 909 910 n = out.numel() OutOfMemoryError: CUDA out of memory. Tried to allocate 86.00 MiB (GPU 0; 14.75 GiB total capacity; 13.37 GiB already allocated; 80.81 MiB free; 13.79 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF ``` After 3 runs in new Colab sessions it consistently fails at the same point. ``` INFO:ludwig.trainers.trainer:Starting with step 0, epoch: 0 Training: 18%|█▊ | 1935/10920 ``` Running the [example notebook](https://ludwig.ai/latest/examples/llms/llm_finetuning/) out the box. No other changes were made to the notebook or config. ``` model_type: llm base_model: meta-llama/Llama-2-7b-hf quantization: bits: 4 adapter: type: lora prompt: template: | ### Instruction: {instruction} ### Input: {input} ### Response: input_features: - name: prompt type: text output_features: - name: output type: text trainer: type: finetune learning_rate: 0.0001 batch_size: 1 gradient_accumulation_steps: 16 epochs: 3 learning_rate_scheduler: warmup_fraction: 0.01 preprocessing: sample_ratio: 0.1 ``` **Expected behavior** After watching [video](https://www.youtube.com/watch?v=g68qlo9Izf0) by Ludwig team that explained how to fine-tune Llama-2-7b on a single T4 GPU, I tried running the [example notebook](https://github.com/ludwig-ai/ludwig/tree/master/examples/llama2_7b_finetuning_4bit) linked in the Ludwig GitHub repository and expected the model to be fine-tuned for instruction on the Alpaca dataset. *FYI, the notebook from the video worked fine for me, but it uses a smaller data set - https://colab.research.google.com/drive/1Ly01S--kUwkKQalE-75skalp-ftwl0fE?usp=sharing.* **Environment (please complete the following information):** - OS: Google Colab - Linux Ubuntu - Python version: 3.10 - Ludwig version: 0.8.2 **Additional context** Add any other context about the problem here. 
- Random seed: 42 - Dataset: ludwig://alpaca - Data format: ludwig - Torch version: 2.0.1+cu118 - Compute: - GPU Type: Tesla T4 - GPUs per node: 1 - Number of nodes: 1 - System RAM: 51.0 GB - GPU RAM: 15.0 GB - Disk: 166.8 GB
closed
2023-09-03T23:26:57Z
2023-10-23T23:17:58Z
https://github.com/ludwig-ai/ludwig/issues/3583
[ "bug" ]
charleslbryant
6
mwaskom/seaborn
data-science
2,916
FacetGrid: when a hue value doesn't appear in all facets, the legend appears to be misannotated
**Problem:** The same color is assigned to two different values in the legend.

**The code to reproduce:**
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

sna = pd.DataFrame(
    {
        'x': np.tile(range(10), 5),
        'y': list(range(10)) + list(range(2, 12)) + list(range(4, 14)) + list(range(6, 16)) + list(range(8, 18)),
        'id_for_hue': ['foo']*10 + ['bar']*10 + ['baz']*10 + ['bar']*10 + ['baz']*10,
        'id_for_graph': ['a']*30 + ['b']*20
    }
)

g = sns.FacetGrid(sna, col='id_for_graph')
g.map_dataframe(
    sns.lineplot,
    x='x',
    y='y',
    hue='id_for_hue'
)
g.add_legend()
```

**Output in jupyterlab:**
![image](https://user-images.githubusercontent.com/30849701/180108642-fe9c6345-c8d1-4cf6-a173-e37aada57984.png)

**Versions:**
seaborn: 0.11.2
matplotlib: 3.5.2
pandas: 1.3.4
numpy: 1.21.6
python: 3.7.6

Thanks for taking a look!
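This happens because each facet builds its own hue-to-color mapping from only the levels present in that facet, so colors can drift between subplots. A workaround sketch, passing an explicit hue_order so every call uses the same mapping (levels taken from the example above):

```python
hue_levels = sorted(sna['id_for_hue'].unique())

g = sns.FacetGrid(sna, col='id_for_graph')
g.map_dataframe(sns.lineplot, x='x', y='y',
                hue='id_for_hue', hue_order=hue_levels)
g.add_legend()
```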
closed
2022-07-21T01:23:54Z
2022-07-22T21:09:08Z
https://github.com/mwaskom/seaborn/issues/2916
[]
bengroves
4
gradio-app/gradio
data-visualization
9,971
[Gradio 5] Input text inside Slider component alignment and size
- [x] I have searched to see if a similar issue already exists.

**Is your feature request related to a problem? Please describe.**
I like that the input inside the Slider component is aligned to the right by default, but when info text is present, the input sometimes drops down to the left on the next line. It would be nice to have two additional attributes: one to specify the size of the input text and another to specify its alignment.

**Describe the solution you'd like**
Here is one solution I implemented using external CSS:

```python
import gradio as gr

css = """
.gradio-slider-input {
    input[type="number"] {
        width: 8em;
    }
}
.slider-input-right > .wrap > .head {
    display: flex;
}
.slider-input-right > .wrap > .head > .tab-like-container {
    margin-left: auto;
}
"""
#css=""  #uncomment this line to test without css

with gr.Blocks(css=css) as app:
    with gr.Row():
        with gr.Column():
            freeu_s1_scale = gr.Slider(
                elem_classes="gradio-slider-input slider-input-right",
                interactive=False,
                label="S1 scale",
                minimum=0.1,
                maximum=1.0,
                step=0.1,
                value=0.9,
                info="Scaling factor to prevent over-smoothing during image generation in stage 1 of the diffusion process"
            )
        with gr.Column():
            freeu_s2_scale = gr.Slider(
                elem_classes="gradio-slider-input slider-input-right",
                interactive=False,
                label="S2 scale",
                minimum=0.1,
                maximum=1.0,
                step=0.1,
                value=0.2,
                info="Scale factor to prevent over-smoothing during image generation in stage 2 of the diffusion process"
            )
        with gr.Column():
            freeu_b1_scale = gr.Slider(
                elem_classes="gradio-slider-input slider-input-right",
                interactive=False,
                label="B1 scale",
                minimum=1.0,
                maximum=1.2,
                step=0.1,
                value=1.2,
                info="Scaling factor for stage 1 to amplify the contributions of backbone features"
            )
        with gr.Column():
            freeu_b2_scale = gr.Slider(
                elem_classes="gradio-slider-input slider-input-right",
                interactive=False,
                label="B2 scale",
                minimum=1.2,
                maximum=1.6,
                step=0.1,
                value=1.4,
                info="Scaling factor for stage 2 to amplify the contributions of backbone features"
            )

app.launch(inbrowser=True)
```

**Screenshots**
**Default (without CSS)**
![image](https://github.com/user-attachments/assets/0eebcb0c-465b-4125-a617-2f0997159c5e)
**Aligned to the right with the input text size changed**
![image](https://github.com/user-attachments/assets/f76ccbe3-1f76-4115-a0eb-5e1502f01930)
open
2024-11-16T18:06:19Z
2024-11-20T17:09:01Z
https://github.com/gradio-app/gradio/issues/9971
[ "bug" ]
elismasilva
0
bmoscon/cryptofeed
asyncio
58
update kraken to use new websocket API
https://www.kraken.com/en-us/features/websocket-api
closed
2019-02-02T22:47:11Z
2019-02-03T21:25:37Z
https://github.com/bmoscon/cryptofeed/issues/58
[]
bmoscon
1
plotly/dash-table
dash
73
Selections not saving for dropdown column
If I try to select a new weather type, it doesn't save for me: ![weather](https://user-images.githubusercontent.com/3968590/45365922-934afb80-b5ab-11e8-870d-8613f8417345.gif)
closed
2018-09-11T14:15:16Z
2018-09-12T17:01:07Z
https://github.com/plotly/dash-table/issues/73
[ "dash-type-bug" ]
charleyferrari
6
serengil/deepface
deep-learning
918
load_image usage in represent
In the represent function, where load_image is called, it returns the img object and the img name, but this is not handled here: https://github.com/serengil/deepface/blob/master/deepface/DeepFace.py#L706 Also, region should be a dict instead of a list: https://github.com/serengil/deepface/blob/master/deepface/DeepFace.py#L721
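A sketch of the two adjustments being described, assuming load_image keeps its (image, name) return value (the call site is paraphrased from the linked lines, so the surrounding details are assumptions):

```python
# Hypothetical patch around DeepFace.represent()'s call site
img, img_name = functions.load_image(img_path)  # unpack both return values

# region as a dict (x/y/w/h), matching the detectors' output, instead of a list
img_region = {"x": 0, "y": 0, "w": img.shape[1], "h": img.shape[0]}
```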
closed
2023-12-14T10:08:45Z
2023-12-14T14:04:29Z
https://github.com/serengil/deepface/issues/918
[ "bug" ]
serengil
1
Johnserf-Seed/TikTokDownload
api
435
[BUG] Crash when running multiple windows
Error ①:
![image](https://github.com/Johnserf-Seed/TikTokDownload/assets/52158736/c9a73fdf-73a4-44c5-8115-be0b7fb9bf5c)

```
Traceback (most recent call last):
  File "C:\Users\admin\Desktop\tk\TikTokDownload-main\Util\Download.py", line 71, in VideoDownload
    f'aweme_id={self.aweme_id[i]}&aid=6383&cookie_enabled=true&platform=PC&downlink=10').params
    ~~~~~~~~~~~~~^^^
IndexError: list index out of range

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\admin\Desktop\tk\TikTokDownload-main\TikTokTool.py", line 32, in <module>
    profile.getProfile(cmd.setting())
  File "C:\Users\admin\Desktop\tk\TikTokDownload-main\Util\Profile.py", line 130, in getProfile
    self.getData(self.api_post_url)
  File "C:\Users\admin\Desktop\tk\TikTokDownload-main\Util\Profile.py", line 180, in getData
    self.getVideoInfo(result)
  File "C:\Users\admin\Desktop\tk\TikTokDownload-main\Util\Profile.py", line 299, in getVideoInfo
    Util.Download().VideoDownload(self)
  File "C:\Users\admin\Desktop\tk\TikTokDownload-main\Util\Download.py", line 84, in VideoDownload
    self.aweme_id[i])
    ~~~~~~~~~~~~~^^^
IndexError: list index out of range
```

Error ②:
![image](https://github.com/Johnserf-Seed/TikTokDownload/assets/52158736/aef77268-c984-44de-9f1d-8e8004fef48e)

```
Traceback (most recent call last):
  File "C:\Users\admin\Desktop\tk\TikTokDownload-main\TikTokTool.py", line 32, in <module>
    profile.getProfile(cmd.setting())
  File "C:\Users\admin\Desktop\tk\TikTokDownload-main\Util\Profile.py", line 130, in getProfile
    self.getData(self.api_post_url)
  File "C:\Users\admin\Desktop\tk\TikTokDownload-main\Util\Profile.py", line 180, in getData
    self.getVideoInfo(result)
  File "C:\Users\admin\Desktop\tk\TikTokDownload-main\Util\Profile.py", line 301, in getVideoInfo
    self.getNextData()
  File "C:\Users\admin\Desktop\tk\TikTokDownload-main\Util\Profile.py", line 231, in getNextData
    self.getVideoInfo(result)
  File "C:\Users\admin\Desktop\tk\TikTokDownload-main\Util\Profile.py", line 301, in getVideoInfo
    self.getNextData()
  File "C:\Users\admin\Desktop\tk\TikTokDownload-main\Util\Profile.py", line 231, in getNextData
    self.getVideoInfo(result)
  File "C:\Users\admin\Desktop\tk\TikTokDownload-main\Util\Profile.py", line 301, in getVideoInfo
    self.getNextData()
  File "C:\Users\admin\Desktop\tk\TikTokDownload-main\Util\Profile.py", line 231, in getNextData
    self.getVideoInfo(result)
  File "C:\Users\admin\Desktop\tk\TikTokDownload-main\Util\Profile.py", line 301, in getVideoInfo
    self.getNextData()
  File "C:\Users\admin\Desktop\tk\TikTokDownload-main\Util\Profile.py", line 231, in getNextData
    self.getVideoInfo(result)
  File "C:\Users\admin\Desktop\tk\TikTokDownload-main\Util\Profile.py", line 298, in getVideoInfo
    datas = Util.Images(self.headers).get_all_images(self.image_list)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\admin\Desktop\tk\TikTokDownload-main\Util\Images.py", line 54, in get_all_images
    js = Util.json.loads(r)
         ^^^^^^^^^^^^^^^^^^
  File "C:\Users\admin\AppData\Local\Programs\Python\Python311\Lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\admin\AppData\Local\Programs\Python\Python311\Lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\admin\AppData\Local\Programs\Python\Python311\Lib\json\decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```
open
2023-05-21T06:40:45Z
2023-05-21T06:40:45Z
https://github.com/Johnserf-Seed/TikTokDownload/issues/435
[ "故障(bug)", "额外求助(help wanted)", "无效(invalid)" ]
summer990
0
mirumee/ariadne
graphql
77
Write release notes for 0.2
We should have release notes on our docs, that would provide information about new things and how to update from previous versions.
closed
2018-12-12T10:44:38Z
2019-01-07T16:33:55Z
https://github.com/mirumee/ariadne/issues/77
[ "docs" ]
rafalp
0
cleanlab/cleanlab
data-science
639
Fix label issues when model does not predict some class ever
Some users have reported bugs in two situations. Investigate these to verify the bugs exist and then address any that come up.

1) When there exists some class k such that the model never predicts class k for any of the examples in the dataset.
2) When there exists some class k such that pred_probs[:,k] == 0 for all examples whose given label is k. Then confident_thresholds[k] == 0, which can lead to abnormalities.

One simple fix for case 2: if `pred_probs[:,k] == 0` for all examples, just delete this column from pred_probs. Subsequently, simply delete all examples with label k from the dataset before finding label issues, keeping track of how indices in the reduced dataset correspond to indices in the original dataset.
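A sketch of the simple fix described for case 2, assuming plain numpy arrays of pred_probs and integer labels (the index bookkeeping is shown explicitly, since results must be mapped back to the original dataset):

```python
import numpy as np

def drop_never_predicted_class(pred_probs, labels, k):
    """Remove class k's column and its examples before finding label issues."""
    keep_mask = labels != k                       # examples whose given label isn't k
    original_indices = np.flatnonzero(keep_mask)  # map reduced -> original positions
    reduced_pred_probs = np.delete(pred_probs[keep_mask], k, axis=1)
    reduced_labels = labels[keep_mask]
    # shift labels above k down by one so they match the reduced columns
    reduced_labels = np.where(reduced_labels > k, reduced_labels - 1, reduced_labels)
    return reduced_pred_probs, reduced_labels, original_indices
```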
closed
2023-02-19T22:46:18Z
2023-03-13T22:03:13Z
https://github.com/cleanlab/cleanlab/issues/639
[ "bug" ]
jwmueller
2
ipython/ipython
jupyter
14,025
8.12.0: Magic %run isn't stored in history
This is similar to issue #13688 but for %run. Basically, this line is no longer stored in history:
```python
%run path/to/script.py
```
This only happens when there is a path. I also noticed that %history returns empty, even though the up arrow brings up older commands (all but %run).
open
2023-04-21T21:19:46Z
2023-05-17T17:26:42Z
https://github.com/ipython/ipython/issues/14025
[]
dashesy
1
modoboa/modoboa
django
3,460
DMARC reports for alias domain are not added to domain report
# Impacted versions * OS Type: Debian * OS Version: Bookworm (12) * Database Type: PostgreSQL * Database version: 15.10 * Modoboa: 2.2.4 * installer used: No * Webserver: Nginx # Steps to reproduce * Have a working domain setup with SPF, DKIM, and DMARC * Have the DMARC transport rule set up to receive and parse DMARC reports * Have an alias domain setup for the primary domain, with SPF, DKIM, and DMARC set up for this domain as well * Wait for DMARC reports to come in for mails purportedly sent from one of the _alias_ domains # Current behavior The DMARC report for the alias domain is received and parsed by modoboa, the record can be seen in the `dmarc_report` table in the database, correctly attributed to the alias domain in the `policy_domain` field. These reports however don't seem to be taken into account in the DMARC report pages of the web interface - only reports for the actual domain are shown here. # Expected behavior Reports for the main domain as well as its aliases are both taken into account when showing the DMARC report stats in the web interface.
open
2025-02-14T16:24:32Z
2025-02-18T07:58:52Z
https://github.com/modoboa/modoboa/issues/3460
[ "enhancement" ]
PatTheMav
0
PaddlePaddle/ERNIE
nlp
584
infer_classifyer.py produces incorrect results
When using infer_classifyer.py to load the model for prediction, the results are different every time. Paddle version is 1.8. The code comes from the repro branch.
closed
2020-11-02T10:50:31Z
2021-02-14T02:51:13Z
https://github.com/PaddlePaddle/ERNIE/issues/584
[ "wontfix" ]
gromzhu
3
AutoGPTQ/AutoGPTQ
nlp
81
Issue with positional params with `BaseGPTQForCausalLM.forward()`
Hi I am trying my new perplexity calculation on AutoGPTQ for the first time, and hitting this error: ``` │ /workspace/TB_ppl/auto_gptq/eval_tasks/perplexity.py:88 in run │ │ │ │ 85 │ │ │ │ │ tokens[0][batch_start] = self._tokenizer.bos_token_id │ │ 86 │ │ │ │ │ │ 87 │ │ │ │ with torch.no_grad(): │ │ ❱ 88 │ │ │ │ │ outputs = self._model(tokens[:, batch_start:batch_start+batch_size]) │ │ 89 │ │ │ │ │ batch_logits = outputs.logits.float() │ │ 90 │ │ │ │ │ │ 91 │ │ │ │ tokens[0][batch_start] = token_org │ │ │ │ /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in _call_impl │ │ │ │ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks │ │ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hooks │ │ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │ │ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │ │ 1502 │ │ # Do not call functions when jit is used │ │ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │ │ 1504 │ │ backward_pre_hooks = [] │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ TypeError: BaseGPTQForCausalLM.forward() takes 1 positional argument but 2 were given ``` This is the issue: `outputs = self._model(tokens)` Looking at `modeling/_base.py` all that's needed to get this call working is: ``` def forward(self, *args, **kwargs): return self.model(*args, **kwargs) ``` Is that OK to PR? Just want to check there's not going to be any side effects I've not thought of.
closed
2023-05-15T08:58:31Z
2023-05-16T11:21:36Z
https://github.com/AutoGPTQ/AutoGPTQ/issues/81
[]
TheBloke
1
huggingface/transformers
machine-learning
36,045
Optimization -OO crashes docstring handling
### System Info transformers = 4.48.1 Python = 3.12 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Use the `pipeline_flux.calculate_shift` function with` python -OO` Code runs fine with no Python interpreter optimization but crashes with `-OO` with: ``` File "/home/me/Documents/repo/train.py", line 25, in <module> from diffusers.pipelines.flux.pipeline_flux import calculate_shift File "/home/me/Documents/repo/.venv/lib/python3.12/site-packages/diffusers/pipelines/flux/pipeline_flux.py", line 20, in <module> from transformers import ( File "<frozen importlib._bootstrap>", line 1412, in _handle_fromlist File "/home/me/Documents/repo/.venv/lib/python3.12/site-packages/transformers/utils/import_utils.py", line 1806, in __getattr__ value = getattr(module, name) ^^^^^^^^^^^^^^^^^^^^^ File "/home/me/Documents/repo/.venv/lib/python3.12/site-packages/transformers/utils/import_utils.py", line 1805, in __getattr__ module = self._get_module(self._class_to_module[name]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/me/Documents/repo/.venv/lib/python3.12/site-packages/transformers/utils/import_utils.py", line 1819, in _get_module raise RuntimeError( RuntimeError: Failed to import transformers.models.clip.modeling_clip because of the following error (look up to see its traceback): 'NoneType' object has no attribute 'split' ``` This is related to the removal of docstrings with `-OO`. ### Expected behavior I'd expect no crash. I assume `lines = func_doc.split("\n")` could be replaced with: `lines = func_doc.split("\n") if func_doc else []`
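A sketch of the proposed guard in isolation (the real helper lives in transformers' doc utilities; this standalone function just shows why the fix works):

```python
def docstring_lines(func):
    """Split a function's docstring into lines, tolerating `python -OO`."""
    func_doc = func.__doc__  # None when docstrings are stripped by -OO
    return func_doc.split("\n") if func_doc else []
```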
closed
2025-02-05T11:31:43Z
2025-02-06T15:31:24Z
https://github.com/huggingface/transformers/issues/36045
[ "bug" ]
donthomasitos
3
httpie/http-prompt
api
139
Allow to register an initialisation process
Usually the session has an expiration, and stored cookies stay available only for a restricted period of time. But the process to access a protected resource is often the same:
- post some data (user & password) to a form to get a valid session
- play as you like

So the request is the following:
> allow registering some instructions that execute automatically at start

I know: the best approach would be to get an API key and provide it with each request. http-prompt already supports that very well, but this is not always real life 😞
open
2018-01-05T11:52:45Z
2018-01-06T14:26:32Z
https://github.com/httpie/http-prompt/issues/139
[ "enhancement" ]
lgnap
0
onnx/onnx
machine-learning
6,646
bfloat16 to numpy conversion ignores raw_data
# Bug Report

### Description
When a bfloat16 tensor is converted into a `numpy.ndarray`, its `raw_data` contents are not loaded.

### System information
Ubuntu 22.04.3
ONNX version: 1.16.1, 1.17.0
Python version: 3.10.12
PyTorch version: 2.5.1

### Reproduction instructions
A program to generate the model (attached in [test_bfloat16.onnx.gz](https://github.com/user-attachments/files/18478099/test_bfloat16.onnx.gz)):
```
import torch

class TestModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.a = torch.tensor([[0.25, 0.5], [1, 2]], dtype=torch.bfloat16)

    def forward(self, inp):
        return self.a + inp

inp = torch.zeros([2, 2], dtype=torch.bfloat16)
torch.onnx.export(TestModel(), (inp,), "test_bfloat16.onnx")
```
A program to test loading bfloat16 tensors:
```
import onnx
from onnx.reference import op_run
import numpy as np

m = onnx.load("test_bfloat16.onnx")
t = m.graph.node[0].attribute[0].t
print("The bfloat16 tensor t is:")
print(t)
print("onnx.numpy_helper.to_array(t) is:")
print(onnx.numpy_helper.to_array(t))
print("\nop_run.to_array_extended(t) is:")
print(op_run.to_array_extended(t))
print("\nnp.frombuffer(t.raw_data, bfloat16) is:")
print(np.frombuffer(t.raw_data, op_run.bfloat16).reshape(tuple(t.dims)))
```
ONNX 1.16.1 prints:
```
The bfloat16 tensor t is:
dims: 2
dims: 2
data_type: 16
raw_data: "\200>\000?\200?\000@"

onnx.numpy_helper.to_array(t) is:
[[0.25 0.5 ]
 [1.   2.  ]]

onnx.reference.op_run.to_array_extended(t) is:
[[    0     0]
 [16384 16527]]

np.frombuffer(t.raw_data, bfloat16) is:
[[16000 16128]
 [16256 16384]]
```
ONNX 1.17.0 prints:
```
The bfloat16 tensor t is:
dims: 2
dims: 2
data_type: 16
raw_data: "\200>\000?\200?\000@"

onnx.numpy_helper.to_array(t) is:
[[    0 16256]
 [    0 16256]]

op_run.to_array_extended(t) is:
[[    0 16256]
 [    0 16256]]

np.frombuffer(t.raw_data, bfloat16) is:
[[16000 16128]
 [16256 16384]]
```
The contents of `op_run.to_array_extended(t)` (and, in 1.17.0, that of `onnx.numpy_helper.to_array(t)` as well) may vary, because the result is actually an uninitialized array.

### Expected behavior
In all cases the conversion results should be the same as `np.frombuffer(t.raw_data, bfloat16)`, i.e.
```
[[16000 16128]
 [16256 16384]]
```
closed
2025-01-20T13:31:55Z
2025-01-28T01:57:03Z
https://github.com/onnx/onnx/issues/6646
[ "bug" ]
aparpara
2
miguelgrinberg/python-socketio
asyncio
592
Python client socketio cannot connect to nodejs server socketio (unexpected response from server)
I am trying to connect a python client to a NodeJS server but I get some errors. When I connect this py client to a python server it works without problems. Also, connecting from the web to the NodeJS server works well, but the python client to the NodeJS server does not.

```
import socketio

sio = socketio.Client()

@sio.event
def connect():
    print("Connected")

@sio.event
def connect_error():
    print('[INFO] Failed to connect to server.')

@sio.event
def disconnect():
    print('[INFO] Disconnected from server.')

@sio.on("message")
def message_received(message):
    print(message)

sio.connect('http://localhost:5000')
sio.emit("something", "Hello from python.")
```

NodeJS server:

```
var app = require('express')();
var http = require('http').createServer(app);
var server = http.listen(5000, () => {
    console.log('listening on *:5000');
});
var io = require('socket.io')(server, { /*pingInterval: 500,*/ origins: '*:*' });

app.get('/', (req, res) => {
    res.sendFile(__dirname + '/index.html');
});

io.on('connection', (socket) => {
    console.log('a user connected');
});
```

The errors:

```
Traceback (most recent call last):
  File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\socketio\client.py", line 281, in connect
    engineio_path=socketio_path)
  File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\engineio\client.py", line 195, in connect
    url, headers or {}, engineio_path)
  File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\engineio\client.py", line 315, in _connect_polling
    'Unexpected response from server'), None)
  File "<string>", line 3, in raise_from
engineio.exceptions.ConnectionError: Unexpected response from server

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "client.py", line 31, in <module>
    sio.connect('http://localhost:5000')
  File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\socketio\client.py", line 285, in connect
    exc.args[1] if len(exc.args) > 1 else exc.args[0])
  File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\socketio\client.py", line 555, in _trigger_event
    return self.handlers[namespace][event](*args)
TypeError: connect_error() takes 0 positional arguments but 1 was given
```

How can I solve this issue?
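Two things stand out for anyone hitting the same traceback. First, python-socketio passes the failure reason to the connect_error handler, so it must accept one argument; a minimal sketch of the corrected handler:

```python
@sio.event
def connect_error(data):
    # python-socketio invokes this handler with the error details as an argument
    print('[INFO] Failed to connect to server:', data)
```

Second, the underlying "Unexpected response from server" is often a Socket.IO protocol mismatch between the JavaScript server and the Python client (the major versions of the two libraries must be compatible), which is worth checking separately.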
closed
2020-12-19T12:04:45Z
2020-12-19T14:24:00Z
https://github.com/miguelgrinberg/python-socketio/issues/592
[ "question" ]
hyperion2333
2
LibrePhotos/librephotos
django
992
Social graph navigation broken. "Cannot read properties of undefined" in Graph.js
Graph is placed weirdly: ![image](https://github.com/LibrePhotos/librephotos/assets/136166458/ae6d1fd5-6938-47d6-a561-7d803edce437) Trying to zoom/pan the graph results in the following errors ![image](https://github.com/LibrePhotos/librephotos/assets/136166458/227f3950-c31a-4e9c-adea-d3d1f5ca7930) ## Please provide additional information: - 💻 Operating system: Ubuntu server, latest LTS - ⚙ Architecture (x86 or ARM): x86 - 🔢 Librephotos version: Very recent (cloned the repo like yesterday) - 📸 Librephotos installation method (Docker, Kubernetes, .deb, etc.): Git clone -> docker-compose - 📁 How is your picture library mounted (Local file system (Type), NFS, SMB, etc.): At /mnt/data0/blah/blah/blah/blah/Photos with only read permissions
closed
2023-08-10T19:40:43Z
2023-08-18T12:47:27Z
https://github.com/LibrePhotos/librephotos/issues/992
[ "bug" ]
ghost
2
open-mmlab/mmdetection
pytorch
11,614
Run models without installing mmdetection, openmim or mm related packages (CO-DETR)
I would like to run a model without having to install all the MM-related packages. Has anybody tried converting a model into pure vanilla PyTorch? If so, I would like to hear your opinion on the topic.
open
2024-04-03T10:32:54Z
2024-04-03T10:32:54Z
https://github.com/open-mmlab/mmdetection/issues/11614
[]
federico-ferlito
0
gradio-app/gradio
machine-learning
10,033
The scrollbar cannot be displayed in gr.Gallery.
### Describe the bug
The scrollbar is not displayed in gr.Gallery, so not all images can be viewed by scrolling up or down. I've tried gradio=5.6.0 and gradio=43.0.

### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues

### Reproduction
```python
import gradio as gr

gallery = gr.Gallery(
    label="Generated images", show_label=False, elem_id="gallery",
    columns=[3], rows=[2], object_fit="contain", height=300)
```

### Screenshot
![image](https://github.com/user-attachments/assets/cc964d3b-b7f3-40c3-995c-e1f8748654a0)

### Logs
_No response_

### System Info
```shell
gradio=5.6.0 and gradio=43.0
gradio_client=1.4.3
Operating System: Linux
```

### Severity
I can work around it
open
2024-11-25T08:01:37Z
2025-01-22T06:30:50Z
https://github.com/gradio-app/gradio/issues/10033
[ "bug" ]
Kirsty-tong
3
OWASP/Nettacker
automation
202
subdomain scan returning items with '<BR>'
The subdomain scan has recently started returning subdomains containing an HTML '<BR>' tag, like this: `lists.owasp.org<BR>www.lists.owasp.org`
closed
2020-01-27T21:52:28Z
2020-05-18T15:07:20Z
https://github.com/OWASP/Nettacker/issues/202
[ "bug fixed" ]
securestep9
4
Evil0ctal/Douyin_TikTok_Download_API
api
33
tiktok api
I would like to know where the author found this API for fetching the details of a single post: https://api.tiktokv.com/aweme/v1/multi/aweme/detail/?aweme_ids=%5B{}%5D. The comment in the code says it is an official API? I would appreciate an explanation, thanks.
closed
2022-05-30T10:49:50Z
2022-11-09T21:09:44Z
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/33
[ "enhancement" ]
johngao01
2
OFA-Sys/Chinese-CLIP
nlp
57
Training script fails to start
Hello author, with your help the previous issue is resolved: zeroshot_eval.sh now runs successfully and I obtained the accuracy you mentioned, thank you very much. I then wanted to try training the model, but nothing happens when I run the training command. Could you please take a look?

The ${DATAPATH} folder is named data, and the dataset is the preprocessed MUGE provided by the author.

The command I entered is:
bash run_scripts/muge_finetune_vit-b-16_rbt-base.sh data

After entering it, there is no output:
![219132259-57a31c54-8aaa-4fe2-abb0-c7354ec0f2da](https://user-images.githubusercontent.com/92359626/220244447-6ed324e5-906f-44fb-8cd4-0bc36d9866f7.png)

The script settings are as follows:
```
GPUS_PER_NODE=1
WORKER_CNT=1
export MASTER_ADDR=localhost
export MASTER_PORT=8514
export RANK=0
export PYTHONPATH=${PYTHONPATH}:`pwd`/cn_clip/
…
```
The rest of the script is unchanged.
closed
2023-02-21T04:02:59Z
2023-03-22T09:07:44Z
https://github.com/OFA-Sys/Chinese-CLIP/issues/57
[]
zhou-wenn
5
twopirllc/pandas-ta
pandas
490
Data squeeze doesn't match with Trading View
I download data as CSV and try to process it:
```
df = pd.read_csv('/tmp/file.csv', index_col='Date')
df.ta.squeeze(bb_length=20, bb_std=1.0, kc_length=5, kc_scalar=1.5, lazybear=True, use_tr=True, append=True)
```
But if I compare it with the TV indicator, the values are different.

In Trading View: `37.67`
But in the output: `18.095`

<img width="657" alt="image" src="https://user-images.githubusercontent.com/37179932/154170174-4d431e5a-89a4-4958-8bcc-898c2325e9cf.png">
<img width="730" alt="image" src="https://user-images.githubusercontent.com/37179932/154170192-43dcfc65-de0d-48ba-8137-f01a477b8056.png">
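Before digging deeper, it's worth checking the parameters: LazyBear's original Squeeze Momentum study on TradingView defaults to BB(length 20, mult 2.0) and KC(length 20, mult 1.5) with true range enabled, which differs from the values above. A sketch matching those defaults (assuming the TV study was left at its defaults):

```python
df.ta.squeeze(bb_length=20, bb_std=2.0, kc_length=20, kc_scalar=1.5,
              lazybear=True, use_tr=True, append=True)
```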
open
2022-02-15T23:59:36Z
2022-03-15T15:25:56Z
https://github.com/twopirllc/pandas-ta/issues/490
[ "duplicate", "info" ]
vitassuper
9
WZMIAOMIAO/deep-learning-for-image-processing
deep-learning
661
Multi-GPU training error: could you help identify the possible cause?
**System information**
* Have I written custom code: No
* OS Platform(e.g., window10 or Linux Ubuntu 16.04): Linux Ubuntu 16.04
* Python version: 3.8
* Deep learning framework and version(e.g., Tensorflow2.1 or Pytorch1.3): pytorch
* Use GPU or not: yes, GPU
* CUDA/cuDNN version(if you use GPU):
* The network you trained(e.g., Resnet34 network): MaskRCNN

**Describe the current behavior**
Multi-GPU training on the Pascal VOC dataset fails.

**Error info / logs**
```
Traceback (most recent call last):
  File "train_multi_GPU.py", line 269, in <module>
    main(args)
  File "train_multi_GPU.py", line 147, in main
    mean_loss, lr = utils.train_one_epoch(model, optimizer, data_loader,
  File "/hy-tmp/MaskFCN/mask_rcnn/train_utils/train_eval_utils.py", line 27, in train_one_epoch
    images = list(image.to(device) for image in images)
  File "/hy-tmp/MaskFCN/mask_rcnn/train_utils/train_eval_utils.py", line 27, in <genexpr>
    images = list(image.to(device) for image in images)
  File "/usr/local/lib/python3.8/dist-packages/PIL/Image.py", line 548, in __getattr__
    raise AttributeError(name)
AttributeError: to
Traceback (most recent call last):
  File "train_multi_GPU.py", line 269, in <module>
    main(args)
  File "train_multi_GPU.py", line 147, in main
    mean_loss, lr = utils.train_one_epoch(model, optimizer, data_loader,
  File "/hy-tmp/MaskFCN/mask_rcnn/train_utils/train_eval_utils.py", line 27, in train_one_epoch
    images = list(image.to(device) for image in images)
  File "/hy-tmp/MaskFCN/mask_rcnn/train_utils/train_eval_utils.py", line 27, in <genexpr>
    images = list(image.to(device) for image in images)
  File "/usr/local/lib/python3.8/dist-packages/PIL/Image.py", line 548, in __getattr__
    raise AttributeError(name)
AttributeError: to
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 3891) of binary: /usr/bin/python3.8
```
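The `AttributeError: to` raised from PIL's `Image.__getattr__` suggests the DataLoader is yielding raw PIL Images instead of tensors, so `image.to(device)` fails. A sketch of the usual remedy, assuming the training script accepts a transforms argument for the dataset (the dataset class name here is a placeholder):

```python
from torchvision import transforms

# Make sure every image is converted from a PIL Image to a torch.Tensor
data_transform = {
    "train": transforms.Compose([transforms.ToTensor(),
                                 transforms.RandomHorizontalFlip(0.5)]),
    "val": transforms.Compose([transforms.ToTensor()]),
}

# hypothetical: pass the transform when constructing the dataset
train_dataset = MyVOCDataSet(voc_root, transforms=data_transform["train"])
```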
open
2022-10-18T09:51:22Z
2023-02-12T08:13:21Z
https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/661
[]
Yuhuoo
5
aminalaee/sqladmin
fastapi
578
A list option in the ModelView that determines which related models should not be loaded when the record is updated.
### Checklist

- [X] There are no similar issues or pull requests for this yet.

### Is your feature related to a problem? Please describe.
Currently, when you create a `ModelView` for a model in your database, you can't determine which related models should not be loaded, even if the relationships are lazy. All related records get loaded when you click the "Save" button. That's fine with 100-200 related records, but in my case, for example, a User has several thousand related records. It then becomes impossible to change anything on the user through the admin panel: clicking "Save" loads absolutely all related records, everything hangs, and eventually it crashes.

### Describe the solution you would like.
A list parameter on ModelView that determines which related models not to load.

### Describe alternatives you considered
A correct way to override the `Query` class.

### Additional context
To work around this in my project, I had to override the `Query` class and the `_update_async` function, which doesn't look great. And if, for example, the driver changes from async to sync, I will have to override yet another function.
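A sketch of what the requested list option could look like on a ModelView (the attribute name `form_excluded_relations` is hypothetical; it does not exist in sqladmin today):

```python
from sqladmin import ModelView

class UserAdmin(ModelView, model=User):  # `User` is an illustrative model
    # Proposed: relationships that should never be loaded when the form is saved
    form_excluded_relations = ["orders", "audit_logs"]
```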
closed
2023-08-16T14:15:21Z
2023-08-21T07:38:32Z
https://github.com/aminalaee/sqladmin/issues/578
[]
bullochka-stack
7
sktime/sktime
data-science
7,634
[DOC] Improving documentation for "forecasting" estimators lacking examples.
#### Describe the issue linked to the documentation The current documentation is comprehensive but examples are missing in several locations. As an umbrella issue, this seeks to highlight all the "forecasting" estimators in `sktime`, lacking examples in the documentation. It can be used as a reference. <!-- Tell us about the confusion introduced in the documentation. --> Here's a list that I was able to accumulate during a read-through of the docs, you are free to add more here :) or point out if an example isn't supposed to be added in the docs. - [AutoTS](https://github.com/sktime/sktime/tree/main/sktime/forecasting/autots.py) - [BoxCoxBiasAdjustedForecaster](https://github.com/sktime/sktime/tree/main/sktime/forecasting/boxcox_bias_adjusted_forecaster.py) - [PyKANForecaster](https://github.com/sktime/sktime/tree/main/sktime/forecasting/pykan_forecaster.py) - [DirectReductionForecaster](https://github.com/sktime/sktime/tree/main/sktime/forecasting/compose/_reduce.py) - [RecursiveReductionForecaster](https://github.com/sktime/sktime/tree/main/sktime/forecasting/compose/_reduce.py) - [BaseDeepNetworkPyTorch](https://github.com/sktime/sktime/tree/main/sktime/forecasting/base/adapters/_pytorch.py) - [DirectTabularRegressionForecaster](https://github.com/sktime/sktime/tree/main/sktime/forecasting/compose/_reduce.py) - [DirectTimeSeriesRegressionForecaster](https://github.com/sktime/sktime/tree/main/sktime/forecasting/compose/_reduce.py) - [DartsRegressionModel](https://github.com/sktime/sktime/tree/main/sktime/forecasting/darts.py) - [DartsLinearRegressionModel](https://github.com/sktime/sktime/tree/main/sktime/forecasting/darts.py) - [DartsXGBModel](https://github.com/sktime/sktime/tree/main/sktime/forecasting/darts.py) - [StatsForecastAutoETS](https://github.com/sktime/sktime/tree/main/sktime/forecasting/statsforecast.py) - [StatsForecastAutoCES](https://github.com/sktime/sktime/tree/main/sktime/forecasting/statsforecast.py) - [StatsForecastAutoTBATS](https://github.com/sktime/sktime/tree/main/sktime/forecasting/statsforecast.py) - [OnlineEnsembleForecaster](https://github.com/sktime/sktime/tree/main/sktime/forecasting/online_learning/_online_ensemble.py) - [ForecastingRandomizedSearchCV](https://github.com/sktime/sktime/tree/main/sktime/forecasting/model_selection/_tune.py) #### Suggest a potential alternative/fix <!-- Tell us how we could improve the documentation in this regard. --> A simple demonstration of the estimator's usage inside the docstring.
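For anyone picking one of these up, a sketch of the usual shape of such an addition: an `Examples` section in the estimator's docstring, written as a doctest (the estimator and dataset below are illustrative, with `# doctest: +SKIP` markers for soft dependencies):

```python
"""
Examples
--------
>>> from sktime.datasets import load_airline
>>> from sktime.forecasting.statsforecast import StatsForecastAutoETS
>>> y = load_airline()
>>> forecaster = StatsForecastAutoETS()  # doctest: +SKIP
>>> forecaster.fit(y)  # doctest: +SKIP
StatsForecastAutoETS(...)
>>> y_pred = forecaster.predict(fh=[1, 2, 3])  # doctest: +SKIP
"""
```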
open
2025-01-12T19:07:08Z
2025-01-15T14:21:52Z
https://github.com/sktime/sktime/issues/7634
[ "documentation" ]
PranavBhatP
5