| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
mckinsey/vizro
|
plotly
| 123
|
Consider AG Grid as recommended default over Dash data table
|
_Originally posted by @antonymilne in https://github.com/mckinsey/vizro/issues/114#issuecomment-1770347893_
Just from quickly playing around with the example dashboard here, I prefer the AG Grid. I wonder whether we should recommend it instead of Dash data table as the default table in the (I guess) most common case where someone just wants to draw a nice table?
* for small screen sizes the table has its own little scroll bar rather than stretching out the whole screen
* I can rearrange column order by drag and drop
* the active row is highlighted
* the style just feels a bit more modern and slicker somehow (though dark theme is obviously not usable right now)
I'd guess you can probably also achieve some of these with Dash data table by using appropriate styling parameters or arguments, though?
These are just my impressions from playing around for a few seconds, so don't take them too seriously - it's the first time I've used both sorts of table, so I don't know what each is capable of or how easily we can get all the features we want out of each. But I'm curious what other people think - should Dash data table or AG Grid be the "default" table that we suggest people use?
|
closed
|
2023-10-25T05:20:46Z
|
2024-03-07T13:21:49Z
|
https://github.com/mckinsey/vizro/issues/123
|
[] |
antonymilne
| 6
|
aeon-toolkit/aeon
|
scikit-learn
| 1,799
|
[ENH] Add the option to turn off all type checking and conversion and go straight to _fit/_predict/_transform
|
### Describe the feature or idea you want to propose
In estimator base classes (looking at BaseCollectionTransformer, but the same applies elsewhere) we do checks, get metadata and convert to the inner type. All good, but it does introduce an overhead. For example, below is minirocket with and without the checks.
The checks involve several scans of the data. For larger data it would be good to have an option to go straight to the inner methods and fail without an informative error.
This is minirocket in safe and unsafe modes (time in seconds to transform length-500 series, varying the number of cases). Note the difference is linear in n.
n cases | With checks | Unsafe | Diff
-- | -- | -- | --
500 | 0.49 | 0.16 | -0.33
1000 | 0.88 | 0.20 | -0.68
1500 | 1.30 | 0.26 | -1.04
2000 | 1.74 | 0.32 | -1.43
2500 | 2.02 | 0.49 | -1.53
3000 | 2.46 | 0.44 | -2.02
3500 | 2.81 | 0.49 | -2.32
4000 | 3.19 | 0.54 | -2.65
4500 | 3.66 | 0.60 | -3.06
5000 | 4.08 | 0.73 | -3.35
5500 | 4.57 | 0.75 | -3.82
6000 | 4.75 | 0.83 | -3.92
6500 | 5.17 | 0.83 | -4.33
7000 | 5.80 | 0.95 | -4.85
7500 | 6.14 | 0.99 | -5.15
8000 | 6.51 | 1.01 | -5.50
8500 | 6.85 | 1.05 | -5.80
9000 | 7.33 | 1.13 | -6.21
9500 | 7.55 | 1.23 | -6.32
10000 | 7.92 | 1.22 | -6.70
### Describe your proposed solution
I would have a parameter in the constructor, say "unsafe", that defaults to False, then just test it in fit etc.
```python
def fit(self, X, y=None):
    if self.unsafe:
        self._fit(X, y)
        return self
    if self.get_tag("requires_y"):
        if y is None:
            raise ValueError("Tag requires_y is true, but fit called with y=None")
    # skip the rest if fit_is_empty is True
    if self.get_tag("fit_is_empty"):
        self._is_fitted = True
        return self
    self.reset()
```
etc
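For illustration, a minimal sketch of how the flag could thread through the base class (a sketch only; `unsafe` is the name proposed above, not an existing aeon parameter):
```python
class BaseCollectionTransformerSketch:
    """Sketch of the proposed flag; not actual aeon code."""

    def __init__(self, unsafe=False):
        # when True, fit/transform skip metadata scans, tag checks and
        # conversion, going straight to the inner methods
        self.unsafe = unsafe

    def fit(self, X, y=None):
        if self.unsafe:
            self._fit(X, y)
            self._is_fitted = True
            return self
        # ... full checking/conversion path as in the snippet above ...
        return self

    def _fit(self, X, y=None):
        raise NotImplementedError
```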
### Describe alternatives you've considered, if relevant
_No response_
### Additional context
_No response_
|
closed
|
2024-07-14T16:26:50Z
|
2024-07-14T17:11:59Z
|
https://github.com/aeon-toolkit/aeon/issues/1799
|
[
"enhancement"
] |
TonyBagnall
| 1
|
ipython/ipython
|
data-science
| 14,364
|
Documentation about available options and their structural hierarchy
|
Hi, I'm looking for a list of the available options and I still cannot find one in the latest documentation.
There were pages for them in the docs in previous versions:
https://ipython.org/ipython-doc/dev/config/options/index.html
Is there a page I'm missing, or is there a way to get the equivalent information from the IPython shell?
E.g. I found that the following magic command lists the available options for the `TerminalIPythonApp` class, but I want a comprehensive list of the available options for the other classes as well.
```py
%config TerminalIPythonApp
TerminalIPythonApp(BaseIPythonApplication, InteractiveShellApp) options
---------------------------------------------------------------------
TerminalIPythonApp.add_ipython_dir_to_sys_path=<Bool>
...
%config InteractiveShellApp
UsageError: Invalid config statement: 'InteractiveShellApp'
# No information available for the super classes
```
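For reference, two things that should give a broader listing (hedged; based on how traitlets-based apps generally behave, worth verifying on the installed version): running `ipython --help-all` from the shell prints every configurable option across all classes, and `%config` with no arguments lists the configurable classes in the current session, which can then be queried one at a time.
```py
# Hedged sketch, not from the issue above: enumerate configurable classes,
# then query each one individually.
%config                            # prints "Available objects for config: ..."
%config TerminalInteractiveShell   # options for one class at a time
```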
I found part of the information described in the latest docs:
https://ipython.readthedocs.io/en/stable/config/intro.html
But I still find it difficult to customize options because:
- There is no list of available options for each class
- The structural hierarchy of classes is unclear (i.e. I have no idea where the best place to put option values is, because of the relationship between the similar classes `TerminalIPythonApp`, `InteractiveShellApp` and `InteractiveShell`)
Thank you!
|
open
|
2024-03-06T18:29:10Z
|
2024-03-06T20:36:03Z
|
https://github.com/ipython/ipython/issues/14364
|
[] |
furushchev
| 2
|
home-assistant/core
|
asyncio
| 140,830
|
Transmission integration says 'Failed to connect'
|
### The problem
I can normally connect to my Transmission daemon 4.0.6 (installed as an OpenWrt 23.05.5 package) from desktop and mobile app clients. When I connect via the HA integration setup wizard, it always says "Failed to connect" as soon as I set the correct user/password (if I set them wrong, it says the authentication is incorrect). If I start netcat on a different port, I can see the integration sends a POST request correctly. It only fails immediately when everything is set correctly.
The issue is probably the same reported here: https://community.home-assistant.io/t/transmission-integration-failed-to-connect/402640
### What version of Home Assistant Core has the issue?
core-2025.3.1
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant Container
### Integration causing the issue
transmission
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/transmission
### Diagnostics information
The HA container runs in host network mode. In many other cases I can connect to other local services for different integrations. Both HA and the Transmission daemon share the same LAN IP as the host.
I think the actual TCP connection does not fail. Not sure what else makes the integration setup wizard report the failure.
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
I set Transmission log level to 5 but it does not print anything about the failure.
```
### Additional information
_No response_
|
open
|
2025-03-17T21:52:42Z
|
2025-03-17T21:52:48Z
|
https://github.com/home-assistant/core/issues/140830
|
[
"integration: transmission"
] |
nangirl
| 1
|
Kanaries/pygwalker
|
matplotlib
| 12
|
UnicodeDecodeError: 'charmap' codec can't decode byte 0x8d in position 511737: character maps to <undefined>
|
This is an issue from [reddit](https://www.reddit.com/r/datascience/comments/117bptb/comment/j9cb6wn/?utm_source=share&utm_medium=web2x&context=3)
```bash
Traceback (most recent call last):
File "E:\py\test-pygwalker\main.py", line 15, in <module>
gwalker = pyg.walk(df)
File "E:\py\test-pygwalker\venv\lib\site-packages\pygwalker\gwalker.py", line 91, in walk
js = render_gwalker_js(gid, props)
File "E:\py\test-pygwalker\venv\lib\site-packages\pygwalker\gwalker.py", line 65, in render_gwalker_js
js = gwalker_script() + js
File "E:\py\test-pygwalker\venv\lib\site-packages\pygwalker\base.py", line 15, in gwalker_script
gwalker_js = "const exports={};const process={env:{NODE_ENV:\"production\"} };" + f.read()
File "E:\Python\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x8d in position 511737: character maps to <undefined>
```
> even loading df = pd.DataFrame(data={'a':[1]}) causes this problem to appear.
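Reading the traceback, the JS bundle appears to be opened with the platform default encoding (cp1252 on this Windows setup). A minimal sketch of the likely fix, assuming the bundled file is UTF-8 (`js_path` is a hypothetical name for the path used in `base.py`):
```python
# Hypothetical sketch: pass an explicit encoding instead of relying on the
# platform default, which is cp1252 on many Windows installations.
with open(js_path, encoding="utf-8") as f:
    gwalker_js = 'const exports={};const process={env:{NODE_ENV:"production"} };' + f.read()
```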
|
closed
|
2023-02-21T02:24:51Z
|
2023-02-21T03:15:03Z
|
https://github.com/Kanaries/pygwalker/issues/12
|
[
"bug"
] |
ObservedObserver
| 1
|
TencentARC/GFPGAN
|
pytorch
| 368
|
original or clean?
|
Does anyone know what the difference is between the clean version and the original version? Which version performs better?
|
open
|
2023-04-18T09:43:29Z
|
2023-04-18T09:43:29Z
|
https://github.com/TencentARC/GFPGAN/issues/368
|
[] |
sunjian2015
| 0
|
iterative/dvc
|
data-science
| 9,924
|
remote: migrate: deduplicate objects between v2 and v3
|
My situation is like this:
a) dataFolderA containing fileA.bin, fileB.bin, fileC.bin, which I added via "dvc add dataFolderA" and pushed to the remote with DVC 2.0
b) then I changed fileB.bin and added that via "dvc add dataFolderB" to the remote with DVC 3.0
When investigating the remote (and cache) I can see the md5-renamed files for fileA.bin and fileC.bin in both files/md5/<xx>/<xyz> and <xx>/<xyz>.
It is the exact same md5 hash, and the data for fileA.bin and fileC.bin is now stored twice in the remote (and cache).
(I am simplifying my case; there are many fileA, fileB, fileC's involved.)
How can I clean up the remote? I know there exists a "dvc cache migrate" (though I have not tried it yet).
Kindest regards
|
closed
|
2023-09-07T13:36:16Z
|
2023-10-13T16:06:27Z
|
https://github.com/iterative/dvc/issues/9924
|
[
"feature request"
] |
12michi34
| 7
|
ets-labs/python-dependency-injector
|
flask
| 664
|
Installation via pip warns about deprecated installation method
|
I see this warning when installing the package using pip 22.3.1
```
Running setup.py install for dependency-injector ... done
DEPRECATION: juice-client-db is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
```
|
closed
|
2023-01-31T17:17:08Z
|
2024-12-10T14:21:23Z
|
https://github.com/ets-labs/python-dependency-injector/issues/664
|
[] |
chbndrhnns
| 1
|
tensorflow/tensor2tensor
|
deep-learning
| 1,083
|
InvalidArgumentError in Transformer model
|
### Description
I am trying to run the `Transformer` model in training mode. I took the [`asr_transformer` notebook](https://github.com/tensorflow/tensor2tensor/blob/v1.9.0/tensor2tensor/notebooks/asr_transformer.ipynb) as an example and built on it.
> **Note:** For `hparams`, the input and target modality is just `'default'`.
### The error
The exception information:
```
tensorflow.python.framework.errors_impl.InvalidArgumentError:
In[0].dim(0) and In[1].dim(0) must be the same:
[100,2,1,192] vs [1,2,229,192] [Op:BatchMatMul]
name: transformer/parallel_0/transformer/transformer/body/decoder/layer_0/encdec_attention/multihead_attention/dot_product_attention/MatMul/
```
### Inspection
I was debugging in the `transformer` model right down to where `dot_product_attention` is called ([common_attention.py#L3470](https://github.com/tensorflow/tensor2tensor/blob/v1.9.0/tensor2tensor/layers/common_attention.py#L3470)). From the docs:
```python
"""Dot-product attention.
Args:
q: Tensor with shape [..., length_q, depth_k].
k: Tensor with shape [..., length_kv, depth_k]. Leading dimensions must
match with q.
v: Tensor with shape [..., length_kv, depth_v] Leading dimensions must
match with q.
```
### Environment information
```
OS: Linux everest11 4.15.0-34-generic #37~16.04.1-Ubuntu SMP Tue Aug 28 10:44:06 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
$ pip freeze | grep tensor
tensor2tensor==1.9.0
tensorboard==1.10.0
tensorflow==1.10.1
$ python -V
Python 3.5.6 :: Anaconda, Inc.
```
### Steps to reproduce
```python
import os
import logging.config

import tensorflow as tf
from tensor2tensor import problems
from tensor2tensor import models
from tensor2tensor.utils import metrics
from tensor2tensor.utils import registry
from tensor2tensor.utils import trainer_lib
from asr.util import is_debug_mode

Modes = tf.estimator.ModeKeys

tfe = tf.contrib.eager
tfe.enable_eager_execution()

if __name__ == '__main__':
    problem_name = 'librispeech_clean_small'
    input_dir = os.path.join('datasets', 'input', 'problems', problem_name)  # 'input/ende_wmt_bpe32k'
    data_dir = os.path.join(input_dir, 'data')
    tmp_dir = os.path.join(input_dir, 'tmp')
    tf.gfile.MakeDirs(data_dir)
    tf.gfile.MakeDirs(tmp_dir)

    problem = problems.problem(problem_name)
    problem.generate_data(data_dir, tmp_dir)
    encoders = problem.feature_encoders(None)

    model_name = "transformer"
    hparams_set = "transformer_librispeech_tpu"
    hparams = trainer_lib.create_hparams(hparams_set, data_dir=data_dir, problem_name=problem_name)
    model_class = registry.model(model_name)
    model = model_class(hparams=hparams, mode=Modes.TRAIN)

    # In Eager mode, opt.minimize must be passed a loss function wrapped with
    # implicit_value_and_gradients
    @tfe.implicit_value_and_gradients
    def loss_fn(features):
        _, losses = model(features)
        return losses["training"]

    # Setup the training data
    train_data = problem.dataset(Modes.TRAIN, data_dir)

    optimizer = tf.train.AdamOptimizer()

    # Train
    NUM_STEPS = 100
    for count, example in enumerate(tfe.Iterator(train_data)):
        example['inputs'] = tf.reshape(example['inputs'], (1,) + tuple([d.value for d in example['inputs'].shape]))
        loss, gv = loss_fn(example)
        optimizer.apply_gradients(gv)
```
|
closed
|
2018-09-20T10:45:00Z
|
2018-09-20T12:20:46Z
|
https://github.com/tensorflow/tensor2tensor/issues/1083
|
[] |
stefan-falk
| 1
|
gee-community/geemap
|
streamlit
| 2,022
|
geemap module: extract_values_to_points(filepath) function export bugs
|
### Environment Information
Python version 3.12
geemap version 0.32.1
### Description
I added Google Earth Engine Landsat data to geemap, manually plotted some random points of interest on the map interface, and extracted a CSV file for future reference.
The output CSV file has the wrong latitude and longitude (it seems the actual point longitudes are in the latitude column and the actual latitudes in the longitude column).
Point I plot:

Output CSV:

### What I Did
```python
import geemap
import ee
Map = geemap.Map(center=[40, -100], zoom=4)
landsat7 = ee.Image('LANDSAT/LE7_TOA_5YEAR/1999_2003').select(
['B1', 'B2', 'B3', 'B4', 'B5', 'B7']
)
landsat_vis = {'bands': ['B3', 'B2', 'B1'], 'gamma': 1.4}
Map.addLayer(landsat7, landsat_vis, "Landsat")
Map.set_plot_options(add_marker_cluster=True, overlay=True)
Map.extract_values_to_points('samples.csv')
```
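Until the underlying bug is fixed, one hedged workaround is to swap the two columns of the exported CSV after the fact (the column names below are assumptions, not verified against the geemap output):
```python
import pandas as pd

df = pd.read_csv('samples.csv')
# Assumed column names; swap them since the values appear transposed.
df[['longitude', 'latitude']] = df[['latitude', 'longitude']].values
df.to_csv('samples_fixed.csv', index=False)
```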
|
closed
|
2024-05-27T22:06:21Z
|
2024-05-27T22:22:12Z
|
https://github.com/gee-community/geemap/issues/2022
|
[
"bug"
] |
zyang91
| 1
|
keras-team/keras
|
deep-learning
| 20,081
|
Loading up Json_files built and trained in Keras 2 for Keras 3
|
Using Keras 3, I am trying to load a model built and trained with the Keras 2 API, stored as .json with weights stored in .h5. The model file is the following: [cnn_model.json](https://github.com/user-attachments/files/16462021/cnn_model.json). Since model_from_json does not exist in Keras 3, I rewrote the function from the Keras 2 API so that I can load the .json file. With Keras 3 (torch backend), I am trying to load the model and weights with the following code
```python
import os
os.environ["KERAS_BACKEND"] = "torch"  # must be set before importing keras

import json

import keras
from keras.saving import deserialize_keras_object  # needed by the rewritten helper


def model_from_json(json_string, custom_objects=None):
    """Parses a JSON model configuration string and returns a model instance.

    Args:
        json_string: JSON string encoding a model configuration.
        custom_objects: Optional dictionary mapping names
            (strings) to custom classes or functions to be
            considered during deserialization.

    Returns:
        A Keras model instance (uncompiled).
    """
    model_config = json.loads(json_string)
    return deserialize_keras_object(model_config, custom_objects=custom_objects)


def model_torch():
    model_name = 'cnn_model'  # model file name
    model_file = model_name + '.json'
    with open(model_file, 'r') as json_file:
        print('USING MODEL:' + model_file)
        loaded_model_json = json_file.read()
    loaded_model = model_from_json(loaded_model_json)
    loaded_model.load_weights(model_name + '.h5')
    loaded_model.compile('sgd', 'mse')


if __name__ == "__main__":
    model_torch()
```
However, when I run this code, I obtain the error shown below. With this, I have the following three questions:
1. How does one fix this error, given that the model I want to load (in Keras 3) was built and trained in tensorflow-keras 2?
2. Is it better to rebuild the model using the load_model() function in Keras 3, and if so, how can you translate the weights from the .h5 file that was created in tensorflow-keras 2 to Keras 3?
3. To rebuild it, how should one translate the json dictionary into actual code?
Error I obtain:
```
TypeError: Could not locate class 'Sequential'. Make sure custom classes are decorated with `@keras.saving.register_keras_serializable()`. Full object config: {'class_name': 'Sequential', 'config': {'name': 'sequential', 'layers': [{'class_name': 'Conv2D', 'config': {'name': 'conv2d_20', 'trainable': True, 'batch_input_shape': [None, 50, 50, 1], 'dtype': 'float32', 'filters': 32, 'kernel_size': [3, 3], 'strides': [1, 1], 'padding': 'valid', 'data_format': 'channels_last', 'dilation_rate': [1, 1], 'activation': 'relu', 'use_bias': True, 'kernel_initializer': {'class_name': 'VarianceScaling', 'config': {'scale': 1.0, 'mode': 'fan_avg', 'distribution': 'uniform', 'seed': None, 'dtype': 'float32'}}, 'bias_initializer': {'class_name': 'Zeros', 'config': {'dtype': 'float32'}}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}}, {'class_name': 'Activation', 'config': {'name': 'activation_13', 'trainable': True, 'dtype': 'float32', 'activation': 'relu'}}, {'class_name': 'Conv2D', 'config': {'name': 'conv2d_21', 'trainable': True, 'dtype': 'float32', 'filters': 32, 'kernel_size': [3, 3], 'strides': [1, 1], 'padding': 'valid', 'data_format': 'channels_last', 'dilation_rate': [1, 1], 'activation': 'linear', 'use_bias': True, 'kernel_initializer': {'class_name': 'VarianceScaling', 'config': {'scale': 1.0, 'mode': 'fan_avg', 'distribution': 'uniform', 'seed': None, 'dtype': 'float32'}}, 'bias_initializer': {'class_name': 'Zeros', 'config': {'dtype': 'float32'}}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}}, {'class_name': 'Activation', 'config': {'name': 'activation_14', 'trainable': True, 'dtype': 'float32', 'activation': 'relu'}}, {'class_name': 'MaxPooling2D', 'config': {'name': 'max_pooling2d_10', 'trainable': True, 'dtype': 'float32', 'pool_size': [2, 2], 'padding': 'valid', 'strides': [2, 2], 'data_format': 'channels_last'}}, {'class_name': 'Dropout', 'config': {'name': 'dropout_17', 'trainable': True, 'dtype': 'float32', 'rate': 0.25, 'noise_shape': None, 'seed': None}}, {'class_name': 'Conv2D', 'config': {'name': 'conv2d_22', 'trainable': True, 'dtype': 'float32', 'filters': 64, 'kernel_size': [3, 3], 'strides': [1, 1], 'padding': 'same', 'data_format': 'channels_last', 'dilation_rate': [1, 1], 'activation': 'linear', 'use_bias': True, 'kernel_initializer': {'class_name': 'VarianceScaling', 'config': {'scale': 1.0, 'mode': 'fan_avg', 'distribution': 'uniform', 'seed': None, 'dtype': 'float32'}}, 'bias_initializer': {'class_name': 'Zeros', 'config': {'dtype': 'float32'}}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}}, {'class_name': 'Activation', 'config': {'name': 'activation_15', 'trainable': True, 'dtype': 'float32', 'activation': 'relu'}}, {'class_name': 'Conv2D', 'config': {'name': 'conv2d_23', 'trainable': True, 'dtype': 'float32', 'filters': 64, 'kernel_size': [3, 3], 'strides': [1, 1], 'padding': 'valid', 'data_format': 'channels_last', 'dilation_rate': [1, 1], 'activation': 'linear', 'use_bias': True, 'kernel_initializer': {'class_name': 'VarianceScaling', 'config': {'scale': 1.0, 'mode': 'fan_avg', 'distribution': 'uniform', 'seed': None, 'dtype': 'float32'}}, 'bias_initializer': {'class_name': 'Zeros', 'config': {'dtype': 'float32'}}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 
'kernel_constraint': None, 'bias_constraint': None}}, {'class_name': 'Activation', 'config': {'name': 'activation_16', 'trainable': True, 'dtype': 'float32', 'activation': 'relu'}}, {'class_name': 'MaxPooling2D', 'config': {'name': 'max_pooling2d_11', 'trainable': True, 'dtype': 'float32', 'pool_size': [2, 2], 'padding': 'valid', 'strides': [2, 2], 'data_format': 'channels_last'}}, {'class_name': 'Dropout', 'config': {'name': 'dropout_18', 'trainable': True, 'dtype': 'float32', 'rate': 0.25, 'noise_shape': None, 'seed': None}}, {'class_name': 'Flatten', 'config': {'name': 'flatten_8', 'trainable': True, 'dtype': 'float32', 'data_format': 'channels_last'}}, {'class_name': 'Dense', 'config': {'name': 'dense_15', 'trainable': True, 'dtype': 'float32', 'units': 512, 'activation': 'linear', 'use_bias': True, 'kernel_initializer': {'class_name': 'VarianceScaling', 'config': {'scale': 1.0, 'mode': 'fan_avg', 'distribution': 'uniform', 'seed': None, 'dtype': 'float32'}}, 'bias_initializer': {'class_name': 'Zeros', 'config': {'dtype': 'float32'}}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}}, {'class_name': 'Activation', 'config': {'name': 'activation_17', 'trainable': True, 'dtype': 'float32', 'activation': 'relu'}}, {'class_name': 'Dropout', 'config': {'name': 'dropout_19', 'trainable': True, 'dtype': 'float32', 'rate': 0.5, 'noise_shape': None, 'seed': None}}, {'class_name': 'Dense', 'config': {'name': 'dense_16', 'trainable': True, 'dtype': 'float32', 'units': 2, 'activation': 'linear', 'use_bias': True, 'kernel_initializer': {'class_name': 'VarianceScaling', 'config': {'scale': 1.0, 'mode': 'fan_avg', 'distribution': 'uniform', 'seed': None, 'dtype': 'float32'}}, 'bias_initializer': {'class_name': 'Zeros', 'config': {'dtype': 'float32'}}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}}, {'class_name': 'Activation', 'config': {'name': 'activation_18', 'trainable': True, 'dtype': 'float32', 'activation': 'softmax'}}]}, 'keras_version': '2.2.4-tf', 'backend': 'tensorflow'}
```
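For question 2, one commonly suggested migration path (a sketch under the assumption that the legacy `tf_keras` compatibility package can still parse this config; worth verifying) is to load the model with the Keras 2 implementation and re-save it in the native `.keras` format that Keras 3 reads directly:
```python
# Sketch only: load with the legacy Keras 2 package (pip install tf_keras),
# then re-save in the .keras format for Keras 3 to load.
import tf_keras

with open("cnn_model.json") as f:
    model = tf_keras.models.model_from_json(f.read())
model.load_weights("cnn_model.h5")
model.save("cnn_model.keras")  # then: keras.models.load_model("cnn_model.keras")
```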
|
closed
|
2024-08-01T21:36:25Z
|
2024-09-07T19:33:52Z
|
https://github.com/keras-team/keras/issues/20081
|
[
"stat:awaiting response from contributor",
"stale",
"type:Bug"
] |
manuelpaeza
| 7
|
vastsa/FileCodeBox
|
fastapi
| 178
|
After configuring S3 storage, uploads are still saved locally by default, and the problem from #96 has reappeared
|
① As the title says; my environment is Docker.

My settings should be correct, right? The backend logs show no error messages at all.
② Also, the problem from #96 has reappeared. After making changes according to the log I could use it again, but as mentioned above, it only stores locally. The log is as follows:
```
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/usr/local/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
return await self.app(scope, receive, send)
File "/usr/local/lib/python3.9/site-packages/fastapi/applications.py", line 289, in __call__
await super().__call__(scope, receive, send)
File "/usr/local/lib/python3.9/site-packages/starlette/applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.9/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/usr/local/lib/python3.9/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/usr/local/lib/python3.9/site-packages/starlette/middleware/cors.py", line 91, in __call__
await self.simple_response(scope, receive, send, request_headers=headers)
File "/usr/local/lib/python3.9/site-packages/starlette/middleware/cors.py", line 146, in simple_response
await self.app(scope, receive, send)
File "/usr/local/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/usr/local/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/usr/local/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
raise e
File "/usr/local/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
await self.app(scope, receive, send)
File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 66, in app
response = await func(request)
File "/usr/local/lib/python3.9/site-packages/fastapi/routing.py", line 273, in app
raw_response = await run_endpoint_function(
File "/usr/local/lib/python3.9/site-packages/fastapi/routing.py", line 190, in run_endpoint_function
return await dependant.call(**values)
File "/app/apps/base/views.py", line 41, in share_file
if file.size > settings.uploadSize:
TypeError: '>' not supported between instances of 'int' and 'str'
```
Could the author please fix this? Thanks!
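The last traceback frame compares `file.size` (an int) with `settings.uploadSize` (a str). A minimal sketch of the likely fix, assuming the setting arrives from the environment as a string (`settings` and `file` as used in `apps/base/views.py`):
```python
from fastapi import HTTPException

# Hypothetical sketch: coerce the configured limit to int before comparing,
# since settings loaded from env/config files often arrive as strings.
max_upload = int(settings.uploadSize)
if file.size > max_upload:
    raise HTTPException(status_code=403, detail="file too large")
```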
|
closed
|
2024-06-18T04:10:20Z
|
2024-06-18T17:24:01Z
|
https://github.com/vastsa/FileCodeBox/issues/178
|
[] |
ChanLicher
| 5
|
clovaai/donut
|
nlp
| 74
|
prediction results
|
The number of predictions I am getting at inference time is limited to 16, while it should be much more than that.
Is there a certain parameter that I need to modify in order to increase this number?
|
closed
|
2022-10-20T12:17:20Z
|
2022-11-01T11:06:54Z
|
https://github.com/clovaai/donut/issues/74
|
[] |
josianem
| 0
|
microsoft/JARVIS
|
pytorch
| 183
|
ImportError
|
```
python models_server.py --config configs/config.default.yaml
Traceback (most recent call last):
  File "/home/ml/docker/projects/JARVIS/server/models_server.py", line 29, in <module>
    from controlnet_aux import OpenposeDetector, MLSDdetector, HEDdetector, CannyDetector, MidasDetector
ImportError: cannot import name 'CannyDetector' from 'controlnet_aux' (/home/ml/miniconda3/lib/python3.10/site-packages/controlnet_aux/__init__.py)
```
Any debugging suggestions?
|
closed
|
2023-04-24T06:42:37Z
|
2024-08-26T03:28:59Z
|
https://github.com/microsoft/JARVIS/issues/183
|
[] |
birchmi
| 2
|
mljar/mercury
|
jupyter
| 84
|
Sort apps in the mercury gallery
|
Hi
Is there a way to sort apps in the gallery based on the title of the notebook, the date updated, or any other given condition? Please look at the attached image. It shows ML notebooks for a course, with a few apps. I want to sort the apps by title so that the notebooks are in order and users can go through them easily. This is the website [link](https://mlnotebooks.herokuapp.com/) for the app in which I want to sort.

|
closed
|
2022-04-15T10:12:09Z
|
2023-02-20T08:37:12Z
|
https://github.com/mljar/mercury/issues/84
|
[] |
rajeshai
| 1
|
d2l-ai/d2l-en
|
pytorch
| 2,032
|
Contribution Steps
|
The absence of clear guidance for revising and contributing to this project makes it tedious. Can you update the contributing guidelines?
|
closed
|
2022-02-01T07:13:18Z
|
2022-03-21T22:16:50Z
|
https://github.com/d2l-ai/d2l-en/issues/2032
|
[] |
callmekofi
| 1
|
idealo/imagededup
|
computer-vision
| 161
|
Precision, recall is not right
|
The precision and recall scores computed by the evaluate function were not as I expected. So I wrote my own function, and it output different results. I double-checked using sklearn, and it returned the same result as my own function. Can you recheck the evaluate function?
|
closed
|
2021-11-11T11:12:33Z
|
2021-11-11T13:06:02Z
|
https://github.com/idealo/imagededup/issues/161
|
[] |
yosajka
| 1
|
Avaiga/taipy
|
data-visualization
| 2,236
|
[🐛 BUG] Taipy Studio does not recognize some properties
|
### What went wrong? 🤔
With Taipy 4.0.0 and Taipy Studio 2.0.0, Taipy Studio does not recognize some valid properties, such as mode for text or content for part:

```python
page = """
<|{map_title}|text|mode=md|>
<|part|content={folium_map()}|height=600px|>
"""
```
### OS
Windows
### Version of Taipy
4.0.0
### Acceptance Criteria
- [ ] A unit test reproducing the bug is added.
- [ ] Any new code is covered by a unit test.
- [ ] Check code coverage is at least 90%.
- [ ] The bug reporter validated the fix.
- [ ] Related issue(s) in taipy-doc are created for documentation and Release Notes are updated.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional)
|
open
|
2024-11-12T12:54:06Z
|
2025-03-21T13:43:00Z
|
https://github.com/Avaiga/taipy/issues/2236
|
[
"💥Malfunction",
"🟧 Priority: High",
"🔒 Staff only",
"👩💻Studio"
] |
AlexandreSajus
| 0
|
plotly/dash-cytoscape
|
plotly
| 6
|
Can't Change Layout to Preset
|

|
closed
|
2018-08-16T15:54:32Z
|
2018-09-27T17:46:10Z
|
https://github.com/plotly/dash-cytoscape/issues/6
|
[
"react"
] |
xhluca
| 1
|
noirbizarre/flask-restplus
|
api
| 773
|
Parse argument as a list
|
I have the following model and parser:
```
component = api.model('Component', {
    'location': fields.String,
    'amount': fields.Integer,
})
...
parser = api.parser()
parser.add_argument('presented_argument', type=[component])
args = parser.parse_args()
```
I get the error "'list' object is not callable" if I do it this way.
What is the correct way to parse a list?
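For what it's worth, reqparse argument types must be callables, so a list of models can't be passed as `type` directly. A hedged sketch of one common workaround, accepting each component as a plain dict and marking the argument repeatable (validation of the nested fields would then be manual):
```python
# Sketch: accept a JSON list of objects by marking the argument repeatable.
parser = api.parser()
parser.add_argument('presented_argument', type=dict, action='append', location='json')
args = parser.parse_args()  # args['presented_argument'] is a list of dicts
```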
|
closed
|
2020-01-22T16:46:35Z
|
2020-01-27T09:54:20Z
|
https://github.com/noirbizarre/flask-restplus/issues/773
|
[] |
glebmikulko
| 2
|
pydantic/pydantic-ai
|
pydantic
| 93
|
Examples builder
|
We plan to add an examples builder which would take a sequence of things (e.g. pydantic models, dataclasses, dicts etc.) and serialize them.
Usage would be something like
```py
from pydantic_ai import format_examples

@agent.system_prompt
def foobar():
    return f'the examples are:\n{format_examples(examples, dialect="xml")}'
```
The suggestion is that LLMs find it particularly easy to read XML, so we'll offer (among other formats) XML as a way to format the examples.
By default, should it use
```py
"""
<example>
<input>
show me values greater than 5
</input>
<sql>
SELECT * FROM table WHERE value > 5
</sql>
</example>
...
"""
```
or
```py
"""
<example input="show me values greater than 5" sql="SELECT * FROM table WHERE value > 5" />
...
"""
```
?
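For a concrete sense of what the first (nested-element) shape involves, a toy serializer sketch (names are made up; this is not the eventual pydantic-ai API):
```py
from xml.sax.saxutils import escape

def format_examples_xml(examples: list[dict]) -> str:
    """Toy sketch of the nested-element shape discussed above."""
    parts = []
    for ex in examples:
        fields = "\n".join(
            f"  <{k}>\n    {escape(str(v))}\n  </{k}>" for k, v in ex.items()
        )
        parts.append(f"<example>\n{fields}\n</example>")
    return "\n".join(parts)

print(format_examples_xml([{"input": "show me values greater than 5",
                            "sql": "SELECT * FROM table WHERE value > 5"}]))
```
One trade-off to note: the nested form survives values containing quotes or newlines, while the attribute form is more compact but needs escaping for both.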
|
closed
|
2024-11-25T19:36:36Z
|
2025-01-02T23:00:33Z
|
https://github.com/pydantic/pydantic-ai/issues/93
|
[
"Feature request"
] |
samuelcolvin
| 4
|
albumentations-team/albumentations
|
deep-learning
| 1,508
|
Label additional targets keypoints.
|
## 🐛 Bug
I want to augment an image, a main set of keypoints ('stems'), and a varying number of additional sets of keypoints ('row_center_lines') on it, plus labels for these keypoints ('row_center_line_classes').
It appears to work as intended when using [A.ReplayCompose](https://albumentations.ai/docs/examples/replay/) (compare the code below).
However, every time I apply replay, the original image has to be augmented again as well, leading to unnecessary memory use, right? Therefore I thought it'd make sense to use additional_targets instead. This does indeed work when transforming keypoints only; however, after some research and experimenting I haven't been able to get it to work for labels as well. At best it throws a "len(data) needs to be len(labels)" error. Is there a way to label both additional target keypoints and standard keypoints and transform them, or am I better off for now sticking with replay?
The reason I want to augment the labels as well is that the order of the keypoints appears to be swapped around in case of flips or similar augmentations.
## To Reproduce
Steps to reproduce the behavior:
1. Add keypoints additional_targets to A.Compose.
2. Try to somehow add labels for these additional keypoints.
3. Apply the transform created by A.Compose, entering the additional keypoints and somehow the labels.
4. Be disappointed.
Working replay code:
```python
def aug_fn(img, stems, stem_classes, row_center_lines, row_center_line_classes, no_augment):
    """This function takes the input image and stems and returns the augmented image
    :img: input image
    :stems: numpy array of stem positions
    :stem_classes: list of stem classes
    :row_center_lines: numpy array of row center lines
    :row_center_line_classes: list of row center line classes
    """
    # could use tf.numpy_function instead, but that's discouraged
    img = img.numpy()
    stems = stems.numpy()
    stem_classes = stem_classes.numpy()
    # don't remove out of bounds keypoints and their classes
    remove_invisible = False
    label_fields = ['stem_classes']
    # only flip and crop to target size in some cases
    transforms_list = self.limited_transforms_list if no_augment else self.all_transforms_list
    transforms = A.ReplayCompose(
        transforms=transforms_list,
        keypoint_params=A.KeypointParams(format='xy', remove_invisible=remove_invisible,
                                         label_fields=label_fields,
                                         ))  # can add more labels here, but also need to add when using replay
    # Augment the image and stems
    aug_data = transforms(image=img,
                          keypoints=stems,
                          stem_classes=stem_classes,
                          )
    # for testing purposes (only if determinism=true)
    # self.aug_data_list = aug_data['replay']
    stems = tf.convert_to_tensor(aug_data['keypoints'], dtype=tf.float32)
    stem_classes = tf.convert_to_tensor(aug_data['stem_classes'], dtype=tf.string)
    if len(row_center_lines) > 0:
        # augment each row_center_line by replay
        row_center_lines = row_center_lines.numpy().astype(np.float32)
        row_center_line_classes = row_center_line_classes.numpy().astype(str)
        for i, row_center_line in enumerate(row_center_lines):
            # image is augmented again, could enter empty np.empty() if faster
            replay_row_center_lines = A.ReplayCompose.replay(
                aug_data['replay'],
                image=img,
                keypoints=row_center_line,
                # one label for each of the two row_center_line keypoints
                stem_classes=(row_center_line_classes[i], row_center_line_classes[i]))
            # both classes are the same, so [0] or [1] is fine
            row_center_lines[i] = replay_row_center_lines['keypoints']
            row_center_line_classes[i] = replay_row_center_lines['stem_classes'][0]
    img = aug_data["image"]
    return img, stems, stem_classes, row_center_lines, row_center_line_classes

# 3. Crop and apply augmentations to images and stems (randomcrop only in validation mode)
# tf.numpy_function converts the input tensors to numpy arrays automatically, which are needed for augmentation
img, stem_pos, stem_classes, row_center_lines, row_center_line_classes = tf.py_function(
    func=aug_fn,
    inp=[img, stem_pos, stem_classes, row_center_lines, row_center_line_classes, no_augment],
    Tout=[tf.uint8, tf.float32, tf.string, tf.float32, tf.string],
    name="aug_fn")
```
Faulty additional_targets code:
```python
def aug_fn(img, stems, stem_classes, row_center_lines, row_center_line_classes, no_augment):
    """This function takes the input image and stems and returns the augmented image
    :img: input image
    :stems: stem positions
    :stem_classes: stem classes
    :row_center_lines: row center lines
    :row_center_line_classes: row center line classes
    """
    # could use tf.numpy_function instead, but that's discouraged
    img = img.numpy()
    stems = stems.numpy()
    stem_classes = stem_classes.numpy()
    # don't remove out of bounds keypoints and their classes
    remove_invisible = False
    label_fields = [] if len(stems) == 0 else ['stem_classes']
    additional_targets = {}
    additional_target_args = {}
    if len(row_center_lines) > 0:
        row_center_lines = row_center_lines.numpy().astype(np.float32)
        row_center_line_classes = row_center_line_classes.numpy().astype(str)
        for i, row_center_line in enumerate(row_center_lines):
            additional_targets[f'row_center_line_{i}'] = 'keypoints'
            additional_target_args[f'row_center_line_{i}'] = row_center_line
            # additional_targets[f'row_center_line_class_{i}'] = 'classification'
            # additional_target_args[f'row_center_line_class_{i}'] = row_center_line_classes[i]
    # only flip and crop to target size in some cases
    transforms_list = self.limited_transforms_list if no_augment else self.all_transforms_list
    transforms = A.Compose(
        transforms=transforms_list,
        additional_targets=additional_targets,
        keypoint_params=A.KeypointParams(format='xy', remove_invisible=remove_invisible,
                                         label_fields=label_fields,
                                         ))
    # Augment the image and stems
    aug_data = transforms(image=img,
                          keypoints=stems,
                          stem_classes=stem_classes,
                          **additional_target_args
                          )
    for asc, osc in zip(stem_classes, aug_data['stem_classes']):
        print(f'asc: {asc}')
        print(f'osc: {osc}')
        if asc != osc:
            print('wooops')
    img = aug_data["image"]
    stems = tf.convert_to_tensor(aug_data['keypoints'], dtype=tf.float32)
    stem_classes = tf.convert_to_tensor(aug_data['stem_classes'], dtype=tf.string)
    row_center_line_classes_copy = row_center_line_classes.copy()
    if len(row_center_lines) > 0:
        for j, row_center_line in enumerate(row_center_lines):
            row_center_lines[j] = aug_data[f'row_center_line_{j}']
            # row_center_line_classes[j] = aug_data[f'row_center_line_class_{j}']
    return img, stems, stem_classes, row_center_lines, row_center_line_classes

img, stem_pos, stem_classes, row_center_lines, row_center_line_classes = tf.py_function(
    func=aug_fn,
    inp=[img, stem_pos, stem_classes, row_center_lines, row_center_line_classes, no_augment],
    Tout=[tf.uint8, tf.float32, tf.string, tf.float32, tf.string],
    name="aug_fn")
```
## Expected behavior
Have labels of all keypoints be augmented according to the keypoint augmentations themselves.
Thank you very much for having a look!
## Environment
- Albumentations version (e.g., 0.1.8): 1.3.1
- Python version (e.g., 3.7): 3.8.10
- OS (e.g., Linux): Linux
- How you installed albumentations (`conda`, `pip`, source): pip install albumentations
- Any other relevant information:
## Additional context
Notes: 'row_center_lines' are pairs of two keypoints forming a line, so a "line transformation" would also work, if something like that existed. Maybe bounding box augmentation could be used for it?
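One hedged workaround worth testing (based on documented albumentations behaviour that keypoint tuples may carry extra trailing values beyond the declared format, which pass through transforms untouched): attach the class label directly to each keypoint instead of going through label_fields, so it stays glued to its point even when flips reorder them.
```python
import numpy as np
import albumentations as A

# Sketch: extra values after the 'xy' coordinates ride along unchanged.
transform = A.Compose(
    [A.HorizontalFlip(p=1.0)],
    keypoint_params=A.KeypointParams(format='xy', remove_invisible=False),
)
keypoints = [(10.0, 20.0, 'class_a'), (30.0, 40.0, 'class_b')]  # (x, y, label)
out = transform(image=np.zeros((100, 100, 3), dtype=np.uint8), keypoints=keypoints)
print(out['keypoints'])  # labels remain attached to their transformed points
```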
|
open
|
2024-01-18T10:12:45Z
|
2024-01-23T10:50:19Z
|
https://github.com/albumentations-team/albumentations/issues/1508
|
[] |
JonathanGehret
| 1
|
holoviz/panel
|
plotly
| 7,580
|
panel oauth-secret not a valid command
|
The section https://panel.holoviz.org/how_to/authentication/configuration.html#encryption mentions you can run `panel oauth-secret`.

But you cannot with panel 1.5.5. Do you mean `panel secret`?

|
closed
|
2025-01-02T12:30:27Z
|
2025-01-17T19:11:57Z
|
https://github.com/holoviz/panel/issues/7580
|
[] |
MarcSkovMadsen
| 0
|
plotly/dash
|
flask
| 2,325
|
[BUG] `pip install dash` is still installing obsolete packages
|
Thank you so much for helping improve the quality of Dash!
We do our best to catch bugs during the release process, but we rely on your help to find the ones that slip through.
**Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
dash 2.7.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
**Describe the bug**
Using pip install installs components that are now incorporated into Dash itself. These are unnecessary as far as Dash is concerned.
**Expected behavior**
Dash installs without the obsolete PyPI packages.
**After note**
I see that this was already covered in #1944. Apologies for the new issue, I will be patient.
|
closed
|
2022-11-18T00:02:51Z
|
2022-11-18T00:09:25Z
|
https://github.com/plotly/dash/issues/2325
|
[] |
ryanskeith
| 0
|
tableau/server-client-python
|
rest-api
| 734
|
Add documentation for Data Acceleration Report
|
Add docs to match code added in #596
|
open
|
2020-11-19T00:43:18Z
|
2023-03-03T21:42:04Z
|
https://github.com/tableau/server-client-python/issues/734
|
[
"docs"
] |
bcantoni
| 0
|
coqui-ai/TTS
|
deep-learning
| 2,801
|
[Bug] cannot install TTS from pip (Dependency lookup for OpenBLAS with method 'pkgconfig' failed)
|
### Describe the bug
When I run pip install TTS (on mac in vscode) I run into this error:
../../scipy/meson.build:159:9: ERROR: Dependency lookup for OpenBLAS with method 'pkgconfig' failed: Pkg-config binary for machine 1 not found. Giving up.
(The full build output is reproduced verbatim in the Logs section below.)
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
### To Reproduce
pip install TTS
or
git clone https://github.com/coqui-ai/TTS
pip install -e .[all,dev,notebooks] # Select the relevant extras
### Expected behavior
Supposed to install TTS
### Logs
```shell
pip install TTS
Collecting TTS
Using cached TTS-0.15.6.tar.gz (1.5 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting cython==0.29.30 (from TTS)
Using cached Cython-0.29.30-py2.py3-none-any.whl (985 kB)
Collecting scipy>=1.4.0 (from TTS)
Using cached scipy-1.11.1.tar.gz (56.0 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [44 lines of output]
+ meson setup /private/var/folders/0y/kbqk5xzn5f397bmdxh0jt8h40000gn/T/pip-install-jmtiyxte/scipy_aaa7ee9f969b4a2984a0151a62db7a37 /private/var/folders/0y/kbqk5xzn5f397bmdxh0jt8h40000gn/T/pip-install-jmtiyxte/scipy_aaa7ee9f969b4a2984a0151a62db7a37/.mesonpy-820un46a/build -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md --native-file=/private/var/folders/0y/kbqk5xzn5f397bmdxh0jt8h40000gn/T/pip-install-jmtiyxte/scipy_aaa7ee9f969b4a2984a0151a62db7a37/.mesonpy-820un46a/build/meson-python-native-file.ini
The Meson build system
Version: 1.2.0
Source dir: /private/var/folders/0y/kbqk5xzn5f397bmdxh0jt8h40000gn/T/pip-install-jmtiyxte/scipy_aaa7ee9f969b4a2984a0151a62db7a37
Build dir: /private/var/folders/0y/kbqk5xzn5f397bmdxh0jt8h40000gn/T/pip-install-jmtiyxte/scipy_aaa7ee9f969b4a2984a0151a62db7a37/.mesonpy-820un46a/build
Build type: native build
Project name: SciPy
Project version: 1.11.1
C compiler for the host machine: cc (clang 12.0.5 "Apple clang version 12.0.5 (clang-1205.0.22.11)")
C linker for the host machine: cc ld64 650.9
C++ compiler for the host machine: c++ (clang 12.0.5 "Apple clang version 12.0.5 (clang-1205.0.22.11)")
C++ linker for the host machine: c++ ld64 650.9
Cython compiler for the host machine: cython (cython 0.29.36)
Host machine cpu family: aarch64
Host machine cpu: aarch64
Program python found: YES (/Library/Frameworks/Python.framework/Versions/3.10/bin/python3)
Did not find pkg-config by name 'pkg-config'
Found Pkg-config: NO
Run-time dependency python found: YES 3.10
Program cython found: YES (/private/var/folders/0y/kbqk5xzn5f397bmdxh0jt8h40000gn/T/pip-build-env-422m5n95/overlay/bin/cython)
Compiler for C supports arguments -Wno-unused-but-set-variable: NO
Compiler for C supports arguments -Wno-unused-function: YES
Compiler for C supports arguments -Wno-conversion: YES
Compiler for C supports arguments -Wno-misleading-indentation: YES
Library m found: YES
Fortran compiler for the host machine: gfortran (gcc 13.1.0 "GNU Fortran (Homebrew GCC 13.1.0) 13.1.0")
Fortran linker for the host machine: gfortran ld64 650.9
Compiler for Fortran supports arguments -Wno-conversion: YES
Checking if "-Wl,--version-script" : links: NO
Program pythran found: YES (/private/var/folders/0y/kbqk5xzn5f397bmdxh0jt8h40000gn/T/pip-build-env-422m5n95/overlay/bin/pythran)
Did not find CMake 'cmake'
Found CMake: NO
Run-time dependency xsimd found: NO (tried pkgconfig, framework and cmake)
Run-time dependency threads found: YES
Library npymath found: YES
Library npyrandom found: YES
pybind11-config found: YES (/private/var/folders/0y/kbqk5xzn5f397bmdxh0jt8h40000gn/T/pip-build-env-422m5n95/overlay/bin/pybind11-config) 2.10.4
Run-time dependency pybind11 found: YES 2.10.4
Run-time dependency openblas found: NO (tried pkgconfig, framework and cmake)
Run-time dependency openblas found: NO (tried framework)
../../scipy/meson.build:159:9: ERROR: Dependency lookup for OpenBLAS with method 'pkgconfig' failed: Pkg-config binary for machine 1 not found. Giving up.
A full log can be found at /private/var/folders/0y/kbqk5xzn5f397bmdxh0jt8h40000gn/T/pip-install-jmtiyxte/scipy_aaa7ee9f969b4a2984a0151a62db7a37/.mesonpy-820un46a/build/meson-logs/meson-log.txt
[end of output]
```
### Environment
```shell
Python 3.10
```
### Additional context
_No response_
|
closed
|
2023-07-25T14:34:57Z
|
2023-07-31T13:56:25Z
|
https://github.com/coqui-ai/TTS/issues/2801
|
[
"bug"
] |
valenmoore
| 3
|
ultralytics/ultralytics
|
pytorch
| 19,252
|
Yolo11 speed very slow compared to Yolo8
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi,
I tested YOLO11 for the first time today and the performance is really bad, and I do not know why.
I tried with a fresh new environment and the COCO validation set, code below:
```
from ultralytics import YOLO
# Load a model
# model = YOLO("yolo8l.pt")
model = YOLO("yolo11l.pt")
model.val(data='coco.yaml', batch=32)
```
and the results are the following
YOLO11:
Speed: 0.2ms preprocess, 99.2ms inference, 0.0ms loss, 0.8ms postprocess per image
Class Images Instances Box(P R mAP50 mAP50-95):
all 5000 36335 0.748 0.634 0.697 0.534
YOLO8:
Speed: 0.2ms preprocess, 41.0ms inference, 0.0ms loss, 0.8ms postprocess per image
Class Images Instances Box(P R mAP50 mAP50-95):
all 5000 36335 0.739 0.634 0.695 0.531
Is it normal to be that slow?
environment:
Ultralytics 8.3.75 🚀 Python-3.12.8 torch-2.5.1+cu124 CUDA:0 (NVIDIA GeForce RTX 3060 Laptop GPU, 6144MiB)
Setup complete ✅ (20 CPUs, 29.4 GB RAM, 226.1/1006.9 GB disk)
OS Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Environment Linux
Python 3.12.8
Install pip
RAM 29.38 GB
Disk 226.1/1006.9 GB
CPU 12th Gen Intel Core(TM) i7-12700H
CPU count 20
GPU NVIDIA GeForce RTX 3060 Laptop GPU, 6144MiB
GPU count 1
CUDA 12.4
numpy ✅ 2.1.1<=2.1.1,>=1.23.0
matplotlib ✅ 3.10.0>=3.3.0
opencv-python ✅ 4.10.0.84>=4.6.0
pillow ✅ 11.1.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.15.0>=1.4.1
torch ✅ 2.5.1>=1.8.0
torch ✅ 2.5.1!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.20.1>=0.9.0
tqdm ✅ 4.67.1>=4.64.0
psutil ✅ 6.1.1
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.13>=2.0.0
### Additional
_No response_
|
closed
|
2025-02-14T16:10:32Z
|
2025-02-19T08:30:37Z
|
https://github.com/ultralytics/ultralytics/issues/19252
|
[
"question",
"detect"
] |
francescobodria
| 10
|
jadore801120/attention-is-all-you-need-pytorch
|
nlp
| 197
|
download dataset error
|
Hello, I want to download WMT'17 with your code, but I failed. Could you tell me how to solve this problem? Thank you so much.
The error is as follows:
```
Already downloaded and extracted http://data.statmt.org/wmt17/translation-task/training-parallel-nc-v12.tgz.
Already downloaded and extracted http://data.statmt.org/wmt17/translation-task/dev.tgz.
Downloading from http://storage.googleapis.com/tf-perf-public/official_transformer/test_data/newstest2014.tgz to newstest2014.tgz.
newstest2014.tgz: 0.00B [00:00, ?B/s]
Traceback (most recent call last):
  File "preprocess.py", line 336, in <module>
    main()
  File "preprocess.py", line 187, in main
    raw_test = get_raw_files(opt.raw_dir, _TEST_DATA_SOURCES)
  File "preprocess.py", line 100, in get_raw_files
    src_file, trg_file = download_and_extract(raw_dir, d["url"], d["src"], d["trg"])
  File "preprocess.py", line 71, in download_and_extract
    compressed_file = _download_file(download_dir, url)
  File "preprocess.py", line 93, in _download_file
    urllib.request.urlretrieve(url, filename=filename, reporthook=t.update_to)
  File "/usr/local/lib/python3.7/urllib/request.py", line 247, in urlretrieve
    with contextlib.closing(urlopen(url, data)) as fp:
  File "/usr/local/lib/python3.7/urllib/request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/local/lib/python3.7/urllib/request.py", line 531, in open
    response = meth(req, response)
  File "/usr/local/lib/python3.7/urllib/request.py", line 641, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/local/lib/python3.7/urllib/request.py", line 569, in error
    return self._call_chain(*args)
  File "/usr/local/lib/python3.7/urllib/request.py", line 503, in _call_chain
    result = func(*args)
  File "/usr/local/lib/python3.7/urllib/request.py", line 649, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden
```
|
open
|
2022-04-26T14:10:04Z
|
2023-09-20T03:35:31Z
|
https://github.com/jadore801120/attention-is-all-you-need-pytorch/issues/197
|
[] |
qimg412
| 4
|
davidsandberg/facenet
|
computer-vision
| 602
|
Incorrect labels
|
Just for didactic purposes: the comment here (https://github.com/davidsandberg/facenet/blob/master/src/models/inception_resnet_v1.py#L178) is incorrect (as are the following ones in lines 182, 186, 189...).
For instance, L178 says 149x149x32 when it's supposed to be 79x79x32.
Do you accept patches?
Thanks
|
open
|
2018-01-04T09:40:45Z
|
2018-01-04T09:40:45Z
|
https://github.com/davidsandberg/facenet/issues/602
|
[] |
tiagofrepereira2012
| 0
|
sktime/pytorch-forecasting
|
pandas
| 1,360
|
Get an error when creating dataset,how to fix it?
|
@jdb78
I used 150,000 rows of data to create a dataset, adding 'series' and 'time_idx' columns like this:
```
.......
data_len = 150000
max_encoder_length = 4*96
max_prediction_length = 96
batch_size = 512
data = get_data(data_len)
data['time_idx'] = data.index % (96*5)   # as time index
data['series'] = data.index // (96*5)    # as group id
training_cutoff = data["time_idx"].max() - max_prediction_length
context_length = max_encoder_length
prediction_length = max_prediction_length
training = TimeSeriesDataSet(
    data[lambda x: x.time_idx <= training_cutoff],
    time_idx="time_idx",
    target="close",
    categorical_encoders={"series": NaNLabelEncoder().fit(data.series)},
    group_ids=["series"],
    time_varying_unknown_reals=["value"],
    max_encoder_length=context_length,
    max_prediction_length=prediction_length,
)
.......
```
The TimeSeriesDataSet gives me an error: 'AssertionError: filters should not remove entries all entries - check encoder/decoder lengths and lags'. I think there is something wrong in my definition of
```
max_encoder_length = 4*96
max_prediction_length = 96
batch_size = 512
```
This problem has been bothering me for days. How can I modify these values to fix this problem? HELP PLEASE!
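A quick arithmetic check (a hedged reading of the setup above) suggests why the filter removes every candidate window:
```python
# Each group spans 96*5 = 480 time steps (time_idx = index % 480).
steps_per_group = 96 * 5                       # 480
window = 4 * 96 + 96                           # encoder 384 + prediction 96 = 480
training_cutoff = (steps_per_group - 1) - 96   # 383
steps_after_filter = training_cutoff + 1       # 384 steps left per group
print(steps_after_filter < window)             # True -> no full window fits
```
If that reading is right, shrinking `max_encoder_length` (or keeping more history per group) should leave room for at least one encoder+decoder window after the cutoff.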
|
open
|
2023-08-05T03:49:32Z
|
2023-08-06T00:37:28Z
|
https://github.com/sktime/pytorch-forecasting/issues/1360
|
[] |
Lookforworld
| 4
|
Miksus/rocketry
|
automation
| 192
|
@app.task(daily.between("08:00", "20:00") & every("10 minutes"))
|
**Describe the bug**
Tasks with this task definition only run once
**Expected behavior**
The task should run every day, every 10 minutes between 8am and 8pm.
**Desktop (please complete the following information):**
- OS: Ubuntu 20
- Python 3.8
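For context, `daily` is itself a "run once per day" condition, so ANDing it with `every("10 minutes")` collapses to a single run. A hedged sketch of the usual way to express "every 10 minutes within a time window", using rocketry's documented `time_of_day` condition:
```python
from rocketry import Rocketry
from rocketry.conds import every, time_of_day

app = Rocketry()

# runs every 10 minutes, but only between 08:00 and 20:00
@app.task(every("10 minutes") & time_of_day.between("08:00", "20:00"))
def do_things():
    ...
```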
|
open
|
2023-02-09T13:47:51Z
|
2023-02-26T17:32:55Z
|
https://github.com/Miksus/rocketry/issues/192
|
[
"bug"
] |
faulander
| 3
|
ivy-llc/ivy
|
tensorflow
| 28,372
|
Fix Frontend Failing Test: tensorflow - mathematical_functions.jax.numpy.minimum
|
closed
|
2024-02-21T17:50:59Z
|
2024-02-21T21:29:19Z
|
https://github.com/ivy-llc/ivy/issues/28372
|
[
"Sub Task"
] |
samthakur587
| 0
|
|
waditu/tushare
|
pandas
| 860
|
fut_holding: vol_chg field is None
|
In the data queried via
ts_pro.query('fut_holding', trade_date=date_str, exchange=ec, fields='trade_date,symbol,broker,vol,long_hld,long_chg,short_hld,short_chg,exchange')
the vol_chg field is always None.
Query dates: 20181203, 20181204, 20181205, 20181206
All instruments are affected.
My referral link:
https://tushare.pro/register?reg=125923
|
closed
|
2018-12-07T06:59:57Z
|
2018-12-09T01:02:10Z
|
https://github.com/waditu/tushare/issues/860
|
[] |
xiangzhy
| 2
|
pywinauto/pywinauto
|
automation
| 508
|
Examples do not work correctly from python console on windows 10
|
Hello,
I tried the examples provided in readme.md using the Python console. I use Python 3.5.2 (v3.5.2:4def2a2901a5, Jun 25 2016, 22:18:55) [MSC v.1900 64 bit (AMD64)] on win32.
The 'Simple' example run from the console produced Notepad like the attached screenshot.

If I understood correctly, the 'About' window must be closed on executing:
`app.AboutNotepad.OK.click()` - but it did not happen.
But if I run this example as a script, it works fine.
Second example: 'MS UI Automation Example' failed on
```python
Properties = Desktop(backend='uia').Common_Files_Properties
```
with `NameError: name 'Desktop' is not defined`
But again, it works fine as a script.
Is this normal behavior, and is this module not intended to work from the console?
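On the second example, the `NameError` just means `Desktop` was never imported in the console session (the scripts presumably import it at the top). A minimal sketch:
```python
# Desktop must be imported explicitly in an interactive session.
from pywinauto import Desktop

properties = Desktop(backend='uia').Common_Files_Properties
```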
|
open
|
2018-06-18T14:24:05Z
|
2019-05-13T12:41:26Z
|
https://github.com/pywinauto/pywinauto/issues/508
|
[
"question"
] |
0de554K
| 1
|
openapi-generators/openapi-python-client
|
rest-api
| 373
|
Enums with default values generate incorrect field type and to_dict implementation
|
**Describe the bug**
We have several Enums in our API spec, and models that include fields of that enum type. Sometimes, we set default values on those enum fields. In those cases, the generated code in `to_dict` is incorrect and cannot be used to POST entities to our API.
**To Reproduce**
Steps to reproduce the behavior:
Add this code to a main.py file
```
from typing import Optional, Union, Dict
from fastapi import FastAPI
from pydantic import BaseModel
from enum import Enum
from starlette.responses import Response
app = FastAPI()
class ItemType(str, Enum):
RATIO = "ratio"
CURRENCY = "currency"
class ItemResource(BaseModel):
id: int
optional_id: Optional[int]
id_with_default: int = 42
item_type: ItemType
optional_item_type: Optional[ItemType]
item_type_with_default: ItemType = ItemType.RATIO
@app.post("/")
def write_item(model: ItemResource):
return Response(status_code=201)
```
Run the code with `uvicorn main:app &`
Generate an sdk with `openapi-python-client generate --url http://localhost:8000/openapi.json`
Open the generated `item_resource.py`
**Expected behavior**
`item_type_with_default` should look like:
`item_type_with_default: Union[Unset, ItemType] = ItemType.RATIO`
`to_dict` should have code like:
```
item_type_with_default = self.item_type_with_default
if item_type_with_default is not UNSET:
field_dict["item_type_with_default"] = item_type_with_default
```
**Actual behavior**
`item_type_with_default` looks like:
`item_type_with_default: Union[Unset, None] = UNSET`
`to_dict` has code like:
```
item_type_with_default = None
if item_type_with_default is not UNSET:
field_dict["item_type_with_default"] = item_type_with_default
```
Since `self.item_type_with_default` is never accessed, the caller has no way to actually provide a value to the server.
You can see the code in there for `id_with_default`, an int field with default that does the right thing.
**OpenAPI Spec File**
```
{"openapi":"3.0.2","info":{"title":"FastAPI","version":"0.1.0"},"paths":{"/":{"post":{"summary":"Write Item","operationId":"write_item__post","requestBody":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/ItemResource"}}},"required":true},"responses":{"200":{"description":"Successful Response","content":{"application/json":{"schema":{}}}},"422":{"description":"Validation Error","content":{"application/json":{"schema":{"$ref":"#/components/schemas/HTTPValidationError"}}}}}}}},"components":{"schemas":{"HTTPValidationError":{"title":"HTTPValidationError","type":"object","properties":{"detail":{"title":"Detail","type":"array","items":{"$ref":"#/components/schemas/ValidationError"}}}},"ItemResource":{"title":"ItemResource","required":["id","item_type"],"type":"object","properties":{"id":{"title":"Id","type":"integer"},"optional_id":{"title":"Optional Id","type":"integer"},"id_with_default":{"title":"Id With Default","type":"integer","default":42},"item_type":{"$ref":"#/components/schemas/ItemType"},"optional_item_type":{"$ref":"#/components/schemas/ItemType"},"item_type_with_default":{"allOf":[{"$ref":"#/components/schemas/ItemType"}],"default":"ratio"}}},"ItemType":{"title":"ItemType","enum":["ratio","currency"],"type":"string","description":"An enumeration."},"ValidationError":{"title":"ValidationError","required":["loc","msg","type"],"type":"object","properties":{"loc":{"title":"Location","type":"array","items":{"type":"string"}},"msg":{"title":"Message","type":"string"},"type":{"title":"Error Type","type":"string"}}}}}}
```
**Desktop:**
- OS: macOS 11.2.2
- Python Version: 3.8.6
- openapi-python-client version 0.8.0
- fast-api version 0.62.0
|
closed
|
2021-03-31T17:34:44Z
|
2021-03-31T21:04:45Z
|
https://github.com/openapi-generators/openapi-python-client/issues/373
|
[
"🐞bug"
] |
joshzana
| 2
|
iperov/DeepFaceLab
|
machine-learning
| 5,686
|
The Chinese(ZH-cn) translation.
|
[https://zhuanlan.zhihu.com/p/165589205](https://zhuanlan.zhihu.com/p/165589205)
# How to use it
Video face swapping has become hugely popular in recent years; on Bilibili, uploaders often post their own spoof AI face-swap videos. Photoshop retouching has of course always been a hot topic, but PS is usually used to edit a single image. Online you may have seen, say, a clip of Dilireba's performance with the face swapped to Lu Han's (no offense intended; such spoofs really do exist), convincing enough to pass for real. How is this done? It uses powerful AI technology: combine AI with "forgery" and you get "deepfakes".
Deepfakes are a synthesis technique mixing "deep learning" and "fakes", in which an existing image or video of one person is replaced with someone else's likeness. Deepfakes use powerful machine-learning and artificial-intelligence techniques to generate visual and audio content with a strong power to deceive. The main machine-learning method used is deep-learning-based training of generative neural networks, such as generative adversarial networks (GANs).
According to Wikipedia, the term Deepfakes originated at the end of 2017, when a Reddit user shared the "deepfake" creations they had made. In January 2018, a desktop application named FakeApp launched, letting users easily create and share videos with each other's faces swapped. As of 2019, FakeApp had been replaced by open-source alternatives such as Faceswap and the command-line-based DeepFaceLab. Larger companies have started using deepfakes as well.
This article introduces the open-source product DeepFaceLab, which is based on Python and TensorFlow.
Disclaimer: what you learn from this article must not be used for illegal purposes, unethical behavior, or commercial gain; otherwise I accept no responsibility.
Before starting, get the download address at [https://github.com/iperov/DeepFaceLab](https://link.zhihu.com/?target=https%3A//github.com/iperov/DeepFaceLab) and install it (if you want the version link from my article, message me privately; if there are too many of you I can't reply to everyone! Remember to leave a like!).
A note here: DeepFaceLab works best on a sufficiently capable computer, because deep training runs on the CPU and GPU, and a better graphics card means faster and better results. This is not absolute, though; with enough patience you can still produce decent results, and it's all just for fun anyway. (PS: I wrote this article on a Windows 7 machine that is not high-spec; it doesn't matter.)
After installation, you will see files like those below under DeepFaceLab_NVIDIA\:
Some of the files you will see after installation
Here, workspace stores our video material and images. Before anything else, you need to prepare two videos: the source video holds the face you want to swap in (for example, yourself), and the destination video holds the face to be replaced (for example, Stephen Chow). This article replaces Ng Man-tat's face in a short "Are you lecturing me on how to do things?" clip with Shen Teng's, so the source material is Shen Teng and the destination video is that short clip. Rename the source video to data_src.mp4 and the destination video to data_dst.mp4, and place both under workspace. (Make sure the source footage has clear, frontal faces with rich expressions and no occlusion or blur; it does not need to be long.)
The videos placed under workspace
Then double-click 2) extract images from video data_src.bat. This command extracts every frame of the data_src video as an image; press Enter, then Enter again to accept png as the default image format.
Waiting for data_src to be split into frames
Once the console window finishes you can close it; the above is DeepFaceLab using ffmpeg to extract frames. Afterwards, under workspace/data_src you will see every frame of data_src.mp4 as an image:
Every frame extracted from data_src.mp4
Delete the images under data_src that do not contain Shen Teng, as well as the blurry ones. Then run 3) extract images from video data_dst FULL FPS.bat; likewise, this command extracts every frame from data_dst.mp4, and when it finishes you will see the frames under workspace/data_dst.
Every frame extracted from data_dst.mp4
Run 4) data_src faceset extract.bat, which extracts a face from every image under data_src. Proceed as follows:
Press Enter. For Face type choose wf (wf = whole face; it depends on your situation, and this tutorial keeps this default). For Max number of faces from image just press Enter to extract the maximum number. Image Size is the image size; press Enter for the default of 512. Jpeg quality: the higher the better; press Enter for the default of 90. For Write debug images to aligned_debug, press Enter for the default; we don't need it under data_src, but we will when processing data_dst (the program handles this automatically). Then wait for the process to run; the better your hardware, the faster it goes.
When it finishes, you will see all the extracted faces under data_src/aligned; yes, those are Shen Teng's headshots.
The face images under data_src/aligned
Again, delete the blurry photos here. Since we already removed the unsuitable images under data_src/, there should be few blurry headshots left by now.
Click 5) data_dst faceset extract MANUAL.bat to extract the faces from data_dst. This step is important and tedious! Why? The destination material I chose for this article is a short clip with Ng Man-tat and Deanie Ip, so two different faces appear in the video. If your destination video contains only a single person, data_dst faceset extract.bat extracts the faces quickly and directly (so this step's manual drawing is not needed). In my case, multiple faces appear in the same frame, so I have to pick data_dst faceset extract MANUAL.bat; a selection window pops up and the faces must be extracted by hand, as below.
This is the first frame; we only want to replace Ng Man-tat's face with Shen Teng's, so grab it with the mouse. Move the mouse gently and the program draws a contour for the current region; scroll to zoom in and out, left-click to lock, right-click to unlock, and press Enter to confirm. Every image needs this, but you can hold down Enter and only re-adjust when the face moves significantly.
After this step, two directories appear under data_dst: aligned and aligned_debug. The former holds Ng Man-tat's headshots, the latter the drawn facial-contour prototypes.
Finally, the key step: training the model, letting the AI algorithm train continuously.
DeepFaceLab offers two training modes: Quick96 and SAEHD. How to choose? If your GPU has less than 6 GB of VRAM, use the former; SAEHD is a higher-definition model meant for GPUs with at least 6 GB of VRAM. Both models can produce good results, but SAEHD is clearly more powerful and therefore aimed at non-beginners. Why train a model at all? Because we give the AI these materials to learn from; the algorithm recognizes the faces of A and B, learns facial expressions from the former's features, and learns how to map them onto the other face. Training is the AI's process of learning the two faces.
This article uses Quick96. Run 6) train Quick96.bat; the first time you train, it asks you to name the model, and then training begins.
At the start of training
A preview window will then pop up showing five columns of faces. Toward the left is the learning of Shen Teng's face, toward the right Ng Man-tat's, and the last column is the swapped result. At first it is very blurry, but as time goes on (depending on duration and your machine's performance) the model becomes clearer and clearer. Press p to refresh, s to save the current model progress, and Enter to close. Training takes a long time; for mostly frontal footage, it is best not to train for less than 6 hours, and a polished face-swap video demands both skill and training time, even beyond 72 hours if you like. The model can be resumed at any time, so after closing you can run train Quick96.bat again. After training for a while, the preview becomes clear:
After training for a while
When you feel no further training is needed, you can do the final step. Run 7) merge Quick96.bat to start adjusting and merging your model. When prompted Use interactive merger, press y to enter the interactive options screen, as shown:
The configuration hints of the interactive window
Tab switches panels (help / current-frame preview); you must Tab over to the preview window before the keys respond. + and - change the window size. c is the color transfer mode, changing how color is transferred onto the face; the number keys 1-6 at the top switch the face overlay mode; u and j adjust the size of the face overlay; n changes the sharpen mode; y and h adjust the blur effect; w and s adjust the range of the face mask; e and d the degree of mask blur. There are other options as well; mix and adjust them according to the effect you want, which takes patience and practice. < and > step to the previous and next frame, and Shift+< returns to the first frame. After finishing the current frame, the next frame would normally need its options re-adjusted to get a suitable result. You will surely ask: wouldn't adjusting frame by frame take forever? So, if the face does not change much, press / to copy the current options straight to the next frame; to apply them all at once, press Shift+> to copy them through the last frame. When all frames are configured, press Shift+> to run the Merge and watch the progress in the console window.
Below is the approximate effect of one frame (for the sake of this article, bear with the quality):
A frame being adjusted
The image below shows the Merge in progress; when options are updated in the interactive window, the console window prints the updated option info. If pressing a key in the interactive window freezes it, an error has usually occurred; you can see Python's error log in the console window (in that case, close everything and start over).
Running the final merge
When this operation finishes, two extra folders appear under data_dst: merged and merged_mask. The former holds the face-swapped images used to assemble the final video, the latter the mask images. Good; now run the last step of this tutorial, 8) merged to mp4.bat, which produces the final face-swap video. When the process completes, you will find result.mp4 under the workspace directory.
Below is the rough result after 4 hours of training (not long enough):
Parameter configuration 1
Parameter configuration 2
If your settings are not quite right or the training was too short, the result will be unsatisfying; you can repeat step 7 and re-adjust. Alright, enjoy the final face-swap video.
Shen Teng: "Are you lecturing me on how to do things?"
One more: below, Zeng Li's face is swapped onto Gao Yuanyuan's Zhou Zhiruo.
(Frontal footage is recommended; the results will be much better.)
[Zeng Li version of Zhou Zhiruo]
Edited on 2023-05-07 13:23 · IP location: Fujian
|
open
|
2023-06-17T16:44:19Z
|
2023-06-17T16:44:19Z
|
https://github.com/iperov/DeepFaceLab/issues/5686
|
[] |
zelihole
| 0
|
pydantic/logfire
|
fastapi
| 477
|
Is there a way to add tags to a span after it is created?
|
### Question
I have additional tags, coming from the result of a function running inside a span, that I want to add, but I'm not able to add them via `span.tags += ["my_tag"]` with the current API.
Is there some other way to do this?
Thanks!
|
closed
|
2024-10-04T17:23:46Z
|
2024-10-23T00:37:36Z
|
https://github.com/pydantic/logfire/issues/477
|
[
"good first issue",
"Feature Request"
] |
ohmeow
| 6
|
gradio-app/gradio
|
python
| 10,772
|
Gradio ChatInterface's bash/cURL API Example Is Incorrect
|
### Describe the bug
The bash/cURL API example provided for ChatInterface does not work. The server returns "Method Not Allowed", and the server log prints an error message.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
``` python
import gradio as gr
def echo(message, history):
print(f"cool {message} {history}")
return message
demo = gr.ChatInterface(fn=echo, type="messages", examples=["hello", "hola", "merhaba"], title="Echo Bot")
demo.launch()
```
### Screenshot
```
ed@banana:/tmp$ curl -X POST http://localhost:7860/gradio_api/call/chat -s -H "Content-Type: application/json" -d '{
"data": [
"Hello!!"
]}' | awk -F'"' '{ print $4}' | read EVENT_ID; curl -N http://localhost:7860/gradio_api/call/chat/$EVENT_ID
{"detail":"Method Not Allowed"}
```

### Logs
```shell
* Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Traceback (most recent call last):
File "/home/ed/.pyenv/versions/gradio/lib/python3.10/site-packages/gradio/queueing.py", line 625, in process_events
response = await route_utils.call_process_api(
File "/home/ed/.pyenv/versions/gradio/lib/python3.10/site-packages/gradio/route_utils.py", line 322, in call_process_api
output = await app.get_blocks().process_api(
File "/home/ed/.pyenv/versions/gradio/lib/python3.10/site-packages/gradio/blocks.py", line 2092, in process_api
inputs = await self.preprocess_data(
File "/home/ed/.pyenv/versions/gradio/lib/python3.10/site-packages/gradio/blocks.py", line 1761, in preprocess_data
self.validate_inputs(block_fn, inputs)
File "/home/ed/.pyenv/versions/gradio/lib/python3.10/site-packages/gradio/blocks.py", line 1743, in validate_inputs
raise ValueError(
ValueError: An event handler (_submit_fn) didn't receive enough input values (needed: 2, got: 1).
Check if the event handler calls a Javascript function, and make sure its return value is correct.
Wanted inputs:
[<gradio.components.textbox.Textbox object at 0x7c70f759fa00>, <gradio.components.state.State object at 0x7c70f75bbd90>]
Received inputs:
["Hello!!"]
```
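A hedged alternative while the cURL snippet is broken: the official Python client fills in the hidden State input automatically (assuming the ChatInterface endpoint is exposed under its default `api_name="/chat"`):
```python
from gradio_client import Client

client = Client("http://localhost:7860/")
result = client.predict("Hello!!", api_name="/chat")
print(result)
```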
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.17.1
gradio_client version: 1.7.1
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.8.0
audioop-lts is not installed.
fastapi: 0.115.8
ffmpy: 0.5.0
gradio-client==1.7.1 is not installed.
httpx: 0.28.1
huggingface-hub: 0.29.1
jinja2: 3.1.5
markupsafe: 2.1.5
numpy: 2.2.3
orjson: 3.10.15
packaging: 24.2
pandas: 2.2.3
pillow: 11.1.0
pydantic: 2.10.6
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.9.7
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.45.3
tomlkit: 0.13.2
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2025.2.0
httpx: 0.28.1
huggingface-hub: 0.29.1
packaging: 24.2
typing-extensions: 4.12.2
websockets: 14.2
```
### Severity
I can work around it
|
closed
|
2025-03-10T14:59:35Z
|
2025-03-10T15:57:17Z
|
https://github.com/gradio-app/gradio/issues/10772
|
[
"bug"
] |
edmcman
| 2
|
ivy-llc/ivy
|
tensorflow
| 28,258
|
Fix Ivy Failing Test: jax - shape.ivy.Shape.__add__
|
closed
|
2024-02-12T15:50:49Z
|
2024-02-13T09:32:15Z
|
https://github.com/ivy-llc/ivy/issues/28258
|
[
"Sub Task"
] |
fnhirwa
| 0
|
|
plotly/dash
|
data-science
| 2,444
|
change behavior of grouping in go.Scatter x axes
|
When plotting a go.Scatter with many values on the x axis, the points suddenly get grouped, so a single visible point represents multiple values.

How can I represent all data points separately, regardless of how large my x axis is?
Thank you!
|
open
|
2023-03-06T07:42:21Z
|
2024-08-13T19:28:32Z
|
https://github.com/plotly/dash/issues/2444
|
[
"feature",
"P3"
] |
asicoderOfficial
| 5
|
nonebot/nonebot2
|
fastapi
| 3,156
|
Plugin: PCR Sign-in Remake (PCR签到重制版)
|
### PyPI project name
nonebot-plugin-pcr-sign
### Plugin import package name
nonebot_plugin_pcr_sign
### Tags
[{"label":"PCR","color":"#ea5252"},{"label":"签到","color":"#aeeaa8"}]
### Plugin configuration
```dotenv
```
### Plugin test
- [ ] To re-run the plugin test, check the box on the left
|
closed
|
2024-12-03T08:31:02Z
|
2024-12-06T03:13:20Z
|
https://github.com/nonebot/nonebot2/issues/3156
|
[
"Plugin"
] |
FrostN0v0
| 1
|
pytorch/pytorch
|
python
| 149,468
|
torch.library.opcheck doesn't check strides for CPU Tensors
|
Repro:
```py
import torch
from torchvision.transforms.functional import to_pil_image, pil_to_tensor
import PIL
def crop(pic, box):
img = to_pil_image(pic.cpu())
cropped_img = img.crop(box)
return pil_to_tensor(cropped_img).to(pic.device) / 255.
img = torch.ones(3, 64, 64)
img *= torch.linspace(0, 1, steps=64) * torch.linspace(0, 1, steps=64).unsqueeze(-1)
cropped_img = crop(img, (10, 10, 50, 50))
def f(img):
return crop(img, (10, 10, 50, 50))
cropped_img = f(img)
print(img.shape, img.stride())
print(cropped_img.shape, cropped_img.stride())
from typing import Sequence
@torch.library.custom_op("mylib::crop", mutates_args=())
def crop(pic: torch.Tensor, box: Sequence[int]) -> torch.Tensor:
img = to_pil_image(pic.cpu())
cropped_img = img.crop(box)
result = (pil_to_tensor(cropped_img) / 255.).to(pic.device, pic.dtype)
return result
@crop.register_fake
def _(pic, box):
channels = pic.shape[0]
x0, y0, x1, y1 = box
# result = pic.new_empty(y1 - y0, x1 - x0, channels).permute(2, 0, 1)
result = pic.new_empty(channels, y1 - y0, x1 - x0)
return result
result = torch.library.opcheck(crop, (img, (10, 10, 50, 50)))
print(result)
```
cc @ezyang @gchanan @kadeng @msaroufim
|
open
|
2025-03-19T01:32:23Z
|
2025-03-19T01:44:15Z
|
https://github.com/pytorch/pytorch/issues/149468
|
[
"high priority",
"triage review"
] |
zou3519
| 1
|
pywinauto/pywinauto
|
automation
| 607
|
Some Static controls in a dialog cannot be specified, but print_control_identifiers can print them out.
|
Some Static controls in a dialog cannot be specified, even though print_control_identifiers can print them out.
But the Button controls in the same dialog can be specified.
Why?
The error message:
```
MatchError: Could not find 'Static2' in 'dict_keys(['Button', 'OK', 'OKButton', 'Button0', 'Button1', 'Button2', 'CancelButton', 'Cancel'])'
```
In the print_control_identifiers output, the difference between these two categories of controls is that the Static controls have empty ('') titles or strange titles such as 'title1 \r\n title2 \r\n'.
|
closed
|
2018-11-20T08:52:00Z
|
2018-11-22T06:40:31Z
|
https://github.com/pywinauto/pywinauto/issues/607
|
[
"duplicate"
] |
gzll
| 1
|
PokemonGoF/PokemonGo-Bot
|
automation
| 5,638
|
Releasing pokemon feature - Keep Favourites
|
### Short Description
Releasing pokemon feature
### Possible solution
```
"release": {
"any": {"keep_best_iv": 1, **"keep_favourite_pokemon": 1}**,
```
(1 = never release a pokemon with Favoured status, regardless of IV or CP)
### How it would help others
For example, I've got 3 Porygon I want to keep, but I have 7 of them. If I set keep_best_iv=3, it will keep the 3 I favourited, but it will also keep every other pokemon with the best IV, including Rattatas etc., unless I manually create an entry for every single pokemon.
|
closed
|
2016-09-23T17:33:21Z
|
2016-09-26T07:26:28Z
|
https://github.com/PokemonGoF/PokemonGo-Bot/issues/5638
|
[] |
Paradoxum
| 2
|
tflearn/tflearn
|
tensorflow
| 353
|
Does tflearn allow me to use learning rate scheduling?
|
Does it? Or are contributions welcome?
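tflearn's built-in optimizers expose exponential decay parameters, which covers simple scheduling; a minimal sketch (the layer sizes here are illustrative):
```python
import tflearn

net = tflearn.input_data(shape=[None, 784])
net = tflearn.fully_connected(net, 10, activation='softmax')
# SGD with the learning rate decayed exponentially every 100 steps
sgd = tflearn.SGD(learning_rate=0.1, lr_decay=0.96, decay_step=100)
net = tflearn.regression(net, optimizer=sgd, loss='categorical_crossentropy')
model = tflearn.DNN(net)
```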
|
open
|
2016-09-20T10:04:27Z
|
2016-09-22T00:58:13Z
|
https://github.com/tflearn/tflearn/issues/353
|
[] |
changukshin
| 1
|
jupyter-book/jupyter-book
|
jupyter
| 1,460
|
pdfhtml builder should allow A4 paper size (and other formats)
|
### Description
Testing the pdfhtml builder, I noticed that it was impossible to build anything other than Letter-format documents. However, pyppeteer has a format option, so it would be nice if "jb build" also had a "format" option.
### Implementation
This would likely need a fix somewhere around:
https://github.com/executablebooks/jupyter-book/blob/18d52700b8636773f96631b3e187e38075557041/jupyter_book/pdf.py#L48-L85
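For reference, a sketch of the pyppeteer call the builder would need to parameterize (the file path is illustrative; `page.pdf` accepts a `format` key such as "A4"):
```python
import asyncio
from pyppeteer import launch

async def html_to_pdf(url: str, out: str) -> None:
    browser = await launch()
    page = await browser.newPage()
    await page.goto(url)
    await page.pdf({'path': out, 'format': 'A4'})  # instead of the Letter default
    await browser.close()

asyncio.run(html_to_pdf('file:///path/to/index.html', 'book.pdf'))
```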
|
open
|
2021-09-12T15:36:21Z
|
2021-09-12T19:39:26Z
|
https://github.com/jupyter-book/jupyter-book/issues/1460
|
[
"enhancement",
":label: pdfhtml",
"complexity: medium"
] |
mietlicki
| 2
|
FlareSolverr/FlareSolverr
|
api
| 501
|
[1337x] (testing) Exception (1337x): The cookies provided by FlareSolverr are not valid: The cookies provided by FlareSolverr are not valid
|
**Please use the search bar** at the top of the page and make sure you are not creating an already submitted issue.
Check closed issues as well, because your issue may have already been fixed.
### How to enable debug and html traces
[Follow the instructions from this wiki page](https://github.com/FlareSolverr/FlareSolverr/wiki/How-to-enable-debug-and-html-trace)
### Environment
* **FlareSolverr version**:
* **Last working FlareSolverr version**:
* **Operating system**:
* **Are you using Docker**: [yes/no]
* **FlareSolverr User-Agent (see log traces or / endpoint)**:
* **Are you using a proxy or VPN?** [yes/no]
* **Are you using Captcha Solver:** [yes/no]
* **If using captcha solver, which one:**
* **URL to test this issue:**
### Description
[List steps to reproduce the error and details on what happens and what you expected to happen]
### Logged Error Messages
[Place any relevant error messages you noticed from the logs here.]
[Make sure you attach the full logs with your personal information removed in case we need more information]
### Screenshots
[Place any screenshots of the issue here if needed]
|
closed
|
2022-09-03T11:19:46Z
|
2022-09-03T11:21:01Z
|
https://github.com/FlareSolverr/FlareSolverr/issues/501
|
[] |
fitsou
| 0
|
microsoft/nni
|
deep-learning
| 5,720
|
NNI is running and an epoch should have completed, but there's no value on the page?
|
**Describe the issue**:
An epoch should have run by now, but no values appear on the web page.
**Environment**:
- NNI version:2.5
- Training service (local|remote|pai|aml|etc):local
- Client OS:Win10
- Server OS (for remote mode only):
- Python version: 3.7
- PyTorch/TensorFlow version:PyTorch
- Is conda/virtualenv/venv used?:conda
- Is running in Docker?: no
**Configuration**:
searchSpaceFile: search_space.json
trialCommand: python train_nni.py
trialGpuNumber: 0
trialConcurrency: 1
tuner:
name: TPE
classArgs:
optimize_mode: maximize
trainingService:
platform: local

**How to reproduce it?**:
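One common cause of an empty metrics page (an assumption, since the trial code is not shown) is a trial that never reports results; a minimal sketch of what `train_nni.py` would need:
```python
import nni

params = nni.get_next_parameter()        # hyperparameters chosen by the TPE tuner
best_acc = 0.0
for epoch in range(10):
    acc = 0.1 * epoch                    # placeholder for real validation accuracy
    best_acc = max(best_acc, acc)
    nni.report_intermediate_result(acc)  # populates the intermediate-result chart
nni.report_final_result(best_acc)        # populates the trial's default metric
```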
|
open
|
2023-12-10T11:22:42Z
|
2023-12-10T11:22:42Z
|
https://github.com/microsoft/nni/issues/5720
|
[] |
yao-ao
| 0
|
iperov/DeepFaceLab
|
deep-learning
| 5,277
|
"The paging file is too small for this operation to complete" (train Quick96)"
|
THIS IS NOT TECH SUPPORT FOR NEWBIE FAKERS
POST ONLY ISSUES RELATED TO BUGS OR CODE
## Expected behavior
The file should execute normally and open a preview of the deepfake training
## Actual behavior
It gives this ImportError: "The paging file is too small for this operation to complete" (train Quick96).
## Steps to reproduce
1. Normally do a deepfake operation
2. Open "6) train Quick96"
3. It gives the error
## Other relevant information
- **train Quick96.bat**
- **Operating system and version:** Windows, macOS, Linux
- **Python version:** 3.5, 3.6.4, ... (if you are not using prebuilt windows binary)


|
open
|
2021-02-06T19:53:55Z
|
2023-06-08T22:16:37Z
|
https://github.com/iperov/DeepFaceLab/issues/5277
|
[] |
alex-ui
| 4
|
horovod/horovod
|
machine-learning
| 3,475
|
horovod build fails with oneccl and tensorflow
|
**Environment:**
1. Framework: TensorFlow
2. Framework version: 2.6 and 2.8
3. Horovod version: 0.22.1 and 0.24.2
4. MPI version: OneCCL
5. CUDA version: N/A
6. NCCL version: N/A
7. Python version: 3.8.10
8. Spark / PySpark version:
9. Ray version:
10. OS and version: Ubuntu 18.04.4 LTS
11. GCC version: 7.5
12. CMake version: 3.10.2
**Checklist:**
1. Did you search issues to find if somebody asked this question before? Yes
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)? Yes
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)? Yes
4. Did you check if your question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)? Yes
**Bug report:**
Please describe erroneous behavior you're observing and steps to reproduce it.
I followed the steps provided in this link.
https://horovod.readthedocs.io/en/stable/oneccl_include.html
ERROR: Command errored out with exit status 1: ../../../oneCCL_venv/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-02_6xuky/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-02_6xuky/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-5vrb6pf6/install-record.txt --single-version-externally-managed --compile --install-headers ../../../oneCCL_venv/include/site/python3.8/horovod Check the logs for full command output.
|
open
|
2022-03-17T22:28:33Z
|
2023-05-08T14:16:39Z
|
https://github.com/horovod/horovod/issues/3475
|
[
"bug"
] |
ashiqimranintel
| 4
|
ydataai/ydata-profiling
|
pandas
| 809
|
Install fails on python 3.9.6
|
**Describe the bug**
Trying to `pip install pandas-profiling` but I get [this](https://pastebin.com/BZA6tjLh)
**To Reproduce**
**Version information:**
pip 21.1.3
Python 3.9.6
**Additional context**
|
closed
|
2021-07-16T15:57:20Z
|
2021-12-20T11:51:36Z
|
https://github.com/ydataai/ydata-profiling/issues/809
|
[
"dependencies 🔗"
] |
yiannis-had
| 5
|
aiogram/aiogram
|
asyncio
| 807
|
Menu-based wizard as a generalization of finite_state_machine
|
One common use case is a simple wizard, which can be implemented as shown in `examples/finite_state_machine_example.py`.
However, another common use case is when:
* there is no need to choose something at every step;
* it's more convenient to let a user choose the order in which the steps are 'answered';
* steps can form a tree-like hierarchy.
Is it possible to implement such a menu-based "wizard" with the current API, along the lines of the sketch below?
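A rough sketch of how the states could be laid out with the current (v2) API, with a top-level menu state so the user picks the order (the names here are illustrative):
```python
from aiogram.dispatcher.filters.state import State, StatesGroup

class Wizard(StatesGroup):
    menu = State()     # shows buttons for the steps that are still unanswered
    name = State()     # optional step, reachable from the menu in any order
    address = State()  # optional step; sub-menus would give a tree-like hierarchy
```
Each step's handler would return to `Wizard.menu` when done, so the ordering stays user-driven.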
|
closed
|
2022-01-12T23:28:55Z
|
2022-01-17T15:36:01Z
|
https://github.com/aiogram/aiogram/issues/807
|
[
"enhancement",
"question issue"
] |
boyaryn
| 2
|
LibrePhotos/librephotos
|
django
| 746
|
Import already tagged faces with 3rd party software
|
**Describe the enhancement you'd like**
An option to import faces that were already tagged with other software. Popular options are Google Photos and PhotoPrism.
**Describe why this will benefit the LibrePhotos**
Resource economy: I'm running LibrePhotos on a low-power Raspberry Pi 400, and face classification is resource intensive.
Time economy: face clustering/grouping is mostly inaccurate (in my experience) with LibrePhotos, so I have to manually tag almost every face. Importing already-tagged faces would save me a huge amount of time.
**Additional context**
Thanks, and keep it up!
|
closed
|
2023-02-07T05:50:16Z
|
2023-08-02T14:57:32Z
|
https://github.com/LibrePhotos/librephotos/issues/746
|
[
"enhancement"
] |
ippocratis
| 2
|
seleniumbase/SeleniumBase
|
web-scraping
| 2,827
|
Element is not clickable at point (916, 785)
|
How do I solve this?
Normally I would solve it by resizing the window, clicking by XPath, etc., but I can't find how to do it with SB.
Can anyone help me?
The element:
`driver.click("//*[@id='sendButton']")`
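A possible workaround sketch (the URL is illustrative): `js_click` clicks through JavaScript, which sidesteps the "not clickable at point" interception check.
```python
from seleniumbase import SB

with SB() as sb:
    sb.open("https://example.com/chat")   # illustrative page
    sb.js_click("//*[@id='sendButton']")  # XPath selectors are auto-detected
```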
|
closed
|
2024-06-04T11:34:47Z
|
2024-06-04T13:07:35Z
|
https://github.com/seleniumbase/SeleniumBase/issues/2827
|
[
"question"
] |
marconimedeiros2
| 1
|
TracecatHQ/tracecat
|
pydantic
| 275
|
[FEATURE IDEA] User data store for IP lists, etc.
|
**Is your feature request related to a problem? Please describe.**
We have the database sink action, but it is intended to sink to external DBs. A way for users to store data locally in Tracecat, and then query against it in other actions, would be really handy. For example: schedule a workflow once a day to pull the latest TOR exit nodes, then let another workflow for incidents/events compare an IP against that list without pulling the list each time it runs.
Other good use cases for having this capability:
- Pulling the ASN database file from MaxMind and doing lookups where your event data does not supply ASN
- MaxMind GeoIP database for doing geolocation locally
- TOR entry/exit node lookup
- Known VPN node lookup
- Storing the results of a lookup from one of the API integrations, such as pulling a list of very attacked users, or group members from Okta, AD, etc and then comparing against without incurring a lookup every time that workflow runs.
- "Bad" domain or IP lists pushed to other systems for web filtering, firewalls, etc.
**Describe the solution you'd like**
2 actions:
- INSERT/UPDATE/DELETE to a "user data" table in the local tracecat DB
- SELECT from a "user data" table in the local tracecat DB
**Describe alternatives you've considered**
Other alternatives:
- An action to download data to disk and reference it from there, but then the list would have to be loaded, or grep'd each time you wanted to do a lookup which would be inefficient.
- Do the download every time an incoming webhook runs a workflow where you need to access the data. Very data inefficient and could hit rate limits for accessing those files.
**Additional context**
|
open
|
2024-07-27T14:44:13Z
|
2025-02-17T19:06:55Z
|
https://github.com/TracecatHQ/tracecat/issues/275
|
[
"enhancement",
"priority:medium"
] |
mattdurant
| 9
|
hbldh/bleak
|
asyncio
| 1,634
|
Bleak pairing not working; gives "Failed to pair: org.bluez.Error.AuthenticationRejected" error on Raspberry Pi
|
bleak version: 0.22.2
Python version:3.11.6
BlueZ version:5.68
### Description
I used the bleak library to pair BLE devices but get an error like:
```console
Attempting to pair with EA:A0:01:2B:32:A9
Failed to pair: org.bluez.Error.AuthenticationRejected
````
### What I Did
```python
import asyncio
from bleak import BleakScanner, BleakClient
Wahdat_name = "SM-DOM2"
characteristic_uuid = "00002A37-0000-1000-8000-00805f9b34fb"
async def scan_and_pair():
# Scan for devices
print("Scanning for BLE devices...")
devices = await BleakScanner.discover()
if not devices:
print("No BLE devices found.")
return
# List found devices
print("Found devices:")
for i, device in enumerate(devices):
print(f"{i}: {device.name} - {device.address}")
# Check if Wahdat's Android Watch is found by name
device_found = False
for device in devices:
if device.name == Wahdat_name:
device_found = True
selected_device = device.address
break
print("SELECTED",selected_device)
if not device_found:
print(f"{Wahdat_name} not found.")
return
# Pair with Wahdat's Android Watch
async with BleakClient(selected_device) as client:
try:
print(f"Attempting to pair with {selected_device} ({Wahdat_name})...")
paired = await client.pair()
if paired:
value = await client.read_gatt_char(characteristic_uuid)
print("Read value:", value)
print(f"Successfully paired with {selected_device} ({Wahdat_name})")
else:
print(f"Failed to pair with {selected_device} ({Wahdat_name})")
except Exception as e:
print(f"Error during pairing: {e}")
asyncio.run(scan_and_pair())
````
<img width="625" alt="Screenshot 2024-08-26 at 4 41 24 PM" src="https://github.com/user-attachments/assets/f194bc82-6baa-4c5f-bb71-98f819ab9889">
|
open
|
2024-08-27T10:51:46Z
|
2024-08-28T14:46:22Z
|
https://github.com/hbldh/bleak/issues/1634
|
[
"Backend: BlueZ"
] |
wahadatjan
| 3
|
SYSTRAN/faster-whisper
|
deep-learning
| 1,212
|
Empty sequence when using faster-whisper's transcribe on fine-tuned model
|
Hi,
I'm trying to use `faster-whisper` with a model fine-tuned from Whisper's new turbo model: `openai/whisper-large-v3-turbo`.
### Faster-whisper library
When I'm trying to run inference of my fine-tuned model with `faster-whisper`, after converting the model using this command line :
```sh
ct2-transformers-converter --model "mlouala/whisper-diin-v3" --output_dir "whisper-din-v3" --force --copy_files tokenizer_config.json preprocessor_config.json --quantization int8
```
Then running this script :
```python
from faster_whisper import WhisperModel
model_size = "/home/dev/whisper-din-v3"
model = WhisperModel(model_size, device="cuda")
segments, info = model.transcribe('foo.wav', beam_size=5)
for segment in segments:
print(dict(start=segment.start, end=segment.end, text=segment.text))
```
I tested multiple quantizations (int8, int8_float32, int16) and no quantization at all, but it **always** returns an _empty list of segments_.
Nonetheless, it correctly detects the language and the audio's duration, as you can see in the TranscriptionInfo:
```python
TranscriptionInfo(language='fr', language_probability=0.8290529251098633, duration=8.2, duration_after_vad=8.2, all_language_probs=[....], transcription_options=TranscriptionOptions(beam_size=5, best_of=5, patience=1, length_penalty=1, repetition_penalty=1, no_repeat_ngram_size=0, log_prob_threshold=-1.0, no_speech_threshold=0.6, compression_ratio_threshold=2.4, condition_on_previous_text=True, prompt_reset_on_temperature=0.5, temperatures=[0.0, 0.2, 0.4, 0.6, 0.8, 1.0], initial_prompt=None, prefix=None, suppress_blank=True, suppress_tokens=(1, 2, 7, 8, 9, 10, 14, 25, 26, 27, 28, 29, 31, 58, 59, 60, 61, 62, 63, 90, 91, 92, 93, 359, 503, 522, 542, 873, 893, 902, 918, 922, 931, 1350, 1853, 1982, 2460, 2627, 3246, 3253, 3268, 3536, 3846, 3961, 4183, 4667, 6585, 6647, 7273, 9061, 9383, 10428, 10929, 11938, 12033, 12331, 12562, 13793, 14157, 14635, 15265, 15618, 16553, 16604, 18362, 18956, 20075, 21675, 22520, 26130, 26161, 26435, 28279, 29464, 31650, 32302, 32470, 36865, 42863, 47425, 49870, 50254, 50258, 50358, 50359, 50360, 50361), without_timestamps=False, max_initial_timestamp=1.0, word_timestamps=False, prepend_punctuations='"\'“¿([{-', append_punctuations='"\'.。,,!!??::”)]}、', multilingual=False, max_new_tokens=None, clip_timestamps=[0.0], hallucination_silence_threshold=None, hotwords=None), vad_options=None)
```
Also, when I run the base turbo model converted using ct2-transformers-converter, it works fine.
### Genuine Transformers library
However, my model works fine when using this simple code with the genuine transformers library:
```python
from transformers import pipeline
pipe = pipeline(model="mlouala/whisper-diin-v3")
def transcribe(audio):
text = pipe(audio)["text"]
return text
print(transcribe('foo.wav'))
```
Any clues?
|
open
|
2024-12-22T06:55:11Z
|
2025-02-14T16:42:53Z
|
https://github.com/SYSTRAN/faster-whisper/issues/1212
|
[] |
mlouala-dev
| 6
|
Gozargah/Marzban
|
api
| 1,341
|
Make Marzban a paid product
|
Hello, greetings, and thank you for your hard work. For a long time now, many of us have been able to get past the constant hassles of filtering thanks to Marzban's exceptional features. However, Marzban has not been updated for a while and is behind many other panels. That is understandable: you are running the project for free and have a lot on your plates. Donating has also become a bit strange for Marzban users, since no updates or changes are happening.
Please make Marzban a licensed product, so that you have an income and users can be confident that Marzban will keep being updated.
|
closed
|
2024-10-06T12:27:45Z
|
2024-10-10T21:14:24Z
|
https://github.com/Gozargah/Marzban/issues/1341
|
[] |
esodrevo
| 0
|
ultrafunkamsterdam/undetected-chromedriver
|
automation
| 1,650
|
Which combination of browser and uc version works best, or which versions are the most stable?
|
Which combination of browser and uc version works best? I think the best way to proceed is to dockerize this stuff and get past the Cloudflare-protected websites.
|
open
|
2023-11-05T19:21:08Z
|
2023-11-14T06:39:51Z
|
https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1650
|
[] |
juanfrilla
| 3
|
gee-community/geemap
|
jupyter
| 1,044
|
Time Slider for EVI resulting into Trait Error
|
### Environment Information
- geemap version: 0.13
- Python version: 3.8
- Operating System: Windows (although I am using Google Colab)
### Description
I am trying to calculate the Enhanced Vegetation Index (EVI) from SENTINEL-2 images and visualize it with a time slider. I am using the map function to do so; however, it leads to a TraitError.
### What I Did
```python
def calculate_evi(image):
evi = image.expression(
'(2.5 * (NIR - RED)) / ((NIR + 6 * RED - 7.5 * BLUE) + 1)', {
'NIR': image.select('B8'),
'RED': image.select('B4'),
'BLUE': image.select('B2')
})
return evi
evi_map = gm.Map(center = [17.566, 13.590], zoom = 8)
collection = (
ee.ImageCollection('COPERNICUS/S2_SR')
.filterDate('2019-01-01', '2019-12-31')
.filterBounds(ee.Geometry.Point([13.590, 17.566]))
.filterMetadata('CLOUDY_PIXEL_PERCENTAGE', 'less_than', 10)
)
#this operation is performed on an image, so what we need is an image not a collection
evi_image_collection = collection.map(calculate_evi)
eviParams = {'min': -1,
'max': 1,
'palette': ['#d73027', '#f46d43', '#fdae61', '#fee08b', '#ffffbf', '#d9ef8b', '#a6d96a', '#66bd63', '#1a9850'],}
evi_map.addLayer(evi_image_collection, eviParams, 'EVI', False) #evi needs to be an image collection here
evi_map.add_time_slider(evi_image_collection, eviParams) #evi_image this needs to be an image collection here
evi_map.addLayerControl()
evi_map
```
```
---------------------------------------------------------------------------
TraitError Traceback (most recent call last)
<ipython-input-5-7908b5f314b0> in <module>()
26
27 evi_map.addLayer(evi_image_collection, eviParams, 'EVI', False) #evi needs to be an image collection here
---> 28 evi_map.add_time_slider(evi_image_collection, eviParams) #evi_image this needs to be an image collection here
29 evi_map.addLayerControl()
30 evi_map
11 frames
/usr/local/lib/python3.7/dist-packages/ipywidgets/widgets/widget_int.py in _validate_min(self, proposal)
106 min = proposal['value']
107 if min > self.max:
--> 108 raise TraitError('setting min > max')
109 if min > self.value:
110 self.value = min
TraitError: setting min > max
```
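A likely cause (an assumption): `image.expression()` returns an image without the source's metadata, so the mapped collection carries no `system:time_start` for the slider to build its date range from. A sketch of the fix:
```python
import ee

def calculate_evi(image):
    evi = image.expression(
        '(2.5 * (NIR - RED)) / ((NIR + 6 * RED - 7.5 * BLUE) + 1)', {
            'NIR': image.select('B8'),
            'RED': image.select('B4'),
            'BLUE': image.select('B2')
        }).rename('EVI')
    # carry the acquisition time over so add_time_slider can order the images
    return ee.Image(evi.copyProperties(image, ['system:time_start']))
```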
|
closed
|
2022-04-28T08:21:10Z
|
2022-05-06T09:36:09Z
|
https://github.com/gee-community/geemap/issues/1044
|
[
"bug"
] |
aayushmalik
| 4
|
pandas-dev/pandas
|
data-science
| 60,870
|
PERF: Regression in groupby ops from adding skipna
|
https://github.com/rhshadrach/asv-runner/issues/42
Due to #60752 - cc @snitish
|
closed
|
2025-02-06T21:30:07Z
|
2025-02-10T18:24:55Z
|
https://github.com/pandas-dev/pandas/issues/60870
|
[
"Groupby",
"Missing-data",
"Performance",
"Regression"
] |
rhshadrach
| 2
|
FlareSolverr/FlareSolverr
|
api
| 359
|
[yggcookie] Exception (yggcookie): FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Cloudflare Error: Cloudflare has blocked this request. Probably your IP is banned for this site, check in your web browser.: FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Cloudflare Error: Cloudflare has blocked this request. Probably your IP is banned for this site, check in your web browser. (Test)
|
closed
|
2022-04-11T10:24:46Z
|
2022-04-12T14:55:40Z
|
https://github.com/FlareSolverr/FlareSolverr/issues/359
|
[
"duplicate"
] |
windorose
| 1
|
|
mljar/mercury
|
jupyter
| 13
|
Add scrolling if many parameters in the sidebar
|
In the case of many widgets in the sidebar, the Run and Download buttons are not available.
There should be some scrolling available.

|
closed
|
2022-01-18T15:23:21Z
|
2022-01-18T15:36:19Z
|
https://github.com/mljar/mercury/issues/13
|
[] |
pplonski
| 0
|
supabase/supabase-py
|
flask
| 790
|
Cannot get past this empty error
|
# Bug report
## Describe the bug
Trying to execute a simple select query using Python 3.12 or 3.9. I cannot get past this error.
## To Reproduce
```python
Python 3.12.3 (main, Apr 9 2024, 08:09:14) [Clang 15.0.0 (clang-1500.3.9.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from supabase import create_client, Client
>>> from supabase.lib.client_options import ClientOptions
>>> url: str = "https://svyjpvnhftybdowglgmt.supabase.co/rest/v1/iot"
>>> key: str = "OMITTED"
>>> client_options = ClientOptions(postgrest_client_timeout=999999, schema="public")
>>> supabase: Client = create_client(url, key, client_options)
>>> print(supabase.table("iot").select("id").execute())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/markreeves/.local/share/virtualenvs/vercel-tidbyt-35n3k3fp/lib/python3.12/site-packages/postgrest/_sync/request_builder.py", line 78, in execute
raise APIError(r.json())
postgrest.exceptions.APIError: {}
>>> print(supabase)
<supabase._sync.client.SyncClient object at 0x1043c5a30>
>>> print (supabase.table("iot").select("id"))
<postgrest._sync.request_builder.SyncSelectRequestBuilder object at 0x1063f7fe0>
```
I've tried using postgrest directly too and receive the same error. The same happens with `select("*")`.
## Expected behavior
It works in RapidAPI or using `requests` to simply fetch my project URL, so it's not a permissions issue. I expect to not get an error using the documented methods.
## System information
- OS: macOS
- Version of supabase-py: 2.4.5
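A likely cause (an assumption): `create_client` expects the project's base URL, and appending `/rest/v1/iot` makes PostgREST return the empty `APIError({})`. A sketch of the fix:
```python
from supabase import create_client, Client

url: str = "https://svyjpvnhftybdowglgmt.supabase.co"  # base URL, no /rest/v1/iot
key: str = "OMITTED"
supabase: Client = create_client(url, key)
print(supabase.table("iot").select("id").execute())
```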
|
closed
|
2024-05-05T22:36:32Z
|
2024-05-22T20:24:19Z
|
https://github.com/supabase/supabase-py/issues/790
|
[
"invalid"
] |
heymarkreeves
| 1
|
miguelgrinberg/microblog
|
flask
| 14
|
Documentation not available
|
The (excellent!) [tutorial](http://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-i-hello-world) for which this project appears to be a working example is currently unavailable because of an internal server error. To avoid this in the future, it might be nice to just put the tutorial in the same repo.
|
closed
|
2015-02-28T17:46:24Z
|
2017-12-10T20:18:47Z
|
https://github.com/miguelgrinberg/microblog/issues/14
|
[] |
DanielSank
| 1
|
praw-dev/praw
|
api
| 1,900
|
Code snippets not displayed correctly in docs
|
### Describe the solution you'd like
The code snippets in the [Step 5: Cleaning Up The Code](https://praw.readthedocs.io/en/latest/tutorials/reply_bot.html#step-5-cleaning-up-the-code) of the Submission Stream Reply Bot example are not displayed correctly.

### Describe alternatives you've considered
_No response_
### Additional context
_No response_
|
closed
|
2022-10-05T07:58:25Z
|
2022-11-04T19:23:46Z
|
https://github.com/praw-dev/praw/issues/1900
|
[
"Documentation"
] |
asztalosdani
| 2
|
PeterL1n/BackgroundMattingV2
|
computer-vision
| 51
|
dataset.images.ImageDataset is sensitive to files with upper case letters in extension
|
img.PNG or img.Jpg aren't picked up by
```
[
*glob.glob(os.path.join(root, "**", "*.jpg"), recursive=True),
*glob.glob(os.path.join(root, "**", "*.png"), recursive=True),
]
```
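A case-insensitive sketch that would pick those files up, filtering on the lowered extension instead of hard-coding the patterns:
```python
import glob
import os

root = "dataset"  # illustrative root directory
files = [
    p for p in glob.glob(os.path.join(root, "**", "*"), recursive=True)
    if os.path.splitext(p)[1].lower() in (".jpg", ".jpeg", ".png")
]
```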
|
closed
|
2021-01-20T13:48:01Z
|
2021-01-29T08:49:03Z
|
https://github.com/PeterL1n/BackgroundMattingV2/issues/51
|
[] |
kasperschnack
| 0
|
deepset-ai/haystack
|
machine-learning
| 8,556
|
Add variable to specify inputs as optional to ConditionalRouter
|
**Is your feature request related to a problem? Please describe.**
We are using a conditional router to route to different pipeline branches based on a parameter we insert during runtime. We would like to also set a default branch in case the respective parameter (path) was not inserted at runtime.
Through testing we figured out this works if you run the component individually. So
```python
from haystack.components.routers import ConditionalRouter
routes = [
{
"condition": '{{path == "rag"}}',
"output": "{{question}}",
"output_name": "normal",
"output_type": str
},
{
"condition": '{{path == "followup_short"}}',
"output": "{{question}}",
"output_name": "followup_short",
"output_type": str
},
{
"condition": '{{path == "followup_elaborate"}}',
"output": "{{question}}",
"output_name": "followup_elaborate",
"output_type": str
},
{
"condition": "{{ True }}",
"output": "{{ question }}",
"output_name": "fallback",
"output_type": str
}
]
router = ConditionalRouter(routes)
res = router.run(question="What?")
print(res)
```
However, when inserting it into a Pipeline it no longer works
```python
from haystack import Pipeline
from haystack.components.routers import ConditionalRouter
routes = [
{
"condition": '{{path == "rag"}}',
"output": "{{question}}",
"output_name": "normal",
"output_type": str
},
{
"condition": '{{path == "followup_short"}}',
"output": "{{question}}",
"output_name": "followup_short",
"output_type": str
},
{
"condition": '{{path == "followup_elaborate"}}',
"output": "{{question}}",
"output_name": "followup_elaborate",
"output_type": str
},
{
"condition": "{{ True }}",
"output": "{{ question }}",
"output_name": "fallback",
"output_type": str
}
]
router = ConditionalRouter(routes)
pipe = Pipeline()
pipe.add_component("router", router)
res = pipe.run(data={"router": {"question": "What?"}})
print(res)
```
It would be great to add a way to make parameters (or some of them) optional. Perhaps, following the PromptBuilder, we could add a `required_variables` parameter; or, to avoid a breaking change, do the inverse and add an `optional_variables` list, so that all parameters stay required by default (the current behavior) and some can be explicitly marked optional.
|
closed
|
2024-11-19T10:00:46Z
|
2024-11-26T09:48:56Z
|
https://github.com/deepset-ai/haystack/issues/8556
|
[
"P1"
] |
sjrl
| 0
|
CorentinJ/Real-Time-Voice-Cloning
|
python
| 830
|
Single Voice Synthesizer Training - Error
|
Hello @Bluefish and all,
I am running the toolbox on Win10, under Anaconda3 (run as administrator), env: VoiceClone, using an NVIDIA RTX 2070 Super, CUDA version 11.1, tensorflow v2.6.0, pytorch 3, with all other requirements met. The toolbox GUI (demo_toolbox.py) works fine.
My project is to use the toolbox to clone 15 voices from a computer simulation (to be able to add additional text in those voices back into the sim), one voice at a time, using the Single Voice method described in Issue #437.
For proof of principle, I built a custom single-voice dataset for one of those voices, following the instructions in Issue #437 and this comment: https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/437#issuecomment-666099538 . My dataset consists of 329 utterances (1.wav through 329.wav, 1.txt through 329.txt, a total of about 45 minutes of speech in 3-12 second increments) arranged using the LibriTTS method and schema, into folder ...dataset_root\LibriTTS\train-clean-100\speaker_012\book-001\
I was able to successfully preprocess the single-voice data using synthesizer_preprocess_audio.py and synthesizer_preprocess_embeds.py, which produced what appears to be proper audio, mels and embed data which properly went to "...datasets_root\SV2TTS\synthesizer\" in folders "audio", "embeds", "mels" and file "train.txt"
I placed the preprocessed data folders into c:\utilities\SV2TTS\synthesizer\saved_models\logs-singlespeaker\datasets_root\SV2TTS\synthesizer
The "pretrained" folder already existed in c:\utilities\synthesizer" (with the LibriSpeech pretrained.pt from the original toolbox installation).
I then tried to train the synthesizer using the following command:
`python synthesizer_train.py pretrained synthesizer/saved_models/logs-singlespeaker/datasets_root/SV2TTS/synthesizer --summary_interval 125 --checkpoint_interval 100`
synthesizer_train.py responded that --summary_interval and --checkpoint_interval were invalid arguments
I tried training again, with command:
`python synthesizer_train.py pretrained synthesizer/saved_models/logs-singlespeaker/datasets_root/SV2TTS/synthesizer --save_every 100`
synthesizer_train.py responded this way:
````
Arguments:
run_id: pretrained
syn_dir: synthesizer/saved_models/logs-singlespeaker/datasets_root/SV2TTS/synthesizer
models_dir: synthesizer/saved_models/
save_every: 100
backup_every: 25000
force_restart: False
hparams:
Checkpoint path: synthesizer\saved_models\pretrained\pretrained.pt
Loading training data from: synthesizer\saved_models\logs-singlespeaker\datasets_root\SV2TTS\synthesizer\train.txt
Using model: Tacotron
Using device: cuda
Initialising Tacotron Model...
Trainable Parameters: 30.870M
Loading weights at synthesizer\saved_models\pretrained\pretrained.pt
Tacotron weights loaded from step 295000
Using inputs from:
synthesizer\saved_models\logs-singlespeaker\datasets_root\SV2TTS\synthesizer\train.txt
synthesizer\saved_models\logs-singlespeaker\datasets_root\SV2TTS\synthesizer\mels
synthesizer\saved_models\logs-singlespeaker\datasets_root\SV2TTS\synthesizer\embeds
Found 328 samples
+----------------+------------+---------------+------------------+
| Steps with r=2 | Batch Size | Learning Rate | Outputs/Step (r) |
+----------------+------------+---------------+------------------+
| 25k Steps | 12 | 3e-05 | 2 |
+----------------+------------+---------------+------------------+
Traceback (most recent call last):
File "synthesizer_train.py", line 35, in <module>
train(**vars(args))
File "C:\Utilities\SV2TTS\synthesizer\train.py", line 158, in train
for i, (texts, mels, embeds, idx) in enumerate(data_loader, 1):
File "C:\Users\Colt_\.conda\envs\VoiceClone\lib\site-packages\torch\utils\data\dataloader.py", line 359, in __iter__
return self._get_iterator()
File "C:\Users\Colt_\.conda\envs\VoiceClone\lib\site-packages\torch\utils\data\dataloader.py", line 305, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "C:\Users\Colt_\.conda\envs\VoiceClone\lib\site-packages\torch\utils\data\dataloader.py", line 918, in __init__
w.start()
File "C:\Users\Colt_\.conda\envs\VoiceClone\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "C:\Users\Colt_\.conda\envs\VoiceClone\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\Colt_\.conda\envs\VoiceClone\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Users\Colt_\.conda\envs\VoiceClone\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
reduction.dump(process_obj, to_child)
File "C:\Users\Colt_\.conda\envs\VoiceClone\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'train.<locals>.<lambda>'
(VoiceClone) C:\Utilities\SV2TTS>Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\Colt_\.conda\envs\VoiceClone\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\Users\Colt_\.conda\envs\VoiceClone\lib\multiprocessing\spawn.py", line 115, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
````
synthesizer_train.py did build folder "pretrained" inside folder c:\utilities\SV2TTS\synthesizer\saved_models\", as it should have. Inside were 4 empty folders (mel-spectrograms, metas, plots, wavs).
I also tried synthesizer training using similar commands with different run_ids and from different syn_dirs and got exactly the same response. Any idea why Python is throwing these pickling errors? Could this have anything to do with Conda being installed in my user directory?
Any help would be greatly appreciated.
Thanks,
Tomcattwo
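Two common workarounds for this pickling error on Windows (assumptions, not a confirmed fix for this repo): spawn-based multiprocessing must pickle the DataLoader's collate function, and a lambda defined inside `train()` cannot be pickled.
```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.zeros(4, 3))  # stand-in for the real synthesizer dataset

# 1) run the loader single-process so nothing needs to be pickled
loader = DataLoader(dataset, batch_size=2, num_workers=0)

# 2) or move the in-function lambda to module level, which spawn can pickle
def collate(batch):
    return batch  # placeholder for the real collate logic

loader = DataLoader(dataset, batch_size=2, num_workers=2, collate_fn=collate)
```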
|
closed
|
2021-08-26T01:29:20Z
|
2021-08-26T02:23:32Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/830
|
[] |
Tomcattwo
| 1
|
autogluon/autogluon
|
data-science
| 3,921
|
During inference, multi-GPUs are not used with DDP strategy. Only single GPU is used.
|
I have 8 GPUs in a node; with "ddp_find_unused_parameters_true" as the strategy, it can train an AutoMM model on multiple GPUs.
However, during inference with .predict(), the trained model cannot use multiple GPUs; only 1 GPU is used.
|
closed
|
2024-02-15T03:52:09Z
|
2024-08-26T22:18:55Z
|
https://github.com/autogluon/autogluon/issues/3921
|
[
"bug: unconfirmed",
"Needs Triage",
"module: multimodal"
] |
hohoCode
| 3
|
davidsandberg/facenet
|
tensorflow
| 1,249
|
Find
|

|
open
|
2024-04-15T15:05:31Z
|
2024-04-15T15:05:31Z
|
https://github.com/davidsandberg/facenet/issues/1249
|
[] |
Ali1819299036378
| 0
|
deezer/spleeter
|
tensorflow
| 427
|
[Bug] RuntimeError: dictionary changed size during iteration
|
<!-- PLEASE READ THIS CAREFULLY :
- Any issue which does not respect following template or lack of information will be considered as invalid and automatically closed
- First check FAQ from wiki to see if your problem is not already known
-->
## Description
I get a RuntimeError on OSX 10.15.4
## Step to reproduce
Install using pip, try example.
## Output
```
(env) ❯ spleeter separate -i spleeter-example.mp3 -p spleeter:2stems -o output
Traceback (most recent call last):
File "/Users/janwirth/blog/env/bin/spleeter", line 10, in <module>
sys.exit(entrypoint())
File "/Users/janwirth/blog/env/lib/python3.7/site-packages/spleeter/__main__.py", line 54, in entrypoint
main(sys.argv)
File "/Users/janwirth/blog/env/lib/python3.7/site-packages/spleeter/__main__.py", line 36, in main
enable_logging()
File "/Users/janwirth/blog/env/lib/python3.7/site-packages/spleeter/utils/logging.py", line 60, in enable_logging
tf_logger = get_tensorflow_logger()
File "/Users/janwirth/blog/env/lib/python3.7/site-packages/spleeter/utils/logging.py", line 27, in get_tensorflow_logger
from tensorflow.compat.v1 import logging
File "/Users/janwirth/blog/env/lib/python3.7/site-packages/tensorflow/__init__.py", line 99, in <module>
from tensorflow_core import *
File "/Users/janwirth/blog/env/lib/python3.7/site-packages/tensorflow_core/__init__.py", line 28, in <module>
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
File "<frozen importlib._bootstrap>", line 1019, in _handle_fromlist
File "/Users/janwirth/blog/env/lib/python3.7/site-packages/tensorflow/__init__.py", line 50, in __getattr__
module = self._load()
File "/Users/janwirth/blog/env/lib/python3.7/site-packages/tensorflow/__init__.py", line 44, in _load
module = _importlib.import_module(self.__name__)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/Users/janwirth/blog/env/lib/python3.7/site-packages/tensorflow_core/python/__init__.py", line 52, in <module>
from tensorflow.core.framework.graph_pb2 import *
File "/Users/janwirth/blog/env/lib/python3.7/site-packages/tensorflow_core/core/framework/graph_pb2.py", line 7, in <module>
from google.protobuf import descriptor as _descriptor
File "/Users/janwirth/blog/env/lib/python3.7/site-packages/google/protobuf/__init__.py", line 37, in <module>
__import__('pkg_resources').declare_namespace(__name__)
File "/Users/janwirth/blog/env/lib/python3.7/site-packages/pkg_resources/__init__.py", line 83, in <module>
__import__('pkg_resources.extern.packaging.requirements')
File "/Users/janwirth/blog/env/lib/python3.7/site-packages/pkg_resources/_vendor/packaging/requirements.py", line 9, in <module>
from pkg_resources.extern.pyparsing import stringStart, stringEnd, originalTextFor, ParseException
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 668, in _load_unlocked
File "<frozen importlib._bootstrap>", line 638, in _load_backward_compatible
File "/Users/janwirth/blog/env/lib/python3.7/site-packages/pkg_resources/extern/__init__.py", line 43, in load_module
__import__(extant)
File "/Users/janwirth/blog/env/lib/python3.7/site-packages/pkg_resources/_vendor/pyparsing.py", line 4756, in <module>
_escapedPunc = Word( _bslash, r"\[]-*.$+^?()~ ", exact=2 ).setParseAction(lambda s,l,t:t[0][1])
File "/Users/janwirth/blog/env/lib/python3.7/site-packages/pkg_resources/_vendor/pyparsing.py", line 1284, in setParseAction
self.parseAction = list(map(_trim_arity, list(fns)))
File "/Users/janwirth/blog/env/lib/python3.7/site-packages/pkg_resources/_vendor/pyparsing.py", line 1066, in _trim_arity
this_line = extract_stack(limit=2)[-1]
File "/Users/janwirth/blog/env/lib/python3.7/site-packages/pkg_resources/_vendor/pyparsing.py", line 1050, in extract_stack
frame_summary = traceback.extract_stack(limit=-offset+limit-1)[offset]
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/traceback.py", line 211, in extract_stack
stack = StackSummary.extract(walk_stack(f), limit=limit)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/traceback.py", line 363, in extract
f.line
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/traceback.py", line 285, in line
self._line = linecache.getline(self.filename, self.lineno).strip()
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/linecache.py", line 16, in getline
lines = getlines(filename, module_globals)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/linecache.py", line 48, in getlines
for mod in sys.modules.values():
RuntimeError: dictionary changed size during iteration
```
```
(env) ❯ python --version
Python 3.7.3
```
## Environment
| | |
| ----------------- | ------------------------------- |
| OS | OSX 10.15.4 |
| Installation type | pip |
| RAM available | 32g |
|
closed
|
2020-06-20T11:59:35Z
|
2020-10-19T10:47:17Z
|
https://github.com/deezer/spleeter/issues/427
|
[
"bug",
"invalid"
] |
janwirth
| 5
|
netbox-community/netbox
|
django
| 18,363
|
cant create vm mac-address via api
|
### Deployment Type
Self-hosted
### Triage priority
N/A
### NetBox Version
v4.2.1
### Python Version
3.10
### Steps to Reproduce
1. create an interface on a vm
2. attempt to use the api to create a mac address linked to the interface id (curl example from swagger)
```
curl -X 'POST' \
'https://netbox.dev.domain/api/dcim/mac-addresses/' \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-H 'X-CSRFTOKEN:{removed}' \
-d '{
"mac_address": "BC:24:11:7E:0E:BF",
"assigned_object_type": "virtualization.vminterface",
"assigned_object_id": 1
}'
```
### Expected Behavior
mac address gets created
### Observed Behavior
400 response:
```
Error: Bad Request
Response body
Download
{
"assigned_object_id": [
"This field cannot be null."
]
}
```
|
closed
|
2025-01-09T14:12:04Z
|
2025-01-09T18:55:20Z
|
https://github.com/netbox-community/netbox/issues/18363
|
[
"type: bug",
"status: accepted",
"severity: medium"
] |
ITJamie
| 0
|
lanpa/tensorboardX
|
numpy
| 134
|
add_text not working with tensorboard > 1.6
|
First I just want to say this is a fantastic library! Really useful work.
Second, I just tried `add_text` and found that the results come up blank in tensorboard >= 1.7, but it works with tensorboard 1.6.
I installed the tensorboardX package from source.
I'm not sure if this is a tensorboardX issue or a tensorboard issue, but I thought it was worth bringing up.
Thanks,
Tom
|
closed
|
2018-05-01T14:55:22Z
|
2018-09-19T19:08:45Z
|
https://github.com/lanpa/tensorboardX/issues/134
|
[] |
teffland
| 9
|
yunjey/pytorch-tutorial
|
pytorch
| 55
|
Assertion Error
|
Hello, I'm trying to apply this to video classification. Let's say I have 3 features per frame and 10 frames to consider. I am using this example; I think `sequence_length=10` and `input_size=3`, am I right? This is my code for the architecture.
```
# Hyper Parameters
sequence_length = 10
input_size = 3
hidden_size = 128
num_layers = 2
num_classes = 2
batch_size = 100
num_epochs = 2
learning_rate = 0.003
train_loader = torch.utils.data.DataLoader(dataset=spatio_dataset,
batch_size=batch_size,
shuffle=True)
print(train_loader)
# BiRNN Model (Many-to-One)
class BiRNN(nn.Module):
def __init__(self, input_size, hidden_size, num_layers, num_classes):
super(BiRNN, self).__init__()
self.hidden_size = hidden_size
self.num_layers = num_layers
self.lstm = nn.LSTM(input_size, hidden_size, num_layers,
batch_first=True, bidirectional=True)
self.fc = nn.Linear(hidden_size*2, num_classes) # 2 for bidirection
def forward(self, x):
# Set initial states
h0 = Variable(torch.zeros(self.num_layers*2, x.size(0), self.hidden_size)).cuda() # 2 for bidirection
c0 = Variable(torch.zeros(self.num_layers*2, x.size(0), self.hidden_size)).cuda()
# Forward propagate RNN
out, _ = self.lstm(x, (h0, c0))
# Decode hidden state of last time step
out = self.fc(out[:, -1, :])
return out
rnn = BiRNN(input_size, hidden_size, num_layers, num_classes)
rnn.cuda()
# Loss and Optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
```
Here is my train code:
```
# Train the Model
for epoch in range(num_epochs):
for i, (spatio, label) in enumerate(train_loader):
#print(spatio)
#print(label)
spatio = Variable(spatio.view(-1, sequence_length, input_size)).cuda()
label = Variable(label).cuda()
print(spatio.size())
print(label.size())
# Forward + Backward + Optimize
optimizer.zero_grad()
outputs = rnn(spatio)
loss = criterion(outputs, label)
loss.backward()
optimizer.step()
if (i+1) % 100 == 0:
print ('Epoch [%d/%d], Step [%d/%d], Loss: %.4f'
%(epoch+1, num_epochs, i+1, len(train_dataset)//batch_size, loss.data[0]))
```
However I got This error:
```
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-54-5beb6b9da8ce> in <module>()
11 # Forward + Backward + Optimize
12 optimizer.zero_grad()
---> 13 outputs = rnn(spatio)
14 loss = criterion(outputs, label)
15 loss.backward()
/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs)
204
205 def __call__(self, *input, **kwargs):
--> 206 result = self.forward(*input, **kwargs)
207 for hook in self._forward_hooks.values():
208 hook_result = hook(self, input, result)
<ipython-input-51-474be96e77da> in forward(self, x)
29
30 # Forward propagate RNN
---> 31 out, _ = self.lstm(x, (h0, c0))
32
33 # Decode hidden state of last time step
/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs)
204
205 def __call__(self, *input, **kwargs):
--> 206 result = self.forward(*input, **kwargs)
207 for hook in self._forward_hooks.values():
208 hook_result = hook(self, input, result)
/usr/local/lib/python2.7/dist-packages/torch/nn/modules/rnn.pyc in forward(self, input, hx)
89 dropout_state=self.dropout_state
90 )
---> 91 output, hidden = func(input, self.all_weights, hx)
92 if is_packed:
93 output = PackedSequence(output, batch_sizes)
/usr/local/lib/python2.7/dist-packages/torch/nn/_functions/rnn.pyc in forward(input, *fargs, **fkwargs)
341 else:
342 func = AutogradRNN(*args, **kwargs)
--> 343 return func(input, *fargs, **fkwargs)
344
345 return forward
/usr/local/lib/python2.7/dist-packages/torch/autograd/function.pyc in _do_forward(self, *input)
200 self._nested_input = input
201 flat_input = tuple(_iter_variables(input))
--> 202 flat_output = super(NestedIOFunction, self)._do_forward(*flat_input)
203 nested_output = self._nested_output
204 nested_variables = _unflatten(flat_output, self._nested_output)
/usr/local/lib/python2.7/dist-packages/torch/autograd/function.pyc in forward(self, *args)
222 def forward(self, *args):
223 nested_tensors = _map_variable_tensor(self._nested_input)
--> 224 result = self.forward_extended(*nested_tensors)
225 del self._nested_input
226 self._nested_output = result
/usr/local/lib/python2.7/dist-packages/torch/nn/_functions/rnn.pyc in forward_extended(self, input, weight, hx)
283 hy = tuple(h.new() for h in hx)
284
--> 285 cudnn.rnn.forward(self, input, hx, weight, output, hy)
286
287 self.save_for_backward(input, hx, weight, output)
/usr/local/lib/python2.7/dist-packages/torch/backends/cudnn/rnn.pyc in forward(fn, input, hx, weight, output, hy)
253 w.zero_()
254 params = get_parameters(fn, handle, w)
--> 255 _copyParams(weight, params)
256
257 if tuple(hx.size()) != hidden_size:
/usr/local/lib/python2.7/dist-packages/torch/backends/cudnn/rnn.pyc in _copyParams(params_from, params_to)
181 for layer_params_from, layer_params_to in zip(params_from, params_to):
182 for param_from, param_to in zip(layer_params_from, layer_params_to):
--> 183 assert param_from.type() == param_to.type()
184 param_to.copy_(param_from)
185
AssertionError:
```
After checking the shapes, the example and my code have the same input shape, but the label shape is different.
Example MNIST shape:
```
torch.Size([100, 28, 28])
torch.Size([100])
```
While my shape is:
```
torch.Size([100, 10, 3])
torch.Size([100, 1])
```
Does anybody know how to solve this error? If my assumption about the shape is right, do you know how to reshape `torch.Size([100, 1])` into `torch.Size([100])`?
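For what it's worth, a minimal sketch of the reshape, plus a guess at the cuDNN type assert (my assumption: the DataLoader yields float64 tensors from numpy, which would explain `param_from.type() == param_to.type()` failing):
```python
# Hedged sketch: reshape the labels and cast the inputs to float32.
# `label.view(-1)` turns torch.Size([100, 1]) into torch.Size([100]);
# CrossEntropyLoss expects class indices as a LongTensor.
spatio = Variable(spatio.view(-1, sequence_length, input_size).float()).cuda()
label = Variable(label.view(-1).long()).cuda()
```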
-Thank You-
|
open
|
2017-07-23T19:11:16Z
|
2017-11-28T13:50:39Z
|
https://github.com/yunjey/pytorch-tutorial/issues/55
|
[] |
herleeyandi
| 1
|
pydantic/pydantic
|
pydantic
| 10,572
|
Error when adding a default value to a field with name conflict
|
### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
The following example raises the error `AttributeError: 'NoneType' object has no attribute 'Other'`.
No error is raised if the default value isn't specified, or if Python's dataclasses are used. No error either if the `other` field is renamed to something else.
### Example Code
```Python
# other.py
from pydantic.dataclasses import dataclass
@dataclass
class Other:
x: int
# test.py
from pydantic.dataclasses import dataclass
import other
@dataclass
class Class:
other: "other.Other | None" = None
```
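A hedged workaround sketch (an assumption on my side, not a confirmed fix): aliasing the module import so the field name `other` no longer shadows the module referenced in the string annotation:
```python
# test.py, hypothetical workaround: the annotation now resolves to the alias,
# not to the field default, when the string annotation is evaluated.
from pydantic.dataclasses import dataclass
import other as other_module

@dataclass
class Class:
    other: "other_module.Other | None" = None
```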
### Python, Pydantic & OS Version
```Text
pydantic version: 2.10.0a1
pydantic-core version: 2.24.0
pydantic-core build: profile=release pgo=false
install path: [XXX]/.venv/lib/python3.12/site-packages/pydantic
python version: 3.12.6 (main, Sep 10 2024, 00:05:17) [GCC 11.4.0]
platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
related packages: typing_extensions-4.12.2 mypy-1.11.2
commit: unknown
```
|
closed
|
2024-10-08T14:23:49Z
|
2024-10-15T08:49:12Z
|
https://github.com/pydantic/pydantic/issues/10572
|
[
"duplicate"
] |
AdrienVannson
| 2
|
scrapy/scrapy
|
web-scraping
| 6,106
|
Remove deprecated code moved to itemloaders
|
Deprecated in 2.3.0.
* scrapy.utils.misc.extract_regex()
* scrapy.loader.common
* scrapy.loader.processors
|
closed
|
2023-10-17T19:12:02Z
|
2023-10-18T10:52:31Z
|
https://github.com/scrapy/scrapy/issues/6106
|
[
"good first issue",
"cleanup"
] |
wRAR
| 0
|
indico/indico
|
flask
| 6,368
|
Date Field in Registration Form Only Shows Previous and Next 5 years
|
When trying to use a Date field in a registration form for a date of birth, users are only able to choose from the previous or next 5 years.
Some people think that you cannot enter a date before 2019, which is a problem for a date of birth.
Of course, when you type the date it works.
Is this normal behaviour or should the list of years be scrollable?
**To Reproduce**
Steps to reproduce the behavior:
1. Create a new registration form
2. Add a Date field
3. Try to complete the registration form as a user
4. See problem
**Screenshots**

|
closed
|
2024-05-30T12:05:20Z
|
2024-10-10T14:48:29Z
|
https://github.com/indico/indico/issues/6368
|
[
"bug"
] |
aicampbell
| 3
|
coqui-ai/TTS
|
pytorch
| 3,148
|
[Bug] XTTS v2.0 finetuning - wrong checkpoint links
|
### Describe the bug
Hi there,
I believe that in the new XTTS v2.0 fine-tuning recipe, there needs to be a change to the following lines:
```
TOKENIZER_FILE_LINK = "https://coqui.gateway.scarf.sh/hf-coqui/XTTS-v1/v2.0/vocab.json"
XTTS_CHECKPOINT_LINK = "https://coqui.gateway.scarf.sh/hf-coqui/XTTS-v1/v2.0/model.pth"
```
It's impossible to reach these URLs.
Thanks.
### To Reproduce
```
python recipes/ljspeech/xtts_v2/train_gpt_xtts.py
```
### Expected behavior
Training
### Logs
```shell
/home/raph/repos/TTS/TTS/tts/layers/xtts/trainer/dataset.py:10: UserWarning: Torchaudio's I/O functions now support par-call bakcend dispatch. Importing backend implementation directly is no longer guaranteed to work. Please use `backend` keyword with load/save/info function, instead of calling the udnerlying implementation directly.
from torchaudio.backend.soundfile_backend import load as torchaudio_soundfile_load
/home/raph/repos/TTS/TTS/tts/layers/xtts/trainer/dataset.py:11: UserWarning: Torchaudio's I/O functions now support par-call bakcend dispatch. Importing backend implementation directly is no longer guaranteed to work. Please use `backend` keyword with load/save/info function, instead of calling the udnerlying implementation directly.
from torchaudio.backend.sox_io_backend import load as torchaudio_sox_load
/home/raph/miniconda3/envs/TTS/lib/python3.10/site-packages/torch/nn/utils/weight_norm.py:30: UserWarning: torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.
warnings.warn("torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.")
Traceback (most recent call last):
File "/home/raph/repos/TTS/recipes/ljspeech/xtts_v2/train_gpt_xtts.py", line 232, in <module>
main()
File "/home/raph/repos/TTS/recipes/ljspeech/xtts_v2/train_gpt_xtts.py", line 204, in main
model = GPTTrainer.init_from_config(config)
File "/home/raph/repos/TTS/TTS/tts/layers/xtts/trainer/gpt_trainer.py", line 500, in init_from_config
return GPTTrainer(config)
File "/home/raph/repos/TTS/TTS/tts/layers/xtts/trainer/gpt_trainer.py", line 79, in __init__
self.xtts.tokenizer = VoiceBpeTokenizer(self.args.tokenizer_file)
File "/home/raph/repos/TTS/TTS/tts/layers/xtts/tokenizer.py", line 540, in __init__
self.tokenizer = Tokenizer.from_file(vocab_file)
Exception: expected value at line 1 column 1
~/repos/TTS main !1 ?3 vim recipes/ljspeech
```
```
### Environment
```shell
{
"CUDA": {
"GPU": [
"NVIDIA A100-PCIE-40GB",
"NVIDIA A100-PCIE-40GB",
"NVIDIA A100-PCIE-40GB",
"NVIDIA A100-PCIE-40GB"
],
"available": true,
"version": "12.1"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.1.0+cu121",
"TTS": "0.20.0",
"numpy": "1.22.0"
},
"System": {
"OS": "Linux",
"architecture": [
"64bit",
"ELF"
],
"processor": "x86_64",
"python": "3.10.13",
"version": "#98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023"
}
}
```
### Additional context
_No response_
|
closed
|
2023-11-06T17:06:50Z
|
2023-12-12T06:56:07Z
|
https://github.com/coqui-ai/TTS/issues/3148
|
[
"bug"
] |
rlenain
| 4
|
ymcui/Chinese-LLaMA-Alpaca
|
nlp
| 316
|
Question about second-stage pre-training of the model
|
Hello, I am using the run_clm_pt_with_peft.py code for second-stage pre-training, likewise on A100 (40G) machines with a sequence length of 512 and the parameters from the pre-training script you provided. It reports trainable params: 183828480 || all params: 6660100096 || trainable%: 2.760145903969309, i.e. only 2.76% of the parameters are trainable, which is far from the 6.06% in your paper. Also, with 3 GPUs the maximum batch_size I can use is 2, which is also a long way from your 1024. What could be the possible reasons?
|
closed
|
2023-05-11T11:04:00Z
|
2023-05-23T22:02:31Z
|
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/316
|
[
"stale"
] |
longeryeah
| 8
|
custom-components/pyscript
|
jupyter
| 451
|
no api documentation or link pointing to it somewhere else
|
Hi, where can we find API docs that describe return signatures? Event data and such?
If they are supposed to be in the HASS API docs (they aren't there now), then I will ask there...
|
open
|
2023-03-15T17:10:30Z
|
2023-09-22T10:38:54Z
|
https://github.com/custom-components/pyscript/issues/451
|
[] |
Morriz
| 3
|
TencentARC/GFPGAN
|
pytorch
| 116
|
RealESRGANer returns fully black image after upsampling
|
RealESRGANer returns a fully black image after upsampling.
Cuda Version: 11.3
GPU: Nvidia 1650
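A commonly reported cause on GTX 16xx cards is fp16 inference producing all-black outputs; a hedged sketch of the usual workaround (paths and model parameters below are placeholders, not taken from this report):
```python
# Hedged sketch: disable half precision, the commonly reported fix on GTX 16xx.
from basicsr.archs.rrdbnet_arch import RRDBNet
from realesrgan import RealESRGANer

model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64,
                num_block=23, num_grow_ch=32, scale=2)
upsampler = RealESRGANer(
    scale=2,
    model_path="weights/RealESRGAN_x2plus.pth",  # placeholder path
    model=model,
    half=False,  # fp16 -> fp32 often fixes black outputs on these GPUs
)
```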
|
open
|
2021-12-14T10:17:46Z
|
2022-07-31T22:11:23Z
|
https://github.com/TencentARC/GFPGAN/issues/116
|
[] |
cianoo45
| 3
|
deepset-ai/haystack
|
pytorch
| 8,249
|
TypeError when using Advanced RAG
|
**Describe the bug**
When running the code provided in the documentation and blog for HyDE (Advanced RAG), there is a type error that originates from running the Hypothetical Document Embedder pipeline connecting the adapter to the embedder. The OutputAdapter returns the list of documents as a string, but SentenceTransformersDocumentEmbedder() expects them as a list of documents.
The output type defined in OutputAdapter is incorrect as it specifies `List[Document]` but the type is actually a string.
**Error message**
TypeError: SentenceTransformersDocumentEmbedder expects a list of Documents as input.In case you want to embed a list of strings, please use the SentenceTransformersTextEmbedder.
**Expected behavior**
The pipeline to embed the documents with the Hypothetical Document Embedder should run without error, and generate the hypothetical embeddings.
**Additional context**
The error occurs when the code is copied directly from the tutorials, and also when the code is adapted to use an Ollama Generator and local PDFs as the data.
The error can be fixed by using the `.to_dict()` method on each Document in the `custom_filters`, then using the `.from_dict()` method before the SentenceTransformersDocumentEmbedder(). I would be happy to create a pull request with this change.
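For reference, a minimal sketch of the fix described above (the helper names and the `documents` variable are illustrative, not from the tutorial code):
```python
# Hedged sketch: serialize Documents across the adapter boundary,
# then rebuild real Document objects before the document embedder.
from haystack import Document

def to_dicts(documents):
    # inside custom_filters: hand plain dicts out of the OutputAdapter
    return [doc.to_dict() for doc in documents]

def to_documents(dicts):
    # before SentenceTransformersDocumentEmbedder: restore Document objects
    return [Document.from_dict(d) for d in dicts]
```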
**To Reproduce**
Copy and run the code from either of these tutorials: https://docs.haystack.deepset.ai/docs/hypothetical-document-embeddings-hyde and https://haystack.deepset.ai/blog/optimizing-retrieval-with-hyde
**FAQ Check**
- [x] Have you had a look at [our new FAQ page](https://docs.haystack.deepset.ai/docs/faq)?
**System:**
- OS: ubuntu
- GPU/CPU: nvidia GeForce RTX 3070
- Haystack version (commit or version number): 2.3.1
- DocumentStore: ChromaDocumentStore/InMemory
- Reader: N/A
- Retriever: ChromaEmbeddingRetriever/InMemory
|
closed
|
2024-08-19T13:57:36Z
|
2025-02-05T04:22:47Z
|
https://github.com/deepset-ai/haystack/issues/8249
|
[
"stale",
"P1",
"community-triage"
] |
liviaj29
| 3
|
ultralytics/yolov5
|
machine-learning
| 13,427
|
How to add FPS and mAPs evaluation metrics to yolov5?
|
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
How can I add FPS and mAPs evaluation metrics to yolov5?
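As a rough sketch (not an official YOLOv5 metric), FPS can be derived from the per-image speeds that val.py/detect.py already print:
```python
# Hedged sketch: convert YOLOv5's reported per-image milliseconds into FPS.
pre_ms, inf_ms, nms_ms = 0.3, 6.2, 1.1  # hypothetical numbers from the val.py speed line
fps = 1000.0 / (pre_ms + inf_ms + nms_ms)
print(f"{fps:.1f} FPS")
```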
### Additional
_No response_
|
open
|
2024-11-22T03:00:31Z
|
2024-11-24T10:09:08Z
|
https://github.com/ultralytics/yolov5/issues/13427
|
[
"question"
] |
lqh964165950
| 2
|
roboflow/supervision
|
tensorflow
| 825
|
AttributeError: 'Results' object has no attribute 'obb'
|
### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report.
### Bug
I was testing the example provided in the documentation, but I get this error when I run the code. As you can see, **it prints the detection results**, but I still get the error afterwards.
Error:
```bash
0: 256x640 1 person, 2 bicycles, 2 skateboards, 1 potted plant, 64.0ms
Speed: 34.0ms preprocess, 64.0ms inference, 32.1ms postprocess per image at shape (1, 3, 256, 640)
Traceback (most recent call last):
File "a:\Main\CODE\GUI\GradeVision-QT-material - TESTVERSION\supervis.py", line 8, in <module>
detections = sv.Detections.from_ultralytics(results)
File "C:\Users\alimo\miniconda3\envs\guienv\lib\site-packages\supervision\detection\core.py", line 181, in from_ultralytics
if ultralytics_results.obb is not None:
File "C:\Users\alimo\miniconda3\envs\guienv\lib\site-packages\ultralytics\utils\__init__.py", line 153, in __getattr__
raise AttributeError(f"'{name}' object has no attribute '{attr}'. See valid attributes below.\n{self.__doc__}")
AttributeError: 'Results' object has no attribute 'obb'. See valid attributes below.
A class for storing and manipulating inference results.
Args:
orig_img (numpy.ndarray): The original image as a numpy array.
path (str): The path to the image file.
names (dict): A dictionary of class names.
boxes (torch.tensor, optional): A 2D tensor of bounding box coordinates for each detection.
masks (torch.tensor, optional): A 3D tensor of detection masks, where each mask is a binary image.
probs (torch.tensor, optional): A 1D tensor of probabilities of each class for classification task.
keypoints (List[List[float]], optional): A list of detected keypoints for each object.
Attributes:
orig_img (numpy.ndarray): The original image as a numpy array.
orig_shape (tuple): The original image shape in (height, width) format.
boxes (Boxes, optional): A Boxes object containing the detection bounding boxes.
masks (Masks, optional): A Masks object containing the detection masks.
probs (Probs, optional): A Probs object containing probabilities of each class for classification task.
keypoints (Keypoints, optional): A Keypoints object containing detected keypoints for each object.
speed (dict): A dictionary of preprocess, inference, and postprocess speeds in milliseconds per image.
names (dict): A dictionary of class names.
path (str): The path to the image file.
_keys (tuple): A tuple of attribute names for non-empty attributes.
```
### Environment
python 3.10
supervision==0.18.0
torch==2.1.2+cu118
torchaudio==2.1.2+cu118
torchvision==0.16.2+cu118
ultralytics==8.0.229
### Minimal Reproducible Example
```python
import cv2
import supervision as sv
from ultralytics import YOLO
model = YOLO("yolov8s.pt")
image = cv2.imread('image_00\data/0000000047.png')
results = model(image)[0]
detections = sv.Detections.from_ultralytics(results)
bounding_box_annotator = sv.BoundingBoxAnnotator()
label_annotator = sv.LabelAnnotator()
labels = [
model.model.names[class_id]
for class_id
in detections.class_id
]
annotated_image = bounding_box_annotator.annotate(
scene=image, detections=detections)
annotated_image = label_annotator.annotate(
scene=annotated_image, detections=detections, labels=labels)
```
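A hedged note: `Results.obb` appears to exist only in later ultralytics releases, so the pinned `ultralytics==8.0.229` is the likely culprit here; a quick check:
```python
# Hedged sketch: verify the installed ultralytics version before blaming supervision.
import ultralytics

print(ultralytics.__version__)  # 8.0.229 here; upgrading ultralytics should expose Results.obb
```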
### Additional
I'm not running it in an ipynb file.
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR!
|
closed
|
2024-01-31T15:25:23Z
|
2024-01-31T16:12:48Z
|
https://github.com/roboflow/supervision/issues/825
|
[
"bug",
"question"
] |
AliMostafaRadwan
| 6
|
Gozargah/Marzban
|
api
| 1,403
|
Downgrading the xray core
|
Hi, how can I downgrade the xray core to version 1.8.24? I have followed every tutorial you posted, but I still can't downgrade the core. Please help.
|
closed
|
2024-10-28T20:53:14Z
|
2024-10-29T09:13:58Z
|
https://github.com/Gozargah/Marzban/issues/1403
|
[
"Duplicate"
] |
ramin1120
| 1
|
itamarst/eliot
|
numpy
| 103
|
Move writeFailure to eliot.twisted, deprecate eliot.writeFailure
|
Twisted APIs should be in one place.
|
open
|
2014-09-04T22:13:37Z
|
2018-09-22T20:59:14Z
|
https://github.com/itamarst/eliot/issues/103
|
[
"API enhancement"
] |
itamarst
| 0
|
plotly/dash-table
|
plotly
| 744
|
Header widths with fixed headers expand when filtering, even when columns have fixed widths
|

```python
import dash
from dash.dependencies import Input, Output
import dash_table
import dash_html_components as html
import datetime
import pandas as pd
df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/gapminder2007.csv')
df['Mock Date'] = [
datetime.datetime(2020, 1, 1, 0, 0, 0) + i * datetime.timedelta(hours=13)
for i in range(len(df))
]
app = dash.Dash(__name__)
def table_type(df_column):
# Note - this only works with Pandas >= 1.0.0
if isinstance(df_column.dtype, pd.DatetimeTZDtype):
        return 'datetime'
elif (isinstance(df_column.dtype, pd.StringDtype) or
isinstance(df_column.dtype, pd.BooleanDtype) or
isinstance(df_column.dtype, pd.CategoricalDtype) or
isinstance(df_column.dtype, pd.PeriodDtype)):
return 'text'
elif (isinstance(df_column.dtype, pd.SparseDtype) or
isinstance(df_column.dtype, pd.IntervalDtype) or
isinstance(df_column.dtype, pd.Int8Dtype) or
isinstance(df_column.dtype, pd.Int16Dtype) or
isinstance(df_column.dtype, pd.Int32Dtype) or
isinstance(df_column.dtype, pd.Int64Dtype)):
return 'numeric'
else:
return 'any'
app.layout = dash_table.DataTable(
columns=[
{'name': i, 'id': i, 'type': table_type(df[i])} for i in df.columns
],
data=df.to_dict('records'),
filter_action='native',
fixed_rows={'headers': True},
style_table={'height': 400},
style_data={
'minWidth': '{}%'.format(100 / len(df.columns)),
'width': '{}%'.format(100 / len(df.columns)),
'maxWidth': '{}%'.format(100 / len(df.columns))
}
)
if __name__ == '__main__':
app.run_server(debug=True)
```
fyi @Marc-Andre-Rivet for when you are in the neighborhood
|
open
|
2020-04-14T23:28:27Z
|
2020-04-14T23:28:41Z
|
https://github.com/plotly/dash-table/issues/744
|
[
"bug"
] |
chriddyp
| 1
|
Farama-Foundation/Gymnasium
|
api
| 1,116
|
Can't run flappy bird environment with gymnasium
|
### Question
I am installing pygame and gym_ple from the following two commands:
```
!pip install git+https://github.com/GrupoTuring/PyGame-Learning-Environment
!pip install git+https://github.com/lusob/gym-ple
```
I am doing the following imports (they are part of a bigger project)
```python
import copy
import torch
import random
import gym
import gym_ple
import matplotlib
import numpy as np
import torch.nn.functional as F
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from collections import deque, namedtuple
from IPython.display import HTML
from base64 import b64encode
from torch import nn
from torch.utils.data import DataLoader
from torch.utils.data.dataset import IterableDataset
from torch.optim import AdamW
from pytorch_lightning import LightningModule, Trainer
from gym.wrappers import TransformObservation
device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
num_gpus = torch.cuda.device_count()
```
Then I run the following lines of code:
```python
env = gym_ple.make("FlappyBird-v0")
env.step(env.action_space.sample())
```
And I get the following error:
```
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
Cell In[25], [line 1](vscode-notebook-cell:?execution_count=25&line=1)
----> [1](vscode-notebook-cell:?execution_count=25&line=1) env.step(env.action_space.sample())
File d:\anaconda3\Lib\site-packages\gym\wrappers\time_limit.py:17, in TimeLimit.step(self, action)
[16](file:///D:/anaconda3/Lib/site-packages/gym/wrappers/time_limit.py:16) def step(self, action):
---> [17](file:///D:/anaconda3/Lib/site-packages/gym/wrappers/time_limit.py:17) observation, reward, done, info = self.env.step(action)
[18](file:///D:/anaconda3/Lib/site-packages/gym/wrappers/time_limit.py:18) self._elapsed_steps += 1
[19](file:///D:/anaconda3/Lib/site-packages/gym/wrappers/time_limit.py:19) if self._elapsed_steps >= self._max_episode_steps:
File d:\anaconda3\Lib\site-packages\gym\wrappers\order_enforcing.py:13, in OrderEnforcing.step(self, action)
[11](file:///D:/anaconda3/Lib/site-packages/gym/wrappers/order_enforcing.py:11) def step(self, action):
[12](file:///D:/anaconda3/Lib/site-packages/gym/wrappers/order_enforcing.py:12) assert self._has_reset, "Cannot call env.step() before calling reset()"
---> [13](file:///D:/anaconda3/Lib/site-packages/gym/wrappers/order_enforcing.py:13) observation, reward, done, info = self.env.step(action)
[14](file:///D:/anaconda3/Lib/site-packages/gym/wrappers/order_enforcing.py:14) return observation, reward, done, info
File d:\anaconda3\Lib\site-packages\gym\core.py:80, in Env.step(self, action)
[63](file:///D:/anaconda3/Lib/site-packages/gym/core.py:63) @abstractmethod
[64](file:///D:/anaconda3/Lib/site-packages/gym/core.py:64) def step(self, action: ActType) -> Tuple[ObsType, float, bool, dict]:
[65](file:///D:/anaconda3/Lib/site-packages/gym/core.py:65) """Run one timestep of the environment's dynamics. When end of
[66](file:///D:/anaconda3/Lib/site-packages/gym/core.py:66) episode is reached, you are responsible for calling `reset()`
[67](file:///D:/anaconda3/Lib/site-packages/gym/core.py:67) to reset this environment's state.
(...)
[78](file:///D:/anaconda3/Lib/site-packages/gym/core.py:78) info (dict): contains auxiliary diagnostic information (helpful for debugging, logging, and sometimes learning)
[79](file:///D:/anaconda3/Lib/site-packages/gym/core.py:79) """
---> [80](file:///D:/anaconda3/Lib/site-packages/gym/core.py:80) raise NotImplementedError
NotImplementedError:
```
I went into the site-packages directory inside my anaconda3 folder and found the ple and gym_ple folders, and also the file for the FlappyBird game inside the ple folder. Please help me understand what is wrong.
And I also intend to use the following wrappers for normalization:
```python
class RunningMeanStd:
# https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Parallel_algorithm
def __init__(self, epsilon=1e-4, shape=()):
self.mean = np.zeros(shape, "float64")
self.var = np.ones(shape, "float64")
self.count = epsilon
def update(self, x):
batch_mean = np.mean(x, axis=0)
batch_var = np.var(x, axis=0)
batch_count = x.shape[0]
self.update_from_moments(batch_mean, batch_var, batch_count)
def update_from_moments(self, batch_mean, batch_var, batch_count):
self.mean, self.var, self.count = update_mean_var_count_from_moments(
self.mean, self.var, self.count, batch_mean, batch_var, batch_count
)
def update_mean_var_count_from_moments(
mean, var, count, batch_mean, batch_var, batch_count
):
delta = batch_mean - mean
tot_count = count + batch_count
new_mean = mean + delta * batch_count / tot_count
m_a = var * count
m_b = batch_var * batch_count
M2 = m_a + m_b + np.square(delta) * count * batch_count / tot_count
new_var = M2 / tot_count
new_count = tot_count
return new_mean, new_var, new_count
class NormalizeObservation(gym.core.Wrapper):
def __init__(
self,
env,
epsilon=1e-8,
):
super().__init__(env)
self.num_envs = getattr(env, "num_envs", 1)
self.is_vector_env = getattr(env, "is_vector_env", False)
if self.is_vector_env:
self.obs_rms = RunningMeanStd(shape=self.single_observation_space.shape)
else:
self.obs_rms = RunningMeanStd(shape=self.observation_space.shape)
self.epsilon = epsilon
def step(self, action):
obs, rews, dones, infos = self.env.step(action)
if self.is_vector_env:
obs = self.normalize(obs)
else:
obs = self.normalize(np.array([obs]))[0]
return obs, rews, dones, infos
def reset(self, **kwargs):
return_info = kwargs.get("return_info", False)
if return_info:
obs, info = self.env.reset(**kwargs)
else:
obs = self.env.reset(**kwargs)
if self.is_vector_env:
obs = self.normalize(obs)
else:
obs = self.normalize(np.array([obs]))[0]
if not return_info:
return obs
else:
return obs, info
def normalize(self, obs):
self.obs_rms.update(obs)
return (obs - self.obs_rms.mean) / np.sqrt(self.obs_rms.var + self.epsilon)
class NormalizeReward(gym.core.Wrapper):
def __init__(
self,
env,
gamma=0.99,
epsilon=1e-8,
):
super().__init__(env)
self.num_envs = getattr(env, "num_envs", 1)
self.is_vector_env = getattr(env, "is_vector_env", False)
self.return_rms = RunningMeanStd(shape=())
self.returns = np.zeros(self.num_envs)
self.gamma = gamma
self.epsilon = epsilon
def step(self, action):
obs, rews, dones, infos = self.env.step(action)
if not self.is_vector_env:
rews = np.array([rews])
self.returns = self.returns * self.gamma + rews
rews = self.normalize(rews)
self.returns[dones] = 0.0
if not self.is_vector_env:
rews = rews[0]
return obs, rews, dones, infos
def normalize(self, rews):
self.return_rms.update(self.returns)
        return rews / np.sqrt(self.return_rms.var + self.epsilon)
```
I also can't render: whenever I call env.render(), it throws the same NotImplementedError. Please help!
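A hedged diagnostic sketch (my assumption: gym-ple targets the legacy gym API, where environments implement `_step`/`_render`, so a newer gym whose `Env.step` is abstract raises `NotImplementedError`):
```python
# Hedged sketch: check the installed gym version; gym-ple likely needs an old release.
import gym

print(gym.__version__)  # if this is a recent gym, pinning an older version may help
```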
|
closed
|
2024-07-12T07:29:41Z
|
2024-07-12T09:37:08Z
|
https://github.com/Farama-Foundation/Gymnasium/issues/1116
|
[
"question"
] |
IAMHAADICOOL
| 1
|
globaleaks/globaleaks-whistleblowing-software
|
sqlalchemy
| 3,938
|
Submission status change triggers an error
|
### What version of GlobaLeaks are you using?
4.14.0
### What browser(s) are you seeing the problem on?
Chrome, Firefox
### What operating system(s) are you seeing the problem on?
Windows, Linux
### Describe the issue
Changing the status of a submission is not possible.
The error report generated and sent by mail is the following:
```
KeyError Mapping key not found.
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/twisted/internet/defer.py", line 190, in maybeDeferred
result = f(*args, **kwargs)
File "/usr/lib/python3/dist-packages/globaleaks/rest/decorators.py", line 66, in wrapper
return f(self, *args, **kwargs)
File "/usr/lib/python3/dist-packages/globaleaks/rest/decorators.py", line 39, in wrapper
return f(self, *args, **kwargs)
File "/usr/lib/python3/dist-packages/globaleaks/rest/decorators.py", line 52, in wrapper
return f(self, *args, **kwargs)
File "/usr/lib/python3/dist-packages/globaleaks/handlers/operation.py", line 25, in put
return func(self, request['args'], *args, **kwargs)
File "/usr/lib/python3/dist-packages/globaleaks/handlers/recipient/rtip.py", line 1151, in update_submission_status
req_args['status'], req_args['substatus'], req_args['motivation'])
KeyError: 'status'
```
### Proposed solution
_No response_
|
closed
|
2024-01-08T16:16:13Z
|
2024-01-08T16:33:03Z
|
https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3938
|
[
"T: Bug",
"Triage"
] |
verlata
| 1
|
davidsandberg/facenet
|
computer-vision
| 1,040
|
ValueError: invalid literal for int() with base 10: ''
|
```
Traceback (most recent call last):
File "src/train_softmax.py", line 580, in <module>
main(parse_arguments(sys.argv[1:]))
File "src/train_softmax.py", line 234, in main
prelogits, prelogits_center_loss, args.random_rotate, args.random_crop, args.random_flip, prelogits_norm, args.prelogits_hist_max, args.use_fixed_image_standardization)
File "src/train_softmax.py", line 306, in train
lr = facenet.get_learning_rate_from_file(learning_rate_schedule_file, epoch)
File "C:\facenet-master\src\facenet.py", line 295, in get_learning_rate_from_file
e = int(par[0])
ValueError: invalid literal for int() with base 10: ''
```
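The `int('')` strongly suggests a blank or comment-only line in the learning-rate schedule file; a hedged sketch of a more tolerant parser (a reconstruction, not the exact facenet code):
```python
# Hedged sketch: skip blank lines and '#' comments before parsing "epoch:lr" pairs.
def get_learning_rate_from_file(filename, epoch):
    learning_rate = -1.0  # hypothetical default when no entry matches
    with open(filename) as f:
        for line in f:
            line = line.split('#', 1)[0].strip()
            if not line:
                continue  # this is the case that raised int('')
            e, lr = line.split(':', 1)
            if int(e) <= epoch:
                learning_rate = float(lr)
    return learning_rate
```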
|
open
|
2019-06-29T07:39:54Z
|
2019-06-29T07:39:54Z
|
https://github.com/davidsandberg/facenet/issues/1040
|
[] |
hsm4703
| 0
|
itamarst/eliot
|
numpy
| 17
|
Add helper for nested actions in Eliot
|
(Originally from Jean-Paul Calderone).
Here's a pattern I just stumbled back in to:
```
def receive(self, input):
"""
Add logging of state transitions to the wrapped state machine.
@see: L{IFiniteStateMachine.receive}
"""
def _():
action = LOG_FSM_TRANSITION(
self.logger,
fsm_state=unicode(self.state), fsm_input=unicode(input))
with action as theAction:
output = super(_FiniteStateLogger, self).receive(input)
theAction.addSuccessFields(
fsm_next_state=unicode(self.state), fsm_output=unicode(output))
return output
return self._action.run(_)
```
(reminds me of Axiom programming).
It would be nice to be able to avoid the nested function when achieving this.
There could be a decorator that runs a whole function in the context of some action (e.g. `@partOfAction(attributeNamingAction)`) or a context manager (`with self.actionAttribute.resume():`).
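A hedged sketch of what such a decorator could look like (names are illustrative, not part of eliot's API):
```python
# Hedged sketch: run the decorated method inside an action looked up on self.
from functools import wraps

def part_of_action(get_action):
    def decorator(f):
        @wraps(f)
        def wrapper(self, *args, **kwargs):
            # eliot's Action.run executes the callable in the action's context
            return get_action(self).run(f, self, *args, **kwargs)
        return wrapper
    return decorator
```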
|
closed
|
2014-04-15T14:58:14Z
|
2018-09-22T20:59:11Z
|
https://github.com/itamarst/eliot/issues/17
|
[
"API enhancement"
] |
itamarst
| 1
|
ipython/ipython
|
data-science
| 13,853
|
Incorrect coverage reporting
|
Back in February the coverage was close to 73.89% for the library and 97.83% for the tests. Currently the coverage for the tests is not reported, and for the library it sits at around 40%.
This started happening in April. Commit e6432249582e05f438303ce73d082a0351bb383e has 73.84% library coverage, but the next commit, cf0cbaf5cfbd8464e5b0ea69da8d141671008e82, has only 41.20%. Both commits are unrelated (both fix typos in documentation).
When working on #13852 I saw a coverage drop even when adding comprehensively tested functions. Many functions and classes which I do test extensively are not marked as covered at all.
I also see some of these incorrect misses locally, but they are different for some reason.
It is very likely due to some change in `pytest-cov` and `coverage`.
|
closed
|
2022-12-04T01:17:24Z
|
2022-12-09T20:47:07Z
|
https://github.com/ipython/ipython/issues/13853
|
[
"testing"
] |
krassowski
| 2
|
tflearn/tflearn
|
data-science
| 1,110
|
how to set batch_size=None in model.fit() and use batch_size defined in input_data layer directly
|
Hello, I want to define the batch_size directly in the input_data layer and set batch_size in model.fit to None. However, I get the error "Incoming Tensor shape must be 4-D, not 5-D". Can anyone help? Thank you.
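A hedged sketch of the usual pattern (assuming image input; the 5-D error typically means the batch dimension was specified twice):
```python
# Hedged sketch: tflearn prepends the batch dimension itself, so pass None
# (or omit a fixed batch size) in input_data's shape.
import tflearn

net = tflearn.input_data(shape=[None, 28, 28, 1])  # 4-D: batch, height, width, channels
```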
|
open
|
2019-01-04T03:30:09Z
|
2019-01-04T03:30:09Z
|
https://github.com/tflearn/tflearn/issues/1110
|
[] |
WuChannn
| 0
|
reloadware/reloadium
|
flask
| 130
|
Can't get new fixture after Ctrl + S in pytest
|
Hi! I added a new pytest fixture, in my case one of the automatically created objects in pytest-django. After Ctrl + S I get an error like this:
```
partners\test_app.py:147 (test_personal_manager_not_accessed_to_company)
manager1_browser_client = <django.test.client.Client object at 0x0000022358923460>
company = <Company: Name1>
> ???
test_app.py:10148:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
l1llll1llllll111Il1l1 = <class 'reloadium.lib.llll1lllll1lll1lIl1l1.l1lll11111l1ll11Il1l1.lllll1111l111ll1Il1l1'>
ll111l1l111l1l1lIl1l1 = [<django.test.client.Client object at 0x0000022358923460>, <Company: Name1>]
lllll11l11111111Il1l1 = {}, l1111l1111ll11llIl1l1 = [DbMemento]
@classmethod
def l111llll111l11llIl1l1(l1llll1llllll111Il1l1, ll111l1l111l1l1lIl1l1: List[Any], lllll11l11111111Il1l1: Dict[str, Any], l1111l1111ll11llIl1l1: List[l1l1lll1ll1l1lllIl1l1]) -> Any: # type: ignore
with l11llll111l1l1llIl1l1():
assert lllll1lll11111l1Il1l1.l11l111l1111l1llIl1l1.llll1lllll1lll1lIl1l1
l1l1ll1l11lll1llIl1l1 = lllll1lll11111l1Il1l1.l11l111l1111l1llIl1l1.llll1lllll1lll1lIl1l1.ll1111l111111l11Il1l1.lll11lllll11l11lIl1l1()
l1l1ll1l11lll1llIl1l1.ll11lllll1l1l11lIl1l1()
l1l11ll11lll1111Il1l1 = lllll1lll11111l1Il1l1.l11l111l1111l1llIl1l1.ll111l1111111lllIl1l1.lllll11lllll111lIl1l1(l1l1ll1l11lll1llIl1l1.ll111l1l11111ll1Il1l1, l1l1ll1l11lll1llIl1l1.l1ll11l11lll1l1lIl1l1.ll1ll1llllll11llIl1l1())
assert l1l11ll11lll1111Il1l1
l1ll1lll11l1ll11Il1l1 = l1llll1llllll111Il1l1.ll1ll111111lll1lIl1l1()
for llll111lllll1111Il1l1 in l1111l1111ll11llIl1l1:
llll111lllll1111Il1l1.ll11l1ll11lll111Il1l1()
for llll111lllll1111Il1l1 in l1111l1111ll11llIl1l1:
llll111lllll1111Il1l1.lll11l1llllllll1Il1l1()
> l111l111l1l111llIl1l1 = l1l11ll11lll1111Il1l1(*ll111l1l111l1l1lIl1l1, **lllll11l11111111Il1l1); l1l1ll1l11lll1llIl1l1.l1l11111ll1111llIl1l1.additional_info.pydev_step_stop = l1ll1lll11l1ll11Il1l1 # type: ignore
E TypeError: test_personal_manager_not_accessed_to_company() missing 1 required positional argument: 'company'
..\..\..\.reloadium\package\3.10\reloadium\lib\llll1lllll1lll1lIl1l1\l1lll11111l1ll11Il1l1.py:56: TypeError
```
|
closed
|
2023-03-27T13:42:51Z
|
2023-05-16T15:32:21Z
|
https://github.com/reloadware/reloadium/issues/130
|
[] |
mihalt
| 2
|
explosion/spaCy
|
machine-learning
| 12,449
|
Korean blank model crashes if mecab-ko not installed, but not during initialization
|
Initializing a Korean spacy.blank model throws an error when `natto-py` is not installed, and asks the user to install both `natto-py` and `mecab-ko`. However, if only `natto-py` (and not `mecab-ko`) is installed, the initialization no longer throws an error. Instead, the model throws the following error on trying to process a document. It doesn't matter if this document actually contains korean orthography.
```
MeCabError: libmecab.dylib could not be found, please use MECAB_PATH
```
I think that maybe the initialization should still throw an error, or this `MeCabError` should be coerced into something more specific to spaCy (e.g., `please install mecab-ko` etc. etc.)
## How to reproduce the behaviour
Only install `natto-py`, but not `mecab-ko`, then run the following:
```python
import spacy
model = spacy.blank("ko") # no error
model("꿀벌") # error
model("dog walks home") # same error
```
## Your Environment
- **spaCy version:** 3.5.1
- **Platform:** macOS-12.5-arm64-arm-64bit
- **Python version:** 3.9.15
|
closed
|
2023-03-20T09:05:59Z
|
2023-04-20T00:02:20Z
|
https://github.com/explosion/spaCy/issues/12449
|
[
"lang / ko",
"feat / ux"
] |
stephantul
| 3
|
TheAlgorithms/Python
|
python
| 12,115
|
Python program: Car and Boat classes
|
### Feature description
This problem involves creating two classes (Car and Boat) that return a formatted string based on the vehicle's speed. This solution includes the implementation of constructors and string representations.
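A minimal sketch of the described classes (names and string formats are my assumptions):
```python
class Car:
    """Car with a constructor and a string representation based on speed."""

    def __init__(self, speed: int) -> None:
        self.speed = speed

    def __str__(self) -> str:
        return f"Car with a top speed of {self.speed} km/h"


class Boat:
    """Boat with a constructor and a string representation based on speed."""

    def __init__(self, speed: int) -> None:
        self.speed = speed

    def __str__(self) -> str:
        return f"Boat with a top speed of {self.speed} knots"
```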
|
closed
|
2024-10-16T09:52:06Z
|
2024-12-28T09:16:50Z
|
https://github.com/TheAlgorithms/Python/issues/12115
|
[
"enhancement"
] |
NimraAslamkhan
| 2
|
xlwings/xlwings
|
automation
| 2,153
|
xlwings Server: Increase default timeout to 30s for VBA client
|
* Currently at 5s
* Document parameter
|
closed
|
2023-02-01T09:01:27Z
|
2023-02-04T21:53:51Z
|
https://github.com/xlwings/xlwings/issues/2153
|
[
"Server"
] |
fzumstein
| 0
|
fastapi-admin/fastapi-admin
|
fastapi
| 75
|
How to deploy FastAPI Admin panel?
|
Hi there,
I was wondering how I can deploy this admin panel. Can you please describe it to me?
Thanks and regards
|
closed
|
2021-09-01T05:36:06Z
|
2021-09-01T06:08:24Z
|
https://github.com/fastapi-admin/fastapi-admin/issues/75
|
[] |
m-peko
| 3
|
graphql-python/graphene
|
graphql
| 1,351
|
Network representation support
|
It would be interesting to introduce network representations for multiple purposes (representing IP addresses and network addresses in an easy way).
I wrote some scalars compatible with and tested against graphene v3.
There is the code to change in graphene/types/__init__.py:
```python
# flake8: noqa
from graphql import GraphQLResolveInfo as ResolveInfo
from .argument import Argument
from .base64 import Base64
from .context import Context
from .datetime import Date, DateTime, Time
from .decimal import Decimal
from .dynamic import Dynamic
from .enum import Enum
from .field import Field
from .inputfield import InputField
from .inputobjecttype import InputObjectType
from .interface import Interface
from .json import JSONString
from .mutation import Mutation
from .network import (
IPv4Address,
IPv6Address,
IPvAnyAddress,
IPv4Network,
IPv6Network,
IPvAnyNetwork,
)
from .objecttype import ObjectType
from .scalars import ID, Boolean, Float, Int, Scalar, String
from .schema import Schema
from .structures import List, NonNull
from .union import Union
from .uuid import UUID
__all__ = [
"Argument",
"Base64",
"Boolean",
"Context",
"Date",
"DateTime",
"Decimal",
"Dynamic",
"Enum",
"Field",
"Float",
"ID",
"InputField",
"InputObjectType",
"Int",
"Interface",
"IPv4Address",
"IPv6Address",
"IPvAnyAddress",
"IPv4Network",
"IPv6Network",
"IPvAnyNetwork",
"JSONString",
"List",
"Mutation",
"NonNull",
"ObjectType",
"ResolveInfo",
"Scalar",
"Schema",
"String",
"Time",
"UUID",
"Union",
]
```
and the new graphene/types/network.py file:
```python
from .scalars import Scalar
from graphql.error import GraphQLError
import ipaddress
from graphql.language import ast, print_ast
class IPv4Address(Scalar):
"""
The 'IPv4Address' scalar type represents an IPv4 Address
value as specified by
[RFC 6864](https://datatracker.ietf.org/doc/html/rfc6864).
"""
@staticmethod
def serialize(ip):
if isinstance(ip, str):
ip = ipaddress.IPv4Address(ip)
if not isinstance(ip, ipaddress.IPv4Address):
raise GraphQLError(f"IPv4Address cannot represent value: {repr(ip)}")
return str(ipaddress.IPv4Address(ip))
@classmethod
def parse_literal(cls, node):
if not isinstance(node, ast.StringValueNode):
raise GraphQLError(
f"IPv4Address cannot represent non-string value: {print_ast(node)}"
)
return cls.parse_value(node.value)
@staticmethod
def parse_value(value):
if isinstance(value, ipaddress.IPv4Address):
return value
if not isinstance(value, str):
raise GraphQLError(
f"IPv4Address cannot represent non-string value: {repr(value)}"
)
try:
return ipaddress.IPv4Address(value)
except ValueError as v:
raise GraphQLError(f"{v}")
except Exception:
raise GraphQLError(f"IPv4Address cannot represent value: {repr(value)}")
class IPv6Address(Scalar):
"""
The 'IPv6Address' scalar type represents an IPv6 Address
value as specified by
[RFC 4291](https://datatracker.ietf.org/doc/html/rfc4291.html).
"""
@staticmethod
def serialize(ip):
if isinstance(ip, str):
ip = ipaddress.IPv6Address(ip)
if not isinstance(ip, ipaddress.IPv6Address):
raise GraphQLError(f"IPv6Address cannot represent value: {repr(ip)}")
return str(ipaddress.IPv6Address(ip))
@classmethod
def parse_literal(cls, node):
if not isinstance(node, ast.StringValueNode):
raise GraphQLError(
f"IPv6Address cannot represent non-string value: {print_ast(node)}"
)
return cls.parse_value(node.value)
@staticmethod
def parse_value(value):
if isinstance(value, ipaddress.IPv6Address):
return value
if not isinstance(value, str):
raise GraphQLError(
f"IPv6Address cannot represent non-string value: {repr(value)}"
)
try:
return ipaddress.IPv6Address(value)
except ValueError as v:
raise GraphQLError(f"{v}")
except Exception:
raise GraphQLError(f"IPv6Address cannot represent value: {repr(value)}")
class IPvAnyAddress(Scalar):
"""
The 'IPvAnyAddress' scalar type represents an IPv4 Address or IPv6 Address
value as specified by
[RFC 6864](https://datatracker.ietf.org/doc/html/rfc6864) for IPv4 and
[RFC 4291](https://datatracker.ietf.org/doc/html/rfc4291.html) for IPv6.
"""
@staticmethod
def serialize(ip):
if isinstance(ip, str):
ip = ipaddress.ip_address(ip)
if not isinstance(ip, ipaddress.IPv4Address) and not isinstance(
ip, ipaddress.IPv6Address
):
raise GraphQLError(f"IPvAnyAddress cannot represent value: {repr(ip)}")
return str(ipaddress.ip_address(ip))
@classmethod
def parse_literal(cls, node):
if not isinstance(node, ast.StringValueNode):
raise GraphQLError(
f"IPvAnyAddress cannot represent non-string value: {print_ast(node)}"
)
return cls.parse_value(node.value)
@staticmethod
def parse_value(value):
if isinstance(value, ipaddress.IPv4Address) or isinstance(
value, ipaddress.IPv6Address
):
return value
if not isinstance(value, str):
raise GraphQLError(
f"IPvAnyAddress cannot represent non-string value: {repr(value)}"
)
try:
return ipaddress.ip_address(value)
except ValueError as v:
raise GraphQLError(f"{v}")
except Exception:
raise GraphQLError(f"IPvAnyAddress cannot represent value: {repr(value)}")
class IPv4Network(Scalar):
"""
The 'IPv4Network' scalar type represents an IPv4 Network
value as specified by
[RFC 6864](https://datatracker.ietf.org/doc/html/rfc6864).
"""
@staticmethod
def serialize(ip):
if isinstance(ip, str):
ip = ipaddress.IPv4Network(ip)
if not isinstance(ip, ipaddress.IPv4Network):
raise GraphQLError(f"IPv4Network cannot represent value: {repr(ip)}")
return str(ipaddress.IPv4Network(ip))
@classmethod
def parse_literal(cls, node):
if not isinstance(node, ast.StringValueNode):
raise GraphQLError(
f"IPv4Network cannot represent non-string value: {print_ast(node)}"
)
return cls.parse_value(node.value)
@staticmethod
def parse_value(value):
if isinstance(value, ipaddress.IPv4Network):
return value
if not isinstance(value, str):
raise GraphQLError(
f"IPv4Network cannot represent non-string value: {repr(value)}"
)
try:
return ipaddress.IPv4Network(value)
except ValueError as v:
raise GraphQLError(f"{v}")
except Exception:
raise GraphQLError(f"IPv4Network cannot represent value: {repr(value)}")
class IPv6Network(Scalar):
"""
The 'IPv6Network' scalar type represents an IPv6 Network
value as specified by
[RFC 4291](https://datatracker.ietf.org/doc/html/rfc4291.html).
"""
@staticmethod
def serialize(ip):
if isinstance(ip, str):
ip = ipaddress.IPv6Network(ip)
if not isinstance(ip, ipaddress.IPv6Network):
raise GraphQLError(f"IPv6Network cannot represent value: {repr(ip)}")
return str(ipaddress.IPv6Network(ip))
@classmethod
def parse_literal(cls, node):
if not isinstance(node, ast.StringValueNode):
raise GraphQLError(
f"IPv6Network cannot represent non-string value: {print_ast(node)}"
)
return cls.parse_value(node.value)
@staticmethod
def parse_value(value):
if isinstance(value, ipaddress.IPv6Network):
return value
if not isinstance(value, str):
raise GraphQLError(
f"IPv6Network cannot represent non-string value: {repr(value)}"
)
try:
return ipaddress.IPv6Network(value)
except ValueError as v:
raise GraphQLError(f"{v}")
except Exception:
raise GraphQLError(f"IPv6Network cannot represent value: {repr(value)}")
class IPvAnyNetwork(Scalar):
"""
The 'IPvAnyNetwork' scalar type represents an IPv4 Network or IPv6 Network
value as specified by
[RFC 6864](https://datatracker.ietf.org/doc/html/rfc6864) for IPv4 and
[RFC 4291](https://datatracker.ietf.org/doc/html/rfc4291.html) for IPv6.
"""
@staticmethod
def serialize(ip):
if isinstance(ip, str):
ip = ipaddress.ip_network(ip)
if not isinstance(ip, ipaddress.IPv4Network) and not isinstance(
ip, ipaddress.IPv6Network
):
raise GraphQLError(f"IPvAnyNetwork cannot represent value: {repr(ip)}")
return str(ipaddress.ip_network(ip))
@classmethod
def parse_literal(cls, node):
if not isinstance(node, ast.StringValueNode):
raise GraphQLError(
f"IPvAnyNetwork cannot represent non-string value: {print_ast(node)}"
)
return cls.parse_value(node.value)
@staticmethod
def parse_value(value):
if isinstance(value, ipaddress.IPv4Network) or isinstance(
value, ipaddress.IPv6Network
):
return value
if not isinstance(value, str):
raise GraphQLError(
f"IPvAnyNetwork cannot represent non-string value: {repr(value)}"
)
try:
return ipaddress.ip_network(value)
except ValueError as v:
raise GraphQLError(f"{v}")
except Exception:
raise GraphQLError(f"IPvAnyNetwork cannot represent value: {repr(value)}")
```
Already formatted by black obviously.
I work with network engineers on a project using graphene v3 and I have to represent network devices with their facts. I needed this feature, so I developed it, but I think it would be worth integrating it directly into graphene.
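For illustration, a minimal usage sketch (assuming the scalars above end up exported from graphene as proposed):
```python
import graphene
from graphene import IPv4Address  # assumption: exported as in the __init__.py above

class Query(graphene.ObjectType):
    loopback = IPv4Address()

    def resolve_loopback(root, info):
        return "127.0.0.1"

schema = graphene.Schema(query=Query)
print(schema.execute("{ loopback }").data)  # {'loopback': '127.0.0.1'}
```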
Thanks in advance.
|
open
|
2021-07-17T06:56:39Z
|
2022-06-14T19:52:32Z
|
https://github.com/graphql-python/graphene/issues/1351
|
[
"✨ enhancement"
] |
Never77
| 1
|