| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
activeloopai/deeplake
|
computer-vision
| 2,946
|
[FEATURE] Comparison to LitData
|
### Description
Hey there,
I just discovered Deeplake and found it to be an amazing library. I wondered if you would be interested in comparing it with LitData: https://github.com/Lightning-AI/litdata
### Use Cases
A better understanding of performance relative to comparable libraries on the market.
|
open
|
2024-09-11T13:50:19Z
|
2025-03-24T03:03:07Z
|
https://github.com/activeloopai/deeplake/issues/2946
|
[
"enhancement"
] |
tchaton
| 3
|
flasgger/flasgger
|
rest-api
| 195
|
flasgger showing the wrong example value for body (from another function)
|
Not sure if this is the right place for this (it may belong with Swagger), but...
The below is just learning code but I'm having issues with Flasgger picking up the right detail for `Example Value` for the body for `Item`. If I leave in the `Order` swagger detail, I get a different `Example Value` than if I remove it. For example, if I include `Order`'s swagger detail the `Example Value` body for `Item` will be...
```json
{
  "customer_id": 0,
  "date": "string",
  "status": "string"
}
```
If I don't, it's this...
```json
{
  "customer_id": 0,
  "date": "string",
  "items": [
    {
      "name": "string",
      "price": 0,
      "product_id": 0,
      "quantity": 0
    }
  ],
  "status": "string"
}
```
But what I'm actually expecting is...
```json
{
  "name": "string",
  "price": 0,
  "product_id": 0,
  "quantity": 0
}
```
I checked the code several times for anything that might pop out but I'm not seeing it. I also tried changing `id: data` to several things but it just resulted in undefined. Anyone know what might be the issue?
**Swagger Detail for two unrelated functions**
**Update Order**
"""
Update an Order
This endpoint will update an Order based on the body that is posted
---
tags:
- Orders
consumes:
- application/json
produces:
- application/json
parameters:
- name: order_id
in: path
description: ID of the order that needs updating
type: integer
required: true
- in: body
name: body
schema:
id: data
required:
- customer_id
- status
- date
properties:
customer_id:
type: integer
description: the customer id for the order
status:
type: string
description: the order status
date:
type: string
description: the date of the order
responses:
200:
description: Order updated
schema:
$ref: '#/definitions/Order'
400:
description: Bad Request (the posted data was not valid)
"""
**Update Items**
"""
Update an Item
---
tags:
- Items
consumes:
- application/json
produces:
- application/json
parameters:
- name: order_id
in: path
description: ID of the order that has item to be updated
type: integer
required: true
- name: item_id
in: path
description: ID of the item that needs updating of a specified order
type: integer
required: true
- in: body
name: body
schema:
id: data
required:
- name
- product_id
- quantity
- price
name:
type: string
description: the item name
product_id:
type: integer
description: the product_id of the item
quantity:
type: integer
description: the quantity of the item
price:
type: number
description: the price of the item
responses:
200:
description: Item updated
schema:
$ref: '#/definitions/Item'
400:
description: Bad Request (the posted data was not valid)
"""
|
open
|
2018-04-29T20:58:46Z
|
2018-10-01T17:31:19Z
|
https://github.com/flasgger/flasgger/issues/195
|
[
"hacktoberfest"
] |
andyd0
| 4
|
plotly/dash
|
jupyter
| 2,536
|
[BUG] Turn twitter cards off by default to avoid security issues
|
- replace the result of `pip list | grep dash` below
```
dash 2.9.3
dash-auth 2.0.0
dash-bootstrap-components 1.4.1
dash-core-components 2.0.0
dash-extensions 0.1.13
dash-html-components 2.0.0
dash-table 5.0.0
```
- if frontend related, tell us your Browser, Version and OS
- OS: OSX
- Browser Firefox
- Version 102.10
**Describe the bug**
Deploying a dash app with pages allows attackers to embed code into the webapp. This is a potential security vulnerability since it allows attackers to execute arbitrary code in the context of the dash sandbox.
Concretely, if `use_pages` is true, dash calls `self._pages_meta_tags()`
https://github.com/plotly/dash/blob/a8b3ddbec5a0c639d41230137c5e5744d5f43c8f/dash/dash.py#L968
which *always* adds the twitter and og meta tags
https://github.com/plotly/dash/blob/a8b3ddbec5a0c639d41230137c5e5744d5f43c8f/dash/dash.py#L906
The twitter meta tag includes the URL
```
<!-- Twitter Card data -->
<meta property="twitter:card" content="summary_large_image">
<meta property="twitter:url" content="{flask.request.url}">
<meta property="twitter:title" content="{title}">
<meta property="twitter:description" content="{description}">
<meta property="twitter:image" content="{image_url}">
```
So if the dash app is invoked with a URL that includes a `<script>` tag, the script specified in the tag will be executed.
Example URL
```
[dash_app_base_url]/?'"--></style></scRipt><scRipt>netsparker(0x000F45)</scRipt>
```
This causes our dash app to fail cyber security/pen testing scans.
A workaround is to use a custom `index_string` which removes all `meta` tags, but that has the disadvantage of not including any meta tags, even the ones that we might want.
A better option would be to make the twitter and og meta tags opt-in; it is not clear that specifying a twitter card is necessary for all deployers.
I am happy to submit a PR (`include_card_configs` property) if that would help
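For illustration, a minimal sketch of the kind of escaping that would neutralize the injection if the tags stay (my own example, not Dash's implementation):
```python
# Sketch: HTML-escape the request URL before interpolating it into the meta
# tag, so injected markup like </script><script>... is rendered inert.
# Must run inside a Flask request context.
import html
import flask

def twitter_url_meta_tag() -> str:
    safe_url = html.escape(flask.request.url, quote=True)
    return f'<meta property="twitter:url" content="{safe_url}">'
```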
**Screenshots**
-----------
#### Vulnerability report
<img width="873" alt="Screenshot 2023-05-21 at 9 54 22 PM" src="https://github.com/plotly/dash/assets/2423263/64d70995-8819-4604-b3f0-aef0435d9eac">
-----------
#### Page source includes the embedded script
<img width="873" alt="Screenshot 2023-05-21 at 9 52 51 PM" src="https://github.com/plotly/dash/assets/2423263/b90e03f2-8bed-4528-aada-e77fc853373b">
-----------
#### Example of an embedded alert in firefox
<img width="878" alt="Screenshot 2023-05-21 at 9 57 17 PM" src="https://github.com/plotly/dash/assets/2423263/2aea626b-1480-4d0f-b356-4392a60ab2b6">
|
closed
|
2023-05-22T05:00:50Z
|
2023-05-25T16:12:38Z
|
https://github.com/plotly/dash/issues/2536
|
[] |
shankari
| 4
|
pytorch/pytorch
|
numpy
| 149,627
|
DISABLED test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_int16 (__main__.TestForeachCUDA)
|
Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_int16&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39097709474).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but the logs will be harder to parse.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_int16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99
|
open
|
2025-03-20T15:42:47Z
|
2025-03-20T15:42:52Z
|
https://github.com/pytorch/pytorch/issues/149627
|
[
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] |
pytorch-bot[bot]
| 1
|
coqui-ai/TTS
|
deep-learning
| 2,438
|
[Feature request] Implementation for FreeVC
|
<!-- Welcome to the 🐸TTS project!
We are excited to see your interest and appreciate your support! --->
**🚀 Feature Description**
Original implementation: https://github.com/OlaWod/FreeVC
We can use it in combination with any 🐸TTS model and make it speak with any voice aka faking voice cloning.
<!--A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
**Additional context**
I'll initially implement only inference with the pre-trained models, and later we can implement training. It should be easy, as it is quite similar to VITS.
<!-- Add any other context or screenshots about the feature request here. -->
|
closed
|
2023-03-20T13:54:57Z
|
2024-02-24T05:59:56Z
|
https://github.com/coqui-ai/TTS/issues/2438
|
[
"feature request"
] |
erogol
| 4
|
cobrateam/splinter
|
automation
| 511
|
Chrome options
|
Seems like the way the Chrome driver sets options should be updated to avoid interfering with other possible arguments being passed to the function. Unless I'm missing something.
|
closed
|
2016-08-27T03:58:32Z
|
2018-08-27T01:06:21Z
|
https://github.com/cobrateam/splinter/issues/511
|
[] |
bmcculley
| 0
|
CorentinJ/Real-Time-Voice-Cloning
|
python
| 1,044
|
Missing "PyQt5.QtCore"
|
I tried to run the program and ran into this error:
```
Traceback (most recent call last):
  File "demo_toolbox.py", line 5, in <module>
    from toolbox import Toolbox
  File "E:\voice clone\toolbox\__init__.py", line 11, in <module>
    from toolbox.ui import UI
  File "E:\voice clone\toolbox\ui.py", line 12, in <module>
    from PyQt5.QtCore import Qt, QStringListModel
ModuleNotFoundError: No module named 'PyQt5.QtCore'
```
I tried to pip install the missing module, but:
```
ERROR: Could not find a version that satisfies the requirement PyQt5.QtCore (from versions: none)
ERROR: No matching distribution found for PyQt5.QtCore
```
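(For context: `PyQt5.QtCore` is a module inside the `PyQt5` distribution, not a separate pip package, so the likely fix is `pip install PyQt5` rather than trying to install `PyQt5.QtCore` directly.)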
|
open
|
2022-03-26T05:59:00Z
|
2022-03-29T04:48:50Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1044
|
[] |
Laxdal
| 2
|
pydantic/pydantic-settings
|
pydantic
| 528
|
In a class deriving from BaseSettings, an attribute with name `path` gets expanded to the environment var `PATH`
|
Here is an odd issue with `pydantic-settings` v2.7.1. If a class that derives from `BaseSettings` contains an attribute named `path` (even if it's lowercase!), that attribute gets expanded to whatever the current `PATH` environment variable is set to.
For instance:
```python
from pydantic_settings import BaseSettings
from pathlib import Path

class BrokenPathSettings(BaseSettings):
    path: Path = Path("./")
    not_path: Path = Path("./")

bps = BrokenPathSettings()
print(f"==[ path ]==\n{bps.path}\n")
print(f"==[ not_path ]==\n{bps.not_path}")
```
produces
```
==[ path ]==
<Whatever PATH is set to, e.g., `/home/<username>/.miniconda/bin:/usr/bin`>
==[ not_path ]==
.
```
What is even more concerning, the `path` attribute doesn't even have to be declared as a `Path` - the behaviour is broken even if it's `str`! However, if I set it to something other than `Path` or `str` (for instance, `int`), it fails with the following error:
```bash
File "/home/<username>/.miniconda/envs/path_test/lib/python3.13/site-packages/pydantic_settings/main.py", line 171, in __init__
super().__init__(
~~~~~~~~~~~~~~~~^
**__pydantic_self__._settings_build_values(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<25 lines>...
)
^
)
^
File "/home/<username>/.miniconda/envs/path_test/lib/python3.13/site-packages/pydantic/main.py", line 214, in __init__
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
pydantic_core._pydantic_core.ValidationError: 1 validation error for BrokenPathSettings
path
Input should be a valid integer, unable to parse string as an integer [type=int_parsing, input_value='/home/<username>/.miniconda/...', input_type=str]
For further information visit https://errors.pydantic.dev/2.10/v/int_parsing
```
Surprise factor aside, it vaguely smells of a security issue, although at this stage I'm not really sure how exactly it could be exploited.
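In the meantime, a minimal workaround sketch, assuming the cause is the default case-insensitive env-var lookup (which lets the `path` field match `PATH`):
```python
from pathlib import Path

from pydantic_settings import BaseSettings, SettingsConfigDict

class FixedPathSettings(BaseSettings):
    # With case-sensitive lookup, this field only matches an env var literally
    # named "path", so the uppercase PATH variable is ignored.
    model_config = SettingsConfigDict(case_sensitive=True)

    path: Path = Path("./")

print(FixedPathSettings().path)  # -> "." even though PATH is set
```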
|
closed
|
2025-01-21T18:29:26Z
|
2025-01-22T09:04:21Z
|
https://github.com/pydantic/pydantic-settings/issues/528
|
[
"unconfirmed"
] |
cantordust
| 1
|
TencentARC/GFPGAN
|
deep-learning
| 289
|
Output Result Files to only 1 folder
|
Is there a way to only output files from (whole_imgs) to the (restored_imgs) folder, and not (cmp, cropped_faces, restored_faces)? I don't need these other image sizes, and they just make the process take longer.
|
open
|
2022-10-13T18:20:28Z
|
2022-10-19T06:59:47Z
|
https://github.com/TencentARC/GFPGAN/issues/289
|
[] |
multifirez
| 1
|
sktime/pytorch-forecasting
|
pandas
| 1,596
|
[MNT] MPS backend test failures on MacOS
|
The CI fails with MPS backend failures on a number of tests:
```
RuntimeError: MPS backend out of memory (MPS allocated: 0 bytes, other allocations: 0 bytes, max allowed: 7.93 GB). Tried to allocate 256 bytes on shared pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).
[W 2024-08-22 20:58:07,168] Trial 0 failed with value None.
```
|
closed
|
2024-08-22T20:59:49Z
|
2024-09-13T16:05:47Z
|
https://github.com/sktime/pytorch-forecasting/issues/1596
|
[
"maintenance"
] |
fkiraly
| 8
|
ray-project/ray
|
data-science
| 50,698
|
CI test linux://python/ray/train/v2:test_data_parallel_trainer is flaky
|
CI test **linux://python/ray/train/v2:test_data_parallel_trainer** is consistently failing. Recent failures:
- https://buildkite.com/ray-project/postmerge/builds/8396#01951ae3-8f44-4218-901d-4b144474feab
- https://buildkite.com/ray-project/postmerge/builds/8390#01951a34-43e6-428b-b98f-1832dd663b5e
- https://buildkite.com/ray-project/postmerge/builds/8377#019515bf-8c94-4454-afb8-60c47eb48990
DataCaseName-linux://python/ray/train/v2:test_data_parallel_trainer-END
Managed by OSS Test Policy
|
closed
|
2025-02-18T21:33:05Z
|
2025-02-21T17:44:54Z
|
https://github.com/ray-project/ray/issues/50698
|
[
"bug",
"triage",
"flaky-tracker",
"ray-test-bot",
"ci-test",
"weekly-release-blocker",
"stability",
"ml"
] |
can-anyscale
| 9
|
keras-rl/keras-rl
|
tensorflow
| 356
|
Keras RL for another external Environment
|
Is it possible to use Keras-RL for any problem other than the OpenAI Gym environments?
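For context, my understanding is that keras-rl only needs an object exposing the Gym `Env` interface (`reset`, `step`, action/observation spaces), so any problem can be wrapped that way. A minimal sketch with toy, made-up dynamics:
```python
import numpy as np
import gym
from gym import spaces

class MyCustomEnv(gym.Env):
    """Hypothetical environment exposing the reset/step contract keras-rl expects."""

    def __init__(self):
        self.action_space = spaces.Discrete(2)
        self.observation_space = spaces.Box(low=-1.0, high=1.0, shape=(4,), dtype=np.float32)
        self.state = np.zeros(4, dtype=np.float32)

    def reset(self):
        self.state = np.zeros(4, dtype=np.float32)
        return self.state

    def step(self, action):
        # Toy dynamics: nudge the state and reward action 1.
        self.state = np.clip(self.state + (action - 0.5) * 0.1, -1.0, 1.0)
        return self.state, float(action), False, {}

# An agent built elsewhere (e.g. rl.agents.DQNAgent) would then train via:
# agent.fit(MyCustomEnv(), nb_steps=10_000)
```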
|
closed
|
2020-03-18T07:09:18Z
|
2020-03-30T15:25:31Z
|
https://github.com/keras-rl/keras-rl/issues/356
|
[] |
Jekso
| 4
|
strawberry-graphql/strawberry-django
|
graphql
| 263
|
Relay and model relationships
|
<!--- Provide a general summary of the changes you want in the title above. -->
<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->
How should relationships be handled with the Relay implementation?
```python
@strawberry.type
class Post(relay.Node):
    id: relay.NodeID[int]
    body: auto
    comments: List["Comment"]  # This doesn't seem to be correct.
    comments: relay.ListConnection["Comment"]  # Nor this.
```
|
closed
|
2023-06-19T02:12:12Z
|
2025-03-20T15:57:12Z
|
https://github.com/strawberry-graphql/strawberry-django/issues/263
|
[] |
salcedo
| 4
|
gradio-app/gradio
|
machine-learning
| 10,407
|
Chat Interface with `fill_height=True` Fails to Utilize Full Vertical Space with `save_history=True`
|
### Describe the bug
### Bug Description: Chat Interface Vertical Space Utilization
**Issue:** When `save_history=True` is set in the `gr.ChatInterface`, the chat box fails to utilize the full vertical space of the browser window, resulting in an unsatisfactory user experience.
**Expected Behavior:** The chat interface should fill the entire vertical space of the browser, providing a more comfortable and engaging chat experience.
**Steps to Reproduce:**
1. Set `save_history=True` in the `gr.ChatInterface` with `fill_height=True`.
2. Launch the application.
3. Observe that the chat box occupies less than half of the available vertical space.
**Impact:** This issue diminishes usability and aesthetics of the chat interface, making it less pleasant to use.
**Environment:** Gradio version 5.12, Python version 3.11.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
### Steps to Reproduce:
1. Set up a Gradio application with the following code snippet:
```python
if __name__ == "__main__":
demo = gr.ChatInterface(
fn=chat_function,
title="Bot",
submit_btn="Send",
stop_btn=True,
save_history=True, # Set this to True
type="messages",
fill_height=True,
fill_width=True,
autoscroll=True,
show_progress="minimal",
theme="default"
)
demo.queue()
demo.launch(
server_name="0.0.0.0",
server_port=7860,
show_error=True
)
```
2. Launch the application.
3. Open the application in a web browser.
4. Observe that the chat box occupies less than half of the available vertical space.
5. Toggle `save_history` to `False`; the chat interface then fills up the vertical space properly.
### Screenshot
fill_height=True, save_history=True
<img width="930" alt="Image" src="https://github.com/user-attachments/assets/92eccb11-cac1-4962-9303-c172a675ca23" />
fill_height=True, save_history=False
<img width="930" alt="Image" src="https://github.com/user-attachments/assets/080fd2ae-9ff4-4ef6-934c-66a0ede44842" />
### Logs
```shell
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Darwin
gradio version: 5.12.0
gradio_client version: 1.5.4
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.8.0
audioop-lts is not installed.
fastapi: 0.115.6
ffmpy: 0.5.0
gradio-client==1.5.4 is not installed.
httpx: 0.28.1
huggingface-hub: 0.27.1
jinja2: 3.1.5
markupsafe: 2.1.5
numpy: 1.26.4
orjson: 3.10.15
packaging: 24.2
pandas: 2.2.3
pillow: 11.1.0
pydantic: 2.10.5
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.9.2
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.41.3
tomlkit: 0.13.2
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.12.0
httpx: 0.28.1
huggingface-hub: 0.27.1
packaging: 24.2
typing-extensions: 4.12.2
websockets: 14.2
```
### Severity
I can work around it
|
closed
|
2025-01-22T07:38:13Z
|
2025-01-23T04:28:52Z
|
https://github.com/gradio-app/gradio/issues/10407
|
[
"bug"
] |
Cyfine
| 1
|
scikit-hep/awkward
|
numpy
| 3,221
|
`ak.nones_like`
|
### Description of new feature
It would be very useful to have this additional `*_like` operator to create a column of nulled data.
This comes up in a physics use case at the LHC (in cleaner experimental environments this doesn't hold), where a Jet is an object that has charge, but due to QCD screening and reconstruction effects that charge is difficult or impossible to determine accurately. This means that while Jets are what physicists typically call "candidates", i.e. four-momenta with an assigned charge, the charge is best described as `None`, since any associated number is confusing or misleading.
So, to successfully reconcile types with physics concepts, being able to generate `None` columns in the same shape as `pt` or other Jet quantities would be really useful, so that we can assign the `None` column as the candidate's charge.
@jpivarski created a few-line example of a mockup of expected behavior using awkward:
```
>>> some_data = ak.Array([[1, 2, 3], [], [4, 5]])
>>> with_none = ak.concatenate([ak.Array([None]), some_data]) # [None] has type ?unknown, merges well
>>> with_none
<Array [None, [1, 2, 3], [], [4, 5]] type='4 * option[var * int64]'>
>>> with_none[np.zeros(1000, np.int64)]
<Array [None, None, None, ..., None, None] type='1000 * option[var * int64]'>
```
|
open
|
2024-08-19T21:47:20Z
|
2024-08-20T12:56:43Z
|
https://github.com/scikit-hep/awkward/issues/3221
|
[
"feature"
] |
lgray
| 2
|
jupyter/nbviewer
|
jupyter
| 323
|
Gists with multiple files don't render an index
|
Viewing a regular GitHub repo will [show a file listing](http://nbviewer.ipython.org/github/ipython/ipython/tree/master/), but not a [gist with multiple files](http://nbviewer.ipython.org/gist/icook/899fcc7187d2d0ae5f57). Direct links to the gist's files work fine, though.
|
closed
|
2014-08-13T00:21:33Z
|
2014-08-22T15:22:56Z
|
https://github.com/jupyter/nbviewer/issues/323
|
[
"type:Bug"
] |
icook
| 1
|
strawberry-graphql/strawberry
|
asyncio
| 3,358
|
Private fields not private in Python 3.12.
|
<!-- Provide a general summary of the bug in the title above. -->
<!--- This template is entirely optional and can be removed, but is here to help both you and us. -->
<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->
## Describe the Bug
Private fields not private in Python 3.12. I reproduced with the following code:
```
import strawberry
@strawberry.type
class User:
name: str
age: int
secret: str = strawberry.Private[str]
@strawberry.type
class Query:
@strawberry.field
def user(self) -> User:
return User(name="Patrick", age=100, secret="can't tell")
schema = strawberry.Schema(query=Query)
```
and I see the following at `http://0.0.0.0:8000/graphql`:

## System Information
- Operating system: Apple M1 Sonoma | Python 3.12
- Strawberry version (if applicable): 0.218.1
|
closed
|
2024-01-24T09:47:51Z
|
2025-03-20T15:56:34Z
|
https://github.com/strawberry-graphql/strawberry/issues/3358
|
[
"bug"
] |
rsr5
| 5
|
keras-team/keras
|
machine-learning
| 20,071
|
tf.data.Dataset Pipeline Preprocessing Custom Layer Recommendation
|
I am looking to create a number of custom preprocessing layers to be used in a TensorFlow `tf.data` pipeline. I initially assumed I could subclass `keras.Layer` and in `call` simply use any `keras.ops` operations. I only use Python parameters statically for condition statements, but otherwise use `keras.ops` for all commands (e.g. `keras.ops.fori_loop`). I can run the pipeline alone successfully (e.g. `next(iter(train_ds))`); however, when I try to train a model using the TensorFlow backend, it complains with several issues as it tries to create a symbolic graph of my preprocessing layers. These layers are not attached to the model; they are attached to the data pipeline via `map`. I had assumed the dataset pipeline would run on the CPU, but it seems that my layers are being mapped to the GPU in a TF graph. If I force everything to run on the CPU, everything runs fine, but ideally I want the model training to happen on the GPU and the data pipeline to happen on the CPU.
Are there any reference examples I could follow?
When I looked at the included preprocessing layers of `keras`, they all seemed to use `keras.backend.numpy` for operations (rather than `keras.ops`). I also noticed the TF-pipeline-safe layers subclass `TFDataLayer`, which isn't exposed in the public API. Is there a way to indicate to Keras that I want to run the entire preprocessing pipeline on the CPU?
Any help would be greatly appreciated.
Below are some layers that I implemented as reference (based on what I could find in both `keras` and `keras-cv`):
```python
from typing import Callable

import keras

from .defines import NestedTensorValue
from .defines import parse_factor  # assumption: local helper mirroring keras_cv's parse_factor


def tf_keras_map(f, xs):
    # NOTE: Workaround until (https://github.com/keras-team/keras/issues/20048)
    import tensorflow as tf

    xs = keras.tree.map_structure(tf.convert_to_tensor, xs)

    def get_fn_output_signature(x):
        out = f(x)
        return keras.tree.map_structure(tf.TensorSpec.from_tensor, out)

    # Grab single element, unpacking and repacking single element
    xe = tf.nest.pack_sequence_as(xs, [y[0] for y in tf.nest.flatten(xs)])
    fn_output_signature = get_fn_output_signature(xe)
    return tf.map_fn(f, xs, fn_output_signature=fn_output_signature)


class BaseAugmentation(keras.layers.Layer):
    SAMPLES = "data"
    LABELS = "labels"
    TARGETS = "targets"
    ALL_KEYS = (SAMPLES, LABELS, TARGETS)
    TRANSFORMS = "transforms"
    IS_DICT = "is_dict"
    BATCHED = "is_batched"
    USE_TARGETS = "use_targets"
    NDIMS = 4  # Modify in subclass (includes batch size)

    def __init__(
        self,
        seed: int | None = None,
        auto_vectorize: bool = True,
        data_format: str | None = None,
        name: str | None = None,
        **kwargs,
    ):
        """BaseAugmentation acts as a base class for various custom augmentation layers.

        This class provides a common interface for augmenting samples and labels. In the future, we will
        add support for segmentation and bounding boxes.

        The only method that needs to be implemented by the subclass is
        - augment_sample: Augment a single sample during training.

        Optionally, you can implement the following methods:
        - augment_label: Augment a single label during training.
        - get_random_transformations: Returns a nested structure of random transformations that should be applied to the batch.
          This is required to have unique transformations for each sample in the batch and maintain the same transformations for samples and labels.
        - batch_augment: Augment a batch of samples and labels during training. Needed if layer requires access to all samples (e.g. CutMix).

        By default, this method will coerce the input into a batch as well as a nested structure of inputs.
        If auto_vectorize is set to True, the augment_sample and augment_label methods will be vectorized using keras.ops.vectorized_map.
        Otherwise, it will use keras.ops.map which runs sequentially.

        Args:
            seed (int | None): Random seed. Defaults to None.
            auto_vectorize (bool): If True, augment_sample and augment_label methods will be vectorized using keras.ops.vectorized_map.
                Otherwise, it will use keras.ops.map which runs sequentially. Defaults to True.
            data_format (str | None): Data format. Defaults to None. Will use keras.backend.image_data_format() if None.
            name (str | None): Layer name. Defaults to None.
        """
        super().__init__(name=name, **kwargs)
        self.seed = seed  # stored so get_config() can serialize it
        self._random_generator = keras.random.SeedGenerator(seed)
        self.data_format = data_format or keras.backend.image_data_format()
        self.built = True
        self.training = True
        self.auto_vectorize = auto_vectorize

    def _map_fn(
        self, func: Callable[[NestedTensorValue], keras.KerasTensor], inputs: NestedTensorValue
    ) -> keras.KerasTensor:
        """Calls appropriate mapping function with given inputs.

        Args:
            func (Callable): Function to be mapped.
            inputs (dict): Dictionary containing inputs.

        Returns:
            KerasTensor: Augmented samples or labels
        """
        if self.auto_vectorize:
            return keras.ops.vectorized_map(func, inputs)
        # NOTE: Workaround until (https://github.com/keras-team/keras/issues/20048)
        if keras.backend.backend() == "tensorflow":
            return tf_keras_map(func, inputs)
        return keras.ops.map(func, inputs)
    def call(self, inputs: NestedTensorValue, training: bool = True) -> NestedTensorValue:
        """This method will serve as the main entry point for the layer. It will handle the input formatting and output formatting.

        Args:
            inputs (NestedTensorValue): Inputs to be augmented.
            training (bool): Whether the model is training or not.

        Returns:
            NestedTensorValue: Augmented samples or labels.
        """
        self.training = training
        inputs, metadata = self._format_inputs(inputs)
        return self._format_outputs(self.batch_augment(inputs), metadata)

    def augment_sample(self, inputs: NestedTensorValue) -> keras.KerasTensor:
        """Augment a single sample during training.

        !!! note
            This method should be implemented by the subclass.

        Args:
            inputs (NestedTensorValue): Single sample.

        Returns:
            KerasTensor: Augmented sample.
        """
        return inputs[self.SAMPLES]

    def augment_samples(self, inputs: NestedTensorValue) -> keras.KerasTensor:
        """Augment a batch of samples during training.

        Args:
            inputs (NestedTensorValue): Batch of samples.

        Returns:
            KerasTensor: Augmented batch of samples.
        """
        return self._map_fn(self.augment_sample, inputs=inputs)

    def augment_label(self, inputs: NestedTensorValue) -> keras.KerasTensor:
        """Augment a single label during training.

        !!! note
            Implement this method if you need to augment labels.

        Args:
            inputs (NestedTensorValue): Single label.

        Returns:
            keras.KerasTensor: Augmented label.
        """
        return inputs[self.LABELS]

    def augment_labels(self, inputs: NestedTensorValue) -> keras.KerasTensor:
        """Augment a batch of labels during training.

        Args:
            inputs (NestedTensorValue): Batch of labels.

        Returns:
            keras.KerasTensor: Augmented batch of labels.
        """
        return self._map_fn(self.augment_label, inputs=inputs)

    def get_random_transformations(self, input_shape: tuple[int, ...]) -> NestedTensorValue:
        """Generates random transformations needed for augmenting samples and labels.

        Args:
            input_shape (tuple[int, ...]): Shape of the input (N, ...).

        Returns:
            NestedTensorValue: Batch of random transformations.

        !!! note
            This method should be implemented by the subclass if the layer requires random transformations.
        """
        return keras.ops.arange(input_shape[0])

    def batch_augment(self, inputs: NestedTensorValue) -> NestedTensorValue:
        """Handles processing entire batch of samples and labels in a nested structure.

        Responsible for calling augment_samples and augment_labels.

        Args:
            inputs (NestedTensorValue): Batch of samples and labels.

        Returns:
            NestedTensorValue: Augmented batch of samples and labels.
        """
        samples = inputs.get(self.SAMPLES, None)
        labels = inputs.get(self.LABELS, None)
        result = {}
        transformations = self.get_random_transformations(input_shape=keras.ops.shape(samples))
        result[self.SAMPLES] = self.augment_samples(inputs={self.SAMPLES: samples, self.TRANSFORMS: transformations})
        if labels is not None:
            result[self.LABELS] = self.augment_labels(inputs={self.LABELS: labels, self.TRANSFORMS: transformations})
        # END IF
        # Preserve any additional inputs unmodified by this layer.
        for key in inputs.keys() - result.keys():
            result[key] = inputs[key]
        return result

    def _format_inputs(self, inputs: NestedTensorValue) -> tuple[NestedTensorValue, dict[str, bool]]:
        """Validate and force inputs to be batched and placed in structured format.

        Args:
            inputs (NestedTensorValue): Inputs to be formatted.

        Returns:
            tuple[NestedTensorValue, dict[str, bool]]: Formatted inputs and metadata.
        """
        metadata = {self.IS_DICT: True, self.USE_TARGETS: False, self.BATCHED: True}
        if not isinstance(inputs, dict):
            inputs = {self.SAMPLES: inputs}
            metadata[self.IS_DICT] = False
        samples = inputs.get(self.SAMPLES, None)
        if samples is None:
            raise ValueError(f"Expect the inputs to have key {self.SAMPLES}. Got keys: {list(inputs.keys())}")
        # END IF
        if samples.shape.rank != self.NDIMS - 1 and samples.shape.rank != self.NDIMS:
            raise ValueError(f"Invalid input shape: {samples.shape}")
        # END IF
        if samples.shape.rank == self.NDIMS - 1:
            metadata[self.BATCHED] = False
            # Expand dims to make it batched for keys of interest
            for key in set(self.ALL_KEYS).intersection(inputs.keys()):
                if inputs[key] is not None:
                    inputs[key] = keras.ops.expand_dims(inputs[key], axis=0)
                # END IF
            # END FOR
        # END IF
        return inputs, metadata

    def _format_outputs(self, output: NestedTensorValue, metadata: dict[str, bool]) -> NestedTensorValue:
        """Format the output to match the initial input format.

        Args:
            output: Output to be formatted.
            metadata: Metadata used for formatting.

        Returns:
            Output in the original format.
        """
        if not metadata[self.BATCHED]:
            for key in set(self.ALL_KEYS).intersection(output.keys()):
                if output[key] is not None:  # check if tensor
                    output[key] = keras.ops.squeeze(output[key], axis=0)
                # END IF
            # END FOR
        # END IF
        if not metadata[self.IS_DICT]:
            return output[self.SAMPLES]
        if metadata[self.USE_TARGETS]:
            output[self.TARGETS] = output[self.LABELS]
            del output[self.LABELS]
        return output

    def compute_output_shape(self, input_shape, *args, **kwargs):
        """By default assumes the shape of the input is the same as the output.

        Args:
            input_shape: Shape of the input.

        Returns:
            tuple: Shape of the output

        !!! note
            This method should be implemented by the subclass if the output shape is different from the input shape.
        """
        return input_shape

    def get_config(self):
        """Serialize the layer configuration."""
        config = super().get_config()
        config.update(
            {
                "seed": self.seed,
                "auto_vectorize": self.auto_vectorize,
                "data_format": self.data_format,
            }
        )
        return config
class BaseAugmentation1D(BaseAugmentation):
    NDIMS = 3  # (N, T, C) or (N, C, T)

    def __init__(self, **kwargs):
        """BaseAugmentation1D acts as a base class for various custom augmentation layers.

        This class provides a common interface for augmenting samples and labels. In the future, we will
        add support for segmentation and 1D bounding boxes.

        The only method that needs to be implemented by the subclass is
        - augment_sample: Augment a single sample during training.

        Optionally, you can implement the following methods:
        - augment_label: Augment a single label during training.
        - get_random_transformations: Returns a nested structure of random transformations that should be applied to the batch.
          This is required to have unique transformations for each sample in the batch and maintain the same transformations for samples and labels.
        - batch_augment: Augment a batch of samples and labels during training. Needed if layer requires access to all samples (e.g. CutMix).

        By default, this method will coerce the input into a batch as well as a nested structure of inputs.
        If auto_vectorize is set to True, the augment_sample and augment_label methods will be vectorized using keras.ops.vectorized_map.
        Otherwise, it will use keras.ops.map which runs sequentially.

        Example:
        ```python
        class NormalizeLayer1D(BaseAugmentation1D):
            def __init__(self, **kwargs):
                ...

            def augment_sample(self, inputs):
                sample = inputs["data"]
                mu = keras.ops.mean(sample)
                std = keras.ops.std(sample)
                return (sample - mu) / (std + self.epsilon)

        x = np.random.rand(100, 3)
        lyr = NormalizeLayer1D(...)
        y = lyr(x, training=True)
        ```
        """
        super().__init__(**kwargs)
        if self.data_format == "channels_first":
            self.data_axis = -1
            self.ch_axis = -2
        else:
            self.data_axis = -2
            self.ch_axis = -1
        # END IF


class BaseAugmentation2D(BaseAugmentation):
    NDIMS = 4  # (N, H, W, C) or (N, C, H, W)

    def __init__(self, **kwargs):
        """BaseAugmentation2D acts as a base class for various custom augmentation layers.

        This class provides a common interface for augmenting samples and labels. In the future, we will
        add support for segmentation and 2D bounding boxes.

        The only method that needs to be implemented by the subclass is
        - augment_sample: Augment a single sample during training.

        Optionally, you can implement the following methods:
        - augment_label: Augment a single label during training.
        - get_random_transformations: Returns a nested structure of random transformations that should be applied to the batch.
          This is required to have unique transformations for each sample in the batch and maintain the same transformations for samples and labels.
        - batch_augment: Augment a batch of samples and labels during training. Needed if layer requires access to all samples (e.g. CutMix).

        By default, this method will coerce the input into a batch as well as a nested structure of inputs.
        If auto_vectorize is set to True, the augment_sample and augment_label methods will be vectorized using keras.ops.vectorized_map.
        Otherwise, it will use keras.ops.map which runs sequentially.

        Example:
        ```python
        class NormalizeLayer2D(BaseAugmentation2D):
            def __init__(self, name=None, **kwargs):
                ...

            def augment_sample(self, inputs):
                sample = inputs["data"]
                mu = keras.ops.mean(sample)
                std = keras.ops.std(sample)
                return (sample - mu) / (std + self.epsilon)

        x = np.random.rand(32, 32, 3)
        lyr = NormalizeLayer2D(...)
        y = lyr(x, training=True)
        ```
        """
        super().__init__(**kwargs)
        if self.data_format == "channels_first":
            self.ch_axis = -3
            self.height_axis = -2
            self.width_axis = -1
        else:
            self.ch_axis = -1
            self.height_axis = -3
            self.width_axis = -2
        # END IF
class RandomNoiseDistortion1D(BaseAugmentation1D):
    sample_rate: float
    frequency: tuple[float, float]
    amplitude: tuple[float, float]
    noise_type: str

    def __init__(
        self,
        sample_rate: float = 1,
        frequency: float | tuple[float, float] = 100,
        amplitude: float | tuple[float, float] = 0.1,
        noise_type: str = "normal",
        **kwargs,
    ):
        """Apply random noise distortion to the 1D input.

        Noise points are first generated at the given frequency resolution with amplitude picked based on noise_type.
        The noise points are then interpolated to match the input duration and added to the input.

        Args:
            sample_rate (float): Sample rate of the input.
            frequency (float | tuple[float, float]): Frequency of the noise in Hz. If tuple, frequency is randomly picked between the values.
            amplitude (float | tuple[float, float]): Amplitude of the noise. If tuple, amplitude is randomly picked between the values.
            noise_type (str): Type of noise to generate. Currently only "normal" is supported.

        Example:
        ```python
        sample_rate = 100  # Hz
        duration = 3 * sample_rate  # 3 seconds
        sig_freq = 10  # Hz
        sig_amp = 1  # Signal amplitude
        noise_freq = (1, 2)  # Noise frequency range
        noise_amp = (1, 2)  # Noise amplitude range
        x = sig_amp * np.sin(2 * np.pi * sig_freq * np.arange(duration) / sample_rate).reshape(-1, 1)
        lyr = RandomNoiseDistortion1D(sample_rate=sample_rate, frequency=noise_freq, amplitude=noise_amp)
        y = lyr(x, training=True)
        ```
        """
        super().__init__(**kwargs)
        self.sample_rate = sample_rate
        self.frequency = parse_factor(frequency, min_value=None, max_value=sample_rate / 2, param_name="frequency")
        self.amplitude = parse_factor(amplitude, min_value=None, max_value=None, param_name="amplitude")
        self.noise_type = noise_type

    def get_random_transformations(self, input_shape: tuple[int, int, int]):
        """Generate noise distortion tensor.

        Args:
            input_shape (tuple[int, ...]): Input shape.

        Returns:
            dict: Dictionary containing the noise tensor.
        """
        batch_size = input_shape[0]
        duration_size = input_shape[self.data_axis]
        ch_size = input_shape[self.ch_axis]
        # Add one period to the noise and clip later
        if self.frequency[0] == self.frequency[1]:
            frequency = self.frequency[0]
        else:
            frequency = keras.random.uniform(
                shape=(), minval=self.frequency[0], maxval=self.frequency[1], seed=self._random_generator
            )
        if self.amplitude[0] == self.amplitude[1]:
            amplitude = self.amplitude[0]
        else:
            amplitude = keras.random.uniform(
                shape=(), minval=self.amplitude[0], maxval=self.amplitude[1], seed=self._random_generator
            )
        noise_duration = keras.ops.cast((duration_size / self.sample_rate) * frequency + frequency, dtype="int32")
        if self.data_format == "channels_first":
            noise_shape = (batch_size, 1, ch_size, noise_duration)
        else:
            noise_shape = (batch_size, 1, noise_duration, ch_size)
        if self.noise_type == "normal":
            noise_pts = keras.random.normal(noise_shape, stddev=amplitude, seed=self._random_generator)
        else:
            raise ValueError(f"Invalid noise type: {self.noise_type}")
        # keras.ops doesn't contain any low-level interpolate, so we leverage the
        # image module and fix height to 1 as a workaround
        noise = keras.ops.image.resize(
            noise_pts,
            size=(1, duration_size),
            interpolation="bicubic",
            crop_to_aspect_ratio=False,
            data_format=self.data_format,
        )
        # Remove height dimension
        noise = keras.ops.squeeze(noise, axis=1)
        return {"noise": noise}

    def augment_samples(self, inputs) -> keras.KerasTensor:
        """Augment all samples in the batch as it's faster."""
        samples = inputs[self.SAMPLES]
        if self.training:
            noise = inputs[self.TRANSFORMS]["noise"]
            return samples + noise
        return samples

    def get_config(self):
        """Serialize the layer configuration to a JSON-compatible dictionary."""
        config = super().get_config()
        config.update(
            {
                "sample_rate": self.sample_rate,
                "frequency": self.frequency,
                "amplitude": self.amplitude,
                "noise_type": self.noise_type,
            }
        )
        return config
```
|
closed
|
2024-07-31T18:13:56Z
|
2024-08-02T19:50:08Z
|
https://github.com/keras-team/keras/issues/20071
|
[
"type:support"
] |
apage224
| 7
|
tensorflow/tensor2tensor
|
machine-learning
| 1,139
|
AttributeError: 'HParams' object has no attribute 'decode_reference'
|
### Description
...When I run t2t-decoder, an error occurs. Why, and what is happening?
Here is the command:
```
t2t-decoder --data_dir=data --problem=translate_enzh_wmt32k --model=transformer --hparams_set=transformer_base_single_gpu --output_dir=train --decode_hparams="beam_size=4,alpha=0.6" --decode_from_file=test10.10 --decode_to_file=translation_test.en
```
and the error:
```
INFO:tensorflow:Inference results OUTPUT: 翻版
INFO:tensorflow:Inference results INPUT: C
INFO:tensorflow:Inference results OUTPUT: C
INFO:tensorflow:Inference results INPUT: B
INFO:tensorflow:Inference results OUTPUT: B
INFO:tensorflow:Inference results INPUT: A
INFO:tensorflow:Inference results OUTPUT: A
INFO:tensorflow:Elapsed Time: 5906.26919
INFO:tensorflow:Averaged Single Token Generation Time: 0.0030321
INFO:tensorflow:Writing decodes into translation_test.en
INFO:tensorflow:Running decode hooks.
Traceback (most recent call last):
  File "/home/zhangshaoyuan/anaconda3/bin/t2t-decoder", line 17, in <module>
    tf.app.run()
  File "/home/zhangshaoyuan/anaconda3/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "/home/zhangshaoyuan/anaconda3/bin/t2t-decoder", line 12, in main
    t2t_decoder.main(argv)
  File "/home/zhangshaoyuan/anaconda3/lib/python3.6/site-packages/tensor2tensor/bin/t2t_decoder.py", line 190, in main
    decode(estimator, hp, decode_hp)
  File "/home/zhangshaoyuan/anaconda3/lib/python3.6/site-packages/tensor2tensor/bin/t2t_decoder.py", line 90, in decode
    checkpoint_path=FLAGS.checkpoint_path)
  File "/home/zhangshaoyuan/anaconda3/lib/python3.6/site-packages/tensor2tensor/utils/decoding.py", line 443, in decode_from_file
    ), None)
  File "/home/zhangshaoyuan/anaconda3/lib/python3.6/site-packages/tensor2tensor/utils/decoding.py", line 823, in run_postdecode_hooks
    summaries = hook(decode_hook_args)
  File "/home/zhangshaoyuan/anaconda3/lib/python3.6/site-packages/tensor2tensor/data_generators/translate.py", line 80, in compute_bleu_summaries
    if (decode_hparams.decode_reference is None or
AttributeError: 'HParams' object has no attribute 'decode_reference'
```
### Environment information
Name: tensor2tensor
Version: 1.9.0
Name: tensorflow
Version: 1.11.0
Can anyone tell me why? Thanks very much!
|
open
|
2018-10-12T07:53:01Z
|
2021-01-29T15:38:56Z
|
https://github.com/tensorflow/tensor2tensor/issues/1139
|
[] |
wingsyuan
| 5
|
ResidentMario/missingno
|
pandas
| 4
|
Parameter for y axis font size and sparkline font size
|
Currently, these sizes are hardcoded. `ax0.set_yticklabels([1, df.shape[0]], fontsize=20)` and
```
ax1.annotate(max_completeness,
xy=(max_completeness, max_completeness_index),
xytext=(max_completeness + 2, max_completeness_index),
fontsize=14,
va='center',
ha='left')
```
I wonder if either of the two options could be provided:
1. Same font size is used everywhere (which is a parameter already)
2. Additional params are made available for tweaking these individual font sizes.
I would advocate 1 over 2 for simplicity. It would also be useful to allow usage of different fonts, like `serif`. I wonder if all this could be passed as `kwargs` to matplotlib.
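For illustration, a sketch of option 1 using a generic matplotlib example (the `fontsize` parameter name and the demo function are mine, not an existing API):
```python
import matplotlib.pyplot as plt
import numpy as np

# One shared fontsize threaded through both the tick labels and the
# sparkline-style annotation, mirroring the two hardcoded calls above.
def sparkline_demo(data, fontsize=16):
    fig, ax = plt.subplots()
    ax.plot(data)
    ax.set_yticks([data.min(), data.max()])
    ax.set_yticklabels([f"{data.min():.1f}", f"{data.max():.1f}"], fontsize=fontsize)
    ax.annotate(f"{data.max():.1f}",
                xy=(data.argmax(), data.max()),
                xytext=(data.argmax() + 2, data.max()),
                fontsize=fontsize,
                va='center',
                ha='left')
    return fig

fig = sparkline_demo(np.random.rand(50))
```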
|
closed
|
2016-03-30T06:16:21Z
|
2016-04-08T05:30:02Z
|
https://github.com/ResidentMario/missingno/issues/4
|
[
"enhancement"
] |
nipunbatra
| 3
|
piccolo-orm/piccolo
|
fastapi
| 900
|
Type of "create_pydantic_model" is partially unknown (Python 3.11)
|
Hey!
When trying to use Piccolo v1.1 with Pydantic V2, Python 3.11 and Pylance in strict mode I've got this issue:
```
Type of "create_pydantic_model" is partially unknown
Type of "create_pydantic_model" is "(table: type[Table], nested: bool | Tuple[ForeignKey, ...] = False, exclude_columns: Tuple[Column, ...] = (), include_columns: Tuple[Column, ...] = (), include_default_columns: bool = False, include_readable: bool = False, all_optional: bool = False, model_name: str | None = None, deserialize_json: bool = False, recursion_depth: int = 0, max_recursion_depth: int = 5, pydantic_config: ConfigDict | None = None, **schema_extra_kwargs: Unknown) -> type[BaseModel]"
```
Is there any way to run a FastAPI project in strict type mode? Or is it recommended to use basic mode for now?
And by the way, when I'm trying this (in basic mode):
```python
Foo_pydantic = create_pydantic_model(
    table=Foo,
)

def bar(foo: Foo_pydantic):
    pass
```
I also run into an issue:
```
Variable not allowed in type expression
(variable) Image_schema: Unknown
```
|
open
|
2023-11-03T15:22:32Z
|
2023-11-07T17:55:27Z
|
https://github.com/piccolo-orm/piccolo/issues/900
|
[] |
Conobi
| 3
|
roboflow/supervision
|
machine-learning
| 1,785
|
What is supervision's coordinate system?
|
### Search before asking
- [x] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
What is supervision's coordinate system? I'm pretty sure that it uses top-left origin (y increases down, x increases right), but I couldn't find it mentioned anywhere in the docs. I'm about to go test what it is, but I think the docs could be improved by mentioning this somewhere. Perhaps on a foundational class's docs like `Detections`.
### Additional
_No response_
|
closed
|
2025-02-05T15:45:15Z
|
2025-02-19T14:25:10Z
|
https://github.com/roboflow/supervision/issues/1785
|
[
"question"
] |
mattico
| 1
|
vitalik/django-ninja
|
django
| 1,066
|
Mock Authentication for a view
|
Hello! I was trying to rewrite some of my tests involving API calls through Django's test client, and I couldn't figure out how to mock the authentication callable.
I've written a method that validates the supplied token with a remote server (a CAS-like service); this is how I handle authentication on all of my other projects. I used to set a test token for testing purposes and use it to test the API and authentication, but this makes tests slow due to the latency of requests going back and forth to the authentication service. So I wanted to mock the authentication callable instead, to bypass the whole authentication flow altogether.
**core/api.py**
```python
from ninja import Router
from users.auth import user_auth

router = Router(auth=user_auth)

@router.get("/_health", tags=["Health Check"])
def health(request: HttpRequest):
    """
    Health check doesn't check any functionality; it only validates that the server is up.
    """
    return {"status": "healthy"}
```
**users/auth.py**
```python
def user_auth(request):
    # does authentication; either raises an HttpError or returns a CustomUser instance
```
**tests/test_api.py** (this is not the actual test, although I've also tried running exactly this and still no luck)
```python
class TestMocks(TestCase):
    @patch("core.api.user_auth")
    def test_patch(self, mock):
        response = self.client.get(
            path=reverse_lazy("api-v0.1.0:health"),
        )
        mock.assert_called()  # <-- this always fails
```
I couldn't find any specific documentation in the official documents about how to mock the authentication callable passed to a router instance. I tried patching it using `unittest.mock.patch` decorator but that didn't work.
So my question is: is there a way to do this? And if not, what is the correct way to bypass authentication while testing the API operations?
Thanks in advance.
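(A self-contained illustration of why the patch above never fires, assuming the router keeps a direct reference to the callable it was constructed with; this is generic Python, not django-ninja internals:)
```python
from unittest import mock

def auth(request):
    return "real"

class Router:
    def __init__(self, auth):
        self.auth = auth  # the function object is captured here, at import time

router = Router(auth=auth)

with mock.patch(f"{__name__}.auth", return_value="mocked"):
    # The module attribute is rebound, but the router still holds the original
    # function object, so the mock is never called.
    print(router.auth(None))  # -> "real"
```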
|
open
|
2024-01-27T11:44:27Z
|
2024-12-22T19:46:23Z
|
https://github.com/vitalik/django-ninja/issues/1066
|
[] |
AryanGHM
| 3
|
prkumar/uplink
|
rest-api
| 17
|
Add doc strings to argument annotation classes
|
Argument annotation classes in `uplink/types.py` are missing class doc strings. To improve code documentation, we need to add doc strings to the following classes, adhering to the [Google Style Guide](https://google.github.io/styleguide/pyguide.html?showone=Comments#Comments) for consistency with the rest of the codebase:
- [x] `uplink.types.Query`
- [x] `uplink.types.QueryMap`
- [x] `uplink.types.Header`
- [x] `uplink.types.HeaderMap`
- [x] `uplink.types.Field`
- [x] `uplink.types.FieldMap`
- [x] `uplink.types.Part`
- [x] `uplink.types.PartMap`
- [x] `uplink.types.Body`
- [x] `uplink.types.Url`
|
closed
|
2017-11-12T21:13:04Z
|
2017-12-01T18:44:20Z
|
https://github.com/prkumar/uplink/issues/17
|
[
"help wanted",
"good first issue",
"Documentation"
] |
prkumar
| 2
|
ClimbsRocks/auto_ml
|
scikit-learn
| 439
|
get bad score running the sample code
|
1) I configured everything and ran the whole script, and I get a negative score on the Boston dataset. Since it is just a sample, is a bad score expected?
2) Does the default only use gradient boosting for classification and regression, rather than automatically choosing the best model for training and prediction?
|
open
|
2021-04-18T13:54:26Z
|
2021-04-18T14:11:15Z
|
https://github.com/ClimbsRocks/auto_ml/issues/439
|
[] |
Aun0124
| 0
|
ivy-llc/ivy
|
numpy
| 28,448
|
Extend ivy-lint to everything instead of just the frontends
|
We're currently using a custom pre-commit lint hook specifically designed for formatting Python files in Ivy's frontend, as detailed in our documentation and implemented in our [lint hook repository](https://github.com/unifyai/lint-hook). This formatter organizes the code into two main sections: `helpers` and `main`, sorting functions alphabetically within these sections based on a [specific regex pattern](https://github.com/unifyai/lint-hook/blob/main/ivy_lint/formatters/function_ordering.py#L15).
The task at hand is to adapt and extend this formatter to cater to Ivy's backend and the Ivy Stateful API. Unlike the frontend, where the division is simply between `helpers` and `main` functions, the backend and stateful API require a more nuanced approach to accommodate various headers that categorize functions into either the "Array API Standard" or "Extras", among others. This distinction is crucial as it helps in segregating functions that adhere to the standard from those that do not, with new functions being added regularly.
For the backend and Ivy Stateful API, the goal is to maintain the integrity of these headers, such as "Array API Standard", "Autograd", "Optimizer Steps", "Optimizer Updates", "Array Printing", "Retrieval", "Conversions", "Memory", etc., ensuring they remain unchanged. The proposed approach involves sorting functions alphabetically within each section defined by these headers, thereby preserving the organizational structure and clarity regarding the functionalities of each section.
The desired structure for updating the formatter should adhere to the following template, ensuring a clear and organized codebase:
```py
# global imports
# local imports
# Global declarations
<Global variables, mode stacks, initializers, postponed evaluation typehints, etc.>
# Helpers #
# -------- #
<Private helper functions specific to the submodule>
# Classes
<Class definitions within the submodule>
# <function section header 1>
<Alphabetical listing of functions in section 1, including relevant assignment statements>
# <function section header 2>
<Alphabetical listing of functions in section 2>
...
# <function section header n>
<Alphabetical listing of functions in section n>
```
This structure not only ensures functions are easily locatable and the code remains clean but also respects the categorization of functionalities as per Ivy's standards. The approach was previously attempted in a pull request ([#22830](https://github.com/unifyai/ivy/pull/22830)), which serves as a reference for implementing these changes.
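As a starting point, a minimal sketch of the per-section sort (my own illustration, not the existing `ivy_lint` code):
```python
import re

# Sort top-level functions alphabetically within one header-delimited section,
# leaving everything before the first function (headers, globals) untouched.
FUNC_RE = re.compile(r"^def (\w+)", re.MULTILINE)

def sort_section(section: str) -> str:
    starts = [m.start() for m in FUNC_RE.finditer(section)]
    if not starts:
        return section
    prefix = section[: starts[0]]
    bounds = starts + [len(section)]
    chunks = [section[a:b] for a, b in zip(bounds, bounds[1:])]
    chunks.sort(key=lambda chunk: FUNC_RE.match(chunk).group(1))
    return prefix + "".join(chunks)

src = "# Array API Standard #\ndef zeros():\n    pass\n\ndef abs():\n    pass\n"
print(sort_section(src))  # header stays first; abs now precedes zeros
```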
If you have any questions feel free to reach out to @NripeshN or @KareemMAX
|
closed
|
2024-02-28T08:07:29Z
|
2024-05-06T10:47:45Z
|
https://github.com/ivy-llc/ivy/issues/28448
|
[
"Bounty"
] |
vedpatwardhan
| 1
|
coqui-ai/TTS
|
python
| 3,840
|
[Bug] can not load the checkpoints when do fine-tune with XTTS_v2
|
### Describe the bug
Hello, community and @eginhard,
For the XTTS fine-tuning, I manually downloaded `dvae.pth`, `mel_stats.pth`, `model.pth`, and `vocab.json` for use in `train_gpt_xtts.py`.
Further, below is the command line for fine-tuning XTTS_v2.
```
CUDA_VISIBLE_DEVICES="0" python recipes/mshop/xtts_v2/train_gpt_xtts.py \
--restore_path /home/ec2-user/SageMaker/workspace/TTS/XTTS/xtts_v2/model.pth
```
where the model.pth is derived from `tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)`
I got the error as below:
```
| > Layer missing in the checkpoint: dvae.decoder.3.net.4.weight
| > Layer missing in the checkpoint: dvae.decoder.3.net.4.bias
| > Layer missing in the checkpoint: dvae.decoder.4.0.conv.weight
| > Layer missing in the checkpoint: dvae.decoder.4.0.conv.bias
| > Layer missing in the checkpoint: dvae.decoder.5.0.conv.weight
| > Layer missing in the checkpoint: dvae.decoder.5.0.conv.bias
| > Layer missing in the checkpoint: dvae.decoder.6.weight
| > Layer missing in the checkpoint: dvae.decoder.6.bias
| > Layer missing in the checkpoint: dvae.codebook.embed
| > Layer missing in the checkpoint: dvae.codebook.cluster_size
| > Layer missing in the checkpoint: dvae.codebook.embed_avg
| > Layer missing in the checkpoint: torch_mel_spectrogram_dvae.mel_stft.spectrogram.window
| > Layer missing in the checkpoint: torch_mel_spectrogram_dvae.mel_stft.mel_scale.fb
| > 0 / 1023 layers are restored.
> Model restored from step 10000000
Traceback (most recent call last):
File "/home/ec2-user/SageMaker/workspace/TTS/XTTS/recipes/mshop/xtts_v2/train_gpt_xtts.py", line 202, in <module>
main()
File "/home/ec2-user/SageMaker/workspace/TTS/XTTS/recipes/mshop/xtts_v2/train_gpt_xtts.py", line 186, in main
trainer = Trainer(
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/trainer/trainer.py", line 558, in __init__
(self.model, self.optimizer, self.scaler, self.restore_step, self.restore_epoch) = self.restore_model(
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/trainer/trainer.py", line 862, in restore_model
restore_epoch = checkpoint["epoch"]
KeyError: 'epoch'
```
As a result:
1) I put the pretrained weights (dvae, hifigan's `mel_stats.pth`, `model.pth`) in DVAE_CHECKPOINT, MEL_NORM_FILE, TOKENIZER_FILE, and XTTS_CHECKPOINT, but it does not seem to work.
2) When I inspect the XTTS_v2 checkpoint with `torch.load()` and look for 'epoch', there is no `epoch` key in the checkpoint.
### To Reproduce
```
CUDA_VISIBLE_DEVICES="0" python recipes/mshop/xtts_v2/train_gpt_xtts.py \
--restore_path /home/ec2-user/SageMaker/workspace/TTS/XTTS/xtts_v2/model.pth
```
### Expected behavior
fine-tuning with own dataset
### Logs
```shell
| > Layer missing in the checkpoint: xtts.gpt.gpt.h.28.ln_2.bias
| > Layer missing in the checkpoint: xtts.gpt.gpt.h.28.mlp.c_fc.weight
| > Layer missing in the checkpoint: xtts.gpt.gpt.h.28.mlp.c_fc.bias
| > Layer missing in the checkpoint: xtts.gpt.gpt.h.28.mlp.c_proj.weight
| > Layer missing in the checkpoint: xtts.gpt.gpt.h.28.mlp.c_proj.bias
| > Layer missing in the checkpoint: xtts.gpt.gpt.h.29.ln_1.weight
| > Layer missing in the checkpoint: xtts.gpt.gpt.h.29.ln_1.bias
| > Layer missing in the checkpoint: xtts.gpt.gpt.h.29.attn.c_attn.weight
| > Layer missing in the checkpoint: xtts.gpt.gpt.h.29.attn.c_attn.bias
| > Layer missing in the checkpoint: xtts.gpt.gpt.h.29.attn.c_proj.weight
| > Layer missing in the checkpoint: xtts.gpt.gpt.h.29.attn.c_proj.bias
| > Layer missing in the checkpoint: xtts.gpt.gpt.h.29.ln_2.weight
| > Layer missing in the checkpoint: xtts.gpt.gpt.h.29.ln_2.bias
| > Layer missing in the checkpoint: xtts.gpt.gpt.h.29.mlp.c_fc.weight
| > Layer missing in the checkpoint: xtts.gpt.gpt.h.29.mlp.c_fc.bias
| > Layer missing in the checkpoint: xtts.gpt.gpt.h.29.mlp.c_proj.weight
| > Layer missing in the checkpoint: xtts.gpt.gpt.h.29.mlp.c_proj.bias
| > Layer missing in the checkpoint: xtts.gpt.gpt.ln_f.weight
| > Layer missing in the checkpoint: xtts.gpt.gpt.ln_f.bias
| > Layer missing in the checkpoint: xtts.gpt.mel_pos_embedding.emb.weight
| > Layer missing in the checkpoint: xtts.gpt.text_pos_embedding.emb.weight
| > Layer missing in the checkpoint: xtts.gpt.final_norm.weight
| > Layer missing in the checkpoint: xtts.gpt.final_norm.bias
| > Layer missing in the checkpoint: xtts.gpt.text_head.weight
| > Layer missing in the checkpoint: xtts.gpt.text_head.bias
| > Layer missing in the checkpoint: xtts.gpt.mel_head.weight
| > Layer missing in the checkpoint: xtts.gpt.mel_head.bias
| > Layer missing in the checkpoint: xtts.gpt.conditioning_perceiver.latents
| > Layer missing in the checkpoint: xtts.gpt.conditioning_perceiver.layers.0.0.to_q.weight
| > Layer missing in the checkpoint: xtts.gpt.conditioning_perceiver.layers.0.0.to_kv.weight
| > Layer missing in the checkpoint: xtts.gpt.conditioning_perceiver.layers.0.0.to_out.weight
| > Layer missing in the checkpoint: xtts.gpt.conditioning_perceiver.layers.0.1.0.weight
| > Layer missing in the checkpoint: xtts.gpt.conditioning_perceiver.layers.0.1.0.bias
| > Layer missing in the checkpoint: xtts.gpt.conditioning_perceiver.layers.0.1.2.weight
| > Layer missing in the checkpoint: xtts.gpt.conditioning_perceiver.layers.0.1.2.bias
| > Layer missing in the checkpoint: xtts.gpt.conditioning_perceiver.layers.1.0.to_q.weight
| > Layer missing in the checkpoint: xtts.gpt.conditioning_perceiver.layers.1.0.to_kv.weight
| > Layer missing in the checkpoint: xtts.gpt.conditioning_perceiver.layers.1.0.to_out.weight
| > Layer missing in the checkpoint: xtts.gpt.conditioning_perceiver.layers.1.1.0.weight
| > Layer missing in the checkpoint: xtts.gpt.conditioning_perceiver.layers.1.1.0.bias
| > Layer missing in the checkpoint: xtts.gpt.conditioning_perceiver.layers.1.1.2.weight
| > Layer missing in the checkpoint: xtts.gpt.conditioning_perceiver.layers.1.1.2.bias
| > Layer missing in the checkpoint: xtts.gpt.conditioning_perceiver.norm.gamma
| > Layer missing in the checkpoint: torch_mel_spectrogram_style_encoder.mel_stft.spectrogram.window
| > Layer missing in the checkpoint: torch_mel_spectrogram_style_encoder.mel_stft.mel_scale.fb
| > Layer missing in the checkpoint: dvae.discrete_loss.accumulator_index
| > Layer missing in the checkpoint: dvae.discrete_loss.accumulator_filled
| > Layer missing in the checkpoint: dvae.discrete_loss.accumulator
| > Layer missing in the checkpoint: dvae.encoder.0.0.weight
| > Layer missing in the checkpoint: dvae.encoder.0.0.bias
| > Layer missing in the checkpoint: dvae.encoder.1.0.weight
| > Layer missing in the checkpoint: dvae.encoder.1.0.bias
| > Layer missing in the checkpoint: dvae.encoder.2.net.0.weight
| > Layer missing in the checkpoint: dvae.encoder.2.net.0.bias
| > Layer missing in the checkpoint: dvae.encoder.2.net.2.weight
| > Layer missing in the checkpoint: dvae.encoder.2.net.2.bias
| > Layer missing in the checkpoint: dvae.encoder.2.net.4.weight
| > Layer missing in the checkpoint: dvae.encoder.2.net.4.bias
| > Layer missing in the checkpoint: dvae.encoder.3.net.0.weight
| > Layer missing in the checkpoint: dvae.encoder.3.net.0.bias
| > Layer missing in the checkpoint: dvae.encoder.3.net.2.weight
| > Layer missing in the checkpoint: dvae.encoder.3.net.2.bias
| > Layer missing in the checkpoint: dvae.encoder.3.net.4.weight
| > Layer missing in the checkpoint: dvae.encoder.3.net.4.bias
| > Layer missing in the checkpoint: dvae.encoder.4.net.0.weight
| > Layer missing in the checkpoint: dvae.encoder.4.net.0.bias
| > Layer missing in the checkpoint: dvae.encoder.4.net.2.weight
| > Layer missing in the checkpoint: dvae.encoder.4.net.2.bias
| > Layer missing in the checkpoint: dvae.encoder.4.net.4.weight
| > Layer missing in the checkpoint: dvae.encoder.4.net.4.bias
| > Layer missing in the checkpoint: dvae.encoder.5.weight
| > Layer missing in the checkpoint: dvae.encoder.5.bias
| > Layer missing in the checkpoint: dvae.decoder.0.weight
| > Layer missing in the checkpoint: dvae.decoder.0.bias
| > Layer missing in the checkpoint: dvae.decoder.1.net.0.weight
| > Layer missing in the checkpoint: dvae.decoder.1.net.0.bias
| > Layer missing in the checkpoint: dvae.decoder.1.net.2.weight
| > Layer missing in the checkpoint: dvae.decoder.1.net.2.bias
| > Layer missing in the checkpoint: dvae.decoder.1.net.4.weight
| > Layer missing in the checkpoint: dvae.decoder.1.net.4.bias
| > Layer missing in the checkpoint: dvae.decoder.2.net.0.weight
| > Layer missing in the checkpoint: dvae.decoder.2.net.0.bias
| > Layer missing in the checkpoint: dvae.decoder.2.net.2.weight
| > Layer missing in the checkpoint: dvae.decoder.2.net.2.bias
| > Layer missing in the checkpoint: dvae.decoder.2.net.4.weight
| > Layer missing in the checkpoint: dvae.decoder.2.net.4.bias
| > Layer missing in the checkpoint: dvae.decoder.3.net.0.weight
| > Layer missing in the checkpoint: dvae.decoder.3.net.0.bias
| > Layer missing in the checkpoint: dvae.decoder.3.net.2.weight
| > Layer missing in the checkpoint: dvae.decoder.3.net.2.bias
| > Layer missing in the checkpoint: dvae.decoder.3.net.4.weight
| > Layer missing in the checkpoint: dvae.decoder.3.net.4.bias
| > Layer missing in the checkpoint: dvae.decoder.4.0.conv.weight
| > Layer missing in the checkpoint: dvae.decoder.4.0.conv.bias
| > Layer missing in the checkpoint: dvae.decoder.5.0.conv.weight
| > Layer missing in the checkpoint: dvae.decoder.5.0.conv.bias
| > Layer missing in the checkpoint: dvae.decoder.6.weight
| > Layer missing in the checkpoint: dvae.decoder.6.bias
| > Layer missing in the checkpoint: dvae.codebook.embed
| > Layer missing in the checkpoint: dvae.codebook.cluster_size
| > Layer missing in the checkpoint: dvae.codebook.embed_avg
| > Layer missing in the checkpoint: torch_mel_spectrogram_dvae.mel_stft.spectrogram.window
| > Layer missing in the checkpoint: torch_mel_spectrogram_dvae.mel_stft.mel_scale.fb
| > 0 / 1023 layers are restored.
> Model restored from step 10000000
Traceback (most recent call last):
File "/home/ec2-user/SageMaker/workspace/TTS/XTTS/recipes/mshop/xtts_v2/train_gpt_xtts.py", line 202, in <module>
main()
File "/home/ec2-user/SageMaker/workspace/TTS/XTTS/recipes/mshop/xtts_v2/train_gpt_xtts.py", line 186, in main
trainer = Trainer(
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/trainer/trainer.py", line 558, in __init__
(self.model, self.optimizer, self.scaler, self.restore_step, self.restore_epoch) = self.restore_model(
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/trainer/trainer.py", line 862, in restore_model
restore_epoch = checkpoint["epoch"]
KeyError: 'epoch'
```
### Environment
```shell
{
"CUDA": {
"GPU": [
"Tesla V100-SXM2-32GB",
"Tesla V100-SXM2-32GB",
"Tesla V100-SXM2-32GB",
"Tesla V100-SXM2-32GB",
"Tesla V100-SXM2-32GB",
"Tesla V100-SXM2-32GB",
"Tesla V100-SXM2-32GB",
"Tesla V100-SXM2-32GB"
],
"available": true,
"version": "12.1"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.3.1+cu121",
"TTS": "0.22.0",
"numpy": "1.22.0"
},
"System": {
"OS": "Linux",
"architecture": [
"64bit",
"ELF"
],
"processor": "x86_64",
"python": "3.10.8",
"version": "#1 SMP Tue May 21 16:52:24 UTC 2024"
}
}
```
### Additional context
...
|
closed
|
2024-07-29T14:10:11Z
|
2024-07-29T18:46:28Z
|
https://github.com/coqui-ai/TTS/issues/3840
|
[
"bug"
] |
kaen2891
| 4
|
dask/dask
|
scikit-learn
| 11,749
|
Support `scipy.sparse.sparray`s
|
Since older versions of scipy don't provide `sparray`, there could be a conditional branch here: `if hasattr(scipy.sparse, "sparray"): ...`
https://github.com/dask/dask/blob/4d71629d1f22ced0dd780919f22e70a642ec6753/dask/array/backends.py#L212-L232
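A minimal sketch of that guard (the `register` callback stands in for the lookup registrations in `backends.py` and is an assumption, not dask's actual API):
```python
import scipy.sparse

# Sketch of the suggested guard (the register callback is illustrative):
# handle the spmatrix classes unconditionally, and additionally register
# the sparray base class when the installed scipy provides it.
def register_scipy_sparse(register):
    register(scipy.sparse.spmatrix)
    if hasattr(scipy.sparse, "sparray"):  # only on newer scipy versions
        register(scipy.sparse.sparray)
```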
|
closed
|
2025-02-14T14:56:43Z
|
2025-02-21T13:28:08Z
|
https://github.com/dask/dask/issues/11749
|
[
"needs triage"
] |
flying-sheep
| 0
|
adbar/trafilatura
|
web-scraping
| 117
|
Cannot import Trafilatura
|
ImportError: cannot import name 'Document' from 'readability' (C:\Users\name\AppData\Roaming\Python\Python38\site-packages\readability\__init__.py)
|
closed
|
2021-09-17T18:04:03Z
|
2021-10-01T19:47:40Z
|
https://github.com/adbar/trafilatura/issues/117
|
[
"question"
] |
SkateRock123
| 5
|
cvat-ai/cvat
|
tensorflow
| 8,802
|
Where are the annotation txt files corresponding to the images in the project or task?
|
### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Is your feature request related to a problem? Please describe.
I want to be able to directly find the corresponding txt file for each image after annotating on the CVAT platform, so that I don't have to use the export dataset feature in tasks. Thanks!
### Describe the solution you'd like
_No response_
### Describe alternatives you've considered
_No response_
### Additional context
_No response_
|
closed
|
2024-12-09T13:41:44Z
|
2024-12-11T07:01:25Z
|
https://github.com/cvat-ai/cvat/issues/8802
|
[
"enhancement"
] |
stephen-TT
| 5
|
joeyespo/grip
|
flask
| 140
|
Rendering offline does not work
|
Hello, I really like the project and I use it to render my markdown notes. But sometimes I work offline, and grip does not work. I see there is a `render_offline` option that defaults to False, but it is never set anywhere else.
By hardcoding `render_offline = True` in renderers.py, I see it works, and it uses the cached GitHub renderer.
Can you add either a ping test to the script to set `render_offline`, or at least a parser argument like `--render-offline`? Thanks.
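A hedged sketch of the ping-test idea, standard library only (the integration point into grip is an assumption):
```python
import socket

# Sketch of the proposed ping test: probe GitHub's API endpoint and fall
# back to offline rendering when the network is unreachable.
def github_reachable(timeout=2.0):
    try:
        socket.create_connection(("api.github.com", 443), timeout=timeout).close()
        return True
    except OSError:
        return False

render_offline = not github_reachable()
```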
|
closed
|
2015-09-15T18:04:02Z
|
2015-09-15T18:20:49Z
|
https://github.com/joeyespo/grip/issues/140
|
[
"duplicate"
] |
brian-bk
| 1
|
koxudaxi/fastapi-code-generator
|
fastapi
| 131
|
Handle all request parameter types
|
Hi,
Thanks for this tool!
I'm trying to infer the request parameter type based on the "in" field in an operation's parameters.
It appears that only the Query parameter type is handled at the moment. Is this a known issue? I'm willing to help and open a PR if needed.
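For illustration, a minimal sketch of the dispatch I have in mind (the mapping and helper are assumptions about a possible fix, not the tool's internals):
```python
from fastapi import Cookie, Header, Path, Query

# Illustrative mapping from an OpenAPI parameter's "in" field to the
# corresponding FastAPI parameter type; currently only "query" is handled.
PARAM_TYPES = {
    "query": Query,
    "path": Path,
    "header": Header,
    "cookie": Cookie,
}

def param_type_for(parameter: dict):
    # Default to Query, mirroring the tool's current behavior.
    return PARAM_TYPES.get(parameter.get("in", "query"), Query)
```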
|
open
|
2021-04-02T07:59:19Z
|
2021-04-05T15:12:22Z
|
https://github.com/koxudaxi/fastapi-code-generator/issues/131
|
[] |
rambobinator
| 1
|
OpenBB-finance/OpenBB
|
machine-learning
| 6,707
|
[🕹️] Integrate DefiLlama's Public API with OpenBB Platform
|
# 📄 Task: (XXL: 1.500-3000 Points)
Integrate DefiLlama’s public API into the OpenBB Platform. Your task is to create a provider extension for DeFiLlama and implement the relevant commands in the `openbb-crypto` extension.
---
### 💡 Tips:
1. **Implementation**: Use the [DefiLlama API documentation](https://defillama.com/docs/api) to implement API calls for relevant data, such as DeFi metrics, protocols, and other valuable crypto data. No API key is required (a minimal fetch sketch follows this list).
1. **Commands**: Integrate the new provider into the `openbb-crypto` extension by adding the necessary commands to fetch and display data from DeFiLlama.
1. **Testing**: Ensure all commands and API calls work as expected and handle edge cases (e.g., rate limits, missing data).
1. **Examples**: You can refer to [extensions and providers that have already been implemented in the repository](https://github.com/OpenBB-finance/OpenBB/tree/develop).
- Refer to the [OpenBB architecture overview](https://docs.openbb.co/platform/developer_guide/architecture_overview) for insight into the platform’s structure.
- Refer to the [OpenBB documentation](https://docs.openbb.co/platform/getting_started/create_new_provider_extension) to create a new provider extension for Defillama.
- Follow the instructions in the [mapping guide](https://docs.openbb.co/platform/getting_started/map_a_provider_to_a_route) to map the provider to the appropriate route within `openbb-crypto`.
- Check other examples of how provider extensions are implemented in the repo to guide your work.
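A minimal fetch sketch against the public API (the endpoint comes from the linked docs; the function is illustrative and not an OpenBB provider implementation):
```python
import requests

# Illustrative sketch: fetch the protocol list from DefiLlama's public API.
# Endpoint documented at https://defillama.com/docs/api; no key required.
def fetch_protocols() -> list[dict]:
    response = requests.get("https://api.llama.fi/protocols", timeout=30)
    response.raise_for_status()
    return response.json()  # each entry carries tvl, chain, category, ...
```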
### 📬 Submission:
- Follow the submission instructions [here](https://github.com/OpenBB-finance/OpenBB/tree/develop/oss.gg).
- Open a Pull Request (PR) to the `develop` branch.
- Include a brief description of the integration and what commands were added in the PR body.
- Provide a working example of how to use the new provider extension (e.g., screenshots or video).
---
Happy hacking!
|
closed
|
2024-09-30T18:52:41Z
|
2024-11-02T07:42:58Z
|
https://github.com/OpenBB-finance/OpenBB/issues/6707
|
[
"🕹️ 1500 points"
] |
piiq
| 35
|
jina-ai/serve
|
machine-learning
| 5,400
|
Hot reload for API endpoints
|
**Describe the feature**
When editing code in a local development environment, something like an inotifywait watcher could either kill the existing executor or send it a message to hot-patch the server. This would make development much easier, and the latter method does not require reloading ML models, which can be a slow process.
**Your proposal**
Look at how other common development servers like FastAPI implement hot reloading, and devise a way to hot-patch or reload modules related to the jina RPC server with low overhead.
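A minimal sketch of the kill-and-restart variant using a generic file-watching library (watchfiles here; this illustrates the pattern, not jina's API, and `start_executor` is a placeholder):
```python
from watchfiles import run_process

def start_executor():
    # Placeholder: launch the jina executor / RPC server here.
    ...

if __name__ == "__main__":
    # Restart the target process whenever files under the source tree change.
    run_process("my_executor/", target=start_executor)
```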
**Environment**
Linux x64 latest.
**Screenshots**
N/A
|
closed
|
2022-11-16T14:41:39Z
|
2022-11-29T19:23:02Z
|
https://github.com/jina-ai/serve/issues/5400
|
[
"area/community"
] |
AmericanPresidentJimmyCarter
| 6
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
pytorch
| 1,345
|
customize folders
|
Hello,
I want to change the names of the trainA/trainB folders, for example to cars_red and cars_blue.
Which part of the code should I change?
It seems I should change something in unaligned_dataset, but I can only change 'A' and 'B', not opt.phase.
Is there any way to do this?
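For reference, the folder names come from how `data/unaligned_dataset.py` composes the paths; a sketch of the change (the original lines are paraphrased from the repo, and the helper below is just an illustration):
```python
import os

def dataset_dirs(dataroot, phase):
    # Default CycleGAN layout: <dataroot>/<phase>A and <dataroot>/<phase>B,
    # built in data/unaligned_dataset.py roughly as
    #   self.dir_A = os.path.join(opt.dataroot, opt.phase + 'A')
    #   self.dir_B = os.path.join(opt.dataroot, opt.phase + 'B')
    # Replacing the phase-based suffixes with explicit names:
    dir_a = os.path.join(dataroot, "cars_red")   # was: phase + "A"
    dir_b = os.path.join(dataroot, "cars_blue")  # was: phase + "B"
    return dir_a, dir_b
```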
|
open
|
2021-11-29T07:42:29Z
|
2021-12-02T20:37:17Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1345
|
[] |
seanko29
| 1
|
streamlit/streamlit
|
machine-learning
| 10,821
|
Allow changing `st.dataframe` row height from the UI
|
### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Streamlit 1.43 added support for programmatically setting the `row_height` in `st.dataframe` & `st.data_editor`. Also allow the user to change the row height interactively via the data grid UI.
### How?
Provide a toolbar action that allows configuring the row height from the UI. This could just be a few options like `short`, `medium`, `tall`, `extra tall`. For example, this is what Airtable provides in a menu:
<img width="148" alt="Image" src="https://github.com/user-attachments/assets/fc40099b-c722-41d4-afe2-a132f8fa01e6" />
This feature does not require any changes to the API.
### Additional Context
_No response_
|
open
|
2025-03-18T11:15:58Z
|
2025-03-18T11:16:57Z
|
https://github.com/streamlit/streamlit/issues/10821
|
[
"type:enhancement",
"feature:st.dataframe",
"feature:st.data_editor"
] |
lukasmasuch
| 1
|
qubvel-org/segmentation_models.pytorch
|
computer-vision
| 84
|
can't use efficientnet offline
|
I downloaded the latest segmentation-models-pytorch wheel from this link: https://pypi.org/project/segmentation-models-pytorch/#files
I also installed pretrainedmodels, this one: https://pypi.org/project/pretrainedmodels/
But I don't see EfficientNet there. Is there any way to use EfficientNet models offline with FPN, Unet, or Linknet?
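A hedged sketch (the EfficientNet encoders come from the efficientnet-pytorch package rather than pretrainedmodels; exact keyword names may vary across versions):
```python
import segmentation_models_pytorch as smp

# Building the model with encoder_weights=None skips any weight download,
# so it should work fully offline (training from scratch).
model = smp.Unet(
    encoder_name="efficientnet-b0",
    encoder_weights=None,
    classes=1,
)
```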
|
closed
|
2019-10-07T17:04:10Z
|
2019-10-07T21:22:07Z
|
https://github.com/qubvel-org/segmentation_models.pytorch/issues/84
|
[] |
mobassir94
| 3
|
sqlalchemy/alembic
|
sqlalchemy
| 761
|
One cannot add a column with a comment within batch operation context
|
**Describe the bug**
The batch operation `op.batch_alter_table(...).add_column()` method cannot add a column that comes with a comment.
Comment support is fairly new; maybe this is a "feature" due to some constraint I am not aware of?
**Expected behavior**
The column is added to the table and the appropriate comment is defined for the column.
<!-- A clear and concise description of what you expected to happen. -->
**To Reproduce**
I believe the following migration script, the one I found the issue with, stripped of anything unrelated to the bug, should provide the minimal amount of code to reproduce the bug:
```py
from alembic import op
import sqlalchemy as sa
revision = '792d2e012c4a'
down_revision = None # Obviously there exists other migrations that created the table in the first place
def upgrade():
with op.batch_alter_table('some_table', schema='some_schema') as batch_op:
batch_op.add_column(sa.Column('column_name', sa.Integer, nullable=True, comment="some comment"))
# More operations may come here
```
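A possible interim workaround sketch: PostgreSQL supports ALTER TABLE directly, so batch mode isn't required there, and the plain operation takes the comment fine (this sidesteps the batch path rather than fixing it):
```python
from alembic import op
import sqlalchemy as sa

def upgrade():
    # Workaround sketch: add the commented column outside batch mode, since
    # only BatchOperationsImpl lacks create_column_comment.
    op.add_column(
        'some_table',
        sa.Column('column_name', sa.Integer, nullable=True, comment="some comment"),
        schema='some_schema',
    )
```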
**Error**
```Traceback (most recent call last):
File "~/.virtualenvs/project/bin/alembic", line 8, in <module>
sys.exit(main())
File "~/.virtualenvs/project/lib/python3.7/site-packages/alembic/config.py", line 581, in main
CommandLine(prog=prog).main(argv=argv)
File "~/.virtualenvs/project/lib/python3.7/site-packages/alembic/config.py", line 575, in main
self.run_cmd(cfg, options)
File "~/.virtualenvs/project/lib/python3.7/site-packages/alembic/config.py", line 555, in run_cmd
**dict((k, getattr(options, k, None)) for k in kwarg)
File "~/.virtualenvs/project/lib/python3.7/site-packages/alembic/command.py", line 298, in upgrade
script.run_env()
File "~/.virtualenvs/project/lib/python3.7/site-packages/alembic/script/base.py", line 489, in run_env
util.load_python_file(self.dir, "env.py")
File "~/.virtualenvs/project/lib/python3.7/site-packages/alembic/util/pyfiles.py", line 98, in load_python_file
module = load_module_py(module_id, path)
File "~/.virtualenvs/project/lib/python3.7/site-packages/alembic/util/compat.py", line 184, in load_module_py
spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "src/dags/stats/migrations/env.py", line 145, in <module>
run_migrations_online()
File "src/dags/stats/migrations/env.py", line 138, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "~/.virtualenvs/project/lib/python3.7/site-packages/alembic/runtime/environment.py", line 846, in run_migrations
self.get_context().run_migrations(**kw)
File "~/.virtualenvs/project/lib/python3.7/site-packages/alembic/runtime/migration.py", line 522, in run_migrations
step.migration_fn(**kw)
File "src/dags/stats/migrations/versions/792d2e012c4a_migration_script_.py", line 51, in upgrade
batch_op.add_column(sa.Column('column_name', sa.Integer, nullable=True, comment="some comment"))
File "<string>", line 3, in add_column
File "~/.virtualenvs/jfdags/lib/python3.7/site-packages/alembic/operations/ops.py", line 1956, in batch_add_column
return operations.invoke(op)
File "~/.virtualenvs/jfdags/lib/python3.7/site-packages/alembic/operations/base.py", line 373, in invoke
return fn(self, operation)
File "~/.virtualenvs/jfdags/lib/python3.7/site-packages/alembic/operations/toimpl.py", line 146, in add_column
operations.impl.create_column_comment(column)
AttributeError: 'BatchOperationsImpl' object has no attribute 'create_column_comment'
```
**Versions.**
- OS: Ubuntu 2020.04
- Python: 3.7.9
- Alembic: 1.4.3
- SQLAlchemy: 1.3.20
- Database: Postgresql
- DBAPI: psycopg2
**Additional context**
<!-- Add any other context about the problem here. -->
**Have a nice day!**
You too!
|
closed
|
2020-11-27T16:38:16Z
|
2020-12-18T12:59:37Z
|
https://github.com/sqlalchemy/alembic/issues/761
|
[
"bug",
"batch migrations"
] |
cansjt
| 3
|
samuelcolvin/watchfiles
|
asyncio
| 166
|
Resource warning when RustNotify is deleted without calling `close()`
|
I agree with @graingert in #164 that we should raise a resource warning, just to be safe, if `RustNotify` isn't explicitly closed, even though it mostly wouldn't be necessary since `__del__` would clean up the thread.
However, at the moment pyo3 doesn't seem to support the `__del__` method, see https://github.com/PyO3/pyo3/issues/2479.
Once (if) pyo3 supports `__del__`, here is the code to add so I don't need to remember it:
```rs
pub fn __del__(&mut self, py: Python) -> PyResult<()> {
if matches!(self.watcher, WatcherEnum::None) {
Ok(())
} else {
self.close();
let resource_warning = py.import("warnings")?.getattr("ResourceWarning")?;
PyErr::warn(py, resource_warning, "RustNotify was not closed correctly", 1)
}
}
```
|
open
|
2022-06-25T11:19:58Z
|
2022-06-25T11:19:58Z
|
https://github.com/samuelcolvin/watchfiles/issues/166
|
[] |
samuelcolvin
| 0
|
deedy5/primp
|
web-scraping
| 74
|
set http1_title_case_headers to True because it's sending headers with both upper- and lowercase keys
|

|
open
|
2025-01-08T09:21:42Z
|
2025-02-10T06:29:37Z
|
https://github.com/deedy5/primp/issues/74
|
[
"question"
] |
luanon404
| 0
|
sktime/sktime
|
data-science
| 7,359
|
[ENH] graph neural networks for forecasting
|
Graph neural networks are a popular choice for forecasting hierarchically structured panels, graph structured panels, or structured variable settings, e.g., energy networks with time series observed at devices located at the vertices or edges of an abstract graph.
Mahdi Torabi Rad suggested these two packages:
* https://github.com/stellargraph/stellargraph - seems to have been quite popular, but has been inactive for three years
* https://github.com/TorchSpatiotemporal/tsl - still active
|
open
|
2024-11-04T09:18:02Z
|
2024-11-09T09:08:37Z
|
https://github.com/sktime/sktime/issues/7359
|
[
"feature request",
"implementing algorithms",
"interfacing algorithms",
"module:forecasting",
"enhancement"
] |
fkiraly
| 2
|
browser-use/browser-use
|
python
| 1,007
|
Unable to Retrieve Actual Input Value from Input Element
|
### Bug Description
I'm trying to get some information from a modem's web interface. This page has multiple sections for both 2.4 GHz and 5 GHz settings.
Under the 5 GHz section, there is an input element whose value I want to retrieve. However, when I attempt to fetch the input's actual value,
browser-use returns the adjacent default text (e.g., "Default: ATTkiQAcZb") instead of the current value in the input field.
I expect it to return the real input value that appears in the UI, but I'm getting the default label text instead.
Note: I only got the actual result in about 1 out of 10 attempts.
I am using the "Gemini 2.0 Flash Exp" LLM model.
### Reproduction Steps
1. Navigate to the modem's configuration page with the browser-use tool.
2. Locate the 5 GHz section, which contains various input fields (e.g., SSID, password, mode, etc.).
3. Attempt to retrieve the value from one of these input fields (for example, the SSID field).
4. Observe that instead of the input’s actual text, the tool provides the adjacent default label (e.g., "Default: ATTkiQAcZb" or similar).
### Code Sample
```python
import asyncio
import os
from dotenv import load_dotenv
from langchain_google_genai import ChatGoogleGenerativeAI
from pydantic import SecretStr
from browser_use import Agent
load_dotenv()
api_key = ''
open_ai = ''
if not api_key:
raise ValueError('GEMINI_API_KEY is not set')
llm = ChatGoogleGenerativeAI(model='gemini-2.0-flash-exp', api_key=SecretStr(api_key))
planner_llm = ChatGoogleGenerativeAI(model='gemini-2.0-flash-exp', api_key=SecretStr(api_key))
async def run_search():
agent = Agent(
task=(
"""
and Go to 'http://192.168.1.254/'
http://192.168.1.254/cgi-bin/remoteaccess.ha
and Click on 'Home Network'
and Click on 'Wi-Fi'
and Enter the 'Device Access Code' (&%1*3227?2) into the required input field
and Click on 'Continue'
and Click on 'Advanced Options'
and retrive value rigth next to 'Network Name (SSID)' text under '5 GHz Wi-Fi Configuration' section
"""
),
llm=llm,
max_actions_per_step=4,
planner_llm=planner_llm,
use_vision_for_planner=False,
use_vision=False,
planner_interval=4,
save_conversation_path='logs/conversation',
)
await agent.run(max_steps=25)
if __name__ == '__main__':
asyncio.run(run_search())
```
### Version
0.1.40
### LLM Model
Other (specify in description)
### Operating System
Windows 10
### Relevant Log Output
```shell
INFO [browser_use] BrowserUse logging setup complete with level info
INFO [root] Anonymized telemetry enabled. See https://docs.browser-use.com/development/telemetry for more information.
C:\Users\fetullah.turkeli\AppData\Local\Programs\Python\Python313\Lib\site-packages\browser_use\agent\message_manager\views.py:59: LangChainBetaWarning: The function `load` is in beta. It is actively being worked on, so the API may change.
value['message'] = load(value['message'])
INFO [agent] Saving conversation to logs/conversation
INFO [agent] 🚀 Starting task:
and Go to 'http://192.168.1.254/'
http://192.168.1.254/cgi-bin/remoteaccess.ha
and Click on 'Home Network'
and Click on 'Wi-Fi'
and Enter the 'Device Access Code' (&%1*3227?2) into the required input field
and Click on 'Continue'
and Click on 'Advanced Options'
and retrive value rigth next to 'Network Name (SSID)' text under '5 GHz Wi-Fi Configuration' section
INFO [agent] 📍 Step 1
INFO [agent] 🤷 Eval: Unknown - I have not yet navigated to the initial URL.
INFO [agent] 🧠 Memory: Starting the task. Need to go to 'http://192.168.1.254/'. 0 out of 7 steps completed.
INFO [agent] 🎯 Next goal: Navigate to the specified URL.
INFO [agent] 🛠️ Action 1/1: {"go_to_url":{"url":"http://192.168.1.254/"}}
INFO [controller] 🔗 Navigated to http://192.168.1.254/
INFO [agent] 📍 Step 2
INFO [agent] 👍 Eval: Success - Navigated to the initial URL.
INFO [agent] 🧠 Memory: Need to click on 'Home Network'. 1 out of 7 steps completed.
INFO [agent] 🎯 Next goal: Click on 'Home Network'
INFO [agent] 🛠️ Action 1/1: {"click_element":{"index":4}}
WARNING [controller] Element not clickable with index 4 - most likely the page changed
INFO [agent] 📍 Step 3
INFO [agent] ⚠ Eval: Failed - Could not click on Home Network. The page changed after the navigation, so the element is not available anymore.
INFO [agent] 🧠 Memory: Need to click on Home Network, but it failed. Trying to navigate directly to the remoteaccess.ha page. 1 out of 7 steps completed.
INFO [agent] 🎯 Next goal: Navigate to the remoteaccess.ha page.
INFO [agent] 🛠️ Action 1/1: {"go_to_url":{"url":"http://192.168.1.254/cgi-bin/remoteaccess.ha" }}
INFO [controller] 🔗 Navigated to http://192.168.1.254/cgi-bin/remoteaccess.ha
INFO [agent] 📍 Step 4
INFO [agent] Planning Analysis:
{
"state_analysis": "The agent has successfully navigated to the remoteaccess.ha page, which requires a device access code. The next step is to input the provided access code into the input field.",
"progress_evaluation": "28% - The agent has navigated to the login page and is ready to input the access code. 2 out of 7 steps are completed.",
"challenges": "Successfully inputting the Device Access Code is crucial. Incorrect input will block further progress.",
"next_steps": [
"Enter the device access code '&%1*3227?2' into the input field.",
"Click the 'Continue' button."
],
"reasoning": "These steps directly address the current requirement of entering the access code and proceeding to the next stage of accessing the router's configuration."
}
INFO [agent] 👍 Eval: Success - Navigated to the remoteaccess.ha page.
INFO [agent] 🧠 Memory: Need to enter the device access code and click continue. 2 out of 7 steps completed.
INFO [agent] 🎯 Next goal: Enter the access code and click continue.
INFO [agent] 🛠️ Action 1/2: {"input_text":{"index":1,"text":"&%1*3227?2"}}
INFO [agent] 🛠️ Action 2/2: {"click_element":{"index":3}}
INFO [controller] ⌨️ Input &%1*3227?2 into index 1
INFO [controller] 🖱️ Clicked button with index 3:
INFO [agent] 📍 Step 5
INFO [agent] 👍 Eval: Success - Entered the device access code and clicked continue.
INFO [agent] 🧠 Memory: Need to click on 'Home Network'. 3 out of 7 steps completed.
INFO [agent] 🎯 Next goal: Click on 'Home Network'.
INFO [agent] 🛠️ Action 1/1: {"click_element":{"index":4}}
WARNING [controller] Element not clickable with index 4 - most likely the page changed
INFO [agent] 📍 Step 6
INFO [agent] ⚠ Eval: Failed - Could not click on Home Network. The page changed. Now trying to click on Wi-Fi.
INFO [agent] 🧠 Memory: Need to click on Wi-Fi. 4 out of 7 steps completed.
INFO [agent] 🎯 Next goal: Click on Wi-Fi.
INFO [agent] 🛠️ Action 1/1: {"click_element":{"index":11}}
INFO [controller] 🖱️ Clicked button with index 11: Wi-Fi
INFO [agent] 📍 Step 7
INFO [agent] 👍 Eval: Success - Clicked on Wi-Fi.
INFO [agent] 🧠 Memory: Need to click on Advanced Options. 5 out of 7 steps completed.
INFO [agent] 🎯 Next goal: Click on Advanced Options.
INFO [agent] 🛠️ Action 1/1: {"click_element":{"index":15}}
INFO [controller] 🖱️ Clicked button with index 15: Advanced Options
INFO [agent] 📍 Step 8
INFO [agent] Planning Analysis:
{
"state_analysis": "The agent has successfully navigated to the Advanced Options page under Wi-Fi settings. The next step is to retrieve the value next to 'Network Name (SSID)' under the '5 GHz Wi-Fi Configuration' section.",
"progress_evaluation": "71% - The agent is on the final step of retrieving the desired information. 5 out of 7 steps are completed.",
"challenges": "Accurately locating and extracting the text value associated with the 'Network Name (SSID)' field under the correct section is crucial.",
"next_steps": [
"Locate the 'Network Name (SSID)' field under the '5 GHz Wi-Fi Configuration' section.",
"Extract the text value present in the input field next to it."
],
"reasoning": "These steps directly address the final requirement of extracting the SSID for the 5 GHz Wi-Fi network."
}
INFO [agent] 👍 Eval: Success - Clicked on Advanced Options.
INFO [agent] 🧠 Memory: Need to extract the value next to 'Network Name (SSID)' under '5 GHz Wi-Fi Configuration' section. 6 out of 7 steps completed.
INFO [agent] 🎯 Next goal: Extract the value.
INFO [agent] 🛠️ Action 1/1: {"extract_content":{"goal":"extract the value next to 'Network Nam e (SSID)' under '5 GHz Wi-Fi Configuration' section"}}
WARNING [langchain_google_genai.chat_models] Retrying langchain_google_genai.chat_models._chat_with_retry.<locals>._chat_with_retry in 2.0 seconds as it raised ResourceExhausted: 429 Resource has been exhausted (e.g. check quota)..
INFO [controller] 📄 Extracted from page
:
Wi-Fi

[Skip to Main Content](#content-sub)
[Device](/cgi-bin/home.ha)
[Broadband](/cgi-bin/broadbandstatistics.ha)
[Home Network](/cgi-bin/lanstatistics.ha)
[Voice](/cgi-bin/voice.ha)
[Firewall](/cgi-bin/firewall.ha)
[Diagnostics](/cgi-bin/diag.ha)
[Status](/cgi-bin/lanstatistics.ha)
[Configure](/cgi-bin/etherlan.ha)
[IPv6](/cgi-bin/ip6lan.ha)
[Wi-Fi](/cgi-bin/wconfig_unified.ha)
[MAC Filtering](/cgi-bin/wmacauth.ha)
[Subnets & DHCP](/cgi-bin/dhcpserver.ha)
[IP Allocation](/cgi-bin/ipalloc.ha)
Wi-Fi
=====
Making a change to some pulldowns on this page will automatically change the context below it, enabling you to fill only the appropriate fields for the change you have made.
You have disabled JavaScript(TM) in your browser. When you change an item that has an Update button to the right of it, make your change, then click the Update button. This will transform the page according to the change you have made and you may then proceed.
To configure only basic Wi-Fi options with shared Wi-Fi settings, click on the "Basic Options" link below.
[Basic Options](/cgi-bin/wconfig_unified.ha)
| Hide or Show Passwords | |
| --- | --- |
2.4 GHz Wi-Fi Configuration
===========================
| Wi-Fi Operation*Default: On* | Off On |
| --- | --- |
| Mode*Default: G/N* | G-only B/G B/G/N B/G/N/AX G/N G/N/AX N/AX N-only |
| Bandwidth*Default: 20MHz* | 20MHz 40MHz |
| Channel*Default: Automatic* | 1 2 3 4 5 6 7 8 9 10 11 Automatic |
| Channel Scan | |
| Power Level (in %) *Default: 100* | |
Home SSID
---------
| Home SSID Enable*Default: On* | Off On |
| --- | --- |
| Network Name (SSID)*Default: ATTkiQAcZb* | |
| Hide Network Name*Default: Off* | Off On |
| Security*Default: WPA-Default Password* | OFF - No Privacy WPA WPA - Default Password |
| | |
| WPA Version*Default: WPA-2* | WPA-1 and WPA-2 WPA-2 WPA-2 and WPA-3 |
| Password*Default: hzk3t#9+e9iv* | |
| |
| | |
| Wi-Fi Protected Setup*Default: On* | Off On |
| Maximum Clients*Default: 80* | |
| Scan to Join SSID | Main 2.4ghz ssid qrcode |
Guest SSID
----------
| Guest SSID Enable*Default: Off* | Off On |
| --- | --- |
| Network Access*Default: Internet Only* | Internet Only Internet & Home LAN |
| Guest SSID Subnet*Default: 192.168.2.0/24* | 192.168..0/24 |
| Guest Network Name*Default: ATTkiQAcZb\_Guest* | |
| Hide Network Name*Default: Off* | Off On |
| Security*Default: WPA* | OFF - No Privacy WPA |
| | |
| WPA Version*Default: WPA-2* | WPA-1 and WPA-2 WPA-2 WPA-2 and WPA-3 |
| Password | |
| Maximum Clients*Default: 10* | |
5 GHz Wi-Fi Configuration
=========================
| Wi-Fi Operation*Default: On* | Off On |
| --- | --- |
| Mode*Default: N/AC/AX* | AC/N N/AC/AX N-only |
| Bandwidth*Default: 80MHz* | 160MHz 20MHz 40MHz 80MHz |
| Channel Scan | |
| Power Level (in %) *Default: 100* | |
Home SSID
---------
| Home SSID Enable*Default: On* | On |
| --- | --- |
| Network Name (SSID)*Default: ATTkiQAcZb* | |
| Hide Network Name*Default: Off* | Off On |
| Security*Default: WPA-Default Password* | OFF - No Privacy WPA WPA - Default Password |
| WPA Version*Default: WPA-2* | WPA-2 WPA-2 and WPA-3 |
| Password*Default: hzk3t#9+e9iv* | |
| |
| | |
| Wi-Fi Protected Setup*Default: On* | Off On |
| Maximum Clients*Default: 80* | |
| Scan to Join SSID | Main 5ghz ssid qrcode |
WPS Virtual Pushbutton
----------------------
Click the pushbutton below, associated with the Home SSID, then follow the Wi-Fi client instructions.
WPS PIN
-------
Enter the Wi-Fi Client's all digit PIN, Submit, then follow the Wi-Fi client instructions.
WPS PIN
e.g., 12345670
The buttons below apply any Wi-Fi changes made above. Clients currently connected on each radio may be affected.
Help
----
**Basic Options:** Basic Wi-Fi options with share settings, can be configured using "Basic Options" link.
**Hide or Show Passwords:** This button allows you to hide or display the Wi-Fi passwords on this and other pages. This setting also controls the display or hiding of the Device Access Code.
**Wi-Fi Band Selection:** Your device is designed to operate on two Wi-Fi frequency bands (2.4 GHz and 5 GHz). You should configure parameters for each band, under each separate section, and then select Save to apply the changes. The 5 GHz band provides improved speed, but not all Wi-Fi clients support this band. If a client doesn't support it, it cannot see the 5 GHz Network SSID.
**Wi-Fi Operation:** This is enabled by default. Turning it off will disable all Wi-Fi services for the associated band.
**Mode:** This option allows you to restrict the device to respond only to Wi-Fi LAN devices using the specified protocols. Do not change this setting unless you fully understand the implications of having your device ignore a given class of clients.
**Bandwidth:** The higher the bandwidth, the faster the Wi-Fi speed. Do not change this setting unless you fully understand all the Wi-Fi settings and how they affect each other.
**Channel:** The device can transfer data on various channels. If a nearby Wi-Fi network is transmitting on the same channel, this interferes with data transfer. If you experience speed problems on your Wi-Fi network, test whether a particular choice of channel improves the data transfer. Choosing Automatic causes the device to select the best operating channel for its environment.
**Power Level:** The device can operate at a reduced power level to cover a smaller area. For instance, in a densely populated setting, you might reduce the transmit power to reduce interference with other Wi-Fi transmitters.
**Note:** Power configured will be rounded off to nearest multiple of 10.
**Home/Guest SSID:** A Wi-Fi client connected to a Home SSID may be able to communicate with other devices on that SSID and elsewhere on the LAN, as well as connect to the Internet. A Wi-Fi client connected to a Guest SSID can connect to the Internet, but whether or not it can communicate with other devices depends on the Network Access setting.
**SSID Enable:** The Home and Guest SSIDs can be enabled or disabled independently.
**Network Access:** When set to 'Internet Only', Wi-Fi clients on the Guest SSID will have Internet access, but will not be able to contact any other LAN hosts, including the BGW320-505 device. When set to 'Internet & Home LAN', the Guest SSID behaves in a similar fashion to the Home SSID.
**Guest SSID Subnet:** If Network Access is Internet Only (isolated), the Guest SSID Subnet indicates which subnet will be used for handing out DHCP leases to Wi-Fi clients connecting to this SSID. The default is to use the existing LAN subnet, but a separate subnet just for Guest clients can be used.
**Network Name (SSID):** When a Wi-Fi client searches for available networks choosing this name will mean choosing this device for the Wi-Fi access point.
**Hide Network Name (SSID):** When on, the device will hide its Network Name (SSID) from clients scanning for Wi-Fi networks. To connect to your Wi-Fi network, they must first know
the Network Name.
**Security:** Choose security to prevent outside access to your network. Your device and each client must be using the same password.
* **OFF - No Privacy:** This mode allows any Wi-Fi users to connect to your Wi-Fi network.
* **WPA:** This mechanism provides the best data protection and access control. The password (security key) can be 8-63 characters. At least 20 characters are recommended for best security. To enter a hex WPA password, enter a 64 character hex password preceded with 0x.
* **WPA - Default Password:** The WPA mechanism uses the factory installed default password unique to this device.
**WPA Version:** This field allows you to select the WPA version(s) that will be required for client connections. Select 'WPA-2 and WPA-3' for maximum interoperability.
**Wi-Fi Protected Setup:** This device supports the use of Wi-Fi Protected Setup (WPS). WPS provides a convenient way to add advanced secure Wi-Fi connections for multiple Wi-Fi certified WPS devices to your network. Older clients can continue to connect using the original security setting. WPS clients can connect and learn the security setting using the hardware pushbutton or PIN. The client machine(s) to be added should be powered on and their Wi-Fi interfaces operational.
**Maximum Clients:** Each enabled SSID has a maximum number of clients it will allow to connect concurrently. The total maximum of clients for the 2.4 GHz band is 90. The 5 GHz band supports up to 91 total clients.
**WPS Virtual Pushbutton:** Click the Pushbutton for the Home SSID, then follow any instructions that came with your Wi-Fi client.
**WPS PIN:** Enter the all digit PIN of your Wi-Fi client, click the Submit button for the Home SSID, then follow any instructions that came with your Wi-Fi client.
**Scan to Join SSID:** With the camera app of a Wi-Fi client device, scan the QR code to join the SSID.
INFO [agent] 📍 Step 9
WARNING [langchain_google_genai.chat_models] Retrying langchain_google_genai.chat_models._achat_with_retry.<locals>._achat_with_retry in 2.0 seconds as it raised ResourceExhausted: 429 Resource has been exhausted (e.g. check quota)..
WARNING [agent] ❌ Result failed 1/3 times:
429 Resource has been exhausted (e.g. check quota).
INFO [agent] 📍 Step 9
WARNING [langchain_google_genai.chat_models] Retrying langchain_google_genai.chat_models._achat_with_retry.<locals>._achat_with_retry in 2.0 seconds as it raised ResourceExhausted: 429 Resource has been exhausted (e.g. check quota)..
WARNING [agent] ❌ Result failed 2/3 times:
429 Resource has been exhausted (e.g. check quota).
INFO [agent] 📍 Step 9
INFO [agent] ⚠ Eval: Failed - Could not extract content. The API quota was exhausted. Trying to scroll down to see if that helps.
INFO [agent] 🧠 Memory: Need to extract the value next to 'Network Name (SSID)' under '5 GHz Wi-Fi Configuration' section. 6 out of 7 steps completed. Trying to scroll down.
INFO [agent] 🎯 Next goal: Scroll down to see if that helps with content extraction.
INFO [agent] 🛠️ Action 1/1: {"scroll_down":{"amount":1000}}
INFO [controller] 🔍 Scrolled down the page by 1000 pixels
INFO [agent] 📍 Step 10
INFO [agent] ⚠ Eval: Failed - Could not extract content. The API quota was exhausted. Scrolled down, but still could not extract. Trying again.
INFO [agent] 🧠 Memory: Need to extract the value next to 'Network Name (SSID)' under '5 GHz Wi-Fi Configuration' section. 6 out of 7 steps completed. Trying again.
INFO [agent] 🎯 Next goal: Extract the value.
INFO [agent] 🛠️ Action 1/1: {"extract_content":{"goal":"extract the value next to 'Network Nam e (SSID)' under '5 GHz Wi-Fi Configuration' section"}}
WARNING [langchain_google_genai.chat_models] Retrying langchain_google_genai.chat_models._chat_with_retry.<locals>._chat_with_retry in 2.0 seconds as it raised ResourceExhausted: 429 Resource has been exhausted (e.g. check quota)..
INFO [controller] 📄 Extracted from page
:
Wi-Fi

[Skip to Main Content](#content-sub)
[Device](/cgi-bin/home.ha)
[Broadband](/cgi-bin/broadbandstatistics.ha)
[Home Network](/cgi-bin/lanstatistics.ha)
[Voice](/cgi-bin/voice.ha)
[Firewall](/cgi-bin/firewall.ha)
[Diagnostics](/cgi-bin/diag.ha)
[Status](/cgi-bin/lanstatistics.ha)
[Configure](/cgi-bin/etherlan.ha)
[IPv6](/cgi-bin/ip6lan.ha)
[Wi-Fi](/cgi-bin/wconfig_unified.ha)
[MAC Filtering](/cgi-bin/wmacauth.ha)
[Subnets & DHCP](/cgi-bin/dhcpserver.ha)
[IP Allocation](/cgi-bin/ipalloc.ha)
Wi-Fi
=====
Making a change to some pulldowns on this page will automatically change the context below it, enabling you to fill only the appropriate fields for the change you have made.
You have disabled JavaScript(TM) in your browser. When you change an item that has an Update button to the right of it, make your change, then click the Update button. This will transform the page according to the change you have made and you may then proceed.
To configure only basic Wi-Fi options with shared Wi-Fi settings, click on the "Basic Options" link below.
[Basic Options](/cgi-bin/wconfig_unified.ha)
| Hide or Show Passwords | |
| --- | --- |
2.4 GHz Wi-Fi Configuration
===========================
| Wi-Fi Operation*Default: On* | Off On |
| --- | --- |
| Mode*Default: G/N* | G-only B/G B/G/N B/G/N/AX G/N G/N/AX N/AX N-only |
| Bandwidth*Default: 20MHz* | 20MHz 40MHz |
| Channel*Default: Automatic* | 1 2 3 4 5 6 7 8 9 10 11 Automatic |
| Channel Scan | |
| Power Level (in %) *Default: 100* | |
Home SSID
---------
| Home SSID Enable*Default: On* | Off On |
| --- | --- |
| Network Name (SSID)*Default: ATTkiQAcZb* | |
| Hide Network Name*Default: Off* | Off On |
| Security*Default: WPA-Default Password* | OFF - No Privacy WPA WPA - Default Password |
| | |
| WPA Version*Default: WPA-2* | WPA-1 and WPA-2 WPA-2 WPA-2 and WPA-3 |
| Password*Default: hzk3t#9+e9iv* | |
| |
| | |
| Wi-Fi Protected Setup*Default: On* | Off On |
| Maximum Clients*Default: 80* | |
| Scan to Join SSID | Main 2.4ghz ssid qrcode |
Guest SSID
----------
| Guest SSID Enable*Default: Off* | Off On |
| --- | --- |
| Network Access*Default: Internet Only* | Internet Only Internet & Home LAN |
| Guest SSID Subnet*Default: 192.168.2.0/24* | 192.168..0/24 |
| Guest Network Name*Default: ATTkiQAcZb\_Guest* | |
| Hide Network Name*Default: Off* | Off On |
| Security*Default: WPA* | OFF - No Privacy WPA |
| | |
| WPA Version*Default: WPA-2* | WPA-1 and WPA-2 WPA-2 WPA-2 and WPA-3 |
| Password | |
| Maximum Clients*Default: 10* | |
5 GHz Wi-Fi Configuration
=========================
| Wi-Fi Operation*Default: On* | Off On |
| --- | --- |
| Mode*Default: N/AC/AX* | AC/N N/AC/AX N-only |
| Bandwidth*Default: 80MHz* | 160MHz 20MHz 40MHz 80MHz |
| Channel Scan | |
| Power Level (in %) *Default: 100* | |
Home SSID
---------
| Home SSID Enable*Default: On* | On |
| --- | --- |
| Network Name (SSID)*Default: ATTkiQAcZb* | |
| Hide Network Name*Default: Off* | Off On |
| Security*Default: WPA-Default Password* | OFF - No Privacy WPA WPA - Default Password |
| WPA Version*Default: WPA-2* | WPA-2 WPA-2 and WPA-3 |
| Password*Default: hzk3t#9+e9iv* | |
| |
| | |
| Wi-Fi Protected Setup*Default: On* | Off On |
| Maximum Clients*Default: 80* | |
| Scan to Join SSID | Main 5ghz ssid qrcode |
WPS Virtual Pushbutton
----------------------
Click the pushbutton below, associated with the Home SSID, then follow the Wi-Fi client instructions.
WPS PIN
-------
Enter the Wi-Fi Client's all digit PIN, Submit, then follow the Wi-Fi client instructions.
WPS PIN
e.g., 12345670
The buttons below apply any Wi-Fi changes made above. Clients currently connected on each radio may be affected.
Help
----
**Basic Options:** Basic Wi-Fi options with share settings, can be configured using "Basic Options" link.
**Hide or Show Passwords:** This button allows you to hide or display the Wi-Fi passwords on this and other pages. This setting also controls the display or hiding of the Device Access Code.
**Wi-Fi Band Selection:** Your device is designed to operate on two Wi-Fi frequency bands (2.4 GHz and 5 GHz). You should configure parameters for each band, under each separate section, and then select Save to apply the changes. The 5 GHz band provides improved speed, but not all Wi-Fi clients support this band. If a client doesn't support it, it cannot see the 5 GHz Network SSID.
**Wi-Fi Operation:** This is enabled by default. Turning it off will disable all Wi-Fi services for the associated band.
**Mode:** This option allows you to restrict the device to respond only to Wi-Fi LAN devices using the specified protocols. Do not change this setting unless you fully understand the implications of having your device ignore a given class of clients.
**Bandwidth:** The higher the bandwidth, the faster the Wi-Fi speed. Do not change this setting unless you fully understand all the Wi-Fi settings and how they affect each other.
**Channel:** The device can transfer data on various channels. If a nearby Wi-Fi network is transmitting on the same channel, this interferes with data transfer. If you experience speed problems on your Wi-Fi network, test whether a particular choice of channel improves the data transfer. Choosing Automatic causes the device to select the best operating channel for its environment.
**Power Level:** The device can operate at a reduced power level to cover a smaller area. For instance, in a densely populated setting, you might reduce the transmit power to reduce interference with other Wi-Fi transmitters.
**Note:** Power configured will be rounded off to nearest multiple of 10.
**Home/Guest SSID:** A Wi-Fi client connected to a Home SSID may be able to communicate with other devices on that SSID and elsewhere on the LAN, as well as connect to the Internet. A Wi-Fi client connected to a Guest SSID can connect to the Internet, but whether or not it can communicate with other devices depends on the Network Access setting.
**SSID Enable:** The Home and Guest SSIDs can be enabled or disabled independently.
**Network Access:** When set to 'Internet Only', Wi-Fi clients on the Guest SSID will have Internet access, but will not be able to contact any other LAN hosts, including the BGW320-505 device. When set to 'Internet & Home LAN', the Guest SSID behaves in a similar fashion to the Home SSID.
**Guest SSID Subnet:** If Network Access is Internet Only (isolated), the Guest SSID Subnet indicates which subnet will be used for handing out DHCP leases to Wi-Fi clients connecting to this SSID. The default is to use the existing LAN subnet, but a separate subnet just for Guest clients can be used.
**Network Name (SSID):** When a Wi-Fi client searches for available networks choosing this name will mean choosing this device for the Wi-Fi access point.
**Hide Network Name (SSID):** When on, the device will hide its Network Name (SSID) from clients scanning for Wi-Fi networks. To connect to your Wi-Fi network, they must first know
the Network Name.
**Security:** Choose security to prevent outside access to your network. Your device and each client must be using the same password.
* **OFF - No Privacy:** This mode allows any Wi-Fi users to connect to your Wi-Fi network.
* **WPA:** This mechanism provides the best data protection and access control. The password (security key) can be 8-63 characters. At least 20 characters are recommended for best security. To enter a hex WPA password, enter a 64 character hex password preceded with 0x.
* **WPA - Default Password:** The WPA mechanism uses the factory installed default password unique to this device.
**WPA Version:** This field allows you to select the WPA version(s) that will be required for client connections. Select 'WPA-2 and WPA-3' for maximum interoperability.
**Wi-Fi Protected Setup:** This device supports the use of Wi-Fi Protected Setup (WPS). WPS provides a convenient way to add advanced secure Wi-Fi connections for multiple Wi-Fi certified WPS devices to your network. Older clients can continue to connect using the original security setting. WPS clients can connect and learn the security setting using the hardware pushbutton or PIN. The client machine(s) to be added should be powered on and their Wi-Fi interfaces operational.
**Maximum Clients:** Each enabled SSID has a maximum number of clients it will allow to connect concurrently. The total maximum of clients for the 2.4 GHz band is 90. The 5 GHz band supports up to 91 total clients.
**WPS Virtual Pushbutton:** Click the Pushbutton for the Home SSID, then follow any instructions that came with your Wi-Fi client.
**WPS PIN:** Enter the all digit PIN of your Wi-Fi client, click the Submit button for the Home SSID, then follow any instructions that came with your Wi-Fi client.
**Scan to Join SSID:** With the camera app of a Wi-Fi client device, scan the QR code to join the SSID.
INFO [agent] 📍 Step 11
INFO [agent] ⚠ Eval: Failed - Could not extract content due to API quota. But found the value by analyzing the page source.
INFO [agent] 🧠 Memory: Need to extract the value next to 'Network Name (SSID)' under '5 GHz Wi-Fi Configuration' section. Found the value by analyzing the page source. Task completed.
INFO [agent] 🎯 Next goal: Complete the task.
INFO [agent] 🛠️ Action 1/1: {"done":{"text":"The Network Name (SSID) under the 5 GHz Wi-Fi Con figuration is ATTkiQAcZb","success":true}}
INFO [agent] 📄 Result: The Network Name (SSID) under the 5 GHz Wi-Fi Configuration is ATTkiQAcZb
INFO [agent] ✅ Task completed
INFO [agent] ✅ Successfully
```
|
open
|
2025-03-12T12:03:10Z
|
2025-03-12T12:09:13Z
|
https://github.com/browser-use/browser-use/issues/1007
|
[
"bug"
] |
burakyildirimm
| 1
|
microsoft/qlib
|
machine-learning
| 994
|
an error in qlib/data/dataset/handler.py at line 518
|
```python
if self.process_type == DataHandlerLP.PTYPE_I:
    _learn_df = _shared_df
```
|
closed
|
2022-03-18T13:14:47Z
|
2022-07-15T12:02:31Z
|
https://github.com/microsoft/qlib/issues/994
|
[
"question",
"stale"
] |
ning213
| 5
|
sktime/sktime
|
data-science
| 7,070
|
[BUG] `_make_hierarchical` with `datetime` index as time has no frequency information
|
**Describe the bug**
The dataframe returned by `_make_hierarchical` has no frequency information, despite it being set by `_make_index`.
**To Reproduce**
```python
y_train = _make_hierarchical(
hierarchy_levels=(2, 4),
n_columns=2,
min_timepoints=22,
max_timepoints=22,
index_type="datetime",
)
y_train.index.get_level_values(-1)
```
**Expected behavior**
Frequency should be set.
**Additional context**
Probably related to the following pandas bug:
* https://github.com/pandas-dev/pandas/issues/58743
Not sure if we can build a workaround...
If not possible, feel free to close it.
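Continuing from the repro above, a small diagnostic sketch (whether re-attaching an inferred freq survives `get_level_values` depends on the upstream pandas bug):
```python
import pandas as pd

# Continuing from the repro above: the freq attribute is gone, but the
# timestamps are regular, so the frequency is still recoverable by inference.
times = y_train.index.get_level_values(-1).unique()
print(times.freq)            # None, despite _make_index setting a freq
print(pd.infer_freq(times))  # e.g. 'D' -- the spacing itself is intact
```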
|
open
|
2024-09-01T18:41:57Z
|
2024-09-09T20:14:15Z
|
https://github.com/sktime/sktime/issues/7070
|
[
"bug",
"module:tests",
"module:datasets&loaders"
] |
benHeid
| 3
|
streamlit/streamlit
|
data-visualization
| 10,872
|
Infer page title and favicon from `st.title` instead of requiring `st.set_page_config`
|
### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Today, to change the page title/favicon shown in the browser tab, you need to call `st.set_page_config`. What if we instead inferred that title and favicon from the first call to `st.title`, but still let the dev overwrite it with `st.set_page_config` if they want to have a different title in the app and in the browser tab.
### Why?
- It's very annoying to always need to call `st.set_page_config`. Especially since in most cases you want the same title as shown in the app.
- If devs don't call `st.set_page_config`, the browser tab will just have a default title, which doesn't look great.
### How?
- Set the page title to the first use of `st.title` on the page.
- Ideally, we can also extract an icon from the `st.title` call, e.g. if it's at the beginning or end of the string.
- This should also work if you show the title with markdown, e.g. `st.markdown("# The title")`.
- I think we should not use `st.sidebar.title` for this, since it might be something completely unrelated. But might be worth looking at a few apps.
- If `st.set_page_config` is set, it should always overwrite the title inferred from `st.title`.
### Additional Context
_No response_
|
open
|
2025-03-21T19:39:38Z
|
2025-03-21T19:40:12Z
|
https://github.com/streamlit/streamlit/issues/10872
|
[
"type:enhancement",
"feature:st.set_page_config",
"feature:st.title"
] |
jrieke
| 1
|
pyg-team/pytorch_geometric
|
deep-learning
| 8,902
|
IndexError: Found indices in 'edge_index' that are larger than 0 (got X). Please ensure that all indices in 'edge_index' point to valid indices in the interval [0, 1)
|
### 🐛 Describe the bug
My `GCNConv` model looks as follows:
```Python
class GCN(nn.Module):
def __init__(self, vocab_size, input_channels, hidden_channels, out_channels, dropout_rate=0.5):
super(GCN, self).__init__()
self.embedding = nn.Embedding(vocab_size, input_channels)
self.gnn1 = GCNConv(input_channels, hidden_channels, normalize=False)
self.gnn2 = GCNConv(hidden_channels, out_channels, normalize=False)
self.dropout = nn.Dropout(p=dropout_rate)
self.relu = nn.ReLU()
self.pooling = nn.AdaptiveAvgPool2d(out_channels)
def forward(self, x, edge_index, edge_weight):
x = self.embedding(x)
x = self.gnn1(x, edge_index, edge_weight)
x = self.relu(x)
x = self.dropout(x)
x = self.gnn2(x, edge_index, edge_weight)
x = self.relu(x)
x = self.dropout(x)
x = self.pooling(x)
return x
```
My `Data` object looks like this:
`Data(x=[146, 1], edge_index=[2, 172], edge_type=[172], edge_weights=[172])`
with my `edge_index` below:
```Python
tensor([[145, 145, 145, 145, 145, 145, 86, 118, 124, 37, 20, 38, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20,
20, 20, 20, 37, 37, 37, 38, 38, 38, 38, 38, 38, 86, 86,
86, 86, 86, 86, 86, 86, 86, 86, 86, 86, 86, 86, 86, 118,
118, 124, 124, 124, 124, 124, 124, 124, 124, 124, 124, 124, 124, 124,
124, 124, 124, 124, 124, 124, 124, 124, 0, 0, 0, 5, 10, 10,
13, 20, 23, 23, 27, 30, 40, 40, 41, 41, 42, 45, 46, 46,
47, 47, 48, 48, 49, 51, 55, 55, 56, 56, 57, 57, 58, 60,
61, 62, 67, 71, 71, 71, 77, 78, 83, 84, 86, 90, 93, 97,
98, 98, 99, 101, 102, 104, 105, 106, 107, 110, 111, 112, 113, 113,
114, 116, 121, 121, 123, 125, 127, 130, 130, 132, 132, 133, 134, 136,
137, 139, 140, 142],
[ 86, 118, 124, 37, 20, 38, 145, 145, 145, 145, 145, 145, 0, 10,
13, 20, 40, 41, 42, 45, 46, 47, 48, 55, 56, 57, 67, 71,
90, 97, 98, 101, 106, 107, 110, 113, 114, 116, 121, 132, 133, 134,
137, 139, 142, 40, 46, 48, 5, 57, 58, 62, 93, 102, 0, 10,
23, 30, 49, 60, 61, 71, 78, 83, 84, 86, 98, 99, 130, 111,
112, 0, 23, 27, 41, 47, 51, 55, 56, 71, 77, 104, 105, 113,
121, 123, 125, 127, 130, 132, 136, 140, 20, 86, 124, 38, 20, 86,
20, 20, 86, 124, 124, 86, 20, 37, 20, 124, 20, 20, 20, 37,
20, 124, 20, 37, 86, 124, 20, 124, 20, 124, 20, 38, 38, 86,
86, 38, 20, 20, 86, 124, 124, 86, 86, 86, 86, 20, 38, 20,
20, 86, 86, 20, 38, 124, 124, 20, 20, 20, 118, 118, 20, 124,
20, 20, 20, 124, 124, 124, 124, 86, 124, 20, 124, 20, 20, 124,
20, 20, 124, 20]])
```
When running my model as below:
```Python
medical_model = MedGCN(num_med_nodes+1, 64, 64, 64).to(device)
out = medical_model(medical_data.x, medical_data.edge_index, medical_data.edge_weights)
```
I get the following error:
```Python
RuntimeError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/torch_geometric/nn/conv/message_passing.py](https://localhost:8080/#) in _index_select_safe(self, src, index)
303 try:
--> 304 return src.index_select(self.node_dim, index)
305 except (IndexError, RuntimeError) as e:
RuntimeError: INDICES element is out of DATA bounds, id=145 axis_dim=1
During handling of the above exception, another exception occurred:
IndexError Traceback (most recent call last)
10 frames
[<ipython-input-139-c27e3bba7fe8>](https://localhost:8080/#) in <cell line: 4>()
13 print(medical_data.edge_weights)
14
---> 15 medical_out = medical_model(medical_data.x, medical_data.edge_index, medical_data.edge_weights)
16
...
[<ipython-input-134-a363cdee8ff3>](https://localhost:8080/#) in forward(self, x, edge_index, edge_weight)
19 def forward(self, x, edge_index, edge_weight):
20 x = self.embedding(x)
---> 21 x = self.gnn1(x, edge_index, edge_weight)
22 x = self.relu(x)
23 x = self.dropout(x)
...
[/usr/local/lib/python3.10/dist-packages/torch_geometric/nn/conv/gcn_conv.py](https://localhost:8080/#) in forward(self, x, edge_index, edge_weight)
261
262 # propagate_type: (x: Tensor, edge_weight: OptTensor)
--> 263 out = self.propagate(edge_index, x=x, edge_weight=edge_weight)
264
265 if self.bias is not None:
[~/.cache/pyg/message_passing/torch_geometric.nn.conv.gcn_conv_GCNConv_propagate.py](https://localhost:8080/#) in propagate(self, edge_index, x, edge_weight, size)
166 else:
167
--> 168 kwargs = self.collect(
169 edge_index,
170 x,
[~/.cache/pyg/message_passing/torch_geometric.nn.conv.gcn_conv_GCNConv_propagate.py](https://localhost:8080/#) in collect(self, edge_index, x, edge_weight, size)
87 elif isinstance(x, Tensor):
88 self._set_size(size, j, x)
---> 89 x_j = self._index_select(x, edge_index_j)
90 else:
91 x_j = None
[/usr/local/lib/python3.10/dist-packages/torch_geometric/nn/conv/message_passing.py](https://localhost:8080/#) in _index_select(self, src, index)
298 return src.index_select(self.node_dim, index)
299 else:
--> 300 return self._index_select_safe(src, index)
301
302 def _index_select_safe(self, src: Tensor, index: Tensor) -> Tensor:
[/usr/local/lib/python3.10/dist-packages/torch_geometric/nn/conv/message_passing.py](https://localhost:8080/#) in _index_select_safe(self, src, index)
313
314 if (index.numel() > 0 and index.max() >= src.size(self.node_dim)):
--> 315 raise IndexError(
316 f"Found indices in 'edge_index' that are larger "
317 f"than {src.size(self.node_dim) - 1} (got "
IndexError: Found indices in 'edge_index' that are larger than 0 (got 145). Please ensure that all indices in 'edge_index' point to valid indices in the interval [0, 1) in your node feature matrix and try again.
```
I don't know if it is relevant, but I should also say that some `edge_weights` are negative also.
How do I resolve this?
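For reference, a minimal diagnostic sketch of the invariant the error message describes (assuming `medical_data.x` is the node feature matrix; variable names follow the snippet above):
```Python
# Diagnostic sketch: every index in edge_index must reference a row of x.
num_nodes = medical_data.x.size(0)
max_index = int(medical_data.edge_index.max())
print(f"num_nodes={num_nodes}, max edge index={max_index}")
assert max_index < num_nodes, "edge_index references nodes missing from x"
```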
Thank you
### Versions
```
PyTorch version: 2.1.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.27.9
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.58+-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 0
BogoMIPS: 4399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 256 KiB (1 instance)
L3 cache: 55 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.1.0+cu121
[pip3] torch_geometric==2.4.0
[pip3] torch-scatter==2.1.2+pt21cu121
[pip3] torch-sparse==0.6.18+pt21cu121
[pip3] torchaudio==2.1.0+cu121
[pip3] torchdata==0.7.0
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.16.0
[pip3] torchvision==0.16.0+cu121
[pip3] triton==2.1.0
[conda] Could not collect
```
|
closed
|
2024-02-12T20:32:20Z
|
2024-02-15T09:16:11Z
|
https://github.com/pyg-team/pytorch_geometric/issues/8902
|
[
"bug"
] |
Rajat1206
| 2
|
davidteather/TikTok-Api
|
api
| 1,093
|
[FEATURE_REQUEST] - Scrape the tiktok text
|
Thanks for this wonderful tool.
I would like to scrape the text (caption) of a video, but I cannot find a way to do it. All I see in the documentation is how to scrape info related to videos or user info.
Please advise.
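For context, a rough sketch of what I am after; I am assuming the raw video dict returned by `video.info()` carries the caption under a `desc` key (the key name is an assumption on my part, and session setup may need ms_tokens):
```python
# Sketch only: fetch a video's caption text, assuming it lives under "desc".
import asyncio
from TikTokApi import TikTokApi

async def get_caption(url: str) -> str:
    async with TikTokApi() as api:
        await api.create_sessions(num_sessions=1)
        video = api.video(url=url)
        info = await video.info()        # raw video dict
        return info.get("desc", "")      # "desc" is an assumed field name

print(asyncio.run(get_caption("https://www.tiktok.com/@someuser/video/123")))
```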
|
open
|
2023-12-18T08:59:04Z
|
2024-08-13T01:02:52Z
|
https://github.com/davidteather/TikTok-Api/issues/1093
|
[
"feature_request"
] |
lynyanny
| 3
|
explosion/spaCy
|
deep-learning
| 13,467
|
⚠ Aborting and saving the final best model. Encountered exception: RuntimeError('Invalid argument')
|
## How to reproduce the behaviour
## Your Environment
* Operating System:
* Python Version Used:
* spaCy Version Used:
* Environment Information:
|
closed
|
2024-04-28T12:12:26Z
|
2024-06-15T00:03:34Z
|
https://github.com/explosion/spaCy/issues/13467
|
[
"invalid"
] |
Lance-Owen
| 1
|
InstaPy/InstaPy
|
automation
| 6,149
|
Mutual following takes forever
|
## Expected Behavior
Mutual following should not take long for accounts with a small following_count.
## Current Behavior
get_mutual_following currently first grabs all_followers, then grabs all_following, and then does a one-by-one check on all followers to see if they belong to all_following. This is inefficient.
## Possible Solution (optional)
Grab relationship_counts first and decide the order in which to do things: do we grab following first or followers first, and which of them do we use to run the loop. Also, I don't know if it's possible, but if one of the numbers is considerably smaller than the other, say followers is 50,000 and following is 100, it might make sense to check each "following" one by one to see if they are in the "followers" directly on the site. A rough sketch of the idea follows below.
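A rough sketch of the proposed ordering (the helper names are hypothetical, not actual InstaPy functions):
```python
def get_mutual_following(followers_count, following_count,
                         grab_followers, grab_following):
    # Iterate over the smaller set and test membership against the larger one.
    if following_count <= followers_count:
        smaller, larger = set(grab_following()), set(grab_followers())
    else:
        smaller, larger = set(grab_followers()), set(grab_following())
    return [user for user in smaller if user in larger]
```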
## InstaPy configuration
|
closed
|
2021-04-13T06:11:27Z
|
2021-07-21T05:18:54Z
|
https://github.com/InstaPy/InstaPy/issues/6149
|
[
"wontfix"
] |
guttikar
| 1
|
piskvorky/gensim
|
data-science
| 3,179
|
Gensim's FastText model reads in unsupported modes from Facebook's FastText
|
In gensim/models/fasttext.py:
```python
model = FastText(
    vector_size=m.dim,
    window=m.ws,
    epochs=m.epoch,
    negative=m.neg,
    # FIXME: these next 2 lines read in unsupported FB FT modes (loss=3 softmax or loss=4 onevsall,
    # or model=3 supervised) possibly creating inconsistent gensim model likely to fail later. Displaying
    # clear error/warning with explanatory message would be far better - even if there might be some reason
    # to continue with the load - such as providing read-only access to word-vectors trained those ways. (See:
    # https://github.com/facebookresearch/fastText/blob/2cc7f54ac034ae320a9af784b8145c50cc68965c/src/args.h#L19
    # for FB FT mode definitions.)
    hs=int(m.loss == 1),
    sg=int(m.model == 2),
    bucket=m.bucket,
    min_count=m.min_count,
    sample=m.t,
    min_n=m.minn,
    max_n=m.maxn,
)
```
|
open
|
2021-06-22T01:55:10Z
|
2024-08-25T17:02:40Z
|
https://github.com/piskvorky/gensim/issues/3179
|
[
"bug",
"difficulty easy",
"good first issue",
"fasttext"
] |
mpenkov
| 11
|
davidsandberg/facenet
|
tensorflow
| 407
|
How to solve in win7?
|
Traceback (most recent call last):
  File "E:\FaceCode_Github\facenet\facenet-master\facenet-master\src\align\align_dataset_mtcnn.py", line 143, in <module>
    main(parse_arguments(sys.argv[1:]))
  File "E:\FaceCode_Github\facenet\facenet-master\facenet-master\src\align\align_dataset_mtcnn.py", line 46, in main
    facenet.store_revision_info(src_path, output_dir, ' '.join(sys.argv))
  File "E:\FaceCode_Github\facenet\facenet-master\facenet-master\src\facenet.py", line 501, in store_revision_info
    gitproc = Popen(['git', 'rev-parse', 'HEAD'], stdout = PIPE, cwd=src_path)
  File "D:\softWarePackAge\anaconda3\lib\subprocess.py", line 947, in __init__
    restore_signals, start_new_session)
  File "D:\softWarePackAge\anaconda3\lib\subprocess.py", line 1224, in _execute_child
    startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified.
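The error means Windows cannot find a `git` executable on the PATH when `store_revision_info` shells out to it. Installing Git for Windows (and adding it to PATH) resolves it; alternatively, a guarded version of the call (a sketch, not the upstream code) tolerates a missing git binary:
```python
# Sketch: tolerate a missing git binary instead of crashing.
from subprocess import Popen, PIPE

def get_git_revision(src_path):
    try:
        gitproc = Popen(['git', 'rev-parse', 'HEAD'], stdout=PIPE, cwd=src_path)
        stdout, _ = gitproc.communicate()
        return stdout.strip()
    except FileNotFoundError:
        return b'unknown (git not found)'
```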
|
closed
|
2017-08-02T08:42:49Z
|
2017-11-10T23:17:55Z
|
https://github.com/davidsandberg/facenet/issues/407
|
[] |
wyc2015fq
| 3
|
matterport/Mask_RCNN
|
tensorflow
| 2,662
|
create similar masks as mask rcnn
|
Hi,
I have an application running with a Mask R-CNN model; due to the slow speed of Mask R-CNN I have switched to an object detection model, i.e. a YOLOv4 model. The application flow is set up for masks as well, so I am trying to convert the bounding box points into mask format, i.e. exactly the same format as Mask R-CNN outputs. I am using OpenCV drawContours, but there are some issues and I am not able to create exactly the same mask format as Mask R-CNN outputs.
Here is the sample code:
```
masks_list = []
for (classid, score, box_) in zip(classes, scores, boxes):
    if score > 0.5:
        r['rois'].append([box_[0], box_[1], box_[2], box_[3]])
        r['class_ids'].append(int(classid))
        r['scores'].append(score)
        gen_mask = np.zeros((input_height, input_width), dtype=np.uint8)
        points = []
        points.append([int(box_[0]), int(box_[1])])
        points.append([int(box_[2]), int(box_[3])])
        cv2.drawContours(gen_mask, np.int32([points]), -1, 1, -1, cv2.LINE_AA)
        tmp = gen_mask > 0
        masks_list.append(tmp)
if len(masks_list) > 0:
    masks_list = np.swapaxes(masks_list, 0, 2)
    masks_list = np.swapaxes(masks_list, 1, 0)
r['rois'] = np.array(r['rois'])
r['class_ids'] = np.array(r['class_ids'])
r['scores'] = np.array(r['scores'])
r['masks'] = np.array(masks_list)
masks, class_ids, regions, scores = r['masks'], r['class_ids'], r['rois'], r['scores']
```
Please help me figure out the issue with the code snippet above.
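One likely culprit: `cv2.drawContours` with only two points traces a line, not a filled rectangle, so the resulting mask never covers the box area. A hedged sketch of a fix using `cv2.rectangle` (the `(y1, x1, y2, x2)` coordinate order is an assumption; adjust to your detector's output):
```
import cv2
import numpy as np

def box_to_mask(box_, input_height, input_width):
    y1, x1, y2, x2 = [int(v) for v in box_]  # assumed (y1, x1, y2, x2) order
    gen_mask = np.zeros((input_height, input_width), dtype=np.uint8)
    cv2.rectangle(gen_mask, (x1, y1), (x2, y2), 1, thickness=-1)  # filled box
    return gen_mask > 0
```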
|
closed
|
2021-08-04T12:02:20Z
|
2021-08-08T06:26:21Z
|
https://github.com/matterport/Mask_RCNN/issues/2662
|
[] |
muhammadabdullah34907
| 1
|
deepinsight/insightface
|
pytorch
| 2,268
|
arcface_torch pretrained weights problem
|
Do you provide a pretrained face recognition model that can be used directly to extract facial features?
[models](https://onedrive.live.com/?authkey=%21AFZjr283nwZHqbA&id=4A83B6B633B029CC%215581&cid=4A83B6B633B029CC) It seems to contain only the backbone?
Hello, do you provide arcface_torch models already trained on a dataset? From [inference.py](https://github.com/deepinsight/insightface/blob/master/recognition/arcface_torch/inference.py) it looks like only the backbone is loaded?
Thanks!
|
open
|
2023-03-21T07:13:03Z
|
2023-03-22T08:11:42Z
|
https://github.com/deepinsight/insightface/issues/2268
|
[] |
iiTzFrankie
| 1
|
plotly/dash-cytoscape
|
dash
| 23
|
Adding Support for networkX
|
## Description
At the moment, conversion between networkX graphs and Dash Cytoscape networks has to be done manually. To make it easier for everyone to integrate networkX into their Dash workflow, we would like to improve support for the library. Please let us know your thoughts, and we would highly appreciate it if you would like to contribute!
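For reference, a minimal hand-rolled conversion sketch, assuming the standard Cytoscape.js element format that dash-cytoscape consumes:
```python
import networkx as nx

def nx_to_cytoscape_elements(G: nx.Graph) -> list:
    # Nodes and edges become dicts with a 'data' payload, per Cytoscape.js.
    nodes = [{'data': {'id': str(n), 'label': str(n)}} for n in G.nodes()]
    edges = [{'data': {'source': str(u), 'target': str(v)}} for u, v in G.edges()]
    return nodes + edges

elements = nx_to_cytoscape_elements(nx.karate_club_graph())
```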
|
open
|
2019-01-11T23:02:26Z
|
2024-03-11T15:47:19Z
|
https://github.com/plotly/dash-cytoscape/issues/23
|
[
"suggestion",
"♥ NEEDS SPON$OR"
] |
xhluca
| 9
|
amidaware/tacticalrmm
|
django
| 2,152
|
server install.sh dependency missing python3.11-venv
|
- OS: Debian 12
- RMM Version v0.20.1
- Standard install.sh
Neither a Debian 12 clean install nor the 3.11.8 manual download includes the Python venv module, so the install script fails:
`
The virtual environment was not created successfully because ensurepip is not
available. On Debian/Ubuntu systems, you need to install the python3-venv
package using the following command.
apt install python3.11-venv
./install.sh: line 599: python: command not found
`
Python is installed via a manual download of the specific version, which does not include venv.
Resolution: amend the script after line 366 to also install python3.11-venv.
|
closed
|
2025-02-24T20:17:48Z
|
2025-02-25T08:18:21Z
|
https://github.com/amidaware/tacticalrmm/issues/2152
|
[] |
davemc50
| 3
|
kizniche/Mycodo
|
automation
| 1,354
|
Request to add DFRobot i2c DAC output module (0-10v)
|
Found an awesome DAC from DFRobot; it is the first I2C device I have found that outputs a 0-10 V signal, and it has 8 possible I2C addresses. I am not much of a programmer, so I am hoping this is universal enough for other people to find useful too.
https://www.dfrobot.com/product-2613.html
Tons of lighting systems use 0-10 V dimming, and this is the easiest way to add it to Mycodo without making your own circuit (which is how I am currently doing it); this is just more user friendly.
https://www.prolighting.com/blog/2020/03/25/understanding-0-10v-dimming-with-todays-led-commercial-fixtures/
wiki: https://wiki.dfrobot.com/SKU_DFR0971_2_Channel_I2C_0_10V_DAC_Module
Probably somewhat similar to https://kizniche.github.io/Mycodo/Supported-Outputs/#digital-to-analog-converter-mcp4728
I envision the output to just have the option to select the I2C address and set the output voltage, a bit simpler than the MCP4728; a rough sketch of that idea is below.
They also have 6 versions of this board with various different outputs.
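A very rough sketch of what such an output could look like; the register address below is a PLACEHOLDER, not taken from the DFRobot wiki, so check the SKU_DFR0971 docs before using any of this:
```python
from smbus2 import SMBus

I2C_ADDR = 0x5f      # one of the 8 selectable addresses (assumption)
REG_OUTPUT = 0x02    # PLACEHOLDER register; consult the DFRobot wiki

def set_output_voltage(volts, channel=0):
    """Scale 0-10 V into a 12-bit value and write it over I2C (sketch only)."""
    raw = max(0, min(4095, int(volts / 10.0 * 4095)))
    with SMBus(1) as bus:
        bus.write_i2c_block_data(I2C_ADDR, REG_OUTPUT + 2 * channel,
                                 [raw & 0xFF, (raw >> 8) & 0xFF])
```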
|
closed
|
2023-12-10T04:13:34Z
|
2024-10-04T03:55:22Z
|
https://github.com/kizniche/Mycodo/issues/1354
|
[
"enhancement",
"Testing"
] |
Nithalik
| 43
|
Nike-Inc/koheesio
|
pydantic
| 162
|
[FEATURE] Add `authorization_header` to HttpStep to support other token types besides just `bearer`
|
> Replacing `bearer_token` with `authorization_header` will do (most of) the job. And if this attribute is set, we just update the `headers` attribute.
Hmm.. so add something like this:
```python
authorization_header: Optional[SecretStr] = Field(
    default=None,
    description="Token for authorization",
    alias="token",
    repr=False,
)
```
_Originally posted by @dannymeijer in https://github.com/Nike-Inc/koheesio/issues/158#issuecomment-2627282844_
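And a sketch of the follow-up behavior described above (attribute and validator names are illustrative, not the actual koheesio HttpStep internals): when the field is set, its value is pushed into the `headers` attribute.
```python
from typing import Optional
from pydantic import BaseModel, Field, SecretStr, model_validator

class HttpStepSketch(BaseModel):
    headers: dict = Field(default_factory=dict)
    authorization_header: Optional[SecretStr] = Field(
        default=None, description="Token for authorization", alias="token", repr=False
    )

    @model_validator(mode="after")
    def _apply_authorization_header(self):
        # Push the raw value into the Authorization header, whatever the scheme.
        if self.authorization_header is not None:
            self.headers["Authorization"] = self.authorization_header.get_secret_value()
        return self
```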
|
closed
|
2025-01-31T13:08:55Z
|
2025-02-26T15:16:48Z
|
https://github.com/Nike-Inc/koheesio/issues/162
|
[] |
dannymeijer
| 2
|
albumentations-team/albumentations
|
deep-learning
| 2,432
|
[Feature request] Add apply_to_images to Sharpen
|
open
|
2025-03-11T01:16:51Z
|
2025-03-11T01:16:58Z
|
https://github.com/albumentations-team/albumentations/issues/2432
|
[
"enhancement",
"good first issue"
] |
ternaus
| 0
|
|
slackapi/python-slack-sdk
|
asyncio
| 1,280
|
Intermittent socket mode connectivity
|
(Filling out the following details about bugs will help us solve your issue sooner.)
### Reproducible in:
#### The Slack SDK version
slack-sdk==3.19.0
#### Python runtime version
Python 3.8.14
#### OS info
ProductName: Mac OS X
ProductVersion: 10.15.7
BuildVersion: 19H2026
Darwin Kernel Version 19.6.0: Tue Jun 21 21:18:39 PDT 2022; root:xnu-6153.141.66~1/RELEASE_X86_64
#### Steps to reproduce:
[excerpt of the new socket mode slack driver for our internal fork of the skozcen/will project]
```python
@property
def client(self):
    "References/initializes our Slack client"
    if self._client is None:
        self._client = SocketModeClient(
            app_token=settings.SLACK_APP_TOKEN,
            web_client=WebClient(token=settings.SLACK_BOT_TOKEN),
            ping_pong_trace_enabled=True,
            on_message_listeners=[self._sm_event_handler],
            on_error_listeners=[self._sm_error_handler],
        )
    return self._client

def _load_channels(self):
    "Updates our internal list of channels. Kind of expensive."
    channels = {}
    for page in self.client.web_client.conversations_list(
        limit=self.PAGE_LIMIT,
        exclude_archived=True,
        types="public_channel,private_channel,mpim,im",
    ):
        for channel in page["channels"]:
            channels[channel["id"]] = Channel(
                id=channel["id"],
                name=channel.get("name", channel["id"]),
                source=clean_for_pickling(channel),
                members=dict(),
            )
    if channels:
        self._channels = channels
        self.save("slack_channel_cache", channels)
        logging.info("Loaded/saved %s channels in the cache", len(channels))

def _update_channels(self, channel: dict = None):
    "Updates our internal list of channels. Kind of expensive."
    if channel is None and not self._channels:
        channels = self.load("slack_channel_cache", None)
        if channels is not None:
            self._channels = channels
        return
    channels = self.channels
    name = channel.get("name", channel["id"])
    logging.debug("Updated channel %s", str(name))
    channels[channel["id"]] = Channel(
        id=channel["id"],
        name=name,
        source=clean_for_pickling(channel),
        members=dict(),
    )
    if channels:
        self._channels = channels
        self.save("slack_channel_cache", channels)
        logging.debug("saved %s channels to the cache", len(channels))

def _load_people(self):
    "Loads all Slack users, kind of expensive."
    people = dict()
    for page in self.client.web_client.users_list(limit=self.PAGE_LIMIT):
        for member in page["members"]:
            if member["deleted"]:
                continue
            people[member["id"]] = Person.from_slack(member)
            if member["name"] == self.handle:
                self.me = people[member["id"]]
    if people:
        self._people = people
        self.save("slack_people_cache", people)
        logging.info("Loaded/saved %s people in the cache", len(people))

def _update_people(self, person: dict = None):
    "Updates our internal list of Slack users. Kind of expensive."
    if person is None and not self._people:
        if self.load("slack_people_cache", None):
            self._people = self.load("slack_people_cache", None)
        return
    people = self.people
    member_id = person["id"]
    if person.get("deleted", False):
        if member_id in people:
            del people[member_id]
    else:
        people[member_id] = Person.from_slack(person)
    self._people = people
    if self._people:
        self.save("slack_people_cache", people)
        logging.debug(
            "saved %s people to the cache because of %s",
            len(people),
            person["name"],
        )

def _sm_error_handler(self, message: str):
    data = json.loads(message)
    logging.error("Got an error: %s", data)
    self._ack(data)

def _ack(self, data: dict):
    """All Slack events need to be acked ... except for the ones that don't (currently only `hello`)

    :param dict: The original Slack event
    """
    if "envelope_id" in data:
        self.client.send_socket_mode_response({"envelope_id": data["envelope_id"]})
        logging.warning(f'{os.getpid()} got message: {data["payload"]}')
    elif data.get("type", None) == "hello":
        return
    else:
        logging.warning("Not acknowledging %s", data)

def _sm_event_handler(self, message: str):
    """All Slack events are handled/routed by this function. If it raises an Exception
    the current Slack connection will be closed and re-opened, so maybe don't do that.
    """
    try:
        data = json.loads(message)
        self._ack(data)
        event_type = data.get("type", None)
        if data.get("retry_attempt", 0) > 0:
            # return
            logging.warning("Retry %d", data["retry_attempt"])
        if event_type == "events_api":
            event_type = data["payload"].get("event", {}).get("type", None)
        if event_type is None:
            logging.error("Got an event without an event type! %s", message)
        elif event_type == "hello":
            logging.info(message)
        elif event_type == "channel_created":
            self._update_channels(data["payload"]["event"]["channel"])
        elif event_type == "user_change":
            self._update_people(data["payload"]["event"]["user"])
        elif event_type == "message":
            if not self.handle:
                id = data["payload"].get("authorizations", [{"user_id": None}])[0]["user_id"]
                if id and id in self.people:
                    self.__class__.handle = id
                    self.__class__.me = self.people[id]
            self.handle_incoming_slack_event(data=data["payload"]["event"])
        else:
            logging.warning("No handler for a %s event: %s", event_type, data)
    except:
        logging.exception("Exception processing %s", message)

def _watch_slack_ws(self):
    "This is our main loop."
    metadata_loaded = False  # Only load metadata once (expensive)
    while not self.exiting:
        try:
            if not self.client.is_connected():
                self.client.connect()
                if not metadata_loaded:
                    Thread(target=self._load_people, daemon=True).start()
                    Thread(target=self._load_channels, daemon=True).start()
                    metadata_loaded = True
            while self.client.is_connected():
                time.sleep(5)
                if self.exiting:
                    self.client.close()
        except Exception as the_exception:
            logging.exception("Error in websocket loop")
            print(the_exception)
            time.sleep(2)
```
### Expected result:
I expect to see messages come in within a second or two of sending them via DM. I cannot seem to find a way to expose the claimed intermittent availability issue.
### Actual result:
Messages take from approximately zero seconds to 8 minutes to be received by the client.
### Requirements
I originally submitted my issue to https://my.slack.com/help/requests/new but they directed me here. From their response:
```text
Hi Brian,
Thanks for your patience. I'm a Platform specialist, and I reviewed the notes that Paige provided about your earlier chat.
It appears that the socket mode connection generated for your app seemed to be intermittently unavailable so we don't get the expected `envelope_id` within 3 seconds, and end up retrying your requests.
* The 3-second requirement is mentioned here: https://api.slack.com/apis/connections/socket-implement#acknowledge [api.slack.com].
* The event callback retry mechanism is mentioned here: https://api.slack.com/apis/connections/events-api#the-events-api__field-guide__error-handling__failure-conditions [api.slack.com]
Given this, you may want to consider creating multiple socket mode connections (as mentioned here: https://api.slack.com/apis/connections/socket-implement#connections [api.slack.com]), but if you do that, the Slack callbacks will only be sent to one of the connections, but that's used as a backup in case the main connection is not available for any reason. **Additionally, it should be noted that you're allowed up to 10 simultaneous websocket connections.**
Sorry for the trouble here, and hopefully, this helps. Thanks for checking in about this.
```
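For completeness, a minimal sketch of the multi-connection suggestion from Slack support (several `SocketModeClient` instances against the same app-level token; whether this actually helps is exactly what I am unsure about):
```python
import os
from slack_sdk.web import WebClient
from slack_sdk.socket_mode import SocketModeClient

clients = []
for _ in range(3):  # up to 10 simultaneous connections are allowed
    client = SocketModeClient(
        app_token=os.environ["SLACK_APP_TOKEN"],
        web_client=WebClient(token=os.environ["SLACK_BOT_TOKEN"]),
    )
    client.connect()  # Slack delivers each event to one of the connections
    clients.append(client)
```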
|
closed
|
2022-10-21T20:25:54Z
|
2022-10-24T23:27:37Z
|
https://github.com/slackapi/python-slack-sdk/issues/1280
|
[
"question",
"socket-mode"
] |
BrianGallew
| 3
|
Python3WebSpider/ProxyPool
|
flask
| 62
|
代理池项目中setting.py文件相关配置
|
Not a bug, just two suggestions:
1. In the project's setting.py, a LOG_DIR log-path parameter is declared but never used. A ...\project\ProxyPool\logs folder should be created, and the following lines in the config file:
logger.add(env.str('LOG_RUNTIME_FILE', 'runtime.log'), level='DEBUG', rotation='1 week', retention='20 days')
logger.add(env.str('LOG_ERROR_FILE', 'error.log'), level='ERROR', rotation='1 week')
should be changed to:
logger.add(env.str('LOG_RUNTIME_FILE', f'{LOG_DIR}/runtime.log'), level='DEBUG', rotation='1 week', retention='20 days')
logger.add(env.str('LOG_ERROR_FILE', f'{LOG_DIR}/error.log'), level='ERROR', rotation='1 week')
2. If the ENABLE_TESTER, ENABLE_GETTER and ENABLE_SERVER switches in setting.py are all False, running run.py raises an error (and the finally clause of the try block raises again); scheduler.py could be modified to handle this. (This one is a bit of a nitpick and can be ignored.)

|
open
|
2020-04-09T08:57:18Z
|
2020-04-09T10:01:47Z
|
https://github.com/Python3WebSpider/ProxyPool/issues/62
|
[
"bug"
] |
Yu-Yv
| 1
|
hankcs/HanLP
|
nlp
| 724
|
Suggestion: use a LinkedList when filtering in the new-word discovery method
|
/**
 * Extract words
 *
 * @param reader large text
 * @param size   the number of words to extract
 * @return a list of words
 */
public List<WordInfo> discover(BufferedReader reader, int size) throws IOException
At line 83, replacing the ArrayList with a LinkedList reduces the cost of the left shift that ArrayList.remove incurs during removal, which gives a large speed improvement.
|
closed
|
2017-12-27T04:46:18Z
|
2017-12-27T09:07:22Z
|
https://github.com/hankcs/HanLP/issues/724
|
[
"improvement"
] |
Jnoson
| 1
|
AntonOsika/gpt-engineer
|
python
| 583
|
Expanded use of LangChain features - tools, etc.?
|
Thanks for creating such a good AI dev aide. I know GE is using LangChain for LLM abstraction, which is great. Can you expand GE functionality to add more LangChain features like external tools, etc.? E.g., to allow web searching, GitHub repo access, chat with private documents, etc.
Is there any interest in the GE community for this?
|
closed
|
2023-08-11T21:46:06Z
|
2023-08-16T19:25:48Z
|
https://github.com/AntonOsika/gpt-engineer/issues/583
|
[] |
radman-x
| 1
|
gradio-app/gradio
|
python
| 10,828
|
Receive ValueError when using nested @gr.render()
|
### Describe the bug
When attempting to use a nested @gr.render() I receive a `ValueError: Reactive render must be inside a Blocks context.`. However, as the example code shows, the reactive render IS placed inside a Blocks context (it's nested).
The motivation here is to be able to dynamically render nested gradio components. For example, suppose I want to render certain gradio components for certain users (high level use of gr.render()) and within that gradio component I want to dynamically render a list of buttons based on user interaction or API calls (the nested gr.render()).
Without being able to use gr.render() nested, I cannot create those dynamic components at a lower level. This prevents me from creating composable and dynamic gradio components.
The example code below shows an example implementation and reproduces the error.
Perhaps I am missing something? I checked the issue log and couldn't find something similar.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr


def run_nested_app(user):
    with gr.Blocks() as nested_app:
        gr.Textbox(f"{user}", label="User Profile")
        search_box = gr.Textbox(
            value="", interactive=True, placeholder="Edit me", label="Search Bar"
        )

        # This dynamic render here is supposed to provide dynamic content based on user interaction.
        # But this render is broken because it is not "inside a Blocks context"
        @gr.render(inputs=search_box)
        def render_stuff(search_box):
            split_words = search_box.split()
            for word in split_words:
                gr.Button(value=word, key=word)

    return nested_app


with gr.Blocks() as demo:
    user = gr.State("User1")

    # This dynamic render works correctly.
    @gr.render(inputs=user)
    def render_example(user):
        run_nested_app(user)

if __name__ == "__main__":
    demo.launch()
```
### Screenshot
_No response_
### Logs
```shell
File "bar/.venv/lib/python3.11/site-packages/gradio/renderable.py", line 75, in apply
self.fn(*args, **kwargs)
File "bar/foo/ui/example.py", line 28, in render_example
run_nested_app(user)
File "bar/foo/ui/example.py", line 13, in run_nested_app
@gr.render(inputs=search_box)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "bar/.venv/
lib/python3.11/site-packages/gradio/renderable.py", line 127, in render
raise ValueError("Reactive render must be inside a Blocks context.")
ValueError: Reactive render must be inside a Blocks context.
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.12.0
gradio_client version: 1.5.4
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.8.0
audioop-lts is not installed.
fastapi: 0.115.6
ffmpy: 0.5.0
gradio-client==1.5.4 is not installed.
httpx: 0.27.2
huggingface-hub: 0.27.1
jinja2: 3.1.4
markupsafe: 2.1.5
numpy: 1.26.4
orjson: 3.10.15
packaging: 24.2
pandas: 2.2.3
pillow: 11.1.0
pydantic: 2.10.5
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.9.2
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.41.3
tomlkit: 0.13.2
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.12.0
httpx: 0.27.2
huggingface-hub: 0.27.1
packaging: 24.2
typing-extensions: 4.12.2
websockets: 14.2
```
### Severity
Blocking usage of gradio
|
open
|
2025-03-17T21:57:16Z
|
2025-03-17T21:57:16Z
|
https://github.com/gradio-app/gradio/issues/10828
|
[
"bug"
] |
i-am-mike-davis
| 0
|
viewflow/viewflow
|
django
| 335
|
(2.0.0a0) Improvements for Django - Suggestion
|
Hi @kmmbvnr,
thanks to [this comment](https://github.com/viewflow/django-fsm/issues/267#issuecomment-912257373) we are currently using the pre-release version 2 of this package.
In development, when using this package with Django models, we realised that you were still able to directly alter the field that defines which state our model is currently in.
<details>
<summary>Quick Example</summary>
```python
class InvoiceStatusChoices(TextChoices):
    NEW = 'NEW', _('New')
    PAID = 'PAID', _('Paid')
    REJECTED = 'REJECTED', _('Rejected')


class Invoice(models.Model):
    status = models.CharField(max_length=150, choices=InvoiceStatusChoices.choices)
```
```python
invoice = Invoice.objects.create()
invoice.status = InvoiceStatusChoices.REJECTED
invoice.save() # still possible, we wanted to disable this and instead only allow changes via flow transitions
```
</details>
Since we did not want that to be possible, but instead wanted a developer to _always_ call a transition method to alter the state, we wrote a mixin that does not allow state changes to be made directly, and instead expects state changes to happen only as a result of calling a transition method.
Do you think it's possible for me to create a PR with this mixin, so it can be incorporated into this package? 🙂
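A rough sketch of the idea (the field name and guard flag are illustrative only; the actual mixin differs):
```python
from django.core.exceptions import ValidationError
from django.db import models

class ProtectedStateMixin(models.Model):
    PROTECTED_FIELD = "status"

    class Meta:
        abstract = True

    def save(self, *args, **kwargs):
        # Direct edits are blocked; transition methods set the flag before saving.
        if self.pk is not None and not getattr(self, "_in_transition", False):
            current = type(self).objects.only(self.PROTECTED_FIELD).get(pk=self.pk)
            if getattr(current, self.PROTECTED_FIELD) != getattr(self, self.PROTECTED_FIELD):
                raise ValidationError(
                    f"{self.PROTECTED_FIELD} may only be changed via a transition")
        super().save(*args, **kwargs)
```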
|
closed
|
2021-11-16T07:52:29Z
|
2024-02-15T12:45:30Z
|
https://github.com/viewflow/viewflow/issues/335
|
[
"request/enhancement"
] |
christoph-teichmeister
| 7
|
neuml/txtai
|
nlp
| 206
|
Add workflow scheduling
|
Currently, workflows are run on demand. This makes sense for data that is dynamically changing.
In the case where an index is being refreshed from the same input source, a schedule option would be beneficial. While the same could be accomplished with standard scheduling tools like cron, having an option directly within txtai is helpful.
This task will integrate the [croniter](https://pypi.org/project/croniter/) [MIT License] library. A new schedule section will be added to workflows. A schedule requires a cron expression, a list of elements to run each iteration and a max number of iterations to run (will also have option to run indefinitely).
Schedules will work for Python, YAML and API workflows.
Workflow scheduling is initially targeted for refreshing embedding indexes. Longer term, this will form the foundation for reporting, similar to what is currently found in paperai but for all domains.
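For illustration, a minimal sketch of how croniter could drive such a schedule loop (the workflow call is a placeholder, not the final txtai API):
```python
from datetime import datetime
import time
from croniter import croniter

def schedule(expression, run_iteration, maxiterations=None):
    # Compute successive run times from the cron expression and sleep until each.
    it = croniter(expression, datetime.now())
    iterations = 0
    while maxiterations is None or iterations < maxiterations:
        wait = (it.get_next(datetime) - datetime.now()).total_seconds()
        time.sleep(max(0, wait))
        run_iteration()  # run the configured workflow elements
        iterations += 1
```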
|
closed
|
2022-01-27T17:18:07Z
|
2022-01-27T17:27:55Z
|
https://github.com/neuml/txtai/issues/206
|
[] |
davidmezzetti
| 0
|
dpgaspar/Flask-AppBuilder
|
flask
| 1,835
|
add Azure OAUTH example to security docs
|
Currently, we don't list an Azure OAUTH example for [in the security docs](https://flask-appbuilder.readthedocs.io/en/latest/security.html#authentication-oauth) for `OAUTH_PROVIDERS`.
However, we already have an example for Azure OAUTH in the [`./examples/oauth/config.py`](https://github.com/dpgaspar/Flask-AppBuilder/blob/v3.4.5/examples/oauth/config.py#L71-L87) file, so we can probably copy that example into [`./docs/security.rst`](https://github.com/dpgaspar/Flask-AppBuilder/blob/master/docs/security.rst#authentication-oauth) to help users understand that this is an option.
__For example: (derived from `./examples/oauth/config.py`)__
```python
OAUTH_PROVIDERS = [
    ...
    {
        "name": "azure",
        "icon": "fa-windows",
        "token_key": "access_token",
        "remote_app": {
            "client_id": "AZURE_APPLICATION_ID",
            "client_secret": "AZURE_SECRET",
            "api_base_url": "https://login.microsoftonline.com/AZURE_TENANT_ID/oauth2",
            "client_kwargs": {
                "scope": "User.read name preferred_username email profile upn",
                "resource": "AZURE_APPLICATION_ID",
            },
            "request_token_url": None,
            "access_token_url": "https://login.microsoftonline.com/AZURE_TENANT_ID/oauth2/token",
            "authorize_url": "https://login.microsoftonline.com/AZURE_TENANT_ID/oauth2/authorize",
        },
    },
    ...
]
```
|
closed
|
2022-04-25T23:09:00Z
|
2023-08-01T07:34:03Z
|
https://github.com/dpgaspar/Flask-AppBuilder/issues/1835
|
[] |
thesuperzapper
| 2
|
dnouri/nolearn
|
scikit-learn
| 42
|
train_test_split signature
|
Any reason the signature for `train_test_split` takes in `eval_size` rather than just reading `self.eval_size`?
https://github.com/dnouri/nolearn/blob/12f263ccf09d32c3b73eff3d6fbe96ce85631f1e/nolearn/lasagne.py#L361
|
closed
|
2015-02-10T20:59:46Z
|
2015-07-22T20:48:22Z
|
https://github.com/dnouri/nolearn/issues/42
|
[] |
cancan101
| 2
|
gradio-app/gradio
|
data-science
| 10,211
|
Gallery preview row unable to display large number of images (overflow images are hidden and cannot be selected)
|
### Describe the bug
<img width="1439" alt="image" src="https://github.com/user-attachments/assets/7e8e1de0-1bbf-477c-afb6-af5a62fe269f" />
<img width="1439" alt="image" src="https://github.com/user-attachments/assets/216ad393-772c-4875-9385-cf7ba57e2efe" />
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Fix the bug
```python
css = """
.thumbnails {
justify-content: space-between !important;
}
"""
with gr.Blocks(css=css) as demo:
```
### System Info
```shell
gradio@5.9.0
```
### Severity
I can work around it
|
open
|
2024-12-16T15:14:14Z
|
2024-12-16T16:50:42Z
|
https://github.com/gradio-app/gradio/issues/10211
|
[
"bug"
] |
xiaomofa
| 0
|
sktime/sktime
|
scikit-learn
| 7,605
|
[ENH] Transformer Conformal Prediction for Time-Series
|
#6660 mentions the suggestion of adding Transformer Conformal Prediction for Time Series
[https://github.com/Jayaos/TCPTS](https://github.com/Jayaos/TCPTS)
Since conformal prediction provides uncertainty quantification for various scitypes such as regression, classification, or forecasting, I wonder if it would make more sense to implement this in a broader module like `proba` instead of confining it to a particular scitype? Or is it better implemented as a `performance_metrics`?
|
open
|
2025-01-04T17:41:48Z
|
2025-01-05T17:06:19Z
|
https://github.com/sktime/sktime/issues/7605
|
[
"implementing algorithms",
"module:forecasting",
"enhancement"
] |
jgyasu
| 1
|
langmanus/langmanus
|
automation
| 124
|
Problems with multi-turn conversations
|
Thank you for your support and contribution to LangManus!
**Describe the Bug**
A clear and concise description of the issue.
**Steps to Reproduce**
Please provide detailed steps to reproduce the issue:
1. What actions did you take?
2. What was the expected behavior?
3. What actually happened?
**Environment Information**
Please provide the following details:
- Operating System and Version: [e.g., Windows 10, macOS Ventura, Ubuntu 22.04]
- Python Version: 3.12.7
**Checklist**
Before submitting this issue, please confirm the following:
- [ ] I have read the `README.md` file thoroughly.
- [ ] I have reviewed the `docs/faq.md` file and confirmed that this issue is not already addressed there.
**Screenshots or Logs**
If applicable, please attach screenshots or logs to help us understand the issue.
**Additional Context**
Add any other context or information about the problem here.
|
open
|
2025-03-24T09:01:28Z
|
2025-03-24T09:03:00Z
|
https://github.com/langmanus/langmanus/issues/124
|
[
"bug"
] |
expresschen
| 1
|
ray-project/ray
|
deep-learning
| 51,501
|
CI test windows://python/ray/tests:test_list_actors_4 is consistently_failing
|
CI test **windows://python/ray/tests:test_list_actors_4** is consistently_failing. Recent failures:
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aaf1-9737-4a02-a7f8-1d7087c16fb1
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aa03-5c4f-4156-97c5-9793049512c1
DataCaseName-windows://python/ray/tests:test_list_actors_4-END
Managed by OSS Test Policy
|
closed
|
2025-03-19T00:06:29Z
|
2025-03-19T21:53:13Z
|
https://github.com/ray-project/ray/issues/51501
|
[
"bug",
"triage",
"core",
"flaky-tracker",
"ray-test-bot",
"ci-test",
"weekly-release-blocker",
"stability"
] |
can-anyscale
| 3
|
python-restx/flask-restx
|
flask
| 505
|
Swagger docs and Marshmallow schemes
|
Hi,
I'm using a stack of Flask, SQLAlchemy and Marshmallow in my API.
I would like to use Marshmallow schemas for parsing and also, if possible, for creating the Swagger docs.
Even the [docs](https://flask-restx.readthedocs.io/en/latest/parsing.html) suggest it is a better-integrated suite for this kind of thing than reqparse.
Is there any news or examples for this integration?
thank you all
|
open
|
2023-01-06T19:13:24Z
|
2023-01-08T15:47:40Z
|
https://github.com/python-restx/flask-restx/issues/505
|
[
"enhancement"
] |
plenzjr
| 1
|
fastapi-users/fastapi-users
|
asyncio
| 878
|
Allow update single field with `PATCH` `users/me`
|
## Describe the bug
Currently, when I try to update a user field on `users/me` using `PATCH`, I get a validation error because the `email` field is mandatory. According to the `PATCH` method spec, we should be able to update a single field without providing all the others. And even if I do provide the `email` field, it raises `UPDATE_USER_EMAIL_ALREADY_EXISTS`.
## To Reproduce
Note: I'm using JWT as auth strategy
Steps to reproduce the behavior:
1. Go to openapi docs and create a new user
2. Now, login in that user
3. Search for endpoint named `Users:Current User` and try to update user password using PATCH `/users/me`, with the payload: `{"password": "newpass"}`
4. See validation error
## Expected behavior
Instead of a `422`, we should receive a `200`
## Configuration
- Python version : `3.9.7`
- FastAPI version : `0.68.1`
- FastAPI Users version : `9.2.1`
### FastAPI Users configuration
```py
fastapi_users = FastAPIUsers(
    get_user_manager,
    [auth_backend],
    UserSchema,  # <- schema used
    UserCreateSchema,
    UserUpdateSchema,
    UserDB,
)


class UserSchema(models.BaseUser, models.BaseOAuthAccountMixin):
    pass
```
## Additional context
I'm using the Tortoise-ORM adapter
```py
async def get_user_db():
    yield TortoiseUserDatabase(UserDB, User, OAuthAccount)


async def get_user_manager(user_db=Depends(get_user_db)):
    yield UserManager(user_db)
```
Thanks for your help.
|
closed
|
2022-01-17T17:50:10Z
|
2022-01-17T18:00:38Z
|
https://github.com/fastapi-users/fastapi-users/issues/878
|
[
"bug"
] |
fullonic
| 1
|
nolar/kopf
|
asyncio
| 378
|
Question: run-time registration of handlers?
|
> <a href="https://github.com/nemethf"><img align="left" height="50" src="https://avatars1.githubusercontent.com/u/1453824?v=4"></a> An issue by [nemethf](https://github.com/nemethf) at _2020-06-24 12:49:16+00:00_
> Original URL: https://github.com/zalando-incubator/kopf/issues/378
>
## Question
<!-- What problem do you currently face and see no solution for it? -->
I have a CRD with a selector field that defines a set of pods on which my operator should configure something. Additionally, it is possible to have two custom resources configuring different parts of the same pod; that is, the intersection of the selectors can be non-empty.
Without using kopf, I would create a watch on pods for each custom resource and write an event handler for the watchers. The handler would somehow receive the event and the name of the custom resource it belongs to. It seems kopf does not support this approach.
So can you, please, tell me how I can implement an operator for this problem with kopf? Thank you.
<!-- If possible, explain what other ways did you try to solve the problem? -->
(I think #317 is somewhat similar, but not applicable here.)
## Checklist
- [X] I have read the [documentation](https://kopf.readthedocs.io/en/latest/) and searched there for the problem
- [X] I have searched in the [GitHub Issues](https://github.com/nolar/kopf/issues?utf8=%E2%9C%93&q=) for similar questions
## Keywords
<!-- Which keywords did you search for in the documentation/issue for this problem? -->
I basically read all the titles of open issues, pull requests and the documentation from cover to cover.
---
> <a href="https://github.com/nolar"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> Commented by [nolar](https://github.com/nolar) at _2020-06-24 18:39:44+00:00_
>
I am not sure that I got the idea right. So the answer may be wrong or irrelevant. But this is how I would approach it:
Since there are pods involved, there is a need for pod-handler. Since not all pods should be involved, we have to filter them. Since the criteria of filtering are quite sophisticated, I would use callbacks:
* https://kopf.readthedocs.io/en/latest/filters/#by-arbitrary-callbacks
```python
import kopf


def does_it_match(**_) -> bool:
    return True


@kopf.on.event('', 'v1', 'pods', when=does_it_match)
def pod_event(**_):
    pass
```
So, at this moment, all pods in the cluster/namespace will be intercepted. Now, we need to narrow the criteria. Since there is a selector in a CR, I would keep that global state of all selectors in memory, mapping to the original CRs they came from:
```python
from typing import Mapping, MutableMapping, Tuple

import kopf

SelectorKey = Tuple[str, str]  # (namespace, name)
SelectorLabels = Mapping[str, str]
SELECTORS: MutableMapping[SelectorKey, SelectorLabels] = {}


@kopf.on.create('zalando.org', 'v1', 'kopfexamples')
@kopf.on.resume('zalando.org', 'v1', 'kopfexamples')
@kopf.on.update('zalando.org', 'v1', 'kopfexamples')  # optionally
def cr_appears(namespace, name, spec, **_):
    key = (namespace, name)
    SELECTORS[key] = spec.get('selector', {})


@kopf.on.delete('zalando.org', 'v1', 'kopfexamples')
def cr_disappears(namespace, name, **_):
    key = (namespace, name)
    try:
        del SELECTORS[key]
    except KeyError:
        pass
So, at this point, we would have data for filtering the pods. Now, I would actually filter in that function above:
```python
def does_it_match(labels: Mapping[str, str], **_) -> bool:
    for (namespace, name), selector_labels in SELECTORS.items():
        if all(labels.get(key) == val for key, val in selector_labels.items()):
            return True
    return False
```
Now, the pods that do not match any known selector, will be silently ignored. Notice: they will get into the sight of the operator itself — in one and only one watch-stream — but will be filtered out in the earliest stages, with no logs produced (silently).
---
This is a difference here from your suggested approach: instead of having N watch-stream with labels in the URL (where N is the number of CRs with selectors), there will be one and only one watch-stream (and therefore TCP/HTTP/API connection), seeing all the pods, and just picking those of our interest, and ignoring others.
This will ease the API side, but will put some CPU load on the operator. The RAM footprint will be minimal, though not zero: every pod will spawn its own worker task (asyncio.Task), where the pod events will be routed to and almost instantly ignored; but the tasks are objects too, and on a cluster with thousands of pods this can be noticeable.
---
As a continuation, using the same `for` + `if`, I would be able to detect which CRs each individual pod corresponds to (one or even a few of them) in the handler itself, and do something with that pod as the contextual object (in kwargs) and the detected CRs. Perhaps the CRs' spec should also be preserved somewhere in the global state, so that we would know what to do specifically after the matching CRs are identified by their selectors.
---
The downside here is that you have to keep some state in memory — for all the CRs, or all the pods, or all of something, depending on which of them you expect to be the least memory consuming.
I am not yet sure if it is possible to solve the cross-resource communication in any other way: when an event happens on a pod, no events happen on the CRs, so we have nothing to "join". You either scan your own in-memory state, or K8s's in-memory state via the K8s API on every pod event (costly!). But the up-to-date state must be somewhere there.
---
**PS:** The typing thing is fully optional, and is ignored at runtime. I just got a habit of using it for clarity.
---
> <a href="https://github.com/nemethf"><img align="left" height="30" src="https://avatars1.githubusercontent.com/u/1453824?v=4"></a> Commented by [nemethf](https://github.com/nemethf) at _2020-06-25 07:54:16+00:00_
>
I still think the dynamic handler registration is a bit more convenient, but you are right that it is less scalable than the approach you outlined.
However, with your approach, it might happen that a pod is created (and generates no events afterwards) before the corresponding CR appears in the system or vice versa, so I think the operator should store all the relevant info for both CRs and pods. This is doable but gets complicated when the CRD contains a namespace field in addition to the selector field.
At any rate, I'm going to ignore the namespace field in my operator and go with your idea. Thank you for enlightening me.
|
open
|
2020-08-18T20:05:06Z
|
2020-08-23T20:58:55Z
|
https://github.com/nolar/kopf/issues/378
|
[
"question",
"archive"
] |
kopf-archiver[bot]
| 0
|
statsmodels/statsmodels
|
data-science
| 8,896
|
ENH: Implement append, apply and extend for ETSResults
|
The current implementation of [`ETSResults`](https://www.statsmodels.org/dev/generated/statsmodels.tsa.exponential_smoothing.ets.ETSResults.html) doesn't have any of the following methods that are available in [`ExponentialSmoothingResults`](https://www.statsmodels.org/dev/generated/statsmodels.tsa.statespace.exponential_smoothing.ExponentialSmoothingResults.html) and the [`ARIMAResults`](https://www.statsmodels.org/dev/generated/statsmodels.tsa.arima.model.ARIMAResults.html#statsmodels.tsa.arima.model.ARIMAResults) models (links to the exponential smoothing equivalents):
- [append](https://www.statsmodels.org/dev/generated/statsmodels.tsa.statespace.exponential_smoothing.ExponentialSmoothingResults.append.html#statsmodels.tsa.statespace.exponential_smoothing.ExponentialSmoothingResults.append)
- [apply](url)
- [extend](https://www.statsmodels.org/dev/generated/statsmodels.tsa.statespace.mlemodel.MLEResults.extend.html#statsmodels.tsa.statespace.mlemodel.MLEResults.extend)
Having these methods available would be very nice, especially for "online" models and also for implementations of Bagging ETS with different sources of uncertainty, as in the paper below:
- Fotios Petropoulos, Rob J. Hyndman, Christoph Bergmeir, "Exploring the sources of uncertainty: Why does bagging for time series forecasting work?", European Journal of Operational Research, Volume 268, Issue 2, 2018, available at https://www.sciencedirect.com/science/article/abs/pii/S037722171830081X#:~:text=We%20suggest%20that%20another%20reason,forms%20on%20the%20original%20data.
This is possibly related to issue #6881 as the methods for ARIMA and ExponentialSmoothing are inherited from the [`MLEResults`](https://github.com/statsmodels/statsmodels/blob/77cb066320391ffed4196a32491ddca28e8c9122/statsmodels/tsa/statespace/mlemodel.py#L2262) class.
|
open
|
2023-05-26T12:01:00Z
|
2023-07-19T14:54:15Z
|
https://github.com/statsmodels/statsmodels/issues/8896
|
[] |
ltsaprounis
| 0
|
onnx/onnxmltools
|
scikit-learn
| 153
|
LightGBM multiclass convert error
|
I tried to add a test for LightGBM, but I found that multiclass classification seems to have an error:
```
=================================== FAILURES ===================================
_______ TestLGBMClassifierConverter.test_model_multiclass_classification _______
self = <test_LightGBMClassifier.TestLGBMClassifierConverter testMethod=test_model_multiclass_classification>
def test_model_multiclass_classification(self):
model = self._fit_model_binary_classification(LGBMClassifier(
objective="ova",
learning_rate=0.05,
boosting_type="gbdt",
num_class=10))
> model_onnx = convert_sklearn(model, 'scikit-learn LGBM multiclass classifier', [('input', FloatTensorType([1, 10]))])
tests/sklearn/test_LightGBMClassifier.py:47:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
onnxmltools/convert/main.py:18: in convert_sklearn
doc_string, targeted_onnx, custom_conversion_functions, custom_shape_calculators)
onnxmltools/convert/sklearn/convert.py:97: in convert
onnx_model = convert_topology(topology, name, doc_string, targeted_onnx)
onnxmltools/convert/common/_topology.py:704: in convert_topology
_registration.get_converter(operator.type)(scope, operator, container)
onnxmltools/convert/sklearn/operator_converters/LightGbm.py:140: in convert_lightgbm
_parse_tree_structure(tree_id, class_id, learning_rate, tree['tree_structure'], attrs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tree_id = 2, class_id = 2, learning_rate = 1
tree_structure = {'leaf_value': -34.53877639770508}
attrs = {'class_ids': [0, 0, 0, 1, 1, 1], 'class_nodeids': [3, 4, 2, 3, 4, 2], 'class_treeids': [0, 0, 0, 1, 1, 1], 'class_weights': [0.004500000000000001, 0.005, -0.005000000000000001, -0.004500000000000001, -0.005, 0.005000000000000001], ...}
def _parse_tree_structure(tree_id, class_id, learning_rate, tree_structure, attrs):
# The pool of all nodes' indexes created when parsing a single tree. Different trees may use different pools.
node_id_pool = set()
node_id = _create_node_id(node_id_pool)
left_id = _create_node_id(node_id_pool)
right_id = _create_node_id(node_id_pool)
attrs['nodes_treeids'].append(tree_id)
attrs['nodes_nodeids'].append(node_id)
> attrs['nodes_featureids'].append(tree_structure['split_feature'])
E KeyError: 'split_feature'
onnxmltools/convert/sklearn/operator_converters/LightGbm.py:49: KeyError
```
The relevant code is here https://github.com/onnx/onnxmltools/pull/152
|
closed
|
2018-10-18T00:24:27Z
|
2018-10-23T18:49:01Z
|
https://github.com/onnx/onnxmltools/issues/153
|
[] |
jilongliao
| 1
|
lucidrains/vit-pytorch
|
computer-vision
| 277
|
structural 3D ViT
|
Just wanted to check whether the 3D model made for videos can be repurposed as a 3D structural transformer with frame = z?
Is there a fundamental difference between x, y, z if I use it like so, or is it symmetric?
|
closed
|
2023-09-04T08:20:14Z
|
2023-09-05T15:12:11Z
|
https://github.com/lucidrains/vit-pytorch/issues/277
|
[] |
aperiamegh
| 4
|
google-research/bert
|
nlp
| 621
|
Fine Tuning + Extracting Features for Custom Dataset
|
TL;DR: How do I first train on my own dataset, and then use the trained model as a feature extractor?
My question is similar to the one mentioned here: #145
I want to extract features from my own dataset. My thought was to first train an estimator (following the steps in run_classifier.py), and then use the trained estimator to extract features (following the code in extract_features.py). First, is this a valid solution?
The problem with this approach is that the two files have different definitions of InputFeatures and model functions.
|
open
|
2019-05-04T08:14:43Z
|
2019-05-16T13:52:22Z
|
https://github.com/google-research/bert/issues/621
|
[] |
rvoak
| 2
|
stanford-oval/storm
|
nlp
| 339
|
Topic input Chinese, the program does not generate content
|
Hello, I am trying to use Storm to generate Chinese articles.
This is my code:
```
from knowledge_storm.collaborative_storm.engine import CollaborativeStormLMConfigs, RunnerArgument, CoStormRunner
from knowledge_storm.lm import DeepSeekModel
from knowledge_storm.logging_wrapper import LoggingWrapper
from knowledge_storm.rm import DuckDuckGoSearchRM
import os
# Co-STORM adopts the same multi LM system paradigm as STORM
lm_config: CollaborativeStormLMConfigs = CollaborativeStormLMConfigs()
kwargs = {
    "api_key": "xxx",
    "api_base": "https://api.siliconflow.cn",
    "api_provider": "openai",
    "temperature": 1.0,
    "top_p": 0.9,
}
model_name = "Qwen/Qwen2.5-7B-Instruct"
question_answering_lm = DeepSeekModel(model=model_name, max_tokens=1000, **kwargs )
discourse_manage_lm = DeepSeekModel(model=model_name, max_tokens=500, **kwargs )
utterance_polishing_lm = DeepSeekModel(model=model_name, max_tokens=2000, **kwargs )
warmstart_outline_gen_lm = DeepSeekModel(model=model_name, max_tokens=500, **kwargs )
question_asking_lm = DeepSeekModel(model=model_name, max_tokens=300, **kwargs )
knowledge_base_lm = DeepSeekModel(model=model_name, max_tokens=1000, **kwargs )
lm_config.set_question_answering_lm(question_answering_lm)
lm_config.set_discourse_manage_lm(discourse_manage_lm)
lm_config.set_utterance_polishing_lm(utterance_polishing_lm)
lm_config.set_warmstart_outline_gen_lm(warmstart_outline_gen_lm)
lm_config.set_question_asking_lm(question_asking_lm)
lm_config.set_knowledge_base_lm(knowledge_base_lm)
# Check out the Co-STORM's RunnerArguments class for more configurations.
topic = input('Topic: ')
runner_argument = RunnerArgument(topic=topic)
logging_wrapper = LoggingWrapper(lm_config)
# bing_rm = BingSearch(bing_search_api_key=os.environ.get("BING_SEARCH_API_KEY"),
# k=runner_argument.retrieve_top_k)
dockdockgo_rm = DuckDuckGoSearchRM(k=runner_argument.retrieve_top_k)
costorm_runner = CoStormRunner(lm_config=lm_config,
                               runner_argument=runner_argument,
                               logging_wrapper=logging_wrapper,
                               rm=dockdockgo_rm,
                               # rm=bing_rm
                               )
# Warm start the system to build shared conceptual space between Co-STORM and users
costorm_runner.warm_start()
# Step through the collaborative discourse
# Run either of the code snippets below in any order, as many times as you'd like
# To observe the conversation:
# conv_turn = costorm_runner.step()
# # To inject your utterance to actively steer the conversation:
# costorm_runner.step(user_utterance="我认为人工智能会毁灭世界")
# Generate report based on the collaborative discourse
costorm_runner.knowledge_base.reogranize()
article = costorm_runner.generate_report()
print(' >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ')
print(article)
# topic: Artificial Intelligence Development
# topic:人工智能发展
```
The problem I encountered is that when I enter a Chinese topic like “人工智能发展”, no content is generated, but when using "Artificial Intelligence Development" the article is generated. I debugged the code. The first problem I found was that Chinese full-width colons were not handled when the expert string was split, so I modified line 430 of `storm/knowledge_storm/collaborative_storm/engine.py`:
```
role_name, role_description = expert_name.split(":")
```
to
```
if ":" in expert_name:
role_name, role_description = expert_name.split(":")
elif ":" in expert_name:
role_name, role_description = expert_name.split(":")
```
The first problem is solved.
But when I re-run the program and enter a Chinese topic, the program still does not output an article.
I debugged the code and found that in the last line of the `forward` method of the `ArticleGenerationModule` class in `storm/knowledge_storm/collaborative_storm/modules/article_generation.py`, `return "\n".join(to_return)` is executed twice per run. On the second execution, the `knowledge_base.root.children` variable is an empty list, which means the tree has not been constructed correctly by that point. I'm somewhat overwhelmed by the code and not sure how to proceed with the debugging; can you provide some help?
Thanks.
Additionally, there's a strange wrinkle: Chinese article generation doesn't always fail. Out of around 20 attempts, one or two might succeed.
|
open
|
2025-03-12T02:04:56Z
|
2025-03-21T01:34:16Z
|
https://github.com/stanford-oval/storm/issues/339
|
[] |
JV-X
| 7
|
Lightning-AI/pytorch-lightning
|
data-science
| 20,140
|
StreamingDataset not working in multi-gpu environement
|
### Bug description
I'm trying to use the Streaming library by Mosaic as described in the doc (https://lightning.ai/docs/pytorch/stable/data/alternatives.html) but it doesn't work as seamlessly as expected.
When trying the default approach as described in the documentation with 4 GPUs on a single node:
```
train_dataset = YourDataset()
train_dataloader = DataLoader(train_dataset, batch_size=batch_size)
model = ...
trainer = L.Trainer()
trainer.fit(model, train_dataloader)
```
results in the following error:
```
FileExistsError: [Errno 17] File exists: '/000001_locals'
```
My understanding is that the environment variables (WORLD_SIZE, LOCAL_WORLD_SIZE, RANK, etc.) are not set properly in time, so the StreamingDataset does not know it is running in a multi-GPU/multi-node setup, and the different processes are unaware of each other.
You can create the dataset after the trainer, but it does not change the outcome.
Has anyone been successful in running Streaming with lightning in a multi-node/multi-gpu setup?
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
_No response_
### Error messages and logs
_No response_
### Environment
<details>
<summary>Current environment</summary>
* CUDA:
- GPU: None
- available: False
- version: 12.1
* Lightning:
- ema-pytorch: 0.4.2
- lightning: 2.2.4
- lightning-utilities: 0.10.1
- pytorch-lightning: 2.2.1
- torch: 2.2.1
- torchinfo: 1.8.0
- torchmetrics: 1.3.0.post0
- torchvision: 0.17.1
* Packages:
- absl-py: 2.1.0
- accelerate: 0.28.0
- aiohttp: 3.9.1
- aiosignal: 1.3.1
- antlr4-python3-runtime: 4.9.3
- appdirs: 1.4.4
- async-timeout: 4.0.3
- attrs: 23.2.0
- azure-core: 1.30.2
- azure-identity: 1.17.1
- azure-storage-blob: 12.21.0
- azure-storage-file-datalake: 12.16.0
- bcrypt: 4.2.0
- beartype: 0.17.2
- boto3: 1.34.149
- botocore: 1.34.149
- brotli: 1.1.0
- cachetools: 5.3.2
- certifi: 2023.11.17
- cffi: 1.16.0
- charset-normalizer: 3.3.2
- circuitbreaker: 1.4.0
- click: 8.1.7
- configparser: 6.0.0
- consistencydecoder: 1.0
- contourpy: 1.2.1
- cramjam: 2.8.3
- cryptography: 42.0.8
- cycler: 0.12.1
- diffusers: 0.27.0
- docker-pycreds: 0.4.0
- einops: 0.7.0
- ema-pytorch: 0.4.2
- filelock: 3.13.1
- fonttools: 4.53.1
- frozenlist: 1.4.1
- fsspec: 2023.12.2
- gitdb: 4.0.11
- gitpython: 3.1.41
- google-api-core: 2.19.1
- google-auth: 2.26.2
- google-auth-oauthlib: 1.2.0
- google-cloud-core: 2.4.1
- google-cloud-storage: 2.10.0
- google-crc32c: 1.5.0
- google-resumable-media: 2.7.1
- googleapis-common-protos: 1.63.2
- grpcio: 1.60.0
- huggingface-hub: 0.21.4
- hydra-core: 1.3.2
- idna: 3.6
- importlib-metadata: 7.0.2
- isodate: 0.6.1
- jinja2: 3.1.3
- jmespath: 1.0.1
- kiwisolver: 1.4.5
- lightning: 2.2.4
- lightning-utilities: 0.10.1
- lsuv: 0.2.2
- markdown: 3.5.2
- markupsafe: 2.1.3
- matplotlib: 3.9.1
- mosaicml-streaming: 0.7.6
- mpmath: 1.3.0
- msal: 1.30.0
- msal-extensions: 1.2.0
- multidict: 6.0.4
- networkx: 3.2.1
- numpy: 1.26.3
- nvidia-cublas-cu12: 12.1.3.1
- nvidia-cuda-cupti-cu12: 12.1.105
- nvidia-cuda-nvrtc-cu12: 12.1.105
- nvidia-cuda-runtime-cu12: 12.1.105
- nvidia-cudnn-cu12: 8.9.2.26
- nvidia-cufft-cu12: 11.0.2.54
- nvidia-curand-cu12: 10.3.2.106
- nvidia-cusolver-cu12: 11.4.5.107
- nvidia-cusparse-cu12: 12.1.0.106
- nvidia-nccl-cu12: 2.19.3
- nvidia-nvjitlink-cu12: 12.3.101
- nvidia-nvtx-cu12: 12.1.105
- oauthlib: 3.2.2
- oci: 2.129.4
- omegaconf: 2.3.0
- packaging: 23.2
- paramiko: 3.4.0
- pathtools: 0.1.2
- pillow: 10.2.0
- pip: 24.1.2
- portalocker: 2.10.1
- promise: 2.3
- proto-plus: 1.24.0
- protobuf: 4.23.4
- psutil: 5.9.7
- pyasn1: 0.5.1
- pyasn1-modules: 0.3.0
- pycparser: 2.22
- pyjwt: 2.8.0
- pynacl: 1.5.0
- pyopenssl: 24.2.1
- pyparsing: 3.1.2
- python-dateutil: 2.8.2
- python-snappy: 0.7.2
- pytorch-lightning: 2.2.1
- pytz: 2024.1
- pyyaml: 6.0.1
- regex: 2023.12.25
- requests: 2.31.0
- requests-oauthlib: 1.3.1
- rsa: 4.9
- s3transfer: 0.10.2
- safetensors: 0.4.2
- scipy: 1.12.0
- sentry-sdk: 1.39.2
- setproctitle: 1.3.3
- setuptools: 58.1.0
- shortuuid: 1.0.11
- six: 1.16.0
- smmap: 5.0.1
- subprocess32: 3.5.4
- sympy: 1.12
- tensorboard: 2.15.1
- tensorboard-data-server: 0.7.2
- termcolor: 2.4.0
- timm: 0.9.16
- tokenizers: 0.19.1
- torch: 2.2.1
- torchinfo: 1.8.0
- torchmetrics: 1.3.0.post0
- torchvision: 0.17.1
- tqdm: 4.66.1
- transformers: 4.40.2
- triton: 2.2.0
- typing-extensions: 4.9.0
- urllib3: 2.1.0
- wandb: 0.16.6
- werkzeug: 3.0.1
- xxhash: 3.4.1
- yarl: 1.9.4
- yaspin: 3.0.1
- zipp: 3.18.1
- zstd: 1.5.5.1
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.10.4
- release: 5.14.0-284.55.1.el9_2.x86_64
- version: #1 SMP PREEMPT_DYNAMIC Mon Feb 19 16:57:59 EST 2024
</details>
### More info
_No response_
|
open
|
2024-07-30T12:06:41Z
|
2024-08-18T02:26:46Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/20140
|
[
"bug",
"repro needed"
] |
davidpicard
| 3
|
Kav-K/GPTDiscord
|
asyncio
| 211
|
[BUG] Welcome message not working with turbo model
|
**Describe the bug**
Welcome message not showing when using this model
**To Reproduce**
Steps to reproduce the behavior:
Just change the default model to ChatGPT 3.5 turbo
**Expected behavior**
Welcome messages working!
**Screenshots**
N/A
**Additional context**
N/A
|
closed
|
2023-03-05T23:58:05Z
|
2023-03-06T05:13:17Z
|
https://github.com/Kav-K/GPTDiscord/issues/211
|
[
"bug"
] |
rangerpowerx
| 1
|
globaleaks/globaleaks-whistleblowing-software
|
sqlalchemy
| 4,207
|
Wrong translation on comments tab
|
### What version of GlobaLeaks are you using?
5.0.11
### What browser(s) are you seeing the problem on?
Firefox
### What operating system(s) are you seeing the problem on?
Windows
### Describe the issue
The tab "only me" is not translated.

### Proposed solution
_No response_
|
closed
|
2024-09-30T14:36:40Z
|
2024-10-03T18:43:43Z
|
https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/4207
|
[
"T: Bug",
"C: Client"
] |
AlainL88
| 1
|
Nemo2011/bilibili-api
|
api
| 267
|
action 登录凭据失效
|
换下sessdata?...
|
closed
|
2023-04-30T07:58:51Z
|
2023-07-25T08:14:38Z
|
https://github.com/Nemo2011/bilibili-api/issues/267
|
[
"help wanted"
] |
z0z0r4
| 2
|
docarray/docarray
|
pydantic
| 1,709
|
bug: WeaviateDocumentIndex fails if index name is not capitalized
|
WeaviateDocumentIndex fails if index name is not capitalized
```python
from docarray import BaseDoc, DocList
from docarray.index import WeaviateDocumentIndex
from docarray.typing import NdArray
from pydantic import Field
import numpy as np
# Define the document schema.
class MyDoc(BaseDoc):
title: str
embedding: NdArray[128] = Field(is_embedding=True)
# Create dummy documents.
docs = DocList[MyDoc](MyDoc(title=f'title #{i}', embedding=np.random.rand(128)) for i in range(10))
# Initialize a new WeaviateDocumentIndex instance and add the documents to the index.
doc_index = WeaviateDocumentIndex[MyDoc](index_name='zdzd')
doc_index.index(docs)
# Perform a vector search.
query = np.ones(128)
retrieved_docs = doc_index.find(query, limit=10)
```
> File "/Users/jinaai/Desktop/docarray/docarray/index/backends/weaviate.py", line 354, in find
> docs, scores = self._find(
> File "/Users/jinaai/Desktop/docarray/docarray/index/backends/weaviate.py", line 408, in _find
> results["data"]["Get"][index_name], score_name
> KeyError: 'zdzd'
>
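Until this is fixed, a workaround sketch (my assumption from the traceback: Weaviate capitalizes class names, so the results key is `'Zdzd'` while docarray looks up `'zdzd'`):
```python
# hypothetical workaround: hand docarray an already-capitalized index name
doc_index = WeaviateDocumentIndex[MyDoc](index_name='Zdzd')
```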
|
closed
|
2023-07-18T13:30:00Z
|
2023-07-18T14:08:01Z
|
https://github.com/docarray/docarray/issues/1709
|
[] |
jupyterjazz
| 0
|
pytest-dev/pytest-html
|
pytest
| 48
|
Add a mechanism for merging multiple html reports
|
Would it be possible to expose a flag that allows merging multiple html reports?
This is something that could probably be done with a separate script, but I think it makes sense to add this functionality to the plugin. Something like:
py.test --merge-html result1.html result2.html ... resultN.html
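As a stopgap, something along these lines might work as that separate script (a rough sketch, not an official feature; it assumes the reports keep their rows in a `<table id="results-table">`, which may differ between pytest-html versions, and it does not recompute the summary counts):
```python
# merge_reports.py - naive concatenation of pytest-html result rows
import sys
from bs4 import BeautifulSoup

def merge(paths, out_path):
    with open(paths[0], encoding="utf-8") as f:
        base = BeautifulSoup(f.read(), "html.parser")
    base_table = base.find("table", id="results-table")
    for path in paths[1:]:
        with open(path, encoding="utf-8") as f:
            other = BeautifulSoup(f.read(), "html.parser")
        for row in other.find("table", id="results-table").find_all("tr"):
            base_table.append(row)  # moves the row into the base report
    with open(out_path, "w", encoding="utf-8") as f:
        f.write(str(base))

if __name__ == "__main__":
    merge(sys.argv[1:-1], sys.argv[-1])
```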
Cheers,
Goncalo
|
closed
|
2016-05-31T21:55:07Z
|
2024-10-01T14:30:22Z
|
https://github.com/pytest-dev/pytest-html/issues/48
|
[] |
ghost
| 10
|
amdegroot/ssd.pytorch
|
computer-vision
| 553
|
Why does this kind of error happen?
|
```
/home/dnaplus/Downloads/ssd.pytorch/ssd.py:34: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
self.priors = Variable(self.priorbox.forward(), volatile=True)
Loading base network...
Initializing weights...
train.py:214: UserWarning: nn.init.xavier_uniform is now deprecated in favor of nn.init.xavier_uniform_.
init.xavier_uniform(param)
Loading the dataset...
Training SSD on: VOC0712
Using the specified args:
Namespace(basenet='vgg16_reducedfc.pth', batch_size=8, cuda=True, dataset='VOC', dataset_root='/home/dnaplus/data/VOCdevkit/', gamma=0.1, lr=0.001, momentum=0.9, num_workers=1, resume=None, save_folder='weights/', start_iter=0, visdom=False, weight_decay=0.0005)
/home/dnaplus/Downloads/ssd.pytorch/utils/augmentations.py:238: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
mode = random.choice(self.sample_options)
train.py:169: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
targets = [Variable(ann.cuda(), volatile=True) for ann in targets]
/home/dnaplus/.local/lib/python3.6/site-packages/torch/nn/_reduction.py:42: UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.
warnings.warn(warning.format(ret))
timer: 26.3106 sec.
iter 0 || Loss: 46.5593 || Traceback (most recent call last):
File "train.py", line 255, in <module>
train()
File "train.py", line 165, in train
images, targets = next(batch_iterator)
File "/home/dnaplus/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
data = self._next_data()
File "/home/dnaplus/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1199, in _next_data
return self._process_data(data)
File "/home/dnaplus/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1225, in _process_data
data.reraise()
File "/home/dnaplus/.local/lib/python3.6/site-packages/torch/_utils.py", line 429, in reraise
raise self.exc_type(msg)
IndexError: Caught IndexError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/dnaplus/.local/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
data = fetcher.fetch(index)
File "/home/dnaplus/.local/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/dnaplus/.local/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/dnaplus/Downloads/ssd.pytorch/data/voc0712.py", line 115, in __getitem__
im, gt, h, w = self.pull_item(index)
File "/home/dnaplus/Downloads/ssd.pytorch/data/voc0712.py", line 134, in pull_item
img, boxes, labels = self.transform(img, target[:, :4], target[:, 4])
IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed
```
Hello, hope everyone's doing okay. Any help would be appreciated:)
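In case it helps anyone with ideas: my unverified guess is that an image with no kept annotations makes `target` a 1-D empty array, so `target[:, :4]` fails. A sketch of a guard inside `pull_item` in `data/voc0712.py` (unverified; the shape assumption is 4 box coordinates plus 1 class label per row):
```python
import numpy as np

# guard: if the annotation list is empty, target comes out 1-D and
# target[:, :4] raises "too many indices for array"
target = np.array(target)
if target.ndim == 1:
    target = target.reshape(-1, 5)  # 4 box coordinates + 1 class label
img, boxes, labels = self.transform(img, target[:, :4], target[:, 4])
```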
|
closed
|
2021-08-04T04:15:05Z
|
2024-01-14T17:14:40Z
|
https://github.com/amdegroot/ssd.pytorch/issues/553
|
[] |
cnr0724
| 1
|
adithya-s-k/marker-api
|
rest-api
| 11
|
Cloudflare tunnel timeout issue
|
Hi,
I use a Cloudflare tunnel to make a server on my local network accessible from the public internet, and Cloudflare closes the connection if there is no response within 100 seconds. Most of my PDF files take longer than that to convert, so I wonder if you could add a way to respond to the client right away (or keep the connection alive) until the file has finished converting.
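To illustrate the kind of API shape I mean, here is a sketch of an asynchronous job endpoint (endpoint names and `run_marker` are hypothetical, not marker-api's actual API):
```python
import uuid
from fastapi import BackgroundTasks, FastAPI, UploadFile

app = FastAPI()
jobs = {}  # job_id -> {"status": ..., "result": ...}

def run_marker(data: bytes) -> str:
    return "converted"  # stand-in for the real PDF conversion

def convert(job_id: str, data: bytes):
    jobs[job_id]["result"] = run_marker(data)
    jobs[job_id]["status"] = "done"

@app.post("/convert_async")
async def convert_async(file: UploadFile, background_tasks: BackgroundTasks):
    job_id = uuid.uuid4().hex
    jobs[job_id] = {"status": "pending", "result": None}
    background_tasks.add_task(convert, job_id, await file.read())
    return {"job_id": job_id}  # responds well within Cloudflare's 100 s limit

@app.get("/status/{job_id}")
def status(job_id: str):
    return jobs[job_id]
```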
Best regards

|
open
|
2024-06-26T08:51:23Z
|
2024-06-26T08:51:23Z
|
https://github.com/adithya-s-k/marker-api/issues/11
|
[] |
wangxing7714436
| 0
|
LAION-AI/Open-Assistant
|
machine-learning
| 2,851
|
Display name spill-over
|
In isolated cases users misuse their freedom to use unicode display names to set bad names which spill over to other parts of the UI. Specifically affected are the leaderboard and trollboard pages which list users.
Use css styles to clip display names at the border of the surrounding cell's box.
|
closed
|
2023-04-23T08:41:28Z
|
2023-04-25T10:49:32Z
|
https://github.com/LAION-AI/Open-Assistant/issues/2851
|
[
"bug",
"website",
"good first issue"
] |
andreaskoepf
| 2
|
521xueweihan/HelloGitHub
|
python
| 2,191
|
Self-recommended project: Output-Line — show the file path and line number of print statements in the Node.js terminal
|
## Project recommendation
- Project: [Lete114/Output-Line](https://github.com/Lete114/Output-Line)
- Category: JavaScript
- Project description:
  - Required: shows the file path and line number of print statements in the Node.js terminal
  - Optional: useful for debugging; quickly remove leftover console.log statements
  - Long description:
    When a project grows to 30 or more JS files and you have a bug to fix, you end up adding several console.log statements to inspect output. Once the problem is solved, those statements need to be deleted, but across so many files they are hard to find. A global IDE search is not ideal either, because some output statements in the project are legitimate, e.g. try-catch error logging. This is where Output-Line comes in: it prints the file path of each output statement directly in the terminal, so you can jump straight to the statement with ctrl+left-click.
- Why recommend it: convenient for debugging; you can also trace which file an error message came from, including output from third-party libraries
- Example code:
```js
/*
┌────────────┬────────────┬────────────┐
│ property │ default │ type │
├────────────┼────────────┼────────────┤
│ methods │ Console │ Array │
│ prefix │ '🐞' │ String │
│ isRelative │ false │ Boolean │
└────────────┴────────────┴────────────┘
*/
// use the default options
// require('output-line')()
// custom options
const options = {
methods: ['debug', 'log', 'warn', 'error'],
prefix: '🐸',
isRelative: true
}
require('output-line')(options)
console.log('%s %d', 'age', 18)
console.log({ f: 'foo', b: 'bar' })
```
- Screenshots:


|
closed
|
2022-05-03T10:02:13Z
|
2022-05-24T03:30:44Z
|
https://github.com/521xueweihan/HelloGitHub/issues/2191
|
[] |
Lete114
| 1
|
pallets-eco/flask-sqlalchemy
|
sqlalchemy
| 532
|
Using limit method before paginate method.
|
Hi. When I call the `limit` method before `paginate`, an `InvalidRequestError` is thrown:
> InvalidRequestError: Query.order_by() being called on a Query which already has LIMIT or OFFSET applied. To modify the row-limited results of a Query, call from_self() first. Otherwise, call order_by() before limit() or offset() are applied.
Then, I found this in source code for this project:
https://github.com/mitsuhiko/flask-sqlalchemy/blob/c1e309a90822d2b8b1c0ca2d05949bbfb7005778/flask_sqlalchemy/__init__.py#L485
When I remove `order_by(None)`, the program works fine.
So, why use `order_by(None)` before `count()` in that line? Is it necessary?
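For context, a minimal reproduction of the pattern I'm using (`Post` is an illustrative model, not from my real code):
```python
page = (Post.query
        .order_by(Post.id)
        .limit(50)  # LIMIT applied before paginate()
        .paginate(page=1, per_page=10))
# paginate() internally runs query.order_by(None).count(), and SQLAlchemy
# refuses to modify a query that already has LIMIT/OFFSET applied,
# hence the InvalidRequestError above
```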
|
closed
|
2017-08-22T10:12:25Z
|
2020-12-05T20:55:43Z
|
https://github.com/pallets-eco/flask-sqlalchemy/issues/532
|
[] |
WaferJay
| 1
|
mage-ai/mage-ai
|
data-science
| 5,096
|
Refresh Button for particular Block
|
**Is your feature request related to a problem? Please describe.**
If we edit block code in an external editor such as VS Code, we have to refresh the browser before the changes show up in the Mage UI; the code is not replicated in the UI instantly.
**Describe the solution you'd like**
Instead of refreshing the browser, could we have a refresh or reload button per block? It could sit in the block header, near the Run block button.
**Describe alternatives you've considered**
Alternatively, blocks could be run directly from the code editor, but that would likely take more effort.
**Additional context**
|
open
|
2024-05-21T05:54:14Z
|
2024-05-21T17:29:54Z
|
https://github.com/mage-ai/mage-ai/issues/5096
|
[
"enhancement"
] |
kkhatke
| 0
|
iMerica/dj-rest-auth
|
rest-api
| 528
|
How to change API Endpoint url?
|
May I ask how to change the URL of the login page from /login to /login1, or /registration to /register?
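In case a sketch helps clarify the question: instead of `include('dj_rest_auth.urls')`, would mapping the views directly like this be the intended way (my assumption, not verified against the docs)?
```python
# urls.py
from django.urls import path
from dj_rest_auth.views import LoginView
from dj_rest_auth.registration.views import RegisterView

urlpatterns = [
    path('login1/', LoginView.as_view(), name='rest_login'),
    path('register/', RegisterView.as_view(), name='rest_register'),
]
```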
|
closed
|
2023-07-12T22:05:24Z
|
2023-07-12T22:18:41Z
|
https://github.com/iMerica/dj-rest-auth/issues/528
|
[] |
tran-hai-long
| 1
|
mirumee/ariadne
|
graphql
| 736
|
Directives do not get reported through _service.sdl introspection query ---> subgraph cannot be federated
|
Apollo federation uses the following introspection query when glueing together subgraphs:
https://github.com/apollographql/rover/blob/83d99ad2c707a5da4e3d48593af7b22a51d6d07d/crates/rover-client/src/operations/subgraph/introspect/introspect_query.graphql#L1-L6
```gql
query SubgraphIntrospectQuery {
# eslint-disable-next-line
_service {
sdl
}
}
```
In our subgraph, we define a schema only, non-executable `@lint` directive. If I do an introspection for directives directly, it shows up, with all the other directives:
<img width="1314" alt="Screen Shot 2021-12-03 at 11 10 13 PM" src="https://user-images.githubusercontent.com/1590756/144701189-f0ebce3f-8341-4b7e-b7fb-31a1eb57c422.png">
But if I run the introspection query apollo rover is using (`@apollo/rover@0.4.1`):
<img width="1652" alt="Screen Shot 2021-12-03 at 11 09 37 PM" src="https://user-images.githubusercontent.com/1590756/144701225-4abb328d-3fe5-42f3-9c20-2c93b84a366b.png">
^ That's just some random test schema, but crucially: no directives in the output.
This ultimately leads to the following:
```
error[E029]: Encountered 1 build error while trying to build a supergraph.
Caused by:
Encountered 1 build error while trying to build the supergraph.
UNKNOWN: [@lint] -> Custom directives must be implemented in every service. The following services do not implement the @lint directive: i18n_strings.
```
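For reference, a minimal sketch of how we declare the schema-only directive on our side (type and field names are illustrative, not our real schema):
```python
from ariadne.contrib.federation import make_federated_schema

type_defs = """
    directive @lint(rule: String) on FIELD_DEFINITION

    type Query {
        hello: String @lint(rule: "no-op")
    }
"""
schema = make_federated_schema(type_defs)
```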
I'll keep poking around, it's possible i'm missing a trick here somewhere, or need to upgrade things, but writing this down as a starting point. Thanks!
|
open
|
2021-12-04T07:14:26Z
|
2024-03-25T08:25:04Z
|
https://github.com/mirumee/ariadne/issues/736
|
[
"to do"
] |
magicmark
| 3
|
0b01001001/spectree
|
pydantic
| 249
|
improve falcon form data read part
|
> @yedpodtrzitko It depends what you are trying to achieve. Do you want to buffer whole files in memory? Or do you want to spool them to temp files like some other frameworks do?
> Falcon even puts a cap on a maximum amount of data that can be referenced this way ([`await part.get_data()`](https://falcon.readthedocs.io/en/stable/api/multipart.html#falcon.media.multipart.BodyPart.get_data)) in order to avoid surprises such as running out of memory.
> Use `await part.stream.read()` to read the whole part as a bytestring, or [`await part.stream.pipe(async_file)`](https://falcon.readthedocs.io/en/stable/api/multipart.html#multipart-forms), or read by chunks, and store the result somewhere. You'll probably need to introduce some new object type to hold these attributes.
_Originally posted by @vytas7 in https://github.com/0b01001001/spectree/pull/225#discussion_r936042043_
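A sketch of what that could look like for the async path, following the advice above (`UploadedFile` is a made-up holder type, not a falcon or spectree class):
```python
from dataclasses import dataclass

@dataclass
class UploadedFile:
    filename: str
    content_type: str
    data: bytes

async def collect_parts(form):
    """Buffer each multipart body part into memory."""
    files = []
    async for part in form:
        files.append(UploadedFile(
            filename=part.filename,
            content_type=part.content_type,
            data=await part.stream.read(),  # whole part as a bytestring
        ))
    return files
```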
|
open
|
2022-08-04T13:27:12Z
|
2022-11-10T16:22:44Z
|
https://github.com/0b01001001/spectree/issues/249
|
[
"enhancement",
"good first issue",
"help wanted"
] |
kemingy
| 4
|
Yorko/mlcourse.ai
|
scikit-learn
| 666
|
Misleading hyperlink on https://mlcourse.ai/roadmap
|
Misleading hyperlink on https://mlcourse.ai/roadmap
Chapter: "Week 5. Bagging and Random Forest"
Link: “Random Forest”
Actual link: https://mlcourse.ai/articles/topic5-part1-bagging/
Expected link: https://mlcourse.ai/articles/topic5-part2-rf/
|
closed
|
2020-06-02T17:17:10Z
|
2020-06-06T07:47:59Z
|
https://github.com/Yorko/mlcourse.ai/issues/666
|
[
"minor_fix"
] |
www050
| 1
|
littlecodersh/ItChat
|
api
| 676
|
How to add an HTTP service exposing an interface for sending messages
|
How can I add an HTTP service that exposes an interface for sending messages?
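A sketch of one way to do it: run itchat in a background thread and expose the send call through a small Flask app (untested outline; the route and payload shape are made up):
```python
import threading

import itchat
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/send", methods=["POST"])
def send():
    payload = request.get_json()
    itchat.send(payload["msg"], toUserName=payload["to"])
    return jsonify(ok=True)

def run_bot():
    itchat.auto_login(hotReload=True)
    itchat.run()

if __name__ == "__main__":
    threading.Thread(target=run_bot, daemon=True).start()
    app.run(port=8000)
```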
|
open
|
2018-06-04T02:50:33Z
|
2018-06-06T16:40:08Z
|
https://github.com/littlecodersh/ItChat/issues/676
|
[
"help wanted"
] |
fanlushuai
| 1
|
scikit-learn/scikit-learn
|
python
| 30,199
|
Add "mish" activation function to sklearn.neural_network.MLPClassifier and make it the default
|
### Describe the workflow you want to enable
Currently, the default activation function for `sklearn.neural_network.MLPClassifier` is "relu". However, there are several papers that demonstrate better results with "mish" = (x ⋅ tanh(ln(1 + e^x))) = x ⋅ tanh(softplus(x)).
Some references:
1) According to [Mish: A Self Regularized Non-Monotonic Neural Activation Function](https://arxiv.org/abs/1908.08681v1), mish outperformed all relu variants on CIFAR-10.
2) According to [Optimizing cnn-Bigru performance: Mish activation and comparative analysis with Relu](https://arxiv.org/abs/2405.20503), mish outperformed relu on three different classification datasets.
3) According to [Analyzing Lung Disease Using Highly Effective Deep Learning Techniques](https://www.researchgate.net/publication/340880583_Analyzing_Lung_Disease_Using_Highly_Effective_Deep_Learning_Techniques), mish outperformed relu on a lung lesion dataset, regardless of which optimizer was used (SGD, Adagrad, Adam, etc.).
4) According to [Double-Branch Network with Pyramidal Convolution and Iterative Attention for Hyperspectral Image Classification](https://www.researchgate.net/publication/350701369_Double-Branch_Network_with_Pyramidal_Convolution_and_Iterative_Attention_for_Hyperspectral_Image_Classification), using mish improved accuracy on four hyperspectral image classification datasets.
5) Not an academic paper, but still: https://lessw.medium.com/meet-mish-new-state-of-the-art-ai-activation-function-the-successor-to-relu-846a6d93471f.
### Describe your proposed solution
```
import numpy as np

def faster_mish(x):
    # naive implementation: return x * np.tanh(np.log1p(np.exp(x)))
    expx = np.exp(x)
    n = expx * expx + 2 * expx
    # rewrite of tanh(log1p(exp(x))) as a rational function of exp(x),
    # see https://cs.stackexchange.com/a/125052
    return np.where(x <= -0.6, x * n / (n + 2), x - 2 * x / (n + 2))
```
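A quick check that the branchy rewrite agrees with the naive definition on a range where neither overflows:
```
x = np.linspace(-10, 10, 101)
assert np.allclose(faster_mish(x), x * np.tanh(np.log1p(np.exp(x))))
```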
### Describe alternatives you've considered, if relevant
Pytorch has implemented mish: https://pytorch.org/docs/stable/generated/torch.nn.Mish.html
However, for my small personal project, I'd like to keep things simple, and using sklearn is definitely easier than using pytorch.
### Additional context
I mostly focused on classification, but mish is good for regression too, so it should be the default for `MLPRegressor`, too.
|
closed
|
2024-11-02T22:30:45Z
|
2024-11-04T09:22:32Z
|
https://github.com/scikit-learn/scikit-learn/issues/30199
|
[
"New Feature",
"Needs Triage"
] |
Expertium
| 1
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
computer-vision
| 1,693
|
The problem of style transfer
|
Hello, author.
I want to use my own dataset to perform style transfer tasks, such as converting land scenes into an underwater style, but I only want to transfer the style. When I run my own dataset, I find that besides the style being transferred, the scenery in the pictures also changes (perhaps because the scenery in the land photos differs from that underwater). How can I keep the scenery in the pictures unchanged while making the environment look like it's underwater?
|
open
|
2025-03-19T00:47:55Z
|
2025-03-19T00:47:55Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1693
|
[] |
zhangjy328
| 0
|
Nekmo/amazon-dash
|
dash
| 132
|
Systemd service does not work after reboot
|
Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
### What is the purpose of your *issue*?
- [ ] Bug report (encountered problems with amazon-dash)
- [ ] Feature request (request for a new functionality)
- [x] Question
- [ ] Other
### Guideline for bug reports
You can delete this section if your report is not a bug
* amazon-dash version: v1.3.2
* Python version: 2.7.13
* Pip & Setuptools version: pip 9.0.1 from /usr/lib/python2.7/dist-packages (python 2.7)
* Operating System: Raspbian GNU/Linux 9 (stretch)
How to get your version:
```
amazon-dash --version
python --version
pip --version
easy_install --version
```
- [x] The `pip install` or `setup install` command has been completed without errors
- [x] The `python -m amazon_dash.install` command has been completed without errors
- [x] The `amazon-dash discovery` command works without errors
- [x] I have created/edited the configuration file
- [x] *Amazon-dash service* or `amazon-dash --debug run` works
#### Description
I set up the autostart with the commands
sudo systemctl start amazon-dash
After a reboot I can see that the amazon-dash is running
pi@servberry:~ $ sudo systemctl status amazon-dash
● amazon-dash.service - Amazon Dash service
Loaded: loaded (/lib/systemd/system/amazon-dash.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2019-03-06 19:40:31 CET; 1min 2s ago
Main PID: 330 (amazon-dash)
CGroup: /system.slice/amazon-dash.service
└─330 /usr/bin/python /usr/local/bin/amazon-dash run --config /etc/amazon-dash.yml
Mar 06 19:40:31 servberry systemd[1]: Started Amazon Dash service.
Mar 06 19:40:32 servberry env[330]: Welcome to Amazon-dash v1.3.2 using Python 2.7.13
Mar 06 19:40:32 servberry env[330]: Listening for events. Amazon-dash will execute the events associated with the r
lines 1-10/10 (END)
But when I press the Dash button, nothing happens.
#### What I Did
When restarting the amazon-dash with the command
sudo systemctl restart amazon-dash
the Dash-Button works
|
closed
|
2019-03-06T18:50:44Z
|
2019-03-22T17:16:54Z
|
https://github.com/Nekmo/amazon-dash/issues/132
|
[
"bug"
] |
scharrin2
| 3
|
biolab/orange3
|
scikit-learn
| 6,372
|
UMAP under unsupervised
|
**What's your use case?**
Having, other than t-SNE, a way to visualize data in 2D by reducing dimensions.
**What's your proposed solution?**
Implement the UMAP algorithm and/or a similar one.
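For reference, a tiny sketch of what such a widget could wrap, using the umap-learn reference implementation:
```python
import umap
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
embedding = umap.UMAP(n_components=2, random_state=0).fit_transform(X)
print(embedding.shape)  # (150, 2)
```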
Thank you! I love Orange!!!
|
open
|
2023-03-24T16:34:32Z
|
2023-04-02T23:07:39Z
|
https://github.com/biolab/orange3/issues/6372
|
[] |
pepeto
| 2
|