| repo_name (string) | topic (string) | issue_number (int64) | title (string) | body (string) | state (string) | created_at (string) | updated_at (string) | url (string) | labels (list) | user_login (string) | comments_count (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|
pytorch/vision
|
computer-vision
| 8,730
|
release for python 3.13
|
### 🚀 The feature
Any plans to release for Python 3.13? Thanks!
### Motivation, pitch
torch is already compatible with 3.13
### Alternatives
_No response_
### Additional context
_No response_
|
closed
|
2024-11-14T08:41:04Z
|
2025-02-27T10:40:56Z
|
https://github.com/pytorch/vision/issues/8730
|
[] |
dpinol
| 6
|
wagtail/wagtail
|
django
| 12,408
|
Streamfield migrations fail on revisions that don't have target field
|
### Issue Summary
`wagtail.blocks.migrations.migrate_operation.MigrateStreamData` does not gracefully handle revisions that do not contain the field that is being operated on. This may occur when running a migration on a model that has revisions from before the creation of the field on the model. We do support limiting which revisions to operate on by date, but we should also gracefully handle this scenario (by doing nothing if the field does not exist in a given revision).
### Impact
Prevents migrations from being successfully run.
### Steps to Reproduce
1. Start a new Wagtail project
2. makemigrations, migrate, createsuperuser
3. Save a revision of the homepage
4. Add a "body" StreamField to the homepage
5. makemigrations
6. Create an empty migration in the home app
7. Add a streamfield data migration operation that operates on the newly added "body" field
```python
# Generated by Django 4.2.16 on 2024-10-12 10:51
from django.db import migrations
from wagtail.blocks.migrations.migrate_operation import MigrateStreamData
from wagtail.blocks.migrations.operations import AlterBlockValueOperation
class Migration(migrations.Migration):
    dependencies = [
        ('home', '0003_homepage_body'),
    ]
    operations = [
        MigrateStreamData(
            "home",
            "homepage",
            "body",
            [
                (AlterBlockValueOperation("Hello world"), "text")
            ]
        )
    ]
```
8. Run the migrations
```sh
gitpod /workspace/wagtail-gitpod (main) $ python manage.py migrate
Operations to perform:
Apply all migrations: admin, auth, contenttypes, home, sessions, taggit, wagtailadmin, wagtailcore, wagtaildocs, wagtailembeds, wagtailforms, wagtailimages, wagtailredirects, wagtailsearch, wagtailusers
Running migrations:
Applying home.0003_homepage_body... OK
Applying home.0004_auto_20241012_1051...Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/core/management/__init__.py", line 442, in execute_from_command_line
utility.execute()
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/core/management/__init__.py", line 436, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/core/management/base.py", line 412, in run_from_argv
self.execute(*args, **cmd_options)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/core/management/base.py", line 458, in execute
output = self.handle(*args, **options)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/core/management/base.py", line 106, in wrapper
res = handle_func(*args, **kwargs)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/core/management/commands/migrate.py", line 356, in handle
post_migrate_state = executor.migrate(
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/db/migrations/executor.py", line 135, in migrate
state = self._migrate_all_forwards(
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/db/migrations/executor.py", line 167, in _migrate_all_forwards
state = self.apply_migration(
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/db/migrations/executor.py", line 252, in apply_migration
state = migration.apply(state, schema_editor)
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/db/migrations/migration.py", line 132, in apply
operation.database_forwards(
File "/workspace/.pip-modules/lib/python3.8/site-packages/django/db/migrations/operations/special.py", line 193, in database_forwards
self.code(from_state.apps, schema_editor)
File "/workspace/.pip-modules/lib/python3.8/site-packages/wagtail/blocks/migrations/migrate_operation.py", line 159, in migrate_stream_data_forward
raw_data = json.loads(revision.content[self.field_name])
KeyError: 'body'
```
https://github.com/wagtail/wagtail/blob/309e47f0ccb19dba63aaa64d52914e87eef390dc/wagtail/blocks/migrations/migrate_operation.py#L159
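A minimal sketch of the guard this report asks for (hypothetical helper and names, not the actual Wagtail implementation):
```python
import json

def load_revision_field(revision_content: dict, field_name: str):
    """Return the parsed stream data for field_name, or None when the
    revision predates the field (the KeyError case in the traceback above)."""
    if field_name not in revision_content:
        return None
    return json.loads(revision_content[field_name])

# A revision saved before the "body" StreamField was added:
old_revision_content = {"title": "Home"}
assert load_revision_field(old_revision_content, "body") is None
```
`MigrateStreamData` could then skip a revision whenever this returns `None` instead of raising.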
### Technical details
- Python version: 3.8.12
- Django version: 4.2.16
- Wagtail version: 6.2.2
|
open
|
2024-10-12T11:03:21Z
|
2024-12-01T03:58:01Z
|
https://github.com/wagtail/wagtail/issues/12408
|
[
"type:Bug",
"component:Streamfield"
] |
jams2
| 2
|
xinntao/Real-ESRGAN
|
pytorch
| 328
|
Where can I find the generator's network architecture?
|
Hello author, I could not find the generator's arch file in the code, and there is no ESRGAN arch file under basicsr's arch folder either.
Where can I find the ESRGAN arch file?
|
closed
|
2022-05-12T10:41:16Z
|
2023-02-15T07:34:34Z
|
https://github.com/xinntao/Real-ESRGAN/issues/328
|
[] |
EgbertMeow
| 1
|
miguelgrinberg/Flask-Migrate
|
flask
| 73
|
The multidb is not putting changes in the correct database.
|
I am having an issue where my schema changes are not showing up in the correct database. Furthermore, the test_multidb_migrate_upgrade fails when running "python setup.py test".
When I run these commands:
``` sh
cd tests
rm *.db && rm -rf migrations # cleanup
python app_multidb.py db init --multidb
python app_multidb.py db migrate
python app_multidb.py db upgrade
```
I get this migration, which is wrong, because User should be in upgrade_, not upgrade_db1.
``` python
"""empty message
Revision ID: af440038899
Revises:
Create Date: 2015-08-17 14:24:58.842302
"""
# revision identifiers, used by Alembic.
revision = 'af440038899'
down_revision = None
branch_labels = None
depends_on = None
from alembic import op
import sqlalchemy as sa
def upgrade(engine_name):
    globals()["upgrade_%s" % engine_name]()

def downgrade(engine_name):
    globals()["downgrade_%s" % engine_name]()

def upgrade_():
    pass

def downgrade_():
    pass

def upgrade_db1():
    ### commands auto generated by Alembic - please adjust! ###
    op.create_table('user',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('name', sa.String(length=128), nullable=True),
        sa.PrimaryKeyConstraint('id')
    )
    op.create_table('group',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('name', sa.String(length=128), nullable=True),
        sa.PrimaryKeyConstraint('id')
    )
    ### end Alembic commands ###

def downgrade_db1():
    ### commands auto generated by Alembic - please adjust! ###
    op.drop_table('group')
    op.drop_table('user')
    ### end Alembic commands ###
```
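For context, Flask-SQLAlchemy routes tables to binds via `__bind_key__`; a hypothetical model layout like the one below (a sketch, not the actual `app_multidb.py`) is what should put `user` in `upgrade_()` and `group` in `upgrade_db1()`:
```python
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///app.db'
app.config['SQLALCHEMY_BINDS'] = {'db1': 'sqlite:///db1.db'}
db = SQLAlchemy(app)

class User(db.Model):
    # No __bind_key__: lives in the default database, so Alembic should
    # emit it in upgrade_().
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(128))

class Group(db.Model):
    # Bound to 'db1', so Alembic should emit it in upgrade_db1().
    __bind_key__ = 'db1'
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(128))
```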
Using the latest libraries, not sure if a recent upgrade broke it:
```
$ pip freeze
Flask==0.10.1
Flask-Migrate==1.5.0
Flask-SQLAlchemy==2.0
Flask-Script==2.0.5
Jinja2==2.8
Mako==1.0.1
MarkupSafe==0.23
SQLAlchemy==1.0.8
Werkzeug==0.10.4
alembic==0.8.0
itsdangerous==0.24
python-editor==0.3
wsgiref==0.1.2
```
For sanity, can you confirm unit tests still pass with alembic 0.8, which I believe was released a few days ago? Thanks!
|
closed
|
2015-08-17T18:38:27Z
|
2015-09-04T17:57:35Z
|
https://github.com/miguelgrinberg/Flask-Migrate/issues/73
|
[
"bug"
] |
espositocode
| 3
|
jina-ai/serve
|
fastapi
| 5,994
|
Dynamic k8s namespace for generating kubernetes yaml
|
**Describe the feature**
The current `to_kubernetes_yaml` implementation for a Flow [expects `k8s_namespace` to be explicitly specified somewhere, otherwise it outputs the `default` namespace](https://github.com/jina-ai/jina/blob/34664ee8db0a0e593a6c71dd6476cbf266a80641/jina/orchestrate/flow/base.py#L2772C69-L2772C69). The namespace need not be explicitly set: it can be injected dynamically through metadata, which improves the reusability of the generated templates.
For example:
```shell
kubectl apply -f "file.yaml" --namespace=prod
kubectl apply -f "file.yaml" --namespace=dev
```
**Your proposal**
We may accept that the Optional field can be `None` and propagate that when generating the yaml, in which case the generated yaml will include this for env injection:
```yaml
- name: K8S_NAMESPACE_NAME
valueFrom:
fieldRef:
fieldPath: metadata.namespace
```
In that case we can of course also drop the hard-coded namespace from all generated objects; a rough sketch follows.
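This is only a sketch with a hypothetical helper, not jina's actual code: when `k8s_namespace` is `None`, emit the downward-API reference instead of a hard-coded value.
```python
def namespace_env_entry(k8s_namespace=None):
    """Build the K8S_NAMESPACE_NAME env entry for a generated manifest."""
    if k8s_namespace is not None:
        # Namespace pinned at generation time (current behaviour).
        return {"name": "K8S_NAMESPACE_NAME", "value": k8s_namespace}
    # Namespace left as None: resolve it at apply time via the downward API.
    return {
        "name": "K8S_NAMESPACE_NAME",
        "valueFrom": {"fieldRef": {"fieldPath": "metadata.namespace"}},
    }
```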
More info from official docs
- https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/
- https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/
|
closed
|
2023-07-30T11:05:54Z
|
2024-06-06T00:18:51Z
|
https://github.com/jina-ai/serve/issues/5994
|
[
"Stale"
] |
sansmoraxz
| 25
|
suitenumerique/docs
|
django
| 113
|
🐛Editor difference with PDF
|
## Bug Report
Some properties of the editor are not reflected in the PDF (color / background / alignment).
An issue was opened about it:
- [x] https://github.com/TypeCellOS/BlockNote/issues/893
## Demo

## Code
https://github.com/numerique-gouv/impress/blob/9c19b22a66766018f91262f9d1bd243cdecfa884/src/frontend/apps/impress/src/features/docs/doc-tools/components/ModalPDF.tsx#L88
|
closed
|
2024-07-01T12:50:13Z
|
2024-08-02T15:34:03Z
|
https://github.com/suitenumerique/docs/issues/113
|
[
"bug",
"enhancement",
"frontend"
] |
AntoLC
| 0
|
agronholm/anyio
|
asyncio
| 418
|
'get_coro' doesn't apply to a 'Task' object
|
Don't know if I should file it under `anyio` or `httpx`.
I have a FastAPI web app that makes some external calls with `httpx`. I sometimes (timeout involved? loop terminated elsewhere?) get the following error. I was unsuccessful at reproducing it in a minimal example, so please forgive me for just pasting the traceback.
As you can see in the first block, it is just a GET call with `httpx.AsyncClient`.
using
python 3.9 (dockerized with python:3.9-alpine3.14)
anyio 3.5.0
httpx 0.19.0
EDIT: The problem could be related to the use of nest-asyncio elsewhere in the app.
```
File "/usr/local/lib/python3.9/site-packages/httpx/_client.py", line 1740, in get
return await self.request(
File "/usr/local/lib/python3.9/site-packages/httpx/_client.py", line 1494, in request
response = await self.send(
File "/usr/local/lib/python3.9/site-packages/httpx/_client.py", line 1586, in send
response = await self._send_handling_auth(
File "/usr/local/lib/python3.9/site-packages/httpx/_client.py", line 1616, in _send_handling_auth
response = await self._send_handling_redirects(
File "/usr/local/lib/python3.9/site-packages/httpx/_client.py", line 1655, in _send_handling_redirects
response = await self._send_single_request(request, timeout)
File "/usr/local/lib/python3.9/site-packages/httpx/_client.py", line 1699, in _send_single_request
) = await transport.handle_async_request(
File "/usr/local/lib/python3.9/site-packages/httpx/_transports/default.py", line 281, in handle_async_request
) = await self._pool.handle_async_request(
File "/usr/local/lib/python3.9/site-packages/httpcore/_async/connection_pool.py", line 219, in handle_async_request
async with self._connection_acquiry_lock:
File "/usr/local/lib/python3.9/site-packages/httpcore/_backends/base.py", line 76, in __aenter__
await self.acquire()
File "/usr/local/lib/python3.9/site-packages/httpcore/_backends/anyio.py", line 104, in acquire
await self._lock.acquire()
File "/usr/local/lib/python3.9/site-packages/anyio/_core/_synchronization.py", line 119, in acquire
self.acquire_nowait()
File "/usr/local/lib/python3.9/site-packages/anyio/_core/_synchronization.py", line 150, in acquire_nowait
task = get_current_task()
File "/usr/local/lib/python3.9/site-packages/anyio/_core/_testing.py", line 59, in get_current_task
return get_asynclib().get_current_task()
File "/usr/local/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 1850, in get_current_task
return _create_task_info(current_task()) # type: ignore[arg-type]
File "/usr/local/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 1846, in _create_task_info
return TaskInfo(id(task), parent_id, name, get_coro(task))
TypeError: descriptor 'get_coro' for '_asyncio.Task' objects doesn't apply to a 'Task' object
```
|
closed
|
2022-02-10T10:26:31Z
|
2022-05-08T19:08:01Z
|
https://github.com/agronholm/anyio/issues/418
|
[] |
jacopo-exact
| 7
|
pytest-dev/pytest-cov
|
pytest
| 605
|
Maximum coverage in minimal time
|
# Summary
Given a project where tests have been added incrementally over time and there is a significant amount of overlap between tests,
I'd like to be able to generate a list of tests that creates maximum coverage in minimal time. Clearly this is a pure coverage approach and doesn't guarantee that functional coverage is maintained, but this could be a good approach to identifying redundant tests.
I have a quick proof-of-concept that's not integrated into pytest that:
* runs all tests with `pytest-cov` and `--durations=0`
* processes `CoverageData` and the output of `--durations=0` to generate a list of arcs/lines that are covered for each context
* reduces the list of subsets using the set cover algorithm
* optionally applies a coverage 'confidence' in the event you want a faster smoke test that has reduced coverage (say 95%).
I am happy to work on a PR and include tests, but before I do I wanted to gauge the fit with your project's goals; if you'd rather not have this feature, I can always create a separate plugin for people who want it.
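The reduction step could look roughly like the greedy weighted set cover below (a sketch of the idea, not the proof-of-concept's actual code; `coverage` maps a test id to its set of covered lines/arcs and `durations` maps a test id to its runtime in seconds):
```python
def pick_tests(coverage, durations, target=1.0):
    """Greedily pick tests that add the most uncovered lines per second."""
    universe = set().union(*coverage.values())
    goal = target * len(universe)          # e.g. target=0.95 for a 95% smoke test
    covered, chosen = set(), []
    while len(covered) < goal:
        best = max(
            (t for t in coverage if t not in chosen),
            key=lambda t: len(coverage[t] - covered) / max(durations[t], 1e-9),
            default=None,
        )
        if best is None or not coverage[best] - covered:
            break                          # nothing left that adds coverage
        chosen.append(best)
        covered |= coverage[best]
    return chosen
```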
|
closed
|
2023-08-10T11:57:49Z
|
2023-12-11T09:26:55Z
|
https://github.com/pytest-dev/pytest-cov/issues/605
|
[] |
masaccio
| 5
|
horovod/horovod
|
deep-learning
| 3,162
|
Spark with Horovod fails with py4j.protocol.Py4JJavaError
|
**Environment:**
1. Framework: TensorFlow, Keras
2. Framework version: tensorflow-2.4.3, keras-2.6.0
3. Horovod version: horovod-0.22.1
4. MPI version:
5. CUDA version:
6. NCCL version:
7. Python version: python-3.6.9
8. Spark / PySpark version: Spark-3.1.2
9. Ray version:
10. OS and version: Ubuntu 18
11. GCC version: gcc-7.5.0
12. CMake version: cmake-3.21.2
When running the sample script keras_spark_rossmann_estimator.py, the Spark app fails at model training with the following error:
```
Total params: 2,715,603
Trainable params: 2,715,567
Non-trainable params: 36
__________________________________________________________________________________________________
/home/cc/.local/lib/python3.6/site-packages/keras/optimizer_v2/optimizer_v2.py:356: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
"The `lr` argument is deprecated, use `learning_rate` instead.")
num_partitions=80
writing dataframes
train_data_path=file:///tmp/intermediate_train_data.0
val_data_path=file:///tmp/intermediate_val_data.0
train_partitions=76===========================================> (15 + 1) / 16]
val_partitions=8
/home/cc/.local/lib/python3.6/site-packages/horovod/spark/common/util.py:479: FutureWarning: The 'field_by_name' method is deprecated, use 'field' instead
metadata, avg_row_size = make_metadata_dictionary(train_data_schema)
train_rows=806871
val_rows=37467
Exception in thread Thread-3: (0 + 8) / 8]
Traceback (most recent call last):
File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/usr/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/home/cc/.local/lib/python3.6/site-packages/horovod/spark/runner.py", line 140, in run_spark
result = procs.mapPartitionsWithIndex(mapper).collect()
File "/usr/local/lib/python3.6/dist-packages/pyspark/rdd.py", line 949, in collect
sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
File "/home/cc/.local/lib/python3.6/site-packages/py4j/java_gateway.py", line 1310, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "/usr/local/lib/python3.6/dist-packages/pyspark/sql/utils.py", line 111, in deco
return f(*a, **kw)
File "/home/cc/.local/lib/python3.6/site-packages/py4j/protocol.py", line 328, in get_return_value
format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job 63 cancelled part of cancelled job group horovod.spark.run.0
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2258)
at org.apache.spark.scheduler.DAGScheduler.handleJobCancellation(DAGScheduler.scala:2154)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleJobGroupCancelled$4(DAGScheduler.scala:1048)
at scala.runtime.java8.JFunction1$mcVI$sp.apply(JFunction1$mcVI$sp.java:23)
at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
at org.apache.spark.scheduler.DAGScheduler.handleJobGroupCancelled(DAGScheduler.scala:1047)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2407)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2387)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2376)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:868)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2196)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2217)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2236)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2261)
at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1030)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:414)
at org.apache.spark.rdd.RDD.collect(RDD.scala:1029)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:180)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Traceback (most recent call last):
File "keras_spark_rossmann_estimator.py", line 397, in <module>
keras_model = keras_estimator.fit(train_df).setOutputCols(['Sales_output'])
File "/home/cc/.local/lib/python3.6/site-packages/horovod/spark/common/estimator.py", line 35, in fit
return super(HorovodEstimator, self).fit(df, params)
File "/usr/local/lib/python3.6/dist-packages/pyspark/ml/base.py", line 161, in fit
return self._fit(dataset)
File "/home/cc/.local/lib/python3.6/site-packages/horovod/spark/common/estimator.py", line 81, in _fit
backend, train_rows, val_rows, metadata, avg_row_size, dataset_idx)
File "/home/cc/.local/lib/python3.6/site-packages/horovod/spark/keras/estimator.py", line 317, in _fit_on_prepared_data
env=env)
File "/home/cc/.local/lib/python3.6/site-packages/horovod/spark/common/backend.py", line 85, in run
**self._kwargs)
File "/home/cc/.local/lib/python3.6/site-packages/horovod/spark/runner.py", line 284, in run
_launch_job(use_mpi, use_gloo, settings, driver, env, stdout, stderr)
File "/home/cc/.local/lib/python3.6/site-packages/horovod/spark/runner.py", line 155, in _launch_job
settings.verbose)
File "/home/cc/.local/lib/python3.6/site-packages/horovod/runner/launch.py", line 706, in run_controller
gloo_run()
File "/home/cc/.local/lib/python3.6/site-packages/horovod/spark/runner.py", line 152, in <lambda>
run_controller(use_gloo, lambda: gloo_run(settings, nics, driver, env, stdout, stderr),
File "/home/cc/.local/lib/python3.6/site-packages/horovod/spark/gloo_run.py", line 67, in gloo_run
launch_gloo(command, exec_command, settings, nics, {}, server_ip)
File "/home/cc/.local/lib/python3.6/site-packages/horovod/runner/gloo_run.py", line 271, in launch_gloo
.format(name=name, code=exit_code))
RuntimeError: Horovod detected that one or more processes exited with non-zero status, thus causing the job to be terminated. The first process to do so was:
Process name: 0
Exit code: 255
```
This is followed by the thread dump below:
```
21/09/13 04:12:33 ERROR TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message.
org.apache.spark.SparkException: Could not find CoarseGrainedScheduler.
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:176)
at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:150)
at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:691)
at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:255)
at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:111)
at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:140)
at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:53)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
21/09/13 04:12:33 ERROR TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message.
org.apache.spark.SparkException: Could not find CoarseGrainedScheduler.
at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:176)
at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:150)
at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:691)
at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:255)
at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:111)
at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:140)
at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:53)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
```
|
open
|
2021-09-13T05:06:05Z
|
2021-09-14T22:34:34Z
|
https://github.com/horovod/horovod/issues/3162
|
[
"bug"
] |
aakash-sharma
| 2
|
scikit-optimize/scikit-optimize
|
scikit-learn
| 569
|
ImportError: cannot import name MaskedArray
|
I got scikit-optimize from your pypi release [here](https://pypi.python.org/pypi/scikit-optimize), where it says I need scikit-learn >= 0.18. "I'm in luck." thought I, for 0.18 I have. But trying to import skopt I get an error that MaskedArray can not be imported from sklearn.utils.fixes, and trying to import that class from there myself also yields an error. *Does scikit-optimize actually have a dependency on a later version of sklearn?*
|
closed
|
2017-12-11T19:31:36Z
|
2023-06-20T19:06:05Z
|
https://github.com/scikit-optimize/scikit-optimize/issues/569
|
[] |
pavelkomarov
| 25
|
KevinMusgrave/pytorch-metric-learning
|
computer-vision
| 138
|
How to use a queue of negative samples as done in MoCo
|
Hi Kevin,
I wonder if such an extended NT-Xent loss could be implemented?
The NT-Xent loss implemented in this package can return the pairwise loss when given a mini-batch and a label array. I wonder whether, for the purpose of adding more negative samples to make the task harder, we could use this package directly.
To be more specific, I use @JohnGiorgi's example:
```python
import torch
from pytorch_metric_learning.losses import NTXentLoss

batch_size = 16
embedding_dim = 512
anchor_embeddings = torch.randn(batch_size, embedding_dim)
positive_embeddings = torch.randn(batch_size, embedding_dim)
embeddings = torch.cat((anchor_embeddings, positive_embeddings))
indices = torch.arange(0, anchor_embeddings.size(0), device=anchor_embeddings.device)
labels = torch.cat((indices, indices))
loss = NTXentLoss(temperature=0.10)
loss(embeddings, labels)
```
Assuming I have another list of negative samples with size 224 * 512, how could I use the package?
I would really appreciate it if you could provide this functionality, since it would be very useful for making the contrastive learning task harder when resources are limited.
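One way to use the extra negatives without any additional API (a sketch, assuming each queued negative gets a label that collides with no anchor/positive label, so it can only act as a negative):
```python
import torch
from pytorch_metric_learning.losses import NTXentLoss

batch_size, embedding_dim, queue_size = 16, 512, 224

anchor_embeddings = torch.randn(batch_size, embedding_dim)
positive_embeddings = torch.randn(batch_size, embedding_dim)
# MoCo-style queue of negatives (detach it if it should not receive gradients).
negative_queue = torch.randn(queue_size, embedding_dim).detach()

indices = torch.arange(batch_size)
# Unique labels for the queue so none of its entries forms a positive pair.
queue_labels = torch.arange(batch_size, batch_size + queue_size)

embeddings = torch.cat((anchor_embeddings, positive_embeddings, negative_queue))
labels = torch.cat((indices, indices, queue_labels))

loss = NTXentLoss(temperature=0.10)(embeddings, labels)
```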
|
closed
|
2020-07-14T01:22:43Z
|
2020-07-25T14:21:05Z
|
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/138
|
[
"Frequently Asked Questions",
"question"
] |
CSerxy
| 13
|
inventree/InvenTree
|
django
| 9,196
|
[FR] PUI: Please add IPN to supplier parts table
|
### Please verify that this feature request has NOT been suggested before.
- [x] I checked and didn't find a similar feature request
### Problem statement
With reference to #9179: the IPN is missing from the supplier part table. In the CUI it was combined with the name; an additional column would also be fine.
### Suggested solution
Either as in CUI or additional column
### Describe alternatives you've considered
We can stay with CUI
### Examples of other systems
_No response_
### Do you want to develop this?
- [ ] I want to develop this.
|
closed
|
2025-02-27T06:56:01Z
|
2025-02-27T12:18:00Z
|
https://github.com/inventree/InvenTree/issues/9196
|
[
"enhancement",
"User Interface"
] |
SergeoLacruz
| 3
|
graphql-python/graphene-django
|
graphql
| 750
|
Bug: Supposedly wrong types in query with filter_fields since 2.4.0
|
### Problem
When using filter_fields I get an error about wrong types, which started appearing in 2.4.0.
`Variable "startedAtNull" of type "Boolean" used in position expecting type "DateTime".` The error does not occur with graphene-django 2.3.2
### Context
- using django-filter 2.2.0
- graphene-django 2.4.0
###
**Schema.py**
```
DATETIME_FILTERS = ['exact', 'isnull', 'lt', 'lte', 'gt', 'gte', 'month', 'year', 'date']
class OrderNode(DjangoObjectType):
    class Meta:
        model = Order
        exclude = ('tenant', )
        filter_fields = {
            'id': ['exact'],
            'start_at': DATETIME_FILTERS,
            'finish_at': DATETIME_FILTERS,
            'finished_at': DATETIME_FILTERS,
            'started_at': DATETIME_FILTERS,
        }
        interfaces = (OrderNodeInterface,)
```
**Query:**
```
ORDERS_QUERY = '''
    query order(
        $tenant: String
        $projectId: ID
        $startedAtNull: Boolean
    ) {
        orders(
            tenant: $tenant
            project_Id: $projectId
            startedAt_Isnull: $startedAtNull
        ) {
            edges {
                node {
                    id,
                    city
                }
            }
        }
    }
'''
```
**Result:**
`Variable "startedAtNull" of type "Boolean" used in position expecting type "DateTime".`
### Solution
I am confident it is related to this PR: https://github.com/graphql-python/graphene-django/pull/682/files . In graphene_django/filter/utils.py, the way the type of a field is retrieved was changed. Or maybe I misunderstood the changelog.
|
closed
|
2019-08-16T09:43:08Z
|
2019-10-10T08:20:16Z
|
https://github.com/graphql-python/graphene-django/issues/750
|
[
"🐛bug"
] |
lassesteffen
| 10
|
plotly/dash-table
|
dash
| 600
|
Incorrect cell validation / coercion
|
1 - The validation default is not applied correctly when its value is `0` (number) -- the value is falsy and trips the default case
2 - Deleting cell content with `backspace` does not run validation
1 - This is simple: update https://github.com/plotly/dash-table/blob/dev/src/dash-table/type/reconcile.ts#L67 to do an `R.isNil` check instead
2 - This is a bit more involved; I suggest we start using the `on_change` settings applicable to each cell and use the reconciliation result of `null` when reconciliation succeeds. Otherwise, continue using `''` as we do right now.
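For illustration only, the same falsy-default pitfall expressed in Python terms (the real fix is the `R.isNil` check in the TypeScript reconcile code):
```python
def apply_default(value, default):
    # Buggy: `or` treats 0, '' and False as "missing" and replaces them.
    return value or default

def apply_default_fixed(value, default):
    # Fixed: only substitute the default when the value is truly absent.
    return default if value is None else value

assert apply_default(0, 42) == 42        # 0 is wrongly replaced
assert apply_default_fixed(0, 42) == 0   # 0 is kept
```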
|
closed
|
2019-09-24T12:34:57Z
|
2019-09-24T16:28:50Z
|
https://github.com/plotly/dash-table/issues/600
|
[
"dash-type-bug",
"size: 0.5"
] |
Marc-Andre-Rivet
| 0
|
stanfordnlp/stanza
|
nlp
| 808
|
Stanza sluggish with multiprocessing
|
Test case below.
Stanza is fast if `parallel == 1`, but becomes sluggish when distributed across processes.
````
import os, multiprocessing, time
import stanza
parallel = os.cpu_count()
language = 'cs'
sentence = 'ponuže dobře al ja nemam i zpětlou vas dbu od policiei teto ty teto věci najitam spravným orbganutyperypřecuji nas praco zdněspotaměcham'
stanza.download( language )
def nlp_stanza( ignore ):
    nlp_pipeline = stanza.Pipeline( language, logging_level='WARN', use_gpu=False )
    for i in range(50):
        s = int(time.process_time()*1000)
        nlp_pipeline( sentence )
        e = int(time.process_time()*1000)
        print( os.getpid(), str(e-s)+'ms:', sentence )
pool = multiprocessing.Pool( processes=parallel )
pool.map( nlp_stanza, range(parallel) )
pool.join()
````
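One possible explanation (an assumption, not confirmed in the report) is CPU thread over-subscription: each worker process starts its own PyTorch-backed pipeline with a full set of intra-op threads. A hedged mitigation, reusing `language` and `sentence` from the test case above, is to pin each worker to one thread:
```python
import stanza
import torch

def nlp_stanza(ignore):
    # Hypothetical mitigation: N worker processes x M intra-op threads
    # oversubscribe the CPU; limit each worker to a single torch thread.
    torch.set_num_threads(1)
    nlp_pipeline = stanza.Pipeline(language, logging_level='WARN', use_gpu=False)
    for _ in range(50):
        nlp_pipeline(sentence)
```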
|
closed
|
2021-09-16T12:41:13Z
|
2021-09-20T12:35:40Z
|
https://github.com/stanfordnlp/stanza/issues/808
|
[
"bug"
] |
doublex
| 1
|
deepinsight/insightface
|
pytorch
| 2,328
|
batch SimilarityTransform
|
The following code implements face alignment using functions from the `skimage` library. In cases where there are a small number of faces, using this function for face alignment can yield satisfactory results. However, when dealing with a large collection of faces, I'm looking for a method to calculate similarity transformation matrices in batches. I have already achieved batch face alignment using similarity transformation matrices of shape [b, 2, 3] with the `kornia` library. However, I'm struggling to find a way to calculate similarity transformation matrices in batches while maintaining consistent results with the computation performed by `skimage`. Furthermore, I'm hoping to accomplish this using the PyTorch framework. I have attempted to replicate the computation of similarity transformation matrices from `skimage` using PyTorch, but the results do not match. This discrepancy could impact the accuracy of subsequent face recognition tasks. Has anyone successfully implemented this? Any help would be greatly appreciated.
```
tform = trans.SimilarityTransform()
tform.estimate(src,dst)
M = tform.params[0:2,:]
```
The results of `trans.SimilarityTransform()` can be stacked into an [n, 2, 3] matrix, which can then be used with `kornia.geometry.transform.warp_affine` along with the corresponding images to perform batch alignment tasks.
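A hedged sketch of a batched estimate in PyTorch, following the same Umeyama algorithm that skimage's `SimilarityTransform.estimate` implements; it has not been verified numerically against skimage here:
```python
import torch

def batched_umeyama(src: torch.Tensor, dst: torch.Tensor) -> torch.Tensor:
    """Estimate batched 2D similarity transforms.

    src, dst: [B, N, 2] corresponding landmark sets.
    Returns:  [B, 2, 3] matrices usable with kornia's warp_affine.
    """
    B, N, D = src.shape
    mu_src = src.mean(dim=1, keepdim=True)              # [B, 1, 2]
    mu_dst = dst.mean(dim=1, keepdim=True)
    src_c = src - mu_src
    dst_c = dst - mu_dst

    # Cross-covariance between the centered point sets.
    cov = dst_c.transpose(1, 2) @ src_c / N              # [B, 2, 2]
    U, S, Vh = torch.linalg.svd(cov)

    # Reflection handling (Umeyama's d vector); degenerate sets aside.
    d = torch.ones(B, D, dtype=src.dtype, device=src.device)
    d[:, -1] = torch.sign(torch.linalg.det(U) * torch.linalg.det(Vh))

    R = U @ torch.diag_embed(d) @ Vh                      # [B, 2, 2] rotation
    var_src = src_c.pow(2).sum(dim=(1, 2)) / N            # [B]
    scale = (S * d).sum(dim=1) / var_src                  # [B]

    t = mu_dst.squeeze(1) - scale[:, None] * (R @ mu_src.transpose(1, 2)).squeeze(-1)
    M = scale[:, None, None] * R                          # [B, 2, 2]
    return torch.cat([M, t.unsqueeze(-1)], dim=2)         # [B, 2, 3]
```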
|
open
|
2023-06-06T03:29:15Z
|
2023-07-11T02:55:29Z
|
https://github.com/deepinsight/insightface/issues/2328
|
[] |
muqishan
| 1
|
python-visualization/folium
|
data-visualization
| 1,850
|
export / save Folium map as static image (PNG)
|
Code:
```python
colormap = branca.colormap.linear.plasma.scale(vmin, vmax).to_step(100)
r_map = folium.Map(location=[lat, long], tiles='openstreetmap')
for i in range(0, len(df)):
    r_lat = ...
    r_long = ...
    r_score = ...
    Circle(location=[r_lat, r_long], radius=5, color=colormap(r_score)).add_to(r_map)
colormap.add_to(r_map)
r_map
```
This works fine, but there seems to be no way to generate the map, optionally, as a fixed-size, non-zoomable bitmap in a decent format like PNG.
A side-effect of this is - the map does not show in a PDF export of the Jupyter notebook where the map is generated.
It would be nice if `folium.Map()` had a way to generate fixed bitmap output, e.g. like Matplotlib.pyplot.
I've read the documentation and searched the web; there really seems to be no good solution other than a Selenium hack, which requires too many moving parts and extra libraries. This should instead be a standard Folium feature.
To be clear, I am running all this code in a Jupyter notebook.
I don't think I have the time to implement a PR myself.
|
closed
|
2023-12-22T21:15:47Z
|
2024-05-25T14:11:19Z
|
https://github.com/python-visualization/folium/issues/1850
|
[] |
FlorinAndrei
| 5
|
tradingstrategy-ai/web3-ethereum-defi
|
pytest
| 125
|
Error when loading last N blocks using JSONRPCReorganisationMonitor
|
How do you only load the last N blocks? This code is meant to load only the last 5 blocks, I believe; however, it errors when adding new blocks.
```
reorg_mon = JSONRPCReorganisationMonitor(web3, check_depth=30)
reorg_mon.load_initial_block_headers(block_count=5)
while True:
    try:
        # Figure out the next good unscanned block range,
        # and fetch block headers and timestamps for this block range
        chain_reorg_resolution = reorg_mon.update_chain()
        if chain_reorg_resolution.reorg_detected:
            logger.info(f"Chain reorganisation data updated: {chain_reorg_resolution}")
        # Read specified events in block range
        for log_result in read_events(
            web3,
            start_block=chain_reorg_resolution.latest_block_with_good_data + 1,
            end_block=chain_reorg_resolution.last_live_block,
            filter=filter,
            notify=None,
            chunk_size=100,
            context=token_cache,
            extract_timestamps=None,
            reorg_mon=reorg_mon,
        ):
            pass
```
```
INFO:eth_defi.event_reader.reorganisation_monitor:figure_reorganisation_and_new_blocks(), range 17,285,423 - 17,285,443, last block we have is 17,285,443, check depth is 20
ERROR: LoadError: Python: AssertionError: Blocks must be added in order. Last block we have: 17285443, the new record is: BlockHeader(block_number=17285423, block_hash='0x8d481922bd607150c9f3299004a113e44955327770ab04ed10de115e2172d6fe', timestamp=1684400615)
Python stacktrace:
[1] add_block
@ eth_defi.event_reader.reorganisation_monitor ~/Library/Caches/pypoetry/virtualenvs/cryptopy-5siZoxZ4-py3.10/lib/python3.10/site-packages/eth_defi/event_reader/reorganisation_monitor.py:324
[2] figure_reorganisation_and_new_blocks
@ eth_defi.event_reader.reorganisation_monitor ~/Library/Caches/pypoetry/virtualenvs/cryptopy-5siZoxZ4-py3.10/lib/python3.10/site-packages/eth_defi/event_reader/reorganisation_monitor.py:396
[3] update_chain
```
|
closed
|
2023-05-18T09:10:38Z
|
2023-08-08T11:47:19Z
|
https://github.com/tradingstrategy-ai/web3-ethereum-defi/issues/125
|
[] |
bryaan
| 2
|
graphdeco-inria/gaussian-splatting
|
computer-vision
| 850
|
basic question from beginner of 3d reconstruction using 3DGS
|
Hello! I am trying to start 3d reconstruction with your 3DGS software.
I'd like to ask some basic questions.
1. Does 3DGS need camera intrinsic parameters? Do they just help 3D quality, or are they a must-have?
2. I am using Colmap to build the SfM reconstruction as a prerequisite before running train.py to make the 3DGS .ply file. I put in 2800 images (10 fps from a movie) using a basic GPU with 16 GB VRAM, and it takes nearly 30 hours to make a .ply for 3DGS. Do you know any good way to speed it up, like another tool?
3. So I should feed the .ply to train.py, right? Or what do you mean by "Colmap dataset" in the readme?
Thank you for your kind advice!
|
open
|
2024-06-15T15:19:30Z
|
2024-06-18T11:58:18Z
|
https://github.com/graphdeco-inria/gaussian-splatting/issues/850
|
[] |
RickMaruyama
| 1
|
hankcs/HanLP
|
nlp
| 1,806
|
File stream not closed properly
|
**Describe the bug**
A clear and concise description of what the bug is.
The readVectorFile() method of the VectorsReader class does not close its file stream properly, causing a resource leak.
**Code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate the problem.
Use Word2VecTrainer's train method, or create a new WordVectorModel object; while the process is running, try to delete the model file (e.g. model.txt): it cannot be deleted.
```java
```
**Describe the current behavior**
A clear and concise description of what happened.
**Expected behavior**
A clear and concise description of what you expected to happen.
After using this jar for word-vector model training or for converting documents to vectors, the model file can never be deleted and file resources pile up.
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): occurs on any OS
- Python version:
- HanLP version: the problem appears in every version from 1.5 to 1.8.3 so far
**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
[VectorsReader.java](https://github.com/hankcs/HanLP/blob/v1.8.3/src/main/java/com/hankcs/hanlp/mining/word2vec/VectorsReader.java)
* [x] I've completed this form and searched the web for solutions.
|
closed
|
2023-02-24T13:40:24Z
|
2023-02-25T01:02:15Z
|
https://github.com/hankcs/HanLP/issues/1806
|
[
"bug"
] |
zjqer
| 3
|
NullArray/AutoSploit
|
automation
| 406
|
Unhandled Exception (e195f1bf9)
|
Autosploit version: `3.0`
OS information: `Linux-4.18.0-kali2-amd64-x86_64-with-Kali-kali-rolling-kali-rolling`
Running context: `autosploit.py`
Error message: `global name 'Except' is not defined`
Error traceback:
```
Traceback (most recent call):
File "/root/Puffader/Autosploit/autosploit/main.py", line 113, in main
loaded_exploits = load_exploits(EXPLOIT_FILES_PATH)
File "/root/Puffader/Autosploit/lib/jsonize.py", line 61, in load_exploits
except Except:
NameError: global name 'Except' is not defined
```
Metasploit launched: `False`
|
closed
|
2019-01-24T09:40:39Z
|
2019-04-02T20:27:09Z
|
https://github.com/NullArray/AutoSploit/issues/406
|
[] |
AutosploitReporter
| 0
|
piskvorky/gensim
|
machine-learning
| 2,973
|
phrases.export_phrases() doesn't yield all bigrams
|
Hello, and thank you for this tool.
phrases.export_phrases() doesn't yield all bigrams when some are part of a bigger n-gram.
If I create a Phrases object with
`phrases = gensim.models.phrases.Phrases(sentences, min_count=1, threshold=10, delimiter=b' ', scoring='default')`
on the following two sentences
New York City has the largest population of all the cities in the United States .
Every year, many travelers come to the United States to visit New York City .
`print(dict(phrases.export_phrases(sentences)))` only returns {b'New York': 11.5, b'United States': 11.5}. It should also return {b'York City': 11.5} however.
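For reference, a minimal reproduction assembled from the report (assuming the gensim 3.8.x API used above):
```python
import gensim

sentences = [
    "New York City has the largest population of all the cities in the United States .".split(),
    "Every year, many travelers come to the United States to visit New York City .".split(),
]

phrases = gensim.models.phrases.Phrases(
    sentences, min_count=1, threshold=10, delimiter=b' ', scoring='default')

# Per the report: prints b'New York' and b'United States' but not b'York City'.
print(dict(phrases.export_phrases(sentences)))
```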
Line 187 of phrases.py should probably be changed to `last_uncommon = word`. It fixes the problem on my side and seems to be what the code was intended to do.
Thank you,
Olivier NC
macOS-10.14.5-x86_64-i386-64bit
Python 3.8.5 (v3.8.5:580fbb018f, Jul 20 2020, 12:11:27)
[Clang 6.0 (clang-600.0.57)]
Bits 64
NumPy 1.19.2
SciPy 1.5.2
gensim 3.8.3
FAST_VERSION 0
|
closed
|
2020-10-06T03:28:57Z
|
2020-10-09T00:04:27Z
|
https://github.com/piskvorky/gensim/issues/2973
|
[] |
o-nc
| 3
|
cvat-ai/cvat
|
pytorch
| 8,431
|
Toggle switch for mask point does not work anymore
|
For instance segmentation, I used to draw the first mask. For the second mask which would always overlap the first mask, I usually used CTRL to make the points of mask 1 appear to have a suitable overlap in annotation.
Now, the function for making the points of mask 1 appear is gone?! I cannot turn the points on with CTRL, which makes instance segmentation impossible for now. The only way to make the points appear, as visible in figure 1, is to hover over the mask. But this does not work in draw mode: as soon as I enter draw mode the points disappear, as seen in figure 2.


Please have a look at it. I'm sure it's just a small one-liner that must be the reason the CTRL toggle for the points no longer works.
|
closed
|
2024-09-11T10:24:34Z
|
2024-09-11T11:11:25Z
|
https://github.com/cvat-ai/cvat/issues/8431
|
[] |
hasano20
| 2
|
encode/apistar
|
api
| 71
|
Interactive API Documentation
|
We'll be pulling in REST framework's existing interactive API docs.
It's gonna be ✨fabulous✨.
|
closed
|
2017-04-20T14:07:21Z
|
2017-08-04T15:06:37Z
|
https://github.com/encode/apistar/issues/71
|
[
"Baseline feature"
] |
tomchristie
| 6
|
pytorch/pytorch
|
machine-learning
| 149,516
|
```StateDictOptions``` in combination with ```cpu_offload=True``` and ```strict=False``` not working
|
### 🐛 Describe the bug
When running the following for distributed weight loading:
```
options = StateDictOptions(
full_state_dict=True,
broadcast_from_rank0=True,
strict=False,
cpu_offload=True,
)
set_model_state_dict(model=model, model_state_dict=weights, options=options)
```
I am getting a `KeyError` for keys that are not in the model.
I believe it has to do with not checking for strict at this point:
https://github.com/pytorch/pytorch/blob/main/torch/distributed/_state_dict_utils.py#L656
Which only appears to be done afterwards.
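A possible workaround until the check honours `strict` (an assumption on my side, not an upstream recommendation) is to drop the unexpected keys before the call:
```python
# Keep only the keys the local model actually has; this is effectively
# what strict=False is expected to do.
model_keys = set(model.state_dict().keys())
weights = {k: v for k, v in weights.items() if k in model_keys}
set_model_state_dict(model=model, model_state_dict=weights, options=options)
```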
### Versions
Current main
cc @LucasLLC @pradeepfn
|
open
|
2025-03-19T14:53:37Z
|
2025-03-20T19:19:49Z
|
https://github.com/pytorch/pytorch/issues/149516
|
[
"oncall: distributed checkpointing"
] |
psinger
| 0
|
getsentry/sentry
|
django
| 86,783
|
Add ttid_contribution_rate() function
|
### Problem Statement
Add ttid_contribution_rate() to eap via rpc
### Solution Brainstorm
_No response_
### Product Area
Unknown
|
closed
|
2025-03-11T13:16:41Z
|
2025-03-11T17:46:36Z
|
https://github.com/getsentry/sentry/issues/86783
|
[] |
DominikB2014
| 0
|
Evil0ctal/Douyin_TikTok_Download_API
|
fastapi
| 98
|
I can't use shortcuts on ios
|
I have updated to the latest 6.0 version, but the shortcut still says it needs an update, so I cannot download TikTok videos.
I am using an iPhone X on iOS 15.5.

|
closed
|
2022-11-08T18:48:19Z
|
2022-11-10T08:00:58Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/98
|
[
"API Down",
"Fixed"
] |
beelyhot5
| 1
|
timkpaine/lantern
|
plotly
| 169
|
Chop out email to separate jlab plugin
|
closed
|
2018-07-22T17:57:18Z
|
2018-08-10T19:55:22Z
|
https://github.com/timkpaine/lantern/issues/169
|
[
"feature"
] |
timkpaine
| 2
|
|
deepspeedai/DeepSpeed
|
deep-learning
| 6,729
|
GPU mem doesn't release after delete tensors in optimizer.bit16groups
|
I'm developing a PEFT algorithm; basically it does the following:
Say the training process has 30 steps in total,
1. For global step 0\~9: train `lmhead` + `layer_0`
2. For global step 10\~19: train `lmhead` + `layer_1`
3. For global step 20\~29: train `lmhead` + `layer_0`
The key point is that, after the switch, the states of `lmhead` are expected to be kept, while the states of the body layers should be deleted.
For example, the `step` in the `lmhead` state should go from 0 to 29, while `step` for body layers counts from 0 after every switch, even if the layer has been selected before.
In this case, the parameter group looks like:
```python
optimizer_grouped_parameters = [
    {
        # this should always be lmhead:
        # `requires_grad` and `not in active_layers_names` rules out all body layers
        # `in decay_parameters` rules out ln
        "params": [
            p for n, p in opt_model.named_parameters() if (
                n not in self.active_layers_names and n in decay_parameters and p.requires_grad)
        ],
        "weight_decay": self.args.weight_decay,
    },
    {
        # this should always be ln (outside of body layers)
        "params": [
            p for n, p in opt_model.named_parameters() if (
                n not in self.active_layers_names and n not in decay_parameters and p.requires_grad)
        ],
        "weight_decay": 0.0,
    },
    {
        # selected body layers with decay
        "params": [
            p for n, p in opt_model.named_parameters() if (
                n in self.active_layers_names and n in decay_parameters and p.requires_grad)
        ],
        "weight_decay": self.args.weight_decay,
    },
    {
        # selected body layers without decay
        "params": [
            p for n, p in opt_model.named_parameters() if (
                n in self.active_layers_names and n not in decay_parameters and p.requires_grad)
        ],
        "weight_decay": 0.0,
    },
]
```
The first two represents layers that states should be kept, while the last two will change.
An approach I came up with is to partially "re-init" the optimizer at the beginning of the step that performs the switch. I modified my Hugging Face trainer based on the DeepSpeed optimizer's `__init__` method:
```python
def _reinit_deepspeed_zero_optimizer_params(self, optimizer: DeepSpeedZeroOptimizer):
    num_non_lisa_body_layer_pgs = len(self.optimizer.param_groups) - len(LISA_BODY_LAYER_PARAM_GROUPS_IDX)
    objs = [
        optimizer.bit16_groups,
        optimizer.round_robin_bit16_groups,
        optimizer.round_robin_bit16_indices,
        optimizer.round_robin_bit16_meta,
        optimizer.bit16_groups_flat,
        optimizer.groups_padding,
        optimizer.parallel_partitioned_bit16_groups,
        optimizer.single_partition_of_fp32_groups,
        optimizer.partition_size,
        optimizer.params_in_partition,
        optimizer.params_not_in_partition,
        optimizer.first_offset
    ]
    for obj in objs:
        del obj[num_non_lisa_body_layer_pgs:]
    empty_cache()
    torch.cuda.empty_cache()
    gc.collect()

    for i, param_group in enumerate(optimizer.optimizer.param_groups):
        if i in range(num_non_lisa_body_layer_pgs):
            # skip lmhead, ln, etc.
            continue

        ## same as deepspeed/runtime/zero/stage_1_and_2.py DeepSpeedZeroOptimizer.__init__ below
        partition_id = dist.get_rank(group=optimizer.real_dp_process_group[i])

        # push this group to list before modify
        # TODO: Explore simplification that avoids the extra book-keeping by pushing the reordered group
        trainable_parameters = []
        for param in param_group['params']:
            if param.requires_grad:
                param.grad_accum = None
                trainable_parameters.append(param)
        optimizer.bit16_groups.append(trainable_parameters)

        # not sure why apex was cloning the weights before flattening
        # removing cloning here
        see_memory_usage(f"Before moving param group {i} to CPU")
        # move all the parameters to cpu to free up GPU space for creating flat buffer
        # Create temp CPU param copies, free accelerator tensors
        orig_group_numel = 0
        for param in optimizer.bit16_groups[i]:
            orig_group_numel += param.numel()
            param.cpu_data = param.data.cpu()
            param.data = torch.empty(1).to(param.device)
        empty_cache()
        see_memory_usage(f"After moving param group {i} to CPU", force=False)

        # Reorder group parameters for load balancing of gradient partitioning during backward among ranks.
        # This ensures that gradients are reduced in a fashion such that ownership round robins among the ranks.
        # For example, rather than 3 gradients (g_n+2, g_n+1, g_n) that are reduced consecutively belonging
        # to the same rank, instead they will belong to 3 ranks (r_m+2, r_m+1, r_m).
        if optimizer.round_robin_gradients:
            round_robin_tensors, round_robin_indices = optimizer._round_robin_reorder(
                optimizer.bit16_groups[i], dist.get_world_size(group=optimizer.real_dp_process_group[i]))
        else:
            round_robin_tensors = optimizer.bit16_groups[i]
            round_robin_indices = list(range(len(optimizer.bit16_groups[i])))
        optimizer.round_robin_bit16_groups.append(round_robin_tensors)
        optimizer.round_robin_bit16_indices.append(round_robin_indices)

        # Create meta tensors list, ordered according to round_robin_tensors
        meta_tensors = []
        for param in round_robin_tensors:
            meta_tensors.append(torch.zeros_like(param.cpu_data, device="meta"))
        optimizer.round_robin_bit16_meta.append(meta_tensors)

        # create flat buffer in CPU
        flattened_buffer = optimizer.flatten_dense_tensors_aligned(
            optimizer.round_robin_bit16_groups[i],
            optimizer.nccl_start_alignment_factor * dist.get_world_size(group=optimizer.real_dp_process_group[i]),
            use_cpu_data=True)

        # free temp CPU params
        for param in optimizer.bit16_groups[i]:
            del param.cpu_data

        # Move CPU flat tensor to the accelerator memory.
        optimizer.bit16_groups_flat.append(flattened_buffer.to(get_accelerator().current_device_name()))
        del flattened_buffer
        see_memory_usage(f"After flattening and moving param group {i} to GPU", force=False)

        # Record padding required for alignment
        if partition_id == dist.get_world_size(group=optimizer.real_dp_process_group[i]) - 1:
            padding = optimizer.bit16_groups_flat[i].numel() - orig_group_numel
        else:
            padding = 0
        optimizer.groups_padding.append(padding)

        if dist.get_rank(group=optimizer.real_dp_process_group[i]) == 0:
            see_memory_usage(f"After Flattening and after emptying param group {i} cache", force=False)

        # set model bit16 weight to slices of flattened buffer
        optimizer._update_model_bit16_weights(i)

        # divide the flat weights into near equal partition equal to the data parallel degree
        # each process will compute on a different part of the partition
        data_parallel_partitions = optimizer.get_data_parallel_partitions(optimizer.bit16_groups_flat[i], i)
        optimizer.parallel_partitioned_bit16_groups.append(data_parallel_partitions)

        # verify that data partition start locations are 4-byte aligned
        for partitioned_data in data_parallel_partitions:
            assert (partitioned_data.data_ptr() % (2 * optimizer.nccl_start_alignment_factor) == 0)

        # A partition of the fp32 master weights that will be updated by this process.
        # Note that the params in single_partition_of_fp32_groups is cloned and detached
        # from the origin params of the model.
        if not optimizer.fp16_master_weights_and_gradients:
            weights_partition = optimizer.parallel_partitioned_bit16_groups[i][partition_id].to(
                optimizer.device).clone().float().detach()
        else:
            weights_partition = optimizer.parallel_partitioned_bit16_groups[i][partition_id].to(
                optimizer.device).clone().half().detach()

        if optimizer.cpu_offload:
            weights_partition = get_accelerator().pin_memory(weights_partition)
        optimizer.single_partition_of_fp32_groups.append(weights_partition)

        # Set local optimizer to have flat params of its own partition.
        # After this, the local optimizer will only contain its own partition of params.
        # In that case, the local optimizer only saves the states(momentum, variance, etc.) related to its partition's params(zero stage1).
        optimizer.single_partition_of_fp32_groups[
            i].requires_grad = True  # keep this in case internal optimizer uses it
        param_group['params'] = [optimizer.single_partition_of_fp32_groups[i]]

        partition_size = len(optimizer.bit16_groups_flat[i]) / dist.get_world_size(group=optimizer.real_dp_process_group[i])
        params_in_partition, params_not_in_partition, first_offset = optimizer.get_partition_info(
            optimizer.round_robin_bit16_groups[i], partition_size, partition_id)
        optimizer.partition_size.append(partition_size)
        optimizer.params_in_partition.append(params_in_partition)
        optimizer.params_not_in_partition.append(params_not_in_partition)
        optimizer.first_offset.append(first_offset)
```
**However, I found that `del obj` does not work, as the memory profiling result below shows:**

I noticed that the tensors the arrows point at appear when:
```python
# Move CPU flat tensor to the accelerator memory.
optimizer.bit16_groups_flat.append(flattened_buffer.to(get_accelerator().current_device_name()))
```
Are there any insights?
|
closed
|
2024-11-08T11:42:49Z
|
2024-12-06T21:59:52Z
|
https://github.com/deepspeedai/DeepSpeed/issues/6729
|
[] |
wheresmyhair
| 2
|
ymcui/Chinese-LLaMA-Alpaca
|
nlp
| 575
|
run_pt.sh hangs when run with --deepspeed ds_zero2_no_offload.json
|
I'm using the specified peft 0.3.0dev. When running run_pt.sh I kept all default parameters and only changed the model and tokenizer paths. Data loading works fine, but then it hangs at the point below and does not move on; I tried several times with the same result. If I remove --deepspeed ${deepspeed_config_file} it does not hang but reports other errors instead, so I suspect the problem is in the deepspeed config. Here is the log at the point where it hangs:
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 33/33 [00:12<00:00, 2.66it/s]
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 33/33 [00:12<00:00, 2.65it/s]
[INFO|modeling_utils.py:3283] 2023-06-12 18:58:17,569 >> All model checkpoint weights were used when initializing LlamaForCausalLM.
[INFO|modeling_utils.py:3291] 2023-06-12 18:58:17,569 >> All the weights of LlamaForCausalLM were initialized from the model checkpoint at /media/zzg/GJ_disk01/pretrained_model/text-generation-webui/models/decapoda-research_llama-7b-hf.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlamaForCausalLM for predictions without further training.
[INFO|configuration_utils.py:537] 2023-06-12 18:58:17,572 >> loading configuration file /media/zzg/GJ_disk01/pretrained_model/text-generation-webui/models/decapoda-research_llama-7b-hf/generation_config.json
[INFO|configuration_utils.py:577] 2023-06-12 18:58:17,572 >> Generate config GenerationConfig {
"_from_model_config": true,
"bos_token_id": 0,
"eos_token_id": 1,
"pad_token_id": 0,
"transformers_version": "4.30.0.dev0"
}
06/12/2023 18:58:34 - INFO - __main__ - Init new peft model
06/12/2023 18:58:34 - INFO - __main__ - target_modules: ['q_proj', 'v_proj', 'k_proj', 'o_proj', 'gate_proj', 'down_proj', 'up_proj']
06/12/2023 18:58:34 - INFO - __main__ - lora_rank: 8
trainable params: 429203456 || all params: 6905475072 || trainable%: 6.2154080859739
[INFO|trainer.py:594] 2023-06-12 18:59:44,457 >> max_steps is given, it will override any value given in num_train_epochs
/home/zzg/miniconda3/envs/py39_DL_cu118/lib/python3.9/site-packages/transformers/optimization.py:411: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
warnings.warn(
[2023-06-12 18:59:44,476] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed info: version=0.9.4, git-hash=unknown, git-branch=unknown
06/12/2023 18:59:47 - INFO - torch.distributed.distributed_c10d - Added key: store_based_barrier_key:2 to store for rank: 0
trainable params: 429203456 || all params: 6905475072 || trainable%: 6.2154080859739
/home/zzg/miniconda3/envs/py39_DL_cu118/lib/python3.9/site-packages/transformers/optimization.py:411: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
warnings.warn(
06/12/2023 18:59:51 - INFO - torch.distributed.distributed_c10d - Added key: store_based_barrier_key:2 to store for rank: 1
06/12/2023 18:59:51 - INFO - torch.distributed.distributed_c10d - Rank 1: Completed store-based barrier for key:store_based_barrier_key:2 with 2 nodes.
06/12/2023 18:59:51 - INFO - torch.distributed.distributed_c10d - Rank 0: Completed store-based barrier for key:store_based_barrier_key:2 with 2 nodes.
[2023-06-12 18:59:51,724] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False
[2023-06-12 18:59:51,728] [INFO] [logging.py:96:log_dist] [Rank 0] Removing param_group that has no 'params' in the client Optimizer
[2023-06-12 18:59:51,728] [INFO] [logging.py:96:log_dist] [Rank 0] Using client Optimizer as basic optimizer
[2023-06-12 18:59:51,761] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Basic Optimizer = AdamW
[2023-06-12 18:59:51,761] [INFO] [utils.py:54:is_zero_supported_optimizer] Checking ZeRO support for optimizer=AdamW type=<class 'transformers.optimization.AdamW'>
[2023-06-12 18:59:51,761] [WARNING] [engine.py:1116:_do_optimizer_sanity_check] **** You are using ZeRO with an untested optimizer, proceed with caution *****
[2023-06-12 18:59:51,761] [INFO] [logging.py:96:log_dist] [Rank 0] Creating torch.float16 ZeRO stage 2 optimizer
[2023-06-12 18:59:51,761] [INFO] [stage_1_and_2.py:133:__init__] Reduce bucket size 100000000
[2023-06-12 18:59:51,761] [INFO] [stage_1_and_2.py:134:__init__] Allgather bucket size 100000000
[2023-06-12 18:59:51,762] [INFO] [stage_1_and_2.py:135:__init__] CPU Offload: False
[2023-06-12 18:59:51,762] [INFO] [stage_1_and_2.py:136:__init__] Round robin gradient partitioning: False
Using /home/zzg/.cache/torch_extensions/py39_cu117 as PyTorch extensions root...
Using /home/zzg/.cache/torch_extensions/py39_cu117 as PyTorch extensions root...
The same thing happens after reinstalling deepspeed. Is there any way to fix this?
|
closed
|
2023-06-12T11:04:47Z
|
2023-06-13T03:51:24Z
|
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/575
|
[] |
guijuzhejiang
| 6
|
holoviz/panel
|
plotly
| 7,517
|
JSComponent not working in Jupyter
|
I'm on panel==1.5.4 and panel-copy-paste==0.0.4.
The `render_fn` cannot be found and the component does not display.

```python
import panel as pn
from panel_copy_paste import PasteToDataFrameButton
import pandas as pd
ACCENT = "#ff4a4a"
pn.extension("tabulator")
```
```python
PasteToDataFrameButton(target=to_table)
```
I'm on a JupyterHub behind a reverse proxy, if that matters.
|
open
|
2024-11-25T06:35:13Z
|
2025-01-21T10:50:40Z
|
https://github.com/holoviz/panel/issues/7517
|
[
"more info needed"
] |
MarcSkovMadsen
| 2
|
donnemartin/data-science-ipython-notebooks
|
matplotlib
| 13
|
Command to run mrjob s3 log parser is incorrect
|
Current:
```
python mr-mr_s3_log_parser.py -r emr s3://bucket-source/ --output-dir=s3://bucket-dest/"
```
Should be:
```
python mr_s3_log_parser.py -r emr s3://bucket-source/ --output-dir=s3://bucket-dest/"
```
|
closed
|
2015-07-31T22:52:18Z
|
2015-12-28T13:14:13Z
|
https://github.com/donnemartin/data-science-ipython-notebooks/issues/13
|
[
"bug"
] |
donnemartin
| 1
|
google-research/bert
|
tensorflow
| 782
|
InvalidArgumentError (see above for traceback): Found Inf or NaN global norm. : Tensor had NaN values [[{{node VerifyFinite/CheckNumerics}} = CheckNumerics[T=DT_FLOAT, message="Found Inf or NaN global norm.", _device="/job:localhost/replica:0/task:0/device:GPU:0"](global_norm/global_norm)]]
|
I added a POS-tag feature to the BERT model and ran into the following problem. I tried reducing the batch_size, but it didn't help.
python run_oqmrc_POS.py --task_name=MyPro --do_train=true --do_eval=true --data_dir=./data --vocab_file=chinese_bert/vocab.txt --pos_tag_vocab_file=pyltp_data/pos_tag_vocab.txt --bert_config_file=chinese_bert/bert_config.json --init_checkpoint=chinese_bert/bert_model.ckpt --max_seq_length=128 --train_batch_size=32 --learning_rate=2e-5 --num_train_epochs=3.0 --output_dir=tmp/mypro_output_POS/


|
open
|
2019-07-23T07:18:52Z
|
2019-07-23T07:18:52Z
|
https://github.com/google-research/bert/issues/782
|
[] |
daishu7
| 0
|
AUTOMATIC1111/stable-diffusion-webui
|
pytorch
| 15,870
|
[Bug]: Stable Diffusion is now very slow and won't work at all
|
### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
Well, after switching to the babes 3.1 checkpoint I tried to generate an image, but when I stopped it because I didn't like it I got the little CUDA message, so I tried to generate another image and got the same thing. So I closed out Stable Diffusion, went into the files, deleted some of the useless past outputs, ran it again, there was nothing off about the startup process, and when I tried to reload the last set of prompts, it wouldn't do it. So I had to go into the parameters, copy and paste the last prompts, hit generate, and it won't even load up, which I found strange. So I closed out of stable diffusion, checked to see if my NVIDIA card needed updating, it did, installed the update, ran Stable Diffusion again, same dang thing happened. Then I thought it might be my laptop since it needed an update, so I ran the update on my laptop, then tried to run SD again, NOPE! Same thing happened AGAIN. Then, I realized, it's ONLY SD that's taking forever, everything else on my laptop is fine. As of the moment, I can't do ANYTHING on SD, can't check for extension updates, can't generate anything, can't switch to previous versions, can't switch checkpoints, I was lucky to get the sysinfo, I tried closing out, going into settings, removing the checkpoints until there was only one left, my usual default one, Anythingv5.0, can't even load that one. Currently, it has been almost 10 minutes while typing this, and before typing this I tried to generate something on SD, IT HASN'T EVEN STARTED, IT'S STILL PROCESSING, NOTHING ELSE ON SD IS LOADING, NOTHING IS PREVENTING IT FROM GENERATING. I honestly don't know what's going on, can someone please help me?
### Steps to reproduce the problem
No idea
### What should have happened?
It should not be very freaking slow.
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
[sysinfo-2024-05-23-05-16.json](https://github.com/AUTOMATIC1111/stable-diffusion-webui/files/15411881/sysinfo-2024-05-23-05-16.json)
### Console logs
```Shell
C:\Users\zach\OneDrive\Desktop\Stable Diffusion\stable-diffusion-webui>git pull
Already up to date.
venv "C:\Users\zach\OneDrive\Desktop\Stable Diffusion\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.9.3
Commit hash: 1c0a0c4c26f78c32095ebc7f8af82f5c04fca8c0
#######################################################################################################
Initializing Civitai Link
If submitting an issue on github, please provide the below text for debugging purposes:
Python revision: 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Civitai Link revision: 115cd9c35b0774c90cb9c397ad60ef6a7dac60de
SD-WebUI revision: 1c0a0c4c26f78c32095ebc7f8af82f5c04fca8c0
Checking Civitai Link requirements...
#######################################################################################################
Launching Web UI with arguments: --precision full --no-half --skip-torch-cuda-test --xformers
ControlNet preprocessor location: C:\Users\zach\OneDrive\Desktop\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2024-05-23 01:04:51,516 - ControlNet - INFO - ControlNet v1.1.448
Civitai: API loaded
Loading weights [90bef92d4f] from C:\Users\zach\OneDrive\Desktop\Stable Diffusion\stable-diffusion-webui\models\Stable-diffusion\babes_31.safetensors
[LyCORIS]-WARNING: LyCORIS legacy extension is now loaded, if you don't expext to see this message, please disable this extension.
2024-05-23 01:04:52,407 - ControlNet - INFO - ControlNet UI callback registered.
Creating model from config: C:\Users\zach\OneDrive\Desktop\Stable Diffusion\stable-diffusion-webui\configs\v1-inference.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Civitai: Check resources for missing info files
Civitai: Check resources for missing preview images
Startup time: 17.4s (prepare environment: 4.6s, import torch: 5.8s, import gradio: 1.1s, setup paths: 1.2s, initialize shared: 0.3s, other imports: 0.7s, load scripts: 2.5s, create ui: 0.6s, gradio launch: 0.4s).
Civitai: Found 0 resources missing info files
Civitai: No info found on Civitai
Civitai: Found 0 resources missing preview images
Civitai: No preview images found on Civitai
Applying attention optimization: xformers... done.
Model loaded in 4.1s (load weights from disk: 0.9s, create model: 0.7s, apply weights to model: 1.7s, apply float(): 0.3s, move model to device: 0.1s, calculate empty prompt: 0.4s).
```
### Additional information
Everything is updated, everything is all caught up, there is nothing that needs updating as far as I'm aware of, feel free to point something out if I missed it.
|
open
|
2024-05-23T05:24:37Z
|
2024-05-28T22:59:12Z
|
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15870
|
[
"bug-report"
] |
MichaelDeathBringer
| 8
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
deep-learning
| 1,122
|
a trouble when testing cyclegan
|
Hi, thank you for your open source code; it's really a masterpiece.
I've run into some trouble when running test.py with CycleGAN, like this:

my input data are images with [256,256,3], I keep some flags the same as training, such as netG, norm, dropout, like belows:
!python train.py --dataroot /content/data/single/ --netG unet_256 --batch_size 64 --model cycle_gan1 --gpu_ids 0 --serial_batches --preprocess none --no_flip --name XQ12_singlecycle_noidentity --n_epochs 50 --n_epochs_decay 50 --print_freq 3200 --save_epoch_freq 10 --save_latest_freq 12096 --update_html_freq 40000 --lambda_identity 0.0 --lambda_supervised 0.0 --lambda_per 0.0 --lambda_A 10 --lambda_B 10 --beta_A 10 --beta_B 10 --perceptual_layers 8 --continue_train --epoch_count 61
!python test.py --dataroot /content/data/single/ --batch_size 1 --netG unet_256 --model testcyclegan1 --gpu_ids 0 --serial_batches --preprocess none --no_flip --name XQ12_singlecycle_noidentity --input_nc 3 --no_dropout --eval --dataset_mode singlenew1 --results_dir /content/data/results1_singlecycle/ --num_test 14382 --model_suffix _A
Please don't mind some of the strange flags; I just added them for some new loss functions built on top of your wonderful code.
I printed out the tensor size right before this problem appears; it is [1,512,2,2], which I think is OK. Actually, I met this problem once before, in the training stage. The reason was that I used BN in my network while setting batch_size to 1. From what I found online, if you use BN the batch_size must be >1, but in CycleGAN we use IN, right? I tried changing batch_size in test.py from the default of 1 to 2. Then this problem goes away, but a new one appears: the results skip half of the images. I only get images 0, 2, 4, 6, ... without images 1, 3, 5, 7, .... Could you please help me find a solution? Thanks.
|
closed
|
2020-08-08T20:52:50Z
|
2020-08-09T14:40:49Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1122
|
[] |
bearxiong333
| 7
|
sigmavirus24/github3.py
|
rest-api
| 1,111
|
The search result `total_count` is always 0
|
I'm trying to use GitHub3.py to find the number of total issues in a repository. For that, I want to use the Search API to search for the issues in a particular repository.
For example, with the raw GitHub API, I can request [https://api.github.com/search/issues?q=repo%3Asigmavirus24/github3.py](https://api.github.com/search/issues?q=repo%3Asigmavirus24/github3.py) and then get the total issues count in "total_count" key.
However, the following code snippet gives me zero:
```python
gh = login(token=API_TOKEN)
count = gh.search_issues(query="repo: sigmavirus24/github3.py").total_count
print(count)
# 0
```
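For what it's worth, the raw API URL above has no whitespace after the `repo:` qualifier, while the snippet passes `"repo: sigmavirus24/github3.py"`. A minimal sketch of the space-free variant (the token env-var name is just an assumption):
```python
import os

from github3 import login

gh = login(token=os.environ["GITHUB_TOKEN"])  # any valid token; env var name is an assumption
# No space after "repo:"; GitHub's search syntax parses "repo: owner/name" and
# "repo:owner/name" differently, which might explain the zero total_count.
results = gh.search_issues("repo:sigmavirus24/github3.py")
print(results.total_count)
```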
|
open
|
2022-09-28T14:25:29Z
|
2022-09-28T14:25:29Z
|
https://github.com/sigmavirus24/github3.py/issues/1111
|
[] |
theoctober19th
| 0
|
MagicStack/asyncpg
|
asyncio
| 655
|
Does not work on ASGI servers
|
<!--
Thank you for reporting an issue/feature request.
If this is a feature request, please disregard this template. If this is
a bug report, please answer to the questions below.
It will be much easier for us to fix the issue if a test case that reproduces
the problem is provided, with clear instructions on how to run it.
Thank you!
-->
* **asyncpg version**: 0.21.0
* **PostgreSQL version**: 13
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**: N/A (No)
* **Python version**: 3.9
* **Platform**: Fedora Rawhide
* **Do you use pgbouncer?**: No
* **Did you install asyncpg with pip?**: Yes
* **If you built asyncpg locally, which version of Cython did you use?**:
* **Can the issue be reproduced under both asyncio and
[uvloop](https://github.com/magicstack/uvloop)?**: Yes
<!-- Enter your issue details below this comment. -->
When running asyncpg on an ASGI server (FastAPI/Quart), asyncpg crashes with another operation in progress. This does not happen on AIOHTTP or non-ASGI servers
Also, when using uvicorn, i get an error
Traceback (most recent call last):
File "/usr/bin/uvicorn", line 33, in <module>
sys.exit(load_entry_point('uvicorn==0.11.8', 'console_scripts', 'uvicorn')())
File "/home/rootspring/.local/lib/python3.9/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/home/rootspring/.local/lib/python3.9/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/home/rootspring/.local/lib/python3.9/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/rootspring/.local/lib/python3.9/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/usr/lib/python3.9/site-packages/uvicorn/main.py", line 339, in main
run(**kwargs)
File "/usr/lib/python3.9/site-packages/uvicorn/main.py", line 362, in run
server.run()
File "/usr/lib/python3.9/site-packages/uvicorn/main.py", line 390, in run
loop.run_until_complete(self.serve(sockets=sockets))
File "uvloop/loop.pyx", line 1456, in uvloop.loop.Loop.run_until_complete
File "/usr/lib/python3.9/site-packages/uvicorn/main.py", line 397, in serve
config.load()
File "/usr/lib/python3.9/site-packages/uvicorn/config.py", line 278, in load
self.loaded_app = import_from_string(self.app)
File "/usr/lib/python3.9/site-packages/uvicorn/importer.py", line 20, in import_from_string
module = importlib.import_module(module_str)
File "/usr/lib64/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 790, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "./server_fastapi.py", line 85, in <module>
db = loop.run_until_complete(setup_db())
File "uvloop/loop.pyx", line 1450, in uvloop.loop.Loop.run_until_complete
File "uvloop/loop.pyx", line 1443, in uvloop.loop.Loop.run_until_complete
File "uvloop/loop.pyx", line 1351, in uvloop.loop.Loop.run_forever
File "uvloop/loop.pyx", line 480, in uvloop.loop.Loop._run
RuntimeError: this event loop is already running.
|
closed
|
2020-11-21T09:29:53Z
|
2023-04-24T16:37:54Z
|
https://github.com/MagicStack/asyncpg/issues/655
|
[] |
cheesycod
| 5
|
psf/requests
|
python
| 5,994
|
ca_certs zip file extraction permission issue with multiple users on Python 3.6
|
When you have multiple users on a machine that each use `requests` from zipapps with `certifi`, one user running a request should not block other users from successfully performing requests. This issue only appears when using a zipapp on python3.6. For python3.7+ the certifi library handles the tempfile and `requests.util.extract_zipped_paths` never sees the zipapp. https://github.com/certifi/python-certifi/blob/2021.10.08/certifi/core.py#L43-L44
## Expected Result
```
user1 # python3.6 zipapp.2.26.zip
get file
user2 # python3.6 zipapp.2.26.zip
get file
```
## Actual Result
```
user1 # python3.6 zipapp.2.26.zip
get file
user2 # python3.6 zipapp.2.26.zip
Traceback (most recent call last):
File "/tmp/zipapp.2.26.zip/urllib3/util/ssl_.py", line 402, in ssl_wrap_socket
PermissionError: [Errno 13] Permission denied
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/zipapp.2.26.zip/urllib3/connectionpool.py", line 706, in urlopen
File "/tmp/zipapp.2.26.zip/urllib3/connectionpool.py", line 382, in _make_request
File "/tmp/zipapp.2.26.zip/urllib3/connectionpool.py", line 1010, in _validate_conn
File "/tmp/zipapp.2.26.zip/urllib3/connection.py", line 421, in connect
File "/tmp/zipapp.2.26.zip/urllib3/util/ssl_.py", line 404, in ssl_wrap_socket
urllib3.exceptions.SSLError: [Errno 13] Permission denied
During the handling the above ....... <a bunch of MaxRetryError tracebacks>
```
### Behaviors on versions:
The core issue is that the changes in #5707 result in different file permissions on disk. Prior to that change, the file would be extracted with `0o664` permissions, but afterwards it is created with `0o600` permissions by [mkstemp](https://docs.python.org/3/library/tempfile.html#tempfile.mkstemp), which means that users other than the first runner can't access the cert file.
https://github.com/psf/requests/blob/c193d9742ed58d41252e65a4f57a936683bb8dbd/requests/utils.py#L275-L276
https://github.com/psf/requests/blob/c193d9742ed58d41252e65a4f57a936683bb8dbd/requests/utils.py#L283-L288
#### Python3.6 w/ requests==2.25.1
```
$ PYTHONPATH=zipapp.2.25.zip python3.6
>>> import certifi, requests
>>> certifi.where()
'/tmp/zipapp.2.25.zip/certifi/cacert.pem'
>>> requests.utils.extract_zipped_paths(certifi.where())
'/tmp/certifi/cacert.pem'
```
```
$ ls -lah /tmp/certifi/cacert.pem
-rw-rw-r--. 1 user1 user1 278K May 7 2020 /tmp/certifi/cacert.pem
```
#### Python3.6 w/ requests==2.26.0
```
$ PYTHONPATH=zipapp.2.26.zip python3.6
>>> import certifi, requests
>>> certifi.where()
'/tmp/zipapp.2.26.zip/certifi/cacert.pem'
>>> requests.utils.extract_zipped_paths(certifi.where())
'/tmp/cacert.pem'
```
```
$ ls -lah /tmp/cacert.pem
-rw-------. 1 user1 user1 254K Sep 23 23:34 /tmp/cacert.pem
```
#### Python3.7 w/ requests==2.26.0
```
$ PYTHONPATH=zipapp.2.26.zip python3.7
>>> import certifi, requests
>>> certifi.where()
'/tmp/tmpuwsvnshl'
>>> requests.utils.extract_zipped_paths(certifi.where())
'/tmp/tmpuwsvnshl'
```
```
$ ls -lah /tmp/tmpuwsvnshl
-rw------- 1 user1 user1 260K Nov 30 15:46 /tmp/tmpuwsvnshl
```
## System Information
$ PYTHONPATH=zipapp.2.26.zip python3.6 -m requests.help
```json
{
"chardet": {
"version": null
},
"charset_normalizer": {
"version": "2.0.8"
},
"cryptography": {
"version": ""
},
"idna": {
"version": "3.3"
},
"implementation": {
"name": "CPython",
"version": "3.6.8"
},
"platform": {
"release": "3.10.0-862.3.2.el7.x86_64",
"system": "Linux"
},
"pyOpenSSL": {
"openssl_version": "",
"version": null
},
"requests": {
"version": "2.26.0"
},
"system_ssl": {
"version": "100020bf"
},
"urllib3": {
"version": "1.26.7"
},
"using_charset_normalizer": true,
"using_pyopenssl": false
}
```
Ideally I'd prefer the `extract_zipped_paths` writer to use a temporary file like `certifi` does, which gets cleaned up at the end of the program; but as a fallback the writer could be updated to save with `0o664` permissions again so that it doesn't matter which user runs first.
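A minimal sketch of that `0o664` fallback, using only the standard library (this is not requests' actual `extract_zipped_paths`, just an illustration of the proposed permission change):
```python
import os
import tempfile
import zipfile

def extract_member_with_group_read(zip_path: str, member: str) -> str:
    """Extract `member` to a predictable temp path with rw-rw-r-- permissions."""
    extracted = os.path.join(tempfile.gettempdir(), os.path.basename(member))
    if not os.path.exists(extracted):
        with zipfile.ZipFile(zip_path) as archive, open(extracted, "wb") as out:
            out.write(archive.read(member))
        os.chmod(extracted, 0o664)  # rw-rw-r-- instead of mkstemp's rw-------
    return extracted
```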
|
closed
|
2021-12-01T00:00:05Z
|
2022-04-02T17:01:56Z
|
https://github.com/psf/requests/issues/5994
|
[] |
Peter200lx
| 2
|
JaidedAI/EasyOCR
|
machine-learning
| 1,301
|
numpy 2
|
Hey, I've got an issue related to NumPy 1/2 binary builds.
Can you confirm that NumPy 2 is supported and, if not, please provide an ETA for support.
|
open
|
2024-09-05T00:06:22Z
|
2024-12-09T17:46:17Z
|
https://github.com/JaidedAI/EasyOCR/issues/1301
|
[] |
Napolitain
| 1
|
alpacahq/alpaca-trade-api-python
|
rest-api
| 588
|
[WARNING] data websocket error, restarting connection: no close frame received or sent
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
The websocket connection on SIP is unusable for me: a connection is lost every 10 seconds with one of two errors:
- [WARNING] data websocket error, restarting connection: no close frame received or sent
- [WARNING] data websocket error, restarting connection: sent 1011 (unexpected error) keepalive ping timeout; no close frame received
This is a simple subscription to bars for all symbols.
attempts:
- tested versions 1.5.0 1.5.1
- changed host (local machine, friend's machine, personal laptop, 2 vps servers) exact same behaviour
- changed websocket versions i can't even remember how many
The websocket log sometimes shows a ping timeout.
Some have suggested that the API already reconnects automatically; that's not a solution, because you lose the 5-second timeout before the connection actually drops, plus reconnection and re-subscription time, which leaves gaps in the 1-minute bars by the time you start receiving data again.
Over a 24h run I got a total of 290 candles on AAPL (because of this error) out of a possible 960 candles; that's 70% lost data.
My guesses:
- Alpaca's websocket server ping timeout interval is too short for such high traffic.
- Alpaca's servers deliberately drop random connections at saturation to prevent service failure (the problem happens at peak hours).
### Expected Behavior
Expected to work
### Steps To Reproduce
```markdown
Simple 1-minute bars subscription for all symbols
```
### Anything else?
_No response_
|
open
|
2022-03-15T16:12:23Z
|
2022-12-14T20:12:40Z
|
https://github.com/alpacahq/alpaca-trade-api-python/issues/588
|
[] |
kimboox44
| 11
|
yihong0618/running_page
|
data-visualization
| 380
|
[Keep] Why are there missing tracks?
|
The `actions` run didn't report any errors, but it seems no `gpx` data was fetched. I'm not sure why; I'll go for another run tonight to test whether it's a problem with the script.
<img width="1512" alt="image" src="https://user-images.githubusercontent.com/79169717/221772616-3a2623d8-65ff-4bb8-9729-02eb15ee04e5.png">
Also, regarding this easter egg, the styling on the left looks a bit off.
<img width="1512" alt="image" src="https://user-images.githubusercontent.com/79169717/221772688-f081a074-973f-4a43-851c-70d79e48eb59.png">
|
closed
|
2023-02-28T06:34:49Z
|
2023-10-21T11:22:41Z
|
https://github.com/yihong0618/running_page/issues/380
|
[] |
sun0225SUN
| 5
|
modelscope/modelscope
|
nlp
| 900
|
MsDataset.load throws an error
|
I ran the following code to load the dataset:
from modelscope.msdatasets import MsDataset
# Loading dataset
hf_ds = MsDataset.load(
'ICASSP_2021_DNS_Challenge', namespace='modelscope',split='test')
and got the following error:
2024-07-04 13:20:42,801 - modelscope - INFO - PyTorch version 1.11.0+cu113 Found.
2024-07-04 13:20:42,801 - modelscope - INFO - Loading ast index from /home/tian/.cache/modelscope/ast_indexer
2024-07-04 13:20:42,891 - modelscope - INFO - Loading done! Current index file version is 1.15.0, with md5 842522ac9e7126c035b0b056d88b631b and a total number of 980 components indexed
transformer is not installed, please install it if you want to use related modules
/home/tian/miniconda3/envs/frcrn/lib/python3.8/site-packages/datasets/load.py:2524: FutureWarning: 'ignore_verifications' was deprecated in favor of 'verification_mode' in version 2.9.1 and will be removed in 3.0.0.
You can remove this warning by passing 'verification_mode=no_checks' instead.
warnings.warn(
/home/tian/miniconda3/envs/frcrn/lib/python3.8/site-packages/datasets/load.py:926: FutureWarning: The repository for audiolib contains custom code which must be executed to correctly load the dataset. You can inspect the repository content at /home/tian/.cache/modelscope/hub/datasets/modelscope/ICASSP_2021_DNS_Challenge/master/meta/audiolib.py
You can avoid this message in future by passing the argument `trust_remote_code=True`.
Passing `trust_remote_code=True` will be mandatory to load this dataset from the next major release of `datasets`.
warnings.warn(
Traceback (most recent call last):
File "train.py", line 9, in <module>
hf_ds = MsDataset.load(
File "/home/tian/miniconda3/envs/frcrn/lib/python3.8/site-packages/modelscope/msdatasets/ms_dataset.py", line 316, in load
dataset_inst = remote_dataloader_manager.load_dataset(
File "/home/tian/miniconda3/envs/frcrn/lib/python3.8/site-packages/modelscope/msdatasets/data_loader/data_loader_manager.py", line 132, in load_dataset
oss_downloader.process()
File "/home/tian/miniconda3/envs/frcrn/lib/python3.8/site-packages/modelscope/msdatasets/data_loader/data_loader.py", line 83, in process
self._prepare_and_download()
File "/home/tian/miniconda3/envs/frcrn/lib/python3.8/site-packages/modelscope/msdatasets/data_loader/data_loader.py", line 135, in _prepare_and_download
self.dataset = hf_load_dataset(
File "/home/tian/miniconda3/envs/frcrn/lib/python3.8/site-packages/datasets/load.py", line 2556, in load_dataset
builder_instance = load_dataset_builder(
File "/home/tian/miniconda3/envs/frcrn/lib/python3.8/site-packages/datasets/load.py", line 2265, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
TypeError: 'NoneType' object is not callable
**Your Environments (__required__)**
* OS:
* Linux LAPTOP-L8O6SGOA 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
* CPU:
* Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 183
Model name: 13th Gen Intel(R) Core(TM) i9-13900HX
Stepping: 1
CPU MHz: 2419.200
BogoMIPS: 4838.40
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 48K
L1i cache: 32K
L2 cache: 2048K
L3 cache: 36864K
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
* PyTorch installed with: pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
* Python version: 3.8
@wangxingjun778
|
closed
|
2024-07-04T05:26:54Z
|
2024-08-11T01:58:34Z
|
https://github.com/modelscope/modelscope/issues/900
|
[
"Stale"
] |
tianqiong123
| 3
|
jupyterhub/repo2docker
|
jupyter
| 1,289
|
Container engine initialization error, unclear why
|
<!-- Thank you for contributing. These HTML commments will not render in the issue, but you can delete them once you've read them if you prefer! -->
### Bug description
<!-- Use this section to clearly and concisely describe the bug. -->
I ran this command: `jupyter-repo2docker .` and apparently it cannot find the docker version. Even when I run it in debug mode I get no real info (`[Repo2Docker] Looking for repo2docker_config in #...`), and when I search for the string `repo2docker_config` in the documentation and code, nothing really comes up.
#### Expected behaviour
<!-- Tell us what you thought would happen. -->
I thought something would be built, because I am following this documentation: https://repo2docker.readthedocs.io/en/latest/usage.html#calling-repo2docker
#### Actual behaviour
```shell
Container engine initialization error: ('Check if docker is running on the host.', DockerException("Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))"))
```
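That `DockerException(... FileNotFoundError ...)` is the Docker SDK failing to find the engine socket rather than anything repo2docker-specific. A small check using the same SDK (the per-user Docker Desktop socket path below is an assumption for macOS setups):
```python
import os

import docker  # the SDK repo2docker uses to talk to the container engine

# If /var/run/docker.sock is missing, try Docker Desktop's per-user socket
# (path is an assumption for recent Docker Desktop on macOS).
user_socket = os.path.expanduser("~/.docker/run/docker.sock")
if not os.path.exists("/var/run/docker.sock") and os.path.exists(user_socket):
    os.environ["DOCKER_HOST"] = f"unix://{user_socket}"

client = docker.from_env()
print(client.version()["Version"])  # raises DockerException if the engine is unreachable
```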
### How to reproduce
1. Buy a mac
2. Install brew
3. Install pyenv
4. Do `pyenv install 3.11.3`
5. Do `pyenv global 3.11.3`
6. Do `pip install pipx`
7. `pipx install jupyter-repo2docker`
8. Navigate to a directory where you want to build a docker image.
9. Install Docker, `Docker version 24.0.2, build cb74dfc`
10. Do `jupyter-repo2docker .`
### Your personal set up
<!-- Tell us a little about the system you're using. You can see the guidelines for setting up and reporting this information at https://repo2docker.readthedocs.io/en/latest/contributing/contributing.html#setting-up-for-local-development. -->
- OS: OSX 13.3.1
- Docker version: Docker version 24.0.2, build cb74dfc<!-- Run this command to get your version. -->
- repo2docker version `2023.06.0` <!-- Run this command to get your version. -->
#### What happens when I run `jupyter-repo2docker --no-build .`
```Dockerfile
FROM docker.io/library/buildpack-deps:bionic
# Avoid prompts from apt
ENV DEBIAN_FRONTEND=noninteractive
# Set up locales properly
RUN apt-get -qq update && \
apt-get -qq install --yes --no-install-recommends locales > /dev/null && \
apt-get -qq purge && \
apt-get -qq clean && \
rm -rf /var/lib/apt/lists/*
RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && \
locale-gen
ENV LC_ALL=en_US.UTF-8 \
LANG=en_US.UTF-8 \
LANGUAGE=en_US.UTF-8
# Use bash as default shell, rather than sh
ENV SHELL=/bin/bash
# Set up user
ARG NB_USER
ARG NB_UID
ENV USER=${NB_USER} \
HOME=/home/${NB_USER}
RUN groupadd \
--gid ${NB_UID} \
${NB_USER} && \
useradd \
--comment "Default user" \
--create-home \
--gid ${NB_UID} \
--no-log-init \
--shell /bin/bash \
--uid ${NB_UID} \
${NB_USER}
# Base package installs are not super interesting to users, so hide their outputs
# If install fails for some reason, errors will still be printed
RUN apt-get -qq update && \
apt-get -qq install --yes --no-install-recommends \
gettext-base \
less \
unzip \
> /dev/null && \
apt-get -qq purge && \
apt-get -qq clean && \
rm -rf /var/lib/apt/lists/*
EXPOSE 8888
# Environment variables required for build
ENV APP_BASE=/srv
ENV CONDA_DIR=${APP_BASE}/conda
ENV NB_PYTHON_PREFIX=${CONDA_DIR}/envs/notebook
ENV NPM_DIR=${APP_BASE}/npm
ENV NPM_CONFIG_GLOBALCONFIG=${NPM_DIR}/npmrc
ENV NB_ENVIRONMENT_FILE=/tmp/env/environment.lock
ENV MAMBA_ROOT_PREFIX=${CONDA_DIR}
ENV MAMBA_EXE=${CONDA_DIR}/bin/mamba
ENV CONDA_PLATFORM=linux-aarch64
ENV KERNEL_PYTHON_PREFIX=${NB_PYTHON_PREFIX}
# Special case PATH
ENV PATH=${NB_PYTHON_PREFIX}/bin:${CONDA_DIR}/bin:${NPM_DIR}/bin:${PATH}
# If scripts required during build are present, copy them
COPY --chown=501:501 build_script_files/-2fusers-2fsteven-2f-2elocal-2fpipx-2fvenvs-2fjupyter-2drepo2docker-2flib-2fpython3-2e11-2fsite-2dpackages-2frepo2docker-2fbuildpacks-2fconda-2factivate-2dconda-2esh-26bc03 /etc/profile.d/activate-conda.sh
COPY --chown=501:501 build_script_files/-2fusers-2fsteven-2f-2elocal-2fpipx-2fvenvs-2fjupyter-2drepo2docker-2flib-2fpython3-2e11-2fsite-2dpackages-2frepo2docker-2fbuildpacks-2fconda-2fenvironment-2epy-2d3-2e10-2dlinux-2daarch64-2elock-9c3286 /tmp/env/environment.lock
COPY --chown=501:501 build_script_files/-2fusers-2fsteven-2f-2elocal-2fpipx-2fvenvs-2fjupyter-2drepo2docker-2flib-2fpython3-2e11-2fsite-2dpackages-2frepo2docker-2fbuildpacks-2fconda-2finstall-2dbase-2denv-2ebash-a3eaa9 /tmp/install-base-env.bash
RUN TIMEFORMAT='time: %3R' \
bash -c 'time /tmp/install-base-env.bash' && \
rm -rf /tmp/install-base-env.bash /tmp/env
RUN mkdir -p ${NPM_DIR} && \
chown -R ${NB_USER}:${NB_USER} ${NPM_DIR}
# ensure root user after build scripts
USER root
# Allow target path repo is cloned to be configurable
ARG REPO_DIR=${HOME}
ENV REPO_DIR=${REPO_DIR}
# Create a folder and grant the user permissions if it doesn't exist
RUN if [ ! -d "${REPO_DIR}" ]; then \
/usr/bin/install -o ${NB_USER} -g ${NB_USER} -d "${REPO_DIR}"; \
fi
WORKDIR ${REPO_DIR}
RUN chown ${NB_USER}:${NB_USER} ${REPO_DIR}
# We want to allow two things:
# 1. If there's a .local/bin directory in the repo, things there
# should automatically be in path
# 2. postBuild and users should be able to install things into ~/.local/bin
# and have them be automatically in path
#
# The XDG standard suggests ~/.local/bin as the path for local user-specific
# installs. See https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html
ENV PATH=${HOME}/.local/bin:${REPO_DIR}/.local/bin:${PATH}
# The rest of the environment
ENV CONDA_DEFAULT_ENV=${KERNEL_PYTHON_PREFIX}
# Run pre-assemble scripts! These are instructions that depend on the content
# of the repository but don't access any files in the repository. By executing
# them before copying the repository itself we can cache these steps. For
# example installing APT packages.
# If scripts required during build are present, copy them
COPY --chown=501:501 src/binder/environment.yml ${REPO_DIR}/binder/environment.yml
USER ${NB_USER}
RUN TIMEFORMAT='time: %3R' \
bash -c 'time ${MAMBA_EXE} env update -p ${NB_PYTHON_PREFIX} --file "binder/environment.yml" && \
time ${MAMBA_EXE} clean --all -f -y && \
${MAMBA_EXE} list -p ${NB_PYTHON_PREFIX} \
'
# ensure root user after preassemble scripts
USER root
# Copy stuff.
COPY --chown=501:501 src/ ${REPO_DIR}/
# Run assemble scripts! These will actually turn the specification
# in the repository into an image.
# Container image Labels!
# Put these at the end, since we don't want to rebuild everything
# when these change! Did I mention I hate Dockerfile cache semantics?
LABEL repo2docker.ref="None"
LABEL repo2docker.repo="local"
LABEL repo2docker.version="2023.06.0"
# We always want containers to run as non-root
USER ${NB_USER}
# Make sure that postBuild scripts are marked executable before executing them
RUN chmod +x binder/postBuild
RUN ./binder/postBuild
# Add start script
# Add entrypoint
ENV PYTHONUNBUFFERED=1
COPY /python3-login /usr/local/bin/python3-login
COPY /repo2docker-entrypoint /usr/local/bin/repo2docker-entrypoint
ENTRYPOINT ["/usr/local/bin/repo2docker-entrypoint"]
# Specify the default command to run
CMD ["jupyter", "notebook", "--ip", "0.0.0.0"]
```
|
closed
|
2023-06-14T21:46:01Z
|
2024-09-16T15:22:03Z
|
https://github.com/jupyterhub/repo2docker/issues/1289
|
[] |
startakovsky
| 13
|
neuml/txtai
|
nlp
| 138
|
Add korean translation to README.md
|
Hi! I'm South Korean and I want to help you translate README.md into Korean.
Is it okay to translate your README.md?
Thank you.
|
closed
|
2021-11-07T15:45:15Z
|
2021-11-14T12:53:51Z
|
https://github.com/neuml/txtai/issues/138
|
[] |
0206pdh
| 1
|
ccxt/ccxt
|
api
| 25,285
|
XT.com Futures - pagination in fetch_ohlcv
|
### Operating System
Windows
### Programming Languages
Python
### CCXT Version
4.4.59
### Description
The current implementation of `fetch_ohlcv` for XT.com does not support pagination (`params.paginate`) unlike most exchanges. Would it be possible to add it?
https://github.com/ccxt/ccxt/blob/99fc65ec7aa5b8b88c70a20de5806850236216d8/python/ccxt/async_support/xt.py#L1385-L1398
### Code
```
```
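For reference, a hedged sketch of what the requested call would look like once pagination is wired up, using ccxt's generic `params["paginate"]` convention (the symbol is illustrative, and per this issue xt does not honour the flag yet):
```python
import asyncio

import ccxt.async_support as ccxt

async def main() -> None:
    xt = ccxt.xt()
    try:
        candles = await xt.fetch_ohlcv(
            "BTC/USDT:USDT",            # illustrative symbol
            timeframe="1m",
            limit=1000,
            params={"paginate": True},  # the generic pagination switch other exchanges honour
        )
        print(len(candles))
    finally:
        await xt.close()

asyncio.run(main())
```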
|
open
|
2025-02-15T03:55:50Z
|
2025-02-19T10:59:10Z
|
https://github.com/ccxt/ccxt/issues/25285
|
[] |
krasnyd
| 3
|
2noise/ChatTTS
|
python
| 93
|
Numbers, punctuation marks, and letters all come out wrong
|
Numbers, punctuation marks, and letters all come out wrong.
|
closed
|
2024-05-30T09:51:14Z
|
2024-08-04T04:02:16Z
|
https://github.com/2noise/ChatTTS/issues/93
|
[
"stale"
] |
weiyi88
| 5
|
sinaptik-ai/pandas-ai
|
pandas
| 917
|
Streamlit UI example for pandaAI
|
### 🚀 The feature
Add an example UI for PandasAI. I can share the source of my own UI: https://pva-ask-my-data-eqwdqswwf.streamlit.app/. Under the hood it uses PandasAI.
### Motivation, pitch
Maybe it can be useful for other people who use pandasAI
### Alternatives
_No response_
### Additional context
_No response_
|
closed
|
2024-01-31T15:35:40Z
|
2024-03-16T16:20:48Z
|
https://github.com/sinaptik-ai/pandas-ai/issues/917
|
[] |
PavelAgurov
| 3
|
plotly/dash-core-components
|
dash
| 626
|
Feature request: Rangeslider and slider to support datetime format
|
I've done some testing, and as far as I can see the sliders in Dash don't support datetime formats, only numerical formats. This would be great to have.
It would be especially handy when working with time series data in pandas.
https://dash.plot.ly/dash-core-components/slider
https://dash.plot.ly/dash-core-components/rangeslider
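Until native datetime support exists, a common workaround is to drive the slider with epoch seconds and render readable marks yourself. A minimal sketch of that pattern (modern `dash` import style assumed; the date range is illustrative):
```python
import pandas as pd
from dash import Dash, dcc, html

app = Dash(__name__)

# Numeric-only workaround: epoch seconds for min/max/value, human-readable marks.
dates = pd.date_range("2018-01-01", "2018-12-01", freq="MS")
epochs = dates.astype("int64") // 10**9

app.layout = html.Div([
    dcc.RangeSlider(
        id="date-range",
        min=int(epochs[0]),
        max=int(epochs[-1]),
        value=[int(epochs[0]), int(epochs[-1])],
        marks={int(e): d.strftime("%b %Y") for e, d in zip(epochs, dates)},
        step=None,  # snap to the marks only
    ),
])

if __name__ == "__main__":
    app.run_server(debug=True)
```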
|
open
|
2018-11-27T15:40:55Z
|
2019-08-30T16:13:44Z
|
https://github.com/plotly/dash-core-components/issues/626
|
[] |
Judochopalots
| 1
|
dsdanielpark/Bard-API
|
nlp
| 57
|
Response error
|
Response code not 200. Response Status is 302
|
closed
|
2023-06-08T04:44:37Z
|
2023-06-08T12:26:00Z
|
https://github.com/dsdanielpark/Bard-API/issues/57
|
[] |
Ridoy302583
| 1
|
seleniumbase/SeleniumBase
|
web-scraping
| 3,569
|
Setting the `lang` arg via the `cdp_driver` isn't taking effect
|
## Setting the `lang` arg via the `cdp_driver` isn't taking effect
https://github.com/seleniumbase/SeleniumBase/blob/5d732a412f1a1c5da10345bdb29f160182d00450/seleniumbase/undetected/cdp_driver/cdp_util.py#L235
This is the pure CDP Mode equivalent of setting the `locale` / `locale_code`.
|
closed
|
2025-02-26T05:35:48Z
|
2025-02-26T22:43:23Z
|
https://github.com/seleniumbase/SeleniumBase/issues/3569
|
[
"bug",
"UC Mode / CDP Mode"
] |
mdmintz
| 1
|
jupyterhub/jupyterhub-deploy-docker
|
jupyter
| 91
|
'make build' fails with Conda
|
I followed all the configuration steps, and then it fails in the build stage with the error ModuleNotFoundError: No module named 'conda'.
Below is the full error trace,
make build
docker-compose build
hub-db uses an image, skipping
Building hub
Step 1/9 : ARG JUPYTERHUB_VERSION
Step 2/9 : FROM jupyterhub/jupyterhub-onbuild:$JUPYTERHUB_VERSION
# Executing 1 build trigger
---> Using cache
---> b6927b9a433f
Step 3/9 : RUN /opt/conda/bin/conda install -yq psycopg2=2.7 && /opt/conda/bin/conda clean -tipsy && /opt/conda/bin/pip install --no-cache-dir oauthenticator==0.8.* dockerspawner==0.9.*
---> Running in 794099d91a50
Solving environment: ...working... done
## Package Plan ##
environment location: /opt/conda
added / updated specs:
- psycopg2=2.7
Preparing transaction: ...working... done
Verifying transaction: ...working... done
Executing transaction: ...working... done
Traceback (most recent call last):
File "/opt/conda/bin/conda", line 7, in <module>
from conda.cli import main
ModuleNotFoundError: No module named 'conda'
ERROR: Service 'hub' failed to build: The command '/bin/sh -c /opt/conda/bin/conda install -yq psycopg2=2.7 && /opt/conda/bin/conda clean -tipsy && /opt/conda/bin/pip install --no-cache-dir oauthenticator==0.8.* dockerspawner==0.9.*' returned a non-zero code: 1
|
closed
|
2019-08-26T12:00:41Z
|
2021-04-27T10:50:58Z
|
https://github.com/jupyterhub/jupyterhub-deploy-docker/issues/91
|
[] |
karthi4k
| 9
|
clovaai/donut
|
computer-vision
| 115
|
Train script hangs with no errors
|
```bash
root@spot-a100-1670595978:/app# python3 train.py --config config/train_cord.yaml --pretrained_model_name_or_path
"naver-clova-ix/donut-base" --dataset_name_or_paths "['/app/jsonl']" --exp_version "abay_experiment"
resume_from_checkpoint_path: None
result_path: ./result
pretrained_model_name_or_path: naver-clova-ix/donut-base
dataset_name_or_paths:
- /app/jsonl
sort_json_key: False
train_batch_sizes:
- 8
val_batch_sizes:
- 1
input_size:
- 1280
- 960
max_length: 768
align_long_axis: False
num_nodes: 1
seed: 2022
lr: 3e-05
warmup_steps: 300
num_training_samples_per_epoch: 800
max_epochs: 30
max_steps: -1
num_workers: 8
val_check_interval: 1.0
check_val_every_n_epoch: 3
gradient_clip_val: 1.0
verbose: True
exp_name: train_cord
exp_version: abay_experiment
Config is saved at result/train_cord/abay_experiment/config.yaml
/usr/local/lib/python3.8/dist-packages/pytorch_lightning/utilities/seed.py:48: LightningDeprecationWarning: `pytorch_lightning.utilities.seed.seed_everything` has been deprecated in v1.8.0 and will be removed in v1.10.0. Please use `lightning_lite.utilities.seed.seed_everything` instead.
rank_zero_deprecation(
Global seed set to 2022
/usr/local/lib/python3.8/dist-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3190.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Some weights of DonutModel were not initialized from the model checkpoint at naver-clova-ix/donut-base and are newly initialized because the shapes did not match:
- encoder.model.layers.0.blocks.1.attn_mask: found shape torch.Size([3072, 100, 100]) in the checkpoint and torch.Size([768, 100, 100]) in the model instantiated
- encoder.model.layers.1.blocks.1.attn_mask: found shape torch.Size([768, 100, 100]) in the checkpoint and torch.Size([192, 100, 100]) in the model instantiated
- encoder.model.layers.2.blocks.1.attn_mask: found shape torch.Size([192, 100, 100]) in the checkpoint and torch.Size([48, 100, 100]) in the model instantiated
- encoder.model.layers.2.blocks.3.attn_mask: found shape torch.Size([192, 100, 100]) in the checkpoint and torch.Size([48, 100, 100]) in the model instantiated
- encoder.model.layers.2.blocks.5.attn_mask: found shape torch.Size([192, 100, 100]) in the checkpoint and torch.Size([48, 100, 100]) in the model instantiated
- encoder.model.layers.2.blocks.7.attn_mask: found shape torch.Size([192, 100, 100]) in the checkpoint and torch.Size([48, 100, 100]) in the model instantiated
- encoder.model.layers.2.blocks.9.attn_mask: found shape torch.Size([192, 100, 100]) in the checkpoint and torch.Size([48, 100, 100]) in the model instantiated
- encoder.model.layers.2.blocks.11.attn_mask: found shape torch.Size([192, 100, 100]) in the checkpoint and torch.Size([48, 100, 100]) in the model instantiated
- encoder.model.layers.2.blocks.13.attn_mask: found shape torch.Size([192, 100, 100]) in the checkpoint and torch.Size([48, 100, 100]) in the model instantiated
- encoder.model.layers.3.blocks.1.attn_mask: found shape torch.Size([48, 100, 100]) in the checkpoint and torch.Size([12, 100, 100]) in the model instantiated
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Resolving data files: 100%|████████████████████████████████████████████████████████████| 16197/16197 [00:00<00:00, 27669.39it/s]
Resolving data files: 100%|██████████████████████████████████████████████████████████████| 2001/2001 [00:00<00:00, 13913.31it/s]
Resolving data files: 100%|██████████████████████████████████████████████████████████████| 1801/1801 [00:00<00:00, 14983.16it/s]
Using custom data configuration jsonl-9ae412abd68dd439
Found cached dataset imagefolder (/root/.cache/huggingface/datasets/imagefolder/jsonl-9ae412abd68dd439/0.0.0/37fbb85cc714a338bea574ac6c7d0b5be5aff46c1862c1989b20e0771199e93f)
```
Just hangs here, nothing happens
|
closed
|
2022-12-28T02:52:05Z
|
2024-12-16T10:05:42Z
|
https://github.com/clovaai/donut/issues/115
|
[] |
abaybektursun
| 8
|
ranaroussi/yfinance
|
pandas
| 1,609
|
KeyError: shortName
|
My program needs to get the name of a stock. In yfinance this is done by reading the shortName value from the info dictionary. This worked up to version 0.2.22. However, after updating to 0.2.24 (because of missing values), I am getting a KeyError for shortName. I am guessing that after the update, shortName is no longer included in info.
Is this a global issue?
How do I fix this?
Python version is 3.10.6
|
closed
|
2023-07-16T08:21:43Z
|
2023-07-28T17:12:38Z
|
https://github.com/ranaroussi/yfinance/issues/1609
|
[] |
vismoh2010
| 5
|
Yorko/mlcourse.ai
|
scikit-learn
| 370
|
locally built docker image doesn't work
|
I've created the docker image locally using docker image build, and then tried to run it like this:
`python run_docker_jupyter.py -t mlc_local`
got this:
```
Running command
docker run -it --rm -p 5022:22 -p 4545:4545 -v "/home/egor/private/mlcourse.ai":/notebooks -w /notebooks mlc_local jupyter
Command: jupyter
[I 12:44:17.454 NotebookApp] Writing notebook server cookie secret to /root/.local/share/jupyter/runtime/notebook_cookie_secret
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/traitlets/traitlets.py", line 528, in get
value = obj._trait_values[self.name]
KeyError: 'allow_remote_access'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/notebook/notebookapp.py", line 869, in _default_allow_remote
addr = ipaddress.ip_address(self.ip)
File "/usr/lib/python3.5/ipaddress.py", line 54, in ip_address
address)
ValueError: '' does not appear to be an IPv4 or IPv6 address
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/jupyter-notebook", line 11, in <module>
sys.exit(main())
File "/usr/local/lib/python3.5/dist-packages/jupyter_core/application.py", line 266, in launch_instance
return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/traitlets/config/application.py", line 657, in launch_instance
app.initialize(argv)
File "<decorator-gen-7>", line 2, in initialize
File "/usr/local/lib/python3.5/dist-packages/traitlets/config/application.py", line 87, in catch_config_error
return method(app, *args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/notebook/notebookapp.py", line 1629, in initialize
self.init_webapp()
File "/usr/local/lib/python3.5/dist-packages/notebook/notebookapp.py", line 1379, in init_webapp
self.jinja_environment_options,
File "/usr/local/lib/python3.5/dist-packages/notebook/notebookapp.py", line 158, in __init__
default_url, settings_overrides, jinja_env_options)
File "/usr/local/lib/python3.5/dist-packages/notebook/notebookapp.py", line 251, in init_settings
allow_remote_access=jupyter_app.allow_remote_access,
File "/usr/local/lib/python3.5/dist-packages/traitlets/traitlets.py", line 556, in __get__
return self.get(obj, cls)
File "/usr/local/lib/python3.5/dist-packages/traitlets/traitlets.py", line 535, in get
value = self._validate(obj, dynamic_default())
File "/usr/local/lib/python3.5/dist-packages/notebook/notebookapp.py", line 872, in _default_allow_remote
for info in socket.getaddrinfo(self.ip, self.port, 0, socket.SOCK_STREAM):
File "/usr/lib/python3.5/socket.py", line 732, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -5] No address associated with hostname
```
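The traceback bottoms out in notebook >= 5.6 computing `allow_remote_access` from an empty `ip`, which triggers the `getaddrinfo('')` call. A sketch of a `jupyter_notebook_config.py` that sidesteps it, under the assumption that the image picks up a config mounted under `/root/.jupyter/`:
```python
# jupyter_notebook_config.py -- sketch only; assumes the container reads it from /root/.jupyter/
c = get_config()  # noqa: F821 -- injected by Jupyter when it loads this file
c.NotebookApp.ip = "0.0.0.0"
c.NotebookApp.allow_remote_access = True
```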
|
closed
|
2018-10-10T12:50:06Z
|
2018-10-11T13:59:36Z
|
https://github.com/Yorko/mlcourse.ai/issues/370
|
[
"enhancement"
] |
eignatenkov
| 7
|
matterport/Mask_RCNN
|
tensorflow
| 2,815
|
Differences in results for this model on TF2.0CPU and TF2.7 GPU
|
Hi,
I ran this model on a custom dataset with TF2.0 CPU and with TF2.7 GPU. I got good results on the test data for object detection with TF2.0, but the TF2.7 GPU results are totally bad: not a single object was identified after the same number of epochs. Is it because the Mask R-CNN model has not been ported to TF2.7 yet?
|
open
|
2022-04-22T00:07:16Z
|
2022-11-10T14:06:55Z
|
https://github.com/matterport/Mask_RCNN/issues/2815
|
[] |
suraj123
| 3
|
axnsan12/drf-yasg
|
rest-api
| 518
|
Add type hints
|
Hi! I started the https://github.com/intgr/drf-yasg-stubs repository since drf-yasg was the only major component in my project that did not include type stubs. For now it's still quite incomplete (some of it is still auto-generated files with `Any` types).
I'm wondering what are your opinions on type stubs? [PEP 561](https://www.python.org/dev/peps/pep-0561/#specification) offers three ways to distribute stubs.
Option 3 is what I'm doing right now, with a separate package.
Clearly option 1 is out of the window if you want to retain Python 2 support. Although given that its support ends in one week (and that Django already dropped it ages ago), perhaps dropping Python 2 altogether is due?
Option 2 would be to merge the `.pyi` files that I've written into this drf-yasg repository. This might be the reasonable first step if you don't want to drop Python 2 yet.
I'm willing to submit pull requests for it and do any other work that you deem necessary.
|
open
|
2019-12-24T11:37:34Z
|
2025-03-07T12:15:19Z
|
https://github.com/axnsan12/drf-yasg/issues/518
|
[
"triage"
] |
intgr
| 8
|
widgetti/solara
|
flask
| 131
|
test/ci issue: coverage slows down some tests
|
After 172cdefeabd88f166d451873ea4582589a4cbb9b the test_memoize_hook fails more regularly. We've seen it fail before, but now it's almost 90%. Therefore we disabled coverage in CI for now.
If we want to enable it again, 172cdefeabd88f166d451873ea4582589a4cbb9b might give a clue.
This fails about 50% of the time on my osx laptop.
```
$ py.test --cov=solara tests/unit/cache_test.py
```
|
open
|
2023-05-31T21:25:06Z
|
2023-06-01T06:43:48Z
|
https://github.com/widgetti/solara/issues/131
|
[
"help wanted"
] |
maartenbreddels
| 1
|
Kludex/mangum
|
asyncio
| 151
|
Slash at end of endpoint
|
I've updated to version `0.10.0` and suddenly all my endpoints on my custom domain no longer work. I had to add a `/` at the end of the URL to get them working again.
Is this normal or a bug?
|
closed
|
2020-12-04T21:43:35Z
|
2022-12-28T13:07:37Z
|
https://github.com/Kludex/mangum/issues/151
|
[
"more info needed"
] |
sergiors
| 3
|
aiortc/aioquic
|
asyncio
| 6
|
Understanding packet header construction
|
Hi,
I am working on a project, which requires modifying packet headers, especially for the ack packets.
Can you please help me understand the code flow through which I can modify the headers and add custom key-value pairs?
Thanks.
|
closed
|
2019-05-28T11:30:06Z
|
2019-06-01T02:02:53Z
|
https://github.com/aiortc/aioquic/issues/6
|
[] |
massvoice
| 2
|
openapi-generators/openapi-python-client
|
fastapi
| 1,203
|
Read-only `src_dict` dictionary in `from_dict` methods should be typed as `Mapping[str, Any]`
|
**Describe the bug**
Generated classmethod `from_dict`
https://github.com/openapi-generators/openapi-python-client/blob/5cfe4e1d594951725844ea470fc9d61f40c08093/openapi_python_client/templates/model.py.jinja#L131-L132
should probably be annotated as
```python
@classmethod
def from_dict(cls: type[T], src_dict: Mapping[str, Any]) -> T:
```
i.e. using `typing.Mapping`.
This marks the input dictionary as immutable and thus mypy is OK if you pass it a `TypedDict`. Otherwise, it correctly complains about type-incompatibility because the method _could_ be removing or adding keys from the dictionary.
**OpenAPI Spec File**
NA.
**Additional context**
- https://github.com/python/mypy/issues/13122#issuecomment-1184828983
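A tiny illustration of why the read-only annotation matters (not generated client code, just the mypy behaviour described above):
```python
from __future__ import annotations

from typing import Any, Mapping, TypedDict

class Payload(TypedDict):
    name: str

def takes_mapping(src: Mapping[str, Any]) -> None: ...
def takes_dict(src: dict[str, Any]) -> None: ...

payload: Payload = {"name": "example"}
takes_mapping(payload)  # OK: Mapping is read-only, so a TypedDict is compatible
takes_dict(payload)     # mypy error: dict[str, Any] could add or remove arbitrary keys
```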
|
closed
|
2025-02-08T21:21:44Z
|
2025-03-15T19:00:56Z
|
https://github.com/openapi-generators/openapi-python-client/issues/1203
|
[] |
edgarrmondragon
| 0
|
proplot-dev/proplot
|
data-visualization
| 288
|
Manually specify `title` and `abc` coordinate positions
|
<!-- Thanks for helping us make proplot a better package! If this is a bug report, please use the template provided below. If this is a feature request, you can delete the template text (just try to be descriptive with your request). -->
### Description
Hi, is it possible to make the abc labels slightly offset to the left from the axis? This would probably be a negative position.
<img width="306" alt="image" src="https://user-images.githubusercontent.com/8291800/134991597-a6722340-8aab-4878-a345-928175343d40.png">
I was hoping to have the (b) moved slightly left so that I can center the title without the two texts crashing into each other.
I tried the following after having all of my "imshow" and other formatting code run:
```python
ax = axes[2]
aobj = ax._title_dict['abc']
print(aobj.get_position()) # prints (0, 1.0)
# no effect
aobj.set_x(-.25)
# no effect
abc = ax.get_children()[1]
abc.set_position((-.25, 1.0))
```
I couldn't figure out what was running to overwrite these positions, but I assume it's something internal to proplot to make the layout nice and orderly.
### Proplot version
```
>>> import matplotlib; print(matplotlib.__version__); import proplot; print(proplot.version)
3.3.0
0.9.1
```
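A workaround sketch that bypasses proplot's managed `abc` text entirely and places a plain matplotlib label in axes coordinates (assumption: a manual `ax.text` is left alone by the layout pass, unlike the entries in `_title_dict`):
```python
import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 3, figsize=(9, 3))
for label, ax in zip("abc", axes):
    ax.set_title("centered title")
    # Negative x in axes coordinates pushes the label left of the spine.
    ax.text(-0.25, 1.02, f"({label})", transform=ax.transAxes,
            ha="left", va="bottom", fontweight="bold")
plt.show()
```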
|
open
|
2021-09-27T22:08:14Z
|
2022-07-08T15:54:21Z
|
https://github.com/proplot-dev/proplot/issues/288
|
[
"feature"
] |
scottstanie
| 6
|
pinry/pinry
|
django
| 156
|
problem uploading image
|
I tried to set up pinry myself using Apache and mod_wsgi. I got the server running but couldn't post any pins; I kept getting a 'problem saving image' error. Looking at the requests, I could see that any call to /api/v2/... was returning a 403 error.
Thinking that maybe it was my lack of knowledge of django and how to set it up I blew everything away, re-cloned it and then used the docker image. I still get the same issue - 'problem saving image'. As with the original instance all requests to /api/v2/... are returning a 403 forbidden error code.
|
closed
|
2019-10-03T04:03:16Z
|
2019-12-08T19:19:30Z
|
https://github.com/pinry/pinry/issues/156
|
[] |
t1v0
| 2
|
sinaptik-ai/pandas-ai
|
data-visualization
| 1,291
|
LLM Call response of JudgeAgent not always returning <Yes> or <No>
|
### System Info
macos = 14.5
python = 3.10.13
pandasai = 2.2.12
### 🐛 Describe the bug
Using AzureOpenAI agent in combination with JudgeAgent.
```
llm = AzureOpenAI(...)
judge = JudgeAgent(config={"llm": llm, "verbose": True})
agent = Agent("filepath", config={"llm": llm, "verbose": True}, judge=judge)
```
The logs show that the following is added to the prompt by the JudgeAgent:
```
Reason step by step and at the end answer:
1. Explain what the code does
2. Explain what the user query asks for
3. Strictly compare the query with the code that is generated
Always return <Yes> or <No> if exactly meets the requirements
```
But the actual LLM responses do not contain `<Yes>` or `<No>` and only answer questions 1, 2 and 3, so https://github.com/Sinaptik-AI/pandas-ai/blob/e011e8ffdc8a2cd88db07c4440f331540a175648/pandasai/ee/agents/judge_agent/pipeline/llm_call.py#L44-L50 throws a `pandasai.exceptions.InvalidOutputValueMismatch: Invalid response of LLM Call`.
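A sketch of a more tolerant verdict check than an exact `<Yes>`/`<No>` match (this is not the actual pandasai implementation, just the kind of parsing that would accept the responses described above):
```python
import re
from typing import Optional

def extract_verdict(response: str) -> Optional[bool]:
    """Return True for a yes verdict, False for no, None if neither is found.

    Uses the last yes/no token so the step-by-step explanation earlier in the
    response does not shadow the final answer.
    """
    tokens = re.findall(r"\b(yes|no)\b", response, flags=re.IGNORECASE)
    if not tokens:
        return None
    return tokens[-1].lower() == "yes"

print(extract_verdict("3. The code strictly matches the query. Final answer: Yes."))  # True
```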
|
closed
|
2024-07-24T13:18:52Z
|
2024-11-04T16:08:30Z
|
https://github.com/sinaptik-ai/pandas-ai/issues/1291
|
[
"bug"
] |
sschrijver-pon
| 2
|
FlareSolverr/FlareSolverr
|
api
| 1,141
|
[yggtorrent] (updating) The cookies provided by FlareSolverr are not valid
|
### Have you checked our README?
- [X] I have checked the README
### Have you followed our Troubleshooting?
- [X] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Environment
```markdown
- FlareSolverr version: 3.3.16
- Last working FlareSolverr version:
- Operating system:
- Are you using Docker: [yes/no] no
- FlareSolverr User-Agent (see log traces or / endpoint):Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36
- Are you using a VPN: [yes/no] no
- Are you using a Proxy: [yes/no] no
- Are you using Captcha Solver: [yes/no] no
- If using captcha solver, which one:
- URL to test this issue:
```
### Description
This happens whenever I try resolving a captcha on Jackett.
### Logged Error Messages
```text
An error occurred while updating this indexer
The cookies provided by FlareSolverr are not valid
```
### Screenshots
_No response_
|
closed
|
2024-04-02T18:37:42Z
|
2024-04-02T19:14:35Z
|
https://github.com/FlareSolverr/FlareSolverr/issues/1141
|
[
"more information needed"
] |
daniwalter001
| 2
|
ray-project/ray
|
machine-learning
| 50,827
|
CI test linux://python/ray/data:test_transform_pyarrow is flaky
|
CI test **linux://python/ray/data:test_transform_pyarrow** is flaky. Recent failures:
- https://buildkite.com/ray-project/postmerge/builds/8496#01952c44-0d09-4aa4-b1f3-e432b7ebfca1
- https://buildkite.com/ray-project/postmerge/builds/8495#01952b30-22c6-4a0f-9857-59a7988f67d8
- https://buildkite.com/ray-project/postmerge/builds/8491#01952b00-e020-4d4e-b46a-209c0b3dbf5b
- https://buildkite.com/ray-project/postmerge/builds/8491#01952ad9-1225-449b-84d0-29cfcc6a048c
DataCaseName-linux://python/ray/data:test_transform_pyarrow-END
Managed by OSS Test Policy
|
closed
|
2025-02-22T06:46:30Z
|
2025-03-04T09:29:49Z
|
https://github.com/ray-project/ray/issues/50827
|
[
"bug",
"triage",
"data",
"flaky-tracker",
"ray-test-bot",
"ci-test",
"weekly-release-blocker",
"stability"
] |
can-anyscale
| 31
|
pywinauto/pywinauto
|
automation
| 680
|
Updating table cell with pywinauto
|
Hello all,
I am currently using pywinauto to step through a tree and print the contents of the table in each child. I would like to know if I can use pywinauto to push in a text file that would go and update the changed cells.

I attached a screenshot of what the output looks like. I was wondering how I can change the "Yes" to a "No", for example, and push that back in to update the table. A sketch of the kind of approach I mean is below.
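A minimal sketch with the UIA backend (the window title, control types and cell path here are assumptions — `print_control_identifiers()` or inspect.exe would show the real ones):
```python
# Minimal sketch, assuming the UIA backend and made-up window/control names.
from pywinauto import Application

app = Application(backend="uia").connect(title_re=".*Your App Title.*")
main = app.window(title_re=".*Your App Title.*")

# Locate the table and the cell currently showing "Yes".
table = main.child_window(control_type="Table")
cell = table.child_window(title="Yes", control_type="DataItem", found_index=0)

# Many grids allow in-place editing via double-click + typing.
cell.click_input(double=True)
cell.type_keys("No", with_spaces=True)
cell.type_keys("{ENTER}")
```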
|
open
|
2019-03-01T21:48:41Z
|
2019-03-24T12:43:37Z
|
https://github.com/pywinauto/pywinauto/issues/680
|
[
"question"
] |
ab3linc
| 4
|
fastapi/sqlmodel
|
fastapi
| 130
|
How to preload relationship attributes to access outside of session?
|
### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
# Data base set up. Copied from:
# https://sqlmodel.tiangolo.com/tutorial/relationship-attributes/back-populates/
from typing import List, Optional
from sqlmodel import Field, Relationship, Session, SQLModel, create_engine, select
class Team(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    name: str
    headquarters: str
    heroes: List["Hero"] = Relationship(back_populates="team")


class Hero(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    name: str
    secret_name: str
    age: Optional[int] = None
    team_id: Optional[int] = Field(default=None, foreign_key="team.id")
    team: Optional[Team] = Relationship(back_populates="heroes")


sqlite_file_name = "database.db"
sqlite_url = f"sqlite:///{sqlite_file_name}"
engine = create_engine(sqlite_url, echo=False)


def create_db_and_tables():
    SQLModel.metadata.create_all(engine)


def create_heroes():
    with Session(engine) as session:
        team_preventers = Team(name="Preventers", headquarters="Sharp Tower")
        team_z_force = Team(name="Z-Force", headquarters="Sister Margaret’s Bar")
        hero_deadpond = Hero(
            name="Deadpond", secret_name="Dive Wilson", team=team_z_force
        )
        hero_rusty_man = Hero(
            name="Rusty-Man", secret_name="Tommy Sharp", age=48, team=team_preventers
        )
        hero_spider_boy = Hero(name="Spider-Boy", secret_name="Pedro Parqueador")
        session.add(hero_deadpond)
        session.add(hero_rusty_man)
        session.add(hero_spider_boy)
        session.commit()
        session.refresh(hero_deadpond)
        session.refresh(hero_rusty_man)
        session.refresh(hero_spider_boy)

        hero_spider_boy.team = team_preventers
        session.add(hero_spider_boy)
        session.commit()
        session.refresh(hero_spider_boy)

        hero_black_lion = Hero(name="Black Lion", secret_name="Trevor Challa", age=35)
        hero_sure_e = Hero(name="Princess Sure-E", secret_name="Sure-E")
        team_wakaland = Team(
            name="Wakaland",
            headquarters="Wakaland Capital City",
            heroes=[hero_black_lion, hero_sure_e],
        )
        session.add(team_wakaland)
        session.commit()
        session.refresh(team_wakaland)

        hero_tarantula = Hero(name="Tarantula", secret_name="Natalia Roman-on", age=32)
        hero_dr_weird = Hero(name="Dr. Weird", secret_name="Steve Weird", age=36)
        hero_cap = Hero(
            name="Captain North America", secret_name="Esteban Rogelios", age=93
        )
        team_preventers.heroes.append(hero_tarantula)
        team_preventers.heroes.append(hero_dr_weird)
        team_preventers.heroes.append(hero_cap)
        session.add(team_preventers)
        session.commit()
        session.refresh(hero_tarantula)
        session.refresh(hero_dr_weird)
        session.refresh(hero_cap)


def main():
    create_db_and_tables()
    create_heroes()


main()

# Within session I can access heroes.
with Session(engine) as session:
    team = session.exec(select(Team)).first()
    print(team.heroes)
# [Hero(id=1, age=None, name='Deadpond', secret_name='Dive Wilson', team_id=1)]

# Outside of session I cannot.
with Session(engine) as session:
    team = session.exec(select(Team)).first()

print(team.heroes)
# ---------------------------------------------------------------------------
# DetachedInstanceError Traceback (most recent call last)
# /var/folders/38/ccm_21tj43v1ntn9ks9vyy740000gn/T/ipykernel_7846/3037874887.py in <module>
# 3 team = session.exec(select(Team)).first()
# 4
# ----> 5 print(team.heroes)
#
# ~/Library/Caches/pypoetry/virtualenvs/flask-webapp-VRI2aZnU-py3.9/lib/python3.9/site-packages/sqlalchemy/orm/attributes.py in __get__(self, instance, owner)
# 479 replace_context=err,
# 480 )
# --> 481 return self.impl.get(state, dict_)
# 482
# 483
#
# ~/Library/Caches/pypoetry/virtualenvs/flask-webapp-VRI2aZnU-py3.9/lib/python3.9/site-packages/sqlalchemy/orm/attributes.py in get(self, state, dict_, passive)
# 924 return PASSIVE_NO_RESULT
# 925
# --> 926 value = self._fire_loader_callables(state, key, passive)
# 927
# 928 if value is PASSIVE_NO_RESULT or value is NO_VALUE:
#
# ~/Library/Caches/pypoetry/virtualenvs/flask-webapp-VRI2aZnU-py3.9/lib/python3.9/site-packages/sqlalchemy/orm/attributes.py in _fire_loader_callables(self, state, key, # passive)
# 960 return callable_(state, passive)
# 961 elif self.callable_:
# --> 962 return self.callable_(state, passive)
# 963 else:
# 964 return ATTR_EMPTY
#
# ~/Library/Caches/pypoetry/virtualenvs/flask-webapp-VRI2aZnU-py3.9/lib/python3.9/site-packages/sqlalchemy/orm/strategies.py in _load_for_state(self, state, passive, loadopt, # extra_criteria)
# 841 return attributes.PASSIVE_NO_RESULT
# 842
# --> 843 raise orm_exc.DetachedInstanceError(
# 844 "Parent instance %s is not bound to a Session; "
# 845 "lazy load operation of attribute '%s' cannot proceed"
#
# DetachedInstanceError: Parent instance <Team at 0x1162a8400> is not bound to a Session; lazy load operation of attribute 'heroes' cannot proceed (Background on this error at: https://sqlalche.me/e/14/bhk3)
```
### Description
I am using sqlmodel with flask. I would like to pre-load the relationship attributes for a given object before passing that object into a jinja2 template. The challenge is I can't figure out how to pre-load the attributes.
In my example code, how can I get the last line to execute without throwing an error?
```python
# Outside of session I cannot.
with Session(engine) as session:
    team = session.exec(select(Team)).first()

print(team.heroes)
```
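For reference, a minimal sketch of the kind of eager loading I am asking about, using SQLAlchemy's `selectinload` (I am not sure whether this is the recommended SQLModel approach — that is exactly my question):

```python
# Sketch of eager loading via SQLAlchemy options, so the relationship is
# already populated before the session closes.
from sqlalchemy.orm import selectinload

with Session(engine) as session:
    team = session.exec(
        select(Team).options(selectinload(Team.heroes))
    ).first()

print(team.heroes)  # relationship already loaded, no lazy load needed
```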
### Operating System
macOS
### Operating System Details
macOS Big Sur 11.3.1
### SQLModel Version
0.0.4
### Python Version
3.9.4
### Additional Context
In reference to my tweet :) https://twitter.com/TheReaLSamlam/status/1447779469221974016
|
closed
|
2021-10-12T21:53:45Z
|
2022-05-29T02:29:24Z
|
https://github.com/fastapi/sqlmodel/issues/130
|
[
"question"
] |
SamEdwardes
| 7
|
vitalik/django-ninja
|
pydantic
| 666
|
Router does not support auth inheritance
|
**Describe the bug**
When you try adding a new router to an existing router, the leaf router doesn't inherit the top-level auth.
Consider the below example:
```py
from ninja import NinjaAPI, Router
from ninja.security import APIKeyQuery
api = NinjaAPI()
r1 = Router()
r2 = Router()
r3 = Router()
class Auth(APIKeyQuery):
    def __init__(self, secret):
        self.secret = secret
        super().__init__()

    def authenticate(self, request, key):
        if key == self.secret:
            return key


api.add_router("/r1", r1, auth=Auth("r1_auth"))
r1.add_router("/r2", r2)
r2.add_router("/r3", r3)


@r1.get("/")
def op1(request):
    return request.auth


@r2.get("/")
def op2(request):
    return request.auth


@r3.get("/")
def op3(request):
    return request.auth
```
So the auth provided for router `r1` won't be present for any operations in routers `r2` and `r3` even though it comes under it.
This only affects routers, though. If we add auth when we initialize `NinjaAPI()` it propagates down to all routers and endpoints; the same is true if we provide the auth when initializing router r1 as `r1 = Router(auth=Auth("r1_auth"))`. A screenshot of the above code is shown below.
<img width="1453" alt="Screen Shot 2023-01-28 at 9 44 00 AM" src="https://user-images.githubusercontent.com/38973423/215273256-f2b43a14-153a-4e5b-9111-2aa779fc6a0c.png">
If we don't plan to support this, I think it would be super helpful to document it, as a dev can easily misinterpret it and in turn introduce a security hole in the app they are building.
**Versions (please complete the following information):**
- Python version: 3.9.12
- Django version: 4.1.5
- Django-Ninja version: 0.20.0
- Pydantic version: 1.10.4
|
closed
|
2023-01-28T15:06:38Z
|
2023-02-07T09:22:53Z
|
https://github.com/vitalik/django-ninja/issues/666
|
[] |
aasiffaizal
| 1
|
deezer/spleeter
|
deep-learning
| 812
|
[Discussion] Does Spleeter tech powers Apple Music Sing?
|
Can we confirm yet if Apple Music's newest exciting Karaoke feature called "Apple Music Sing" is **_powered by Spleeter's tech?_**

> Ever since the launch of the [Live Lyrics feature](https://youtu.be/Y7zfExL8Yes) back in 2019, I knew damn well Apple Music just created the best Karaoke UI implementation yet.
>
> And the first time [The Verge wrote about Spleeter](https://www.theverge.com/2019/11/5/20949338/vocal-isolation-ai-machine-learning-deezer-spleeter-automated-open-source-tensorflow) back in 2019, I knew damn well after reading that article and playing with the open source tech on my own, that this two will be a match in heaven as a product feature.
Anyone testing out Beta versions of tvOS, iOS or iPadOS, can we investigate and confirm this?
Because I really wanted to open up a discussion, if it is really justified for Apple to bar this amazing party feature to their newest [Apple TV 4K box only.](https://9to5mac.com/2022/12/08/apple-music-sing-karaoke-compatible-devices/)
I've run your tech on my M1 MacBook Air and I'd say it's not all that hardware-demanding.
So I really want to understand: is it really that hardware-demanding to run this on _real-time music?_
|
open
|
2022-12-10T12:39:45Z
|
2022-12-12T17:32:09Z
|
https://github.com/deezer/spleeter/issues/812
|
[
"question"
] |
Mancerrss
| 1
|
deepfakes/faceswap
|
machine-learning
| 715
|
I got some UnicodeEncodeError issues. How can I solve it?
|
File "C:\Users\jho60\AppData\Local\Programs\Python\Python36\lib\configparser.py", line 931, in _write_section
fp.write("{}{}\n".format(key, value))
**UnicodeEncodeError: 'cp949' codec can't encode character '\u2013' in position 159: illegal multibyte sequence**
File "C:\faceswap\lib\logger.py", line 155, in crash_log
outfile.writelines(freeze_log)
**UnicodeEncodeError: 'cp949' codec can't encode character '\u2013' in position 339: illegal multibyte sequence**
I can fix the first issue by modifying the code to `open(file_name, 'w', -1, "utf-8")`.
But for the second issue, I don't know where I can fix it. A sketch of the kind of change I mean is below.
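A minimal sketch, assuming the variable names from the traceback above (the crash-log writer in `lib/logger.py`'s `crash_log` and its `freeze_log` list):
```python
# Sketch only: force UTF-8 when writing the crash log, mirroring the fix
# that worked for the config writer. Names follow the traceback above.
with open(crash_file, "w", encoding="utf-8") as outfile:
    outfile.writelines(freeze_log)
```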
|
closed
|
2019-04-28T12:19:37Z
|
2019-04-29T05:52:08Z
|
https://github.com/deepfakes/faceswap/issues/715
|
[] |
ghost
| 4
|
gradio-app/gradio
|
deep-learning
| 10,783
|
Gradio: predict() got an unexpected keyword argument 'message'
|
### Describe the bug
Trying to connect my telegram bot (webhook) via the API to my public Gradio space on Hugging Face.
Via the terminal, everything works OK.
But via the telegram bot I always get the same issue: Error in connection Gradio: predict() got an unexpected keyword argument 'message'.
What should I use to make it work properly?
HF:
Gradio sdk_version: 5.20.1
Requirements.txt
- gradio==5.20.1
- fastapi>=0.112.2
- gradio-client>=1.3.0
- urllib3~=2.0
- requests>=2.28.2
- httpx>=0.24.1
- aiohttp>=3.8.5
- async-timeout==4.0.2
- huggingface-hub>=0.19.3
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import logging

import gradio as gr
from gradio_client import Client

# Gradio API (HF_SPACE_NAME and HF_TOKEN are defined elsewhere)
async def send_request_to_gradio(query: str, chat_history: list = None) -> str:
    try:
        client = Client(HF_SPACE_NAME, hf_token=HF_TOKEN)
        logging.info(f"Sending request to Gradio: query={query}, chat_history={chat_history}")
        result = client.predict(
            message=query,
            chat_history=chat_history or None,
            api_name="/chat"
        )
        logging.info(f"Reply from Gradio: {result}")
        # Process the result
        if isinstance(result, list) and result:
            response = result[0]["content"] if isinstance(result[0], dict) and "content" in result[0] else "Not found"
            return response
        else:
            logging.warning("Empty or error response from the Gradio API.")
            return "Failed to get a response."
    except Exception as e:
        logging.error(f"Error in connection Gradio: {e}")
        return "Error. Try again"
```
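A hedged guess at a workaround: some gradio_client versions only accept positional inputs for `predict()`, which would explain the unexpected-keyword error, so passing the inputs positionally may help:

```python
# Sketch: pass the inputs positionally instead of by keyword name.
result = client.predict(
    query,               # message
    chat_history or [],  # chat_history
    api_name="/chat",
)
```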
### Screenshot
_No response_
### Logs
```shell
===== Application Startup at 2025-03-11 11:37:38 =====
tokenizer_config.json: 0%| | 0.00/453 [00:00<?, ?B/s]
tokenizer_config.json: 100%|██████████| 453/453 [00:00<00:00, 3.02MB/s]
tokenizer.json: 0%| | 0.00/16.3M [00:00<?, ?B/s]
tokenizer.json: 100%|██████████| 16.3M/16.3M [00:00<00:00, 125MB/s]
added_tokens.json: 0%| | 0.00/23.0 [00:00<?, ?B/s]
added_tokens.json: 100%|██████████| 23.0/23.0 [00:00<00:00, 149kB/s]
special_tokens_map.json: 0%| | 0.00/173 [00:00<?, ?B/s]
special_tokens_map.json: 100%|██████████| 173/173 [00:00<00:00, 1.05MB/s]
config.json: 0%| | 0.00/879 [00:00<?, ?B/s]
config.json: 100%|██████████| 879/879 [00:00<00:00, 4.49MB/s]
model.safetensors: 0%| | 0.00/1.11G [00:00<?, ?B/s]
model.safetensors: 3%|▎ | 31.5M/1.11G [00:01<00:39, 27.1MB/s]
model.safetensors: 6%|▌ | 62.9M/1.11G [00:02<00:37, 28.0MB/s]
model.safetensors: 68%|██████▊ | 756M/1.11G [00:03<00:01, 313MB/s]
model.safetensors: 100%|█████████▉| 1.11G/1.11G [00:03<00:00, 300MB/s]
/usr/local/lib/python3.10/site-packages/gradio/chat_interface.py:334: UserWarning: The 'tuples' format for chatbot messages is deprecated and will be removed in a future version of Gradio. Please set type='messages' instead, which uses openai-style 'role' and 'content' keys.
self.chatbot = Chatbot(
* Running on local URL: http://0.0.0.0:7860, with SSR ⚡ (experimental, to disable set `ssr=False` in `launch()`)
To create a public link, set `share=True` in `launch()`.
```
### System Info
```shell
title: Nika Prop
emoji: 💬
colorFrom: yellow
colorTo: purple
sdk: gradio
sdk_version: 5.20.1
app_file: app.py
pinned: false
short_description: Nika real estate
```
### Severity
Blocking usage of gradio
|
closed
|
2025-03-11T12:12:43Z
|
2025-03-18T10:28:21Z
|
https://github.com/gradio-app/gradio/issues/10783
|
[
"bug",
"needs repro"
] |
brokerelcom
| 11
|
Evil0ctal/Douyin_TikTok_Download_API
|
fastapi
| 229
|
[BUG] issue with TikTok video download
|
Hello!
First, I want to thank the author for such a wonderful project, but in the process of getting to know it, I ran into an error related to downloading a video.
I am running the project in Docker Desktop.
When I insert a TikTok video link in the WebAPP interface and go to the parsing results page, after clicking on the Video Download-No-Watermark button, I am transferred to (https://api.douyin.wtf/download?url=https://vt.tiktok.com/ZSL9yE7jq/&prefix=true&watermark=false), where I can see this information:
status | "endpoint closed"
-- | --
message | "此端点已关闭请在配置文件中开启/This endpoint is closed, please enable it in the configuration file"
I am very new to Docker, so I would appreciate any help.
Thanks!



|
closed
|
2023-07-27T13:50:54Z
|
2023-08-04T09:31:15Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/229
|
[
"BUG",
"enhancement"
] |
spac3orange
| 1
|
ultralytics/ultralytics
|
pytorch
| 19,357
|
Train and val losses became "NaN" but metrics do not update accordingly.
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
This seems to be slightly related to #18521.
During training several models (yolov8, v9, v10, 11) for a custom dataset with different configurations, the train and val losses became NaN for several of the possible combinations. However, that is not the problem here, but how the monitoring and early stopping works with it. All training configurations have `optimizer=Adam`, `epochs=300`, `patience=30` and `AMP=True`. I'm using a single V100-SXM2-32GB for training.
There are two different cases that I've observed:
Case 1: Losses become NaN, the best metrics are acquired just before the losses became NaN. The validation metrics do not change at all and are considered to be improving all the time. In this case, the training continues until the number of `epochs` are acquired and early stopping doesn't trigger. Both resulting `best.pt` and `last.pt` are useless as they are full of NaN.
Case 2: Losses become NaN, the best metrics are acquired earlier. In this case, the training continues until the `patience` triggers. The resulting `best.pt` is useful, `last.pt` is not.
What should happen in both cases: When the losses are NaN, the validation metrics should update to NaN, zero or something like that so early stopping would trigger and at least the `best.pt` is an usable model.
I assume that the issue is related to how metrics are updated. Models with NaN losses do not predict anything so the resulting metrics have nothing to update.
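As a stop-gap on the user side, something like the sketch below could abort training as soon as the loss becomes non-finite, so `best.pt` is not left full of NaN (the callback event name and the `trainer.loss` attribute are assumptions about the Ultralytics callback API):
```python
# Hedged sketch: user-side guard that aborts training when the loss is NaN.
import math
from ultralytics import YOLO

def stop_on_nan(trainer):
    loss = getattr(trainer, "loss", None)  # total training loss, if exposed
    if loss is not None and not math.isfinite(float(loss)):
        raise RuntimeError("Loss became NaN -- aborting training early")

model = YOLO("yolo11n.pt")
model.add_callback("on_train_epoch_end", stop_on_nan)
model.train(data="data.yaml", epochs=300, patience=30, optimizer="Adam")
```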
Environment:
ultralytics 8.3.74, Python-3.12.8

### Additional
_No response_
|
closed
|
2025-02-21T08:40:28Z
|
2025-02-25T14:19:57Z
|
https://github.com/ultralytics/ultralytics/issues/19357
|
[
"enhancement",
"question",
"fixed",
"detect"
] |
mayrajeo
| 13
|
WeblateOrg/weblate
|
django
| 14,279
|
Batch Automatic Translation on Component and Project
|
### Describe the problem
Currently, automatic translation can only be performed from the language page. If we have many languages, then whenever we add some new text to the template language we need to go to each language page and click Automatic Translation. A Batch Automatic Translation button on the component tools would make this very easy.
### Describe the solution you would like
Add Batch Automatic Translation button on component tools
### Describe alternatives you have considered
_No response_
### Screenshots
_No response_
### Additional context
_No response_
|
open
|
2025-03-20T03:50:05Z
|
2025-03-20T09:42:28Z
|
https://github.com/WeblateOrg/weblate/issues/14279
|
[
"enhancement",
"hacktoberfest",
"help wanted",
"good first issue",
"Area: Automated translation"
] |
kingshuaishuai
| 2
|
rthalley/dnspython
|
asyncio
| 601
|
How to start
|
closed
|
2020-11-06T10:00:19Z
|
2020-11-07T22:58:57Z
|
https://github.com/rthalley/dnspython/issues/601
|
[] |
DarkLand-Chen
| 1
|
|
aiogram/aiogram
|
asyncio
| 700
|
Refactor exceptions
|
closed
|
2021-09-21T21:35:25Z
|
2021-09-21T21:53:00Z
|
https://github.com/aiogram/aiogram/issues/700
|
[
"enhancement",
"breaking",
"3.x"
] |
JrooTJunior
| 0
|
|
google-research/bert
|
tensorflow
| 716
|
How to run prediction on text classification task on GPU
|
I used the fine-tuned model to run prediction on text, but it seems to run on the CPU, since it takes about 5 s per text (each with nearly 2000 words). I see the log below - is there something I am doing wrong?
Instructions for updating:
Use keras.layers.dense instead.
2019-06-25 10:27:33.731101: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-06-25 10:27:33.866396: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x5626e7cf6f70 executing computations on platform CUDA. Devices:
2019-06-25 10:27:33.866448: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): Quadro P5000, Compute Capability 6.1
2019-06-25 10:27:33.870521: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2793620000 Hz
2019-06-25 10:27:33.871123: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x5626e7d600f0 executing computations on platform Host. Devices:
2019-06-25 10:27:33.871162: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): <undefined>, <undefined>
2019-06-25 10:27:33.871906: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: Quadro P5000 major: 6 minor: 1 memoryClockRate(GHz): 1.7335
pciBusID: 0000:03:00.0
totalMemory: 15.90GiB freeMemory: 15.78GiB
2019-06-25 10:27:33.871942: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-06-25 10:27:33.873527: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-06-25 10:27:33.873560: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-06-25 10:27:33.873572: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-06-25 10:27:33.874258: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 15349 MB memory) -> physical GPU (device: 0, name: Quadro P5000, pci bus id: 0000:03:00.0, compute capability: 6.1)
2019-06-25 10:27:35.508488: I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library libcublas.so.10.0 locally
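For what it's worth, a minimal TF 1.x sanity check (matching the log above) to confirm the GPU is actually usable from Python:
```python
# Sanity-check sketch for TF 1.x: verify TensorFlow can place ops on the GPU.
import tensorflow as tf

print(tf.test.is_gpu_available())  # should print True
print(tf.test.gpu_device_name())   # e.g. '/device:GPU:0'
```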
|
closed
|
2019-06-25T02:46:52Z
|
2019-07-10T01:09:05Z
|
https://github.com/google-research/bert/issues/716
|
[] |
Biaocsu
| 2
|
rougier/scientific-visualization-book
|
matplotlib
| 80
|
BUG: showcase textspiral, assignment destination is read-only
|
I tried to replicate `showcase/text-spiral.py` on my laptop.
I ran the following code:
```python
import mpmath
import numpy as np
from matplotlib.textpath import TextPath

# L, T and O below come from the original showcase script (spiral geometry arrays)
mpmath.mp.dps = 15000
text = str(mpmath.pi)
path = TextPath((0, 0), text, size=6, )  # , prop=FontProperties(family="Source Serif Pro"))
#path.vertices.setflags(write=1)
Vx, Vy = path.vertices[:, 0], path.vertices[:, 1]
X = np.interp(Vx, L, T[:, 0]) + Vy * np.interp(Vx, L, O[:, 0])
Y = np.interp(Vx, L, T[:, 1]) + Vy * np.interp(Vx, L, O[:, 1])
Vx[...] = X
Vy[...] = Y
```
Then I encountered `ValueError: assignment destination is read-only`.
If I add the line `path.vertices.setflags(write=1)` right after the `path` object is created,
I can reproduce the showcase.
I used matplotlib 3.6.3, which may have different default write permissions for the vertices array (this can be checked with `path.vertices.flags`).
I hope this comment helps improve reproducibility.
|
open
|
2023-01-20T12:03:59Z
|
2023-02-02T12:47:50Z
|
https://github.com/rougier/scientific-visualization-book/issues/80
|
[] |
toshiakiasakura
| 1
|
ethanopp/fitly
|
dash
| 20
|
Rolling window of strava activities?
|
Hi,
Curious if there's a way to configure a "rolling window" of strava activities, like optionally only keeping the past 13 months, as an example.
Thanks!
|
open
|
2021-03-14T03:30:57Z
|
2021-03-14T03:30:57Z
|
https://github.com/ethanopp/fitly/issues/20
|
[] |
spawn-github
| 0
|
kymatio/kymatio
|
numpy
| 309
|
Ill-conditioning in `scattering3d_qm7.py`
|
When running this, I get
```
...
Ridge regression, alpha: 1.0, MAE: 5.897314548492432, RMSE: 8.19788932800293
/mnt/xfs1/home/janden/local/anaconda3/envs/kymatio_cuda90/lib/python3.7/site-packages/sklearn/linear_model/ridge.py:125: LinAlgWarning: scipy.linalg.solve
Ill-conditioned matrix detected. Result is not guaranteed to be accurate.
Reciprocal condition number1.719067e-08
overwrite_a=True).T
/mnt/xfs1/home/janden/local/anaconda3/envs/kymatio_cuda90/lib/python3.7/site-packages/sklearn/linear_model/ridge.py:125: LinAlgWarning: scipy.linalg.solve
Ill-conditioned matrix detected. Result is not guaranteed to be accurate.
Reciprocal condition number1.847256e-08
overwrite_a=True).T
/mnt/xfs1/home/janden/local/anaconda3/envs/kymatio_cuda90/lib/python3.7/site-packages/sklearn/linear_model/ridge.py:125: LinAlgWarning: scipy.linalg.solve
Ill-conditioned matrix detected. Result is not guaranteed to be accurate.
Reciprocal condition number1.704315e-08
overwrite_a=True).T
/mnt/xfs1/home/janden/local/anaconda3/envs/kymatio_cuda90/lib/python3.7/site-packages/sklearn/linear_model/ridge.py:125: LinAlgWarning: scipy.linalg.solve
Ill-conditioned matrix detected. Result is not guaranteed to be accurate.
Reciprocal condition number1.990118e-08
overwrite_a=True).T
/mnt/xfs1/home/janden/local/anaconda3/envs/kymatio_cuda90/lib/python3.7/site-packages/sklearn/linear_model/ridge.py:125: LinAlgWarning: scipy.linalg.solve
Ill-conditioned matrix detected. Result is not guaranteed to be accurate.
Reciprocal condition number1.824673e-08
overwrite_a=True).T
...
```
Seems like `alpha = 1.0` is a bad idea? Why do we have this in here?
|
closed
|
2019-01-17T15:10:46Z
|
2020-02-19T07:25:30Z
|
https://github.com/kymatio/kymatio/issues/309
|
[
"3D"
] |
janden
| 4
|
remsky/Kokoro-FastAPI
|
fastapi
| 116
|
docker compose fails because of `entrypoint.sh` EOL sequence
|
**Describe the bug**
As the title says, when running `docker compose up --build` on a Windows host, the command fails towards the end.
**Screenshots or console output**
```
kokoro-tts-1 | /opt/nvidia/nvidia_entrypoint.sh: /app/docker/scripts/entrypoint.sh: /bin/sh^M: bad interpreter: No such file or directory
kokoro-tts-1 | /opt/nvidia/nvidia_entrypoint.sh: line 67: /app/docker/scripts/entrypoint.sh: Success
kokoro-tts-1 exited with code 126
```
**Branch / Deployment used**
latest master commit (e5b79fc27135e4c054eeb7608da26c86ac7f3344)
**Operating System**
Windows / NVIDIA GPU
**Additional context**
Changing the EOL sequence from `CRLF` to `LF` fixed the issue for me.
|
closed
|
2025-02-03T16:44:23Z
|
2025-02-17T09:33:13Z
|
https://github.com/remsky/Kokoro-FastAPI/issues/116
|
[] |
Puncia
| 2
|
lepture/authlib
|
django
| 567
|
The expires_in function needs to have a timedelta to avoid tokenExpiry errors for milliseconds
|
**Describe the bug**
I am using the OAuth2session object
```
client = OAuth2Session(client_id=client_id, client_secret=client_secret, token_endpoint=token_url, grant_type='client_credentials')
client.fetch_token(token_url)
client.get(<MY_PROTECTED_URL>)
```
Here, the library's behavior is that the token gets automatically refreshed if it has expired. Refer to https://github.com/lepture/authlib/blob/master/authlib/oauth2/client.py#L257
However, the function which checks the token expiry, https://github.com/lepture/authlib/blob/master/authlib/oauth2/rfc6749/wrappers.py#L13 , simply compares the expiry time with the current time. Because of this we miss some corner cases where the token is about to expire in a few milliseconds or seconds, and when the API call to the protected URL is made, authentication fails.
`JWT expired at 2023-06-20T13:16:42Z. Current time: 2023-06-20T13:16:42Z, a difference of 105 milliseconds. Allowed clock skew: 0 milliseconds.`
**Error Stacks**
`JWT expired at 2023-06-20T13:16:42Z. Current time: 2023-06-20T13:16:42Z, a difference of 105 milliseconds. Allowed clock skew: 0 milliseconds.`
**To Reproduce**
A minimal example to reproduce the behavior:
While the exact replication is not possible here as the request is failing by few milliseconds.
```
client = OAuth2Session(client_id=<client_id>, client_secret=<client_secret>, token_endpoint=<token_url>, grant_type='client_credentials')
client.fetch_token(<token_ur>l)
client.get(<MY_PROTECTED_URL>)
```
**A clear and concise description of what you expected to happen.**
Even if the token expired just a few milliseconds ago, the library should be able to handle such cases by obtaining a new token.
Instead of https://github.com/lepture/authlib/blob/master/authlib/oauth2/rfc6749/wrappers.py#L17 , we should add a small timedelta. For example, even if the token is only going to expire in the next 60 seconds, refresh it anyway.
**Environment:**
- OS: Linux
- Python Version: 3.6
- Authlib Version: 1.1.0
**Additional context**
A small timedelta should be introduced in the function so that we can avoid API requests failing by a few milliseconds. Here, we can treat the token as expired, say, 30-60 seconds prior to its actual expiry. A minimal sketch is below.
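A minimal sketch of the proposed check (not authlib's actual code; the leeway value is just an example):
```python
# Sketch: treat the token as expired `leeway` seconds before its real expiry.
import time

def is_expired(expires_at, leeway=60):
    if not expires_at:
        return None
    return expires_at - leeway < time.time()
```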
|
closed
|
2023-07-24T08:47:18Z
|
2024-04-08T16:58:45Z
|
https://github.com/lepture/authlib/issues/567
|
[
"bug",
"good first issue"
] |
pghole
| 2
|
noirbizarre/flask-restplus
|
api
| 428
|
flask request RequestParser bundle error=True is not working as expected
|
```
from flask import Flask
from flask_restplus import Api, Resource, reqparse
app = Flask(__name__)
api = Api(app)

parser = reqparse.RequestParser(bundle_errors=False)
parser.add_argument('username', type=list, required=True, help="Missing Username", location="json")
parser.add_argument('password', type=list, required=True, help="Missing Password", location="json")


@api.route('/user')
class User(Resource):
    def post(self):
        args = parser.parse_args()
        return {"ID": "1", "Username": args['username'], "Password": args['password']}, 201


if __name__ == '__main__':
    app.run(host="0.0.0.0", debug=True)
```
1- When bundle_errors=False and I send a request with missing parameters
```
curl -X POST \
  http://localhost:5051/user \
  -H 'content-type: application/json' \
  -d '{}'

I get the following response:

{
  "errors": {
    "username": "Missing Username"
  },
  "message": "Input payload validation failed"
}
```
Which is fine, except that it showed only one missing field.
2- When I used bundle_errors=True (as mentioned in the documentation), I got the following result
```
{
"Username": "Missing required parameter in the JSON body",
"Password": "Missing required parameter in the JSON body",
"ID": "1"
}
```
This means that RequestParser didn't throw any error and returned the string "Missing required parameter in the JSON body" as the actual input values.
Am I doing something wrong?
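For completeness, a hedged note: flask-restful (which flask-restplus builds on) also exposes a global Flask config switch for bundled errors, which I would expect to behave the same way:

```python
# Alternative way to enable bundled errors globally (flask-restful config key).
from flask import Flask

app = Flask(__name__)
app.config['BUNDLE_ERRORS'] = True
```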
|
open
|
2018-05-04T05:44:29Z
|
2021-10-12T11:18:44Z
|
https://github.com/noirbizarre/flask-restplus/issues/428
|
[] |
vimox-shah
| 8
|
allenai/allennlp
|
data-science
| 4,775
|
Ask for further integration with Optuna
|
Hello, I'm a member of Optuna dev and the author of the allennlp-guide chapter on hyperparameter optimization.
Recently, I created [allennlp-optuna](https://github.com/himkt/allennlp-optuna), a prototype of an Optuna wrapper that lets users optimize the hyperparameters of AllenNLP models. It provides a way to use Optuna through `allennlp` subcommands.
To explain `allennlp-optuna`, I wrote a quick tutorial on [readthedocs](https://allennlp-optuna.readthedocs.io/en/latest/tutorial/index.html). With `allennlp-optuna`, users can run multi-node distributed optimization by simply executing the same command on multiple machines. I put a simple example in the [README](https://github.com/himkt/allennlp-optuna#5-hyperparameter-optimization-at-scale).
And recently, I also wrote [a post](https://techlife.cookpad.com/entry/2020/11/06/110000) (sorry, in Japanese...) about an NER system using AllenNLP+Optuna.
Could we add `allennlp-optuna` to the default plugins of AllenNLP? If it helps the decision, I'm willing to transfer `allennlp-optuna` to AllenAI. I'd be really happy if it helps NLP practitioners tune the hyperparameters of their models with ease.
Thank you, as always.
|
closed
|
2020-11-07T13:25:21Z
|
2020-11-11T00:12:06Z
|
https://github.com/allenai/allennlp/issues/4775
|
[
"question"
] |
himkt
| 2
|
django-import-export/django-import-export
|
django
| 1,020
|
Prevent new items. Update only.
|
Is there any setting that will allow me to ignore any new items? I only want the import to update existing records. If there is a new ID that does not currently exist in the database, I would want to ignore that row. A sketch of the behaviour I mean is below.
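A minimal sketch of the kind of behaviour I mean, using a `skip_row` override (the model and field names here are hypothetical):
```python
# Hypothetical sketch: skip any row whose ID is not already in the database,
# so the import only updates existing records.
from import_export import resources
from myapp.models import Item  # hypothetical model

class UpdateOnlyItemResource(resources.ModelResource):
    class Meta:
        model = Item
        import_id_fields = ("id",)

    def skip_row(self, instance, original, *args, **kwargs):
        # `original` has no primary key when the row would create a new record.
        if not getattr(original, "pk", None):
            return True
        return super().skip_row(instance, original, *args, **kwargs)
```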
|
closed
|
2019-10-21T20:56:38Z
|
2019-11-19T18:16:42Z
|
https://github.com/django-import-export/django-import-export/issues/1020
|
[] |
jangeador
| 2
|
Zeyi-Lin/HivisionIDPhotos
|
machine-learning
| 98
|
HivisionIDPhotos API call issue
|
INFO: 127.0.0.1:52124 - "POST /add_background HTTP/1.1" 500 Internal Server Error
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "C:\Users\zdy\AppData\Local\anaconda3\Lib\site-packages\uvicorn\protocols\http\httptools_impl.py", line 435, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\zdy\AppData\Local\anaconda3\Lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\zdy\AppData\Local\anaconda3\Lib\site-packages\fastapi\applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "C:\Users\zdy\AppData\Local\anaconda3\Lib\site-packages\starlette\applications.py", line 123, in __call__
await self.middleware_stack(scope, receive, send)
File "C:\Users\zdy\AppData\Local\anaconda3\Lib\site-packages\starlette\middleware\errors.py", line 186, in __call__
raise exc
File "C:\Users\zdy\AppData\Local\anaconda3\Lib\site-packages\starlette\middleware\errors.py", line 164, in __call__
await self.app(scope, receive, _send)
File "C:\Users\zdy\AppData\Local\anaconda3\Lib\site-packages\starlette\middleware\exceptions.py", line 65, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "C:\Users\zdy\AppData\Local\anaconda3\Lib\site-packages\starlette\_exception_handler.py", line 64, in wrapped_app
raise exc
File "C:\Users\zdy\AppData\Local\anaconda3\Lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "C:\Users\zdy\AppData\Local\anaconda3\Lib\site-packages\starlette\routing.py", line 754, in __call__
await self.middleware_stack(scope, receive, send)
File "C:\Users\zdy\AppData\Local\anaconda3\Lib\site-packages\starlette\routing.py", line 774, in app
await route.handle(scope, receive, send)
File "C:\Users\zdy\AppData\Local\anaconda3\Lib\site-packages\starlette\routing.py", line 295, in handle
await self.app(scope, receive, send)
File "C:\Users\zdy\AppData\Local\anaconda3\Lib\site-packages\starlette\routing.py", line 77, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "C:\Users\zdy\AppData\Local\anaconda3\Lib\site-packages\starlette\_exception_handler.py", line 64, in wrapped_app
raise exc
File "C:\Users\zdy\AppData\Local\anaconda3\Lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "C:\Users\zdy\AppData\Local\anaconda3\Lib\site-packages\starlette\routing.py", line 74, in app
response = await f(request)
^^^^^^^^^^^^^^^^
File "C:\Users\zdy\AppData\Local\anaconda3\Lib\site-packages\fastapi\routing.py", line 297, in app
raw_response = await run_endpoint_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\zdy\AppData\Local\anaconda3\Lib\site-packages\fastapi\routing.py", line 210, in run_endpoint_function
return await dependant.call(**values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\home\项目\HivisionIDPhotos-1.2.1\deploy_api.py", line 116, in photo_add_background
result_image = add_background(
^^^^^^^^^^^^^^^
File "G:\home\项目\HivisionIDPhotos-1.2.1\hivision\utils.py", line 259, in add_background
b, g, r, a = cv2.split(input_image)
^^^^^^^^^^
Why does calling the add_background API endpoint result in this ASGI error?
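My hedged guess from the traceback: `add_background` splits the input into `b, g, r, a`, so it seems to expect a 4-channel (BGRA) image — i.e. the transparent PNG produced by the matting step, not a plain 3-channel photo. A quick client-side check:
```python
# Sketch: verify the image sent to /add_background still has an alpha channel.
import cv2

img = cv2.imread("idphoto_transparent.png", cv2.IMREAD_UNCHANGED)  # keep alpha
assert img is not None and img.shape[2] == 4, "send the transparent (4-channel) result to /add_background"
```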
|
closed
|
2024-09-11T03:04:28Z
|
2024-09-11T06:06:44Z
|
https://github.com/Zeyi-Lin/HivisionIDPhotos/issues/98
|
[] |
OuTaMan
| 9
|
man-group/arctic
|
pandas
| 830
|
Have a restore_version api that supports operations with uncompressed chunks
|
Currently in restore_version we read and write the entire data to a new version. There was a more efficient implementation of this, but it was reverted as it might cause corruptions.
What we want is a version that just reads and writes the uncompressed chunks, to avoid the memory blow-up we ran into from the new bytearray plus collecting uncompressed data chunks.
|
open
|
2019-12-02T11:59:58Z
|
2019-12-02T12:10:58Z
|
https://github.com/man-group/arctic/issues/830
|
[
"VersionStore"
] |
shashank88
| 1
|
saleor/saleor
|
graphql
| 16,872
|
Bug: Saleor apps installed using django command do not show up in the Saleor Dashboard
|
### What are you trying to achieve?
Saleor apps installed using the Django command `manage.py install_app --activate <manifest_url>` do not show up in the Saleor Dashboard under Apps / Installed Apps, even though the installation via the Django command completed successfully.
### Steps to reproduce the problem
1. install Saleor app using Django command `manage.py install_app --activate <manifest_url>`
2. check new row was successfully added into `SELECT * FROM public.app_app`
3. open Saleor Dashboard / Apps
4. and section Installed Apps will be empty
### What did you expect to happen?
This is due to App.is_installed being False.
Apps installed from the Saleor dashboard are handled in the celery task install_app_task(), where this boolean gets set to True:
https://github.com/saleor/saleor/blob/main/saleor/app/tasks.py#L34
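As a hedged workaround sketch, the flag can be flipped manually to mirror what the celery task does (the model import path is assumed from the links above):
```python
# Workaround sketch, e.g. from `manage.py shell`: mark command-installed apps
# as installed, mirroring what install_app_task does for dashboard installs.
from saleor.app.models import App

App.objects.filter(is_installed=False).update(is_installed=True)
```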
### Logs

### Environment
Saleor version: 3.20.37, git head
|
closed
|
2024-10-11T12:40:54Z
|
2024-10-14T08:16:33Z
|
https://github.com/saleor/saleor/issues/16872
|
[
"bug",
"triage"
] |
ceresnam
| 2
|
pytest-dev/pytest-mock
|
pytest
| 245
|
Since 3.3.0 github is missing releases confusing users
|
When you visit https://github.com/pytest-dev/pytest-mock you only see version 3.3.0 as the latest. Even if you open the releases page you still see the same.
While, after a while, you may be lucky enough to discover that tags for newer versions exist, that does not provide the best experience.
Ideally, GitHub releases should be created for all releases so they are displayed correctly. If, for some reason, the project maintainers do not want to make releases using GitHub, the releases tab should at least be removed from the project, as it is really confusing.
I personally use GitHub releases to make releases and never push tags, but I think it may be possible to use an action that creates a release when a tag is pushed; I just never did it. Somehow I find the web interface better for performing releases.
|
closed
|
2021-05-18T08:16:35Z
|
2021-05-18T11:56:29Z
|
https://github.com/pytest-dev/pytest-mock/issues/245
|
[] |
ssbarnea
| 1
|
huggingface/pytorch-image-models
|
pytorch
| 1,482
|
[BUG] Wrong Repo Id
|
This is regarding the new models (ViT CLIP); the URL for their weights is wrong:
```
Repository Not Found for url: https://huggingface.co/CLIP-ViT-g-14-laion2B-s12B-b42K/resolve/main/open_clip_pytorch_model.bin.
Please make sure you specified the correct `repo_id` and `repo_type`.
If the repo is private, make sure you are authenticated.
```
it should be changed to `https://huggingface.co/laion/CLIP-ViT-g-14-laion2B-s12B-b42K/blob/main/open_clip_pytorch_model.bin`
|
closed
|
2022-10-05T14:23:34Z
|
2022-10-05T16:00:23Z
|
https://github.com/huggingface/pytorch-image-models/issues/1482
|
[
"bug"
] |
MohamedAliRashad
| 1
|
nteract/testbook
|
pytest
| 59
|
More examples documentation
|
We could use more examples of how to use testbook in different scenarios. This would have a strong, lasting effect on adoption and reuse of the project over other efforts.
|
open
|
2020-08-18T17:01:56Z
|
2021-05-20T09:49:46Z
|
https://github.com/nteract/testbook/issues/59
|
[
"documentation",
"sprint-friendly"
] |
MSeal
| 1
|
litestar-org/polyfactory
|
pydantic
| 566
|
Bug: RecursionError With constrained 0 length lists
|
### Description
When constraining a list to be empty (max_length=0):
```python
from pydantic import BaseModel, Field
from polyfactory.factories.pydantic_factory import ModelFactory
class TestModel(BaseModel):
empty_list_field: list = Field(default=[], max_length=0)
class TestModelFactory(ModelFactory):
__model__ = TestModel
TestModelFactory.build()
```
a recursion error occurs.
The problem seems to be in https://github.com/litestar-org/polyfactory/blob/67c57208de5ce993bdb2c7888864ac4e71964511/polyfactory/value_generators/constrained_collections.py#L49
where a default length of 1 is used if the randomly picked length (always 0) is falsy. @Goldziher do you remember why this default exists?
### URL to code causing the issue
_No response_
### MCVE
```python
from pydantic import BaseModel, Field
from polyfactory.factories.pydantic_factory import ModelFactory
class TestModel(BaseModel):
empty_list_field: list = Field(default=[], max_length=0)
class TestModelFactory(ModelFactory):
__model__ = TestModel
TestModelFactory.build()
```
### Steps to reproduce
```bash
1. Install polyfactory & pydantic (v2)
2. Run example code
```
### Screenshots
_No response_
### Logs
_No response_
### Release Version
v2.16.2
### Platform
- [x] Linux
- [x] Mac
- [x] Windows
- [ ] Other (Please specify in the description above)
|
closed
|
2024-07-16T09:30:48Z
|
2025-03-20T15:53:18Z
|
https://github.com/litestar-org/polyfactory/issues/566
|
[
"bug"
] |
tom300z
| 1
|
Lightning-AI/pytorch-lightning
|
deep-learning
| 19,772
|
Sanitize object params before they get logged from argument-free classes
|
### Description & Motivation
The motivation for this proposal is as follows: when you store classes (not yet instantiated, e.g. imported in the main file) in a module's hyperparameters in order to instantiate them later, the related entries in the logged dictionary are not sanitized.
### Pitch
For example, let's say my configuration is this:
```python
class Stepper():
    def __init__(self):
        self.scale = 3

    def step(self):
        return self.scale


config = {
    "stepper": Stepper
}
```
Then I want the hyperparameters that will be logged to look like this:
```python
config = {
"criterion": "Stepper"
}
```
And not like this:
```python
config = {
"criterion": "<__main__.Stepper object at 0x352255190>"
}
```
### Alternatives
When a module has at least one `__init__` argument, this problem doesn't exist:
```python
class Stepper:
    def __init__(self, scale):
        self.scale = scale

    def forward(self, x):
        return self.scale * x


config = {
    "stepper": Stepper,
    "config": {"scale": 0.0001}
}
```
Results in a logged configuration dictionary of form:
```python
config = {
"criterion": "Stepper",
"config": {"scale": 0.0001}
}
```
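For now, a user-side sketch of the sanitization I have in mind (replace classes and arbitrary objects with their class names before logging):

```python
# Sketch of the proposed sanitization, applied manually before logging.
def sanitize(cfg: dict) -> dict:
    out = {}
    for key, value in cfg.items():
        if isinstance(value, type):                 # a class, e.g. Stepper
            out[key] = value.__name__
        elif isinstance(value, dict):
            out[key] = sanitize(value)
        elif isinstance(value, (int, float, str, bool, list, tuple, type(None))):
            out[key] = value
        else:                                       # arbitrary instance
            out[key] = type(value).__name__
    return out
```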
### Additional context
_No response_
cc @borda
|
closed
|
2024-04-12T20:16:34Z
|
2024-06-06T18:51:56Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/19772
|
[
"feature"
] |
V0XNIHILI
| 0
|
PaddlePaddle/PaddleHub
|
nlp
| 1,555
|
Error during GPU inference with the ace2p segmentation model
|
```
Code:
module = hub.Module(name='ace2p', version='1.0.0')
while flag:
    input_dict = {'image': [path_jpg_in]}
    _ = module.segmentation(data=input_dict, use_gpu=True, output_dir=masked_path, batch_size=7)

**Problem 1**:
An error occurs as soon as batch_size is greater than or equal to 8.
--------------------------------------
C++ Traceback (most recent call last):
--------------------------------------
0 paddle::framework::SignalHandle(char const*, int)
1 paddle::platform::GetCurrentTraceBackString[abi:cxx11]()
----------------------
Error Message Summary:
----------------------
FatalError: `Segmentation fault` is detected by the operating system.
[TimeInfo: *** Aborted at 1627438146 (unix time) try "date -d @1627438146" if you are using GNU date ***]
[SignalInfo: *** SIGSEGV (@0x0) received by PID 35851 (TID 0x7f3769525700) from PID 0 ***]
Segmentation fault (core dumped)
**Problem 2**:
After running inference in a loop many times, the following error appears, even though there is still plenty of free GPU memory (3616M / 11019M).
Traceback (most recent call last):
File "run_batch3.py", line 214, in <module>
ps.get_origin_mask()
File "run_batch3.py", line 209, in main
# print('{:.3f}s {}'.format(t1 - t0, image_name))
File "run_batch3.py", line 84, in get_origin_mask
batch_size=len(image_list))
File "/gfs_brick03/zhanghong/miniconda3/envs/paddle_cu101_v2/lib/python3.6/site-packages/paddlehub/compat/paddle_utils.py", line 220, in runner
return func(*args, **kwargs)
File "/gfs_brick03/zhanghong/miniconda3/envs/paddle_cu101_v2/lib/python3.6/site-packages/paddlehub/compat/module/module_v1.py", line 201, in __call__
result += self.processor.postprocess(sign_name, data_out, sub_data, **kwargs)
File "./ace2p/python/356ef3563a66791ef656737189a222ec.py", line 202, in postprocess
for index, data in enumerate(data_out[0]):
MemoryError: (ResourceExhausted) Fail to alloc memory of 524288000 size, error code is 12.
[Hint: Expected error == 0, but received error:12 != 0:0.] (at /paddle/paddle/fluid/memory/detail/system_allocator.cc:62)
```
|
open
|
2021-07-28T02:18:21Z
|
2021-08-10T12:58:31Z
|
https://github.com/PaddlePaddle/PaddleHub/issues/1555
|
[
"cv"
] |
justzhanghong
| 1
|
ipython/ipython
|
jupyter
| 14,120
|
IPython file error
|
```
Traceback (most recent call last):
File "/home/me_user/.cache/pypoetry/virtualenvs/advent-of-code-XSxK3i_Q-py3.11/lib/python3.11/site-packages/traitlets/traitlets.py", line 656, in get
value = obj._trait_values[self.name]
~~~~~~~~~~~~~~~~~^^^^^^^^^^^
KeyError: 'ipython_dir'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/me_user/aoc/main.py", line 28, in <module>
script, input_path = check_required_files_exists(year=year, day=day, sample=sample)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me_user/aoc/src/common/util.py", line 89, in check_required_files_exists
import ipdb; ipdb.set_trace()
^^^^^^^^^^^^^^^^
File "/home/me_user/.cache/pypoetry/virtualenvs/advent-of-code-XSxK3i_Q-py3.11/lib/python3.11/site-packages/ipdb/__main__.py", line 77, in set_trace
p = _init_pdb(context).set_trace(frame)
^^^^^^^^^^^^^^^^^^
File "/home/vhij/.cache/pypoetry/virtualenvs/advent-of-code-XSxK3i_Q-py3.11/lib/python3.11/site-packages/ipdb/__main__.py", line 54, in _init_pdb
debugger_cls = _get_debugger_cls()
^^^^^^^^^^^^^^^^^^^
File "/home/vhij/.cache/pypoetry/virtualenvs/advent-of-code-XSxK3i_Q-py3.11/lib/python3.11/site-packages/ipdb/__main__.py", line 34, in _get_debugger_cls
ipapp.initialize(["--no-term-title"])
File "/home/me_user/.cache/pypoetry/virtualenvs/advent-of-code-XSxK3i_Q-py3.11/lib/python3.11/site-packages/traitlets/config/application.py", line 113, in inner
return method(app, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me_user/.cache/pypoetry/virtualenvs/advent-of-code-XSxK3i_Q-py3.11/lib/python3.11/site-packages/IPython/terminal/ipapp.py", line 270, in initialize
super(TerminalIPythonApp, self).initialize(argv)
File "/home/me_user/.cache/pypoetry/virtualenvs/advent-of-code-XSxK3i_Q-py3.11/lib/python3.11/site-packages/traitlets/config/application.py", line 113, in inner
return method(app, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me_user/.cache/pypoetry/virtualenvs/advent-of-code-XSxK3i_Q-py3.11/lib/python3.11/site-packages/IPython/core/application.py", line 484, in initialize
self.init_profile_dir()
File "/home/me_user/.cache/pypoetry/virtualenvs/advent-of-code-XSxK3i_Q-py3.11/lib/python3.11/site-packages/IPython/core/application.py", line 388, in init_profile_dir
p = ProfileDir.find_profile_dir_by_name(self.ipython_dir, self.profile, self.config)
^^^^^^^^^^^^^^^^
File "/home/me_user/.cache/pypoetry/virtualenvs/advent-of-code-XSxK3i_Q-py3.11/lib/python3.11/site-packages/traitlets/traitlets.py", line 703, in __get__
return self.get(obj, cls)
^^^^^^^^^^^^^^^^^^
File "/home/vhme_userij/.cache/pypoetry/virtualenvs/advent-of-code-XSxK3i_Q-py3.11/lib/python3.11/site-packages/traitlets/traitlets.py", line 659, in get
default = obj.trait_defaults(self.name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me_user/.cache/pypoetry/virtualenvs/advent-of-code-XSxK3i_Q-py3.11/lib/python3.11/site-packages/traitlets/traitlets.py", line 1872, in trait_defaults
return self._get_trait_default_generator(names[0])(self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vhij/.cache/pypoetry/virtualenvs/advent-of-code-XSxK3i_Q-py3.11/lib/python3.11/site-packages/traitlets/traitlets.py", line 1233, in __call__
return self.func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/me_user/.cache/pypoetry/virtualenvs/advent-of-code-XSxK3i_Q-py3.11/lib/python3.11/site-packages/IPython/core/application.py", line 207, in _ipython_dir_default
d = get_ipython_dir()
^^^^^^^^^^^^^^^^^
File "/home/me_user/.cache/pypoetry/virtualenvs/advent-of-code-XSxK3i_Q-py3.11/lib/python3.11/site-packages/IPython/paths.py", line 73, in get_ipython_dir
os.makedirs(ipdir, exist_ok=True)
File "<frozen os>", line 225, in makedirs
FileExistsError: [Errno 17] File exists: '/home/vhij/.ipython'
If you suspect this is an IPython 8.14.0 bug, please report it at:
https://github.com/ipython/ipython/issues
or send an email to the mailing list at ipython-dev@python.org
You can print a more detailed traceback right now with "%tb", or use "%debug"
to interactively debug it.
Extra-detailed tracebacks for bug-reporting purposes can be enabled via:
c.Application.verbose_crash=True
```
|
open
|
2023-07-24T11:03:15Z
|
2023-07-24T11:08:19Z
|
https://github.com/ipython/ipython/issues/14120
|
[] |
Vasile-Hij
| 0
|
Buuntu/fastapi-react
|
sqlalchemy
| 28
|
Add React login page
|
closed
|
2020-05-25T02:19:46Z
|
2020-05-25T15:18:52Z
|
https://github.com/Buuntu/fastapi-react/issues/28
|
[
"enhancement"
] |
Buuntu
| 0
|
|
plotly/dash
|
plotly
| 2,852
|
[BUG] set_props called multiple times only keep the last props.
|
For regular callbacks, when `set_props` is called multiple times for the same component id, only the last call is saved.
Example:
```
from dash import Dash, Input, html, set_props
app = Dash()
app.layout = [
    html.Button("start", id="start"),
    html.Div("initial", id="output"),
]


@app.callback(
    Input("start", "n_clicks"),
)
def on_click(_):
    set_props("output", {"children": "changed"})
    set_props("output", {"style": {"background": "red"}})


if __name__ == "__main__":
    app.run(debug=True)
```
Clicking on the start button only sets the background red; the text stays at "initial". The props should be merged so both are updated. A workaround sketch is below.
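Workaround for now: pass everything in a single call.
```python
# Workaround sketch: pass all props for the component in one set_props call.
set_props("output", {"children": "changed", "style": {"background": "red"}})
```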
|
closed
|
2024-05-07T16:35:57Z
|
2024-05-15T19:22:04Z
|
https://github.com/plotly/dash/issues/2852
|
[
"bug",
"sev-1"
] |
T4rk1n
| 0
|
proplot-dev/proplot
|
data-visualization
| 105
|
More issues with "thin" fonts
|
@bradyrx In your example (#103) it looks like matplotlib may be picking up [a "thin" font again](https://github.com/lukelbd/proplot/issues/94) :/. Could you run the following:
```python
from matplotlib.font_manager import findfont, FontProperties
print(findfont(FontProperties(['sans-serif'])))
```
and post the result? Also which proplot version are you using?
|
closed
|
2020-01-09T05:54:48Z
|
2020-01-09T09:30:29Z
|
https://github.com/proplot-dev/proplot/issues/105
|
[
"bug"
] |
lukelbd
| 1
|
yuka-friends/Windrecorder
|
streamlit
| 57
|
feat: add a "changelog" entry to the tray update notification
|
https://github.com/yuka-friends/Windrecorder/pull/46
When the program has an update available, add a "View changelog (what's new)" option under the update entry in the tray menu; clicking it opens the CHANGELOG file on GitHub in the browser (consistent with the existing option that opens localhost:xxxx in the browser).
|
closed
|
2023-12-04T14:42:12Z
|
2024-02-09T11:18:33Z
|
https://github.com/yuka-friends/Windrecorder/issues/57
|
[
"enhancement",
"P0"
] |
Antonoko
| 0
|
simple-login/app
|
flask
| 1,982
|
Remove sensitive words
|
I will make a PR
|
closed
|
2023-12-29T19:54:47Z
|
2024-01-02T11:33:28Z
|
https://github.com/simple-login/app/issues/1982
|
[] |
ghost
| 1
|