| repo_name (string) | topic (string, 30 classes) | issue_number (int64) | title (string) | body (string) | state (string, 2 classes) | created_at (string) | updated_at (string) | url (string) | labels (list) | user_login (string) | comments_count (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|
httpie/cli
|
api
| 813
|
Document JSON-escaping in "=" syntax
|
With the `=` syntax, special characters are escaped when they are:
- not a `char` as defined in [RFC 8259](https://www.rfc-editor.org/rfc/rfc8259), section 7 (Strings), or
- non-ASCII characters (such as "あ"),
while `:=` escapes only non-ASCII characters:
```sh
http httpbin.org/post double_quote='"' non-ascii=あ new_line='
'
# sends: {"double_quote": "\"", "non-ascii": "\u3042", "new_line": "\n"}
```

(The output of `http -v` is somewhat confusing, so I pasted a Wireshark capture. See https://github.com/httpie/httpie/issues/1474)
This behavior should be documented, shouldn't it?
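For reference, this matches what Python's standard `json` module produces with its default `ensure_ascii=True` (my comparison, not something the HTTPie docs state):

```python
import json

# Default ensure_ascii=True escapes non-ASCII characters as \uXXXX,
# and quotes / control characters per RFC 8259.
print(json.dumps({"double_quote": '"', "non-ascii": "\u3042", "new_line": "\n"}))
# → {"double_quote": "\"", "non-ascii": "\u3042", "new_line": "\n"}
```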
|
closed
|
2019-11-02T11:26:48Z
|
2023-01-21T08:15:36Z
|
https://github.com/httpie/cli/issues/813
|
[
"help wanted"
] |
wataash
| 3
|
graphql-python/gql
|
graphql
| 344
|
Managing CSRF cookies with graphene-django
|
Hello,
I'm trying to do a GraphQL mutation between 2 Django projects (on different hosts). One project has the `gql` client and the other has the `graphene` and `graphene-django` libraries. Everything works fine when `django.middleware.csrf.CsrfViewMiddleware` is deactivated on the second project, but when it is enabled the server throws an error `Forbidden (CSRF cookie not set.): /graphql`.
On the client side, how can I fetch a CSRF token from the server and include it in the HTTP headers?
This is my code:
```python
from gql import gql, Client
from gql.transport.aiohttp import AIOHTTPTransport

transport = AIOHTTPTransport(url="http://x.x.x.x:8000/graphql")
client = Client(transport=transport, fetch_schema_from_transport=True)
query = gql(
    """
    mutation ...
    """
)
result = client.execute(query)
```
Please find the complete traceback below. As you can see, there is also a `"Not a JSON answer"` error, but that is presumably just `gql` failing to parse the non-JSON 403 response from `graphene`.
```
Traceback (most recent call last):
  File "/home/username/.local/lib/python3.7/site-packages/gql/transport/aiohttp.py", line 316, in execute
    result = await resp.json(content_type=None)
  File "/home/username/.local/lib/python3.7/site-packages/aiohttp/client_reqrep.py", line 1119, in json
    return loads(stripped.decode(encoding))
  File "/usr/lib/python3.7/json/__init__.py", line 348, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python3.7/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib/python3.7/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/username/.local/lib/python3.7/site-packages/gql/transport/aiohttp.py", line 304, in raise_response_error
    resp.raise_for_status()
  File "/home/username/.local/lib/python3.7/site-packages/aiohttp/client_reqrep.py", line 1009, in raise_for_status
    headers=self.headers,
aiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('http://y.y.y.y:8000/graphql')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/username/.local/lib/python3.7/site-packages/asgiref/sync.py", line 482, in thread_handler
    raise exc_info[1]
  File "/home/username/.local/lib/python3.7/site-packages/django/core/handlers/base.py", line 233, in _get_response_async
    response = await wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/home/username/.local/lib/python3.7/site-packages/asgiref/sync.py", line 444, in __call__
    ret = await asyncio.wait_for(future, timeout=None)
  File "/usr/lib/python3.7/asyncio/tasks.py", line 414, in wait_for
    return await fut
  File "/home/username/.local/lib/python3.7/site-packages/asgiref/current_thread_executor.py", line 22, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/home/username/.local/lib/python3.7/site-packages/asgiref/sync.py", line 486, in thread_handler
    return func(*args, **kwargs)
  File "/home/username/.local/lib/python3.7/site-packages/django/contrib/admin/options.py", line 616, in wrapper
    return self.admin_site.admin_view(view)(*args, **kwargs)
  File "/home/username/.local/lib/python3.7/site-packages/django/utils/decorators.py", line 130, in _wrapped_view
    response = view_func(request, *args, **kwargs)
  File "/home/username/.local/lib/python3.7/site-packages/django/views/decorators/cache.py", line 44, in _wrapped_view_func
    response = view_func(request, *args, **kwargs)
  File "/home/username/.local/lib/python3.7/site-packages/django/contrib/admin/sites.py", line 232, in inner
    return view(request, *args, **kwargs)
  File "/home/username/.local/lib/python3.7/site-packages/django/utils/decorators.py", line 43, in _wrapper
    return bound_method(*args, **kwargs)
  File "/home/username/.local/lib/python3.7/site-packages/django/utils/decorators.py", line 130, in _wrapped_view
    response = view_func(request, *args, **kwargs)
  File "/home/username/.local/lib/python3.7/site-packages/django/contrib/admin/options.py", line 1723, in changelist_view
    response = self.response_action(request, queryset=cl.get_queryset(request))
  File "/home/username/.local/lib/python3.7/site-packages/django/contrib/admin/options.py", line 1408, in response_action
    response = func(self, request, queryset)
  File "/var/www/myapp/strategy/admin.py", line 42, in push
    target.push()
  File "/home/username/.local/lib/python3.7/site-packages/aiohttp_csrf/__init__.py", line 102, in wrapped_handler
    return handler(*args, **kwargs)
  File "/var/www/myapp/strategy/models.py", line 125, in push
    result = client.execute(query, variable_values=params)
  File "/home/username/.local/lib/python3.7/site-packages/gql/client.py", line 396, in execute
    **kwargs,
  File "/usr/lib/python3.7/asyncio/base_events.py", line 579, in run_until_complete
    return future.result()
  File "/home/username/.local/lib/python3.7/site-packages/gql/client.py", line 284, in execute_async
    async with self as session:
  File "/home/username/.local/lib/python3.7/site-packages/gql/client.py", line 658, in __aenter__
    return await self.connect_async()
  File "/home/username/.local/lib/python3.7/site-packages/gql/client.py", line 638, in connect_async
    await self.session.fetch_schema()
  File "/home/username/.local/lib/python3.7/site-packages/gql/client.py", line 1253, in fetch_schema
    parse(get_introspection_query())
  File "/home/username/.local/lib/python3.7/site-packages/gql/transport/aiohttp.py", line 323, in execute
    await raise_response_error(resp, "Not a JSON answer")
  File "/home/username/.local/lib/python3.7/site-packages/gql/transport/aiohttp.py", line 306, in raise_response_error
    raise TransportServerError(str(e), e.status) from e
gql.transport.exceptions.TransportServerError: 403, message='Forbidden', url=URL('http://y.y.y.y:8000/graphql')
[error ] request_failed [django_structlog.middlewares.request] code=500 request=<ASGIRequest: POST '/admin/strategy/target/'> user_id=1
Internal Server Error: /admin/strategy/target/
Internal Server Error: /admin/strategy/target/ exc_info=(<class 'gql.transport.exceptions.TransportServerError'>, TransportServerError("403, message='Forbidden', url=URL('http://y.y.y.y:8000/graphql')"), <traceback object at 0x7f5d70ac8910>)
INFO: x.x.x.x:21866 - "POST /admin/strategy/target/ HTTP/1.1" 500 Internal Server Error
```
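To make the question concrete, here is the kind of client-side plumbing I have in mind (the helper below is hypothetical; `X-CSRFToken` and `csrftoken` are Django's default header and cookie names, and I believe `AIOHTTPTransport` accepts `headers=` and `cookies=` arguments):

```python
from http.cookies import SimpleCookie

def django_csrf_args(set_cookie_header):
    """Build the headers/cookies that Django's CsrfViewMiddleware expects,
    from the Set-Cookie header of a prior GET against the Django server."""
    jar = SimpleCookie()
    jar.load(set_cookie_header)
    token = jar["csrftoken"].value
    # Django checks the X-CSRFToken header against the csrftoken cookie
    headers = {"X-CSRFToken": token}
    cookies = {"csrftoken": token}
    return headers, cookies

headers, cookies = django_csrf_args("csrftoken=abc123; Path=/")
# These dicts could then be passed to the transport, e.g.:
# AIOHTTPTransport(url="http://x.x.x.x:8000/graphql", headers=headers, cookies=cookies)
```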
|
closed
|
2022-07-16T07:51:39Z
|
2022-07-21T12:16:39Z
|
https://github.com/graphql-python/gql/issues/344
|
[
"type: question or discussion"
] |
Kinzowa
| 1
|
pytorch/vision
|
computer-vision
| 8,048
|
ImageFolder balancer
|
### 🚀 The feature
The new feature impacts the file [torchvision/datasets/folder.py](https://github.com/pytorch/vision/blob/main/torchvision/datasets/folder.py).
The idea is to add to the `make_dataset` function a new optional parameter that allows balancing the dataset folder. The new parameter, `sampling_strategy`, can assume the following values: `None` (default), `"oversample"` and `"undersample"`.
|Value| Description|
|---|---|
|`None` | no operation is performed on the dataset. This is the default. |
|`"oversample"`| the dataset is balanced by adding copies of image paths from minority classes up to the size of the majority class.|
|`"undersample"` | the dataset is balanced by deleting image paths from majority classes down to the size of the minority class.|
### Motivation, pitch
While working with an unbalanced dataset, I find it extremely useful to balance it at runtime instead of copying/removing images in the filesystem.
After the balanced data folder is defined, you can also apply data augmentation when you define the data loader, so that you do not train on plain image copies and avoid overfitting.
### Alternatives
The implementation can be done in two ways:
1. add the parameter `sampling_strategy` to the `make_dataset` function;
2. define a new class, say `BalancedImageFolder`, that overrides the `make_dataset` method in order to apply the sampling strategy.
We believe the less invasive option is the first one: given how the code is currently structured, overriding the `make_dataset` method would probably require restructuring the file, because a new `make_balanced_dataset` function would have to copy a lot of code from the original `make_dataset`, which is obviously bad practice.
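A rough sketch of what the `sampling_strategy` logic could look like, operating on the `(path, class_index)` pairs that `make_dataset` returns (names and behavior here are my proposal, not existing torchvision code):

```python
import random
from collections import Counter

def balance_samples(samples, sampling_strategy=None, seed=0):
    """Balance a list of (path, class_index) pairs.

    sampling_strategy: None (no-op), "oversample", or "undersample".
    """
    if sampling_strategy is None:
        return list(samples)
    rng = random.Random(seed)
    by_class = {}
    for path, cls in samples:
        by_class.setdefault(cls, []).append((path, cls))
    sizes = [len(items) for items in by_class.values()]
    target = max(sizes) if sampling_strategy == "oversample" else min(sizes)
    balanced = []
    for cls, items in by_class.items():
        if sampling_strategy == "oversample":
            # duplicate random entries until the class reaches the majority size
            extra = [rng.choice(items) for _ in range(target - len(items))]
            balanced.extend(items + extra)
        else:
            # keep a random subset so the class shrinks to the minority size
            balanced.extend(rng.sample(items, target))
    return balanced
```

With oversampling, the duplicated entries are only duplicated *paths*, so runtime data augmentation in the `DataLoader` still produces distinct tensors.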
### Additional context
_No response_
|
closed
|
2023-10-16T09:41:20Z
|
2023-10-17T14:15:24Z
|
https://github.com/pytorch/vision/issues/8048
|
[] |
lorenzomassimiani
| 2
|
inducer/pudb
|
pytest
| 284
|
Pudb not being drawn at full width of the shell
|
Hi,
I am running pudb 2017.1.4 on macOS High Sierra. When I launch pudb from my code (`pudb.set_trace()`), the pudb interface is drawn in only the left quarter of my shell.
I was able to reproduce the behaviour with and without all my config in fish (i.e. a bare fish shell), and I tried it in both iTerm and Terminal.
If I run the `pudb3 <my script>` command instead, the GUI is fine.
pudb config:
```
[pudb]
breakpoints_weight = 1
current_stack_frame = top
custom_shell =
custom_stringifier =
custom_theme =
display = auto
line_numbers = True
prompt_on_quit = True
seen_welcome = e033
shell = internal
sidebar_width = 1
stack_weight = 1
stringifier = type
theme = solarized
variables_weight = 1
wrap_variables = False
```
pip3.4 package:
```
autopep8 (1.3.3)
flake8 (3.5.0)
mccabe (0.6.1)
pip (9.0.1)
ptvsd (3.2.1)
pudb (2017.1.4)
pycodestyle (2.3.1)
pydocstyle (2.1.1)
pyflakes (1.6.0)
Pygments (2.2.0)
setuptools (38.2.1)
six (1.11.0)
snowballstemmer (1.2.1)
urwid (1.3.1)
wheel (0.30.0)
yapf (0.20.0)
```
pudb.set_trace()

|
closed
|
2017-11-29T02:53:32Z
|
2024-02-02T22:02:31Z
|
https://github.com/inducer/pudb/issues/284
|
[] |
mrvkino
| 5
|
nolar/kopf
|
asyncio
| 707
|
kopf.on.event: how to identify a "MODIFIED" event caused only by a spec change
|
A **patch.status** call also raises a "MODIFIED" event, so I could not find a way to identify only the "MODIFIED" events that happen as part of a **spec change**.
I am doing a **patch.status** update in both the **kopf.on.update** and **kopf.on.event** handlers for event type "MODIFIED"; will that cause any problem?
I think a **patch.status** update conflict is happening somewhere. I am not able to see the patch.status update in the **kopf.on.event** handler.
Use case:
-------------
I want to show the state as "Updating" (in `kubectl get mycr`) while the CR is being updated. I was checking `event['type'] == "MODIFIED"` in the kopf.on.event handler, but that does not work because of the issue mentioned above.
Should I use a Kubernetes Python client library to update the status field instead?
Strange behavior
---------
I have 2 **patch.status** updates in the **kopf.on.update** handler, one to change the state to "**Updating**" and another to change it to "**Updated**".
But I can see only one "**Patching with**" debug **log message**, and only one patch is applied. Why?
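A workaround sketch (the helper below is my own idea, not a kopf API): since status-only patches also fire "MODIFIED" but leave `.spec` unchanged, a raw `kopf.on.event` handler could remember the last seen spec per object and react only when it actually changes:

```python
_last_specs = {}  # uid -> last observed spec

def is_spec_change(event):
    """Return True only for MODIFIED events whose .spec actually differs
    from the previously observed .spec (status-only patches are ignored)."""
    obj = event.get("object", {})
    uid = obj.get("metadata", {}).get("uid")
    spec = obj.get("spec")
    changed = event.get("type") == "MODIFIED" and _last_specs.get(uid) != spec
    _last_specs[uid] = spec  # remember for the next event
    return changed
```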
|
open
|
2021-03-07T14:49:39Z
|
2021-04-06T14:17:31Z
|
https://github.com/nolar/kopf/issues/707
|
[
"question"
] |
sajuptpm
| 4
|
miguelgrinberg/flasky
|
flask
| 273
|
Is the 'seed()' in app/models.py necessary?
|
In chapter 11c, the User and Post models have a `generate_fake(count=100)` function. To generate random data it imports `random.seed` and `forgery_py`. The function code is:
```python
from sqlalchemy.exc import IntegrityError
from random import seed
import forgery_py

seed()  # why?
for i in range(count):
    u = User(email=forgery_py.internet.email_address(),
             username=forgery_py.internet.user_name(True),
             password=forgery_py.lorem_ipsum.word(),
             confirmed=True,
             name=forgery_py.name.full_name(),
             location=forgery_py.address.city(),
             about_me=forgery_py.lorem_ipsum.sentence(),
             member_since=forgery_py.date.date(True))
    db.session.add(u)
    try:
        db.session.commit()
    except IntegrityError:
        db.session.rollback()
```
The documentation says `random.seed()` initializes the internal state of the random number generator.
Why call the `seed()` function first? What is its effect? Is it necessary?
If I try deleting `seed()`, random data is still generated normally.
Thanks
@miguelgrinberg
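For context: Python seeds the generator automatically on first use, so `seed()` with no argument mostly matters when you want to re-seed. A quick stdlib check of what seeding does (my sketch, not code from the book):

```python
import random

# seed() with no argument re-seeds from OS entropy (or the clock);
# seed(42) makes the sequence reproducible instead.
random.seed(42)
a = [random.randint(0, 9) for _ in range(5)]
random.seed(42)
b = [random.randint(0, 9) for _ in range(5)]
assert a == b  # identical seed, identical sequence
```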
|
closed
|
2017-05-30T13:15:56Z
|
2017-05-31T02:52:44Z
|
https://github.com/miguelgrinberg/flasky/issues/273
|
[
"question"
] |
sandylili
| 2
|
Significant-Gravitas/AutoGPT
|
python
| 8,918
|
Unable to set up locally: `docker compose up -d` fails at autogpt_platform-market-migrations-1
|
### ⚠️ Search for existing issues first ⚠️
- [X] I have searched the existing issues, and there is no existing issue for my problem
### Which Operating System are you using?
MacOS
### Which version of AutoGPT are you using?
Master (branch)
### What LLM Provider do you use?
Azure
### Which area covers your issue best?
Installation and setup
### What commit or version are you using?
2121ffd06b26a438706bf642372cc46d81c94ddc
### Describe your issue.
I am trying to set up AutoGPT locally, following the steps in the video tutorial in the main README.md file.
The Docker setup fails when I run this command:
`docker compose up -d`
It fails with the following status:
` ✘ Container autogpt_platform-market-migrations-1 service "market-migrations" didn't complete successfully: exit 1 `
and then gets stuck at
` Container autogpt_platform-migrate-1 Waiting `
What did I miss?
### Upload Activity Log Content
_No response_
### Upload Error Log Content
_No response_
|
open
|
2024-12-09T12:10:32Z
|
2025-02-14T23:22:07Z
|
https://github.com/Significant-Gravitas/AutoGPT/issues/8918
|
[] |
mali-tintash
| 20
|
healthchecks/healthchecks
|
django
| 712
|
Average execution time
|
I would like to suggest an idea that would be helpful for me:
On the detail page of a check, an option to display the average execution time would be very helpful. Maybe even a graph of the execution time over time? Minimum and maximum values could also be useful.
This would help especially when adding new features to a script, to see whether it is consuming too much time.
Thanks for your consideration and your work!
|
open
|
2022-10-05T12:58:35Z
|
2024-07-08T09:10:17Z
|
https://github.com/healthchecks/healthchecks/issues/712
|
[
"feature"
] |
BeyondVertical
| 1
|
pytorch/pytorch
|
numpy
| 149,551
|
Remove PyTorch conda installation instructions from the documentation and tutorials
|
### 🐛 Describe the bug
Please see: https://github.com/pytorch/pytorch/issues/138506
and https://dev-discuss.pytorch.org/t/pytorch-deprecation-of-conda-nightly-builds/2590
PyTorch has deprecated conda builds as of release 2.6. We need to remove mentions of conda package installation instructions from tutorials and documentation.
Examples:
* https://pytorch.org/tutorials/beginner/introyt/tensorboardyt_tutorial.html#before-you-start
* https://pytorch.org/audio/main/build.linux.html
* https://pytorch.org/audio/main/installation.html
* https://pytorch.org/audio/main/build.windows.html#install-pytorch
* https://pytorch.org/tutorials/recipes/recipes/tensorboard_with_pytorch.html
* https://pytorch.org/tutorials/beginner/introyt/captumyt.html
* https://pytorch.org/vision/main/training_references.html
* https://pytorch.org/tutorials/advanced/torch_script_custom_ops.html#environment-setup
* https://pytorch.org/docs/stable/notes/windows.html#package-not-found-in-win-32-channel
### Versions
2.7.0
cc @svekars @sekyondaMeta @AlannaBurke
|
open
|
2025-03-19T20:18:59Z
|
2025-03-20T16:08:48Z
|
https://github.com/pytorch/pytorch/issues/149551
|
[
"module: docs",
"triaged",
"topic: docs"
] |
atalman
| 1
|
3b1b/manim
|
python
| 1,652
|
NumberPlane not showing its lines
|
### Describe the bug
I have used manimgl before, but after replacing an almost 4-5 month old version with the new one, whenever I run the example scene (**OpeningManimExample**), the lines of the **NumberPlane** are not shown.
**Image**

**Version Used**:
Manim - 1.1.0
|
closed
|
2021-10-16T12:51:21Z
|
2021-10-17T03:21:40Z
|
https://github.com/3b1b/manim/issues/1652
|
[
"bug"
] |
aburousan
| 2
|
aio-libs-abandoned/aioredis-py
|
asyncio
| 702
|
connection pool publish spending a lot of time in traceback
|
I tried publishing 20k messages and it took about 6 seconds, so I wondered why it was so slow: Redis's self-test tells me it can do 200k SET commands with 50 connections in 1.5 seconds, which is pretty OK.
I did some profiling with snakeviz, and it turns out that when I publish from the pool, the execute function spends a lot of time in traceback.py.
I am not very familiar with Python internals, so I came here to ask: why would the profiler show that we spend so much time as if we were catching exceptions and collecting stack traces in a tight loop?

|
closed
|
2020-02-18T16:54:29Z
|
2020-02-26T10:43:18Z
|
https://github.com/aio-libs-abandoned/aioredis-py/issues/702
|
[] |
nurettin
| 1
|
RobertCraigie/prisma-client-py
|
pydantic
| 1,052
|
Prisma client not generating in production (AWS)
|
### Discussed in https://github.com/RobertCraigie/prisma-client-py/discussions/1051
<div type='discussions-op-text'>
<sup>Originally posted by **ifaronti** December 7, 2024</sup>
My set up is quite simple:
```
generator py {
provider = "prisma-client-py"
recursive_type_depth = "5"
interface = "asyncio"
enable_experimental_decimal = true
}
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
}
```
However, deploying to both AWS Lambda and Vercel throws the same "prisma client not generated yet" error, even though everything works fine locally.
my project folder structure places prisma folder which contains the schema.prisma file in the root folder.
stack trace:
```
" File \"/var/lang/lib/python3.13/importlib/__init__.py\", line 88, in import_module\n return _bootstrap._gcd_import(name[level:], package, level)\n",
" File \"<frozen importlib._bootstrap>\", line 1387, in _gcd_import\n",
" File \"<frozen importlib._bootstrap>\", line 1360, in _find_and_load\n",
" File \"<frozen importlib._bootstrap>\", line 1331, in _find_and_load_unlocked\n",
" File \"<frozen importlib._bootstrap>\", line 935, in _load_unlocked\n",
" File \"<frozen importlib._bootstrap_external>\", line 1022, in exec_module\n",
" File \"<frozen importlib._bootstrap>\", line 488, in _call_with_frames_removed\n",
" File \"/var/task/main.py\", line 2, in <module>\n from app.routers import transactions\n",
" File \"/var/task/app/routers/transactions.py\", line 2, in <module>\n from ..controllers.transactions.get_transactions import get_Transactions\n",
" File \"/var/task/app/controllers/transactions/get_transactions.py\", line 1, in <module>\n from prisma import Prisma\n",
" File \"<frozen importlib._bootstrap>\", line 1412, in _handle_fromlist\n",
" File \"/opt/python/prisma/__init__.py\", line 53, in __getattr__\n raise RuntimeError(\n"
]
```
Note: It's not the transactions controller or route that is the problem if I put any route as first in order of route calls from the index file, that is what will appear in the stack trace.
Everything works fine locally, but production fails. Thanks in advance.</div>
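Not stated in the thread, but the usual cause of this error in serverless deploys is that the client was never generated in the build environment, so the generated package is missing from the bundle. A hedged sketch of a build step (the paths and the exact build hook are assumptions for your platform):

```shell
# Hypothetical packaging step for Lambda/Vercel; adjust paths to your project.
pip install -r requirements.txt
python -m prisma generate   # generates the client from prisma/schema.prisma before bundling
```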
|
closed
|
2024-12-07T17:59:55Z
|
2024-12-11T21:40:14Z
|
https://github.com/RobertCraigie/prisma-client-py/issues/1052
|
[] |
ifaronti
| 1
|
cobrateam/splinter
|
automation
| 478
|
adding cookies to the phantomjs browser raises an error
|
phantomjs version: with phantomjs-2.1.1-linux-x86_64.tar.bz2 there is an error,
but phantomjs-1.9.8-linux-x86_64.tar.bz2 works fine.
code:
```python
from splinter import Browser

cookie = {'test': '1234'}
current_url = 'http://www.google.com'
browser = Browser('phantomjs')
browser.visit(current_url)
browser.cookies.add(cookie)
source = browser.html
browser.quit()
```
Exception output:
```
Traceback (most recent call last):
  File "dy.py", line 37, in <module>
    dy.dynamic_request()
  File "dy.py", line 31, in dynamic_request
    browser.cookies.add(cookie)
  File "/usr/local/lib/python2.7/dist-packages/splinter-0.7.3-py2.7.egg/splinter/driver/webdriver/cookie_manager.py", line 28, in add
    self.driver.add_cookie({'name': key, 'value': value})
  File "/usr/local/lib/python2.7/dist-packages/selenium-2.53.0-py2.7.egg/selenium/webdriver/remote/webdriver.py", line 666, in add_cookie
    self.execute(Command.ADD_COOKIE, {'cookie': cookie_dict})
  File "/usr/local/lib/python2.7/dist-packages/selenium-2.53.0-py2.7.egg/selenium/webdriver/remote/webdriver.py", line 233, in execute
    self.error_handler.check_response(response)
  File "/usr/local/lib/python2.7/dist-packages/selenium-2.53.0-py2.7.egg/selenium/webdriver/remote/errorhandler.py", line 194, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: {"errorMessage":"Can only set Cookies for the current domain","request":{"headers":{"Accept":"application/json","Accept-Encoding":"identity","Connection":"close","Content-Length":"98","Content-Type":"application/json;charset=UTF-8","Host":"127.0.0.1:55891","User-Agent":"Python-urllib/2.7"},"httpVersion":"1.1","method":"POST","post":"{\"sessionId\": \"7b996980-de36-11e5-8735-cd0a007cdec8\", \"cookie\": {\"name\": \"test\", \"value\": \"1234\"}}","url":"/cookie","urlParsed":{"anchor":"","query":"","file":"cookie","directory":"/","path":"/cookie","relative":"/cookie","port":"","host":"","password":"","user":"","userInfo":"","authority":"","protocol":"","source":"/cookie","queryKey":{},"chunks":["cookie"]},"urlOriginal":"/session/7b996980-de36-11e5-8735-cd0a007cdec8/cookie"}}
Screenshot: available via screen
```
|
closed
|
2016-03-31T09:23:13Z
|
2018-08-27T01:04:05Z
|
https://github.com/cobrateam/splinter/issues/478
|
[
"bug",
"help wanted"
] |
galaxy6
| 1
|
babysor/MockingBird
|
deep-learning
| 714
|
What to do about capturable=False? A summary of some workarounds.
|
① First uninstall PyTorch: run `pip uninstall torch` in cmd
② Switch to the matching torchvision / torchaudio versions (see the two images below)


③ Install commands: `pip install torchvision==0.10.0` and `pip install torchaudio==0.9.0`
④ Go to https://download.pytorch.org/whl/torch_stable.html, then find and download the file cu111/torchvision-0.9.0%2Bcu111-cp39-cp39-win_amd64.whl.
⑤ Run cmd in the download directory and install with: `pip install torch-1.9.0+cu111-cp39-cp39-win_amd64.whl`
|
open
|
2022-08-18T06:45:47Z
|
2022-09-10T16:10:30Z
|
https://github.com/babysor/MockingBird/issues/714
|
[] |
pzhyyd
| 1
|
tensorlayer/TensorLayer
|
tensorflow
| 1,039
|
BatchNorm1d IndexError: list index out of range
|
My code as follow:
```python
ni2 = Input(shape=[None, flags.z2_dim])
ni2 = Dense(100, W_init=w_init, b_init=None)(ni2)
ni2 = BatchNorm1d(decay=0.9, act=act, gamma_init=gamma_init)(ni2)
```
I got the following error.
```
File "/home/asus/Workspace/dcgan-disentangle/model.py", line 18, in get_generator
ni2 = BatchNorm1d(decay=0.9, act=act, gamma_init=gamma_init)(ni2)
File "/home/asus/Workspace/dcgan-disentangle/tensorlayer/layers/core.py", line 238, in __call__
self.build(inputs_shape)
File "/home/asus/Workspace/dcgan-disentangle/tensorlayer/layers/normalization.py", line 250, in build
params_shape, self.axes = self._get_param_shape(inputs_shape)
File "/home/asus/Workspace/dcgan-disentangle/tensorlayer/layers/normalization.py", line 311, in _get_param_shape
channels = inputs_shape[axis]
IndexError: list index out of range
```
According to the docs, `BatchNorm1d` only works for input shapes of `[batch, xxx, xxx]`? What about `[batch, xxx]`?
```
>>> # in static model, no need to specify num_features
>>> net = tl.layers.Input([None, 50, 32], name='input')
>>> net = tl.layers.BatchNorm1d()(net)
>>> # in dynamic model, build by specifying num_features
>>> conv = tl.layers.Conv1d(32, 5, 1, in_channels=3)
>>> bn = tl.layers.BatchNorm1d(num_features=32)
```
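Not part of the original docs, but the IndexError can be reproduced without TensorLayer: the 1d batch-norm layer computes its parameter axis for a rank-3 `[batch, length, channels]` shape, and indexing that axis into a rank-2 `[batch, features]` shape falls off the end (the axis value below is an assumption of this sketch; the shapes mirror the report):

```python
inputs_shape_3d = [None, 50, 32]   # [batch, length, channels] - what BatchNorm1d expects
inputs_shape_2d = [None, 100]      # [batch, features] - what the report passes in

axis = 2  # channel axis assumed by the 1d layer in this sketch

print(inputs_shape_3d[axis])  # 32: fine for a rank-3 input shape

try:
    inputs_shape_2d[axis]
except IndexError as exc:
    print("IndexError:", exc)  # list index out of range, as in the report
```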
|
closed
|
2019-08-28T07:49:14Z
|
2019-08-31T01:41:21Z
|
https://github.com/tensorlayer/TensorLayer/issues/1039
|
[] |
zsdonghao
| 2
|
frol/flask-restplus-server-example
|
rest-api
| 110
|
Please document in enabled_modules that api must be last
|
This bit me and was hard to debug when I was playing around with enabled modules in my project.
|
closed
|
2018-05-09T14:56:14Z
|
2018-06-30T19:29:45Z
|
https://github.com/frol/flask-restplus-server-example/issues/110
|
[
"bug"
] |
bitfinity
| 6
|
facebookresearch/fairseq
|
pytorch
| 5,293
|
Pretrained models not gzip files?
|
I am trying to load a pretrained model from fairseq following the simple example [here](https://pytorch.org/hub/pytorch_fairseq_roberta/). However, whenever I run the following code I get the following `ReadError`.
#### Code
```py
import torch

roberta = torch.hub.load('pytorch/fairseq', 'roberta.base')
# raises: ReadError: not a gzip file
```
#### What's your environment?
- PyTorch Version (2.0.1+cu117)
- OS: Linux
- How you installed fairseq: `pip install fairseq`
- Python version: 3.10.9
- CUDA version: 12.2
|
open
|
2023-08-22T17:18:09Z
|
2023-08-30T16:27:11Z
|
https://github.com/facebookresearch/fairseq/issues/5293
|
[
"question",
"needs triage"
] |
cdeterman
| 1
|
xlwings/xlwings
|
automation
| 2,529
|
FileNotFoundError(2, 'No such file or directory') when trying to install
|
#### OS (e.g. Windows 10 or macOS Sierra)
Windows 11 Home
#### Versions of xlwings, Excel and Python (e.g. 0.11.8, Office 365, Python 3.7)
Excel, Office 365
Python 3.12.4
xlwings 0.33.1
#### Describe your issue (incl. Traceback!)
I just got an annoying message from Windows Defender about xlwings being a trojan horse and therefore tried to remove xlwings. After some research I'm confident it was a false positive, and now I'm trying to reinstall it, but instead I get an error message saying
> FileNotFoundError(2, 'No such file or directory')

Already tried rebooting (well, it IS Windows, after all...) without any effect. In order to debug further, it would help to know what's actually missing.
|
closed
|
2024-10-07T12:52:28Z
|
2024-10-08T09:05:33Z
|
https://github.com/xlwings/xlwings/issues/2529
|
[] |
HansThorsager
| 4
|
InstaPy/InstaPy
|
automation
| 6,381
|
KeyError: shortcode_media not found
|
## Expected Behavior
In like_util.py, the get_additional_data function fetches some post info, one item of which is shortcode_media.
## Current Behavior
The shortcode_media key is not found in the post_page (post_page = get_additional_data(browser)).
|
open
|
2021-10-23T06:28:44Z
|
2021-12-25T18:14:48Z
|
https://github.com/InstaPy/InstaPy/issues/6381
|
[] |
kharazian
| 10
|
keras-team/keras
|
tensorflow
| 20,984
|
Is this a bug?
|
Issue Description:
I am running CycleGAN training using Keras v3.8.0 and have observed the following behavior during the initial epoch (0/200):
Discriminator (D):
The D loss starts around 0.70 (e.g., 0.695766 at batch 0) with a very low accuracy (~25%).
In the next few batches, the D loss quickly decreases to approximately 0.47 and the discriminator accuracy increases and stabilizes around 49%.
Generator (G):
The G loss begins at approximately 17.42 and gradually decreases (e.g., reaching around 14.85 by batch 15).
However, the adversarial component of the G loss remains nearly constant at roughly 0.91 throughout these batches.
Both the reconstruction loss and identity loss show a slight downward trend.
Questions/Concerns:
Discriminator Behavior:
The discriminator accuracy quickly climbs to ~49%, which is close to random guessing (50%). Is this the expected behavior in early training stages, or might it indicate an issue with the discriminator setup?
Constant Adversarial Loss:
Despite the overall generator loss decreasing, the adversarial loss remains almost unchanged (~0.91). Should I be concerned that the generator is not improving its ability to fool the discriminator, or is this typical during the initial epochs?
Next Steps:
Would further hyperparameter tuning or changes in the training strategy be recommended at this stage to encourage more effective adversarial learning?
Steps to Reproduce:
Use Keras version 3.8.0.
Train CycleGAN for 200 epochs on the given dataset.
Observe the training logs, especially during the initial epoch (batches 0–15).
Example Log Snippet:
[Epoch 0/200] [Batch 0/3400] [D loss: 0.695766, acc: 25%] [G loss: 17.421459, adv: 0.911943, recon: 0.703973, id: 0.664554] time: 0:00:54.098616
[Epoch 0/200] [Batch 1/3400] [D loss: 0.544706, acc: 41%] [G loss: 17.582617, adv: 0.913612, recon: 0.759879, id: 0.702040] time: 0:00:54.705443
[Epoch 0/200] [Batch 2/3400] [D loss: 0.516353, acc: 44%] [G loss: 17.095995, adv: 0.912766, recon: 0.749940, id: 0.726107] time: 0:00:54.945915
...
[Epoch 0/200] [Batch 15/3400] [D loss: 0.477357, acc: 49%] [G loss: 14.845922, adv: 0.901916, recon: 0.742566, id: 0.698672] time: 0:00:58.317200
Any insights or recommendations would be greatly appreciated. Thank you in advance for your help!
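For reference (not part of the report): a discriminator that outputs p = 0.5 on every sample under binary cross-entropy scores exactly ln 2 ≈ 0.693, which matches the D loss at batch 0, and the near-50% accuracy that follows is the same chance-level behavior. So the logged numbers are at least internally consistent with a discriminator near equilibrium rather than a broken one:

```python
import math

def bce(p: float, y: float) -> float:
    """Binary cross-entropy for a single prediction p against label y."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# A discriminator at chance predicts p = 0.5 regardless of the label:
loss_real = bce(0.5, 1.0)
loss_fake = bce(0.5, 0.0)
print(loss_real, loss_fake)  # both equal ln 2 ~= 0.6931, close to the reported 0.6958
```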
|
open
|
2025-03-05T07:23:22Z
|
2025-03-12T05:04:32Z
|
https://github.com/keras-team/keras/issues/20984
|
[
"type:Bug"
] |
AmadeuSY-labo
| 2
|
matplotlib/matplotlib
|
data-visualization
| 29,380
|
[MNT]: deprecate auto hatch fallback to patch.edgecolor when edgecolor='None'
|
### Summary
Since #28104 now separates out hatchcolor, users should not be allowed to specifically ask to fall back to edgecolor while also explicitly setting that edgecolor to none, because silently falling back on ```edgecolor="None"``` introduces these problems:
1. Because of eager color resolution, and because "none" is frequently produced by knocking out the alpha, the way to check for none is to check the alpha, which makes the fallback depend on the interplay between alpha and edgecolor:

2. The fallback to the edgecolor rcParams doesn't check whether the rcParam is also none, so it could sink the problem a layer deeper.
### Proposed fix
Deprecate this fallback behavior and raise a warning on
``` Rectangle( (0,0), .5, .5, hatchcolor='edge', edgecolor='None')```
The alternatives are:
* set a hatchcolor
* don't set edgecolor at all, .i.e. ``` Rectangle( (0,0), .5, .5, hatchcolor='edge')``` falls back to the rcParam in `get_edgecolor`
|
open
|
2024-12-25T00:19:58Z
|
2024-12-26T03:11:59Z
|
https://github.com/matplotlib/matplotlib/issues/29380
|
[
"Maintenance",
"topic: hatch"
] |
story645
| 2
|
neuml/txtai
|
nlp
| 646
|
ImportError: Textractor pipeline is not available - install "pipeline" extra to enable
|
When I run the code block below, it gives this error:
`from txtai.pipeline import Textractor`
`textractor = Textractor()`
ImportError: Textractor pipeline is not available - install "pipeline" extra to enable
Note: `pip install txtai[pipeline]` did not work.
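One common cause (an assumption here, since the shell wasn't shown) is that zsh and some other shells expand the square brackets before pip sees them, so the extra is silently dropped; quoting the requirement sidesteps that:

```shell
pip install 'txtai[pipeline]'
```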
|
closed
|
2024-01-23T23:14:23Z
|
2024-09-16T06:04:23Z
|
https://github.com/neuml/txtai/issues/646
|
[] |
berkgungor
| 6
|
Kludex/mangum
|
asyncio
| 264
|
Mangum has a huge list of dependencies, many of dubious utility.
|
Currently mangum installs over 150 packages along with it:
[pdm list --graph](https://gist.github.com/Fak3/419769a9fb05ec98a46db4d5a9a21dae)
I wish there was a slimmer version that only installs core dependencies. Currently some deps like yappi and netifaces do not provide binary wheels, which makes mangum installation fail when the Python headers are missing:
<details>
<pre>
Preparing isolated env for PEP 517 build...
Building wheel for https://files.pythonhosted.org/packages/9c/bb/47be36b473e56360d3012bf1b6e405785e42d1f2e91da715964c1a705937/yappi-1.3.5.tar.gz#sha256=f54c25f04aa7c613633b529bffd14e0699a4363f414dc9c065616fd52064a49b (from https://pypi.org/simple/yappi/)
Collecting wheel
Using cached wheel-0.37.1-py2.py3-none-any.whl (35 kB)
Collecting setuptools>=40.8.0
Using cached setuptools-62.6.0-py3-none-any.whl (1.2 MB)
Installing collected packages: wheel, setuptools
Successfully installed setuptools-62.6.0 wheel-0.37.1
/tmp/timer_create3280tt_g.c: In function ‘main’:
/tmp/timer_create3280tt_g.c:2:5: warning: implicit declaration of function ‘timer_create’ [-Wimplicit-function-declaration]
2 | timer_create();
| ^~~~~~~~~~~~
running egg_info
writing yappi/yappi.egg-info/PKG-INFO
writing dependency_links to yappi/yappi.egg-info/dependency_links.txt
writing entry points to yappi/yappi.egg-info/entry_points.txt
writing requirements to yappi/yappi.egg-info/requires.txt
writing top-level names to yappi/yappi.egg-info/top_level.txt
reading manifest file 'yappi/yappi.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
adding license file 'LICENSE'
writing manifest file 'yappi/yappi.egg-info/SOURCES.txt'
/tmp/timer_create8z57jb1h.c: In function ‘main’:
/tmp/timer_create8z57jb1h.c:2:5: warning: implicit declaration of function ‘timer_create’ [-Wimplicit-function-declaration]
2 | timer_create();
| ^~~~~~~~~~~~
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-cpython-310
copying yappi/yappi.py -> build/lib.linux-x86_64-cpython-310
running build_ext
building '_yappi' extension
creating build/temp.linux-x86_64-cpython-310
creating build/temp.linux-x86_64-cpython-310/yappi
gcc -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -Wall -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=3 -fstack-protector-strong -funwind-tables -fasynchronous-unwind-tables -fstack-clash-protection -Werror=return-type -g -DOPENSSL_LOAD_CONF -fwrapv -fno-semantic-interposition -O2 -Wall -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=3 -fstack-protector-strong -funwind-tables -fasynchronous-unwind-tables -fstack-clash-protection -Werror=return-type -g -IVendor/ -O2 -Wall -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=3 -fstack-protector-strong -funwind-tables -fasynchronous-unwind-tables -fstack-clash-protection -Werror=return-type -g -IVendor/ -fPIC -DLIB_RT_AVAILABLE=1 -I/usr/include/python3.10 -c yappi/_yappi.c -o build/temp.linux-x86_64-cpython-310/yappi/_yappi.o
In file included from yappi/_yappi.c:10:
yappi/config.h:4:10: fatal error: Python.h: No such file or directory
4 | #include "Python.h"
| ^~~~~~~~~~
compilation terminated.
error: command '/usr/bin/gcc' failed with exit code 1
Error occurs:
Traceback (most recent call last):
File "/usr/lib64/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/lib/python3.10/site-packages/pdm/installers/synchronizers.py", line 207, in install_candidate
self.manager.install(can)
File "/usr/lib/python3.10/site-packages/pdm/installers/manager.py", line 39, in install
installer(prepared.build(), self.environment, prepared.direct_url())
File "/usr/lib/python3.10/site-packages/pdm/models/candidates.py", line 331, in build
self.wheel = builder.build(build_dir, metadata_directory=self._metadata_dir)
File "/usr/lib/python3.10/site-packages/pdm/builders/wheel.py", line 28, in build
filename = self._hook.build_wheel(out_dir, config_settings, metadata_directory)
File "/usr/lib/python3.10/site-packages/pep517/wrappers.py", line 208, in build_wheel
return self._call_hook('build_wheel', {
File "/usr/lib/python3.10/site-packages/pep517/wrappers.py", line 322, in _call_hook
self._subprocess_runner(
File "/usr/lib/python3.10/site-packages/pdm/builders/base.py", line 246, in subprocess_runner
return log_subprocessor(cmd, cwd, extra_environ=env)
File "/usr/lib/python3.10/site-packages/pdm/builders/base.py", line 87, in log_subprocessor
raise BuildError(
pdm.exceptions.BuildError: Call command ['/usr/bin/python3.10', '/usr/lib/python3.10/site-packages/pep517/in_process/_in_process.py', 'build_wheel', '/tmp/tmprefrneax'] return non-zero status(1).
Preparing isolated env for PEP 517 build...
Reusing shared build env: /tmp/pdm-build-env-eq22vk28-shared
Building wheel for https://files.pythonhosted.org/packages/9c/bb/47be36b473e56360d3012bf1b6e405785e42d1f2e91da715964c1a705937/yappi-1.3.5.tar.gz#sha256=f54c25f04aa7c613633b529bffd14e0699a4363f414dc9c065616fd52064a49b (from https://pypi.org/simple/yappi/)
/tmp/timer_createh4zqpscq.c: In function ‘main’:
/tmp/timer_createh4zqpscq.c:2:5: warning: implicit declaration of function ‘timer_create’ [-Wimplicit-function-declaration]
2 | timer_create();
| ^~~~~~~~~~~~
running egg_info
writing yappi/yappi.egg-info/PKG-INFO
writing dependency_links to yappi/yappi.egg-info/dependency_links.txt
writing entry points to yappi/yappi.egg-info/entry_points.txt
writing requirements to yappi/yappi.egg-info/requires.txt
writing top-level names to yappi/yappi.egg-info/top_level.txt
reading manifest file 'yappi/yappi.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
adding license file 'LICENSE'
writing manifest file 'yappi/yappi.egg-info/SOURCES.txt'
/tmp/timer_createnezrnkq7.c: In function ‘main’:
/tmp/timer_createnezrnkq7.c:2:5: warning: implicit declaration of function ‘timer_create’ [-Wimplicit-function-declaration]
2 | timer_create();
| ^~~~~~~~~~~~
running bdist_wheel
running build
running build_py
running build_ext
building '_yappi' extension
gcc -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -Wall -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=3 -fstack-protector-strong -funwind-tables -fasynchronous-unwind-tables -fstack-clash-protection -Werror=return-type -g -DOPENSSL_LOAD_CONF -fwrapv -fno-semantic-interposition -O2 -Wall -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=3 -fstack-protector-strong -funwind-tables -fasynchronous-unwind-tables -fstack-clash-protection -Werror=return-type -g -IVendor/ -O2 -Wall -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=3 -fstack-protector-strong -funwind-tables -fasynchronous-unwind-tables -fstack-clash-protection -Werror=return-type -g -IVendor/ -fPIC -DLIB_RT_AVAILABLE=1 -I/usr/include/python3.10 -c yappi/_yappi.c -o build/temp.linux-x86_64-cpython-310/yappi/_yappi.o
In file included from yappi/_yappi.c:10:
yappi/config.h:4:10: fatal error: Python.h: No such file or directory
4 | #include "Python.h"
| ^~~~~~~~~~
compilation terminated.
error: command '/usr/bin/gcc' failed with exit code 1
Error occurs:
Traceback (most recent call last):
File "/usr/lib64/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/lib/python3.10/site-packages/pdm/installers/synchronizers.py", line 207, in install_candidate
self.manager.install(can)
File "/usr/lib/python3.10/site-packages/pdm/installers/manager.py", line 39, in install
installer(prepared.build(), self.environment, prepared.direct_url())
File "/usr/lib/python3.10/site-packages/pdm/models/candidates.py", line 331, in build
self.wheel = builder.build(build_dir, metadata_directory=self._metadata_dir)
File "/usr/lib/python3.10/site-packages/pdm/builders/wheel.py", line 28, in build
filename = self._hook.build_wheel(out_dir, config_settings, metadata_directory)
File "/usr/lib/python3.10/site-packages/pep517/wrappers.py", line 208, in build_wheel
return self._call_hook('build_wheel', {
File "/usr/lib/python3.10/site-packages/pep517/wrappers.py", line 322, in _call_hook
self._subprocess_runner(
File "/usr/lib/python3.10/site-packages/pdm/builders/base.py", line 246, in subprocess_runner
return log_subprocessor(cmd, cwd, extra_environ=env)
File "/usr/lib/python3.10/site-packages/pdm/builders/base.py", line 87, in log_subprocessor
raise BuildError(
pdm.exceptions.BuildError: Call command ['/usr/bin/python3.10', '/usr/lib/python3.10/site-packages/pep517/in_process/_in_process.py', 'build_wheel', '/tmp/tmpga2r2t_o'] return non-zero status(1).
Error occurs
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/pdm/termui.py", line 200, in logging
yield logger
File "/usr/lib/python3.10/site-packages/pdm/installers/synchronizers.py", line 374, in synchronize
raise InstallationError("Some package operations are not complete yet")
pdm.exceptions.InstallationError: Some package operations are not complete yet
</pre>
</details>
|
closed
|
2022-06-24T19:09:38Z
|
2022-06-25T07:31:31Z
|
https://github.com/Kludex/mangum/issues/264
|
[
"more info needed"
] |
Fak3
| 2
|
marimo-team/marimo
|
data-visualization
| 4,145
|
[Newbie Q] Cannot add SQLite table
|
New to Marimo, probably doing something simple wrong.
I want to add my sqlite lite database which is stored in `my_fair_table.db`.
I copy the full path.
I paste the path into the UI flow of SQLite and click "Add".
Nothing happens. No error. No database.

When running python code, I can successfully connect to the database file.
But with the table view of Marimo or using the SQL cell I can't.
After not figuring it out for 45 min I decided to write this issue.
What might I be doing wrong?
OS: Linux
Python: 3.12
Marimo: 0.11.21
|
closed
|
2025-03-18T13:26:27Z
|
2025-03-18T15:42:46Z
|
https://github.com/marimo-team/marimo/issues/4145
|
[] |
dentroai
| 3
|
OpenInterpreter/open-interpreter
|
python
| 615
|
Issues with Azure setting
|
### Describe the bug
I am trying to point the application at my Azure keys, but each time I get this issue:
`There might be an issue with your API key(s).
To reset your API key (we'll use OPENAI_API_KEY for this example, but you may need to reset your ANTHROPIC_API_KEY, HUGGINGFACE_API_KEY, etc):
Mac/Linux: 'export OPENAI_API_KEY=your-key-here',
Windows: 'setx OPENAI_API_KEY your-key-here' then restart terminal.`
I have done everything mentioned in the documentation to set up the interpreter with Azure, but it does not work for me. :(
Any help will be appreciated. I am trying to build a simple streamlit application as a frontend invoking the interpreter.
I even went and set the model in the code to reflect azure..
### Reproduce
1. Configure the interpreter to use azure:
```
interpreter.azure_api_type = "azure"
interpreter.use_azure = True
interpreter.azure_api_base = "https://something.openai.azure.com/"
interpreter.azure_api_version = "2023-05-15"
interpreter.azure_api_key = "my-azure-key"
interpreter.azure_deployment_name="GPT-4"
interpreter.auto_run = True
interpreter.conversation_history = True
```
```
async def get_response(prompt: str, files: Optional[list] = None):
if files is None:
files = []
with st.chat_message("user"):
st.write(prompt)
with st.spinner():
response=''
for chunk in interpreter.chat(f'content={input_text}, files={file_path}', stream=True):
if 'message' in chunk:
response += chunk['message']
st.write(response)
```
and sometimes I get an issue as below about SSL...
<img width="839" alt="image" src="https://github.com/KillianLucas/open-interpreter/assets/40305398/801bb045-3677-416f-b348-ab91ec584761">
### Expected behavior
The azure specific configurations must be picked up and the interpreter must work as expected.
### Screenshots
<img width="1264" alt="Screenshot 2023-10-10 at 12 16 58" src="https://github.com/KillianLucas/open-interpreter/assets/40305398/c6519d1d-55d6-41bf-80a8-95b7cc6df300">
### Open Interpreter version
0.1.7
### Python version
3.11.3
### Operating System name and version
Windows 11
### Additional context
_No response_
|
closed
|
2023-10-10T07:03:35Z
|
2023-10-27T17:21:59Z
|
https://github.com/OpenInterpreter/open-interpreter/issues/615
|
[
"Bug"
] |
nsvbhat
| 1
|
Johnserf-Seed/TikTokDownload
|
api
| 266
|
[BUG]
|
I copied part of your code, mainly to scrape some Douyin videos that I have liked, but
the API has a problem: image galleries have no download link. I captured the returned API data, and there simply is no video data for the gallery posts.

The first two are image posts, but the API returned the data for the third video directly.

|
closed
|
2022-12-09T02:11:33Z
|
2023-01-14T12:14:09Z
|
https://github.com/Johnserf-Seed/TikTokDownload/issues/266
|
[
"故障(bug)",
"额外求助(help wanted)",
"无效(invalid)"
] |
zhangsan-feng
| 3
|
koxudaxi/fastapi-code-generator
|
pydantic
| 179
|
support for constraints in Query/Body rather than as "conX
|
open
|
2021-06-28T04:32:27Z
|
2021-07-09T14:48:31Z
|
https://github.com/koxudaxi/fastapi-code-generator/issues/179
|
[
"enhancement"
] |
aeresov
| 1
|
|
google-research/bert
|
nlp
| 1,405
|
MRPC and CoLA Dataset UnicodeDecodeError
|
Error message: UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd5 in position 147: invalid continuation byte
I can't train properly after loading these two datasets. It still reports an error after trying the "ISO-8859-1" and "latin-1" codecs.
After checking the train.txt file of the MRPC dataset, I found that the erroneous byte corresponds to the character "é", but I modified train.txt and test.txt and preprocessed again to get train.tsv and test.tsv (I also checked that the files no longer contain the character "é"). Training still fails with the same error.
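Not from the original report, but a minimal way to locate the offending rows before BERT's reader touches them is to scan the raw bytes line by line (the helper name and sample data below are hypothetical):

```python
def find_bad_utf8_lines(raw: bytes):
    """Return (line_number, decoded_with_latin1) for every line that is not valid UTF-8."""
    bad = []
    for i, line in enumerate(raw.splitlines(), start=1):
        try:
            line.decode("utf-8")
        except UnicodeDecodeError:
            # latin-1 maps every byte, so it never fails; byte 0xd5 becomes "Õ"
            bad.append((i, line.decode("latin-1")))
    return bad

sample = "ok line\n".encode("utf-8") + b"caf\xd5 line\n"
print(find_bad_utf8_lines(sample))  # [(2, 'cafÕ line')]
```

Running this over train.tsv would pin down exactly which rows still carry non-UTF-8 bytes after preprocessing.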
|
open
|
2024-04-17T14:33:36Z
|
2024-04-17T14:33:36Z
|
https://github.com/google-research/bert/issues/1405
|
[] |
ZhuHouYi
| 0
|
graphql-python/gql
|
graphql
| 68
|
how to upload file
|
Is there any example to show how to upload a file to the "Input!"?
|
closed
|
2020-03-24T05:27:57Z
|
2021-09-29T12:28:49Z
|
https://github.com/graphql-python/gql/issues/68
|
[
"type: feature"
] |
DogeWatch
| 22
|
unytics/bigfunctions
|
data-visualization
| 6
|
Are you interested in providing RFM clustering?
|
I'm thinking about queries like these ones I've contributed to [my pet Super Store project](https://github.com/mickaelandrieu/dbt-super_store/tree/main/models/marts/core/customers/segmentation).
WDYT?
|
open
|
2022-10-31T21:18:44Z
|
2022-12-06T22:52:24Z
|
https://github.com/unytics/bigfunctions/issues/6
|
[
"new-bigfunction"
] |
mickaelandrieu
| 2
|
littlecodersh/ItChat
|
api
| 728
|
The docs say uin can be used as an identifier, but all I get is 0. Is uin no longer obtainable, or am I fetching it the wrong way?
|
The docs say uin can be used as an identifier, but all I get is 0. Is uin no longer obtainable, or am I fetching it the wrong way?
|
open
|
2018-09-13T08:47:53Z
|
2018-09-13T08:47:53Z
|
https://github.com/littlecodersh/ItChat/issues/728
|
[] |
yehanliang
| 0
|
ydataai/ydata-profiling
|
data-science
| 1,359
|
Time series mode inactive
|
### Current Behaviour
The following code produces a regular report instead of a time series report as shown in https://ydata.ai/resources/how-to-do-an-eda-for-time-series.
```
from pandas_profiling import ProfileReport
profile = ProfileReport(df, tsmode=True, sortby="datadate", title="Time-Series EDA")
profile.to_file("report_timeseries.html")
```
In addition, the time series example [USA Air Quality](https://github.com/ydataai/ydata-profiling/tree/master/examples/usaairquality) (Time-series air quality dataset EDA example) cannot be found in the git repository.
### Expected Behaviour
A time series report as in https://ydata.ai/resources/how-to-do-an-eda-for-time-series
### Data Description
My dataframe has only two columns "datadate" (pandas datetime) and "value" (float).
### Code that reproduces the bug
_No response_
### pandas-profiling version
v4.2.0
### Dependencies
```Text
python==3.7.1
numpy==1.21.5
pandas==1.3.5
```
### OS
_No response_
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html).
|
closed
|
2023-06-08T15:11:17Z
|
2023-08-09T15:41:54Z
|
https://github.com/ydataai/ydata-profiling/issues/1359
|
[
"feature request 💬"
] |
xshi19
| 2
|
iterative/dvc
|
machine-learning
| 10,639
|
`dvc exp remove`: should we make --queue work with -n and other flags?
|
Currently using `dvc exp remove --queue` clears the whole queue (much like `dvc queue remove`).
`dvc exp remove` has some useful modifiers, such as `-n` to specify how many experiments should be deleted, or a list of the names of the experiments to remove.
These modifiers currently don't affect the behavior of the remove command on queued experiments.
The question is: should they? If so, how would that work, and what would be the semantics when composing multiple modifiers together? E.g., what would the remove command do exactly if we composed `--rev`, `--queue` and `-n` together?
This is an open question initially raised during a previous conversation with @shcheklein; see the full thread here: https://github.com/iterative/dvc/pull/10633#discussion_r1857943029
|
open
|
2024-11-28T19:11:22Z
|
2024-11-28T20:00:44Z
|
https://github.com/iterative/dvc/issues/10639
|
[
"discussion",
"A: experiments"
] |
rmic
| 0
|
yunjey/pytorch-tutorial
|
pytorch
| 126
|
How to configure the Logger to show the structure of neural network in tensorboard?
|
How do I configure the Logger to show the structure of the neural network? Thanks!
|
open
|
2018-07-20T09:06:33Z
|
2018-07-20T09:06:33Z
|
https://github.com/yunjey/pytorch-tutorial/issues/126
|
[] |
mali-nuist
| 0
|
huggingface/datasets
|
pandas
| 7,415
|
Shard Dataset at specific indices
|
I have a dataset of sequences, where each example in the sequence is a separate row in the dataset (similar to LeRobotDataset). When running `Dataset.save_to_disk`, how can I provide indices so that the dataset is sharded such that no episode spans more than one shard? Consequently, when I run `Dataset.load_from_disk`, how can I load just a subset of the shards to save memory and time on different ranks?
I guess an alternative to this would be, given a loaded `Dataset`, how can I run `Dataset.shard` such that sharding doesn't split any episode across shards?
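There is no built-in `datasets` API for this as far as I know, so treat the helper below as a hypothetical sketch: given per-episode row counts, it computes contiguous row ranges whose boundaries always fall on episode boundaries, greedily balancing rows per shard. Each `(start, end)` range could then be materialized with `ds.select(range(start, end))` before saving.

```python
def episode_aligned_shards(episode_lengths, num_shards):
    """Return [(start_row, end_row), ...], one per shard; boundaries only at episode ends."""
    total = sum(episode_lengths)
    target = total / num_shards
    ranges = []
    start = row = 0
    shards_left = num_shards
    for i, length in enumerate(episode_lengths):
        row += length
        episodes_left = len(episode_lengths) - (i + 1)
        # Close a shard once it reaches the per-shard target, but keep at least
        # one episode available for every remaining shard.
        if shards_left > 1 and row - start >= target and episodes_left >= shards_left - 1:
            ranges.append((start, row))
            start = row
            shards_left -= 1
    ranges.append((start, total))
    return ranges

print(episode_aligned_shards([3, 3, 4, 2, 8], 2))  # [(0, 10), (10, 20)]
```

The greedy balance is an assumption; any boundary choice works as long as it lands on a cumulative episode length.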
|
open
|
2025-02-20T10:43:10Z
|
2025-02-24T11:06:45Z
|
https://github.com/huggingface/datasets/issues/7415
|
[] |
nikonikolov
| 3
|
vaexio/vaex
|
data-science
| 2,348
|
[BUG-REPORT] AttributeError: module 'vaex.dataset' has no attribute '_parse_f'
|
**Description**
Hi,
I am trying to create a plot2d_contour but it is giving me this error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[44], line 2
1 df = vaex.example()
----> 2 df.plot2d_contour(x=df.x,y=df.y)
File ~/opt/anaconda3/envs/agb/lib/python3.10/site-packages/vaex/viz/contour.py:43, in plot2d_contour(self, x, y, what, limits, shape, selection, f, figsize, xlabel, ylabel, aspect, levels, fill, colorbar, colorbar_label, colormap, colors, linewidths, linestyles, vmin, vmax, grid, show, **kwargs)
14 """
15 Plot conting contours on 2D grid.
16
(...)
38 :param show:
39 """
42 # Get the function out of the string
---> 43 f = vaex.dataset._parse_f(f)
45 # Internals on what to bin
46 binby = []
AttributeError: module 'vaex.dataset' has no attribute '_parse_f'
```
Code:
```
import vaex
df = vaex.example()
df.plot2d_contour(x=df.x,y=df.y)
```
I also tried to explicitly set f = "identity" and f="log" and it didn't work.
**Software information**
- Vaex version:
{'vaex': '4.16.0',
'vaex-core': '4.16.1',
'vaex-viz': '0.5.4',
'vaex-hdf5': '0.14.1',
'vaex-server': '0.8.1',
'vaex-astro': '0.9.3',
'vaex-jupyter': '0.8.1',
'vaex-ml': '0.18.1'}
- Vaex was installed via: pip
- OS: macOS 11.6.5
|
open
|
2023-03-01T08:11:57Z
|
2024-04-06T17:01:32Z
|
https://github.com/vaexio/vaex/issues/2348
|
[] |
marixko
| 1
|
fastapiutils/fastapi-utils
|
fastapi
| 357
|
[QUESTION] Is it possible to use repeated tasks with lifetimes? if so how?
|
**Description**
How can I use repeated tasks with lifetimes?
The docs only give examples how how to use these utils with the app.on_event(...) api, but fastapi now recommends using lifetimes. Its unclear to me whether the repeated tasks are supported here, and if they are how they would be used.
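For what it's worth, the pattern itself doesn't depend on `app.on_event`: start the periodic coroutine when the lifespan context is entered and cancel it on exit. Below is a plain-asyncio sketch (`poll` and the intervals are illustrative); with FastAPI you would pass `lifespan=lifespan` to `FastAPI(...)`, and a `fastapi_utils.tasks.repeat_every`-decorated function can likewise be started inside the same context manager:

```python
import asyncio
from contextlib import asynccontextmanager, suppress

calls = []

async def poll():
    # stand-in for the work a @repeat_every task would do
    calls.append(len(calls))

@asynccontextmanager
async def lifespan(app=None):
    async def loop():
        while True:
            await poll()
            await asyncio.sleep(0.01)
    task = asyncio.create_task(loop())  # starts on app startup
    try:
        yield
    finally:
        task.cancel()  # stops on app shutdown
        with suppress(asyncio.CancelledError):
            await task

async def main():
    async with lifespan():
        await asyncio.sleep(0.05)

asyncio.run(main())
```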
|
open
|
2025-01-16T11:12:11Z
|
2025-02-19T00:04:45Z
|
https://github.com/fastapiutils/fastapi-utils/issues/357
|
[
"question"
] |
ollz272
| 3
|
google-deepmind/sonnet
|
tensorflow
| 169
|
Colab mnist example does not work
|
https://colab.research.google.com/github/deepmind/sonnet/blob/v2/examples/mlp_on_mnist.ipynb

---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/impl/api.py in to_graph(entity, recursive, experimental_optional_features)
    661         autograph_module=tf_inspect.getmodule(to_graph))
--> 662     return conversion.convert(entity, program_ctx)
    663   except (ValueError, AttributeError, KeyError, NameError, AssertionError) as e:

26 frames
AssertionError: Bad argument number for Name: 4, expecting 3

During handling of the above exception, another exception occurred:

ConversionError                           Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/impl/api.py in to_graph(entity, recursive, experimental_optional_features)
    664     logging.error(1, 'Error converting %s', entity, exc_info=True)
    665     raise ConversionError('converting {}: {}: {}'.format(
--> 666         entity, e.__class__.__name__, str(e)))
    667
    668

ConversionError: converting <function BaseBatchNorm.__call__ at 0x7f09acc45ae8>: AssertionError: Bad argument number for Name: 4, expecting 3
|
closed
|
2020-04-15T13:56:18Z
|
2020-04-17T08:24:13Z
|
https://github.com/google-deepmind/sonnet/issues/169
|
[
"bug"
] |
VogtAI
| 2
|
opengeos/leafmap
|
plotly
| 464
|
Streamlit RuntimeError with default leafmap module
|
### Environment Information
- leafmap version: 0.21.0
- Python version: 3.10
- Operating System: Mac
### Description
The default leafmap module doesn't work with Streamlit, but the foliumap backend does.
### What I Did
```
import streamlit as st
import leafmap
#import leafmap.foliumap as leafmap
landsat_url = (
'https://drive.google.com/file/d/1EV38RjNxdwEozjc9m0FcO3LFgAoAX1Uw/view?usp=sharing'
)
leafmap.download_file(landsat_url, 'test.tif', unzip=False, overwrite=True)
m = leafmap.Map()
m.add_raster('test.tif', band=1, colormap='terrain', layer_name='DEM')
col1, col2 = st.columns([7, 3])
with col1:
m.to_streamlit()
```
### Error
```
2023-06-07 20:48:38.297 Uncaught app exception
Traceback (most recent call last):
File "/Users/xinz/miniconda3/envs/streamlit/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
exec(code, module.__dict__)
File "/Users/xinz/Documents/github/enmap/app/test/test_raster.py", line 10, in <module>
m.add_raster('test.tif', band=1, colormap='terrain', layer_name='DEM')
File "/Users/xinz/miniconda3/envs/streamlit/lib/python3.10/site-packages/leafmap/leafmap.py", line 1898, in add_raster
self.zoom_to_bounds(bounds)
File "/Users/xinz/miniconda3/envs/streamlit/lib/python3.10/site-packages/leafmap/leafmap.py", line 242, in zoom_to_bounds
self.fit_bounds([[bounds[1], bounds[0]], [bounds[3], bounds[2]]])
File "/Users/xinz/miniconda3/envs/streamlit/lib/python3.10/site-packages/ipyleaflet/leaflet.py", line 2622, in fit_bounds
asyncio.ensure_future(self._fit_bounds(bounds))
File "/Users/xinz/miniconda3/envs/streamlit/lib/python3.10/asyncio/tasks.py", line 615, in ensure_future
return _ensure_future(coro_or_future, loop=loop)
File "/Users/xinz/miniconda3/envs/streamlit/lib/python3.10/asyncio/tasks.py", line 634, in _ensure_future
loop = events._get_event_loop(stacklevel=4)
File "/Users/xinz/miniconda3/envs/streamlit/lib/python3.10/asyncio/events.py", line 656, in get_event_loop
raise RuntimeError('There is no current event loop in thread %r.'
RuntimeError: There is no current event loop in thread 'ScriptRunner.scriptThread'.
```
|
closed
|
2023-06-07T18:51:48Z
|
2023-06-07T19:38:14Z
|
https://github.com/opengeos/leafmap/issues/464
|
[
"bug"
] |
zxdawn
| 4
|
scrapy/scrapy
|
web-scraping
| 6,454
|
Random hanging of Scrapy processes in the docker container.
|
Hello, we are running `Scrapy` version `2.11.2` inside a Docker container, and at a random moment the `Scrapy` process hangs. The `Telnet` connection is also unavailable (when trying to connect, the terminal hangs and there is no prompt to enter a username).
I will be glad of any help 🙏
|
closed
|
2024-08-05T06:27:27Z
|
2024-08-05T12:46:14Z
|
https://github.com/scrapy/scrapy/issues/6454
|
[] |
M3dv3D3V
| 1
|
suitenumerique/docs
|
django
| 469
|
Add a docs favicon
|
## Bug Report
**Problematic behavior**
As a user I want to easily identify which browser tabs are `docs`.
Right now Docs uses La Marianne, which is neither generic nor a clear indication of which website it is.
**Expected behavior/code**
I'd like to have a Docs favicon.
It should be easily overridable so that sysadmins deploying Docs can add their own favicon (the nav bar logo should be overridable as well).
**Additional context / Screenshots**

The Favicon could be derived from this:

@rl-83 could you propose a design for the favicon please ?
|
closed
|
2024-12-02T11:19:40Z
|
2025-01-16T20:01:36Z
|
https://github.com/suitenumerique/docs/issues/469
|
[] |
virgile-dev
| 8
|
ading2210/poe-api
|
graphql
| 32
|
Datastreaming with flask.
|
I posted this earlier but it was closed, so I apologize for reopening.
https://github.com/ading2210/poe-api/issues/29#issue-1659761715
I have tried following the Flask documentation to stream the generator, but it's not working. I am not sure whether it has something to do with the add-on itself or whether this is simply not possible, but if the code contains any mention of "poe_client.send_message" I instantly receive "WARNING:root:SendMessageMutation returned an error: Server Error | Retrying (1/20)". I've tried streaming with context and just printing to the console to see if it would work, and I have not found a working approach. If someone could help me figure out a working way to stream the output to a Flask page, I would appreciate it. Thanks!
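In case it helps anyone else hitting this, a minimal Flask streaming sketch looks like the following; `fake_poe_chunks` is a stand-in for `poe_client.send_message(...)` (the real client yields dicts with a `text_new` key), so only the route wiring is the point here:

```python
from flask import Flask, Response, stream_with_context

app = Flask(__name__)

def fake_poe_chunks(prompt):
    # stand-in for poe_client.send_message(...): the real client yields
    # dicts whose "text_new" field holds the newly generated text
    for word in ["Hello", " ", "world"]:
        yield {"text_new": word}

@app.route("/ask/<prompt>")
def ask(prompt):
    def generate():
        for chunk in fake_poe_chunks(prompt):
            yield chunk["text_new"]
    # stream_with_context keeps the request context alive while streaming
    return Response(stream_with_context(generate()), mimetype="text/plain")
```

If the real client only errors when called from inside Flask, it is worth testing the generator outside a request first, since the failure may be in the client's threading rather than in the streaming.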
|
closed
|
2023-04-10T01:13:37Z
|
2023-04-27T04:47:44Z
|
https://github.com/ading2210/poe-api/issues/32
|
[
"invalid"
] |
SlickTorpedo
| 6
|
Yorko/mlcourse.ai
|
seaborn
| 593
|
Dead link in README.md
|
The page is not available:
https://github.com/Yorko/mlcourse.ai/wiki/About-the-course-(in-Russian)
|
closed
|
2019-05-18T15:47:47Z
|
2019-05-18T17:58:59Z
|
https://github.com/Yorko/mlcourse.ai/issues/593
|
[] |
i-aztec
| 1
|
yt-dlp/yt-dlp
|
python
| 12,265
|
EXTRACTOR ERROR assumes that "/usr/bin/yt-dlp/ is a directory, when it is actually a binary file, attempting to parse a BAD PATHNAME.
|
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [x] I'm reporting that yt-dlp is broken on a **supported** site
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [x] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [x] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
USA
### Provide a description that is worded well enough to be understood
Extractor error caused by attempting to parse a bad pathname '/usr/bin/yt-dlp/yt_dlp/extractor/ ' -- where 'yt-dlp' is a binary file, not a directory.
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [x] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://xhamster.com/videos/selected-beautiful-athletes-fall-sex-sports-part-3-xh4YJLz']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2025.01.30.232843 from yt-dlp/yt-dlp-nightly-builds [03c3d7057] (zip)
[debug] Python 3.11.2 (CPython x86_64 64bit) - Linux-6.11.10+bpo-amd64-x86_64-with-glibc2.36 (OpenSSL 3.0.15 3 Sep 2024, glibc 2.36)
[debug] exe versions: ffmpeg 5.1.6-0 (setts), ffprobe 5.1.6-0, rtmpdump 2.4
[debug] Optional libraries: brotli-1.0.9, certifi-2022.09.24, mutagen-1.46.0, pyxattr-0.8.1, requests-2.28.1, secretstorage-3.3.3, sqlite3-3.40.1, urllib3-1.26.12
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1840 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2025.01.30.232843 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2025.01.30.232843 from yt-dlp/yt-dlp-nightly-builds)
[XHamster] Extracting URL: https://xhamster.com/videos/selected-beautiful-athletes-fall-sex-sports-part-3-xh4YJLz
[XHamster] xh4YJLz: Downloading webpage
ERROR: An extractor error has occurred. (caused by KeyError('videoModel')); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "/usr/bin/yt-dlp/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/bin/yt-dlp/yt_dlp/extractor/xhamster.py", line 171, in _real_extract
video = initials['videoModel']
~~~~~~~~^^^^^^^^^^^^^^
KeyError: 'videoModel'
USER NOTE: /usr/bin/yt-dlp is a BINARY FILE, NOT a directory. Therefore, /usr/bin/yt-dlp/yt_dlp/extractor/common.py does not exist, nor its path.
/usr$ sudo find 2>/dev/null /usr/ -name common.py
[sudo] password for MASKED:
/usr/lib/python3/dist-packages/reportlab/graphics/barcode/common.py
/usr/lib/python3/dist-packages/joblib/test/common.py
/usr/lib/python3/dist-packages/setuptools/_vendor/pyparsing/common.py
/usr/lib/python3/dist-packages/nltk/twitter/common.py
/usr/lib/python3/dist-packages/jeepney/io/common.py
/usr/lib/python3/dist-packages/oauthlib/common.py
/usr/lib/python3/dist-packages/pyparsing/common.py
/usr/lib/python3/dist-packages/ufw/common.py
/usr/lib/python3/dist-packages/pkg_resources/_vendor/pyparsing/common.py
/usr/lib/python3/dist-packages/py/_path/common.py
/usr/lib/python3/dist-packages/torbrowser_launcher/common.py
/usr/lib/python3/dist-packages/pip/_vendor/pyparsing/common.py
/usr/lib/python3/dist-packages/pipx/commands/common.py
/usr/share/doc/python3.11/examples/c-analyzer/c_parser/preprocessor/common.py
/usr/local/lib/python3.9/dist-packages/reportlab/graphics/barcode/common.py
/usr/local/lib/python3.9/dist-packages/scipy/optimize/_lsq/common.py
/usr/local/lib/python3.9/dist-packages/scipy/misc/common.py
/usr/local/lib/python3.9/dist-packages/scipy/integrate/_ivp/common.py
/usr/local/lib/python3.9/dist-packages/pip/_vendor/pyparsing/common.py
/usr$
IS IT IN THE HOME DIRECTORY?
~$ find 2>/dev/null ./ -name common.py
~$ [NOT FOUND]
~$ python --version
Python 3.11.2
~$ sudo apt list python3
Listing... Done
python3/stable,now 3.11.2-1+b1 amd64 [installed,automatic]
USER NOTE: THE FILE 'COMMON.PY' APPEARS NOT TO BE INSTALLED ANYWHERE.
```
|
closed
|
2025-02-03T04:37:03Z
|
2025-02-05T11:58:11Z
|
https://github.com/yt-dlp/yt-dlp/issues/12265
|
[
"duplicate",
"NSFW",
"site-bug"
] |
rtimai
| 5
|
jina-ai/serve
|
machine-learning
| 5,807
|
Flow/Deployment would raise timeout warning even if containerized executor is running
|
**Describe your problem**
<!-- A clear and concise description of what the proposal is. -->
---
I follow the tutorial [here](https://docs.jina.ai/concepts/executor/containerize/) and create my own docker image called "my_exec" for an executor. It works perfectly when I use `jina executor --uses docker://my_exec` to serve the executor and use a Client object to use it. However, serving by using a Flow or a Deployment object would always raise a timeout warning and hang indefinitely as shown in the following screenshots.
My executor needs to load a large neural net to initialize, which takes about 1 minute. I thought this loading time could be the cause of the problem and tried setting `timeout-ready` to -1, but I still cannot get rid of this warning.
Another possible cause may be that the entrypoint of my docker image is `ENTRYPOINT ["conda", "run", "-n", "legacy_subtitle_punc", "jina", "executor", "--uses", "config.yml"]`, where I activate a conda environment before calling jina instead of calling jina directly. Does this have something to do with the problem? If it is the cause, are there any workarounds that still allow me to use conda?
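If the conda entrypoint turns out to matter, one thing worth trying (an assumption, not a confirmed fix) is `conda run --no-capture-output`, since by default `conda run` captures the child's stdout/stderr, which can hide logs and make a container appear unresponsive:

```dockerfile
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "legacy_subtitle_punc", \
            "jina", "executor", "--uses", "config.yml"]
```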
<!-- Optional, but really help us locate the problem faster -->
**Environment**
<!-- Run `jina --version-full` and copy paste the output here -->
- jina 3.14.1
- docarray 0.21.0
- jcloud 0.2.6
- jina-hubble-sdk 0.35.0
- jina-proto 0.1.13
- protobuf 3.20.3
- proto-backend cpp
- grpcio 1.47.5
- pyyaml 5.3.1
- python 3.7.16
- platform Linux
- platform-release 3.10.0-957.el7.x86_64
- platform-version #1 SMP Thu Nov 8 23:39:32 UTC 2018
- architecture x86_64
- processor x86_64
- uid 274973452769639
- session-id d8df0eec-da9b-11ed-9f15-fa163ef99967
- uptime 2023-04-14T16:10:40.727171
- ci-vendor (unset)
- internal False
* JINA_DEFAULT_HOST (unset)
* JINA_DEFAULT_TIMEOUT_CTRL (unset)
* JINA_DEPLOYMENT_NAME (unset)
* JINA_DISABLE_UVLOOP (unset)
* JINA_EARLY_STOP (unset)
* JINA_FULL_CLI (unset)
* JINA_GATEWAY_IMAGE (unset)
* JINA_GRPC_RECV_BYTES (unset)
* JINA_GRPC_SEND_BYTES (unset)
* JINA_HUB_NO_IMAGE_REBUILD (unset)
* JINA_LOG_CONFIG (unset)
* JINA_LOG_LEVEL (unset)
* JINA_LOG_NO_COLOR (unset)
* JINA_MP_START_METHOD (unset)
* JINA_OPTOUT_TELEMETRY (unset)
* JINA_RANDOM_PORT_MAX (unset)
* JINA_RANDOM_PORT_MIN (unset)
* JINA_LOCKS_ROOT (unset)
* JINA_K8S_ACCESS_MODES (unset)
* JINA_K8S_STORAGE_CLASS_NAME (unset)
* JINA_K8S_STORAGE_CAPACITY (unset)
* JINA_STREAMER_ARGS (unset)
**Screenshots**
<!-- If applicable, add screenshots to help explain your problem. -->
when using Flow to serve the executor (Deployment raises the same warning except that there are no other logs printed):

|
closed
|
2023-04-14T08:22:20Z
|
2023-04-21T07:55:47Z
|
https://github.com/jina-ai/serve/issues/5807
|
[] |
eugene-yh
| 28
|
Sanster/IOPaint
|
pytorch
| 537
|
[BUG]ModuleNotFoundError: No module named '_lzma'
|
**Model**
Which model are you using?
big-lama.pt
**Describe the bug**
A clear and concise description of what the bug is.
I have installed the big-lama.pt model, but this problem still happens at startup.
**Screenshots**
If applicable, add screenshots to help explain your problem.

**System Info**
Software version used
- iopaint:
- pytorch:
- CUDA:
|
closed
|
2024-06-05T02:27:03Z
|
2024-12-11T02:06:55Z
|
https://github.com/Sanster/IOPaint/issues/537
|
[
"stale"
] |
xiaowangaixuexijishu
| 2
|
tqdm/tqdm
|
pandas
| 846
|
Separate colors for complete and in progress bars
|
Is it possible to have different colors for completed bars and bars in progress when I use PyCharm (e.g. green for completed and red for in progress)? Right now all I see is red for everything. Below is an example of how I currently do it in PyCharm.
```
from tqdm import tqdm
from time import sleep
for _ in tqdm(range(0, 5), desc="Bar 1"):
sleep(0.5)
for _ in tqdm(range(0, 5), desc="Bar 2"):
sleep(1)
```
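For reference, recent tqdm versions accept a `colour` argument. It sets a single static colour rather than switching automatically on completion, but you can reassign `bar.colour` yourself once a loop finishes (a sketch; tqdm has no built-in "complete vs. in-progress" colour pair):

```python
from time import sleep
from tqdm import tqdm

seen = []
with tqdm(range(5), desc="Bar 1", colour="red") as bar:  # red while running
    for i in bar:
        seen.append(i)
        sleep(0.01)
    bar.colour = "green"  # recolour once the loop has completed
```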
|
closed
|
2019-11-20T22:37:31Z
|
2020-09-27T18:48:44Z
|
https://github.com/tqdm/tqdm/issues/846
|
[
"invalid ⛔",
"p2-bug-warning ⚠"
] |
sergeysprogis
| 4
|
freqtrade/freqtrade
|
python
| 11,147
|
Feature Request: Enhanced Debug Logging with Better Navigation and Styling
|
<!--
Note: this section will not show up in the issue.
Have you search for this feature before requesting it? It's highly likely that a similar request was already filed.
-->
## Describe your environment
(if applicable)
* Operating system: ____ debain
* Python Version: _____ Python 3.12.7
* CCXT version: _____ ccxt==4.4.40
* Freqtrade Version: ____ 2024.12-dev-0625e1933
## Describe the enhancement
*Explain the enhancement you would like*
I propose improvements to the logging system, primarily to make debugging more efficient and log messages easier to navigate and read. Below are the specific suggestions:
# 1. Enhanced Log Formatting for Navigation
**Current Issue:**
When debugging, it can be challenging to locate the exact source of a log message.
**Proposed Solution:**
Introduce an enhanced log formatter that allows easier navigation in development environments. For example, we could use a format like this:
```
#freqtrade/loggers/__init__.py
LOGFORMAT = "%(asctime)s | %(filename)s:%(lineno)d:%(funcName)s | %(levelname)s | %(message)s"
```
This format includes:
- **Timestamp (asctime):** for log traceability. (already present)
- **File name and line number (filename, lineno):** for direct reference to the source code.
- **Function name (funcName):** to identify the context of the log message.
- **Log level and message (levelname, message) :** to provide clarity on the log's importance. (already present)
**Benefit:**
In many IDEs or text editors, users can Ctrl+Click on the log line (or equivalent action) and jump directly to the corresponding line in the file, significantly speeding up debugging. (in my opinion)
# 2. Colored Logging for Better CLI Experience
**Current Issue:** (not really a issue but cumbersome)
Log outputs in the CLI can be monotonous and harder to scan quickly.
**Proposed Solution:**
Implement a ColoredFormatter to add color-coded log levels and other components, making logs visually appealing and easier to interpret at a glance. Here is my implementation:
```
#freqtrade/loggers/coloredformatter.py
import logging
from colorama import Fore, Style # , Back
class ColoredFormatter(logging.Formatter):
"""Custom formatter to add colors to log levels and other parts of the log record."""
COLOR_MAP = {
logging.DEBUG: Fore.CYAN,
logging.INFO: Fore.GREEN,
logging.WARNING: Fore.RED,
logging.ERROR: Fore.RED,
logging.CRITICAL: Fore.MAGENTA + Style.BRIGHT,
}
RESET = Style.RESET_ALL
def format(self, record):
# Apply color to the log level
color = self.COLOR_MAP.get(record.levelno, self.RESET)
record.levelname = f"{color}{record.levelname}{self.RESET}"
# Apply yellow color to the function name
if record.funcName:
record.funcName = f"{Fore.LIGHTYELLOW_EX}{record.funcName}{self.RESET}"
if record.filename:
record.filename = f"{Fore.GREEN}{record.filename}{self.RESET}"
# if record.filename: #does not work for some reason
# record.filename = f"{Fore.GREEN}{record.filename}{self.RESET}"
# if record.asctime: #does not work for some reason
# record.asctime = f"{Fore.RED}{record.asctime}{self.RESET}"
# Apply color to the message itself (optional)
record.message = f"{Fore.WHITE}{record.getMessage()}{self.RESET}"
# Apply color to other log fields if desired (e.g., time, filename)
formatted_log = super().format(record)
return formatted_log
```
**Benefit:**
Log levels are clearly distinguished using colors (e.g., DEBUG in cyan, INFO in green, WARNING in red).
Critical information such as function names, file names, and messages are visually emphasized.
# 3. Separate Formatters for Handlers when using ColoredFormatter
**Current Issue:**
Using a single formatter for both file-based logs (e.g., RotatingFileHandler) and console logs (StreamHandler) makes the log files look bad and harder to read, since the colour formatting added for the StreamHandler produces ANSI escape codes that a text file cannot interpret.
**Proposed Solution:**
Allow configuration of separate formatters for different handlers. For example:
```
#freqtrade/loggers/__init__.py
from freqtrade.loggers.coloredformatter import ColoredFormatter
#using logging.Formatter for RotatingFileHandler
LOG_FILE_FORMAT = logging.Formatter(
"%(asctime)s | %(filename)s:%(lineno)d:%(funcName)s | %(levelname)s | %(message)s"
)
#using ColoredFormatter for StreamHandler
LOG_STREAM_FORMAT = ColoredFormatter(
"%(asctime)s | %(filename)s:%(lineno)d:%(funcName)s | %(levelname)s | %(message)s"
)
```
**Additional Context:**
Though I don't know how this would be implemented in a good way so that it works in all cases, with verbosity, etc. Maybe it could be activated separately only when a debug parameter is provided when launching freqtrade, similar to the verbosity parameter?
|
closed
|
2024-12-25T10:15:23Z
|
2024-12-29T14:45:57Z
|
https://github.com/freqtrade/freqtrade/issues/11147
|
[
"Question"
] |
mcpolo99
| 2
|
keras-team/keras
|
tensorflow
| 20,249
|
TextVectorization returns 'int64' vs 'float32' in TF 2.17 / nightly + Training simple unigram/bigram models much slower than in 2.15
|
Hi there,
I'm running into issues when proofreading some teaching materials based on @fchollet *Deep Learning With Python* & [repo](https://github.com/fchollet/deep-learning-with-python-notebooks).
Running [this Colab notebook](https://drive.google.com/file/d/130cVhEsJT6J-z6f180I3S6DbOb5W8kAa/view?usp=sharing) (which is chapter 11, first part, fleshed out for teaching; it should be runnable out of the box) is *much* slower in the current versions of TensorFlow (2.17, or nightly) than in 2.15. I am unsure as to where this comes from, as the nets used are just fully-connected layers on unigrams/bigrams (XLA compilation seems to play a role, although that seems like a lot). The slowness is particularly striking during the `evaluate()` step, where the XLA acceleration that happens after the second epoch of training does not occur. I'm wondering if there's something obvious that's wrong in the code, or in the update...
Also, I notice that the `TextVectorization` layer seems to be returning `int64` in TF 2.17 and nightly, as opposed to `float32` previously. I did not find any mention of this in the documentation. Is this the desired behaviour?
Thanks in advance!
(First submitted on the [Tensorflow repo](https://github.com/tensorflow/tensorflow/issues/75423), they redirected me here.)
|
closed
|
2024-09-11T10:37:08Z
|
2024-09-12T08:52:19Z
|
https://github.com/keras-team/keras/issues/20249
|
[] |
jchwenger
| 2
|
litestar-org/litestar
|
pydantic
| 3,600
|
Bug: docs website is not up-to-date
|
### Description
Hello,
It seems the official documentation website is not properly updated on project releases.
https://docs.litestar.dev/2/usage/logging.html (not up-to-date)
https://docs.litestar.dev/main/usage/logging.html (this one is up-to-date)
(It's not a browser caching issue)
### URL to code causing the issue
_No response_
### MCVE
_No response_
### Steps to reproduce
_No response_
### Screenshots
_No response_
### Logs
_No response_
### Litestar Version
latest
### Platform
- [ ] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above)
|
closed
|
2024-06-26T09:11:02Z
|
2025-03-20T15:54:47Z
|
https://github.com/litestar-org/litestar/issues/3600
|
[
"Bug :bug:"
] |
jderrien
| 6
|
keras-team/keras
|
deep-learning
| 20,395
|
Loss documentation is wrong? Loss function actually returns means over a batch
|
https://keras.io/api/losses/#standalone-usage-of-losses
It states: `By default, loss functions return one scalar loss value per input sample, e.g.`
But the example is passing 4 samples `ops.ones((2, 2,))` , and returning 2 values `<Array: shape=(2,), dtype=float32, numpy=array([1., 1.], dtype=float32)>`
So which is it?
|
closed
|
2024-10-22T16:27:13Z
|
2024-12-15T16:54:05Z
|
https://github.com/keras-team/keras/issues/20395
|
[
"type:docs-bug"
] |
worthy7
| 2
|
strawberry-graphql/strawberry-django
|
graphql
| 364
|
maximum recursion depth exceeded
|
<!-- Provide a general summary of the bug in the title above. -->
<!--- This template is entirely optional and can be removed, but is here to help both you and us. -->
<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->
## Describe the Bug
<!-- A clear and concise description of what the bug is. -->
The error "maximum recursion depth exceeded" occurs when I use `DjangoOptimizerExtension`.
## System Information
- Operating system: OSX 13.5.1 (22G90)
- Strawberry version (if applicable):
strawberry-graphql==0.207.0
strawberry-graphql-django==0.17.1
## Additional Context
<img width="1234" alt="image" src="https://github.com/strawberry-graphql/strawberry-graphql-django/assets/13030735/0384b4c3-18cd-49e9-81ff-c23fa102b7f4">
---
If needed, I'll prepare sample code that can reproduce these symptoms.
<!-- Add any other relevant information about the problem here. -->
|
closed
|
2023-09-16T08:28:30Z
|
2025-03-20T15:57:19Z
|
https://github.com/strawberry-graphql/strawberry-django/issues/364
|
[
"bug"
] |
jeemyeong
| 5
|
IvanIsCoding/ResuLLMe
|
streamlit
| 16
|
Getting an AttributeError: module 'openai' has no attribute 'error' when running from docker
|
The full stack trace is:
File "/usr/local/lib/python3.10/dist-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
exec(code, module.__dict__)
File "/app/src/Main.py", line 104, in <module>
except openai.error.RateLimitError as e:
I assume it is related to the version of openai in the requirements file:
openai
langchain
pdfminer-six
python-docx
streamlit==1.21.0
Jinja2
docx2txt
streamlit-ext==0.1.7
stqdm==0.0.5
Since the requirements file does not pin the openai version, a newer release may be installed; openai 1.x removed the `openai.error` module, which would explain the AttributeError.
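Until the code is migrated to the 1.x client, a likely fix (assuming the code still uses the pre-1.0 API) is pinning the last 0.x release in the requirements file, since that line still ships `openai.error`:

```
openai==0.28.1
```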
|
closed
|
2023-11-07T23:22:48Z
|
2023-11-12T22:30:40Z
|
https://github.com/IvanIsCoding/ResuLLMe/issues/16
|
[] |
DevOpsAzurance
| 4
|
hankcs/HanLP
|
nlp
| 977
|
Can stopwords be customized with new files in the configuration, like the custom dictionary?
|
## Notes
Please confirm the following:
* I have carefully read the following documentation and found no answer:
- [Home documentation](https://github.com/hankcs/HanLP)
- [wiki](https://github.com/hankcs/HanLP/wiki)
- [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question with [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues), and found no answer there either.
* I understand that the open-source community is a voluntary community of enthusiasts and assumes no responsibility or obligation. I will be polite and thank everyone who helps me.
* [👍] I type an x in these brackets to confirm the items above.
## Version
pyhanlp
The current latest version is: 1.6.8
The version I use is: 1.6.8
<!-- The items above are required; feel free below -->
## My question
Can stopwords use a custom file?
print(HanLP.Config.CoreStopWordDictionaryPath)
/home/q/pyhanlp/data/dictionary/stopwords.txt; stop_dict.txt;
Can it be configured like this?
With this configuration, the code below raises an error; does that mean this configuration is not allowed?
--------------------------------
from pyhanlp import *
NotionalTokenizer = JClass("com.hankcs.hanlp.tokenizer.NotionalTokenizer")
text = "小区居民有的反对喂养流浪猫,而有的居民却赞成喂养这些小宝贝"
print(NotionalTokenizer.segment(text))
-----------------------------------
java.lang.NullPointerExceptionPyRaisable Traceback (most recent call last)
<ipython-input-1-bfb7f4ece71a> in <module>()
4
5 text = "小区居民有的反对喂养流浪猫,而有的居民却赞成喂养这些小宝贝"
6 print(NotionalTokenizer.segment(text))
java.lang.NullPointerExceptionPyRaisable: java.lang.NullPointerException
----------------------------------------
## My question 2
On Win10 I configured hanlp.properties the same way:
CustomDictionaryPath=data/dictionary/custom/CustomDictionary.txt; Custom.txt; 现代汉语补充词库.txt; 全国地名大全.txt ns; 人名词典.txt; 机构名词典.txt; 上海地名.txt ns; my.txt; feature_dict.txt; data/dictionary/person/nrf.txt nrf;
I likewise deleted the cache files, but for some reason the generated paths never include feature_dict.txt or Custom.txt;
-----------------------------------
print(HanLP.Config.CustomDictionaryPath)
('d:/applications/pyhanlp/data/dictionary/custom/CustomDictionary.txt', 'd:/applications/pyhanlp/data/dictionary/custom/现代汉语补充词库.txt', 'd:/applications/pyhanlp/data/dictionary/custom/全国地名大全.txt ns', 'd:/applications/pyhanlp/data/dictionary/custom/人名词典.txt', 'd:/applications/pyhanlp/data/dictionary/custom/机构名词典.txt', 'd:/applications/pyhanlp/data/dictionary/custom/上海地名.txt ns', 'd:/applications/pyhanlp/data/dictionary/person/nrf.txt nrf')
-----------------------------------
Only the original seven entries above appear. Is there any way to check this? I'm not especially familiar with Java.
|
closed
|
2018-09-26T15:27:00Z
|
2018-10-12T01:56:09Z
|
https://github.com/hankcs/HanLP/issues/977
|
[] |
skyrusai
| 6
|
xlwings/xlwings
|
automation
| 2,467
|
Running xlwings script from .bat causes ModuleNotFoundError (on an embedded Python distribution)
|
#### OS (e.g. Windows 10 or macOS Sierra)
Windows 10
#### Versions of xlwings, Excel and Python (e.g. 0.11.8, Office 365, Python 3.7)
Office 365, Python 3.11.9, xlwings 0.31.4
#### Describe your issue (incl. Traceback!)
I am attempting to distribute my xlwings workbook to other users. I have downloaded and placed an embedded Python install in a sub-directory of the .xlsm file. If I run the embedded python.exe on the demoDist.py file (i.e. python/python.exe demoDist.py) from the command prompt or from PowerShell inside VS Code, the code executes properly and I can call RunPython and ImportUDF. If I try the same execution from a .bat file, I get the ModuleNotFoundError. I think it's a path issue; I am just not sure how to fix it.
```python
# Your traceback here
---------------------------
Error
---------------------------
ModuleNotFoundError: No module named 'demoDist'
File "<string>", line 1, in <module>
exec(stmt, globals, locals)
File "C:\Users\acameron\OneDrive - Farnsworth Group, Inc\_AC Documents\Programing\Projects\FG_Process\Python\Excel_Python\demoDist\Python\Lib\site-packages\xlwings\com_server.py", line 328, in Exec
^^^^^^^^^^^
return func(*args)
File "C:\Users\acameron\OneDrive - Farnsworth Group, Inc\_AC Documents\Programing\Projects\FG_Process\Python\Excel_Python\demoDist\Python\Lib\site-packages\win32com\server\policy.py", line 639, in _invokeex_
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
return S_OK, -1, self._invokeex_(dispid, lcid, wFlags, args, None, None)
File "C:\Users\acameron\OneDrive - Farnsworth Group, Inc\_AC Documents\Programing\Projects\FG_Process\Python\Excel_Python\demoDist\Python\Lib\site-packages\win32com\server\policy.py", line 310, in _invoke_
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
return self._invoke_(dispid, lcid, wFlags, args)
File "C:\Users\acameron\OneDrive - Farnsworth Group, Inc\_AC Documents\Programing\Projects\FG_Process\Python\Excel_Python\demoDist\Python\Lib\site-packages\win32com\server\policy.py", line 305, in _Invoke_
Press Ctrl+C to copy this message to the clipboard.
---------------------------
OK
---------------------------
```
#### Include a minimal code sample to reproduce the issue (and attach a sample workbook if required!)
```python
# Your code here
import xlwings as xw
# Start the xlwings COM server (to bypass starting the server through a cmd script)
if __name__ == '__main__':
xw.serve()
def main():
wb = xw.Book.caller()
sheet = wb.sheets[0]
if sheet["A1"].value == "Hello xlwings!":
sheet["A1"].value = "Bye xlwings!"
else:
sheet["A1"].value = "Hello xlwings!"
@xw.func
def hello(name):
return f"Hello {name}!"
if __name__ == "__main__":
xw.Book("demoDist.xlsm").set_mock_caller()
main()
```
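A possible cause is that the .bat file launches the embedded interpreter with a different working directory, so the folder containing `demoDist.py` is never on `sys.path`. A minimal, hypothetical sketch (not from the xlwings docs) that could be placed at the top of `demoDist.py` to make the script's own folder importable regardless of where it is launched from:

```python
import os
import sys

# Hypothetical workaround: put the folder that contains this script on
# sys.path so the embedded interpreter can import it even when started
# from a different working directory (as a .bat file often is).
script_dir = (
    os.path.dirname(os.path.abspath(__file__))
    if "__file__" in globals()
    else os.getcwd()
)
if script_dir not in sys.path:
    sys.path.insert(0, script_dir)
```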
|
closed
|
2024-07-02T14:06:13Z
|
2024-07-02T23:24:20Z
|
https://github.com/xlwings/xlwings/issues/2467
|
[] |
acameronfw
| 5
|
plotly/dash
|
jupyter
| 2,541
|
[BUG] Inconsistent/buggy partial plot updates using Patch
|
**Example scenario / steps to reproduce:**
- App has several plots, all of which have the same x-axis coordinate units
- When one plot is rescaled (x-axis range modified, e.g. zoom in), all plots should rescale the same way
- Rather than rebuild each plot, we can now implement a callback using `Patch`. This way, the heavy lifting of initially rendering the plots happens only once, and when a plot is rescaled, only the layout of each plot is updated.
```python
@callback(
    output=[Output(dict(type='plot', id=plot_id), 'figure', allow_duplicate=True) for plot_id in plot_ids],
    inputs=[Input(dict(type='plot', id=ALL), 'relayoutData')],
prevent_initial_call=True
)
def update(inputs):
plot_updates = [Patch() for _ in plot_ids]
for p in plot_updates: p.layout.xaxis.range = get_range(dash.callback_context.triggered) # implemented elsewhere
return plot_updates
```
**Unexpected behavior:**
Upon rescale of a plot, only _some_ of the other plots actually update in the UI. Which plots update in response to a rescale is inconsistent and seemingly random (sometimes they all update and the app appears to work as I expect, sometimes only a few of them update, etc). I have verified that the callback/Patch _does_ update the range property in every figure data structure, so it seems like there's just some inconsistency with listening for the update (or maybe I made an error with how I've implemented this).
Happy to provide a screen recording if it would help.
```
dash 2.9.3
dash-bootstrap-components 1.4.1
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
|
closed
|
2023-05-25T01:16:45Z
|
2024-07-24T17:08:29Z
|
https://github.com/plotly/dash/issues/2541
|
[] |
abehr
| 2
|
JaidedAI/EasyOCR
|
deep-learning
| 885
|
Suggestion/feature request: create an interface to change model loading/inference strategy
|
Let's say I want to run easyocr on an edge device or some other architecture. It would be very easy to do that if I could inject the loading/inference strategy object into easyocr's Reader instance, e.g.: ```easyocr.Reader(..., ModelInterfaceImpl())```. The default implementation could be the PyTorch one, and an alternate implementation for ONNX could be provided out of the box too.
Something like this (high level definition):
```
DetectModelInterface {
abstract load()/init()
abstract predict()
abstract batch_predict()
abstract release()/close()/unload()
}
RecogModelInterface {
... same methods with different signatures
}
```
With that mechanism I could implement this interface for, let's say, rockchip's NPU or Google Coral, and run easyocr on an edge device with hw acceleration and without having to rely on/install pytorch. It is also possible to swap the implementations and test various combinations of models.
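A minimal Python sketch of what such a pluggable interface could look like, using `abc`; all class and method names here are illustrative, not part of the actual easyocr API:

```python
from abc import ABC, abstractmethod

class DetectModelInterface(ABC):
    """Hypothetical pluggable detection-model strategy."""

    @abstractmethod
    def load(self): ...

    @abstractmethod
    def predict(self, image): ...

    @abstractmethod
    def batch_predict(self, images): ...

    @abstractmethod
    def release(self): ...

class DummyDetectModel(DetectModelInterface):
    """Stand-in implementation (an ONNX or NPU backend would go here)."""

    def load(self):
        self.loaded = True

    def predict(self, image):
        return []  # placeholder: a real backend would return detected boxes

    def batch_predict(self, images):
        return [self.predict(img) for img in images]

    def release(self):
        self.loaded = False

# Reader could then accept the strategy object, e.g. (hypothetical signature):
#   easyocr.Reader(..., detect_model=DummyDetectModel())
model = DummyDetectModel()
model.load()
```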
|
open
|
2022-11-17T19:45:52Z
|
2023-11-30T14:57:17Z
|
https://github.com/JaidedAI/EasyOCR/issues/885
|
[] |
rpvelloso
| 1
|
ivy-llc/ivy
|
numpy
| 28,600
|
Fix Frontend Failing Test: paddle - convolution_functions.torch.nn.functional.unfold
|
closed
|
2024-03-14T13:07:09Z
|
2024-03-21T12:00:40Z
|
https://github.com/ivy-llc/ivy/issues/28600
|
[
"Sub Task"
] |
ZenithFlux
| 0
|
|
microsoft/nni
|
deep-learning
| 5,652
|
How to run fbnet on master branch
|
I was able to run FBNet (https://nni.readthedocs.io/en/v2.5/NAS/FBNet.html) with NNI versions below 2.9. However, the master branch has been refactored, and the FBNet (pfld) example was moved to another location. How can I run FBNet with the master branch? If FBNet is no longer maintained, I am unsure how to use it. Could you please explain why it was removed after NNI 2.8?
|
open
|
2023-07-30T14:44:56Z
|
2023-08-17T04:50:02Z
|
https://github.com/microsoft/nni/issues/5652
|
[] |
John1231983
| 1
|
OFA-Sys/Chinese-CLIP
|
nlp
| 187
|
Hello, are there plans to convert the model to CoreML in the future?
|
open
|
2023-08-15T01:56:57Z
|
2023-09-08T03:16:00Z
|
https://github.com/OFA-Sys/Chinese-CLIP/issues/187
|
[] |
OwenHongJinmu
| 1
|
|
QingdaoU/OnlineJudge
|
django
| 275
|
Enhance the test-case upload feature so that uploading a zipped folder also succeeds
|
### 1. Purpose
When uploading test cases, the OJ only accepts an archive of the files themselves; the common approach of zipping the whole folder results in an ```Empty file``` error. To improve the experience for users (who are unwilling to read the docs, or cannot understand them), it is worth supporting both upload styles.
### 2. How it was fixed
For backward compatibility and to follow the open/closed principle, I only added one function, ```judge_dir```, to ```OnlineJudge/problem/views/admin.py```; it returns the directory prefix (dir) of an upload that was zipped as a folder. The function works whether spj is True or False. I tested two ways of zipping a folder:
1. Right-click the folder, choose "Send to", and select ```Compressed (zipped) folder```. For such an archive, `ZipFile.namelist()` in Python returns a list like:
```py
['testcase/1.in', 'testcase/1.out', 'testcase/2.in', 'testcase/2.out']
```
2. For archives produced by other compression tools, `ZipFile.namelist()` returns a list like:
```py
['testcase/', 'testcase/1.in', 'testcase/1.out', 'testcase/2.in', 'testcase/2.out']
```
and the position of ```testcase/``` in the list does not seem to be fixed.
These were the pitfalls I ran into while adding the feature.
### 3. How it was developed
Since I am not primarily a Python developer and have no local Django environment, after every code change I deployed to a remote server with containers to test. To speed up building the oj image, I changed this part of the Dockerfile:
```
RUN curl -L $(curl -s https://api.github.com/repos/QingdaoU/OnlineJudgeFE/releases/latest | grep /dist.zip | cut -d '"' -f 4) -o dist.zip && \
unzip dist.zip && \
rm dist.zip
```
to
```bash
RUN curl -L https://opensoftware.oss-cn-hangzhou.aliyuncs.com/dist.zip -o dist.zip && \
unzip dist.zip && \
rm dist.zip
```
In fact, I uploaded the dist files of the latest OnlineJudgeFE release (oj_2.6.1, the current latest) to Aliyun OSS to speed up image builds, then built the image and pushed it to my Aliyun image registry. Accordingly, I also modified docker-compose.yml in the OnlineJudgeDeploy project to replace the oj-backend image with my built image, and then built with docker-compose.
### 4. Result
Both folder-zipping styles mentioned in section 2 now work correctly, and the original file-only archive style remains compatible.
### 5. How to reproduce
For easier downloading the repository is hosted on Gitee; the PR will be opened on GitHub.
[OnlineJudgeDeploy](https://gitee.com/ayang818/OnlineJudgeDeploy/tree/origin%2F2.0/)
Just follow the steps in README.md to deploy, or visit the [test site](http://121.36.138.2/admin/) directly to check the result; contact me for the password.
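A hypothetical Python sketch of the `judge_dir` idea described above: detect the common top-level folder (if any) in the uploaded zip, so that both `namelist()` formats resolve to the same test-case names. This is an illustration, not the actual patch code.

```python
import io
import zipfile

def judge_dir(name_list):
    """Return the single top-level folder prefix of a zip, or '' if none."""
    dirs = {name.split('/')[0] for name in name_list if '/' in name}
    return dirs.pop() + '/' if len(dirs) == 1 else ''

# Build an in-memory archive that mimics a "zipped folder" upload.
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('testcase/1.in', '1 2\n')
    zf.writestr('testcase/1.out', '3\n')

with zipfile.ZipFile(buf) as zf:
    prefix = judge_dir(zf.namelist())
    cases = sorted(
        name[len(prefix):]
        for name in zf.namelist()
        if name != prefix and not name.endswith('/')
    )
```

With a flat archive (no folder), `judge_dir` simply returns an empty prefix, so the original upload style keeps working.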
|
open
|
2019-10-17T12:50:27Z
|
2020-05-26T02:42:42Z
|
https://github.com/QingdaoU/OnlineJudge/issues/275
|
[] |
ayang818
| 5
|
deedy5/primp
|
web-scraping
| 28
|
Add a License
|
Hi, would you be open to adding a license to this project?
https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/licensing-a-repository
I notice it says MIT in [pyproject.toml](https://github.com/deedy5/pyreqwest_impersonate/blob/main/pyproject.toml), perhaps add it as a LICENSE file?
thanks :smiley:
|
closed
|
2024-07-26T23:35:55Z
|
2024-07-27T07:04:22Z
|
https://github.com/deedy5/primp/issues/28
|
[] |
coletdjnz
| 1
|
nerfstudio-project/nerfstudio
|
computer-vision
| 3,082
|
Wrong Result in Depth Map Extraction
|
I've been trying to extract the depth map from a NeRF, specifically from the dataset downloaded by running the line: 'ns-download-data nerfstudio --capture-name=poster" and training it with the Nerfacto model. After reading different issues, I've tried 2 different methods:
The first method involved modifying render.py
(https://github.com/nerfstudio-project/nerfstudio/blob/c599d0f3a1580d4276f4f064fb9cb926eb3b1a71/nerfstudio/scripts/render.py#L156C1-L198C51)
starting from line 157: after the 'if is_depth' check, I added a call to a function that stores output_image into a NumPy array.
I extracted this from the following issue: (https://github.com/nerfstudio-project/nerfstudio/issues/2147)
This method resulted in the following image, which as you can see, does not provide sufficient information to extract the depth data for each pixel.

The second method involved modifying render.py again, but this time on line 145, that is, before 'render_image = []', adding the following:

This second method resulted in new information that does not resemble what I need:

I extracted this from the following issue: (https://github.com/nerfstudio-project/nerfstudio/issues/1388)
I'm not exactly sure what the problem is causing the incorrect extraction of depth map information from my NeRF. I would be grateful for any suggestions to solve this issue.
Thanks.
|
open
|
2024-04-16T10:46:33Z
|
2024-09-28T02:57:54Z
|
https://github.com/nerfstudio-project/nerfstudio/issues/3082
|
[] |
ferphy
| 3
|
schenkd/nginx-ui
|
flask
| 58
|
Proposal for Revamping Project with Updated Package Versions and RHEL Compatibility
|
Hi @schenkd,
I'm interested in contributing to this repository and enhancing its functionality. In my current role, I primarily work with Flask, and I've been looking for efficient solutions to update Nginx reverse proxy configuration files.
I have a few suggestions for improving this project:
- Upgrade to Flask 3.0.x: This will ensure compatibility with the latest features and improvements in Flask.
- Package Updates: Revamp the project to use newer versions of other dependencies for better performance and security.
- Red Hat Enterprise Linux (RHEL) Compatibility: I am also keen to make this project compatible with RHEL, including using the Podman alternative to Docker, to provide more flexibility for users.
I'm ready to start working on the changes and submitting pull requests if you're open to these changes. I noticed the last activity from you was in April 2023, and I would love to help keep this project up to date.
Looking forward to your response.
Best,
Mo
|
open
|
2024-05-22T00:46:12Z
|
2024-05-22T00:46:12Z
|
https://github.com/schenkd/nginx-ui/issues/58
|
[] |
mohamedtaee
| 0
|
thtrieu/darkflow
|
tensorflow
| 316
|
Annotation parsing issue
|
Hi guys
I ran a tiny-yolo model with different datasets (datasets that I made). I constantly get
`ZeroDivisionError: float division by zero` error.
After reading multiple past issues, I realized it has something to do with annotation parsing and tried to fix it by deleting the file `net/yolo/parse-history.txt`, but it doesn't exist in my folder.
Any suggestion?
|
closed
|
2017-06-30T05:18:47Z
|
2018-03-23T16:39:23Z
|
https://github.com/thtrieu/darkflow/issues/316
|
[] |
bestar60
| 10
|
alpacahq/alpaca-trade-api-python
|
rest-api
| 492
|
Feature Request: Customizable TimeFrame
|
We'd like to be able to use variable versions of the TimeFrame, to accommodate different scenarios that we require for barsets.
An ideal usage of the API would be something like this:
```python
rest_api = REST()
time_frame = TimeFrame(15, TimeFrameUnit.minutes)
result = rest_api.get_bars("APPL", time_frame, start='2021-09-09', end='2021-09-10', limit=1000, adjustment='raw').df
```
Relates to #487 and #417
|
closed
|
2021-09-15T15:53:04Z
|
2021-11-02T06:47:45Z
|
https://github.com/alpacahq/alpaca-trade-api-python/issues/492
|
[] |
AlphaGit
| 2
|
ResidentMario/missingno
|
data-visualization
| 82
|
How to configure the size of the picture
|
closed
|
2019-01-14T10:35:01Z
|
2019-01-14T16:28:32Z
|
https://github.com/ResidentMario/missingno/issues/82
|
[] |
dpengwang
| 1
|
|
LAION-AI/Open-Assistant
|
machine-learning
| 3,076
|
plugins
|
The OpenAssistant chat page shows a button to select plugins, but there aren't any. Am I doing something wrong, or is it just not released yet?
|
closed
|
2023-05-07T16:01:39Z
|
2023-05-31T17:21:48Z
|
https://github.com/LAION-AI/Open-Assistant/issues/3076
|
[
"question"
] |
Bensolz
| 3
|
seleniumbase/SeleniumBase
|
pytest
| 2,530
|
Verify `selenium` `4.18.1`, and upgrade `seleniumbase` to it
|
## Verify `selenium` `4.18.1`, and upgrade `seleniumbase` to it
https://pypi.org/project/selenium/4.18.1/ is here. Hopefully it's good...
If everything still works with it, upgrade `seleniumbase` to use it.
|
closed
|
2024-02-22T23:46:17Z
|
2024-02-23T05:38:44Z
|
https://github.com/seleniumbase/SeleniumBase/issues/2530
|
[
"dependencies"
] |
mdmintz
| 4
|
browser-use/browser-use
|
python
| 180
|
[Bug] Font handling crash in create_history_gif
|
## Bug Description
When running `create_history_gif()`, the following error occurs:
```python
OSError: cannot open resource
```
The error happens when trying to access `title_font.path` on the default font after TrueType font loading fails.
## Stack Trace
```
Traceback (most recent call last):
File "...\browser_use\agent\service.py", line 691, in _create_task_frame
larger_title_font = ImageFont.truetype(title_font.path, title_font_size)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: cannot open resource
```
## Solution
Replace the current font handling with a simpler approach that doesn't try to modify font sizes after loading:
```python
try:
regular_font = ImageFont.truetype('arial.ttf', font_size)
title_font = ImageFont.truetype('arial.ttf', title_font_size)
goal_font = ImageFont.truetype('arial.ttf', goal_font_size)
except OSError:
regular_font = ImageFont.load_default()
title_font = regular_font
goal_font = regular_font
```
This avoids trying to access `.path` on the default font while maintaining the same functionality.
## Full implementation
```python
def create_history_gif(
self,
output_path: str = 'agent_history.gif',
duration: int = 3000,
show_goals: bool = True,
show_task: bool = True,
show_logo: bool = True,
font_size: int = 28,
title_font_size: int = 36,
goal_font_size: int = 28,
margin: int = 20,
line_spacing: float = 1.0,
) -> None:
"""Create a GIF from the agent's history with overlaid task and goal text."""
if not self.history.history:
logger.warning('No history to create GIF from')
return
images = []
# Simplified font handling with single try/except
try:
regular_font = ImageFont.truetype('arial.ttf', font_size)
title_font = ImageFont.truetype('arial.ttf', title_font_size)
goal_font = ImageFont.truetype('arial.ttf', goal_font_size)
except OSError:
logger.warning('Could not load Arial font, using default')
regular_font = ImageFont.load_default()
title_font = regular_font
goal_font = regular_font
# Load logo if requested
logo = None
if show_logo:
try:
logo = Image.open('./static/browser-use.png')
# Resize logo to be small (e.g., 40px height)
logo_height = 150
aspect_ratio = logo.width / logo.height
logo_width = int(logo_height * aspect_ratio)
logo = logo.resize((logo_width, logo_height), Image.Resampling.LANCZOS)
except Exception as e:
logger.warning(f'Could not load logo: {e}')
# Create task frame if requested
if show_task and self.task:
task_frame = self._create_task_frame(
self.task,
self.history.history[0].state.screenshot,
title_font,
regular_font,
logo,
line_spacing,
)
images.append(task_frame)
# Process each history item
for i, item in enumerate(self.history.history, 1):
if not item.state.screenshot:
continue
# Convert base64 screenshot to PIL Image
img_data = base64.b64decode(item.state.screenshot)
image = Image.open(io.BytesIO(img_data))
if show_goals and item.model_output:
image = self._add_overlay_to_image(
image=image,
step_number=i,
goal_text=item.model_output.current_state.next_goal,
regular_font=regular_font,
title_font=title_font,
margin=margin,
logo=logo,
)
images.append(image)
if images:
# Save the GIF
images[0].save(
output_path,
save_all=True,
append_images=images[1:],
duration=duration,
loop=0,
optimize=False,
)
logger.info(f'Created history GIF at {output_path}')
else:
logger.warning('No images found in history to create GIF')
def _create_task_frame(
self,
task: str,
first_screenshot: str,
title_font: ImageFont.FreeTypeFont,
regular_font: ImageFont.FreeTypeFont,
logo: Optional[Image.Image] = None,
line_spacing: float = 1.5,
) -> Image.Image:
"""Create initial frame showing the task."""
img_data = base64.b64decode(first_screenshot)
template = Image.open(io.BytesIO(img_data))
image = Image.new('RGB', template.size, (0, 0, 0))
draw = ImageDraw.Draw(image)
# Calculate vertical center of image
center_y = image.height // 2
# Draw "Task:" title (no font size modification)
title = 'Task:'
title_bbox = draw.textbbox((0, 0), title, font=title_font)
title_width = title_bbox[2] - title_bbox[0]
title_x = (image.width - title_width) // 2
title_y = center_y - 150
draw.text(
(title_x, title_y),
title,
font=title_font, # Use original font
fill=(255, 255, 255),
)
# Draw task text (no font size modification)
margin = 140
max_width = image.width - (2 * margin)
wrapped_text = self._wrap_text(task, regular_font, max_width)
# Calculate line height with spacing
line_height = regular_font.size * line_spacing
# Split text into lines and draw with custom spacing
lines = wrapped_text.split('\n')
total_height = line_height * len(lines)
# Start position for first line
text_y = center_y - (total_height / 2) + 50
for line in lines:
line_bbox = draw.textbbox((0, 0), line, font=regular_font)
text_x = (image.width - (line_bbox[2] - line_bbox[0])) // 2
draw.text(
(text_x, text_y),
line,
font=regular_font, # Use original font
fill=(255, 255, 255),
)
text_y += line_height
# Add logo if provided (top right corner)
if logo:
logo_margin = 20
logo_x = image.width - logo.width - logo_margin
image.paste(logo, (logo_x, logo_margin), logo if logo.mode == 'RGBA' else None)
return image
```
|
open
|
2025-01-07T10:32:45Z
|
2025-02-21T20:30:11Z
|
https://github.com/browser-use/browser-use/issues/180
|
[] |
Alex31y
| 5
|
ymcui/Chinese-LLaMA-Alpaca
|
nlp
| 644
|
Chinese-LLaMA-7B gives irrelevant, off-topic replies during inference
|
### Pre-submission checklist
- [X] Make sure you are using the latest code in the repo (git pull); some issues have already been resolved and fixed.
- [X] Since the related dependencies are updated frequently, make sure you followed the relevant steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki)
- [X] I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题) and searched the issues without finding a similar problem or solution
- [X] Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat), etc.; it is also recommended to look for solutions in the corresponding projects
- [X] Model integrity check: be sure to verify the model against [SHA256.md](https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/SHA256.md); with an incorrect model, correct behavior and normal operation cannot be guaranteed
### Issue type
Model quality
### Base model
LLaMA-7B
### Operating system
None
### Detailed description
I load the Chinese-LLaMA-7B model directly, without fine-tuning. During inference the first one or two sentences are often relevant, followed by a large amount of meaningless content. Is this kind of reply normal? How can I fix it? Thanks!
### Dependencies (required for code issues)
_No response_
### Logs or screenshots


|
closed
|
2023-06-20T08:46:59Z
|
2023-07-11T22:02:09Z
|
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/644
|
[
"stale"
] |
dazhaxie0526
| 9
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
computer-vision
| 767
|
ValueError: num_samples should be a positive integeral value, but got num_samples=0
|
Please help! I ran into this problem and can't solve it.
|
open
|
2019-09-14T09:16:51Z
|
2021-12-08T21:24:06Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/767
|
[] |
tpy9
| 5
|
tortoise/tortoise-orm
|
asyncio
| 1,180
|
Queries by different tasks produce different instances of Model objects for same rows
|
Hi there,
I need a way to ensure that if multiple tasks in an asyncio thread each perform queries on tables, and end up with row objects (instances of models.Model subclasses) representing the same rows, then all row objects produced from Tortoise queries, representing the same underlying table/row, will be singletons.
Usage scenario is a server with dozens or hundreds of worker tasks in the same process, all retrieving row objects, all updating these and saving these back. There will be multiple cases where 2 or more tasks can be working on the same table/row object, and each task needs to be able to hold an object for several seconds at a time without causing other tasks to wait on locks.
I've noticed that if 2 or more tasks perform a query and get an object for the same table/row, then each task will be given a whole different instance of the table class. This of course is a huge integrity issue.
So given a (hypothetical) scenario like
```
class Foo(models.Model):
id = models.IntField(pk=True)
# other fields follow, not important here
class Bar(models.Model):
id = models.IntField(pk=True)
foo = models.ForeignKeyField('mymodels.Foo', related_name='bars', null=True)
# other fields follow, not important here
```
I need a way to ensure that if:
1. task 'alice' does a query on table 'Foo' and gets Foo instance 'foo' (id=54), then
2. task 'bob' does a query on table 'Bar' and gets a Bar instance 'bar' (id=33), then
3. task 'bob' awaits on bar.foo to get the related Foo instance 'foo' (id=54), then
4. task 'alice' iterates over its 'foo.bars' and among these ends up with a Bar instance 'bar' (id=33)
5. task 'charlie' does queries on both 'Foo' and 'Bar', and ends up with a Foo instance (id=54) and a Bar instance (id=33)
Then:
1. the instance of Foo with id=54 held by tasks alice, bob and charlie is the exact same Python object, and
2. the instance of Bar with id=33 held by alice, bob and charlie is the same object (not different Bar instances)
3. there is no delay to alice, bob or charlie in their queries resulting from underlying locks
Is this possible in Tortoise ORM? If so, can someone please point me to a straightforward example?
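Tortoise ORM has no built-in identity map, so one application-level workaround (a hypothetical sketch, not a Tortoise feature) is to deduplicate fetched rows through a `weakref.WeakValueDictionary` keyed by (model name, primary key):

```python
import weakref

class IdentityMap:
    """Hypothetical per-thread cache that keeps row objects singletons."""

    def __init__(self):
        # Weak references let rows be garbage-collected once no task holds them.
        self._objects = weakref.WeakValueDictionary()

    def get_or_store(self, model_name, pk, obj):
        key = (model_name, pk)
        cached = self._objects.get(key)
        if cached is not None:
            return cached  # another task already holds this row
        self._objects[key] = obj
        return obj

class Row:
    """Stand-in for a models.Model instance."""
    def __init__(self, pk):
        self.pk = pk

idmap = IdentityMap()
a = idmap.get_or_store('Foo', 54, Row(54))
b = idmap.get_or_store('Foo', 54, Row(54))  # second "query" for the same row
# a and b are now the exact same Python object
```

Every query result would have to be routed through `get_or_store`, and merging fresh column values into an already-cached instance is left as an exercise.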
|
closed
|
2022-07-11T21:16:25Z
|
2022-07-11T22:01:18Z
|
https://github.com/tortoise/tortoise-orm/issues/1180
|
[] |
davidmcnabnz
| 2
|
mithi/hexapod-robot-simulator
|
dash
| 29
|
Reimplement ik_solver (2.0) module
|
Reimplement inverse kinematics solver (`hexapod.ik_solver2`). The current implementation `hexapod.ik_solver` involves a bunch of tiny helper methods and one big god function that tries to do almost everything. I plan to redesign this by making a class that centers around this responsibility.
|
closed
|
2020-04-11T09:55:31Z
|
2020-04-15T12:21:22Z
|
https://github.com/mithi/hexapod-robot-simulator/issues/29
|
[
"PRIORITY",
"code quality"
] |
mithi
| 0
|
psf/requests
|
python
| 6,063
|
How to set multi same key in request header
|
To Bypass some WAF, I Wanna Send a Malformed HTTP request like this:
```
POST /123123 HTTP/1.1
Host: a.b.com
Hello: hello1
Hello: hello2
a=b
```
The request has two identical keys in its headers.
Try:
```
headers = [('interests', 'football'), ('interests', 'basketball')]
headers = {'interests': ['football', 'basketball']}
```
both raise exceptions.
Try:
```
from urllib3._collections import HTTPHeaderDict
headers = HTTPHeaderDict()
headers .add('interests', 'football')
headers .add('interests', 'basketball')
```
The request will be :
```
POST /123123 HTTP/1.1
Host: a.b.com
interests: football, basketball
```
Please help me to know , how to set same key with diff value in request header, thanks.
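requests normalizes headers into a dict-like structure, so truly duplicated header lines cannot be expressed through its API. As a lower-level sketch, the stdlib `http.client` does let you repeat a header via `putheader` (host and path are the placeholders from the example above; the final `endheaders` call is commented out so nothing is actually sent):

```python
import http.client

conn = http.client.HTTPConnection('a.b.com')
conn.putrequest('POST', '/123123', skip_host=True, skip_accept_encoding=True)
conn.putheader('Host', 'a.b.com')
conn.putheader('Hello', 'hello1')  # repeating the same key is allowed here
conn.putheader('Hello', 'hello2')
# conn.endheaders(b'a=b')  # would open the socket and send the request
```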
|
closed
|
2022-02-10T06:57:55Z
|
2022-05-11T14:00:43Z
|
https://github.com/psf/requests/issues/6063
|
[] |
Xyberonz
| 1
|
deepfakes/faceswap
|
machine-learning
| 1,401
|
The GPU docker image does not find the requirements file
|
NVIDIA docker image: it kept failing to clone the repo, so I increased the buffer, the timeout, and the number of retries:
```
RUN git config --global http.postBuffer 1048576000
RUN git config --global http.lowSpeedLimit 0
RUN git config --global http.lowSpeedTime 999999
RUN for i in {1..5}; do git clone --depth 1 --no-single-branch https://github.com/deepfakes/faceswap.git && break || sleep 15; done
```
But the build still fails at:
```
>>> RUN python -m pip --no-cache-dir install -r requirements_nvidia.txt
24 |
25 |
--------------------
ERROR: failed to solve: process "/bin/sh -c python -m pip --no-cache-dir install -r requirements_nvidia.txt" did not complete successfully: exit code: 1
```
It still can't find the requirements file. I fixed it manually, but you should look into it.
- OS: ubuntu
|
open
|
2024-09-10T18:18:35Z
|
2024-09-26T02:36:31Z
|
https://github.com/deepfakes/faceswap/issues/1401
|
[] |
fady17
| 1
|
chainer/chainer
|
numpy
| 8,594
|
ValueError: Inexistent group is specified
|
I am getting the following error from the code below. I found the same error on Stack Overflow, where it was suggested to change the relative path to an absolute path, but the error persists even after doing so. Importantly, the code was working until a few hours ago; I was able to load the model then.
```
Traceback (most recent call last):
  File "infer.py", line 226, in <module>
    serializers.load_hdf5(args.model, model)
  File "C:\Users\Dallas\Anaconda3\envs\RChainer\lib\site-packages\chainer\serializers\hdf5.py", line 143, in load_hdf5
    d.load(obj)
  File "C:\Users\Dallas\Anaconda3\envs\RChainer\lib\site-packages\chainer\serializer.py", line 85, in load
    obj.serialize(self)
  File "C:\Users\Dallas\Anaconda3\envs\RChainer\lib\site-packages\chainer\link.py", line 689, in serialize
    d[name].serialize(serializer[name])
  File "C:\Users\Dallas\Anaconda3\envs\RChainer\lib\site-packages\chainer\link.py", line 480, in serialize
    serializer(name, d[name].data)
  File "C:\Users\Dallas\Anaconda3\envs\RChainer\lib\site-packages\chainer\serializers\hdf5.py", line 111, in __call__
    raise ValueError('Inexistent group is specified')
ValueError: Inexistent group is specified
```
|
closed
|
2020-12-21T23:37:57Z
|
2022-01-08T23:16:02Z
|
https://github.com/chainer/chainer/issues/8594
|
[
"stale",
"issue-checked"
] |
Deccan12
| 2
|
automl/auto-sklearn
|
scikit-learn
| 819
|
'RuntimeWarning: Mean of empty slice' during metafeature calculation
|
Hello,
When building my own metafeatures using the scripts provided, the following warnings appear during 03_calculate_metafeatures:
```
[Memory] Calling __main__--WORKINGDIR-03_calculate_metafeatures.calculate_metafeatures...
calculate_metafeatures(254)
254
/home/XXX/.local/lib/python3.6/site-packages/numpy/lib/nanfunctions.py:1667: RuntimeWarning: Degrees of freedom <= 0 for slice.
keepdims=keepdims)
/home/XXX/.local/lib/python3.6/site-packages/autosklearn/metalearning/metafeatures/metafeatures.py:437: RuntimeWarning: Mean of empty slice
mean = np.nanmean(values)
```
Is this something to worry about or can it be ignored?
|
closed
|
2020-04-09T14:13:20Z
|
2021-11-17T11:54:17Z
|
https://github.com/automl/auto-sklearn/issues/819
|
[
"enhancement"
] |
RitterHannah
| 3
|
jumpserver/jumpserver
|
django
| 14,250
|
[Feature] Display the current server's resource usage (CPU/RAM/disk, etc.) on the console home page
|
### Product version
v4.2.0
### Version type
- [X] Community Edition
- [ ] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation method
- [X] Online install (one-click command)
- [ ] Offline package install
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] From source
### ⭐️ Feature description
It would be helpful if the console home page displayed the current server's resource usage (CPU/RAM/disk, etc.).
### Proposed solution
Add a resource-usage display (CPU/RAM/disk, etc.) to the console home page so that it can be checked at a glance, and so that optimization or upgrades can be performed promptly when usage is high.
### Additional information
_No response_
|
closed
|
2024-09-29T09:08:18Z
|
2024-10-08T11:04:44Z
|
https://github.com/jumpserver/jumpserver/issues/14250
|
[
"⭐️ Feature Request"
] |
wqinf
| 4
|
erdewit/ib_insync
|
asyncio
| 36
|
ib.fills() returns incorrect results for partially filled orders for multi-leg option COMBO trades
|
I developed a script [TradeLogIB](https://github.com/stenri/TradeLogIB) which uses [ib_insync](https://github.com/erdewit/ib_insync) to download a list of option trades from IB and save them to a .csv file. Basically what this script does is:
```
for fill in ib.fills():
if fill.contract.secType != 'OPT':
continue
....
```
Very compact and elegant code, thanks to [ib_insync](https://github.com/erdewit/ib_insync). What I noticed is that the [TradeLogIB](https://github.com/stenri/TradeLogIB) script does not save trades for partially filled multi-leg option orders. I have to cancel the order or wait until it is completely filled, and only then can trades from this order be dumped with my script.
In contrast, the version of [TradeLogIB](https://github.com/stenri/TradeLogIB) that worked with Python 2.7 + the IbPy package works quite well and is able to dump trades for partially filled orders. So I know something is wrong with ib.fills() in [ib_insync](https://github.com/erdewit/ib_insync).
Here are the results of my research. When multi-leg option order is partially filled, ib.fills() returns something like this:

Notice a list of Fill() structures containing the same Contract() with the same conId and secType='BAG' indicating this is a contract for a multi-leg COMBO order.
And after the multi-leg option order is filled, ib.fills() finally returns the following:

This is a correct return result, what I expect it to return in both cases. Notice the first Fill() contains a Contract() with secType='BAG' (COMBO order). And second and third lines contains a Contract() with secType='OPT' (fills for individual option legs for a COMBO order above).
At this point I knew something was wrong with the Contract() field in the Fill() structure when order was in a partially filled state.
Now, let's take a look at wrapper.py at execDetails() implementation in [ib_insync](https://github.com/erdewit/ib_insync):
```
@iswrapper
def execDetails(self, reqId, contract, execution):
# must handle both live fills and responses to reqExecutions
key = (execution.clientId, execution.orderId) # <--------- (1)
trade = self.trades.get(key)
if trade:
contract = trade.contract # <--------- (2) BUGBUG:
else:
contract = Contract(**contract.__dict__)
execId = execution.execId
execution = Execution(**execution.__dict__)
fill = Fill(contract, execution, CommissionReport(), self.lastTime)
```
execDetails() merges self.trades.contract (opened orders) with the contract data passed as a parameter (see the line marked **(2) BUGBUG:**), and it uses a key consisting of clientId and orderId (see the line marked **(1)** in the code above).
The problem is that multi-leg option COMBO orders contain several Fill()'s, each one with it's own Contract(). And all these Contract()'s have the same orderId and clientId as well. Some of the contracts are secType='BAG', others are secType='OPT' (see screenshot above). But orderId is the same as all these Contract()'s and Fill()'s belong to the same COMBO order.
So, when execDetails() decides not to use a contract passed as a function argument, but instead take the trade.contract based on key==(execution.clientId, execution.orderId):
```
if trade:
contract = trade.contract # <--------- (2) BUGBUG:
```
It erroneously substitutes real Contract(secType='OPT') with a Contract(secType='BAG') from self.trades structure. And that causes problems.
Conclusion: execDetails() cannot use the clientId / orderId pair as a key to determine a Contract(), as it does not work correctly for multi-leg option orders.
As a proof of concept I developed a fix to verify it solves the issue. If I rewrite the execDetails() code to throw out the offensive lines:
```
@iswrapper
def execDetails(self, reqId, contract, execution):
# must handle both live fills and responses to reqExecutions
key = (execution.clientId, execution.orderId)
trade = self.trades.get(key)
contract = Contract(**contract.__dict__)
execId = execution.execId
execution = Execution(**execution.__dict__)
fill = Fill(contract, execution, CommissionReport(), self.lastTime)
```
[TradeLogIB](https://github.com/stenri/TradeLogIB) script starts to work as expected and correctly dumps partially filled multi-leg orders.
I do not know if this change breaks anything else, as it is not clear to me what the original intention was in replacing the contract in the Fill() structure. So, I am going to submit my patch as a pull request, and it is up to the ib_insync developer to decide how to fix this correctly.
P.S. I've also considered other methods to fix this. For example to use the key that contains "clientId, orderId, contract Id":
```
key = (execution.clientId, execution.orderId, contract.conId)
```
But I found that at least one place, the orderStatus() routine, has no access to a contract at all, and orderStatus() calculates a key.
|
closed
|
2018-01-06T12:21:52Z
|
2018-01-06T15:56:51Z
|
https://github.com/erdewit/ib_insync/issues/36
|
[] |
stenri
| 2
|
litestar-org/litestar
|
api
| 3,495
|
Enhancement: Repository pattern interfaces
|
### Summary
It's common practice to provide generic interfaces for implementing the repository pattern, and it makes sense to provide several interfaces for specific types of application designs. CrudRepository is suitable for abstracting persistence of CRUD models. More generic repository interfaces are required for domain-driven designs (DDD) involving aggregates and entities (with value objects persisted as part of entities). Data access objects (DAOs) can be seen as simple variants of the repository pattern or as a pattern category of their own ([DAOs vs Repository](https://www.baeldung.com/java-dao-vs-repository#:~:text=DAO%20is%20an%20abstraction%20of,closer%20to%20the%20Domain%20objects)).
Examples:
- [Spring Data Repository](https://docs.spring.io/spring-data/commons/docs/current/api/org/springframework/data/repository/package-summary.html) / [usage reference](https://docs.spring.io/spring-data/data-commons/docs/1.6.1.RELEASE/reference/html/repositories.html)
### Basic Example
_No response_
### Drawbacks and Impact
Impacts:
- enables domain driven designs of different complexities
### Unresolved questions
- Should DAO patterns be considered separately from repository patterns?
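For illustration, a minimal sketch of what a generic CRUD repository interface could look like in Python. The names (`CrudRepository`, `InMemoryRepository`) are hypothetical, not existing Litestar APIs.

```python
from abc import ABC, abstractmethod
from typing import Generic, List, Optional, TypeVar

T = TypeVar("T")
ID = TypeVar("ID")

class CrudRepository(ABC, Generic[T, ID]):
    """Generic CRUD repository interface."""

    @abstractmethod
    def get(self, item_id: ID) -> Optional[T]: ...

    @abstractmethod
    def list(self) -> List[T]: ...

    @abstractmethod
    def add(self, item: T) -> T: ...

    @abstractmethod
    def delete(self, item_id: ID) -> None: ...

class InMemoryRepository(CrudRepository[dict, int]):
    """Toy implementation backed by a dict, for tests and sketches."""

    def __init__(self) -> None:
        self._items: dict = {}
        self._next_id = 1

    def get(self, item_id: int) -> Optional[dict]:
        return self._items.get(item_id)

    def list(self) -> List[dict]:
        return list(self._items.values())

    def add(self, item: dict) -> dict:
        stored = {**item, "id": self._next_id}
        self._items[self._next_id] = stored
        self._next_id += 1
        return stored

    def delete(self, item_id: int) -> None:
        self._items.pop(item_id, None)
```

A DDD-oriented repository for aggregates would instead expose a narrower, intention-revealing surface rather than generic CRUD methods.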
|
closed
|
2024-05-14T19:08:29Z
|
2025-03-20T15:54:42Z
|
https://github.com/litestar-org/litestar/issues/3495
|
[
"Enhancement"
] |
fkromer
| 1
|
xlwings/xlwings
|
automation
| 2,095
|
win range to pdf landscape?
|
Hi, I need to export a range via range.to_pdf, then convert it to PNG to get a high-resolution table image. But I then need to crop the surrounding white space, and I don't know its exact size. Is there a way to go from PDF to PNG without the white space? Thank you.
https://stackoverflow.com/questions/72692125/export-an-excel-spreadsheet-to-pdf-using-xlwings-to-pdf-in-landscape-mode
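One workaround, assuming Pillow is installed, is to auto-crop the exported PNG afterwards by finding the bounding box of the non-white content. This is a generic image-processing sketch, not an xlwings API:

```python
from PIL import Image, ImageOps

def autocrop_white(path_in, path_out, margin=0):
    """Crop the surrounding white space from an image file."""
    img = Image.open(path_in).convert("RGB")
    # Inverting makes the white background black, so getbbox()
    # returns the bounding box of the actual (non-white) content.
    bbox = ImageOps.invert(img).getbbox()
    if bbox:
        left, top, right, bottom = bbox
        img = img.crop((max(left - margin, 0), max(top - margin, 0),
                        min(right + margin, img.width),
                        min(bottom + margin, img.height)))
    img.save(path_out)
```

Apply it to the PNG produced from the PDF; `margin` keeps a small border around the table if desired.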
|
open
|
2022-11-11T01:57:17Z
|
2022-11-19T03:51:15Z
|
https://github.com/xlwings/xlwings/issues/2095
|
[] |
zbjdonald
| 2
|
akfamily/akshare
|
data-science
| 5,387
|
futures_zh_minute_sina interface returns incomplete data
|
Python: 3.10
Akshare: 1.15.27
Code to reproduce:
import akshare as ak
data = ak.futures_zh_minute_sina("RB2501", "60")
print(data)
See the screenshot for the printed result.

The latest data for contract RB2501 should be from 2024-11-30, not 2024-04-18. Is the data source interface failing to return the latest data?
Thanks for your support.
|
closed
|
2024-11-30T07:28:10Z
|
2024-11-30T10:50:50Z
|
https://github.com/akfamily/akshare/issues/5387
|
[
"bug"
] |
Yzx-1024
| 1
|
PaddlePaddle/PaddleHub
|
nlp
| 2,266
|
Error on `import paddlehub` after installing paddlehub
|
Thank you for reporting a PaddleHub usage issue and for your contribution to PaddleHub!
When posting your question, please also provide the following information:
- Version and environment information
paddlehub 2.3.1
paddlepaddle 2.5.0
Python 3.9.13
conda 22.9.0
Windows 10
- Reproduction information: if this is an error, please provide the environment and steps to reproduce

|
closed
|
2023-06-27T10:29:39Z
|
2023-09-20T11:07:46Z
|
https://github.com/PaddlePaddle/PaddleHub/issues/2266
|
[] |
JAC-z
| 3
|
jina-ai/serve
|
machine-learning
| 5,966
|
Endless Waiting executor0 but the service works normally.
|
The code being run is from #5959.
```
DEBUG executor0/rep-0@39476 Setting signal handlers [07/14/23 08:58:52]
DEBUG executor0/rep-0@39476 Signal handlers already set
DEBUG executor0-replica-set@39272 Waiting for ReplicaSet to start successfully [07/14/23 08:58:52]
⠋ Waiting ... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/2 -:--:--DEBUG gateway/rep-0@39477 Setting signal handlers [07/14/23 08:58:52]
DEBUG gateway-replica-set@39272 Waiting for ReplicaSet to start successfully [07/14/23 08:58:52]
⠋ Waiting ... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/2 -:--:--DEBUG gateway/rep-0@39477 Signal handlers already set
DEBUG gateway/rep-0@39477 adding connection for deployment executor0/heads/0 to grpc://0.0.0.0:62917 [07/14/23 08:58:52]
DEBUG gateway/rep-0@39477 create_connection connection for executor0 to grpc://0.0.0.0:62917
DEBUG gateway/rep-0@39477 create_connection connection for executor0 to grpc://0.0.0.0:62917
DEBUG gateway/rep-0@39477 connection for deployment executor0/heads/0 to grpc://0.0.0.0:62917 added
DEBUG gateway/rep-0@39477 Setting up GRPC server
DEBUG gateway/rep-0@39477 Get all endpoints from TopologyGraph
DEBUG gateway/rep-0@39477 Running GatewayRuntime warmup
DEBUG gateway/rep-0@39477 Getting Endpoints data from executor0
DEBUG gateway/rep-0@39477 starting warmup task for deployment executor0
DEBUG gateway/rep-0@39477 gRPC call to executor0 for EndpointDiscovery errored, with error <AioRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses; last error: UNKNOWN: Failed to connect to remote host: Connection refused"
debug_error_string = "UNKNOWN:Failed to pick subchannel {created_time:"2023-07-14T08:58:52.240257381+08:00", children:[UNKNOWN:failed to connect to all addresses; last error: UNKNOWN: Failed to connect to remote host: Connection
refused {grpc_status:14, created_time:"2023-07-14T08:58:52.240253323+08:00"}]}"
> and for the 1th time.
DEBUG gateway/rep-0@39477 resetting connection for executor0 to 0.0.0.0:62917
DEBUG gateway/rep-0@39477 create_connection connection for executor0 to 0.0.0.0:62917
⠹ Waiting executor0 gateway... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/2 0:00:01DEBUG gateway/rep-0@39477 gRPC call to executor0 for EndpointDiscovery errored, with error <AioRpcError of RPC that terminated with: [07/14/23 08:58:53]
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses; last error: UNKNOWN: Failed to connect to remote host: Connection refused"
debug_error_string = "UNKNOWN:Failed to pick subchannel {created_time:"2023-07-14T08:58:53.241945446+08:00", children:[UNKNOWN:failed to connect to all addresses; last error: UNKNOWN: Failed to connect to remote host: Connection
refused {grpc_status:14, created_time:"2023-07-14T08:58:53.241944256+08:00"}]}"
> and for the 2th time.
DEBUG gateway/rep-0@39477 resetting connection for executor0 to 0.0.0.0:62917
DEBUG gateway/rep-0@39477 create_connection connection for executor0 to 0.0.0.0:62917
⠦ Waiting executor0 gateway... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/2 0:00:02DEBUG gateway/rep-0@39477 gRPC call to executor0 for EndpointDiscovery errored, with error <AioRpcError of RPC that terminated with: [07/14/23 08:58:55]
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses; last error: UNKNOWN: Failed to connect to remote host: Connection refused"
debug_error_string = "UNKNOWN:Failed to pick subchannel {created_time:"2023-07-14T08:58:55.150302259+08:00", children:[UNKNOWN:failed to connect to all addresses; last error: UNKNOWN: Failed to connect to remote host: Connection
refused {created_time:"2023-07-14T08:58:55.150299709+08:00", grpc_status:14}]}"
> and for the 3th time.
DEBUG gateway/rep-0@39477 resetting connection for executor0 to 0.0.0.0:62917
DEBUG gateway/rep-0@39477 create_connection connection for executor0 to 0.0.0.0:62917
⠧ Waiting executor0 gateway... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/2 0:00:05DEBUG gateway/rep-0@39477 gRPC call to executor0 for EndpointDiscovery errored, with error <AioRpcError of RPC that terminated with: [07/14/23 08:58:57]
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses; last error: UNKNOWN: Failed to connect to remote host: Connection refused"
debug_error_string = "UNKNOWN:Failed to pick subchannel {created_time:"2023-07-14T08:58:57.65715233+08:00", children:[UNKNOWN:failed to connect to all addresses; last error: UNKNOWN: Failed to connect to remote host: Connection refused
{created_time:"2023-07-14T08:58:57.657149449+08:00", grpc_status:14}]}"
> and for the 4th time.
DEBUG gateway/rep-0@39477 gRPC call for executor0 failed, retries exhausted
DEBUG gateway/rep-0@39477 resetting connection for executor0 to 0.0.0.0:62917
DEBUG gateway/rep-0@39477 create_connection connection for executor0 to 0.0.0.0:62917
WARNI… gateway/rep-0@39477 Getting endpoints failed: failed to connect to all addresses; last error: UNKNOWN: Failed to connect to remote host: Connection refused. Waiting for another trial
⠏ Waiting executor0 gateway... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/2 0:00:06DEBUG gateway/rep-0@39477 Getting Endpoints data from executor0 [07/14/23 08:58:58]
⠙ Waiting executor0 gateway... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/2 0:00:09DEBUG gateway/rep-0@39477 gRPC call to executor0 for EndpointDiscovery errored, with error <AioRpcError of RPC that terminated with: [07/14/23 08:59:01]
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses; last error: UNKNOWN: Failed to connect to remote host: Connection refused"
debug_error_string = "UNKNOWN:Failed to pick subchannel {created_time:"2023-07-14T08:59:01.975496864+08:00", children:[UNKNOWN:failed to connect to all addresses; last error: UNKNOWN: Failed to connect to remote host: Connection
refused {grpc_status:14, created_time:"2023-07-14T08:59:01.975494113+08:00"}]}"
> and for the 1th time.
DEBUG gateway/rep-0@39477 resetting connection for executor0 to 0.0.0.0:62917
DEBUG gateway/rep-0@39477 create_connection connection for executor0 to 0.0.0.0:62917
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:11<00:00, 1.67s/it]
⠦ Waiting executor0 gateway... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/2 0:00:17DEBUG gateway/rep-0@39477 gRPC call to executor0 for EndpointDiscovery errored, with error <AioRpcError of RPC that terminated with: [07/14/23 08:59:09]
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses; last error: UNKNOWN: Failed to connect to remote host: Connection refused"
debug_error_string = "UNKNOWN:Failed to pick subchannel {created_time:"2023-07-14T08:59:09.637254313+08:00", children:[UNKNOWN:failed to connect to all addresses; last error: UNKNOWN: Failed to connect to remote host: Connection
refused {grpc_status:14, created_time:"2023-07-14T08:59:09.637251379+08:00"}]}"
> and for the 2th time.
⠇ Waiting executor0 gateway... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/2 0:00:17DEBUG gateway/rep-0@39477 resetting connection for executor0 to 0.0.0.0:62917
DEBUG gateway/rep-0@39477 create_connection connection for executor0 to 0.0.0.0:62917
⠋ Waiting executor0 gateway... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/2 0:00:19DEBUG executor0/rep-0@39476 <__main__.GLMInference object at 0x7f129e307f40> is successfully loaded! [07/14/23 08:59:11]
DEBUG executor0/rep-0@39476 Setting up GRPC server
INFO executor0/rep-0@39476 start server bound to 0.0.0.0:62917
DEBUG executor0/rep-0@39476 server bound to 0.0.0.0:62917 started
DEBUG executor0/rep-0@39476 GRPC server setup successful
DEBUG executor0/rep-0@39272 Checking readiness to 0.0.0.0:62917 with protocol GRPC [07/14/23 08:59:11]
DEBUG executor0/rep-0@39272 Exception: <AioRpcError of RPC that terminated with: [07/14/23 08:59:12]
status = StatusCode.DEADLINE_EXCEEDED
details = "Deadline Exceeded"
debug_error_string = "UNKNOWN:Deadline Exceeded {created_time:"2023-07-14T08:59:12.556621633+08:00", grpc_status:4}"
>
DEBUG executor0/rep-0@39272 Server on 0.0.0.0:62917 with protocol GRPC is not yet ready
DEBUG executor0/rep-0@39272 Checking readiness to 0.0.0.0:62917 with protocol GRPC
DEBUG executor0/rep-0@39272 Exception: <AioRpcError of RPC that terminated with: [07/14/23 08:59:13]
status = StatusCode.DEADLINE_EXCEEDED
details = "Deadline Exceeded"
debug_error_string = "UNKNOWN:Deadline Exceeded {created_time:"2023-07-14T08:59:13.666544143+08:00", grpc_status:4}"
>
DEBUG executor0/rep-0@39272 Server on 0.0.0.0:62917 with protocol GRPC is not yet ready
DEBUG executor0/rep-0@39272 Checking readiness to 0.0.0.0:62917 with protocol GRPC
DEBUG executor0/rep-0@39272 Exception: <AioRpcError of RPC that terminated with: [07/14/23 08:59:14]
status = StatusCode.DEADLINE_EXCEEDED
details = "Deadline Exceeded"
debug_error_string = "UNKNOWN:Deadline Exceeded {created_time:"2023-07-14T08:59:14.774546914+08:00", grpc_status:4}"
>
DEBUG executor0/rep-0@39272 Server on 0.0.0.0:62917 with protocol GRPC is not yet ready
DEBUG executor0/rep-0@39272 Checking readiness to 0.0.0.0:62917 with protocol GRPC
DEBUG executor0/rep-0@39272 Exception: <AioRpcError of RPC that terminated with: [07/14/23 08:59:15]
status = StatusCode.DEADLINE_EXCEEDED
details = "Deadline Exceeded"
debug_error_string = "UNKNOWN:Deadline Exceeded {grpc_status:4, created_time:"2023-07-14T08:59:15.882533002+08:00"}"
>
DEBUG executor0/rep-0@39272 Server on 0.0.0.0:62917 with protocol GRPC is not yet ready
DEBUG executor0/rep-0@39272 Checking readiness to 0.0.0.0:62917 with protocol GRPC
DEBUG executor0/rep-0@39272 Exception: <AioRpcError of RPC that terminated with: [07/14/23 08:59:16]
status = StatusCode.DEADLINE_EXCEEDED
details = "Deadline Exceeded"
debug_error_string = "UNKNOWN:Deadline Exceeded {grpc_status:4, created_time:"2023-07-14T08:59:16.990531376+08:00"}"
>
DEBUG executor0/rep-0@39272 Server on 0.0.0.0:62917 with protocol GRPC is not yet ready
DEBUG executor0/rep-0@39272 Checking readiness to 0.0.0.0:62917 with protocol GRPC [07/14/23 08:59:17]
DEBUG executor0/rep-0@39272 Exception: <AioRpcError of RPC that terminated with: [07/14/23 08:59:18]
status = StatusCode.DEADLINE_EXCEEDED
details = "Deadline Exceeded"
debug_error_string = "UNKNOWN:Deadline Exceeded {created_time:"2023-07-14T08:59:18.098532957+08:00", grpc_status:4}"
>
DEBUG executor0/rep-0@39272 Server on 0.0.0.0:62917 with protocol GRPC is not yet ready
DEBUG executor0/rep-0@39272 Checking readiness to 0.0.0.0:62917 with protocol GRPC
DEBUG executor0/rep-0@39272 Exception: <AioRpcError of RPC that terminated with: [07/14/23 08:59:19]
status = StatusCode.DEADLINE_EXCEEDED
details = "Deadline Exceeded"
debug_error_string = "UNKNOWN:Deadline Exceeded {grpc_status:4, created_time:"2023-07-14T08:59:19.206557554+08:00"}"
>
DEBUG executor0/rep-0@39272 Server on 0.0.0.0:62917 with protocol GRPC is not yet ready
DEBUG executor0/rep-0@39272 Checking readiness to 0.0.0.0:62917 with protocol GRPC
DEBUG executor0/rep-0@39272 Exception: <AioRpcError of RPC that terminated with: [07/14/23 08:59:20]
status = StatusCode.DEADLINE_EXCEEDED
details = "Deadline Exceeded"
debug_error_string = "UNKNOWN:Deadline Exceeded {created_time:"2023-07-14T08:59:20.313535613+08:00", grpc_status:4}"
>
DEBUG executor0/rep-0@39272 Server on 0.0.0.0:62917 with protocol GRPC is not yet ready
DEBUG executor0/rep-0@39272 Checking readiness to 0.0.0.0:62917 with protocol GRPC
⠼ Waiting executor0 gateway... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/2 0:00:29DEBUG executor0/rep-0@39476 got an endpoint discovery request [07/14/23 08:59:21]
DEBUG executor0/rep-0@39272 Exception: <AioRpcError of RPC that terminated with: [07/14/23 08:59:21]
status = StatusCode.DEADLINE_EXCEEDED
details = "Deadline Exceeded"
debug_error_string = "UNKNOWN:Deadline Exceeded {created_time:"2023-07-14T08:59:21.421539286+08:00", grpc_status:4}"
>
DEBUG executor0/rep-0@39272 Server on 0.0.0.0:62917 with protocol GRPC is not yet ready
⠼ Waiting executor0 gateway... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/2 0:00:29DEBUG gateway/rep-0@39477 Got all endpoints from TopologyGraph {'/chat', '/chat_manas', '_jina_dry_run_'} [07/14/23 08:59:21]
INFO gateway/rep-0@39477 start server bound to 0.0.0.0:19002
DEBUG gateway/rep-0@39477 server bound to 0.0.0.0:19002 started
DEBUG gateway/rep-0@39477 GRPC server setup successful
DEBUG executor0/rep-0@39272 Checking readiness to 0.0.0.0:62917 with protocol GRPC
DEBUG gateway/rep-0@39272 ready and listening [07/14/23 08:59:21]
DEBUG gateway-replica-set@39272 ReplicaSet started successfully [07/14/23 08:59:21]
DEBUG gateway@39272 Deployment started successfully [07/14/23 08:59:21]
⠦ Waiting executor0 gateway... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/2 0:00:29DEBUG executor0/rep-0@39476 recv _status request
DEBUG gateway/rep-0@39477 completed warmup task in 29.353113174438477s.
DEBUG executor0/rep-0@39272 Exception: <AioRpcError of RPC that terminated with: [07/14/23 08:59:22]
status = StatusCode.DEADLINE_EXCEEDED
details = "Deadline Exceeded"
debug_error_string = "UNKNOWN:Deadline Exceeded {grpc_status:4, created_time:"2023-07-14T08:59:22.529546379+08:00"}"
>
DEBUG executor0/rep-0@39272 Server on 0.0.0.0:62917 with protocol GRPC is not yet ready
DEBUG executor0/rep-0@39272 Checking readiness to 0.0.0.0:62917 with protocol GRPC
DEBUG executor0/rep-0@39272 Exception: <AioRpcError of RPC that terminated with: [07/14/23 08:59:23]
status = StatusCode.DEADLINE_EXCEEDED
details = "Deadline Exceeded"
debug_error_string = "UNKNOWN:Deadline Exceeded {created_time:"2023-07-14T08:59:23.636531418+08:00", grpc_status:4}"
>
DEBUG executor0/rep-0@39272 Server on 0.0.0.0:62917 with protocol GRPC is not yet ready
DEBUG executor0/rep-0@39272 Checking readiness to 0.0.0.0:62917 with protocol GRPC
DEBUG executor0/rep-0@39272 Exception: <AioRpcError of RPC that terminated with: [07/14/23 08:59:25]
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses; last error: UNKNOWN: HTTP proxy returned response code 407"
debug_error_string = "UNKNOWN:Failed to pick subchannel {created_time:"2023-07-14T08:59:25.646676539+08:00", children:[UNKNOWN:failed to connect to all addresses; last error: UNKNOWN: HTTP proxy returned response code 407
{created_time:"2023-07-14T08:59:25.646673591+08:00", grpc_status:14}]}"
>
DEBUG executor0/rep-0@39272 Server on 0.0.0.0:62917 with protocol GRPC is not yet ready
DEBUG executor0/rep-0@39272 Checking readiness to 0.0.0.0:62917 with protocol GRPC
DEBUG executor0/rep-0@39272 Exception: <AioRpcError of RPC that terminated with: [07/14/23 08:59:26]
status = StatusCode.DEADLINE_EXCEEDED
details = "Deadline Exceeded"
debug_error_string = "UNKNOWN:Deadline Exceeded {grpc_status:4, created_time:"2023-07-14T08:59:26.754537615+08:00"}"
>
DEBUG executor0/rep-0@39272 Server on 0.0.0.0:62917 with protocol GRPC is not yet ready
DEBUG executor0/rep-0@39272 Checking readiness to 0.0.0.0:62917 with protocol GRPC
DEBUG executor0/rep-0@39272 Exception: <AioRpcError of RPC that terminated with: [07/14/23 08:59:27]
status = StatusCode.DEADLINE_EXCEEDED
details = "Deadline Exceeded"
debug_error_string = "UNKNOWN:Deadline Exceeded {grpc_status:4, created_time:"2023-07-14T08:59:27.863538749+08:00"}"
>
DEBUG executor0/rep-0@39272 Server on 0.0.0.0:62917 with protocol GRPC is not yet ready
DEBUG executor0/rep-0@39272 Checking readiness to 0.0.0.0:62917 with protocol GRPC
DEBUG executor0/rep-0@39272 Exception: <AioRpcError of RPC that terminated with: [07/14/23 08:59:28]
status = StatusCode.DEADLINE_EXCEEDED
details = "Deadline Exceeded"
debug_error_string = "UNKNOWN:Deadline Exceeded {grpc_status:4, created_time:"2023-07-14T08:59:28.971538289+08:00"}"
>
DEBUG executor0/rep-0@39272 Server on 0.0.0.0:62917 with protocol GRPC is not yet ready
DEBUG executor0/rep-0@39272 Checking readiness to 0.0.0.0:62917 with protocol GRPC [07/14/23 08:59:29]
DEBUG executor0/rep-0@39272 Exception: <AioRpcError of RPC that terminated with: [07/14/23 08:59:30]
status = StatusCode.DEADLINE_EXCEEDED
details = "Deadline Exceeded"
debug_error_string = "UNKNOWN:Deadline Exceeded {created_time:"2023-07-14T08:59:30.078544313+08:00", grpc_status:4}"
>
DEBUG executor0/rep-0@39272 Server on 0.0.0.0:62917 with protocol GRPC is not yet ready
DEBUG executor0/rep-0@39272 Checking readiness to 0.0.0.0:62917 with protocol GRPC
DEBUG executor0/rep-0@39272 Exception: <AioRpcError of RPC that terminated with: [07/14/23 08:59:31]
status = StatusCode.DEADLINE_EXCEEDED
details = "Deadline Exceeded"
debug_error_string = "UNKNOWN:Deadline Exceeded {created_time:"2023-07-14T08:59:31.185541927+08:00", grpc_status:4}"
>
DEBUG executor0/rep-0@39272 Server on 0.0.0.0:62917 with protocol GRPC is not yet ready
DEBUG executor0/rep-0@39272 Checking readiness to 0.0.0.0:62917 with protocol GRPC
DEBUG executor0/rep-0@39272 Exception: <AioRpcError of RPC that terminated with: [07/14/23 08:59:32]
status = StatusCode.DEADLINE_EXCEEDED
details = "Deadline Exceeded"
debug_error_string = "UNKNOWN:Deadline Exceeded {created_time:"2023-07-14T08:59:32.29353679+08:00", grpc_status:4}"
>
DEBUG executor0/rep-0@39272 Server on 0.0.0.0:62917 with protocol GRPC is not yet ready
```
|
closed
|
2023-07-14T01:11:22Z
|
2023-07-19T06:51:13Z
|
https://github.com/jina-ai/serve/issues/5966
|
[] |
wqh17101
| 41
|
psf/requests
|
python
| 6,669
|
Failed to ignore SSL certificate verification when using the `verify=False` option
|
<!-- Summary. -->
Requests does not ignore SSL certificate verification when the `verify=False` option is used.
## Expected Result
Same as curl with `-k`
```bash
$ curl -k https://website
<html>
<head><title>301 Moved Permanently</title></head>
<body>
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.20.1</center>
</body>
</html>
```
or
```python
import httpx
session = httpx.Client(verify=False)
session.get("https://website")
```
## Actual Result
```python
>> import requests
>> requests.get("https://website", verify=False)
Traceback (most recent call last):
File "/home/vscode/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 467, in _make_request
self._validate_conn(conn)
File "/home/vscode/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1092, in _validate_conn
conn.connect()
File "/home/vscode/.local/lib/python3.10/site-packages/urllib3/connection.py", line 635, in connect
sock_and_verified = _ssl_wrap_socket_and_match_hostname(
File "/home/vscode/.local/lib/python3.10/site-packages/urllib3/connection.py", line 776, in _ssl_wrap_socket_and_match_hostname
ssl_sock = ssl_wrap_socket(
File "/home/vscode/.local/lib/python3.10/site-packages/urllib3/util/ssl_.py", line 466, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls, server_hostname)
File "/home/vscode/.local/lib/python3.10/site-packages/urllib3/util/ssl_.py", line 510, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
File "/usr/local/lib/python3.10/ssl.py", line 513, in wrap_socket
return self.sslsocket_class._create(
File "/usr/local/lib/python3.10/ssl.py", line 1071, in _create
self.do_handshake()
File "/usr/local/lib/python3.10/ssl.py", line 1342, in do_handshake
self._sslobj.do_handshake()
ssl.SSLError: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:1007)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/vscode/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 790, in urlopen
response = self._make_request(
File "/home/vscode/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 491, in _make_request
raise new_e
urllib3.exceptions.SSLError: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:1007)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/vscode/.local/lib/python3.10/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
File "/home/vscode/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 844, in urlopen
retries = retries.increment(
File "/home/vscode/.local/lib/python3.10/site-packages/urllib3/util/retry.py", line 515, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='website', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:1007)')))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/vscode/.local/lib/python3.10/site-packages/requests/api.py", line 73, in get
return request("get", url, params=params, **kwargs)
File "/home/vscode/.local/lib/python3.10/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/home/vscode/.local/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/home/vscode/.local/lib/python3.10/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/home/vscode/.local/lib/python3.10/site-packages/requests/adapters.py", line 517, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='website', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:1007)')))
```
The result is similar to curl without `-k`.
```
$ curl -v https://website
* Trying 255.255.255.255:443...
* Connected to website (255.255.255.255) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
* CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS alert, unknown CA (560):
* SSL certificate problem: unable to get local issuer certificate
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
```
## Reproduction Steps
```python
import requests
session = requests.Session()
session.verify = False
session.get("https://website")
```
or
```python
import requests
requests.get("https://website", verify=False)
```
## System Information
Tested on two systems:
$ python -m requests.help
```json
{
"chardet": {
"version": null
},
"charset_normalizer": {
"version": "3.1.0"
},
"cryptography": {
"version": ""
},
"idna": {
"version": "3.4"
},
"implementation": {
"name": "CPython",
"version": "3.10.12"
},
"platform": {
"release": "5.15.146.1-microsoft-standard-WSL2",
"system": "Linux"
},
"pyOpenSSL": {
"openssl_version": "",
"version": null
},
"requests": {
"version": "2.31.0"
},
"system_ssl": {
"version": "1010117f"
},
"urllib3": {
"version": "2.0.3"
},
"using_charset_normalizer": true,
"using_pyopenssl": false
}
```
$ python -m requests.help
```json
{
"chardet": {
"version": "5.2.0"
},
"charset_normalizer": {
"version": "3.3.2"
},
"cryptography": {
"version": "42.0.5"
},
"idna": {
"version": "3.6"
},
"implementation": {
"name": "CPython",
"version": "3.10.12"
},
"platform": {
"release": "6.1.58+",
"system": "Linux"
},
"pyOpenSSL": {
"openssl_version": "30200010",
"version": "24.1.0"
},
"requests": {
"version": "2.31.0"
},
"system_ssl": {
"version": "30000020"
},
"urllib3": {
"version": "2.0.7"
},
"using_charset_normalizer": false,
"using_pyopenssl": true
}
```
Note: URL and IP were masked to `https://website` and `255.255.255.255` respectively
|
closed
|
2024-03-23T19:40:30Z
|
2024-03-24T12:49:47Z
|
https://github.com/psf/requests/issues/6669
|
[] |
urbanogilson
| 5
|
axnsan12/drf-yasg
|
rest-api
| 214
|
What is the best way to provide example responses
|
I have implemented a legacy API requiring custom renderers. What they render bears little resemblance to the serializer. However, the auto schema generated for the JSON response is quite useful. What is the best way to add additional responses?
Self-describing APIs is hard ;)
|
closed
|
2018-09-17T19:15:47Z
|
2020-09-29T04:06:46Z
|
https://github.com/axnsan12/drf-yasg/issues/214
|
[] |
danizen
| 5
|
JaidedAI/EasyOCR
|
machine-learning
| 692
|
ocr not reading word by word
|
Hi, how do I configure EasyOCR to print the text word by word along with the coordinates?
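For what it's worth, `readtext` already returns per-detection results as `(bounding_box, text, confidence)` tuples, so the coordinates can be printed by iterating over them. A sketch using a hypothetical result list in that shape (no EasyOCR call here, just the formatting):

```python
# Hypothetical results in EasyOCR's readtext output shape:
# ([top-left, top-right, bottom-right, bottom-left], text, confidence)
results = [
    ([[10, 10], [80, 10], [80, 30], [10, 30]], "Hello", 0.98),
    ([[90, 10], [150, 10], [150, 30], [90, 30]], "world", 0.95),
]

lines = []
for bbox, text, conf in results:
    x, y = bbox[0]  # top-left corner of the detection
    lines.append(f"{text} @ ({x}, {y}) conf={conf:.2f}")

print("\n".join(lines))
```

In EasyOCR itself, keeping the default `detail=1` preserves the bounding boxes; whether detections come out word-by-word or grouped into lines depends on the image and detector settings.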
|
open
|
2022-03-23T11:40:28Z
|
2022-04-04T10:30:56Z
|
https://github.com/JaidedAI/EasyOCR/issues/692
|
[] |
sreebalaji2418
| 1
|
deepakpadhi986/AI-Resume-Analyzer
|
streamlit
| 3
|
I successfully completed the installation and changed the resume parser file, but it shows the error below; please rectify ASAP
|


|
closed
|
2023-11-25T10:21:07Z
|
2023-11-26T12:53:36Z
|
https://github.com/deepakpadhi986/AI-Resume-Analyzer/issues/3
|
[] |
Sudharsan912
| 1
|
robinhood/faust
|
asyncio
| 179
|
source and sink example
|
Is there an example of how to set up Faust to consume data from Kafka and write it to single or multiple sinks (Elasticsearch, Cassandra, etc.)?
Sample code (I am not sure this setup is the proper way to read from the stream and write into the sink):
import faust

app = faust.App(
    'demo',
    broker='kafka://localhost:9092',
    value_serializer='raw',
)

message_topic = app.topic('demo')

@app.agent(message_topic)
async def mystream(stream):
    async for message in stream:
        await sink.send(value=message.decode('utf-8'))

@app.agent()
async def sink(messages):
    async for msg in messages:
        print(msg)
|
closed
|
2018-10-09T02:31:39Z
|
2019-07-05T06:41:49Z
|
https://github.com/robinhood/faust/issues/179
|
[] |
Madhu1512
| 4
|
JaidedAI/EasyOCR
|
machine-learning
| 1,045
|
easyocr.Reader not working on windows 11 ARM Parallels (M1) Python 3.11.4
|
import cv2
import easyocr
import matplotlib.pyplot as plt
#read image
imagelocation = 'test.jpg'
img = cv2.imread(imagelocation)
#instance text detector
reader = easyocr.Reader(['en'], gpu=False)
#detect text on image
text = reader.readtext(img)
print(text)
The script does not print anything; the program just terminates.
|
open
|
2023-06-09T18:28:33Z
|
2023-12-20T20:16:35Z
|
https://github.com/JaidedAI/EasyOCR/issues/1045
|
[] |
thendotshikota
| 1
|
flaskbb/flaskbb
|
flask
| 97
|
Bulk operations
|
The following operations should support bulk actions:
- [x] Merge Topic
- [x] Move Topic
- [x] Delete Topic/Post
- [x] Lock Topic
- [x] Unlock Topic
|
closed
|
2015-03-10T10:59:28Z
|
2015-07-23T20:08:23Z
|
https://github.com/flaskbb/flaskbb/issues/97
|
[
"enhancement"
] |
sh4nks
| 0
|
unionai-oss/pandera
|
pandas
| 977
|
`pa.check_types` doesn't seem to work with Union of multiple pandera types
|
**Describe the bug**
A clear and concise description of what the bug is.
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest version of pandera.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandera.
**Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug.
#### Code Sample, a copy-pastable example
```python
from typing import Union, List
import pytest
import pandas as pd
import pandera as pa
from pandera.typing import DataFrame, Series
class Model1(pa.SchemaModel):
col1: Series[int]
class Model2(pa.SchemaModel):
col2: Series[str]
@pytest.fixture
def good_dataframe():
return pd.DataFrame({'col1': [1, 2, 3], 'col2': ['four', 'five', 'six']})
@pytest.fixture
def bad_dataframe():
return pd.DataFrame({'col3': ['this', 'should', 'not', 'work']})
@pa.check_types
def my_function(df: Union[DataFrame[Model1], DataFrame[Model2]]) -> List[str]:
return df.columns
def test_good_df(good_dataframe):
cols = my_function(good_dataframe)
assert sorted(cols) == sorted(['col1', 'col2'])
def test_bad_df(bad_dataframe):
with pytest.raises(pa.errors.SchemaError):
my_function(bad_dataframe)
```
#### Expected behavior
Because I am using `pa.check_types` on the `my_function` function above, I was expecting a pa.errors.SchemaError to be raised when I tried to pass in a dataframe to `my_function` that didn't conform to either of the SchemaModels that I've defined as being an either-or input to `my_function`. However, it says that no SchemaError is raised by that function when I pass in `bad_dataframe`.
It seems like `Union` wipes out the ability of `pa.check_types` to work properly? I couldn't find anything in the documentation that suggests this shouldn't be possible, but maybe I missed something?
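For reference, unpacking a `Union` hint at runtime is possible with the stdlib `typing` helpers, so a decorator could in principle validate against each member and fail only if all of them reject. A minimal sketch of that dispatch in plain Python (no pandera involved; `isinstance` stands in for schema validation):

```python
from typing import Union, get_args, get_origin

def candidate_types(hint):
    """Member types of a Union hint, or the hint itself for plain types."""
    return list(get_args(hint)) if get_origin(hint) is Union else [hint]

def validates_any(value, hint):
    """A value passes if it satisfies at least one Union member."""
    return any(isinstance(value, t) for t in candidate_types(hint))

print(validates_any(3, Union[int, str]))    # True
print(validates_any(3.5, Union[int, str]))  # False
```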
#### Desktop (please complete the following information):
- OS: MacOS 12.5.1
- Pandera Version 0.13.3
#### Screenshots
Output when ran `pytest pandera_bug_test.py` with the above code copy-pasted into it.
<img width="915" alt="image" src="https://user-images.githubusercontent.com/15696062/197363319-61921b8a-7b66-4b36-bc64-845d49b6843a.png">
|
closed
|
2022-10-22T21:43:06Z
|
2022-11-11T14:55:26Z
|
https://github.com/unionai-oss/pandera/issues/977
|
[
"bug",
"help wanted"
] |
kr-hansen
| 12
|
quasarstream/python-ffmpeg-video-streaming
|
dash
| 14
|
.mkv fails to transcode
|
**Describe the bug**
Can't convert to HLS.
**Additional context**
```
RuntimeError: ('ffmpeg failed to execute command: ', 'b\'ffmpeg version 3.4.6-0ubuntu0.18.04.1 Copyright (c) 2000-2019 the FFmpeg developers\\n built with gcc 7 (Ubuntu 7.3.0-16ubuntu3)\\n configuration: --prefix=/usr --extra-version=0ubuntu0.18.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared\\n libavutil 55. 78.100 / 55. 78.100\\n libavcodec 57.107.100 / 57.107.100\\n libavformat 57. 83.100 / 57. 83.100\\n libavdevice 57. 10.100 / 57. 10.100\\n libavfilter 6.107.100 / 6.107.100\\n libavresample 3. 7. 0 / 3. 7. 0\\n libswscale 4. 8.100 / 4. 8.100\\n libswresample 2. 9.100 / 2. 9.100\\n libpostproc 54. 7.100 / 54. 
7.100\\nInput #0, matroska,webm, from \\\'./webapp/vid_app/tmp_files/smallfile.mkv\\\':\\n Metadata:\\n ENCODER : Lavf57.83.100\\n Duration: 00:00:02.17, start: 0.000000, bitrate: 2682 kb/s\\n Stream #0:0: Video: h264 (High), yuv420p(progressive), 1280x720, 30 fps, 30 tbr, 1k tbn, 60 tbc (default)\\n Metadata:\\n DURATION : 00:00:02.166000000\\n Stream #0:1: Audio: aac (LC), 44100 Hz, stereo, fltp (default)\\n Metadata:\\n title : simple_aac\\n DURATION : 00:00:02.020000000\\n[NULL @ 0x5639407faec0] [Eval @ 0x7ffd5e7423a0] Undefined constant or missing \\\'(\\\' in \\\'copy\\\'\\n[NULL @ 0x5639407faec0] Unable to parse option value "copy"\\n[NULL @ 0x5639407faec0] Error setting option b to value copy.\\nError setting up codec context options.\\nStream mapping:\\n Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))\\n Stream #0:1 -> #0:1 (copy)\\n Stream #0:0 -> #1:0 (h264 (native) -> h264 (libx264))\\n Stream #0:1 -> #1:1 (copy)\\n Stream #0:0 -> #2:0 (h264 (native) -> h264 (libx264))\\n Stream #0:1 -> #2:1 (copy)\\n Stream #0:0 -> #3:0 (h264 (native) -> h264 (libx264))\\n Stream #0:1 -> #3:1 (copy)\\n Stream #0:0 -> #4:0 (h264 (native) -> h264 (libx264))\\n Stream #0:1 -> #4:1 (copy)\\n Last message repeated 1 times\\n\'')
```
[smallfile.zip](https://github.com/aminyazdanpanah/python-ffmpeg-video-streaming/files/4544692/smallfile.zip)
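The key line in the traceback is `Unable to parse option value "copy"` for option `b`: a bitrate option is being handed the string `copy`, which is a codec choice, not a bitrate. A minimal sketch of the distinction (option names are standard ffmpeg, but the exact command the library builds is an assumption):

```shell
# "copy" is valid for the codec option (-c:a) but not for a bitrate option (-b:a).
GOOD_AUDIO_OPTS="-c:a copy"   # stream-copy the audio track
BAD_AUDIO_OPTS="-b:a copy"    # ffmpeg fails: Unable to parse option value "copy"
echo "ffmpeg -i smallfile.mkv ${GOOD_AUDIO_OPTS} -c:v libx264 out.m3u8"
```

So the library appears to be emitting something like `-b:a copy` (or `-b copy`) when audio copy is requested, instead of `-c:a copy`.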
|
closed
|
2020-04-28T09:37:25Z
|
2020-06-24T18:37:48Z
|
https://github.com/quasarstream/python-ffmpeg-video-streaming/issues/14
|
[
"bug"
] |
ddorian
| 2
|
huggingface/text-generation-inference
|
nlp
| 2,775
|
"RuntimeError: weight lm_head.weight does not exist" quantizing Llama-3.2-11B-Vision-Instruct
|
### System Info
Running official docker image: ghcr.io/huggingface/text-generation-inference:2.4.0
os: Linux 5.15.0-124-generic #134-Ubuntu SMP Fri Sep 27 20:20:17 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
nvidia-smi:
```
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.90.07 Driver Version: 550.90.07 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 3090 On | 00000000:01:00.0 Off | N/A |
| 0% 27C P8 23W / 350W | 2MiB / 24576MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 1 NVIDIA GeForce RTX 3090 On | 00000000:21:00.0 Off | N/A |
| 0% 28C P8 21W / 350W | 2MiB / 24576MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 2 NVIDIA GeForce RTX 3090 On | 00000000:4B:00.0 Off | N/A |
| 0% 28C P8 21W / 350W | 2MiB / 24576MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 3 NVIDIA GeForce RTX 3090 On | 00000000:4C:00.0 Off | N/A |
| 0% 27C P8 19W / 350W | 2MiB / 24576MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
```
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
I'm attempting to quantize the Llama 3.2 Vision model and get the error "RuntimeError: weight lm_head.weight does not exist"
I'm using the following command:
```docker run --gpus all --shm-size 1g -e HF_TOKEN=REDACTED -v $(pwd):/data --entrypoint='' ghcr.io/huggingface/text-generation-inference:2.4.0 text-generation-server quantize meta-llama/Llama-3.2-11B-Vision-Instruct /data/Llama-3.2-11B-Vision-Instruct-GPTQ-INT4```
I have attached the full output.
[tgi_quantize_error.txt](https://github.com/user-attachments/files/17875744/tgi_quantize_error.txt)
### Expected behavior
I would like the quantization process to succeed. I couldn't find any specific reference to whether multi-modal models work with GPTQ quantization or not.
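For context, the shape of the failure can be sketched with a stdlib-only mock (this is an assumption based on the error text, not TGI's actual loader code): vision-language checkpoints like Mllama often store the LM head under a prefix such as `language_model.`, so a loader that looks up the bare name fails.

```python
# Hypothetical slice of a safetensors index for a Mllama-style checkpoint;
# the prefix "language_model." is assumed, not verified against the real repo.
index = {
    "weight_map": {
        "language_model.lm_head.weight": "model-00001.safetensors",
        "language_model.model.embed_tokens.weight": "model-00001.safetensors",
    }
}

def find_weight(name: str, weight_map: dict) -> str:
    # Try the bare name first, then a prefixed variant, before giving up.
    for candidate in (name, f"language_model.{name}"):
        if candidate in weight_map:
            return candidate
    raise RuntimeError(f"weight {name} does not exist")

print(find_weight("lm_head.weight", index["weight_map"]))
```

If the quantize path only ever tries the bare `lm_head.weight`, that would explain the `RuntimeError: weight lm_head.weight does not exist` I'm seeing.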
|
open
|
2024-11-22T20:05:59Z
|
2024-11-22T20:05:59Z
|
https://github.com/huggingface/text-generation-inference/issues/2775
|
[] |
akowalsk
| 0
|
horovod/horovod
|
machine-learning
| 3,149
|
Unit test fails on test-cpu-gloo-py3_8-tfhead-keras_none-torchhead-mxnethead-pyspark3_1_2
|
Recently, many torch-collective-related unit tests have been failing under this configuration.
For example: https://github.com/horovod/horovod/runs/3510886664
```
[1]<stdout>:test_torch.py::TorchTests::test_horovod_allgather_grad FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_allgather_grad_process_sets FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_allgather_process_sets FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_allgather_type_error PASSED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_allgather_variable_size FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_allreduce FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_allreduce_async_fused FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_allreduce_average FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_allreduce_cpu_gpu_error SKIPPED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_allreduce_duplicate_name_error FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_allreduce_error PASSED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_allreduce_grad FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_allreduce_grad_average FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_allreduce_grad_process_sets FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_allreduce_inplace FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_allreduce_multi_gpu SKIPPED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_allreduce_postscale FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_allreduce_prescale FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_allreduce_process_sets FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_allreduce_type_error PASSED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_alltoall FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_alltoall_equal_split FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_alltoall_equal_split_grad FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_alltoall_equal_split_length_error PASSED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_alltoall_grad FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_alltoall_grad_process_sets FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_alltoall_process_sets FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_alltoall_rank_error PASSED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_alltoall_splits_error PASSED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_alltoall_splits_on_gpu SKIPPED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_alltoall_splits_type_error PASSED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_alltoall_type_error PASSED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_broadcast FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_broadcast_duplicate_name_error FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_broadcast_error PASSED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_broadcast_grad FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_broadcast_grad_process_sets FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_broadcast_inplace FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_broadcast_process_sets FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_broadcast_rank_error PASSED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_broadcast_type_error PASSED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_grouped_allreduce FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_grouped_allreduce_average FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_grouped_allreduce_cpu_gpu_error SKIPPED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_grouped_allreduce_grad FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_grouped_allreduce_grad_average FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_grouped_allreduce_grad_process_sets FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_grouped_allreduce_inplace FAILED
[1]<stdout>:test_torch.py::TorchTests::test_horovod_grouped_allreduce_process_sets FAILED
```
CC @tgaddair @EnricoMi
|
closed
|
2021-09-04T04:03:07Z
|
2021-10-05T21:28:51Z
|
https://github.com/horovod/horovod/issues/3149
|
[
"bug"
] |
chongxiaoc
| 1
|
supabase/supabase-py
|
flask
| 663
|
failed to get_session after create_client with access_token as supabase_key
|
**Describe the bug**
``` python
# access_token comes from AuthResponse.session.access_token, obtained by signing in elsewhere
async def get_db(access_token: AccessTokenDep) -> AsyncClient:
client: AsyncClient | None = None
try:
client = await create_client(
settings.SUPABASE_URL,
access_token,
options=ClientOptions(
postgrest_client_timeout=10, storage_client_timeout=10
),
)
session = await client.auth.get_session()
# client.postgrest.auth(token=access_token)
user = await client.auth.get_user()
yield client
except Exception as e:
logging.error(e)
raise HTTPException(status_code=401, detail=e)
finally:
if client:
await client.auth.sign_out()
```
`session = await client.auth.get_session()` returns `None` unless I signed in with a password, etc.
In short, the client should be able to recognize an access token obtained from the front end after sign-in: `create_client` with the access token as `supabase_key` should work.
**To Reproduce**
Just call:
```python
async def create_client(
supabase_url: str,
supabase_key: str,
options: ClientOptions = ClientOptions(),
) -> AsyncClient:
....
return await AsyncClient.create(
supabase_url=supabase_url, supabase_key=supabase_key, options=options
)
```
```python
@classmethod
async def create(
cls,
supabase_url: str,
supabase_key: str,
options: ClientOptions = ClientOptions(),
):
client = cls(supabase_url, supabase_key, options)
client._auth_token = await client._get_token_header()
return client
```
Add a breakpoint at `client._auth_token = await client._get_token_header()`:
you will find that `client._auth_token` is set to `None`, which means the `postgrest` property (`@property def postgrest(self):`) cannot be initialized correctly from the `access_token`.
```python
self._auth_token = {
"Authorization": f"Bearer {supabase_key}",
}
```
**Expected behavior**
```python
async def _get_token_header(self):
try:
session = await self.auth.get_session()
access_token = session.access_token
except Exception as err:
access_token = self.supabase_key
return self._create_auth_header(access_token)
```
`client._auth_token = await client._get_token_header()`
The first call to `get_session()` should return the correct session, just as `client.auth.get_user(jwt)` works.
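The fallback behavior can be illustrated with a stdlib-only mock (names are illustrative, not supabase-py internals): when no session is stored, the raw key, here a user JWT passed as `supabase_key`, ends up in the `Authorization` header, but `auth.get_session()` still has nothing to return.

```python
# Mock of the fallback path: no stored session, so the key is used directly.
class NoSessionAuth:
    def get_session(self):
        raise RuntimeError("no session stored")

def get_token_header(auth, supabase_key):
    try:
        access_token = auth.get_session().access_token
    except Exception:
        access_token = supabase_key  # header works, but the session stays unset
    return {"Authorization": f"Bearer {access_token}"}

header = get_token_header(NoSessionAuth(), "user-jwt")
print(header["Authorization"])  # → Bearer user-jwt
```

So requests carry the user's JWT, yet `get_session()` returns nothing, which matches what I'm observing.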
**Desktop (please complete the following information):**
- OS: win
- Version v 2.3.3
|
closed
|
2024-01-12T15:31:50Z
|
2024-05-22T20:05:25Z
|
https://github.com/supabase/supabase-py/issues/663
|
[
"documentation"
] |
AtticusZeller
| 6
|
automagica/automagica
|
automation
| 57
|
Converting a Word document to PDF Error
|
My system is Windows 10
When testing the "Convert a Word document to PDF" activity, I get this error:
`Not implemented for other platforms than Windows.`
|
closed
|
2019-06-14T16:12:41Z
|
2019-08-13T08:21:15Z
|
https://github.com/automagica/automagica/issues/57
|
[] |
mamacmm
| 3
|
sczhou/CodeFormer
|
pytorch
| 2
|
How can I train in my own dataset?
|
I am happy with the inference results of this excellent work. Please release the training code as soon as possible.
|
closed
|
2022-07-18T08:16:18Z
|
2022-08-21T14:57:12Z
|
https://github.com/sczhou/CodeFormer/issues/2
|
[] |
hongsiyu
| 2
|
remsky/Kokoro-FastAPI
|
fastapi
| 75
|
Open-WebUI - Syntax for passing "speed" parameter?
|
I'm trying to pass the speed parameter in Open-WebUI via the "TTS Voice" field, where the model is selected. I have tried the following syntaxes, and nothing has worked so far:
af_speed_1_5
af_speed_1.5
af_speed-1.5
af_speed=1.5
af, speed=1.5
af, speed_1.5
af, speed-1.5
af+speed-1.5
af+speed=1.5
If this isn't possible, please make it possible : )
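For reference, clients that control the request body can already pass speed, since the OpenAI-compatible `/v1/audio/speech` schema carries it as a JSON field. A hedged sketch of such a payload (the model name and exact fields follow the OpenAI TTS schema and are assumptions about this server):

```python
import json

# Hypothetical request body for Kokoro-FastAPI's OpenAI-compatible TTS endpoint.
payload = {
    "model": "kokoro",      # assumed model identifier
    "voice": "af",
    "input": "Hello there",
    "speed": 1.5,           # playback speed multiplier
}
print(json.dumps(payload))
```

The WebUI voice field alone can't express this, which is why an encoding like `af+speed-1.5` would be needed on the server side.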
|
closed
|
2025-01-18T10:51:47Z
|
2025-01-22T19:36:29Z
|
https://github.com/remsky/Kokoro-FastAPI/issues/75
|
[] |
TheElo
| 4
|