| qid (int64, 46k-74.7M) | question (string, 54-37.8k chars) | date (string, 10 chars) | metadata (list, 3 items) | response_j (string, 29-22k chars) | response_k (string, 26-13.4k chars) | __index_level_0__ (int64, 0-17.8k) |
|---|---|---|---|---|---|---|
53,185,119
|
I want to retrieve the list of resources currently in use, region-wise, using a Python script and the boto3 library.
For example, the script has to give me output as follows:
Region : us-west-2
service: EC2
//resource list//instance ids //Name
service: VPC
//resource list//VPC ids//Name
|
2018/11/07
|
[
"https://Stackoverflow.com/questions/53185119",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4108278/"
] |
There's no easy way to do it, but you can achieve this with a few describe calls.
First, enumerate the regions that you use:
```
for regionname in ["us-east-1", "eu-west-1"]:
```
Or if you want to check all:
```
ec2client = boto3.client('ec2')
regionresponse = ec2client.describe_regions()
for region in regionresponse["Regions"]:
    regionname = region["RegionName"]
```
Then, for each region in the iteration, you need to create a new client for that region's endpoint and call describe\_instances:
```
ec2client = boto3.client('ec2', region_name=regionname)
instanceresponse = ec2client.describe_instances()
for reservation in instanceresponse["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"])
```
Do the same kind of describe call for each resource type you want; a VPC sketch follows.
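For instance, a minimal sketch of the same pattern applied to VPCs, reusing the per-region client from above (the `Name` value is read from the optional `Tags` list, which boto3 omits when a VPC has no tags):
```
ec2client = boto3.client('ec2', region_name=regionname)
vpcresponse = ec2client.describe_vpcs()
for vpc in vpcresponse["Vpcs"]:
    # a VPC's Name, if any, is stored as a tag with Key == "Name"
    name = next((t["Value"] for t in vpc.get("Tags", []) if t["Key"] == "Name"), "")
    print(vpc["VpcId"], name)
```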
|
There is no way to obtain a list of all resources used. You would need to write it yourself.
Alternatively, there are third-party companies offering services that will do this for you (eg [Hava](https://www.hava.io/)).
| 9,145
|
32,714,656
|
This is a recurring question and I've read many topics; some helped a bit ([python Qt: main widget scroll bar](https://stackoverflow.com/questions/2130446/python-qt-main-widget-scroll-bar), [PyQt: Put scrollbars in this](https://stackoverflow.com/questions/14159337/pyqt-put-scrollbars-in-this)), some not at all ([PyQt adding a scrollbar to my main window](https://stackoverflow.com/questions/26745849/pyqt-adding-a-scrollbar-to-my-main-window)). I still have a problem with the scrollbars: they're not usable, they're 'grey'.
Here is my code (I'm using PyQt5):
```
def setupUi(self, Interface):
    Interface.setObjectName("Interface")
    Interface.resize(1152, 1009)
    sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Fixed, QtWidgets.QSizePolicy.Fixed)
    sizePolicy.setHorizontalStretch(0)
    sizePolicy.setVerticalStretch(0)
    sizePolicy.setHeightForWidth(Interface.sizePolicy().hasHeightForWidth())
    Interface.setSizePolicy(sizePolicy)
    Interface.setMouseTracking(False)
    icon = QtGui.QIcon()
    self.centralWidget = QtWidgets.QWidget(Interface)
    self.centralWidget.setObjectName("centralWidget")
    self.scrollArea = QtWidgets.QScrollArea(self.centralWidget)
    self.scrollArea.setGeometry(QtCore.QRect(0, 0, 1131, 951))
    self.scrollArea.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOn)
    self.scrollArea.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOn)
    self.scrollArea.setWidgetResizable(True)
    self.scrollArea.setObjectName("scrollArea")
    self.scrollArea.setEnabled(True)
    self.scrollAreaWidgetContents = QtWidgets.QWidget()
    self.scrollAreaWidgetContents.setGeometry(QtCore.QRect(0, 0, 1112, 932))
    self.scrollAreaWidgetContents.setObjectName("scrollAreaWidgetContents")
    self.horizontalLayout = QtWidgets.QHBoxLayout(self.scrollAreaWidgetContents)
    self.horizontalLayout.setObjectName("horizontalLayout")
```
So I would like to put the scrollbars on the main widget, so that if the user resizes the main window, the scrollbars appear and let the user move up and down (and right and left) to see the child widgets that fall outside the smaller window.
Help appreciated!
|
2015/09/22
|
[
"https://Stackoverflow.com/questions/32714656",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4862605/"
] |
There are several things wrong with the example code. The main problems are that you are not using layouts properly, and the content widget is not being added to the scroll-area.
Below is a fixed version (the commented lines are all junk, and can be removed):
```
def setupUi(self, Interface):
    # Interface.setObjectName("Interface")
    # Interface.resize(1152, 1009)
    # sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Fixed, QtWidgets.QSizePolicy.Fixed)
    # sizePolicy.setHorizontalStretch(0)
    # sizePolicy.setVerticalStretch(0)
    # sizePolicy.setHeightForWidth(Interface.sizePolicy().hasHeightForWidth())
    # Interface.setSizePolicy(sizePolicy)
    # Interface.setMouseTracking(False)
    # icon = QtGui.QIcon()
    self.centralWidget = QtWidgets.QWidget(Interface)
    # self.centralWidget.setObjectName("centralWidget")
    layout = QtWidgets.QVBoxLayout(self.centralWidget)
    self.scrollArea = QtWidgets.QScrollArea(self.centralWidget)
    # self.scrollArea.setGeometry(QtCore.QRect(0, 0, 1131, 951))
    # self.scrollArea.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOn)
    # self.scrollArea.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOn)
    # self.scrollArea.setWidgetResizable(True)
    # self.scrollArea.setObjectName("scrollArea")
    # self.scrollArea.setEnabled(True)
    layout.addWidget(self.scrollArea)
    self.scrollAreaWidgetContents = QtWidgets.QWidget()
    self.scrollAreaWidgetContents.setGeometry(QtCore.QRect(0, 0, 1112, 932))
    # self.scrollAreaWidgetContents.setObjectName("scrollAreaWidgetContents")
    self.scrollArea.setWidget(self.scrollAreaWidgetContents)
    layout = QtWidgets.QHBoxLayout(self.scrollAreaWidgetContents)
    # self.horizontalLayout.setObjectName("horizontalLayout")
    # add child widgets to this layout...
    Interface.setCentralWidget(self.centralWidget)
```
|
The scrollbars are grayed out because you made them always visible by setting the scrollbar policy to `Qt.ScrollBarAlwaysOn`, but there is actually no content to be scrolled, so they are disabled. If you want scrollbars to appear only when they are needed, use `Qt.ScrollBarAsNeeded`.
There is no content to be scrolled because there is only one widget in the `QHBoxLayout` (see `self.scrollAreaWidgetContents`). Also, if this method is being executed from a `QMainWindow`, you have an error when setting the central widget: `self.centralWidget` is a method that retrieves the central widget. It only appears to work because you are overwriting it with a `QWidget` instance (which Python allows). To correctly set the central widget you need to use `setCentralWidget()` on the `QMainWindow`.
```
def setupUi(self, Interface):
    Interface.setObjectName("Interface")
    Interface.resize(1152, 1009)
    sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Fixed, QtWidgets.QSizePolicy.Fixed)
    sizePolicy.setHorizontalStretch(0)
    sizePolicy.setVerticalStretch(0)
    sizePolicy.setHeightForWidth(Interface.sizePolicy().hasHeightForWidth())
    Interface.setSizePolicy(sizePolicy)
    Interface.setMouseTracking(False)
    icon = QtGui.QIcon()
    self.horizontalLayout = QtWidgets.QHBoxLayout()
    self.horizontalLayout.setObjectName("horizontalLayout")
    self.scrollArea = QtWidgets.QScrollArea()
    self.scrollArea.setGeometry(QtCore.QRect(0, 0, 1131, 951))
    self.scrollArea.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAsNeeded)
    self.scrollArea.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAsNeeded)
    self.scrollArea.setWidgetResizable(True)
    self.scrollArea.setObjectName("scrollArea")
    self.scrollArea.setEnabled(True)
    self.horizontalLayout.addWidget(self.scrollArea)
    centralWidget = QtWidgets.QWidget()
    centralWidget.setObjectName("centralWidget")
    centralWidget.setLayout(self.horizontalLayout)
    self.setCentralWidget(centralWidget)
```
I left `Interface` out since I don't know what it is, but the rest should be ok.
| 9,148
|
52,456,516
|
I am doing a list comprehension in Python 3 to build a list of lists of serial numbers according to a given range. My code works fine, but with a big range it slows down and takes a lot of time. Is there any way to do it differently? I don't want to use numpy.
```
global xy
xy = 0
a = 3
def func(x):
    global xy
    xy += 1
    return xy
my_list = [[func(x) for x in range(a)] for x in range(a)]
xy = 0
print(my_list)
```
|
2018/09/22
|
[
"https://Stackoverflow.com/questions/52456516",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4217784/"
] |
In exactly the form you asked:
```
[list(range(x, x + a)) for x in range(1, a**2 + 1, a)]
```
**Optimizations**
If you're only iterating over inner list elements
```
(range(x, x + a) for x in range(1, a**2 + 1, a))
```
If you're only indexing inner elements (for indices `inner` and `outer`)
```
range(1, a**2 + 1)[inner * a + outer]
```
And if you're doing both
```
[range(x, x + a) for x in range(1, a**2 + 1, a)]
```
Note: I think you can eke a little more performance out of these by combining them with mousetail's answer, as sketched below.
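For example, a minimal sketch of that combination, pairing mousetail's index arithmetic with lazy `range` objects instead of lists (assuming `a` is the side length, as above):
```
a = 3
# lazy rows built with mousetail's arithmetic: no inner lists are materialized
t = [range(a*i + 1, a*(i+1) + 1) for i in range(a)]
```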
|
Try:
```
t = [list(range(a*i + 1, a*(i+1) + 1)) for i in range(a)]
```
It seems pretty fast even for a > 100, though not fast enough for use inside a graphics loop or event handler.
| 9,149
|
18,464,237
|
I have been making changes and uploading them for testing on the AppEngine Python 2.7 runtime.
When uploading I only get as far as seeing the message "Getting current resource limits". The next expected message is "Scanning files on local disk", but this never comes, I always get an error instead.
My last successful deploy was at 11:05 AM (UK time).
My next attempt to deploy was at 11:09 AM and this failed with a 503 error
```
ERROR __init__.py:1294 An error occurred processing file '': HTTP Error 503: Service Unavailable. Aborting.
```
Ever since 11:09 I've been getting HTTP 503 errors. I have also had one HTTP 500 error.
I normally use the command line and have tried this multiple times and have also tried using the GUI "Google AppEngine Launcher" too. It was when using the GUI that I got the 500 error, using the command line always gives me 503.
```
ERROR __init__.py:1294 An error occurred processing file '': HTTP Error 500: Internal Server Error. Aborting.
```
I have tried getting more information to be reported by using the --verbose and --noisy options but they don't give any further information. The command line I am using is:
```
python appcfg.py --email=*my_email* update "*my_path*" -A *alternate_appID* -V *alternateVersion*
```
This command was working at 11:05, but 4 minutes later it did not. In those 4 minutes I only changed a single line of code (verified using git diff), and I have tried rolling that change back so that the code being deployed is the same as the code that I know already deployed fine.
Am I forgetting something obvious or doing something stupid?
Is this happening to anyone else?
Has this previously happened to anyone else? If so how did you resolve it?
|
2013/08/27
|
[
"https://Stackoverflow.com/questions/18464237",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/498463/"
] |
I encountered the same problem. Fortunately, I was able to deploy the project successfully.
Just stop the **appengine** from running your project and try to deploy it again.
Hope this helps.
|
You wouldn't believe it... I tried deploying again just before posting this question. Everything the same, and it works now.
12:30 UK time... so just a bit of an AppEngine issue? Did anyone else run into this?
Can anyone explain what was going on? I can't afford to spend 90 minutes messing about every time I want to deploy my app...
| 9,152
|
61,770,551
|
I have a Django project running on my local machine with dev server `manage.py runserver` and I'm trying to run it with Uvicorn before I deploy it in a virtual machine. So in my virtual environment I installed `uvicorn` and started the server, but as you can see below it fails to find Django static css files.
```
(envdev) user@lenovo:~/python/myproject$ uvicorn myproject.asgi:application --port 8001
Started server process [17426]
Waiting for application startup.
ASGI 'lifespan' protocol appears unsupported.
Application startup complete.
Uvicorn running on http://127.0.0.1:8001 (Press CTRL+C to quit)
INFO: 127.0.0.1:45720 - "GET /admin/ HTTP/1.1" 200 OK
Not Found: /static/admin/css/base.css
Not Found: /static/admin/css/base.css
INFO: 127.0.0.1:45720 - "GET /static/admin/css/base.css HTTP/1.1" 404 Not Found
Not Found: /static/admin/css/dashboard.css
Not Found: /static/admin/css/dashboard.css
INFO: 127.0.0.1:45724 - "GET /static/admin/css/dashboard.css HTTP/1.1" 404 Not Found
Not Found: /static/admin/css/responsive.css
Not Found: /static/admin/css/responsive.css
INFO: 127.0.0.1:45726 - "GET /static/admin/css/responsive.css HTTP/1.1" 404 Not Found
```
Uvicorn has an option `--root-path` so I tried to specify the directory where these files are located but there is still the same error (path is correct). How can I solve this issue?
|
2020/05/13
|
[
"https://Stackoverflow.com/questions/61770551",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3423825/"
] |
When not running with the built-in development server, you'll need to either
* use [whitenoise](http://whitenoise.evans.io/en/stable/), which does this as a Django/WSGI middleware (my recommendation; see the sketch after this list)
* use [the classic staticfile deployment procedure which collects all static files into some root](https://docs.djangoproject.com/en/3.0/howto/static-files/#deployment) and a static file server is expected to serve them. Uvicorn doesn't seem to support static file serving, so you might need something else too (see e.g. <https://www.uvicorn.org/deployment/#running-behind-nginx>).
* (very, very unpreferably!) [have Django serve static files like it does in dev](https://docs.djangoproject.com/en/3.0/howto/static-files/#serving-static-files-during-development)
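A minimal sketch of the whitenoise option, assuming a stock Django `settings.py` (the `STATIC_ROOT` location is just an example):
```
# settings.py -- a sketch, not a drop-in file
import os

MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    # whitenoise should sit directly after SecurityMiddleware
    "whitenoise.middleware.WhiteNoiseMiddleware",
    # ...the rest of your middleware...
]

STATIC_URL = "/static/"
STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles")  # example location
```
After that, run `python manage.py collectstatic` once; whitenoise then serves the collected files from the same process Uvicorn runs.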
|
Add the code below to your settings.py file:
```
STATIC_ROOT = os.path.join(BASE_DIR, 'static', )
```
Add the code below to your urls.py:
```
from django.conf.urls.static import static
from django.conf import settings
urlpatterns = [
    # ... your url patterns ...
] + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
```
Then run the command below, but the static directory must exist:
```
python manage.py collectstatic --noinput
```
Start the server:
```
uvicorn main.asgi:application --host 0.0.0.0
```
| 9,153
|
50,396,802
|
I want to check if an arg was actually passed on the command line when there is a default value for that arg.
Specifically in my case, I am using SCons, and SCons has a class which inherits from Python's optparse. So my code is like this so far:
```
import SCons
from SCons.Environment import Environment
from SCons.Script.SConsOptions import Parser
MAIN_ENV = Environment()
argparser = Parser(MAIN_ENV._get_major_minor_revision(SCons.__version__))
print(argparser.parse_args())
```
This prints all the args with their values, but I can't tell if one of the args was set or just has the default value. In this case I am looking at SCons' 'num\_jobs' option, which defaults to 1. I would like to check if the user supplied a num\_jobs value and use that if so, or otherwise just set num\_jobs to the number of CPUs reported by the system.
I can use sys.argv like this, but would prefer a cleaner option using the option parser:
```
###################################################
# Determine number of Jobs
# start by assuming num_jobs was not set
NUM_JOBS_SET = False
if GetOption("num_jobs") == 1:
    # if num_jobs is the default we need to check sys.argv
    # to see if the user happened to set the default
    for arg in sys.argv:
        if arg.startswith("-j") or arg.startswith("--jobs"):
            if arg == "-j" or arg == "--jobs":
                if(int(sys.argv[sys.argv.index(arg)+1]) == 1):
                    NUM_JOBS_SET = True
            else:
                if arg.startswith("-j"):
                    if(int(arg[2:]) == 1):
                        NUM_JOBS_SET = True
else:
    # user must have set something if it wasn't default
    NUM_JOBS_SET = True
# num_jobs wasn't specified so let's use the
# max number since the user doesn't seem to care
if not NUM_JOBS_SET:
    NUM_CPUS = get_num_cpus()
    print("Building with " + str(NUM_CPUS) + " parallel jobs")
    MAIN_ENV.SetOption("num_jobs", NUM_CPUS)
else:
    # user wants a certain number of jobs so do that
    print("Building with " + str(GetOption('num_jobs')) + " parallel jobs")
I tried using Python's OptionParser, but if I call parse\_args() on an OptionParser from inside a SCons script, SCons' own parser doesn't seem to work; it fails to recognize valid options.
If someone has an example of how to check whether the arg was passed with just Python's optparse, that should be sufficient for me to work into the SCons option parser; see the sketch below.
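A minimal sketch of one common optparse technique, using `None` as a sentinel default so that a value still equal to `None` means the option was never passed (the fallback value is a placeholder for a CPU-count lookup):
```
from optparse import OptionParser

parser = OptionParser()
# default=None is the sentinel: optparse only replaces it
# when the user actually passes -j/--jobs
parser.add_option("-j", "--jobs", dest="num_jobs", type="int", default=None)
options, args = parser.parse_args()

if options.num_jobs is None:
    num_jobs = 4  # placeholder: substitute the CPU count here
else:
    num_jobs = options.num_jobs
```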
|
2018/05/17
|
[
"https://Stackoverflow.com/questions/50396802",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1644736/"
] |
Figured it out.
I didn't add my redirect uri to the list of authorized ones, as the option doesn't appear if you set your app type to "Other". I set it to "Web Application" (even though it isn't) and added my redirect uri, and that fixed it.
|
Your code snippet lists "<https://googleapis.com/oauth/v4/token>".
The token endpoint is "<https://googleapis.com/oauth2/v4/token>".
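For illustration, a minimal sketch of a standard authorization-code exchange against that endpoint (all values are placeholders, and the parameter names follow the generic OAuth2 spec rather than any particular client library):
```
import requests

# placeholder values for illustration only
CLIENT_ID = "your-client-id"
CLIENT_SECRET = "your-client-secret"
REDIRECT_URI = "https://your.app/callback"
AUTH_CODE = "code-from-the-consent-redirect"

resp = requests.post(
    "https://googleapis.com/oauth2/v4/token",
    data={
        "code": AUTH_CODE,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "redirect_uri": REDIRECT_URI,
        "grant_type": "authorization_code",
    },
)
print(resp.json())  # access_token, refresh_token, etc. on success
```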
| 9,154
|
21,678,165
|
It's still not a year that I've been coding in Python in my spare time, and it is my first programming language. I need to generate a series of numbers in a range ("1, 2, 3...99; 1, 2, 3...99;") and match them against a list. I managed to do this, but the code looks pathetic and I failed at some tasks, for example skipping series with duplicated/non-unique numbers in an elegant way, or creating one single function that takes the length of the series as a parameter (for example 3 for 1-99, 1-99 and 1-99; 2 for 1-99 and 1-99; and so on) to avoid handwriting each series' function.
After exploring numpy's multidimensional range ndindex (slow in my tests), creating pre-filled lists (too huge), using set to uniquify (slow), trying all() for the "if n in y..." check, and many other things, I have a very basic piece of code which is also the fastest so far. After many changes I'm simply back at the start; I moved the "if n != n" checks to the beginning of each for cycle in order to save loops, and now have no more ideas on how to improve the function, or how to transform it into a master function which generates n series of numbers. Any suggestion is really appreciated!
```
y = [#numbers]
def four(a,b):
    for i in range(a,b):
        for ii in range(a,b):
            if i != ii:
                for iii in range(a,b):
                    if i != iii and ii != iii:
                        for iiii in range(a,b):
                            if i != iiii and ii != iiii and iii != iiii:
                                if i in y and ii in y and iii in y and iiii in y:#exact match
                                    #do something
four(1,100)
```
|
2014/02/10
|
[
"https://Stackoverflow.com/questions/21678165",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1348293/"
] |
Thanks to user2357112's suggestion of using itertools.permutations(xrange(a, b), 4), I did the following and I'm very satisfied; it is fast and nice:
```
import itertools

def solution(d, a, q):
    for i in itertools.permutations(xrange(d, a), q):
        pass  #do something
solution(1,100,4)
```
Thanks to this brilliant community as well.
|
Create an array of 100 elements and fill it with the numbers from 0 to 99. Then use a random generator to mix them, and then just take the needed number of numbers; a sketch follows.
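A minimal sketch of that idea in Python 3 syntax (picking 4 numbers is an arbitrary example):
```
import random

numbers = list(range(100))  # array filled with 0..99
random.shuffle(numbers)     # mix them with the random generator
picked = numbers[:4]        # take the needed number of (unique) numbers
```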
| 9,159
|
50,595,357
|
I have a question about while in Python.
How do I collect the result values using while?
```
ColumnCount_int = 3
while ColumnCount_int > 0 :
    ColumnCount_text = str('<colspec colnum="'+ str(ColumnCount_int) +'"' ' ' 'colname="'+ str(ColumnCount_int) + '">')
    Blank_text = ""
    Blank_text = Blank_text + ColumnCount_text
    ColumnCount_int = ColumnCount_int - 1
    print(Blank_text)
```
The result shows as below:
```
<colspec colnum="3" colname="3">
<colspec colnum="2" colname="2">
<colspec colnum="1" colname="1">
```
but I want to collect all the results like below:
```
<colspec colnum="3" colname="3"><colspec colnum="2" colname="2"><colspec colnum="1" colname="1">
```
Would you tell me which part is wrong?
|
2018/05/30
|
[
"https://Stackoverflow.com/questions/50595357",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9817554/"
] |
You can fix the code as follows: `Blank_text = ""` is moved before the `while` loop, and `print(Blank_text)` is called after the loop.
(**Note**: *since `Blank_text` accumulates, the variable name is changed to `accumulated_text`, as suggested in the comment*):
```
ColumnCount_int = 3
accumulated_text = "" # variable name changed, used instead of Blank_text
while ColumnCount_int > 0 :
    ColumnCount_text = str('<colspec colnum="'+ str(ColumnCount_int) +'"' ' ' 'colname="'+ str(ColumnCount_int) + '">')
    accumulated_text = accumulated_text + ColumnCount_text
    ColumnCount_int = ColumnCount_int - 1
print(accumulated_text)
```
Result:
```
<colspec colnum="3" colname="3"><colspec colnum="2" colname="2"><colspec colnum="1" colname="1">
```
Update:
=======
However, the same result can be produced in a slightly more compact way with `.join`:
```
result = ''.join('<colspec colnum="{0}" colname="{1}">'.format(i,i) for i in range(3,0,-1))
print(result)
```
|
Try appending it to a new list `l` that I created, then do [`''.join(l)`](https://www.tutorialspoint.com/python3/string_join.htm) to output it on one line:
```
l = []
ColumnCount_int = 3
while ColumnCount_int > 0 :
    ColumnCount_text = str('<colspec colnum="'+ str(ColumnCount_int) +'"' ' ' 'colname="'+ str(ColumnCount_int) + '">')
    Blank_text = ColumnCount_text
    ColumnCount_int = ColumnCount_int - 1
    l.append(Blank_text)
print(''.join(l))
```
Output:
```
<colspec colnum="3" colname="3"><colspec colnum="2" colname="2"><colspec colnum="1" colname="1">
```
Shorter Way
===========
Also try this:
```
l = []
ColumnCount_int = 3
while ColumnCount_int > 0 :
    l.append(str('<colspec colnum="'+str(ColumnCount_int)+'"'' ''colname="'+str(ColumnCount_int)+'">'))
    ColumnCount_int-=1
print(''.join(l))
```
Output:
```
<colspec colnum="3" colname="3"><colspec colnum="2" colname="2"><colspec colnum="1" colname="1">
```
| 9,160
|
61,027,226
|
I have already translated a number of similar templates in this project in the same way, in the same directory; they are nearly identical to this one. But this template leaves me helpless.
Without a translation tag `{% blocktrans %}` it works properly and renders the variable.
[](https://i.stack.imgur.com/IBtrs.jpg)
```
c_filter_size.html
{% load i18n %}
{% if ffilter %}
<div class="badge badge-success text-wrap" style="width: 12rem;"">{% trans "Filter sizing check" %}</div>
<h6><small><p class="p-1 mb-2 bg-info text-white">{% trans "The filter sizing is successfully performed." %}
</p></small></h6>
{% if ffilter1 and ffilter.wfsubtype != ffilter1.wfsubtype %}
<div class="badge badge-success text-wrap" style="width: 12rem;"">{% trans "Filter sizing check" %}</div>
<h6><small><p class="p-1 mb-2 bg-info text-white">
If you insist on the fineness, but allow
to reduce flow rate up to {{ffilter1.flowrate}} m3/hr the filter size and therefore filter
price can be reduced.
</p></small></h6>
{% endif %}
```
With a translation tag `{% blocktrans %}` it works neither in English nor in the translated language for the rendered variable. Other similar templates work smoothly.
[](https://i.stack.imgur.com/MrENi.jpg)
```
c_filter_size.html
{% load i18n %}
{% if ffilter %}
<div class="badge badge-success text-wrap" style="width: 12rem;"">{% trans "Filter sizing check" %}</div>
<h6><small><p class="p-1 mb-2 bg-info text-white">{% trans "The filter sizing is successfully performed." %}
</p></small></h6>
{% if ffilter1 and ffilter.wfsubtype != ffilter1.wfsubtype %}
<div class="badge badge-success text-wrap" style="width: 12rem;"">{% trans "Filter sizing check" %}</div>
<h6><small><p class="p-1 mb-2 bg-info text-white">
{% blocktrans %}
If you insist on the fineness, but allow
to reduce flow rate up to {{ffilter1.flowrate}} m3/hr the filter size and therefore filter
price can be reduced.
{% endblocktrans %}
</p></small></h6>
{% endif %}
```
[](https://i.stack.imgur.com/cknNR.jpg)
```
django.po
...
#: rsf/templates/rsf/comments/c_filter_size.html:11
#, python-format
msgid ""
"\n"
" If you insist on the fineness, but allow\n"
" to reduce flow rate up to <b>%(ffilter1.flowrate)s</b> m3/hr the "
"filter size and therefore filter\n"
" price can be reduced.\n"
" "
msgstr ""
"\n"
" Если тонкость фильтрации изменить невозможно, но возможно уменьшить "
"расход до <b>%(ffilter1.flowrate)s</b> м3/час, то "
"размер фильтра и соответственно его цена могут быть уменьшены."
...
```
Thank you
|
2020/04/04
|
[
"https://Stackoverflow.com/questions/61027226",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13099284/"
] |
You can't access variable attributes inside `blocktrans`.
Instead of using `{{ffilter1.flowrate}}` inside `blocktrans`, you should use the keyword `with`:
```
{% blocktrans with flowrate=ffilter1.flowrate %}
If you insist on the fineness, but allow to reduce
flow rate up to {{ flowrate }} m3/hr the filter size and
therefore filter price can be reduced.
{% endblocktrans %}
```
Also, to avoid having indents inside your translation, use the keyword `trimmed`:
```
{% blocktrans with flowrate=ffilter1.flowrate trimmed %}
```
Source: <https://docs.djangoproject.com/en/3.0/topics/i18n/translation/#blocktrans-template-tag>
|
Maybe you could break up your trans blocks into two sections.
```
{% blocktrans %}
If you insist on the fineness, but allow
to reduce flow rate up to {% endblocktrans %}
{{ffilter1.flowrate}}
{% blocktrans %} m3/hr the filter size and therefore filter
price can be reduced.
{% endblocktrans %}
```
It doesn't look the best, but I think it would be impossible to put a variable inside the trans block.
On a different note, I noticed you have an error in your inline styles: you have an extra " mark in the following line.
`<div class="badge badge-success text-wrap" style="width: 12rem;"">`
| 9,161
|
48,376,714
|
I am using Python 3.5, Anaconda 4.2 and Ubuntu 16.04. I get an error in the `train.py` file (`from object_detection import trainer`: `no module named object_detection`). But I think that I have a problem in Python 3.5. Can anyone help me with this error?
[](https://i.stack.imgur.com/wYUKq.png)
|
2018/01/22
|
[
"https://Stackoverflow.com/questions/48376714",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9250190/"
] |
It happened to me. Just copy the "object\_detection" folder from the "models" folder into the folder where you are running train.py. I posted the link to the folder from GitHub, but you'd better copy the folder from your local files so it matches your code perfectly, in case you are using an older version of the object detection API.
There are more professional ways to solve the problem, I think, but I just used the easiest one.
Link to object\_detection folder from tensorflow github: <https://github.com/tensorflow/models/tree/master/research/object_detection>
|
Move the object\_detection folder to the upper folder (a path-based alternative is sketched after the command):
cp /models/research/object\_detection object\_detection
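As an alternative sketch (an assumption on my part, not from either answer): instead of copying the folder, put the research directory on the import path at the top of train.py; the paths below are examples and must point at your own checkout:
```
import os
import sys

# example paths -- adjust to wherever models/research lives on your machine
sys.path.append(os.path.abspath("models/research"))
sys.path.append(os.path.abspath("models/research/slim"))

from object_detection import trainer  # should now resolve
```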
| 9,162
|
24,469,353
|
New to Python, so...
I have a list with two columns, like so:
```
>>>print langs
[{u'code': u'en', u'name': u'ENGLISH'}, {u'code': u'hy', u'name': u'ARMENIAN'}, ... {u'code': u'ms', u'name': u'MALAY'}]
```
I would like to add another row with:
code: xx and name: UNKNOWN
I tried `langs.append` and so on, but can't get the hang of it.
|
2014/06/28
|
[
"https://Stackoverflow.com/questions/24469353",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1662464/"
] |
It's pretty easy:
```
>>> langs.append({u'code': u'xx', u'name': u'UNKNOWN'})
```
But I'd use `collections.namedtuple` for this kind of job (when columns are well-defined):
```
In [1]: from collections import namedtuple
In [2]: Lang = namedtuple("Lang", ("code", "name"))
In [3]: langs = []
In [4]: langs.append(Lang("xx", "unknown"))
In [5]: langs[0]
Out[5]: Lang(code='xx', name='unknown')
In [6]: langs[0].code
Out[6]: 'xx'
In [7]: langs[0].name
Out[7]: 'unknown'
```
|
This is one way of doing it...
```
langs += [{u'code': u'xx', u'name': u'UNKNOWN'}]
```
| 9,163
|
74,006,179
|
I'm new to Python and I have a problem with dictionary filtering.
I searched a really long time for a solution and asked on several Discord servers, but no one could really help me.
If I have a list of dictionaries like this:
```
[
{"champion": "ahri", "kills": 12, "assists": 7, "deaths": 4, "puuid": "17hd72he7wu"}
{"champion": "sett", "kills": 14, "assists": 5, "deaths": 7, "puuid": "2123r3ze7wu"}
{"champion": "thresh", "kills": 9, "assists": 16, "deaths": 2, "puuid": "32d72h5t5gu"}
]
```
How do I filter out only one certain entry by its puuid (value)? So it looks like this:
puuid = **"32d72h5t5gu"**
```
[
{"champion": "thresh", "kills": 9, "assists": 16, "deaths": 2, "puuid": "32d72h5t5gu"}
]
```
with all other parts of the list **removed**.
|
2022/10/09
|
[
"https://Stackoverflow.com/questions/74006179",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20198636/"
] |
Use a list comprehension and cycle through the dictionaries in your list to keep only the one that meets the specified condition:
```
[
{"champion": ahri, "kills": 12, "assists": 7, "deaths": 4, "puuid": 17hd72he7wu}
{"champion": sett, "kills": 14, "assists": 5, "deaths": 7, "puuid": 2123r3ze7wu}
{"champion": thresh, "kills": 9, "assists": 16, "deaths": 2, "puuid": 32d72h5t5gu}
]
```
```
newlist = [i for i in oldlist if (i['puuid'] == '32d72h5t5gu')]
```
|
You want something like this:
`new_list = [x for x in original_list if x['puuid'] == value]`
It's called a list comprehension.
| 9,164
|
73,323,330
|
So I'm learning some Python, but I got a list index out of range error. At this point, if I put my header index at 0 it works (not the way I expected, but OK), but if I use a higher index for the header it won't work. Can you help me out with why? [photo of my csv file](https://i.stack.imgur.com/cDD13.png)
[Here's the first picture of my code][and 2nd pic](https://i.stack.imgur.com/Mi0AC.png)[3](https://i.stack.imgur.com/UrhK8.png)
|
2022/08/11
|
[
"https://Stackoverflow.com/questions/73323330",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19744452/"
] |
When you put your header index as `0`, what is the output?
Assuming your delimiter is `tab`:
```py
with open("sample.csv", "r") as csvfile:
reader = csv.reader(csvfile,delimiter='\t')
headers = next(reader, None)
print(headers[1])
```
|
My delimiter was \t; after I changed it to that, the problem was solved.
| 9,167
|
67,933,453
|
I have a file that consists of data shown below
```
GS*642510*18762293*0*0*0*0*0*0*0*HN*056000522*601200162*20210513*101046*200018825*X*005010X214
ST*642510*18762293*1*0*0*0*0*0*0*277*000000001*005010X214
BHT*642510*18762293*1*0*0*0*0*0*0*0085*08*1*20210513*101046*TH
NM1*642510*18762293*1*1*1*1*1*0*0*QC*1*TORIBIO QUEZADA*YERINSON****MI*1000836598
NM1*642510*18762293*1*1*1*4*1*0*0*QC*1*DELACRUZ*JENNIFER*L***MI*1000232209
```
I want to delete the data from the 1st \* till the 9th \*.
I have written some code in Python, but I am not sure how to use a regular expression here, as the string itself contains "\*" characters.
My PY code is:
```
import os
import re

path2 = "C:/Users/Lsaxena2/Desktop/RI Stuff/RI RITM1456876 Response files/Processed files with no Logs"
files = os.listdir(path2)
print(files)
for x in files:
    with open(path2+'/'+x,'r') as f:
        newText = f.read().replace("*446607*12004230*","*")
    with open(path2+'/'+x,'w') as f:
        f.write(newText)
```
After the update the data should look like:
```
GS*HN*056000522*601200162*20210513*101046*200018825*X*005010X214
ST*277*000000001*005010X214
BHT*0085*08*1*20210513*101046*TH
NM1*QC*1*TORIBIO QUEZADA*YERINSON****MI*1000836598
NM1*QC*1*DELACRUZ*JENNIFER*L***MI*1000232209
```
|
2021/06/11
|
[
"https://Stackoverflow.com/questions/67933453",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15153070/"
] |
<https://regex101.com/r/khgAVj/1>
[](https://i.stack.imgur.com/5Ru4N.png)
regex pattern: `(\*\w+){9}\*`
An explanation of the regex pattern can be found on the regex101 page, on the right side.
There is a code generator, and replacing/removing the section should be fairly trivial. Stack Overflow isn't a place that writes code for you; it helps you debug your code. So I will not give a finished solution but merely point you in the direction.
|
You could write a regular expression to solve this, but if you know that you always want to remove the content between the first and ninth stars, then I would split your strings into lists by "\*" and rejoin select slices. For example:
```py
mystring = "GS*642510*18762293*0*0*0*0*0*0*0*HN*056000522*601200162*20210513*101046*200018825*X*005010X214"
split_string = mystring.split("*")
# ['GS', '642510', '18762293', '0', '0', '0', '0', '0', '0', '0', 'HN', '056000522', '601200162', '20210513', '101046', '200018825', 'X', '005010X214']
desired_slices = split_string[:1] + split_string[10:]
pruned_string = "*".join(desired_slices)
pruned_string
# 'GS*HN*056000522*601200162*20210513*101046*200018825*X*005010X214'
```
| 9,168
|
58,335,344
|
I want to schedule my Python script to run every hour and save the data in an Elasticsearch index. To do that I used a function I wrote, set\_interval, which uses the tweepy library. But it doesn't work as I need it to: it runs every minute and saves the data in the index. Even after setting the seconds equal to 3600 it still runs every minute, but I want to configure this to run on an hourly basis.
How can I fix this? Here's my Python script:
```
def call_at_interval(time, callback, args):
    while True:
        timer = Timer(time, callback, args=args)
        timer.start()
        timer.join()

def set_interval(time, callback, *args):
    Thread(target=call_at_interval, args=(time, callback, args)).start()

def get_all_tweets(screen_name):
    # authorize twitter, initialize tweepy
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_key, access_secret)
    api = tweepy.API(auth)
    screen_name = ""
    # initialize a list to hold all the tweepy Tweets
    alltweets = []
    # make initial request for most recent tweets (200 is the maximum allowed count)
    new_tweets = api.user_timeline(screen_name=screen_name, count=200)
    # save most recent tweets
    alltweets.extend(new_tweets)
    # save the id of the oldest tweet less one
    oldest = alltweets[-1].id - 1
    # keep grabbing tweets until there are no tweets left to grab
    while len(new_tweets) > 0:
        #print
        #"getting tweets before %s" % (oldest)
        # all subsequent requests use the max_id param to prevent duplicates
        new_tweets = api.user_timeline(screen_name=screen_name, count=200, max_id=oldest)
        # save most recent tweets
        alltweets.extend(new_tweets)
        # update the id of the oldest tweet less one
        oldest = alltweets[-1].id - 1
        #print
        #"...%s tweets downloaded so far" % (len(alltweets))
    outtweets = [{'ID': tweet.id_str, 'Text': tweet.text, 'Date': tweet.created_at, 'author': tweet.user.screen_name} for tweet in alltweets]

    def save_es(outtweets, es):  # Peps8 convention
        data = [  # Please without s in data
            {
                "_index": "index name",
                "_type": "type name",
                "_id": index,
                "_source": ID
            }
            for index, ID in enumerate(outtweets)
        ]
        helpers.bulk(es, data)

    save_es(outtweets, es)
    print('Run at:')
    print(datetime.now())
    print("\n")

set_interval(3600, get_all_tweets(screen_name))
```
|
2019/10/11
|
[
"https://Stackoverflow.com/questions/58335344",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11931186/"
] |
Why do you need so much complexity to do a task every hour? You can run the script every hour as shown below; note that it actually runs every hour plus the time the work takes:
```
import time

def do_some_work():
    print("Do some work")
    time.sleep(1)
    print("Some work is done!")

if __name__ == "__main__":
    time.sleep(60)  # imagine you would like to start work in 1 minute first time
    while True:
        do_some_work()
        time.sleep(3600)  # do work every one hour
```
If you want to run the script exactly every hour, do the following:
```
import time
import threading

def do_some_work():
    print("Do some work")
    time.sleep(4)
    print("Some work is done!")

if __name__ == "__main__":
    time.sleep(60)  # imagine you would like to start work in 1 minute first time
    while True:
        thr = threading.Thread(target=do_some_work)
        thr.start()
        time.sleep(3600)  # do work every one hour
```
In this case `thr` is supposed to finish its work in under 3600 seconds; if it does not, you'll still get results, but the results will be from another attempt. See the example below:
```
import time
import threading

class AttemptCount:
    def __init__(self, attempt_number):
        self.attempt_number = attempt_number

def do_some_work(_attempt_number):
    print(f"Do some work {_attempt_number.attempt_number}")
    time.sleep(4)
    print(f"Some work is done! {_attempt_number.attempt_number}")
    _attempt_number.attempt_number += 1

if __name__ == "__main__":
    attempt_number = AttemptCount(1)
    time.sleep(1)  # imagine you would like to start work in 1 minute first time
    while True:
        thr = threading.Thread(target=do_some_work, args=(attempt_number, ),)
        thr.start()
        time.sleep(1)  # do work every one hour
```
The result you'll get in this case is:
Do some work 1
Do some work 1
Do some work 1
Do some work 1
Some work is done! 1
Do some work 2
Some work is done! 2
Do some work 3
Some work is done! 3
Do some work 4
Some work is done! 4
Do some work 5
Some work is done! 5
Do some work 6
Some work is done! 6
Do some work 7
Some work is done! 7
Do some work 8
Some work is done! 8
Do some work 9
I like using subprocess.Popen for such tasks; if the child subprocess does not finish its work within one hour for any reason, you just terminate it and start a new one.
You can also use CRON to schedule some process to run every hour.
|
Get rid of all the timer code and just write the logic; **cron** will do the job for you. Add this to the end of the file after `crontab -e`:
```
0 * * * * /path/to/python /path/to/script.py
```
`0 * * * *` means run at every *zero* minute; you can find more explanation [here](https://www.computerhope.com/unix/ucrontab.htm).
I also noticed that `set_interval(3600, get_all_tweets(screen_name))` calls `get_all_tweets` immediately instead of passing it as a callback; I think you have to call it from outside.
Just keep your script to this much:
```
def get_all_tweets(screen_name):
    # authorize twitter, initialize tweepy
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_key, access_secret)
    api = tweepy.API(auth)
    screen_name = ""
    # initialize a list to hold all the tweepy Tweets
    alltweets = []
    # make initial request for most recent tweets (200 is the maximum allowed count)
    new_tweets = api.user_timeline(screen_name=screen_name, count=200)
    # save most recent tweets
    alltweets.extend(new_tweets)
    # save the id of the oldest tweet less one
    oldest = alltweets[-1].id - 1
    # keep grabbing tweets until there are no tweets left to grab
    while len(new_tweets) > 0:
        #print
        #"getting tweets before %s" % (oldest)
        # all subsequent requests use the max_id param to prevent duplicates
        new_tweets = api.user_timeline(screen_name=screen_name, count=200, max_id=oldest)
        # save most recent tweets
        alltweets.extend(new_tweets)
        # update the id of the oldest tweet less one
        oldest = alltweets[-1].id - 1
        #print
        #"...%s tweets downloaded so far" % (len(alltweets))
    outtweets = [{'ID': tweet.id_str, 'Text': tweet.text, 'Date': tweet.created_at, 'author': tweet.user.screen_name} for tweet in alltweets]

    def save_es(outtweets, es):  # Peps8 convention
        data = [  # Please without s in data
            {
                "_index": "index name",
                "_type": "type name",
                "_id": index,
                "_source": ID
            }
            for index, ID in enumerate(outtweets)
        ]
        helpers.bulk(es, data)

    save_es(outtweets, es)

get_all_tweets("") #your screen name here
```
| 9,171
|
40,753,137
|
Suppose I have a list called name:
name=['ACCBCDB','CCABACB','CAABBCB']
I want to use Python to remove the middle B from each element in the list.
The output should display:
['ACCCDB','CCAACB','CAABCB']
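A minimal sketch of one way to do this with slicing, assuming the character to drop is always the middle one (index `len(s) // 2` of each odd-length string):
```
name = ['ACCBCDB', 'CCABACB', 'CAABBCB']

# drop the middle character of each string
result = [s[:len(s) // 2] + s[len(s) // 2 + 1:] for s in name]
print(result)  # ['ACCCDB', 'CCAACB', 'CAABCB']
```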
|
2016/11/22
|
[
"https://Stackoverflow.com/questions/40753137",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6169548/"
] |
It is not possible to test for `NULL` values with comparison operators, such as `=`, `<`, or `<>`.
You have to use the `IS NULL` and `IS NOT NULL` operators instead, or you have to use functions like `ISNULL()` and `COALESCE()`
```
Select * From MyTableName where [boolfieldX] <> 1 OR [boolfieldX] IS NULL
```
**OR**
```
Select * From MyTableName where ISNULL([boolfieldX],0) <> 1
```
Read more about null comparison in [Stackoverflow Documentation](http://www.riptutorial.com/sql-server/example/17804/null-comparison)
Read more about `ISNULL()` and `COALESCE()` Functions in [Stackoverflow Documentation](http://www.riptutorial.com/sql-server/example/25800/coalesce---)
|
Hi, try to use this query:
```
select * from mytablename where [boolFieldX] is null OR [boolFieldX] <> 1
```
| 9,172
|
28,530,928
|
I am trying to have a Python script execute on the click of an image, but whenever the Python script gets called it throws a 500 Internal Server Error. Here is the text from the log:
```
[Sun Feb 15 20:31:04 2015] [error] (2)No such file or directory: exec of '/var/www/forward.py' failed
[Sun Feb 15 20:31:04 2015] [error] [client 192.168.15.51] Premature end of script headers: forward.py, referer: http://192.168.15.76/Testing.html
```
I don't understand why it is saying `Premature end of script headers`. I can execute a basic Python script that just prints an HTML header or text. The file I am trying to execute just has some basic wiringPi code that runs fine from the `sudo python forward.py` command.
EDIT
This is the script I'm trying to execute:
```
#!/usr/bin/python
import time
import wiringpi2 as wiringpi
wiringpi.wiringPiSetupPhys()
wiringpi.pinMode(40, 1)
wiringpi.digitalWrite(40, 1)
time.sleep(2)
wiringpi.digitalWrite(40, 0)
wiringpi.pinMode(40, 0)
```
|
2015/02/15
|
[
"https://Stackoverflow.com/questions/28530928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4547894/"
] |
Things to check:
1. Make sure the script is executable (`chmod +x forward.py`) and that it has a she-bang line (e.g. `#!/usr/bin/env python`).
2. Make sure the script's owner & group match up with what user apache is running as.
3. Try running the script from the command line as the apache user. I see that you're testing it with `sudo`. Does it really need `sudo`? If so whatever user is running as apache will also need sudoers access.
4. Since it looks like you're using CGI, try adding `import cgitb; cgitb.enable()` to the top of your script. This will catch any exceptions and return them in the response instead of letting the script die; a test sketch follows this list.
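A minimal sketch of a CGI sanity-check script (Python 2 syntax to match the question; a hypothetical test file, not the OP's forward.py):
```
#!/usr/bin/python
import cgitb
cgitb.enable()  # render exceptions in the HTTP response

# a CGI script must emit headers, then a blank line, then the body
print "Content-Type: text/html"
print
print "<html><body>CGI works</body></html>"
```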
|
In addition to @lost-theory's recommendations, I would also check to make sure:
* You have executable permission on the script.
* Apache has permission to access the folder/file of the script. For example:
```
<Directory /path/to/your/dir>
    <Files *>
        Order allow,deny
        Allow from all
        Require all granted
    </Files>
</Directory>
```
| 9,175
|
44,579,050
|
I am using python and OpenCV. I am trying to find the center and angle of the batteries:
[Image of batteries with random angles:](https://i.stack.imgur.com/qB8S7.jpg)
[](https://i.stack.imgur.com/DgD8p.jpg)
The code that I have is this:
```
import cv2
import numpy as np
img = cv2.imread('image/baterias2.png')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
img2 = cv2.imread('image/baterias4.png',0)
minLineLength = 300
maxLineGap = 5
edges = cv2.Canny(img2,50,200)
cv2.imshow('Canny',edges)
lines = cv2.HoughLinesP(edges,1,np.pi/180,80,minLineLength,maxLineGap)
print lines
salida = np.zeros((img.shape[0],img.shape[1]))
for x in range(0, len(lines)):
    for x1,y1,x2,y2 in lines[x]:
        cv2.line(salida,(x1,y1),(x2,y2),(125,125,125),0)# rgb
cv2.imshow('final',salida)
cv2.imwrite('result/hough.jpg',img)
cv2.waitKey(0)
```
Any ideas to work it out?
|
2017/06/16
|
[
"https://Stackoverflow.com/questions/44579050",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8093906/"
] |
Almost identical to [one of my other answers](https://stackoverflow.com/questions/43863931/contour-axis-for-image/43883758#43883758). PCA seems to work fine.
```
import cv2
import numpy as np
img = cv2.imread("test_images/battery001.png") #load an image of a single battery
img_gs = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) #convert to grayscale
#inverted binary threshold: 1 for the battery, 0 for the background
_, thresh = cv2.threshold(img_gs, 250, 1, cv2.THRESH_BINARY_INV)
#From a matrix of pixels to a matrix of coordinates of non-black points.
#(note: mind the col/row order, pixels are accessed as [row, col]
#but when we draw, it's (x, y), so have to swap here or there)
mat = np.argwhere(thresh != 0)
#let's swap here... (e. g. [[row, col], ...] to [[col, row], ...])
mat[:, [0, 1]] = mat[:, [1, 0]]
#or we could've swapped at the end, when drawing
#(e. g. center[0], center[1] = center[1], center[0], same for endpoint1 and endpoint2),
#probably better performance-wise
mat = np.array(mat).astype(np.float32) #have to convert type for PCA
#mean (e. g. the geometrical center)
#and eigenvectors (e. g. directions of principal components)
m, e = cv2.PCACompute(mat, mean = np.array([]))
#now to draw: let's scale our primary axis by 100,
#and the secondary by 50
center = tuple(m[0])
endpoint1 = tuple(m[0] + e[0]*100)
endpoint2 = tuple(m[0] + e[1]*50)
red_color = (0, 0, 255)
cv2.circle(img, center, 5, red_color)
cv2.line(img, center, endpoint1, red_color)
cv2.line(img, center, endpoint2, red_color)
cv2.imwrite("out.png", img)
```
[](https://i.stack.imgur.com/ufsce.png)
|
You can reference the code below.
```
import cv2
import imutils
import numpy as np
PIC_PATH = r"E:\temp\Battery.jpg"
image = cv2.imread(PIC_PATH)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
edged = cv2.Canny(gray, 100, 220)
kernel = np.ones((5,5),np.uint8)
closed = cv2.morphologyEx(edged, cv2.MORPH_CLOSE, kernel)
cnts = cv2.findContours(closed.copy(), cv2.RETR_EXTERNAL,
cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if imutils.is_cv2() else cnts[1]
cv2.drawContours(image, cnts, -1, (0, 255, 0), 4)
cv2.imshow("Output", image)
cv2.waitKey(0)
```
The resulting picture is:
[](https://i.stack.imgur.com/Jgnmp.png)
| 9,176
|
22,948,119
|
I am trying to make a program for my computer science class that has us create a lottery game generator. The game has you input your numbers, then it generates winning tickets to match against your ticket: if you match 3, it says you matched 3; 4 says 4; 5 says 5; and at 6 matches it stops the program. My problem is that if you got a match of 6 on the first randomly generated set (highly unlikely, but possible), it doesn't keep going until it has also seen matches of 3, 4 and 5. I need it to match a set of 3, say so, then ignore further matches of three and only worry about matching 4, 5 and 6.
```
from random import *
import random

def draw():
    #return a list of six randomly picked numbers
    numbers=list(range(1,50))
    drawn=[]
    for n in range (6):
        x=randint(0,len(numbers)-1)
        no=numbers.pop(x)
        drawn.append(no)
    return drawn

a=int(input("What is your first number? (maximum of 49)"))
b=int(input("What is your second number? (different from 1)"))
c=int(input("What is your third number? (different from 1,2)"))
i=int(input("What is your fourth number?(different from 1,2,3)"))
e=int(input("What is your fifth number?(different from 1,2,3,4)"))
f=int(input("What is your sixth number?(different from 1,2,3,4,5)"))

def winner():
    ticket=[a,b,c,i,e,f]
    wins=0
    costs=0
    while True:
        costs=costs+1
        d=draw()
        matches=0
        for h in ticket:
            if h in d:
                matches=matches+1
        if matches==3:
            print ("You Matched 3 on try", costs)
        elif matches==4:
            print ("Cool! 4 matches on try", costs)
        elif matches==5:
            print ("Amazing!", costs, "trys for 5 matches!")
        elif matches==6:
            print ("Congratulations! you matched all 6 numbers on try", costs)
            return False

draw()
winner()
```
One of my classmates made it have a while-true statement for every matching pair, but this causes Python to crash while finding each matching set. I have no other ideas on how to keep the program from reporting more than one match of the same size.
|
2014/04/08
|
[
"https://Stackoverflow.com/questions/22948119",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3512754/"
] |
```
from random import randint, sample

# Ontario Lotto 6/49 prize schedule
COST = 0.50
PRIZES = [0, 0, 0, 5., 50., 500., 1000000.]

def draw():
    return set(sample(range(1, 50), 6))

def get_ints(prompt):
    while True:
        try:
            return [int(i) for i in input(prompt).split()]
        except ValueError:
            pass

def pick():
    while True:
        nums = set(get_ints(
            "Please enter 6 numbers in [1..49], ie 3 4 17 22 44 47: "
        ))
        if len(nums) == 6 and 1 <= min(nums) and max(nums) <= 49:
            return nums

def num_matched(picked):
    return len(picked & draw())  # set intersection

def report(matches):
    total_cost = COST * sum(matches)
    total_won = sum(m*p for m,p in zip(matches, PRIZES))
    net = total_won - total_cost
    # report on the results:
    print("\nYou won:")
    print(
        " nothing {:>8} times -> ${:>12.2f}"
        .format(sum(matches[:3]), 0.)
    )
    for i in range(3, 7):
        print(
            " ${:>12.2f} {:>8} times -> ${:>12.2f}"
            .format(PRIZES[i], matches[i], PRIZES[i] * matches[i])
        )
    print(
        "\nYou paid ${:0.2f} to win ${:0.2f}, for a net result of ${:0.2f}."
        .format(total_cost, total_won, net)
    )

def main():
    # pick a set of numbers
    picked = pick()
    # repeat until we have seen 3, 4, 5, and 6-ball matches
    matches = [0, 0, 0, 0, 0, 0, 0]
    while not all(matches[3:]):
        matches[num_matched(picked)] += 1
    report(matches)

if __name__=="__main__":
    main()
```
which results in
```
Please enter 6 numbers in [1..49], ie 3 4 17 22 44 47: 4 6 9 12 14 19
You won:
nothing 10060703 times -> $ 0.00
$ 5.00 181218 times -> $ 906090.00
$ 50.00 9888 times -> $ 494400.00
$ 500.00 189 times -> $ 94500.00
$ 1000000.00 1 times -> $ 1000000.00
You paid $5125999.50 to win $2494990.00, for a net result of $-2631009.50.
```
|
Just keep a record of what you have seen:
```
...
costs=0
found = []
while True:
    ...
    if matches==3 and 3 not in found:
        found.append(3)
        print ("You Matched 3 on try", costs)
    elif matches==4 and 4 not in found:
        found.append(4)
        print ("Cool! 4 matches on try", costs)
    ...
    if set([3,4,5,6]).intersection(found) == set([3,4,5,6]):
        print("You Found em all!")
        return
```
| 9,179
|
7,533,677
|
```
a.zip---
-- b.txt
-- c.txt
-- d.txt
```
What methods are there to process the zip file with Python?
I could expand the zip file to a temporary directory, then process each txt file one by one.
Here, I am more interested to know whether or not Python provides a way so that
I don't have to manually expand the zip file and can simply treat the zip file as a specialized folder, processing each txt accordingly.
|
2011/09/23
|
[
"https://Stackoverflow.com/questions/7533677",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/391104/"
] |
The [Python standard library](http://docs.python.org/library/zipfile.html) helps you.
Doug Hellman writes very informative posts about selected modules: <https://pymotw.com/3/zipfile/>
To comment on David's post: from Python 2.7 on, the ZipFile object provides a context manager, so the recommended way would be:
```
import zipfile
with zipfile.ZipFile("zipfile.zip", "r") as f:
    for name in f.namelist():
        data = f.read(name)
        print name, len(data), repr(data[:10])
```
The `close` method will be called automatically because of the with statement. This is especially important if you write to the file.
|
Yes you can process each file by itself. Take a look at the tutorial [here](http://effbot.org/librarybook/zipfile.htm). For your needs you can do something like this example from that tutorial:
```
import zipfile
file = zipfile.ZipFile("zipfile.zip", "r")
for name in file.namelist():
    data = file.read(name)
    print name, len(data), repr(data[:10])
```
This will iterate over each file in the archive and print out its name, length and the first 10 bytes.
The comprehensive reference documentation is [here](http://docs.python.org/library/zipfile.html).
| 9,180
|
58,381,152
|
In training a neural network in Tensorflow 2.0 in python, I'm noticing that training accuracy and loss change dramatically between epochs. I'm aware that the metrics printed are an average over the entire epoch, but accuracy seems to drop significantly after each epoch, despite the average always increasing.
The loss also exhibits this behavior, dropping significantly each epoch but the average increases. Here is an image of what I mean (from Tensorboard):
[](https://i.stack.imgur.com/fm6KK.png)
I've noticed this behavior on all of the models I've implemented myself, so it could be a bug, but I want a second opinion on whether this is normal behavior and if so what does it mean?
Also, I'm using a fairly large dataset (roughly 3 million examples). Batch size is 32 and each dot in the accuracy/loss graphs represent 50 batches (2k on the graph = 100k batches). The learning rate graph is 1:1 for batches.
|
2019/10/14
|
[
"https://Stackoverflow.com/questions/58381152",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4607066/"
] |
It seems this phenomenon comes from the fact that the model has a high batch-to-batch variance in terms of accuracy and loss. This is illustrated if I take a graph of the model with the actual metrics per step as opposed to the average over the epoch:
[](https://i.stack.imgur.com/DRCUp.png)
Here you can see that the model can vary widely. (This graph is just for one epoch, but the fact remains).
Since the average metrics were being reported per epoch, at the beginning of the next epoch it is highly likely that the average metrics will be lower than the previous average, leading to a dramatic drop in the running average value, illustrated in red below:
[](https://i.stack.imgur.com/zC105.png)
If you imagine the discontinuities in the red graph as being epoch transitions, you can see why you would observe the phenomenon in the question.
TL;DR: The model has a very high variance in its output with respect to each batch.
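To make the epoch-boundary effect concrete, here is a small simulation with synthetic numbers (not from the model in question): a per-epoch running average over noisy per-batch accuracies resets at each epoch, so its first values can sit far below the previous epoch's final average:
```
import random

random.seed(0)
steps_per_epoch = 1000
for epoch in range(3):
    total = 0.0
    for step in range(1, steps_per_epoch + 1):
        # noisy per-batch accuracy with a slow upward trend across epochs
        acc = min(1.0, max(0.0, random.gauss(0.5 + 0.1 * epoch, 0.2)))
        total += acc
        if step == 1:
            print("epoch %d, first-step running mean: %.3f" % (epoch, total))
    print("epoch %d, final running mean: %.3f" % (epoch, total / steps_per_epoch))
```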
|
I just recently experienced this kind of issue while working on a project about object localization. In my case, there were three **main** candidates:
* I used no shuffling in my training. That creates a loss increase after each epoch.
* I defined a new loss function that is calculated using IOU. It was something like:
```
def new_loss(y_true, y_pred):
    mse = tf.losses.mean_squared_error(y_true, y_pred)
    iou = calculate_iou(y_true, y_pred)
    return mse + (1 - iou)
```
I also suspect this loss may be a possible cause of the increase in loss after each epoch. However, I was not able to replace it.
* I was using the *Adam* optimizer, so a possible thing to do is to change it and see how the training is affected.
Conclusion
==========
I changed *Adam* to *SGD* and shuffled my data during training. There was still a jump in the loss, but it was minimal compared to before the changes. For example, my loss spike was ~0.3 before the changes and it became ~0.02.
**Note**
I should add that there are lots of discussions about this topic. I tried to utilize the possible solutions that were plausible candidates for my model.
| 9,181
|
59,439,124
|
The relatively new keras-tuner module for TensorFlow 2 is causing the error 'Failed to create a NewWriteableFile'. The tuner.search function works; the error is thrown only after the trial completes. This is from a tutorial on the sentdex YouTube channel.
Here is the code:
```
from tensorflow import keras
from tensorflow.keras.datasets import fashion_mnist
from tensorflow.keras.layers import Dense, Conv2D, MaxPooling2D, Activation, Flatten
from kerastuner.tuners import RandomSearch
from kerastuner.engine.hyperparameters import HyperParameters
import matplotlib.pyplot as plt
import time
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
x_train = x_train[:1000].reshape(-1, 28, 28, 1)
x_test = x_test[:100].reshape(-1, 28, 28, 1)
y_train = y_train[:1000]
y_test = y_test[:100]
# x_train = x_train.reshape(-1, 28, 28, 1)
# x_test = x_test.reshape(-1, 28, 28, 1)
LOG_DIR = f"{int(time.time())}"
def build_model(hp):
    model = keras.models.Sequential()
    model.add(Conv2D(hp.Int("layer1_channels", min_value=32,
                            max_value=256, step=32), (3,3), input_shape=x_train.shape[1:]))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2,2)))
    for i in range(hp.Int("n_layers", 1, 4)):
        model.add(Conv2D(hp.Int(f"conv_{i}_channels", min_value=32,
                                max_value=256, step=32), (3,3)))
    model.add(Flatten())
    model.add(Dense(10))
    model.add(Activation('softmax'))
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    return model

tuner = RandomSearch(build_model,
                     objective = "val_accuracy",
                     max_trials = 1,
                     executions_per_trial = 1,
                     directory = LOG_DIR,
                     project_name = 'junk')

tuner.search(x_train,
             y_train,
             epochs=1,
             batch_size=64,
             validation_data=(x_test, y_test))
```
This is the traceback printout:
```
(tf_2.0) C:\Users\redex\OneDrive\Documents\Education\Sentdex Tutorials\Keras-Tuner>C:/Users/redex/Anaconda3/envs/tf_2.0/python.exe "c:/Users/redex/OneDrive/Documents/Education/Sentdex Tutorials/Keras-Tuner/keras-tuner.py"
2019-12-21 10:07:47.556531: I tensorflow/core/platform/cpu_feature_guard.cc:145] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance critical operations: AVX AVX2
To enable them in non-MKL-DNN operations, rebuild TensorFlow with the appropriate compiler flags.
2019-12-21 10:07:47.574699: I tensorflow/core/common_runtime/process_util.cc:115] Creating new thread pool with default inter op setting: 8. Tune using inter_op_parallelism_threads for best performance.
Train on 1000 samples, validate on 100 samples
960/1000 [===========================>..] - ETA: 0s - loss: 64.0616 - accuracy: 0.2844
2019-12-21 10:07:55.080024: W tensorflow/core/framework/op_kernel.cc:1622] OP_REQUIRES failed at save_restore_v2_ops.cc:109 : Not found: Failed to create a NewWriteableFile: 1576951667\junk\trial_c5a5436b1d28a85446ce55c8d13f9657\checkpoints\epoch_0\checkpoint_temp_8a230a5ae2d046098456d1fdfc696690/part-00000-of-00001.data-00000-of-00001.tempstate15377864750281844169 : The system cannot find the path specified.
; No such process
Traceback (most recent call last):
File "c:/Users/redex/OneDrive/Documents/Education/Sentdex Tutorials/Keras-Tuner/keras-tuner.py", line 65, in <module>
validation_data=(x_test, y_test))
File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\kerastuner\engine\base_tuner.py", line 122, in search
self.run_trial(trial, *fit_args, **fit_kwargs)
File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\kerastuner\engine\multi_execution_tuner.py", line 95, in run_trial
history = model.fit(*fit_args, **fit_kwargs, callbacks=callbacks)
File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 728, in fit
use_multiprocessing=use_multiprocessing)
File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 372, in fit
prefix='val_')
File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\contextlib.py", line 119, in __exit__
next(self.gen)
File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 685, in on_epoch
self.callbacks.on_epoch_end(epoch, epoch_logs)
File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\keras\callbacks.py", line 298, in on_epoch_end
callback.on_epoch_end(epoch, logs)
File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\keras\callbacks.py", line 965, in on_epoch_end
self._save_model(epoch=epoch, logs=logs)
File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\keras\callbacks.py", line 999, in _save_model
self.model.save_weights(filepath, overwrite=True)
File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\keras\engine\network.py", line 1090, in save_weights
self._trackable_saver.save(filepath, session=session)
File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\training\tracking\util.py", line 1155, in save
file_prefix=file_prefix_tensor, object_graph_tensor=object_graph_tensor)
File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\training\tracking\util.py", line 1103, in _save_cached_when_graph_building
save_op = saver.save(file_prefix)
File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\training\saving\functional_saver.py", line 230, in save
sharded_saves.append(saver.save(shard_prefix))
File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\training\saving\functional_saver.py", line 72, in save
return io_ops.save_v2(file_prefix, tensor_names, tensor_slices, tensors)
File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\ops\gen_io_ops.py", line 1932, in save_v2
ctx=_ctx)
File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\ops\gen_io_ops.py", line 1969, in save_v2_eager_fallback
ctx=_ctx, name=name)
File "C:\Users\redex\Anaconda3\envs\tf_2.0\lib\site-packages\tensorflow_core\python\eager\execute.py", line 67, in quick_execute
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.NotFoundError: Failed to create a NewWriteableFile: 1576951667\junk\trial_c5a5436b1d28a85446ce55c8d13f9657\checkpoints\epoch_0\checkpoint_temp_8a230a5ae2d046098456d1fdfc696690/part-00000-of-00001.data-00000-of-00001.tempstate15377864750281844169 : The system cannot find the path specified.
; No such process [Op:SaveV2]
```
My machine is Windows 10
The keras-tuner documentation specifies Tensorflow 2.0 and Python 3.6, but I'm using 3.7.4; I presume more recent is OK. I'm no software expert, so this is about all I know. Any help is appreciated.
|
2019/12/21
|
[
"https://Stackoverflow.com/questions/59439124",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9447938/"
] |
I had a similar problem while using kerastuner on Windows, and I solved it:
1. The first issue is that the path to the log directory may be too long; I had to reduce it.
2. The second problem is that Python (or TF) doesn't work on Windows with mixed slashes, while kerastuner builds paths with backslashes, so the directory I pass in should use backslashes too. I did this with the os.path.normpath() method:
```
import os

tuner = RandomSearch(build_model, objective='val_accuracy', max_trials=10,
                     directory=os.path.normpath('C:/'))
tuner.search(x_train, y_train, batch_size=256, epochs=30,
             validation_split=0.2, verbose=1)
```
Now I don't receive this error.
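For illustration, here is what `normpath` does with a forward-slash path (the path itself is just an example):

```
import os

# On Windows this prints C:\logs\tuner; the forward slashes are
# normalized to the platform's separator.
print(os.path.normpath('C:/logs/tuner'))
```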
|
The problem, it would appear, is a Windows issue: running the same code in a Linux environment produced no issue in this regard.
| 9,182
|
54,547,986
|
I couldn't figure out why I'm getting a `NameError` when trying to access a function inside the class.
This is the code I am having a problem with. Am I missing something?
```
class ArmstrongNumber:
def cubesum(num):
return sum([int(i)**3 for i in list(str(num))])
def PrintArmstrong(num):
if cubesum(num) == num:
return "Armstrong Number"
return "Not an Armstrong Number"
def Armstrong(num):
if cubesum(num) == num:
return True
return False
[i for i in range(1000) if ArmstrongNumber.Armstrong(i)] # this return NameError
```
Error-message:
```
NameError Traceback (most recent call last)
<ipython-input-32-f3d39f24a48c> in <module>
----> 1 ArmstrongNumber.Armstrong(153)
<ipython-input-31-fd21586166ed> in Armstrong(num)
10
11 def Armstrong(num):
---> 12 if cubesum(num) == num:
13 return True
14 return False
NameError: name 'cubesum' is not defined
```
|
2019/02/06
|
[
"https://Stackoverflow.com/questions/54547986",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10361602/"
] |
Use the class name before the method:
```
class ArmstrongNumber:
def cubesum(num):
return sum([int(i)**3 for i in list(str(num))])
def PrintArmstrong(num):
if ArmstrongNumber.cubesum(num) == num:
return "Armstrong Number"
return "Not an Armstrong Number"
def Armstrong(num):
if ArmstrongNumber.cubesum(num) == num:
return True
return False
print([i for i in range(1000) if ArmstrongNumber.Armstrong(i)])
```
Unless you pass `self` to the functions, they are not instance methods. Even if you define them within the class, you still need to access them using the class name.
|
This should be your actual solution if you really want to use a class:
```
class ArmstrongNumber(object):
def cubesum(self, num):
return sum([int(i)**3 for i in list(str(num))])
def PrintArmstrong(self, num):
if self.cubesum(num) == num:
return "Armstrong Number"
return "Not an Armstrong Number"
def Armstrong(self, num):
if self.cubesum(num) == num:
return True
return False
a = ArmstrongNumber()
print([i for i in range(1000) if a.Armstrong(i)])
```
output
```
[0, 1, 153, 370, 371, 407]
```
---
**2nd method:**
if you don't want to use a class, then use plain module-level functions like this:
```
def cubesum(num):
return sum([int(i)**3 for i in list(str(num))])
def PrintArmstrong(num):
if cubesum(num) == num:
return "Armstrong Number"
return "Not an Armstrong Number"
def Armstrong(num):
if cubesum(num) == num:
return True
return False
# a = ArmstrongNumber()
print([i for i in range(1000) if Armstrong(i)])
```
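Alternatively, if you do want the functions to live on the class without creating instances, `@staticmethod` is the idiomatic option; a small sketch:

```
class ArmstrongNumber:
    @staticmethod
    def cubesum(num):
        return sum(int(d) ** 3 for d in str(num))

    @staticmethod
    def armstrong(num):
        return ArmstrongNumber.cubesum(num) == num

# No instance needed; prints [0, 1, 153, 370, 371, 407]
print([i for i in range(1000) if ArmstrongNumber.armstrong(i)])
```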
| 9,184
|
32,562,253
|
I am trying to run multiple calculations with a Tkinter UI in Python, where I have to display the outputs of all the calculations. The problem is that the output of the first calculation is fine, but the outputs of further calculations seem to be computed from default values. I came to know that I should destroy the first label in order to output the second calculation, but when I try to destroy my first label, I cannot. The code I tried is as follows:
```
from tkinter import *
def funcname():
#My calculations
GMT = GMT_user.get()
lat = lat_deg_user.get()
E = GMT * 365
Eqntime_label.configure(text=E)
Elevation = E/lat
Elevation_label.configure(text=Elevation)
nUI_pgm = Tk()
GMT_user = DoubleVar()
lat_deg_user = DoubleVar()
nlabel_time = Label(text = "Enter time in accordance to GMT in decimal").pack()
nEntry_time = Entry(nUI_pgm, textvariable = GMT_user).pack()
nlabel_Long = Label(text = "Enter Longitude in Decimal Degrees").pack()
nEntry_Long = Entry(nUI_pgm, textvariable = lat_deg_user).pack()
nbutton = Button(nUI_pgm, text = "Calculate", command = funcname).pack()
#Displaying results
nlabel_E = Label (text = "The Equation of Time is").pack()
Eqntime_label = Label(nUI_pgm, text="")
Eqntime_label.pack()
#when i try
Eqntime_label.destroy() # this doesn't work
nlabel_Elevation = Label(text = "The Elevation of the sun is").pack()
Elevation_label = Label(nUI_pgm, text="")
Elevation_label.pack()
nUI_pgm.mainloop()
```
Here I have to destroy the Eqntime\_label after the result is displayed in order to output the Elevation\_label too. What should I do?
|
2015/09/14
|
[
"https://Stackoverflow.com/questions/32562253",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5325381/"
] |
Temporary tables are always created in TempDb. However, TempDb's size is not necessarily due only to temporary tables; TempDb is used in various ways:
1. Internal objects (Sort & spool, CTE, index rebuild, hash join etc)
2. User objects (Temporary table, table variables)
3. Version store (AFTER/INSTEAD OF triggers, MARS)
Since TempDb is clearly used in various SQL operations, its size can grow for other reasons as well.
You can check what is causing TempDb to grow with the query below:
```
SELECT
SUM (user_object_reserved_page_count)*8 as usr_obj_kb,
SUM (internal_object_reserved_page_count)*8 as internal_obj_kb,
SUM (version_store_reserved_page_count)*8 as version_store_kb,
SUM (unallocated_extent_page_count)*8 as freespace_kb,
SUM (mixed_extent_page_count)*8 as mixedextent_kb
FROM sys.dm_db_file_space_usage
```
If the above query shows:
* a higher number of user objects, there is heavy usage of temp tables, cursors, or temp variables;
* a higher number of internal objects, the query plans are using TempDb heavily, e.g. for sorting or GROUP BY;
* a higher number of version-store pages, there are long-running transactions or high transaction throughput.
Based on that, you can configure the TempDb file size. I've recently written an article about TempDb configuration best practices; you can read it [here](http://social.technet.microsoft.com/wiki/contents/articles/31353.sql-server-demystifying-tempdb-and-recommendations.aspx).
|
Perhaps you can use the following SQL command on the TempDb files separately:
```
DBCC SHRINKFILE
```
Please refer to <https://support.microsoft.com/en-us/kb/307487> for more information
| 9,185
|
5,440,550
|
The sample application on the android developers site validates the purchase json using java code. Has anybody had any luck working out how to validate the purchase in python. In particular in GAE?
The following are the relevant excerpts from the android in-app billing [example program](http://developer.android.com/guide/market/billing/billing_integrate.html#billing-download). This is what would need to be converted to python using [PyCrypto](http://www.dlitz.net/software/pycrypto/) which was re-written to be completely python by Google and is the only Security lib available on app engine. Hopefully Google is cool with me using the excerpts below.
```
private static final String KEY_FACTORY_ALGORITHM = "RSA";
private static final String SIGNATURE_ALGORITHM = "SHA1withRSA";
String base64EncodedPublicKey = "your public key here";
PublicKey key = Security.generatePublicKey(base64EncodedPublicKey);
verified = Security.verify(key, signedData, signature);
public static PublicKey generatePublicKey(String encodedPublicKey) {
try {
byte[] decodedKey = Base64.decode(encodedPublicKey);
KeyFactory keyFactory = KeyFactory.getInstance(KEY_FACTORY_ALGORITHM);
return keyFactory.generatePublic(new X509EncodedKeySpec(decodedKey));
} catch ...
}
}
public static boolean verify(PublicKey publicKey, String signedData, String signature) {
if (Consts.DEBUG) {
Log.i(TAG, "signature: " + signature);
}
Signature sig;
try {
sig = Signature.getInstance(SIGNATURE_ALGORITHM);
sig.initVerify(publicKey);
sig.update(signedData.getBytes());
if (!sig.verify(Base64.decode(signature))) {
Log.e(TAG, "Signature verification failed.");
return false;
}
return true;
} catch ...
}
return false;
}
```
|
2011/03/26
|
[
"https://Stackoverflow.com/questions/5440550",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/677760/"
] |
Here's how I did it:
```
from Crypto.Hash import SHA
from Crypto.PublicKey import RSA
from Crypto.Signature import PKCS1_v1_5
from base64 import b64decode
def chunks(s, n):
for start in range(0, len(s), n):
yield s[start:start+n]
def pem_format(key):
return '\n'.join([
'-----BEGIN PUBLIC KEY-----',
'\n'.join(chunks(key, 64)),
'-----END PUBLIC KEY-----'
])
def validate_purchase(publicKey, signedData, signature):
key = RSA.importKey(pem_format(publicKey))
verifier = PKCS1_v1_5.new(key)
data = SHA.new(signedData)
sig = b64decode(signature)
return verifier.verify(data, sig)
```
This assumes that `publicKey` is your base64 encoded Google Play Store key on one line as you get it from the Developer Console.
For people who would rather use M2Crypto, `validate_purchase()` changes to:
```
from M2Crypto import RSA, BIO, EVP
from base64 import b64decode
# pem_format() as above
def validate_purchase(publicKey, signedData, signature):
bio = BIO.MemoryBuffer(pem_format(publicKey))
rsa = RSA.load_pub_key_bio(bio)
key = EVP.PKey()
key.assign_rsa(rsa)
key.verify_init()
key.verify_update(signedData)
return key.verify_final(b64decode(signature)) == 1
```
|
I finally figured out that your base64 encoded public key from Google Play is an X.509 subjectPublicKeyInfo DER SEQUENCE, and that the signature scheme is RSASSA-PKCS1-v1\_5 and not RSASSA-PSS. If you have [PyCrypto](https://www.dlitz.net/software/pycrypto/) installed, it's actually quite easy:
```
import base64
from Crypto.Hash import SHA
from Crypto.PublicKey import RSA
from Crypto.Signature import PKCS1_v1_5
# Your base64 encoded public key from Google Play.
_PUBLIC_KEY_BASE64 = "YOUR_BASE64_PUBLIC_KEY_HERE"
# Key from Google Play is a X.509 subjectPublicKeyInfo DER SEQUENCE.
_PUBLIC_KEY = RSA.importKey(base64.standard_b64decode(_PUBLIC_KEY_BASE64))
def verify(signed_data, signature_base64):
"""Returns whether the given data was signed with the private key."""
h = SHA.new()
h.update(signed_data)
# Scheme is RSASSA-PKCS1-v1_5.
verifier = PKCS1_v1_5.new(_PUBLIC_KEY)
# The signature is base64 encoded.
signature = base64.standard_b64decode(signature_base64)
return verifier.verify(h, signature)
```
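As a sanity check of the scheme, here is a hypothetical round trip with a throwaway key pair (illustration only; in real use you verify Google Play's signature against Google's published key):

```
import base64
from Crypto.Hash import SHA
from Crypto.PublicKey import RSA
from Crypto.Signature import PKCS1_v1_5

# Throwaway key pair and payload, for illustration only.
key = RSA.generate(1024)
signed_data = b'{"orderId": "test-order"}'

# Sign with the private half and base64-encode, like Google Play does.
signature_base64 = base64.standard_b64encode(
    PKCS1_v1_5.new(key).sign(SHA.new(signed_data)))

# Verify with the public half, mirroring verify() above.
verifier = PKCS1_v1_5.new(key.publickey())
assert verifier.verify(SHA.new(signed_data),
                       base64.standard_b64decode(signature_base64))
```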
| 9,186
|
44,972,219
|
I'm pretty new to Python and just starting to understand the basics.
I'm trying to run a script in a loop to check the temperatures; if the outside temp gets higher than the inside (or the opposite), the function should print it once and continue to check every 5 seconds for a changed state.
I found a similar [question](https://stackoverflow.com/questions/38001105/python-print-only-one-time-inside-a-loop) which was very helpful, but when I execute the code it prints that the outside temp is higher; next I heat up the inside sensor and it prints that the inside is higher. All good, except that it doesn't continue: the loop works, but it doesn't recognize the next change of state.
```
import RPi.GPIO as GPIO
import time
sensor_name_0 = "test"
printed_out = False
printed_in = False
try:
while True:
if sensor_name_0:
sensor_0 = open('/sys/devices/w1_bus_master1/w1_master_slaves','r').read().split('\n')[0]
sensor_1 = open('/sys/devices/w1_bus_master1/w1_master_slaves','r').read().split('\n')[1]
sensorpath = "/sys/bus/w1/devices/"
sensorfile = "/w1_slave"
def callsensor_0(sensor_0):
f = open(sensorpath + sensor_0 + sensorfile, 'r')
lines = f.readlines()
f.close()
temp_line = lines[1].find('t=')
temp_output = lines[1].strip() [temp_line+2:]
temp_celsius = float(temp_output) / 1000
return temp_celsius
def callsensor_1(sensor_1):
f = open(sensorpath + sensor_1 + sensorfile, 'r')
lines = f.readlines()
f.close()
temp_line = lines[1].find('t=')
temp_output = lines[1].strip() [temp_line+2:]
temp_celsius = float(temp_output) / 1000
return temp_celsius
outside = (str('%.1f' % float(callsensor_0(sensor_0))).rstrip('0').rstrip('.'))
inside = (str('%.1f' % float(callsensor_1(sensor_1))).rstrip('0').rstrip('.'))
print "loop"
if outside > inside and not printed_out:
printed_out = True
print "outside is higher then inside"
print outside
if outside < inside and not printed_in:
printed_in = True
print "inside is higher then outside"
print inside
time.sleep(5)
except KeyboardInterrupt:
print('interrupted!')
```
|
2017/07/07
|
[
"https://Stackoverflow.com/questions/44972219",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8271048/"
] |
It's been a long time, but I thought I'd share some info for future readers.
There is a good article that explains `getItemLayout`; please find it [here](https://medium.com/@jsoendermann/sectionlist-and-getitemlayout-2293b0b916fb).
I also ran into `data[index]` being `undefined`. The reason is that `index` is calculated as `section.data.length + 2` per section (1 for the section header and 1 for the section footer); you can find the code [here (RN-52)](https://github.com/facebook/react-native/blob/0.52-stable/Libraries/Lists/VirtualizedSectionList.js#L334).
With `SectionList` we have to be very careful while processing `index`.
|
For some reason the `react-native-get-item-layout` package kept crashing with `"height: <<NaN>>"`, so I had to write my [own RN SectionList getItemLayout](https://npmjs.com/package/sectionlist-get-itemlayout). It uses the same interface as the former.
Like that package, it's also `O(n)`.
| 9,189
|
9,866,923
|
I'm trying to solve this newbie puzzle:
I've created this function:
```
def bucket_loop(htable, key):
bucket = hashtable_get_bucket(htable, key)
for entry in bucket:
if entry[0] == key:
return entry[1]
return None
```
And I have to call it in two other functions (below) in the following way: to change the value of the element entry[1], or to append a new element to this list (entry). But I can't do that by calling the function bucket\_loop the way I did, because **"you can't assign to function call"** (assigning to a function call is illegal in Python). What is the alternative (most similar to the code I wrote) to do this (bucket\_loop(htable, key) = value and hashtable\_get\_bucket(htable, key).append([key, value]))?
```
def hashtable_update(htable, key, value):
if bucket_loop(htable, key) != None:
bucket_loop(htable, key) = value
else:
hashtable_get_bucket(htable, key).append([key, value])
def hashtable_lookup(htable, key):
return bucket_loop(htable, key)
```
Thanks, in advance, for any help!
This is the rest of the code to make this script works:
```
def make_hashtable(size):
table = []
for unused in range(0, size):
table.append([])
return table
def hash_string(s, size):
h = 0
for c in s:
h = h + ord(c)
return h % size
def hashtable_get_bucket(htable, key):
return htable[hash_string(key, len(htable))]
```
Similar question (but didn't help me): [SyntaxError: "can't assign to function call"](https://stackoverflow.com/questions/5964927/python-cannot-assign-function-call)
|
2012/03/26
|
[
"https://Stackoverflow.com/questions/9866923",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/955883/"
] |
In general, there are three things you can do:
1. Write “setter” functions (ex, `bucket_set`)
2. Return mutable values (ex, `bucket_get(table, key).append(42)` if the value is a `list`)
3. Use a class which overrides `__getitem__` and `__setitem__`
For example, you could have a class like:
```
class Bucket(object):
def __setitem__(self, key, value):
# … implementation …
def __getitem__(self, key):
# … implementation …
return value
```
Then use it like this:
```
>>> b = Bucket()
>>> b["foo"] = 42
>>> b["foo"]
42
>>>
```
This would be the most Pythonic way to do it.
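For concreteness, a minimal sketch of what those two methods could look like, reusing the question's list-of-buckets layout (the details are illustrative):

```
class Bucket(object):
    def __init__(self, size=16):
        self.table = [[] for _ in range(size)]

    def _bucket(self, key):
        return self.table[hash(key) % len(self.table)]

    def __setitem__(self, key, value):
        bucket = self._bucket(key)
        for entry in bucket:
            if entry[0] == key:
                entry[1] = value      # update an existing key in place
                return
        bucket.append([key, value])   # first time we see this key

    def __getitem__(self, key):
        for entry in self._bucket(key):
            if entry[0] == key:
                return entry[1]
        return None
```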
|
One option that would require few changes is adding an optional third argument to `bucket_loop` to use for assignment:
```
empty = object() # An object that's guaranteed not to be in your htable
def bucket_loop(htable, key, value=empty):
bucket = hashtable_get_bucket(htable, key)
for entry in bucket:
if entry[0] == key:
if value is not empty: # Reference (id) comparison
entry[1] = value
return entry[1]
else: # I think this else is unnecessary/buggy
return None
```
However, a few pointers:
1. I agree with Ignacio Vazquez-Abrams and David Wolever, a class would be better;
2. Since a bucket can hold more than one key/value pair, you shouldn't return None if the first entry didn't match your key. Loop through all of them, and only return None at the end (you can also omit that statement; returning None is the default behavior);
3. If your htable doesn't admit `None` as a value, you can use it instead of `empty`.
| 9,191
|
62,065,607
|
I run into problems when calling Spark's MinHashLSH's approxSimilarityJoin on a dataframe of (name\_id, name) combinations.
**A summary of the problem I try to solve:**
I have a dataframe of around 30 million unique (name\_id, name) combinations for company names. Some of those names refer to the same company but are (i) misspelled and/or (ii) include additional words. Performing fuzzy string matching for every combination is not feasible. To reduce the number of fuzzy string matching combinations, I use MinHashLSH in Spark. My intended approach is to use an approxSimilarityJoin (self-join) with a relatively large Jaccard threshold, so that I can run a fuzzy matching algorithm on the matched combinations to further improve the disambiguation.
**A summary of the steps I took:**
1. Used CountVectorizer to create a vector of character counts for every name,
2. Used MinHashLSH and its approxSimilarityJoin with the following settings:
* numHashTables=100
* threshold=0.3 (Jaccard threshold for approxSimilarityJoin)
3. After the approxSimilarityJoin, I remove duplicate combinations (whenever both (i,j) and (j,i) were matched, I remove (j,i)).
4. After removing the duplicate combinations, I run a fuzzy string matching algorithm using the FuzzyWuzzy package to reduce the number of records and improve the disambiguation of the names.
5. Eventually I run a connectedComponents algorithm on the remaining edges (i,j) to match which company names belong together.
**Part of code used:**
```
id_col = 'id'
name_col = 'name'
num_hastables = 100
max_jaccard = 0.3
fuzzy_threshold = 90
fuzzy_method = fuzz.token_set_ratio
# Calculate edges using minhash practices
edges = MinHashLSH(inputCol='vectorized_char_lst', outputCol='hashes', numHashTables=num_hastables).\
fit(data).\
approxSimilarityJoin(data, data, max_jaccard).\
select(col('datasetA.'+id_col).alias('src'),
col('datasetA.clean').alias('src_name'),
col('datasetB.'+id_col).alias('dst'),
col('datasetB.clean').alias('dst_name')).\
withColumn('comb', sort_array(array(*('src', 'dst')))).\
dropDuplicates(['comb']).\
rdd.\
filter(lambda x: fuzzy_method(x['src_name'], x['dst_name']) >= fuzzy_threshold if x['src'] != x['dst'] else False).\
toDF().\
drop(*('src_name', 'dst_name', 'comb'))
```
**Explain plan of `edges`**
```
== Physical Plan ==
*(5) HashAggregate(keys=[datasetA#232, datasetB#263], functions=[])
+- Exchange hashpartitioning(datasetA#232, datasetB#263, 200)
+- *(4) HashAggregate(keys=[datasetA#232, datasetB#263], functions=[])
+- *(4) Project [datasetA#232, datasetB#263]
+- *(4) BroadcastHashJoin [entry#233, hashValue#234], [entry#264, hashValue#265], Inner, BuildRight, (UDF(datasetA#232.vectorized_char_lst, datasetB#263.vectorized_char_lst) < 0.3)
:- *(4) Project [named_struct(id, id#10, name, name#11, clean, clean#90, char_lst, char_lst#95, vectorized_char_lst, vectorized_char_lst#107, hashes, hashes#225) AS datasetA#232, entry#233, hashValue#234]
: +- *(4) Filter isnotnull(hashValue#234)
: +- Generate posexplode(hashes#225), [id#10, name#11, clean#90, char_lst#95, vectorized_char_lst#107, hashes#225], false, [entry#233, hashValue#234]
: +- *(1) Project [id#10, name#11, clean#90, char_lst#95, vectorized_char_lst#107, UDF(vectorized_char_lst#107) AS hashes#225]
: +- InMemoryTableScan [char_lst#95, clean#90, id#10, name#11, vectorized_char_lst#107]
: +- InMemoryRelation [id#10, name#11, clean#90, char_lst#95, vectorized_char_lst#107], StorageLevel(disk, memory, deserialized, 1 replicas)
: +- *(4) Project [id#10, name#11, pythonUDF0#114 AS clean#90, pythonUDF2#116 AS char_lst#95, UDF(pythonUDF2#116) AS vectorized_char_lst#107]
: +- BatchEvalPython [<lambda>(name#11), <lambda>(<lambda>(name#11)), <lambda>(<lambda>(name#11))], [id#10, name#11, pythonUDF0#114, pythonUDF1#115, pythonUDF2#116]
: +- SortAggregate(key=[name#11], functions=[first(id#10, false)])
: +- *(3) Sort [name#11 ASC NULLS FIRST], false, 0
: +- Exchange hashpartitioning(name#11, 200)
: +- SortAggregate(key=[name#11], functions=[partial_first(id#10, false)])
: +- *(2) Sort [name#11 ASC NULLS FIRST], false, 0
: +- Exchange RoundRobinPartitioning(8)
: +- *(1) Filter AtLeastNNulls(n, id#10,name#11)
: +- *(1) FileScan csv [id#10,name#11] Batched: false, Format: CSV, Location: InMemoryFileIndex[file:<path>, PartitionFilters: [], PushedFilters: [], ReadSchema: struct<id:string,name:string>
+- BroadcastExchange HashedRelationBroadcastMode(List(input[1, int, false], input[2, vector, true]))
+- *(3) Project [named_struct(id, id#10, name, name#11, clean, clean#90, char_lst, char_lst#95, vectorized_char_lst, vectorized_char_lst#107, hashes, hashes#256) AS datasetB#263, entry#264, hashValue#265]
+- *(3) Filter isnotnull(hashValue#265)
+- Generate posexplode(hashes#256), [id#10, name#11, clean#90, char_lst#95, vectorized_char_lst#107, hashes#256], false, [entry#264, hashValue#265]
+- *(2) Project [id#10, name#11, clean#90, char_lst#95, vectorized_char_lst#107, UDF(vectorized_char_lst#107) AS hashes#256]
+- InMemoryTableScan [char_lst#95, clean#90, id#10, name#11, vectorized_char_lst#107]
+- InMemoryRelation [id#10, name#11, clean#90, char_lst#95, vectorized_char_lst#107], StorageLevel(disk, memory, deserialized, 1 replicas)
+- *(4) Project [id#10, name#11, pythonUDF0#114 AS clean#90, pythonUDF2#116 AS char_lst#95, UDF(pythonUDF2#116) AS vectorized_char_lst#107]
+- BatchEvalPython [<lambda>(name#11), <lambda>(<lambda>(name#11)), <lambda>(<lambda>(name#11))], [id#10, name#11, pythonUDF0#114, pythonUDF1#115, pythonUDF2#116]
+- SortAggregate(key=[name#11], functions=[first(id#10, false)])
+- *(3) Sort [name#11 ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(name#11, 200)
+- SortAggregate(key=[name#11], functions=[partial_first(id#10, false)])
+- *(2) Sort [name#11 ASC NULLS FIRST], false, 0
+- Exchange RoundRobinPartitioning(8)
+- *(1) Filter AtLeastNNulls(n, id#10,name#11)
+- *(1) FileScan csv [id#10,name#11] Batched: false, Format: CSV, Location: InMemoryFileIndex[file:<path>, PartitionFilters: [], PushedFilters: [], ReadSchema: struct<id:string,name:string>
```
**How `data` looks:**
```
+-------+--------------------+--------------------+--------------------+--------------------+
| id| name| clean| char_lst| vectorized_char_lst|
+-------+--------------------+--------------------+--------------------+--------------------+
|3633038|MURATA MACHINERY LTD| MURATA MACHINERY|[M, U, R, A, T, A...|(33,[0,1,2,3,4,5,...|
|3632811|SOCIETE ANONYME D...|SOCIETE ANONYME D...|[S, O, C, I, E, T...|(33,[0,1,2,3,4,5,...|
|3632655|FUJIFILM CORPORATION| FUJIFILM|[F, U, J, I, F, I...|(33,[3,10,12,13,2...|
|3633318|HEINE OPTOTECHNIK...|HEINE OPTOTECHNIK...|[H, E, I, N, E, ...|(33,[0,1,2,3,4,5,...|
|3633523|SUNBEAM PRODUCTS INC| SUNBEAM PRODUCTS|[S, U, N, B, E, A...|(33,[0,1,2,4,5,6,...|
|3633300| HIVAL LTD| HIVAL| [H, I, V, A, L]|(33,[2,3,10,11,21...|
|3632657| NSK LTD| NSK| [N, S, K]|(33,[5,6,16],[1.0...|
|3633240|REHABILITATION IN...|REHABILITATION IN...|[R, E, H, A, B, I...|(33,[0,1,2,3,4,5,...|
|3632732|STUDIENGESELLSCHA...|STUDIENGESELLSCHA...|[S, T, U, D, I, E...|(33,[0,1,2,3,4,5,...|
|3632866|ENERGY CONVERSION...|ENERGY CONVERSION...|[E, N, E, R, G, Y...|(33,[0,1,3,5,6,7,...|
|3632895|ERGENICS POWER SY...|ERGENICS POWER SY...|[E, R, G, E, N, I...|(33,[0,1,3,4,5,6,...|
|3632897| MOLI ENERGY LIMITED| MOLI ENERGY|[M, O, L, I, , E...|(33,[0,1,3,5,7,8,...|
|3633275| NORDSON CORPORATION| NORDSON|[N, O, R, D, S, O...|(33,[5,6,7,8,14],...|
|3633256| PEROXIDCHEMIE GMBH| PEROXIDCHEMIE|[P, E, R, O, X, I...|(33,[0,3,7,8,9,11...|
|3632695| POWER CELL INC| POWER CELL|[P, O, W, E, R, ...|(33,[0,1,7,8,9,10...|
|3633037| ERGENICS INC| ERGENICS|[E, R, G, E, N, I...|(33,[0,3,5,6,8,9,...|
|3632878| FORD MOTOR COMPANY| FORD MOTOR|[F, O, R, D, , M...|(33,[1,4,7,8,13,1...|
|3632573| SAFT AMERICA INC| SAFT AMERICA|[S, A, F, T, , A...|(33,[0,1,2,3,4,6,...|
|3632852|ALCAN INTERNATION...| ALCAN INTERNATIONAL|[A, L, C, A, N, ...|(33,[0,1,2,3,4,5,...|
|3632698| KRUPPKOPPERS GMBH| KRUPPKOPPERS|[K, R, U, P, P, K...|(33,[0,6,7,8,12,1...|
|3633150|ALCAN INTERNATION...| ALCAN INTERNATIONAL|[A, L, C, A, N, ...|(33,[0,1,2,3,4,5,...|
|3632761|AMERICAN TELEPHON...|AMERICAN TELEPHON...|[A, M, E, R, I, C...|(33,[0,1,2,3,4,5,...|
|3632757|HITACHI KOKI COMP...| HITACHI KOKI|[H, I, T, A, C, H...|(33,[1,2,3,4,7,9,...|
|3632836|HUGHES AIRCRAFT C...| HUGHES AIRCRAFT|[H, U, G, H, E, S...|(33,[0,1,2,3,4,6,...|
|3633152| SOSY INC| SOSY| [S, O, S, Y]|(33,[6,7,18],[2.0...|
|3633052|HAMAMATSU PHOTONI...|HAMAMATSU PHOTONI...|[H, A, M, A, M, A...|(33,[1,2,3,4,5,6,...|
|3633450| AKZO NOBEL NV| AKZO NOBEL|[A, K, Z, O, , N...|(33,[0,1,2,5,7,10...|
|3632713| ELTRON RESEARCH INC| ELTRON RESEARCH|[E, L, T, R, O, N...|(33,[0,1,2,4,5,6,...|
|3632533|NEC ELECTRONICS C...| NEC ELECTRONICS|[N, E, C, , E, L...|(33,[0,1,3,4,5,6,...|
|3632562| TARGETTI SANKEY SPA| TARGETTI SANKEY SPA|[T, A, R, G, E, T...|(33,[0,1,2,3,4,5,...|
+-------+--------------------+--------------------+--------------------+--------------------+
only showing top 30 rows
```
**Hardware used:**
1. Master node: m5.2xlarge
8 vCore, 32 GiB memory, EBS only storage
EBS Storage:128 GiB
2. Slave nodes (10x): m5.4xlarge
16 vCore, 64 GiB memory, EBS only storage
EBS Storage:500 GiB
**Spark-submit settings used:**
```
spark-submit --master yarn --conf "spark.executor.instances=40" --conf "spark.default.parallelism=640" --conf "spark.shuffle.partitions=2000" --conf "spark.executor.cores=4" --conf "spark.executor.memory=14g" --conf "spark.driver.memory=14g" --conf "spark.driver.maxResultSize=14g" --conf "spark.dynamicAllocation.enabled=false" --packages graphframes:graphframes:0.7.0-spark2.4-s_2.11 run_disambiguation.py
```
**Task errors from Web UI**
```
ExecutorLostFailure (executor 21 exited caused by one of the running tasks) Reason: Slave lost
```
```
ExecutorLostFailure (executor 31 exited unrelated to the running tasks) Reason: Container marked as failed: container_1590592506722_0001_02_000002 on host: ip-172-31-47-180.eu-central-1.compute.internal. Exit status: -100. Diagnostics: Container released on a *lost* node.
```
**(Part of) executor logs:**
```
20/05/27 16:29:09 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (25 times so far)
20/05/27 16:29:13 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (26 times so far)
20/05/27 16:29:15 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (28 times so far)
20/05/27 16:29:17 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (0 time so far)
20/05/27 16:29:28 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (27 times so far)
20/05/27 16:29:28 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (26 times so far)
20/05/27 16:29:33 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (29 times so far)
20/05/27 16:29:38 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (1 time so far)
20/05/27 16:29:42 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (27 times so far)
20/05/27 16:29:46 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (28 times so far)
20/05/27 16:29:53 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (30 times so far)
20/05/27 16:29:57 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (2 times so far)
20/05/27 16:30:00 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (28 times so far)
20/05/27 16:30:05 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (29 times so far)
20/05/27 16:30:10 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (31 times so far)
20/05/27 16:30:15 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (3 times so far)
20/05/27 16:30:19 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (29 times so far)
20/05/27 16:30:22 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (30 times so far)
20/05/27 16:30:29 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (32 times so far)
20/05/27 16:30:32 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (4 times so far)
20/05/27 16:30:39 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (31 times so far)
20/05/27 16:30:39 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (30 times so far)
20/05/27 16:30:46 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (33 times so far)
20/05/27 16:30:47 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (5 times so far)
20/05/27 16:30:55 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (32 times so far)
20/05/27 16:30:59 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (31 times so far)
20/05/27 16:31:03 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (34 times so far)
20/05/27 16:31:06 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (6 times so far)
20/05/27 16:31:13 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (33 times so far)
20/05/27 16:31:14 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (32 times so far)
20/05/27 16:31:22 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (35 times so far)
20/05/27 16:31:24 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (7 times so far)
20/05/27 16:31:30 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (34 times so far)
20/05/27 16:31:32 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (33 times so far)
20/05/27 16:31:41 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (36 times so far)
20/05/27 16:31:44 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (8 times so far)
20/05/27 16:31:47 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (35 times so far)
20/05/27 16:31:48 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (34 times so far)
20/05/27 16:32:02 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (37 times so far)
20/05/27 16:32:03 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (9 times so far)
20/05/27 16:32:04 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (36 times so far)
20/05/27 16:32:08 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (35 times so far)
20/05/27 16:32:19 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (38 times so far)
20/05/27 16:32:20 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (37 times so far)
20/05/27 16:32:21 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (10 times so far)
20/05/27 16:32:26 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (36 times so far)
20/05/27 16:32:37 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (39 times so far)
20/05/27 16:32:37 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (11 times so far)
20/05/27 16:32:38 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (38 times so far)
20/05/27 16:32:45 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (37 times so far)
20/05/27 16:32:51 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (40 times so far)
20/05/27 16:32:56 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (12 times so far)
20/05/27 16:32:58 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (39 times so far)
20/05/27 16:33:03 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (38 times so far)
20/05/27 16:33:08 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (41 times so far)
20/05/27 16:33:13 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (13 times so far)
20/05/27 16:33:15 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (40 times so far)
20/05/27 16:33:20 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (39 times so far)
20/05/27 16:33:26 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1988.0 MB to disk (42 times so far)
20/05/27 16:33:30 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (41 times so far)
20/05/27 16:33:31 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (14 times so far)
20/05/27 16:33:36 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (40 times so far)
20/05/27 16:33:46 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1992.0 MB to disk (43 times so far)
20/05/27 16:33:47 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1988.0 MB to disk (42 times so far)
20/05/27 16:33:51 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (15 times so far)
20/05/27 16:33:54 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (41 times so far)
20/05/27 16:34:03 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1992.0 MB to disk (43 times so far)
20/05/27 16:34:04 INFO ShuffleExternalSorter: Thread 146 spilling sort data of 1992.0 MB to disk (44 times so far)
20/05/27 16:34:08 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (16 times so far)
20/05/27 16:34:14 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1988.0 MB to disk (42 times so far)
20/05/27 16:34:16 INFO PythonUDFRunner: Times: total = 774701, boot = 3, init = 10, finish = 774688
20/05/27 16:34:21 INFO ShuffleExternalSorter: Thread 147 spilling sort data of 1992.0 MB to disk (44 times so far)
20/05/27 16:34:22 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (17 times so far)
20/05/27 16:34:30 INFO PythonUDFRunner: Times: total = 773372, boot = 2, init = 9, finish = 773361
20/05/27 16:34:32 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1992.0 MB to disk (43 times so far)
20/05/27 16:34:39 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (18 times so far)
20/05/27 16:34:46 INFO ShuffleExternalSorter: Thread 89 spilling sort data of 1992.0 MB to disk (44 times so far)
20/05/27 16:34:52 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (19 times so far)
20/05/27 16:35:01 INFO PythonUDFRunner: Times: total = 776905, boot = 3, init = 11, finish = 776891
20/05/27 16:35:05 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (20 times so far)
20/05/27 16:35:19 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (21 times so far)
20/05/27 16:35:35 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (22 times so far)
20/05/27 16:35:52 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (23 times so far)
20/05/27 16:36:10 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (24 times so far)
20/05/27 16:36:29 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (25 times so far)
20/05/27 16:36:47 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (26 times so far)
20/05/27 16:37:06 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (27 times so far)
20/05/27 16:37:25 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (28 times so far)
20/05/27 16:37:44 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (29 times so far)
20/05/27 16:38:03 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (30 times so far)
20/05/27 16:38:22 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (31 times so far)
20/05/27 16:38:41 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (32 times so far)
20/05/27 16:38:59 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (33 times so far)
20/05/27 16:39:19 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (34 times so far)
20/05/27 16:39:39 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (35 times so far)
20/05/27 16:39:58 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (36 times so far)
20/05/27 16:40:18 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (37 times so far)
20/05/27 16:40:38 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (38 times so far)
20/05/27 16:40:57 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (39 times so far)
20/05/27 16:41:16 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (40 times so far)
20/05/27 16:41:35 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (41 times so far)
20/05/27 16:41:55 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1988.0 MB to disk (42 times so far)
20/05/27 16:42:19 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1992.0 MB to disk (43 times so far)
20/05/27 16:42:41 INFO ShuffleExternalSorter: Thread 145 spilling sort data of 1992.0 MB to disk (44 times so far)
20/05/27 16:42:59 ERROR CoarseGrainedExecutorBackend: RECEIVED SIGNAL TERM
20/05/27 16:42:59 INFO DiskBlockManager: Shutdown hook called
20/05/27 16:42:59 INFO ShutdownHookManager: Shutdown hook called
20/05/27 16:42:59 INFO ShutdownHookManager: Deleting directory /mnt/yarn/usercache/hadoop/appcache/application_1590592506722_0001/spark-73af8e3b-f428-47d4-9e13-fed4e19cc2cd
```
```
2020-05-27T16:41:16.336+0000: [GC (Allocation Failure) 2020-05-27T16:41:16.336+0000: [ParNew: 272234K->242K(305984K), 0.0094375 secs] 9076907K->8804915K(13188748K), 0.0094895 secs] [Times: user=0.12 sys=0.00, real=0.01 secs]
2020-05-27T16:41:34.686+0000: [GC (Allocation Failure) 2020-05-27T16:41:34.686+0000: [ParNew: 272242K->257K(305984K), 0.0084179 secs] 9076915K->8804947K(13188748K), 0.0084840 secs] [Times: user=0.09 sys=0.01, real=0.01 secs]
2020-05-27T16:41:35.145+0000: [GC (Allocation Failure) 2020-05-27T16:41:35.145+0000: [ParNew: 272257K->1382K(305984K), 0.0095541 secs] 9076947K->8806073K(13188748K), 0.0096080 secs] [Times: user=0.12 sys=0.00, real=0.01 secs]
2020-05-27T16:41:55.077+0000: [GC (Allocation Failure) 2020-05-27T16:41:55.077+0000: [ParNew: 273382K->2683K(305984K), 0.0097177 secs] 9078073K->8807392K(13188748K), 0.0097754 secs] [Times: user=0.12 sys=0.00, real=0.01 secs]
2020-05-27T16:41:55.513+0000: [GC (Allocation Failure) 2020-05-27T16:41:55.513+0000: [ParNew: 274683K->3025K(305984K), 0.0093345 secs] 9079392K->8807734K(13188748K), 0.0093892 secs] [Times: user=0.12 sys=0.00, real=0.01 secs]
2020-05-27T16:42:05.481+0000: [GC (Allocation Failure) 2020-05-27T16:42:05.481+0000: [ParNew: 275025K->4102K(305984K), 0.0092950 secs] 9079734K->8808830K(13188748K), 0.0093464 secs] [Times: user=0.12 sys=0.00, real=0.01 secs]
2020-05-27T16:42:18.711+0000: [GC (Allocation Failure) 2020-05-27T16:42:18.711+0000: [ParNew: 276102K->2972K(305984K), 0.0098928 secs] 9080830K->8807700K(13188748K), 0.0099510 secs] [Times: user=0.13 sys=0.00, real=0.01 secs]
2020-05-27T16:42:36.493+0000: [GC (Allocation Failure) 2020-05-27T16:42:36.493+0000: [ParNew: 274972K->3852K(305984K), 0.0094324 secs] 9079700K->8808598K(13188748K), 0.0094897 secs] [Times: user=0.11 sys=0.00, real=0.01 secs]
2020-05-27T16:42:40.880+0000: [GC (Allocation Failure) 2020-05-27T16:42:40.880+0000: [ParNew: 275852K->2568K(305984K), 0.0111794 secs] 9080598K->8807882K(13188748K), 0.0112352 secs] [Times: user=0.13 sys=0.00, real=0.01 secs]
Heap
par new generation total 305984K, used 261139K [0x0000000440000000, 0x0000000454c00000, 0x0000000483990000)
eden space 272000K, 95% used [0x0000000440000000, 0x000000044fc82cf8, 0x00000004509a0000)
from space 33984K, 7% used [0x00000004509a0000, 0x0000000450c220a8, 0x0000000452ad0000)
to space 33984K, 0% used [0x0000000452ad0000, 0x0000000452ad0000, 0x0000000454c00000)
concurrent mark-sweep generation total 12882764K, used 8805314K [0x0000000483990000, 0x0000000795e63000, 0x00000007c0000000)
Metaspace used 77726K, capacity 79553K, committed 79604K, reserved 1118208K
class space used 10289K, capacity 10704K, committed 10740K, reserved 1048576K
```
[Screenshot of executors](https://i.stack.imgur.com/MsKzB.png)
**What I tried:**
* Changing `spark.sql.shuffle.partitions`
* Changing `spark.default.parallelism`
* Repartition the dataframe
How can I solve this issue?
Thanks in advance!
Thijs
|
2020/05/28
|
[
"https://Stackoverflow.com/questions/62065607",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12383245/"
] |
`approxSimilarityJoin` will only parallelize well across workers if the tokens fed into MinHash are sufficiently distinct. Because individual character tokens appear frequently across many records, include an `NGram` transformation on your character list to make each token rarer; this will greatly reduce data skew and relieve the memory strain.
MinHash simulates the process of creating a random permutation of your token population and selects the token in the sample set that appears first in the permutation. Since you are using individual characters as tokens, let's say you select a MinHash seed that makes the character `e` the first in your random permutation. In this case, every row with the letter `e` in it will have a matching MinHash and will be shuffled to the same worker for set comparison. This will cause extreme data skew and out of memory errors.
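A sketch of what that could look like, assuming the question's `char_lst` column (the column names and `n` are illustrative):

```
from pyspark.ml.feature import NGram

# 3-grams over the character list are far rarer than single characters,
# so MinHash buckets spread much more evenly across workers.
ngram = NGram(n=3, inputCol="char_lst", outputCol="char_ngrams")
data = ngram.transform(data)
# ...then point CountVectorizer / MinHashLSH at "char_ngrams" instead.
```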
|
Thanks for the detailed explanation.
What threshold are you using, and how are you reducing false negatives?
| 9,194
|
29,970,679
|
I have this simple code in Python:
```
import sys
class Crawler(object):
def __init__(self, num_of_runs):
self.run_number = 1
self.num_of_runs = num_of_runs
def single_run(self):
#do stuff
pass
def run(self):
while self.run_number <= self.num_of_runs:
self.single_run()
print self.run_number
self.run_number += 1
if __name__ == "__main__":
num_of_runs = sys.argv[1]
crawler = Crawler(num_of_runs)
crawler.run()
```
Then, I run it this way:
`python path/crawler.py 10`
From my understanding, it should loop 10 times and stop, right? Why doesn't it?
|
2015/04/30
|
[
"https://Stackoverflow.com/questions/29970679",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3717289/"
] |
```
num_of_runs = sys.argv[1]
```
`num_of_runs` is a string at that stage.
```
while self.run_number <= self.num_of_runs:
```
You are comparing a `string` and an `int` here.
A simple way to fix this is to convert it to an int:
```
num_of_runs = int(sys.argv[1])
```
Another way to deal with this is to use `argparse`.
```
import argparse
parser = argparse.ArgumentParser(description='The program does bla and bla')
parser.add_argument(
'my_int',
type=int,
help='an integer for the script'
)
args = parser.parse_args()
print args.my_int
print type(args.my_int)
```
Now if you execute the script like this:
```
./my_script.py 20
```
The output is:
> 20
Using argparse also gives you the `-h` option by default:
```
python my_script.py -h
usage: i.py [-h] my_int
The program does bla and bla
positional arguments:
my_int an integer for the script
optional arguments:
-h, --help show this help message and exit
```
For more information, have a look at the [argparse](https://docs.python.org/dev/library/argparse.html) documentation.
Note: The code I have used is from the argparse documentation, but has been slightly modified.
|
When accepting input from the command line, data is passed as a string. You need to convert this value to an `int` before you pass it to your `Crawler` class:
```
num_of_runs = int(sys.argv[1])
```
---
You can also use this to determine whether the input is valid: if it doesn't convert to an int, a `ValueError` is raised.
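For example, you could guard the conversion (a small sketch):

```
import sys

# Exit with a usage message when the argument is missing or not an int.
try:
    num_of_runs = int(sys.argv[1])
except (IndexError, ValueError):
    sys.exit("usage: crawler.py <number of runs>")
```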
| 9,196
|
54,873,222
|
I have a scenario where the data is like below in a text file:
```
first_id;"second_id";"name";"performer";"criteria"
12345;"13254";"abc";"def";"criteria_1"
65432;"13254";"abc";"ghi";"criteria_1"
24561;"13254";"abc";"pqr";"criteria_2"
24571;"13254";"abc";"jkl";"criteria_2"
first_id;"second_id";"name";"performer";"criteria"
12345;"78452";"mno";"xyz";"criteria_1"
24561;"78452";"mno";"tuv";"criteria_2"
so on..
```
Note: The name column value remains the same for each result fetched, but the performer varies per row and has a criteria set. The second\_id column values are the same for each result fetched.
For the above data, I need to capture the name and performer and move them to an Excel sheet as comma-separated values, like the output below. The author value is based on the name column defined above, approver values are based on criteria\_1, and reviewer values are based on criteria\_2.
```
**author| approver| reviewer** --> columns in excel
abc | def, ghi| pqr, jkl --> values corresponding to their columns
```
See the picture below for my expected output. The author comes from the "name" field defined above; the approver field is determined by "criteria" = criteria_1, and the reviewer field by "criteria" = criteria_2.
[picture for output](https://i.stack.imgur.com/cvzyv.png)
How can I write a Python script to get the above output? Let me know if any further information is needed.
|
2019/02/25
|
[
"https://Stackoverflow.com/questions/54873222",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6909182/"
] |
Here's a general answer that will scale up to as many data frames as you have:
```
library(dplyr)
df_list = list(df5 = df5, df6 = df6)
big_df = bind_rows(df_list, .id = "source")
big_df = big_df %>% group_by(Year) %>% summarize_if(is.numeric, mean) %>%
mutate(source = "Mean") %>%
bind_rows(big_df)
ggplot(big_df, aes(x = Year, y = Total, color = source)) +
geom_line()
```
[](https://i.stack.imgur.com/vApcE.png)
Naming the `list` more appropriately will help with the plot labels. If you do have more data frames, I'd strongly recommend reading my answer at [How to make a list of data frames](https://stackoverflow.com/a/24376207/903061).
Using this data:
```
df5 = read.table(text = "Year VegC LittC SoilfC SoilsC Total
1 2013 1.820858 1.704079 4.544182 1.964507 10.03363
2 2014 1.813573 1.722106 4.548287 1.964658 10.04863
3 2015 1.776853 1.722110 4.553425 1.964817 10.01722
4 2016 1.794462 1.691728 4.556691 1.964973 10.00785
5 2017 1.808207 1.708956 4.557116 1.965063 10.03936
6 2018 1.831758 1.728973 4.559844 1.965192 10.08578",
header = T)
df6 = read.table(text =
" Year VegC LittC SoilfC SoilsC Total
1 2013 1.832084 1.736137 4.542052 1.964454 10.07474
2 2014 1.806351 1.741353 4.548349 1.964633 10.06069
3 2015 1.825316 1.729084 4.552433 1.964792 10.07164
4 2016 1.845673 1.735861 4.553766 1.964900 10.10020
5 2017 1.810343 1.754477 4.556542 1.965033 10.08640
6 2018 1.814503 1.728337 4.561960 1.965191 10.07001",
header = T)
```
|
If I understand you right, you would like to plot the mean of the two totals for each year, e.g. 10.054185 for 2013.
If you have one row per year, you can create a new column and add it to your existing ggplot:
```
df <- data.frame(Year = dataframe5$Year)
df$total5 <- dataframe5$Total
df$total6 <- dataframe6$Total
df$totalmean <- (df$total5+df$total6)/2
```
By plotting `df$totalmean` you should get the mean line. Just add it with `+ geom_line(...)` in the existing ggplot.
| 9,197
|
41,951,204
|
I am new to Python. I'm trying to connect my client to the broker, but I am getting the error "global name 'mqttClient' is not defined".
Can anyone help me figure out what is wrong with my code?
Here is my code,
**Test.py**
```
#!/usr/bin/env python
import time, threading
import mqttConnector
class UtilsThread(object):
def __init__(self):
thread = threading.Thread(target=self.run, args=())
thread.daemon = True # Daemonize thread
thread.start() # Start the execution
class SubscribeToMQTTQueue(object):
def __init__(self):
thread = threading.Thread(target=self.run, args=())
thread.daemon = True # Daemonize thread
thread.start() # Start the execution
def run(self):
mqttConnector.main()
def connectAndPushData():
PUSH_DATA = "xxx"
mqttConnector.publish(PUSH_DATA)
def main():
SubscribeToMQTTQueue() # connects and subscribes to an MQTT Queue that receives MQTT commands from the server
LAST_TEMP = 25
try:
if LAST_TEMP > 0:
connectAndPushData()
time.sleep(5000)
except (KeyboardInterrupt, Exception) as e:
print "Exception in RaspberryAgentThread (either KeyboardInterrupt or Other)"
print ("STATS: " + str(e))
pass
if __name__ == "__main__":
main()
```
**mqttConnector.py**
```
#!/usr/bin/env python
import time
import paho.mqtt.client as mqtt
def on_connect(client, userdata, flags, rc):
print("MQTT_LISTENER: Connected with result code " + str(rc))
def on_message(client, userdata, msg):
print 'MQTT_LISTENER: Message Received by Device'
def on_publish(client, userdata, mid):
print 'Temperature Data Published Succesfully'
def publish(msg):
# global mqttClient
mqttClient.publish(TOPIC_TO_PUBLISH, msg)
def main():
MQTT_IP = "IP"
MQTT_PORT = "port"
global TOPIC_TO_PUBLISH
TOPIC_TO_PUBLISH = "xxx/laptop-management/001/data"
global mqttClient
mqttClient = mqtt.Client()
mqttClient.on_connect = on_connect
mqttClient.on_message = on_message
mqttClient.on_publish = on_publish
while True:
try:
mqttClient.connect(MQTT_IP, MQTT_PORT, 180)
mqttClient.loop_forever()
except (KeyboardInterrupt, Exception) as e:
print "MQTT_LISTENER: Exception in MQTTServerThread (either KeyboardInterrupt or Other)"
print ("MQTT_LISTENER: " + str(e))
mqttClient.disconnect()
print "MQTT_LISTENER: " + time.asctime(), "Connection to Broker closed - %s:%s" % (MQTT_IP, MQTT_PORT)
if __name__ == '__main__':
main()
```
I'm getting this,
```
Exception in RaspberryAgentThread (either KeyboardInterrupt or Other)
STATS: global name 'mqttClient' is not defined
```
|
2017/01/31
|
[
"https://Stackoverflow.com/questions/41951204",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7134849/"
] |
* **For MongoDB -**
Use AWS quick start MongoDB
<http://docs.aws.amazon.com/quickstart/latest/mongodb/overview.html>
<http://docs.aws.amazon.com/quickstart/latest/mongodb/architecture.html>
* **For rest of the docker stack i.e NodeJS & Nginx -**
Use the AWS ElasticBeanstalk Multi Container Deployment
<http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_ecs.html>
<http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_v2config.html>
|
Elastic Beanstalk supports Docker, as [documented here](http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html). Elastic Beanstalk would manage the EC2 resources for you, which should make things a bit easier on you.
| 9,198
|
59,904,969
|
Introduction
============
I want to combine my separate Minecraft worlds into a single world and it seemed like a relatively easy feat, but as I did research it evolved into the need to make a custom program.
The Struggle
------------
I started by shifting the region files and combining them in one region folder, which seemed like the obvious solution and it almost worked. **Note: I've opened the files and it seems entire sectors have their coordinates stored, not entities, hence the terrain itself is spatially mismatched with the region file name.**
That led to quite a bit of lag when I opened the client and the regions failed to render. I read up on the Anvil file format and imagined a scheme for reading NBT files. I figured I could manually read out the bytes and edit them, but in my continued research I got conflicting answers as to whether region files are gzipped.
I finished enough code to read some raw bytes, but the byte values didn't come out as I expected.
According to the info I have on NBT files, they all start with a CompoundTag and a CompoundTag starts as a single byte valued as 10, or x0A.
This is where I got my format information: <https://minecraft.gamepedia.com/NBT_format>
Here's a screenshot of what actually came out:
[](https://i.stack.imgur.com/de1Fe.png)
*Note: The class description in the screenshot is not accurate. I just quickly filled in enough to read the bytes, not flesh out the UI function.*
I assume these bytes coming out as non-sense is a sign that the file is compressed. I found this as a start to the gzip problem:
<http://gnuwin32.sourceforge.net/packages/gzip.htm>
I imagine if I could get this installed it would unzip this .mca file and I could read the bytes as expected, but I don't understand the installation instructions. It says use the "Shell Commands, 'configure', 'make' and 'make install'". To me that sounds like Unix, but the file I downloaded is for Windows? There aren't any exe's, but there are quite a few C files. I don't have a C-compiler. . .
Note: I still have not got the gzip software to work.
### Post Script
I've seen similar questions asked here, but all of them were either old (2016ish) with dead links to software that used to work, or they were recent and unanswered. I found one specific copy of this question asked 5 months ago, but I had to make an account to comment. Here's the link: [How can read Minecraft .mca files so that in python I can extract individual blocks?](https://stackoverflow.com/questions/57397934/how-can-read-minecraft-mca-files-so-that-in-python-i-can-extract-individual-blo) His question is with regard to a Python implementation. He said he found an NBT library for Python, but it was rejecting his MCA files for being *not-gzipped*.
I've got a lead on understanding the problem because I have the NBTExplorer source code (see the answer I posted), but I'll have to update on how that pans out. As far as getting my world fixed, I think I have a viable solution now.
If anyone could point me to a finished Java library, with source code, that opens .mca's, or a discussion board related to this topic, that'd be cool. I'm still also interested in how file compression works, but that's probably outside this question's scope. I realize this isn't directly bug or error related; it was more that I didn't know what further steps to take to write code that accomplishes this task.
Update
------
I found someone else's program to do this and posted it as an answer, but I'd still like to know how the file is converted from bytes to useable info. Using the manual edit method of the answer I posted, I will need at most **241,664 manual edits**, so I still need a better solution.
|
2020/01/24
|
[
"https://Stackoverflow.com/questions/59904969",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12778238/"
] |
First off: As far as I know there is no additional information about "where the chunks are" stored in the region files. There are 32 (x direction) \* 32 (z direction) = 1024 chunks stored within one region file, and each has its data position recorded within the file. So the chunks are just numbered within the file itself, and the first 8192 bytes only record whether there is any data for a specific chunk, where it is found within the file, and when it was last updated. Where the complete region (those 1024 chunks) is positioned within the world can be worked out from the file name, where the regions themselves are numbered in the x and z directions.
So in your case you should be able to rename your region files in a way that keeps them together as they are in the original worlds, and you should be able to merge them.
Second: The NBT format is not the first thing to look at when you want to decode the data. The region files have their own structure: <https://minecraft.gamepedia.com/Region_file_format>, and when you get to the actual data, using Zlib (RFC1950), it gets complicated...
Anyway, if you want further information on how to decode it I can give you some (the specifications <https://www.rfc-editor.org/rfc/rfc1950.html> and <https://www.rfc-editor.org/rfc/rfc1951> for Zlib are hard to understand, at least they were for me). But there's a point where I myself am struggling right now, which is why I came across this question.
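For what it's worth, here is a minimal Python sketch of that layout (the file name and chunk coordinates are placeholders, and only the common zlib case is handled):
```
import struct
import zlib

def read_chunk(region_path, cx, cz):
    """Return the raw (uncompressed) NBT bytes of one chunk, or None."""
    with open(region_path, "rb") as f:
        header = f.read(8192)                            # 4 KiB locations + 4 KiB timestamps
        i = 4 * ((cx & 31) + (cz & 31) * 32)             # this chunk's location entry
        offset = int.from_bytes(header[i:i + 3], "big")  # offset in 4 KiB sectors
        if offset == 0:
            return None                                  # chunk was never generated
        f.seek(offset * 4096)
        length, compression = struct.unpack(">IB", f.read(5))
        data = f.read(length - 1)
    if compression != 2:                                 # 2 = zlib, the usual case
        raise NotImplementedError("only zlib-compressed chunks handled here")
    return zlib.decompress(data)                         # starts with 0x0A, the CompoundTag byte
```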
|
I found an editor!
==================
Now I can *edit*, but I don't know *how* the editing works. I haven't *learned* anything, but I did finally find someone else's editor. Not quite what I wanted because I wanted to know how to do this myself.
**Update:** To fix a region using this software I have to manually edit 2 fields for up to 32x32 chunks, and I have *118 regions* that I need to fix. **That's 241,664 potential manual edits!** This solution is not viable on a reasonable timescale, but it's the best I have so far:
I found this page: <https://fileinfo.com/extension/mca>
Which linked to this page: <https://fileinfo.com/software/nbtexplorer/nbtexplorer>
Which linked to this page: <https://github.com/jaquadro/NBTExplorer/releases>
I installed the software and it automatically linked to the .minecraft folder, here's a screenshot of the GUI:
[](https://i.stack.imgur.com/xt7vy.png)
On the bright side, the application download page also has a download link for the source, so I intend to read that! I've opened two files so far to take a glance and they were not commented at all. They're also written in C# which I have never seen before, but I've heard it's very similar to Java, so maybe I'll learn that language too.
| 9,200
|
34,344,171
|
Getting a strange error. I created a database in MySQL and set Django to use it, using the right settings in my Django settings.py. But there's still an error that no database has been selected.
**First I tried:**
```
python manage.py syncdb
```
**Got this traceback:**
```
django.db.utils.OperationalError: (1046, 'No database selected')
```
**settings.py:**
```
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'my_db',
'USER': 'username',
'PASSWORD': 'password',
'HOST': 'localhost',
'PORT': '3306',
}
}
```
What have I missed?
|
2015/12/17
|
[
"https://Stackoverflow.com/questions/34344171",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2058553/"
] |
Check to make sure your database my\_db exists in your MySQL instance. Log into MySQL and run;
```
show databases;
```
make sure my\_db exists. If it does not, run
```
create database my_db;
```
|
GRANT access privileges to the user mentioned in the file
```
GRANT ALL PRIVILEGES ON database_name.* TO 'username'@'localhost';
```
You need not grant all privileges; modify accordingly.
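Depending on your MySQL setup you may also need to reload the grant tables afterwards:
```
FLUSH PRIVILEGES;
```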
| 9,202
|
35,189,234
|
I am trying to seed an instance of Python's random. However, when I run the code below it generates a different answer each time, even if the user input stays the same.
```
import random
import hashlib
mapSeed = hashlib.sha1(input("Enter seed: ").encode('utf-8'))
rnd = random.Random()
rnd.seed(mapSeed)
print(mapSeed)
print(rnd.random())
```
|
2016/02/03
|
[
"https://Stackoverflow.com/questions/35189234",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4831464/"
] |
Assuming that the seed remains constant across all executions, the output will never change. Look at this:
```
>>> import random
>>> r = random.Random()
>>> r.seed(515)
>>> r.random()
0.1646746342919
>>> r.random()
0.9567223584846931
>>> r.seed(515)
>>> r.random()
0.1646746342919
>>> r.random()
0.9567223584846931
```
However, the seed you pass here is not the user's string but the `hashlib` object built from it. `random.seed()` falls back to `hash()` for such objects, and their default `hash()` is based on their identity in memory, so it differs on every run; the values the `Random` object returns therefore differ too.
If you want the output to be constant, *the seed cannot change.* Seeding with something stable derived from the input, such as the digest, fixes this; see the sketch below.
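A minimal sketch of that fix, seeding with the (stable) hex digest string instead of the hash object:
```
import hashlib
import random

map_seed = hashlib.sha1("user input".encode("utf-8")).hexdigest()
rnd = random.Random()
rnd.seed(map_seed)   # a str seed is deterministic
print(rnd.random())  # same value on every run for the same input
```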
|
One very important concept regarding "random numbers" is that they are not actually random; they depend on:
1) Algorithm used to generate the "random" sequence of numbers
2) The seed for the algorithm
The same seed will generate the same sequence of random numbers. Why is that useful? Because if you can reproduce the same stream of random numbers, you can test changes in your code against the very same stream and check whether the final output changed because of your code rather than because of a different random stream. This is very common in simulation processes (queues, traffic simulations, etc.).
So, same seed = same stream of random numbers.
Change the seed to get different streams of random numbers.
I hope it helps.
| 9,203
|
38,772,498
|
I am running the command in my django project:-
```
$python manage.py runserver
```
then I am getting the error like:-
```
from django.core.context_processors import csrf
ImportError: No module named context_processors
```
here is results of
```
$ pip freeze
dj-database-url==0.4.1
dj-static==0.0.6
Django==1.10
django-toolbelt==0.0.1
gunicorn==19.6.0
pkg-resources==0.0.0
psycopg2==2.6.2
static3==0.7.0
```
and
```
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
```
I searched many answers on Stack Overflow but couldn't resolve the error.
|
2016/08/04
|
[
"https://Stackoverflow.com/questions/38772498",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6623406/"
] |
The `csrf` context processor was moved out of `django.core.context_processors`; it now lives in `django.template.context_processors` (the CSRF *decorators* live in `django.views.decorators.csrf`). You can refer to it [here](https://docs.djangoproject.com/ja/1.9/ref/csrf/)
|
`context_processors` in Django 1.10 and above has been moved from `core` to `template`.
Replace
```
django.core.context_processors
```
with
```
django.template.context_processors
```
| 9,206
|
965,663
|
We've had these for a lot of other languages. The one for [C/C++](https://stackoverflow.com/questions/469696/what-is-your-most-useful-c-c-snippet) was quite popular, so was the equivalent for [Python](https://stackoverflow.com/questions/691946/short-and-useful-python-snippets). I thought one for BASH would be interesting too.
|
2009/06/08
|
[
"https://Stackoverflow.com/questions/965663",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
In a BASH script, assign an argument to a variable, falling back to a default if the argument is missing:
```
MYVAR=${1:-default}
```
$MYVAR will contain the first argument if one was given, otherwise "default".
|
To remove .svn directories you may also use the combination 'find...-prune...-exec...' (without xargs):
```
# tested on Mac OS X
find -x -E . \( -type d -regex '.*/\.svn/*.*' -prune \) -ls # test
find -x -E . \( -type d -regex '.*/\.svn/*.*' -prune \) -exec /bin/rm -PRfv '{}' \;
```
| 9,207
|
2,980,031
|
I am using a module that is part of a commercial software API. The good news is there is a Python module - the bad news is that it's pretty unpythonic.
To iterate over rows, the following syntax is used:
```
cursor = gp.getcursor(table)
row = cursor.next()
while row:
#do something with row
row = cursor.next()
```
What is the most pythonic way to deal with this situation? I have considered creating a first class function/generator and wrapping calls to a for loop in it:
```
def cursor_iterator(cursor):
row = cursor.next()
while row:
yield row
row = cursor.next()
[...]
cursor = gp.getcursor(table)
for row in cursor_iterator(cursor):
# do something with row
```
This is an improvement, but feels a little clumsy. Is there a more pythonic approach? Should I create a wrapper class around the `table` type?
|
2010/06/05
|
[
"https://Stackoverflow.com/questions/2980031",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/103225/"
] |
Assuming that one of Next and next is a typo and they're both the same, you can use the not-so-well-known variant of the built-in iter function:
```
for row in iter(cursor.next, None):
<do something>
```
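For reference, a self-contained (Python 3) illustration of the two-argument form; `readline` returns `""` at end of file, which serves as the sentinel:
```
import io

buf = io.StringIO("a\nb\nc\n")
for line in iter(buf.readline, ""):  # stop when the sentinel "" is returned
    print(line.strip())              # prints a, b, c
```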
|
The best way is to use a Python iterator interface around the `table` object, imho:
```
class Table(object):
def __init__(self, table):
self.table = table
def rows(self):
cursor = gp.getcursor(self.table)
row = cursor.next()
while row:
yield row
row = cursor.next()
```
Now you just call:
```
my_table = Table(t)
for row in my_table.rows():
# do stuff with row
```
It's very readable, in my opinion.
| 9,217
|
74,466,125
|
Python is new to me and I'm having a little problem with for loops.
I'm used to for loops in Java, where you can set the integers as you like in the loop, but I can't get it right in Python.
The task I was given is to make a function that returns True or False.
The function gets 3 integers: short rope amount, long rope amount and wanted length.
It's known that the short rope length is 1 meter and the long rope length is 5 meters.
If the wanted length is in the range of possible lengths of the ropes the function will return True, else False.
For example, 1 short rope and 2 long ropes can get you the following lengths: [1, 5, 6, 10, 11], and if the wanted length that the function got is in this list it should return True.
Here is my code:
```
def wantedLength(short_amount, long_amount, wanted_length):
short_rope_length = 1
long_rope_length = 5
for i in range(short_amount + 1):
for j in range(long_amount + 1):
my_length = [short_rope_length * i + long_rope_length * j, ", "]
if wanted_length in my_length:
return True
else:
return False
```
but when I run the code I get the following error:
TypeError: argument of type 'int' is not iterable
what am I doing wrong in the for loop statement?
thanks in advance!
I tried changing the for loops with other expressions like [short\_amount], etc.
The traceback, as requested:
```
Traceback (most recent call last):
File "C:\Users\barva\PycharmProjects\Giraffe\Ariel-Exc\Exc_2.py", line 89, in <module>
print(wantedLength(a,b,c))
File "C:\Users\barva\PycharmProjects\Giraffe\Ariel-Exc\Exc_2.py", line 73, in wantedLength
if wanted_length in my_length:
TypeError: argument of type 'int' is not iterable
```
|
2022/11/16
|
[
"https://Stackoverflow.com/questions/74466125",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20523368/"
] |
Use a nested list comprehension:
```
pd.DataFrame([[k1, k2, v]
for k1,d in sample_dict.items()
for k2,v in d.items()],
columns=['job', 'person', 'age'])
```
Output:
```
job person age
0 doctor docter_a 26
1 doctor docter_b 40
2 doctor docter_c 42
3 teacher teacher_x 21
4 teacher teacher_y 45
5 teacher teacher_z 33
```
|
You can construct a `zip` of length 3 elements, and feed them to `pd.DataFrame` after reshaping:
```
zip_list = [list(zip([key]*len(sample_dict['doctor']),
sample_dict[key],
sample_dict[key].values()))
for key in sample_dict.keys()]
col_len = len(sample_dict['doctor']) # or use any other valid key
output = pd.DataFrame(np.ravel(zip_list).reshape(col_len**2, col_len))
```
| 9,220
|
19,551,186
|
How do I let the user write text in my python program that will transfer into a file using open "w"?
I only figured out how to write text into the separate document using print. But how is it done if I want user input to be written to a file? In short: let the user write text to a separate document.
Here is my code so far:
```
def main():
print ("This program let you create your own HTML-page")
name = input("Enter the name for your HTML-page (end it with .html): ")
outfile = open(name, "w")
code = input ("Enter your code here: ")
print ("This is the only thing getting written into the file", file=outfile)
main ()
```
|
2013/10/23
|
[
"https://Stackoverflow.com/questions/19551186",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2877270/"
] |
First off, use raw\_input instead of input. This way you capture the text as a string instead of trying to evaluate it. But to answer your question:
```
with open(name, 'w') as o:
o.write(code)
```
If you would like them to be able to keep typing their HTML file, you can also surround that code in a loop that keeps repeating until the user enters a certain string.
**EDIT:** Example of loop to allow continuous user input:
```
with open(name, 'w') as o:
code = input("blah")
while (code != "exit")
o.write('{0}\n'.format(code))
code = input("blah")
```
That way, the loop will keep running until the user types in "exit" or whatever string you choose. The format line inserts a newline into the file. I'm still on python2 so I'm not completely sure how input handles newlines, but if it includes it, feel free to remove the format line and use it as above.
|
```
def main():
print ("This program let you create your own HTML-page")
name = input("Enter the name for your HTML-page (end it with .html): ")
outfile = open(name, 'w')
code = input ("Enter your code here: ")
outfile.write(code)
main ()
```
This does not accept multi-line code entries. You will need an additional module for that.
| 9,221
|
61,624,276
|
I'm looking for a pythonic way to define multiple related constants in a single file to be used in multiple modules. I came up with multiple options, but all of them have downsides.
### Approach 1 - simple global constants
```py
# file resources/resource_ids.py
FOO_RESOURCE = 'foo'
BAR_RESOURCE = 'bar'
BAZ_RESOURCE = 'baz'
QUX_RESOURCE = 'qux'
```
```py
# file runtime/bar_handler.py
from resources.resource_ids import BAR_RESOURCE
# ...
def my_code():
value = get_resource(BAR_RESOURCE)
```
This is simple and universal, but has a few downsides:
* `_RESOURCE` has to be appended to all constant names to provide context
* Inspecting the constant name in IDE will not display other constant values
### Approach 2 - enum
```py
# file resources/resource_ids.py
from enum import Enum, unique
@unique
class ResourceIds(Enum):
foo = 'foo'
bar = 'bar'
baz = 'baz'
qux = 'qux'
```
```
# file runtime/bar_handler.py
from resources.resource_ids import ResourceIds
# ...
def my_code():
value = get_resource(ResourceIds.bar.value)
```
This solves the problems of the first approach, but the downside of this solution is the need to use `.value` in order to get the string representation (assuming we need the string value and not just a consistent enum value). Failure to append `.value` can result in hard-to-debug issues at runtime.
### Approach 3 - class variables
```py
# file resources/resource_ids.py
class ResourceIds:
foo = 'foo'
bar = 'bar'
baz = 'baz'
qux = 'qux'
```
```
# file runtime/bar_handler.py
from resources.resource_ids import ResourceIds
# ...
def my_code():
value = get_resource(ResourceIds.bar)
```
This approach is my favorite, but it may be misinterpreted - classes are made to be instantiated. And while code correctness wouldn't suffer from using an instance of the class instead of the class itself, I would like to avoid this waste.
Another disadvantage of this approach is that the values are not actually constant. Any code client can potentially change them.
Is it possible to prevent a class from being instantiated? Am I missing some idiomatic way of grouping closely related constants?
|
2020/05/05
|
[
"https://Stackoverflow.com/questions/61624276",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/895490/"
] |
Use `Enum` and mix in `str`:
```
@unique
class ResourceIds(str, Enum):
foo = 'foo'
bar = 'bar'
baz = 'baz'
qux = 'qux'
```
Then you won't need to compare against `.value`:
```
>>> ResourceIds.foo == 'foo'
True
```
And you still get good debugging info:
```
>>> ResourceIds.foo
<ResourceIds.foo: 'foo'>
>>> list(ResourceIds.foo.__class__)
[
<ResourceIds.foo: 'foo'>,
<ResourceIds.bar: 'bar'>,
<ResourceIds.baz: 'baz'>,
<ResourceIds.qux: 'qux'>,
]
```
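As a side note, on Python 3.11+ the standard library ships `enum.StrEnum`, which gives the same behaviour without the explicit mixin; a quick sketch:
```
from enum import StrEnum, unique  # Python 3.11+

@unique
class ResourceIds(StrEnum):
    foo = 'foo'
    bar = 'bar'

print(ResourceIds.foo == 'foo')  # True
```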
|
There are a few ways you can do this; I don't really like using enums in Python because you don't *really* need them IMO ;)
This is how most packages out there do it AFAIK:
```
# module_name.py
CSV = 'csv'
JSON = 'json'
def save(path, format=CSV):
# do some thing with format
...
# other_module.py
import module_name
module_name.save('my_path', format=module_name.CSV)
```
another way is like this:
```
# module_name.py
options = {
    'csv': some_csv_processing_function,
    'json': some_json_processing_function,
}

def save(path, format=options['csv']):
    # do something with format
    ...

# other_module.py
import module_name
module_name.save('my_path', format=module_name.options['csv'])
```
(kinda unrelated) You can also make your dicts classes:
```
class DictClass:
def __init__(self, dict_class):
self.__dict__ = dict_class
options = DictClass({
    'csv': some_csv_processing_function,
    'json': some_json_processing_function,
})
```
now you can access your dictionary as an object like: `options.csv`
| 9,222
|
45,836,369
|
I know what iterators and generators are. I know the iteration protocol, and I can create both. I read the following line everywhere: "Every generator is an iterator, but not vice versa." I understand the first part, but I don't understand the "not vice versa" part. What does the generator object have that any simple iterator object does not?
I read [this question](https://stackoverflow.com/questions/2776829/difference-between-pythons-generators-and-iterators) but it does not explain why an iterator is not a generator. Is it just the syntax `yield` that explains the difference?
Thanks in advance.
|
2017/08/23
|
[
"https://Stackoverflow.com/questions/45836369",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5567387/"
] |
In Python 3 an iterator is an object with a `__next__` method (and an `__iter__` method that returns itself). That's all.
A generator has a `__next__` method too, but it is produced by code that uses a `yield` statement (a generator function or generator expression).
So both objects have a `__next__` method and are therefore iterators, but an arbitrary iterator doesn't have to come from a `yield` statement, so an iterator is not necessarily a generator.
In practice this means that when you create a generator, none of its code runs up front; it runs lazily, resuming once per requested value. With a more classical iterator, the setup code runs once, when you construct it.
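A small sketch of that difference:
```
# A hand-written iterator: setup runs in __init__, values come from __next__.
class CountUpTo:
    def __init__(self, n):
        self.i, self.n = 0, n
    def __iter__(self):
        return self
    def __next__(self):
        if self.i >= self.n:
            raise StopIteration
        self.i += 1
        return self.i

# A generator: same protocol, but the body only runs as values are requested.
def count_up_to(n):
    for i in range(1, n + 1):
        yield i

print(list(CountUpTo(3)))    # [1, 2, 3] -- an iterator, not a generator
print(list(count_up_to(3)))  # [1, 2, 3] -- a generator (and therefore an iterator)
```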
|
It's just that generators are a specific kind of iterators.
Their two particular traits are the lazy evaluation (no value is computed in anticipation of it being requested), and the fact that once exhausted, they cannot be iterated once again.
On the other hand, an iterator is no more than something with a `__next__` method and an `__iter__` method.
So lists, tuples, sets, dictionaries... are iterables: calling `iter()` on them gives you an iterator.
But those containers are not generators, because all of the elements they contain are defined and evaluated when the container is initialized, and they can be iterated over many times.
Therefore, some iterators are not generators.
| 9,223
|
51,129,487
|
I'm using `django-notifications` to create notifications. Based on [its documentation](https://github.com/django-notifications/django-notifications) I put:
```
url(r'^inbox/notifications/', include(notifications.urls, namespace='notifications')),
```
in my `urls.py`. I generate a notification for testing by using this in my views.py:
```
guy = User.objects.get(username = 'SirSaleh')
notify.send(sender=User, recipient=guy, verb='you visted the site!')
```
and I can easily get the number of unread notifications at this URL:
```
http://127.0.0.1:8000/inbox/notifications/api/unread_count/
```
it return `{"unread_count": 1}` as I want. but with `/api/unread_list/` I can not to get the list of notifications and I get this error:
```
ValueError at /inbox/notifications/
invalid literal for int() with base 10: '<property object at 0x7fe1b56b6e08>'
```
As a beginner with `django-notifications`, any help will be appreciated.
**Full TraceBack**
>
> Environment:
>
>
> Request Method: GET Request URL:
> <http://127.0.0.1:8000/inbox/notifications/api/unread_list/>
>
>
> Django Version: 2.0.2 Python Version: 3.5.2 Installed Applications:
> ['django.contrib.admin', 'django.contrib.auth',
> 'django.contrib.contenttypes', 'django.contrib.sessions',
> 'django.contrib.messages', 'django.contrib.staticfiles',
> 'django.contrib.sites', 'django.forms', 'rest\_framework',
> 'allauth', 'allauth.account', 'allauth.socialaccount', 'guardian',
> 'axes', 'django\_otp', 'django\_otp.plugins.otp\_static',
> 'django\_otp.plugins.otp\_totp', 'two\_factor', 'invitations',
> 'avatar', 'imagekit', 'import\_export', 'djmoney', 'captcha',
> 'dal', 'dal\_select2', 'widget\_tweaks', 'braces', 'django\_tables2',
> 'phonenumber\_field', 'hitcount', 'el\_pagination',
> 'maintenance\_mode', 'notifications', 'mathfilters',
> 'myproject\_web', 'Order', 'PhotoGallery', 'Search', 'Social',
> 'UserAccount', 'UserAuthentication', 'UserAuthorization',
> 'UserProfile'] Installed Middleware:
> ['django.middleware.security.SecurityMiddleware',
> 'django.contrib.sessions.middleware.SessionMiddleware',
> 'django.middleware.locale.LocaleMiddleware',
> 'django.middleware.common.CommonMiddleware',
> 'django.middleware.csrf.CsrfViewMiddleware',
> 'django.contrib.auth.middleware.AuthenticationMiddleware',
> 'django.contrib.messages.middleware.MessageMiddleware',
> 'django.middleware.clickjacking.XFrameOptionsMiddleware',
> 'django\_otp.middleware.OTPMiddleware',
> 'maintenance\_mode.middleware.MaintenanceModeMiddleware']
>
>
> Traceback:
>
>
> File
> "/home/saleh/Projects/myproject\_web/lib/python3.5/site-packages/django/core/handlers/exception.py"
> in inner
> 35. response = get\_response(request)
>
>
> File
> "/home/saleh/Projects/myproject\_web/lib/python3.5/site-packages/django/core/handlers/base.py"
> in \_get\_response
> 128. response = self.process\_exception\_by\_middleware(e, request)
>
>
> File
> "/home/saleh/Projects/myproject\_web/lib/python3.5/site-packages/django/core/handlers/base.py"
> in \_get\_response
> 126. response = wrapped\_callback(request, \*callback\_args, \*\*callback\_kwargs)
>
>
> File
> "/home/saleh/Projects/myproject\_web/lib/python3.5/site-packages/notifications/views.py"
> in live\_unread\_notification\_list
> 164. if n.actor:
>
>
> File
> "/home/saleh/Projects/myproject\_web/lib/python3.5/site-packages/django/contrib/contenttypes/fields.py"
> in **get**
> 253. rel\_obj = ct.get\_object\_for\_this\_type(pk=pk\_val)
>
>
> File
> "/home/saleh/Projects/myproject\_web/lib/python3.5/site-packages/django/contrib/contenttypes/models.py"
> in get\_object\_for\_this\_type
> 169. return self.model\_class().\_base\_manager.using(self.\_state.db).get(\*\*kwargs)
>
>
> File
> "/home/saleh/Projects/myproject\_web/lib/python3.5/site-packages/django/db/models/query.py"
> in get
> 394. clone = self.filter(\*args, \*\*kwargs)
>
>
> File
> "/home/saleh/Projects/myproject\_web/lib/python3.5/site-packages/django/db/models/query.py"
> in filter
> 836. return self.\_filter\_or\_exclude(False, \*args, \*\*kwargs)
>
>
> File
> "/home/saleh/Projects/myproject\_web/lib/python3.5/site-packages/django/db/models/query.py"
> in \_filter\_or\_exclude
> 854. clone.query.add\_q(Q(\*args, \*\*kwargs))
>
>
> File
> "/home/saleh/Projects/myproject\_web/lib/python3.5/site-packages/django/db/models/sql/query.py"
> in add\_q
> 1253. clause, \_ = self.\_add\_q(q\_object, self.used\_aliases)
>
>
> File
> "/home/saleh/Projects/myproject\_web/lib/python3.5/site-packages/django/db/models/sql/query.py"
> in \_add\_q
> 1277. split\_subq=split\_subq,
>
>
> File
> "/home/saleh/Projects/myproject\_web/lib/python3.5/site-packages/django/db/models/sql/query.py"
> in build\_filter
> 1215. condition = self.build\_lookup(lookups, col, value)
>
>
> File
> "/home/saleh/Projects/myproject\_web/lib/python3.5/site-packages/django/db/models/sql/query.py"
> in build\_lookup
> 1085. lookup = lookup\_class(lhs, rhs)
>
>
> File
> "/home/saleh/Projects/myproject\_web/lib/python3.5/site-packages/django/db/models/lookups.py"
> in **init**
> 18. self.rhs = self.get\_prep\_lookup()
>
>
> File
> "/home/saleh/Projects/myproject\_web/lib/python3.5/site-packages/django/db/models/lookups.py"
> in get\_prep\_lookup
> 68. return self.lhs.output\_field.get\_prep\_value(self.rhs)
>
>
> File
> "/home/saleh/Projects/myproject\_web/lib/python3.5/site-packages/django/db/models/fields/**init**.py"
> in get\_prep\_value
> 947. return int(value)
>
>
> Exception Type: ValueError at /inbox/notifications/api/unread\_list/
> Exception Value: invalid literal for int() with base 10: ''
>
>
>
|
2018/07/02
|
[
"https://Stackoverflow.com/questions/51129487",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2454690/"
] |
Oops! It was my mistake.
I finally found out what the problem was. `actor_object_id` is the field of the `notifications_notification` table in which the actor is stored, and my call saved the whole `User.objects.get(username = 'SirSaleh')` object into it. It should hold an integer (the `user_id` of the actor).
So I dropped the previous table and changed the stored value from the `User` instance to the user ID. Problem solved.
**So why is the type of `actor_object_id` CharField (varchar)? (at least I don't know)** ;))
|
This is old, but I happen to know the answer.
In your code, you wrote:
```
guy = User.objects.get(username = 'SirSaleh')
notify.send(sender=User, recipient=guy, verb='you visted the site!')
```
You express that you want `guy` to be your sender. However, in `notify.send`, you marked the sender as the generic `User` class, not `guy`.
So, change your code to:
```
guy = User.objects.get(username = 'SirSaleh')
notify.send(sender=guy, recipient=guy, verb='you visted the site!')
```
Notifications will take the user object `guy`, extract the ID and store it in the database accordingly.
| 9,226
|
22,725,990
|
I always have a hard time understanding the logic of regex in python.
```
all_lines = '#hello\n#monica, how re "u?\n#hello#robert\necho\nfall and spring'
```
I want to retrieve the substring that STARTS WITH `#` until the FIRST `\n` THAT COMES RIGHT AFTER the LAST `#` - I.e., `'#hello\n#monica, how re "u?\n#hello#robert'`
So if I try:
```
>>> all_lines = '#hello\n#monica, how re "u?\n#hello#robert\necho'
>>> RE_HARD = re.compile(r'(^#.*\n)')
>>> mo = re.search(RE_HARD, all_lines)
>>> print mo.group(0)
#hello
```
Now, if I hardcode what comes after the first \n after the last #, i.e., I hardcode echo, I get:
```
>>> all_lines = '#hello\n#monica, how re "u?\n#hello#robert\necho'
>>> RE_HARD = re.compile(r'(^#.*echo)')
>>> mo = re.search(RE_HARD, all_lines)
>>> print mo.group(0)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'NoneType' object has no attribute 'group'
```
I get an error, no idea why. Seems the same as before.
This is still not what I want, since in reality, after the first \n that comes after the last #, I may have any character/string...
|
2014/03/29
|
[
"https://Stackoverflow.com/questions/22725990",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1205745/"
] |
This program matches the pattern you request.
```
#!/usr/bin/python
import re
all_lines = '#hello\n#monica, how re "u?\n#hello#robert\necho'
regex = re.compile(
r'''\# # first hash
.* # continues to (note: .* greedy)
\# # last hash
.*?$ # rest of the line. (note .*? non-greedy)
''',
# Flags:
# DOTALL: Make the '.' match any character at all, including a newline
# VERBOSE: Allow comments in pattern
# MULTILINE: Allow $ to match end of line
re.DOTALL | re.VERBOSE | re.MULTILINE)
print re.search(regex, all_lines).group()
```
Reference: <http://docs.python.org/2/library/re.html>
Demo: <http://ideone.com/aZjjVj>
|
Regular expressions are powerful but sometimes they are overkill. String methods should accomplish what you need with much less thought
```
>>> my_string = '#hello\n#monica, how re "u?\n#hello#robert\necho\nfall and spring'
>>> hash_positions = [index for index, c in enumerate(my_string) if c == '#']
>>> hash_positions
[0, 7, 27, 33]
>>> first = hash_positions[0]
>>> last = hash_positions[-1]
>>> new_line_after_last_hash = my_string.index('\n',last)
>>> new_line_after_last_hash
40
>>> new_string = my_string[first:new_line_after_last_hash]
>>> new_string
'#hello\n#monica, how re "u?\n#hello#robert'
```
| 9,228
|
49,126,184
|
I have a Docker setup with a Redis container.
Its configuration:
docker-compose.yml
```
# Redis
redis:
image: redis:4.0.6
build:
context: .
dockerfile: dockerfile_redis
volumes:
- "./redis.conf:/usr/local/etc/redis/redis.conf"
ports:
- "6379:6379"
```
dockerfile\_redis
```
CMD ["chown", "redis:redis", "-R", "/etc"]
CMD ["chown", "redis:redis", "-R", "/var/lib"]
CMD ["chown", "redis:redis", "-R", "/run"]
CMD ["sudo", "chmod", "644", "/data/dump.rdb" ]
CMD ["sudo", "chmod", "755", "/etc" ]
CMD ["sudo", "chmod", "770", "/var/lib" ]
CMD ["sudo", "chmod", "777", "/run" ]
CMD [ "redis-server", "/usr/local/etc/redis/redis.conf" ]
```
I also use Django and Celery; after Celery runs for 4-6 hours, the Celery container stops with this error:
```
[2018-03-05 17:18:24,516: CRITICAL/MainProcess] Unrecoverable error: ResponseError('MISCONF Redis is configured to save RDB snapshots, but it is currently not able to persist on disk. Commands that may modify the data set are disabled, because this instance is configured to report errors during writes if RDB snapshotting fails (stop-writes-on-bgsave-error option). Please check the Redis logs for details about the RDB error.',)
Traceback (most recent call last):
File "/usr/local/lib/python3.4/site-packages/celery/worker/worker.py", line 203, in start
self.blueprint.start(self)
File "/usr/local/lib/python3.4/site-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
File "/usr/local/lib/python3.4/site-packages/celery/bootsteps.py", line 370, in start
return self.obj.start()
File "/usr/local/lib/python3.4/site-packages/celery/worker/consumer/consumer.py", line 320, in start
blueprint.start(self)
File "/usr/local/lib/python3.4/site-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
File "/usr/local/lib/python3.4/site-packages/celery/worker/consumer/consumer.py", line 596, in start
c.loop(*c.loop_args())
File "/usr/local/lib/python3.4/site-packages/celery/worker/loops.py", line 88, in asynloop
next(loop)
File "/usr/local/lib/python3.4/site-packages/kombu/async/hub.py", line 354, in create_loop
cb(*cbargs)
File "/usr/local/lib/python3.4/site-packages/kombu/transport/redis.py", line 1040, in on_readable
self.cycle.on_readable(fileno)
File "/usr/local/lib/python3.4/site-packages/kombu/transport/redis.py", line 337, in on_readable
chan.handlers[type]()
File "/usr/local/lib/python3.4/site-packages/kombu/transport/redis.py", line 714, in _brpop_read
**options)
File "/usr/local/lib/python3.4/site-packages/redis/client.py", line 680, in parse_response
response = connection.read_response()
File "/usr/local/lib/python3.4/site-packages/redis/connection.py", line 629, in read_response
raise response
redis.exceptions.ResponseError: MISCONF Redis is configured to save RDB snapshots, but it is currently not able to persist on disk. Commands that may modify the data set are disabled, because this instance is configured to report errors during writes if RDB snapshotting fails (stop-writes-on-bgsave-error option). Please check the Redis logs for details about the RDB error.
Import Error
-------------- celery@b17b82a69031 v4.1.0 (latentcall)
---- **** -----
--- * *** * -- Linux-4.4.0-34-generic-x86_64-with-debian-8.9 2018-03-05 07:24:00
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: backend:0x7f19e5745208
- ** ---------- .> transport: redis://redis:6379/0
- ** ---------- .> results: disabled://
- *** --- * --- .> concurrency: 20 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. CallbackNotifier
. FB posting
. FB token status
. MD posting
. MD token status
. OK posting
. OK token status
. TW posting
. TW token status
. VK posting
. VK token status
. api.controllers.message.scheduled_message
. backend.celery.debug_task
. stats.views.collect_stats
```
In my redis.conf file I disabled stopping writes on background-save errors:
```
stop-writes-on-bgsave-error no
```
In redis logs:
```
1:M 06 Mar 07:40:04.037 * Background saving started by pid 8228
8228:C 06 Mar 07:40:04.038 # Failed opening the RDB file backupall.db (in server root dir /run) for saving: Permission denied
```
But when I restart the Redis container I get some warnings:
```
1:C 06 Mar 08:12:48.982 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 06 Mar 08:12:48.982 # Redis version=4.0.6, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 06 Mar 08:12:48.982 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 06 Mar 08:12:48.986 * Running mode=standalone, port=6379.
1:M 06 Mar 08:12:48.986 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 06 Mar 08:12:48.986 # Server initialized
1:M 06 Mar 08:12:48.987 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 06 Mar 08:12:48.988 * DB loaded from disk: 0.001 seconds
1:M 06 Mar 08:12:48.988 * Ready to accept connections
```
1. Are the permissions in dockerfile\_redis correct?
2. How do I configure Redis with my conf file?
3. What else do I need to make Redis work well?
|
2018/03/06
|
[
"https://Stackoverflow.com/questions/49126184",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6488529/"
] |
Please check this blogpost:
<https://blog.huntingmalware.com/notes/LLMalware>
It is very likely malware causing the working directory of your Redis server to change; Redis then tries to write the RDB file to a directory owned by root, following the commands of a malicious script. Since it does not run as root, and the user 'redis' has no write access to the /run directory, the write fails.
So do not expose your Redis server port to the Internet; that should fix the issue by keeping the malware from reaching it.
|
If you do not really need to expose the port, just remove these lines:
```
ports:
- "6379:6379"
```
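If you do need to reach Redis from the host itself, an alternative sketch is publishing the port on the loopback interface only, so it stays unreachable from the Internet:
```
ports:
  - "127.0.0.1:6379:6379"
```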
| 9,229
|
22,932,789
|
Hi, I just started learning Python today and am applying what I'm learning to a flash cards program. I want to ask the user for their name and only accept alphabetic characters, without numbers or symbols. I've tried several ways but there is something I am missing in my attempts. Here is what I did so far.
```
yname = raw_input('Your Name ?: ')
if yname.isdigit():
print ('{0}, can\'t be your name!'.format(yname))
print "Please use alphbetic characters only!."
yname = raw_input("Enter your name:?")
print "Welcome %s !" %yname
```
But I figured the problem with this one is that if the user inputs invalid characters more than once, it will eventually continue anyway... So I did this instead.
```
yname = raw_input("EnterName").isalpha()
while yname == True:
if yname == yname.isalpha():
print "Welcome %s " %(yname)
else:
if yname == yname.isdigit():
print ("Name must be alphabetical only!")
yname = raw_input('Enter Name:').isalpha()
```
This while loop goes on forever. I also tried (-) and (+) on the raw input variable, as I've seen in some tutorials. So I thought of using a while loop:
```
name = raw_input("your name"):
while True:
if name > 0 and name.isalpha():
print "Hi %s " %name
elif name < 0 and name.isdigit():
print "Name must be Alphabet characters only!"
try:
name != name.isalpha():
except (ValueError):
print "Something went wrong"
```
|
2014/04/08
|
[
"https://Stackoverflow.com/questions/22932789",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3510177/"
] |
>
> The homepage is fine but the images in the footer are missing in the whole website.
>
>
>
It seems the problem is with `float: left` on the `<li>` elements. Try fixing the size of the blocks or making the elements inline.
|
The problem is compatibility of your CSS/JavaScript with newer versions of IE; [this](https://stackoverflow.com/a/19150943/87956) should help you out.
The above is just a workaround; a better way would be to fix your CSS and JavaScript/jQuery to take care of the compatibility issues.
| 9,230
|
3,258,072
|
Customizing `pprint.PrettyPrinter`
==================================
The documentation for the `pprint` module mentions that the method `PrettyPrinter.format` is intended to make it possible to customize formatting.
I gather that it's possible to override this method in a subclass, but this doesn't seem to provide a way to have the base class methods apply line wrapping and indentation.
* Am I missing something here?
* Is there a better way to do this (e.g. another module)?
Alternatives?
-------------
I've checked out the [`pretty`](http://pypi.python.org/pypi/pretty/0.1) module, which looks interesting, but doesn't seem to provide a way to customize formatting of classes from other modules without modifying those modules.
I think what I'm looking for is something that would allow me to provide a mapping of types (or maybe functions) that identify types to routines that process a node. The routines that process a node would take a node and return the string representation it, along with a list of child nodes. And so on.
Why I’m looking into pretty-printing
------------------------------------
My end goal is to compactly print custom-formatted sections of a DocBook-formatted `xml.etree.ElementTree`.
(I was surprised to not find more Python support for DocBook. Maybe I missed something there.)
I built some basic functionality into a client called [xmlearn](http://github.com/intuited/xmlearn) that uses [lxml](http://pypi.python.org/pypi/lxml). For example, to dump a Docbook file, you could:
```
xmlearn -i docbook_file.xml dump -f docbook -r book
```
It's pretty half-ass, but it got me the info I was looking for.
[xmlearn](http://github.com/intuited/xmlearn) has other features too, like the ability to build a graph image and do dumps showing the relationships between tags in an XML document. These are pretty much totally unrelated to this question.
You can also perform a dump to an arbitrary depth, or specify an XPath as a set of starting points. The XPath stuff sort of obsoleted the docbook-specific format, so that isn't really well-developed.
This still isn't really an answer for the question. I'm still hoping that there's a readily customizable pretty printer out there somewhere.
|
2010/07/15
|
[
"https://Stackoverflow.com/questions/3258072",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/192812/"
] |
*This question may be a duplicate of:*
* [Any way to properly pretty-print ordered dictionaries in Python?](https://stackoverflow.com/questions/4301069/any-way-to-properly-pretty-print-ordered-dictionaries-in-python)
---
Using `pprint.PrettyPrinter`
============================
I looked through the [source of pprint](http://svn.python.org/view/python/branches/release27-maint/Lib/pprint.py?view=markup). It seems to suggest that, in order to enhance `pprint()`, you’d need to:
* subclass `PrettyPrinter`
* override `_format()`
* test for `issubclass()`,
* and (if it's not your class), pass back to `_format()`
Alternative
===========
I think a better approach would be just to have your own `pprint()`, which defers to `pprint.pformat` when it doesn't know what's up.
For example:
```python
'''Extending pprint'''
from pprint import pformat
class CrazyClass: pass
def prettyformat(obj):
if isinstance(obj, CrazyClass):
return "^CrazyFoSho^"
else:
return pformat(obj)
def prettyp(obj):
print(prettyformat(obj))
# test
prettyp([1]*100)
prettyp(CrazyClass())
```
The big upside here is that you don't depend on `pprint` internals. It’s explicit and concise.
The downside is that you’ll have to take care of indentation manually.
|
Consider using the `pretty` module:
* <http://pypi.python.org/pypi/pretty/0.1>
| 9,231
|
44,302,426
|
In my python package I have a configuration module that reads a yaml file (when creating the instance) at an explicit location, i.e. something like
```
class YamlConfig(object):
def __init__(self):
filename = os.path.join(os.path.expanduser('~'), '.hanzo\\config.yml')
with open(filename) as fs:
self.cfg = yaml.load(fs.read())
```
Now what should I do when writing my unit test if I don't want to use the explicitly specified file? Instead I want to create a temporary `config.yml` to be used for testing.
I could simply allow for a specified filename in `__init__()`, but I strongly prefer forcing the filename location. I.e. like this
```
class YamlConfig(object):
def __init__(self, filename=os.path.join(os.path.expanduser('~'), '.hanzo\\config.yml')):
with open(filename) as fs:
self.cfg = yaml.load(fs.read())
```
Are there other ways to solve my issue? I guess it might be possible using `mock` the right way? Also feel free to give any comments about upsides and downsides.
|
2017/06/01
|
[
"https://Stackoverflow.com/questions/44302426",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2834295/"
] |
JSON only supports a limited number of datatypes. If you want to store other types of data as JSON then you need to convert it to something that JSON accepts. The obvious choice for Numpy arrays is to store them as (possibly nested) lists. Fortunately, Numpy arrays have a `.tolist` method which performs the conversion efficiently.
```
import numpy as np
import json
a = np.array(range(25), dtype=np.uint8).reshape(5, 5)
print(a)
print(json.dumps(a.tolist()))
```
**output**
```
[[ 0 1 2 3 4]
[ 5 6 7 8 9]
[10 11 12 13 14]
[15 16 17 18 19]
[20 21 22 23 24]]
[[0, 1, 2, 3, 4], [5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24]]
```
`.tolist` will convert the array elements to native Python types (int or float) if it can do so losslessly. If you use other datatypes I suggest you convert them to something portable before calling `.tolist`.
|
Here is a full working example of an Encoder/Decoder that can deal with NumPy arrays:
```
import numpy
from json import JSONEncoder,JSONDecoder
import json
# ********************************** #
class NumpyArrayEncoder(JSONEncoder):
def default(self, obj):
if isinstance(obj, numpy.ndarray):
return obj.tolist()
return JSONEncoder.default(self, obj)
class NumpyArrayDecoder(JSONDecoder):
    def __init__(self, *args, **kwargs):
        # JSONDecoder has no `default` hook; object_hook is the supported
        # customization point, called for every decoded JSON object.
        super(NumpyArrayDecoder, self).__init__(object_hook=self.as_numpy, *args, **kwargs)

    def as_numpy(self, obj):
        # Turn list values back into NumPy arrays.
        return {key: numpy.asarray(value) if isinstance(value, list) else value
                for key, value in obj.items()}
# ********************************** #
if __name__ == "__main__":
# TO TEST
numpyArrayOne = numpy.array([[11 ,22, 33], [44, 55, 66], [77, 88, 99]])
numpyArrayTwo = numpy.array([[51, 61, 91], [121 ,118, 127]])
# Serialization
numpyData = {"arrayOne": numpyArrayOne, "arrayTwo": numpyArrayTwo}
print("Original Data: \n")
print(numpyData)
print("\nSerialize NumPy array into JSON and write into a file")
with open("numpyData.json", "w") as write_file:
json.dump(numpyData, write_file, cls=NumpyArrayEncoder)
print("Done writing serialized NumPy array into file")
# Deserialization
print("Started Reading JSON file")
with open("numpyData.json", "r") as read_file:
print("Converting JSON encoded data into Numpy array")
decodedArray = json.load(read_file, cls=NumpyArrayDecoder)
print("Re-Imported Data: \n")
print(decodedArray)
```
| 9,236
|
17,460,215
|
I would like to know how I can get my code to not crash if a user types anything other than a number for input. I thought that my else statement would cover it but I get an error.
>
> Traceback (most recent call last): File "C:/Python33/Skechers.py",
> line 22, in
> run\_prog = input() File "", line 1, in NameError: name 's' is not defined
>
>
>
In this instance I typed the letter "s".
Below is the portion of the code that gives me the issue. The program runs flawlessly other than if you give it letters or symbols.
I want it to print "Invalid input" instead of crashing if possible.
Is there a trick that I have to do with another elif statement and isalpha function?
```
while times_run == 0:
print("Would you like to run the calculation?")
print("Press 1 for YES.")
print("Press 2 for NO.")
run_prog = input()
if run_prog == 1:
total()
times_run = 1
elif run_prog == 2:
exit()
else:
print ("Invalid input")
print(" ")
```
I tried a few variations of this with no success.
```
elif str(run_prog):
print ("Invalid: input")
print(" ")
```
I appreciate any feedback even if it is for me to reference a specific part of the python manual.
Thanks!
|
2013/07/04
|
[
"https://Stackoverflow.com/questions/17460215",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2361013/"
] |
Contrary to what you think, your script is *not* being run in Python 3.x. Somewhere on your system you have Python 2.x installed and the script is running in that, causing it to use 2.x's insecure/inappropriate `input()` instead.
|
The error message you showed indicates that `input()` tried to evaluate the string typed as a Python expression. This in turn means you're not actually using Python 3; `input` only does that in 2.x. Anyhow, I strongly recommend you do it this way instead, as it makes explicit the kind of input you want.
```
import sys  # needed for the sys.stdout / sys.stdin calls below

while times_run == 0:
sys.stdout.write("Would you like to run the calculation?\n"
"Press 1 for YES.\n"
"Press 2 for NO.\n")
try:
run_prog = int(sys.stdin.readline())
except ValueError:
run_prog = 0
if not (1 <= run_prog <= 2):
sys.stdout.write("Invalid input.\n")
continue
# ... what you have ...
```
| 9,237
|
62,028,585
|
I have an ML model that predicts a target attribute `y` with `5` other attributes namely `Age`, `Sex`, `Satisfaction`, `Height` and `weight`
Let's say that I have a new dataset **but it is short `Age`** so it has only `4` attributes namely `Sex`, `Satisfaction`, `Height` and `weight`
So that new dataset I am going to predict has lost one column (attribute) which is `Age`
Is it still possible to predict the target attribute `y`?
Note: I have my model exported with the `pickle` Python library and am trying to predict the new dataset as follows:
```
model=pickle.load(open('gaussian.pkl','rb'))
print(model.predict(inputs)) # ----------------> inputs which has now only 4 attributes
```
And this gives an error:
>
> ValueError: operands could not be broadcast together with shapes (100,4) (5,)
>
>
>
How can I tackle this?
|
2020/05/26
|
[
"https://Stackoverflow.com/questions/62028585",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11433497/"
] |
There are some models that can handle NaN (empty) values, like XGBoost, and some models that can't.
Your best option here will be to re-train a model with only the 4 features.
If you can't you can give multiple values to "age" (like 2-99), predict y each time, and take the average of those predictions.
Again, you will really be better off re-training your model on the features without age if this is possible.
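A sketch of the "average over plausible ages" idea above; it assumes `model` and `inputs` from the question, and that Age was the first of the five training columns:
```
import numpy as np

predictions = []
for age in range(2, 100):                                # plausible age range
    inputs_with_age = np.insert(inputs, 0, age, axis=1)  # (100, 4) -> (100, 5)
    predictions.append(model.predict(inputs_with_age))
averaged = np.mean(predictions, axis=0)
```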
|
It's possible to predict y using fewer Xs. You need more data to train your model.
Keep in mind that the first example in machine learning is predicting y using just one x.
| 9,239
|
64,734,616
|
I have a **Payment** Django model with a *CheckNumber* attribute that I seem to be facing issues with, at least when mentioning the attribute in the `__str__` method. It works just fine on the admin page when creating a Payment instance, but as soon as I call it in the method it gives me the following error message:
```
Request URL: http://127.0.0.1:8000/admin/Vendor/payment/
Django Version: 3.0.7
Exception Type: AttributeError
Exception Value:
'Payment' object has no attribute 'CheckNumber'
Exception Location: /Users/beepboop/PycharmProjects/novatory/Vendor/models.py in __str__, line 72
Python Executable: /Users/beepboop/Environments/novatory/bin/python3.7
Python Version: 3.7.2
```
this is my code:
```
class Payment(models.Model):
Vendor = models.ForeignKey(Vendor, on_delete=models.CASCADE)
Amount = models.DecimalField(decimal_places=2, blank=True, max_digits=8)
PaymentDate = models.DateField(name="Payment date", help_text="Day of payment")
CheckNumber = models.IntegerField(name="Check number", help_text="If payment wasn't made with check, leave blank", blank=True)
def __str__(self):
return f"Vendor: {self.Vendor.Name} | Amount: {prettyPrintCurrency(self.Amount)} | Checknumber: {self.CheckNumber}"
```
I would absolutely love any comments/suggestions about the cause of the error
|
2020/11/08
|
[
"https://Stackoverflow.com/questions/64734616",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6815773/"
] |
Don't.
You have a form. Treat it as such.
```js
document.getElementById('input_listName').addEventListener('submit', function(e) {
e.preventDefault();
const li = document.createElement('li');
li.append(this.listName.value);
document.querySelector(".ul_current").append(li);
// optionally:
// this.listName.value = ""
}, false);
```
```html
<form id="input_listName">
<input type="text" name="listName" />
<button type="submit">add</button>
</form>
<ul class="ul_current"></ul>
```
Making it a form provides all of the benefits that a browser does for you. On desktop, you can press Enter to submit it. On mobile, the virtual keyboard may also provide a quick-access submit button. You could even add validation rules like `required` to the `<input />` element, and the browser will handle it all for you.
|
With the help of the `event`, you can catch the pressed Enter key (keycode 13), as in my example.
Is that what you needed?
```
$('#btn_createList').keypress(function(event){
if (event.keyCode == 13) {
$('.ul_current').append($('<li>', {
text: $('#input_listName').val()
}));
}
});
```
| 9,240
|
46,195,187
|
I am working on a remote server, say IP: 192.128.0.3. On this server there are two folders: `cgi-bin` & `html`. My Python code file is in `cgi-bin` and wants to create a data.json file in `html/Rohith/`, where the Rohith folder already exists. I use the following code:
```
jsonObj = json.dumps(main_func(s));
fileobj = open("http://192.128.0.3/Rohith/data.json","w+");
fileobj.write(jsonObj);
```
But it is not creating the file there, although I can create it in the same folder (cgi-bin) as my Python script. Can anyone tell me why it is not creating the file at the given destination?
|
2017/09/13
|
[
"https://Stackoverflow.com/questions/46195187",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6816478/"
] |
It might be a path issue; I suggest using the full path instead. Try the following:
```
import os
jsonObj = json.dumps(main_func(s));
path = '\\'.join(__file__.split('\\')[:-2]) # this will return the parent
# folder of cgi-bin on Windows
out_file = path + '/html/Ronhith/data.json'
fileobj = open(out_file, 'w+') # w+ mode creates the file if its not exists
fileobj.write(jsonObj)
```
If it's still not working, try changing the file mode from `w+` to `a+`.
>
> **a+** Opens a file for both appending and reading. The file pointer is at the end of the file if the file exists. The file opens in the append mode. **If the file does not exist, it creates a new file for reading and writing.**
>
>
>
|
file.write does not create a directory. First you have to create the directory, then use file.write(). For example:
```
if not os.path.exists("../html/Rohith/"):
os.makedirs("../html/Rohith/")
jsonObj = json.dumps(mainfunc(s))
fileobj = open("../html/Rohith/data.json","w+")
fileobj.write(jsonObj)
```
| 9,243
|
30,107,212
|
I have a deque in Python that I'm iterating over. Sometimes the deque changes while I'm iterating, which produces a `RuntimeError: deque mutated during iteration`.
If this were a Python list instead of a deque, I would just iterate over a copy of the list (via a slice like `my_list[:]`), but since slice operations can't be used on deques, I wonder what the most pythonic way of handling this is.
My solution is to import the copy module and then iterate over a copy, like `for item in copy(my_deque):`, which is fine, but since I searched high and low for this topic I figured I'd post here to ask.
|
2015/05/07
|
[
"https://Stackoverflow.com/questions/30107212",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3486484/"
] |
You can "freeze" it by creating a list. There's no necessity to copy it to a new deque. A list is certainly good enough, since you only need it for iterating.
```
for elem in list(my_deque):
...
```
`list(x)` creates a list from any iterable `x`, including deque, and in most cases is the most pythonic way to do so.
---
Bear in mind this solution is only valid if the deque is being modified in the same thread (i.e. inside the loop). Otherwise, be aware that `list(my_deque)` is not atomic, and is also iterating over the deque. This means that if another thread alters the deque while it runs, you end up with the same error. If you're in a multithreaded environment, use a lock.
|
While you can create a list out of the deque, `for elem in list(deque)`, this is not always optimal if it is a frequently used function: there is a performance cost, especially with a large number of elements in the deque, since you are constantly converting it to an `array` structure.
A possible alternative that avoids creating a list is to use a `while` loop with a boolean variable to control the conditions; each individual pop from a deque is O(1).
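A minimal sketch of that alternative, assuming you want to consume the items rather than keep them:
```
from collections import deque

my_deque = deque([1, 2, 3])
while my_deque:              # an empty deque is falsy, so the loop ends on its own
    item = my_deque.popleft()
    print(item)
```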
| 9,246
|
35,876,962
|
So I think I know what my problem is but I can't seem to figure out how to fix it. I am relatively new to wxPython. I am moving some functionality I have in a terminal script to a GUI and can't seem to get it right. I use Anaconda for my Python distribution and have added wxPython for the GUI. I want users to be able to drag and drop files into the text controls and then have the file contents imported into dataframes for analysis with pandas. So far everything is happy, other than the fact that the program will not quit. I think it has to do with how I am defining the window and the frame. I have removed a significant amount of the functionality from the script to help simplify things. Please let me know what I am missing.
Thanks
Tyler
```
import wx
import os
#import pandas as pd
#import numpy as np
#import matplotlib.pyplot as ply
#from scipy.stats import linregress
class MyFileDropTarget(wx.FileDropTarget):
#----------------------------------------------------------------------
def __init__(self, window):
wx.FileDropTarget.__init__(self)
self.window = window
#----------------------------------------------------------------------
def OnDropFiles(self, x, y, filenames):
self.window.SetInsertionPointEnd(y)
#self.window.updateText("\n%d file(s) dropped at %d,%d:\n" %
# (len(filenames), x, y), y)
for filepath in filenames:
self.window.updateText(filepath + '\n', y)
class MainWindow(wx.Frame):
def __init__(self, parent, title):
wx.Frame.__init__(self, parent, title=title, size=(200,100))
file_drop_target = MyFileDropTarget(self)
self.CreateStatusBar() # A Statusbar in the bottom of the window
# Creating the menubar.
menubar = wx.MenuBar()
fileMenu = wx.Menu()
helpMenu = wx.Menu()
menubar.Append(fileMenu, '&File')
menuOpen = fileMenu.Append(wx.ID_OPEN, "&Open"," Open a file to edit")
#self.Bind(wx.EVT_MENU, self.OnOpen, menuOpen)
fileMenu.AppendSeparator()
menuExit = fileMenu.Append(wx.ID_EXIT,"E&xit"," Terminate the program")
self.Bind(wx.EVT_MENU, self.OnExit, menuExit)
menubar.Append(helpMenu, '&Help')
menuAbout= helpMenu.Append(wx.ID_ABOUT, "&About"," Information about this program")
self.Bind(wx.EVT_MENU, self.OnAbout, menuAbout)
self.SetMenuBar(menubar)
#Create some sizers
mainSizer = wx.BoxSizer(wx.VERTICAL)
grid = wx.GridBagSizer(hgap=5, vgap=5)
hSizer = wx.BoxSizer(wx.HORIZONTAL)
#Create a button
self.button = wx.Button(self, label="Test")
#self.Bind(wx.EVT_BUTTON, self.OnClick,self.button)
# Radio Boxes
sysList = ['QEXL','QEX10','QEX7']
wlList = ['1100', '1400', '1800']
sys = wx.RadioBox(self, label="What system are you calibrating ?", pos=(20, 40), choices=sysList, majorDimension=3,
style=wx.RA_SPECIFY_COLS)
grid.Add(sys, pos=(1,0), span=(1,3))
WL = wx.RadioBox(self, label="Maximum WL you currently Calibrating ?", pos=(20, 100), choices=wlList, majorDimension=0,
style=wx.RA_SPECIFY_COLS)
grid.Add(WL, pos=(2,0), span=(1,3))
self.lblname = wx.StaticText(self, label="Cal File 1 :")
grid.Add(self.lblname, pos=(3,0))
self.Cal_1 = wx.TextCtrl(self, name="Cal_1", value="", size=(240,-1))
self.Cal_1.SetDropTarget(file_drop_target)
grid.Add(self.Cal_1, pos=(3,1))
self.lblname = wx.StaticText(self, label="Cal File 2 :")
grid.Add(self.lblname, pos=(4,0))
self.Cal_2 = wx.TextCtrl(self, value="", name="Cal_2", size=(240,-1))
self.Cal_2.SetDropTarget(file_drop_target)
grid.Add(self.Cal_2, pos=(4,1))
self.lblname = wx.StaticText(self, label="Cal File 3 :")
grid.Add(self.lblname, pos=(5,0))
self.Cal_3 = wx.TextCtrl(self, value="", name="Cal_3", size=(240,-1))
self.Cal_3.SetDropTarget(file_drop_target)
grid.Add(self.Cal_3, pos=(5,1))
hSizer.Add(grid, 0, wx.ALL, 5)
mainSizer.Add(hSizer, 0, wx.ALL, 5)
mainSizer.Add(self.button, 0, wx.CENTER)
self.SetSizerAndFit(mainSizer)
self.Show(True)
def OnAbout(self,e):
# A message dialog box with an OK button. wx.OK is a standard ID in wxWidgets.
dlg = wx.MessageDialog( self, "A quick test to see if your scans pass repeatability", "DOMA-64 Tester", wx.OK)
dlg.ShowModal() # Show it
dlg.Destroy() # finally destroy it when finished.
def OnExit(self,e):
# Close the frame.
self.Close(True)
def SetInsertionPointEnd(self, y):
if y <= -31:
self.Cal_1.SetInsertionPointEnd()
elif y >= -1:
self.Cal_3.SetInsertionPointEnd()
else:
self.Cal_2.SetInsertionPointEnd()
def updateText(self, text, y):
if y <= -31:
self.Cal_1.WriteText(text)
elif y >= -1:
self.Cal_3.WriteText(text)
else:
self.Cal_2.WriteText(text)
app = wx.App(False)
frame = MainWindow(None, "Sample editor")
app.MainLoop()
```
|
2016/03/08
|
[
"https://Stackoverflow.com/questions/35876962",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4996129/"
] |
Assuming you don't want window chrome, you can accomplish this by removing the frame around Electron and filling the rest in with html/css/js. I wrote an article that achieves what you are looking for on my blog here: <http://mylifeforthecode.github.io/making-the-electron-shell-as-pretty-as-the-visual-studio-shell/>. Code to get you started is also hosted here: <https://github.com/srakowski/ElectronLikeVS>
To summarize, you need to pass frame: false when you create the BrowserWindow:
```
mainWindow = new BrowserWindow({width: 800, height: 600, frame: false});
```
Then create and add control buttons for your title bar:
```
<div id="title-bar">
<div id="title">My Life For The Code</div>
<div id="title-bar-btns">
<button id="min-btn">-</button>
<button id="max-btn">+</button>
<button id="close-btn">x</button>
</div>
</div>
```
Bind in the max/min/close functions in js:
```
(function () {
var remote = require('remote');
var BrowserWindow = remote.require('browser-window');
function init() {
document.getElementById("min-btn").addEventListener("click", function (e) {
var window = BrowserWindow.getFocusedWindow();
window.minimize();
});
document.getElementById("max-btn").addEventListener("click", function (e) {
var window = BrowserWindow.getFocusedWindow();
window.maximize();
});
document.getElementById("close-btn").addEventListener("click", function (e) {
var window = BrowserWindow.getFocusedWindow();
window.close();
});
};
document.onreadystatechange = function () {
if (document.readyState == "complete") {
init();
}
};
})();
```
Styling the window can be tricky, but the key is to use special properties from webkit. Here is some minimal CSS:
```
body {
padding: 0px;
margin: 0px;
}
#title-bar {
-webkit-app-region: drag;
height: 24px;
background-color: darkviolet;
padding: none;
margin: 0px;
}
#title {
position: fixed;
top: 0px;
left: 6px;
}
#title-bar-btns {
-webkit-app-region: no-drag;
position: fixed;
top: 0px;
right: 6px;
}
```
Note that these are important:
```
-webkit-app-region: drag;
-webkit-app-region: no-drag;
```
`-webkit-app-region: drag` on your 'title bar' region will make it so that you can drag it around, as is common with windows. `no-drag` is applied to the buttons so that they do not cause dragging.
|
I was inspired by Shawn's article and apps like Hyper Terminal to figure out how to exactly replicate the Windows 10 style look as a seamless title bar, and wrote [this tutorial](https://github.com/binaryfunt/electron-seamless-titlebar-tutorial) *(please note: as of 2022 this tutorial is somewhat outdated in terms of Electron)*.
[](https://github.com/binaryfunt/electron-seamless-titlebar-tutorial)
It includes a fix for the resizing issue Shawn mentioned, and also switches between the maximise and restore buttons, even when e.g. the window is maximised by dragging it to the top of the screen.
### Quick reference
* Title bar height: `32px`
* Title bar title font-size: `12px`
* Window control buttons: `46px` wide, `32px` high
* Window control button assets from font `Segoe MDL2 Assets` ([docs here](https://learn.microsoft.com/en-us/windows/uwp/design/style/segoe-ui-symbol-font)), size: `10px`
* Minimise: ``
* Maximise: ``
* Restore: ``
* Close: ``
* Window control button colours: varies between UWP apps, but seems to be
* Dark mode apps (white window controls): `#FFF`
* Light mode apps (black window controls): `#171717`
* Close button colours
* Hover (`:hover`): background `#E81123`, colour `#FFF`
* Pressed (`:active`): background `#F1707A`, colour `#000` or `#171717`
Note: in the tutorial I have switched to PNG icons with different sizes for pixel-perfect scaling, but I leave the Segoe MDL2 Assets font characters above as an alternative
| 9,247
|
56,355,248
|
Is there any way to merge npz files in Python? In my directory I have output1.npz and output2.npz.
I want a new npz file that merges the arrays from both npz files.
|
2019/05/29
|
[
"https://Stackoverflow.com/questions/56355248",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2443944/"
] |
Use `numpy.load('output1.npz')` and `numpy.load('output2.npz')` to load both files as `a1` and `a2`. Then use `a3 = [*a1, *a2]` to merge them, and finally write the output via `numpy.savez('output.npz', a3)`.
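A minimal sketch of that approach, using the file names from the question. Note that loading an `.npz` actually returns a mapping of array names to arrays, so the arrays themselves have to be gathered before saving:
```
import numpy as np

a1 = np.load('output1.npz')
a2 = np.load('output2.npz')

# .files lists the array names stored in each archive
merged = [a1[name] for name in a1.files] + [a2[name] for name in a2.files]
np.savez('output.npz', *merged)
```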
|
If you have 3 npz files ('Data\_chunk1.npz', 'Data\_chunk2.npz' and 'Data\_chunk3.npz'), all containing the same number of arrays (in my case 7 different arrays), then you can do
```
import numpy as np
# Load the 3 files
data_1 = np.load('Data_chunk1.npz')
data_2 = np.load('Data_chunk2.npz')
data_3 = np.load('Data_chunk3.npz')
# Merge each of the 7 arrays of the 3 files
arr_0 = np.concatenate([data_1['arr_0'], data_2['arr_0'], data_3['arr_0']])
arr_1 = np.concatenate([data_1['arr_1'], data_2['arr_1'], data_3['arr_1']])
arr_2 = np.concatenate([data_1['arr_2'], data_2['arr_2'], data_3['arr_2']])
arr_3 = np.concatenate([data_1['arr_3'], data_2['arr_3'], data_3['arr_3']])
arr_4 = np.concatenate([data_1['arr_4'], data_2['arr_4'], data_3['arr_4']])
arr_5 = np.concatenate([data_1['arr_5'], data_2['arr_5'], data_3['arr_5']])
arr_6 = np.concatenate([data_1['arr_6'], data_2['arr_6'], data_3['arr_6']])
# Save the new npz file
np.savez('Data_new.npz', arr_0, arr_1, arr_2, arr_3, arr_4, arr_5, arr_6 )
```
| 9,253
|
8,165,086
|
I'm learning Python using [Learn Python The Hard Way](http://learnpythonthehardway.org/). It is very good and efficient but at one point I had a crash. I've searched the web but could not find an answer.
Here is my question:
One of the exercises tell to do this:
```
from sys import argv
script, filename = argv
```
and then it proceeds to doing things that I do understand:
```
print "we are going to erase %r." % filename
print "if you don't want that, hit CTRL-C (^C)."
print "if you do want that, hit RETURN."
raw_input("?")
print "opening the file..."
target = open(filename, 'w')
```
What does the first part mean?
P.S. the error I get is:
>
> syntaxError Unexpected character after line continuation character
>
>
>
|
2011/11/17
|
[
"https://Stackoverflow.com/questions/8165086",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1051438/"
] |
```
script, filename = argv
```
This is [unpacking the sequence](http://docs.python.org/tutorial/datastructures.html#tuples-and-sequences) `argv`. The first element goes into `script`, and the second element goes into `filename`. In general, this can be done with any iterable, as long as there exactly as many variables on the left-hand-side as are items in the iterable on the right-hand-side.
The code you show seems ok, I don't know why you are getting a syntax-error there.
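For instance, a quick sketch of the same unpacking with a plain list standing in for `sys.argv` (the values are hypothetical):
```
argv = ["ex16.py", "test.txt"]   # a stand-in for sys.argv
script, filename = argv
print(script)     # ex16.py
print(filename)   # test.txt
```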
|
`Unexpected character after line continuation character` means that you have split a command in two lines using the continuation character `\` (see [this question](https://stackoverflow.com/questions/53162/how-can-i-do-a-line-break-line-continuation-in-python)) but added some characters (e.g. a white space) after it.
But I do not see any `\` in your code...
| 9,255
|
61,833,460
|
I am creating a program that calculates the optimum angles to fire a projectile from a range of heights with a set initial velocity. Within the final equation I need to use, there is an inverse sec function that is causing some trouble.
I have imported math and attempted to use `asec(whatever)`, however it seems math cannot compute inverse secant functions. I also understand that sec(x) = 1/cos(x), but when I sub 1/cos(x) into the equation instead and algebraically solve for x, it produces a non-real result :/.
The code I have is as follows:
```
print("This program calculates the optimum angles to launch a projectile from a given range of heights and a initial speed.")
x = input("Input file name containing list of heights (m): ")
f = open(x, "r")
for line in f:
heights = line
print("the heights you have selected are : ", heights)
f.close()
speed = float(input("Input your initial speed (m/s): "))
print("The initial speed you have selected is : ", speed)
ran0 = speed*speed/9.8
print(ran0)
f = open(x, "r")
for line in f:
heights = (line)
import math
angle = (math.asec(1+(ran0/float(heights))))/2
print(angle)
f.close()
```
So my main question is, is there any way to find the inverse sec of anything in python without installing and importing something else?
I realise this may be more of a math-based problem than a coding problem, but any help is appreciated :).
|
2020/05/16
|
[
"https://Stackoverflow.com/questions/61833460",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13553134/"
] |
Let's say we're looking for real number *x* whose arcsecant is angle *θ*. Then we have:
```
θ = arcsec(x)
sec(θ) = x
1 / cos(θ) = x
cos(θ) = 1 / x
θ = arccos(1/x)
```
So with this reasoning, you can write your arcsecant function as:
```
from math import acos
def asec(x):
return acos(1/x)
```
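For example, `asec(2)` returns `acos(0.5)`, which is pi/3 (about 1.047 radians, i.e. 60 degrees), matching sec(60°) = 2.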
|
If you want the inverse of sec, you can use mpmath's `asec`, which works like this:
```
>>>from mpmath import *
>>> asec(-1)
mpf('3.1415926535897931')
```
Here is a link where you can read more: <http://omz-software.com/pythonista/sympy/modules/mpmath/functions/trigonometric.html>
| 9,257
|
4,673,373
|
I would like to put some logging statements within test function to examine some state variables.
I have the following code snippet:
```
import pytest,os
import logging
logging.basicConfig(level=logging.DEBUG)
mylogger = logging.getLogger()
#############################################################################
def setup_module(module):
''' Setup for the entire module '''
mylogger.info('Inside Setup')
# Do the actual setup stuff here
pass
def setup_function(func):
''' Setup for test functions '''
if func == test_one:
mylogger.info(' Hurray !!')
def test_one():
''' Test One '''
mylogger.info('Inside Test 1')
#assert 0 == 1
pass
def test_two():
''' Test Two '''
mylogger.info('Inside Test 2')
pass
if __name__ == '__main__':
mylogger.info(' About to start the tests ')
pytest.main(args=[os.path.abspath(__file__)])
mylogger.info(' Done executing the tests ')
```
I get the following output:
```
[bmaryada-mbp:/Users/bmaryada/dev/platform/main/proto/tests/tpch $]python minitest.py
INFO:root: About to start the tests
======================================================== test session starts =========================================================
platform darwin -- Python 2.6.2 -- pytest-2.0.0
collected 2 items
minitest.py ..
====================================================== 2 passed in 0.01 seconds ======================================================
INFO:root: Done executing the tests
```
Notice that only the logging messages from the `'__name__ == __main__'` block get transmitted to the console.
Is there a way to force `pytest` to emit logging to console from test methods as well?
|
2011/01/12
|
[
"https://Stackoverflow.com/questions/4673373",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/568454/"
] |
Using `pytest --log-cli-level=DEBUG` works fine with pytest (tested from 6.2.2 to 7.1.1)
Using `pytest --log-cli-level=DEBUG --capture=tee-sys` will also print `stdout`.
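If you drive pytest programmatically, as the question's `__main__` block does, the same flags can be passed in; a sketch:
```
import pytest

# programmatic equivalent of the CLI flags above
pytest.main(["--log-cli-level=DEBUG", "--capture=tee-sys"])
```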
|
If you use VS Code, use the following config, assuming you've installed the official
**Python plugin** (`ms-python.python`) for your Python project.
`./.vscode/settings.json` under your project:
```json
{
....
"python.testing.pytestArgs": ["-s", "src"], //here before discover-path src
"python.testing.unittestEnabled": false,
"python.testing.nosetestsEnabled": false,
"python.testing.pytestEnabled": true,
...
}
```
P.S. Some plugins work on it, **including but not limited to**:
* **Python Test Explorer for Visual Studio Code** (`littlefoxteam.vscode-python-test-adapter`)
* **Test Explorer for Visual Studio Code**(`hbenl.vscode-test-explorer`)
| 9,260
|
36,261,398
|
The following program is from a Python book. In this code, count is first set to 0 and then `while True` is used. In the book I read that zeros and empty strings evaluate as False while all other values evaluate as True. If that is the case, then how does the program execute the while loop? Wouldn't count be evaluated as False, since count is set to 0?
Could someone please explain this?
```
# Finicky Counter
# Demonstrates the break and continue statements
count = 0
while True: # while count is True
count += 1
# end loop if count greater than 10
if count > 10:
break
# skip 5
if count == 5:
continue
print(count)
input("\n\nPress the enter key to exit.")
```
|
2016/03/28
|
[
"https://Stackoverflow.com/questions/36261398",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6014533/"
] |
If you evaluate
```
if count: # and count is zero
break
```
then sure - the loop will break immediately.
But you are evaluating this expression:
```
if count > 10: # 0 > 10
```
which is `False`, so you won't break on the first iteration.
|
If you changed `while True:` to `while count:`, your assumption would indeed be correct
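A quick sketch of that change:
```
count = 0
while count:      # 0 is falsy, so the body is never entered
    count += 1
print("loop was skipped")
```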
| 9,270
|
2,440,799
|
When I use MySQLdb I get this message:
```
/var/lib/python-support/python2.6/MySQLdb/__init__.py:34: DeprecationWarning: the sets module is deprecated from sets import ImmutableSet
```
I tried to filter the warning with
```
import warnings
warnings.filterwarnings("ignore", message="the sets module is deprecated from sets import ImmutableSet")
```
but nothing changes.
Any suggestions?
Many thanks.
|
2010/03/14
|
[
"https://Stackoverflow.com/questions/2440799",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/348081/"
] |
From the [python documentation](http://docs.python.org/library/warnings.html#temporarily-suppressing-warnings): you could filter your warning this way, so that if other warnings are raised by another part of your code, they would still be displayed:
```
import warnings
with warnings.catch_warnings():
warnings.simplefilter("ignore", DeprecationWarning)
import MySQLdb
[...]
```
but as said by Alex Martelli, the best solution would be to update MySQLdb so that it doesn't use deprecated modules.
|
What release of MySQLdb are you using? I think the current one (1.2.3c1) should have it fixed see [this bug](http://sourceforge.net/tracker/index.php?func=detail&aid=2156977&group_id=22307&atid=374932) (marked as fixed as of Oct 2008, 1.2 branch).
| 9,278
|
15,801,447
|
I'm building an installation EXE for my project using setuptool's bdist\_wininst. However, I've found that when I actually run said installer on a Win7-64bit machine w/ Python 2.7.3, I get a Runtime Error that looks like this: <http://i.imgur.com/8osT3.jpg>. (only the 64 bit installer against python-2.7 64-bit; the 32-bit one (on python2.7 32-bit) appears fine) I can click OK and the installer finishes, but this certainly looks poor to end-users.
Any ideas how to solve it?
|
2013/04/04
|
[
"https://Stackoverflow.com/questions/15801447",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/182284/"
] |
Maybe you have to create the executable specifically for the x64?
This is the command you would have to run:
```
python setup.py build --plat-name=win-amd64
```
More information can be found here:
<http://docs.python.org/2/distutils/builtdist.html#cross-compiling-on-windows>
|
Maybe a Visual C++ Redistributable Package is missing or corrupt, try (re)install Microsoft Visual C++ 2008 SP1/2010 Redistributable Package (x64) or any other version.
| 9,279
|
51,987,427
|
I've got working code below. Currently I'm pulling data from sav files, exporting it to a csv file, and then plotting this data. It looks good, but I'd like to zoom in on it and I'm not entirely sure how to do it. This is because my time is listed in the following format:
```
20141107B205309Y
```
There are both letters and numbers within the code, so I'm not sure what to do.
I could do this two ways, I suppose:
1. I was considering using Python to "trim" the time data so it displays only the "20141107" part in the csv file, which should make it easy to browse.
2. I'm not sure if it's possible, but if someone knows how, I could obviously search the code using "xrange=[]" as I typically would with data.
My code:
```
import scipy.io as spio
import numpy as np
import csv
import pandas as pd
import matplotlib.pyplot as plt
np.set_printoptions(threshold=np.nan)
onfile='/file'
finalfile='/fileout'
s=spio.readsav(onfile,python_dict=True,verbose=True)
time=np.asarray(s["time"])
data=np.asarray(s["data"])
d=pd.DataFrame({'time':time,'data':data})
d.to_csv(finalfile,sep=' ', encoding='utf-8',header=True)
d.plot(x='time',y='data',kind='line')
```
|
2018/08/23
|
[
"https://Stackoverflow.com/questions/51987427",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10236830/"
] |
If your data set is consistent, then pandas can trim columns for you. Check out <https://pandas.pydata.org/pandas-docs/stable/text.html>. You can split on the 'B' character, and after that convert the column into a date.
You can convert a series to date using [How do I convert dates in a Pandas data frame to a 'date' data type?](https://stackoverflow.com/questions/16852911/how-do-i-convert-dates-in-a-pandas-data-frame-to-a-date-data-type)
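A minimal sketch of that idea, assuming the DataFrame `d` from the question:
```
import pandas as pd

# keep only the part of each time string before 'B' and parse it as a date
d['date'] = pd.to_datetime(d['time'].str.split('B').str[0], format='%Y%m%d')
```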
|
Maybe try converting your s["time"] into a list of datetime objects instead of string.
```
from datetime import datetime
date_list = [datetime.strptime(d, '%Y%m%dB%H%M%SY') for d in s["time"]]
time=np.asarray(date_list)
```
Here str objects are converted into datetime objects using this format '%Y%m%dB%H%M%SY'
Here
```
%d is the day number
%m is the month number
%b is the month abbreviation
%y is the year last two digits
%Y is the all year
```
| 9,280
|
24,897,145
|
I am using Python's mock.patch and would like to change the return value for each call.
Here is the caveat:
the function being patched has no inputs, so I can not change the return value based on the input.
Here is my code for reference.
```
def get_boolean_response():
response = io.prompt('y/n').lower()
while response not in ('y', 'n', 'yes', 'no'):
        io.echo('Not a valid input. Try again')
response = io.prompt('y/n').lower()
return response in ('y', 'yes')
```
My Test code:
```
@mock.patch('io')
def test_get_boolean_response(self, mock_io):
#setup
mock_io.prompt.return_value = ['x','y']
result = operations.get_boolean_response()
#test
self.assertTrue(result)
self.assertEqual(mock_io.prompt.call_count, 2)
```
`io.prompt` is just a platform independent (python 2 and 3) version of "input". So ultimately I am trying to mock out the user's input. I have tried using a list for the return value, but that doesn't seem to work.
You can see that if the return value is something invalid, I will just get an infinite loop here. So I need a way to eventually change the return value, so that my test actually finishes.
(another possible way to answer this question could be to explain how I could mimic user input in a unit-test)
---
Not a dup of [this question](https://stackoverflow.com/questions/7665682/python-mock-object-with-method-called-multiple-times) mainly because I do not have the ability to vary the inputs.
One of the comments of the Answer on [this question](https://stackoverflow.com/questions/21927057/mock-patch-os-path-exists-with-multiple-return-values) is along the same lines, but no answer/comment has been provided.
|
2014/07/22
|
[
"https://Stackoverflow.com/questions/24897145",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2434234/"
] |
You can assign an [*iterable*](https://docs.python.org/3/glossary.html#term-iterable) to `side_effect`, and the mock will return the next value in the sequence each time it is called:
```
>>> from unittest.mock import Mock
>>> m = Mock()
>>> m.side_effect = ['foo', 'bar', 'baz']
>>> m()
'foo'
>>> m()
'bar'
>>> m()
'baz'
```
Quoting the [`Mock()` documentation](https://docs.python.org/3/library/unittest.mock.html#unittest.mock.Mock):
>
> If *side\_effect* is an iterable then each call to the mock will return the next value from the iterable.
>
>
>
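Applied to the test in the question, that would look roughly like this (replacing the `return_value` line):
```
mock_io.prompt.side_effect = ['x', 'y']   # first call returns 'x', second 'y'
```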
|
You can also pass multiple return values straight to `patch` via `side_effect` (note: `return_value=[...]` would return the whole list on every call, not one value per call):
```
@patch('Function_to_be_patched', side_effect=['a', 'b', 'c'])
```
Remember that if you stack more than one patch on a method, the decorator order maps to the arguments like this:
```
@patch('a')
@patch('b')
def test(mock_b, mock_a);
pass
```
As you can see, the order is reversed: the first patch mentioned corresponds to the last argument.
| 9,281
|
72,221,253
|
I was reading python collections's [Counter](https://docs.python.org/3/library/collections.html#counter-objects). It says following:
```
>>> from collections import Counter
>>> Counter({'z': 9,'a':4, 'c':2, 'b':8, 'y':2, 'v':2})
Counter({'z': 9, 'b': 8, 'a': 4, 'c': 2, 'y': 2, 'v': 2})
```
Somehow these printed values are printed in descending order (9 > 8 > 4 > 2). Why is it so? Does `Counter` store values sorted?
PS: Am on python 3.7.7
|
2022/05/12
|
[
"https://Stackoverflow.com/questions/72221253",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1317018/"
] |
In terms of the data stored in a `Counter` object: The data is insertion-ordered as of Python 3.7, because `Counter` is a subclass of the built-in `dict`. Prior to Python 3.7, there was no guaranteed order of the data.
However, the behavior you are seeing is coming from `Counter.__repr__`. We can see from the [source code](https://github.com/python/cpython/blob/a834e2d8e1230c17193c19b425e83e0bf736179e/Lib/collections/__init__.py#L731) that it will first try to display using the [`Counter.most_common`](https://docs.python.org/3/library/collections.html#collections.Counter.most_common) method, which sorts by value in descending order. If that fails because the values are not sortable, it will fall back to the `dict` representation, which, again, is insertion-ordered.
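A quick sketch showing both behaviours on Python 3.7+:
```
from collections import Counter

c = Counter({'a': 4, 'z': 9, 'b': 8})
print(c)                 # Counter({'z': 9, 'b': 8, 'a': 4})  (most_common order)
print(dict(c))           # {'a': 4, 'z': 9, 'b': 8}           (insertion order)
print(c.most_common())   # [('z', 9), ('b', 8), ('a', 4)]
```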
|
The order [depends on the python version](https://docs.python.org/3/library/collections.html#collections.Counter).
For Python < 3.7 there is no guaranteed order; since Python 3.7 the order is that of insertion.
>
> Changed in version 3.7: As a dict subclass, Counter inherited the
> capability to remember insertion order. Math operations on Counter
> objects also preserve order. Results are ordered according to when an
> element is first encountered in the left operand and then by the order
> encountered in the right operand.
>
>
>
Example on python 3.8 (3.8.10 [GCC 9.4.0]):
```
from collections import Counter
Counter({'z': 9,'a':4, 'c':2, 'b':8, 'y':2, 'v':2})
```
Output:
```
Counter({'z': 9, 'a': 4, 'c': 2, 'b': 8, 'y': 2, 'v': 2})
```
#### how to check that `Counter` doesn't sort by count
As `__str__` in `Counter` returns the elements in `most_common` order, it is not a reliable way to check the stored order.
Convert to `dict` and the `__str__` representation will be faithful.
```
c = Counter({'z': 9,'a':4, 'c':2, 'b':8, 'y':2, 'v':2})
print(dict(c))
# {'z': 9, 'a': 4, 'c': 2, 'b': 8, 'y': 2, 'v': 2}
```
| 9,284
|
70,957,167
|
In Python I'm trying to convert the future date into another format and subtract the current date from it, but it's throwing an error.
Python version = Python 3.6.8
```
from datetime import datetime
enddate = 'Thu Jun 02 08:00:00 EDT 2022'
todays = datetime.today()
print ('Today =',todays)
Modified_date1 = datetime.strptime(enddate, ' %a %b %d %H:%M:%S %Z %Y')
subtract_days= Modified_date1 - todays
print (subtract_days.days)
```
***Output***
```
Today = 2022-02-02 08:06:53.687342
Traceback (most recent call last):
File "1.py", line 106, in trusstore_output
Modified_date1 = datetime.strptime(enddate1, ' %a %b %d %H:%M:%S %Z %Y')
File "/usr/lib64/python3.6/_strptime.py", line 565, in _strptime_datetime
tt, fraction = _strptime(data_string, format)
File "/usr/lib64/python3.6/_strptime.py", line 362, in _strptime
(data_string, format))
ValueError: time data ' Thu Jun 02 08:00:00 EDT 2022' does not match format ' %a %b %d %H:%M:%S %Z %Y'
During handling of the above exception, another exception occurred:
```
**Linux server date**
```
$ date
Wed Feb 2 08:08:36 CST 2022
```
|
2022/02/02
|
[
"https://Stackoverflow.com/questions/70957167",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9658186/"
] |
Using a `while` loop:
```r
x <- list1
while (inherits(x <- x[[1]], "list")) {}
x
#> Time Series:
#> Start = 1
#> End = 100
#> Frequency = 1
#> [1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
#> [19] 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
#> [37] 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
#> [55] 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
#> [73] 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
#> [91] 91 92 93 94 95 96 97 98 99 100
x <- list2
while (inherits(x <- x[[1]], "list")) {}
x
#> Time Series:
#> Start = 1
#> End = 100
#> Frequency = 1
#> [1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
#> [19] 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
#> [37] 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
#> [55] 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
#> [73] 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
#> [91] 91 92 93 94 95 96 97 98 99 100
```
|
**base R solution** I've just got the idea for a pretty simple function. It is a `while` loop that runs until the element is not a list.
```
myfun <- function(mylist){
dig_deeper <- TRUE
while(dig_deeper){
    mylist <- mylist[[1]]
dig_deeper <- is.list(mylist)
}
return(mylist)
}
```
It works as expected
```
> myfun(list1)
Time Series:
Start = 1
End = 100
Frequency = 1
[1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
[25] 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48
[49] 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
[73] 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96
[97] 97 98 99 100
```
| 9,285
|
1,922,623
|
I am using the MySQLdb module of Python on an FC11 machine. Here, I have an issue. I have the following implementation for one of our requirements:
1. connect to mysqldb and get a DB handle, open a cursor, execute a delete statement, commit and then close the cursor.
2. Again using the DB handle above, I am performing a "select" statement on some different table, using the same cursor approach described above.
I was able to delete a few records using step 1, but the step 2 select is not working. It simply returns no records for step 2, though there are records available in the DB.
But when I comment out step 1 and execute step 2, I can see that step 2 works fine. Why is this so?
Though there are records, why is the above sequence failing?
Any ideas would be appreciated.
Thanks!
|
2009/12/17
|
[
"https://Stackoverflow.com/questions/1922623",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/233901/"
] |
I've had the same issue, with the TreeView not scrolling to the selected item.
What I did was, after expanding the tree to the selected TreeViewItem, I called a Dispatcher Helper method to allow the UI to update, and then used the TransformToAncestor on the selected item, to find its position within the ScrollViewer. Here is the code:
```
// Allow UI Rendering to Refresh
DispatcherHelper.WaitForPriority();
// Scroll to selected Item
TreeViewItem tvi = myTreeView.SelectedItem as TreeViewItem;
Point offset = tvi.TransformToAncestor(myScroll).Transform(new Point(0, 0));
myScroll.ScrollToVerticalOffset(offset.Y);
```
Here is the DispatcherHelper code:
```
public class DispatcherHelper
{
private static readonly DispatcherOperationCallback exitFrameCallback = ExitFrame;
/// <summary>
/// Processes all UI messages currently in the message queue.
/// </summary>
public static void WaitForPriority()
{
// Create new nested message pump.
DispatcherFrame nestedFrame = new DispatcherFrame();
// Dispatch a callback to the current message queue, when getting called,
// this callback will end the nested message loop.
// The priority of this callback should be lower than that of event message you want to process.
DispatcherOperation exitOperation = Dispatcher.CurrentDispatcher.BeginInvoke(
DispatcherPriority.ApplicationIdle, exitFrameCallback, nestedFrame);
// pump the nested message loop, the nested message loop will immediately
// process the messages left inside the message queue.
Dispatcher.PushFrame(nestedFrame);
// If the "exitFrame" callback is not finished, abort it.
if (exitOperation.Status != DispatcherOperationStatus.Completed)
{
exitOperation.Abort();
}
}
private static Object ExitFrame(Object state)
{
DispatcherFrame frame = state as DispatcherFrame;
// Exit the nested message loop.
frame.Continue = false;
return null;
}
}
```
|
Jason's ScrollViewer trick is a great way of moving a TreeViewItem to a specific position.
One problem, though: in MVVM you do not have access to the ScrollViewer in the view model. Here is a way to get to it anyway. If you have a TreeViewItem, you can walk up its visual tree until you reach the embedded ScrollViewer:
```
// Get the TreeView's ScrollViewer
DependencyObject parent = VisualTreeHelper.GetParent(selectedTreeViewItem);
while (parent != null && !(parent is ScrollViewer))
{
parent = VisualTreeHelper.GetParent(parent);
}
```
| 9,295
|
47,043,554
|
Suppose we have a dataset like this:
```
X =
6 2 1
-2 4 -1
4 1 -1
1 6 1
2 4 1
6 2 1
```
I would like to split this into two arrays: one holding the rows whose last value is -1 and another holding the rows whose last value is 1.
```
X0 =
-2 4 -1
4 1 -1
```
And,
```
X1 =
6 2 1
1 6 1
2 4 1
6 2 1
```
How can we do this in numpy efficiently?
In simple python, I could do this like this:
```
X = np.loadtxt('data.txt')
X0, X1 = [], []
for i in range(len(X)):
    if X[i][-1] == 1:
        X1.append(X[i])
    else:
        X0.append(X[i])
```
This is slow and cumbersome; numpy is fast and easy, so I would appreciate it if there is an easier way in numpy. Thanks.
|
2017/10/31
|
[
"https://Stackoverflow.com/questions/47043554",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Suppose you have an array:
```
>>> arr
array([[ 6, 2, 1],
[-2, 4, -1],
[ 4, 1, -1],
[ 1, 6, 1],
[ 2, 4, 1],
[ 6, 2, 1]])
```
Then simply:
```
>>> mask1 = arr[:, -1] == 1
>>> mask2 = arr[:, -1] == -1
>>> X1 = arr[mask1]
>>> X2 = arr[mask2]
```
Results:
```
>>> X1
array([[6, 2, 1],
[1, 6, 1],
[2, 4, 1],
[6, 2, 1]])
>>> X2
array([[-2, 4, -1],
[ 4, 1, -1]])
```
|
You could just use `numpy` and use slicing to access your data e.g.:
```
X[X[:, 2] == 1] # Returns all rows where the third column equals 1
```
or as a complete example:
```
import numpy as np
# Random data set
X = np.zeros((6, 3))
X[:3, 2] = 1
X[3:, 2] = -1
np.random.shuffle(X)
print(X[X[:, 2] == 1])
print('-')
print(X[X[:, 2] == -1])
```
| 9,296
|
4,837,218
|
Last night I came across the term called Jython which was kind of new to me so I started reading about it only to add more to my confusion about Python in general. I have never really used Python either. So here is what I am confused about.
1. `Python is implemented in C` - Does that mean that the interpreter was written in C or does the interpreter convert Python source code into C?
2. CPython is nothing but the original Python & the term was just coined to later distinguish it from Jython - true or false?
3. Given that Python is implemented in C (I'm not really sure what that means), does that mean Python can be seamlessly integrated with any C code?
4. Is Jython like a new programming language or does its syntax & other programming constructs look exactly similar to the original python? or is it just python which can be integrated with java code?
5. If none of my above questions answer the difference between Python & Jython, what is it?
|
2011/01/29
|
[
"https://Stackoverflow.com/questions/4837218",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/568638/"
] |
"Python" is the name of the language itself, not of a particular interpreter implementation, just as "C" is the name of a programming language and not of a particular compiler.
"CPython" is an implementation of an interpreter of the Python language written in C. It compiles Python source code to byte code and interprets the byte code. It's the oldest and reference implementation of the Python language.
"Jython" is another implementation of the Python language. It translates Python code to Java byte code, which can be executed on a Java virtual machine.
|
a) Python is a programming language. Interpreters of Python code are implemented using other programming languages like C (PyPy even uses Python itself to implement one, I believe).
b) CPython, aka Classic Python, is the reference implementation and is written in C. Jython is a Python interpreter written in Java.
c) Using C libraries in Python is pretty easy, e.g. using the ctypes module (see the sketch after this list).
d) see b.
e) see a and b.
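For item c), a minimal ctypes sketch (assuming a Linux system with glibc; the library path is an assumption):
```
import ctypes

# load the C standard library and call one of its functions
libc = ctypes.CDLL("libc.so.6")
print(libc.abs(-5))   # calls C's abs(): prints 5
```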
| 9,298
|
46,431,145
|
So I'm a Python beginner and I can't find a way to put an input variable into a random call, like so:
```
import time
import random
choice = input('easy (max of 100 number), medium(max of 1000) or hard (max of 5000)? Or custom)')
time.sleep(2)
if choice == 'custom':
cus = input(' max number?')
print ('okey dokey!')
eh = 1
cuss = random.randrange(0, (cus))
easy = random.randint(0, 100)
medium = random.randint(0, 1000)
hard = random.randint (0, 5000)
if choice == 'easy' or 'medium' or 'hard' or cus:
while eh == 1:
esy = int(input('what do you think the number is?)'))
if esy > easy:
time.sleep(2)
print ('too high!')
elif esy == easy:
print (' CORRECT!')
break
elif esy < easy:
print ('TOO low')
```
Is there any way I can put a number that someone typed into a random call, like on line 9?
|
2017/09/26
|
[
"https://Stackoverflow.com/questions/46431145",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8677673/"
] |
There are multiple things wrong with your code. You're using ('s with print, which suggests you're using python3. This means that when you do `cus = input()`, cus is now a string. You probably want to do `cus = int(input())`
Doing `choice == 'easy' or 'medium' or 'hard' or cus` will always be True; I have no idea why people keep doing this in Python, but you need to do `choice == 'easy' or choice == 'medium' or choice == 'hard' or choice == 'custom'`. Of course, a better way to do it is with a dictionary.
```
values = {'easy': 100, 'medium': 1000, 'hard': 5000}
```
And then do
```
value = values.get(choice)
if not value:
value = int(input("Please enter custom value"))
```
And then you can do
`randomvalue = random.randint(1,value)`
|
Convert your variable `cus` to an int and pass it as you normally would
```
cuss = random.randrange(0, int(cus))
```
| 9,303
|
7,817,926
|
I'm trying to use scapy on win32 python2.7
I've managed to compile all the other dependencies except this one.
Can someone help me get to this executable?
"dnet-1.12.win32-py2.7.exe"
(I promise to update the this question too and the scapy manual,
[Running Scapy on Windows with Python 2.7](https://stackoverflow.com/questions/5447461/running-scapy-on-windows-with-python-v2-7-enthought-python-distribution-7))
**Update:**
I've managed to compile it with mingw32.
I'm using vs2005, and I had to make some fixes to libdnet to actually get it to work (it looks like the last time they compiled it on Windows it was with vs6.0).
I'll try updating the scapy manual... (and upload the executables there)
|
2011/10/19
|
[
"https://Stackoverflow.com/questions/7817926",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/459189/"
] |
I just called super on the following overridden method of the BaseExpandableListAdapter and since then the notifyDataSetChanged() works.
```
@Override
public void registerDataSetObserver(DataSetObserver observer) {
super.registerDataSetObserver(observer);
}
```
|
you can try to additionally call invalidateViews() on your ExpandableListView when refreshing.
| 9,304
|
61,038,562
|
I'm writing a program where I'm converting an expression to postfix and calculating it.
```
from pythonds.basic import Stack
def doMath(op, op1, op2):
if op == "*":
return int(op1) * int(op2)
elif op == "/":
return int(op1) / int(op2)
elif op == "+":
return int(op1) + int(op2)
    elif op == "-":
        return int(op1) - int(op2)
def postfixEval(postfixExpr):
operandStack = Stack()
tokenList = postfixExpr.split()
for token in tokenList:
print(tokenList)
print("this is token: ", token)
if token in "0123456789":
operandStack.push(token)
print("pop: ",operandStack.peek())
elif not operandStack.isEmpty():
operand2 = operandStack.pop()
operand1 = operandStack.pop()
result = doMath(token, operand1, operand2)
print (result)
operandStack.push(result)
return operandStack.pop()
print(postfixEval('7 8 + 3 2 + /'))
print(postfixEval("17 10 + 3 * 9 /"))
```
So when I run the first postfixEval it returns 3.0,
but the second print raises `IndexError: pop from empty list`.
Apparently it's because of the 2-digit numbers; how could I fix that?
Thanks
|
2020/04/05
|
[
"https://Stackoverflow.com/questions/61038562",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7737188/"
] |
When you try:
```
if token in "0123456789":
operandStack.push(token)
```
for `token = '17'`, this check fails, since `'17'` is not a substring of `'0123456789'`.
So change it to this:
```
try:
if float(token):
operandStack.push(token)
except:
#your code here
```
---
**HOW THIS WORKS:**
When a `str` is passed, `float()` tries to convert it into a float, which succeeds only if the string represents a number. (Note that `float('0')` is falsy, so a plain truthiness test would skip a zero token; relying on the exception alone is more robust.)
|
Replace `if token in "0123456789"` (checks if `token` is a substring of `"0123456789"`) with `if token.isdigit()` (checks if `token` consists of decimal digits).
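The difference matters exactly for multi-digit tokens:
```
>>> "17".isdigit()          # True: every character is a digit
True
>>> "17" in "0123456789"    # False: substring test; '1' and '7' are not adjacent
False
```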
| 9,309
|
9,816,139
|
I've got an old RRD file that was only set up to track 1 year of history. I decided more history would be nice. I did rrdtool resize, and the RRD is now bigger. I've got old backups of this RRD file and I'd like to merge the old data in so that the up-to-date RRD also has the historical data.
I've tried the rrd contrib "merged-rrd.py" but it gives:
```
$ python merged-rrd.py ../temperature-2010-12-06.rrd ../temperature-2011-05-24.rrd merged1.rrd
merging old:../temperature-2010-12-06.rrd to new:../temperature-2011-05-24.rrd. creating merged rrd: merged1.rrd
Traceback (most recent call last):
File "merged-rrd.py", line 149, in <module>
mergeRRD(old_path, new_path, mer_path)
File "merged-rrd.py", line 77, in mergeRRD
odict = getXmlDict(oxml)
File "merged-rrd.py", line 52, in getXmlDict
cf = line.split()[1]
IndexError: list index out of range
```
Also tried "rrd\_merger.pl":
```
$ perl rrd_merger.pl --oldrrd=../temperature-2010-12-06.rrd --newrrd=../temperature-2011-05-24.rrd --mergedrrd=merged1.rrd
Dumping ../temperature-2010-12-06.rrd to XML: /tmp/temperature-2010-12-06.rrd_old_8615.xml
Dumping ../temperature-2011-05-24.rrd to XML: /tmp/temperature-2011-05-24.rrd_new_8615.xml
Parsing ../temperature-2010-12-06.rrd XML......parsing completed
Parsing ../temperature-2011-05-24.rrd XML...
Last Update: 1306217100
Start processing Round Robin DB
Can't call method "text" on an undefined value at rrd_merger.pl line 61.
at rrd_merger.pl line 286
at rrd_merger.pl line 286
```
Is there a tool to combine or merge RRDs that works?
|
2012/03/22
|
[
"https://Stackoverflow.com/questions/9816139",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/91238/"
] |
I ended up putting together a really simple script that works well enough for my case, by examining the existing python script.
<http://gist.github.com/2166343>
|
Looking at the XML file generated by rrdtool, there is a simple logic error in the Perl script. The `<cf>` and `<pdp_per_row>` elements are simple enough, but the `<xff>` tag is contained within a `<params>` tag, with the text inside it.
```
<cf> AVERAGE </cf>
<pdp_per_row> 1 </pdp_per_row> <!-- 300 seconds -->
<params>
<xff> 5.0000000000e-01 </xff>
</params>
```
The parsing just has to be tweaked a bit, and once it is working, the fix should be fed back here (where it is easy to Google) and also to the script's author.
| 9,310
|
65,131,966
|
Say I have a collection of scripts organized for convenience in a directory structure like so:
```
root
│ helpers.py
│
├───dir_a
│ data.txt
│ foo.py
│
└───dir_b
data.txt
bar.py
```
`foo.py` and `bar.py` are scripts doing different things, but they both load their distinct `data.txt` the same way. To avoid repeating myself, I wish to put a `load_data()` function in `helpers.py` that can be imported while running scripts in subdirectories `dir_a` and `dir_b`.
I have tried to import helpers in **`dir_a/foo.py`** using a relative import like this:
```
from ..helpers import load_data
foo_data = load_data('data.txt')
# ...rest of code...
```
But directly running **`foo.py`** as a script fails on line 1 with an `ImportError`:
```
Traceback (most recent call last):
File "path/to/file/foo.py", line 1, in <module>
from ..helpers import load_data
ImportError: attempted relative import with no known parent package
```
Is the spirit of what I'm attempting possible in python (i.e. simple and without modifying `sys.path`)? From *Case 3: Importing from Parent Directory* [on this site](https://chrisyeh96.github.io/2017/08/08/definitive-guide-python-imports.html), the answer seems to be no, and I need to restructure my collection of scripts to avoid importing one level up.
Bonus question: if not possible, what is the rationale for why not?
|
2020/12/03
|
[
"https://Stackoverflow.com/questions/65131966",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7227829/"
] |
I don't think there is a way to have node do what you want. I had a similar issue where the main project was CommonJS but one of the libraries, or a library it was using, was ESM. I don't remember the specific details, but it was a royal pain.
The basic workaround is to use `esm` to `import` the library giving you issues. It's likely that once you do that you will end up with a second layer of issues.
```
// example
const esmImport = require('esm')(module)
const {CookieJar, fetch} = esmImport('node-fetch-cookies')
```
Dynamic import also resolved the issue for me.
```
async init() {
const { CookieJar } = await import('node-fetch-cookies')
this.cookieJar = new CookieJar()
this.cookieJar.addCookie(`username=${this._user}`, this._url.toString())
this.cookieJar.addCookie('hippa=yes', this._url.toString())
}
```
|
I had the same issue and used single quotes for multi-value args like `--exec` to solve it.
```
nodemon --watch 'src/**/*' -e ts,tsx --exec 'ts-node ./src/index.ts'
```
| 9,315
|
8,587,633
|
I'm trying to write a python script for BusyBox on ESXi with mail functionality. It runs Python 2.5 with some libraries missing (i.e. the smtplib). I downloaded Python2.5 sources and copied the lib-folder to ESXi. Now I am trying to import the smtplib via "import lib.smtplib" but Python says:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/pysexi/lib/smtplib.py", line 46, in <module>
import email.Utils
File "/pysexi/lib/email/__init__.py", line 115, in <module>
setattr(sys.modules['email'], _name, importer)
KeyError: 'email'
```
I'm stuck. So every help and every thought is appreciated!
|
2011/12/21
|
[
"https://Stackoverflow.com/questions/8587633",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/921051/"
] |
Trying to install generic applications on an appliance or custom OS is always fun.
Just a guess, but it may be that the email lib is a compiled C module - i.e. not pure python.
I would try to use libraries that are pure Python, with no compiled code; I don't know if there are pure-Python versions of these libraries.
The other option is to track down what OS version ESXi is based on and then use the matching Python version from that OS.
|
I don't know anything about BusyBox or ESXi - therefore this may be more of a suggestion than an answer, but you might consider using a email service that supports an HTTP or RESTful API - such as [MailGun](http://documentation.mailgun.net/wrappers.html#python). They have a free plan for up to 200 emails a day, so it might not cost you anything.
Again, this may be more of a suggestion or a plan "B" (if no one can help you with this specific problem).
| 9,316
|
16,833,285
|
I have searched for an answer to this but the related solutions seem to concern `'print'`ing in the interpreter.
I am wondering if it is possible to print (physically on paper) python code in color from IDLE?
I have gone to: `File > Print Window` in IDLE and it seems to just print out a black and white version without prompting whether to print in color etc.
**Edit:**
It seems like this might not be available, so the option is to copy the code to a text editor like SciTE and print from there; I quite like the default IDLE syntax highlighting though.
|
2013/05/30
|
[
"https://Stackoverflow.com/questions/16833285",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1063287/"
] |
Use the IDLE extension called IDLE2HTML.py (search for this).
This lets IDLE print to an HTML file that has color in its style sheet.
Then save the HTML file to a PDF (the color will still be there).
|
I had the same problem and opened the py file in Visual Studio Code, then simply copied (Ctrl A / Ctrl C) and pasted the text (Ctrl V) into a Microsoft editor such as Word or RTF, or even Outlook. The colors stay, including when printing.
Note that in order to avoid the night mode set as a default in VS Code, go to
File->Preferences->Color Theme and choose a day mode which is more suitable for printing.
| 9,317
|
24,520,133
|
I'd like to download a series of pdf files from my intranet. I'm able to see the files in my web browser without issue, but when trying to automate the pulling of the file via python, I run into problems. After talking through the proxy set up at my office, I can download files from the internet quite easily with this [answer](https://stackoverflow.com/questions/22676/how-do-i-download-a-file-over-http-using-python/22776#22776):
```
url = 'http://www.sample.com/fileiwanttodownload.pdf'
user = 'username'
pswd = 'password'
proxy_ip = '12.345.56.78:80'
proxy_url = 'http://' + user + ':' + pswd + '@' + proxy_ip
proxy_support = urllib2.ProxyHandler({"http":proxy_url})
opener = urllib2.build_opener(proxy_support,urllib2.HTTPHandler)
urllib2.install_opener(opener)
file_name = url.split('/')[-1]
u = urllib2.urlopen(url)
f = open(file_name, 'wb')
f.write(u.read())
f.close()
```
but for whatever reason it won't work if the url is pointing to something on my intranet. The following error is returned:
```
Traceback (most recent call last):
File "<ipython-input-13-a055d9eaf05e>", line 1, in <module>
runfile('C:/softwaredev/python/pdfwrite.py', wdir='C:/softwaredev/python')
File "C:\Anaconda\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 585, in runfile
execfile(filename, namespace)
File "C:/softwaredev/python/pdfwrite.py", line 26, in <module>
u = urllib2.urlopen(url)
File "C:\Anaconda\lib\urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "C:\Anaconda\lib\urllib2.py", line 410, in open
response = meth(req, response)
File "C:\Anaconda\lib\urllib2.py", line 523, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Anaconda\lib\urllib2.py", line 442, in error
result = self._call_chain(*args)
File "C:\Anaconda\lib\urllib2.py", line 382, in _call_chain
result = func(*args)
File "C:\Anaconda\lib\urllib2.py", line 629, in http_error_302
return self.parent.open(new, timeout=req.timeout)
File "C:\Anaconda\lib\urllib2.py", line 410, in open
response = meth(req, response)
File "C:\Anaconda\lib\urllib2.py", line 523, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Anaconda\lib\urllib2.py", line 448, in error
return self._call_chain(*args)
File "C:\Anaconda\lib\urllib2.py", line 382, in _call_chain
result = func(*args)
File "C:\Anaconda\lib\urllib2.py", line 531, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: Service Unavailable
```
Using `requests.py` in the following code, I can successfully pull down files from the internet, but when trying to pull a pdf from my office intranet, I just get a connection error sent back to me in html. The following code is run:
```
import requests
url = 'http://www.intranet.sample.com/?layout=attachment&cfapp=26&attachmentid=57142'
proxies = {
"http": "http://12.345.67.89:80",
"https": "http://12.345.67.89:80"
}
local_filename = 'test.pdf'
r = requests.get(url, proxies=proxies, stream=True)
with open(local_filename, 'wb') as f:
for chunk in r.iter_content(chunk_size=1024):
print chunk
if chunk:
f.write(chunk)
f.flush()
```
And the html that comes back:
```
Network Error (tcp_error)
A communication error occurred: "No route to host"
The Web Server may be down, too busy, or experiencing other problems preventing it from responding to requests. You may wish to try again at a later time.
For assistance, contact your network support team.
```
Is it possible that there is some network security setting that prevents automated requests outside the web browser environment?
|
2014/07/01
|
[
"https://Stackoverflow.com/questions/24520133",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3746982/"
] |
Installing openers into urllib2 doesn't affect requests. You need to use requests' own support for proxies. It should be enough to pass them in the `proxies` argument to `get`, or you can set the `HTTP_PROXY` and `HTTPS_PROXY` environment variables. See <http://docs.python-requests.org/en/latest/user/advanced/#proxies>
```
import requests
proxies = {
"http": "http://10.10.1.10:3128",
"https": "http://10.10.1.10:1080",
}
requests.get("http://example.org", proxies=proxies)
```
|
Have you tried not using the proxy to download your files when they're on the intranet?
You could try something like this in python2
```
from urllib2 import urlopen
url = 'http://intranet/myfile.pdf'
with open(local_filename, 'wb') as f:
f.write(urlopen(url).read())
```
| 9,327
|
25,643,943
|
I am trying to write a python program in which the user inputs the polynomial and it calculates the derivative. My current code is:
```
print ("Derviatives: ")
k5 = raw.input("Enter 5th degree + coefficent: ")
k4 = raw.input("Enter 4th degree + coefficent: ")
k3 = raw.input("Enter 3rd degree + coefficent: ")
k2 = raw.input("Enter 2nd degree + coefficent: ")
k1 = raw.input("Enter 1st degree + coefficent: ")
k0 = raw.input("Enter constant: ")
int(k5)
int(k4)
int(k3)
int(k2)
int(k1)
int(k0)
print (k5, " ", k4, " ", k3, " ", k2, " ", k1, " ", k0)
1in = raw.input("Correct Y/N?")
if (1in != Y)
k5 = raw.input("Enter 5th degree + coefficent: ")
k4 = raw.input("Enter 4th degree + coefficent: ")
k3 = raw.input("Enter 3rd degree + coefficent: ")
k2 = raw.input("Enter 2nd degree + coefficent: ")
k1 = raw.input("Enter 1st degree + coefficent: ")
k0 = raw.input("Enter constant: ")
else
"""CODE GOES HERE"""
```
I am just a beginning python programmer so I am still a little bit fuzzy on some basic syntax issues. Are there any libraries that I should be importing?
|
2014/09/03
|
[
"https://Stackoverflow.com/questions/25643943",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3953943/"
] |
OK, use `raw_input` instead of `raw.input`. It is builtin, as is `int`, so nothing needs importing. When converting to a integer (`int`), you need to assign the result, or nothing will change. You can chain the functions, and use `k5 = int(raw_input("prompt.. "))`. Also, as pointed out by Evert, variable names cannot begin with numbers, so `1in` would have to be changed. This code should work:
```
print("Derviatives: ")
k5 = raw_input("Enter 5th degree + coefficent: ")
k4 = raw_input("Enter 4th degree + coefficent: ")
k3 = raw_input("Enter 3rd degree + coefficent: ")
k2 = raw_input("Enter 2nd degree + coefficent: ")
k1 = raw_input("Enter 1st degree + coefficent: ")
k0 = raw_input("Enter constant: ")
k5 = int(k5)
k4 = int(k4)
k3 = int(k3)
k2 = int(k2)
k1 = int(k1)
k0 = int(k0)
print(k5, " ", k4, " ", k3, " ", k2, " ", k1, " ", k0)
in1 = raw_input("Correct Y/N?")
if in1 != "Y":
k5 = raw_input("Enter 5th degree + coefficent: ")
k4 = raw_input("Enter 4th degree + coefficent: ")
k3 = raw_input("Enter 3rd degree + coefficent: ")
k2 = raw_input("Enter 2nd degree + coefficent: ")
k1 = raw_input("Enter 1st degree + coefficent: ")
k0 = raw_input("Enter constant: ")
else:
"""CODE GOES HERE"""
```
Also, check which version of python you are using. If it is python 3, you need to change `raw_input` to `input`. If you are using python 2, you don't need the brackets on the print statements. E.g. `print("Derviatives: ")` => `print "Derviatives: "`.
|
I see the following issues with your code:
1. Variable names can't start with a digit: `1in`
2. You have undefined variable `Y`. I suppose you want to compare with a string literal, so write `"Y"` instead.
3. Parentheses aren't needed in python's `if` operators, but a trailing colon is necessary. Same goes for `else`.
4. A list of coefficients would be certainly a better approach than a bunch of variables.
5. It's `raw_input`, not `raw.input`
6. While just doing `int(k5)` would validate the value (raise the exception if string doesn't represent an integer) it won't affect variable, so `k5` would still contain a string.
Guess it wouldn't hurt to show a bit of code, since I'm not doing your task here (implementing the algorithm), but merely displaying language features. So... I'd do it somehow like this:
```
# There are no do...while-style loops in Python, so we'll do it this way
# loop forever, and break when confirmation prompt gets "y"
while True:
k = [] # That's an empty list.
# I'm assuming Python 2.x here. Add parentheses for Python 3.x.
print "Derivatives:"
# This range will work like a list [5, 4, 3, 2, 1].
    for n in range(5, 0, -1):
        k.append(int(raw_input("Enter {0}th degree + coefficient: ".format(n))))
## Or you could do it this way:
#for num in ["5th", "4th", "3rd", "2nd", "1st"]:
    # k.append(int(raw_input("Enter {0} degree + coefficient: ".format(num))))
k.append(int(raw_input("Enter constant: ".format(n)))
# Here's a bit tricky part - a list comprehension.
# Read on those, they're useful.
# We need the comprehension, because our list is full of ints,
    # and join expects a list of strings.
print " ".join(str(x) for x in k)
## If you didn't get the previous line - that's fine,
## it's fairly advanced subject. Just do like you're used to:
#print k[0], " ", k[1], " ", k[2], " ", k[3], " ", k[4], " ", k[5]
# Oh, and did you notice 5th coeff. is at k[0]?
# That's because we appended them to the list in reverse order.
# Let's reverse the list in-place, so the constant will be k[0] and so on.
k.reverse()
# You don't always need an intermediate variable for single-use values,
# like when asking for confirmation. Just put raw_input call directly
# in if statement condition - it would work just fine.
#
# And let's be case-insensitive here and accept both "Y" and "y"
if raw_input("Correct [Y/N]? ").lower() == "y":
break
```
| 9,328
|
52,651,733
|
I am trying to find refreshing elements (a time in minutes) on a webpage. My code worked only for simple text earlier. Now I use *Ctrl+Shift+I*, point at my element and use *"Copy XPath"*.
I also have the Chrome extension *"XPath helper"* and tried it as well; it gives a longer XPath than the one in my code below, and it doesn't work either.
>
> Error: NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//\*[@id....
>
>
>
I also tried to find the element by class, by tag and by *CSS* selector. It only worked by tag, and not perfectly, on a different page.
Printing is also unreliable: sometimes `find_element(By.XPATH,'//*[...).text` works, sometimes not.
I don't understand why it works on one page and not on another. I want to find elements by *XPath* in flash later.
**UPDATE** I retried the code and now it works! But it still doesn't work on the next webpage. Why is it so changeable? Does the XPath change when the page reloads? What is the simplest way to get refreshing text info from flash, opened in a Chrome browser?
```
import selenium
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
options = webdriver.ChromeOptions()
options.add_argument('headless')
driver = webdriver.Chrome(r"C:\Users\vishniakov\Desktop\python bj\driver\chromedriver.exe",chrome_options=options)
driver.get("https://www.betfair.com/sport/football/event?eventId=28935432")
print(driver.title)
elem =driver.find_element(By.XPATH,'//*[@id="yui_3_5_0_1_1538670363144_2571"]').text
print(elem)
```
|
2018/10/04
|
[
"https://Stackoverflow.com/questions/52651733",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10290047/"
] |
You should set a `Tag` instead of assigning an id to the dynamically created view.
```
public View getView(int position, View convertView, ViewGroup parent) {
LayoutInflater inflator=LayoutInflater.from(this.mContext);
View layout=inflator.inflate(R.layout.activity_main, parent, false);
ImageView imageView;
if (convertView == null) {
// if it's not recycled, initialize some attributes
imageView = new ImageView(mContext);
imageView.setLayoutParams(new ViewGroup.LayoutParams(170, 170));
imageView.setScaleType(ImageView.ScaleType.CENTER_CROP);
imageView.setPadding(4, 4, 4, 4);
} else {
imageView = (ImageView) convertView;
}
// set tag
imageView.setTag(position);
return imageView;
}
```
Later, when getting the view in `onClick` or `onItemClicked`, you can do something like this:
```
public void onClick(View view) {
Object tag = view.getTag();
    if (tag instanceof Integer) {
int pos = (Integer)tag;
// use position to identify that item.
}
}
```
As you are new to this, I would suggest using `RecyclerView` instead of `ListView`. If you still need to use a normal adapter, go with the `ViewHolder` pattern.
|
The position effectively acts as an id. Or have I missed the point?
| 9,330
|
43,207,159
|
First of all, I'm pretty new to Python. I also searched for a solution, but I guess the usual approach (subprocess.popen) won't work in my case.
I have to pass arguments to a listener in an already running python script without starting the script over and over again. There is an example of how to pass a message to an LCD screen:
```
function printMsgText(message_text)
local f = io.popen("home/test/show_message.py '" .. message_text .. "'")
end
```
This lua-script above defines the process which gets called everytime a message is recieved. The called process (show\_message.py) looks like that:
```
import sys
from sense_hat import SenseHat
sense = SenseHat()
sense.clear()
sense.show_message(sys.argv[1])
```
I need something similar, except that there is another script running in the background, so show\_message.py is not the final process but needs to pass the argument/message to another, already running script. My idea was to just let show\_message.py print the message to the console and use sys.argv in the main process as well, but I'm a little afraid that it could get messy.
Is there any easy way to do this?
Kind regards
Edit:
The main script is controlling a stepper-motor. Based on the user input, the motor drives a pre-defined number of steps. The part of the script waiting for the user-input looks like this:
```
while wait:
user_position = input("Where do you wanna go? (0, 1, 2, back): ")
wait = False
# Console output
print("Position: " + str(user_position))
if user_position == "0":
stepper.set_target_position(position_zero)
wait = True
elif user_position == "1":
        stepper.set_target_position(position_one)
wait = True
elif user_position == "2":
stepper.set_target_position(position_two)
wait = True
elif user_position == "back":
break
```
Now I need to pass the desired position via a web application which is designed the way I described above (e.g. calling a lua script every time a variable/argument is passed) and not via the console.
|
2017/04/04
|
[
"https://Stackoverflow.com/questions/43207159",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5607983/"
] |
Once a process is running it won't re-evaluate its command line arguments. You need other ways to communicate with it. This is colloquially called *inter-process communication* (IPC), and there are several ways to achieve it. Here are a few:
* Files
* Pipes (on platforms that support them)
* Shared memory
* Socket communication
* Message passing
* RPC
Probably the most approachable way is standard streams (STDIN, STDOUT) as provided by e.g. `subprocess.popen()` which you mentioned. But this requires a parent-child relation between the communicating processes. Maybe you can go this route.
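For illustration, a minimal sketch of the standard-streams route; the script name and the one-position-per-line protocol are assumptions, not part of the question:
```
import subprocess

# Start the long-running script once, keeping a pipe to its STDIN.
proc = subprocess.Popen(
    ["python", "motor_control.py"],  # hypothetical script name
    stdin=subprocess.PIPE,
    universal_newlines=True,
)
proc.stdin.write("1\n")  # send the desired position as one line
proc.stdin.flush()
```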
Another way for Python, if you want to avoid parent-child relations and which I have had good experiences with, is [Pyro](https://pythonhosted.org/Pyro4/). It was easy to use and worked well, albeit with some performance penalty. It is also a very "infrastructure-ish" way to go about it; both processes have to be coded to use it.
|
You can use some sort of messaging library that will allow you to communicate between processes. [ZeroMQ](http://www.zeromq.org/bindings:python) is a good option, and has python bindings.
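For illustration, a minimal REQ/REP sketch with pyzmq; the port and message format are assumptions, not part of the question. The receiving side lives inside the long-running motor script:
```
import zmq

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://127.0.0.1:5555")
while True:
    position = socket.recv_string()  # e.g. "0", "1", "2" or "back"
    socket.send_string("ok")         # REQ/REP requires a reply
    if position == "back":
        break
```
and the sending side is what the web application triggers:
```
import zmq

context = zmq.Context()
socket = context.socket(zmq.REQ)
socket.connect("tcp://127.0.0.1:5555")
socket.send_string("1")      # the desired position
print(socket.recv_string())  # "ok"
```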
| 9,331
|
69,259,654
|
When I debug, I often find it useful to print a variable's name and contents.
```python
a = 5
b = 7
print("a: "+str(a)+"\n b: "+str(b))
```
I want to write a function that achieves this. So far, I have the following function:
```python
def dprint(varslist, *varnames):
string = [var+": "+str(varslist[var]) for var in varnames]
print("\n".join(string))
```
An example for its usage is
```python
a = 5
b = 7
dprint(vars(), "a", "b")
```
My question is: is there any way to write `dprint()` such that `vars()` doesn't need to be explicitly passed to it every time it's called? Specifically, is there a way for `dprint()` to access its calling function's `vars()`?
Then, we could rewrite it as
```python
def dprint(*varnames):
varslist = # magic code that gets vars() from calling function
string = [var+": "+str(varslist[var]) for var in varnames]
print("\n".join(string))
```
|
2021/09/20
|
[
"https://Stackoverflow.com/questions/69259654",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16337951/"
] |
Use [`sys._getframe`](https://docs.python.org/3/library/sys.html#sys._getframe):
>
> Return a frame object from the call stack. If optional integer depth is given, return the frame object that many calls below the top of the stack. If that is deeper than the call stack, ValueError is raised. The default for depth is zero, returning the frame at the top of the call stack.
>
>
>
Then use `f_locals` to get local vars of `caller`.
```
import sys
def dprint(*varnames):
caller = sys._getframe(1) # depth=1
for var in varnames:
print(f'{var}: {caller.f_locals.get(var)}')
def foo():
a = 5
b = 7
dprint("a", "b")
foo()
```
Output:
```
a: 5
b: 7
```
|
In Python you can debug a program by calling the `breakpoint` built-in function (available since Python 3.7).
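For example, the function stops the program and opens a pdb prompt where you can inspect and reassign variables before continuing with `c`:
```
def foo():
    a = 5
    breakpoint()  # Python 3.7+: drops into pdb right here
    print(a)

foo()
```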
| 9,332
|
3,966,146
|
On our production server we need to split 900k images into different dirs and update 400k rows (MySQL with InnoDB engine). I wrote a python script which goes through the following steps:
1. Select a small chunk of data from the db (10 rows)
2. Make new dirs
3. Copy files to the created dirs and rename them
4. Update the db (there are some triggers on update which will load the server)
5. Repeat
My code:
```
import os, shutil
import database # database.py from tornado
LIMIT_START_OFFSET = 0
LIMIT_ROW_COUNT = 10
SRC_PATHS = ('/var/www/site/public/upload/images/',)
DST_PATH = '/var/www/site/public/upload/new_images/'
def main():
offset = LIMIT_START_OFFSET
while True:
db = Connection(DB_HOST, DB_NAME, DB_USER, DB_PASSWD)
db_data = db.query('''
SELECT id AS news_id, image AS src_filename
FROM emd_news
ORDER BY id ASC
LIMIT %s, %s''', offset, LIMIT_ROW_COUNT)
offset = offset + LIMIT_ROW_COUNT
news_images = get_news_images(db_data) # convert data to easy-to-use list
make_dst_dirs(DST_PATH, [i['dst_dirname'] for i in news_images]) # make news dirs
news_to_update = copy_news_images(SRC_PATHS, DST_PATH, news_images) # list of moved files
db.executemany('''
UPDATE emd_news
SET image = %s
WHERE id = %s
LIMIT 1''', [(i['filename'], i['news_id']) for i in news_to_update])
db.close()
if not db_data: break
if __name__ == '__main__':
main()
```
Quite simple task, but I'm a little bit nervous about performance.
How can I make this script more efficient?
UPD:
In the end I used the original script without any modifications. It took about 5 hours; it was fast in the beginning and very slow at the end.
|
2010/10/19
|
[
"https://Stackoverflow.com/questions/3966146",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/228012/"
] |
What I recommend.
1. Add an `isProcessed` column to your table.
2. Make your script work on a chunk of, say, 1k rows for the first run (of course select only rows that are not processed).
3. Benchmark it.
4. Adjust the chunk size if needed.
5. Build another script that calls this one at intervals.
Don't forget to add some sleep time in both your scripts!
This will work if your change does not need to be continuous (and I don't think it has to be). If you have to do it all at once you should put your database offline during the time the script runs.
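A minimal sketch of the chunked loop under those assumptions, reusing the `db` connection from the question; the `isProcessed` column is the one suggested in step 1, and the file-moving part is elided:
```
import time

CHUNK = 1000  # tune this after benchmarking

while True:
    rows = db.query('''
        SELECT id AS news_id, image AS src_filename
        FROM emd_news
        WHERE isProcessed = 0
        LIMIT %s''', CHUNK)
    if not rows:
        break
    # ... move/rename the files for this chunk here ...
    db.executemany('''
        UPDATE emd_news
        SET isProcessed = 1
        WHERE id = %s''', [(r['news_id'],) for r in rows])
    time.sleep(1)  # give the triggers and other clients some breathing room
```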
|
```
db_data = db.query('''
SELECT id AS news_id, image AS src_filename
FROM emd_news
ORDER BY id ASC
LIMIT %s, %s''', offset, LIMIT_ROW_COUNT)
# Why is there any code here at all? If there's no data, why proceed?
if not db_data: break
```
| 9,333
|
32,506,956
|
I am doing [ex47 in Learning Python the Hard Way](http://learnpythonthehardway.org/book/ex47.html).
My problem here is that I am unable to import a module `from ex47.game.py import Room` from the other file in the following code:
```
from nose.tools import *
from ex47.game.py import Room
def test_room():
gold = Room( "GoldRoom",
"""This room has gold in it you can grab. There's a
door to the north.""")
assert_equal(gold.name, "GoldRoom")
assert_equal(gold.paths, {})
def test_room_paths():
center = Room("Center", "Test room in the center.")
north = Room("North", "Test room in the north.")
south = Room("South", "Test room in the south.")
center.add_paths({'north': north, 'south':south})
assert_equal(center.go('north'), north)
assert_equal(center.go('south'), south)
def test_map():
start = Room("Start", "You can go west and down a hole.")
west = Room("Trees", "There are trees here, you can go east.")
down = Room("Dungeon", "It's dark down here, you can go up.")
start.add_paths({'west':west, 'down':down})
west.add_paths({'east':start})
down.add_paths({'up':start})
assert_equal(start.go('west'),west)
assert_equal(start.go('west').go('east'),start)
assert_equal(start.go('down').go('up'), start)
```
And I have gotten the following error:
[](https://i.stack.imgur.com/3cwQn.png)
According to the website, this is a common problem and the author suggests running `export PYTHONPATH=.` on Mac. But as you can see, I ran it first before running the test as well. Am I supposed to specify the PYTHONPATH differently, or is this due to some other problem?
|
2015/09/10
|
[
"https://Stackoverflow.com/questions/32506956",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1427176/"
] |
I just went through this exercise right now, and since I wanted a bit more explanation I searched the internet and found this post... hence my late response, but I hope this helps if someone else encounters this problem.
He is not really clear about the directory structure. You should have an **ex47** *folder* containing the `__init__.py` file and **game.py** inside the **ex47** *main folder*. Then in the *tests folder* you should have **ex47\_tests.py** ...
So then `from ex47.game import Room` works fine.
`from ex47` (the package) `.game` (the module) `import Room` (the class inside game.py)
```
ex47/
├── bin/
├── docs/
├── ex47/
│ ├── __init__.py
│ └── game.py
└── tests/
├── __init__.py
    ├── ex47_tests.py
```
|
`export PYTHONPATH=.` adds current directory to the list of paths that python will search when looking for a module.
That is, current directory at the time of the search, not the time you enter the line.
But, as you run your programe from `skeleton`, that will make python search for `ex47.game` in the `skeleton` directory.
How to fix it? Well, quick fix: `cd ../..` should bring you to the right place. But it's not very convenient. Better fix: change the `PYTHONPATH` variable to be the actual full path to the root of your project. That way, it won't depend on current directory.
Even better fix, you can make a small runner that sets the appropriate paths automatically before running anything. Assuming you put it at the root of your project, those lines will compute the full path to current script and add it to python search path:
```
import os
import sys
sys.path.append(os.path.abspath(os.path.dirname(__file__)))
# do whatever here, for instance run your tests
```
For instance, for running nosetests, you could create a launcher script:
```
#!/usr/bin/env python
import nose, os, sys
sys.path.append(os.path.abspath(os.path.dirname(__file__)))
nose.main()
```
You put it at the root of your project, call it `nosetests.py` and run it instead of `nosetests`
*(As a sidenote, the preferred form of any kind of text on stackoverflow is actual text. This is because it allows other people who may have the same issue to find it through search engines - you can simply copy-paste your text, then use the brackets `{}` icon to make it look right).*
| 9,335
|
71,640,992
|
I have a list of headings and subheadings of a document.
```
test_list = ['heading', 'heading','sub-heading', 'sub-heading', 'heading', 'sub-heading', 'sub-sub-heading', 'sub-sub-heading', 'sub-heading', 'sub-heading', 'sub-sub-heading', 'sub-sub-heading','sub-sub-heading', 'heading']
```
I want to assign unique index to each of the heading and the subheading like follows:
```
seg_ids = ['1', '2', '2_1', '2_2', '3', '3_1', '3_1_1', '3_1_2', '3_2', '3_3', '3_3_1', '3_3_2', '3_3_3', '4']
```
This is my code to create this result but it is messy and it is restricted to depth 3. If there is any document with a sub-sub-sub heading the code would become more complicated. Is there any pythonic way to do this?
```
seg_ids = []
for idx, an_ele in enumerate(test_list):
head_id = 0
subh_id = 0
subsubh_id = 0
if an_ele == 'heading' and idx == 0: # if it is the first element
head_id = '1'
seg_ids.append(head_id)
else:
last_seg_ids = seg_ids[idx-1].split('_') # find the depth of the last element
head_id = last_seg_ids[0]
if len(last_seg_ids) == 2:
subh_id = last_seg_ids[1]
elif len(last_seg_ids) == 3:
subh_id = last_seg_ids[1]
subsubh_id = last_seg_ids[2]
if an_ele == 'heading':
head_id= str(int(head_id)+1)
subh_id = 0 # reset sub_heading index
subsubh_id = 0 # reset sub_sub_heading index
elif an_ele == 'sub-heading':
subh_id= str(int(subh_id)+1)
subsubh_id = 0 # reset sub_sub_heading index
elif an_ele == 'sub-sub-heading':
subsubh_id= str(int(subsubh_id)+1)
else:
print('ERROR')
if subsubh_id==0:
if subh_id !=0:
seg_ids.append(head_id+'_'+subh_id)
else:
seg_ids.append(head_id)
if subsubh_id !=0:
seg_ids.append(str(head_id)+'_'+str(subh_id)+'_'+str(subsubh_id))
print(seg_ids)
```
|
2022/03/27
|
[
"https://Stackoverflow.com/questions/71640992",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9669200/"
] |
Those ghost updates are a known, long-standing issue, as evidenced by [this](https://github.com/hashicorp/terraform-provider-aws/issues/18311) still open, 3 year old issue on GH without a solution.
You can try updating your TF, as 0.13 is a very old version. You can also set up [ignore\_changes](https://www.terraform.io/language/meta-arguments/lifecycle#ignore_changes=) and see if that helps. If nothing works, there is not much you can do about it; it's an AWS provider and/or TF internal issue.
|
I encountered a similar thing when upgrading Aurora mysql from 5.6 to 5.7: `log_output` re-appeared in every plan output.
However, the configured value in the default parameter group changed from 5.6 to 5.7 (from TABLE to FILE). I suspect that since there was no actual change, the AWS API returns empty, the TF state is not updated, and this repeats forever.
So: in this case, removing the parameter from the TF code and leaving it at the default was the solution.
```
# plan output example
+ parameter {
+ apply_method = "immediate"
+ name = "log_output"
+ value = "FILE"
}
```
| 9,337
|
70,078,645
|
I have written the following python code to read in XYZ data as CSV and then grid to a GTiff format.
When I run the code I am getting no errors.
However, after trying to debug, I added some print statements and noticed that the functions aren't actually being called.
How can I run this script so that it all completes?
```
import sys
from botocore.exceptions import ClientError
import pandas as pd
import numpy as np
import rasterio
from datetime import datetime
from osgeo import gdal
class gdal_toolbox:
## CONSTANTS ##
## API handling Globals ##
gdal_types = [ 'GDT_Unknown','GDT_Byte','GDT_UInt16','GDT_Int16',\
'GDT_UInt32','GDT_Int32','GDT_Float32','GDT_Float64',\
'GDT_CInt16','GDT_CInt32','GDT_CFloat32','GDT_CFloat64',\
'GDT_TypeCount' ]
jobDict = {}
xyz_dict = {}
layerJson = {}
msk = {}
def __init__( self, kwargs ):
self.jobDict = kwargs
if self.jobDict['no_data'] is None:
self.jobDict['no_data'] = -11000
else:
self.jobDict['no_data'] = int(self.jobDict['no_data'])
if self.jobDict['gridAlgorithm'] is None:
self.jobDict['gridAlgorithm'] = 'nearest:radius1=2.25:radius2=2.25:nodata=' + str(self.jobDict['no_data'])
def normalizeToCsv( self ):
MAX_POINTS = 64000000
try:
# Read in ungridded data
self.df = pd.read_csv('C:/Users/......xyz', sep='\s+|,|:|\t',header=None, engine='python')
cnt = self.df.shape[0]
if(cnt > MAX_POINTS):
raise ValueError('Maximum number of points (' + str(cnt) + ' > ' + str(MAX_POINTS) + ') in datasource exceeded')
# convert to named x,y,z columns
print(str(datetime.now()) + ' normalizeToCsv: to_csv (start)')
self.ds = self.df.to_csv(self.csv_buf,sep=',',header=['x','y','z'],index=None)
self.csv_buf.seek(0)
print(str(datetime.now()) + ' normalizeToCsv: to_csv (end)')
dfsize = sys.getsizeof(self.df)
print('df (1) size : ' + str(dfsize))
#return df
except Exception as e:
self.logException(e)
raise
def csvToTiff(self):
try:
x = self.xyz_dict['xAxis'] / self.xyz_dict['xCellSize']
y = self.xyz_dict['yAxis'] / self.xyz_dict['yCellSize']
no_data = str(self.jobDict['no_data'])
if self.jobDict['srs'] is not None:
srs = self.jobDict['srs']
elif self.jobDict['wkt'] is not None:
srs = rasterio.crs.CRS.from_wkt(self.jobDict['wkt'])
option = gdal.GridOptions(format = 'GTIFF', outputType = gdal.GDT_Float32, width = x, height = y, \
outputBounds = [self.xyz_dict['minX'], self.xyz_dict['minY'], self.xyz_dict['maxX'], self.xyz_dict['maxY']], \
outputSRS = srs, algorithm=self.jobDict['gridAlgorithm'])
self.ds_tif = gdal.Grid('C:/Users/Public/......tif', self.ds, options = option)
except Exception as e:
self.logException(e)
raise
```
|
2021/11/23
|
[
"https://Stackoverflow.com/questions/70078645",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15310475/"
] |
Nothing in your file ever calls the class; Python only sees the definitions. Add an entry point at the end of the script, e.g.:
```
if __name__ == "__main__":
    # kwargs is whatever dict of options you build elsewhere
    app = gdal_toolbox(kwargs)
    app.normalizeToCsv()
    app.csvToTiff()
```
Applied to a stripped-down version of your class, the pattern looks like this:
```
class gdal_toolbox:
## CONSTANTS ##
def __init__( self, kwargs ):
print(kwargs)
def normalizeToCsv( self ):
MAX_POINTS = 64000000
print(MAX_POINTS)
def csvToTiff(self):
print("Its the secoend function")
if __name__ == "__main__":
kwargs="Its the keyword arguments"
gdal_toolbox(kwargs)
```
|
It seems like you are not executing anything in this piece of code, just defining class and functions within it, right?
| 9,338
|
58,822,095
|
I have a problem using pyarrow.orc module in Anaconda on Windows 10.
```
import pyarrow.orc as orc
```
throws an exception:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\apps\Anaconda3\envs\ws\lib\site-packages\pyarrow\orc.py", line 23, in <module>
import pyarrow._orc as _orc
ModuleNotFoundError: No module named 'pyarrow._orc'
```
On the other hand:
`import pyarrow`
works without any issues.
```
conda list
# packages in environment at C:\apps\Anaconda3\envs\ws:
#
# Name Version Build Channel
arrow-cpp 0.13.0 py37h49ee12d_0
...
numpy 1.17.3 py37h4ceb530_0
numpy-base 1.17.3 py37hc3f5095_0
...
pip 19.3.1 py37_0
pyarrow 0.13.0 py37ha925a31_0
...
python 3.7.5 h8c8aaf0_0
...
```
I've tried other versions of pyarrow with the same results.
```
conda -V
conda 4.7.12
```
|
2019/11/12
|
[
"https://Stackoverflow.com/questions/58822095",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12361891/"
] |
The ORC reader is not supported at all on Windows and has never been to my knowledge. Apache ORC in C++ is not known to build yet with the Visual Studio C++ compiler.
|
Bottom line up front,
I had the same error. This was the solution for me:
```
!pip install pyarrow==0.13.0
```
I'm not sure this is limited to Windows 10, I am getting the same error in AWS Sagemaker in the last few days. This was working fine before, on a previous Sagemaker instance.
Using the Conda Packages menu in Jupyter, the conda\_python3 kernel showed it had pyarrow 0.13.0 installed from <https://repo.anaconda.com/pkgs/main/linux-64>, build py36he6710b0\_0.
However a subsequent call to
```
!conda -list
```
Did not show pyarrow as being in the Jupyter conda\_python3 kernel, even after restarting the kernel.
Normally in a Sagemaker [Jupyter notebook] instance, I would use !pip commands because they just seem to work better, and don't have the timeout errors I sometimes find with the Conda Packages menu. (Also I don't need to worry about passing `-y` flags, the installs just happen)
Normally `!pip install pyarrow` was working, but I noticed it was installing **pyarrow 0.15.1 from Nov 1, 2019**.
Perhaps there is an error in that version with loading the \_orc package, or some other conflicting library.
My intuition is that something is wrong with the conda version of pyarrow 0.13.0, and with pyarrow 0.15.1.
In a Jupyter cell I tried this:
```
!pip uninstall pyarrow -y
!pip install pyarrow
from pyarrow import orc
```
Output:
```
Uninstalling pyarrow-0.15.1:
Successfully uninstalled pyarrow-0.15.1
Collecting pyarrow
Downloading https://files.pythonhosted.org/packages/6c/32/ce1926f05679ea5448fd3b98fbd9419d8c7a65f87d1a12ee5fb9577e3a8e/pyarrow-0.15.1-cp36-cp36m-manylinux2010_x86_64.whl (59.2MB)
|████████████████████████████████| 59.2MB 381kB/s eta 0:00:01
Requirement already satisfied: numpy>=1.14 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from pyarrow) (1.14.3)
Requirement already satisfied: six>=1.0.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from pyarrow) (1.11.0)
Installing collected packages: pyarrow
Successfully installed pyarrow-0.15.1
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-6-36378dee5a25> in <module>()
1 get_ipython().system('pip uninstall pyarrow -y')
2 get_ipython().system('pip install pyarrow')
----> 3 from pyarrow import orc
~/anaconda3/envs/python3/lib/python3.6/site-packages/pyarrow/orc.py in <module>()
23 from pyarrow import types
24 from pyarrow.lib import Schema
---> 25 import pyarrow._orc as _orc
26
27
ModuleNotFoundError: No module named 'pyarrow._orc'
```
**Note that when you try to uninstall pyarrow 0.15.1 and install a specific older version, like 0.13.0, you should restart the kernel after uninstalling. There are some incompatible binaries that get left behind.
I did not post that output because it was so long.**
```
pip uninstall pyarrow -y
```
Restart Kernel, then:
```
!pip install pyarrow==0.13.0
from pyarrow import orc
```
Output:
```
Collecting pyarrow==0.13.0
Using cached https://files.pythonhosted.org/packages/ad/25/094b122d828d24b58202712a74e661e36cd551ca62d331e388ff68bae91d/pyarrow-0.13.0-cp36-cp36m-manylinux1_x86_64.whl
Requirement already satisfied: numpy>=1.14 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from pyarrow==0.13.0) (1.14.3)
Requirement already satisfied: six>=1.0.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from pyarrow==0.13.0) (1.11.0)
Installing collected packages: pyarrow
Successfully installed pyarrow-0.13.0
```
There is now no error from the import command, and orc files can be read again.
| 9,339
|
67,928,883
|
I'm working on a REST application in python `Flask` with a driver called `pymongo`. But if someone knows `mongodb` well, he/she may be able to answer my question.
Suppose I'm inserting a new document in a collection, say `students`. I want to get the whole inserted document back as soon as it is saved in the collection. Here is what I've tried so far.
```py
res = db.students.insert_one({
"name": args["name"],
"surname": args["surname"],
"student_number": args["student_number"],
"course": args["course"],
"mark": args["mark"]
})
```
If i call:
```
print(res.inserted_id) ## i get the id
```
How can i get something like:
```json
{
"name": "student1",
"surname": "surname1",
"mark": 78,
"course": "ML",
"student_number": 2
}
```
from the `res` object. Because if i print `res` i am getting `<pymongo.results.InsertOneResult object at 0x00000203F96DCA80>`
|
2021/06/10
|
[
"https://Stackoverflow.com/questions/67928883",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12925831/"
] |
Put the data to be inserted into a dictionary variable; on insert, the variable will have the `_id` added by pymongo.
```
from pymongo import MongoClient
db = MongoClient()['mydatabase']
doc = {
"name": "name"
}
db.students.insert_one(doc)
print(doc)
```
prints:
```
{'name': 'name', '_id': ObjectId('60ce419c205a661d9f80ba23')}
```
|
Unfortunately, the commenters are correct. The PyMongo pattern doesn't specifically allow for what you are asking. You are expected to just use the inserted\_id from the [result](https://pymongo.readthedocs.io/en/stable/api/pymongo/results.html#pymongo.results.InsertOneResult) and if you needed to get the full object from the collection later do a regular [query operation afterwards](https://pymongo.readthedocs.io/en/stable/api/pymongo/collection.html#pymongo.collection.Collection.insert_one)
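For example, to get the full document back immediately after the insert:
```
# Fetch the inserted document using the id from the InsertOneResult.
doc = db.students.find_one({"_id": res.inserted_id})
print(doc)
```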
| 9,341
|
52,501,105
|
I am trying to connect to MSSQL with the help of django-pyodbc. I have installed all the required packages like FreeTDS, unixODBC and django-pyodbc. When I connect using tsql and isql I am able to connect:
```
>tsql -S mssql -U ********* -P ***********
locale is "en_US.UTF-8"
locale charset is "UTF-8"
using default charset "UTF-8"
1>
isql -v mssql ********* ***********
+---------------------------------------+
| Connected! |
| |
| sql-statement |
| help [tablename] |
| quit |
| |
+---------------------------------------+
SQL>
```
But, when I am trying to connect from python it does not work. I am getting below error:-
```
>python
Python 2.7.14 (default, May 16 2018, 06:48:40)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-28)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyodbc;
>>> print(pyodbc.connect("DSN=GB0015APP09.dir.dbs.com;UID=*********;PWD=*************").cursor().execute("select 1"));
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
pyodbc.Error: ('IM002', '[IM002] [unixODBC][Driver Manager]Data source name not found, and no default driver specified (0) (SQLDriverConnect)')
>>>
```
I have checked all other related answers but nothing is working. Not sure which data source name it is looking for.
Below are all the related configurations:-
```
/root>cat .freetds.conf
[mssql]
host = my_server_name
instance = MSSQL
Port = 1433
tds version =
/root>cat /etc/odbc.ini
[ServerDSN]
Driver = FreeTDS
Description = FreeTDS
Trace = No
Servername = mssql
Server = my_server_name
Port = 1433
Database = my_db_name
/root>cat /etc/odbcinst.ini
[FreeTDS]
Description = FreeTDS
Driver = /usr/lib64/libtdsodbc.so
Setup = /usr/lib64/libtdsS.so
fileusage=1
dontdlclose=1
UsageCount=1
```
I have wasted a couple of days trying to resolve this issue, but no luck.
Please help me out.
**EDIT**
```
/root>tsql -C
Compile-time settings (established with the "configure" script)
Version: freetds v0.95.81
freetds.conf directory: /etc
MS db-lib source compatibility: yes
Sybase binary compatibility: yes
Thread safety: yes
iconv library: yes
TDS version: 4.2
iODBC: no
unixodbc: yes
SSPI "trusted" logins: no
Kerberos: yes
OpenSSL: no
GnuTLS: yes
/root>odbcinst -j
unixODBC 2.3.1
DRIVERS............: /etc/odbcinst.ini
SYSTEM DATA SOURCES: /etc/odbc.ini
FILE DATA SOURCES..: /etc/ODBCDataSources
USER DATA SOURCES..: /root/.odbc.ini
SQLULEN Size.......: 8
SQLLEN Size........: 8
SQLSETPOSIROW Size.: 8
```
|
2018/09/25
|
[
"https://Stackoverflow.com/questions/52501105",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2034743/"
] |
This has to do with the scope in which RANKX evaluates the aggregation.
Try wrapping your aggregation with CALCULATE; you probably also want the SUM, not the MAX:
```
Rank Reseller = RANKX(ALL(ResellerSales), CALCULATE(SUM(ResellerSales[SalesAmount])))
```
You can also create a measure and use it in RANKX; since it is a measure, it works without explicitly adding the CALCULATE:
```
Sales Amount = SUM(ResellerSales[SalesAmount])
Rank Reseller = RANKX(ALL(ResellerSales), [Sales Amount])
```
EDIT:
```
Rank Reseller = RANKX(ALL('ResellerSales'[Resellerkey]), [Sales Amount])
```
Try it like this.
|
To rank the [ReSellerkey] by [SalesAmount] you'd want to do something like this:
```
Rank Sales Amount :=
RANKX(
'Table',
'Table'[SalesAmount],
,
ASC,
Dense
)
```
| 9,342
|
42,301,458
|
I have a large netcdf file which is three dimensional. I want to replace, for the variable `LU_INDEX` in the netcdf file, all values of 10 with 2.
I wrote this python script to do so, but it does not seem to work.
```
filelocation = 'D:/dataset.nc'
ncdataset = nc.Dataset(filelocation,'r')
lat = ncdataset.variables['XLAT_M'][0,:,:]
lon = ncdataset.variables['XLONG_M'][0,:,:]
lu_index = ncdataset.variables['LU_INDEX'][0,:,:]
lu_index_new = lu_index
ncdataset.close()
nlat,nlon=lat.shape
for ilat in range(nlat):
for ilon in range(lon):
if lu_index == 10:
lu_index_new[ilat,ilon] = 2
newfilename = 'D:/dataset.new.nc'
copyfile(ncdataset,newfilename)
newfile = nc.Dataset(newfilename,'r+')
newfile.variables['LU_INDEX'][0,:,:] = lu_index_new
newfile.close()
```
I get the error:
```
The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
I am not very experienced with Python, so if there is an easier way to do this you are very welcome to comment.
|
2017/02/17
|
[
"https://Stackoverflow.com/questions/42301458",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7581459/"
] |
You might try [NCO](http://nco.sf.net/nco.html#ncap2)
```
ncap2 -s 'where(LU_INDEX == 10) LU_INDEX=2' in.nc out.nc
```
|
I worked it out as follows:
```
import netCDF4 as nc
import numpy as np
pathname = 'D:'
filename = '%s/dataset.nc'%pathname
ncfile = nc.Dataset(filename,'r+')
lu_index = ncfile.variables['LU_INDEX'][:]
I = np.where(lu_index == 10)
lu_index[I] = 2
ncfile.variables['LU_INDEX'][:] = lu_index
ncfile.close()
print 'conversion complete'
```
| 9,343
|
25,811,202
|
I am trying to execute a shell command and kill it using the python **signal** module.
I know signals work only in the main thread, so I run the Django development server with,
```
python manage.py runserver --nothreading --noreload
```
and it works fine.
But when i deploy the django application with Apache/mod\_wsgi, it shows the following error:
```
[Fri Sep 12 20:07:00 2014] [error] response = function.call(request, **data)
[Fri Sep 12 20:07:00 2014] [error] File "/Site/cloud/lib/python2.6/site-packages/dajaxice/core/Dajaxice.py", line 18, in call
[Fri Sep 12 20:07:00 2014] [error] return self.function(*args, **kwargs)
[Fri Sep 12 20:07:00 2014] [error] File "/Site/cloud/soc/website/ajax.py", line 83, in execute
[Fri Sep 12 20:07:00 2014] [error] data = scilab_run(code, token, book_id, dependency_exists)
[Fri Sep 12 20:07:00 2014] [error] File "/Site/cloud/soc/website/helpers.py", line 58, in scilab_run
[Fri Sep 12 20:07:00 2014] [error] output = task.run().communicate()[0]
[Fri Sep 12 20:07:00 2014] [error] File "/Site/cloud/soc/website/timeout.py", line 121, in run
[Fri Sep 12 20:07:00 2014] [error] lambda sig,frame : os.killpg(self.pgid,self.timeoutSignal) )
[Fri Sep 12 20:07:00 2014] [error] ValueError: signal only works in main thread
```
Here is my apache virtualhost setting:
```
WSGIDaemonProcess testcloud display-name=scilab_cloud user=apache group=apache threads=1
WSGIProcessGroup testcloud
WSGIScriptAlias / /Site/cloud/soc/soc/wsgi.py
WSGIImportScript /Site/cloud/soc/soc/wsgi.py process-group=testcloud application-group=%{GLOBAL}
```
I also have the below settings outside virtualhost in httpd.conf:
```
WSGIRestrictSignal Off
WSGISocketPrefix /var/run/wsgi
```
[Here is the link](https://gist.github.com/devietti/1526975) to the program which uses **signal** and the one which I use in my django application.
Any help would be appreciated.
|
2014/09/12
|
[
"https://Stackoverflow.com/questions/25811202",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2102830/"
] |
Are you running with DEBUG enabled in settings.py?
If yes, try disabling it to see if the issue persists.
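For example, in your project's settings.py (the hostname is a placeholder; `ALLOWED_HOSTS` must be set once DEBUG is off):
```
# settings.py
DEBUG = False
ALLOWED_HOSTS = ['your-domain.example']  # required once DEBUG is off
```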
|
I'm not sure that can be done that easily, or at least not with mod\_wsgi. The decision to thread or not to thread is the sum of build-time and run-time options in both apache and mod\_wsgi, which both default to threading.
I would point you to the docs about that, but I can only post two links, so I think it's better to spend them in proposing a solution:
I had decently good experiences running shell commands from python with [sh](http://amoffat.github.io/sh/index.html), which even has an asynchronous execution module. Maybe you can start your python code running the shell command, and deal with the callback object when needed.
Or, even better, as `sh` asks you to take some care when handling signals, you could just run it without the asynchronous execution module, but in another process with [multiprocessing.Process](https://docs.python.org/3.5/library/multiprocessing.html#multiprocessing.Process), which will give you a `Process` object you can simply kill with `object.terminate()`
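A minimal sketch of that last option; the command being run is illustrative only:
```
import subprocess
from multiprocessing import Process

def run_task():
    # Illustrative long-running shell command.
    subprocess.call(["scilab", "-f", "script.sce"])

p = Process(target=run_task)
p.start()
p.join(timeout=10)  # wait up to 10 seconds for it to finish
if p.is_alive():
    p.terminate()   # no signal handlers needed in the web worker
```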
| 9,345
|
64,512,500
|
Arrays of object labels and distances to those objects are given. I want to apply knn to find the predicted label, using `np.bincount`. However, I don't understand how to use it.
See this example:
```
labels = [[1,1,2,0,0,3,3,3,5,1,3],
[1,1,2,0,0,3,3,3,5,1,3]]
weights= [[0,0,0,0,0,0,0,0,1,0,0],
[0,0,0,0,0,0,0,0,1,0,0]]
```
Imagine the nearest neighbors for 2 objects are given, with their labels and distances as above. I want the output `[5,5]`, because only the neighbours with that label have nonzero weight. I am doing the following:
```
eps = 1e-5
lab_weight = np.array(list(zip(labels, weights)))
predict = np.apply_along_axis(lambda x: np.bincount(x[0], weights=x[1]).argmax(), 2, lab_weight)
```
I expect that `x` will correspond to `[[1,1,2,0,0,3,3,3,5,1,3], [0,0,0,0,0,0,0,0,1,0,0]]`, but it won't. Other axis parameters are not working too. How can I achieve the goal? I want to use `numpy` functions and avoid python loops.
|
2020/10/24
|
[
"https://Stackoverflow.com/questions/64512500",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13184183/"
] |
The following solution gives me the desired result:
```
labels = [[1,1,2,0,0,3,3,3,5,1,3],
[1,1,2,0,0,3,3,3,5,1,3]]
weights= [[0,0,0,0,0,0,0,0,1,0,0],
[0,0,0,0,0,0,0,0,1,0,0]]
length = len(labels[0])
lab_weight = np.hstack((labels, weights))
predict = np.apply_along_axis(lambda x: np.bincount(x[:length], weights=x[length:]).argmax(), 1, lab_weight)
```
|
The problem with your code is that you attempt to apply your
function to **2-D** slices of your array, whereas *apply\_along\_axis*
applies the given function to **1-D** slices.
So your code generates an exception: *ValueError: object of too small
depth for desired array*.
To apply your function to 2-D slices, use a list comprehension based on
*np.rollaxis* and then create a *Numpy* array from it:
```
result = np.array([ np.bincount(x[0], weights=x[1]).argmax()
for x in np.rollaxis(lab_weight, 2) ])
```
The result, for your array, is:
```
array([1, 1, 2, 0, 0, 3, 3, 3, 5, 1, 3], dtype=int64)
```
To trace, for each iteration, the source array, intermediate results
and the final result, run:
```
i = 0
for x in np.rollaxis(lab_weight, 2):
print(f' i: {i}\n{x}'); i += 1
bc = np.bincount(x[0], weights=x[1])
bcm = bc.argmax()
print(bc, bcm)
```
| 9,346
|
5,769,382
|
I'm not sure if what I'm asking is possible at all, but since python is an interpreter it might be. I'm trying to make changes in an open-source project but because there are no types in python it's difficult to know what the variables have as data and what they do. You can't just look up the documentation on the var's type since you can't be sure what type it is. I want to drop to the terminal so I can quickly examine the types of the variables and what they do by typing help(var) or print(var). I could do this by changing the code and then re-running the program each time but that would be much slower.
Let's say I have a program:
```
def foo():
a = 5
my_debug_shell()
print a
foo()
```
my\_debug\_shell is the function I'm asking about. It would drop me to the '>>>' shell of the python interpreter where I can type help(a), and it would tell me that a is an integer. Then I type 'a=7', and some 'continue' command, and the program goes on to print 7, not 5, because I changed it.
|
2011/04/24
|
[
"https://Stackoverflow.com/questions/5769382",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/492336/"
] |
Here is a solution that doesn't require code changes:
```
python -m pdb prog.py <prog_args>
(pdb) b 3
Breakpoint 1 at prog.py:3
(pdb) c
...
(pdb) p a
5
(pdb) a=7
(pdb) ...
```
In short:
* start your program under debugger control
* set a break point at a given line of code
* let the program run up to that point
* you get an interactive prompt that lets you do what you want (type 'help' for all options)
|
Not sure what the real question is. Python gives you the 'pdb' debugger (google yourself) and in addition you can add logging and debug output as needed.
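For example, the `my_debug_shell()` from the question is essentially `pdb.set_trace()`:
```
import pdb

def foo():
    a = 5
    pdb.set_trace()  # opens the (Pdb) prompt; try 'p a', '!a = 7', then 'c'
    print a          # Python 2, matching the question

foo()
```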
| 9,347
|
69,486,648
|
I'm wondering if there's any way in python or perl to build a regex where you can define a set of options that can each appear at most once, in any order. So for example I would like a derivative of `foo(?: [abc])*` where `a`, `b`, `c` may each appear only once. So:
```
foo a b c
foo b c a
foo a b
foo b
```
would all be valid, but
```
foo b b
```
would not be
|
2021/10/07
|
[
"https://Stackoverflow.com/questions/69486648",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8710344/"
] |
I have assumed that the elements of the string can be in any order and appear any number of times. For example, `'a foo'` should match and `'a foo b foo'` should not.
You can do that with a series of alternations employing lookaheads, one for each substring of interest, but it becomes a bit of a dog's breakfast when there are many strings to consider. Let's suppose you wanted to match zero or one `"foo"`'s and/or zero or one `"a"`'s. You could use the following regular expression:
```
^(?:(?!.*\bfoo\b)|(?=(?:(?!\bfoo\b).)*\bfoo\b(?!(.*\bfoo\b))))(?:(?!.*\ba\b)|(?=(?:(?!\ba\b).)*\ba\b(?!(.*\ba\b))))
```
[Start your engine!](https://regex101.com/r/sOBFj4/1)
This matches, for example, `'foofoo'`, `'aa'` and `afooa`. If they are not to be matched remove the word breaks (`\b`).
Notice that this expression begins by asserting the start of the string (`^`) followed by two positive lookaheads, one for `'foo'` and one for `'a'`. To also check for, say, `'c'` one would tack on
```
(?:(?!.*\bc\b)|(?=(?:(?!\bc\b).)*\bc\b(?!(.*\bc\b))))
```
which is the same as
```
(?:(?!.*\ba\b)|(?=(?:(?!\ba\b).)*\ba\b(?!(.*\ba\b))))
```
with `\ba\b` changed to `\bc\b`.
It would be nice to be able to use back-references but I don't see how that could be done.
By hovering over the regular expression in the link an explanation is provided for each element of the expression. (If this is not clear I am referring to the cursor.)
Note that
```
(?!\bfoo\b).
```
matches a character provided it does not begin the word `'foo'`. Therefore
```
(?:(?!\bfoo\b).)*
```
matches a substring that does not contain `'foo'` and does not end with `'f'` followed by `'oo'`.
Would I advocate this approach in practice, as opposed to using simple string methods? Let me ponder that.
|
If the order of the strings doesn't matter, and you want to make sure every string occurs only once, you can turn the list into a set in Python:
```
my_lst = ['a', 'a', 'b', 'c']
my_set = set(my_lst)
print(my_set)
# {'a', 'c', 'b'}
```
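If you only need to validate the string rather than extract from it, comparing lengths spots the repeats:
```
line = "foo b b"
tokens = line.split()[1:]  # the options after "foo"
valid = len(tokens) == len(set(tokens)) and set(tokens) <= {"a", "b", "c"}
print(valid)  # False, because "b" repeats
```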
| 9,357
|
2,313,032
|
I am trying to create a regex that matches US state abbreviations in a string using python.
The abbreviation can be in the format:
```
CA
Ca
```
The string could be:
```
Boulder, CO 80303
Boulder, Co
Boulder CO
...
```
Here is what I have, which obviously doesn't work that well. I'm not very good with regular expressions and google didn't turn up much.
```
pat = re.compile("[A-Za-z]{2}")
st = pat.search(str)
stateAbb = st.group(0)
```
|
2010/02/22
|
[
"https://Stackoverflow.com/questions/2313032",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/74474/"
] |
A simple and reliable way is to have all the states listed:
```
states = ['IA', 'KS', 'UT', 'VA', 'NC', 'NE', 'SD', 'AL', 'ID', 'FM', 'DE', 'AK', 'CT', 'PR', 'NM', 'MS', 'PW', 'CO', 'NJ', 'FL', 'MN', 'VI', 'NV', 'AZ', 'WI', 'ND', 'PA', 'OK', 'KY', 'RI', 'NH', 'MO', 'ME', 'VT', 'GA', 'GU', 'AS', 'NY', 'CA', 'HI', 'IL', 'TN', 'MA', 'OH', 'MD', 'MI', 'WY', 'WA', 'OR', 'MH', 'SC', 'IN', 'LA', 'MP', 'DC', 'MT', 'AR', 'WV', 'TX']
regex = re.compile(r'\b(' + '|'.join(states) + r')\b', re.IGNORECASE)
```
Use another state list if you want non-US states.
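For example:
```
m = regex.search('Boulder, Co 80303')
if m:
    print(m.group(1).upper())  # CO
```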
|
```
re.search(r'\b[a-z]{2}\b', subject, re.I)
```
it will also match any other two-letter word (such as two-letter town names), though
| 9,367
|
42,244,819
|
My objective is to insert a key value pair in a YAML file which might be empty.
For example, my `hiera.yaml` (used in puppet) file contains only three hyphens.
Here is my code:
```
#!/usr/bin/python
import ruamel.yaml
import sys
def read_file(f):
with open(f, 'r') as yaml:
return ruamel.yaml.round_trip_load(yaml)
dict = {}
dict['first_name'] = sys.argv[1]
dict['last_name'] = sys.argv[2]
dict['role'] = sys.argv[3]
data = read_file('hiera.yaml')
pos = len(data)
data.insert(pos, sys.argv[1], dict, None)
ruamel.yaml.round_trip_dump(data, open('hiera.yaml', 'w'), block_seq_indent=1)
```
I am running it like:
`./alice.py Alice Doe Developer`
I get an output like:
```
Traceback (most recent call last):
File "./alice.py", line 16, in <module>
pos = len(data)
TypeError: object of type 'NoneType' has no len()
```
But when my hiera.yaml file is not empty, for example:
```
$ cat hiera.yaml
john:
$./alice.py Alice Doe Developer
$ cat hiera.yaml
john:
alice:
first_name: Alice
last_name: Doe
role: Developer
```
Then it works properly.
Please tell me how to insert a key-value pair (in my case a dict) into an empty YAML file. The examples on the ruamel.yaml official page use a docstring as sample YAML content and then insert key-value pairs.
|
2017/02/15
|
[
"https://Stackoverflow.com/questions/42244819",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7543970/"
] |
If you indeed want to get all the data into a single string you can do it using collect:
```
val rows = df.select("defectDescription").collect().map(_.getString(0)).mkString(" ")
```
You first select the relevant column (so you have just it) and collect it, which gives you an array of rows. The map turns each row into a string (there is just one column, index 0). Then mkString makes one overall string of them, with a space as the separator.
Note that this brings the entire dataframe to the driver, which might cause memory exceptions. If you need just some of the data you can use take(n) instead of collect to limit the number of rows to n.
|
```
val str1 = df.select("defectDescription").collect.mkString(",")
val str = str1.replaceAll("[\\[\\]]","")
```
Another way to do this is as follows:
The 1st line selects the particular column and then collects the subset; collect behaves as follows:
Collect (Action) - Return all the elements of the dataset as an array at the driver program. This is usually useful after a filter or other operation that returns a sufficiently small subset of the data.
mkString - the mkString method has an overload which allows you to provide a delimiter to separate each element in the collection.
The 2nd line just removes the additional brackets.
| 9,368
|
60,311,694
|
I'm taking a look at some parser combinator libraries in Python ([Parsy](https://parsy.readthedocs.io/en/latest/index.html) to be more precise) and I'm currently faced with the following problem, simplified with a minimally working example below:
```py
text = '''
AAAAAAAAAA AAAAAAAA AAAAAAAAAAAAAA
BBBBBBB START THE TEXT HERE SHOULD
BE CAPTURED STOP CCCCCCCCCC CCCCCC
'''
start, stop = r"STARTS?", r"STOPS?"
s = section(text, start, stop)
print(s)
```
which should output:
```
THE TEXT HERE SHOULD
BE CAPTURED
```
The solution I'm currently working with does a regex lookahead. It works fine, but my original problem involves combining many of these little regexes, which can get messy and hard for others to maintain later.
```py
from typing import Pattern, TypeVar
import re
# A Generic type declaration.
T = TypeVar("T")
def first(text: str, pattern: str, default: T, flags=0) -> T:
"""
Given a `text`, a regex `pattern` and a `default` value, return the first match
in `text`. Otherwise return a `default` value if no match is found.
"""
match = re.findall(pattern, text, flags=flags)
return match[0] if len(match) > 0 else default
def section(text: str, begin: str, end: str) -> str:
"""
Given a `text` and two `start` and `stop` regexes, return the captured group
found in the interval. Otherwise, return an empty string if no match is found.
"""
return first(text, fr"{begin}([\s\S]*?)(?={end})", default="")
```
Parser combinators seem to be perfect for situations like these, but I'm unable to reproduce the same behavior as the working solution; any hints would be welcome:
```
# A Simpler example with hardcoded stuff
from parsy import regex, seq, string
text = '''
AAAAAAAAAA AAAAAAAA AAAAAAAAAAAAAA
BBBBBBB START THE TEXT HERE SHOULD
BE CAPTURED STOP CCCCCCCCCC CCCCCC
'''
start = regex(r"STARTS?")
middle = regex(r"[\s\S]*").optional()
stop = regex(r"STOPS?")
eol = string("\n")
# Work fine
start.parse("START")
middle.parse("")
stop.parse("STOP")
section = seq(
start,
middle,
stop
)
# Simpler case, breaks
section.parse("START AAA STOP")
```
Gives:
```
---------------------------------------------------------------------------
ParseError Traceback (most recent call last)
<ipython-input-260-fdec112e1648> in <module>
24 )
25 # Simpler case, breaks
---> 26 section.parse("START AAA STOP")
~/.venv/lib/python3.8/site-packages/parsy/__init__.py in parse(self, stream)
88 def parse(self, stream):
89 """Parse a string or list of tokens and return the result or raise a ParseError."""
---> 90 (result, _) = (self << eof).parse_partial(stream)
91 return result
92
~/.venv/lib/python3.8/site-packages/parsy/__init__.py in parse_partial(self, stream)
102 return (result.value, stream[result.index:])
103 else:
--> 104 raise ParseError(result.expected, stream, result.furthest)
105
106 def bind(self, bind_fn):
ParseError: expected 'STOPS?' at 0:14
```
|
2020/02/20
|
[
"https://Stackoverflow.com/questions/60311694",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4614840/"
] |
There was an issue related to the routing in the application. I had a parser inside the controller that was used to direct between the routes on "aluno" based on the first param.
Once I took the route with no params and put it first in the controller, the parser was no longer needed and the issue was gone. I hope this answer helps other people who run into the same problem.
|
Please check the result type of your controller method.
Change this:
```
@Controller()
export class MyController {
// ...
async myMethod() {
return {}
}
}
```
to:
```
@Controller()
export class MyController {
// ...
async myMethod():Promise<any> {
return {}
}
}
```
| 9,373
|
27,708,882
|
I have two Anaconda installations on my computer. The first one is based on Python 2.7 and the other is based on Python 3.4. The default Python version is the 3.4 though. What is more, I can start Python 3.4 either by typing **/home/eualin/.bin/anaconda3/bin/python** or just **python**. I can do the same but for Python 2.7 by typing **/home/eualin/.bin/anaconda2/bin/python**. My problem is that I don't know how to install new libraries under certain environments (either under Python 2.7 or Python 3.4). For example, when I do pip install seaborn the library gets installed under Python 3.4 by default when in fact I want to install it under Python 2.7. Any ideas?
**EDIT**
This is what I am doing so far: the ~/.bashrc file contains the following two blocks, of which only one is enabled at any given time.
```
# added by Anaconda 2.1.0 installer
export PATH="/home/eualin/.bin/anaconda2/bin:$PATH"
# added by Anaconda3 2.1.0 installer
#export PATH="/home/eualin/.bin/anaconda3/bin:$PATH"
```
Depending on which version I want to work with, I open the file, comment out the opposite block and do `source ~/.bashrc`. Then I install the libraries I want to use one by one. But is this the recommended way?
|
2014/12/30
|
[
"https://Stackoverflow.com/questions/27708882",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/706838/"
] |
You don't need multiple `anaconda` distributions for different python versions. I would suggest keeping only one.
`conda` basically lets you create environments for your different needs.
`conda create -n myenv python=3.3` creates a new environment named `myenv`, which works with a python3.3 interpreter.
`source activate myenv` switches to the newly created environment. This basically sets the `PATH` such that `pip`, `conda`, `python` and other binaries point to the correct environment and interpreter.
`conda install pip` is the first thing you may want to do. Afterwards you can use `pip` and `conda` to install the packages you need.
After activating your environment `pip install <mypackage>` will point to the right version of `pip` so no need to worry too much.
You may want to create environments for different python versions or different sets of packages. Of course you can easily switch between those environments using `source activate <environment name>`.
For more examples and details you may want to have a look at the [docs](http://conda.pydata.org/docs/).
|
Using virtualenv is your best option as @Dettorer has mentioned.
I found this method of installing and using virtualenv the most useful.
Check it out:
[Proper way to install virtualenv](https://stackoverflow.com/questions/4324558/whats-the-proper-way-to-install-pip-virtualenv-and-distribute-for-python)
| 9,374
|
18,730,612
|
I want to run both a websocket and a flash policy file server on port 80 using Tornado. The reason for not wanting to run a server on the default port 843 is that it's often closed in corporate networks. Is it possible to do this and if so, how should I do this?
I tried the following structure, which does not seem to work: the websocket connection works, but the policy file request is not routed to the `TCPHandler`.
```
#!/usr/bin/python
import tornado.httpserver
import tornado.ioloop
import tornado.web
import tornado.websocket
import tornado.gen
from tornado.options import define, options
from tornado.tcpserver import TCPServer
define("port", default=80, help="run on the given port", type=int)
class FlashPolicyServer(TCPServer):
def handle_stream(self, stream, address):
self._stream = stream
self._read_line()
def _read_line(self):
self._stream.read_until('\n', self._handle_read)
def _handle_read(self, data):
policyFile = ""
self._stream.write(policyFile)
self._read_line()
class WebSocketHandler(tornado.websocket.WebSocketHandler):
def open(self):
pass
def on_message(self, message):
pass
def on_close(self):
pass
def main():
tornado.options.parse_command_line()
mainLoop = tornado.ioloop.IOLoop.instance()
app = tornado.web.Application(
handlers=[
(r"/websocket", WebSocketHandler),
(r"/", FlashPolicyServer)
], main_loop=mainLoop
)
httpServer = tornado.httpserver.HTTPServer(app)
httpServer.listen(options.port)
mainLoop.start()
if __name__ == "__main__":
main()
```
Any ideas? If this is not possible, would another idea be to serve the policy file via port 443?
|
2013/09/10
|
[
"https://Stackoverflow.com/questions/18730612",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2259361/"
] |
The new formula becomes L = P / { [(1+c)^n − 1] / [c(1+c)^n] }, which is the same as L = P · [c(1+c)^n] / [(1+c)^n − 1].
Let me work this out for you. To move L to the other side you have to multiply both sides by 1/L, so now the left side of the equation is P/L. To get rid of the P you have to multiply both sides by 1/P, which leaves L alone as the reciprocal 1/L. To get rid of that reciprocal you invert both sides of the equation, and you get the new formula above.
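In standard notation (assuming, as the derivation implies, that P is the loan principal, L the monthly payment, c the periodic interest rate, and n the number of payments), the same algebra reads:
```
P = L \cdot \frac{(1+c)^n - 1}{c(1+c)^n}
\;\Rightarrow\;
\frac{1}{L} = \frac{(1+c)^n - 1}{P \, c (1+c)^n}
\;\Rightarrow\;
L = P \cdot \frac{c(1+c)^n}{(1+c)^n - 1}
```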
|
Your monthly payment isn't being calculated correctly in the code that isn't working properly. If you look at it, you'll see that you're not factoring in your debts whatsoever.
I'm not sure exactly how you're choosing to calculate the monthly payment, but my guess is you need to subtract your debts from your income and then do your ceiling math, maybe something like this:
```
var debt_ceiling = (income - debts) * .92;
var monthly_payment = (debt_ceiling * .28).toFixed(2);
```
Or something like that... but in there is your error, I think.
| 9,377
|
25,514,378
|
This sample python program:
```
document='''<p>This is <i>something</i>, it happens
in <b>real</b> life</p>'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(document)
print(soup.prettify())
```
produces the following output:
```
<html>
<body>
<p>
This is
<i>
something
</i>
, it happens
in
<b>
real
</b>
life
</p>
</body>
</html>
```
That's wrong, because it adds whitespace before and after each opening and closing tag and, for example, there should be no space between `</i>` and `,`. I would like it to:
1. Not add whitespace where there are none (even around block-level tags they could be problematic, if they are styled with `display:inline` in CSS.)
2. Collapse all whitespace in a single space, except optionally for line wrapping.
Something like this:
```
<html>
<body>
<p>This is
<i>something</i>,
it happens in
<b>real</b> life</p>
</body>
</html>
```
Is this possible with `BeautifulSoup`? Any other recommended HTML parser that can deal with this?
|
2014/08/26
|
[
"https://Stackoverflow.com/questions/25514378",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1538701/"
] |
Beautiful Soup's `.prettify()` method is defined as outputting each tag on its own line (<http://www.crummy.com/software/BeautifulSoup/bs4/doc/index.html#pretty-printing>). If you want something else you'll need to make it yourself by walking the parse tree.
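If it helps, here is a minimal sketch of such a tree walk (the `render` helper is my own invention, not part of BeautifulSoup): it emits tags inline without adding whitespace and collapses whitespace runs in text nodes to a single space. Tag attributes are ignored for brevity.
```
import re
from bs4 import BeautifulSoup, NavigableString

def render(node):
    # Text node: collapse runs of whitespace (including newlines) to one space.
    if isinstance(node, NavigableString):
        return re.sub(r"\s+", " ", str(node))
    # Tag: render children inline with no whitespace added around the tags.
    inner = "".join(render(child) for child in node.children)
    return "<{0}>{1}</{0}>".format(node.name, inner)

document = "<p>This is <i>something</i>, it happens\nin <b>real</b> life</p>"
soup = BeautifulSoup(document, "html.parser")
print("".join(render(child) for child in soup.children))
# -> <p>This is <i>something</i>, it happens in <b>real</b> life</p>
```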
|
As the previous comments and thebjorn stated, BeautifulSoup's definition of pretty HTML puts each tag on its own line. However, to deal with some of your whitespace problems, you can collapse the document first, like so:
```
from bs4 import BeautifulSoup
document = """<p>This is <i>something</i>, it happens
in <b>real</b> life</p>"""
document_stripped = " ".join(l.strip() for l in document.split("\n"))
soup = BeautifulSoup(document_stripped).prettify()
print(soup)
```
Which outputs this:
```
<html>
<body>
<p>
This is
<i>
something
</i>
, it happens in
<b>
real
</b>
life
</p>
</body>
</html>
```
| 9,379
|
14,281,469
|
So I've got these huge text files that are filled with a single comma delimited record per line. I need a way to process the files line by line, removing lines that meet certain criteria. Some of the removals are easy, such as one of the fields is less than a certain length. The hardest criteria is that these lines all have timestamps. Many records are identical except for their timestamps and I have to remove all records but one that are identical and within 15 seconds of one another.
So I'm wondering if some others can come up with the best approach for this. I did come up with a small program in Java that accomplishes the task, using JodaTime for the timestamp stuff, which makes it really easy. However, the initial way I coded the program was running into OutOfMemoryError heap space errors. I refactored the code a bit and it seemed OK for the most part, but I do still believe it has some memory issues, as once in a while the program just seems to get hung up. That, and it just seems to take way too long. I'm not sure if this is a memory leak issue, a poor coding issue, or something else entirely. And yes, I tried increasing the heap size significantly but still was having issues.
I will say that the program needs to be in either Perl or Java. I might be able to make a python script work too but I'm not overly familiar with python. As I said, the timestamp stuff is easiest (to me) in Java because of the JodaTime library. I'm not sure how I'd accomplish the timestamp stuff in Perl. But I'm up for learning and using whatever would work best.
I will also add that the files being read in vary tremendously in size, but some big ones are around 100 MB with something like 1.3 million records.
My code essentially reads in all the records and puts them into a HashMap with the keys being a specific subset of the data from a record that similar records would share. So, a subset of the record not including the timestamps, which would be different. This way you'd end up with some number of records with identical data but that occurred at different times. (So completely identical minus the timestamps.)
The value of each key, then, is a Set of all records that have the same subset of data. Then I simply iterate through the HashMap, taking each set and iterating through it. I take the first record and compare its times to all the rest to see if they're within 15 seconds. If so, the record is removed. Once that set is finished it's written out to a file, until all the records have been gone through. Hopefully that makes sense.
This works but clearly the way I'm doing it is too memory intensive. Anyone have any ideas on a better way to do it? Or, a way I can do this in Perl would actually be good because trying to insert the Java program into the current implementation has caused a number of other headaches. Though perhaps that's just because of my memory issues and poor coding.
Finally, I'm not asking someone to write the program for me. Pseudo code is fine. Though if you have ideas for Perl I could use more specifics. The main thing I'm not sure how to do in Perl is the time comparison stuff. I've looked a little into Perl libraries but haven't seen anything like JodaTime (though I haven't looked much). Any thoughts or suggestions are appreciated. Thank you.
|
2013/01/11
|
[
"https://Stackoverflow.com/questions/14281469",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/947756/"
] |
Reading all the rows in is not ideal, because you need to store the whole lot in memory.
Instead you could read line by line, writing out the records that you want to keep as you go. You could keep a cache of the rows you've hit previously, bounded to contain only rows within 15 seconds of the current one. In very rough pseudo-code, for every line you'd read:
```
var line = ReadLine()
DiscardAnythingInCacheOlderThan(line.Date().Minus(15 seconds);
if (!cache.ContainsSomethingMatchingCriteria()) {
// it's a line we want to keep
WriteLine(line);
}
UpdateCache(line); // make sure we store this line so we don't write it out again.
```
As pointed out, this assumes that the lines are in time stamp order. If they aren't, then I'd just use UNIX `sort` to make it so they are, as that'll quite merrily handle extremely large files.
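Since the question mentions Python as an option, here is a minimal runnable sketch of this streaming approach in Python. It assumes (my assumptions, not stated in the question) that the file is already sorted by timestamp, that the timestamp is the first comma-separated field in `YYYY-MM-DD HH:MM:SS` format, and that "identical" means all remaining fields match exactly:
```
from datetime import datetime, timedelta

WINDOW = timedelta(seconds=15)

def dedupe(in_path, out_path):
    cache = {}  # key (all fields except the timestamp) -> timestamp of last record kept
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            fields = line.rstrip("\n").split(",")
            ts = datetime.strptime(fields[0], "%Y-%m-%d %H:%M:%S")
            key = tuple(fields[1:])
            # Evict cache entries older than the window so memory stays bounded.
            for k in [k for k, t in cache.items() if ts - t > WINDOW]:
                del cache[k]
            if key in cache:  # an identical record was kept within the last 15 seconds
                continue
            cache[key] = ts
            dst.write(line)
```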
|
You might read the file and output just the line numbers to be deleted (to be sorted and used in a separate pass.) Your hash map could then contain just the minimum data needed plus the line number. This could save a lot of memory if the data needed is small compared to the line size.
| 9,381
|
32,054,066
|
I'm using the [`websockets`](https://github.com/aaugustin/websockets) library to create a websocket server in Python 3.4. Here's a simple echo server:
```python
import asyncio
import websockets
@asyncio.coroutine
def connection_handler(websocket, path):
while True:
msg = yield from websocket.recv()
if msg is None: # connection lost
break
yield from websocket.send(msg)
start_server = websockets.serve(connection_handler, 'localhost', 8000)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
```
Let's say we – additionally – wanted to send a message to the client whenever some event happens. For simplicity, let's send a message periodically every 60 seconds. How would we do that? I mean, because `connection_handler` is constantly waiting for incoming messages, the server can only take action *after* it has received a message from the client, right? What am I missing here?
Maybe this scenario requires a framework based on events/callbacks rather than one based on coroutines? [Tornado](http://www.tornadoweb.org/en/stable/websocket.html)?
|
2015/08/17
|
[
"https://Stackoverflow.com/questions/32054066",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1668646/"
] |
**TL;DR** Use [`asyncio.ensure_future()`](https://docs.python.org/3/library/asyncio-task.html#asyncio.ensure_future) to run several coroutines concurrently.
---
>
> Maybe this scenario requires a framework based on events/callbacks rather than one based on coroutines? Tornado?
>
>
>
No, you don't need any other framework for this. The whole idea of an asynchronous application vs. a synchronous one is that it doesn't block while waiting for a result. It doesn't matter whether it is implemented using coroutines or callbacks.
>
> I mean, because connection\_handler is constantly waiting for incoming messages, the server can only take action after it has received a message from the client, right? What am I missing here?
>
>
>
In a synchronous application you would write something like `msg = websocket.recv()`, which would block the whole application until a message is received (as you described). In an asynchronous application it's completely different.
When you do `msg = yield from websocket.recv()` you say something like: suspend execution of `connection_handler()` until `websocket.recv()` produces something. Using `yield from` inside a coroutine returns control back to the event loop, so other code can be executed while we're waiting for the result of `websocket.recv()`. Please refer to the [documentation](https://docs.python.org/3/library/asyncio-task.html#coroutines) to better understand how coroutines work.
>
> Let's say we – additionally – wanted to send a message to the client whenever some event happens. For simplicity, let's send a message periodically every 60 seconds. How would we do that?
>
>
>
You can use [`asyncio.async()`](https://docs.python.org/3/library/asyncio-task.html#asyncio.async) to run as many coroutines as you want before executing the blocking call that [starts the event loop](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.BaseEventLoop.run_forever).
```
import asyncio
import websockets
# here we'll store all active connections to use for sending periodic messages
connections = []
@asyncio.coroutine
def connection_handler(connection, path):
connections.append(connection) # add connection to pool
while True:
msg = yield from connection.recv()
if msg is None: # connection lost
connections.remove(connection) # remove connection from pool, when client disconnects
break
else:
print('< {}'.format(msg))
yield from connection.send(msg)
print('> {}'.format(msg))
@asyncio.coroutine
def send_periodically():
while True:
yield from asyncio.sleep(5) # switch to other code and continue execution in 5 seconds
for connection in connections:
print('> Periodic event happened.')
yield from connection.send('Periodic event happened.') # send message to each connected client
start_server = websockets.serve(connection_handler, 'localhost', 8000)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.async(send_periodically()) # before blocking call we schedule our coroutine for sending periodic messages
asyncio.get_event_loop().run_forever()
```
Here is an example client implementation. It asks you to enter a name, receives it back from the echo server, waits for two more messages from the server (our periodic messages), and closes the connection.
```
import asyncio
import websockets
@asyncio.coroutine
def hello():
connection = yield from websockets.connect('ws://localhost:8000/')
name = input("What's your name? ")
yield from connection.send(name)
print("> {}".format(name))
for _ in range(3):
msg = yield from connection.recv()
print("< {}".format(msg))
yield from connection.close()
asyncio.get_event_loop().run_until_complete(hello())
```
Important points:
1. In Python 3.4.4 `asyncio.async()` was renamed to [`asyncio.ensure_future()`](https://docs.python.org/3/library/asyncio-task.html#asyncio.ensure_future).
2. There are special methods for scheduling [delayed calls](https://docs.python.org/3/library/asyncio-eventloop.html#delayed-calls), but they don't work with coroutines.
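As a small illustration of point 2 (my own sketch, not from the original answer): `call_later()` accepts only plain callbacks, so to start a coroutine from one you can wrap the scheduling call in a regular function:
```
import asyncio

@asyncio.coroutine
def notify():
    print('Delayed event fired.')

loop = asyncio.get_event_loop()
# call_later() can't take a coroutine directly, so schedule it from a
# plain callback via ensure_future (asyncio.async() before Python 3.4.4).
loop.call_later(5, lambda: asyncio.ensure_future(notify()))
loop.run_forever()
```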
|
I had the same issue and could hardly find a solution until I saw the perfect sample here: <http://websockets.readthedocs.io/en/stable/intro.html#both>
```
listener_task = asyncio.ensure_future(websocket.recv())
producer_task = asyncio.ensure_future(producer())  # producer() supplies the next outgoing message
done, pending = await asyncio.wait(
    [listener_task, producer_task],
    return_when=asyncio.FIRST_COMPLETED)  # Important
```
This way I can handle multiple coroutine tasks, such as a heartbeat and a Redis subscription.
| 9,382
|
27,812,789
|
I read that assignment in Python does not copy; it works like it does in C, where it assigns a pointer to an object.
But when I debug this function:
```
def popall(self):
objs = self.curstack
self.curstack = []
return objs
```
It looks like some kind of copy is taking place. After this function runs, `objs` is full of things and **`self.curstack`** is empty…
So some copy is going on. Is it deep or shallow?
|
2015/01/07
|
[
"https://Stackoverflow.com/questions/27812789",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2444342/"
] |
It doesn't copy anything. It's just that assigning to `self.curstack` does not modify whatever `self.curstack` used to refer to. It just makes `self.curstack` refer to something else.
Think of it this way. `self.curstack` points to some stuff. With `objs = self.curstack` you make `objs` point to that same stuff. With `self.curstack = []`, you make `self.curstack` point to an empty list. This doesn't remove any of the stuff that `self.curstack` used to point at; that stuff is still there and `objs` is still pointing at it.
[This article](http://foobarnbaz.com/2012/07/08/understanding-python-variables/) explains it using a nice analogy of label tags.
|
Python uses references everywhere. So your code works like this:
```
objs = self.curstack
```
Now `objs` points to whatever `curstack` was.
```
self.curstack = []
```
Now `curstack` points to an empty list. `objs` is unchanged, and points to the old `curstack`.
```
return objs
```
You return `objs`, which is the old `curstack`.
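A quick way to see the same thing outside the class (variable names here are just for illustration):
```
stack = [1, 2, 3]
objs = stack           # objs and stack are two names for the same list
stack = []             # rebind the name stack to a brand-new empty list
print(objs)            # [1, 2, 3] -- the original list is untouched
print(stack)           # []
print(objs is stack)   # False -- they no longer refer to the same object
```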
| 9,388
|