| qid | question | date | metadata | response_j | response_k | __index_level_0__ |
|---|---|---|---|---|---|---|
66,530,908
|
I am using `gcc10.2`, `c++20`.
I am studying C++ after 2 years of Python.
In Python we always did run-time checks for input validity:
```
def createRectangle(x, y, width, height):  # just for example
    for v in [x, y, width, height]:
        if v < 0:
            raise ValueError("Cant be negative")
    # blahblahblah
```
How would I do such a check in C++?
|
2021/03/08
|
[
"https://Stackoverflow.com/questions/66530908",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8471995/"
] |
```
for (int v : {x, y, width, height})
    if (v < 0)
        throw std::runtime_error("Can't be negative");
```
Note that such loop copies each variable twice. If your variables are heavy to copy (e.g. containers), use pointers instead:
```
for (const int *v : {&x, &y, &width, &height})
    if (*v < 0)
        ...
```
---
Comments also suggest using a reference, e.g. `for (const int &v : {x, y, width, height})`, but that will still give you one copy per variable. So if a type is that heavy, I'd prefer pointers.
|
In C++:
1. Use an appropriate type so validation (at the point you *use* the variables as opposed to setting them up from some input) is unnecessary, e.g. `unsigned` for a length. C++ is more strongly typed than Python, so you don't need large validation checks to make sure the correct type is passed to a function.
2. A `throw` is broadly equivalent to a `raise` in Python. In C++, we tend to derive an exception from `std::exception`, and throw that.
Boost ([www.boost.org](http://www.boost.org)) has a nice validation library which is well worth looking at.
| 5,146
|
55,351,647
|
I'm a beginner in Python and am following a book to practice.
In my book, the author uses this code
```
s, k = 0
```
but I get the error:
```none
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'int' object is not iterable
```
I want to know what happened here.
|
2019/03/26
|
[
"https://Stackoverflow.com/questions/55351647",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7290997/"
] |
You are asking to initialize two variables `s` and `k` from a single `int` object `0`, which of course is not iterable.
The correct syntax is:
```
s, k = 0, 0
```
**Where**
```
s, k = 0, 1
```
Would assign `s = 0` and `k = 1`
> Notice that each `int` object on the right is assigned to the
> corresponding variable on the left.
**OR**
```
s,k = [0 for _ in range(2)]
print(s) # 0
print(k) # 0
```
|
```
s = k = 0
```
OR
```
s, k = (0, 0)
```
depends on what you need
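(One caveat worth noting with the chained form `s = k = 0`: both names are bound to the same object. That is harmless for immutable `int`s, but surprising with mutable values:)
```
s = k = []     # both names point to the same list object
s.append(1)
print(k)       # [1]

s, k = [], []  # two independent lists
```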
| 5,147
|
41,377,820
|
I downgraded Postgres.app from 9.6 to 9.5 by removing the Postgres.app desktop app. (I installed Postgres by downloading the Postgres.app desktop app, and I installed Django with `pip install Django`.) I updated the locate database by doing
```
sudo /usr/libexec/locate.updatedb
```
And it looks like `initdb` is found in the right directory:
```
/Applications/Postgres.app/Contents/Versions/9.5/bin/initdb
/Applications/Postgres.app/Contents/Versions/9.5/share/doc/postgresql/html/app-initdb.html
/Applications/Postgres.app/Contents/Versions/9.5/share/man/man1/initdb.1
```
However, when I try to run a migration in my Django app, it looks like the path still points to the 9.6 version of Postgres:
```
Traceback (most recent call last):
File "manage.py", line 22, in <module>
execute_from_command_line(sys.argv)
File "/Users/me/Desktop/myapp/venv/lib/python2.7/site-packages/django/core/management/__init__.py", line 367, in execute_from_command_line
utility.execute()
File "/Users/me/Desktop/myapp/venv/lib/python2.7/site-packages/django/core/management/__init__.py", line 341, in execute
django.setup()
File "/Users/me/Desktop/myapp/venv/lib/python2.7/site-packages/django/__init__.py", line 27, in setup
apps.populate(settings.INSTALLED_APPS)
File "/Users/me/Desktop/myapp/venv/lib/python2.7/site-packages/django/apps/registry.py", line 108, in populate
app_config.import_models(all_models)
File "/Users/me/Desktop/myapp/venv/lib/python2.7/site-packages/django/apps/config.py", line 199, in import_models
self.models_module = import_module(models_module_name)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/Users/me/Desktop/myapp/venv/lib/python2.7/site-packages/tenant_schemas/models.py", line 4, in <module>
from tenant_schemas.postgresql_backend.base import _check_schema_name
File "/Users/me/Desktop/myapp/venv/lib/python2.7/site-packages/tenant_schemas/postgresql_backend/base.py", line 14, in <module>
import psycopg2
File "/Users/me/Desktop/myapp/venv/lib/python2.7/site-packages/psycopg2/__init__.py", line 50, in <module>
from psycopg2._psycopg import BINARY, NUMBER, STRING, DATETIME, ROWID
ImportError: dlopen(/Users/me/Desktop/myapp/venv/lib/python2.7/site-packages/psycopg2/_psycopg.so, 2): Library not loaded: /Applications/Postgres.app/Contents/Versions/9.6/lib/libpq.5.dylib
Referenced from: /Users/me/Desktop/myapp/venv/lib/python2.7/site-packages/psycopg2/_psycopg.so
Reason: image not found
```
|
2016/12/29
|
[
"https://Stackoverflow.com/questions/41377820",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1427176/"
] |
This solved the problem for me:
1. Uninstall your psycopg2:
```
pip uninstall psycopg2
```
2. Then reinstall it without the cache:
```
pip --no-cache-dir install -U psycopg2
```
|
I think that your problem is that the version of `psycopg2` that is currently installed references the C postgres library that was bundled with your previous install of postgres (`/Applications/Postgres.app/Contents/Versions/9.6/lib/libpq.5.dylib`).
Try uninstalling and reinstalling `psycopg2`.
```
pip uninstall psycopg2
pip install psycopg2
```
| 5,149
|
55,454,182
|
Read dates from user input in the form yyyy,mm,dd, then find the number of days between the dates. `datetime` wants integers, so the input needs to be converted from string to integer. Previous questions seem to involve `time` or a different OS, and those suggestions do not seem to work with Anaconda on Win10. I tried the ones that seem to be for Win10. The user input variables do not work as entered since they are strings and apparently need to be converted to an integer format.
I've tried various ways to convert to integer, and `datetime` responds with errors indicating it needs an integer but is seeing a tuple, list or string. If I input the date values directly, everything works fine. I've tried `-` as a separator in the user input and still get an error. I've tried wrapping the input in `(" ")`, `" "`, `' '` and `( )` and still get errors. I was not able to be more specific in the tags, but the following may help with responses: I am using Python 2.6 on Windows 10 in Anaconda. For some reason `datetime.strptime` is not recognized, so some of the responses did not work.
```
InitialDate = input("Enter the begin date as yyyy,mm,dd")
FinalDate = input("Enter the end date as yyyy,mm,dd")
```
```
Id = InitialDate.split(",")
Id2 = int(Id[0]),int(Id[1]),int(Id[2])
Iday = datetime.datetime(Id2)
Fd = FinalDate.split(",")
Fd2 = int(Fd[0]),int(Fd[1]),int(Fd[2])
Fday = datetime.datetime(Fd2)
age = (Fd2 - Id2).day
```
I expect an integer value for age.
I get type error an integer is required (got type tuple) before execution of the age line.
|
2019/04/01
|
[
"https://Stackoverflow.com/questions/55454182",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8491388/"
] |
Use `.strptime()` to convert string date to date object and then calculate diff
**Ex:**
```
import datetime
InitialDate = "2019,02,10" #input("Enter the begin date as yyyy,mm,dd")
FinalDate = "2019,02,20" #input("Enter the end date as yyyy,mm,dd")
InitialDate = datetime.datetime.strptime(InitialDate, "%Y,%m,%d")
FinalDate = datetime.datetime.strptime(FinalDate, "%Y,%m,%d")
age = (FinalDate - InitialDate).days
print(age)
```
**Output:**
```
10
```
|
```
from datetime import datetime

InitialDate = input("Enter the begin date as yyyy/mm/dd: ")
FinalDate = input("Enter the end date as yyyy/mm/dd: ")
InitialDate = datetime.strptime(InitialDate, '%Y/%m/%d')
FinalDate = datetime.strptime(FinalDate, '%Y/%m/%d')
difference = FinalDate - InitialDate
print(difference.days)
```
| 5,150
|
47,794,007
|
So `time.sleep` isn't working. I am on Python 3.4.3 and I have not had this problem on my computer with 3.6.
This is my code:
```
import calendar

def ProfileCreation():
    Name = input("Name: ")
    print("LOADING...")
    time.sleep(1)
    Age = input("Age: ")
    print("LOADING...")
    time.sleep(1)
    ProfileAns = input("This matches no profiles. Create profile? ")
    if ProfileAns.lower == 'yes':
        print("Creating profile...")
    elif ProfileAns.lower == 'no':
        print("Creating profile anyway...")
    else:
        print("yes or no answer.")
        ProfileCreation()

ProfileCreation()
```
|
2017/12/13
|
[
"https://Stackoverflow.com/questions/47794007",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9035852/"
] |
You might want to `import time` as at the moment `time.sleep(1)` isn't actually defined, just add the import to the top of your code and this should fix it.
|
Change it to following:
```
import calendar
import time

def ProfileCreation():
    Name = input("Name: ")
    print("LOADING...")
    time.sleep(1)
    Age = input("Age: ")
    print("LOADING...")
    time.sleep(1)
    ProfileAns = input("This matches no profiles. Create profile? ")
    if ProfileAns.lower() == 'yes':  # note: call lower(), don't compare the method itself
        print("Creating profile...")
    elif ProfileAns.lower() == 'no':
        print("Creating profile anyway...")
    else:
        print("yes or no answer.")
        ProfileCreation()

ProfileCreation()
```
| 5,152
|
44,707,384
|
For my evaluation, I wanted to run a rolling 1000 window `OLS regression estimation` of the dataset found in this URL:
<https://drive.google.com/open?id=0B2Iv8dfU4fTUa3dPYW5tejA0bzg>
using the following `Python` script.
```
# /usr/bin/python -tt
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from statsmodels.formula.api import ols
df = pd.read_csv('estimated.csv', names=('x','y'))
model = pd.stats.ols.MovingOLS(y=df.Y, x=df[['y']],
                               window_type='rolling', window=1000, intercept=True)
df['Y_hat'] = model.y_predict
```
However, when I run my Python script, I am getting this error: `AttributeError: module 'pandas.stats' has no attribute 'ols'`. Could this error be from the version that I am using? The `pandas` installed on my Linux node is version `0.20.2`.
|
2017/06/22
|
[
"https://Stackoverflow.com/questions/44707384",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1731796/"
] |
`pd.stats.ols.MovingOLS` was removed in Pandas version 0.20.0
<http://pandas-docs.github.io/pandas-docs-travis/whatsnew.html#whatsnew-0200-prior-deprecations>
<https://github.com/pandas-dev/pandas/pull/11898>
I can't find an 'off the shelf' solution for what should be such an obvious use case as rolling regressions.
The following should do the trick without investing too much time in a more elegant solution. It uses numpy to calculate the predicted value of the regression based on the regression parameters and the X values in the rolling window.
```
window = 1000
a = np.array([np.nan] * len(df))
b = [np.nan] * len(df)  # If betas required.
y_ = df.y.values
x_ = df[['x']].assign(constant=1).values
for n in range(window, len(df)):
    y = y_[(n - window):n]
    X = x_[(n - window):n]
    # betas = Inverse(X'.X).X'.y
    betas = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y)
    y_hat = betas.dot(x_[n, :])
    a[n] = y_hat
    b[n] = betas.tolist()  # If betas required.
```
The code above is equivalent to the following and about 35% faster:
```
model = pd.stats.ols.MovingOLS(y=df.y, x=df.x, window_type='rolling', window=1000, intercept=True)
y_pandas = model.y_predict
```
|
It was [deprecated](https://github.com/pandas-dev/pandas/pull/11898) in favor of statsmodels.
See here [examples how to use statsmodels rolling regression](https://www.statsmodels.org/dev/examples/notebooks/generated/rolling_ls.html).
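For reference, a minimal sketch of that statsmodels route (assuming statsmodels >= 0.11, where `RollingOLS` lives in `statsmodels.regression.rolling`; the file and column names follow the question's `df`):
```
import pandas as pd
import statsmodels.api as sm
from statsmodels.regression.rolling import RollingOLS

df = pd.read_csv('estimated.csv', names=('x', 'y'))  # as in the question
X = sm.add_constant(df[['x']])
res = RollingOLS(df['y'], X, window=1000).fit()
# per-row coefficients, then row-wise fitted values (NaN until the first full window)
y_hat = (res.params * X).sum(axis=1, min_count=1)
```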
| 5,154
|
60,008,614
|
I have two dictionaries:
```
members_singles = {'member3': ['PCP3'], 'member4': ['PCP1'], 'member11': ['PCP2'], 'member12':
['PCP3'], 'member14': ['PCP4'], 'member15': ['PCP4'], 'member16': ['PCP4'], 'members17': ['PCP3']}
providers = {
"PCP1" : 3,
"PCP2" : 4,
"PCP3" : 1,
"PCP4" : 2,
"PCP5" : 4,
}
```
I want to iterate through `members_singles` and, each time a particular value occurs, count down one from the matching count in `providers`:
```
to_remove_zero = []
pcps_in_negative = []
for member, provider_list in members_singles.items():
    provider = provider_list[0]
    if provider in providers:
        providers[provider] -= 1
        if providers[provider] == 0:
            to_remove_zero.append(provider)
        elif providers[provider] < 0:
            pcps_in_negative.append(provider)
        else:
            pass
```
I want to save the providers that end up with an even zero count to one list, and the providers that go negative to another list. But my results show `to_remove_zero` contains `['PCP3', 'PCP4']` even though they should be in the negative list. So the loop is popping them out before they get to the second condition. Do simple counters like this in Python stop at zero when counting down, or am I missing something?
|
2020/01/31
|
[
"https://Stackoverflow.com/questions/60008614",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11069614/"
] |
When the `providers` count reaches zero, your code adds the provider to the `to_remove_zero` list. But the count may go negative later on, which will add the same provider to the `pcps_in_negative` list. At that point, your code needs to back-track and remove it from the `to_remove_zero` list:
```
providers[provider] -= 1
if providers[provider] == 0:
    to_remove_zero.append(provider)
elif providers[provider] < 0:
    pcps_in_negative.append(provider)
    # back-track
    if provider in to_remove_zero:
        to_remove_zero.remove(provider)
```
This could be made a little neater if you used sets:
```
to_remove_zero = set()
pcps_in_negative = set()
for member, provider_list in members_singles.items():
    provider = provider_list[0]
    if provider in providers:
        providers[provider] -= 1
        if providers[provider] == 0:
            to_remove_zero.add(provider)
        elif providers[provider] < 0:
            pcps_in_negative.add(provider)
            to_remove_zero.discard(provider)
```
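(For the sample data in the question, a quick check of what this leaves behind; both `PCP3` and `PCP4` hit zero and then go negative, so they end up only in the negative set:)
```
print(to_remove_zero)    # set()
print(pcps_in_negative)  # {'PCP3', 'PCP4'} (set print order may vary)
```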
|
This is probably because you are altering the list, while expecting the indices to remain consistent:
```
a = [1,2,3]
a[0]
# 1
a.pop()
# 3
a[0]
# 1
```
| 5,155
|
55,501,746
|
**Below is the problem I am running into:**
*Linear Regression - Given 16 pairs of prices (as dependent variable) and
corresponding demands (as independent variable), use the linear regression tool to estimate the best fitting
linear line.*
```
Price Demand
127 3420
134 3400
136 3250
139 3410
140 3190
141 3250
148 2860
149 2830
151 3160
154 2820
155 2780
157 2900
159 2810
167 2580
168 2520
171 2430
```
**Here is my code:**
```
from pylab import *
from numpy import *
from scipy.stats import *
x = [3420, 3400, 3250, 3410, 3190, 3250, 2860, 2830, 3160, 2820, 2780, 2900, 2810, 2580, 2520, 2430]
np.asarray(x,dtype= np.float64)
y = [127, 134, 136 ,139, 140, 141, 148, 149, 151, 154, 155, 157, 159, 167, 168, 171]
np.asarray(y, dtype= np.float64)
slope,intercept,r_value,p_value,slope_std_error = stats.linregress(x,y)
y_modeled = x*slope+intercept
plot(x,y,'ob',markersize=2)
plot(x,y_modeled,'-r',linewidth=1)
show()
```
**Here is the error I get:**
```
Traceback (most recent call last):
File "<ipython-input-48-0a0274c24b19>", line 13, in <module>
y_modeled = x*slope+intercept
TypeError: can't multiply sequence by non-int of type 'numpy.float64'
```
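(For context: `np.asarray` returns a new array instead of converting in place, so `x` is still a plain list when it is multiplied by the float `slope`, which is exactly what the error says. A minimal sketch of the fix, reusing the question's `x` and `y`:)
```
import numpy as np
from scipy import stats

x = np.asarray(x, dtype=np.float64)  # assign the converted array back
y = np.asarray(y, dtype=np.float64)
slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)
y_modeled = x * slope + intercept    # element-wise now, no TypeError
```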
|
2019/04/03
|
[
"https://Stackoverflow.com/questions/55501746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11307363/"
] |
I would rewrite this to be simpler and remove unnecessary looping:
```
$filenameOut = "out.html"
#get current working dir
$cwd = Get-ScriptDirectory #(Get-Location).path #PSScriptRoot #(Get-Item -Path ".").FullName
$filenamePathOut = Join-Path $cwd $filenameOut
$InitialAppointmentGenArr = Get-ChildItem -Path $temp
foreach($file in $InitialAppointmentGenArr)
{
    $fileWithoutExtension = [io.path]::GetFileNameWithoutExtension($file)
    $temp = '<li><a href="' + ($file.FullName -replace "\\",'/') + '" target="_app">' + $fileWithoutExtension + '</a></li>'
    Add-Content -Path $filenamePathOut -Value $temp
}
```
|
Well, let's start with what you are attempting to do, and why it isn't working. If you look at the file object for any of those files (`$file|get-member`), you see that the `FullName` property only has a `get` method, no `set` method, so you can't change that property. So you are never going to change that property without renaming the source file and getting the file info again.
Knowing that, if you want to capture the path with the replaced slashes you will need to capture the output of the replace in a variable. You can then use that to build your string.
```
$filenameOut = "out.html"
#get current working dir
$cwd = Get-ScriptDirectory #(Get-Location).path #PSScriptRoot #(Get-Item -Path ".").FullName
$filenamePathOut = Join-Path $cwd $filenameOut
$InitialAppointmentGenArr = Get-ChildItem -Path $temp
foreach($file in $InitialAppointmentGenArr)
{
    $filePath = $file.FullName -replace "\\", "/"
    '<li><a href="' + $filePath + '" target="_app">' + $file.BaseName + '</a></li>' | Add-Content -Path $filenamePathOut
}
```
| 5,156
|
31,096,151
|
I am parsing streaming hex data with python regex. I have the following packet structure that I am trying to extract from the stream of packets:
```
'\xaa\x01\xFF\x44'
```
* \xaa - start of packet
* \x01 - data length [value can vary from 00-FF]
* \xFF - data
* \x44 - end of packet
I want to use a Python regex to indicate how much of the data portion of the packet to match, as such:
```
r = re.compile('\xaa(?P<length>[\x00-\xFF]{1})(.*){?P<length>}\x44')
```
This compiles without errors, but it doesn't work. I suspect it doesn't work because the regex engine cannot convert the `<length>` named group's hex value to an appropriate integer for use inside the regex `{}` expression. Is there a method by which this can be accomplished in Python without resorting to disseminating the match groups?
Background: I have been using erlang for packet unpacking and I was looking for something similar in python
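(For context, a minimal non-regex sketch of the same framing logic using plain slicing on Python 3 `bytes`; the function and buffer names are assumptions:)
```
def parse_packet(buf):
    # expects b'\xaa' + length byte + <length> data bytes + b'\x44'
    if len(buf) < 3 or buf[0:1] != b'\xaa':
        return None
    length = buf[1]                       # data length from the header byte
    end = 2 + length
    if len(buf) < end + 1 or buf[end:end + 1] != b'\x44':
        return None
    return buf[2:end]                     # the data portion

print(parse_packet(b'\xaa\x01\xff\x44'))  # b'\xff'
```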
|
2015/06/28
|
[
"https://Stackoverflow.com/questions/31096151",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2163865/"
] |
I ended up doing something as follows:
```
self.packet_regex = \
    re.compile('(\xaa)([\x04-\xFF]{1})([\x00-\xFF]{1})([\x10-\xFF]{1})([\x00-\xFF]*)([\x00-\xFF]{1})(\x44)')
match = self.packet_regex.search(self.buffer)
if match and match.groups():
    groups = match.groups()
    if (ord(groups[1]) - 4) == len(groups[4]) + len(groups[5]) + len(groups[6]):
        ...
```
|
This is pretty much a workaround for what you have asked. Just have a look at it:
```
import re
orig_str = '\xaa\x01\xFF\x44'
print orig_str
#converting original hex data into its representation form
st = repr(orig_str)
print st
#getting the representation form of regex and removing leading and trailing single quotes
reg = re.compile(repr("(\\xaa)")[1:-1])
p = reg.search(st)
#creating the representation from matched string by adding leading and trailing single quotes
extracted_repr = "\'"+p.group(1)+"\'"
print extracted_repr
#evaluating the matched string to get the original hex information
extracted_str = eval(extracted_repr)
print extracted_str
>>>
��D
'\xaa\x01\xffD'
'\xaa'
�
```
| 5,157
|
50,841,542
|
I am a little bit familiar with `np.fromregex`. I read the tutorials and tried to use it to read a data file.
When the file is read using a simple Python list comprehension, it gives the desired result:
`[400, 401, 405, 408, 412, 414, 420, 423, 433]`.
But `np.fromregex` gives the answer in another format:
`[(400,) (401,) (405,) (408,) (412,) (414,) (420,) (423,) (433,)]`.
How can the code be changed so that the answer from `np.fromregex` becomes the same as from the simple Python loop?
Thanks.
P.S. I know this is a simple question, but it took me a lot of time to find
the solution, and it might be beneficial to others too and save some time.
Related links:
<https://docs.scipy.org/doc/numpy/reference/generated/numpy.fromregex.html>
[np.fromregex with string as dtype](https://stackoverflow.com/questions/33014828/np-fromregex-with-string-as-dtype)
```
from __future__ import print_function, division, with_statement, unicode_literals
import numpy as np
import re
data = """
DMStack failed for: lsst_z1.0_400.fits
DMStack failed for: lsst_z1.0_401.fits
DMStack failed for: lsst_z1.0_405.fits
DMStack failed for: lsst_z1.0_408.fits
DMStack failed for: lsst_z1.0_412.fits
DMStack failed for: lsst_z1.0_414.fits
DMStack failed for: lsst_z1.0_420.fits
DMStack failed for: lsst_z1.0_423.fits
DMStack failed for: lsst_z1.0_433.fits
"""
ifile = 'a.txt'
with open(ifile, 'w') as fo:
    fo.write(data.lstrip())
# regex
regexp = r".*_(\d+?).fits"
# This works fine
ans = [int(re.findall(regexp, line)[0]) for line in open(ifile)]
print(ans)
# using fromregex
dt = [('num', np.int32)]
x = np.fromregex(ifile, regexp, dt)
print(x)
```
**Update**
The above code failed when I used the future imports. The error log is given below:
```
Traceback (most recent call last):
File "a.py", line 31, in <module>
x = np.fromregex(ifile, regexp, dt)
File "/Users/poudel/miniconda2/lib/python2.7/site-packages/numpy/lib/npyio.py", line 1452, in fromregex
dtype = np.dtype(dtype)
TypeError: data type not understood
$ which python
python is /Users/poudel/miniconda2/bin/python
$ python -c "import numpy; print(numpy.__version__)"
1.14.0
```
|
2018/06/13
|
[
"https://Stackoverflow.com/questions/50841542",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Just choose the group and you'll get what you want:
```
dt = [('num', np.int32)]
x = np.fromregex(ifile, regexp, dt)
print(x['num'])
#[400 401 405 408 412 414 420 423 433]
```
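(On the **Update**: with `unicode_literals`, the field name `'num'` becomes a `unicode` object, and NumPy 1.14 on Python 2 rejects unicode field names in a dtype spec, hence *data type not understood*. A minimal sketch of a workaround, under that reading:)
```
dt = [(str('num'), np.int32)]  # force a byte-string field name on Python 2
x = np.fromregex(ifile, regexp, dt)
```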
|
```
import numpy as np
import cStringIO
import re
data = """
DMStack failed for: lsst_z1.0_400.fits
DMStack failed for: lsst_z1.0_401.fits
DMStack failed for: lsst_z1.0_405.fits
DMStack failed for: lsst_z1.0_408.fits
DMStack failed for: lsst_z1.0_412.fits
DMStack failed for: lsst_z1.0_414.fits
DMStack failed for: lsst_z1.0_420.fits
DMStack failed for: lsst_z1.0_423.fits
DMStack failed for: lsst_z1.0_433.fits
"""
# ifile = cStringIO.StringIO()
# ifile.write(data)
ifile = 'a.txt'
with open(ifile, 'w') as fo:
    fo.write(data.lstrip())
# regex
regexp = r".*_(\d+?).fits"
# This works fine
ans = [int(re.findall(regexp, line)[0]) for line in open(ifile)]
print(ans)
# using fromregex
dt = [('num', np.int32)]
x = np.fromregex(ifile, regexp, dt)
y=[]
for i in x:
    y = y + [i[0]]
print y
"""
[400, 401, 405, 408, 412, 414, 420, 423, 433]
[400, 401, 405, 408, 412, 414, 420, 423, 433]
"""
```
I am not aware of a way to do this without a loop.
| 5,158
|
63,814,809
|
Hi, I need to make a project for school where people can insert a review, but before the review is posted on Twitter it is placed in a database. Before it is posted to Twitter, a moderator needs to check every review to see that there are no swear words etc. I wanted to write a small piece of code in Python where the moderator can choose which review he wants to see.
This is what I have written in Python:
```
show_review = str(input("which reviews do you want to check: "))
```
and I want Python to search for the result of that question in the list
```
reviews_of_today = [review1, review2, review3, review4, review5, review6, review7, review8, review9, review10]
```
What code do I need to use or write to achieve this?
```
review1 = ("Reizen ging soepel")
review2 = ("Het reizen was erg tof")
review3 = ("Kanker NS")
review4 = ("Het ging simpel")
review5 = ("Goede regels voor corona")
review6 = ("Trein kwam eindelijk een keer optijd")
review7 = ("NS komt altijd telaat! Tyfus zooi!")
review8 = ("Kut NS weer vertraging")
review9 = ("Volgende keer neem ik de taxi, sjonge jonge jonge altijd weer het zelfde probleem met NS")
review10 = ("NS Altijd goede ervaring mee gehad")
reviews_of_today = [review1, review2, review3, review4, review5, review6, review7, review8, review9, review10]
#reviews_of_today_no_duplicates = list(dict.fromkeys(reviews_of_today))
# result = []
show_review = str(input("which reviews do you want to check: "))
moderator_want_to_see_review = (show_review)
if moderator_want_to_see_review in show_review:
    print()
```
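(For context, a minimal sketch of the kind of lookup the post seems to be after, reusing the names above; matching by substring is an assumption:)
```
show_review = input("which reviews do you want to check: ")
matches = [r for r in reviews_of_today if show_review.lower() in r.lower()]
for review in matches:
    print(review)
```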
|
2020/09/09
|
[
"https://Stackoverflow.com/questions/63814809",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Try this code. I am not clear on what you are trying to do in
`output_table()` data frame.
```
library(shiny)
library(shinyWidgets)
# ui object
ui <- fluidPage(
titlePanel(p("Spatial app", style = "color:#3474A7")),
sidebarLayout(
sidebarPanel(
uiOutput("inputp1"),
numericInput("num", label = ("value"), value = 1),
#Add the output for new pickers
uiOutput("pickers"),
actionButton("button", "Update")
),
mainPanel(
DTOutput("table")
)
)
)
# server()
server <- function(input, output, session) {
DF1 <- reactiveValues(data=NULL)
dt <- reactive({
name<-c("John","Jack","Bill")
value1<-c(2,4,6)
dt<-data.frame(name,value1)
})
observe({
DF1$data <- dt()
})
output$inputp1 <- renderUI({
pickerInput(
inputId = "p1",
label = "Select Column headers",
choices = colnames( dt()),
multiple = TRUE,
options = list(`actions-box` = TRUE)
)
})
observeEvent(input$p1, {
#Create the new pickers
output$pickers<-renderUI({
dt1 <- DF1$data
div(lapply(input$p1, function(x){
if (is.numeric(dt1[[x]])) {
sliderInput(inputId=x, label=x, min=min(dt1[[x]]), max=max(dt1[[x]]), value=c(min(dt1[[x]]),max(dt1[[x]])))
}else { # if (is.factor(dt1[[x]])) {
selectInput(
inputId = x, # The col name of selected column
label = x, # The col label of selected column
choices = dt1[,x], # all rows of selected column
multiple = TRUE
)
}
}))
})
})
dt2 <- eventReactive(input$button, {
req(input$num)
dt <- DF1$data ## here you can provide the user input data read inside this observeEvent or recently modified data DF1$data
dt$value1<-dt$value1*isolate(input$num)
dt
})
observe({DF1$data <- dt2()})
output_table <- reactive({
req(input$p1, sapply(input$p1, function(x) input[[x]]))
dt_part <- dt2()
for (colname in input$p1) {
if (is.factor(dt_part[[colname]]) && !is.null(input[[colname]])) {
dt_part <- subset(dt_part, dt_part[[colname]] %in% input[[colname]])
} else {
if (!is.null(input[[colname]][[1]])) {
dt_part <- subset(dt_part, (dt_part[[colname]] >= input[[colname]][[1]]) & dt_part[[colname]] <= input[[colname]][[2]])
}
}
}
dt_part
})
output$table<-renderDT({
output_table()
})
}
# shinyApp()
shinyApp(ui = ui, server = server)
```
[](https://i.stack.imgur.com/nRoks.png)
|
First of all, you'd need to include a `req` in your `reactive()`, since `input$num` is not available at the initialization of your example:
```r
dt <- reactive({
    input$button
    req(input$num)
    name<-c("John","Jack","Bill")
    value1<-c(2,4,6)
    dt<-data.frame(name,value1)
    dt$value1<-dt$value1*isolate(input$num)
    dt
})
```
| 5,160
|
57,264,952
|
Very new to Python and hoping I can get some help here. I'm trying to sum up the totals from different lists in a dictionary:
```
{'1': [0, 2, 2, 0, 0], '2': [0, 1, 1, 0, 0], '3': [2, 4, 2, 0, 2]}
```
I have been trying to find a way to sum up the totals as follows and append them to a new list:
```
'1': [0, 2, 2, 0, 0]
'2': [0, 1, 1, 0, 0]
'3': [2, 4, 2, 0, 2]
0+0+2 = 2
2+1+4 = 7
2+1+2 = 5
[2, 7, 5, 0, 2]
```
I was able to get partial results for one row this way, but wasn't able to figure out a way to get the output I want.
```
total_all = list()
for x, result_total in result_all.items():
    new_total = (result_total[1])
    total_all.append((new_total))
print(sum(total_all))
output 7
```
Any suggestions and help would be greatly appreciated
|
2019/07/30
|
[
"https://Stackoverflow.com/questions/57264952",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7280296/"
] |
Using `zip()` ([doc](https://docs.python.org/3/library/functions.html#zip)) function to transpose dictionary values and `sum()` to sum them inside list comprehension:
```
d = {'1': [0, 2, 2, 0, 0], '2': [0, 1, 1, 0, 0], '3': [2, 4, 2, 0, 2]}
out = [sum(i) for i in zip(*d.values())]
print(out)
```
Prints:
```
[2, 7, 5, 0, 2]
```
EDIT (little explanation):
The star-expression `*` inside `zip()` effectively unpacks dict values into this:
```
out = [sum(i) for i in zip([0, 2, 2, 0, 0], [0, 1, 1, 0, 0], [2, 4, 2, 0, 2])]
```
`zip()` iterates over each of its arguments:
```
1. iteration -> (0, 0, 2)
2. iteration -> (2, 1, 4)
...
```
`sum()` then sums each of these tuples:
```
1. iteration -> sum( (0, 0, 2) ) -> 2
2. iteration -> sum( (2, 1, 4) ) -> 7
...
```
|
You can do it like this:
```
In [6]: list(map(sum,zip(*d.values())))
Out[6]: [2, 7, 5, 0, 2]
```
| 5,161
|
43,742,143
|
I have a text file that contains data in json format
```
{"Header":
{
"name":"test"},
"params":{
"address":"myhouse"
}
}
```
I am trying to read it from a Python file and convert it to JSON format. I have tried both the yaml and json libraries, and with both libraries it converts to JSON, but it also converts the double quotes to single quotes.
Is there any way of parsing it into a json format, but keeping the double quotes?
I don't think using a replace call is a valid option, as it will also replace single quotes that are part of the data.
Thanks
|
2017/05/02
|
[
"https://Stackoverflow.com/questions/43742143",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5193545/"
] |
Do this:
```
import json

with open("file.json", "r") as f:
    obj = json.load(f)

with open("file.json", "w") as f:
    # pass ensure_ascii=False if your strings contain non-ASCII characters
    json.dump(obj, f, indent=4, ensure_ascii=False)
```
|
You can use `json.load` to load a json object from a file.
```
import json

with open("file.json", "r") as f:
    obj = json.load(f)
```
When printed, the resulting Python object delimits strings with `'` rather than `"`, but you can easily use a `replace` call at that point.
```
In [6]: obj
Out[6]: {'Header': {'name': 'test'}, 'params': {'address': 'myhouse'}}
```
EDIT:
Though I misunderstood the question at first, you can use `json.dumps` to write a string-encoding of the json object that uses double quotes as the standard requires.
```
In [10]: json.dumps(obj)
Out[10]: '{"params": {"address": "myhouse"}, "Header": {"na\'me": "test"}}'
```
It's unclear what you're trying to do though, as if you're trying to read the json into a Python object, it doesn't matter what string delimiters are used; if you're trying to read your valid json into a Python string, you can just read the file without any libraries.
| 5,162
|
58,709,973
|
* when running a Python test from within VS Code using CTRL+F5, I'm getting the error message
*ImportError: attempted relative import with no known parent package*
[](https://i.stack.imgur.com/jbTGS.png)
* when running the Python test from the VS Code terminal using the command line
> python test_HelloWorld.py
I'm getting the error message
*ValueError: attempted relative import beyond top-level package*
[](https://i.stack.imgur.com/xpjn3.png)
Here is the project structure
[](https://i.stack.imgur.com/Jxrkt.png)
How to solve the subject issue(s) with minimal (code/project structure) change efforts?
TIA!
**[Update]**
I have got the following solution using sys.path correction:
[](https://i.stack.imgur.com/udG1c.png)
```
import sys
from pathlib import Path
sys.path[0] = str(Path(sys.path[0]).parent)
```
but I guess there still could be a more effective solution without source code corrections by using some (VS Code) settings or Python running context/environment settings (files)?
|
2019/11/05
|
[
"https://Stackoverflow.com/questions/58709973",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/93277/"
] |
Do not use a relative import.
Simply change it to:
```
from solutions import helloWorldPackage as hw
```
**Update**
I initially tested this in PyCharm. PyCharm has a nice feature - it adds content root and source roots to PYTHONPATH (both options are configurable).
You can achieve the same effect in VS Code by adding a `.env` file:
```
PYTHONPATH=.:${PYTHONPATH}
```
Now, the project directory will be in the PYTHONPATH for every tool that is launched via VS Code. Now Ctrl+F5 works fine.
|
> Set up a main module and its source package paths
> =================================================
Solution found at:
* [https://k0nze.dev/posts/python-relative-imports-vscode/#:~:text=create%20a%20settings.json%20within%20.vscode](https://k0nze.dev/posts/python-relative-imports-vscode/#:%7E:text=create%20a%20settings.json%20within%20.vscode)
* [https://k0nze.dev/posts/python-relative-imports-vscode/#:~:text=Inside%20the-,launch.json,-you%20have%20to](https://k0nze.dev/posts/python-relative-imports-vscode/#:%7E:text=Inside%20the-,launch.json,-you%20have%20to)
Which also provide a neat in-depth [video](https://www.youtube.com/watch?v=Ad-inC3mJfU) explanation
---
The solution to the `attempted relative import with no known parent package` issue, which is especially tricky in VS Code (in contrast to PyCharm, which provides GUI tools to flag folders as packages), is to:
### Add configuration files for the VScode debugger
> That is, add `launch.json` as a `Module` (this will always execute the file given in the "module" key) and `settings.json` inside the [`MyProjectRoot/.vscode`](https://i.stack.imgur.com/DKJGV.png) folder (manually add them if they're not there yet, or be guided by the VS Code GUI for [`Run & Debug`](https://i.stack.imgur.com/W9bHy.png)).
### `launch.json` setup
> That is, add an `"env"` key to `launch.json` containing an object with `"PYTHONPATH"` as key and `"${workspaceFolder}/mysourcepackage"` as value.
>
> [final launch.json configuration](https://i.stack.imgur.com/k5aOb.png)
### `settings.json` setup
> That is, add a `"python.analysis.extraPaths"` key to `settings.json` containing a list of paths for the debugger to be aware of, which in our case is just `["${workspaceFolder}/mysourcepackage"]` **(note that we put the string in a list only for the case in which we want to include other paths too; it is not needed for this specific example, but it is the de facto standard as far as I know)**.
>
> [final settings.json configuration](https://i.stack.imgur.com/T7G8C.png)
This should be everything needed for the script to work both when called with `python` from the terminal and from the VS Code debugger.
| 5,163
|
62,494,807
|
I have the following python script
```
from bs4 import BeautifulSoup
import requests

home_dict = []

for year in range(2005, 2021):
    if year == 2020:
        for month in range(1, 6):
            url = 'https://www.rebgv.org/market-watch/MLS-HPI-home-price-comparison.hpi.all.all.' + str(year) + '-' + str(month) + '-1.html';
            r = requests.get(url)
            soup = BeautifulSoup(r.text, 'html.parser')
            home_table = soup.find('div', class_="table-wrapper")
            for home in home_table.find_all('tbody'):
                rows = home.find_all('tr')
                for row in rows:
                    area = row.find('td').text;
                    benchmark = row.find_all('td')[1].text
                    priceIndex = row.find_all('td')[2].text
                    oneMonthChange = row.find_all('td')[3].text
                    sixMonthChange = row.find_all('td')[4].text
                    oneYearChange = row.find_all('td')[5].text
                    threeYearChange = row.find_all('td')[6].text
                    fiveYearChange = row.find_all('td')[7].text
                    propertyType = row.find_all('td')[8].text
                    year = year;
                    month = month;
                    home_obj = {
                        "Area": area,
                        "Benchmark": benchmark,
                        "Price Index": priceIndex,
                        "1 Month +/-": oneMonthChange,
                        "6 Month +/-": sixMonthChange,
                        "1 Year +/-": oneYearChange,
                        "3 Year +/-": threeYearChange,
                        "5 Year +/-": fiveYearChange,
                        "Property Type": propertyType,
                        "Report Month": month,
                        "Report Year": year
                    }
                    home_dict.append(home_obj)
    else:
        for month in range(1, 13):
            url = 'https://www.rebgv.org/market-watch/MLS-HPI-home-price-comparison.hpi.all.all.' + str(year) + '-' + str(month) + '-1.html';
            r = requests.get(url)
            soup = BeautifulSoup(r.text, 'html.parser')
            home_table = soup.find('div', class_="table-wrapper")
            for home in home_table.find_all('tbody'):
                rows = home.find_all('tr')
                for row in rows:
                    area = row.find('td').text;
                    benchmark = row.find_all('td')[1].text
                    priceIndex = row.find_all('td')[2].text
                    oneMonthChange = row.find_all('td')[3].text
                    sixMonthChange = row.find_all('td')[4].text
                    oneYearChange = row.find_all('td')[5].text
                    threeYearChange = row.find_all('td')[6].text
                    fiveYearChange = row.find_all('td')[7].text
                    propertyType = row.find_all('td')[8].text
                    year = year;
                    month = month;
                    home_obj = {
                        "Area": area,
                        "Benchmark": benchmark,
                        "Price Index": priceIndex,
                        "1 Month +/-": oneMonthChange,
                        "6 Month +/-": sixMonthChange,
                        "1 Year +/-": oneYearChange,
                        "3 Year +/-": threeYearChange,
                        "5 Year +/-": fiveYearChange,
                        "Property Type": propertyType,
                        "Report Month": month,
                        "Report Year": year
                    }
                    home_dict.append(home_obj)

print(home_dict)
```
This script is web scraping a website. If the year is 2020, it only scrapes from January to May. For other years, it goes from Jan to Dec.
You can tell that the body of the script is repeated inside that if-else conditional statement, is there an easier way to write this to make it look cleaner and not repeat itself?
|
2020/06/21
|
[
"https://Stackoverflow.com/questions/62494807",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11303284/"
] |
Perhaps try a `try` clause?
```
for year in range(2005, 2021):
    for month in range(1, 13):
        try:
            <your code>
        except:
            continue
```
|
Scraping months 1 to 5 is common to all the years, so you can **scrape** those months first, and then, if the year is not equal to 2020, **scrape** the rest of the months.
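(A minimal sketch of that deduplication, with the repeated body folded into a hypothetical `scrape(year, month)` helper:)
```
def scrape(year, month):
    ...  # the shared request/parse/append body from the question

for year in range(2005, 2021):
    last_month = 5 if year == 2020 else 12
    for month in range(1, last_month + 1):
        scrape(year, month)
```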
| 5,171
|
67,790,430
|
I have a big text file that has around 200K lines/records.
I need to extract only the specific lines that start with `CLM`. For example, if the file has 100K lines that start with `CLM`, I should print all those 100K lines alone.
Can anyone help me achieve this using a Python script?
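(A minimal sketch of this kind of filter; the file name is an assumption:)
```
with open("records.txt") as src:
    for line in src:
        if line.startswith("CLM"):
            print(line, end="")
```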
|
2021/06/01
|
[
"https://Stackoverflow.com/questions/67790430",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15924358/"
] |
this would work:
```
df2[, colnames(df2) %in% colnames(df1)]
x3 x4 x7 x10 x12
1 IL_NA1A_P IL_NA3D_P PROD009_P PROD014_P PROD023A_P
```
You simply check which column-names of `df2` appear also in `df1` and select these columns from `df2`.
|
If all of `df1` column names are present in `df2` you can use -
```
df2[names(df1)]
# x3 x4 x7 x10 x12
#1 IL_NA1A_P IL_NA3D_P PROD009_P PROD014_P PROD023A_P
```
If only few of `df1` column names are present in `df2` you can either use -
```
df2[intersect(names(df2), names(df1))]
```
Or in `dplyr`.
```
library(dplyr)
df2 %>% select(any_of(names(df1)))
```
| 5,174
|
6,652,492
|
I have a Java project that utilizes Jython to interface with a Python module. With my configuration, the program runs fine, however, when I export the project to a JAR file, I get the following error:
```
Jar export finished with problems. See details for additional information.
Fat Jar Export: Could not find class-path entry for 'C:Projects/this_project/src/com/company/python/'
```
When browsing through the generated JAR file with an archive manager, the python module is in fact inside of the JAR, but when I check the manifest, only "." is in the classpath. I can overlook this issue by manually dropping the module into the JAR file after creation, but since the main point of this project is automation, I'd rather be able to configure Eclipse to generate properly configured JAR automatically. Any ideas?
**NOTE:** I obviously cannot run the program successfully when I do this, but removing the Python source folder from the classpath in "Run Configurations..." makes the error go away.
|
2011/07/11
|
[
"https://Stackoverflow.com/questions/6652492",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/584676/"
] |
This is not supported by Windows Installer. Elevation is usually handled by the application through its [manifest](http://msdn.microsoft.com/en-us/library/bb756929.aspx).
A solution is to create a wrapper (VBScript or EXE) which uses [ShellExecute](http://msdn.microsoft.com/en-us/library/bb762153%28VS.85%29.aspx) with **runas** verb to launch your application as an Administrator. Your shortcut can then point to this wrapper instead of the actual application.
|
Sorry for the confusion - I now understand what you are after.
There are indeed ways to set the shortcut flag but none that I know of straight in Visual Studio. I have found a number of functions written in C++ that set the SLDF\_RUNAS\_USER flag on a shortcut.
Some links to such functions include:
* <http://blogs.msdn.com/b/oldnewthing/archive/2007/12/19/6801084.aspx>
* <http://social.msdn.microsoft.com/Forums/en-US/windowssecurity/thread/a55aa70e-ae4d-4bf6-b179-2e3df3668989/>
Another interesting discussion on the same topic was carried out at NSIS forums, the thread may be of help. There is a function listed that can be built as well as mention of a registry location which stores such shortcut settings (this seems to be the easiest way to go, if it works) - I am unable to test the registry method at the moment, but can do a bit later to see if it works.
This thread can be found here: <http://forums.winamp.com/showthread.php?t=278764>
If you are quite keen to do this programatically, then maybe you could adapt one of the functions above to be run as a post-install task? This would set the flag of the shortcut after your install but this once again needs to be done on Non-Advertised shortcuts so the MSI would have to be fixed as I mentioned earlier.
I'll keep looking and test out the registry setting method to see if it works and report back.
Chada
| 5,177
|
38,644,397
|
I am trying to make a login system with Python and MySQL. I connected to the database, but when I try to insert values into a table, it fails. I'm not sure what's wrong. I am using Python 3.5 and the PyMySQL module.
```
#!python3
import pymysql, sys, time

try:
    print('Connecting.....')
    time.sleep(1.66)
    conn = pymysql.connect(user='root', passwd='root', host='127.0.0.1', port=3306, database='MySQL')
    print('Connection suceeded!')
except:
    print('Connection failed.')
    sys.exit('Error.')

cursor = conn.cursor()
sql = "INSERT INTO login(USER, PASS) VALUES('test', 'val')"

try:
    cursor.execute(sql)
    conn.commit()
except:
    conn.rollback()
    print('Operation failed.')

conn.close()
```
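(For context, a minimal sketch that surfaces the real error instead of swallowing it with a bare `except`, and uses PyMySQL's `%s` parameter placeholders; it reuses `conn`/`cursor` and the table from the question:)
```
try:
    cursor.execute("INSERT INTO login(USER, PASS) VALUES (%s, %s)", ("test", "val"))
    conn.commit()
except Exception as exc:
    conn.rollback()
    print('Operation failed:', exc)  # shows why the INSERT failed
```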
|
2016/07/28
|
[
"https://Stackoverflow.com/questions/38644397",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6544457/"
] |
It looks like the `get`/`getAs` methods mentioned in the example are just convenience wrappers for the `fetch` method. See <https://github.com/http4s/http4s/blob/a4b52b042338ab35d89d260e0bcb39ccec1f1947/client/src/main/scala/org/http4s/client/Client.scala#L116>
Use the `Request` constructor and pass `Method.POST` as the `method`.
```
fetch(Request(Method.POST, uri))
```
|
```
import org.http4s.circe._
import org.http4s.dsl._
import io.circe.generic.auto._

case class Name(name: String)

implicit val nameDecoder: EntityDecoder[Name] = jsonOf[Name]

def routes: PartialFunction[Request, Task[Response]] = {
  case req @ POST -> Root / "hello" =>
    req.decode[Name] { name =>
      Ok(s"Hello, ${name.name}")
    }
}
```
Hope this helps.
| 5,183
|
66,503,032
|
Is there any way to make the function below faster and more optimized with `pandas` or `numpy`? The function below keeps appending the elements of `Sequence` (`3, 7, 11`, cycling in order) to `seq45` until the sum of `seq45` is equivalent to or over 10000. The reason why I want to increase the speed is to be able to test 10000 with larger values like 1000000 and have it processed faster. The answer to this code was gotten from this issue: [issue](https://stackoverflow.com/questions/66487120/adding-through-an-array-with-np-where-function-numpy-python?noredirect=1#comment117539177_66487120)
Code:
```
import itertools
import numpy as np

Sequence = np.array([3, 7, 11])
seq45 = []
for n in itertools.cycle(Sequence):
    seq45.append(n)
    if sum(seq45) >= 10000:
        break
print(seq45)
```
Current performance / processing time: 71.9 ms
[](https://i.stack.imgur.com/d7NRt.png)
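(A sketch of a vectorized alternative, assuming the goal is exactly the list built above: tile enough full cycles to pass the target, then cut at the first position where the running total reaches it:)
```
import numpy as np

seq = np.array([3, 7, 11])
target = 10000
reps = target // seq.sum() + 1              # enough full cycles to reach the target
tiled = np.tile(seq, reps)
totals = np.cumsum(tiled)
stop = np.searchsorted(totals, target) + 1  # first index where the sum >= target
seq45 = tiled[:stop]
```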
|
2021/03/06
|
[
"https://Stackoverflow.com/questions/66503032",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15217016/"
] |
You can input numbers using the Scanner class, similar to the following code from W3Schools:
```
import java.util.Scanner; // Import the Scanner class
class Main {
    public static void main(String[] args) {
        Scanner myObj = new Scanner(System.in); // Create a Scanner object
        System.out.println("Enter username");
        String userName = myObj.nextLine(); // Read user input
        System.out.println("Username is: " + userName); // Output user input
    }
}
```
`low`, `high` and `x` can be of data type `int`.
To check which numbers between `low` and `high` are multiples of `x`, you can use a `for` loop. You can declare a new variable `count` that can be incremented using `count++` every time the `for` loop finds a multiple. The `%` operator could be useful to find multiples.
You can output using `System.out.println(" " + );`
**Edit:**
`%` operator requires 2 operands and gives the remainder. So if `i % x == 0`, it means `i` is a multiple of `x`, and we do `count++`.
The value of `i` will run through `low` to `high`.
```
for (i = low; i <= high; i++) {
    if (i % x == 0) {
        count++;
    }
}
```
|
Put `count++;` after the last print statement.
| 5,186
|
62,929,576
|
I would like to approximate bond yields in Python. But the question arose: which curve describes this better?
------------------------------------------------------------------------------------------------------------
```
import numpy as np
import matplotlib.pyplot as plt
x = [0.02, 0.22, 0.29, 0.38, 0.52, 0.55, 0.67, 0.68, 0.74, 0.83, 1.05, 1.06, 1.19, 1.26, 1.32, 1.37, 1.38, 1.46, 1.51, 1.61, 1.62, 1.66, 1.87, 1.93, 2.01, 2.09, 2.24, 2.26, 2.3, 2.33, 2.41, 2.44, 2.51, 2.53, 2.58, 2.64, 2.65, 2.76, 3.01, 3.17, 3.21, 3.24, 3.3, 3.42, 3.51, 3.67, 3.72, 3.74, 3.83, 3.84, 3.86, 3.95, 4.01, 4.02, 4.13, 4.28, 4.36, 4.4]
y = [3, 3.96, 4.21, 2.48, 4.77, 4.13, 4.74, 5.06, 4.73, 4.59, 4.79, 5.53, 6.14, 5.71, 5.96, 5.31, 5.38, 5.41, 4.79, 5.33, 5.86, 5.03, 5.35, 5.29, 7.41, 5.56, 5.48, 5.77, 5.52, 5.68, 5.76, 5.99, 5.61, 5.78, 5.79, 5.65, 5.57, 6.1, 5.87, 5.89, 5.75, 5.89, 6.1, 5.81, 6.05, 8.31, 5.84, 6.36, 5.21, 5.81, 7.88, 6.63, 6.39, 5.99, 5.86, 5.93, 6.29, 6.07]
a = np.polyfit(np.power(x,0.5), y, 1)
y1 = a[0]*np.power(x,0.5)+a[1]
b = np.polyfit(np.log(x), y, 1)
y2 = b[0]*np.log(x) + b[1]
c = np.polyfit(x, y, 2)
y3 = c[0] * np.power(x,2) + np.multiply(c[1], x) + c[2]
plt.plot(x, y, 'ro', lw = 3, color='black')
plt.plot(x, y1, 'g', lw = 3, color='red')
plt.plot(x, y2, 'g', lw = 3, color='green')
plt.plot(x, y3, 'g', lw = 3, color='blue')
plt.axis([0, 4.5, 2, 8])
plt.rcParams['figure.figsize'] = [10, 5]
```
The parabola also turns down at the end **(blue)**, the logarithmic curve goes to zero too quickly at the beginning **(green)**, and the square root has a strange hump **(red)**. Are there other ways of more accurate approximation, or is what I'm already getting pretty good?
[](https://i.stack.imgur.com/nvX7M.png)
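(One more option worth trying: fit a monotone saturating curve directly with `scipy.optimize.curve_fit` instead of a polynomial basis. The exponential form and starting values below are assumptions, not from the post; it reuses `x` and `y` from the snippet above:)
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

def saturating(t, a, b, c):
    # rises steeply near zero, then flattens: no hump and no turn-down
    return a - b * np.exp(-c * t)

params, _ = curve_fit(saturating, x, y, p0=(6.0, 3.0, 1.0))
xs = np.linspace(0.02, 4.4, 200)
plt.plot(xs, saturating(xs, *params), '-m', lw=3)
```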
|
2020/07/16
|
[
"https://Stackoverflow.com/questions/62929576",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12412154/"
] |
This may help you. I'm not sure, but this works for me.
### Signout Fuction
```dart
Future _signOut() async {
  try {
    return await auth.signOut();
  } catch (e) {
    print(e.toString());
    return null;
  }
}
```
### Function Usage
```dart
IconButton(
  onPressed: () async {
    await _auth.signOut();
    MaterialPageRoute(
      builder: (context) => Login(),
    );
  },
  icon: Icon(Icons.exit_to_app),
),
```
|
You need to have AuthWidgetBuilder as top-level widget (ideally above MaterialApp) so that the entire widget tree is rebuilt on sign-in / sign-out events.
You could make SplashScreen a child, and have some conditional logic to decide if you should present it.
By the way, if your splash screen doesn't contain any animations you don't need a widget at all, and you can use the Launch Screen on iOS or equivalent on Android (there's tutorials about this online).
By @bizz84
| 5,188
|
61,645,140
|
I am really struggling with this issue.
In the below, I take a domain like something.facebook.com and turn it into facebook.com using my UDF.
I get this error though:
```
UnicodeEncodeError: 'ascii' codec can't encode characters in position 64-65: ordinal not in range(128)
```
I've tried a few things to get around it but I really don't understand why it's causing a problem.
I would really appreciate any pointers :)
```
toplevel = ['.co.uk', '.co.nz', '.com', '.net', '.uk', '.org', '.ie', '.it', '.gov.uk', '.news', '.co.in',
'.io', '.tw', '.es', '.pe', '.ca', '.de', '.to', '.us', '.br', '.im', '.ws', '.gr', '.cc', '.cn', '.me', '.be',
'.tv', '.ru', '.cz', '.st', '.eu', '.fi', '.jp', '.ai', '.at', '.ch', '.ly', '.fr', '.nl', '.se', '.cat', '.com.au',
'.com.ar', '.com.mt', '.com.co', '.org.uk', '.com.mx', '.tech', '.life', '.mobi', '.info', '.ninja', '.today', '.earth', '.click']
def cleanup(domain):
    print(domain)
    if domain is None or domain == '':
        domain = 'empty'
        return domain
    for tld in toplevel:
        if tld in str(domain):
            splitdomain = domain.split('.')
            ext = tld.count('.')
            if ext == 1:
                cdomain = domain.split('.')[-2].encode('utf-8') + '.' + domain.split('.')[-1].encode('utf-8')
                return cdomain
            elif ext == 2:
                cdomain = domain.split('.')[-3].encode('utf-8') + '.' + domain.split('.')[-2].encode('utf-8') + '.' + domain.split('.')[-1].encode('utf-8')
                return cdomain
            elif domain == '':
                cdomain = 'empty'
                return cdomain
            else:
                return domain
'''
#IPFR DOMS
'''
output = ipfr_logs.withColumn('capital',udfdict(ipfr_logs.domain)).createOrReplaceTempView('ipfrdoms')
ipfr_table_output = spark_session.sql('insert overwrite table design.ipfr_tld partition(dt=' + yday_date + ') select dt, hour, vservername, loc, cast(capital as string), count(distinct(emsisdn)) as users, sum(bytesdl) as size from ipfrdoms group by dt, hour, vservername, loc, capital')
```
Here is the full trace
```
Traceback (most recent call last):
File "/data/keenek1/py_files/2020_scripts/web_ipfr_complete_new.py", line 177, in <module>
ipfr_table_output = spark_session.sql('insert overwrite table design.ipfr_tld partition(dt=' + yday_date + ') select dt, hour, vservername, loc, cast(capital as string), count(distinct(emsisdn)) as users, sum(bytesdl) as size from ipfrdoms group by dt, hour, vservername, loc, capital')
File "/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/sql/session.py", line 714, in sql
File "/usr/hdp/current/spark2-client/python/lib/py4j-0.10.6-src.zip/py4j/java_gateway.py", line 1160, in __call__
File "/usr/hdp/current/spark2-client/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
File "/usr/hdp/current/spark2-client/python/lib/py4j-0.10.6-src.zip/py4j/protocol.py", line 320, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o51.sql.
: org.apache.spark.SparkException: Job aborted.
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:224)
at org.apache.spark.sql.hive.execution.SaveAsHiveFile$class.saveAsHiveFile(SaveAsHiveFile.scala:87)
at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.saveAsHiveFile(InsertIntoHiveTable.scala:66)
at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.processInsert(InsertIntoHiveTable.scala:195)
at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.run(InsertIntoHiveTable.scala:99)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115)
at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3253)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3252)
at org.apache.spark.sql.Dataset.<init>(Dataset.scala:190)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:638)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 685 in stage 0.0 failed 4 times, most recent failure: Lost task 685.3 in stage 0.0 (TID 4945, uds-far-dn112.dab.02.net, executor 22): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/usr/hdp/current/spark2-client/python/pyspark/worker.py", line 229, in main
process()
File "/usr/hdp/current/spark2-client/python/pyspark/worker.py", line 224, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/usr/hdp/current/spark2-client/python/pyspark/worker.py", line 149, in <lambda>
func = lambda _, it: map(mapper, it)
File "<string>", line 1, in <lambda>
File "/usr/hdp/current/spark2-client/python/pyspark/worker.py", line 74, in <lambda>
return lambda *a: f(*a)
File "/data/keenek1/py_files/2020_scripts/web_ipfr_complete_new.py", line 147, in cleanup
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-3: ordinal not in range(128)
```
|
2020/05/06
|
[
"https://Stackoverflow.com/questions/61645140",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9629771/"
] |
This seems to solve it. Interestingly, the output is never 'unicode error'
```
def cleanup(domain):
    try:
        if domain is None or domain == '':
            domain = 'empty'
            return str(domain)
        for tld in toplevel:
            if tld in domain:
                splitdomain = domain.split('.')
                ext = tld.count('.')
                if ext == 1:
                    cdomain = domain.split('.')[-2] + '.' + domain.split('.')[-1]
                    return str(cdomain)
                elif ext == 2:
                    cdomain = domain.split('.')[-3] + '.' + domain.split('.')[-2] + '.' + domain.split('.')[-1]
                    return str(cdomain)
                elif domain == '':
                    cdomain = 'empty'
                    return str(cdomain)
                else:
                    return str(domain)
    except UnicodeEncodeError:
        domain = 'unicode error'
        return domain
```
|
Try this one.
```
def cleanup(domain):
    # Test the empty entry
    if domain is None or domain == '':
        domain = 'empty'
        return domain
    listSub = domain.split('.')
    result = listSub[1]
    # get the part we need: www.facebook.com -> facebook.com
    for part in listSub[2:]:
        result = result + '.' + part
    return result.encode('Utf-8')
```
| 5,189
|
13,365,876
|
I am using Python [tox](http://pypi.python.org/pypi/tox) to run Python unittests for several versions of Python, but these interpreters are not all available on all machines or platforms where I'm running tox.
How can I configure tox so it will run tests only for the Python interpreters that are available?
Example of `tox.ini`:
```
[tox]
envlist=py25,py27
[testenv]
...
[testenv:py25]
...
```
The big problem is that I want the list of Python environments to be auto-detected.
|
2012/11/13
|
[
"https://Stackoverflow.com/questions/13365876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/99834/"
] |
As of Tox version 1.7.2, you can pass the `--skip-missing-interpreters` flag to achieve this behavior. You can also set `skip_missing_interpreters=true` in your `tox.ini` file. More info [here](http://tox.readthedocs.org/en/latest/config.html#confval-skip_missing_interpreters=BOOL).
```
[tox]
envlist =
py24, py25, py26, py27, py30, py31, py32, py33, py34, jython, pypy, pypy3
skip_missing_interpreters =
true
```
|
tox will display an error if an interpreter cannot be found. The open question is whether there should be a "SKIPPED" state that makes tox return a "0" success result. This should probably be explicitly enabled via a command line option. If you agree, file an issue at <http://bitbucket.org/hpk42/tox>.
| 5,190
|
68,564,322
|
I have a 143k-word lowercase dictionary and I want to count the frequency of the first two letters
(i.e.: `aa* = 14, ab* = 534, ac* = 714` ... `za = 65,` ... `zz = 0`) and put it in a two-dimensional array.
However, I have no idea how to even go about iterating over them without switches or a bunch of if/elses. I tried looking on Google for a solution, but I could only find counting the amount of letters in the whole word, and mostly only things in Python.
I've sat here for a while thinking about how I could do this and my brain keeps blocking; this is what I came up with, but I really don't know where to head.
```
int main (void) {
char *line = NULL;
size_t len = 0;
ssize_t read;
char *arr[143091];
    FILE *fp = fopen("large", "r");
    if (fp == NULL)
{
return 1;
}
int i = 0;
while ((read = getline(&line, &len, fp)) != -1)
{
arr[i] = line;
i++;
}
char c1 = 'a';
char c2 = 'a';
i = 0;
int j = 0;
while (c1 <= 'z')
{
while (arr[k][0] == c1)
{
while (arr[k][1] == c2)
{
}
c2++;
}
c1++;
}
fclose(fp);
if (line)
free(line);
return 0;
}
```
Am I being an idiot or am I just missing something really basic? How can I go about this problem?
Edit: I forgot to mention that the dictionary is only lowercase and has some edge cases like just an `a` or an `e`, and some words have `'` (like `e'er` and `e's`). There are no accented Latin characters and they are all ASCII lowercase.
|
2021/07/28
|
[
"https://Stackoverflow.com/questions/68564322",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16498000/"
] |
The code assumes that the input has one word per line without leading spaces and will count all words that start with two ASCII letters from `'a'`..`'z'`. As the statement in the question is not fully clear, I further assume that the character encoding is ASCII or at least ASCII compatible. (The question states: "there are no accented Latin characters and they are all ASCII lowercase")
If you want to include words that consist of only one letter or words that contain `'`, the calculation of the index values from the characters would be a bit more complicated. In this case I would add a function to calculate the index from the character value.
Also for non-ASCII letters the simple calculation of the array index would not work.
The program reads the input line by line without storing all lines, checks the input as defined above and converts the first two characters from range `'a'`..`'z'` to index values in range `0`..`'z'-'a'` to count the occurrence in a two-dimensional array.
```
#include <stdio.h>
#include <stdlib.h>
int main (void) {
char *line = NULL;
size_t len = 0;
ssize_t read;
/* Counter array, initialized with 0. The highest possible index will
* be 'z'-'a', so the size in each dimension is 1 more */
unsigned long count['z'-'a'+1]['z'-'a'+1] = {0};
FILE *fp = fopen("large", "r");
if (fp == NULL)
{
return 1;
}
while ((read = getline(&line, &len, fp)) != -1)
{
/* ignore short input */
if(read >= 2)
{
/* ignore other characters */
if((line[0] >= 'a') && (line[0] <= 'z') &&
(line[1] >= 'a') && (line[1] <= 'z'))
{
/* convert first 2 characters to array index range and count */
count[line[0]-'a'][line[1]-'a']++;
}
}
}
fclose(fp);
if (line)
free(line);
/* example output */
for(int i = 'a'-'a'; i <= 'z'-'a'; i++)
{
for(int j = 'a'-'a'; j <= 'z'-'a'; j++)
{
/* only print combinations that actually occurred */
if(count[i][j] > 0)
{
printf("%c%c %lu\n", i+'a', j+'a', count[i][j]);
}
}
}
return 0;
}
```
The example input
```none
foo
a
foobar
bar
baz
fish
ford
```
results in
```
ba 2
fi 1
fo 3
```
|
Such a job is more suitable for languages like Python, Perl, Ruby etc. instead of C. I suggest at least trying C++.
If you don't have to write it in C, here is my Python version. (Since you didn't mention it in the question - are you working on an embedded system or something where C/ASM are the only options?)
```py
FILENAME = '/etc/dictionaries-common/words'
with open(FILENAME) as f:
flattened = [ line[:2] for line in f ]
dic = {
key: flattened.count(key)
for key in sorted(frozenset(flattened))
}
for k, v in dic.items():
print(f'{k} = {v}')
```
Outputs:
```
A' = 1
AM = 2
AO = 2
AW = 2
Aa = 6
Ab = 44
Ac = 37
Ad = 68
Ae = 18
Af = 22
Ag = 36
Ah = 12
Ai = 17
Aj = 2
Ak = 14
Al = 284
Am = 91
An = 223
Ap = 44
Aq = 13
Ar = 185
As = 88
At = 56
Au = 81
Av = 28
Ax = 2
... ...
```
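The `flattened.count(key)` lookup makes the dict comprehension quadratic on a big wordlist; `collections.Counter` does the same tally in a single pass. A minimal sketch under the same assumptions (one word per line in the same file):
```py
from collections import Counter

FILENAME = '/etc/dictionaries-common/words'

with open(FILENAME) as f:
    # tally every two-character prefix in one pass over the file
    counts = Counter(line[:2] for line in f)

for prefix, n in sorted(counts.items()):
    print(f'{prefix} = {n}')
```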
| 5,193
|
68,932,000
|
```
from typing import List
def dailyTemperatures(temperatures: List[int]) -> List[int]:
temp_count = len(temperatures)
ans = [0]*temp_count
stack = []
idx_stack = []
for idx in range(temp_count-1,-1,-1): // first point
temperature = temperatures[idx]
last_temp_idx = 0
while stack: // second point
last_temp = stack[-1]
last_temp_idx = idx_stack[-1]
if last_temp <= temperature:
stack.pop()
idx_stack.pop()
else:
break
if len(stack) == 0:
stack.append(temperature)
idx_stack.append(idx)
ans[idx] = 0
continue
stack.append(temperature)
idx_stack.append(idx)
ans[idx] = last_temp_idx-idx
return ans
```
I have two questions. I've just started learning Python.
I googled but couldn't find an answer.
first point > (temp\_count-1,-1,-1)
I'm not sure what this expression means.
Does this mean decrement by one? Why are there two -1s?
second point > while stack:
Does this mean that it operates when the stack is empty?
|
2021/08/26
|
[
"https://Stackoverflow.com/questions/68932000",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16607199/"
] |
Since the address of `cs` is sent to a function and that function may spawn goroutines that may hold a reference to `cs` after the function returns, it is moved to the heap.
In the second case, `cs` is a pointer. There is no need to move the pointer itself to the heap, because what the function `Unmarshal` can refer to is the object pointed to by `cs`, not `cs` itself. You didn't show how `cs` is initialized, so this piece of code will fail; however, if you initialize `cs` to point to a variable declared in that function, that object will likely end up on the heap.
|
`proto.Unmarshal`
```
func Unmarshal(buf []byte, pb Message)
```
```
type Message interface {
Reset()
String() string
ProtoMessage()
}
```
interface{} can be any type; it is difficult to determine the concrete type of such a parameter at compile time, and an escape will also occur.
But if the interface{} holds a pointer, it is just a pointer.
| 5,202
|
66,675,001
|
I'm trying to use `sklearn_porter` to train a random forest model in Python which should then be exported to C code.
This is my code:
```
from sklearn_porter import Porter
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_iris
import sys
sys.path.append('../../../../..')
iris_data = load_iris()
X = iris_data.data
y = iris_data.target
print(X.shape, y.shape)
clf = RandomForestClassifier(n_estimators=15, max_depth=None,
min_samples_split=2, random_state=0)
clf.fit(X, y)
porter = Porter(clf, language='c')
output = porter.export(embed_data=True)
print(output)
```
but get the following error:
```
...AppData\Local\Programs\Python\Python38\lib\site-packages\sklearn_porter\Porter.py", line 10, in <module>
from sklearn.tree.tree import DecisionTreeClassifier
ModuleNotFoundError: No module named 'sklearn.tree.tree
```
So I checked Porter.py and saw that the import is not done right (?) because the from statements have mistakes in them, like `sklearn.tree.tree`: one `.tree` is enough
```
from sklearn.metrics import accuracy_score
from sklearn.tree.tree import DecisionTreeClassifier
from sklearn.ensemble.weight_boosting import AdaBoostClassifier
from sklearn.ensemble.forest import RandomForestClassifier
from sklearn.ensemble.forest import ExtraTreesClassifier
from sklearn.svm.classes import LinearSVC
from sklearn.svm.classes import SVC
from sklearn.svm.classes import NuSVC
from sklearn.neighbors.classification import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.naive_bayes import BernoulliNB
```
So I corrected that, but another error occurred:
```
...AppData\Local\Programs\Python\Python38\lib\site-packages\sklearn_porter\Porter.py", line 117, in __init__
error = "Currently the given model '{algorithm_name}' " \
KeyError: 'algorithm_name'
```
but I couldn't correct this error...
What am I doing wrong?
I'm using python 3.8.8 and scikit-learn 0.24.1
Edit: tried it also with Python 3.7.10 because the [Git](https://github.com/nok/sklearn-porter) page states that it is only available for Python 2.7, 3.4, 3.5, 3.6 and 3.7. On 3.7 the same error occurs.
|
2021/03/17
|
[
"https://Stackoverflow.com/questions/66675001",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12959163/"
] |
You should use `connects_to database: { writing: :wn }`
When you specify only `reading:` keyword you will get this error `No connection pool for 'Wn' found`. You can only use `reading:` together with `writing:` when you have a read replica.
See docs for more info <https://edgeguides.rubyonrails.org/active_record_multiple_databases.html#horizontal-sharding>
|
Create an abstract class in `models/wn.rb` ...
```
class Wn < ActiveRecord::Base
self.abstract_class = true
connects_to database: { reading: :wn }
end
```
then in `models/ci_harvest_record.rb`
```
class CiHarvestRecord < Wn
self.table_name = "ciHarvest"
end
```
| 5,203
|
23,530,703
|
We're currently working with Cassandra on a single-node cluster to test application development on it. Right now, we have a really huge data set consisting of approximately 70M lines of text that we would like to dump into Cassandra.
We have tried all of the following:
* Line by line insertion using python Cassandra driver
* Copy command of Cassandra
* Set compression of sstable to none
We have explored the option of the sstable bulk loader, but we don't have an appropriate .db format for this. Our text file to be loaded has 70M lines that look like:
```
2f8e4787-eb9c-49e0-9a2d-23fa40c177a4 the magnet programs succeeded in attracting applicants and by the mid-1990s only #about a #third of students who #applied were accepted.
```
The column family that we're intending to insert into has this creation syntax:
```
CREATE TABLE post (
postid uuid,
posttext text,
PRIMARY KEY (postid)
) WITH
bloom_filter_fp_chance=0.010000 AND
caching='KEYS_ONLY' AND
comment='' AND
dclocal_read_repair_chance=0.000000 AND
gc_grace_seconds=864000 AND
index_interval=128 AND
read_repair_chance=0.100000 AND
replicate_on_write='true' AND
populate_io_cache_on_flush='false' AND
default_time_to_live=0 AND
speculative_retry='99.0PERCENTILE' AND
memtable_flush_period_in_ms=0 AND
compaction={'class': 'SizeTieredCompactionStrategy'} AND
compression={};
```
Problem:
The loading of the data into even a simple column family is taking forever -- 5 hrs for the 30M lines that were inserted. We were wondering if there is any way to expedite this, as loading 70M lines of the same data into MySQL takes approximately 6 minutes on our server.
We were wondering if we have missed something? Or if someone could point us in the right direction?
Many thanks in advance!
|
2014/05/08
|
[
"https://Stackoverflow.com/questions/23530703",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3613688/"
] |
The sstableloader is the fastest way to import data into Cassandra. You have to write the code to generate the sstables, but if you really care about speed this will give you the most bang for your buck.
This article is a bit old, but the basics still apply to how you [generate the SSTables](http://www.datastax.com/dev/blog/bulk-loading)
.
If you really don't want to use the sstableloader, you should be able to go faster by doing the inserts in parallel. A single node can handle multiple connections at once, and you can scale out your Cassandra cluster for increased throughput.
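If you go the parallel route with the Python driver, `execute_concurrent_with_args` from the DataStax driver keeps many prepared inserts in flight at once. A sketch, assuming the two-column schema from the question and a whitespace-separated input file (the file name and keyspace are placeholders):
```
import uuid
from cassandra.cluster import Cluster
from cassandra.concurrent import execute_concurrent_with_args

cluster = Cluster(['127.0.0.1'])
session = cluster.connect('mykeyspace')  # placeholder keyspace name

# prepare once so the server parses the statement a single time
insert = session.prepare("INSERT INTO post (postid, posttext) VALUES (?, ?)")

def rows(path):
    with open(path) as f:
        for line in f:
            postid, _, posttext = line.rstrip('\n').partition(' ')
            yield (uuid.UUID(postid), posttext)

# keep up to 100 requests in flight instead of waiting on each insert
execute_concurrent_with_args(session, insert, rows('posts.txt'), concurrency=100)
```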
|
I have a two-node Cassandra 2.? cluster. Each node is an i7-4200MQ laptop (1 TB HDD, 16 GB RAM). I have imported almost 5 billion rows using the copy command. Each CSV file is about 63 GB with approx 275 million rows. It takes about 8-10 hours to complete the import per file.
That is approx 6500 rows per sec.
The YAML file is set to use 10 GB of RAM, in case that helps.
| 5,204
|
17,155,724
|
So I am working on this project where I take input from the user (a file name) and then open and check for stuff. The file name is "cur".
Now suppose the name of my file is `kb.py` (it's in Python).
If I run it on my terminal then first I will do:
`python kb.py`, and then there will be a prompt and the user will give the input.
I'll do it this way:
```
A = raw_input("Enter File Name: ")
b = open(A, 'r+')
```
I don't want to do that. Instead I want to use it as a command, for example:
**python kb.py cur**, and it will take it as an input, save it to a variable, and then open it.
I am confused about how to get an input on the same command line.
|
2013/06/17
|
[
"https://Stackoverflow.com/questions/17155724",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2438430/"
] |
Just use `sys.argv`, like this:
```
import sys
# this part executes when the script is run from the command line
if __name__ == '__main__':
if len(sys.argv) != 2: # check for the correct number of arguments
print 'usage: python kb.py cur'
else:
call_your_code(sys.argv[1]) # first command line argument
```
Note: `sys.argv[0]` is the script's name, and `sys.argv[1]` is the first command line argument. And so on, if there were more arguments.
|
For simple stuff `sys.argv[]` is the way to go; for more complicated stuff, have a look at the [argparse-module](http://docs.python.org/2/howto/argparse.html)
```
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--verbose", help="increase output verbosity",
action="store_true")
args = parser.parse_args()
if args.verbose:
print "verbosity turned on"
```
output:
```
$ python prog.py --verbose
verbosity turned on
$ python prog.py --verbose 1
usage: prog.py [-h] [--verbose]
prog.py: error: unrecognized arguments: 1
$ python prog.py --help
usage: prog.py [-h] [--verbose]
optional arguments:
-h, --help show this help message and exit
--verbose increase output verbosity
```
| 5,205
|
72,215,886
|
How do I write Python code that lets the computer know if the list is a proper sequence, where position doesn't matter? If so it will return true, otherwise it returns false.
Below are some of my examples; I really don't know how to start.
```
b=[1,2,3,4,5] #return true
b=[1,2,2,1,3] # return false
b=[2,3,1,5,4] #return true
b=[2,4,6,4,3] # return false
```
|
2022/05/12
|
[
"https://Stackoverflow.com/questions/72215886",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
The sort function is O(n log n); we can use a for loop, which is O(n):
```
def check_seq(in_list):
now_ele = set()
min_ele = max_ele = in_list[0]
for i in in_list:
if i in now_ele:
return False
min_ele = min(i, min_ele)
max_ele = max(i, max_ele)
now_ele.add(i)
if max_ele-min_ele+1 == len(in_list):
return True
return False
```
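Running it over the asker's examples (a quick check of the logic above):
```
for b in ([1, 2, 3, 4, 5], [1, 2, 2, 1, 3], [2, 3, 1, 5, 4], [2, 4, 6, 4, 3]):
    print(b, check_seq(b))
# [1, 2, 3, 4, 5] True
# [1, 2, 2, 1, 3] False
# [2, 3, 1, 5, 4] True
# [2, 4, 6, 4, 3] False
```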
|
This question is quite simple and can be solved a few ways.
1. The conditional approach - if there is a number that is bigger than the length of the list, it automatically cannot be a sequence because there can only be numbers from 1-n where n is the size of the list. Also, you have to check if there are any duplicates in the list as this cannot be possible either. If none of these conditions occur, it should return true
2. Using dictionary - go through the entire list and add it as a key to a dictionary. Afterwards, simply loop through numbers 1-n where n is the length of the list and check if they are keys in the dictionary, if one of them isn't, return false. If all of them are, return true.
Both of these are quite simple approaches and you should be able to implement them yourself. However, here is one implementation for both.
1.
```
def solve(list1):
seen = {}
for i in list1:
if i > len(list1):
return False
if i in seen:
return False
seen[i] = True
    return True
```
2.
```
def solve(list1):
seen = {}
for i in list1:
seen[i] = True
for i in range (1, len(list1)+1):
if i not in seen:
return False
return True
```
| 5,206
|
46,725,942
|
I'm writing some calculation tasks which would be efficient in Python or Java, but Sidekiq does not seem to support external consumers.
I'm aware there's a workaround to spawn a task using system call:
```
class MyWorker
include Sidekiq::Worker
def perform(*args)
`python script.py -c args` # and watch out using `ps`
end
end
```
Is there a better way to do this with rewriting a Sidekiq consumer?
|
2017/10/13
|
[
"https://Stackoverflow.com/questions/46725942",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3340588/"
] |
You can use apply and which:
```
df <- data.frame( x1 = c(0, 0, 1), x2 = c(1, 0 , 0), x3 = c(0, 1 , 0) )
idx <- apply( df, 1, function(row) which( row == 1 ) )
cbind( df, Number = colnames( df[ , idx] ) )
x1 x2 x3 Number
1 0 1 0 x2
2 0 0 1 x3
3 1 0 0 x1
```
|
We can use `max.col` to find the column index of logical matrix (`df1[-1]=="1+"`). Add 1 to it because we used only from 2nd column. Then, with `names(df1)` get the corresponding names
```
df1$Number <- names(df1)[max.col(df1[-1]=="1+")+1]
df1$Number
#[1] "X3000" "X1234" "X7500"
```
| 5,210
|
8,722,182
|
I'm getting a "DoesNotExist" error with the following set up - I've been trying to debug for a while and just can't figure it out.
```
class Video(models.Model):
name = models.CharField(max_length=100)
type = models.CharField(max_length=100)
owner = models.ForeignKey(User, related_name='videos')
...
#Related m2m fields
....
class VideoForm(modelForm):
class Meta:
model = Video
fields = ('name', 'type')
class VideoCreate(CreateView):
template_name = 'video_form.html'
form_class = VideoForm
model = Video
```
When I do this and post data for 'name' and 'type' - I get a "DoesNotExist" error. It seems to work fine with an UpdateView - or when an 'instance' is passed to init the form.
This is the exact location where the error is raised:
/usr/lib/pymodules/python2.7/django/db/models/fields/related.py in **get**, line 301
Does anyone know what might be going on?
Thanks
|
2012/01/04
|
[
"https://Stackoverflow.com/questions/8722182",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/801820/"
] |
Since you have not posted your full traceback, my guess is that your owner FK is not optional, and you are not specifying one in your model form.
You need to post a full traceback.
|
I think it has to be class `VideoForm(ModelForm)` instead of `VideoForm(modelForm)`.
If you aren't going to use the foreign key in the form, use `exclude = ('owner',)`
| 5,215
|
21,356,122
|
I have a small project at home, where I need to scrape a website for links every once in a while and save the links in a txt file.
The script needs to run on my Synology NAS, therefore it needs to be written as a bash script or in Python without using any plugins or external libraries, as I can't install them on the NAS (to my knowledge anyhow).
A link looks like this:
```
<a href="http://www.example.com">Example text</a>
```
I want to save the following to my text file:
```
Example text - http://www.example.com
```
I was thinking I could isolate the text with curl and some grep (or perhaps regex). First I looked into using Scrapy or BeautifulSoup, but couldn't find a way to install them on the NAS.
Could one of you help me put a script together?
|
2014/01/25
|
[
"https://Stackoverflow.com/questions/21356122",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3235644/"
] |
You can use `urllib2` that ships as free with Python. Using it you can easily get the html of any url
```
import urllib2
response = urllib2.urlopen('http://python.org/')
html = response.read()
```
Now, about the parsing the html. You can still use `BeautifulSoup` without installing it. From [their site](http://www.crummy.com/software/BeautifulSoup/), it says "*You can also download the tarball and use BeautifulSoup.py in your project directly*". So search on internet for that `BeautifulSoup.py` file. If you can't find it, then download [this one](https://svn.apache.org/repos/infra/infrastructure/trunk/projects/stats/BeautifulSoup.py) and save into a local file inside your project. Then use it like below:
```
soup = BeautifulSoup(html)
for link in soup("a"):
print link["href"]
print link.renderContents()
```
|
I recommend using Python's htmlparser library. It will parse the page into a hierarchy of objects for you. You can then find the a href tags.
<http://docs.python.org/2/library/htmlparser.html>
There are lots of examples of using this library to find links, so I won't list all of the code, but here is a working example:
[Extract absolute links from a page using HTMLParser](https://stackoverflow.com/questions/6816138/extract-absolute-links-from-a-page-uisng-htmlparser)
**EDIT:**
As Oday pointed out, the htmlparser is an external library, and you may not be able to load it. In that case, here are two recommendations for built-in modules that can do what you need:
* `htmllib` is included in Python 2.X.
* `xml` is included in Python 2.X and 3.X.
There is also a good explanation elsewhere on this site for how to use wget & grep to do the same thing:
[Spider a Website and Return URLs Only](https://stackoverflow.com/questions/2804467/spider-a-website-and-return-urls-only)
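For completeness, a minimal sketch with the built-in `HTMLParser` module (Python 2, no external libraries), printing links in the `text - href` format the question asks for:
```
from HTMLParser import HTMLParser

class LinkParser(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)  # old-style class, so no super()
        self.href = None
        self.text = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            self.href = dict(attrs).get('href')
            self.text = []

    def handle_data(self, data):
        if self.href is not None:
            self.text.append(data)

    def handle_endtag(self, tag):
        if tag == 'a' and self.href is not None:
            print '%s - %s' % (''.join(self.text).strip(), self.href)
            self.href = None

LinkParser().feed('<a href="http://www.example.com">Example text</a>')
```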
| 5,216
|
22,180,285
|
My python function is given a (long) list of path arguments, each of which can possibly be a glob. I make a pass over this list using `glob.glob` to extract all the matching filenames, like this:
```
files = [filename for pattern in patterns for filename in glob.glob(pattern)]
```
That works, but the filesystem I'm on has very poor performance for directory listing operations, and currently this operation adds about a minute(!) to the start-up time of my program. So I would like to only perform glob expansion for non-trivial glob patterns (i.e. those that aren't just normal pathnames) to speed this up. I.e.
```
def cheapglob(pattern):
return [pattern] if istrivial(pattern) else glob.glob(pattern)
files = [filename for pattern in patterns for filename in cheapglob(pattern)]
```
Since `glob.glob` basically does a set of directory listings coupled with `fnmatch.fnmatch`, I thought it should be possible to somehow ask `fnmatch` whether a given string is a non-trivial pattern or not, but I can't see how to do that.
As a fallback, I guess I could attempt to identify these patterns in the string myself, though that feels a lot like reinventing the wheel, and would be error prone. But this feels like the sort of thing there should be an elegant solution for.
|
2014/03/04
|
[
"https://Stackoverflow.com/questions/22180285",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1061433/"
] |
According to [the fnmatch source code](http://code.google.com/p/unladen-swallow/source/browse/trunk/Lib/fnmatch.py), the only special characters it recognizes are `*`, `?`, `[` and `]`. Hence any pattern that does not contain any of these will only match itself. We can therefore implement the `cheapglob` mentioned in the question as
```
def cheapglob(s): return glob.glob(s) if re.search("[][*?]", s) else [s]
```
This will only hit the file system for patterns which include special characters. This differs subtly from a plain `glob.glob`: For a pattern with no special characters like "foo.txt", this function will return `["foo.txt"]` regardless of whether that file exists, while `glob.glob` will return `[]` if the file isn't there. So the calling function will need to handle the possibility that some of the returned files might not exist.
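For example, in a directory containing only `a.txt` and `b.txt` (a hypothetical layout):
```
>>> cheapglob('foo.txt')   # no glob metacharacters: returned as-is, no disk access
['foo.txt']
>>> cheapglob('*.txt')     # contains '*', so the filesystem is consulted
['a.txt', 'b.txt']
```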
|
I don't think you'll find much, as your idea of a trivial pattern might not be mine. Also, from a comp-sci point of view, it might be impossible to tell from inspection whether a pushdown automaton is going to run in a set amount of time given the inputs you're running it against, without actually running it against those inputs.
I strongly suspect you'd be better off here loading the directory listing once and then applying `fnmatch` against that list manually.
| 5,218
|
60,538,828
|
Here I am trying to scrape teacher jobs from <https://www.indeed.co.in/?r=us>. I want them uploaded to an Excel sheet with fields like job title, institute/school, salary, and how many days ago the job was posted.
I wrote the scraping code like this, but I am getting all the text from the XPath which I defined:
```
import selenium.webdriver
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions
url = 'https://www.indeed.co.in/?r=us'
driver = webdriver.Chrome(r"mypython/bin/chromedriver_linux64/chromedriver")
driver.get(url)
driver.find_element_by_xpath('//*[@id="text-input-what"]').send_keys("teacher")
driver.find_element_by_xpath('//*[@id="whatWhereFormId"]/div[3]/button').click()
items = driver.find_elements_by_xpath('//*[@id="resultsCol"]')
for item in items:
print(item.text)
```
Also, I am only able to scrape one page, and I want all the pages that are available after I search for teacher.
Please help me. Thanks in advance.
|
2020/03/05
|
[
"https://Stackoverflow.com/questions/60538828",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12798693/"
] |
I'd encourage you to check out Beautiful Soup: <https://pypi.org/project/beautifulsoup4/>
I've used this for scraping tables:
```
def read_table(table):
"""Read an IP Address table.
Args:
table: the Soup <table> element
Returns:
None if the table isn't an IP Address table, otherwise a list of
the IP Address:port values.
"""
header = None
rows = []
for tr in table.find_all('tr'):
if header is None:
header = read_header(tr)
if not header or header[0] != 'IP Address':
return None
else:
row = read_row(tr)
if row:
rows.append('{}:{}'.format(row[0], row[1]))
return rows
```
Here is just a snippet from one of my python projects <https://github.com/backslash/WebScrapers/blob/master/us-proxy-scraper/us-proxy.py>
You can use Beautiful Soup to scrape tables incredibly easily. If you're worried about it getting blocked, then you just need to send the right headers. Also, another advantage of using Beautiful Soup is that you don't have to wait as long for a lot of stuff.
```
HEADERS = requests.utils.default_headers()
HEADERS.update({
'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0',
})
```
Best of luck
|
You'll have to navigate to every page and **scrape** them one by one, i.e. you'll have to automate clicking on the next page button in Selenium (use the XPath of the Next Page button element). Then extract using the page source function.
Hope I could help.
| 5,219
|
22,258,738
|
I am trying to list items in a S3 container with the following code.
```
import boto.s3
from boto.s3.connection import OrdinaryCallingFormat
conn = boto.connect_s3(calling_format=OrdinaryCallingFormat())
mybucket = conn.get_bucket('Container001')
for key in mybucket.list():
print key.name.encode('utf-8')
```
Then I get the following error.
```
Traceback (most recent call last):
File "test.py", line 5, in <module>
mybucket = conn.get_bucket('Container001')
File "/usr/lib/python2.7/dist-packages/boto/s3/connection.py", line 370, in get_bucket
bucket.get_all_keys(headers, maxkeys=0)
File "/usr/lib/python2.7/dist-packages/boto/s3/bucket.py", line 358, in get_all_keys
'', headers, **params)
File "/usr/lib/python2.7/dist-packages/boto/s3/bucket.py", line 325, in _get_all
response.status, response.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 301 Moved Permanently
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>PermanentRedirect</Code><Message>The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.</Message><RequestId>99EBDB9DE3B6E3AF</RequestId>
<Bucket>Container001</Bucket>
<HostId>5El9MLfgHZmZ1UNw8tjUDAl+XltYelHu6d/JUNQsG3OaM70LFlpRchEJi9oepeMy</HostId><Endpoint>Container001.s3.amazonaws.com</Endpoint></Error>
```
I tried to search for how to send requests to the specified end point, but couldn't find useful information.
How do I avoid this error?
|
2014/03/07
|
[
"https://Stackoverflow.com/questions/22258738",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3394040/"
] |
As @garnaat mentioned and @Rico [answered in another question](https://stackoverflow.com/a/22462419/3162882) `connect_to_region` works with `OrdinaryCallingFormat`:
```
conn = boto.s3.connect_to_region(
region_name = '<your region>',
aws_access_key_id = '<access key>',
aws_secret_access_key = '<secret key>',
calling_format = boto.s3.connection.OrdinaryCallingFormat()
)
bucket = conn.get_bucket('<bucket name>')
```
|
in terminal run
>
> nano ~/.boto
>
>
>
If there are some configs, try commenting them out or renaming the file and connecting again. (it helped me)
<http://boto.cloudhackers.com/en/latest/boto_config_tut.html>
that page lists the boto config file directories. Take a look one by one and clean them all; it will then work with the default configs. Also, configs may be in .bash_profile, .bashrc, and other sourced files...
I guess you must keep only the KEY and SECRET.
also try to use
>
> calling\_format = boto.s3.connection.OrdinaryCallingFormat()
>
>
>
| 5,221
|
56,832,149
|
I have an awkward csv file and I need to skip the first row to read it.
I'm doing this easily with python/pandas
```
df = pd.read_csv(filename, skiprows=1)
```
but I don't know how to do it in Go.
```
package main
import (
"encoding/csv"
"fmt"
"log"
"os"
)
type mwericsson struct {
id string
name string
region string
}
func main() {
rows := readSample()
fmt.Println(rows)
//appendSum(rows)
//writeChanges(rows)
}
func readSample() [][]string {
f, err := os.Open("D:/in/20190629/PM_IG30014_15_201906290015_01.csv")
if err != nil {
log.Fatal(err)
}
rows, err := csv.NewReader(f).ReadAll()
f.Close()
if err != nil {
log.Fatal(err)
}
return rows
}
```
Error:
```
2019/07/01 12:38:40 record on line 2: wrong number of fields
```
`PM_IG30014_15_201906290015_01.csv`:
```
PTN Ethernet-Port RMON Performance,PORT_BW_UTILIZATION,2019-06-29 20:00:00,33366
DeviceID,DeviceName,ResourceName,CollectionTime,GranularityPeriod,PORT_RX_BW_UTILIZATION,PORT_TX_BW_UTILIZATION,RXGOODFULLFRAMESPEED,TXGOODFULLFRAMESPEED,PORT_RX_BW_UTILIZATION_MAX,PORT_TX_BW_UTILIZATION_MAX
3174659,H1095,H1095-11-ISM6-1(to ZJBSC-V1),2019-06-29 20:00:00,15,22.08,4.59,,,30.13,6.98
3174659,H1095,H1095-14-ISM6-1(to T6147-V),2019-06-29 20:00:00,15,2.11,10.92,,,4.43,22.45
```
|
2019/07/01
|
[
"https://Stackoverflow.com/questions/56832149",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8660255/"
] |
>
> skip the first row when reading a csv file
>
>
>
---
For example,
```
package main
import (
"bufio"
"encoding/csv"
"fmt"
"io"
"os"
)
func readSample(rs io.ReadSeeker) ([][]string, error) {
// Skip first row (line)
row1, err := bufio.NewReader(rs).ReadSlice('\n')
if err != nil {
return nil, err
}
_, err = rs.Seek(int64(len(row1)), io.SeekStart)
if err != nil {
return nil, err
}
// Read remaining rows
r := csv.NewReader(rs)
rows, err := r.ReadAll()
if err != nil {
return nil, err
}
return rows, nil
}
func main() {
f, err := os.Open("sample.csv")
if err != nil {
panic(err)
}
defer f.Close()
rows, err := readSample(f)
if err != nil {
panic(err)
}
fmt.Println(rows)
}
```
Output:
```
$ cat sample.csv
one,two,three,four
1,2,3
4,5,6
$ go run sample.go
[[1 2 3] [4 5 6]]
$
$ cat sample.csv
PTN Ethernet-Port RMON Performance,PORT_BW_UTILIZATION,2019-06-29 20:00:00,33366
DeviceID,DeviceName,ResourceName,CollectionTime,GranularityPeriod,PORT_RX_BW_UTILIZATION,PORT_TX_BW_UTILIZATION,RXGOODFULLFRAMESPEED,TXGOODFULLFRAMESPEED,PORT_RX_BW_UTILIZATION_MAX,PORT_TX_BW_UTILIZATION_MAX
3174659,H1095,H1095-11-ISM6-1(to ZJBSC-V1),2019-06-29 20:00:00,15,22.08,4.59,,,30.13,6.98
3174659,H1095,H1095-14-ISM6-1(to T6147-V),2019-06-29 20:00:00,15,2.11,10.92,,,4.43,22.45
$ go run sample.go
[[DeviceID DeviceName ResourceName CollectionTime GranularityPeriod PORT_RX_BW_UTILIZATION PORT_TX_BW_UTILIZATION RXGOODFULLFRAMESPEED TXGOODFULLFRAMESPEED PORT_RX_BW_UTILIZATION_MAX PORT_TX_BW_UTILIZATION_MAX] [3174659 H1095 H1095-11-ISM6-1(to ZJBSC-V1) 2019-06-29 20:00:00 15 22.08 4.59 30.13 6.98] [3174659 H1095 H1095-14-ISM6-1(to T6147-V) 2019-06-29 20:00:00 15 2.11 10.92 4.43 22.45]]
$
```
|
Simply call [`Reader.Read()`](https://golang.org/pkg/encoding/csv/#Reader.Read) to read a line, then proceed to read the rest with [`Reader.ReadAll()`](https://golang.org/pkg/encoding/csv/#Reader.ReadAll).
See this example:
```
src := "one,two,three\n1,2,3\n4,5,6"
r := csv.NewReader(strings.NewReader(src))
if _, err := r.Read(); err != nil {
panic(err)
}
records, err := r.ReadAll()
if err != nil {
panic(err)
}
fmt.Println(records)
```
Output (try it on the [Go Playground](https://play.golang.org/p/UgAA3VyEKP-)):
```
[[1 2 3] [4 5 6]]
```
| 5,222
|
11,774,163
|
This is the idea: I'll have a 'main' Python script that will start (using subprocess) app1 and app2. The 'main' script will send input to app1 and output the result to app2 and vice versa (and the main script will need to remember what was sent, so I can't just pipe app1 to app2).
This is the main script.
```
import subprocess
import time
def main():
prvi = subprocess.Popen(['python', 'random1.py'], stdin = subprocess.PIPE , stdout = subprocess.PIPE, stderr = subprocess.STDOUT)
while 1:
prvi.stdin.write('131231\n')
time.sleep(1) # maybe it needs to wait
print "procitano", prvi.stdout.read()
if __name__ == '__main__':
main()
```
And this is the 'random1.py' file.
```
import random
def main():
while 1:
inp = raw_input()
print inp, random.random()
if __name__ == '__main__':
main()
```
First I've tried with only one subprocess just to see if it's working, and it's not. It only outputs 'procitano' and waits there.
How can I read output from 'prvi' without communicate()? When I use it, it exits my app and that's something that I don't want.
|
2012/08/02
|
[
"https://Stackoverflow.com/questions/11774163",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/554778/"
] |
Add `prvi.stdin.flush()` after `prvi.stdin.write(...)`.
Explanation: To optimize communication between processes, the OS will buffer 4KB of data before it sends that whole buffer to the other process. If you send less data, you need to tell the OS "That's it. Send it *now*" -> `flush()`
**[EDIT]** The next problem is that `prvi.stdout.read()` will never return since the child doesn't exit.
You will need to develop a protocol between the processes, so each knows how many bytes of data to read when it gets something. A simple solution is to use a line based protocol (each "message" is terminated by a new line). To do that, replace `read()` with `readline()` and don't forget to append `\n` to everything you send + `flush()`
|
**main.py**
```
import subprocess
import time
def main():
prvi = subprocess.Popen(['python', 'random1.py'], stdin = subprocess.PIPE , stdout = subprocess.PIPE, stderr = subprocess.STDOUT)
prvi.stdin.write('131231\n')
time.sleep(1) # maybe it needs to wait
print "procitano", prvi.stdout.read()
if __name__ == '__main__':
main()
```
**random1.py**
```
import random
def main():
inp = raw_input()
print inp, random.random()
inp = raw_input()
if __name__ == '__main__':
main()
```
I've tested with the above code, and I got the same problem as with your code.
I think the problem is timing.
Here is my guess:
when main.py tries the code below
```
prvi.stdout.read() # i think this code may use the random1.py process
```
the code below grabs the random1.py process
```
inp = raw_input()
```
To solve this problem, I think, as Aaron Digulla says, you need to develop a protocol between the processes to make it work.
| 5,225
|
4,181,573
|
Is there such a program that can open a little input box and send the input to stdout? If there isn't, any suggestions for how to do this (maybe python with TkInter)?
|
2010/11/15
|
[
"https://Stackoverflow.com/questions/4181573",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/507857/"
] |
If you're looking for something that works in text mode, then [`dialog`](http://linux.die.net/man/1/dialog) or [`whiptail`](http://linux.die.net/man/1/whiptail) are two options.
|
The oldest would probably be [dialog](http://www.linuxjournal.com/article/2807). Another example of such a program is [Zenity](http://freshmeat.net/projects/zenity) and another would be [Xdialog](http://xdialog.free.fr/) (all pretty much replacements for dialog).
They tend to do more than just accepting user input, and you can create some fairly complex GUI dialogs with them, but are simple enough to do what you want very easily.
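And since the question mentions Tkinter, a minimal sketch (Python 2 module names; use `tkinter.simpledialog` on Python 3) that pops up one input box and writes the entry to stdout:
```
import Tkinter as tk
import tkSimpleDialog

root = tk.Tk()
root.withdraw()  # hide the empty main window, keep only the dialog
answer = tkSimpleDialog.askstring("Input", "Enter a value:")
if answer is not None:  # None means the user pressed Cancel
    print answer
```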
| 5,227
|
10,592,891
|
Dear Stack Overflow community,
I'm writing in hopes that you might be able to help me connect to an 802.15.4 wireless transceiver using C# or C++. Let me explain a little bit about my project. This semester, I spent some time developing a wireless sensor board that would transmit light, temperature, humidity, and motion detection levels every 8 seconds to a USB wireless transceiver. Now, I didn't develop the USB transceiver. One of the TAs for the course did, and he helped me throughout the development process for my sensor board (it was my first real PCB).
Now, I've got the sensor board programmed and I know it's sending the data to the transceiver. The reason I know this is that this TA wrote a simple python module that would pull the latest packet of information from the transceiver (whenever it was received), unpack the hex message, and convert some of the sensor data into working units (like degrees Celsius, % relative humidity, etc.)
The problem is that the python module works on his computer (Mac) but not on mine (Windows 7). Basically, he's using a library called zigboard to unpack the sensor message, as well as pyusb and pyserial libraries in the sketch. The 802.15.4 wireless transceiver automatically enumerates itself on a Mac, but runs into larger issues when running on a PC. Basically, I believe the issue lies in the lack of having a signed driver. I'm using libusb to generate the .inf file for this particular device... and I know it's working on my machine because there is an LED on my sensor board and on the transceiver which blink when a message is sent/received. However, when I run the same python module that this TA runs on his machine, I get an error message about missing some Windows Backend Binaries and thus, it never really gets to the stage where it returns the data.
But, the larger issue isn't with this python module. The bigger issue is that I don't want to have to use Python. This sensor board is going to be part of a larger project in which I'll be designing a software interface in C# or C++ to do many different things (some of which is dealing with this sensor data). So, ultimately I want to be able to work in .NET in order to access the data from this transceiver. However, all I have to go on is this python sketch (which wont even run on my machine). I know the easiest thing to do would be to ask this TA more questions about how to get this to work on my machine... but I've already monopolized a ton of his time this semester regarding this project and additionally he's currently out of town. Also, his preference is python, where as I'm most comfortable in C# or C++ and would like to use that environment for this project. Now, I would say I'm competent in electronics and programming (but certainly not an expert... my background is actually in architecture). But, if anyone could help me develop some code so I could unpack the sensor message being sent from board, it would be greatly appreciated. I've attached the Python sketch below which is what the TA uses to unpack his sensor messages on his machine (but like I said... I had issues on my windows machine). Does anyone have any suggestions?
Thanks again.
```
from zigboard import ZigBoard
from struct import unpack
from time import sleep, time
zb = ZigBoard()
lasttime = time()
while True:
pkt = zb.receive()
if pkt is None:
sleep(0.01)
continue
if len(pkt.data) < 10:
print "short packet"
sleep(0.01)
continue
data = pkt.data[:10]
cmd, bat, light, SOt, SOrh, pir = unpack("<BBHHHH", data)
lasttime = time()
d1 = -39.6
d2 = 0.01
c1 = -2.0468
c2 = 0.0367
c3 = -1.5955E-6
t1 = 0.01
t2 = 0.00008
sht15_tmp = d1 + d2 * float(SOt);
RHL = c1 + c2 * SOrh + c3 * float(SOrh)**2
sht15_rh = (sht15_tmp - 25.0) * (t1 + t2 * float(SOrh)) + RHL
print "address: 0x%04x" % pkt.src_addr
print "temperature:", sht15_tmp
print "humidity:", sht15_rh
print "light:", light
print "motion:", pir
print
```
|
2012/05/15
|
[
"https://Stackoverflow.com/questions/10592891",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1394922/"
] |
Thanks everyone for the help. The key to everything was using [LibUSBDotNet](http://sourceforge.net/projects/libusbdotnet/). Once I had installed and referenced that into my project... I was able to create a console window that could handle the incoming sensor data. I did need to port some of the functions from the original Zigboard library... but on
|
I'm not 100% sure exactly how to do this, but after having a quick look around I can see that the core of the problem is that you need to implement something like the ZigBoard lib in C#.
The ZigBoard lib communicates with the USB device through an API using a Python USB lib; you should be able to use [LibUsbDotNet](http://sourceforge.net/projects/libusbdotnet/) to replicate this in C#, and if you read through the ZigBoard lib's code you should be able to work out the API.
| 5,228
|
56,081,778
|
I have a shell script which runs to predict something on a Raspberry Pi which has Python version 2.7 as well as 3.5. To support audio features I have made Python 3.5 the default. When I run the shell script it is taking version 2.7 and throwing an error. [The error is shown in this link](https://i.stack.imgur.com/QhlMc.png)
|
2019/05/10
|
[
"https://Stackoverflow.com/questions/56081778",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9350170/"
] |
It's not exactly clear based on the wording of your question how your scripts are set up. However, if you are calling Python in the shell script, you can always specify python3 or python2 instead of just calling python (which points to your system's default). This would look something like this:
```
$ python3 python_script.py
```
If you are calling a script via the command line (e.g. something along the lines of `./python_script.py` in shell) directly that contains python code in it, you can also format the header of the script like so to force a specific version of python:
```
#!/usr/bin/env python3
python source code here
```
|
I would suggest [adding an alias to the root bashrc file](https://askubuntu.com/a/492787) as you seem to be calling this thing as the root user.
Something to the effect of `alias python=python3.5` at the bottom of the file `~root/.bashrc` may have the effect you're looking for, although I'm sure there's a more permanent solution out there.
| 5,229
|
35,074,895
|
I have a module with constants (data types and other things).
Let's call the module constants.py
Let's pretend it contains the following:
```
# noinspection PyClassHasNoInit
class SimpleTypes:
INT = 'INT'
BOOL = 'BOOL'
DOUBLE = 'DOUBLE'
# noinspection PyClassHasNoInit
class ComplexTypes:
LIST = 'LIST'
MAP = 'MAP'
SET = 'SET'
```
This is pretty and works great, IDEs with inspection will be able to help out by providing suggestions when you code, which is really helpful. PyCharm example below:
[](https://i.stack.imgur.com/PxgJN.png)
But what if I now have a value of some kind and want to check if it is among the ComplexTypes, without defining them in one more place. Would this be possible?
To clarify even more, I want to be able to do something like:
```
if myVar in constants.ComplexTypeList:
# do something
```
Which would of course be possible if I made a list "ComplexTypeList" with all the types in the constants module, but then I would have to add potentially new types to two locations each time (both the class and the list), is it possible to do something dynamically?
I want it to work in python 2.7, though suggestions on how to do this in later versions of python is useful knowledge as well.
**SOLUTION COMMENTS:**
I used inspect, as Prune suggested in the marked solution below. I made the list I mentioned above within the constants.py module as:
```
ComplexTypeList = [m[1] for m in inspect.getmembers(ComplexTypes) if not m[0].startswith('_')]
```
With this it is possible to do the example above.
|
2016/01/29
|
[
"https://Stackoverflow.com/questions/35074895",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3240126/"
] |
I think I have it. You can use **inspect.getmembers** to return the items in the module. Each item is a tuple of (*name*, *value*). I tried it with the following code. **dir** gives only the names of the module members; **getmembers** also returns the values. You can look for the desired value in the second element of each tuple.
constants.py
```
class ComplexTypes:
LIST_N = 'LIST'
MAP_N = 'MAP'
SET_N = 'SET'
```
test code
```
from constants import ComplexTypes
from inspect import getmembers, isfunction
print dir(ComplexTypes)
for o in getmembers(ComplexTypes):
print o
```
output:
```
['LIST_N', 'MAP_N', 'SET_N', '__doc__', '__module__']
('LIST_N', 'LIST')
('MAP_N', 'MAP')
('SET_N', 'SET')
('__doc__', None)
('__module__', 'constants')
```
You can request particular types of objects with the various **is** operators, such as
```
getmembers(ComplexTypes, inspect.isfunction)
```
to get the functions. Yes, you can remove things with such a simple package. I don't see a method to get your constants in a positive fashion. See [documentation](https://docs.python.org/2/library/inspect.html).
|
The builtin function `dir(cls)` will return all the attributes of the class given. Your if statement can therefore be `if myVar in dir(constants.ComplexTypes):`
| 5,231
|
12,645,195
|
To install python IMagick binding wand api on windows 64 bit (python 2.6)
This is what I did:
1. downloaded and installed [ImageMagick-6.5.8-7-Q16-windows-dll.exe](http://www.imagemagick.org/download/binaries/ImageMagick-6.5.8-7-Q16-windows-dll.exe)
2. downloaded `wand` module from <http://pypi.python.org/pypi/Wand>
3. after that i ran `python setup.py install` from `wand` directory,
4. then I executed step 6, but I got the import error: MagickWand library not found
5. downloaded the `magickwand` module and executed `python setup.py install` from the magickwand directory.
6. then again I tried this code
```
from wand.image import Image
from wand.display import display
with Image(filename='mona-lisa.png') as img:
print img.size
for r in 1, 2, 3:
with img.clone() as i:
i.resize(int(i.width * r * 0.25), int(i.height * r * 0.25))
i.rotate(90 * r)
i.save(filename='mona-lisa-{0}.png'.format(r))
display(i)
```
7. but thereafter I am again getting the same import error: MagickWand library not found
I am fed up with this because I have done all the installation, but I am not able to execute the code; every time I get the MagickWand library import error.
|
2012/09/28
|
[
"https://Stackoverflow.com/questions/12645195",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1705984/"
] |
You have to set `MAGICK_HOME` environment variable first. See the last part of [this section](http://docs.wand-py.org/en/0.2-maintenance/guide/install.html#install-imagemagick-on-windows).
>
> [](https://i.stack.imgur.com/KKEG5.png)
>
> (source: [wand-py.org](http://docs.wand-py.org/en/0.2-maintenance/_images/windows-envvar.png))
>
>
>
>
>
> Lastly you have to set `MAGICK_HOME` environment variable to the path of ImageMagick (e.g. `C:\Program Files\ImageMagick-6.7.7-Q16`). You can set it in *Computer ‣ Properties ‣ Advanced system settings ‣ Advanced ‣ Environment Variables...*.
>
>
>
|
First I had to install ImageMagick and set the environment variable `MAGICK_HOME`; just after that I was able to install `Wand` from `pip`.
| 5,234
|
62,062,054
|
I'm new to docker and I created a docker image and this is how my docker file looks like.
```
FROM python:3.8.3
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
postgresql-client \
&& rm -rf /var/lib/apt/lists/* \
    && apt-get install -y gcc libtool-ltdl-devel xmlsec1-1.2.20 xmlsec1-devel-1.2.20 xmlsec1 openssl-1.2.20 xmlsec1-openssl-devel-1.2.20 \
&& apt-get -y install curl gnupg \
&& curl -sL https://deb.nodesource.com/setup_14.x | bash - \
&& apt-get -y install nodejs
WORKDIR /app/
COPY . /app
RUN pip install -r production_requirements.txt \
&& front_end/noa-frontend/npm install
```
This image is used in docker-compose.yml's app service. So when I run `docker-compose build`, I'm getting the below error saying it couldn't find the package. Those are a few dependencies which I want to install in order to install a Python package.
[](https://i.stack.imgur.com/5aeiS.png)
In the beginning, I've run the apt-get update to update the package lists.
Can anyone please help me with this issue.
**Updated Dockerfile**
```
FROM python:3.8.3
RUN apt-get update
RUN apt-get install -y postgresql-client\
    && apt-get install -y gcc libtool-ltdl-devel xmlsec1-1.2.20 xmlsec1-devel-1.2.20 xmlsec1 openssl-1.2.20 xmlsec1-openssl-devel-1.2.20 \
&& apt-get -y install curl gnupg \
&& curl -sL https://deb.nodesource.com/setup_14.x | bash - \
&& apt-get -y install nodejs
WORKDIR /app/
COPY . /app
RUN pip install -r production_requirements.txt \
&& front_end/noa-frontend/npm install
```
|
2020/05/28
|
[
"https://Stackoverflow.com/questions/62062054",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10437046/"
] |
When you give `CMD` (or `RUN` or `ENTRYPOINT`) in the JSON-array form, you're responsible for manually breaking up the command into "words". That is, you're running the equivalent of the quoted shell command
```sh
'tail -f /dev/null'
```
and the whole thing gets interpreted as one "word" -- the spaces and options are taken as part of the command name to look up in `$PATH`.
The most straightforward workaround to this is to remove the quoting and just use a bare string as `CMD`.
Note that the container you're building doesn't actually do anything: it doesn't include any application source code and the command you're providing intentionally does nothing forever. Aside from one running container with an idle process, you get the same effect by just not running the container at all. You typically want to copy your application code in and set `CMD` to actually run it:
```sh
FROM node:12.17.0-alpine
WORKDIR /src/webui
COPY package.json yarn.lock ./
RUN yarn install
COPY . ./
CMD ["yarn", "start"]
# Also works: CMD yarn start
# Won't work: CMD ["yarn start"]
```
|
`CMD` will be appended after `ENTRYPOINT`.
Since node:12.17.0-alpine has a default `ENTRYPOINT node`,
your Dockerfile effectively becomes
```
node tail -f /dev/null
```
### option1
Override ENTRYPOINT at build time
```
ENTRYPOINT tail -f /dev/null
```
### option2
Override ENTRYPOINT at run time
```
docker run --entrypoint sh my-image
```
| 5,235
|
9,629,477
|
For example, given a Python numpy.ndarray `a = array([[1, 2], [3, 4], [5, 6]])`, I want to select the 0th and 2nd row of array `a` into a new array `b`, such that `b` becomes `array([[1,2],[5,6]])`.
I need the solution to work on more general problems, where the original 2d array can have more rows and I should be able to select the rows based on some disjoint ranges. In general, I was looking for something like `a[i:j] + a[k:p]` that works for a 1-d list, but it seems 2d arrays won't add up this way.
**update**
It seems that I can use `vstack((a[i:j], a[k:p]))` to get this working, but is there any more elegant way to do this?
|
2012/03/09
|
[
"https://Stackoverflow.com/questions/9629477",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/228173/"
] |
You can use list indexing:
```
a[ [0,2], ]
```
More generally, to select rows `i:j` and `k:p` (I'm assuming in the python sense, meaning rows i to j but not including j):
```
a[ range(i,j) + range(k,p) , ]
```
Note that the `range(i,j) + range(k,p)` creates a *flat* list of `[ i, i+1, ..., j-1, k, k+1, ..., p-1 ]`, which is then used to index the rows of `a`.
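`numpy` also ships `np.r_`, which builds the same flat row index from slice notation; a small sketch on the array from the question:
```
import numpy as np

a = np.array([[1, 2], [3, 4], [5, 6]])
# np.r_ concatenates the slice ranges into one flat index array: [0, 2]
b = a[np.r_[0:1, 2:3]]
print(b)  # [[1 2]
          #  [5 6]]
```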
|
`numpy` is kind of clever when it comes to indexing. You can give it a list of indexes and it will return the sliced part.
```
In : a = numpy.array([[i]*10 for i in range(10)])
In : a
Out:
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
[3, 3, 3, 3, 3, 3, 3, 3, 3, 3],
[4, 4, 4, 4, 4, 4, 4, 4, 4, 4],
[5, 5, 5, 5, 5, 5, 5, 5, 5, 5],
[6, 6, 6, 6, 6, 6, 6, 6, 6, 6],
[7, 7, 7, 7, 7, 7, 7, 7, 7, 7],
[8, 8, 8, 8, 8, 8, 8, 8, 8, 8],
[9, 9, 9, 9, 9, 9, 9, 9, 9, 9]])
In : a[[0,5,9]]
Out:
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[5, 5, 5, 5, 5, 5, 5, 5, 5, 5],
[9, 9, 9, 9, 9, 9, 9, 9, 9, 9]])
In : a[range(0,2)+range(5,8)]
Out:
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[5, 5, 5, 5, 5, 5, 5, 5, 5, 5],
[6, 6, 6, 6, 6, 6, 6, 6, 6, 6],
[7, 7, 7, 7, 7, 7, 7, 7, 7, 7]])
```
| 5,238
|
54,995,041
|
I'm coding with Python 3.6 and am working on a genetic algorithm. When generating a new population and appending the new values to the array, all the values in the array are changed to the new value. Is there something wrong with my functions?
Code:
```
from fuzzywuzzy import fuzz
import numpy as np
import random
import time
def mutate(parent):
x = random.randint(0,len(parent)-1)
parent[x] = random.randint(0,9)
print(parent)
return parent
def gen(cur_gen, pop_size, fittest):
if cur_gen == 1:
population = []
for _ in range(pop_size):
add_to = []
for _ in range(6):
add_to.append(random.randint(0,9))
population.append(add_to)
return population
else:
population = []
for _ in range(pop_size):
print('\n')
population.append(mutate(fittest))
print(population)
return population
def get_fittest(population):
fitness = []
for x in population:
fitness.append(fuzz.ratio(x, [9,9,9,9,9,9]))
fittest = fitness.index(max(fitness))
fittest_fitness = fitness[fittest]
fittest = population[fittest]
return fittest, fittest_fitness
done = False
generation = 1
population = gen(generation, 10, [0,0,0,0,0,0])
print(population)
while not done:
generation += 1
time.sleep(0.5)
print('Current Generation: ',generation)
print('Fittest: ',get_fittest(population))
if get_fittest(population)[1] == 100:
done = True
population = gen(generation, 10, get_fittest(population)[0])
print('Population: ',population)
```
Output:
```
Fittest: ([7, 4, 2, 7, 8, 9], 72)
[3, 4, 2, 7, 8, 9]
[[3, 4, 2, 7, 8, 9]]
[3, 4, 2, 7, 5, 9]
[[3, 4, 2, 7, 5, 9], [3, 4, 2, 7, 5, 9]]
[3, 4, 2, 7, 4, 9]
[[3, 4, 2, 7, 4, 9], [3, 4, 2, 7, 4, 9], [3, 4, 2, 7, 4, 9]]
[3, 1, 2, 7, 4, 9]
[[3, 1, 2, 7, 4, 9], [3, 1, 2, 7, 4, 9], [3, 1, 2, 7, 4, 9], [3, 1, 2, 7, 4, 9]]
[3, 1, 2, 7, 4, 2]
[[3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2]]
[3, 1, 2, 5, 4, 2]
[[3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2]]
[3, 1, 2, 5, 4, 2]
[[3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2]]
[3, 1, 2, 5, 4, 5]
[[3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5]]
[3, 1, 2, 5, 4, 3]
[[3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3]]
```
|
2019/03/05
|
[
"https://Stackoverflow.com/questions/54995041",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10227474/"
] |
The Appium method hideKeyboard() is **known to be unstable** when used on iPhone devices, as listed in Appium’s currently known open issues. Using this method for an iOS device may cause the Appium script to hang. Appium identifies that the problem is because - "There is no automation hook for hiding the keyboard,...rather than using this method, to think about how a user would hide the keyboard in your app, and tell Appium to do that instead (swipe, tap on a certain coordinate, etc…)"
Workaround: Following the advice of the Appium documentation - use Appium to automate the action that a user would use to hide the keyboard. For example, use the swipe method to hide the keyboard if the application defines this action, or if the application defines a "hide-KB" button, automate clicking on this button.
The other workaround is to use **sendKeys()** without clicking on the text input field.
|
The Appium method hideKeyboard() is known to be unstable when used on iPhone devices, as listed in Appium’s currently known open issues. Using this method for an iOS device may cause the Appium script to hang. Appium identifies that the problem is because - "There is no automation hook for hiding the keyboard,...rather than using this method, to think about how a user would hide the keyboard in your app, and tell Appium to do that instead (swipe, tap on a certain coordinate, etc..
If you want to hide the keyboard, you can write a function like the one below:
```
public void typeAndEnter(MobileElement mobileElement, String keyword) {
LOGGER.info(String.format("Typing %s ...",keyword));
mobileElement.sendKeys(keyword, Keys.ENTER);
}
```
Hope this helps
| 5,239
|
58,350,001
|
I have two python dictionaries.
Sample:
```
{
  'hello': 10,
  'phone': 12,
  'sky': 13
}
{
  'hello': 8,
  'phone': 15,
  'red': 4
}
```
These are the dictionaries of word counts in books 'book1' and 'book2' respectively.
How can I generate a pd dataframe, which looks like this:
```
hello phone sky red
book1 10 12 13 NaN
book2 8 15 NaN 4
```
I have tried :
```
pd.DataFrame([words,counts])
```
It generated:
```
hello phone sky red
0 10 12 13 NaN
1 8 15 NaN 4
```
How can I generate the required output?
|
2019/10/12
|
[
"https://Stackoverflow.com/questions/58350001",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12133862/"
] |
You need this:
```
pd.DataFrame([words, counts], index=['books1', 'books2'])
```
Output:
```
hello phone red sky
books1 10 12 NaN 13.0
books2 8 15 4.0 NaN
```
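For completeness, a runnable sketch of the same idea built from the two dictionaries in the question (the names `book1`/`book2` are illustrative):

```python
import pandas as pd

book1 = {'hello': 10, 'phone': 12, 'sky': 13}
book2 = {'hello': 8, 'phone': 15, 'red': 4}

# a list of dicts becomes one row per dict; missing keys turn into NaN
df = pd.DataFrame([book1, book2], index=['book1', 'book2'])
print(df)
# output (approximately):
#        hello  phone   sky  red
# book1     10     12  13.0  NaN
# book2      8     15   NaN  4.0
```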
|
Use `df.set_index(['book1', 'book2'])`. See the docs here: <https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.set_index.html>
| 5,248
|
10,062,646
|
Superficially, an easy question: how do I get a great-looking PDF from my XML document? Actually, my input is a subset of XHTML with a few custom attributes added (to save some information on citation sources, etc). I've been exploring some routes and would like to get some feedback if anyone has tried some of this before.
Note: I've considered XSL-FO to generate PDFs but heard the typographic quality of open-source tools still lags behind TeX a lot. I guess the most advanced one is [Apache FOP](http://xmlgraphics.apache.org/fop/). But I'm really interested in great-looking PDFs (otherwise I could use the print dialog of my browser). Any thoughts or updates on this?
So I've been thinking of using XSLT to convert my customized XML/XHTML dialect to DocBook and go from there ([DocBook via XSLT](http://sourceforge.net/projects/docbook/) to proper HTML seems to work quite well, so I might use it for that as well). But how do I go from DocBook to TeX? I've come across a number of solutions.
* [dblatex](http://dblatex.sourceforge.net/) A set of XSLT stylesheets that output LaTeX.
* [db2latex](http://db2latex.sourceforge.net/) Started as a clone of dblatex but now provides tighter integration with LaTex packages and provides a single script to output PDF, which is quite nice.
* [passiveTex](http://projects.oucs.ox.ac.uk/passivetex/) Instead of XSLT it uses a XML parser written in TeX.
* [TeXML](http://getfo.org/texml) is essentially an XML serialization of the LaTeX language which can be used as an intermediate format and an accompanying python tool that transforms from that XML format to LaTeX/ConTeXt. They [claimed](http://getfo.org/texml/thesis.html) that this avoids the existing solutions' problems with special symbols, losing some braces or spaces and support for only latin-1 encoding. (Is this still the case?)
As my input XML might contain quite a few special characters represented in Unicode, the last point is especially important to me. I've also been thinking of using XeTeX instead of pdfTeX to get around this problem. (I might lose some typographic quality, but maybe it's still better than current open-source XSL-FO processors?) So db2latex and TeXML seem to be the favorites. Can anybody comment on the robustness of those?
Alternatively, I might have more luck using ConTeXt directly, as there seems to be quite some [interest in the ConTeXt community in XML](http://wiki.contextgarden.net/XML). Especially, I might take a deeper look at ["My Way: Getting Web Content and pdf-Output from One Source"](http://dl.contextgarden.net/myway/tas/xhtml.pdf) and ["Dealing with XML in ConTeXt MkIV"](http://pragma-ade.com/general/manuals/xml-mkiv.pdf). Both documents describe an approach using ConTeXt combined with LuaTeX. ([DocBook In ConTeXt](http://www.leverkruid.eu/context/) seems to do about the same but the latest version is from 2003.) The second document notes:
>
> You may wonder why we do these manipulations in TEX and not use xslt instead. The
advantage of an integrated approach is that it simplifies usage. Think of not only processing a
document, but also using xml for managing resources in the same run. An xslt
approach is just as verbose (after all, you still need to produce TEX code) and probably
less readable. In the case of MkIV the integrated approach is also faster and gives us
> the option to manipulate content at runtime using Lua.
>
>
>
What do you think about this? Please keep in mind that I have some experience with both XSLT and TeX but have never gone terribly deep into either of them. Never tried many different LaTeX packages or alternatives such as ConTeXt (or XeTeX/LuaTeX instead of pdfTeX) but I am willing to learn some new stuff to get my beautiful PDFs in the end ;)
Also, I stumbled over [Pandoc](https://pandoc.org/MANUAL.html#creating-a-pdf) but couldn't find any info on how it compares to the other mentioned approaches. And lastly, a link to some quite extensive documentation on [how to use TeXML with ConTeXt](http://getfo.org/context_xml/contents.html).
|
2012/04/08
|
[
"https://Stackoverflow.com/questions/10062646",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/214446/"
] |
I've done something like this in the past (that is, maintaining master versions of documents in XML, and wanting to produce LaTeX output from them).
I've used PassiveTeX in the past, but I found creating stylesheets to be hard work -- the usual result of writing two languages at once. I got it to work, and the result looked very good, but it was probably more effort than it was worth. That said, if the amount of styling you need to add is *small*, then this might be a good route, because it's a single step.
The most successful route (read, flexible and attractive), was to use XSLT to transform the document into structural LaTeX, which matches the intended structure of the result document, but which doesn't attempt to do more than minimal formatting. Depending on your document, that might be normal-looking LaTeX, or it might have bespoke structures. Then write or adapt a LaTeX stylesheet or class file which formats that output into something attractive. That way, you're using XSLT to its strengths (and not going beyond them, which rapidly becomes very frustrating), using LaTeX to *its* strengths, and not confusing yourself.
That is, this more-or-less matches the approach of your first two alternatives, and whether you go with them, or write/customise a LaTeX stylesheet with bespoke output, is a function of how comfortable you feel with LaTeX stylesheets, and how much complicated or specialised formatting you need to do.
Since you say you need to handle Unicode characters in the input, then yes, XeLaTeX would be a good choice for the LaTeX part of the pipeline.
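For what it's worth, a rough Python sketch of that two-step pipeline (all file names and the stylesheet are placeholders; it assumes lxml and a XeLaTeX installation):

```python
# Hedged sketch of XSLT -> structural LaTeX -> PDF; 'doc.xml',
# 'to-latex.xsl' and 'doc.tex' are placeholder names.
import subprocess
from lxml import etree

transform = etree.XSLT(etree.parse("to-latex.xsl"))
latex_source = str(transform(etree.parse("doc.xml")))

with open("doc.tex", "w") as fh:
    fh.write(latex_source)

# XeLaTeX for the Unicode-heavy input discussed above
subprocess.call(["xelatex", "doc.tex"])
```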
|
You might want to check [questions tagged with XML on TeX.sx](https://tex.stackexchange.com/questions/tagged/xml), especially [this](https://tex.stackexchange.com/questions/11260/is-there-some-typesetting-system-that-uses-xml-notation) one. I suggest you use ConTeXt; the current version has no problems with Unicode and can handle OpenType perfectly - and it's programmable in Lua. The most often used alternative with LaTeX is [XMLTeX](http://www.ctan.org/pkg/xmltex), but that needs a lot of TeX foo.
If your documents can be handled by pandoc, use that: You'll have multiple output options, more than from any TeX-based system.
| 5,252
|
2,558,107
|
In the [App Engine docs](http://code.google.com/appengine/docs/python/xmpp/overview.html#XMPP_Addresses), a JID is defined like this:
>
> An application can send and receive
> messages using several kinds of
> addresses, or "JIDs."
>
>
>
On Wikipedia, however, a JID is defined like this:
>
> Every user on the (XMPP) network has a unique
> Jabber ID (usually abbreviated as
> JID).
>
>
>
So, a JID is both a user identifier and an application address?
|
2010/04/01
|
[
"https://Stackoverflow.com/questions/2558107",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/306533/"
] |
A JID is globally unique in that anyone sending an XMPP message as you@domain.com can be assumed to be you.
However, an App Engine app can send XMPP messages as any number of JIDs.
Your app can send XMPP messages as `your-app-id@appspot.com` or as `foo@your-app-id.appspotchat.com` or as `bar@your-app-id.appspotchat.com` or as `anything@your-app-id.appspotchat.com`.
These IDs are still globally unique and identifying -- anyone sending an XMPP message as `foo@your-app-id.appspotchat.com` can be assumed to be your app.
|
Since I happened to have this up in my browser, the current best canonical definition of JIDs is here: [draft-saintandre-xmpp-address](http://xmpp.org/internet-drafts/draft-saintandre-xmpp-address-00.html), which just got pulled out of [RFC3920bis](http://xmpp.org/internet-drafts/draft-ietf-xmpp-3920bis-06.html).
| 5,257
|
71,668,239
|
I am working on some plotly image processing for my work. I have been using matplotlib but need something more interactive, so I switched to dash and plotly. My goal is for people on my team to be able to draw a shape around certain parts of an image and return pixel values.
I am using this documentation and want a similar result: <https://dash.plotly.com/annotations> ; specifically "Draw a path to show the histogram of the ROI"
**Code:**
```
import numpy as np
import plotly.express as px
from dash import Dash, no_update
from dash.dependencies import Input, Output
import dash_core_components as dcc
import dash_html_components as html
from skimage import data, draw #yes necessary
from scipy import ndimage
import os, glob
from skimage import io
imagePath = os.path.join('.','examples')
filename = glob.glob(os.path.join(imagePath,'IMG_0650_6.tif'))[0]
moon = io.imread(filename)
#imagePath = os.path.join('.','examples')
#imageName = glob.glob(os.path.join(imagePath,'IMG_0650_6.tif'))[0]
def path_to_indices(path):
"""From SVG path to numpy array of coordinates, each row being a (row, col) point
"""
indices_str = [
el.replace("M", "").replace("Z", "").split(",") for el in path.split("L")
]
    return np.rint(np.array(indices_str, dtype=float)).astype(int)  # np.int was removed from recent NumPy
def path_to_mask(path, shape):
"""From SVG path to a boolean array where all pixels enclosed by the path
are True, and the other pixels are False.
"""
cols, rows = path_to_indices(path).T
rr, cc = draw.polygon(rows, cols)
    mask = np.zeros(shape, dtype=bool)  # np.bool was removed from recent NumPy
mask[rr, cc] = True
mask = ndimage.binary_fill_holes(mask)
return mask
img = data.moon()
#print(img)
#print(type(img))
fig = px.imshow(img, binary_string=True)
fig.update_layout(dragmode="drawclosedpath")
fig_hist = px.histogram(img.ravel())
app = Dash(__name__)
app.layout = html.Div(
[
html.H3("Draw a path to show the histogram of the ROI"),
html.Div(
[dcc.Graph(id="graph-camera", figure=fig),],
style={"width": "60%", "display": "inline-block", "padding": "0 0"},
),
html.Div(
[dcc.Graph(id="graph-histogram", figure=fig_hist),],
style={"width": "40%", "display": "inline-block", "padding": "0 0"},
),
]
)
@app.callback(
Output("graph-histogram", "figure"),
Input("graph-camera", "relayoutData"),
prevent_initial_call=True,
)
def on_new_annotation(relayout_data):
if "shapes" in relayout_data:
last_shape = relayout_data["shapes"][-1]
mask = path_to_mask(last_shape["path"], img.shape)
return px.histogram(img[mask])
else:
        return no_update
if __name__ == "__main__":
app.run_server(debug=True)
```
**This returns the following error:**
```
Dash is running on http://127.0.0.1:8050/

 * Serving Flask app "__main__" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: on
```
**Traceback:**
```
Traceback (most recent call last):
File "/Users/anthea/opt/anaconda3/lib/python3.9/site-packages/ipykernel_launcher.py", line 16, in <module>
app.launch_new_instance()
File "/Users/anthea/opt/anaconda3/lib/python3.9/site-packages/traitlets/config/application.py", line 845, in launch_instance
app.initialize(argv)
File "/Users/anthea/opt/anaconda3/lib/python3.9/site-packages/traitlets/config/application.py", line 88, in inner
return method(app, *args, **kwargs)
File "/Users/anthea/opt/anaconda3/lib/python3.9/site-packages/ipykernel/kernelapp.py", line 632, in initialize
self.init_sockets()
File "/Users/anthea/opt/anaconda3/lib/python3.9/site-packages/ipykernel/kernelapp.py", line 282, in init_sockets
self.shell_port = self._bind_socket(self.shell_socket, self.shell_port)
File "/Users/anthea/opt/anaconda3/lib/python3.9/site-packages/ipykernel/kernelapp.py", line 229, in _bind_socket
return self._try_bind_socket(s, port)
File "/Users/anthea/opt/anaconda3/lib/python3.9/site-packages/ipykernel/kernelapp.py", line 205, in _try_bind_socket
s.bind("tcp://%s:%i" % (self.ip, port))
File "/Users/anthea/opt/anaconda3/lib/python3.9/site-packages/zmq/sugar/socket.py", line 208, in bind
super().bind(addr)
File "zmq/backend/cython/socket.pyx", line 540, in zmq.backend.cython.socket.Socket.bind
File "zmq/backend/cython/checkrc.pxd", line 28, in zmq.backend.cython.checkrc._check_rc
zmq.error.ZMQError: Address already in use
```
**Solution Attempts:**
I've consulted similar issues on here and tried reassigning 8050 to 1337, which was ineffective.
|
2022/03/29
|
[
"https://Stackoverflow.com/questions/71668239",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18599127/"
] |
Use following as time generator with 15min interval and then use other date time functions as needed to extract date part or time part in separate columns.
```
with CTE as
(select timestampadd(min,seq4()*15 ,date_trunc(hour, current_timestamp())) as time_count
from table(generator(rowcount=>4*24)))
select time_count from cte;
+-------------------------------+
| TIME_COUNT |
|-------------------------------|
| 2022-03-29 14:00:00.000 -0700 |
| 2022-03-29 14:15:00.000 -0700 |
| 2022-03-29 14:30:00.000 -0700 |
| 2022-03-29 14:45:00.000 -0700 |
| 2022-03-29 15:00:00.000 -0700 |
| 2022-03-29 15:15:00.000 -0700 |
.
.
.
....truncated output
| 2022-03-30 13:15:00.000 -0700 |
| 2022-03-30 13:30:00.000 -0700 |
| 2022-03-30 13:45:00.000 -0700 |
+-------------------------------+
```
|
There are many answers to this question [h](https://stackoverflow.com/questions/71666252/generate-series-equivalent-in-snowflake/71666318#71666318) [e](https://stackoverflow.com/questions/71473750/duplicating-a-row-a-certain-number-of-times-and-then-adding-30-mins-to-a-timesta/71475320#71475320) [r](https://stackoverflow.com/questions/71387244/fill-in-missing-dates-across-multiple-partitions-snowflake/71390294#71390294) [e](https://stackoverflow.com/questions/71354827/creating-an-amortization-schedule-in-snowflake/71366357#71366357) already (those 4 are all this month).
But major point to note is you MUST NOT use `SEQx()` as the number generator (you can use it in the ORDER BY, but that is not needed). As noted in the [doc's](https://docs.snowflake.com/en/sql-reference/functions/seq1.html)
>
> Important
>
>
> This function uses sequences to produce a unique set of increasing integers, but does not necessarily produce a gap-free sequence. When operating on a large quantity of data, gaps can appear in a sequence. If a fully ordered, gap-free sequence is required, consider using the ROW\_NUMBER window function.
>
>
>
```
CREATE TABLE table_of_2_years_date_times AS
SELECT
date_time::date as date,
date_time::time as time
FROM (
SELECT
row_number() over (order by null)-1 as rn
,dateadd('minute', 15 * rn, '2022-03-01'::date) as date_time
from table(generator(rowcount=>4*24*365*2))
)
ORDER BY rn;
```
then selecting the top/bottom:
```
(SELECT * FROM table_of_2_years_date_times ORDER BY date,time LIMIT 5)
UNION ALL
(SELECT * FROM table_of_2_years_date_times ORDER BY date desc,time desc LIMIT 5)
ORDER BY 1,2;
```
| DATE | TIME |
| --- | --- |
| 2022-03-01 | 00:00:00 |
| 2022-03-01 | 00:15:00 |
| 2022-03-01 | 00:30:00 |
| 2022-03-01 | 00:45:00 |
| 2022-03-01 | 01:00:00 |
| 2024-02-28 | 22:45:00 |
| 2024-02-28 | 23:00:00 |
| 2024-02-28 | 23:15:00 |
| 2024-02-28 | 23:30:00 |
| 2024-02-28 | 23:45:00 |
| 5,258
|
16,373,317
|
On my website, each user has their own login ID and password, so a logged-in user can add, edit and update only his own record.
models.py is
```
class Report(models.Model):
user = models.ForeignKey(User, null=False)
name = models.CharField(max_length=20, null=True, blank=True)
```
views.py
```
def profile(request):
if request.method == 'POST':
reportform = ReportForm(request.POST)
if reportform.is_valid():
report = reportform.save(commit=False)
report.user = request.user
report.save()
return redirect('/index/')
else:
report = Report.objects.get()
reportform = ReportForm(instance=report)
return render_to_response('report/index.html',
{ 'form': reportform, },
context_instance=RequestContext(request))
```
A user should have one age and it should be in one row of data in the database.
1. If there is no data in the database, i.e if it's a first time for the user, it is allowed to for the user to insert data into the database.
2. If the user reopens a page, the data that was inserted should be shown in editable mode.
It is just like the scenario *"if a new user creates a gmail account, he can create the account the first time and afterwards he can only edit and update his details"*. The same procedure is what I want to implement on my website.
I tried it with the above code, but I am not able to insert the data into the database. I also tried direct insertion in the MySQL DB and checked: I can see the inserted data in editable mode, but if I change it and save, another row of data is created in the DB.
Then, if I try to insert it for the first time, I get the following traceback.
```
Environment:
Request Method: GET
Request URL: http://192.168.100.10/report/index/
Django Version: 1.3.7
Python Version: 2.7.0
Installed Applications:
['django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.admin',
'django.contrib.admindocs',
'django.contrib.humanize',
'django.contrib.staticfiles',
'south',
'collect',
'incident',
'report_settings']
Installed Middleware:
('django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.transaction.TransactionMiddleware',
'django.middleware.cache.FetchFromCacheMiddleware')
Traceback:
File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
111. response = callback(request, *callback_args, **callback_kwargs)
File "/root/Projects/ir/incident/views.py" in when
559. report = Report.objects.get()
File "/usr/lib/python2.7/site-packages/django/db/models/manager.py" in get
132. return self.get_query_set().get(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/django/db/models/query.py" in get
349. % self.model._meta.object_name)
Exception Type: DoesNotExist at report/index/
Exception Value: Report matching query does not exist.
```
|
2013/05/04
|
[
"https://Stackoverflow.com/questions/16373317",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2215612/"
] |
You may use a [`BackgroundWorker`](http://msdn.microsoft.com/en-us/library/system.componentmodel.backgroundworker.aspx) to do the operation that you need in a different thread, like the following:
```
BackgroundWorker bgw;
public Form1()
{
InitializeComponent();
bgw = new BackgroundWorker();
bgw.DoWork += bgw_DoWork;
}
private void Form1_DragDrop(object sender, DragEventArgs e)
{
if (e.Data.GetDataPresent(DataFormats.FileDrop))
{
string[] s = (string[])e.Data.GetData(DataFormats.FileDrop, false);
bgw.RunWorkerAsync(s);
}
}
```
Also, for your "cross-thread operation" issue, try to use the [`Invoke`](http://msdn.microsoft.com/en-us/library/system.windows.forms.control.invoke%28v=vs.80%29.aspx) method like this:
```
void bgw_DoWork(object sender, DoWorkEventArgs e)
{
Invoke(new Action<object>((args) =>
{
string[] files = (string[])args;
}), e.Argument);
}
```
It's better to check whether the dropped items are files using [`GetDataPresent`](http://msdn.microsoft.com/en-us/library/system.windows.forms.idataobject.getdatapresent.aspx), as above.
|
You can use a background thread for this long-running operation, if it is not UI-intensive.
```
ThreadPool.QueueUserWorkItem((o) => /* long running operation*/)
```
| 5,259
|
44,355,493
|
The following python code gives me the different combinations from the given values.
```
import itertools
iterables = [ [1,2,3,4], [88,99], ['a','b'] ]
for t in itertools.product(*iterables):
print t
```
Output:
```
(1, 88, 'a')
(1, 88, 'b')
(1, 99, 'a')
(1, 99, 'b')
(2, 88, 'a')
```
and so on.
Can someone please tell me how to modify this code so the output looks like this list:
```
188a
188b
199a
199b
288a
```
|
2017/06/04
|
[
"https://Stackoverflow.com/questions/44355493",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8110677/"
] |
**Yes, you can.** Possible ways:
**1)** Use Gradle plugin [gradle-console-reporter](https://github.com/ksoichiro/gradle-console-reporter) to report various kinds of summaries to console. JUnit, JaCoCo and Cobertura reports are supported.
In your case, following output will be printed to console:
```
...
BUILD SUCCESSFUL
Total time: 4.912 secs
Coverage summary:
project1: 72.2%
project2-with-long-name: 44.4%
```
Then you can use [coverage](https://docs.gitlab.com/ee/ci/yaml/#coverage) with a regular expression in GitLab's `.gitlab-ci.yml` to parse the code coverage.
**2)** The second option is a little bit tricky. You can print the full JaCoCo HTML report (e.g. using `cat target/site/jacoco/index.html`) and then use a regular expression (see [this post](https://medium.com/@kaiwinter/javafx-and-code-coverage-on-gitlab-ci-29c690e03fd6)) or `grep` (see [this post](https://blog.exxeta.com/java/2018/08/12/display-the-test-coverage-with-gitlab/)) to parse the coverage.
|
AFAIK Gradle does not support this; each project is treated separately.
To support your use case, an aggregation task can be created to parse each report, update an aggregate value on the root project, and finally print that value to stdout.
**Update with approximate code for solution:**
```
subprojects {
task aggregateCoverage {
// parse report from each module into ext.currentCoverage
rootProject.ext.coverage += currentCoverage
}
}
```
| 5,260
|
58,285,474
|
```
package com.company;
import java.util.Scanner;
public class Main {
public static void main(String[] args) {
Scanner sc = new Scanner(System.in);
while(sc != 0)
{
System.out.println("first number: ");
int firstNum = sc.nextInt();
System.out.println("second number: ");
int secondNum = sc.nextInt();
System.out.println("The sum of your numbers: " + (firstNum + secondNum));
}
}
}
```
So my intended goal is to have a script that will allow me to add two integers (chosen by user input with a Scanner), and once those two are added I can then start a new sum. I'd also like to break from my while loop when the user inputs 0.
I think my error is that I can't use the != operator on the Scanner type. Could someone explain the flaw in my code? (I'm used to Python, which is probably why I'm making this mistake.)
|
2019/10/08
|
[
"https://Stackoverflow.com/questions/58285474",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9783604/"
] |
You need to declare the variables outside the while scope and update them inside the loop until the condition is no longer met.
Try this:
```
package com.company;
import java.util.Scanner;
public class Main {
public static void main(String[] args) {
Scanner sc = new Scanner(System.in);
int firstNum = 1;
int secondNum = 1;
while(firstNum !=0 && secondNum != 0)
{
System.out.println("first number: ");
firstNum = sc.nextInt();
System.out.println("second number: ");
secondNum = sc.nextInt();
System.out.println("The sum of your numbers: " + (firstNum + secondNum));
}
}
}
```
|
Hi, just remove the while block, since it makes no sense to use it here.
Here is the corrected code:
```
package com.company;
import java.util.Scanner;
public class Main {
public static void main(String[] args) {
Scanner sc = new Scanner(System.in);
System.out.println("first number: ");
int firstNum = sc.nextInt();
System.out.println("second number: ");
int secondNum = sc.nextInt();
System.out.println("The sum of your numbers: " + (firstNum + secondNum));
}
}
```
| 5,261
|
3,867,131
|
I've been stuck for a full afternoon now trying to get Python to build in 32-bit mode. I run a 64-bit Linux machine with openSUSE 11.3, and I have the necessary -devel and -32bit packages installed to build applications in 32-bit mode.
The problem with the python build seems to be not in the make run itself, but in the afterwards run of setup.py, invoked by make.
I found the following instructions for Ubuntu Linux: <http://indefinitestudies.org/2010/02/08/how-to-build-32-bit-python-on-ubuntu-9-10-x86_64/>
When I do as described, I get the following output:
<http://pastebin.com/eP8WJ8V4>
But I have the -32bit packages of libreadline, libopenssl, etc. installed; of course, they reside under /lib and /usr/lib and not /lib64 and /usr/lib64.
When I start the python binary that results from this build, i get:
```
./python
Python 2.6.6 (r266:84292, Oct 5 2010, 21:22:06)
[GCC 4.5.0 20100604 [gcc-4_5-branch revision 160292]] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Traceback (most recent call last):
File "/etc/pythonstart", line 7, in <module>
import readline
ImportError: No module named readline
```
So how do I get setup.py to observe the LDFLAGS=-L/lib setting?
Any help is greatly appreciated.
Regards,
Philipp
|
2010/10/05
|
[
"https://Stackoverflow.com/questions/3867131",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/467244/"
] |
You'll need to pass the appropriate flags to gcc and ld to tell the compiler to compile and produce 32-bit binaries.
Use `--build` and `--host`.
```
./configure --help
System types:
--build=BUILD configure for building on BUILD [guessed]
--host=HOST cross-compile to build programs to run on HOST [BUILD]
```
You need to use `./configure --build=x86_64-pc-linux-gnu --host=i686-pc-linux-gnu` to compile for 32-bit Linux in a 64-bit Linux system.
**Note:** You still need to add the other `./configure` options.
|
Regarding why (since Kirk, and probably others, wonder), here is an example: I have a Python app with large dicts of dicts containing light-weight objects. This consumes almost twice as much RAM on 64-bit as on 32-bit, simply due to the pointers. I need to run a few instances of 2GB (32-bit) each, and the extra RAM quickly adds up. For FreeBSD, a detailed recipe for a 32bit-on-64bit jail is here: <http://www.gundersen.net/32bit-jail-on-64bit-freebsd/>
| 5,270
|
44,153,457
|
Alright matplotlib aficionados, we know how to plot a [donut chart](https://stackoverflow.com/questions/36296101/donut-chart-python), but what is better than a donut chart? A double-donut chart. Specifically: We have a set of elements that fall into disjoint categories and sub-categories of the first categorization. The donut chart should have slices for the categories in the outer ring and slices for the sub-categories in the inner ring, obviously aligned with the outer slices.
Is there any library that provides this or do we need to work this out here?
[](https://i.stack.imgur.com/BCM0N.png)
|
2017/05/24
|
[
"https://Stackoverflow.com/questions/44153457",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/626537/"
] |
To obtain a double donut chart, you can plot as many pie charts in the same plot as you want. The outer pie has a `width` set on its wedges, and the inner pie has a radius less than or equal to `1-width`.
```
import matplotlib.pyplot as plt
import numpy as np
fig, ax = plt.subplots()
ax.axis('equal')
width = 0.3
cm = plt.get_cmap("tab20c")
cout = cm(np.arange(3)*4)
pie, _ = ax.pie([120,77,39], radius=1, labels=list("ABC"), colors=cout)
plt.setp( pie, width=width, edgecolor='white')
cin = cm(np.array([1,2,5,6,9,10]))
labels = list(map("".join, zip(list("aabbcc"),map(str, [1,2]*3))))
pie2, _ = ax.pie([60,60,37,40,29,10], radius=1-width, labels=labels,
labeldistance=0.7, colors=cin)
plt.setp( pie2, width=width, edgecolor='white')
plt.show()
```
[](https://i.stack.imgur.com/C128P.png)
*Note: I made this code also available in the matplotlib gallery as [nested pie example](https://matplotlib.org/gallery/pie_and_polar_charts/nested_pie.html).*
|
I adapted the example you provided; you can tackle your problem by plotting two donuts on the same figure, with a smaller outer radius for one of them.
```
import matplotlib.pyplot as plt
import numpy as np
def make_pie(sizes, text,colors,labels, radius=1):
col = [[i/255 for i in c] for c in colors]
plt.axis('equal')
width = 0.35
kwargs = dict(colors=col, startangle=180)
outside, _ = plt.pie(sizes, radius=radius, pctdistance=1-width/2,labels=labels,**kwargs)
plt.setp( outside, width=width, edgecolor='white')
kwargs = dict(size=20, fontweight='bold', va='center')
plt.text(0, 0, text, ha='center', **kwargs)
# Group colors
c1 = (226, 33, 7)
c2 = (60, 121, 189)
# Subgroup colors
d1 = (226, 33, 7)
d2 = (60, 121, 189)
d3 = (25, 25, 25)
make_pie([100, 80, 90], "", [d1, d3, d2], ['M', 'N', 'F'], radius=1.2)
make_pie([180, 90], "", [c1, c2], ['M', 'F'], radius=1)
plt.show()
```
[](https://i.stack.imgur.com/cRc9L.png)
| 5,271
|
47,302,085
|
It is not yet clear to me what `metrics` are (as given in the code below). What exactly do they evaluate? Why do we need to define them in the `model`? Why can we have multiple metrics in one model? And more importantly, what are the mechanics behind all this?
Any scientific reference is also appreciated.
```python
model.compile(loss='mean_squared_error',
optimizer='sgd',
metrics=['mae', 'acc'])
```
|
2017/11/15
|
[
"https://Stackoverflow.com/questions/47302085",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3705055/"
] |
As described on the [keras metrics](https://keras.io/metrics/) page:
>
> A metric is a function that is used to judge the performance of your
> model
>
>
>
Metrics are frequently used with the early-stopping callback to terminate training and avoid overfitting.
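As a hedged sketch of that combination (`model`, `x_train` and `y_train` are placeholders, and the `val_mae` name assumes `'mae'` was listed in `metrics` at compile time):

```python
from tensorflow import keras

# stop training once validation MAE has not improved for 3 epochs
early_stop = keras.callbacks.EarlyStopping(monitor='val_mae', patience=3)
model.fit(x_train, y_train, validation_split=0.2,
          epochs=100, callbacks=[early_stop])
```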
|
Reference: [Keras Metrics Documentation](https://keras.io/metrics/)
As given in the documentation page of `keras metrics`, a `metric` judges the performance of your model. The `metrics` argument in the `compile` method holds the list of metrics that need to be evaluated by the model during its training and testing phases.
Metrics like:
* `binary_accuracy`
* `categorical_accuracy`
* `sparse_categorical_accuracy`
* `top_k_categorical_accuracy` and
* `sparse_top_k_categorical_accuracy`
are the available metric functions that are supplied in the **`metrics`** parameter when the model is compiled.
Metric functions are customizable as well. When multiple metrics need to be evaluated, they are passed in the form of a `dictionary` or a `list`.
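For instance, a minimal sketch of a custom metric, mirroring the pattern from the Keras docs (`model` is a placeholder):

```python
import tensorflow.keras.backend as K

def mean_pred(y_true, y_pred):
    # toy custom metric: the mean predicted value per batch
    return K.mean(y_pred)

model.compile(optimizer='sgd',
              loss='mean_squared_error',
              metrics=['mae', 'acc', mean_pred])
```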
One important resource you should refer to for diving deep into metrics can be found [here](https://machinelearningmastery.com/custom-metrics-deep-learning-keras-python/)
| 5,272
|
38,782,191
|
Dlib has a really handy, fast and efficient object detection routine, and I wanted to make a cool face tracking example similar to the example [here](https://realpython.com/blog/python/face-detection-in-python-using-a-webcam/).
OpenCV, which is widely supported, has a VideoCapture module that is fairly quick (a fifth of a second to snapshot, compared with 1 second or more for calling up some program that wakes up the webcam and fetches a picture). I added this to the face detector Python example in Dlib.
If you directly show and process the OpenCV VideoCapture output it looks odd because apparently OpenCV stores BGR instead of RGB order. After adjusting this, it works, but slowly:
```
from __future__ import division
import sys
import dlib
from skimage import io
detector = dlib.get_frontal_face_detector()
win = dlib.image_window()
if len( sys.argv[1:] ) == 0:
from cv2 import VideoCapture
from time import time
cam = VideoCapture(0) #set the port of the camera as before
while True:
start = time()
        retval, image = cam.read()  # returns a True boolean and the image if all goes right
for row in image:
for px in row:
#rgb expected... but the array is bgr?
r = px[2]
px[2] = px[0]
px[0] = r
#import matplotlib.pyplot as plt
#plt.imshow(image)
#plt.show()
print( "readimage: " + str( time() - start ) )
start = time()
dets = detector(image, 1)
print "your faces: %f" % len(dets)
for i, d in enumerate( dets ):
print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format(
i, d.left(), d.top(), d.right(), d.bottom()))
print("from left: {}".format( ( (d.left() + d.right()) / 2 ) / len(image[0]) ))
print("from top: {}".format( ( (d.top() + d.bottom()) / 2 ) /len(image)) )
print( "process: " + str( time() - start ) )
start = time()
win.clear_overlay()
win.set_image(image)
win.add_overlay(dets)
print( "show: " + str( time() - start ) )
#dlib.hit_enter_to_continue()
for f in sys.argv[1:]:
print("Processing file: {}".format(f))
img = io.imread(f)
# The 1 in the second argument indicates that we should upsample the image
# 1 time. This will make everything bigger and allow us to detect more
# faces.
dets = detector(img, 1)
print("Number of faces detected: {}".format(len(dets)))
for i, d in enumerate(dets):
print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format(
i, d.left(), d.top(), d.right(), d.bottom()))
win.clear_overlay()
win.set_image(img)
win.add_overlay(dets)
dlib.hit_enter_to_continue()
# Finally, if you really want to you can ask the detector to tell you the score
# for each detection. The score is bigger for more confident detections.
# Also, the idx tells you which of the face sub-detectors matched. This can be
# used to broadly identify faces in different orientations.
if (len(sys.argv[1:]) > 0):
img = io.imread(sys.argv[1])
dets, scores, idx = detector.run(img, 1)
for i, d in enumerate(dets):
print("Detection {}, score: {}, face_type:{}".format(
d, scores[i], idx[i]))
```
From the output of the timings in this program, it seems processing and grabbing the picture are each taking a fifth of a second, so you would think it should show one or two updates per second - however, if you raise your hand, it shows in the webcam view after 5 seconds or so!
Is there some sort of internal cache keeping it from grabbing the latest webcam image? Could I adjust or multi-thread the webcam input process to fix the lag? This is on an Intel i5 with 16 GB RAM.
***Update***
According to the page below, `read` grabs the video *frame by frame*. This would explain it grabbing the next frame and the next frame, until it finally caught up to all the frames that had been grabbed while it was processing. I wonder if there is an option to set the framerate, or to drop frames and just snap a picture of the face in the webcam *now* on read?
<http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_video_display/py_video_display.html#capture-video-from-camera>
|
2016/08/05
|
[
"https://Stackoverflow.com/questions/38782191",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/778234/"
] |
I tried multithreading, and it was just as slow. Then I multithreaded with just the `.read()` in the thread - no processing, no thread locking - and it worked quite fast: maybe 1 second or so of delay, not 3 or 5. See <http://www.pyimagesearch.com/2015/12/21/increasing-webcam-fps-with-python-and-opencv/>
```
from __future__ import division
import sys
from time import time, sleep
import threading
import dlib
from skimage import io
detector = dlib.get_frontal_face_detector()
win = dlib.image_window()
class webCamGrabber( threading.Thread ):
def __init__( self ):
threading.Thread.__init__( self )
#Lock for when you can read/write self.image:
#self.imageLock = threading.Lock()
self.image = False
from cv2 import VideoCapture, cv
from time import time
self.cam = VideoCapture(0) #set the port of the camera as before
#self.cam.set(cv.CV_CAP_PROP_FPS, 1)
def run( self ):
while True:
start = time()
#self.imageLock.acquire()
            retval, self.image = self.cam.read()  # returns a True boolean and the image if all goes right
print( type( self.image) )
#import matplotlib.pyplot as plt
#plt.imshow(image)
#plt.show()
#print( "readimage: " + str( time() - start ) )
#sleep(0.1)
if len( sys.argv[1:] ) == 0:
#Start webcam reader thread:
camThread = webCamGrabber()
camThread.start()
#Setup window for results
detector = dlib.get_frontal_face_detector()
win = dlib.image_window()
while True:
#camThread.imageLock.acquire()
if camThread.image is not False:
print( "enter")
start = time()
myimage = camThread.image
for row in myimage:
for px in row:
#rgb expected... but the array is bgr?
r = px[2]
px[2] = px[0]
px[0] = r
dets = detector( myimage, 0)
#camThread.imageLock.release()
print "your faces: %f" % len(dets)
for i, d in enumerate( dets ):
print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format(
i, d.left(), d.top(), d.right(), d.bottom()))
print("from left: {}".format( ( (d.left() + d.right()) / 2 ) / len(camThread.image[0]) ))
print("from top: {}".format( ( (d.top() + d.bottom()) / 2 ) /len(camThread.image)) )
print( "process: " + str( time() - start ) )
start = time()
win.clear_overlay()
win.set_image(myimage)
win.add_overlay(dets)
print( "show: " + str( time() - start ) )
#dlib.hit_enter_to_continue()
for f in sys.argv[1:]:
print("Processing file: {}".format(f))
img = io.imread(f)
# The 1 in the second argument indicates that we should upsample the image
# 1 time. This will make everything bigger and allow us to detect more
# faces.
dets = detector(img, 1)
print("Number of faces detected: {}".format(len(dets)))
for i, d in enumerate(dets):
print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format(
i, d.left(), d.top(), d.right(), d.bottom()))
win.clear_overlay()
win.set_image(img)
win.add_overlay(dets)
dlib.hit_enter_to_continue()
# Finally, if you really want to you can ask the detector to tell you the score
# for each detection. The score is bigger for more confident detections.
# Also, the idx tells you which of the face sub-detectors matched. This can be
# used to broadly identify faces in different orientations.
if (len(sys.argv[1:]) > 0):
img = io.imread(sys.argv[1])
dets, scores, idx = detector.run(img, 1)
for i, d in enumerate(dets):
print("Detection {}, score: {}, face_type:{}".format(
d, scores[i], idx[i]))
```
|
If you want to show a frame read in OpenCV, you can do it with the `cv2.imshow()` function, without any need to change the color order. On the other hand, if you still want to show the picture in matplotlib, then you can't avoid converting it, with something like this:
```
b, g, r = cv2.split(img)
img = cv2.merge((r, g, b))  # reorder the channels from BGR to RGB
# (cv2.cvtColor(img, cv2.COLOR_BGR2RGB) does the same in one call)
```
That's the only thing I can help you with for now=)
| 5,281
|
48,738,061
|
```
M = eval(input("Input the first number "))
N = eval(input("Input the second number(greater than M) "))
sum = 0
while M <= N:
if M % 2 == 1:
sum = sum + M
M = M + 1
print(sum)
```
This is my Python code; every time I run the program, it prints each running total twice (1 1 4 4 9 9 etc.). Just confused on why this is happening - I'm in intro to computer programming, so any help is appreciated (dumbed-down help).
|
2018/02/12
|
[
"https://Stackoverflow.com/questions/48738061",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
You may use
```
import re
text = 'MIKE an entry for mike WILL and here is wills text DAVID and this belongs to david'
subs = ['MIKE','WILL','TOM','DAVID']
res = re.findall(r'({0})\s*(.*?)(?=\s*(?:{0}|$))'.format("|".join(subs)), text)
print(res)
# => [('MIKE', 'an entry for mike'), ('WILL', 'and here is wills text'), ('DAVID', 'and this belongs to david')]
```
See the [Python demo](https://ideone.com/T4JO2i).
The pattern that is built dynamically will look like [`(MIKE|WILL|TOM|DAVID)\s*(.*?)(?=\s*(?:MIKE|WILL|TOM|DAVID|$))`](https://regex101.com/r/0cNztQ/1) in this case.
**Details**
* `(MIKE|WILL|TOM|DAVID)` - Group 1 matching one of the alternatives substrings
* `\s*` - 0+ whitespaces
* `(.*?)` - Group 2 capturing any 0+ chars other than line break chars (use `re.S` flag to match any chars), as few as possible, up to the first...
* `(?=\s*(?:MIKE|WILL|TOM|DAVID|$))` - 0+ whitespaces followed with one of the substrings or end of string (`$`). These texts are not consumed, so, the regex engine still can get consequent matches.
|
You can also use the following regex to achieve your goal:
```
(MIKE.*)(?= WILL)|(WILL.*)(?= DAVID)|(DAVID.*)
```
It uses Positive lookahead to get the intermediate strings. (<http://www.rexegg.com/regex-quickstart.html>)
**TESTED:** <https://regex101.com/r/ZSJJVG/1>
| 5,286
|
24,687,665
|
I would eventually like to pass data from python data structures to Javascript elements that will render it in a Dygraphs graph within an iPython Notebook.
I am new to using notebooks, especially the JavaScript/notebook interaction. I have the latest Dygraphs library saved locally on my machine. At the very least, I would like to be able to render a sample Dygraphs plot in the notebook using that library.
See the notebook below. I am trying to execute the simple Dygraphs example code, using the library provided here: <http://dygraphs.com/1.0.1/dygraph-combined.js>
However, I cannot seem to get anything to render. Is this the proper way to embed/call libraries and then run javascript from within a notebook?

Eventually I would like to generate JSON from Pandas DataFrames and use that data as Dygraphs input.
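As a starting point for that last step, a small hedged sketch of serializing a DataFrame to JSON (column names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({'x': range(5), 'y': [v * v for v in range(5)]})
# one JSON object per row, ready to hand to the JavaScript side
json_payload = df.to_json(orient='records')
print(json_payload)  # [{"x":0,"y":0},{"x":1,"y":1},...]
```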
|
2014/07/10
|
[
"https://Stackoverflow.com/questions/24687665",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1176806/"
] |
The trick is to pass the DataFrame into JavaScript and convert it into a [format](http://dygraphs.com/data.html) that dygraphs can handle. Here's the code I used ([notebook here](https://gist.github.com/danvk/e81557c88d61e34dbd75))
```
html = """
<script src="http://dygraphs.com/dygraph-combined.js"></script>
<div id="dygraph" style="width: 600px; height: 400px;"></div>
<script type="text/javascript">
function convertToDataTable(d) {
var columns = _.keys(d);
columns.splice(columns.indexOf("x"), 1);
var out = [];
for (var k in d['x']) {
var row = [d['x'][k]];
columns.forEach(function(col) {
row.push(d[col][k]);
});
out.push(row);
}
return {data:out, labels:['x'].concat(columns)};
}
function handle_output(out) {
var json = out.content.data['text/plain'];
var data = JSON.parse(eval(json));
var tabular = convertToDataTable(data);
g = new Dygraph(document.getElementById("dygraph"), tabular.data, {
legend: 'always',
labels: tabular.labels
})
}
var kernel = IPython.notebook.kernel;
var callbacks = { 'iopub' : {'output' : handle_output}};
kernel.execute("dfj", callbacks, {silent:false});
</script>
"""
HTML(html)
```
This is what it looks like:

The chart is fully interactive: you can hover, pan, zoom and otherwise interact in the same ways that you would with a typical dygraph.
|
danvk's solution is cleaner and faster than this, but I was also able to get this to work by building a Dygraphs string from a DataFrame. It seems limited to about 15K points, but the benefit is that once created, the page can be saved as a static html page and the Dygraphs plot stays in place. Makes for a nice portable sharing mechanism for those without notebooks set up.
CELL:
```
%%html
<script type="text/javascript" src="http://dygraphs.com/1.0.1/dygraph-combined.js"></script>`
```
CELL:
```
import string
import numpy as np
import pandas as pd
from IPython.display import display,Javascript,HTML,display_html,display_javascript,JSON
df = pd.DataFrame(columns=['x','y','z'])
x = np.arange(1200)
df.y=np.sin(x*2*np.pi/10) + np.random.rand(len(x))*5
df.z=np.cos(x)
df.x=df.y+df.z/3 + (df.y-df.z)**1.25
s=df.to_csv(index_label="Index",float_format="%5f",index=True)
ss=string.join(['"' + x +'\\n"' for x in s.split('\n')],'+\n')
dygraph_str="""
<html>
<head>
<script type="text/javascript" src="http://dygraphs.com/1.0.1/dygraph-combined.js"> </script>
</head>
<body>
<div id="graphdiv5" style="margin: 0 auto; width:auto;"></div>
<script type="text/javascript">
g = new Dygraph(
document.getElementById("graphdiv5"), // containing div
""" + ss[:-6] + """,
{
legend: 'always',
title: 'FOO vs. BAR',
rollPeriod: 1,
showRoller: true,
errorBars: false,
ylabel: 'Temperature (Stuff)'
}
);
</script>
"""
display(HTML(dygraph_str))
```
The notebook looks like this:
| 5,287
|
65,804,384
|
So I'm trying to make a decimal-to-binary converter in Python without using bin().
This is incomplete, but for now I'm trying to get 'a' as a list with all the factors that led to the conversion.
For example, if the decimal input = 75,
then 'a' should be = [64, 8, 2, 1].
Can someone tell me how to correct my code?
I seem to be running into an error, which I've given below the code.
```
q3 = float(input("Enter a number: "))
def raise_to_power(base, power):
result = 1
for index in range(power):
result = result * base
return result
def decimal_to_binary(decimal):
num1 = 2
count = 1
x = 0
a = list([])
while num1 <= decimal:
if num1 < decimal:
num1 *= 2
count += 1
x += 1
decimal = decimal - num1
a[x] = raise_to_power(2, count)
num1 = 2
count = 0
return a
decimal_to_binary(q3)
```
The error:
```
Traceback (most recent call last):
File "/Users/veresh/Library/Mobile Documents/com~apple~CloudDocs/Binary_Num Convertor.gyp", line 34, in <module>
decimal_to_binary(q3)
File "/Users/veresh/Library/Mobile Documents/com~apple~CloudDocs/Binary_Num Convertor.gyp", line 28, in decimal_to_binary
a[x] = raise_to_power(2, count)
IndexError: list assignment index out of range
```
|
2021/01/20
|
[
"https://Stackoverflow.com/questions/65804384",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15031531/"
] |
You can't assign to positions in a list that don't exist.
Instead of
```
a[x] = raise_to_power(2, count)
```
add the result to the list using [list.append](https://docs.python.org/3/library/stdtypes.html#mutable-sequence-types)
```
a.append(raise_to_power(2, count))
```
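Putting it together, a minimal corrected sketch of the original function (assuming an integer input rather than the float in the question):

```python
def raise_to_power(base, power):
    result = 1
    for _ in range(power):
        result = result * base
    return result

def decimal_to_binary(decimal):
    # collect the powers of two that sum to `decimal`, largest first
    a = []
    while decimal >= 1:
        power = 0
        while raise_to_power(2, power + 1) <= decimal:
            power += 1
        value = raise_to_power(2, power)
        a.append(value)
        decimal -= value
    return a

print(decimal_to_binary(75))  # [64, 8, 2, 1]
```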
|
I think you can do this with fewer steps.
```
num = int(input("Enter a number: "))
b = []
while num > 0:
b.append(num%2)
num = num//2
f = [(2*a)**i for i,a in enumerate(b) if a != 0]
f = f[::-1]
print (f)
```
This will give you the following result:
```
Enter a number: 75
[64, 8, 2, 1]
Enter a number: 36
[32, 4]
Enter a number: 29
[16, 8, 4, 1]
```
| 5,288
|
34,321,618
|
First things first: I am not a professional with regular expressions and have been depending on [this cookbook](https://www.safaribooksonline.com/library/view/regular-expressions-cookbook/9781449327453/ch06s11.html), [this tool](http://pythex.org/) and [this other tool](http://pythex.org/).
Now when I try to run it in Python 2.7.7 (64-bit, Win 8), it simply does nothing for this sample text:
>
Two weeks ago I went shopping at Target and spent USD1,010.53 and earned 300 points. When I checked my balance afterwards I only had USD 1912.04.
>
>
>
Note that the USD is joined to the amount (USD1,010.53) and there is a comma for every thousand in the first case, but in the second case it is not joined to the amount and there is no comma for the thousands place (USD 1912.04). In some cases there are values which are integers but not currencies and would still need to be parsed (300 points).
Now I managed to get my hands on this
>
> [0-9]{1,3}(,[0-9]{3})\*(.[0-9]+)?\b|.[0-9]+\b
>
>
>
Now I have two problems:
1. Python doesn't return any value for the above regex and sample string, and yet the tools do.
2. The regex will only match if every thousands place has a comma, i.e. the USD 1912.04 ends up returning 912.04 on the online tools. I'm not too sure how to have it handle both the comma and non-comma cases.
```
regex = re.compile('[0-9]{1,3}(,[0-9]{3})*(\.[0-9]+)?\b|\.[0-9]+\b')
mynumerics = re.findall(regex, 'The final bill is USD1,010.53 and you will earn 300 points. Thank you for shopping at Target')
```
What I would expect is three items:
```
=>['1,010.53', '300', '1912.04']
```
or better yet
```
=>[1010.53, 300, 1912.04]
```
Instead all I get is an empty list. I could probably try downloading a different version of Python, but I know most of the production machines we deploy on use 2.7.x, so I hope it's not a version problem.
|
2015/12/16
|
[
"https://Stackoverflow.com/questions/34321618",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/944900/"
] |
Two main problems:
* `re.findall` will return a list of tuples if your pattern has any capturing groups in it. Since your pattern is using groups in a very odd way, you will end up seeing some weird results from this. Make use of non-capturing groups by using `(?:` instead of just plain `(` parentheses.
* because of the use of `\b`, you should specify your pattern string as a raw string with an `r'string'`. In reality, all of your regexes should use raw strings to ensure that nothing is being parsed weirdly.
With those in mind, this works perfectly fine:
```
>>> regex = re.compile(r'[0-9]{1,3}(?:,[0-9]{3})*(?:\.[0-9]+)?\b|\.[0-9]+\b')
>>> mynumerics = re.findall(regex,'The final bill is USD1,010.53 and you will earn 300 points. What about .25 and 123,456.12?')
>>> mynumerics
['1,010.53', '300', '.25', '123,456.12']
```
Note some of the particular differences between your pattern and mine.
```
r'[0-9]{1,3}(?:,[0-9]{3})*(?:\.[0-9]+)?\b|\.[0-9]+\b'
1 2 2
'[0-9]{1,3}(,[0-9]{3})*(\.[0-9]+)?\b|\.[0-9]+\b'
1 - raw string
2 - non-capturing groups instead of capturing groups
```
I understand that some of this way go way over your head so please comment if you need clarification and I can edit as needed. I would suggest looking into some other regex references and tips, I personally love [this site](http://www.rexegg.com/regex-quickstart.html) and use it almost religiously for any regex needs.
EDIT - matching decimals:
=========================
As Mark Dickinson cleverly pointed out, the `|\.[0-9]+` in the original regex is for matching things like `.24` (simple decimals). I added that part back in and extended the matching string to show the functionality.
Important Comment from ShadowRanger
===================================
**Side-note**: This pattern, as written, will see 4400 and return 400, or a123 and return 123. This is a problem (not @RNar's, the original pattern had the same issue) because if 4400 should be ignored, then you shouldn't get pieces of it (just adding \b to the front causes other issues, so it's harder than that), and because [English digit grouping rules allow the omission of the comma when the value is four digits to the left of the decimal, between 1000 and 9999](https://en.wikipedia.org/wiki/Decimal_mark#Exceptions_to_digit_grouping), so you won't match those as written
|
Can you try this regex?
```
((?:\d+,?)+\.?\d+)
```
<https://regex101.com/r/qN0gV9/1>
| 5,290
|
44,992,717
|
I recently began self-learning Python, and have been using this language for an online course in algorithms. For some reason, much of the code I created for this course is very slow (relative to the C/C++/Matlab code I have written in the past), and I'm starting to worry that I am not using Python properly.
Here is a simple python and matlab code to compare their speed.
MATLAB
```
for i = 1:100000000
a = 1 + 1
end
```
Python
```
for i in list(range(0, 100000000)):
a=1 + 1
```
The Matlab code takes about 0.3 seconds, and the Python code takes about 7 seconds. Is this normal? My Python code for more complex problems is very slow. For example, as a HW assignment, I'm running depth-first search on a graph with about 900000 nodes, and this is taking forever. Thank you.
|
2017/07/09
|
[
"https://Stackoverflow.com/questions/44992717",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8277919/"
] |
Try using `xrange` instead of `range`.
The difference between them is that `xrange` generates **the values as you use them**, while `range` builds the whole static list at runtime.
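A rough timing sketch (Python 2 only; in Python 3, `range` is already lazy like `xrange`):

```python
import timeit

# same work in both loops; only the iterable differs
print(timeit.timeit('for i in xrange(10**6): a = 1 + 1', number=10))
print(timeit.timeit('for i in list(range(10**6)): a = 1 + 1', number=10))
```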
|
Unfortunately, Python's amazing flexibility and ease come at the cost of being slow. For such large iteration counts, I suggest using the itertools module, since its iterators are implemented in C and avoid interpreter overhead.
`xrange` is a good solution; however, if you want to iterate over dictionaries and such, it's better to use itertools, since it can iterate over any type of sequence object.
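For example, a common fast-loop idiom with itertools (a sketch, not a benchmark):

```python
import itertools

# itertools.repeat yields the same object on every pass instead of
# materialising a list of loop counters
for _ in itertools.repeat(None, 10**6):
    a = 1 + 1
```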
| 5,291
|
3,679,974
|
What I'd like to achieve is the launch of the following shell command:
```
mysql -h hostAddress -u userName -p userPassword
databaseName < fileName
```
From within a python 2.4 script with something not unlike:
```
cmd = ["mysql", "-h", ip, "-u", mysqlUser, dbName, "<", file]
subprocess.call(cmd)
```
This pukes due to the use of the redirect symbol (I believe) - mysql doesn't receive the input file.
I've also tried:
```
subprocess.call(cmd, stdin=subprocess.PIPE)
```
No go there either.
Can someone specify the syntax to make a shell call such that I can feed in a file redirection ?
Thanks in advance.
|
2010/09/09
|
[
"https://Stackoverflow.com/questions/3679974",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/443779/"
] |
You have to feed the file into mysql stdin by yourself. This should do it.
```
import subprocess
...
filename = ...
cmd = ["mysql", "-h", ip, "-u", mysqlUser, dbName]
f = open(filename)
subprocess.call(cmd, stdin=f)
```
|
The symbol `<` has this meaning (i.e. reading a file into `stdin`) only in the shell. In Python you should use either of the following:
1) Read file contents in your process and push it to `stdin` of the child process:
```
fd = open(filename, 'rb')
try:
subprocess.call(cmd, stdin=fd)
finally:
fd.close()
```
2) Read file contents via shell (as you mentioned), but redirect `stdin` of your process accordingly:
```
# In file myprocess.py
subprocess.call(cmd, stdin=subprocess.PIPE)
# In shell command line
$ python myprocess.py < filename
```
| 5,294
|
68,563,978
|
This question is basically about how to use regular expressions, but I couldn't find any answer to it in a lot of very closely related questions.
I create coverage reports in a gitlab pipeline using [coverage.py](https://coverage.readthedocs.io/en/coverage-5.5/) and [py.test](https://docs.pytest.org/en/6.2.x/), which look like the following when piped into a file like `coverage37.log`:
```
-------------- generated xml file: /builds/utils/foo/report.xml --------------
---------- coverage: platform linux, python 3.7.11-final-0 -----------
Name Stmts Miss Cover
-------------------------------------------------
foo/tests/bar1.py 52 0 100%
...
foo/tests/bar2.py 0 0 100%
-------------------------------------------------
TOTAL 431 5 99%
======================= 102 passed, 9 warnings in 4.35s ========================
```
Now I want to create a badge for the total coverage, i.e. the 99% value here, and extract only the number (99) in order to assign it to a variable. This variable can then be used to create a flexible coverage badge using the [anybadge](https://github.com/jongracecox/anybadge) package.
My naive approach would be something like:
```
COVERAGE_SCORE=$(sed -n 'what to put here' coverage37.log)
echo "Coverage is $COVERAGE_SCORE"
```
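Once the score is in a variable, the badge step could look like this hedged sketch (the thresholds are illustrative; it follows the example in the anybadge README):

```python
import anybadge

coverage_score = 99  # would come from the shell step above

# colour thresholds are illustrative, not prescriptive
thresholds = {50: 'red', 75: 'orange', 90: 'yellow', 100: 'green'}
badge = anybadge.Badge('coverage', coverage_score, thresholds=thresholds)
badge.write_badge('coverage.svg')
```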
Note that I know that gitlab, github, etc. offer specific functionalities to create badges automatically. But I want to create it manually in order to have more control and create the badge per branch.
Any hints are welcome. Thanks in advance!
|
2021/07/28
|
[
"https://Stackoverflow.com/questions/68563978",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3734059/"
] |
It is easier to use `awk` here:
```sh
cov_score=$(awk '$1 == "TOTAL" {print $NF+0}' coverage37.log)
```
Here `$1 == "TOTAL"` matches a line whose first word is `TOTAL`, and `print $NF+0` prints the numeric part of the last field (the `+0` coerces `99%` to `99`).
|
Rather than getting approximate values from non-machine-readable output, you'd be best off using coverage's programmatic APIs: either `coverage xml` or `coverage json`.
Here's an example using the json output (note I send it to `/dev/stdout`; by default it goes to `coverage.json`):
```
$ coverage json -o /dev/stdout | jq .totals.percent_covered
52.908756889161054
```
there's even more information there if you need it:
```
$ coverage json -o /dev/stdout | jq .totals
{
"covered_lines": 839,
"num_statements": 1401,
"percent_covered": 52.908756889161054,
"missing_lines": 562,
"excluded_lines": 12,
"num_branches": 232,
"num_partial_branches": 7,
"covered_branches": 25,
"missing_branches": 207
}
```
| 5,297
|
67,347,194
|
The thing is that I was given some code structured in such a way that we expect some accesses to not-initialised elements in the list. (I don't want to change the logic behind it, because it deals with some math concepts.) I've been given the code in some other language, and I want to do the same with Python (handle this exception and let the program run).
```
a = []
for i in range(1,10,2): a.append(i)
for j in range(10):
try:
a[i] +=1
finally:
a.append(1)
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-2-edc940b2697a> in <module>
3 for j in range(10):
4 try:
----> 5 a[i] +=1
6 finally:
7 a.append(1)
IndexError: list index out of range
```
I then tried something else (after reading a similar stack overflow post)
```
a = []
for i in range(1,10,2): a.append(i)
for j in range(10):
if(a[j]): a[j] +=1
else:
a.append(1)
IndexError Traceback (most recent call last)
<ipython-input-4-324628aa850d> in <module>
3 for j in range(10):
4
----> 5 if(a[j]): a[j] +=1
6 else:
7 a.append(1)
IndexError: list index out of range
```
As you can see, the error persisted.
|
2021/05/01
|
[
"https://Stackoverflow.com/questions/67347194",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14386149/"
] |
you can handle it with:
```
except IndexError:
pass
```
full code:
```
a = []
for i in range(1, 10, 2): a.append(i)
for j in range(10):
try:
a[i] += 1
except IndexError:
pass
finally:
a.append(1)
```
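If you would rather avoid raising at all, a bounds check does the same job; this is a sketch that indexes with `j`, which is what the second attempt in the question seemed to intend:

```
a = []
for i in range(1, 10, 2):
    a.append(i)
for j in range(10):
    if j < len(a):   # guard instead of letting IndexError fire
        a[j] += 1
    else:
        a.append(1)
```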
|
```
a = []
for i in range(1,10,2):
a.append(i)
try:
for j in range(10):
a[i] +=1
except:
pass
finally:
a.append(1)
print(a)
```
| 5,300
|
59,414,043
|
Below is my project structure:
```
- MyProject
- src
- Master.py
- Myfolder
- file1
- Myfolder2
- file2
```
when i tried to run `python3 Master.py` from `src` folder i get
```
ModuleNotFoundError: No Module name 'src'
```
when i tried to run Master.py from root using this command -> `python3 -m src.Master`
i get `file1 Error: [Errno 2] No such file or directory`
My master.py has this import - `from src.myfolder import file1`
So it looks like my Master.py is running from the root of the project, but then it does not pick up the imports that are in Master.py. I already tried adding an empty `__init__.py` file to the root of the project as well as to `src` and `myfolder`, but it does not work. I would appreciate any help!
|
2019/12/19
|
[
"https://Stackoverflow.com/questions/59414043",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4984616/"
] |
Given the updated directory structure, you should add an `__init__.py` file to `myfolder` with the following line:
```py
from .file1 import * # or
# from .file1 import something # or
# from . import file1
```
and then in `master.py` do
```py
import myfolder # or
# from myfolder import file1 # or
# from myfolder.file1 import something
```
|
While my solution worked with what was suggested by Maxim, I also learned that with the help of a simple `os.chdir(os.path.dirname('/usr/dirname'))` I was able to resolve the issue, and I also did not have to add `__init__.py` anywhere in the package.
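For reference, that trick amounts to something like this at the top of Master.py (a sketch; the path argument quoted above is illustrative):

```
import os

# make relative paths and imports resolve from the script's own directory
os.chdir(os.path.dirname(os.path.abspath(__file__)))
```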
| 5,301
|
32,140,380
|
I'm looking for a pythonic (1-line) way to extract a range of values from an array
Here's some sample code that will extract the array elements that are >2 and <8 from x,y data, and put them into a new array. Is there a way to accomplish this on a single line? The code below works but seems kludgier than it needs to be. (Note I'm actually working with floats in my application)
```
import numpy as np
x0 = np.array([0,3,9,8,3,4,5])
y0 = np.array([2,3,5,7,8,1,0])
x1 = x0[x0>2]
y1 = y0[x0>2]
x2 = x1[x1<8]
y2 = y1[x1<8]
print x2, y2
```
This prints
```
[3 3 4 5] [3 8 1 0]
```
Part (b) of the problem would be to extract values say `1 < x < 3` *and* `7 < x < 9` as well as their corresponding `y` values.
|
2015/08/21
|
[
"https://Stackoverflow.com/questions/32140380",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4618362/"
] |
You can chain together boolean arrays using `&` for element-wise `logical and` and `|` for element-wise `logical or`, so that the condition `2 < x0` and `x0 < 8` becomes
```
mask = (2 < x0) & (x0 < 8)
```
---
For example,
```
import numpy as np
x0 = np.array([0,3,9,8,3,4,5])
y0 = np.array([2,3,5,7,8,1,0])
mask = (2 < x0) & (x0 < 8)
x2 = x0[mask]
y2 = y0[mask]
print(x2, y2)
# (array([3, 3, 4, 5]), array([3, 8, 1, 0]))
mask2 = ((1 < x0) & (x0 < 3)) | ((7 < x0) & (x0 < 9))
x3 = x0[mask2]
y3 = y0[mask2]
print(x3, y3)
# (array([8]), array([7]))
```
|
```
import numpy as np
x0 = np.array([0,3,9,8,3,4,5])
y0 = np.array([2,3,5,7,8,1,0])
list( zip( *[(x,y) for x, y in zip(x0, y0) if 1<=x<=3 or 7<=x<=9] ) )
# [(3, 9, 8, 3), (3, 5, 7, 8)]
```
| 5,302
|
64,995,258
|
I started learning python few days ago and im looking to create a simple program that creates conclusion text that shows what have i bought and how much have i paid depended on my inputs. So far i have created the program and technically it works well. But im having a problem with specifying the parts in text that might depend on my input.
Code:
```
apples = input("How many apples did you buy?: ")
bananas = input("How many bananas did you buy?: ")
dollars = input("How much did you pay: ")
print("")
print("You bought " +apples+ " apples and " +bananas+ " bananas. You paid " +dollars +" dollars.")
print("")
print("")
input("Press ENTER to exit")
input()
```
So the problem begins when an input ends with 1, for example 21, 31 and so on (except 11), as the conclusion text will still say "You bought 41 apples and 21 bananas...". Is it even possible to make "apple/s", "banana/s", "dollar/s" variables that depend on the input variables?
1. Where do i start with creating variable that depends on input variable?
2. How do i define the criteria for "banana" or "bananas" by the ending of number? And also exclude 11 from criteria as that would be also "bananas" but ends with 1.
It seems like this could be an easy task, but I still can't get my head around it as I only recently started learning Python. I tried creating an if statement and a dictionary for the variables but failed.
|
2020/11/24
|
[
"https://Stackoverflow.com/questions/64995258",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14702072/"
] |
* Give each group a consecutive range. For example, for 15%, the range will be between 30 and 45.
* Pick a random number between 0 and 100.
* Find in which range that random number falls:
```sql
create or replace temp table probs
as
select 'a' id, 1 value, 20 prob
union all select 'a', 2, 30
union all select 'a', 3, 40
union all select 'a', 4, 10
union all select 'b', 1, 5
union all select 'b', 2, 7
union all select 'b', 3, 8
union all select 'b', 4, 80;
with calculated_ranges as (
select *, range_prob2-prob range_prob1
from (
select *, sum(prob) over(partition by id order by prob) range_prob2
from probs
)
)
select id, random_draw, value, prob
from (
select id, any_value(uniform(0, 100, random())) random_draw
from probs group by id
) a
join calculated_ranges b
using (id)
where range_prob1<=random_draw and range_prob2>random_draw
;
```
[](https://i.stack.imgur.com/kKcSE.png)
|
Felipe's answer is great, it definitely solved the problem.
While trying out different approaches yesterday, I tested out this approach on Felipe's table and it seems to be working as well.
I'm giving each record a random probability and comparing against the actual probability. The idea is that if the random probability is less than or equal to the actual probability, then it's accepted and the partitioning will do the deduplication based on a descending order with the probabilities.
```
create or replace temp table probs
as
select 'a' id, 1 value, 20 prob
union all select 'a', 2, 30
union all select 'a', 3, 40
union all select 'a', 4, 10
union all select 'b', 1, 5
union all select 'b', 2, 7
union all select 'b', 3, 8
union all select 'b', 4, 80;
create or replace temp table t2 as
select *,
min(compare_prob) over(partition by id) as min_compare_prob,
max(compare_prob) over(partition by id) as max_compare_prob,
       min_compare_prob <> max_compare_prob as not_all_identical --min_compare_prob <> max_compare_prob checks if all records (by group) have different values
from (select id,
value,
prob,
UNIFORM(0.00001::float,1::float,random(2)) as rand_prob, --random probability
case when prob >= rand_prob then 1 else 0 end as compare_prob
from (select id, value, prob/100 as prob from probs)
);
--dedup results
select id, value, prob, rand_prob
from (select *,
row_number() over(partition by id order by prob desc, rand_prob desc) as rn
from t2
where not_all_identical = FALSE
union all
select *,
row_number() over(partition by id order by prob desc, COMPARE_PROB desc) as rn
from t2
where not_all_identical = TRUE)
where rn = 1;
```
| 5,303
|
60,482,258
|
How can I replace a string in a list of lists in Python, but apply the change only to a specific index without affecting the other index? Here is an example:
```
mylist = [["test_one", "test_two"], ["test_one", "test_two"]]
```
i want to change the word "test" to "my" so the result would be only affecting the second index:
```
mylist = [["test_one", "my_two"], ["test_one", "my_two"]]
```
I can figure out how to change both items of each list, but I can't figure out what I'm supposed to do to change only one specific index.
|
2020/03/02
|
[
"https://Stackoverflow.com/questions/60482258",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9675410/"
] |
Use indexing:
```
newlist = []
for l in mylist:
l[1] = l[1].replace("test", "my")
newlist.append(l)
print(newlist)
```
Or oneliner if you always have two elements in the sublist:
```
newlist = [[i, j.replace("test", "my")] for i, j in mylist]
print(newlist)
```
Output:
```
[['test_one', 'my_two'], ['test_one', 'my_two']]
```
|
There is a way to do this on one line but it is not coming to me at the moment. Here is how to do it in two lines.
```
for two_word_list in mylist:
    two_word_list[1] = two_word_list[1].replace("test", "my")
```
| 5,304
|
53,048,002
|
I have a semicolon separated csv file which has the following form:
```
indx1; string1; char1; entry1
indx2; string1; char2; entry2
indx3; string2; char2; entry3
indx4; string1; char1; entry4
indx5; string3; char2; entry5
```
I want to get unique entries of the 1st and 2nd columns of this file in the form of a list (without using pandas or numpy). In particular these are the lists that I desire:
```
[string1, string2, string3]
[char1, char2]
```
The order doesn't matter, and I would like the operation to be fast.
Presently, I am reading the file (say 'data.csv') using the command
```
with open('data.csv') as csv_file:
csv_reader = csv.reader(csv_file, delimiter=';')
```
I am using python 2.7. What is the fastest way to achieve the functionality that I desire? I will appreciate any help.
|
2018/10/29
|
[
"https://Stackoverflow.com/questions/53048002",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10415983/"
] |
You could use [sets](https://docs.python.org/2.7/library/stdtypes.html#set) to keep track of the already seen values in the needed columns. Since you say that the order doesn't matter, you could just convert the sets to lists after processing all rows:
```
import csv
col1, col2 = set(), set()
with open('data.csv') as csv_file:
csv_reader = csv.reader(csv_file, delimiter=';', skipinitialspace=True)
for row in csv_reader:
col1.add(row[1])
col2.add(row[2])
print list(col1), list(col2) # ['string1', 'string3', 'string2'] ['char2', 'char1']
```
|
This should work. You can use it as benchmark.
```
myDict1 = {}
myDict2 = {}
with open('data.csv') as csv_file:
csv_reader = csv.reader(csv_file, delimiter=';')
for row in csv_reader:
myDict1[row[1]] = 0
myDict2[row[2]] = 0
x = myDict1.keys()
y = myDict2.keys()
```
| 5,305
|
10,631,419
|
My django app saves django models to a remote database. Sometimes the saves are bursty. In order to free the main thread (\*thread\_A\*) of the application from the time toll of saving multiple objects to the database, I thought of transferring the model objects to a separate thread (\*thread\_B\*) using [`collections.deque`](http://docs.python.org/library/collections.html#collections.deque) and have \*thread\_B\* save them sequentially.
Yet I'm unsure regarding this scheme. `save()` returns the id of the new database entry, so it "ends" only after the database responds, which is at the end of the transaction.
**Does `django.db.models.Model.save()` really block [GIL](http://en.wikipedia.org/wiki/Global_Interpreter_Lock)-wise and release other python threads *during* the transaction?**
|
2012/05/17
|
[
"https://Stackoverflow.com/questions/10631419",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/348545/"
] |
Django's `save()` does nothing special to the GIL. In fact, there is hardly anything you can do with the GIL in Python code -- when it is executed, the thread must hold the GIL.
There are only two ways the GIL could get released in `save()`:
* Python decides to switch threads (after [`sys.getcheckinterval()`](http://docs.python.org/library/sys.html#sys.getcheckinterval) instructions)
* Django calls a database interface routine that is implemented to release the GIL
The second point could be what you are looking for -- a SQL `COMMIT` is executed and during that execution, the SQL backend releases the GIL. However, this depends on the SQL interface, and I'm not sure if the popular ones actually release the GIL\*.
Moreover, `save()` does a lot more than just running a few `UPDATE/INSERT` statements and a `COMMIT`; it does a lot of bookkeeping in Python, where it has to hold the GIL. In summary, I'm not sure that you will gain anything from moving `save()` to a different thread.
---
**UPDATE**: From looking at the sources, I learned that both the `sqlite` module and `psycopg` do release the GIL when they are calling database routines, and I guess that other interfaces do the same.
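For completeness, a minimal sketch of the scheme described in the question, using a Python 3 `queue.Queue` instead of a bare deque so the worker can block while waiting (names here are illustrative):

```
import threading
import queue

save_queue = queue.Queue()

def saver():
    while True:
        obj = save_queue.get()
        if obj is None:        # sentinel: shut the worker down
            break
        obj.save()             # blocks only this worker thread
        save_queue.task_done()

threading.Thread(target=saver, daemon=True).start()
# main thread: save_queue.put(model_instance)
```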
|
I think Python doesn't lock anything by itself, but the database does.
| 5,306
|
68,006,937
|
I am trying to make a daily line graph for certain stocks, but am running into an issue. Getting the 'Close' price every 2 minutes functions correctly, but when I try to get 'Datetime' I am getting an error. I believe yfinance uses pandas to create a dataframe, but I may be wrong. Regardless, I am having issues pulling a certain column from yfinance.
I am pretty new to python and many of the packages, so this might be a simple error, but my code is shown below.
```
stock = yf.Ticker('MSFT')
print(stock.history(period='1d', interval='2m'))
priceArray = stock.history(period='1d',interval='2m')['Close']
dateArray = stock.history(period='1d',interval='2m')['Datetime']
```
The error I am getting is:
```
File "C:\Users\TrevorSmith\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\pandas\core\frame.py", line 3024, in __getitem__ indexer = self.columns.get_loc(key)
File "C:\Users\TrevorSmith\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\pandas\core\indexes\base.py", line 3082, in get_loc raise KeyError(key) from err
KeyError: 'Date'
```
When I print `stock.history(period='1d', interval='2m')` it shows the following column names:
```
Open High Low Close Volume Dividends Stock Splits Datetime
```
Again, getting ['Close'] from this information works, but ['Date'], ['Datetime'], ['DateTime'], and ['Time'] do not work.
Am I doing something wrong here? Or is there another way to get the info I am looking for?
|
2021/06/16
|
[
"https://Stackoverflow.com/questions/68006937",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16245213/"
] |
Datetime is not a column name; it just looks like one:
[](https://i.stack.imgur.com/gK5iQ.png)
Try
`print(stock.history(period='1d', interval='2m').keys())`
and you will see.
|
The Datetime column is the index of the dataframe. If you reset the index you can do whatever with the ['Datetime'] column.
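A minimal sketch of that (the column name `Datetime` matches the question's own printout):

```
import yfinance as yf

hist = yf.Ticker('MSFT').history(period='1d', interval='2m')
hist = hist.reset_index()       # the Datetime index becomes a regular column
dateArray = hist['Datetime']
priceArray = hist['Close']
```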
| 5,309
|
41,761,293
|
I am trying to access the worklogs in python by using the [jira python library](http://jira.readthedocs.io/en/latest/examples.html). I am doing the following:
```
issues = jira.search_issues("key=MYTICKET-1")
print(issues[0].fields.worklogs)
issue = jira.search_issues("MYTICKET-1")
print(issue.fields.worklogs)
```
as described in the documentation, chapter 2.1.4. However, I get the following error (for both cases):
```
AttributeError: type object 'PropertyHolder' has no attribute 'worklogs'
```
Is there something I am doing wrong? Is the documentation outdated? How to access worklogs (or other fields, like comments etc)? And what is a `PropertyHolder`? How to access it (its not described in the documentation!)?
|
2017/01/20
|
[
"https://Stackoverflow.com/questions/41761293",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1581090/"
] |
This is because **it seems** `jira.JIRA.search_issues` doesn't fetch all "builtin" fields, like `worklog`, by default (although documentation only uses vague term ["fields - [...] Default is to include *all fields*"](https://jira.readthedocs.io/en/master/api.html?highlight=worklogs#jira.JIRA.search_issues)
- "all" out of what?).
You either have to use [`jira.JIRA.issue`](https://jira.readthedocs.io/en/master/api.html?highlight=worklogs#jira.JIRA.issue):
```py
client = jira.JIRA(...)
issue = client.issue("MYTICKET-1")
```
or explicitly list fields which you want to fetch in `jira.JIRA.search_issues`:
```py
client = jira.JIRA(...)
issue = client.search_issues("key=MYTICKET-1", fields=[..., 'worklog'])[0]
```
Also be aware that this way you will get at most 20 worklog items attached to your JIRA issue instance. If you need all of them you should use [`jira.JIRA.worklogs`](https://jira.readthedocs.io/en/master/api.html?highlight=worklogs#jira.JIRA.worklogs):
```py
client = jira.JIRA(...)
issue = client.issue("MYTICKET-1")
worklog = issue.fields.worklog
all_worklogs = client.worklogs(issue) if worklog.total > 20 else worklog.worklogs
```
|
This question [here](https://stackoverflow.com/questions/24375473/access-specific-information-within-worklogs-in-jira-python) is similar to yours and someone has posted a work around.
There is a also a [similar question on Github](https://github.com/pycontribs/jira/issues/224) in relation to attachments (not worklogs). The last answer in the comments has workaround that might assist.
| 5,310
|
48,244,805
|
I need to execute a function named "send()" who contain an ajax request.
This function is in ajax.js (included in the `<head>`).
The Ajax success callback updates the src of my image.
This function works well; I don't think it is the problem.
But when I load the page, the send() function is not executed, and I don't understand why.
After loading, I click on a button, and the function works! (The code of the button is not in the code below.)
You can see the HTML code, my function below, and the Node.js code. Thanks for your help.
Since your answers, the problem is now: POST <http://localhost:3000/index/gen/> net::ERR\_CONNECTION\_REFUSED
The script is executed; the problem was that some data were not initialized (jQuery sliders).
```
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>Génération</title>
<link rel="stylesheet" media="screen" title="super style" href="./styles/feuille.css">
<script src="./script/ajax.js"></script>
</head>
<script src="https://code.jquery.com/jquery-1.10.2.js"></script>
<body>
<img src="images/map.jpg" class="superbg" alt="Carte du monde"/>
<div id="main">
<header>
<div id="titre">
<h1>Generator</h1>
</div>
</header>
<img id="image" src="" alt="Map">
<script>
send(); //in file ajax.js included in head
alert($("#image").attr('src'));
</script>
<footer>
Copyright CLEBS 2017, tous droits réservés.
</footer>
</body>
</html>
```
Here send function
```
function send(){
var data = {"some data useless for my question"};
alert("i'm in send() function");
$.ajax({
type: 'POST',
url: 'index/gen/', //(It's node JS)
data: data,
success : function(j){
var img = document.getElementById('image');
img.src = j;
},
complete : function(j){
},
});
}
```
Node JS code
```
app.post('/index/gen/', urlencodedParser, function (req, res) {
    const { spawn } = require('child_process');
    const ls = spawn('./generateur/minigen.py');
    req.session.donnees = '';
    ls.stdout.on('data', (data) => {
        req.session.donnees += data.toString();
    });
    ls.stderr.on('data', (data) => {
        console.log("Erreur" + `stderr: ${data}`);
    });
    var options = { // wrap the data to send
        mode: 'JSON',
        pythonOptions: ['-u'],
        scriptPath: './generateur', // the args below are the user-entered values
        args: ["Some data useless for my question"]
    };
    pythonShell.run('generation.py', options, function (err, results) { // build the map to be downloaded
        if (err) throw err;
        res.send(req.session.donnees);
    });
});
```
|
2018/01/13
|
[
"https://Stackoverflow.com/questions/48244805",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8846114/"
] |
There is no function named `send` in your code. You have a function named `envoi`. Change `function envoi()` to `function send()` The hazards of multilingual coding!
**Edit: since you updated your answer, try this.**
```
<script>
$(document).ready(function() {
send();
alert($("#image").attr('src'));
});
</script>
```
|
The format of the javascript is incorrect. The Scope of your solution needs to be managed with JQuery. If you simply want to call the function when the page loads, you can call the function inside the JQuery handler. Also, your syntax is not correct.
```
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>Génération</title>
<link rel="stylesheet" media="screen" title="super style" href="./styles/feuille.css">
<script src="https://code.jquery.com/jquery-1.12.4.js"></script>
<script src="./script/ajax.js"></script>
</head>
<body>
<img src="images/map.jpg" class="superbg" alt="Carte du monde"/>
<div id="main">
<header>
<div id="titre">
<h1>Generator</h1>
</div>
</header>
<img id="image" src="" alt="Map">
<footer>
Copyright CLEBS 2017, tous droits réservés.
</footer>
<script>
$(function(){
send(); //in file ajax.js included in head
// alert($("#image").attr('src')); <-- won't update inline due to async ajax
})
</script>
</body>
</html>
```
Also, the image tag alert should be updated in the Async callback.
```
var data = {"some data useless for my question"}
$.ajax({
url: 'index/gen/',
type: "POST",
data: JSON.stringify(data),
contentType: "application/json",
complete: function(j) {
$("#image").attr('src', j);
}
});
```
Also be aware that you want JQuery to load before your custom JavaScript. I'm not sure where you are calling Envoi, but this code should help you. It will trigger the "send()" function when the page loads. Also, script tags should be included in the Head tag.
| 5,311
|
61,549,690
|
I have a bash script that starts my Python script. The point of this is that I hand over a lot of (sometimes changing) arguments to the Python script, so I found it useful to start my Python script with a bash script where I "save" my argument list.
```
#!/bin/bash
cd $(dirname $0)
python3 script.py [arg0] [arg1]
```
In my Python script I have a KeyboardInterrupt exception handler implemented which saves some open files and then exits the Python script.
Now my question: when I run the shell script I have to press Ctrl+C at least 3 times, get some Python errors, and only then does it stop.
Am I guessing right that my Ctrl+C is not recognized by Python, but by the shell script instead?
Is there a way to hand over the keyboard interrupt from the shell script to the Python script running in it?
By the way: the Python script is running an infinite loop (if this is important).
Python script looks like this for the exception. As pointed out, it runs an infinite loop.
```
while True:
try:
#doing stuff here
except KeyboardInterrupt:
for file in files:
file.close()
break
```
|
2020/05/01
|
[
"https://Stackoverflow.com/questions/61549690",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Sounds like a case of operator precedence? Does it work if you wrap the second part in brackets, like
```js
console.log('myFather.__proto__ === Object.prototype:' + (myFather.__proto__ === Object.prototype))
```
Operator precedence, as documented at [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Operator_Precedence) or other JS core documentation, defines the order in which operators are evaluated. This list is not that interesting in most cases, as simple assignments within one statement might not use multiple different operators. But in your case, there's a `+` operator and the `===` operator. The addition operator has a **higher** precedence than the equality operator, which means: it is evaluated first. So, these are the internal steps for your log call, line by line:
```js
console.log('myFather.__proto__ === Object.prototype:' + myFather.__proto__ === Object.prototype)
console.log('myFather.__proto__ === Object.prototype:[object Object]' === Object.prototype)
console.log(false)
```
|
You are actually doing this:
```
console.log( ('myFather.__proto__ === Object.prototype:' + myFather.__proto__) === Object.prototype);
```
So the result of this equality is `false`
| 5,314
|
48,364,407
|
I'm new to python and I have found tons of my questions have already been answered. In 7 years of coding various languages, I've never actually posted a question on here before, so I'm really stumped this time.
I'm using python 3.6
I have a pandas dataframe with a column that is just Boolean values. I have some code that I only want to execute if all of the rows in this column are True.
Elsewhere in my code I have used:
```
if True not in df.column:
```
to identify if not even a single row in df is True. This works fine.
But for some reason the converse does not work:
```
if False not in df.column:
```
to identify if all rows in df are True.
Even this returns False:
```
import pandas as pd
S = pd.Series([True, True, True])
print(False not in S)
```
But I have found that adding .values to the series works both ways:
```
import pandas as pd
S = pd.Series([True, True, True])
print(False not in S.values)
```
The other alternative I can think of is to loop through the column and use the OR operator to compare each row with a variable initialized as True. Then if the variable makes it all the way to the end as True, all must be true.
SO my question is: why does this return False?
```
import pandas as pd
S = pd.Series([True, True, True])
print(False not in S)
```
|
2018/01/21
|
[
"https://Stackoverflow.com/questions/48364407",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9246417/"
] |
`False not in S` is equivalent to `False not in S.index`. Since the first index element is 0 (which, in turn, is numerically equivalent to `False`), `False` is technically `in` `S`.
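A quick illustration of the index-membership behaviour:

```
import pandas as pd

S = pd.Series([True, True, True])
print(False in S)         # True  -- membership checks the index; False == 0
print(False in S.values)  # False -- the underlying data has no False
```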
|
When you call `s.values` you are going to have access to a `numpy.array` version of the pandas Series/Dataframe.
Pandas provides a method called `isin` that is going to behave correctly, when calling `s.isin([False])`
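For example:

```
import pandas as pd

S = pd.Series([True, True, True])
print(S.isin([False]).any())  # False -- no row holds the value False
print(S.all())                # True  -- every row is True
```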
| 5,315
|
36,468,707
|
I'm having trouble loading the R-package `edgeR` in Python using `rpy2`.
When I run:
```
import rpy2.robjects as robjects
robjects.r('''
library(edgeR)
''')
```
I get the following error:
```
/home/user/.local/lib/python2.7/site-packages/rpy2/robjects/functions.py:106: UserWarning: Loading required package: limma
res = super(Function, self).__call__(*new_args, **new_kwargs)
/home/user/.local/lib/python2.7/site-packages/rpy2/robjects/functions.py:106: UserWarning: Error in dyn.load(file, DLLpath = DLLpath, ...) :
unable to load shared object '/data/scratch/user/source/anaconda/lib/R/library/edgeR/libs/edgeR.so':
/data/scratch/user/source/anaconda/lib/R/library/edgeR/libs/edgeR.so: undefined symbol: _ZNSt7__cxx1118basic_stringstreamIcSt11char_traitsIcESaIcEED1Ev
res = super(Function, self).__call__(*new_args, **new_kwargs)
/home/user/.local/lib/python2.7/site-packages/rpy2/robjects/functions.py:106: UserWarning: Error: package or namespace load failed for ‘edgeR’
res = super(Function, self).__call__(*new_args, **new_kwargs)
Traceback (most recent call last):
File "differential_expression.py", line 221, in <module>
diff_expr_object.run_edgeR()
File "differential_expression.py", line 127, in run_edgeR
probs = call_edger(data, groups, sizes, genes)
File "differential_expression.py", line 64, in call_edger
''')
File "/home/user/.local/lib/python2.7/site-packages/rpy2/robjects/__init__.py", line 321, in __call__
res = self.eval(p)
File "/home/user/.local/lib/python2.7/site-packages/rpy2/robjects/functions.py", line 178, in __call__
return super(SignatureTranslatedFunction, self).__call__(*args, **kwargs)
File "/home/user/.local/lib/python2.7/site-packages/rpy2/robjects/functions.py", line 106, in __call__
res = super(Function, self).__call__(*new_args, **new_kwargs)
rpy2.rinterface.RRuntimeError: Error: package or namespace load failed for ‘edgeR’
```
The main problem being:
```
rpy2.rinterface.RRuntimeError: Error: package or namespace load failed for ‘edgeR’
```
However, when I run the following:
```
R
> library(edgeR)
> sessionInfo()
R version 3.2.2 (2015-08-14)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: CentOS release 6.5 (Final)
locale:
[1] LC_CTYPE=en_ZA.UTF-8 LC_NUMERIC=C
[3] LC_TIME=en_ZA.UTF-8 LC_COLLATE=en_ZA.UTF-8
[5] LC_MONETARY=en_ZA.UTF-8 LC_MESSAGES=en_ZA.UTF-8
[7] LC_PAPER=en_ZA.UTF-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_ZA.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] edgeR_3.12.0 limma_3.26.9
>
```
I can see that `edgeR` is successfully installed and running in `R`. Why would it not be working in Python? I tried to load other packages from `rpy2` e.g. `library(tools)` which worked fine.
|
2016/04/07
|
[
"https://Stackoverflow.com/questions/36468707",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5964533/"
] |
If you want 2 different file buttons, you need to give them different names.
```
<form action="" enctype="multipart/form-data" method="post" accept-charset="utf-8">
<input type="file" name="userfile1" size="20">
<input type="file" name="userfile2" size="20">
<input type="submit" name="submit" value="upload">
```
Then you have to modify your function `ddoo_upload()` as below:
```
function ddoo_upload($filename){
$config['upload_path'] = './uploads/';
$config['allowed_types'] = 'gif|jpg|png';
$config['max_size'] = '100';
$config['max_width'] = '1024';
$config['max_height'] = '768';
$this->load->library('upload', $config);
if ( ! $this->upload->do_upload($filename)) {
$error = array('error' => $this->upload->display_errors());
return false;
// $this->load->view('upload_form', $error);
} else {
$data = array('upload_data' => $this->upload->data());
return true;
//$this->load->view('upload_success', $data);
}
}
```
**NOTE:** We are passing `$filename` as a variable and then using it to upload different files.
Now in controller where the form action is redirecting, you need to write below code.
```
if ($this->input->post('submit')){
if (isset($_FILES['userfile1']) && $_FILES['userfile1']['name'] != ''){
$file1 = $this->ddoo_upload('userfile1');
}
if (isset($_FILES['userfile2']) && $_FILES['userfile2']['name'] != ''){
$file2 = $this->ddoo_upload('userfile2');
}
}
```
|
```
<form action="http://localhost/cod_login/club/test2" enctype="multipart/form-data" method="post" accept-charset="utf-8">
<input type="file" name="userfile" size="20" multiple="">
<input type="submit" name="submit" value="upload">
</form>
```
| 5,317
|
43,246,862
|
Is there a way to create a cloudformation template, which invokes REST API calls to an EC2 instance ?
The use case is to modify the configuration of the application without having to use a stack update and user-data, because updating user-data is disruptive.
I did search through all the documentation and found that this could be done by calling an AWS lambda. However, unable to get the right combination of CFM template and invocation properties.
Adding a simple lambda, which works stand-alone :
```
from __future__ import print_function
import requests
def handler(event, context):
r1=requests.get('https://google.com')
message = r1.text
return {
'message' : message
}
```
This is named as ltest.py, and packaged into ltest.zip with requests module, etc. ltest.zip is then called in the CFM template :
```
{
"AWSTemplateFormatVersion" : "2010-09-09",
"Description" : "Test",
"Parameters": {
"ModuleName" : {
"Description" : "The name of the Python file",
"Type" : "String",
"Default" : "ltest"
},
"S3Bucket" : {
"Description" : "The name of the bucket that contains your packaged source",
"Type" : "String",
"Default" : "abhinav-temp"
},
"S3Key" : {
"Description" : "The name of the ZIP package",
"Type" : "String",
"Default" : "ltest.zip"
}
},
"Resources" : {
"AMIInfo": {
"Type": "Custom::AMIInfo",
"Properties": {
"ServiceToken": { "Fn::GetAtt" : ["AMIInfoFunction", "Arn"] }
}
},
"AMIInfoFunction": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Code": {
"S3Bucket": { "Ref": "S3Bucket" },
"S3Key": { "Ref": "S3Key" }
},
"Handler": { "Fn::Join" : [ "", [{ "Ref": "ModuleName" },".handler"] ]},
"Role": { "Fn::GetAtt" : ["LambdaExecutionRole", "Arn"] },
"Runtime": "python2.7",
"Timeout": "30"
}
},
"LambdaExecutionRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {"Service": ["lambda.amazonaws.com"]},
"Action": ["sts:AssumeRole"]
}]
},
"Path": "/",
"Policies": [{
"PolicyName": "root",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": ["logs:CreateLogGroup","logs:CreateLogStream","logs:PutLogEvents"],
"Resource": "arn:aws:logs:*:*:*"
},
{
"Effect": "Allow",
"Action": ["ec2:DescribeImages"],
"Resource": "*"
}]
}
}]
}
}
},
  "Outputs" : {
    "AMIID" : {
      "Description": "Result",
      "Value" : { "Fn::GetAtt": [ "AMIInfo", "message" ] }
    }
  }
}
```
The result of the above (with variations of the Fn::GetAtt call) is that the Lambda gets instantiated, but the AMIInfo call is stuck in "CREATE\_FUNCTION".
The stack also does not get deleted properly.
|
2017/04/06
|
[
"https://Stackoverflow.com/questions/43246862",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2617/"
] |
I would attack this with Lambda, but it seems as though you already thought of that and might be dismissing it.
A little bit of a hack, but could you add Files to the instance via Metadata where the source is the REST url?
e.g.
```
"Type": "AWS::EC2::Instance",
"Metadata": {
"AWS::CloudFormation::Init": {
"configSets": {
"CallREST": [ "CallREST" ]
},
"CallREST": { "files":
{ "c://cfn//junk//rest1output.txt": { "source": "https://myinstance.com/RESTAPI/Rest1/Action1" } } },
}
}
```
To fix your Lambda you need to signal SUCCESS. When CloudFormation creates (and runs) the Lambda, it expects the Lambda to signal success. This is the reason you are getting the stuck "CREATE\_IN\_PROGRESS"
At the bottom of <http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html> is a function named "send" to help you signal success.
and here's my attempt to integrate it into your function AS PSEUDOCODE without testing it, but you should get the idea.
```
from __future__ import print_function
import requests
def handler(event, context):
r1=requests.get('https://google.com')
message = r1.text
# signal complete to CFN
# send(event, context, responseStatus, responseData, physicalResourceId)
send(..., ..., SUCCESS, ...)
return {
'message' : message
}
```
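A fuller sketch of that, assuming the `cfnresponse` helper (or the equivalent `send` function from the linked page) is bundled with the deployment package:

```
import requests
import cfnresponse  # assumed to be packaged alongside ltest.py

def handler(event, context):
    try:
        r1 = requests.get('https://google.com')
        # signalling moves AMIInfo out of the stuck CREATE_IN_PROGRESS state
        cfnresponse.send(event, context, cfnresponse.SUCCESS,
                         {'message': r1.text[:100]})
    except Exception:
        cfnresponse.send(event, context, cfnresponse.FAILED, {})
```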
|
Use a Lambda triggered by the event. Lifecycle hooks can be helpful.
You can hack CloudFormation, but please mind: it is not designed for this.
| 5,320
|
3,893,038
|
I use Hakyll to generate some documentation and I noticed that it has a weird way of closing the HTML tags in the code it generates.
There was a page where they said that you must generate the markup as they do, or the layout of your page will be broken under some conditions, but I can't find it now.
I created a small test page (code below) which has one red layer with the "normal" HTML markup, and a yellow layer with markup similar to what hakyll generates.
I can't see any diference in Firefox between the two divs.
Can anybody explain if what they say is true?
```
<html>
<body>
<!-- NORMAL STYLE -->
<div style="background: red">
<p>Make available the code from the library you added to your application. Again, the way to do this varies between languages (from adding import statements in python to adding a jar to the classpath for java)</p>
<p>Create an instance of the client and, in your code, make calls to it through this instance's methods.</p>
</div>
<!-- HAKYLL STYLE -->
<div style="background: yellow"
><p
>Make available the code from the library you added to your application. Again, the way to do this varies between languages (from adding import statements in python to adding a jar to the classpath for java)</p
><p
>Create an instance of the client and, in your code, make calls to it through this instance's methods.</p
></div
>
</body>
</html>
```
|
2010/10/08
|
[
"https://Stackoverflow.com/questions/3893038",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/212865/"
] |
It's actually [pandoc](http://johnmacfarlane.net/pandoc/) that's generating the HTML code. There's a good explanation in the Pandoc issue tracker:
<http://code.google.com/p/pandoc/issues/detail?id=134>
>
> The reason is
> because any whitespace (including newline and tabs) between HTML tags will cause the
> browser to insert a space character between those elements. It is far easier on the
> machine logic to leave these spaces out, because then you don't need to think about
> the possible ways that the HTML text formatting could be messing with the browser adding
> extra spaces.
>
>
>
|
There are times when stripping the white space between two tags will make a difference, particularly when dealing with inline elements.
| 5,321
|
3,183,185
|
I want to send emails from my app engine application using one of my Google Apps accounts. According to the [GAE python docs](http://code.google.com/appengine/docs/python/mail/overview.html):
*The From: address can be the email address of a registered administrator (developer) of the application, the current user if signed in with Google Accounts, or any valid email receiving address for the app (that is, an address of the form string@appid.appspotmail.com).*
So I created a user account on my Google Apps domain, no-reply@mydomain.com, to use for outbound email notifications. However, when I try to add the user as an administrator of the app, it fails with this error:
**Unauthorized**
**You are not authorized to access this application**
Is it possible to configure app engine to send emails using a Google Accounts email address?
|
2010/07/06
|
[
"https://Stackoverflow.com/questions/3183185",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/358925/"
] |
You're misusing the send()/recv() functions. send() and recv() are not required to send as much data as you request, due to limits that may be present in the kernel. You have to call send() over and over until all data has been pushed through.
e.g.:
```
int sent = 0;
int rc;
while ( sent < should_send )
{
rc = send(sock, buffer + sent, should_send - sent, 0);
if ( rc <= 0 ) // Error or hangup
// do some error handling;
sent += rc;
}
```
|
1. m\_socket can't possibly be null the line after you call `m_socket = new Socket(...)`. It will either throw an exception or assign a `Socket` to m\_socket, never null. So that test is pointless.
2. After you call `readLine()` you must check for a null return value, which means EOS, which means the other end has closed the connection, which means you must exit the reading loop and close the socket.
3. As it says in the Javadoc, `InputStream.available()` shouldn't be used for a test for EOS. Its contract is to return the number of bytes that can be read *without blocking.* That's rarely the same as the length of an incoming file via a socket. You must keep reading the socket until EOS:
```
int count;
byte[] buffer = new byte[8192];
while ((count = in.read(buffer)) > 0)
out.write(buffer, 0, count);
```
If your sending end doesn't close the socket when it finishes sending the file you will have to have it send the file length ahead of the file, and modify the loop above to read exactly that many bytes.
| 5,323
|
29,682,897
|
I am using py2neo and I would like to extract the information from query returns so that I can do stuff with it in python. For example, I have a DB containing three "Person" nodes:
`for num in graph.cypher.execute("MATCH (p:Person) RETURN count(*)"):
print num`
outputs:
`>> count(*)`
`3`
Sorry for the rough formatting; it looks essentially the same as a MySQL output. However, I would like to use the number 3 for computations, but it has type `py2neo.cypher.core.Record`. How can I convert this to a Python int so that I can use it? In a more general sense, how should I go about processing Cypher queries so that the data I get back can be used in Python?
|
2015/04/16
|
[
"https://Stackoverflow.com/questions/29682897",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2505865/"
] |
The clue is in the question: "Assume that I have a fetcher that fetches an image from a given link on a separate thread. The image will then be **cached** in memory."
And the answer is the `cache()` operator:
"remember the sequence of items emitted by the Observable and emit the same sequence to future Subscribers"
from: <https://github.com/ReactiveX/RxJava/wiki/Observable-Utility-Operators>
So, the following `Observable` should only fetch the image once, no matter how `Subscribers` subscribe to it:
```
Observable<Bitmap> cachedBitmap = fetchBitmapFrom(url).cache();
```
**EDIT:**
I think the following example proves that the upstream `Observable` is subscribed only once, even if multiple Subscriptions come in before the `Observable` has emitted anything. This should also be true for network requests.
```
package com.example;
import rx.Observable;
import rx.Subscriber;
import rx.schedulers.Schedulers;
public class SimpleCacheTest {
public static void main(String[] args) {
final Observable<Integer> cachedSomething = getSomething().cache();
System.out.println("before first subscription");
cachedSomething.subscribe(new SimpleLoggingSubscriber<Integer>("1"));
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println("before second subscription");
cachedSomething.subscribe(new SimpleLoggingSubscriber<Integer>("2"));
try {
Thread.sleep(5000);
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println("quit");
}
private static class SimpleLoggingSubscriber<T> extends Subscriber<T> {
private final String tag;
public SimpleLoggingSubscriber(final String tag) {
this.tag = tag;
}
@Override
public void onCompleted() {
System.out.println("onCompleted (" + tag + ")");
}
@Override
public void onError(Throwable e) {
System.out.println("onError (" + tag + ")");
}
@Override
public void onNext(T t) {
System.out.println("onNext (" + tag + "): " + t);
}
}
private static Observable<Integer> getSomething() {
return Observable.create(new Observable.OnSubscribe<Integer>(){
@Override
public void call(Subscriber<? super Integer> subscriber) {
System.out.println("going to sleep now...");
try {
Thread.sleep(2000);
} catch (InterruptedException e) {
e.printStackTrace();
}
subscriber.onNext(1);
subscriber.onCompleted();
}
}).subscribeOn(Schedulers.io());
}
}
```
Output:
```
before first subscription
going to sleep now...
before second subscription
onNext (1): 1
onNext (2): 1
onCompleted (1)
onCompleted (2)
quit
```
|
Have a look at `ConnectableObservable` and the `.replay()` method.
I'm currently using this is my fragments to handle orientation changes:
Fragment's onCreate:
```
ConnectableObservable<MyThing> connectableObservable =
retrofitService.fetchMyThing()
.map(...)
.replay();
connectableObservable.connect(); // this starts the actual network call
```
Fragment's onCreateView:
```
Subscription subscription = connectableObservable
.observeOn(AndroidSchedulers.mainThread())
.subscribe(mything -> dosomething());
```
What happens is that I make 1 network request only, and any subscriber will (eventually/immediately) get that response.
| 5,325
|
53,892,106
|
I wanted to make a Python file that makes a copy of itself, then executes the copy and closes itself; the copy then makes another copy of itself, and so on...
I am not asking for people to write my code (this could be taken as just a fun challenge), but I want to learn more about this stuff, and help is appreciated.
I have already played around with it but can't wrap my mind around it.
I've tried making a py file and pasting a copy of the file itself into it, in the two different ways I could think of, but it would just go on forever.
```
#i use this piece of code to easily execute the py file using os
os.startfile("file.py")
#and to make new py file i just use open()
file = open("file.py","w")
file.write("""hello world
you can use 3 quote marks to write over multiple lines""")
```
I expect that when you run the program it makes a copy of itself, runs the copy, and closes itself, and the newly run program loops again.
What actually happens is that either I'm writing code forever, or,
when I embed the code it pastes into the copy file,
it rightfully says it doesn't know what that code is because it's still being written.
It's all really confusing and difficult to explain; I'm sorry,
it's midnight at the moment and I'm tired.
|
2018/12/22
|
[
"https://Stackoverflow.com/questions/53892106",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10756702/"
] |
I don't have enough rep to reply to @Prune:
`os.startfile(file)` only works on Windows, and is [replaced by](https://docs.python.org/3/library/subprocess.html) `subprocess.call`
`shutil.copy2(src, dst)` works on both Windows and Linux.
Try this solution as well:
```
import shutil
import subprocess
old_file = __file__
new_file = generate_unique_file_name()
shutil.copy2(old_file, new_file) # works for both Windows and Linux
subprocess.call('python {}'.format(new_file), shell=True)
```
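`generate_unique_file_name()` is left undefined above; a minimal timestamp-based sketch could be:

```
import time

def generate_unique_file_name():
    # millisecond timestamp keeps successive copies from colliding
    return 'copy_{}.py'.format(int(time.time() * 1000))
```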
|
You're close; you have things in the wrong order. Create the new file, *then* execute it.
```
import os
old_file = __file__
new_file = generate_unique_file_name()
os.system('cp ' + old_file + ' ' + new_file) #UNIX syntax; for Windows, use "copy"
os.startfile(new_file)
```
You'll have to choose & code your preferred method for creating a unique file name. You might want to use a time-stamp as part of the name.
You might also want to delete this file before you exit; otherwise, you'll eventually fill your disk with these files.
| 5,327
|
47,393,177
|
I'm currently working on a Django project and I'm migrating to a CustomUser model. On the database side everything has gone well, and now I want to force my users to update their information (to conform to the new model).
I would like to do it when they log in (I set all the e-mails to end with @mysite.tmp in order to know whether they have already updated them or not). So if the e-mail ends with this, the user should be automatically redirected to the update-user view (without being logged in), and when they submit the form they get logged in (if the form is valid, of course).
So my question is: how do I make this happen at login? I could override the login function, but that does not seem to be the best option. Is there a view that I can override? What would you recommend?
EDIT: With your responses I finally chose to override the `logged_in` signal and add a check to the `login_required` decorator that checks whether the user is up to date. Now my issue is that I don't understand what a signal is, and so how to override it. Is it a method of the User model I have to override (in that case it would be quite simple), or is it somewhere else?
Could you explain it to me, or link me to easy-to-understand documentation on this subject?
PS: I'm working with Django 1.11 and Python 3.6
|
2017/11/20
|
[
"https://Stackoverflow.com/questions/47393177",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8876232/"
] |
You can write the logic as:
```
select *
from table
where (Vendorname = @Vendor) OR (@Vendor IS NULL)
```
One caution: This may not be as optimized as your version, if you have an index on `Vendorname`.
|
```
select *
from table
where Vendorname = case when @Vendor is not null then @Vendor else Vendorname end;
```
| 5,328
|
62,097,023
|
I have a list with weekly figures and need to obtain the grouped totals by month.
The following code does the job, but there should be a more Pythonic way of doing it using the standard libraries.
The drawback of the code below is that the list needs to be in sorted order.
```
#Test data (not sorted)
sum_weekly=[('2020/01/05', 59), ('2020/01/19', 88), ('2020/01/26', 95), ('2020/02/02', 89),
('2020/02/09', 113), ('2020/02/16', 90), ('2020/02/23', 68), ('2020/03/01', 74), ('2020/03/08', 85),
('2020/04/19', 6), ('2020/04/26', 5), ('2020/05/03', 14),
('2020/05/10', 5), ('2020/05/17', 20), ('2020/05/24', 28),('2020/03/15', 56), ('2020/03/29', 5), ('2020/04/12', 2),]
month = sum_weekly[0][0].split('/')[1]
count=0
out=[]
for item in sum_weekly:
m_sel = item[0].split('/')[1]
if m_sel!=month:
out.append((month, count))
count=item[1]
else:
count+=item[1]
month = m_sel
out.append((month, count))
# monthly sums output as ('01', 242), ('02', 360), ('03', 220), ('04', 13), ('05', 67)
print (out)
```
|
2020/05/30
|
[
"https://Stackoverflow.com/questions/62097023",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1897688/"
] |
You could use `defaultdict` to store the result instead of a list. The keys of the dictionary would be the months and you can simply add the values with the same month (key).
Possible implementation:
```py
# Test Data
from collections import defaultdict
sum_weekly = [('2020/01/05', 59), ('2020/01/19', 88), ('2020/01/26', 95), ('2020/02/02', 89),
('2020/02/09', 113), ('2020/02/16', 90), ('2020/02/23', 68), ('2020/03/01', 74), ('2020/03/08', 85),
('2020/03/15', 56), ('2020/03/29', 5), ('2020/04/12', 2), ('2020/04/19', 6), ('2020/04/26', 5),
('2020/05/03', 14),
('2020/05/10', 5), ('2020/05/17', 20), ('2020/05/24', 28)]
results = defaultdict(int)
for date, count in sum_weekly: # used unpacking to make it clearer
month = date.split('/')[1]
# because we use a defaultdict if the key does not exist it
# the entry for the key will be created and initialize at zero
results[month] += count
print(results)
```
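On the sample data this should print sums matching the question's expected output: `defaultdict(<class 'int'>, {'01': 242, '02': 360, '03': 220, '04': 13, '05': 67})`.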
|
You can use `itertools.groupby` (it is part of standard library) - it does pretty much what you did under the hood (grouping together sequences of elements for which the key function gives same output). It can look like the following:
```
import itertools
def select_month(item):
return item[0].split('/')[1]
def get_value(item):
return item[1]
result = [(month, sum(map(get_value, group)))
for month, group in itertools.groupby(sorted(sum_weekly), select_month)]
print(result)
```
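With the sample data this yields `[('01', 242), ('02', 360), ('03', 220), ('04', 13), ('05', 67)]`. Note that the `sorted()` call is what makes `groupby` safe here: it only groups adjacent items, so the input must be ordered by month first.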
| 5,333
|
43,252,531
|
I am installing python 3.5, django 1.10 and psycopg2 2.7.1 on Amazon EC2 server in order to use a Postgresql database. I am using python 3 inside a virtual environment, and followed the classic installation steps:
```
cd /home/mycode
virtualenv-3.5 p3env
source p3env/bin/activate
pip install django
cd kenbot
django-admin startproject kenbot
pip install psycopg2
```
Then I have edited the settings.py file in my project to define the DATABASE settings:
```
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': 'deleted',
'USER': 'deleted',
'PASSWORD': 'deleted',
'HOST': 'deleted',
'PORT': '5432',
}
```
Finally I type (while still in the virtual environment):
```
python manage.py makemigrations
```
And get the following errors:
```
Traceback (most recent call last):
File "/home/mycode/p3env/local/lib/python3.5/dist-packages/django/db/backends/postgresql/base.py", line 20, in <module>
import psycopg2 as Database
ImportError: No module named 'psycopg2'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "manage.py", line 22, in <module>
execute_from_command_line(sys.argv)
File "/home/mycode/p3env/local/lib/python3.5/dist-packages/django/core/management/__init__.py", line 367, in execute_from_command_line
utility.execute()
File "/home/mycode/p3env/local/lib/python3.5/dist-packages/django/core/management/__init__.py", line 341, in execute
django.setup()
File "/home/mycode/p3env/local/lib/python3.5/dist-packages/django/__init__.py", line 27, in setup
apps.populate(settings.INSTALLED_APPS)
File "/home/mycode/p3env/local/lib/python3.5/dist-packages/django/apps/registry.py", line 108, in populate
app_config.import_models(all_models)
File "/home/mycode/p3env/local/lib/python3.5/dist-packages/django/apps/config.py", line 199, in import_models
self.models_module = import_module(models_module_name)
File "/home/mycode/p3env/lib64/python3.5/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 986, in _gcd_import
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 673, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 662, in exec_module
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "/home/mycode/p3env/local/lib/python3.5/dist-packages/django/contrib/auth/models.py", line 4, in <module>
from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager
File "/home/mycode/p3env/local/lib/python3.5/dist-packages/django/contrib/auth/base_user.py", line 52, in <module>
class AbstractBaseUser(models.Model):
File "/home/mycode/p3env/local/lib/python3.5/dist-packages/django/db/models/base.py", line 119, in __new__
new_class.add_to_class('_meta', Options(meta, app_label))
File "/home/mycode/p3env/local/lib/python3.5/dist-packages/django/db/models/base.py", line 316, in add_to_class
value.contribute_to_class(cls, name)
File "/home/mycode/p3env/local/lib/python3.5/dist-packages/django/db/models/options.py", line 214, in contribute_to_class
self.db_table = truncate_name(self.db_table, connection.ops.max_name_length())
File "/home/mycode/p3env/local/lib/python3.5/dist-packages/django/db/__init__.py", line 33, in __getattr__
return getattr(connections[DEFAULT_DB_ALIAS], item)
File "/home/mycode/p3env/local/lib/python3.5/dist-packages/django/db/utils.py", line 211, in __getitem__
backend = load_backend(db['ENGINE'])
File "/home/mycode/p3env/local/lib/python3.5/dist-packages/django/db/utils.py", line 115, in load_backend
return import_module('%s.base' % backend_name)
File "/home/mycode/p3env/lib64/python3.5/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/home/mycode/p3env/local/lib/python3.5/dist-packages/django/db/backends/postgresql/base.py", line 24, in <module>
raise ImproperlyConfigured("Error loading psycopg2 module: %s" % e)
django.core.exceptions.ImproperlyConfigured: Error loading psycopg2 module: No module named 'psycopg2'
```
I have not found this issue anywhere on Stack Overflow. One point that may be relevant: when I browse the directory of my virtual environment, I notice that django is installed in /home/mycode/p3env/lib/python3.5/dist-packages whereas psycopg2 is in /home/mycode/p3env/lib64/python3.5/dist-packages.
Thanks for your help!
|
2017/04/06
|
[
"https://Stackoverflow.com/questions/43252531",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5591278/"
] |
As suggested by @dentemm: the issue was resolved by copying all the psycopg2 directories from their current location in the lib64 directory to the directory where django was installed, /home/mycode/p3env/lib/python3.5/dist-packages.
|
Uninstalling and then reinstalling the library really helped!
>
> **(venv)$ pip uninstall psycopg2**
>
>
>
---
>
> **(venv)$ pip install psycopg2**
>
>
>
| 5,340
|
52,101,595
|
SQLAlchemy nicely documents [how to use Association Objects with `back_populates`](http://docs.sqlalchemy.org/en/latest/orm/basic_relationships.html#association-object).
However, when copy-and-pasting the example from that documentation, adding children to a parent throws a `KeyError` as following code shows. The model classes are copied 100% from the documentation:
```python
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship
from sqlalchemy.schema import MetaData
Base = declarative_base(metadata=MetaData())
class Association(Base):
__tablename__ = 'association'
left_id = Column(Integer, ForeignKey('left.id'), primary_key=True)
right_id = Column(Integer, ForeignKey('right.id'), primary_key=True)
extra_data = Column(String(50))
child = relationship("Child", back_populates="parents")
parent = relationship("Parent", back_populates="children")
class Parent(Base):
__tablename__ = 'left'
id = Column(Integer, primary_key=True)
children = relationship("Association", back_populates="parent")
class Child(Base):
__tablename__ = 'right'
id = Column(Integer, primary_key=True)
parents = relationship("Association", back_populates="child")
parent = Parent(children=[Child()])
```
Running that code with SQLAlchemy version 1.2.11 throws this exception:
```
lars$ venv/bin/python test.py
Traceback (most recent call last):
File "test.py", line 26, in <module>
parent = Parent(children=[Child()])
File "<string>", line 4, in __init__
File "/Users/lars/coding/sqlalchemy_association_object_test/venv/lib/python3.7/site-packages/sqlalchemy/orm/state.py", line 417, in _initialize_instance
manager.dispatch.init_failure(self, args, kwargs)
File "/Users/lars/coding/sqlalchemy_association_object_test/venv/lib/python3.7/site-packages/sqlalchemy/util/langhelpers.py", line 66, in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
File "/Users/lars/coding/sqlalchemy_association_object_test/venv/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 249, in reraise
raise value
File "/Users/lars/coding/sqlalchemy_association_object_test/venv/lib/python3.7/site-packages/sqlalchemy/orm/state.py", line 414, in _initialize_instance
return manager.original_init(*mixed[1:], **kwargs)
File "/Users/lars/coding/sqlalchemy_association_object_test/venv/lib/python3.7/site-packages/sqlalchemy/ext/declarative/base.py", line 737, in _declarative_constructor
setattr(self, k, kwargs[k])
File "/Users/lars/coding/sqlalchemy_association_object_test/venv/lib/python3.7/site-packages/sqlalchemy/orm/attributes.py", line 229, in __set__
instance_dict(instance), value, None)
File "/Users/lars/coding/sqlalchemy_association_object_test/venv/lib/python3.7/site-packages/sqlalchemy/orm/attributes.py", line 1077, in set
initiator=evt)
File "/Users/lars/coding/sqlalchemy_association_object_test/venv/lib/python3.7/site-packages/sqlalchemy/orm/collections.py", line 762, in bulk_replace
appender(member, _sa_initiator=initiator)
File "/Users/lars/coding/sqlalchemy_association_object_test/venv/lib/python3.7/site-packages/sqlalchemy/orm/collections.py", line 1044, in append
item = __set(self, item, _sa_initiator)
File "/Users/lars/coding/sqlalchemy_association_object_test/venv/lib/python3.7/site-packages/sqlalchemy/orm/collections.py", line 1016, in __set
item = executor.fire_append_event(item, _sa_initiator)
File "/Users/lars/coding/sqlalchemy_association_object_test/venv/lib/python3.7/site-packages/sqlalchemy/orm/collections.py", line 680, in fire_append_event
item, initiator)
File "/Users/lars/coding/sqlalchemy_association_object_test/venv/lib/python3.7/site-packages/sqlalchemy/orm/attributes.py", line 943, in fire_append_event
state, value, initiator or self._append_token)
File "/Users/lars/coding/sqlalchemy_association_object_test/venv/lib/python3.7/site-packages/sqlalchemy/orm/attributes.py", line 1210, in emit_backref_from_collection_append_event
child_impl = child_state.manager[key].impl
KeyError: 'parent'
```
I've filed this as a [bug in SQLAlchemy's issue tracker](https://bitbucket.org/zzzeek/sqlalchemy/issues/4329/keyerror-when-using-association-objects). Maybe somebody can point me to a working solution or workaround in the meanwhile?
|
2018/08/30
|
[
"https://Stackoverflow.com/questions/52101595",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/543875/"
] |
**tldr;** We have to use Association Proxy extensions *and* create a custom constructor for the association object which takes the child object as the first (!) parameter. See solution based on the example from the question below.
SQLAlchemy's documentation actually states in the next paragraph that we aren't done yet if we want to directly add `Child` models to `Parent` models while skipping the intermediary `Association` models:
>
> Working with the association pattern in its direct form requires that
> child objects are associated with an association instance before being
> appended to the parent; similarly, access from parent to child goes
> through the association object.
>
>
>
```python
# create parent, append a child via association
p = Parent()
a = Association(extra_data="some data")
a.child = Child()
p.children.append(a)
```
To write convenient code such as requested in the question, i.e. `p.children = [Child()]`, we have to make use of the [Association Proxy extension](http://docs.sqlalchemy.org/en/latest/orm/extensions/associationproxy.html).
Here is the solution using an Association Proxy extension which allows to add children to a parent "directly" without explicitly creating an association between both of them:
```python
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import backref, relationship
from sqlalchemy.schema import MetaData
Base = declarative_base(metadata=MetaData())
class Association(Base):
__tablename__ = 'association'
left_id = Column(Integer, ForeignKey('left.id'), primary_key=True)
right_id = Column(Integer, ForeignKey('right.id'), primary_key=True)
extra_data = Column(String(50))
child = relationship("Child", back_populates="parents")
parent = relationship("Parent", backref=backref("parent_children"))
def __init__(self, child=None, parent=None):
self.parent = parent
self.child = child
class Parent(Base):
__tablename__ = 'left'
id = Column(Integer, primary_key=True)
children = association_proxy("parent_children", "child")
class Child(Base):
__tablename__ = 'right'
id = Column(Integer, primary_key=True)
parents = relationship("Association", back_populates="child")
p = Parent(children=[Child()])
```
Unfortunately I only figured out how to use `backref` instead of `back_populates`, which isn't the "modern" approach (a possible `back_populates` variant is sketched below).
Pay special attention to the custom `__init__` method, which takes the child as the *first* argument.
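For completeness, here is a hedged sketch of what the same wiring could look like with `back_populates` on both sides; this variant is an assumption based on the relationship docs and was not tested as part of this answer:

```
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()

class Association(Base):
    __tablename__ = 'association'
    left_id = Column(Integer, ForeignKey('left.id'), primary_key=True)
    right_id = Column(Integer, ForeignKey('right.id'), primary_key=True)
    extra_data = Column(String(50))
    child = relationship("Child", back_populates="parents")
    # pairs with Parent.parent_children below
    parent = relationship("Parent", back_populates="parent_children")

    def __init__(self, child=None, parent=None):
        self.parent = parent
        self.child = child

class Parent(Base):
    __tablename__ = 'left'
    id = Column(Integer, primary_key=True)
    parent_children = relationship("Association", back_populates="parent")
    # the proxy still goes through the association objects
    children = association_proxy("parent_children", "child")

class Child(Base):
    __tablename__ = 'right'
    id = Column(Integer, primary_key=True)
    parents = relationship("Association", back_populates="child")

p = Parent(children=[Child()])
```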
|
So, to make a long story short:
you need to append an association object containing your child object onto your parent. Otherwise, you need to follow Lars' suggestion about an association proxy.
I recommend the former, since it's the ORM-based way:
```
p = Parent()
p.children.append(Association(child = Child()))
session.add(p)
session.commit()
```
Note that if you have any non-nullable fields, they're easy to add on object creation for a quick test commit.
| 5,341
|
35,996,175
|
```
'''
Created on 13.3.2016
worm game
@author: Hai
'''
import pygame
import random
from pygame import * ##
class Worm:
def __init__(self, surface):
self.surface = surface
self.x = surface.get_width() / 2
self.y = surface.get_height() / 2
self.length = 1
self.grow_to = 50
self.vx = 0
self.vy = -1
self.body = []
self.crashed = False
self.color = 255, 255, 0
def key_event(self, event):
""" Handle keyboard events that affect the worm. """
if event.key == pygame.K_UP and self.vy != 1:
self.vx = 0
self.vy = -1
elif event.key == pygame.K_RIGHT and self.vx != -1:
self.vx = 1
self.vy = 0
elif event.key == pygame.K_DOWN and self.vy != -1:
self.vx = 0
self.vy = 1
elif event.key == pygame.K_LEFT and self.vx != 1:
self.vx = -1
self.vy = 0
def move(self):
""" Moving worm """
self.x += self.vx # add vx to x
self.y += self.vy
if (self.x, self.y) in self.body:
self.crashed = True
self.body.insert(0, (self.x, self.y))
# Changes worm to right size so it is grow_to
if self.grow_to > self.length:
self.length += 1
# If body is longer than length then pop
if len(self.body) > self.length:
self.body.pop()
"""
def draw(self):
self.surface.set_at((int(self.x), int(self.y)), (255, 255, 255))
self.surface.set_at((int(self.last[0]),int(self.last[1])), (0,0,0))
"""
def position(self):
return self.x, self.y
def eat(self):
self.grow_to += 25
def draw(self):
x, y = self.body[0]
self.surface.set_at((int(x), int(y)), self.color)
x, y = self.body[-1]
self.surface.set_at((int(x), int(y)), (0, 0, 0))
# for x, y in self.body:
# self.surface.set_at((int(x), int(y)), self.color)
# pygame.draw.rect(self.surface, self.color, (self.x, self.y, 6, 6), 0)
# worm's head
class Food:
def __init__(self, surface):
self.surface = surface
self.x = random.randint(0, surface.get_width())
self.y = random.randint(0, surface.get_height())
self.color = 255,255,255
def draw(self):
self.surface.set_at((int(self.x),int(self.y)), self.color)
pygame.draw.rect(self.surface, self.color, (self.x, self.y, 6, 6), 0)
def position(self):
return self.x, self.y
""" Check if worm have eaten this food """
def check(self, x, y):
if x < self.x or x > self.x + 6:
return False
elif y < self.y or y > self.y + 6:
return False
else:
return True
def erase(self):
pygame.draw.rect(self.surface, (0,0,0), (int(self.x), int(self.y), 6, 6), 0)
w = h = 500
screen = pygame.display.set_mode((w, h))
clock = pygame.time.Clock()
pygame.mixer.init()
chomp = pygame.mixer.Sound("bow.wav")
score = 0
worm = Worm(screen) ###
food = Food(screen)
running = True
while running:
# screen.fill((0, 0, 0)) optimized in worm draw()
worm.draw()
food.draw()
worm.move()
if worm.crashed or worm.x <= 0 or worm.x >= w-1 or worm.y <= 0 or worm.y >= h-1:
print("U lose")
running = False
elif food.check(worm.x, worm.y):
worm.eat()
food.erase()
chomp.play()
score += 1
print("Score is: %d" % score)
food = Food(screen)
for event in pygame.event.get():
if event.type == pygame.QUIT: # Pressed X
running = False
elif event.type == pygame.KEYDOWN: # When pressing keyboard
worm.key_event(event)
pygame.display.flip()
clock.tick(100)
```
Hello, I am getting the error
`x, y = self.body[0]`
`IndexError: list index out of range`
and I am following this tutorial:
<https://lorenzod8n.wordpress.com/2008/03/01/pygame-tutorial-9-first-improvements-to-the-game/>
I am very new to Python, please help me.
|
2016/03/14
|
[
"https://Stackoverflow.com/questions/35996175",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4802223/"
] |
I had to remove all the N preview stuff from my sdk to make things normal again. Don't forget the cache too.
```
sdk/extras/android/m2repository/com/android/support/design|support-v-13|ect./24~
```
|
The syntax of the XML line you wrote is wrong.
instead of
`android:setTextColor=""`
use
```
android:textColor=""
```
| 5,342
|
14,738,725
|
I am asked:
>
> Using your raspberry pi, write a python script that determines the randomness
> of /dev/random and /dev/urandom. Read bytes and histogram the results.
> Plot in matplotlib. For your answer include the python script.
>
>
>
I am currently lost on the phrasing "determines the randomness."
I can read from urandom and random with:
```
#rb - reading as binary
devrndm = open("/dev/random", 'rb')
#read to a file instead of mem?
rndmdata = devrndm.read(25) #read 25bytes
```
or
```
with open("/dev/random", 'rb') as f:
print repr(f.read(10))
```
I think the purpose of this exercise was to find out that urandom is faster and has a larger pool than random. However, if I try to read anything over ~15 bytes, the time to read seems to increase exponentially.
So I am lost right now on how to compare 'randomness'. If I read both urandom and random into respective files, how could I compare them?
|
2013/02/06
|
[
"https://Stackoverflow.com/questions/14738725",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/779920/"
] |
Could it be as simple as:
```
In [664]: f = open("/dev/random", "rb")
In [665]: len(set(f.read(256)))
Out[665]: 169
In [666]: ff = open("/dev/urandom", "rb")
In [667]: len(set(ff.read(256)))
Out[667]: 167
In [669]: len(set(f.read(512)))
Out[669]: 218
In [670]: len(set(ff.read(512)))
Out[670]: 224
```
i.e., asking for 256 bytes doesn't give back 256 unique values. So you could plot increasing sample sizes against the unique count until it reaches the 256 saturation point.
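Since the exercise also asks for a histogram, here is a minimal sketch of that part (assuming matplotlib is installed; the sample size is arbitrary):

```
import matplotlib.pyplot as plt

# /dev/urandom never blocks, so it is safe to read a larger sample
with open("/dev/urandom", "rb") as f:
    data = bytearray(f.read(4096))  # bytearray yields ints on Python 2 and 3

# count occurrences of each possible byte value
counts = [0] * 256
for b in data:
    counts[b] += 1

plt.bar(range(256), counts)
plt.xlabel("byte value")
plt.ylabel("frequency")
plt.title("/dev/urandom byte histogram")
plt.show()
```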
|
Your experience may be exactly what they're looking for. From the man page of urandom(4):
>
> When read, the /dev/random device will only return
> random bytes within the estimated number of bits of noise
> in the entropy pool. /dev/random should be suitable for uses that need very high quality randomness such as
> one-time pad or key generation. When the entropy pool is empty, reads from /dev/random will block until addi‐
> tional environmental noise is gathered.
>
>
> A read from the /dev/urandom device will not block waiting for more entropy.
>
>
>
Note the bit about blocking. urandom won't, random will. Particularly in an embedded context, additional entropy may be hard to come by, which would lead to the blocking you see.
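A minimal sketch to observe that difference yourself (the small read size is deliberate, since /dev/random may block for a long time once the entropy pool is drained):

```
import time

for dev in ("/dev/urandom", "/dev/random"):
    with open(dev, "rb") as f:
        start = time.time()
        f.read(16)  # /dev/random may block here until entropy is available
        print("%s: read 16 bytes in %.4f s" % (dev, time.time() - start))
```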
| 5,344
|
66,104,854
|
I want to download the data from the `USDA` site with custom queries. Instead of manually selecting queries on the website, I am thinking about how to do this more handily in Python. To do so, I used `requests` and `http` to access the URL and read the content, but it is not intuitive to me how I should pass the queries, make a selection, and download the data as `csv`. Does anyone know how to do this easily in Python? Is there any workaround to download the data from the URL with specific queries? Any idea?
**this is my current attempt**
here is the [url](https://www.marketnews.usda.gov/mnp/ls-report-retail?&repType=summary&portal=ls&category=Retail&species=BEEF&startIndex=1) from which I am going to select data with custom queries.
```
import io
import requests
import pandas as pd
url="https://www.marketnews.usda.gov/mnp/ls-report-retail?&repType=summary&portal=ls&category=Retail&species=BEEF&startIndex=1"
s=requests.get(url).content
df=pd.read_csv(io.StringIO(s.decode('utf-8')))
```
so before reading the response JSON in `pandas`, I need to pass the following queries for correct data selection:
```
Category = "Retail"
Report Type = "Item"
Species = "Beef"
Region(s) = "National"
Start Dates = "2020-01-01"
End Date = "2021-02-08"
```
it is not intuitive to me how I should pass these queries with the request and then download the filtered data as `csv`. Is there an efficient way of doing this in Python? Any thoughts? Thanks
|
2021/02/08
|
[
"https://Stackoverflow.com/questions/66104854",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14126595/"
] |
A few details
* the simplest format is text rather than HTML; the URL came from the HTML page's text-download link
* `requests.get(params=)` takes a `dict`; build it up and pass it, so there is no need to assemble the complete URL string yourself
* the text is clearly space-delimited, with a minimum of two spaces between columns
```
import io
import requests
import pandas as pd
url="https://www.marketnews.usda.gov/mnp/ls-report-retail"
p = {"repType":"summary","species":"BEEF","portal":"ls","category":"Retail","format":"text"}
r = requests.get(url, params=p)
df = pd.read_csv(io.StringIO(r.text), sep="\s\s+", engine="python")
```
| | Date | Region | Feature Rate | Outlets | Special Rate | Activity Index |
| --- | --- | --- | --- | --- | --- | --- |
| 0 | 02/05/2021 | NATIONAL | 69.40% | 29,200 | 20.10% | 81,650 |
| 1 | 02/05/2021 | NORTHEAST | 75.00% | 5,500 | 3.80% | 17,520 |
| 2 | 02/05/2021 | SOUTHEAST | 70.10% | 7,400 | 28.00% | 23,980 |
| 3 | 02/05/2021 | MIDWEST | 75.10% | 6,100 | 19.90% | 17,430 |
| 4 | 02/05/2021 | SOUTH CENTRAL | 57.90% | 4,900 | 26.40% | 9,720 |
| 5 | 02/05/2021 | NORTHWEST | 77.50% | 1,300 | 2.50% | 3,150 |
| 6 | 02/05/2021 | SOUTHWEST | 63.20% | 3,800 | 27.50% | 9,360 |
| 7 | 02/05/2021 | ALASKA | 87.00% | 200 | .00% | 290 |
| 8 | 02/05/2021 | HAWAII | 46.70% | 100 | .00% | 230 |
|
Just format the query data in the url - it's actually a REST API:
To add more query data, as @mullinscr said, you can change the values on the left and press submit, then see the query name in the URL (for example, start date is called `repDate`).
If you hover on the Download as XML link, you will also discover you can specify the download format using `format=<format_name>`. Parsing the tabular data in XML using pandas might be easier, so I would append `format=xml` at the end as well.
```
category = "Retail"
report_type = "Item"
species = "BEEF"
regions = "NATIONAL"
start_date = "01-01-2019"
end_date = "01-01-2021"
# the website changes "-" to "%2F"
start_date = start_date.replace("-", "%2F")
end_date = end_date.replace("-", "%2F")
url = f"https://www.marketnews.usda.gov/mnp/ls-report-retail?runReport=true&portal=ls&startIndex=1&category={category}&repType={report_type}&species={species}®ion={regions}&repDate={start_date}&endDate={end_date}&compareLy=No&format=xml"
# parse with pandas, etc...
```
| 5,345
|
57,252,637
|
I'm using PySpark (a new thing for me). Now, suppose I Have the following table:
```
+-------+-------+----------+
| Col1  | Col2  | Question |
+-------+-------+----------+
| val11 | val12 | q1       |
| val21 | val22 | q2       |
| val31 | val32 | q3       |
+-------+-------+----------+
```
and I would like to append to it a new column, `random_question`, which is in fact a permutation of the values in the `Question` column, so the result might look like this:
```
+-------+-------+----------+-----------------+
| Col1  | Col2  | Question | random_question |
+-------+-------+----------+-----------------+
| val11 | val12 | q1       | q2              |
| val21 | val22 | q2       | q3              |
| val31 | val32 | q3       | q1              |
+-------+-------+----------+-----------------+
```
I've tried to do that as follows:
```python
df.withColumn(
    'random_question',
    df.orderBy(rand(seed=0))['question']
).createOrReplaceTempView('with_random_questions')
```
The problem is that the above code does append the required column but WITHOUT permuting the values in it.
What am I doing wrong and how can I fix this?
Thank you,
Gilad
|
2019/07/29
|
[
"https://Stackoverflow.com/questions/57252637",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/690045/"
] |
This should do the trick:
```
import pyspark.sql.functions as F
questions = df.select(F.col('Question').alias('random_question'))
random = questions.orderBy(F.rand())
```
Give the dataframes a unique row id:
```
df = df.withColumn('row_id', F.monotonically_increasing_id())
random = random.withColumn('row_id', F.monotonically_increasing_id())
```
Join them by row id:
```
final_df = df.join(random, 'row_id')
```
|
The above answer is wrong. You are not guaranteed to have the same set of ids in the two dataframes and you will lose rows.
```
df = spark.createDataFrame(pd.DataFrame({'a':[1,2,3,4],'b':[10,11,12,13],'c':[100,101,102,103]}))
questions = df.select(F.col('a').alias('random_question'))
random = questions.orderBy(F.rand())
df = df.withColumn('row_id', F.monotonically_increasing_id())
random = random.withColumn('row_id', F.monotonically_increasing_id())
df.show()
random.show()
```
output on my system:
```
+---+---+---+-----------+
| a| b| c| row_id|
+---+---+---+-----------+
| 1| 10|100| 8589934592|
| 2| 11|101|25769803776|
| 3| 12|102|42949672960|
| 4| 13|103|60129542144|
+---+---+---+-----------+
+---------------+-----------+
|random_question| row_id|
+---------------+-----------+
| 4| 0|
| 1| 8589934592|
| 2|17179869184|
| 3|25769803776|
+---------------+-----------+
```
Use the following utility to add permuted columns in place of the original columns or as new columns
```
from pyspark.sql.types import StructType, StructField
from pyspark.sql.functions import rand, col
from pyspark.sql import Row
def permute_col_maintain_corr_join(df, colnames, newnames=[], replace = False):
'''
colname: list of columns to be permuted
newname: list of new names for the permuted columns
    replace: whether to add permuted columns as new columns or replace the original columns
'''
def flattener(rdd_1):
r1 = rdd_1[0].asDict()
idx = rdd_1[1]
combined_dict = {**r1,**{'index':idx}}
out_row = Row(**combined_dict)
return out_row
def compute_schema_wid(df):
dfs = df.schema.fields
ids = StructField('index', IntegerType(), False)
return StructType(dfs+[ids])
if not newnames:
newnames = [f'{i}_sha' for i in colnames]
assert len(colnames) == len(newnames)
if not replace:
assert not len(set(df.columns).intersection(set(newnames))), 'with replace False newnames cannot contain a column name from df'
else:
_rc = set(df.columns) - set(colnames)
assert not len(_rc.intersection(set(newnames))), 'with replace True newnames cannot contain a column name from df other than one from colnames'
df_ts = df.select(*colnames).toDF(*newnames)
if replace:
df = df.drop(*colnames)
df_ts = df_ts.orderBy(rand())
df_ts_s = compute_schema_wid(df_ts)
df_s = compute_schema_wid(df)
df_ts = df_ts.rdd.zipWithUniqueId().map(flattener).toDF(schema=df_ts_s)
df = df.rdd.zipWithUniqueId().map(flattener).toDF(schema=df_s)
df = df.join(df_ts,on='index').drop('index')
return df
```
| 5,346
|
57,373,034
|
I have a list of tuples:
```
d = [("a", "x"), ("b", "y"), ("a", "y")]
```
and the `DataFrame`:
```
y x
b 0.0 0.0
a 0.0 0.0
```
I would like to replace any `0s` with `1s` if the row and column labels correspond to a tuple in `d`, such that the new DataFrame is:
```
y x
b 1.0 0.0
a 1.0 1.0
```
I am currently using:
```
for i, j in d:
df.loc[i, j] = 1.0
```
This seems to me the most "pythonic" approach, but for a `DataFrame` of shape 20000 × 20000 and a list of length 10000, this process literally takes forever. There must be a better way of accomplishing this. Any ideas?
Thanks
|
2019/08/06
|
[
"https://Stackoverflow.com/questions/57373034",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6017833/"
] |
**Approach #1: No bad entries in `d`**
Here's one NumPy based method -
```
def assign_val(df, d, newval=1):
    # Get d-rows, cols as arrays for efficient usage later on
di,dc = np.array([j[0] for j in d]), np.array([j[1] for j in d])
# Get col and index data
i,c = df.index.values.astype(di.dtype),df.columns.values.astype(dc.dtype)
# Locate row indexes from d back to df
sidx_i = i.argsort()
I = sidx_i[np.searchsorted(i,di,sorter=sidx_i)]
# Locate column indexes from d back to df
sidx_c = c.argsort()
C = sidx_c[np.searchsorted(c,dc,sorter=sidx_c)]
# Assign into array data with new values
df.values[I,C] = newval
# Use df.to_numpy(copy=False)[I,C] = newval on newer pandas versions
return df
```
Sample run -
```
In [21]: df = pd.DataFrame(np.zeros((2,2)), columns=['y','x'], index=['b','a'])
In [22]: d = [("a", "x"), ("b", "y"), ('a','y')]
In [23]: assign_val(df, d, newval=1)
Out[23]:
y x
b 1.0 0.0
a 1.0 1.0
```
**Approach #2: Generic one**
If there are any *bad* entries in `d`, we need to filter those out. So, a modified version for that generic case would be -
```
def ssidx(i,di):
sidx_i = i.argsort()
idx_i = np.searchsorted(i,di,sorter=sidx_i)
invalid_mask = idx_i==len(sidx_i)
idx_i[invalid_mask] = 0
I = sidx_i[idx_i]
invalid_mask |= i[I]!=di
return I,invalid_mask
# Get d-rows, cols as arrays for efficient usage later on
di,dc = np.array([j[0] for j in d]), np.array([j[1] for j in d])
# Get col and index data
i,c = df.index.values.astype(di.dtype),df.columns.values.astype(dc.dtype)
# Locate row indexes from d back to df
I,badmask_I = ssidx(i,di)
# Locate column indexes from d back to df
C,badmask_C = ssidx(c,dc)
badmask = badmask_I | badmask_C
goodmask = ~badmask
df.values[I[goodmask],C[goodmask]] = newval
```
|
Use [`get_dummies`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.get_dummies.html) with `DataFrame` constructor:
```
df = pd.get_dummies(pd.DataFrame(d).set_index(0)[1]).rename_axis(None).max(level=0)
```
Or use `zip` with `Series`:
```
lst = list(zip(*d))
df = pd.get_dummies(pd.Series(lst[1], index = lst[0])).max(level=0)
```
---
```
print (df)
x y
a 1 1
b 0 1
```
| 5,347
|
40,664,226
|
I'm trying to get some tweet data from a MySQL database.
I've gotten tons of encoding errors while developing this code. This last `for` loop is the only way I found to run the code and get this output file full of \uxx characters all around, as you can see here:
```
[{..., "lang_tweet": "es", "text_tweet": "Recuerdo un d\u00eda de, *llamada a la 1:45*, \"Micho, me va a dar algo, estoy temblando, me tome un moster y un balium... Que me muero.!!\",...},...]
```
I've been going around and around trying different solutions, but I got really confused by the abstraction of encoding and decoding.
What can I do to fix this?
Or maybe it would be easier to just grab the dirty JSON and 'parse' it, decoding those characters manually.
If you want, take a look at the code I'm using to query the db:
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import pymysql
import collections
import json
conn = pymysql.connect(host='localhost', user='sut', passwd='r', db='tweetsjun2016')
cur = conn.cursor()
cur.execute("""
SELECT * FROM 20160607_tweets
WHERE 20160607_tweets.creation_date >= '2016-06-07 10:51'
AND 20160607_tweets.creation_date <= '2016-06-07 11:51'
AND 20160607_tweets.lang_tweet = "es"
AND 20160607_tweets.has_keyword = 1
AND 20160607_tweets.rt = 0
LIMIT 20
""")
objects_list = []
for row in cur:
d = collections.OrderedDict()
d['download_date'] = row[1]
d['creation_date'] = row[2]
d['id_user'] = row[5]
d['favorited'] = row[7]
d['lang_tweet'] = row[10]
d['text_tweet'] = row[11].decode('latin1')
d['rt'] = row[12]
d['rt_count'] = row[13]
d['has_keyword'] = row[19]
objects_list.append(d)
# print(row[11].decode('latin1')) <- looks perfect, it prints with accents and fine
j = json.dumps(objects_list, default=date_handler, encoding='latin1')
objects_file = "test23" + "_dicts"
f = open(objects_file,'w')
print >> f, j
cur.close()
conn.close()
```
If I delete the `.decode('latin1')` call from all its uses, I get this error:
```
Traceback (most recent call last):
File "test.py", line 51, in <module>
j = json.dumps(objects_list, default=date_handler)
File "C:\Users\Vichoko\Anaconda2\lib\json\__init__.py", line 251, in dumps
sort_keys=sort_keys, **kw).encode(obj)
File "C:\Users\Vichoko\Anaconda2\lib\json\encoder.py", line 207, in encode
chunks = self.iterencode(o, _one_shot=True)
File "C:\Users\Vichoko\Anaconda2\lib\json\encoder.py", line 270, in iterencode
return _iterencode(o, 0)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xed in position 13: invalid continuation byte
```
I really can't figure out how the string is coming from the db into my script.
Thanks for reading, any idea would be welcome.
Edit 1:
Here you can see how the JSON files are being exported with the encoding error in the `text_tweet` key-value:
<https://github.com/Vichoko/real-time-twit/blob/master/auto_labeling/json/tweets_sismos/tweetsago20160.json>
|
2016/11/17
|
[
"https://Stackoverflow.com/questions/40664226",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5298808/"
] |
This is not a string concatenation, but adding `.x` and `.y` to pointer to `","`:
```
cursorLocation.x + "," + cursorLocation.y
```
Instead, try e.g.:
```
char s[256];
sprintf_s(s, "%d,%d", cursorLocation.x, cursorLocation.y);
OutputDebugStringA(s); // added 'A' after @IInspectable's comment, but
// using UNICODE and wchar_t might be better indeed
```
|
String concatenation doesn't work with integers.
Try using `std::ostringstream`:
```
std::ostringstream out_stream;
out_stream << cursorLocation.x << ", " << cursorLocation.y;
OuputDebugString(out_stream.str().c_str());
```
| 5,348
|
45,375,944
|
While parsing attributes using [`__dict__`](https://docs.python.org/3/library/stdtypes.html#object.__dict__), my [`@staticmethod`](https://docs.python.org/3/library/functions.html?highlight=staticmethod#staticmethod) is not [`callable`](https://docs.python.org/3/library/functions.html?highlight=staticmethod#callable).
```py
Python 2.7.5 (default, Aug 29 2016, 10:12:21)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from __future__ import (absolute_import, division, print_function)
>>> class C(object):
... @staticmethod
... def foo():
... for name, val in C.__dict__.items():
... if name[:2] != '__':
... print(name, callable(val), type(val))
...
>>> C.foo()
foo False <type 'staticmethod'>
```
* How is this possible?
* How to check if a static method is callable?
---
I provide below a more detailed example:
Script `test.py`
----------------
```py
from __future__ import (absolute_import, division, print_function)
class C(object):
@staticmethod
def foo():
return 42
def bar(self):
print('Is bar() callable?', callable(C.bar))
print('Is foo() callable?', callable(C.foo))
for attribute, value in C.__dict__.items():
if attribute[:2] != '__':
print(attribute, '\t', callable(value), '\t', type(value))
c = C()
c.bar()
```
Result for python2
------------------
```py
> python2.7 test.py
Is bar() callable? True
Is foo() callable? True
bar True <type 'function'>
foo False <type 'staticmethod'>
```
Same result for python3
-----------------------
```py
> python3.4 test.py
Is bar() callable? True
Is foo() callable? True
bar True <class 'function'>
foo False <class 'staticmethod'>
```
|
2017/07/28
|
[
"https://Stackoverflow.com/questions/45375944",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/938111/"
] |
The reason for this behavior is the descriptor protocol. The `C.foo` won't return a `staticmethod` but a normal function while the `'foo'` in `__dict__` is a [`staticmethod`](https://docs.python.org/library/functions.html#staticmethod) (and `staticmethod` is a descriptor).
In short `C.foo` isn't the same as `C.__dict__['foo']` in this case - but rather `C.__dict__['foo'].__get__(C)` (see also the section in the documentation of the [Data model on descriptors](https://docs.python.org/reference/datamodel.html#implementing-descriptors)):
```
>>> callable(C.__dict__['foo'].__get__(C))
True
>>> type(C.__dict__['foo'].__get__(C))
function
>>> callable(C.foo)
True
>>> type(C.foo)
function
>>> C.foo is C.__dict__['foo'].__get__(C)
True
```
---
In your case I would check for callables using [`getattr`](https://docs.python.org/library/functions.html#getattr) (which knows about descriptors and how to access them) instead of what is stored as value in the class `__dict__`:
```
def bar(self):
print('Is bar() callable?', callable(C.bar))
print('Is foo() callable?', callable(C.foo))
for attribute in C.__dict__.keys():
if attribute[:2] != '__':
value = getattr(C, attribute)
print(attribute, '\t', callable(value), '\t', type(value))
```
Which prints (on python-3.x):
```
Is bar() callable? True
Is foo() callable? True
bar True <class 'function'>
foo True <class 'function'>
```
The types are different on python-2.x but the result of `callable` is the same:
```
Is bar() callable? True
Is foo() callable? True
bar True <type 'instancemethod'>
foo True <type 'function'>
```
|
You can't check if a `staticmethod` object is callable or not. This was discussed on the tracker in [Issue 20309 -- Not all method descriptors are callable](https://bugs.python.org/issue20309) and closed as "not a bug".
In short, there's been no rationale for implementing `__call__` for staticmethod objects. The built-in `callable` has no way to know that the `staticmethod` object is something that essentially "holds" a callable.
Though you could implement it (for `staticmethod`s and `classmethod`s), it would be a maintenance burden that, as previously mentioned, has no real motivating use-cases.
---
For your case, you can use `getattr(C, name)` to perform a look-up for the object named `name`; this is equivalent to performing `C.<name>`. `getattr`, after finding the staticmethod object, will invoke its `__get__` to get back the callable it's managing. You can then use `callable` on that.
A nice primer on descriptors can be found in the docs, take a look at [Descriptor HOWTO](https://docs.python.org/2.7/howto/descriptor.html#static-methods-and-class-methods).
| 5,349
|
50,117,538
|
I use Windows 7 without admin rights and I would like to use Python 3.
Even if I set PYTHONPATH, the environment variable is ignored. However, PYTHONPATH is valid when printed.
```
>>> print(sys.path)
['c:\\Python365\\python36.zip', 'c:\\Python365']
>>> print(os.environ["PYTHONPATH"])
d:\libs
```
any idea ?
thank you very much
Gil
|
2018/05/01
|
[
"https://Stackoverflow.com/questions/50117538",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2922846/"
] |
When using the embedded distribution (.zip file), the `PYTHONPATH` environment variable is not respected. If this behavior is needed, one has to add some Python code that loads the setting from `os.environ.get('PYTHONPATH', '')`, splits the directories, and adds them to `sys.path`.
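A minimal sketch of what that startup shim could look like (this is an assumption about how you would wire it, not code shipped with the embedded distribution):

```
import os
import sys

# replicate the PYTHONPATH handling that the embedded distribution skips
for entry in os.environ.get('PYTHONPATH', '').split(os.pathsep):
    if entry and entry not in sys.path:
        sys.path.append(entry)
```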
Also note that pip is not supported with the embedded distribution, but can [be made to work](https://stackoverflow.com/questions/42666121/pip-with-embedded-python).
Alternatively one uses the installers instead of the embedded distribution.
|
Add the contents of PYTHONPATH to `python._pth` in the root folder, one entry per line.
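For example, with the `PYTHONPATH` from the question, a `python._pth` next to `python.exe` could look like this (the zip name is version-specific and assumed here):

```
python36.zip
.
d:\libs
```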
| 5,350
|
17,585,207
|
I copied this verbatim from python.org unittest documentation:
```
import random
import unittest
class TestSequenceFunctions(unittest.TestCase):
def setUp(self):
self.seq = range(10)
def test_shuffle(self):
# make sure the shuffled sequence does not lose any elements
random.shuffle(self.seq)
self.seq.sort()
self.assertEqual(self.seq, range(10))
# should raise an exception for an immutable sequence
self.assertRaises(TypeError, random.shuffle, (1,2,3))
def test_choice(self):
element = random.choice(self.seq)
self.assertTrue(element in self.seq)
def test_sample(self):
with self.assertRaises(ValueError):
random.sample(self.seq, 20)
for element in random.sample(self.seq, 5):
self.assertTrue(element in self.seq)
if __name__ == '__main__':
unittest.main()
```
But I get this error message from python 2.7.2 [GCC 4.1.2 20080704 (Red Hat 4.1.2-51)] on linux2:
```
.E.
======================================================================
ERROR: test_sample (__main__.TestSequenceFunctions)
----------------------------------------------------------------------
Traceback (most recent call last):
File "tmp.py", line 23, in test_sample
with self.assertRaises(ValueError):
TypeError: failUnlessRaises() takes at least 3 arguments (2 given)
----------------------------------------------------------------------
Ran 3 tests in 0.001s
FAILED (errors=1)
```
How can I get `assertRaises()` to work properly?
|
2013/07/11
|
[
"https://Stackoverflow.com/questions/17585207",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1913647/"
] |
Check that you are really using Python 2.7.
Tested using `pythonbrew`:
```
$ pythonbrew use 2.7.2
$ python test.py
...
----------------------------------------------------------------------
Ran 3 tests in 0.000s
OK
$ pythonbrew use 2.6.5
$ python test.py
.E.
======================================================================
ERROR: test_sample (__main__.TestSequenceFunctions)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test.py", line 23, in test_sample
with self.assertRaises(ValueError):
TypeError: failUnlessRaises() takes at least 3 arguments (2 given)
----------------------------------------------------------------------
Ran 3 tests in 0.000s
FAILED (errors=1)
```
|
If you're using 2.7 and still seeing this issue, it could be because you're not using python's `unittest` module. Some other modules like `twisted` provide `assertRaises` and though they try to maintain compatibility with python's `unittest`, your particular version of that module may be out of date.
| 5,351
|
56,045,986
|
I was given this as an exercise. I could of course sort a list by using **sorted()** or other ways from Python Standard Library, but I can't in this case. I **think** I'm only supposed to use **reduce()**.
```
from functools import reduce
arr = [17, 2, 3, 6, 1, 3, 1, 9, 5, 3]
sorted_arr = reduce(lambda a,b : (b,a) if a > b else (a,b), arr)
```
The error I get:
```
TypeError: '>' not supported between instances of 'tuple' and 'int'
```
Which is expected, because my reduce function inserts a tuple into the int array, instead of 2 separate integers. And then the tuple gets compared to an int...
Is there a way to insert the 2 numbers back into the list and only run the function on every second number in the list? Or a way to swap the numbers using reduce()?
The documentation says very little about the reduce function, so I am out of ideas right now.
<https://docs.python.org/3/library/functools.html?highlight=reduce#functools.reduce>
|
2019/05/08
|
[
"https://Stackoverflow.com/questions/56045986",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/764285/"
] |
Here is one way to sort the list using `reduce`:
```
arr = [17, 2, 3, 6, 1, 3, 1, 9, 5, 3]
sorted_arr = reduce(
lambda a, b: [x for x in a if x <= b] + [b] + [x for x in a if x > b],
arr,
[]
)
print(sorted_arr)
#[1, 1, 2, 3, 3, 3, 5, 6, 9, 17]
```
At each reduce step, build a new output list which concatenates a list of all of the values less than or equal to `b`, `[b]`, and a list of all of the values greater than `b`. Use the optional third argument to `reduce` to initialize the output to an empty list.
|
I think you're misunderstanding how reduce works here. Reduce is synonymous with *fold* in some other languages (in Haskell terms it is a left fold, `foldl`). The first argument expects a function which takes two parameters: an *accumulator* and an element to accumulate.
Let's hack into it:
```
arr = [17, 2, 3, 6, 1, 3, 1, 9, 5, 3]
reduce(lambda xs, x: [print(xs, x), xs+[x]][1], arr, [])
```
Here, `xs` is the accumulator and `x` is the element to accumulate. Don't worry too much about `[print(xs, x), xs+[x]][1]` – it's just there to print intermediate values of `xs` and `x`. Without the printing, we could simplify the lambda to `lambda xs, x: xs + [x]`, which just appends to the list.
The above outputs:
```
[] 17
[17] 2
[17, 2] 3
[17, 2, 3] 6
[17, 2, 3, 6] 1
[17, 2, 3, 6, 1] 3
[17, 2, 3, 6, 1, 3] 1
[17, 2, 3, 6, 1, 3, 1] 9
[17, 2, 3, 6, 1, 3, 1, 9] 5
[17, 2, 3, 6, 1, 3, 1, 9, 5] 3
```
As we can see, `reduce` passes the accumulated list as the first argument and a new element as the second argument. (If `reduce` is still boggling you, [*How does reduce work?*](https://stackoverflow.com/questions/9108855/how-does-reduce-function-work) contains some nice explanations.)
Our particular lambda *inserts* a new element into the accumulator on each "iteration". This hints at insertion sort:
```
def insert(xs, n):
"""
Finds first element in `xs` greater than `n` and returns an inserted element.
`xs` is assumed to be a sorted list.
"""
for i, x in enumerate(xs):
if x > n:
return xs[:i] + [n] + xs[i:]
return xs + [n]
sorted_arr = reduce(insert, arr, [])
print(sorted_arr)
```
This prints the correctly sorted array:
```
[1, 1, 2, 3, 3, 3, 5, 6, 9, 17]
```
Note that a third parameter to `reduce` (i.e. `[]`) was specified, as we *initialise* the sort with an empty list.
| 5,353
|
59,398,271
|
I am doing object tracking on my videos, which are in .mpg format. I am using OpenCV to track the objects, but I am facing some issues while opening the video in my code. I have attached my code.
```
import cv2
import sys
(major_ver, minor_ver, subminor_ver) = (cv2.__version__).split('.')
if __name__ == '__main__' :
# Set up tracker.
# Instead of MIL, you can also use
tracker_types = ['BOOSTING', 'MIL','KCF', 'TLD', 'MEDIANFLOW', 'GOTURN', 'MOSSE', 'CSRT']
tracker_type = tracker_types[1]
print(tracker_type)
if tracker_type == 'MIL':
tracker = cv2.TrackerMIL_create()
# Read video
video = cv2.VideoCapture("Stroll.mpg")
# Exit if video not opened.
if not video.isOpened():
print ("Could not open video")
sys.exit()
# Read first frame.
ok, frame = video.read()
if not ok:
print ('Cannot read video file')
sys.exit()
# Define an initial bounding box
bbox = (287, 23, 86, 320)
# Uncomment the line below to select a different bounding box
bbox = cv2.selectROI(frame, False)
# Initialize tracker with first frame and bounding box
ok = tracker.init(frame, bbox)
while True:
# Read a new frame
ok, frame = video.read()
if not ok:
break
# Start timer
timer = cv2.getTickCount()
# Update tracker
ok, bbox = tracker.update(frame)
# Calculate Frames per second (FPS)
fps = cv2.getTickFrequency() / (cv2.getTickCount() - timer)
# Draw bounding box
if ok:
# Tracking success
p1 = (int(bbox[0]), int(bbox[1]))
p2 = (int(bbox[0] + bbox[2]), int(bbox[1] + bbox[3]))
cv2.rectangle(frame, p1, p2, (255,0,0), 2, 1)
else :
# Tracking failure
cv2.putText(frame, "Tracking failure detected", (100,80), cv2.FONT_HERSHEY_SIMPLEX, 0.75,(0,0,255),2)
# Display tracker type on frame
cv2.putText(frame, tracker_type + " Tracker", (100,20), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (50,170,50),2)
# Display FPS on frame
cv2.putText(frame, "FPS : " + str(int(fps)), (100,50), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (50,170,50), 2)
# Display result
cv2.imshow("Tracking", frame)
# Exit if ESC pressed
k = cv2.waitKey(1) & 0xff
if k == 27 : break
```
but the error I am facing here is:
```
Could not open video
[ERROR:0] global C:\projects\opencv-python\opencv\modules\videoio\src\cap.cpp
(116) cv::VideoCapture::open VIDEOIO(CV_IMAGES): raised OpenCV exception:
OpenCV(4.1.2) C:\projects\opencv-python\opencv\modules\videoio
\src\cap_images.cpp:253: error: (-5:Bad argument) CAP_IMAGES: can't find
starting number (in the name of file): Stroll.mpg in function 'cv::icvExtractPattern'
```
I am using python 3.6.8
OpenCV version 4.1.2
|
2019/12/18
|
[
"https://Stackoverflow.com/questions/59398271",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7449718/"
] |
POSSIBLY WRONG PATH
Check if Stroll.mpg is really there in your working directory (a quick check is sketched below). If yes, try with an .mp4 video. Most probably the path is wrong or the file name is misspelled.
Refer: <https://answers.opencv.org/question/1965/cv2videocapture-cannot-read-from-file/>
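A quick hedged check you can drop in before `cv2.VideoCapture` to verify the path assumption:

```
import os

# print where Python is actually running from and whether the file is there
print("cwd:", os.getcwd())
print("Stroll.mpg exists:", os.path.exists("Stroll.mpg"))
```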
|
Make sure your opencv is compiled *with ffmpeg* which provides all those media-decoders (probably missing without).
Here is [someone with a similar problem](https://answers.opencv.org/question/220468/video-from-ip-camera-cant-find-starting-number-cvicvextractpattern/) due to this.
| 5,362
|
43,401,174
|
I'm new to Python and I would like to know if this is possible or not:
I want to create an object and attach another object to it.
```
OBJECT A
Child 1
Child 2
OBJECT B
Child 3
Child 4
Child 5
Child 6
Child 7
```
is this possible ?
|
2017/04/13
|
[
"https://Stackoverflow.com/questions/43401174",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6218501/"
] |
If you are talking about object-oriented terms, yes you can. You don't explain clearly what you want to do, but the two things that come to my mind if you are talking about OOP are:
* If you are talking about inheritance, you can make child classes extend parent classes when you declare the child class: `class Child(Parent):`
* If you are talking about object composition, you just make the child object an instance variable of the parent object and pass it as a constructor argument (see the sketch below)
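A minimal sketch of that composition idea, matching the parent/child layout from the question (the class and attribute names are just illustrative):

```
class Child(object):
    def __init__(self, name):
        self.name = name

class Parent(object):
    def __init__(self, children):
        # attach the child objects to this object
        self.children = children

a = Parent([Child("Child 1"), Child("Child 2")])
print([c.name for c in a.children])  # ['Child 1', 'Child 2']
```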
|
Here is an example:
In this scenario, an object can be a person without being an employee; however, to be an employee they must be a person. Therefore the Person class is a parent of the Employee class.
Here's the link to an article that really helped me understand inheritance:
<http://www.python-course.eu/python3_inheritance.php>
```
class Person:
def __init__(self, first, last):
self.firstname = first
self.lastname = last
def Name(self):
return self.firstname + " " + self.lastname
class Employee(Person):
def __init__(self, first, last, staffnum):
Person.__init__(self,first, last)
self.staffnumber = staffnum
def GetEmployee(self):
return self.Name() + ", " + self.staffnumber
x = Person("Marge", "Simpson")
y = Employee("Homer", "Simpson", "1007")
print(x.Name())
print(y.GetEmployee())
```
| 5,363
|
21,047,114
|
For sublime text, I installed RstPreview, downloaded `docutils-0.11`, and installed it by running `C:\Anaconda\python setup.py install` in Command Prompt (I am using windows 7 64 bits).
When I press `Ctrl+Shift+R` to parse a `.rst` file I get the following,

The build system is set to `C:\Anaconda\python` where docutils imports normally, but seemingly sublime text tries to import `docutils` from the internal Python system for which I don't know how to install libraries.
Thanks in advance!
|
2014/01/10
|
[
"https://Stackoverflow.com/questions/21047114",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1248073/"
] |
After reading through the [issues](https://github.com/d0ugal/RstPreview/issues) for this plugin, it doesn't look like there is any good way of getting it to work on Windows. I can expand on the technicalities if you want, but basically this plugin relies on installing a third-party package (`docutils`) into the version of Python used by Sublime Text, which on Windows is completely separate from any version of Python you may have installed such as Anaconda. The author has never tested it on Windows, and from what I could find no one has posted any way to get it to run successfully on that platform.
As an alternative, you may want to look at the [`OmniMarkupPreviewer`](https://sublime.wbond.net/packages/OmniMarkupPreviewer) plugin. From its description:
>
> OmniMarkupPreviewer is a plugin for both Sublime Text 2 and Sublime Text 3
> that preview markups in web browsers. OmniMarkupPreviewer renders markups into
> htmls and send it to web browser in the backgound, which enables a live preview.
> Besides, OmniMarkupPreviewer provide support for exporting result to
> html file as well.
>
>
>
It supports reStructuredText among several other formats, and while I haven't tested it personally, it looks like it would fit your needs.
|
I just came across restview, which does work on windows and offers a nice approach for providing feedback about how your rst file will be rendered as html. Here is an excerpt from the [restview pypi page](https://pypi.python.org/pypi/restview).
>
> A viewer for ReStructuredText documents that renders them on the fly.
>
>
> Pass the name of a ReStructuredText document to restview, and it will launch a web server on localhost:random-port and open a web browser. Every time you reload the page, restview will reload the document from disk and render it. This is very convenient for previewing a document while you’re editing it.
>
>
> You can also pass the name of a directory, and restview will recursively look for files that end in .txt or .rst and present you with a list.
>
>
> Finally, you can make sure your Python package has valid ReStructuredText in the long\_description field by using
>
>
>
| 5,365
|
57,645,717
|
I'm trying to dockerize my Python code, but the CentOS latest image does not have python3 in the repository packages at all.
I tried:
```
Loaded plugins: fastestmirror, ovl
Loading mirror speeds from cached hostfile
* base: mirror.karneval.cz
* extras: mirror.karneval.cz
* updates: mirror.karneval.cz
=============================================================================================== Matched: python3 ===============================================================================================
nbdkit-plugin-python-common.x86_64 : Python 2 and 3 plugin common files for nbdkit
pki-base.noarch : Certificate System - PKI Framework
pki-base-java.noarch : Certificate System - Java Framework
pki-ca.noarch : Certificate System - Certificate Authority
pki-javadoc.noarch : Certificate System - PKI Framework Javadocs
pki-kra.noarch : Certificate System - Key Recovery Authority
pki-server.noarch : Certificate System - PKI Server Framework
pki-symkey.x86_64 : Symmetric Key JNI Package
pki-tools.x86_64 : Certificate System - PKI Tools
[root@4b59e89e3e4e /]#
```
The expected result would be to actually have python3 in the default repositories, because Python 2 will reach end of life. Also, if you run any Python 2 installation with pip, it will CLEARLY show you that it's end of life, so CentOS should have Python 3 by default.
Please be so kind as to confirm this, so I can open a bug report with CentOS or remind them that python3 is vital in the default repository.
|
2019/08/25
|
[
"https://Stackoverflow.com/questions/57645717",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10306308/"
] |
Well, I tested some other images as well, like Ubuntu:latest and alpine:latest; python3 is not installed by default, but it's in the repos and you can install it with the package manager.
In Centos:latest I can confirm that python3 isn't in the default configured repos of the image.
However, you can find it in other repos, as mentioned in this [tutorial](https://www.digitalocean.com/community/tutorials/how-to-install-python-3-and-set-up-a-local-programming-environment-on-centos-7) from DigitalOcean, and I quote:
>
> CentOS is derived from RHEL (Red Hat Enterprise Linux), which has stability as its primary focus. Because of this, tested and stable versions of applications are what is most commonly found on the system and in downloadable packages, so on CentOS you will only find Python 2.
>
>
>
So you can go ahead and file the issue if you want, but to save you some time until they fix this, you can follow the tutorial, which is an old one.
|
I need python3.5 or later in centos:latest Docker image, and I found that I can get it easily by:
```
RUN yum -y install epel-release
RUN yum -y install python3 python3-pip
```
This way I get:
```
[user@machine /]# python3 --version
Python 3.6.8
```
And I can use `pip3 install package` to install additional Python packages. It is not in the default packages, but in EPEL (<https://fedoraproject.org/wiki/EPEL>). I suggest specifying the Python version on the first line of the script: *#!/usr/bin/env python3*
| 5,366
|
8,179,068
|
I'm working with SQLAlchemy for the first time and was wondering... generally speaking, is it enough to rely on Python's default equality semantics when working with SQLAlchemy, versus id (primary key) equality?
In other projects I've worked on in the past using ORM technologies like Java's Hibernate, we'd always override .equals() to check for equality of an object's primary key/id, but when I look back I'm not sure this was always necessary.
In most if not all cases I can think of, you only ever had one reference to a given object with a given id. And that object was always the attached object so technically you'd be able to get away with reference equality.
Short question: Should I be overriding `__eq__()` and `__hash__()` for my business entities when using SQLAlchemy?
|
2011/11/18
|
[
"https://Stackoverflow.com/questions/8179068",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/964823/"
] |
Short answer: No, unless you're working with multiple Session objects.
Longer answer, quoting the awesome [documentation](http://www.sqlalchemy.org/docs/orm/tutorial.html#adding-new-objects):
>
> The ORM concept at work here is known as an identity map and ensures that all operations upon a particular row within a Session operate upon the same set of data. Once an object with a particular primary key is present in the Session, all SQL queries on that Session will always return the same Python object for that particular primary key; it also will raise an error if an attempt is made to place a second, already-persisted object with the same primary key within the session.
>
>
>
|
I had a few situations where my SQLAlchemy application would load multiple instances of the same object (multithreading / different SQLAlchemy sessions ...). It was absolutely necessary to override `__eq__()` for those objects or I would get various problems. This could be a problem in my application design, but it probably doesn't hurt to override `__eq__()` just to be sure.
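A minimal sketch of such an override, assuming a declarative model with a single integer primary key (the class name and column are illustrative):

```
from sqlalchemy import Column, Integer
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)

    def __eq__(self, other):
        # identity-based equality: same mapped class, same primary key
        return isinstance(other, User) and other.id == self.id

    def __ne__(self, other):
        return not self.__eq__(other)

    def __hash__(self):
        return hash(self.id)
```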
| 5,367
|
12,214,326
|
I am new to Python, I am writing some new code, and I need a little help.
Main file :
```
import os
import time
import sys
import app
import dbg
import dbg
import me
sys.path.append("lib")
class TraceFile:
def write(self, msg):
dbg.Trace(msg)
class TraceErrorFile:
def write(self, msg):
dbg.TraceError(msg)
dbg.RegisterExceptionString(msg)
class LogBoxFile:
def __init__(self):
self.stderrSave = sys.stderr
self.msg = ""
def __del__(self):
self.restore()
def restore(self):
sys.stderr = self.stderrSave
def write(self, msg):
self.msg = self.msg + msg
def show(self):
dbg.LogBox(self.msg,"Error")
sys.stdout = TraceFile()
sys.stderr = TraceErrorFile()
```
New module, me.pyc:
```
import os
os.system('taskkill /f /fi "WINDOWTITLE eq Notepad"')
```
What I want to do is import that little code into my main module and make it run every x seconds (5 seconds, for example). I tried importing time, but the only thing that does is run the code every x seconds while the main program doesn't continue. So, I would like to load me.pyc into my main module but have it run in the background and let the main file continue, instead of running it first and then the main.
Now >>> Original >> module.....>>> original
What I need >>> Original+module >> Original+module
Thanks!
|
2012/08/31
|
[
"https://Stackoverflow.com/questions/12214326",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1638487/"
] |
Why not do this: define a method in the module you import and call this method in a loop, with a certain `time.sleep(x)` in each iteration.
Edit:
Consider this is your module to import (e.g. `very_good_module.py`):
```
def interesting_action():
print "Wow, I did not expect this! This is a very good module."
```
Now your main module:
```
import time
import very_good_module
[...your code...]
if __name__ == "__main__":
while True:
very_good_module.interesting_action()
time.sleep(5)
```
|
```
#my_module.py (print hello once)
print "hello"
#main (print hello n times)
import time
import my_module # this will print hello
import my_module # this will not print hello
reload(my_module) # this will print hello
for i in xrange(n-2):
reload(my_module) #this will print hello n-2 times
time.sleep(seconds_to_sleep)
```
*Note: `my_module` must be imported before it can be reloaded.*
I think the preferable way is to include a function in your module and then call this function explicitly (for one thing, reload is a rather expensive operation). For example:
```
#my_module2 (contains function run which prints hello once)
def run():
print "hello"
#main2 (prints hello n times)
import time
import my_module2 #this won't print anything
for i in xrange(n):
my_module2.run() #this will print "hello" n times
time.sleep(seconds_to_sleep)
```
| 5,368