| qid (int64, values 46k–74.7M) | question (string, 54–37.8k chars) | date (string, 10 chars) | metadata (list, 3 items) | response_j (string, 17–26k chars) | response_k (string, 26–26k chars) |
|---|---|---|---|---|---|
2,046,912
|
I am seeing some weird behavior while parsing shared paths (shared paths on a server, e.g. \\storage\Builds).
I am reading a text file which contains directory paths that I want to process further. To do so, I do the following:
```
def toWin(path):
return path.replace("\\", "\\\\")
for line in open(fileName):
l = toWin(line).strip()
if os.path.isdir(l):
print l # os.listdir(l) etc..
```
This works for local directories but fails for paths on a shared system.
```
e.g.
E:\Test -- works
\\StorageMachine\Test -- fails [internally converts to \\\\StorageMachine\\Test]
\\StorageMachine\Test\ -- fails [internally converts to \\\\StorageMachine\\Test\\]
```
But if I open a Python shell, import the script, and invoke the function with the same path string, then it works!
It seems that parsing of Windows shared paths behaves differently in the two cases.
Any ideas/suggestions?
|
2010/01/12
|
[
"https://Stackoverflow.com/questions/2046912",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/62056/"
] |
This may not be your actual issue, but your UNC paths are actually not correct - they should start with a double backslash, but internally only use a single backslash as a divider.
I'm not sure why the same thing would be working within the shell.
**Update:**
I suspect that what's happening is that in the shell your string is being interpreted (with escape replacements happening), while in your code it's being treated as read, since specifying a string literal in the shell is different from reading it from an input. To get the same effect from the shell, you'd need to specify it as a raw string with `r"string"`.
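For example, a minimal sketch of the difference between a string literal and text read from a file:
```
# In a string literal, each "\\" is ONE backslash after parsing;
# a line read from a file already contains literal backslashes.
s_literal = "\\\\StorageMachine\\Test"  # parses to \\StorageMachine\Test
s_raw = r"\\StorageMachine\Test"        # raw string: backslashes kept as typed
print s_literal == s_raw                # True (Python 2, as in the question)
```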
|
Had to convert the input to forward slashes (Unix-style) for the `os.*` modules to parse correctly.
Changed the code as below:
```
def toUnix(path):
return path.replace("\\", "/")
```
Now all modules parse correctly.
|
2,046,912
|
I am seeing some weird behavior while parsing shared paths (shared paths on a server, e.g. \\storage\Builds).
I am reading a text file which contains directory paths that I want to process further. To do so, I do the following:
```
def toWin(path):
return path.replace("\\", "\\\\")
for line in open(fileName):
l = toWin(line).strip()
if os.path.isdir(l):
print l # os.listdir(l) etc..
```
This works for local directories but fails for paths on a shared system.
```
e.g.
E:\Test -- works
\\StorageMachine\Test -- fails [internally converts to \\\\StorageMachine\\Test]
\\StorageMachine\Test\ -- fails [internally converts to \\\\StorageMachine\\Test\\]
```
But if I open a Python shell, import the script, and invoke the function with the same path string, then it works!
It seems that parsing of Windows shared paths behaves differently in the two cases.
Any ideas/suggestions?
|
2010/01/12
|
[
"https://Stackoverflow.com/questions/2046912",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/62056/"
] |
There is simply no reason to "convert". Backslashes are only interpreted when they are contained in string literals in your code, not when you read them programmatically from a file. Therefore, you should disable your conversion function and things will probably work.
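For instance, a sketch of the question's loop with the conversion removed (`fileName` as in the question):
```
import os

for line in open(fileName):
    l = line.strip()          # backslashes read from the file are already literal
    if os.path.isdir(l):
        print l               # Python 2 print, matching the original snippet
```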
|
Had to convert the input to forward slashes (Unix-style) for the `os.*` modules to parse correctly.
Changed the code as below:
```
def toUnix(path):
return path.replace("\\", "/")
```
Now all modules parse correctly.
|
31,620,161
|
I am trying to run a Python script using NRPE to monitor RabbitMQ. Inside the script is a command `sudo rabbitmqctl list_queues` which gives me a message count for each queue. However, this results in Nagios giving this message:
```
CRITICAL - Command '['sudo', 'rabbitmqctl', 'list_queues']' returned non-zero exit status 1
```
I thought this might be a permissions issue, so I proceeded in the following manner.
/etc/group:
```
ec2-user:x:500:
rabbitmq:x:498:nrpe,nagios,ec2-user
nagios:x:497:
nrpe:x:496:
rpc:x:32:
```
/etc/sudoers:
```
%rabbitmq ALL=NOPASSWD: /usr/sbin/rabbitmqctl
```
nagios configuration:
```
command[check_rabbitmq_queuecount_prod]=/usr/bin/python27 /etc/nagios/check_rabbitmq_prod -a queues_count -C 3000 -W 1500
```
check\_rabbitmq\_prod:
```python
#!/usr/bin/env python
from optparse import OptionParser
import shlex
import subprocess
import sys
class RabbitCmdWrapper(object):
"""So basically this just runs rabbitmqctl commands and returns parsed output.
Typically this means you need root privs for this to work.
Made this it's own class so it could be used in other monitoring tools
if desired."""
@classmethod
def list_queues(cls):
args = shlex.split('sudo rabbitmqctl list_queues')
cmd_result = subprocess.check_output(args).strip()
results = cls._parse_list_results(cmd_result)
return results
@classmethod
def _parse_list_results(cls, result_string):
results = result_string.strip().split('\n')
#remove text fluff
results.remove(results[-1])
results.remove(results[0])
return_data = []
for row in results:
return_data.append(row.split('\t'))
return return_data
def check_queues_count(critical=1000, warning=1000):
"""
A blanket check to make sure all queues are within count parameters.
TODO: Possibly break this out so test can be done on individual queues.
"""
try:
critical_q = []
warning_q = []
ok_q = []
results = RabbitCmdWrapper.list_queues()
for queue in results:
if queue[0] == 'SFS_Production_Queue':
count = int(queue[1])
if count >= critical:
critical_q.append("%s: %s" % (queue[0], count))
elif count >= warning:
warning_q.append("%s: %s" % (queue[0], count))
else:
ok_q.append("%s: %s" % (queue[0], count))
if critical_q:
print "CRITICAL - %s" % ", ".join(critical_q)
sys.exit(2)
elif warning_q:
print "WARNING - %s" % ", ".join(warning_q)
sys.exit(1)
else:
print "OK - %s" % ", ".join(ok_q)
sys.exit(0)
except Exception, err:
print "CRITICAL - %s" % err
sys.exit(2)
USAGE = """Usage: ./check_rabbitmq -a [action] -C [critical] -W [warning]
Actions:
- queues_count
checks the count in each of the queues in rabbitmq's list_queues"""
if __name__ == "__main__":
parser = OptionParser(USAGE)
parser.add_option("-a", "--action", dest="action",
help="Action to Check")
parser.add_option("-C", "--critical", dest="critical",
type="int", help="Critical Threshold")
parser.add_option("-W", "--warning", dest="warning",
type="int", help="Warning Threshold")
(options, args) = parser.parse_args()
if options.action == "queues_count":
check_queues_count(options.critical, options.warning)
else:
print "Invalid action: %s" % options.action
print USAGE
```
At this point I'm not sure what is preventing the script from running. It runs fine via the command-line. Any help is appreciated.
|
2015/07/24
|
[
"https://Stackoverflow.com/questions/31620161",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/811220/"
] |
The "non-zero exit code" error is often associated with `requiretty` being applied to all users by default in your sudoers file.
Disabling "requiretty" in your sudoers file for the user that runs the check is safe, and may potentially fix the issue.
E.g. (assuming nagios/nrpe are the users)
@ /etc/sudoers
```
Defaults:nagios !requiretty
Defaults:nrpe !requiretty
```
|
I guess what @EE1213 mentions is right. If you have permission to see /var/log/secure, the log probably contains error messages regarding sudoers, like:
```
"sorry, you must have a tty to run sudo"
```
|
24,494,437
|
I am using the Facebook Ads API and am wondering about Ad Image creation. This page, <https://developers.facebook.com/docs/reference/ads-api/adimage/#create>, makes it look pretty simple, except I'm not sure what's going on with the 'test.jpg=@test.jpg'. What is the @ for and how does it work?
I currently make the POST request as described in the API docs (the above link) with a parameter 'pic.jpg' whose value is '@<https://s3.amazonaws.com/path/to/my/image.jpg>', but the response is an empty array instead of the JSON object with the 'images' key as shown in the API docs (the above link).
Can someone explain the idea/process/syntax of specifying files in HTTP requests, or perhaps more specific to Facebook Ads API? I'm not sure what is at play here.
EDIT:
(I should mention that I'm using python requests library.)
|
2014/06/30
|
[
"https://Stackoverflow.com/questions/24494437",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3391108/"
] |
First, if you want to upload an image with the Facebook Ads API, you can't give a URL; you have to provide the real path to the file.
In many cases you can use `image_url` directly and give the URL of the image, but in some cases you will need to upload the image to Facebook and use the hash it returns.
PHP/cURL implementation
```
$myPhoto = realpath('c.jpg');
$ch = curl_init();
$data = array('name' => "@{$myPhoto}",'access_token' => "{$access_token}");
$urlStr = "https://graph.facebook.com/act_{$ad_account_id}/adimages";
curl_setopt($ch, CURLOPT_URL, $urlStr);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$output = curl_exec($ch);
```
To learn how to POST a file with Python, refer to <http://docs.python-requests.org/en/latest/user/quickstart/#post-a-multipart-encoded-file>.
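For reference, a minimal sketch using the requests library (the account ID and token are placeholders, not values from the question):
```
import requests

url = "https://graph.facebook.com/act_<AD_ACCOUNT_ID>/adimages"  # placeholder account id
with open("test.jpg", "rb") as f:
    resp = requests.post(
        url,
        files={"test.jpg": f},                    # multipart file part, like curl's @
        data={"access_token": "<ACCESS_TOKEN>"},  # placeholder token
    )
print(resp.json())
```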
You can also use this library, which does all the work for you: <https://github.com/jgorset/facepy>
|
The @ prefix is syntax for uploading a local file; it is supported by curl and PHP, for example. So if you pass an https link instead, it is not strange that it doesn't work.
|
10,925,410
|
In a single-character string, when I try to fetch the second character, Python fails quite verbosely, which is correct and expected behavior.
```
print 'a'[1]
```
However, when I try to fetch an invalid subsequence range, it fails silently.
```
print 'a'[1:]
```
What is the reason for the difference in behavior?
|
2012/06/07
|
[
"https://Stackoverflow.com/questions/10925410",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1137587/"
] |
Actually, slicing never generates an error for strings and lists.
**For example:**
```
>>> s='abc'
>>> s[-1000:1000]
'abc'
```
works fine.
On the other hand, accessing indexes that are not defined will always raise an `IndexError` for both strings and lists:
```
>>> s[4]
Traceback (most recent call last):
File "<pyshell#6>", line 1, in <module>
s[4]
IndexError: string index out of range
```
|
The semantics differ:
```
print 'a'[1]
```
tries to index into a non-existent (i.e. invalid) index/location, which *is* an error.
```
print 'a'[1:]
```
simply returns, based on the specified range, an empty string (`''`), which is *not* an error.
I.e.,
```
In [175]: 'a'[1]
---------------------------------------------------------------------------
----> 1 'a'[1]
IndexError: string index out of range
In [176]: 'a'[1:]
Out[176]: ''
```
|
10,925,410
|
In a single-character string, when I try to fetch the second character, Python fails quite verbosely, which is correct and expected behavior.
```
print 'a'[1]
```
However, when I try to fetch an invalid subsequence range, it fails silently.
```
print 'a'[1:]
```
What is the reason for the difference in behavior?
|
2012/06/07
|
[
"https://Stackoverflow.com/questions/10925410",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1137587/"
] |
This makes more sense when you look at how mutable slicing on a list behaves:
```
>>> a = list(range(10))
>>> a
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> a[10] = 2
Traceback (most recent call last):
File "<pyshell#16>", line 1, in <module>
a[10] = 2
IndexError: list assignment index out of range
>>> a[10:] = [1, 2, 3]
>>> a
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 1, 2, 3]
```
Modifying a slice past the end tacks the new values onto the end, equivalent to doing `a.extend([1, 2, 3])` (although slightly different if your start point exists). This isn't surprising once you get your head around:
```
>>> a = list(range(10))
>>> a[2:4] = range(10)
>>> a
[0, 1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 4, 5, 6, 7, 8, 9]
```
But since you can modify this slice, it would be slightly surprising for trying to access it to be an `IndexError` - nowhere else in the language does getting something you can set fail with anything other than a `NameError`. But `NameError` wouldn't make sense here - Python *has* found an object with the right name, and has called a method on it.
Therefore, Python doesn't consider past-the-end slicing as an error with lists. With that in mind, why should accessing a slice behave differently between the builtin sequences? Strings (and tuples) are immutable, so slice assignment will always fail - but seeing what value is there isn't a mutation.
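For instance, a quick interpreter check:
```
>>> s = 'ab'
>>> s[5:]          # reading past the end: no error, just an empty string
''
>>> s[5:] = 'cd'   # writing past the end: strings are immutable
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'str' object does not support item assignment
```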
So, really, the ultimate reason is - it is because the devs felt this behaviour is less surprising than the other possible behaviours.
|
The semantics differ:
```
print 'a'[1]
```
tries to index into a non-existent (i.e. invalid) index/location, which *is* an error.
```
print 'a'[1:]
```
simply returns, based on the specified range, an empty string (`''`), which is *not* an error.
I.e.,
```
In [175]: 'a'[1]
---------------------------------------------------------------------------
----> 1 'a'[1]
IndexError: string index out of range
In [176]: 'a'[1:]
Out[176]: ''
```
|
10,925,410
|
In a single-character string, when I try to fetch the second character, Python fails quite verbosely, which is correct and expected behavior.
```
print 'a'[1]
```
However, when I try to fetch an invalid subsequence range, it fails silently.
```
print 'a'[1:]
```
What is the reason for the difference in behavior?
|
2012/06/07
|
[
"https://Stackoverflow.com/questions/10925410",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1137587/"
] |
The semantics differ:
```
print 'a'[1]
```
tries to index into a non-existent (i.e. invalid) index/location, which *is* an error.
```
print 'a'[1:]
```
simply returns, based on the specified range, an empty string (`''`), which is *not* an error.
I.e.,
```
In [175]: 'a'[1]
---------------------------------------------------------------------------
----> 1 'a'[1]
IndexError: string index out of range
In [176]: 'a'[1:]
Out[176]: ''
```
|
It can be thought of this way:
When you use `a[1]`, it is assumed that you know exactly what you want to access (in this case, the second element of the string). Since `a[1]` does not exist, Python raises an exception.
However, the range operator `a[1:]` is implemented with the semantics that one may not know the exact range of the data, so even if you specify indices that are very large (a random guess) you will still get the valid portion of the string (or sequence). The reason you get an empty string in this case is that there are no elements of the string in the range `a[1:]`.
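For example:
```
>>> 'abc'[1:1000]   # the end index is clamped to len('abc')
'bc'
>>> 'abc'[5:]       # start past the end: empty string, no error
''
```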
Hope this helps.
|
10,925,410
|
In a single-character string, when I try to fetch the second character, Python fails quite verbosely, which is correct and expected behavior.
```
print 'a'[1]
```
However, when I try to fetch an invalid subsequence range, it fails silently.
```
print 'a'[1:]
```
What is the reason for the difference in behavior?
|
2012/06/07
|
[
"https://Stackoverflow.com/questions/10925410",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1137587/"
] |
Actually, slicing never generates an error for strings and lists.
**For example:**
```
>>> s='abc'
>>> s[-1000:1000]
'abc'
```
works fine.
On the other hand, accessing indexes that are not defined will always raise an `IndexError` for both strings and lists:
```
>>> s[4]
Traceback (most recent call last):
File "<pyshell#6>", line 1, in <module>
s[4]
IndexError: string index out of range
```
|
A slicing operation is different from an index operation. An index returns an element, and a slice returns a range, even an empty range or an empty string.
An array with a single element has two "boundaries" where indexing pointers can be: 0 and 1. You can slice like `'a'[0:1]` and you'll get the string (or range, in a list or array) that lies between these positions.
If you slice from the leftmost border to the end, the reading goes to the end, where it already is, and you get the empty string.
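For example:
```
>>> 'a'[0:1]   # the slice between boundaries 0 and 1: the whole string
'a'
>>> 'a'[1:]    # start at the rightmost boundary: nothing left to read
''
```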
|
10,925,410
|
In a single-character string, when I try to fetch the second character, Python fails quite verbosely, which is correct and expected behavior.
```
print 'a'[1]
```
However, when I try to fetch an invalid subsequence range, it fails silently.
```
print 'a'[1:]
```
What is the reason for the difference in behavior?
|
2012/06/07
|
[
"https://Stackoverflow.com/questions/10925410",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1137587/"
] |
Actually, slicing never generates an error for strings and lists.
**For example:**
```
>>> s='abc'
>>> s[-1000:1000]
'abc'
```
works fine.
On the other hand, accessing indexes that are not defined will always raise an `IndexError` for both strings and lists:
```
>>> s[4]
Traceback (most recent call last):
File "<pyshell#6>", line 1, in <module>
s[4]
IndexError: string index out of range
```
|
This makes more sense when you look at how mutable slicing on a list behaves:
```
>>> a = list(range(10))
>>> a
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> a[10] = 2
Traceback (most recent call last):
File "<pyshell#16>", line 1, in <module>
a[10] = 2
IndexError: list assignment index out of range
>>> a[10:] = [1, 2, 3]
>>> a
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 1, 2, 3]
```
Modifying a slice past the end tacks the new values onto the end, equivalent to doing `a.extend([1, 2, 3])` (although slightly different if your start point exists). This isn't surprising once you get your head around:
```
>>> a = list(range(10))
>>> a[2:4] = range(10)
>>> a
[0, 1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 4, 5, 6, 7, 8, 9]
```
But since you can modify this slice, it would be slightly surprising for trying to access it to be an `IndexError` - nowhere else in the language does getting something you can set fail with anything other than a `NameError`. But `NameError` wouldn't make sense here - Python *has* found an object with the right name, and has called a method on it.
Therefore, Python doesn't consider past-the-end slicing as an error with lists. With that in mind, why should accessing a slice behave differently between the builtin sequences? Strings (and tuples) are immutable, so slice assignment will always fail - but seeing what value is there isn't a mutation.
So, really, the ultimate reason is - it is because the devs felt this behaviour is less surprising than the other possible behaviours.
|
10,925,410
|
In a single-character string, when I try to fetch the second character, Python fails quite verbosely, which is correct and expected behavior.
```
print 'a'[1]
```
However, when I try to fetch an invalid subsequence range, it fails silently.
```
print 'a'[1:]
```
What is the reason for the difference in behavior?
|
2012/06/07
|
[
"https://Stackoverflow.com/questions/10925410",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1137587/"
] |
Actually, slicing never generates an error for strings and lists.
**For example:**
```
>>> s='abc'
>>> s[-1000:1000]
'abc'
```
works fine.
On the other hand, accessing indexes that are not defined will always raise an `IndexError` for both strings and lists:
```
>>> s[4]
Traceback (most recent call last):
File "<pyshell#6>", line 1, in <module>
s[4]
IndexError: string index out of range
```
|
It can be thought of this way:
When you use `a[1]`, it is assumed that you know exactly what you want to access (in this case, the second element of the string). Since `a[1]` does not exist, Python raises an exception.
However, the range operator `a[1:]` is implemented with the semantics that one may not know the exact range of the data, so even if you specify indices that are very large (a random guess) you will still get the valid portion of the string (or sequence). The reason you get an empty string in this case is that there are no elements of the string in the range `a[1:]`.
Hope this helps.
|
10,925,410
|
In a single-character string, when I try to fetch the second character, Python fails quite verbosely, which is correct and expected behavior.
```
print 'a'[1]
```
However, when I try to fetch an invalid subsequence range, it fails silently.
```
print 'a'[1:]
```
What is the reason for the difference in behavior?
|
2012/06/07
|
[
"https://Stackoverflow.com/questions/10925410",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1137587/"
] |
This makes more sense when you look at how mutable slicing on a list behaves:
```
>>> a = list(range(10))
>>> a
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> a[10] = 2
Traceback (most recent call last):
File "<pyshell#16>", line 1, in <module>
a[10] = 2
IndexError: list assignment index out of range
>>> a[10:] = [1, 2, 3]
>>> a
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 1, 2, 3]
```
Modifying a slice past the end tacks the new values onto the end, equivalent to doing `a.extend([1, 2, 3])` (although slightly different if your start point exists). This isn't surprising once you get your head around:
```
>>> a = list(range(10))
>>> a[2:4] = range(10)
>>> a
[0, 1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 4, 5, 6, 7, 8, 9]
```
But since you can modify this slice, it would be slightly surprising for trying to access it to be an `IndexError` - nowhere else in the language does getting something you can set fail with anything other than a `NameError`. But `NameError` wouldn't make sense here - Python *has* found an object with the right name, and has called a method on it.
Therefore, Python doesn't consider past-the-end slicing as an error with lists. With that in mind, why should accessing a slice behave differently between the builtin sequences? Strings (and tuples) are immutable, so slice assignment will always fail - but seeing what value is there isn't a mutation.
So, really, the ultimate reason is - it is because the devs felt this behaviour is less surprising than the other possible behaviours.
|
A slicing operation is different from an index operation. An index returns an element, and a slice returns a range, even an empty range or an empty string.
An array with a single element has two "boundaries" where indexing pointers can be: 0 and 1. You can slice like `'a'[0:1]` and you'll get the string (or range, in a list or array) that lies between these positions.
If you slice from the leftmost border to the end, the reading goes to the end, where it already is, and you get the empty string.
|
10,925,410
|
In a single-character string, when I try to fetch the second character, Python fails quite verbosely, which is correct and expected behavior.
```
print 'a'[1]
```
However, when I try to fetch an invalid subsequence range, it fails silently.
```
print 'a'[1:]
```
What is the reason for the difference in behavior?
|
2012/06/07
|
[
"https://Stackoverflow.com/questions/10925410",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1137587/"
] |
A slicing operation is different from an index operation. An index returns an element, and a slice returns a range, even an empty range or an empty string.
An array with a single element has two "boundaries" where indexing pointers can be: 0 and 1. You can slice like `'a'[0:1]` and you'll get the string (or range, in a list or array) that lies between these positions.
If you slice from the leftmost border to the end, the reading goes to the end, where it already is, and you get the empty string.
|
It can be thought of this way:
When you use `a[1]`, it is assumed that you know exactly what you want to access (in this case, the second element of the string). Since `a[1]` does not exist, Python raises an exception.
However, the range operator `a[1:]` is implemented with the semantics that one may not know the exact range of the data, so even if you specify indices that are very large (a random guess) you will still get the valid portion of the string (or sequence). The reason you get an empty string in this case is that there are no elements of the string in the range `a[1:]`.
Hope this helps.
|
10,925,410
|
In a single-character string, when I try to fetch the second character, Python fails quite verbosely, which is correct and expected behavior.
```
print 'a'[1]
```
However, when I try to fetch an invalid subsequence range, it fails silently.
```
print 'a'[1:]
```
What is the reason for the difference in behavior?
|
2012/06/07
|
[
"https://Stackoverflow.com/questions/10925410",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1137587/"
] |
This makes more sense when you look at how mutable slicing on a list behaves:
```
>>> a = list(range(10))
>>> a
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> a[10] = 2
Traceback (most recent call last):
File "<pyshell#16>", line 1, in <module>
a[10] = 2
IndexError: list assignment index out of range
>>> a[10:] = [1, 2, 3]
>>> a
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 1, 2, 3]
```
Modifying a slice past the end tacks the new values onto the end, equivalent to doing `a.extend([1, 2, 3])` (although slightly different if your start point exists). This isn't surprising once you get your head around:
```
>>> a = list(range(10))
>>> a[2:4] = range(10)
>>> a
[0, 1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 4, 5, 6, 7, 8, 9]
```
But since you can modify this slice, it would be slightly surprising for trying to access it to be an `IndexError` - nowhere else in the language does getting something you can set fail with anything other than a `NameError`. But `NameError` wouldn't make sense here - Python *has* found an object with the right name, and has called a method on it.
Therefore, Python doesn't consider past-the-end slicing as an error with lists. With that in mind, why should accessing a slice behave differently between the builtin sequences? Strings (and tuples) are immutable, so slice assignment will always fail - but seeing what value is there isn't a mutation.
So, really, the ultimate reason is - it is because the devs felt this behaviour is less surprising than the other possible behaviours.
|
It can be thought of this way:
When you use `a[1]`, it is assumed that you know exactly what you want to access (in this case, the second element of the string). Since `a[1]` does not exist, Python raises an exception.
However, the range operator `a[1:]` is implemented with the semantics that one may not know the exact range of the data, so even if you specify indices that are very large (a random guess) you will still get the valid portion of the string (or sequence). The reason you get an empty string in this case is that there are no elements of the string in the range `a[1:]`.
Hope this helps.
|
55,441,517
|
I am following the official [docker get started guide](https://docs.docker.com/get-started/part2/). Instead of using a Python image, I would like to set up a MongoDB instance. I decided on a tag and found the relevant [Dockerfile](https://github.com/docker-library/mongo/blob/89f19dc16431025c00a4709e0da6d751cf94830f/4.0/Dockerfile). I have the Dockerfile inside an otherwise empty folder (its content is the same as in the link).
I ran `docker build --tag=mongoplayground:1.0.0 .`, and then `docker image ls`, but the output I get is this:
```
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 838852e4e564 32 minutes ago 409MB
ubuntu xenial 9361ce633ff1 2 weeks ago 118MB
```
I would expect the repository name to be `mongoplayground` and the tag to be `1.0.0`.
What am I doing wrong?
|
2019/03/31
|
[
"https://Stackoverflow.com/questions/55441517",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4341439/"
] |
It seems the only thing I had to add was `<import type="android.view.View" />` in the data tags...
|
You have to define the variables as `ObservableField`, as below:
```
public final ObservableField<String> name = new ObservableField<>();
public final ObservableField<String> family = new ObservableField<>();
```
|
55,441,517
|
I am following the official [docker get started guide](https://docs.docker.com/get-started/part2/). Instead of using a Python image, I would like to set up a MongoDB instance. I decided on a tag and found the relevant [Dockerfile](https://github.com/docker-library/mongo/blob/89f19dc16431025c00a4709e0da6d751cf94830f/4.0/Dockerfile). I have the Dockerfile inside an otherwise empty folder (its content is the same as in the link).
I ran `docker build --tag=mongoplayground:1.0.0 .`, and then `docker image ls`, but the output I get is this:
```
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 838852e4e564 32 minutes ago 409MB
ubuntu xenial 9361ce633ff1 2 weeks ago 118MB
```
I would expect the repository name to be `mongoplayground` and the tag to be `1.0.0`.
What am I doing wrong?
|
2019/03/31
|
[
"https://Stackoverflow.com/questions/55441517",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4341439/"
] |
It seems the only thing I had to add was `<import type="android.view.View" />` in the data tags...
|
The issue for me was that there were multiple qualifier layouts; for example, I had one for night mode plus the base file. I was only making changes in the base file but not in the other qualifier resource files. Once I made the changes in all qualifiers, it worked for me.
|
55,441,517
|
I am following the official [docker get started guide](https://docs.docker.com/get-started/part2/). Instead of using a Python image, I would like to set up a MongoDB instance. I decided on a tag and found the relevant [Dockerfile](https://github.com/docker-library/mongo/blob/89f19dc16431025c00a4709e0da6d751cf94830f/4.0/Dockerfile). I have the Dockerfile inside an otherwise empty folder (its content is the same as in the link).
I ran `docker build --tag=mongoplayground:1.0.0 .`, and then `docker image ls`, but the output I get is this:
```
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 838852e4e564 32 minutes ago 409MB
ubuntu xenial 9361ce633ff1 2 weeks ago 118MB
```
I would expect the repository name to be `mongoplayground` and the tag to be `1.0.0`.
What am I doing wrong?
|
2019/03/31
|
[
"https://Stackoverflow.com/questions/55441517",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4341439/"
] |
It seems the only thing I had to add was `<import type="android.view.View" />` in the data tags...
|
I had a `BindingAdapter` tag that was present in an *unreachable* module.
|
55,441,517
|
I am following the official [docker get started guide](https://docs.docker.com/get-started/part2/). Instead of using a Python image, I would like to set up a MongoDB instance. I decided on a tag and found the relevant [Dockerfile](https://github.com/docker-library/mongo/blob/89f19dc16431025c00a4709e0da6d751cf94830f/4.0/Dockerfile). I have the Dockerfile inside an otherwise empty folder (its content is the same as in the link).
I ran `docker build --tag=mongoplayground:1.0.0 .`, and then `docker image ls`, but the output I get is this:
```
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 838852e4e564 32 minutes ago 409MB
ubuntu xenial 9361ce633ff1 2 weeks ago 118MB
```
I would expect the repository name to be `mongoplayground` and the tag to be `1.0.0`.
What am I doing wrong?
|
2019/03/31
|
[
"https://Stackoverflow.com/questions/55441517",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4341439/"
] |
It seems the only thing I had to add was `<import type="android.view.View" />` in the data tags...
|
I had the same error show up, but it was because my `android:text="@={...}"` was outside the EditText.
|
55,441,517
|
I am following the official [docker get started guide](https://docs.docker.com/get-started/part2/). Instead of using a Python image, I would like to set up a MongoDB instance. I decided on a tag and found the relevant [Dockerfile](https://github.com/docker-library/mongo/blob/89f19dc16431025c00a4709e0da6d751cf94830f/4.0/Dockerfile). I have the Dockerfile inside an otherwise empty folder (its content is the same as in the link).
I ran `docker build --tag=mongoplayground:1.0.0 .`, and then `docker image ls`, but the output I get is this:
```
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 838852e4e564 32 minutes ago 409MB
ubuntu xenial 9361ce633ff1 2 weeks ago 118MB
```
I would expect the repository name to be `mongoplayground` and the tag to be `1.0.0`.
What am I doing wrong?
|
2019/03/31
|
[
"https://Stackoverflow.com/questions/55441517",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4341439/"
] |
If you use two-way data binding (`@={myBindingValue}`, with the **'='** sign, instead of `@{myBindingValue}`), you'll sometimes get this unhelpful generic error because the value you are trying to bind is declared as immutable, i.e. **val** instead of **var**, in your Kotlin data class.
**Example:**
```
data class User(
    val name: String,
    var email: String
)
```
In this example, you could bind the user's email variable as `text="@={myViewModel.user.email}"`.
But if you try to bind the user's name, `text="@={myViewModel.user.name}"`, you will get this error.
|
You have to define the variables as `ObservableField`, as below:
```
public final ObservableField<String> name = new ObservableField<>();
public final ObservableField<String> family = new ObservableField<>();
```
|
55,441,517
|
I am following the official [docker get started guide](https://docs.docker.com/get-started/part2/). Instead of using a Python image, I would like to set up a MongoDB instance. I decided on a tag and found the relevant [Dockerfile](https://github.com/docker-library/mongo/blob/89f19dc16431025c00a4709e0da6d751cf94830f/4.0/Dockerfile). I have the Dockerfile inside an otherwise empty folder (its content is the same as in the link).
I ran `docker build --tag=mongoplayground:1.0.0 .`, and then `docker image ls`, but the output I get is this:
```
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 838852e4e564 32 minutes ago 409MB
ubuntu xenial 9361ce633ff1 2 weeks ago 118MB
```
I would expect the repository name to be `mongoplayground` and the tag to be `1.0.0`.
What am I doing wrong?
|
2019/03/31
|
[
"https://Stackoverflow.com/questions/55441517",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4341439/"
] |
It seems the only thing I had to add was `<import type="android.view.View" />` in the data tags...
|
In your XML code, inside the TextView tag, for the android:text attribute you have used `@{viewmodel}`. That just refers to your shopViewModel class; you must target the text variable inside that class. Then the generated class file errors will vanish.

> BindingImpl errors are mostly generated by invalid
> assignments for XML text or onClick attributes.
|
55,441,517
|
I am following the official [docker get started guide](https://docs.docker.com/get-started/part2/). Instead of using a Python image, I would like to set up a MongoDB instance. I decided on a tag and found the relevant [Dockerfile](https://github.com/docker-library/mongo/blob/89f19dc16431025c00a4709e0da6d751cf94830f/4.0/Dockerfile). I have the Dockerfile inside an otherwise empty folder (its content is the same as in the link).
I ran `docker build --tag=mongoplayground:1.0.0 .`, and then `docker image ls`, but the output I get is this:
```
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 838852e4e564 32 minutes ago 409MB
ubuntu xenial 9361ce633ff1 2 weeks ago 118MB
```
I would expect the repository name to be `mongoplayground` and the tag to be `1.0.0`.
What am I doing wrong?
|
2019/03/31
|
[
"https://Stackoverflow.com/questions/55441517",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4341439/"
] |
In your XML code, inside the TextView tag, for the android:text attribute you have used `@{viewmodel}`. That just refers to your shopViewModel class; you must target the text variable inside that class. Then the generated class file errors will vanish.

> BindingImpl errors are mostly generated by invalid
> assignments for XML text or onClick attributes.
|
I had the same error show up, but it was because my `android:text="@={...}"` was outside the EditText.
|
55,441,517
|
I am following the official [docker get started guide](https://docs.docker.com/get-started/part2/). Instead of using a Python image, I would like to set up a MongoDB instance. I decided on a tag and found the relevant [Dockerfile](https://github.com/docker-library/mongo/blob/89f19dc16431025c00a4709e0da6d751cf94830f/4.0/Dockerfile). I have the Dockerfile inside an otherwise empty folder (its content is the same as in the link).
I ran `docker build --tag=mongoplayground:1.0.0 .`, and then `docker image ls`, but the output I get is this:
```
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 838852e4e564 32 minutes ago 409MB
ubuntu xenial 9361ce633ff1 2 weeks ago 118MB
```
I would expect the repository name to be `mongoplayground` and the tag to be `1.0.0`.
What am I doing wrong?
|
2019/03/31
|
[
"https://Stackoverflow.com/questions/55441517",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4341439/"
] |
In your XML code, inside the TextView tag, for the android:text attribute you have used `@{viewmodel}`. That just refers to your shopViewModel class; you must target the text variable inside that class. Then the generated class file errors will vanish.

> BindingImpl errors are mostly generated by invalid
> assignments for XML text or onClick attributes.
|
The issue for me was that there were multiple qualifier layouts; for example, I had one for night mode plus the base file. I was only making changes in the base file but not in the other qualifier resource files. Once I made the changes in all qualifiers, it worked for me.
|
55,441,517
|
I am following the official [docker get started guide](https://docs.docker.com/get-started/part2/). Instead of using a Python image, I would like to set up a MongoDB instance. I decided on a tag and found the relevant [Dockerfile](https://github.com/docker-library/mongo/blob/89f19dc16431025c00a4709e0da6d751cf94830f/4.0/Dockerfile). I have the Dockerfile inside an otherwise empty folder (its content is the same as in the link).
I ran `docker build --tag=mongoplayground:1.0.0 .`, and then `docker image ls`, but the output I get is this:
```
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 838852e4e564 32 minutes ago 409MB
ubuntu xenial 9361ce633ff1 2 weeks ago 118MB
```
I would expect the repository name to be `mongoplayground` and the tag to be `1.0.0`.
What am I doing wrong?
|
2019/03/31
|
[
"https://Stackoverflow.com/questions/55441517",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4341439/"
] |
If you use two-way data binding (`@={myBindingValue}`, with the **'='** sign, instead of `@{myBindingValue}`), you'll sometimes get this unhelpful generic error because the value you are trying to bind is declared as immutable, i.e. **val** instead of **var**, in your Kotlin data class.
**Example:**
```
data class User(
    val name: String,
    var email: String
)
```
In this example, you could bind the user's email variable as `text="@={myViewModel.user.email}"`.
But if you try to bind the user's name, `text="@={myViewModel.user.name}"`, you will get this error.
|
The issue for me was that there were multiple qualifier layouts; for example, I had one for night mode plus the base file. I was only making changes in the base file but not in the other qualifier resource files. Once I made the changes in all qualifiers, it worked for me.
|
55,441,517
|
I am following the official [docker get started guide](https://docs.docker.com/get-started/part2/). Instead of using a Python image, I would like to set up a MongoDB instance. I decided on a tag and found the relevant [Dockerfile](https://github.com/docker-library/mongo/blob/89f19dc16431025c00a4709e0da6d751cf94830f/4.0/Dockerfile). I have the Dockerfile inside an otherwise empty folder (its content is the same as in the link).
I ran `docker build --tag=mongoplayground:1.0.0 .`, and then `docker image ls`, but the output I get is this:
```
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 838852e4e564 32 minutes ago 409MB
ubuntu xenial 9361ce633ff1 2 weeks ago 118MB
```
I would expect the repository name to be `mongoplayground` and the tag to be `1.0.0`.
What am I doing wrong?
|
2019/03/31
|
[
"https://Stackoverflow.com/questions/55441517",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4341439/"
] |
It seems the only thing I had to add was `<import type="android.view.View" />` in the data tags...
|
If you use two-way data binding (`@={myBindingValue}`, with the **'='** sign, instead of `@{myBindingValue}`), you'll sometimes get this unhelpful generic error because the value you are trying to bind is declared as immutable, i.e. **val** instead of **var**, in your Kotlin data class.
**Example:**
```
data class User(
    val name: String,
    var email: String
)
```
In this example, you could bind the user's email variable as `text="@={myViewModel.user.email}"`.
But if you try to bind the user's name, `text="@={myViewModel.user.name}"`, you will get this error.
|
72,671,082
|
Very new to VBA.
Suppose I have a 6 by 2 array with the values shown on the right, and I have an empty 2 by 3 array (excluding the header). My goal is to make the array on the left look as it is shown.
```
(Header) 1 2 3 1 a
a c e 1 b
b d f 2 c
2 d
3 e
3 f
```
Since the array on the right is already sorted, I noticed that it can be faster if I just let the 1st column of the 2 by 3 array take the first 2 values (a and b), the 2nd column take the following 2 values (c and d), and so on. This way, it avoids using a nested for loop to populate the left array.
However, I was unable to find a way to populate a specific column of an array. Another way to describe my question is: is there a way in VBA to replicate this code from Python, which directly modifies a specific column of an array? Thanks!
```
array[:, 0] = [a, b]
```
|
2022/06/18
|
[
"https://Stackoverflow.com/questions/72671082",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19365488/"
] |
Populate Array With Values From Another Array
---------------------------------------------
* It is always a nested loop, but in Python, it is obviously 'under the hood' i.e. not seen to the end-user. They have integrated this possibility (written some code) into the language.
* The following is a simplified version of what you could do in VBA since there is just too much hard-coded data with 'convenient' numbers in your question.
* The line of your interest is:
```
PopulateColumn dData, c, sData, SourceColumn
```
to populate column `c` in the destination array (`dData`) using one line of code. It's just shorter, not faster.
* Sure, it has no loop but if you look at the called procedure, `PopulateColumn`, you'll see that there actually is one (`For dr = 1 To drCount`).
* You can even go further with simplifying the life of the end-user by using classes but that's 'above my paygrade', and yours at the moment since you're saying you're a noob.
* Copy the code into a standard module, e.g. `Module1`, and run the `PopulateColumnTEST` procedure.
* Note that there are results written to the Visual Basic's Immediate window (`Ctrl`+`G`).
**The Code**
```vb
Option Explicit
Sub PopulateColumnTEST()
Const SourceColumn As Long = 2
' Populate the source array.
Dim sData As Variant: ReDim sData(1 To 6, 1 To 2)
Dim r As Long
For r = 1 To 6
sData(r, 1) = Int((r + 1) / 2) ' kind of irrelevant
sData(r, 2) = Chr(96 + r)
Next r
' Print source values.
DebugPrintCharData sData, "Source:" & vbLf & "R 1 2"
' Populate the destination array.
Dim dData As Variant: ReDim dData(1 To 2, 1 To 3)
Dim c As Long
' Loop through the columns of the destination array.
For c = 1 To 3
' Populate the current column of the destination array
' with the data from the source column of the source array
' by calling the 'PopulateColumn' procedure.
PopulateColumn dData, c, sData, SourceColumn
Next c
' Print destination values.
DebugPrintCharData dData, "Destination:" & vbLf & "R 1 2 3"
End Sub
Sub PopulateColumn( _
ByRef dData As Variant, _
ByVal dDataCol As Long, _
ByVal sData As Variant, _
ByVal sDataCol As Long)
Dim drCount As Long: drCount = UBound(dData, 1)
Dim dr As Long
For dr = 1 To drCount
dData(dr, dDataCol) = sData(drCount * (dDataCol - 1) + dr, sDataCol)
Next dr
End Sub
Sub DebugPrintCharData( _
ByVal Data As Variant, _
Optional Title As String = "", _
Optional ByVal ColumnDelimiter As String = " ")
If Len(Title) > 0 Then Debug.Print Title
Dim r As Long
Dim c As Long
Dim rString As String
For r = LBound(Data, 1) To UBound(Data, 1)
For c = LBound(Data, 2) To UBound(Data, 2)
rString = rString & ColumnDelimiter & Data(r, c)
Next c
rString = r & rString
Debug.Print rString
rString = vbNullString
Next r
End Sub
```
**The Results**
```
Source:
R 1 2
1 1 a
2 1 b
3 2 c
4 2 d
5 3 e
6 3 f
Destination:
R 1 2 3
1 a c e
2 b d f
```
|
**Alternative avoiding loops**
For the *sake of the art* and in order to *approximate* your requirement to find a way replicating Python's code
```
array[:, 0] = [a, b]
```
in VBA without nested loops, you could try the following function, which combines several column-value inputs (via a ParamArray) and returns a combined 2-dim array.
*Note that the function*
* *will return a* **1-based** *array by using `Application.Index` and*
* *will be slower than any combination of array loops.*
```
Function JoinColumnValues(ParamArray cols()) As Variant
'Purp: change ParamArray containing "flat" 1-dim column values to 2-dim array !!
'Note: Assumes 1-dim arrays (!) as column value inputs into ParamArray
' returns a 1-based 2-dim array
Dim tmp As Variant
tmp = cols
With Application
tmp = .Transpose(.Index(tmp, 0, 0))
End With
JoinColumnValues = tmp
End Function
```
[](https://i.stack.imgur.com/U6OnB.png)
**Example call**
Assumes "flat" 1-dim array inputs with identical element boundaries
```
Dim arr
arr = JoinColumnValues(Array("a", "b"), Array("c", "d"), Array("e", "f"))
```
|
74,478,463
|
I am developing deployment via DBX to Azure Databricks. In this regard I need a data job written in SQL to run every day. The job is located in the file `data.sql`. I know how to do it with a Python file. There I would do the following:
```
build:
  python: "pip"

environments:
  default:
    workflows:
      - name: "workflow-name"
        schedule:
          quartz_cron_expression: "0 0 9 * * ?" # every day at 9.00
          timezone_id: "Europe"
        format: MULTI_TASK
        job_clusters:
          - job_cluster_key: "basic-job-cluster"
            <<: *base-job-cluster
        tasks:
          - task_key: "task-name"
            job_cluster_key: "basic-job-cluster"
            spark_python_task:
              python_file: "file://filename.py"
```
But how can I change it so I can run a SQL job instead? I imagine it is the last two lines (`spark_python_task:` and `python_file: "file://filename.py"`) which need to be changed.
|
2022/11/17
|
[
"https://Stackoverflow.com/questions/74478463",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13219123/"
] |
There are various ways to do that.
(1) One of the simplest is to add a SQL query in the Databricks SQL lens, and then reference this query via `sql_task` as described [here](https://dbx.readthedocs.io/en/latest/reference/deployment/?h=sql_task#configuring-complex-deployments).
(2) If you want to have a Python project that re-uses SQL statements from a static file, you can [add this file](https://dbx.readthedocs.io/en/latest/guides/python/packaging_files/) to your Python Package and then call it from your package, e.g.:
```py
sql_statement = ... # code to read from the file
spark.sql(sql_statement)
```
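For instance, a minimal sketch of option (2), assuming the SQL file is packaged as `data.sql` inside a hypothetical package named `my_package`:
```py
from importlib.resources import read_text
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # on Databricks, `spark` already exists

sql_statement = read_text("my_package", "data.sql")  # load the packaged SQL text
for stmt in sql_statement.split(";"):                # naive split; assumes no ";" inside literals
    if stmt.strip():
        spark.sql(stmt)
```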
(3) A third option is to use the DBT framework with Databricks. In this case you probably would like to use `dbt_task` as described [here](https://dbx.readthedocs.io/en/latest/reference/deployment/?h=dbt_task#configuring-complex-deployments).
|
I found a simple workaround (although it might not be the prettiest): simply change `data.sql` to a Python file and run the queries using Spark. This way I could use the same `spark_python_task`.
|
59,989,572
|
I'm working with a database called `international_education` from the `world_bank_intl_education` dataset of `bigquery-public-data`.
```
FIELDS
country_name
country_code
indicator_name
indicator_code
value
year
```
My aim is to plot a line graph of the countries that have had the biggest and smallest change in Population growth (annual %) (one of the `indicator_name` values).
I have done this below using two window partitions to find the first and last value by year for each country, but I'm rough on my SQL and am wondering if there is a way to optimize this query.
```
query = """
WITH differences AS
(
SELECT country_name, year, value,
FIRST_VALUE(value)
OVER (
PARTITION BY country_name
ORDER BY year
RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
) AS small_value,
LAST_VALUE(value)
OVER (
PARTITION BY country_name
ORDER BY year
RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
) AS large_value
FROM `bigquery-public-data.world_bank_intl_education.international_education`
WHERE indicator_name = 'Population growth (annual %)'
ORDER BY year
)
SELECT country_name, year, (large_value-small_value) AS total_range, value
FROM differences
ORDER BY total_range
"""
```
Convert to pandas dataframe.
```
df= wbed.query_to_pandas_safe(query)
df.head(10)
```
Resulting table.
```
country_name year total_range value
0 United Arab Emirates 1970 -13.195183 14.446942
1 United Arab Emirates 1971 -13.195183 16.881671
2 United Arab Emirates 1972 -13.195183 17.689814
3 United Arab Emirates 1973 -13.195183 17.695296
4 United Arab Emirates 1974 -13.195183 17.125615
5 United Arab Emirates 1975 -13.195183 16.211873
6 United Arab Emirates 1976 -13.195183 15.450884
7 United Arab Emirates 1977 -13.195183 14.530119
8 United Arab Emirates 1978 -13.195183 13.033461
9 United Arab Emirates 1979 -13.195183 11.071306
```
I would then plot this with python as follows.
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

all_countries = df.groupby('country_name', as_index=False).max().sort_values(by='total_range').country_name.values
countries = np.concatenate((all_countries[:3], all_countries[-4:]))
plt.figure(figsize=(16, 8))
sns.lineplot(x='year',y='value', data=df[df.country_name.isin(countries)], hue='country_name')
```
|
2020/01/30
|
[
"https://Stackoverflow.com/questions/59989572",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4459665/"
] |
You don't need the CTE and you don't need the window frame definitions. So this should be equivalent:
```
SELECT country_name, year, value,
(first_value(value) OVER (PARTITION BY country_name ORDER BY YEAR DESC) -
first_value(value) OVER (PARTITION BY country_name ORDER BY YEAR)
) as total_range
FROM `bigquery-public-data.world_bank_intl_education.international_education`
WHERE indicator_name = 'Population growth (annual %)';
```
Note that `LAST_VALUE()` is finicky with window frame definitions. So I routinely just use `FIRST_VALUE()` with the order by reversed.
If you want just one row per country, then you need aggregation. BigQuery doesn't have "first" and "last" aggregation functions, but they are very easy to do with arrays:
```
SELECT country_name,
((array_agg(value ORDER BY year DESC LIMIT 1))[ordinal(1)] -
(array_agg(value ORDER BY year LIMIT 1))[ordinal(1)]
) as total_range
FROM `bigquery-public-data.world_bank_intl_education.international_education`
WHERE indicator_name = 'Population growth (annual %)'
GROUP BY country_name
ORDER BY total_range;
```
|
If I understand correctly what you are trying to calculate, I wrote a query that does everything in BigQuery, without the need to do anything in pandas.
This query returns all the rows for each country that ranks in the top 3 or bottom 3 by change in population growth.
```
WITH differences AS
(
SELECT
country_name,
year,
value,
LAST_VALUE(value) OVER (PARTITION BY country_name ORDER BY year) - FIRST_VALUE(value) OVER (PARTITION BY country_name ORDER BY year) AS total_range,
FROM `bigquery-public-data.world_bank_intl_education.international_education`
WHERE indicator_name = 'Population growth (annual %)'
ORDER BY year
)
, differences_with_ranks as (
SELECT
country_name,
year,
value,
total_range,
row_number() OVER (PARTITION BY country_name ORDER BY total_range) as rank,
FROM differences
)
, top_bottom as (
SELECT
country_name
FROM (
SELECT
country_name,
FROM differences_with_ranks
WHERE rank = 1
ORDER BY total_range DESC
LIMIT 3
)
UNION DISTINCT
SELECT
country_name
FROM (
SELECT
country_name,
FROM differences_with_ranks
WHERE rank = 1
ORDER BY total_range ASC
LIMIT 3
)
)
SELECT
*
FROM differences
WHERE country_name in (SELECT country_name FROM top_bottom)
```
I don't really understand what you mean by "optimize"; this query runs very fast (1.5 seconds). If you need a solution with lower latency, BigQuery is not the right solution.
|
59,989,572
|
I'm working with a database called `international_education` from the `world_bank_intl_education` dataset of `bigquery-public-data`.
```
FIELDS
country_name
country_code
indicator_name
indicator_code
value
year
```
My aim is to plot a line graph of the countries that have had the biggest and smallest change in Population growth (annual %) (one of the `indicator_name` values).
I have done this below using two window partitions to find the first and last value by year for each country, but I'm rough on my SQL and am wondering if there is a way to optimize this query.
```
query = """
WITH differences AS
(
SELECT country_name, year, value,
FIRST_VALUE(value)
OVER (
PARTITION BY country_name
ORDER BY year
RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
) AS small_value,
LAST_VALUE(value)
OVER (
PARTITION BY country_name
ORDER BY year
RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
) AS large_value
FROM `bigquery-public-data.world_bank_intl_education.international_education`
WHERE indicator_name = 'Population growth (annual %)'
ORDER BY year
)
SELECT country_name, year, (large_value-small_value) AS total_range, value
FROM differences
ORDER BY total_range
"""
```
Convert to pandas dataframe.
```
df= wbed.query_to_pandas_safe(query)
df.head(10)
```
Resulting table.
```
country_name year total_range value
0 United Arab Emirates 1970 -13.195183 14.446942
1 United Arab Emirates 1971 -13.195183 16.881671
2 United Arab Emirates 1972 -13.195183 17.689814
3 United Arab Emirates 1973 -13.195183 17.695296
4 United Arab Emirates 1974 -13.195183 17.125615
5 United Arab Emirates 1975 -13.195183 16.211873
6 United Arab Emirates 1976 -13.195183 15.450884
7 United Arab Emirates 1977 -13.195183 14.530119
8 United Arab Emirates 1978 -13.195183 13.033461
9 United Arab Emirates 1979 -13.195183 11.071306
```
I would then plot this with python as follows.
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

all_countries = df.groupby('country_name', as_index=False).max().sort_values(by='total_range').country_name.values
countries = np.concatenate((all_countries[:3], all_countries[-4:]))
plt.figure(figsize=(16, 8))
sns.lineplot(x='year',y='value', data=df[df.country_name.isin(countries)], hue='country_name')
```
|
2020/01/30
|
[
"https://Stackoverflow.com/questions/59989572",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4459665/"
] |
> thought there might have been a quicker way to write this query as seemed a bit long winded
I think below is the least verbose version (BigQuery Standard SQL)
```
#standardSQL
SELECT
country_name,
year,
(LAST_VALUE(value) OVER(win) - FIRST_VALUE(value) OVER(win)) AS total_range,
value
FROM `bigquery-public-data.world_bank_intl_education.international_education`
WHERE indicator_name = 'Population growth (annual %)'
WINDOW win AS (PARTITION BY country_name ORDER BY YEAR RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
ORDER BY total_range
```
... producing the same result as your original query.
|
If I understand correctly what you are trying to calculate, I wrote a query that does everything in BigQuery, without the need to do anything in pandas.
This query returns all the rows for each country that ranks in the top 3 or bottom 3 by change in population growth.
```
WITH differences AS
(
SELECT
country_name,
year,
value,
LAST_VALUE(value) OVER (PARTITION BY country_name ORDER BY year) - FIRST_VALUE(value) OVER (PARTITION BY country_name ORDER BY year) AS total_range,
FROM `bigquery-public-data.world_bank_intl_education.international_education`
WHERE indicator_name = 'Population growth (annual %)'
ORDER BY year
)
, differences_with_ranks as (
SELECT
country_name,
year,
value,
total_range,
row_number() OVER (PARTITION BY country_name ORDER BY total_range) as rank,
FROM differences
)
, top_bottom as (
SELECT
country_name
FROM (
SELECT
country_name,
FROM differences_with_ranks
WHERE rank = 1
ORDER BY total_range DESC
LIMIT 3
)
UNION DISTINCT
SELECT
country_name
FROM (
SELECT
country_name,
FROM differences_with_ranks
WHERE rank = 1
ORDER BY total_range ASC
LIMIT 3
)
)
SELECT
*
FROM differences
WHERE country_name in (SELECT country_name FROM top_bottom)
```
I don't really understand what you mean by "optimize"; this query runs very fast (1.5 seconds). If you need a solution with lower latency, BigQuery is not the right solution.
|
59,989,572
|
I'm working with a database called `international_education` from the `world_bank_intl_education` dataset of `bigquery-public-data`.
```
FIELDS
country_name
country_code
indicator_name
indicator_code
value
year
```
My aim is to plot a line graph with the countries that have had the biggest and smallest change in Population growth (annual %) (one of the `indicator_name` values).
I have done this below using two window partitions to find the first and last value (ordered by year) for each country, but I'm rough on my SQL and am wondering if there is a way to optimize this query.
```
query = """
WITH differences AS
(
SELECT country_name, year, value,
FIRST_VALUE(value)
OVER (
PARTITION BY country_name
ORDER BY year
RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
) AS small_value,
LAST_VALUE(value)
OVER (
PARTITION BY country_name
ORDER BY year
RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
) AS large_value
FROM `bigquery-public-data.world_bank_intl_education.international_education`
WHERE indicator_name = 'Population growth (annual %)'
ORDER BY year
)
SELECT country_name, year, (large_value-small_value) AS total_range, value
FROM differences
ORDER BY total_range
"""
```
Convert to pandas dataframe.
```
df = wbed.query_to_pandas_safe(query)
df.head(10)
```
Resulting table.
```
country_name year total_range value
0 United Arab Emirates 1970 -13.195183 14.446942
1 United Arab Emirates 1971 -13.195183 16.881671
2 United Arab Emirates 1972 -13.195183 17.689814
3 United Arab Emirates 1973 -13.195183 17.695296
4 United Arab Emirates 1974 -13.195183 17.125615
5 United Arab Emirates 1975 -13.195183 16.211873
6 United Arab Emirates 1976 -13.195183 15.450884
7 United Arab Emirates 1977 -13.195183 14.530119
8 United Arab Emirates 1978 -13.195183 13.033461
9 United Arab Emirates 1979 -13.195183 11.071306
```
I would then plot this with Python as follows.
```
all_countries = df.groupby('country_name', as_index=False).max().sort_values(by='total_range').country_name.values
countries = np.concatenate((all_countries[:3], all_countries[-4:]))
plt.figure(figsize=(16, 8))
sns.lineplot(x='year',y='value', data=df[df.country_name.isin(countries)], hue='country_name')
```
|
2020/01/30
|
[
"https://Stackoverflow.com/questions/59989572",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4459665/"
] |
You don't need the CTE and you don't need the window frame definitions. So this should be equivalent:
```
SELECT country_name, year, value,
(first_value(value) OVER (PARTITION BY country_name ORDER BY YEAR DESC) -
first_value(value) OVER (PARTITION BY country_name ORDER BY YEAR)
) as total_range
FROM `bigquery-public-data.world_bank_intl_education.international_education`
WHERE indicator_name = 'Population growth (annual %)';
```
Note that `LAST_VALUE()` is finicky with window frame definitions: with an `ORDER BY`, the default frame ends at the current row, so `LAST_VALUE()` simply returns the current row's value unless you widen the frame. So I routinely just use `FIRST_VALUE()` with the order by reversed.
If you want just one row per country, then you need aggregation. BigQuery doesn't have "first" and "last" aggregation functions, but they are very easy to do with arrays:
```
SELECT country_name,
((array_agg(value ORDER BY year DESC LIMIT 1))[ordinal(1)] -
(array_agg(value ORDER BY year LIMIT 1))[ordinal(1)]
) as total_range
FROM `bigquery-public-data.world_bank_intl_education.international_education`
WHERE indicator_name = 'Population growth (annual %)'
GROUP BY country_name
ORDER BY total_range;
```
|
>
> thought there might have been a quicker way to write this query as seemed a bit long winded
>
>
>
I think below is the least verbose version (BigQuery Standard SQL)
```
#standardSQL
SELECT
country_name,
year,
(LAST_VALUE(value) OVER(win) - FIRST_VALUE(value) OVER(win)) AS total_range,
value
FROM `bigquery-public-data.world_bank_intl_education.international_education`
WHERE indicator_name = 'Population growth (annual %)'
WINDOW win AS (PARTITION BY country_name ORDER BY YEAR RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
ORDER BY total_range
```
... producing the same result as your original query
|
29,833,789
|
I am learning Python and I get this error:
```
getattr(args, args.tool)(args)
AttributeError: 'Namespace' object has no attribute 'cat'
```
If I execute my script like this:
```
myscript.py -t cat
```
What I want is to print
```
Run cat here
```
Here is my full code:
```
#!/usr/bin/python
import sys, argparse
parser = argparse.ArgumentParser(str(sys.argv[0]))
parser.add_argument('-t', '--tool', help='Input tool name.', required=True, choices=["dog","cat","fish"])
args = parser.parse_args()
# Call function requested by user
getattr(args, args.tool)(args)
def dog(args):
print 'Run something dog here'
def cat(args):
print 'Run cat here'
def fish(args):
print 'Yes run fish here'
print "Bye !"
```
Thanks for your help :D
|
2015/04/23
|
[
"https://Stackoverflow.com/questions/29833789",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4358977/"
] |
EvenLisle's answer gives the correct idea, but you can easily generalize it by using `args.tool` as the key to `globals()`. Moreover, to simplify validation, you can use the `choices` argument of `add_argument` so that you know the possible values of `args.tool`. If someone provides an argument other than dog, cat, or fish for the -t command line option, your parser will automatically notify them of the usage error. Thus, your code would become:
```
#!/usr/bin/python
import sys, argparse
parser = argparse.ArgumentParser(str(sys.argv[0]))
parser.add_argument('-t', '--tool', help='Input tool name.', required=True,
choices=["dog","cat","fish"])
args = parser.parse_args()
def dog(args):
print 'Run something dog here'
def cat(args):
print 'Run cat here'
def fish(args):
print 'Yes run fish here'
if callable(globals().get(args.tool)):
globals()[args.tool](args)
```
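As a side note, an explicit dispatch table avoids reaching into `globals()` entirely. Here is a minimal sketch of the same program built around a dict, kept in the question's Python 2 style:
```
#!/usr/bin/python
import sys, argparse

def dog(args):
    print 'Run something dog here'

def cat(args):
    print 'Run cat here'

def fish(args):
    print 'Yes run fish here'

# map option values directly to the functions that handle them
TOOLS = {'dog': dog, 'cat': cat, 'fish': fish}

parser = argparse.ArgumentParser(str(sys.argv[0]))
parser.add_argument('-t', '--tool', help='Input tool name.', required=True,
                    choices=sorted(TOOLS))
args = parser.parse_args()
TOOLS[args.tool](args)  # no getattr/globals lookup needed
```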
|
This:
```
def cat(args):
print 'Run cat here'
if "cat" in globals():
globals()["cat"]("arg")
```
will print "Run cat here". You should consider making a habit of having your function definitions at the top of your file. Otherwise, the above snippet would not have worked, as your function `cat` would not yet be in the dictionary returned by `globals()`.
|
41,531,571
|
I am working on a GUI in Python 3.5 with PyQt5 for a small chat bot. The problem I have is that the pre-processing, post-processing and brain modules take too much time to give back the answer for the user-provided input.
The GUI is very simple and looks like this: <http://prntscr.com/dsxa39>. It loads very fast when not connected to the other modules. Note that using sleep before receiving the answer from the brain module still makes it unresponsive.
```
self.conversationBox.append("You: "+self.textbox.toPlainText())
self.textbox.setText("")
time.sleep(20)
self.conversationBox.append("Chatbot: " + "message from chatbot")
```
This is a small sample of the code that I need to fix.
And this is the error I encounter: <http://prnt.sc/dsxcqu>
I mention that I've already searched for a solution, and everywhere I found only what I had already tried: using sleep. But again, this won't work, as it makes the program unresponsive too.
|
2017/01/08
|
[
"https://Stackoverflow.com/questions/41531571",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5956553/"
] |
Slow functions, such as `sleep`, will always block unless they are running asynchronously in another thread.
If you want to avoid threads a workaround is to break up the slow function. In your case it might look like:
```
for _ in range(20):
sleep(1)
self.app.processEvents()
```
where `self.app` is a reference to your QApplication instance. This solution is a little hacky as it will simply result in 20 short hangs instead of one long hang.
If you want to use this approach for your brain function then you'll need to break it up in a similar manner. Beyond that, you'll need to use a threaded approach.
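If you do go the threaded route, here is a minimal sketch of handing the slow call to a `QThread` and getting the result back through a signal; `slow_brain` is a made-up stand-in for your brain function:
```
from PyQt5.QtCore import QThread, pyqtSignal

def slow_brain(text):
    # stand-in for the real pre-processing/brain/post-processing pipeline
    import time
    time.sleep(20)
    return "message from chatbot"

class BrainWorker(QThread):
    answered = pyqtSignal(str)

    def __init__(self, text, parent=None):
        super().__init__(parent)
        self.text = text

    def run(self):
        # runs in the worker thread, so the GUI stays responsive
        self.answered.emit(slow_brain(self.text))
```
Inside your widget's click handler you would then do something like:
```
self.worker = BrainWorker(self.textbox.toPlainText(), self)
self.worker.answered.connect(
    lambda msg: self.conversationBox.append("Chatbot: " + msg))
self.worker.start()  # do NOT join()/wait() from the GUI thread
```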
|
```
import sys
from PyQt5 import QtCore, QtGui
from PyQt5.QtWidgets import QMainWindow, QGridLayout, QLabel, QApplication, QWidget, QTextBrowser, QTextEdit, \
QPushButton, QAction, QLineEdit, QMessageBox
from PyQt5.QtGui import QPalette, QIcon, QColor, QFont
from PyQt5.QtCore import pyqtSlot, Qt
import threading
import time
textboxValue = ""
FinalAnsw = ""
class myThread (threading.Thread):
print ("Start")
def __init__(self):
threading.Thread.__init__(self)
def run(self):
def getAnswer(unString):
#do brain here
time.sleep(10)
return unString
global textboxValue
global FinalAnsw
FinalAnsw = getAnswer(textboxValue)
class App(QWidget):
def __init__(self):
super().__init__()
self.title = 'ChatBot'
self.left = 40
self.top = 40
self.width = 650
self.height = 600
self.initUI()
def initUI(self):
self.setWindowTitle(self.title)
self.setGeometry(self.left, self.top, self.width, self.height)
pal = QPalette();
pal.setColor(QPalette.Background, QColor(40, 40, 40));
self.setAutoFillBackground(True);
self.setPalette(pal);
font = QtGui.QFont()
font.setFamily("FreeMono")
font.setBold(True)
font.setPixelSize(15)
self.setStyleSheet("QTextEdit {color:#3d3838; font-size:12px; font-weight: bold}")
historylabel = QLabel('View your conversation history here: ')
historylabel.setStyleSheet('color: #82ecf9')
historylabel.setFont(font)
messagelabel = QLabel('Enter you message to the chat bot here:')
messagelabel.setStyleSheet('color: #82ecf9')
messagelabel.setFont(font)
self.conversationBox = QTextBrowser(self)
self.textbox = QTextEdit(self)
self.button = QPushButton('Send message', self)
self.button.setStyleSheet(
"QPushButton { background-color:#82ecf9; color: #3d3838 }" "QPushButton:pressed { background-color: black }")
grid = QGridLayout()
grid.setSpacing(10)
self.setLayout(grid)
grid.addWidget(historylabel, 1, 0)
grid.addWidget(self.conversationBox, 2, 0)
grid.addWidget(messagelabel, 3, 0)
grid.addWidget(self.textbox, 4, 0)
grid.addWidget(self.button, 5, 0)
# connect button to function on_click
self.button.clicked.connect(self.on_click)
self.show()
def on_click(self):
global textboxValue
textboxValue = self.textbox.toPlainText()
self.conversationBox.append("You: " + textboxValue)
th = myThread()
th.start()
th.join()
global FinalAnsw
self.conversationBox.append("Rocket: " + FinalAnsw)
self.textbox.setText("")
if __name__ == '__main__':
app = QApplication(sys.argv)
ex = App()
app.exec_()
```
So creating a simple thread solved the problem. Note that the code above will still freeze while the worker runs, because `th.join()` blocks the GUI thread until the thread finishes; for a fully responsive window, drop the `join()` and deliver the result back via a signal instead. It was tested with the brain module of my project and its functions.
For a simple example of building a thread, see <https://www.tutorialspoint.com/python/python_multithreading.htm>,
and for the PyQt GUI I learned from the examples at <http://zetcode.com/gui/pyqt5/>.
|
39,501,277
|
I have many (4000+) CSVs of stock data (Date, Open, High, Low, Close) which I import into individual Pandas dataframes to perform analysis. I am new to Python and want to calculate a rolling 12-month beta for each stock. I found a post on calculating rolling beta ([Python pandas calculate rolling stock beta using rolling apply to groupby object in vectorized fashion](https://stackoverflow.com/questions/34802972/python-pandas-calculate-rolling-stock-beta-using-rolling-apply-to-groupby-object)), but when used in my code below it takes over 2.5 hours! Considering I can run the exact same calculations in SQL tables in under 3 minutes, this is too slow.
How can I improve the performance of my code below to match that of SQL? I understand Pandas/Python has that capability. My current method loops over each row, which I know slows performance, but I am unaware of any aggregate way to perform a rolling-window beta calculation on a dataframe.
Note: the first 2 steps of loading the CSVs into individual dataframes and calculating daily returns only take ~20 seconds. All my CSV dataframes are stored in the dictionary called 'FilesLoaded' with names such as 'XAO'.
Your help would be much appreciated!
Thank you :)
```
import pandas as pd, numpy as np
import datetime
import ntpath
pd.set_option('precision',10) #Set the Decimal Point precision to DISPLAY
start_time=datetime.datetime.now()
MarketIndex = 'XAO'
period = 250
MinBetaPeriod = period
# ***********************************************************************************************
# CALC RETURNS
# ***********************************************************************************************
for File in FilesLoaded:
FilesLoaded[File]['Return'] = FilesLoaded[File]['Close'].pct_change()
# ***********************************************************************************************
# CALC BETA
# ***********************************************************************************************
def calc_beta(df):
np_array = df.values
m = np_array[:,0] # market returns are column zero from numpy array
s = np_array[:,1] # stock returns are column one from numpy array
covariance = np.cov(s,m) # Calculate covariance between stock and market
beta = covariance[0,1]/covariance[1,1]
return beta
#Build Custom "Rolling_Apply" function
def rolling_apply(df, period, func, min_periods=None):
if min_periods is None:
min_periods = period
result = pd.Series(np.nan, index=df.index)
for i in range(1, len(df)+1):
sub_df = df.iloc[max(i-period, 0):i,:]
if len(sub_df) >= min_periods:
idx = sub_df.index[-1]
result[idx] = func(sub_df)
return result
#Create empty BETA dataframe with same index as RETURNS dataframe
df_join = pd.DataFrame(index=FilesLoaded[MarketIndex].index)
df_join['market'] = FilesLoaded[MarketIndex]['Return']
df_join['stock'] = np.nan
for File in FilesLoaded:
df_join['stock'].update(FilesLoaded[File]['Return'])
df_join = df_join.replace(np.inf, np.nan) #get rid of infinite values "inf" (SQL won't take "Inf")
df_join = df_join.replace(-np.inf, np.nan)#get rid of infinite values "inf" (SQL won't take "Inf")
df_join = df_join.fillna(0) #get rid of the NaNs in the return data
FilesLoaded[File]['Beta'] = rolling_apply(df_join[['market','stock']], period, calc_beta, min_periods = MinBetaPeriod)
# ***********************************************************************************************
# CLEAN-UP
# ***********************************************************************************************
print('Run-time: {0}'.format(datetime.datetime.now() - start_time))
```
|
2016/09/14
|
[
"https://Stackoverflow.com/questions/39501277",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6107994/"
] |
While efficient subdivision of the input data set into rolling windows is important to the optimization of the overall calculations, the performance of the beta calculation itself can also be significantly improved.
The following optimizes only the subdivision of the data set into rolling windows:
```
import numpy
from pandas import DataFrame

def numpy_betas(x_name, window, returns_data, intercept=True):
if intercept:
ones = numpy.ones(window)
def lstsq_beta(window_data):
x_data = numpy.vstack([window_data[x_name], ones]).T if intercept else window_data[[x_name]]
beta_arr, residuals, rank, s = numpy.linalg.lstsq(x_data, window_data)
return beta_arr[0]
indices = [int(x) for x in numpy.arange(0, returns_data.shape[0] - window + 1, 1)]
return DataFrame(
data=[lstsq_beta(returns_data.iloc[i:(i + window)]) for i in indices]
, columns=list(returns_data.columns)
, index=returns_data.index[window - 1::1]
)
```
The following also optimizes the beta calculation itself:
```
import numpy

def custom_betas(x_name, window, returns_data):
window_inv = 1.0 / window
x_sum = returns_data[x_name].rolling(window, min_periods=window).sum()
y_sum = returns_data.rolling(window, min_periods=window).sum()
xy_sum = returns_data.mul(returns_data[x_name], axis=0).rolling(window, min_periods=window).sum()
xx_sum = numpy.square(returns_data[x_name]).rolling(window, min_periods=window).sum()
xy_cov = xy_sum - window_inv * y_sum.mul(x_sum, axis=0)
x_var = xx_sum - window_inv * numpy.square(x_sum)
betas = xy_cov.divide(x_var, axis=0)[window - 1:]
betas.columns.name = None
return betas
```
Comparing the performance of the two different calculations, you can see that as the window used in the beta calculation increases, the second method dramatically outperforms the first:
[](https://i.stack.imgur.com/A5204.png)
Comparing the performance to that of @piRSquared's implementation, the custom method takes roughly 350 millis to evaluate compared to over 2 seconds.
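For reference, a small usage sketch of `custom_betas` on simulated returns; the column names here are made up, and the market column's beta against itself comes out as 1:
```
import numpy
from pandas import DataFrame, date_range

dates = date_range('1995-12-31', periods=480, freq='M')
returns_data = DataFrame(numpy.random.rand(480, 4), index=dates,
                         columns=['Market', 's0', 's1', 's2'])

betas = custom_betas('Market', 12, returns_data)
print(betas.head())  # rows start at the first full 12-period window
```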
|
I created a simple Python package [finance-calculator](https://finance-calculator.readthedocs.io/en/latest/usage.html) based on numpy and pandas to calculate financial ratios, including beta. I am using the simple formula ([as per Investopedia](https://www.investopedia.com/ask/answers/070615/what-formula-calculating-beta.asp)):
```
beta = covariance(returns, benchmark returns) / variance(benchmark returns)
```
Covariance and variance are calculated directly in pandas, which makes it fast. Using the API in the package is also simple:
```
import finance_calculator as fc
beta = fc.get_beta(scheme_data, benchmark_data, tail=False)
```
which will give you a dataframe of date and beta, or the last beta value if `tail` is true.
|
39,501,277
|
I have many (4000+) CSVs of stock data (Date, Open, High, Low, Close) which I import into individual Pandas dataframes to perform analysis. I am new to Python and want to calculate a rolling 12-month beta for each stock. I found a post on calculating rolling beta ([Python pandas calculate rolling stock beta using rolling apply to groupby object in vectorized fashion](https://stackoverflow.com/questions/34802972/python-pandas-calculate-rolling-stock-beta-using-rolling-apply-to-groupby-object)), but when used in my code below it takes over 2.5 hours! Considering I can run the exact same calculations in SQL tables in under 3 minutes, this is too slow.
How can I improve the performance of my code below to match that of SQL? I understand Pandas/Python has that capability. My current method loops over each row, which I know slows performance, but I am unaware of any aggregate way to perform a rolling-window beta calculation on a dataframe.
Note: the first 2 steps of loading the CSVs into individual dataframes and calculating daily returns only take ~20 seconds. All my CSV dataframes are stored in the dictionary called 'FilesLoaded' with names such as 'XAO'.
Your help would be much appreciated!
Thank you :)
```
import pandas as pd, numpy as np
import datetime
import ntpath
pd.set_option('precision',10) #Set the Decimal Point precision to DISPLAY
start_time=datetime.datetime.now()
MarketIndex = 'XAO'
period = 250
MinBetaPeriod = period
# ***********************************************************************************************
# CALC RETURNS
# ***********************************************************************************************
for File in FilesLoaded:
FilesLoaded[File]['Return'] = FilesLoaded[File]['Close'].pct_change()
# ***********************************************************************************************
# CALC BETA
# ***********************************************************************************************
def calc_beta(df):
np_array = df.values
m = np_array[:,0] # market returns are column zero from numpy array
s = np_array[:,1] # stock returns are column one from numpy array
covariance = np.cov(s,m) # Calculate covariance between stock and market
beta = covariance[0,1]/covariance[1,1]
return beta
#Build Custom "Rolling_Apply" function
def rolling_apply(df, period, func, min_periods=None):
if min_periods is None:
min_periods = period
result = pd.Series(np.nan, index=df.index)
for i in range(1, len(df)+1):
sub_df = df.iloc[max(i-period, 0):i,:]
if len(sub_df) >= min_periods:
idx = sub_df.index[-1]
result[idx] = func(sub_df)
return result
#Create empty BETA dataframe with same index as RETURNS dataframe
df_join = pd.DataFrame(index=FilesLoaded[MarketIndex].index)
df_join['market'] = FilesLoaded[MarketIndex]['Return']
df_join['stock'] = np.nan
for File in FilesLoaded:
df_join['stock'].update(FilesLoaded[File]['Return'])
df_join = df_join.replace(np.inf, np.nan) #get rid of infinite values "inf" (SQL won't take "Inf")
df_join = df_join.replace(-np.inf, np.nan)#get rid of infinite values "inf" (SQL won't take "Inf")
df_join = df_join.fillna(0) #get rid of the NaNs in the return data
FilesLoaded[File]['Beta'] = rolling_apply(df_join[['market','stock']], period, calc_beta, min_periods = MinBetaPeriod)
# ***********************************************************************************************
# CLEAN-UP
# ***********************************************************************************************
print('Run-time: {0}'.format(datetime.datetime.now() - start_time))
```
|
2016/09/14
|
[
"https://Stackoverflow.com/questions/39501277",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6107994/"
] |
***Generate Random Stock Data***
40 Years of Monthly Data (480 periods) for 4,000 Stocks
```
dates = pd.date_range('1995-12-31', periods=480, freq='M', name='Date')
stoks = pd.Index(['s{:04d}'.format(i) for i in range(4000)])
df = pd.DataFrame(np.random.rand(480, 4000), dates, stoks)
```
---
```
df.iloc[:5, :5]
```
[](https://i.stack.imgur.com/DCpxf.png)
---
***Roll Function***
Returns groupby object ready to apply custom functions
See [Source](https://stackoverflow.com/a/37491779/2336654)
```
def roll(df, w):
# stack df.values w-times shifted once at each stack
roll_array = np.dstack([df.values[i:i+w, :] for i in range(len(df.index) - w + 1)]).T
# roll_array is now a 3-D array and can be read into
# a pandas panel object
panel = pd.Panel(roll_array,
items=df.index[w-1:],
major_axis=df.columns,
minor_axis=pd.Index(range(w), name='roll'))
# convert to dataframe and pivot + groupby
# is now ready for any action normally performed
# on a groupby object
return panel.to_frame().unstack().T.groupby(level=0)
```
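Note that `pd.Panel` was deprecated and removed in pandas 1.0, so the `roll` above only runs on older pandas versions. A Panel-free sketch with the same intent is to yield plain window frames and apply the function to each, as the generator-based answer elsewhere in this thread does:
```
def roll_frames(df, w):
    # generator of rolling-window DataFrames; works on modern pandas
    for i in range(len(df) - w + 1):
        yield df.iloc[i:i + w]
```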
---
***Beta Function***
Use closed form solution of OLS regression
Assume column 0 is market
See [Source](https://stats.stackexchange.com/a/23132/114499)
```
def beta(df):
# first column is the market
X = df.values[:, [0]]
# prepend a column of ones for the intercept
X = np.concatenate([np.ones_like(X), X], axis=1)
# matrix algebra
b = np.linalg.pinv(X.T.dot(X)).dot(X.T).dot(df.values[:, 1:])
return pd.Series(b[1], df.columns[1:], name='Beta')
```
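For the record, the matrix line above is just the closed-form ordinary least squares solution, of which only the slope row is kept:
```
\hat{\beta} = (X^{\top} X)^{-1} X^{\top} Y
```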
---
***Demonstration***
```
rdf = roll(df, 12)
betas = rdf.apply(beta)
```
---
***Timing***
[](https://i.stack.imgur.com/t6lvj.png)
---
***Validation***
Compare calculations with OP
```
def calc_beta(df):
np_array = df.values
m = np_array[:,0] # market returns are column zero from numpy array
s = np_array[:,1] # stock returns are column one from numpy array
covariance = np.cov(s,m) # Calculate covariance between stock and market
beta = covariance[0,1]/covariance[1,1]
return beta
```
---
```
print(calc_beta(df.iloc[:12, :2]))
-0.311757542437
```
---
```
print(beta(df.iloc[:12, :2]))
s0001 -0.311758
Name: Beta, dtype: float64
```
---
***Note the first cell***
Is the same value as validated calculations above
```
betas = rdf.apply(beta)
betas.iloc[:5, :5]
```
[](https://i.stack.imgur.com/V9RGy.png)
---
***Response to comment***
Full working example with simulated multiple dataframes
```
num_sec_dfs = 4000
cols = ['Open', 'High', 'Low', 'Close']
dfs = {'s{:04d}'.format(i): pd.DataFrame(np.random.rand(480, 4), dates, cols) for i in range(num_sec_dfs)}
market = pd.Series(np.random.rand(480), dates, name='Market')
df = pd.concat([market] + [dfs[k].Close.rename(k) for k in dfs.keys()], axis=1).sort_index(1)
betas = roll(df.pct_change().dropna(), 12).apply(beta)
for c, col in betas.iteritems():
dfs[c]['Beta'] = col
dfs['s0001'].head(20)
```
[](https://i.stack.imgur.com/5RuJ9.png)
|
Further optimizing @piRSquared's implementation for both speed and memory. The code is also simplified for clarity.
```
from numpy import nan, ndarray, ones_like, vstack, random
from numpy.lib.stride_tricks import as_strided
from numpy.linalg import pinv
from pandas import DataFrame, date_range
def calc_beta(s: ndarray, m: ndarray):
x = vstack((ones_like(m), m))
b = pinv(x.dot(x.T)).dot(x).dot(s)
return b[1]
def rolling_calc_beta(s_df: DataFrame, m_df: DataFrame, period: int):
result = ndarray(shape=s_df.shape, dtype=float)
l, w = s_df.shape
ls, ws = s_df.values.strides
    result[0:period - 1, :] = nan  # no beta until the first full window
    s_arr = as_strided(s_df.values, shape=(l - period + 1, period, w), strides=(ls, ls, ws))
    m_arr = as_strided(m_df.values, shape=(l - period + 1, period), strides=(ls, ls))
    for row in range(period - 1, l):
        # window (row - period + 1) covers data rows (row - period + 1)..row,
        # so its beta belongs at result row `row`; the original range(period, l)
        # lagged every beta by one row and left row period-1 uninitialized
        result[row, :] = calc_beta(s_arr[row - period + 1, :], m_arr[row - period + 1])
return DataFrame(data=result, index=s_df.index, columns=s_df.columns)
if __name__ == '__main__':
num_sec_dfs, num_periods = 4000, 480
dates = date_range('1995-12-31', periods=num_periods, freq='M', name='Date')
stocks = DataFrame(data=random.rand(num_periods, num_sec_dfs), index=dates,
columns=['s{:04d}'.format(i) for i in
range(num_sec_dfs)]).pct_change()
market = DataFrame(data=random.rand(num_periods), index=dates, columns=
['Market']).pct_change()
betas = rolling_calc_beta(stocks, market, 12)
```
**%timeit betas = rolling\_calc\_beta(stocks, market, 12)**
**335 ms ± 2.69 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)**
|
39,501,277
|
I have many (4000+) CSVs of stock data (Date, Open, High, Low, Close) which I import into individual Pandas dataframes to perform analysis. I am new to Python and want to calculate a rolling 12-month beta for each stock. I found a post on calculating rolling beta ([Python pandas calculate rolling stock beta using rolling apply to groupby object in vectorized fashion](https://stackoverflow.com/questions/34802972/python-pandas-calculate-rolling-stock-beta-using-rolling-apply-to-groupby-object)), but when used in my code below it takes over 2.5 hours! Considering I can run the exact same calculations in SQL tables in under 3 minutes, this is too slow.
How can I improve the performance of my code below to match that of SQL? I understand Pandas/Python has that capability. My current method loops over each row, which I know slows performance, but I am unaware of any aggregate way to perform a rolling-window beta calculation on a dataframe.
Note: the first 2 steps of loading the CSVs into individual dataframes and calculating daily returns only take ~20 seconds. All my CSV dataframes are stored in the dictionary called 'FilesLoaded' with names such as 'XAO'.
Your help would be much appreciated!
Thank you :)
```
import pandas as pd, numpy as np
import datetime
import ntpath
pd.set_option('precision',10) #Set the Decimal Point precision to DISPLAY
start_time=datetime.datetime.now()
MarketIndex = 'XAO'
period = 250
MinBetaPeriod = period
# ***********************************************************************************************
# CALC RETURNS
# ***********************************************************************************************
for File in FilesLoaded:
FilesLoaded[File]['Return'] = FilesLoaded[File]['Close'].pct_change()
# ***********************************************************************************************
# CALC BETA
# ***********************************************************************************************
def calc_beta(df):
np_array = df.values
m = np_array[:,0] # market returns are column zero from numpy array
s = np_array[:,1] # stock returns are column one from numpy array
covariance = np.cov(s,m) # Calculate covariance between stock and market
beta = covariance[0,1]/covariance[1,1]
return beta
#Build Custom "Rolling_Apply" function
def rolling_apply(df, period, func, min_periods=None):
if min_periods is None:
min_periods = period
result = pd.Series(np.nan, index=df.index)
for i in range(1, len(df)+1):
sub_df = df.iloc[max(i-period, 0):i,:]
if len(sub_df) >= min_periods:
idx = sub_df.index[-1]
result[idx] = func(sub_df)
return result
#Create empty BETA dataframe with same index as RETURNS dataframe
df_join = pd.DataFrame(index=FilesLoaded[MarketIndex].index)
df_join['market'] = FilesLoaded[MarketIndex]['Return']
df_join['stock'] = np.nan
for File in FilesLoaded:
df_join['stock'].update(FilesLoaded[File]['Return'])
df_join = df_join.replace(np.inf, np.nan) #get rid of infinite values "inf" (SQL won't take "Inf")
df_join = df_join.replace(-np.inf, np.nan)#get rid of infinite values "inf" (SQL won't take "Inf")
df_join = df_join.fillna(0) #get rid of the NaNs in the return data
FilesLoaded[File]['Beta'] = rolling_apply(df_join[['market','stock']], period, calc_beta, min_periods = MinBetaPeriod)
# ***********************************************************************************************
# CLEAN-UP
# ***********************************************************************************************
print('Run-time: {0}'.format(datetime.datetime.now() - start_time))
```
|
2016/09/14
|
[
"https://Stackoverflow.com/questions/39501277",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6107994/"
] |
Using a generator to improve memory efficiency
***Simulated data***
```
m, n = 480, 10000
dates = pd.date_range('1995-12-31', periods=m, freq='M', name='Date')
stocks = pd.Index(['s{:04d}'.format(i) for i in range(n)])
df = pd.DataFrame(np.random.rand(m, n), dates, stocks)
market = pd.Series(np.random.rand(m), dates, name='Market')
df = pd.concat([df, market], axis=1)
```
***Beta Calculation***
```
def beta(df, market=None):
# If the market values are not passed,
# I'll assume they are located in a column
# named 'Market'. If not, this will fail.
if market is None:
market = df['Market']
df = df.drop('Market', axis=1)
X = market.values.reshape(-1, 1)
X = np.concatenate([np.ones_like(X), X], axis=1)
b = np.linalg.pinv(X.T.dot(X)).dot(X.T).dot(df.values)
return pd.Series(b[1], df.columns, name=df.index[-1])
```
***roll function***
This returns a generator and will be far more memory efficient
```
def roll(df, w):
for i in range(df.shape[0] - w + 1):
yield pd.DataFrame(df.values[i:i+w, :], df.index[i:i+w], df.columns)
```
***Putting it all together***
```
betas = pd.concat([beta(sdf) for sdf in roll(df.pct_change().dropna(), 12)], axis=1).T
```
---
Validation
==========
***OP beta calc***
```
def calc_beta(df):
np_array = df.values
m = np_array[:,0] # market returns are column zero from numpy array
s = np_array[:,1] # stock returns are column one from numpy array
covariance = np.cov(s,m) # Calculate covariance between stock and market
beta = covariance[0,1]/covariance[1,1]
return beta
```
***Experiment setup***
```
m, n = 12, 2
dates = pd.date_range('1995-12-31', periods=m, freq='M', name='Date')
cols = ['Open', 'High', 'Low', 'Close']
dfs = {'s{:04d}'.format(i): pd.DataFrame(np.random.rand(m, 4), dates, cols) for i in range(n)}
market = pd.Series(np.random.rand(m), dates, name='Market')
df = pd.concat([market] + [dfs[k].Close.rename(k) for k in dfs.keys()], axis=1).sort_index(1)
betas = pd.concat([beta(sdf) for sdf in roll(df.pct_change().dropna(), 12)], axis=1).T
for c, col in betas.iteritems():
dfs[c]['Beta'] = col
dfs['s0000'].head(20)
```
[](https://i.stack.imgur.com/W8WLk.png)
```
calc_beta(df[['Market', 's0000']])
0.0020118230147777435
```
***NOTE:***
The calculations are the same
|
I created a simple Python package [finance-calculator](https://finance-calculator.readthedocs.io/en/latest/usage.html) based on numpy and pandas to calculate financial ratios, including beta. I am using the simple formula ([as per Investopedia](https://www.investopedia.com/ask/answers/070615/what-formula-calculating-beta.asp)):
```
beta = covariance(returns, benchmark returns) / variance(benchmark returns)
```
Covariance and variance are calculated directly in pandas, which makes it fast. Using the API in the package is also simple:
```
import finance_calculator as fc
beta = fc.get_beta(scheme_data, benchmark_data, tail=False)
```
which will give you a dataframe of date and beta, or the last beta value if `tail` is true.
|
39,501,277
|
I have many (4000+) CSVs of stock data (Date, Open, High, Low, Close) which I import into individual Pandas dataframes to perform analysis. I am new to Python and want to calculate a rolling 12-month beta for each stock. I found a post on calculating rolling beta ([Python pandas calculate rolling stock beta using rolling apply to groupby object in vectorized fashion](https://stackoverflow.com/questions/34802972/python-pandas-calculate-rolling-stock-beta-using-rolling-apply-to-groupby-object)), but when used in my code below it takes over 2.5 hours! Considering I can run the exact same calculations in SQL tables in under 3 minutes, this is too slow.
How can I improve the performance of my code below to match that of SQL? I understand Pandas/Python has that capability. My current method loops over each row, which I know slows performance, but I am unaware of any aggregate way to perform a rolling-window beta calculation on a dataframe.
Note: the first 2 steps of loading the CSVs into individual dataframes and calculating daily returns only take ~20 seconds. All my CSV dataframes are stored in the dictionary called 'FilesLoaded' with names such as 'XAO'.
Your help would be much appreciated!
Thank you :)
```
import pandas as pd, numpy as np
import datetime
import ntpath
pd.set_option('precision',10) #Set the Decimal Point precision to DISPLAY
start_time=datetime.datetime.now()
MarketIndex = 'XAO'
period = 250
MinBetaPeriod = period
# ***********************************************************************************************
# CALC RETURNS
# ***********************************************************************************************
for File in FilesLoaded:
FilesLoaded[File]['Return'] = FilesLoaded[File]['Close'].pct_change()
# ***********************************************************************************************
# CALC BETA
# ***********************************************************************************************
def calc_beta(df):
np_array = df.values
m = np_array[:,0] # market returns are column zero from numpy array
s = np_array[:,1] # stock returns are column one from numpy array
covariance = np.cov(s,m) # Calculate covariance between stock and market
beta = covariance[0,1]/covariance[1,1]
return beta
#Build Custom "Rolling_Apply" function
def rolling_apply(df, period, func, min_periods=None):
if min_periods is None:
min_periods = period
result = pd.Series(np.nan, index=df.index)
for i in range(1, len(df)+1):
sub_df = df.iloc[max(i-period, 0):i,:]
if len(sub_df) >= min_periods:
idx = sub_df.index[-1]
result[idx] = func(sub_df)
return result
#Create empty BETA dataframe with same index as RETURNS dataframe
df_join = pd.DataFrame(index=FilesLoaded[MarketIndex].index)
df_join['market'] = FilesLoaded[MarketIndex]['Return']
df_join['stock'] = np.nan
for File in FilesLoaded:
df_join['stock'].update(FilesLoaded[File]['Return'])
df_join = df_join.replace(np.inf, np.nan) #get rid of infinite values "inf" (SQL won't take "Inf")
df_join = df_join.replace(-np.inf, np.nan)#get rid of infinite values "inf" (SQL won't take "Inf")
df_join = df_join.fillna(0) #get rid of the NaNs in the return data
FilesLoaded[File]['Beta'] = rolling_apply(df_join[['market','stock']], period, calc_beta, min_periods = MinBetaPeriod)
# ***********************************************************************************************
# CLEAN-UP
# ***********************************************************************************************
print('Run-time: {0}'.format(datetime.datetime.now() - start_time))
```
|
2016/09/14
|
[
"https://Stackoverflow.com/questions/39501277",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6107994/"
] |
Using a generator to improve memory efficiency
***Simulated data***
```
m, n = 480, 10000
dates = pd.date_range('1995-12-31', periods=m, freq='M', name='Date')
stocks = pd.Index(['s{:04d}'.format(i) for i in range(n)])
df = pd.DataFrame(np.random.rand(m, n), dates, stocks)
market = pd.Series(np.random.rand(m), dates, name='Market')
df = pd.concat([df, market], axis=1)
```
***Beta Calculation***
```
def beta(df, market=None):
# If the market values are not passed,
# I'll assume they are located in a column
# named 'Market'. If not, this will fail.
if market is None:
market = df['Market']
df = df.drop('Market', axis=1)
X = market.values.reshape(-1, 1)
X = np.concatenate([np.ones_like(X), X], axis=1)
b = np.linalg.pinv(X.T.dot(X)).dot(X.T).dot(df.values)
return pd.Series(b[1], df.columns, name=df.index[-1])
```
***roll function***
This returns a generator and will be far more memory efficient
```
def roll(df, w):
for i in range(df.shape[0] - w + 1):
yield pd.DataFrame(df.values[i:i+w, :], df.index[i:i+w], df.columns)
```
***Putting it all together***
```
betas = pd.concat([beta(sdf) for sdf in roll(df.pct_change().dropna(), 12)], axis=1).T
```
---
Validation
==========
***OP beta calc***
```
def calc_beta(df):
np_array = df.values
m = np_array[:,0] # market returns are column zero from numpy array
s = np_array[:,1] # stock returns are column one from numpy array
covariance = np.cov(s,m) # Calculate covariance between stock and market
beta = covariance[0,1]/covariance[1,1]
return beta
```
***Experiment setup***
```
m, n = 12, 2
dates = pd.date_range('1995-12-31', periods=m, freq='M', name='Date')
cols = ['Open', 'High', 'Low', 'Close']
dfs = {'s{:04d}'.format(i): pd.DataFrame(np.random.rand(m, 4), dates, cols) for i in range(n)}
market = pd.Series(np.random.rand(m), dates, name='Market')
df = pd.concat([market] + [dfs[k].Close.rename(k) for k in dfs.keys()], axis=1).sort_index(1)
betas = pd.concat([beta(sdf) for sdf in roll(df.pct_change().dropna(), 12)], axis=1).T
for c, col in betas.iteritems():
dfs[c]['Beta'] = col
dfs['s0000'].head(20)
```
[](https://i.stack.imgur.com/W8WLk.png)
```
calc_beta(df[['Market', 's0000']])
0.0020118230147777435
```
***NOTE:***
The calculations are the same
|
But these approaches become a bottleneck when you require beta calculations across the dates (m) for multiple stocks (n), resulting in (m x n) calculations.
Some relief comes from running each date or stock on multiple cores, but then you end up needing substantial hardware.
The major time cost of the available solutions is computing the variance and covariance, and **NaN** values should be avoided in both the index and stock data for a correct calculation as of pandas==0.23.0.
Re-running the calculations would therefore be wasteful unless the results are cached.
The numpy variance and covariance version also happens to miscalculate beta if **NaN**s are not dropped.
A Cython implementation is a must for huge data sets.
|
39,501,277
|
I have many (4000+) CSVs of stock data (Date, Open, High, Low, Close) which I import into individual Pandas dataframes to perform analysis. I am new to Python and want to calculate a rolling 12-month beta for each stock. I found a post on calculating rolling beta ([Python pandas calculate rolling stock beta using rolling apply to groupby object in vectorized fashion](https://stackoverflow.com/questions/34802972/python-pandas-calculate-rolling-stock-beta-using-rolling-apply-to-groupby-object)), but when used in my code below it takes over 2.5 hours! Considering I can run the exact same calculations in SQL tables in under 3 minutes, this is too slow.
How can I improve the performance of my code below to match that of SQL? I understand Pandas/Python has that capability. My current method loops over each row, which I know slows performance, but I am unaware of any aggregate way to perform a rolling-window beta calculation on a dataframe.
Note: the first 2 steps of loading the CSVs into individual dataframes and calculating daily returns only take ~20 seconds. All my CSV dataframes are stored in the dictionary called 'FilesLoaded' with names such as 'XAO'.
Your help would be much appreciated!
Thank you :)
```
import pandas as pd, numpy as np
import datetime
import ntpath
pd.set_option('precision',10) #Set the Decimal Point precision to DISPLAY
start_time=datetime.datetime.now()
MarketIndex = 'XAO'
period = 250
MinBetaPeriod = period
# ***********************************************************************************************
# CALC RETURNS
# ***********************************************************************************************
for File in FilesLoaded:
FilesLoaded[File]['Return'] = FilesLoaded[File]['Close'].pct_change()
# ***********************************************************************************************
# CALC BETA
# ***********************************************************************************************
def calc_beta(df):
np_array = df.values
m = np_array[:,0] # market returns are column zero from numpy array
s = np_array[:,1] # stock returns are column one from numpy array
covariance = np.cov(s,m) # Calculate covariance between stock and market
beta = covariance[0,1]/covariance[1,1]
return beta
#Build Custom "Rolling_Apply" function
def rolling_apply(df, period, func, min_periods=None):
if min_periods is None:
min_periods = period
result = pd.Series(np.nan, index=df.index)
for i in range(1, len(df)+1):
sub_df = df.iloc[max(i-period, 0):i,:]
if len(sub_df) >= min_periods:
idx = sub_df.index[-1]
result[idx] = func(sub_df)
return result
#Create empty BETA dataframe with same index as RETURNS dataframe
df_join = pd.DataFrame(index=FilesLoaded[MarketIndex].index)
df_join['market'] = FilesLoaded[MarketIndex]['Return']
df_join['stock'] = np.nan
for File in FilesLoaded:
df_join['stock'].update(FilesLoaded[File]['Return'])
df_join = df_join.replace(np.inf, np.nan) #get rid of infinite values "inf" (SQL won't take "Inf")
df_join = df_join.replace(-np.inf, np.nan)#get rid of infinite values "inf" (SQL won't take "Inf")
df_join = df_join.fillna(0) #get rid of the NaNs in the return data
FilesLoaded[File]['Beta'] = rolling_apply(df_join[['market','stock']], period, calc_beta, min_periods = MinBetaPeriod)
# ***********************************************************************************************
# CLEAN-UP
# ***********************************************************************************************
print('Run-time: {0}'.format(datetime.datetime.now() - start_time))
```
|
2016/09/14
|
[
"https://Stackoverflow.com/questions/39501277",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6107994/"
] |
While efficient subdivision of the input data set into rolling windows is important to the optimization of the overall calculations, the performance of the beta calculation itself can also be significantly improved.
The following optimizes only the subdivision of the data set into rolling windows:
```
import numpy
from pandas import DataFrame

def numpy_betas(x_name, window, returns_data, intercept=True):
if intercept:
ones = numpy.ones(window)
def lstsq_beta(window_data):
x_data = numpy.vstack([window_data[x_name], ones]).T if intercept else window_data[[x_name]]
beta_arr, residuals, rank, s = numpy.linalg.lstsq(x_data, window_data)
return beta_arr[0]
indices = [int(x) for x in numpy.arange(0, returns_data.shape[0] - window + 1, 1)]
return DataFrame(
data=[lstsq_beta(returns_data.iloc[i:(i + window)]) for i in indices]
, columns=list(returns_data.columns)
, index=returns_data.index[window - 1::1]
)
```
The following also optimizes the beta calculation itself:
```
import numpy

def custom_betas(x_name, window, returns_data):
window_inv = 1.0 / window
x_sum = returns_data[x_name].rolling(window, min_periods=window).sum()
y_sum = returns_data.rolling(window, min_periods=window).sum()
xy_sum = returns_data.mul(returns_data[x_name], axis=0).rolling(window, min_periods=window).sum()
xx_sum = numpy.square(returns_data[x_name]).rolling(window, min_periods=window).sum()
xy_cov = xy_sum - window_inv * y_sum.mul(x_sum, axis=0)
x_var = xx_sum - window_inv * numpy.square(x_sum)
betas = xy_cov.divide(x_var, axis=0)[window - 1:]
betas.columns.name = None
return betas
```
Comparing the performance of the two different calculations, you can see that as the window used in the beta calculation increases, the second method dramatically outperforms the first:
[](https://i.stack.imgur.com/A5204.png)
Comparing the performance to that of @piRSquared's implementation, the custom method takes roughly 350 millis to evaluate compared to over 2 seconds.
|
**HERE'S THE SIMPLEST AND FASTEST SOLUTION**
The accepted answer was too slow for what I needed, and I didn't understand the math behind the solutions asserted as faster. They also gave different answers, though in fairness I probably just messed it up.
I don't think you need to make a custom rolling function to calculate beta with pandas 1.1.4 (or even since at least 0.19). The code below assumes the data is in the same format as the above problems--a pandas dataframe with a date index, percent returns of some periodicity for the stocks, and the market values located in a column named 'Market'.
If you don't have this format, I recommend joining the stock returns to the market returns to ensure the same index with:
```
# Use .pct_change() only if joining Close data
beta_data = stock_data.join(market_data, how='inner').pct_change().dropna()
```
After that, it's just covariance divided by variance.
```
ticker_covariance = beta_data.rolling(window).cov()
# Limit results to the stock (i.e. column name for the stock) vs. 'Market' covariance
ticker_covariance = ticker_covariance.loc[pd.IndexSlice[:, stock], 'Market'].dropna()
benchmark_variance = beta_data['Market'].rolling(window).var().dropna()
beta = ticker_covariance / benchmark_variance
```
NOTES: If you have a multi-index, you'll have to drop the non-date levels to use the rolling().apply() solution. I only tested this for one stock and one market. If you have multiple stocks, a modification to the ticker\_covariance equation after .loc is probably needed. Last, if you want to calculate beta values for the periods before the full window (e.g. stock\_data begins 1 year ago, but you use 3 yrs of data), then you can modify the above to an expanding (instead of rolling) window with the same calculation and then .combine\_first() the two.
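A sketch of that last suggestion, offered as an untested illustration: compute an expanding-window beta for the warm-up period and stitch it onto the rolling series. The `.droplevel(1)` calls drop the stock level so the series align on dates:
```
# rolling beta as above, with the stock level dropped for alignment
roll_beta = ticker_covariance.droplevel(1) / benchmark_variance

exp_cov = (beta_data.expanding().cov()
           .loc[pd.IndexSlice[:, stock], 'Market']
           .droplevel(1).dropna())
exp_var = beta_data['Market'].expanding().var().dropna()
exp_beta = exp_cov / exp_var

# rolling values win where available; expanding fills the early dates
beta = roll_beta.combine_first(exp_beta)
```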
|
39,501,277
|
I have many (4000+) CSVs of stock data (Date, Open, High, Low, Close) which I import into individual Pandas dataframes to perform analysis. I am new to Python and want to calculate a rolling 12-month beta for each stock. I found a post on calculating rolling beta ([Python pandas calculate rolling stock beta using rolling apply to groupby object in vectorized fashion](https://stackoverflow.com/questions/34802972/python-pandas-calculate-rolling-stock-beta-using-rolling-apply-to-groupby-object)), but when used in my code below it takes over 2.5 hours! Considering I can run the exact same calculations in SQL tables in under 3 minutes, this is too slow.
How can I improve the performance of my code below to match that of SQL? I understand Pandas/Python has that capability. My current method loops over each row, which I know slows performance, but I am unaware of any aggregate way to perform a rolling-window beta calculation on a dataframe.
Note: the first 2 steps of loading the CSVs into individual dataframes and calculating daily returns only take ~20 seconds. All my CSV dataframes are stored in the dictionary called 'FilesLoaded' with names such as 'XAO'.
Your help would be much appreciated!
Thank you :)
```
import pandas as pd, numpy as np
import datetime
import ntpath
pd.set_option('precision',10) #Set the Decimal Point precision to DISPLAY
start_time=datetime.datetime.now()
MarketIndex = 'XAO'
period = 250
MinBetaPeriod = period
# ***********************************************************************************************
# CALC RETURNS
# ***********************************************************************************************
for File in FilesLoaded:
FilesLoaded[File]['Return'] = FilesLoaded[File]['Close'].pct_change()
# ***********************************************************************************************
# CALC BETA
# ***********************************************************************************************
def calc_beta(df):
np_array = df.values
m = np_array[:,0] # market returns are column zero from numpy array
s = np_array[:,1] # stock returns are column one from numpy array
covariance = np.cov(s,m) # Calculate covariance between stock and market
beta = covariance[0,1]/covariance[1,1]
return beta
#Build Custom "Rolling_Apply" function
def rolling_apply(df, period, func, min_periods=None):
if min_periods is None:
min_periods = period
result = pd.Series(np.nan, index=df.index)
for i in range(1, len(df)+1):
sub_df = df.iloc[max(i-period, 0):i,:]
if len(sub_df) >= min_periods:
idx = sub_df.index[-1]
result[idx] = func(sub_df)
return result
#Create empty BETA dataframe with same index as RETURNS dataframe
df_join = pd.DataFrame(index=FilesLoaded[MarketIndex].index)
df_join['market'] = FilesLoaded[MarketIndex]['Return']
df_join['stock'] = np.nan
for File in FilesLoaded:
df_join['stock'].update(FilesLoaded[File]['Return'])
df_join = df_join.replace(np.inf, np.nan) #get rid of infinite values "inf" (SQL won't take "Inf")
df_join = df_join.replace(-np.inf, np.nan)#get rid of infinite values "inf" (SQL won't take "Inf")
df_join = df_join.fillna(0) #get rid of the NaNs in the return data
FilesLoaded[File]['Beta'] = rolling_apply(df_join[['market','stock']], period, calc_beta, min_periods = MinBetaPeriod)
# ***********************************************************************************************
# CLEAN-UP
# ***********************************************************************************************
print('Run-time: {0}'.format(datetime.datetime.now() - start_time))
```
|
2016/09/14
|
[
"https://Stackoverflow.com/questions/39501277",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6107994/"
] |
Using a generator to improve memory efficiency
***Simulated data***
```
m, n = 480, 10000
dates = pd.date_range('1995-12-31', periods=m, freq='M', name='Date')
stocks = pd.Index(['s{:04d}'.format(i) for i in range(n)])
df = pd.DataFrame(np.random.rand(m, n), dates, stocks)
market = pd.Series(np.random.rand(m), dates, name='Market')
df = pd.concat([df, market], axis=1)
```
***Beta Calculation***
```
def beta(df, market=None):
# If the market values are not passed,
# I'll assume they are located in a column
# named 'Market'. If not, this will fail.
if market is None:
market = df['Market']
df = df.drop('Market', axis=1)
X = market.values.reshape(-1, 1)
X = np.concatenate([np.ones_like(X), X], axis=1)
b = np.linalg.pinv(X.T.dot(X)).dot(X.T).dot(df.values)
return pd.Series(b[1], df.columns, name=df.index[-1])
```
***roll function***
This returns a generator and will be far more memory efficient
```
def roll(df, w):
for i in range(df.shape[0] - w + 1):
yield pd.DataFrame(df.values[i:i+w, :], df.index[i:i+w], df.columns)
```
***Putting it all together***
```
betas = pd.concat([beta(sdf) for sdf in roll(df.pct_change().dropna(), 12)], axis=1).T
```
---
Validation
==========
***OP beta calc***
```
def calc_beta(df):
np_array = df.values
m = np_array[:,0] # market returns are column zero from numpy array
s = np_array[:,1] # stock returns are column one from numpy array
covariance = np.cov(s,m) # Calculate covariance between stock and market
beta = covariance[0,1]/covariance[1,1]
return beta
```
***Experiment setup***
```
m, n = 12, 2
dates = pd.date_range('1995-12-31', periods=m, freq='M', name='Date')
cols = ['Open', 'High', 'Low', 'Close']
dfs = {'s{:04d}'.format(i): pd.DataFrame(np.random.rand(m, 4), dates, cols) for i in range(n)}
market = pd.Series(np.random.rand(m), dates, name='Market')
df = pd.concat([market] + [dfs[k].Close.rename(k) for k in dfs.keys()], axis=1).sort_index(1)
betas = pd.concat([beta(sdf) for sdf in roll(df.pct_change().dropna(), 12)], axis=1).T
for c, col in betas.iteritems():
dfs[c]['Beta'] = col
dfs['s0000'].head(20)
```
[](https://i.stack.imgur.com/W8WLk.png)
```
calc_beta(df[['Market', 's0000']])
0.0020118230147777435
```
***NOTE:***
The calculations are the same
|
**HERE'S THE SIMPLEST AND FASTEST SOLUTION**
The accepted answer was too slow for what I needed, and I didn't understand the math behind the solutions asserted as faster. They also gave different answers, though in fairness I probably just messed it up.
I don't think you need to make a custom rolling function to calculate beta with pandas 1.1.4 (or even since at least 0.19). The code below assumes the data is in the same format as the above problems--a pandas dataframe with a date index, percent returns of some periodicity for the stocks, and the market values located in a column named 'Market'.
If you don't have this format, I recommend joining the stock returns to the market returns to ensure the same index with:
```
# Use .pct_change() only if joining Close data
beta_data = stock_data.join(market_data, how='inner').pct_change().dropna()
```
After that, it's just covariance divided by variance.
```
ticker_covariance = beta_data.rolling(window).cov()
# Limit results to the stock (i.e. column name for the stock) vs. 'Market' covariance
ticker_covariance = ticker_covariance.loc[pd.IndexSlice[:, stock], 'Market'].dropna()
benchmark_variance = beta_data['Market'].rolling(window).var().dropna()
beta = ticker_covariance / benchmark_variance
```
NOTES: If you have a multi-index, you'll have to drop the non-date levels to use the rolling().apply() solution. I only tested this for one stock and one market. If you have multiple stocks, a modification to the ticker\_covariance equation after .loc is probably needed. Last, if you want to calculate beta values for the periods before the full window (e.g. stock\_data begins 1 year ago, but you use 3 yrs of data), then you can modify the above to an expanding (instead of rolling) window with the same calculation and then .combine\_first() the two.
|
39,501,277
|
I have many (4000+) CSVs of stock data (Date, Open, High, Low, Close) which I import into individual Pandas dataframes to perform analysis. I am new to Python and want to calculate a rolling 12-month beta for each stock. I found a post on calculating rolling beta ([Python pandas calculate rolling stock beta using rolling apply to groupby object in vectorized fashion](https://stackoverflow.com/questions/34802972/python-pandas-calculate-rolling-stock-beta-using-rolling-apply-to-groupby-object)), but when used in my code below it takes over 2.5 hours! Considering I can run the exact same calculations in SQL tables in under 3 minutes, this is too slow.
How can I improve the performance of my code below to match that of SQL? I understand Pandas/Python has that capability. My current method loops over each row, which I know slows performance, but I am unaware of any aggregate way to perform a rolling-window beta calculation on a dataframe.
Note: the first 2 steps of loading the CSVs into individual dataframes and calculating daily returns only take ~20 seconds. All my CSV dataframes are stored in the dictionary called 'FilesLoaded' with names such as 'XAO'.
Your help would be much appreciated!
Thank you :)
```
import pandas as pd, numpy as np
import datetime
import ntpath
pd.set_option('precision',10) #Set the Decimal Point precision to DISPLAY
start_time=datetime.datetime.now()
MarketIndex = 'XAO'
period = 250
MinBetaPeriod = period
# ***********************************************************************************************
# CALC RETURNS
# ***********************************************************************************************
for File in FilesLoaded:
FilesLoaded[File]['Return'] = FilesLoaded[File]['Close'].pct_change()
# ***********************************************************************************************
# CALC BETA
# ***********************************************************************************************
def calc_beta(df):
np_array = df.values
m = np_array[:,0] # market returns are column zero from numpy array
s = np_array[:,1] # stock returns are column one from numpy array
covariance = np.cov(s,m) # Calculate covariance between stock and market
beta = covariance[0,1]/covariance[1,1]
return beta
#Build Custom "Rolling_Apply" function
def rolling_apply(df, period, func, min_periods=None):
if min_periods is None:
min_periods = period
result = pd.Series(np.nan, index=df.index)
for i in range(1, len(df)+1):
sub_df = df.iloc[max(i-period, 0):i,:]
if len(sub_df) >= min_periods:
idx = sub_df.index[-1]
result[idx] = func(sub_df)
return result
#Create empty BETA dataframe with same index as RETURNS dataframe
df_join = pd.DataFrame(index=FilesLoaded[MarketIndex].index)
df_join['market'] = FilesLoaded[MarketIndex]['Return']
df_join['stock'] = np.nan
for File in FilesLoaded:
df_join['stock'].update(FilesLoaded[File]['Return'])
df_join = df_join.replace(np.inf, np.nan) #get rid of infinite values "inf" (SQL won't take "Inf")
df_join = df_join.replace(-np.inf, np.nan)#get rid of infinite values "inf" (SQL won't take "Inf")
df_join = df_join.fillna(0) #get rid of the NaNs in the return data
FilesLoaded[File]['Beta'] = rolling_apply(df_join[['market','stock']], period, calc_beta, min_periods = MinBetaPeriod)
# ***********************************************************************************************
# CLEAN-UP
# ***********************************************************************************************
print('Run-time: {0}'.format(datetime.datetime.now() - start_time))
```
|
2016/09/14
|
[
"https://Stackoverflow.com/questions/39501277",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6107994/"
] |
Using a generator to improve memory efficiency
***Simulated data***
```
m, n = 480, 10000
dates = pd.date_range('1995-12-31', periods=m, freq='M', name='Date')
stocks = pd.Index(['s{:04d}'.format(i) for i in range(n)])
df = pd.DataFrame(np.random.rand(m, n), dates, stocks)
market = pd.Series(np.random.rand(m), dates, name='Market')
df = pd.concat([df, market], axis=1)
```
***Beta Calculation***
```
def beta(df, market=None):
# If the market values are not passed,
# I'll assume they are located in a column
# named 'Market'. If not, this will fail.
if market is None:
market = df['Market']
df = df.drop('Market', axis=1)
X = market.values.reshape(-1, 1)
X = np.concatenate([np.ones_like(X), X], axis=1)
b = np.linalg.pinv(X.T.dot(X)).dot(X.T).dot(df.values)
return pd.Series(b[1], df.columns, name=df.index[-1])
```
***roll function***
This returns a generator and will be far more memory efficient
```
def roll(df, w):
for i in range(df.shape[0] - w + 1):
yield pd.DataFrame(df.values[i:i+w, :], df.index[i:i+w], df.columns)
```
***Putting it all together***
```
betas = pd.concat([beta(sdf) for sdf in roll(df.pct_change().dropna(), 12)], axis=1).T
```
---
Validation
==========
***OP beta calc***
```
def calc_beta(df):
np_array = df.values
m = np_array[:,0] # market returns are column zero from numpy array
s = np_array[:,1] # stock returns are column one from numpy array
covariance = np.cov(s,m) # Calculate covariance between stock and market
beta = covariance[0,1]/covariance[1,1]
return beta
```
***Experiment setup***
```
m, n = 12, 2
dates = pd.date_range('1995-12-31', periods=m, freq='M', name='Date')
cols = ['Open', 'High', 'Low', 'Close']
dfs = {'s{:04d}'.format(i): pd.DataFrame(np.random.rand(m, 4), dates, cols) for i in range(n)}
market = pd.Series(np.random.rand(m), dates, name='Market')
df = pd.concat([market] + [dfs[k].Close.rename(k) for k in dfs.keys()], axis=1).sort_index(1)
betas = pd.concat([beta(sdf) for sdf in roll(df.pct_change().dropna(), 12)], axis=1).T
for c, col in betas.iteritems():
dfs[c]['Beta'] = col
dfs['s0000'].head(20)
```
[](https://i.stack.imgur.com/W8WLk.png)
```
calc_beta(df[['Market', 's0000']])
0.0020118230147777435
```
***NOTE:***
The calculations are the same
|
While efficient subdivision of the input data set into rolling windows is important to the optimization of the overall calculations, the performance of the beta calculation itself can also be significantly improved.
The following optimizes only the subdivision of the data set into rolling windows:
```
import numpy
from pandas import DataFrame

def numpy_betas(x_name, window, returns_data, intercept=True):
if intercept:
ones = numpy.ones(window)
def lstsq_beta(window_data):
x_data = numpy.vstack([window_data[x_name], ones]).T if intercept else window_data[[x_name]]
beta_arr, residuals, rank, s = numpy.linalg.lstsq(x_data, window_data)
return beta_arr[0]
indices = [int(x) for x in numpy.arange(0, returns_data.shape[0] - window + 1, 1)]
return DataFrame(
data=[lstsq_beta(returns_data.iloc[i:(i + window)]) for i in indices]
, columns=list(returns_data.columns)
, index=returns_data.index[window - 1::1]
)
```
The following also optimizes the beta calculation itself:
```
def custom_betas(x_name, window, returns_data):
window_inv = 1.0 / window
x_sum = returns_data[x_name].rolling(window, min_periods=window).sum()
y_sum = returns_data.rolling(window, min_periods=window).sum()
xy_sum = returns_data.mul(returns_data[x_name], axis=0).rolling(window, min_periods=window).sum()
xx_sum = numpy.square(returns_data[x_name]).rolling(window, min_periods=window).sum()
xy_cov = xy_sum - window_inv * y_sum.mul(x_sum, axis=0)
x_var = xx_sum - window_inv * numpy.square(x_sum)
betas = xy_cov.divide(x_var, axis=0)[window - 1:]
betas.columns.name = None
return betas
```
Comparing the performance of the two different calculations, you can see that as the window used in the beta calculation increases, the second method dramatically outperforms the first:
[](https://i.stack.imgur.com/A5204.png)
Comparing the performance to that of @piRSquared's implementation, the custom method takes roughly 350 millis to evaluate compared to over 2 seconds.
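A hypothetical usage sketch for `custom_betas` (assuming `returns_data` is a DataFrame of daily returns with one column per stock plus the market column `'XAO'`, as in the question):
```
# 250 trading days, matching the question's `period`
betas = custom_betas(x_name='XAO', window=250, returns_data=returns_data)
print(betas.tail())   # one beta column per input column, indexed by date
```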
|
39,501,277
|
I have many (4000+) CSVs of stock data (Date, Open, High, Low, Close) which I import into individual Pandas dataframes to perform analysis. I am new to python and want to calculate a rolling 12-month beta for each stock. I found a post to calculate rolling beta ([Python pandas calculate rolling stock beta using rolling apply to groupby object in vectorized fashion](https://stackoverflow.com/questions/34802972/python-pandas-calculate-rolling-stock-beta-using-rolling-apply-to-groupby-object)); however, when used in my code below it takes over 2.5 hours! Considering I can run the exact same calculations in SQL tables in under 3 minutes, this is too slow.
How can I improve the performance of my code below to match that of SQL? I understand Pandas/python has that capability. My current method loops over each row, which I know slows performance, but I am unaware of any aggregate way to perform a rolling-window beta calculation on a dataframe.
Note: the first 2 steps of loading the CSVs into individual dataframes and calculating daily returns only take ~20 seconds. All my CSV dataframes are stored in the dictionary called 'FilesLoaded' with names such as 'XAO'.
Your help would be much appreciated!
Thank you :)
```
import pandas as pd, numpy as np
import datetime
import ntpath
pd.set_option('precision',10) #Set the Decimal Point precision to DISPLAY
start_time=datetime.datetime.now()
MarketIndex = 'XAO'
period = 250
MinBetaPeriod = period
# ***********************************************************************************************
# CALC RETURNS
# ***********************************************************************************************
for File in FilesLoaded:
FilesLoaded[File]['Return'] = FilesLoaded[File]['Close'].pct_change()
# ***********************************************************************************************
# CALC BETA
# ***********************************************************************************************
def calc_beta(df):
np_array = df.values
m = np_array[:,0] # market returns are column zero from numpy array
s = np_array[:,1] # stock returns are column one from numpy array
covariance = np.cov(s,m) # Calculate covariance between stock and market
beta = covariance[0,1]/covariance[1,1]
return beta
#Build Custom "Rolling_Apply" function
def rolling_apply(df, period, func, min_periods=None):
if min_periods is None:
min_periods = period
result = pd.Series(np.nan, index=df.index)
for i in range(1, len(df)+1):
sub_df = df.iloc[max(i-period, 0):i,:]
if len(sub_df) >= min_periods:
idx = sub_df.index[-1]
result[idx] = func(sub_df)
return result
#Create empty BETA dataframe with same index as RETURNS dataframe
df_join = pd.DataFrame(index=FilesLoaded[MarketIndex].index)
df_join['market'] = FilesLoaded[MarketIndex]['Return']
df_join['stock'] = np.nan
for File in FilesLoaded:
df_join['stock'].update(FilesLoaded[File]['Return'])
df_join = df_join.replace(np.inf, np.nan) #get rid of infinite values "inf" (SQL won't take "Inf")
df_join = df_join.replace(-np.inf, np.nan)#get rid of infinite values "inf" (SQL won't take "Inf")
df_join = df_join.fillna(0) #get rid of the NaNs in the return data
FilesLoaded[File]['Beta'] = rolling_apply(df_join[['market','stock']], period, calc_beta, min_periods = MinBetaPeriod)
# ***********************************************************************************************
# CLEAN-UP
# ***********************************************************************************************
print('Run-time: {0}'.format(datetime.datetime.now() - start_time))
```
|
2016/09/14
|
[
"https://Stackoverflow.com/questions/39501277",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6107994/"
] |
While efficient subdivision of the input data set into rolling windows is important to the optimization of the overall calculations, the performance of the beta calculation itself can also be significantly improved.
The following optimizes only the subdivision of the data set into rolling windows:
```
import numpy
from pandas import DataFrame

def numpy_betas(x_name, window, returns_data, intercept=True):
if intercept:
ones = numpy.ones(window)
def lstsq_beta(window_data):
x_data = numpy.vstack([window_data[x_name], ones]).T if intercept else window_data[[x_name]]
beta_arr, residuals, rank, s = numpy.linalg.lstsq(x_data, window_data)
return beta_arr[0]
indices = [int(x) for x in numpy.arange(0, returns_data.shape[0] - window + 1, 1)]
return DataFrame(
data=[lstsq_beta(returns_data.iloc[i:(i + window)]) for i in indices]
, columns=list(returns_data.columns)
, index=returns_data.index[window - 1::1]
)
```
The following also optimizes the beta calculation itself:
```
def custom_betas(x_name, window, returns_data):
window_inv = 1.0 / window
x_sum = returns_data[x_name].rolling(window, min_periods=window).sum()
y_sum = returns_data.rolling(window, min_periods=window).sum()
xy_sum = returns_data.mul(returns_data[x_name], axis=0).rolling(window, min_periods=window).sum()
xx_sum = numpy.square(returns_data[x_name]).rolling(window, min_periods=window).sum()
xy_cov = xy_sum - window_inv * y_sum.mul(x_sum, axis=0)
x_var = xx_sum - window_inv * numpy.square(x_sum)
betas = xy_cov.divide(x_var, axis=0)[window - 1:]
betas.columns.name = None
return betas
```
Comparing the performance of the two different calculations, you can see that as the window used in the beta calculation increases, the second method dramatically outperforms the first:
[](https://i.stack.imgur.com/A5204.png)
Comparing the performance to that of @piRSquared's implementation, the custom method takes roughly 350 millis to evaluate compared to over 2 seconds.
|
These approaches become a bottleneck when you need beta calculations across m dates for n stocks, resulting in m x n calculations.
Some relief can be had by running each date or stock on multiple cores, but then you will need huge hardware.
The major time cost of the available solutions is computing the variance and covariance, and **NaN** values should be avoided in the (index and stock) data for a correct calculation as of pandas==0.23.0.
Running the computation again is therefore wasteful unless the calculations are cached.
The numpy variance and covariance version also happens to miscalculate the beta if **NaN** values are not dropped.
A Cython implementation is a must for huge data sets.
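A minimal sketch of the NaN point above, assuming two aligned numpy arrays of returns; `np.cov` propagates NaN, so the rows have to be masked first:
```
import numpy as np

def beta_dropna(stock_returns, market_returns):
    # Keep only rows where both series have a value; otherwise np.cov
    # returns NaN for the whole window.
    mask = ~(np.isnan(stock_returns) | np.isnan(market_returns))
    cov = np.cov(stock_returns[mask], market_returns[mask])
    return cov[0, 1] / cov[1, 1]
```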
|
39,501,277
|
I have many (4000+) CSVs of stock data (Date, Open, High, Low, Close) which I import into individual Pandas dataframes to perform analysis. I am new to python and want to calculate a rolling 12-month beta for each stock. I found a post to calculate rolling beta ([Python pandas calculate rolling stock beta using rolling apply to groupby object in vectorized fashion](https://stackoverflow.com/questions/34802972/python-pandas-calculate-rolling-stock-beta-using-rolling-apply-to-groupby-object)); however, when used in my code below it takes over 2.5 hours! Considering I can run the exact same calculations in SQL tables in under 3 minutes, this is too slow.
How can I improve the performance of my code below to match that of SQL? I understand Pandas/python has that capability. My current method loops over each row, which I know slows performance, but I am unaware of any aggregate way to perform a rolling-window beta calculation on a dataframe.
Note: the first 2 steps of loading the CSVs into individual dataframes and calculating daily returns only take ~20 seconds. All my CSV dataframes are stored in the dictionary called 'FilesLoaded' with names such as 'XAO'.
Your help would be much appreciated!
Thank you :)
```
import pandas as pd, numpy as np
import datetime
import ntpath
pd.set_option('precision',10) #Set the Decimal Point precision to DISPLAY
start_time=datetime.datetime.now()
MarketIndex = 'XAO'
period = 250
MinBetaPeriod = period
# ***********************************************************************************************
# CALC RETURNS
# ***********************************************************************************************
for File in FilesLoaded:
FilesLoaded[File]['Return'] = FilesLoaded[File]['Close'].pct_change()
# ***********************************************************************************************
# CALC BETA
# ***********************************************************************************************
def calc_beta(df):
np_array = df.values
m = np_array[:,0] # market returns are column zero from numpy array
s = np_array[:,1] # stock returns are column one from numpy array
covariance = np.cov(s,m) # Calculate covariance between stock and market
beta = covariance[0,1]/covariance[1,1]
return beta
#Build Custom "Rolling_Apply" function
def rolling_apply(df, period, func, min_periods=None):
if min_periods is None:
min_periods = period
result = pd.Series(np.nan, index=df.index)
for i in range(1, len(df)+1):
sub_df = df.iloc[max(i-period, 0):i,:]
if len(sub_df) >= min_periods:
idx = sub_df.index[-1]
result[idx] = func(sub_df)
return result
#Create empty BETA dataframe with same index as RETURNS dataframe
df_join = pd.DataFrame(index=FilesLoaded[MarketIndex].index)
df_join['market'] = FilesLoaded[MarketIndex]['Return']
df_join['stock'] = np.nan
for File in FilesLoaded:
df_join['stock'].update(FilesLoaded[File]['Return'])
df_join = df_join.replace(np.inf, np.nan) #get rid of infinite values "inf" (SQL won't take "Inf")
df_join = df_join.replace(-np.inf, np.nan)#get rid of infinite values "inf" (SQL won't take "Inf")
df_join = df_join.fillna(0) #get rid of the NaNs in the return data
FilesLoaded[File]['Beta'] = rolling_apply(df_join[['market','stock']], period, calc_beta, min_periods = MinBetaPeriod)
# ***********************************************************************************************
# CLEAN-UP
# ***********************************************************************************************
print('Run-time: {0}'.format(datetime.datetime.now() - start_time))
```
|
2016/09/14
|
[
"https://Stackoverflow.com/questions/39501277",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6107994/"
] |
**HERE'S THE SIMPLEST AND FASTEST SOLUTION**
The accepted answer was too slow for what I needed, and I didn't understand the math behind the solutions asserted as faster. They also gave different answers, though in fairness I probably just messed it up.
I don't think you need to make a custom rolling function to calculate beta with pandas 1.1.4 (or even since at least 0.19). The below code assumes the data is in the same format as the above problems--a pandas dataframe with a date index, percent returns of some periodicity for the stocks, and market values located in a column named 'Market'.
If you don't have this format, I recommend joining the stock returns to the market returns to ensure the same index with:
```
# Use .pct_change() only if joining Close data
beta_data = stock_data.join(market_data, how='inner').pct_change().dropna()
```
After that, it's just covariance divided by variance.
```
ticker_covariance = beta_data.rolling(window).cov()
# Limit results to the stock (i.e. column name for the stock) vs. 'Market' covariance
ticker_covariance = ticker_covariance.loc[pd.IndexSlice[:, stock], 'Market'].dropna()
benchmark_variance = beta_data['Market'].rolling(window).var().dropna()
beta = ticker_covariance / benchmark_variance
```
NOTES: If you have a multi-index, you'll have to drop the non-date levels to use the rolling().apply() solution. I only tested this for one stock and one market. If you have multiple stocks, a modification to the ticker\_covariance equation after .loc is probably needed. Last, if you want to calculate beta values for the periods before the full window (e.g. stock\_data begins 1 year ago, but you use 3 yrs of data), then you can modify the above to an expanding (instead of rolling) window with the same calculation and then .combine\_first() the two.
|
These approaches become a bottleneck when you need beta calculations across m dates for n stocks, resulting in m x n calculations.
Some relief can be had by running each date or stock on multiple cores, but then you will need huge hardware.
The major time cost of the available solutions is computing the variance and covariance, and **NaN** values should be avoided in the (index and stock) data for a correct calculation as of pandas==0.23.0.
Running the computation again is therefore wasteful unless the calculations are cached.
The numpy variance and covariance version also happens to miscalculate the beta if **NaN** values are not dropped.
A Cython implementation is a must for huge data sets.
|
39,501,277
|
I have many (4000+) CSVs of stock data (Date, Open, High, Low, Close) which I import into individual Pandas dataframes to perform analysis. I am new to python and want to calculate a rolling 12-month beta for each stock. I found a post to calculate rolling beta ([Python pandas calculate rolling stock beta using rolling apply to groupby object in vectorized fashion](https://stackoverflow.com/questions/34802972/python-pandas-calculate-rolling-stock-beta-using-rolling-apply-to-groupby-object)); however, when used in my code below it takes over 2.5 hours! Considering I can run the exact same calculations in SQL tables in under 3 minutes, this is too slow.
How can I improve the performance of my code below to match that of SQL? I understand Pandas/python has that capability. My current method loops over each row, which I know slows performance, but I am unaware of any aggregate way to perform a rolling-window beta calculation on a dataframe.
Note: the first 2 steps of loading the CSVs into individual dataframes and calculating daily returns only take ~20 seconds. All my CSV dataframes are stored in the dictionary called 'FilesLoaded' with names such as 'XAO'.
Your help would be much appreciated!
Thank you :)
```
import pandas as pd, numpy as np
import datetime
import ntpath
pd.set_option('precision',10) #Set the Decimal Point precision to DISPLAY
start_time=datetime.datetime.now()
MarketIndex = 'XAO'
period = 250
MinBetaPeriod = period
# ***********************************************************************************************
# CALC RETURNS
# ***********************************************************************************************
for File in FilesLoaded:
FilesLoaded[File]['Return'] = FilesLoaded[File]['Close'].pct_change()
# ***********************************************************************************************
# CALC BETA
# ***********************************************************************************************
def calc_beta(df):
np_array = df.values
m = np_array[:,0] # market returns are column zero from numpy array
s = np_array[:,1] # stock returns are column one from numpy array
covariance = np.cov(s,m) # Calculate covariance between stock and market
beta = covariance[0,1]/covariance[1,1]
return beta
#Build Custom "Rolling_Apply" function
def rolling_apply(df, period, func, min_periods=None):
if min_periods is None:
min_periods = period
result = pd.Series(np.nan, index=df.index)
for i in range(1, len(df)+1):
sub_df = df.iloc[max(i-period, 0):i,:]
if len(sub_df) >= min_periods:
idx = sub_df.index[-1]
result[idx] = func(sub_df)
return result
#Create empty BETA dataframe with same index as RETURNS dataframe
df_join = pd.DataFrame(index=FilesLoaded[MarketIndex].index)
df_join['market'] = FilesLoaded[MarketIndex]['Return']
df_join['stock'] = np.nan
for File in FilesLoaded:
df_join['stock'].update(FilesLoaded[File]['Return'])
df_join = df_join.replace(np.inf, np.nan) #get rid of infinite values "inf" (SQL won't take "Inf")
df_join = df_join.replace(-np.inf, np.nan)#get rid of infinite values "inf" (SQL won't take "Inf")
df_join = df_join.fillna(0) #get rid of the NaNs in the return data
FilesLoaded[File]['Beta'] = rolling_apply(df_join[['market','stock']], period, calc_beta, min_periods = MinBetaPeriod)
# ***********************************************************************************************
# CLEAN-UP
# ***********************************************************************************************
print('Run-time: {0}'.format(datetime.datetime.now() - start_time))
```
|
2016/09/14
|
[
"https://Stackoverflow.com/questions/39501277",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6107994/"
] |
**HERE'S THE SIMPLEST AND FASTEST SOLUTION**
The accepted answer was too slow for what I needed, and I didn't understand the math behind the solutions asserted as faster. They also gave different answers, though in fairness I probably just messed it up.
I don't think you need to make a custom rolling function to calculate beta with pandas 1.1.4 (or even since at least 0.19). The below code assumes the data is in the same format as the above problems--a pandas dataframe with a date index, percent returns of some periodicity for the stocks, and market values located in a column named 'Market'.
If you don't have this format, I recommend joining the stock returns to the market returns to ensure the same index with:
```
# Use .pct_change() only if joining Close data
beta_data = stock_data.join(market_data, how='inner').pct_change().dropna()
```
After that, it's just covariance divided by variance.
```
ticker_covariance = beta_data.rolling(window).cov()
# Limit results to the stock (i.e. column name for the stock) vs. 'Market' covariance
ticker_covariance = ticker_covariance.loc[pd.IndexSlice[:, stock], 'Market'].dropna()
benchmark_variance = beta_data['Market'].rolling(window).var().dropna()
beta = ticker_covariance / benchmark_variance
```
NOTES: If you have a multi-index, you'll have to drop the non-date levels to use the rolling().apply() solution. I only tested this for one stock and one market. If you have multiple stocks, a modification to the ticker\_covariance equation after .loc is probably needed. Last, if you want to calculate beta values for the periods before the full window (e.g. stock\_data begins 1 year ago, but you use 3 yrs of data), then you can modify the above to an expanding (instead of rolling) window with the same calculation and then .combine\_first() the two.
|
Created a simple python package [finance-calculator](https://finance-calculator.readthedocs.io/en/latest/usage.html) based on numpy and pandas to calculate financial ratios including beta. I am using the simple formula ([as per investopedia](https://www.investopedia.com/ask/answers/070615/what-formula-calculating-beta.asp)):
```
beta = covariance(returns, benchmark returns) / variance(benchmark returns)
```
Covariance and variance are directly calculated in pandas which makes it fast. Using the api in the package is also simple:
```
import finance_calculator as fc
beta = fc.get_beta(scheme_data, benchmark_data, tail=False)
```
which will give you a dataframe of date and beta or the last beta value if tail is true.
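For reference, a minimal pandas sketch of that same formula (hypothetical names, computing a single full-sample beta rather than a rolling one):
```
import pandas as pd

def full_sample_beta(returns, benchmark):
    # beta = covariance(returns, benchmark returns) / variance(benchmark returns)
    aligned = pd.concat([returns, benchmark], axis=1).dropna()
    return aligned.iloc[:, 0].cov(aligned.iloc[:, 1]) / aligned.iloc[:, 1].var()
```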
|
35,257,550
|
I want to migrate from sqlite3 to MySQL in Django. First I used below command:
```
python manage.py dumpdata > datadump.json
```
then I changed the settings of my Django application and configured it with my new MySQL database. Finally, I used the following command:
```
python manage.py loaddata datadump.json
```
but I got this error:
>
> integrityError: Problem installing fixtures: The row in table
> 'django\_admin\_log' with primary key '20' has an invalid foreign key:
> django\_admin\_log.user\_id contains a value '19' that does not have a
> corresponding value in auth\_user.id.
>
>
>
|
2016/02/07
|
[
"https://Stackoverflow.com/questions/35257550",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5214998/"
] |
You have a consistency error in your data: the django\_admin\_log table refers to an auth\_user row that does not exist. sqlite does not enforce foreign key constraints, but mysql does. You need to fix the data before you can import it into mysql.
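One hedged way to do that fix (a sketch, assuming the orphaned admin-log rows are disposable): in `python manage.py shell` against the old sqlite database, delete log entries whose user no longer exists, then re-run `dumpdata`:
```
from django.contrib.admin.models import LogEntry
from django.contrib.auth.models import User

# sqlite never enforced the foreign key, so orphaned log rows can exist.
valid_ids = set(User.objects.values_list('id', flat=True))
LogEntry.objects.exclude(user_id__in=valid_ids).delete()
```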
|
I had to move my database from Postgres to a MySQL database.
This worked for me:
Export (old machine):
```
python manage.py dumpdata --natural --all --indent=2 --exclude=sessions --format=xml > dump.xml
```
Import (new machine):
(note that for older versions of Django you'll need **syncdb** instead of migrate)
```
manage.py migrate --no-initial-data
```
Get SQL for resetting Database:
```
manage.py sqlflush
```
settings.py:
```
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'asdf',
        'USER': 'asdf',
        'PASSWORD': 'asdf',
        'HOST': 'localhost',
        #IMPORTANT!!
        'OPTIONS': {
            "init_command": "SET foreign_key_checks = 0;",
        },
    }
}
```
Then load the data:
```
python manage.py loaddata dump.xml
```
|
44,481,386
|
I have a python script which is used to remove noise from the background of an image. When I call this script from the terminal it works fine without any error. I am calling the script from the terminal as below:
```
/usr/bin/python noise.py 1.png 100
```
But when I try to call it from PHP under Apache, it gives me the below error:
```
Traceback (most recent call last): File "./noise.py", line 2, in from PIL import Image, ImageFilter ImportError: No module named PIL
```
Can someone help me resolve this issue? I tried giving permission to the www-data user for that script, like this:
```
sudo chown www-data:www-data noise.py
```
But it doesn't help. Please help me.
|
2017/06/11
|
[
"https://Stackoverflow.com/questions/44481386",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3198113/"
] |
Use this as a drawable
```
<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android"
android:id="@+id/listview_background_shape">
<stroke android:width="2dp" android:color="@android:color/transparent" />
<padding android:left="2dp"
android:top="2dp"
android:right="2dp"
android:bottom="2dp" />
<corners android:radius="5dp" />
<solid android:color="@android:color/transparent" />
</shape>
```
and put it as background for the ImageButton
|
You can also make the `android:background="@null"` and remove `android:cropToPadding="false"`:
```
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<ImageButton
android:id="@+id/imageButton"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentTop="true"
android:layout_centerHorizontal="true"
android:adjustViewBounds="true"
android:background="@null"
android:scaleType="fitXY"
app:srcCompat="@mipmap/ic_launcher" />
<ImageButton
android:id="@+id/imageButton6"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_below="@id/imageButton"
android:layout_centerHorizontal="true"
android:adjustViewBounds="true"
android:background="@null"
android:scaleType="fitXY"
app:srcCompat="@mipmap/ic_launcher" />
<ImageButton
android:id="@+id/imageButton4"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_below="@id/imageButton6"
android:layout_centerHorizontal="true"
android:adjustViewBounds="true"
android:background="@null"
android:scaleType="fitXY"
app:srcCompat="@mipmap/ic_launcher" />
</RelativeLayout>
```
|
44,481,386
|
I have a python script which is used to remove noise from the background of an image. When I call this script from the terminal it works fine without any error. I am calling the script from the terminal as below:
```
/usr/bin/python noise.py 1.png 100
```
But when I try to call it from PHP under Apache, it gives me the below error:
```
Traceback (most recent call last): File "./noise.py", line 2, in from PIL import Image, ImageFilter ImportError: No module named PIL
```
Can someone help me resolve this issue? I tried giving permission to the www-data user for that script, like this:
```
sudo chown www-data:www-data noise.py
```
But it doesn't help. Please help me.
|
2017/06/11
|
[
"https://Stackoverflow.com/questions/44481386",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3198113/"
] |
Use this as a drawable
```
<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android"
android:id="@+id/listview_background_shape">
<stroke android:width="2dp" android:color="@android:color/transparent" />
<padding android:left="2dp"
android:top="2dp"
android:right="2dp"
android:bottom="2dp" />
<corners android:radius="5dp" />
<solid android:color="@android:color/transparent" />
</shape>
```
and put it as background for the ImageButton
|
An example to insert in all ImageButtons; remove the extra padding statements if they do not apply to you:
```
<ImageButton
android:id="@+id/imageButton4"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_below="@id/imageButton6"
android:layout_centerHorizontal="true"
app:srcCompat="@drawable/btnE"
android:adjustViewBounds="true"
android:paddingRight="30dp"
android:paddingLeft="30dp"
android:paddingTop="30dp"
android:paddingBottom="30dp"
android:background="@android:color/transparent"
android:scaleType="fitXY" />
```
|
44,481,386
|
I have a python script which is used to remove noise from the background of an image. When I call this script from the terminal it works fine without any error. I am calling the script from the terminal as below:
```
/usr/bin/python noise.py 1.png 100
```
But when I try to call it from PHP under Apache, it gives me the below error:
```
Traceback (most recent call last): File "./noise.py", line 2, in from PIL import Image, ImageFilter ImportError: No module named PIL
```
Can someone help me resolve this issue? I tried giving permission to the www-data user for that script, like this:
```
sudo chown www-data:www-data noise.py
```
But it doesn't help. Please help me.
|
2017/06/11
|
[
"https://Stackoverflow.com/questions/44481386",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3198113/"
] |
Use this as a drawable
```
<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android"
android:id="@+id/listview_background_shape">
<stroke android:width="2dp" android:color="@android:color/transparent" />
<padding android:left="2dp"
android:top="2dp"
android:right="2dp"
android:bottom="2dp" />
<corners android:radius="5dp" />
<solid android:color="@android:color/transparent" />
</shape>
```
and put it as background for the ImageButton
|
btn\_border XML in res/drawable
```
<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android"
android:shape="rectangle" >
<solid android:color="#85d1fa" />
<stroke
android:width="2.2dp"
android:color="#ffffff" />
</shape>
```
ImageButton on your Layout XML
```
<ImageButton
android:id="@+id/imageButton4"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_below="@id/imageButton6"
android:layout_centerHorizontal="true"
app:srcCompat="@drawable/btnE"
android:adjustViewBounds="true"
android:paddingRight="30dp"
android:paddingLeft="30dp"
android:paddingTop="30dp"
android:paddingBottom="30dp"
android:background="@drawable/btn_border"
android:scaleType="fitXY" />
```
|
55,939,474
|
I am trying to deploy the lambda function along with the `serverless.yml` file to AWS, but it throws the below error.
The following is the function defined in the YAML file:
```
functions:
s3-thumbnail-generator:
handler:handler.s3_thumbnail_generator
events:
- s3:
bucket: ${self:custom.bucket}
event: s3.ObjectCreated:*
rules:
- suffix: .png
plugins:
- serverless-python-requirements
```
The error I am getting:
>
> can not read a block mapping entry; a multiline key may not be an implicit key in serverless.yml" at line 45, column 10:
>
>
>
I need to understand how to fix this issue in the YAML file in order to deploy the function to AWS.
|
2019/05/01
|
[
"https://Stackoverflow.com/questions/55939474",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11040619/"
] |
The problem is that there is no value indicator (`:`) at the end of the line:
```
handler:handler.s3_thumbnail_generator
```
so the parser continues to try and gather a multi-line plain scalar by adding `events` followed by a value indicator. But a multi-line plain scalar cannot be a key in YAML.
It is unclear what your actual error is. It might be that you need to add the value indicator and have a colon embedded in your key:
```
functions:
s3-thumbnail-generator:
handler:handler.s3_thumbnail_generator:
events:
- s3:
bucket: ${self:custom.bucket}
event: s3.ObjectCreated:*
rules:
- suffix: .png
plugins:
- serverless-python-requirements
```
Or it could be that that colon should have been a value indicator (which usually needs a following space) and you were sloppy with indentation:
```
functions:
s3-thumbnail-generator:
handler: handler.s3_thumbnail_generator
events:
- s3:
bucket: ${self:custom.bucket}
event: s3.ObjectCreated:*
rules:
- suffix: .png
plugins:
- serverless-python-requirements
```
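If you want to see the parser's complaint directly, here is a small reproduction sketch using PyYAML (an assumption: serverless uses a YAML parser with the same plain-scalar rules):
```
import yaml

broken = """
functions:
  s3-thumbnail-generator:
    handler:handler.s3_thumbnail_generator
    events:
"""
try:
    yaml.safe_load(broken)
except yaml.YAMLError as exc:
    # 'handler:handler...' plus the next line is read as one multi-line
    # plain scalar, so the ':' after 'events' is rejected.
    print(exc)
```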
|
If this is your original file, there is a syntax error in your YAML. I added a note under the line with the possible error:
```
functions:
s3-thumbnail-generator:
handler:handler.s3_thumbnail_generator
events:
- s3:
bucket: ${self:custom.bucket}
event: s3.ObjectCreated:*
rules:
- suffix: .png
^^^ this line should be indented one level
plugins:
- serverless-python-requirements
```
|
11,743,378
|
I'm trying to talk to `supervisor` over xmlrpc. Based on [`supervisorctl`](https://github.com/Supervisor/supervisor/blob/master/supervisor/supervisorctl.py) (especially [this line](https://github.com/Supervisor/supervisor/blob/master/supervisor/options.py#L1512)), I have the following, which seems like it should work, and indeed it works, in so far as it connects enough to receive an error from the server:
```
#socketpath is the full path to the socket, which exists
# None and None are the default username and password in the supervisorctl options
In [12]: proxy = xmlrpclib.ServerProxy('http://127.0.0.1', transport=supervisor.xmlrpc.SupervisorTransport(None, None, serverurl='unix://'+socketpath))
In [13]: proxy.supervisor.getState()
```
Resulting in this error:
```
---------------------------------------------------------------------------
ProtocolError Traceback (most recent call last)
/home/marcintustin/webapps/django/oneclickcosvirt/oneclickcos/<ipython-input-13-646258924bc2> in <module>()
----> 1 proxy.supervisor.getState()
/usr/local/lib/python2.7/xmlrpclib.pyc in __call__(self, *args)
1222 return _Method(self.__send, "%s.%s" % (self.__name, name))
1223 def __call__(self, *args):
-> 1224 return self.__send(self.__name, args)
1225
1226 ##
/usr/local/lib/python2.7/xmlrpclib.pyc in __request(self, methodname, params)
1576 self.__handler,
1577 request,
-> 1578 verbose=self.__verbose
1579 )
1580
/home/marcintustin/webapps/django/oneclickcosvirt/lib/python2.7/site-packages/supervisor/xmlrpc.pyc in request(self, host, handler, request_body, verbose)
469 r.status,
470 r.reason,
--> 471 '' )
472 data = r.read()
473 p, u = self.getparser()
ProtocolError: <ProtocolError for 127.0.0.1/RPC2: 401 Unauthorized>
```
This is the `unix_http_server` section of `supervisord.conf`:
```
[unix_http_server]
file=/home/marcintustin/webapps/django/oneclickcosvirt/tmp/supervisor.sock ; (the path to the socket file)
;chmod=0700 ; socket file mode (default 0700)
;chown=nobody:nogroup ; socket file uid:gid owner
;username=user ; (default is no username (open server))
;password=123 ; (default is no password (open server))
```
So, there should be no authentication problems.
It seems like my code is in all material respects identical to the equivalent code from `supervisorctl`, but `supervisorctl` actually works. What am I doing wrong?
|
2012/07/31
|
[
"https://Stackoverflow.com/questions/11743378",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21640/"
] |
Your code looks substantially correct. I'm running Supervisor 3.0 with Python 2.7, and given the following:
```
import supervisor.xmlrpc
import xmlrpclib
p = xmlrpclib.ServerProxy('http://127.0.0.1',
transport=supervisor.xmlrpc.SupervisorTransport(
None, None,
'unix:///home/lars/lib/supervisor/tmp/supervisor.sock'))
print p.supervisor.getState()
```
I get:
```
{'statename': 'RUNNING', 'statecode': 1}
```
Are you certain that your running Supervisor instance is using the configuration file you think it is? If you run `supervisord` in debug mode, do you see the connection?
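For instance (a sketch; the config path is a placeholder):
```
supervisord --nodaemon --loglevel debug -c /path/to/supervisord.conf
```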
|
I don't use the ServerProxy from xmlrpclib, I use the Server class instead and I don't have to define any transports or paths to sockets. Not sure if your purposes require that, but here's a thin client I use fairly frequently. It's pretty much straight out of the docs.
```
python -c "import xmlrpclib;\
supervisor_client = xmlrpclib.Server('http://localhost:9001/RPC2');\
print( supervisor_client.supervisor.stopProcess(<some_proc_name>) )"
```
|
11,743,378
|
I'm trying to talk to `supervisor` over xmlrpc. Based on [`supervisorctl`](https://github.com/Supervisor/supervisor/blob/master/supervisor/supervisorctl.py) (especially [this line](https://github.com/Supervisor/supervisor/blob/master/supervisor/options.py#L1512)), I have the following, which seems like it should work, and indeed it works, in so far as it connects enough to receive an error from the server:
```
#socketpath is the full path to the socket, which exists
# None and None are the default username and password in the supervisorctl options
In [12]: proxy = xmlrpclib.ServerProxy('http://127.0.0.1', transport=supervisor.xmlrpc.SupervisorTransport(None, None, serverurl='unix://'+socketpath))
In [13]: proxy.supervisor.getState()
```
Resulting in this error:
```
---------------------------------------------------------------------------
ProtocolError Traceback (most recent call last)
/home/marcintustin/webapps/django/oneclickcosvirt/oneclickcos/<ipython-input-13-646258924bc2> in <module>()
----> 1 proxy.supervisor.getState()
/usr/local/lib/python2.7/xmlrpclib.pyc in __call__(self, *args)
1222 return _Method(self.__send, "%s.%s" % (self.__name, name))
1223 def __call__(self, *args):
-> 1224 return self.__send(self.__name, args)
1225
1226 ##
/usr/local/lib/python2.7/xmlrpclib.pyc in __request(self, methodname, params)
1576 self.__handler,
1577 request,
-> 1578 verbose=self.__verbose
1579 )
1580
/home/marcintustin/webapps/django/oneclickcosvirt/lib/python2.7/site-packages/supervisor/xmlrpc.pyc in request(self, host, handler, request_body, verbose)
469 r.status,
470 r.reason,
--> 471 '' )
472 data = r.read()
473 p, u = self.getparser()
ProtocolError: <ProtocolError for 127.0.0.1/RPC2: 401 Unauthorized>
```
This is the `unix_http_server` section of `supervisord.conf`:
```
[unix_http_server]
file=/home/marcintustin/webapps/django/oneclickcosvirt/tmp/supervisor.sock ; (the path to the socket file)
;chmod=0700 ; socket file mode (default 0700)
;chown=nobody:nogroup ; socket file uid:gid owner
;username=user ; (default is no username (open server))
;password=123 ; (default is no password (open server))
```
So, there should be no authentication problems.
It seems like my code is in all material respects identical to the equivalent code from `supervisorctl`, but `supervisorctl` actually works. What am I doing wrong?
|
2012/07/31
|
[
"https://Stackoverflow.com/questions/11743378",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21640/"
] |
Your code looks substantially correct. I'm running Supervisor 3.0 with Python 2.7, and given the following:
```
import supervisor.xmlrpc
import xmlrpclib
p = xmlrpclib.ServerProxy('http://127.0.0.1',
transport=supervisor.xmlrpc.SupervisorTransport(
None, None,
'unix:///home/lars/lib/supervisor/tmp/supervisor.sock'))
print p.supervisor.getState()
```
I get:
```
{'statename': 'RUNNING', 'statecode': 1}
```
Are you certain that your running Supervisor instance is using the configuration file you think it is? If you run `supervisord` in debug mode, do you see the connection?
|
I faced the same issue; the problem was simple: `supervisord` was not running!
First:
```
supervisord
```
And then:
```
supervisorctl start all
```
**Done!** :)
>
> If you've set nodaemon to true, you must keep the process running in another tab of your terminal.
>
>
>
|
11,743,378
|
I'm trying to talk to `supervisor` over xmlrpc. Based on [`supervisorctl`](https://github.com/Supervisor/supervisor/blob/master/supervisor/supervisorctl.py) (especially [this line](https://github.com/Supervisor/supervisor/blob/master/supervisor/options.py#L1512)), I have the following, which seems like it should work, and indeed it works, in so far as it connects enough to receive an error from the server:
```
#socketpath is the full path to the socket, which exists
# None and None are the default username and password in the supervisorctl options
In [12]: proxy = xmlrpclib.ServerProxy('http://127.0.0.1', transport=supervisor.xmlrpc.SupervisorTransport(None, None, serverurl='unix://'+socketpath))
In [13]: proxy.supervisor.getState()
```
Resulting in this error:
```
---------------------------------------------------------------------------
ProtocolError Traceback (most recent call last)
/home/marcintustin/webapps/django/oneclickcosvirt/oneclickcos/<ipython-input-13-646258924bc2> in <module>()
----> 1 proxy.supervisor.getState()
/usr/local/lib/python2.7/xmlrpclib.pyc in __call__(self, *args)
1222 return _Method(self.__send, "%s.%s" % (self.__name, name))
1223 def __call__(self, *args):
-> 1224 return self.__send(self.__name, args)
1225
1226 ##
/usr/local/lib/python2.7/xmlrpclib.pyc in __request(self, methodname, params)
1576 self.__handler,
1577 request,
-> 1578 verbose=self.__verbose
1579 )
1580
/home/marcintustin/webapps/django/oneclickcosvirt/lib/python2.7/site-packages/supervisor/xmlrpc.pyc in request(self, host, handler, request_body, verbose)
469 r.status,
470 r.reason,
--> 471 '' )
472 data = r.read()
473 p, u = self.getparser()
ProtocolError: <ProtocolError for 127.0.0.1/RPC2: 401 Unauthorized>
```
This is the `unix_http_server` section of `supervisord.conf`:
```
[unix_http_server]
file=/home/marcintustin/webapps/django/oneclickcosvirt/tmp/supervisor.sock ; (the path to the socket file)
;chmod=0700 ; socket file mode (default 0700)
;chown=nobody:nogroup ; socket file uid:gid owner
;username=user ; (default is no username (open server))
;password=123 ; (default is no password (open server))
```
So, there should be no authentication problems.
It seems like my code is in all material respects identical to the equivalent code from `supervisorctl`, but `supervisorctl` actually works. What am I doing wrong?
|
2012/07/31
|
[
"https://Stackoverflow.com/questions/11743378",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21640/"
] |
I don't use the ServerProxy from xmlrpclib, I use the Server class instead and I don't have to define any transports or paths to sockets. Not sure if your purposes require that, but here's a thin client I use fairly frequently. It's pretty much straight out of the docs.
```
python -c "import xmlrpclib;\
supervisor_client = xmlrpclib.Server('http://localhost:9001/RPC2');\
print( supervisor_client.supervisor.stopProcess(<some_proc_name>) )"
```
|
I faced the same issue; the problem was simple: `supervisord` was not running!
First:
```
supervisord
```
And then:
```
supervisorctl start all
```
**Done!** :)
>
> If you've set nodaemon to true, you must keep the process running in another tab of your terminal.
>
>
>
|
26,476,939
|
What's a concise python way to say
```
if <none of the elements of this array are None>:
# do a bunch of stuff once
```
|
2014/10/21
|
[
"https://Stackoverflow.com/questions/26476939",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/554807/"
] |
The [`all`](https://docs.python.org/2/library/functions.html#all) builtin is nice for this. Given an iterable, it returns `True` if all elements of the iterable evaluate to `True`.
```
if all(x is not None for x in array):
# your code
```
|
You could use all
```
all(i is not None for i in l)
```
|
26,476,939
|
What's a concise python way to say
```
if <none of the elements of this array are None>:
# do a bunch of stuff once
```
|
2014/10/21
|
[
"https://Stackoverflow.com/questions/26476939",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/554807/"
] |
Why not simply,
```
None not in lst
```
|
The [`all`](https://docs.python.org/2/library/functions.html#all) builtin is nice for this. Given an iterable, it returns `True` if all elements of the iterable evaluate to `True`.
```
if all(x is not None for x in array):
# your code
```
|
26,476,939
|
What's a concise python way to say
```
if <none of the elements of this array are None>:
# do a bunch of stuff once
```
|
2014/10/21
|
[
"https://Stackoverflow.com/questions/26476939",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/554807/"
] |
Why not simply,
```
None not in lst
```
|
You could use all
```
all(i is not None for i in l)
```
|
26,476,939
|
What's a concise python way to say
```
if <none of the elements of this array are None>:
# do a bunch of stuff once
```
|
2014/10/21
|
[
"https://Stackoverflow.com/questions/26476939",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/554807/"
] |
Of course in this case, [Jared's answer](https://stackoverflow.com/a/26476967/908494) is obviously the shortest, and also the most readable. And it has other advantages (like automatically becoming O(1) or O(log N) if you switch from a list to a set or a SortedSet). But there will be cases where that doesn't work, and it's worth understanding how to get from your English-language statement, to the most direct translation, to the most idiomatic.
You start with "none of the elements in array are None".
Python doesn't have a "none of" function. But if you think about it, "none of" is exactly the same as "not any of". And it does have an "any of" function, `any`. So, the closest thing to a direct translation is:
```
if not any(element is None for element in array):
```
However, people who use `any` and `all` frequently (whether in Python, or in symbolic logic) usually get used to using [De Morgan's law](http://en.wikipedia.org/wiki/De_Morgan's_laws) to translate to a "normal" form. An `any` is just an iterated disjunction, and the negation of a disjunction is the conjunction of the negations, so, this translates into [Sam Mussmann's answer](https://stackoverflow.com/a/26476953/908494):
```
if all(element is not None for element in array):
```
|
You could use all
```
all(i is not None for i in l)
```
|
26,476,939
|
What's a concise python way to say
```
if <none of the elements of this array are None>:
# do a bunch of stuff once
```
|
2014/10/21
|
[
"https://Stackoverflow.com/questions/26476939",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/554807/"
] |
Why not simply,
```
None not in lst
```
|
Of course in this case, [Jared's answer](https://stackoverflow.com/a/26476967/908494) is obviously the shortest, and also the most readable. And it has other advantages (like automatically becoming O(1) or O(log N) if you switch from a list to a set or a SortedSet). But there will be cases where that doesn't work, and it's worth understanding how to get from your English-language statement, to the most direct translation, to the most idiomatic.
You start with "none of the elements in array are None".
Python doesn't have a "none of" function. But if you think about it, "none of" is exactly the same as "not any of". And it does have an "any of" function, `any`. So, the closest thing to a direct translation is:
```
if not any(element is None for element in array):
```
However, people who use `any` and `all` frequently (whether in Python, or in symbolic logic) usually get used to using [De Morgan's law](http://en.wikipedia.org/wiki/De_Morgan's_laws) to translate to a "normal" form. An `any` is just an iterated disjunction, and the negation of a disjunction is the conjunction of the negations, so, this translates into [Sam Mussmann's answer](https://stackoverflow.com/a/26476953/908494):
```
if all(element is not None for element in array):
```
|
59,888,355
|
I am having issues with having Conda install the library at this link:
<https://github.com/ozgur/python-firebase>
I am running: `conda install python-firebase`
This is the response I get:
```
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
PackagesNotFoundError: The following packages are not available from current channels:
- python-firebase
Current channels:
- https://repo.anaconda.com/pkgs/main/win-64
- https://repo.anaconda.com/pkgs/main/noarch
- https://repo.anaconda.com/pkgs/r/win-64
- https://repo.anaconda.com/pkgs/r/noarch
- https://repo.anaconda.com/pkgs/msys2/win-64
- https://repo.anaconda.com/pkgs/msys2/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
```
Does anyone have a solution? I successfully installed it through `pip`, but I can't get the package in the `Conda` environment.
Python 3.7.4
|
2020/01/23
|
[
"https://Stackoverflow.com/questions/59888355",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11964771/"
] |
You have to run this
```
conda install -c auto python-firebase
```
Take a look at [this](https://anaconda.org/auto/python-firebase)
|
Try doing `conda install -c auto python-firebase`
Check <https://anaconda.org/auto/python-firebase> for further information
|
59,888,355
|
I am having issues with having Conda install the library at this link:
<https://github.com/ozgur/python-firebase>
I am running: `conda install python-firebase`
This is the response I get:
```
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
PackagesNotFoundError: The following packages are not available from current channels:
- python-firebase
Current channels:
- https://repo.anaconda.com/pkgs/main/win-64
- https://repo.anaconda.com/pkgs/main/noarch
- https://repo.anaconda.com/pkgs/r/win-64
- https://repo.anaconda.com/pkgs/r/noarch
- https://repo.anaconda.com/pkgs/msys2/win-64
- https://repo.anaconda.com/pkgs/msys2/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
```
Does anyone have a solution? I successfully installed it through `pip`, but I can't get the package in the `Conda` environment.
Python 3.7.4
|
2020/01/23
|
[
"https://Stackoverflow.com/questions/59888355",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11964771/"
] |
Checking <https://anaconda.org/search?q=firebase> you can see that there is only one entry that has `win64` listed on the right side. Since you are running on windows, you need to select that one and then enter the correct installation command:
```
conda install -c nayyaung python-firebase
```
(Note that the channel `auto` suggested in other answers has only `linux-64` available)
As to your question from the comments:
>
> I guess I was asking if theres anyway for Conda to pull libraries out of my PIP env
>
>
>
I don't really know what you mean by `pull from my pip env`. If that means somehow pointing to the `site-packages` of another python installation, then no, this would be rather difficult to implement I guess. However, you can always `pip install` any package from pypi to your `conda` environments if they are not available for `win64` from the conda channels. Also [Read This](https://www.anaconda.com/using-pip-in-a-conda-environment/) on using `pip` in `conda` environments
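In practice that looks like the following (a sketch; `myenv` is a placeholder environment name):
```
conda activate myenv
python -m pip install python-firebase
```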
|
Try doing `conda install -c auto python-firebase`
Check <https://anaconda.org/auto/python-firebase> for further information
|
14,321,679
|
I'm new to programming and I chose Python as my first language because it's easy. But I'm confused by this code:
```
option = 1
while option != 0:
print "/n/n/n************MENU************" #Make a menu
print "1. Add numbers"
print "2. Find perimeter and area of a rectangle"
print "0. Forget it!"
print "*" * 28
option = input("Please make a selection: ") #Prompt user for a selection
if option == 1: #If option is 1, get input and calculate
firstnumber = input("Enter 1st number: ")
secondnumber = input("Enter 2nd number: ")
add = firstnumber + secondnumber
print firstnumber, "added to", secondnumber, "equals", add #show results
elif option == 2: #If option is 2, get input and calculate
length = input("Enter length: ")
width = input("Enter width: ")
perimeter = length * 2 + width * 2
area = length * width
print "The perimeter of your rectangle is", perimeter #show results
print "The area of your rectangle is", area
else: #if the input is anything else its not valid
print "That is not a valid option!"
```
Okay, I get everything below the `option` variable. I just want to know why we assigned `option = 1`, why we added it at the top of the program, and what its function is. Also, can we change its value? Please explain it in simple language as I'm new to programming.
|
2013/01/14
|
[
"https://Stackoverflow.com/questions/14321679",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1977722/"
] |
If you didn't create the variable `option` at the start of the program, the line
```
while option != 0:
```
would break, because no `option` variable would yet exist.
As for how to change its value, notice that it is changed every time the line:
```
option = input("Please make a selection: ")
```
happens- that is reassigning its value to the user's input.
|
Python requires a variable to be assigned a value before it can be used.
In this case, the loop condition `while option != 0` reads `option` before the user has made a selection, so it is initialized to `1` first; any value other than `0` would do, since `0` would end the loop before it started. While some languages are less stringent on variable declaration (PHP comes to mind), most require variables to exist prior to use.
Python does not require variables to be explicitly declared, only that they are given a value, which reserves the memory space. VB.NET, by default, on the other hand, requires variables to be explicitly declared...
```
Dim var as D
```
Which sets the variables `type` but doesn't give it an initial value.
See [this overview of Python variable types](http://www.tutorialspoint.com/python/python_variable_types.htm).
|
14,321,679
|
I'm new to programming and I chose Python as my first language because it's easy. But I'm confused by this code:
```
option = 1
while option != 0:
print "/n/n/n************MENU************" #Make a menu
print "1. Add numbers"
print "2. Find perimeter and area of a rectangle"
print "0. Forget it!"
print "*" * 28
option = input("Please make a selection: ") #Prompt user for a selection
if option == 1: #If option is 1, get input and calculate
firstnumber = input("Enter 1st number: ")
secondnumber = input("Enter 2nd number: ")
add = firstnumber + secondnumber
print firstnumber, "added to", secondnumber, "equals", add #show results
elif option == 2: #If option is 2, get input and calculate
length = input("Enter length: ")
width = input("Enter width: ")
perimeter = length * 2 + width * 2
area = length * width
print "The perimeter of your rectangle is", perimeter #show results
print "The area of your rectangle is", area
else: #if the input is anything else its not valid
print "That is not a valid option!"
```
Okay, I get everything below the `option` variable. I just want to know why we assigned `option = 1`, why we added it at the top of the program, and what its function is. Also, can we change its value? Please explain it in simple language as I'm new to programming.
|
2013/01/14
|
[
"https://Stackoverflow.com/questions/14321679",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1977722/"
] |
So that the while statement below it doesn't try to check a non-existent name. It doesn't *have* to be assigned `1`; it just happens to be the first non-zero natural number (any value other than `0` works, since `0` would end the loop immediately).
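A tiny sketch of the failure mode without that first assignment:
```
# Referencing a name before any assignment raises NameError.
try:
    while option != 0:
        pass
except NameError as exc:
    print(exc)   # name 'option' is not defined
```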
|
Python requires a variable to be assigned a value before it can be used.
In this case, the loop condition `while option != 0` reads `option` before the user has made a selection, so it is initialized to `1` first; any value other than `0` would do, since `0` would end the loop before it started. While some languages are less stringent on variable declaration (PHP comes to mind), most require variables to exist prior to use.
Python does not require variables to be explicitly declared, only that they are given a value, which reserves the memory space. VB.NET, by default, on the other hand, requires variables to be explicitly declared...
```
Dim var as D
```
Which sets the variable's `type` but doesn't give it an initial value.
See [this overview of Python variable types](http://www.tutorialspoint.com/python/python_variable_types.htm).
|
59,951,747
|
I tried the example project of Flask-MQTT (<https://github.com/stlehmann/Flask-MQTT>) with my local Mosquitto broker, but unfortunately it is not working.
Subscribe and publish are not forwarded correctly, so I've added some logger messages:
```
def handle_connect(client, userdata, flags, rc):
print("CLIENT CONNECTED")
@mqtt.on_disconnect()
def handle_disconnect():
print("CLIENT DISCONNECTED")
@mqtt.on_log()
def handle_logging(client, userdata, level, buf):
print(level, buf)
```
>
> 16 Sending CONNECT (u0, p0, wr0, wq0, wf0, c1, k30) client\_id=b'flask\_mqtt'
>
> CLIENT DISCONNECTED
>
> 16 Received CONNACK (0, 0)
>
> CLIENT CONNECTED
>
> 16 Sending CONNECT (u0, p0, wr0, wq0, wf0, c1, k30) client\_id=b'flask\_mqtt'
>
> CLIENT DISCONNECTED
>
> 16 Received CONNACK (0, 0)
>
>
>
>
The Mosquitto broker shows that it disconnects the Flask app because the client is already connected:
>
> 1580163250: New connection from 127.0.0.1 on port 1883.
>
> 1580163250: Client flask\_mqtt already connected, closing old connection.
>
> 1580163250: New client connected from 127.0.0.1 as flask\_mqtt (p2, c1, k30).
>
> 1580163250: No will message specified.
>
> 1580163250: Sending CONNACK to flask\_mqtt (0, 0)
>
> 1580163251: New connection from 127.0.0.1 on port 1883.
>
> 1580163251: Client flask\_mqtt already connected, closing old connection.
>
> 1580163251: New client connected from 127.0.0.1 as flask\_mqtt (p2, c1, k30).
>
> 1580163251: No will message specified.
>
> 1580163251: Sending CONNACK to flask\_mqtt (0, 0)
>
> 1580163251: Socket error on client flask\_mqtt, disconnecting.
>
>
>
>
I also tested a simple Python paho-mqtt client example without Flask, and it works as expected.
I also tried several loop starts inside the flask-mqtt code (`self.client.loop_start()` --> `self.client.loop_forever()`) ... that did not change anything.
So, any idea where the problem is? I also debugged the flask-mqtt code and cannot find issues.
(My Python version is Python 3.6.9 (default, Nov 7 2019, 10:44:02).)
(My host system is elementary Linux.)
Maybe the Flask-MQTT lib is deprecated?
Any hint or idea is appreciated!
|
2020/01/28
|
[
"https://Stackoverflow.com/questions/59951747",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5409884/"
] |
The reason this is failing is in the mosquitto logs.
```
1580163250: New connection from 127.0.0.1 on port 1883.
1580163250: Client flask_mqtt already connected, closing old connection.
1580163250: New client connected from 127.0.0.1 as flask_mqtt (p2, c1, k30).
1580163250: No will message specified.
1580163250: Sending CONNACK to flask_mqtt (0, 0)
```
Every client that connects to the broker must have a unique client id. In this case the Flask client is trying to make multiple connections to the broker with the same client id. When the second connection starts, the broker sees that the client id is the same and automatically disconnects the first.
You've not actually supplied any code showing how you are setting up the client connections, so we can't make any suggestions on how to actually fix it. Did you pay attention to the comment at the end of the last example in the README.md on the GitHub page?
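If several connections are genuinely needed (for example, when Flask's debug reloader spawns a second process), one workaround is to give each process a unique client id. A minimal sketch, reusing the `MQTT_CLIENT_ID` config key and the `flask_mqtt` base name from the question; the random-suffix scheme is just an illustration:
```
import uuid
from flask import Flask

app = Flask(__name__)
# Give each process its own client id so the broker does not
# treat the next connection as a duplicate and drop this one.
app.config['MQTT_CLIENT_ID'] = 'flask_mqtt-' + uuid.uuid4().hex[:8]
```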
|
Thanks for your fast reply! This helped me a lot and fixed the problem:
The code is
```
"""
A small Test application to show how to use Flask-MQTT.
"""
import eventlet
import json
from flask import Flask, render_template
from flask_mqtt import Mqtt
from flask_socketio import SocketIO
from flask_bootstrap import Bootstrap
eventlet.monkey_patch()
app = Flask(__name__)
app.config['SECRET'] = 'my secret key'
app.config['TEMPLATES_AUTO_RELOAD'] = True
app.config['MQTT_BROKER_URL'] = '127.0.0.1'
app.config['MQTT_BROKER_PORT'] = 1883
app.config['MQTT_CLIENT_ID'] = 'flask_mqtt'
#app.config['MQTT_USERNAME'] = ''
#app.config['MQTT_PASSWORD'] = ''
app.config['MQTT_KEEPALIVE'] = 30
#app.config['MQTT_TLS_ENABLED'] = False
#app.config['MQTT_REFRESH_TIME'] = 1.0 # refresh time in seconds
#app.config['MQTT_LAST_WILL_TOPIC'] = 'home/lastwill'
#app.config['MQTT_LAST_WILL_MESSAGE'] = 'bye'
#app.config['MQTT_LAST_WILL_QOS'] = 2
# Parameters for SSL enabled
# app.config['MQTT_BROKER_PORT'] = 8883
# app.config['MQTT_TLS_ENABLED'] = True
# app.config['MQTT_TLS_INSECURE'] = True
# app.config['MQTT_TLS_CA_CERTS'] = 'ca.crt'
mqtt = Mqtt(app)
socketio = SocketIO(app)
bootstrap = Bootstrap(app)
@app.route('/')
def index():
return render_template('index.html')
@socketio.on('publish')
def handle_publish(json_str):
data = json.loads(json_str)
mqtt.publish(data['topic'], data['message'], data['qos'])
@socketio.on('subscribe')
def handle_subscribe(json_str):
data = json.loads(json_str)
mqtt.subscribe(data['topic'], data['qos'])
@socketio.on('unsubscribe_all')
def handle_unsubscribe_all():
mqtt.unsubscribe_all()
@mqtt.on_message()
def handle_mqtt_message(client, userdata, message):
data = dict(
topic=message.topic,
payload=message.payload.decode(),
qos=message.qos,
)
socketio.emit('mqtt_message', data=data)
@mqtt.on_log()
def handle_logging(client, userdata, level, buf):
# print(level, buf)
pass
@mqtt.on_connect()
def handle_connect(client, userdata, flags, rc):
print("CLIENT CONNECTED")
@mqtt.on_disconnect()
def handle_disconnect():
print("CLIENT DISCONNECTED")
@mqtt.on_log()
def handle_logging(client, userdata, level, buf):
print(level, buf)
if __name__ == '__main__':
socketio.run(app, host='0.0.0.0', port=5000, use_reloader=True, debug=True)
```
Changing `use_reloader` to `False` solves the problem!
In the example it is set to `True`; maybe that should be fixed.
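Concretely, the last line of the example becomes:
```
socketio.run(app, host='0.0.0.0', port=5000, use_reloader=False, debug=True)
```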
By the way, what does `use_reloader` mean? (I am new to Flask.)
Thanks a lot!
|
53,932,357
|
When installing packages with `sudo apt-get install`, or when building libraries from source, inside a Python virtual environment (I am not talking about `pip install`), does doing it inside the virtual environment isolate the applications being installed? I mean, do they exist only inside the Python virtual environment?
|
2018/12/26
|
[
"https://Stackoverflow.com/questions/53932357",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2651062/"
] |
Things that a virtual environment gives you an isolated version of:
* You get a separate `PATH` entry, so unqualified command-line references to `python`, `pip`, etc., will refer to the selected Python distribution. This can be convenient if you have many copies of Python installed on the system (common on developer workstations). This means that a shebang line like `#!/usr/bin/env python` will "do the right thing" inside of a virtualenv (on a Unix or Unix-like system, at least).
* You get a separate `site-packages` directory, so Python packages (installed using `pip` or built locally inside this environment using e.g. `setup.py build`) are installed locally to the virtualenv and not in a system-wide location. This is especially useful on systems where the core Python interpreter is installed in a place where unprivileged users are not allowed to write files, as it allows each user to have their own private virtualenvs with third-party packages installed, without needing to use `sudo` or equivalent to install those third-party packages system-wide.
... and that's about it.
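As a quick check of both points from inside an activated virtualenv (a sketch; the printed paths are hypothetical examples, and `sys.base_prefix` requires Python 3.3+):
```
import sys
import sysconfig

# Inside a virtualenv, sys.prefix points at the environment directory,
# while sys.base_prefix still points at the interpreter it was created from.
print(sys.prefix)                     # e.g. /home/user/demo-env
print(sys.base_prefix)                # e.g. /usr
print(sysconfig.get_path('purelib'))  # the environment's private site-packages
```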
A virtual environment will *not* isolate you from:
* Your operating system (Linux, Windows) or machine architecture (x86).
* Scripts that reference a particular Python interpreter directly (e.g. `#!/usr/bin/python`).
* Non-Python things on your system `PATH` (e.g. third party programs or utilities installed via your operating system's package manager).
* Non-Python libraries or headers that are installed into an operating-system-specific location (e.g. `/usr/lib`, `/usr/include`, `/usr/local/lib`, `/usr/local/include`).
* Python packages that are installed using the operating system's package manager (e.g. `apt`) rather than a Python package manager (`pip`) might not be visible from the virtualenv's `site-packages` folder, but the "native" parts of such packages (in e.g. `/usr/lib`) will (probably) still be visible.
|
As per the comment by @deceze, virtual environments have no influence over `apt` operations.
When building from source, any compiled binaries will be linked to the python binaries of that environment. So if your virtualenv python version varies from the system version, and you use the system python (path problems usually), you can encounter runtime linking errors.
As for isolation, this same property (binary compatibility) isolates you from system upgrades which might change your system Python binaries. Generally things are stable within the 2.x and 3.x lines, so it isn't likely to happen. But it has happened, and it can.
And of course, when building from source inside a virtualenv, installed packages are stashed in that virtualenv; no other python binary will have access to those packages, unless you are manipulating your path or PYTHONPATH in strange ways.
|
13,728,325
|
I'm trying to use Z3 from its Python interface, but I would prefer not to do a system-wide install (i.e. `sudo make install`). I tried doing a local install with a --prefix, but the Makefile is hard-coded to install into the system's Python directory.
Best case, I would like to run z3py directly from the build directory, in the same way I use the z3 binary (build/z3). Does anyone know how to, or have a script to, run z3py directly from the build directory, without doing an install?
|
2012/12/05
|
[
"https://Stackoverflow.com/questions/13728325",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1406686/"
] |
Yes, you can do it by including the build directory in your `LD_LIBRARY_PATH` and `PYTHONPATH` environment variables.
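For example, a minimal sketch (assuming the Z3 checkout lives at `~/z3` and was built in `~/z3/build`; both paths are hypothetical):
```
# Run as:  LD_LIBRARY_PATH=~/z3/build python use_z3.py
# (LD_LIBRARY_PATH must be set before the interpreter starts)
import os
import sys

sys.path.insert(0, os.path.expanduser('~/z3/build'))  # where z3.py lives

import z3
x = z3.Int('x')
z3.solve(x > 3, x < 5)  # prints a model such as [x = 4]
```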
|
If you don't care about the python interface, edit the `build/Makefile` and comment out or delete the following lines in the `install` target:
```
@cp libz3$(SO_EXT) /usr/lib/python2.7/dist-packages/libz3$(SO_EXT)
@cp z3*.pyc /usr/lib/python2.7/dist-packages
```
|
40,222,971
|
The answer presented here: [How to work with surrogate pairs in Python?](https://stackoverflow.com/questions/38147259/how-to-work-with-surrogate-pairs-in-python) tells you how to convert a surrogate pair, such as `'\ud83d\ude4f'`, into a single non-BMP unicode character (the answer being `"\ud83d\ude4f".encode('utf-16', 'surrogatepass').decode('utf-16')`). I would like to know how to do this in reverse. How can I, using Python, find the equivalent surrogate pair for a non-BMP character, converting `'\U0001f64f'` (🙏) back to `'\ud83d\ude4f'`? I couldn't find a clear answer to that.
|
2016/10/24
|
[
"https://Stackoverflow.com/questions/40222971",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6555884/"
] |
You'll have to manually replace each non-BMP point with the surrogate pair. You could do this with a regular expression:
```
import re
_nonbmp = re.compile(r'[\U00010000-\U0010FFFF]')
def _surrogatepair(match):
char = match.group()
assert ord(char) > 0xffff
encoded = char.encode('utf-16-le')
return (
chr(int.from_bytes(encoded[:2], 'little')) +
chr(int.from_bytes(encoded[2:], 'little')))
def with_surrogates(text):
return _nonbmp.sub(_surrogatepair, text)
```
Demo:
```
>>> with_surrogates('\U0001f64f')
'\ud83d\ude4f'
```
|
It's a little complex, but here's a one-liner to convert a single character:
```
>>> import struct
>>> emoji = '\U0001f64f'
>>> ''.join(chr(x) for x in struct.unpack('>2H', emoji.encode('utf-16be')))
'\ud83d\ude4f'
```
To convert a mix of characters requires surrounding that expression with another:
```
>>> emoji_str = 'Here is a non-BMP character: \U0001f64f'
>>> ''.join(c if c <= '\uffff' else ''.join(chr(x) for x in struct.unpack('>2H', c.encode('utf-16be'))) for c in emoji_str)
'Here is a non-BMP character: \ud83d\ude4f'
```
|
52,264,354
|
I have the following dataframe:
```
Sentence
0 Cat is a big lion
1 Dogs are descendants of wolf
2 Elephants are pachyderm
3 Pachyderm animals include rhino, Elephants and hippopotamus
```
I need to write Python code that looks at the words in the sentences above and calculates the sum of scores for each sentence, based on the following lookup data frame.
```
Name Score
cat 1
dog 2
wolf 2
lion 3
elephants 5
rhino 4
hippopotamus 5
```
For example, for row 0, the score will be 1 (cat) + 3 (lion) = 4
I am looking to create an output that looks like following.
```
Sentence Value
0 Cat is a big lion 4
1 Dogs are descendants of wolf 4
2 Elephants are pachyderm 5
3 Pachyderm animals include rhino, Elephants and hippopotamus 14
```
|
2018/09/10
|
[
"https://Stackoverflow.com/questions/52264354",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9244542/"
] |
As a first effort, you can try a `split`-and-`map`-based approach, then compute the per-sentence score by summing over the first index level (`sum(level=0)` is a groupby under the hood).
```
v = df1['Sentence'].str.split(r'[\s.!?,]+', expand=True).stack().str.lower()
df1['Value'] = (
v.map(df2.set_index('Name')['Score'])
.sum(level=0)
.fillna(0, downcast='infer'))
```
```
df1
Sentence Value
0 Cat is a big lion 4
1 Dogs are descendants of wolf 4 # s/dog/dogs in df2
2 Elephants are pachyderm 5
3 Pachyderm animals include rhino, Elephants and... 14
```
|
### `nltk`
You may need to download some data first
```
import nltk
nltk.download('punkt')
```
Then set up stemming and tokenizing
```
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize
ps = PorterStemmer()
```
Create a handy dictionary
```
m = dict(zip(map(ps.stem, scores.Name), scores.Score))
```
And generate scores
```
def f(s):
return sum(filter(None, map(m.get, map(ps.stem, word_tokenize(s)))))
df.assign(Score=[*map(f, df.Sentence)])
Sentence Score
0 Cat is a big lion 4
1 Dogs are descendants of wolf 4
2 Elephants are pachyderm 5
3 Pachyderm animals include rhino, Elephants and... 14
```
|
52,264,354
|
I have the following dataframe:
```
Sentence
0 Cat is a big lion
1 Dogs are descendants of wolf
2 Elephants are pachyderm
3 Pachyderm animals include rhino, Elephants and hippopotamus
```
I need to write Python code that looks at the words in the sentences above and calculates the sum of scores for each sentence, based on the following lookup data frame.
```
Name Score
cat 1
dog 2
wolf 2
lion 3
elephants 5
rhino 4
hippopotamus 5
```
For example, for row 0, the score will be 1 (cat) + 3 (lion) = 4
I am looking to create an output that looks like following.
```
Sentence Value
0 Cat is a big lion 4
1 Dogs are descendants of wolf 4
2 Elephants are pachyderm 5
3 Pachyderm animals include rhino, Elephants and hippopotamus 14
```
|
2018/09/10
|
[
"https://Stackoverflow.com/questions/52264354",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9244542/"
] |
As a first effort, you can try a `split`-and-`map`-based approach, then compute the per-sentence score by summing over the first index level (`sum(level=0)` is a groupby under the hood).
```
v = df1['Sentence'].str.split(r'[\s.!?,]+', expand=True).stack().str.lower()
df1['Value'] = (
v.map(df2.set_index('Name')['Score'])
.sum(level=0)
.fillna(0, downcast='infer'))
```
```
df1
Sentence Value
0 Cat is a big lion 4
1 Dogs are descendants of wolf 4 # s/dog/dogs in df2
2 Elephants are pachyderm 5
3 Pachyderm animals include rhino, Elephants and... 14
```
|
Trying to use `findall` with `re` and the `re.I` flag:
```
df.Sentence.str.findall(df1.Name.str.cat(sep='|'),flags=re.I).\
map(lambda x : sum([df1.loc[df1.Name==str.lower(y),'Score' ].values for y in x])[0])
Out[49]:
0 4
1 4
2 5
3 14
Name: Sentence, dtype: int64
```
|
22,590,892
|
I have a python list of string tuples of the form: `lst = [('xxx', 'yyy'), ...etc]`. The list has around `8154741` tuples. I used a profiler and it says that the list takes around 500 MB in memory.
Then I wrote all the tuples in the list to a text file, and it took around 72 MB of disk space.
I have three questions:
* Why is the memory consumption different from the disk usage?
* Is it reasonable for such a list to consume 500 MB of memory?
* Is there a way/technique to reduce the size of the list?
|
2014/03/23
|
[
"https://Stackoverflow.com/questions/22590892",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2464658/"
] |
You have `8154741` tuples; that means your list, assuming 8-byte pointers, already contains `62 MB` of pointers to tuples.
Assuming each tuple contains two ASCII strings in Python 2, that's another `124 MB` of pointers, two per tuple.
Then you still have the overhead of the tuple and string objects themselves: each object has a reference count, and assuming that is an 8-byte integer, you have another `186 MB` of reference-count storage. That is already `372 MB` of overhead for the `46 MB` of data you would have with two 3-byte strings in size-2 tuples.
Under Python 3 your data is unicode and may be larger than 1 byte per character, too.
So yes, it is expected that this type of structure consumes a large amount of excess memory.
If your strings are all of similar length and the tuples all have the same length, one way to reduce this is to use numpy string arrays. They store the strings in one contiguous memory block, avoiding the per-object overheads. But this will not work well if the strings vary a lot in size, as numpy does not support ragged arrays.
```
>>> d = [("xxx", "yyy") for i in range(8154741)]
>>> a = numpy.array(d)
>>> print a.nbytes/1024**2
46
>>> print a[2,1]
yyy
```
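A quick way to sanity-check the pointer-array arithmetic (a sketch; exact sizes vary by interpreter and platform):
```
import sys

t = ('xxx', 'yyy')
lst = [t] * 8154741          # one 8-byte pointer per slot, all to the same tuple
print(sys.getsizeof(lst))    # ~65 MB: just the list's pointer array
print(sys.getsizeof(t))      # per-tuple header plus two pointers
print(sys.getsizeof('xxx'))  # per-string header plus payload
```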
|
Python objects can take much more memory than the raw data in them, because every object carries bookkeeping overhead (a type pointer, a reference count) on top of the data itself, and containers store pointers to their elements rather than the elements inline.
Read more [here](http://deeplearning.net/software/theano/tutorial/python-memory-management.html).
There are several ways to work around this issue; see a case study [here](http://guillaume.segu.in/blog/code/487/optimizing-memory-usage-in-python-a-case-study/). In most cases, it is enough to find the most suitable Python data type for your application (would it not be better to use a numpy array instead of a list in your case?). For further optimization, you can move to Cython, where you can directly declare the types (and so the sizes) of your variables, as in C.
There are also packages like [IOPro](https://store.continuum.io/cshop/iopro/) that try to optimize memory usage (this one is commercial though, does anyone know a free package for this?).
|
22,590,892
|
I have a python list of string tuples of the form: `lst = [('xxx', 'yyy'), ...etc]`. The list has around `8154741` tuples. I used a profiler and it says that the list takes around 500 MB in memory.
Then I wrote all the tuples in the list to a text file, and it took around 72 MB of disk space.
I have three questions:
* Why is the memory consumption different from the disk usage?
* Is it reasonable for such a list to consume 500 MB of memory?
* Is there a way/technique to reduce the size of the list?
|
2014/03/23
|
[
"https://Stackoverflow.com/questions/22590892",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2464658/"
] |
Python objects can take much more memory than the raw data in them, because every object carries bookkeeping overhead (a type pointer, a reference count) on top of the data itself, and containers store pointers to their elements rather than the elements inline.
Read more [here](http://deeplearning.net/software/theano/tutorial/python-memory-management.html).
There are several ways to work around this issue; see a case study [here](http://guillaume.segu.in/blog/code/487/optimizing-memory-usage-in-python-a-case-study/). In most cases, it is enough to find the most suitable Python data type for your application (would it not be better to use a numpy array instead of a list in your case?). For further optimization, you can move to Cython, where you can directly declare the types (and so the sizes) of your variables, as in C.
There are also packages like [IOPro](https://store.continuum.io/cshop/iopro/) that try to optimize memory usage (this one is commercial though, does anyone know a free package for this?).
|
Well, are the strings mostly shared or unique?
What is the significance of the tuples: a bag-of-words or skip-gram representation?
If they represent words, one good library for vector representations of words is [word2vec](https://code.google.com/p/word2vec/)
and here's a good [article on optimizing word2vec's performance](http://radimrehurek.com/2013/09/word2vec-in-python-part-two-optimizing/)
Do you actually need to keep your string contents in-memory, or can you just convert to a vector of features, and write the string<->feature correspondence to disk?
|
22,590,892
|
I have a python list of string tuples of the form: `lst = [('xxx', 'yyy'), ...etc]`. The list has around `8154741` tuples. I used a profiler and it says that the list takes around 500 MB in memory.
Then I wrote all the tuples in the list to a text file, and it took around 72 MB of disk space.
I have three questions:
* Why is the memory consumption different from the disk usage?
* Is it reasonable for such a list to consume 500 MB of memory?
* Is there a way/technique to reduce the size of the list?
|
2014/03/23
|
[
"https://Stackoverflow.com/questions/22590892",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2464658/"
] |
You have `8154741` tuples; that means your list, assuming 8-byte pointers, already contains `62 MB` of pointers to tuples.
Assuming each tuple contains two ASCII strings in Python 2, that's another `124 MB` of pointers, two per tuple.
Then you still have the overhead of the tuple and string objects themselves: each object has a reference count, and assuming that is an 8-byte integer, you have another `186 MB` of reference-count storage. That is already `372 MB` of overhead for the `46 MB` of data you would have with two 3-byte strings in size-2 tuples.
Under Python 3 your data is unicode and may be larger than 1 byte per character, too.
So yes, it is expected that this type of structure consumes a large amount of excess memory.
If your strings are all of similar length and the tuples all have the same length, one way to reduce this is to use numpy string arrays. They store the strings in one contiguous memory block, avoiding the per-object overheads. But this will not work well if the strings vary a lot in size, as numpy does not support ragged arrays.
```
>>> d = [("xxx", "yyy") for i in range(8154741)]
>>> a = numpy.array(d)
>>> print a.nbytes/1024**2
46
>>> print a[2,1]
yyy
```
|
Well, are the strings mostly shared or unique?
What is the significance of the tuples: a bag-of-words or skip-gram representation?
If they represent words, one good library for vector representations of words is [word2vec](https://code.google.com/p/word2vec/)
and here's a good [article on optimizing word2vec's performance](http://radimrehurek.com/2013/09/word2vec-in-python-part-two-optimizing/)
Do you actually need to keep your string contents in-memory, or can you just convert to a vector of features, and write the string<->feature correspondence to disk?
|
14,088,294
|
I'm trying to create a multithreaded web server in Python, but it only responds to one request at a time and I can't figure out why. Can you help me, please?
```
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
from SocketServer import ThreadingMixIn
from BaseHTTPServer import HTTPServer
from SimpleHTTPServer import SimpleHTTPRequestHandler
from time import sleep
class ThreadingServer(ThreadingMixIn, HTTPServer):
pass
class RequestHandler(SimpleHTTPRequestHandler):
def do_GET(self):
self.send_response(200)
self.send_header('Content-type', 'text/plain')
sleep(5)
response = 'Slept for 5 seconds..'
self.send_header('Content-length', len(response))
self.end_headers()
self.wfile.write(response)
ThreadingServer(('', 8000), RequestHandler).serve_forever()
```
|
2012/12/30
|
[
"https://Stackoverflow.com/questions/14088294",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1937459/"
] |
Check [this](http://pymotw.com/2/BaseHTTPServer/index.html#module-BaseHTTPServer) post from Doug Hellmann's blog.
```
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler
from SocketServer import ThreadingMixIn
import threading
class Handler(BaseHTTPRequestHandler):
def do_GET(self):
self.send_response(200)
self.end_headers()
message = threading.currentThread().getName()
self.wfile.write(message)
self.wfile.write('\n')
return
class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
"""Handle requests in a separate thread."""
if __name__ == '__main__':
server = ThreadedHTTPServer(('localhost', 8080), Handler)
print 'Starting server, use <Ctrl-C> to stop'
server.serve_forever()
```
|
I have developed a PIP Utility called [ComplexHTTPServer](https://github.com/vickysam/ComplexHTTPServer) that is a multi-threaded version of SimpleHTTPServer.
To install it, all you need to do is:
```
pip install ComplexHTTPServer
```
Using it is as simple as:
```
python -m ComplexHTTPServer [PORT]
```
(By default, the port is 8000.)
|
14,088,294
|
I'm trying to create a multithreaded web server in Python, but it only responds to one request at a time and I can't figure out why. Can you help me, please?
```
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
from SocketServer import ThreadingMixIn
from BaseHTTPServer import HTTPServer
from SimpleHTTPServer import SimpleHTTPRequestHandler
from time import sleep
class ThreadingServer(ThreadingMixIn, HTTPServer):
pass
class RequestHandler(SimpleHTTPRequestHandler):
def do_GET(self):
self.send_response(200)
self.send_header('Content-type', 'text/plain')
sleep(5)
response = 'Slept for 5 seconds..'
self.send_header('Content-length', len(response))
self.end_headers()
self.wfile.write(response)
ThreadingServer(('', 8000), RequestHandler).serve_forever()
```
|
2012/12/30
|
[
"https://Stackoverflow.com/questions/14088294",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1937459/"
] |
Check [this](http://pymotw.com/2/BaseHTTPServer/index.html#module-BaseHTTPServer) post from Doug Hellmann's blog.
```
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler
from SocketServer import ThreadingMixIn
import threading
class Handler(BaseHTTPRequestHandler):
def do_GET(self):
self.send_response(200)
self.end_headers()
message = threading.currentThread().getName()
self.wfile.write(message)
self.wfile.write('\n')
return
class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
"""Handle requests in a separate thread."""
if __name__ == '__main__':
server = ThreadedHTTPServer(('localhost', 8080), Handler)
print 'Starting server, use <Ctrl-C> to stop'
server.serve_forever()
```
|
It's amazing how many votes these solutions that break streaming are getting. If streaming might be needed down the road, then `ThreadingMixIn` and gunicorn are no good because they just collect up the response and write it as a unit at the end (which actually does nothing if your stream is infinite).
Your basic approach of combining `BaseHTTPServer` with threads is fine. But the default `BaseHTTPServer` settings re-bind a new socket on every listener, which won't work in Linux if all the listeners are on the same port. Change those settings before the `serve_forever()` call. (Just like you have to set `self.daemon = True` on a thread to stop ctrl-C from being disabled.)
The following example launches 100 handler threads on the same port, with each handler started through `BaseHTTPServer`.
```
import time, threading, socket, SocketServer, BaseHTTPServer
class Handler(BaseHTTPServer.BaseHTTPRequestHandler):
def do_GET(self):
if self.path != '/':
self.send_error(404, "Object not found")
return
self.send_response(200)
self.send_header('Content-type', 'text/html; charset=utf-8')
self.end_headers()
# serve up an infinite stream
i = 0
while True:
self.wfile.write("%i " % i)
time.sleep(0.1)
i += 1
# Create ONE socket.
addr = ('', 8000)
sock = socket.socket (socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(addr)
sock.listen(5)
# Launch 100 listener threads.
class Thread(threading.Thread):
def __init__(self, i):
threading.Thread.__init__(self)
self.i = i
self.daemon = True
self.start()
def run(self):
httpd = BaseHTTPServer.HTTPServer(addr, Handler, False)
# Prevent the HTTP server from re-binding every handler.
# https://stackoverflow.com/questions/46210672/
httpd.socket = sock
        httpd.server_bind = httpd.server_close = lambda self: None
httpd.serve_forever()
[Thread(i) for i in range(100)]
time.sleep(9e9)
```
|
14,088,294
|
I'm trying to create a multithreaded web server in Python, but it only responds to one request at a time and I can't figure out why. Can you help me, please?
```
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
from SocketServer import ThreadingMixIn
from BaseHTTPServer import HTTPServer
from SimpleHTTPServer import SimpleHTTPRequestHandler
from time import sleep
class ThreadingServer(ThreadingMixIn, HTTPServer):
pass
class RequestHandler(SimpleHTTPRequestHandler):
def do_GET(self):
self.send_response(200)
self.send_header('Content-type', 'text/plain')
sleep(5)
response = 'Slept for 5 seconds..'
self.send_header('Content-length', len(response))
self.end_headers()
self.wfile.write(response)
ThreadingServer(('', 8000), RequestHandler).serve_forever()
```
|
2012/12/30
|
[
"https://Stackoverflow.com/questions/14088294",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1937459/"
] |
Check [this](http://pymotw.com/2/BaseHTTPServer/index.html#module-BaseHTTPServer) post from Doug Hellmann's blog.
```
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler
from SocketServer import ThreadingMixIn
import threading
class Handler(BaseHTTPRequestHandler):
def do_GET(self):
self.send_response(200)
self.end_headers()
message = threading.currentThread().getName()
self.wfile.write(message)
self.wfile.write('\n')
return
class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
"""Handle requests in a separate thread."""
if __name__ == '__main__':
server = ThreadedHTTPServer(('localhost', 8080), Handler)
print 'Starting server, use <Ctrl-C> to stop'
server.serve_forever()
```
|
In Python 3, you can use the code below (HTTPS or HTTP):
```
from http.server import HTTPServer, BaseHTTPRequestHandler
from socketserver import ThreadingMixIn
import threading
USE_HTTPS = True
class Handler(BaseHTTPRequestHandler):
def do_GET(self):
self.send_response(200)
self.end_headers()
self.wfile.write(b'Hello world\t' + threading.currentThread().getName().encode() + b'\t' + str(threading.active_count()).encode() + b'\n')
class ThreadingSimpleServer(ThreadingMixIn, HTTPServer):
pass
def run():
server = ThreadingSimpleServer(('0.0.0.0', 4444), Handler)
if USE_HTTPS:
import ssl
server.socket = ssl.wrap_socket(server.socket, keyfile='./key.pem', certfile='./cert.pem', server_side=True)
server.serve_forever()
if __name__ == '__main__':
run()
```
You will see that this code creates a new thread to deal with every request.
The command below generates a self-signed certificate:
```
openssl req -x509 -newkey rsa:4096 -nodes -out cert.pem -keyout key.pem -days 365
```
If you are using Flask, [this blog](https://blog.miguelgrinberg.com/post/running-your-flask-application-over-https) is great.
|
14,088,294
|
I'm trying to create a multithreaded web server in Python, but it only responds to one request at a time and I can't figure out why. Can you help me, please?
```
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
from SocketServer import ThreadingMixIn
from BaseHTTPServer import HTTPServer
from SimpleHTTPServer import SimpleHTTPRequestHandler
from time import sleep
class ThreadingServer(ThreadingMixIn, HTTPServer):
pass
class RequestHandler(SimpleHTTPRequestHandler):
def do_GET(self):
self.send_response(200)
self.send_header('Content-type', 'text/plain')
sleep(5)
response = 'Slept for 5 seconds..'
self.send_header('Content-length', len(response))
self.end_headers()
self.wfile.write(response)
ThreadingServer(('', 8000), RequestHandler).serve_forever()
```
|
2012/12/30
|
[
"https://Stackoverflow.com/questions/14088294",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1937459/"
] |
Check [this](http://pymotw.com/2/BaseHTTPServer/index.html#module-BaseHTTPServer) post from Doug Hellmann's blog.
```
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler
from SocketServer import ThreadingMixIn
import threading
class Handler(BaseHTTPRequestHandler):
def do_GET(self):
self.send_response(200)
self.end_headers()
message = threading.currentThread().getName()
self.wfile.write(message)
self.wfile.write('\n')
return
class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
"""Handle requests in a separate thread."""
if __name__ == '__main__':
server = ThreadedHTTPServer(('localhost', 8080), Handler)
print 'Starting server, use <Ctrl-C> to stop'
server.serve_forever()
```
|
A multithreaded HTTPS server in Python 3.7:
```
from http.server import BaseHTTPRequestHandler, HTTPServer
from socketserver import ThreadingMixIn
import threading
import ssl
hostName = "localhost"
serverPort = 8080
class MyServer(BaseHTTPRequestHandler):
def do_GET(self):
self.send_response(200)
self.send_header("Content-type", "text/html")
self.end_headers()
self.wfile.write(bytes("<html><head><title>https://pythonbasics.org</title></head>", "utf-8"))
self.wfile.write(bytes("<p>Request: %s</p>" % self.path, "utf-8"))
self.wfile.write(bytes("<p>Thread: %s</p>" % threading.currentThread().getName(), "utf-8"))
self.wfile.write(bytes("<p>Thread Count: %s</p>" % threading.active_count(), "utf-8"))
self.wfile.write(bytes("<body>", "utf-8"))
self.wfile.write(bytes("<p>This is an example web server.</p>", "utf-8"))
self.wfile.write(bytes("</body></html>", "utf-8"))
class ThreadingSimpleServer(ThreadingMixIn,HTTPServer):
pass
if __name__ == "__main__":
webServer = ThreadingSimpleServer((hostName, serverPort), MyServer)
webServer.socket = ssl.wrap_socket(webServer.socket, keyfile='./privkey.pem',certfile='./certificate.pem', server_side=True)
print("Server started http://%s:%s" % (hostName, serverPort))
try:
webServer.serve_forever()
except KeyboardInterrupt:
pass
webServer.server_close()
print("Server stopped.")
```
You can test it in a browser: <https://localhost:8080>.
The output is shown in these screenshots: [screenshot 1](https://i.stack.imgur.com/FRioS.png), [screenshot 2](https://i.stack.imgur.com/iGi7B.png).
Note that you can generate your own key file and certificate using
```
openssl req -newkey rsa:2048 -keyout privkey.pem -x509 -days 36500 -out certificate.pem
```
For details on creating a self-signed certificate with openssl, see <https://www.devdungeon.com/content/creating-self-signed-ssl-certificates-openssl>.
|
14,088,294
|
I'm trying to create a multithreaded web server in Python, but it only responds to one request at a time and I can't figure out why. Can you help me, please?
```
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
from SocketServer import ThreadingMixIn
from BaseHTTPServer import HTTPServer
from SimpleHTTPServer import SimpleHTTPRequestHandler
from time import sleep
class ThreadingServer(ThreadingMixIn, HTTPServer):
pass
class RequestHandler(SimpleHTTPRequestHandler):
def do_GET(self):
self.send_response(200)
self.send_header('Content-type', 'text/plain')
sleep(5)
response = 'Slept for 5 seconds..'
self.send_header('Content-length', len(response))
self.end_headers()
self.wfile.write(response)
ThreadingServer(('', 8000), RequestHandler).serve_forever()
```
|
2012/12/30
|
[
"https://Stackoverflow.com/questions/14088294",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1937459/"
] |
I have developed a PIP Utility called [ComplexHTTPServer](https://github.com/vickysam/ComplexHTTPServer) that is a multi-threaded version of SimpleHTTPServer.
To install it, all you need to do is:
```
pip install ComplexHTTPServer
```
Using it is as simple as:
```
python -m ComplexHTTPServer [PORT]
```
(By default, the port is 8000.)
|
It's amazing how many votes these solutions that break streaming are getting. If streaming might be needed down the road, then `ThreadingMixIn` and gunicorn are no good because they just collect up the response and write it as a unit at the end (which actually does nothing if your stream is infinite).
Your basic approach of combining `BaseHTTPServer` with threads is fine. But the default `BaseHTTPServer` settings re-bind a new socket on every listener, which won't work in Linux if all the listeners are on the same port. Change those settings before the `serve_forever()` call. (Just like you have to set `self.daemon = True` on a thread to stop ctrl-C from being disabled.)
The following example launches 100 handler threads on the same port, with each handler started through `BaseHTTPServer`.
```
import time, threading, socket, SocketServer, BaseHTTPServer
class Handler(BaseHTTPServer.BaseHTTPRequestHandler):
def do_GET(self):
if self.path != '/':
self.send_error(404, "Object not found")
return
self.send_response(200)
self.send_header('Content-type', 'text/html; charset=utf-8')
self.end_headers()
# serve up an infinite stream
i = 0
while True:
self.wfile.write("%i " % i)
time.sleep(0.1)
i += 1
# Create ONE socket.
addr = ('', 8000)
sock = socket.socket (socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(addr)
sock.listen(5)
# Launch 100 listener threads.
class Thread(threading.Thread):
def __init__(self, i):
threading.Thread.__init__(self)
self.i = i
self.daemon = True
self.start()
def run(self):
httpd = BaseHTTPServer.HTTPServer(addr, Handler, False)
# Prevent the HTTP server from re-binding every handler.
# https://stackoverflow.com/questions/46210672/
httpd.socket = sock
        httpd.server_bind = httpd.server_close = lambda self: None
httpd.serve_forever()
[Thread(i) for i in range(100)]
time.sleep(9e9)
```
|
14,088,294
|
I'm trying to create a multithreaded web server in Python, but it only responds to one request at a time and I can't figure out why. Can you help me, please?
```
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
from SocketServer import ThreadingMixIn
from BaseHTTPServer import HTTPServer
from SimpleHTTPServer import SimpleHTTPRequestHandler
from time import sleep
class ThreadingServer(ThreadingMixIn, HTTPServer):
pass
class RequestHandler(SimpleHTTPRequestHandler):
def do_GET(self):
self.send_response(200)
self.send_header('Content-type', 'text/plain')
sleep(5)
response = 'Slept for 5 seconds..'
self.send_header('Content-length', len(response))
self.end_headers()
self.wfile.write(response)
ThreadingServer(('', 8000), RequestHandler).serve_forever()
```
|
2012/12/30
|
[
"https://Stackoverflow.com/questions/14088294",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1937459/"
] |
I have developed a PIP Utility called [ComplexHTTPServer](https://github.com/vickysam/ComplexHTTPServer) that is a multi-threaded version of SimpleHTTPServer.
To install it, all you need to do is:
```
pip install ComplexHTTPServer
```
Using it is as simple as:
```
python -m ComplexHTTPServer [PORT]
```
(By default, the port is 8000.)
|
A multithreaded HTTPS server in Python 3.7:
```
from http.server import BaseHTTPRequestHandler, HTTPServer
from socketserver import ThreadingMixIn
import threading
import ssl
hostName = "localhost"
serverPort = 8080
class MyServer(BaseHTTPRequestHandler):
def do_GET(self):
self.send_response(200)
self.send_header("Content-type", "text/html")
self.end_headers()
self.wfile.write(bytes("<html><head><title>https://pythonbasics.org</title></head>", "utf-8"))
self.wfile.write(bytes("<p>Request: %s</p>" % self.path, "utf-8"))
self.wfile.write(bytes("<p>Thread: %s</p>" % threading.currentThread().getName(), "utf-8"))
self.wfile.write(bytes("<p>Thread Count: %s</p>" % threading.active_count(), "utf-8"))
self.wfile.write(bytes("<body>", "utf-8"))
self.wfile.write(bytes("<p>This is an example web server.</p>", "utf-8"))
self.wfile.write(bytes("</body></html>", "utf-8"))
class ThreadingSimpleServer(ThreadingMixIn,HTTPServer):
pass
if __name__ == "__main__":
webServer = ThreadingSimpleServer((hostName, serverPort), MyServer)
webServer.socket = ssl.wrap_socket(webServer.socket, keyfile='./privkey.pem',certfile='./certificate.pem', server_side=True)
print("Server started http://%s:%s" % (hostName, serverPort))
try:
webServer.serve_forever()
except KeyboardInterrupt:
pass
webServer.server_close()
print("Server stopped.")
```
You can test it in a browser: <https://localhost:8080>.
The output is shown in these screenshots: [screenshot 1](https://i.stack.imgur.com/FRioS.png), [screenshot 2](https://i.stack.imgur.com/iGi7B.png).
Note that you can generate your own key file and certificate using
```
openssl req -newkey rsa:2048 -keyout privkey.pem -x509 -days 36500 -out certificate.pem
```
For details on creating a self-signed certificate with openssl, see <https://www.devdungeon.com/content/creating-self-signed-ssl-certificates-openssl>.
|
14,088,294
|
I'm trying to create a multithreaded web server in Python, but it only responds to one request at a time and I can't figure out why. Can you help me, please?
```
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
from SocketServer import ThreadingMixIn
from BaseHTTPServer import HTTPServer
from SimpleHTTPServer import SimpleHTTPRequestHandler
from time import sleep
class ThreadingServer(ThreadingMixIn, HTTPServer):
pass
class RequestHandler(SimpleHTTPRequestHandler):
def do_GET(self):
self.send_response(200)
self.send_header('Content-type', 'text/plain')
sleep(5)
response = 'Slept for 5 seconds..'
self.send_header('Content-length', len(response))
self.end_headers()
self.wfile.write(response)
ThreadingServer(('', 8000), RequestHandler).serve_forever()
```
|
2012/12/30
|
[
"https://Stackoverflow.com/questions/14088294",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1937459/"
] |
In Python 3, you can use the code below (HTTPS or HTTP):
```
from http.server import HTTPServer, BaseHTTPRequestHandler
from socketserver import ThreadingMixIn
import threading
USE_HTTPS = True
class Handler(BaseHTTPRequestHandler):
def do_GET(self):
self.send_response(200)
self.end_headers()
self.wfile.write(b'Hello world\t' + threading.currentThread().getName().encode() + b'\t' + str(threading.active_count()).encode() + b'\n')
class ThreadingSimpleServer(ThreadingMixIn, HTTPServer):
pass
def run():
server = ThreadingSimpleServer(('0.0.0.0', 4444), Handler)
if USE_HTTPS:
import ssl
server.socket = ssl.wrap_socket(server.socket, keyfile='./key.pem', certfile='./cert.pem', server_side=True)
server.serve_forever()
if __name__ == '__main__':
run()
```
You will see that this code creates a new thread to deal with every request.
The command below generates a self-signed certificate:
```
openssl req -x509 -newkey rsa:4096 -nodes -out cert.pem -keyout key.pem -days 365
```
If you are using Flask, [this blog](https://blog.miguelgrinberg.com/post/running-your-flask-application-over-https) is great.
|
It's amazing how many votes these solutions that break streaming are getting. If streaming might be needed down the road, then `ThreadingMixIn` and gunicorn are no good because they just collect up the response and write it as a unit at the end (which actually does nothing if your stream is infinite).
Your basic approach of combining `BaseHTTPServer` with threads is fine. But the default `BaseHTTPServer` settings re-bind a new socket on every listener, which won't work in Linux if all the listeners are on the same port. Change those settings before the `serve_forever()` call. (Just like you have to set `self.daemon = True` on a thread to stop ctrl-C from being disabled.)
The following example launches 100 handler threads on the same port, with each handler started through `BaseHTTPServer`.
```
import time, threading, socket, SocketServer, BaseHTTPServer
class Handler(BaseHTTPServer.BaseHTTPRequestHandler):
def do_GET(self):
if self.path != '/':
self.send_error(404, "Object not found")
return
self.send_response(200)
self.send_header('Content-type', 'text/html; charset=utf-8')
self.end_headers()
# serve up an infinite stream
i = 0
while True:
self.wfile.write("%i " % i)
time.sleep(0.1)
i += 1
# Create ONE socket.
addr = ('', 8000)
sock = socket.socket (socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(addr)
sock.listen(5)
# Launch 100 listener threads.
class Thread(threading.Thread):
def __init__(self, i):
threading.Thread.__init__(self)
self.i = i
self.daemon = True
self.start()
def run(self):
httpd = BaseHTTPServer.HTTPServer(addr, Handler, False)
# Prevent the HTTP server from re-binding every handler.
# https://stackoverflow.com/questions/46210672/
httpd.socket = sock
        httpd.server_bind = httpd.server_close = lambda self: None
httpd.serve_forever()
[Thread(i) for i in range(100)]
time.sleep(9e9)
```
|
14,088,294
|
I'm trying to create a multithreaded web server in Python, but it only responds to one request at a time and I can't figure out why. Can you help me, please?
```
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
from SocketServer import ThreadingMixIn
from BaseHTTPServer import HTTPServer
from SimpleHTTPServer import SimpleHTTPRequestHandler
from time import sleep
class ThreadingServer(ThreadingMixIn, HTTPServer):
pass
class RequestHandler(SimpleHTTPRequestHandler):
def do_GET(self):
self.send_response(200)
self.send_header('Content-type', 'text/plain')
sleep(5)
response = 'Slept for 5 seconds..'
self.send_header('Content-length', len(response))
self.end_headers()
self.wfile.write(response)
ThreadingServer(('', 8000), RequestHandler).serve_forever()
```
|
2012/12/30
|
[
"https://Stackoverflow.com/questions/14088294",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1937459/"
] |
It's amazing how many votes these solutions that break streaming are getting. If streaming might be needed down the road, then `ThreadingMixIn` and gunicorn are no good because they just collect up the response and write it as a unit at the end (which actually does nothing if your stream is infinite).
Your basic approach of combining `BaseHTTPServer` with threads is fine. But the default `BaseHTTPServer` settings re-bind a new socket on every listener, which won't work in Linux if all the listeners are on the same port. Change those settings before the `serve_forever()` call. (Just like you have to set `self.daemon = True` on a thread to stop ctrl-C from being disabled.)
The following example launches 100 handler threads on the same port, with each handler started through `BaseHTTPServer`.
```
import time, threading, socket, SocketServer, BaseHTTPServer
class Handler(BaseHTTPServer.BaseHTTPRequestHandler):
def do_GET(self):
if self.path != '/':
self.send_error(404, "Object not found")
return
self.send_response(200)
self.send_header('Content-type', 'text/html; charset=utf-8')
self.end_headers()
# serve up an infinite stream
i = 0
while True:
self.wfile.write("%i " % i)
time.sleep(0.1)
i += 1
# Create ONE socket.
addr = ('', 8000)
sock = socket.socket (socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(addr)
sock.listen(5)
# Launch 100 listener threads.
class Thread(threading.Thread):
def __init__(self, i):
threading.Thread.__init__(self)
self.i = i
self.daemon = True
self.start()
def run(self):
httpd = BaseHTTPServer.HTTPServer(addr, Handler, False)
# Prevent the HTTP server from re-binding every handler.
# https://stackoverflow.com/questions/46210672/
httpd.socket = sock
        httpd.server_bind = httpd.server_close = lambda self: None
httpd.serve_forever()
[Thread(i) for i in range(100)]
time.sleep(9e9)
```
|
A multithreaded HTTPS server in Python 3.7:
```
from http.server import BaseHTTPRequestHandler, HTTPServer
from socketserver import ThreadingMixIn
import threading
import ssl
hostName = "localhost"
serverPort = 8080
class MyServer(BaseHTTPRequestHandler):
def do_GET(self):
self.send_response(200)
self.send_header("Content-type", "text/html")
self.end_headers()
self.wfile.write(bytes("<html><head><title>https://pythonbasics.org</title></head>", "utf-8"))
self.wfile.write(bytes("<p>Request: %s</p>" % self.path, "utf-8"))
self.wfile.write(bytes("<p>Thread: %s</p>" % threading.currentThread().getName(), "utf-8"))
self.wfile.write(bytes("<p>Thread Count: %s</p>" % threading.active_count(), "utf-8"))
self.wfile.write(bytes("<body>", "utf-8"))
self.wfile.write(bytes("<p>This is an example web server.</p>", "utf-8"))
self.wfile.write(bytes("</body></html>", "utf-8"))
class ThreadingSimpleServer(ThreadingMixIn,HTTPServer):
pass
if __name__ == "__main__":
webServer = ThreadingSimpleServer((hostName, serverPort), MyServer)
webServer.socket = ssl.wrap_socket(webServer.socket, keyfile='./privkey.pem',certfile='./certificate.pem', server_side=True)
print("Server started http://%s:%s" % (hostName, serverPort))
try:
webServer.serve_forever()
except KeyboardInterrupt:
pass
webServer.server_close()
print("Server stopped.")
```
You can test it in a browser: <https://localhost:8080>.
The output is shown in these screenshots: [screenshot 1](https://i.stack.imgur.com/FRioS.png), [screenshot 2](https://i.stack.imgur.com/iGi7B.png).
Note that you can generate your own key file and certificate using
```
openssl req -newkey rsa:2048 -keyout privkey.pem -x509 -days 36500 -out certificate.pem
```
For details on creating a self-signed certificate with openssl, see <https://www.devdungeon.com/content/creating-self-signed-ssl-certificates-openssl>.
|
14,088,294
|
I'm trying to create a multithreaded web server in Python, but it only responds to one request at a time and I can't figure out why. Can you help me, please?
```
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
from SocketServer import ThreadingMixIn
from BaseHTTPServer import HTTPServer
from SimpleHTTPServer import SimpleHTTPRequestHandler
from time import sleep
class ThreadingServer(ThreadingMixIn, HTTPServer):
pass
class RequestHandler(SimpleHTTPRequestHandler):
def do_GET(self):
self.send_response(200)
self.send_header('Content-type', 'text/plain')
sleep(5)
response = 'Slept for 5 seconds..'
self.send_header('Content-length', len(response))
self.end_headers()
self.wfile.write(response)
ThreadingServer(('', 8000), RequestHandler).serve_forever()
```
|
2012/12/30
|
[
"https://Stackoverflow.com/questions/14088294",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1937459/"
] |
In Python 3, you can use the code below (HTTPS or HTTP):
```
from http.server import HTTPServer, BaseHTTPRequestHandler
from socketserver import ThreadingMixIn
import threading
USE_HTTPS = True
class Handler(BaseHTTPRequestHandler):
def do_GET(self):
self.send_response(200)
self.end_headers()
self.wfile.write(b'Hello world\t' + threading.currentThread().getName().encode() + b'\t' + str(threading.active_count()).encode() + b'\n')
class ThreadingSimpleServer(ThreadingMixIn, HTTPServer):
pass
def run():
server = ThreadingSimpleServer(('0.0.0.0', 4444), Handler)
if USE_HTTPS:
import ssl
server.socket = ssl.wrap_socket(server.socket, keyfile='./key.pem', certfile='./cert.pem', server_side=True)
server.serve_forever()
if __name__ == '__main__':
run()
```
You will see that this code creates a new thread to deal with every request.
The command below generates a self-signed certificate:
```
openssl req -x509 -newkey rsa:4096 -nodes -out cert.pem -keyout key.pem -days 365
```
If you are using Flask, [this blog](https://blog.miguelgrinberg.com/post/running-your-flask-application-over-https) is great.
|
A multithreaded HTTPS server in Python 3.7:
```
from http.server import BaseHTTPRequestHandler, HTTPServer
from socketserver import ThreadingMixIn
import threading
import ssl
hostName = "localhost"
serverPort = 8080
class MyServer(BaseHTTPRequestHandler):
def do_GET(self):
self.send_response(200)
self.send_header("Content-type", "text/html")
self.end_headers()
self.wfile.write(bytes("<html><head><title>https://pythonbasics.org</title></head>", "utf-8"))
self.wfile.write(bytes("<p>Request: %s</p>" % self.path, "utf-8"))
self.wfile.write(bytes("<p>Thread: %s</p>" % threading.currentThread().getName(), "utf-8"))
self.wfile.write(bytes("<p>Thread Count: %s</p>" % threading.active_count(), "utf-8"))
self.wfile.write(bytes("<body>", "utf-8"))
self.wfile.write(bytes("<p>This is an example web server.</p>", "utf-8"))
self.wfile.write(bytes("</body></html>", "utf-8"))
class ThreadingSimpleServer(ThreadingMixIn,HTTPServer):
pass
if __name__ == "__main__":
webServer = ThreadingSimpleServer((hostName, serverPort), MyServer)
webServer.socket = ssl.wrap_socket(webServer.socket, keyfile='./privkey.pem',certfile='./certificate.pem', server_side=True)
print("Server started http://%s:%s" % (hostName, serverPort))
try:
webServer.serve_forever()
except KeyboardInterrupt:
pass
webServer.server_close()
print("Server stopped.")
```
You can test it in a browser: <https://localhost:8080>.
The output is shown in these screenshots: [screenshot 1](https://i.stack.imgur.com/FRioS.png), [screenshot 2](https://i.stack.imgur.com/iGi7B.png).
Note that you can generate your own key file and certificate using
```
openssl req -newkey rsa:2048 -keyout privkey.pem -x509 -days 36500 -out certificate.pem
```
For details on creating a self-signed certificate with openssl, see <https://www.devdungeon.com/content/creating-self-signed-ssl-certificates-openssl>.
|
51,106,340
|
I am trying to create an application in App Engine that searches for a list of keys and then uses this list to delete those records from the datastore. This has to be a generic service, so I cannot use a model; I can only search by the kind name. Is it possible to do this with App Engine features?
Below is my code, but it requires that I have a model.
```
import httplib
import logging
from datetime import datetime, timedelta
import webapp2
from google.appengine.api import urlfetch
from google.appengine.ext import ndb
DEFAULT_PAGE_SIZE = 100000
DATE_PATTERN = "%Y-%m-%dT%H:%M:%S"
def get_date(amount):
date = datetime.today() - timedelta(days=30 * amount)
date = date.replace(hour=0, minute=0, second=0)
return date
class Purge(webapp2.RequestHandler):
def get(self):
kind = self.request.get('kind')
datefield = self.request.get('datefield')
amount = self.request.get('amount', default_value=3)
date = get_date(amount)
        logging.info('Running purge for kind {}, keeping a retention period of {} months.'.format(kind, amount))
        # build the query
        query = ndb.Query(kind=kind, namespace='development')
        logging.info('Setting filter [{} <= {}]'.format(datefield, date.strftime(DATE_PATTERN)))
        # build the filter
        query.filter(ndb.DateTimeProperty(datefield) <= date)
        query.fetch_page(DEFAULT_PAGE_SIZE)
        while True:
            # run the query
            keys = query.fetch(keys_only=True)
            logging.info('Found {} {} entities to delete'.format(len(keys), kind))
            # delete using the keys
            ndb.delete_multi(keys)
            if len(keys) < DEFAULT_PAGE_SIZE:
                logging.info('No more records left to delete')
break
app = webapp2.WSGIApplication(
[
('/cloud-datastore-purge', Purge),
], debug=True)
```
Traceback:
```
Traceback (most recent call last):
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1535, in __call__
rv = self.handle_exception(request, response, e)
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1529, in __call__
rv = self.router.dispatch(request, response)
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1278, in default_dispatcher
return route.handler_adapter(request, response)
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1102, in __call__
return handler.dispatch()
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 572, in dispatch
return self.handle_exception(e, self.app.debug)
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 570, in dispatch
return method(*args, **kwargs)
File "/base/data/home/apps/p~telefonica-dev-155211/cloud-datastore-purge-python:20180629t150020.410785498982375644/purge.py", line 38, in get
query.fetch_page(_DEFAULT_PAGE_SIZE)
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/utils.py", line 160, in positional_wrapper
return wrapped(*args, **kwds)
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/query.py", line 1362, in fetch_page
return self.fetch_page_async(page_size, **q_options).get_result()
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/tasklets.py", line 383, in get_result
self.check_success()
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/tasklets.py", line 427, in _help_tasklet_along
value = gen.throw(exc.__class__, exc, tb)
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/query.py", line 1380, in _fetch_page_async
while (yield it.has_next_async()):
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/tasklets.py", line 427, in _help_tasklet_along
value = gen.throw(exc.__class__, exc, tb)
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/query.py", line 1793, in has_next_async
yield self._fut
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/context.py", line 890, in helper
batch, i, ent = yield inq.getq()
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/query.py", line 969, in run_to_queue
batch = yield rpc
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/tasklets.py", line 513, in _on_rpc_completion
result = rpc.get_result()
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 613, in get_result
return self.__get_result_hook(self)
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/datastore/datastore_query.py", line 2951, in __query_result_hook
self.__results = self._process_results(query_result.result_list())
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/datastore/datastore_query.py", line 2984, in _process_results
for result in results]
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/datastore/datastore_rpc.py", line 194, in pb_to_query_result
return self.pb_to_entity(pb)
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/model.py", line 690, in pb_to_entity
modelclass = Model._lookup_model(kind, self.default_model)
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/model.py", line 3101, in _lookup_model
kind)
KindError: No model class found for kind 'Test'. Did you forget to import it?
```
|
2018/06/29
|
[
"https://Stackoverflow.com/questions/51106340",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8484943/"
] |
The problem was on the line that calls fetch\_page.
Replacing this line
```
query.fetch_page(DEFAULT_PAGE_SIZE)
```
with this
```
keys = query.fetch(limit=DEFAULT_PAGE_SIZE, keys_only=True)
```
fixed it.
|
To run a datastore query without a model class available in the environment, you can use the [`google.appengine.api.datastore.Query`](https://cloud.google.com/appengine/docs/standard/python/refdocs/google.appengine.api.datastore#google.appengine.api.datastore.Query) class from the low-level [datastore API](https://cloud.google.com/appengine/docs/standard/python/refdocs/google.appengine.api.datastore).
See [this question](https://stackoverflow.com/questions/54900142/datastore-query-without-model-class) for other ideas.
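For illustration, a minimal, untested sketch of such a model-free purge with the low-level API (the kind `'Test'`, the `datefield` property, and the cutoff `date` are placeholders taken from the code above):
```
from datetime import datetime, timedelta
from google.appengine.api import datastore

date = datetime.today() - timedelta(days=90)  # placeholder cutoff

# query by kind name only; no ndb model class needs to be importable
query = datastore.Query('Test', namespace='development', keys_only=True)
query['datefield <='] = date
keys = list(query.Run(limit=1000))  # with keys_only=True this yields Keys
datastore.Delete(keys)
```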
|
51,106,340
|
I am trying to create an application in App Engine that searches for a list of keys and then uses that list to delete the records from the datastore. This has to be a generic service, so I cannot use a model and can only search by the kind name. Is it possible to do this with App Engine features?
Below is my code, but it requires that I have a model.
```
import httplib
import logging
from datetime import datetime, timedelta

import webapp2
from google.appengine.api import urlfetch
from google.appengine.ext import ndb

DEFAULT_PAGE_SIZE = 100000
DATE_PATTERN = "%Y-%m-%dT%H:%M:%S"

def get_date(amount):
    date = datetime.today() - timedelta(days=30 * amount)
    date = date.replace(hour=0, minute=0, second=0)
    return date

class Purge(webapp2.RequestHandler):
    def get(self):
        kind = self.request.get('kind')
        datefield = self.request.get('datefield')
        amount = self.request.get('amount', default_value=3)
        date = get_date(amount)
        logging.info('Running purge for Entity {}, keeping a period of {} months.'.format(kind, amount))
        # build the query
        query = ndb.Query(kind=kind, namespace='development')
        logging.info('Setting the filter [{} <= {}]'.format(datefield, date.strftime(DATE_PATTERN)))
        # build a filter
        query.filter(ndb.DateTimeProperty(datefield) <= date)
        query.fetch_page(DEFAULT_PAGE_SIZE)
        while True:
            # run the query
            keys = query.fetch(keys_only=True)
            logging.info('Found {} {} to be deleted'.format(len(keys), kind))
            # delete using the keys
            ndb.delete_multi(keys)
            if len(keys) < DEFAULT_PAGE_SIZE:
                logging.info('No more records to delete')
                break

app = webapp2.WSGIApplication(
    [
        ('/cloud-datastore-purge', Purge),
    ], debug=True)
```
Trace
```
Traceback (most recent call last):
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1535, in __call__
rv = self.handle_exception(request, response, e)
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1529, in __call__
rv = self.router.dispatch(request, response)
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1278, in default_dispatcher
return route.handler_adapter(request, response)
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1102, in __call__
return handler.dispatch()
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 572, in dispatch
return self.handle_exception(e, self.app.debug)
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 570, in dispatch
return method(*args, **kwargs)
File "/base/data/home/apps/p~telefonica-dev-155211/cloud-datastore-purge-python:20180629t150020.410785498982375644/purge.py", line 38, in get
query.fetch_page(_DEFAULT_PAGE_SIZE)
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/utils.py", line 160, in positional_wrapper
return wrapped(*args, **kwds)
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/query.py", line 1362, in fetch_page
return self.fetch_page_async(page_size, **q_options).get_result()
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/tasklets.py", line 383, in get_result
self.check_success()
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/tasklets.py", line 427, in _help_tasklet_along
value = gen.throw(exc.__class__, exc, tb)
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/query.py", line 1380, in _fetch_page_async
while (yield it.has_next_async()):
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/tasklets.py", line 427, in _help_tasklet_along
value = gen.throw(exc.__class__, exc, tb)
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/query.py", line 1793, in has_next_async
yield self._fut
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/context.py", line 890, in helper
batch, i, ent = yield inq.getq()
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/query.py", line 969, in run_to_queue
batch = yield rpc
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/tasklets.py", line 513, in _on_rpc_completion
result = rpc.get_result()
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 613, in get_result
return self.__get_result_hook(self)
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/datastore/datastore_query.py", line 2951, in __query_result_hook
self.__results = self._process_results(query_result.result_list())
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/datastore/datastore_query.py", line 2984, in _process_results
for result in results]
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/datastore/datastore_rpc.py", line 194, in pb_to_query_result
return self.pb_to_entity(pb)
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/model.py", line 690, in pb_to_entity
modelclass = Model._lookup_model(kind, self.default_model)
File "/base/alloc/tmpfs/dynamic_runtimes/python27g/7894e0c59273b2b7/python27/python27_lib/versions/1/google/appengine/ext/ndb/model.py", line 3101, in _lookup_model
kind)
KindError: No model class found for kind 'Test'. Did you forget to import it?
```
|
2018/06/29
|
[
"https://Stackoverflow.com/questions/51106340",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8484943/"
] |
The problem was on the line that calls fetch\_page.
Replacing this line
```
query.fetch_page(DEFAULT_PAGE_SIZE)
```
with this
```
keys = query.fetch(limit=DEFAULT_PAGE_SIZE, keys_only=True)
```
fixed it.
|
If the goal is simply to wipe entities regardless of their kind then you have some options besides specifying the kinds/models yourself.
* you could obtain the list of all kinds from your models file and iterate through them, see [How to delete all the entries from google datastore?](https://stackoverflow.com/questions/46744074/how-to-delete-all-the-entries-from-google-datastore/46802370#46802370)
* using the generic datastore client library (not ndb) you could obtain the list of kinds from the datastore itself, see
[Is there a way to delete all entities from a Datastore namespace Using Python3 (WITHOUT Dataflow)?](https://stackoverflow.com/questions/54764032/is-there-a-way-to-delete-all-entities-from-a-datastore-namespace-using-python3/54789067#54789067) But I vaguely remember some issues which *might* translate to that library not working on the 1st generation standard environment anymore, so I'm not 100% certain of this.
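As a sketch of the second option, the generic client library exposes the kind names through the special `__kind__` kind; assuming the `google-cloud-datastore` package is available, something like this should enumerate them:
```
from google.cloud import datastore

client = datastore.Client()

# the special __kind__ kind yields one key per kind in the datastore
query = client.query(kind='__kind__')
query.keys_only()
kinds = [entity.key.id_or_name for entity in query.fetch()]
print(kinds)
```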
|
26,529,791
|
This is the first time I am trying to code in Python, and I am implementing the Apriori algorithm. I have generated up to 2-itemsets, and below is the function I use to generate 2-itemsets by combining the keys of the 1-itemset.
How do I go about making this function generic? I mean, by passing the keys of a dictionary and the number of elements required in the tuple, the algorithm should generate all possible (k+1)-element subsets using the keys. I know that union on sets is a possibility, but is there a way to do a union of tuples, which are essentially the keys of a dictionary?
```
# generate 2-itemset candidates by joining the 1-itemset candidates
def candidate_gen(keys):
    adict = {}
    for i in keys:
        for j in keys:
            #if i != j and (j,i) not in adict:
            if j > i:
                # call join procedure which will generate f(k+1) keys
                # call has_infrequent_subset --> generates all possible k+1 itemsets and checks if k itemsets are present in f(k) keys
                adict[tuple([min(i, j), max(i, j)])] = 0
    return adict
```
For example, if my initial dictionary looks like: {key, value} --> value is the frequency
```
{'382': 1163, '298': 560, '248': 1087, '458': 720,
'118': 509, '723': 528, '390': 1288}
```
I take the keys of this dictionary and pass them to the candidate\_gen function mentioned above;
it generates the 2-itemset subsets and outputs their keys. I then pass the keys to a function that finds the frequency by comparing against the original database, to get this output:
```
{('390', '723'): 65, ('118', '298'): 20, ('298', '390'): 70, ('298', '458'): 35,
('248', '382'): 88, ('248', '458'): 76, ('248', '723'): 26, ('382', '723'): 203,
('390', '458'): 33, ('118', '458'): 26, ('458', '723'): 26, ('248', '390'): 87,
('118', '248'): 54, ('298', '382'): 47, ('118', '723'): 41, ('382', '390'): 413,
('382', '458'): 57, ('248', '298'): 64, ('118', '382'): 40, ('298', '723'): 36,
('118', '390'): 52}
```
How do I generate 3-itemset subsets from the above keys?
|
2014/10/23
|
[
"https://Stackoverflow.com/questions/26529791",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4077331/"
] |
I assume that, given your field, you can benefit very much from studying Python's [itertools](https://docs.python.org/3/library/itertools.html) library.
In your use case you can directly use itertools' `combinations`,
or wrap it in a helper function:
```
from itertools import combinations

def ord_comb(l, n):
    return list(combinations(l, n))

#### TESTING ####
a = [1, 2, 3, 4, 5]
print(ord_comb(a, 1))
print(ord_comb(a, 5))
print(ord_comb(a, 6))
print(ord_comb([], 2))
print(ord_comb(a, 3))
```
**Output**
```
[(1,), (2,), (3,), (4,), (5,)]
[(1, 2, 3, 4, 5)]
[]
[]
[(1, 2, 3), (1, 2, 4), (1, 2, 5), (1, 3, 4), (1, 3, 5), (1, 4, 5), (2, 3, 4), (2, 3, 5), (2, 4, 5), (3, 4, 5)]
```
Please note that the order of the elements in the `n`-tuples depends on the order used in the iterable that you pass to `combinations`.
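To address the 3-itemset part directly, here is a minimal sketch of a generic generator (the name `candidate_gen_k` is mine) that takes the keys of the previous level plus the target tuple size, pools the individual items, and lets `combinations` do the union work:
```
from itertools import combinations

def candidate_gen_k(prev_keys, k):
    # pool the individual items appearing in the (k-1)-itemset keys
    items = sorted({item for itemset in prev_keys for item in itemset})
    # every sorted k-tuple over the pooled items is a candidate
    return {c: 0 for c in combinations(items, k)}

# e.g. feed it the 2-itemset keys from the question to get 3-itemset candidates
two_itemsets = {('382', '390'): 413, ('382', '723'): 203, ('390', '723'): 65}
print(candidate_gen_k(two_itemsets.keys(), 3))
```
Note that classic Apriori prunes harder (it only joins itemsets sharing a common prefix and drops candidates with infrequent subsets); this sketch generates the full superset of candidates.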
|
This?
```
In [12]: [(x, y) for x in keys for y in keys if y>x]
Out[12]:
[('382', '723'),
('382', '458'),
('382', '390'),
('458', '723'),
('298', '382'),
('298', '723'),
('298', '458'),
('298', '390'),
('390', '723'),
('390', '458'),
('248', '382'),
('248', '723'),
('248', '458'),
('248', '298'),
('248', '390'),
('118', '382'),
('118', '723'),
('118', '458'),
('118', '298'),
('118', '390'),
('118', '248')]
```
|
35,799,809
|
I am playing around with `unicode` in python.
So there is a simple script:
```
# -*- coding: cp1251 -*-
print 'юникод'.decode('cp1251')
print unicode('юникод', 'cp1251')
print unicode('юникод', 'utf-8')
```
In cmd I've switched encoding to `Active code page: 1251`.
And there is the output:
```
СЋРЅРёРєРѕРґ
СЋРЅРёРєРѕРґ
юникод
```
I am a little bit confused.
Since I've specified the encoding as `cp1251`, I expected it to be decoded correctly.
But as a result, some trash code points were printed instead.
I understand that `'юникод'` is just bytes like:
`'\xd1\x8e\xd0\xbd\xd0\xb8\xd0\xba\xd0\xbe\xd0\xb4'`.
But is there a way to get correct output in a terminal with `cp1251`?
Should I build the byte string manually?
Seems like I misunderstood something.
|
2016/03/04
|
[
"https://Stackoverflow.com/questions/35799809",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3990145/"
] |
I think I can understand what happened to you. The last line gave me the hint, and your *trash codepoints* confirmed it: you are trying to display cp1251 characters, but your editor is configured to use utf8.
The `# -*- coding: cp1251 -*-` is only used by the Python interpreter to convert characters in source Python files that are outside of the ASCII range. And anyway, it is only used for unicode literals, because bytes from the original source give, er... exactly the same bytes in byte strings. Some text editors are kind enough to automagically use this line (the IDLE editor is), but I have little confidence in that and always switch **manually** to the proper encoding when I use gvim, for example. Short story: `# -*- coding: cp1251 -*-` is unused in your code and can only mislead a reader, since it is not the actual encoding.
If you want to be sure of what lies in your source, you'd better use explicit escapes. In code page 1251, the word `юникод` is composed of these characters: `'\xfe\xed\xe8\xea\xee\xe4'`
If you write this source:
```
txt = '\xfe\xed\xe8\xea\xee\xe4'
print txt
print txt.decode('cp1251')
print unicode(txt, 'cp1251')
print unicode(txt, 'utf-8')
```
and execute it in a console configured to use CP1251 charset, the first three lines will output `юникод`, and the last one will throw a UnicodeDecodeError exception because the input is no longer valid 'utf8'.
Alternatively, if you are comfortable with your current editor, you could write:
```
# -*- coding: utf8 -*-
txt = 'юникод'.decode('utf8').encode('cp1251') # or simply txt = u'юникод'.encode('cp1251')
print txt
print txt.decode('cp1251')
print unicode(txt, 'cp1251')
print unicode(txt, 'utf-8')
```
which should give the same results - but now the declared source encoding matches the actual encoding of your Python source.
---
BTW, a Python 3.5 IDLE that natively uses unicode confirmed that:
```
>>> 'СЋРЅРёРєРѕРґ'.encode('cp1251').decode('utf8')
'юникод'
```
|
Just use the following, but **ensure** you save the source code in the declared encoding. It can be *any* encoding that supports the characters you want to print. The terminal can be in a different encoding, as long as it *also* supports the characters you want to print:
```
#coding:utf8
print u'юникод'
```
The advantage is that you don't need to know the terminal's encoding. Python will normally¹ detect the terminal encoding and encode the print output correctly.
¹ Unless your terminal is misconfigured.
|
35,799,809
|
I am playing around with `unicode` in python.
So there is a simple script:
```
# -*- coding: cp1251 -*-
print 'юникод'.decode('cp1251')
print unicode('юникод', 'cp1251')
print unicode('юникод', 'utf-8')
```
In cmd I've switched encoding to `Active code page: 1251`.
And there is the output:
```
СЋРЅРёРєРѕРґ
СЋРЅРёРєРѕРґ
юникод
```
I am a little bit confused.
Since I've specified the encoding as `cp1251`, I expected it to be decoded correctly.
But as a result, some trash code points were printed instead.
I understand that `'юникод'` is just bytes like:
`'\xd1\x8e\xd0\xbd\xd0\xb8\xd0\xba\xd0\xbe\xd0\xb4'`.
But is there a way to get correct output in a terminal with `cp1251`?
Should I build the byte string manually?
Seems like I misunderstood something.
|
2016/03/04
|
[
"https://Stackoverflow.com/questions/35799809",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3990145/"
] |
I think I can understand what happened to you. The last line gave me the hint, and your *trash codepoints* confirmed it: you are trying to display cp1251 characters, but your editor is configured to use utf8.
The `# -*- coding: cp1251 -*-` is only used by the Python interpreter to convert characters in source Python files that are outside of the ASCII range. And anyway, it is only used for unicode literals, because bytes from the original source give, er... exactly the same bytes in byte strings. Some text editors are kind enough to automagically use this line (the IDLE editor is), but I have little confidence in that and always switch **manually** to the proper encoding when I use gvim, for example. Short story: `# -*- coding: cp1251 -*-` is unused in your code and can only mislead a reader, since it is not the actual encoding.
If you want to be sure of what lies in your source, you'd better use explicit escapes. In code page 1251, the word `юникод` is composed of these characters: `'\xfe\xed\xe8\xea\xee\xe4'`
If you write this source:
```
txt = '\xfe\xed\xe8\xea\xee\xe4'
print txt
print txt.decode('cp1251')
print unicode(txt, 'cp1251')
print unicode(txt, 'utf-8')
```
and execute it in a console configured to use CP1251 charset, the first three lines will output `юникод`, and the last one will throw a UnicodeDecodeError exception because the input is no longer valid 'utf8'.
Alternatively, if you are comfortable with your current editor, you could write:
```
# -*- coding: utf8 -*-
txt = 'юникод'.decode('utf8').encode('cp1251') # or simply txt = u'юникод'.encode('cp1251')
print txt
print txt.decode('cp1251')
print unicode(txt, 'cp1251')
print unicode(txt, 'utf-8')
```
which should give the same results - but now the declared source encoding matches the actual encoding of your Python source.
---
BTW, a Python 3.5 IDLE that natively uses unicode confirmed that:
```
>>> 'СЋРЅРёРєРѕРґ'.encode('cp1251').decode('utf8')
'юникод'
```
|
Your issue is that the encoding declaration is wrong: your editor uses `utf-8` character encoding to save the source code. **Use `# -*- coding: utf-8 -*-` to fix it.**
```
>>> u'юникод'
u'\u044e\u043d\u0438\u043a\u043e\u0434'
>>> u'юникод'.encode('utf-8')
'\xd1\x8e\xd0\xbd\xd0\xb8\xd0\xba\xd0\xbe\xd0\xb4'
>>> print _.decode('cp1251') # mojibake due to the wrong encoding
СЋРЅРёРєРѕРґ
>>> print u'юникод'
юникод
```
Do not use bytestrings (`''` literals create a `bytes` object on Python 2) to represent text; **use Unicode strings** (`u''` literals -- the `unicode` type) instead.
If your code uses Unicode strings then a code page that your Windows console uses doesn't matter as long as the chosen font can display the corresponding (non-BMP) characters. See [Python, Unicode, and the Windows console](https://stackoverflow.com/q/5419/4279)
Here's complete code, for reference:
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
print(u'юникод')
```
Note: no `.decode()`, no `unicode()`. If you are using a literal to create a string, you should use Unicode literals if the string contains text. It is the only option on Python 3, where you can't put non-ascii characters inside a `bytes` literal, and it is a good practice (using Unicode for text instead of bytestrings) on Python 2 too.
If you are given a bytestring as an input (not literal) by some API then its encoding has *nothing* to do with the encoding declaration. What specific encoding to use depends on the source of the data.
|
35,799,809
|
I am playing around with `unicode` in python.
So there is a simple script:
```
# -*- coding: cp1251 -*-
print 'юникод'.decode('cp1251')
print unicode('юникод', 'cp1251')
print unicode('юникод', 'utf-8')
```
In cmd I've switched encoding to `Active code page: 1251`.
And there is the output:
```
СЋРЅРёРєРѕРґ
СЋРЅРёРєРѕРґ
юникод
```
I am a little bit confused.
Since I've specified the encoding as `cp1251`, I expected it to be decoded correctly.
But as a result, some trash code points were printed instead.
I understand that `'юникод'` is just bytes like:
`'\xd1\x8e\xd0\xbd\xd0\xb8\xd0\xba\xd0\xbe\xd0\xb4'`.
But is there a way to get correct output in a terminal with `cp1251`?
Should I build the byte string manually?
Seems like I misunderstood something.
|
2016/03/04
|
[
"https://Stackoverflow.com/questions/35799809",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3990145/"
] |
Your issue is that the encoding declaration is wrong: your editor uses `utf-8` character encoding to save the source code. **Use `# -*- coding: utf-8 -*-` to fix it.**
```
>>> u'юникод'
u'\u044e\u043d\u0438\u043a\u043e\u0434'
>>> u'юникод'.encode('utf-8')
'\xd1\x8e\xd0\xbd\xd0\xb8\xd0\xba\xd0\xbe\xd0\xb4'
>>> print _.decode('cp1251') # mojibake due to the wrong encoding
СЋРЅРёРєРѕРґ
>>> print u'юникод'
юникод
```
Do not use bytestrings (`''` literals create a `bytes` object on Python 2) to represent text; **use Unicode strings** (`u''` literals -- the `unicode` type) instead.
If your code uses Unicode strings then a code page that your Windows console uses doesn't matter as long as the chosen font can display the corresponding (non-BMP) characters. See [Python, Unicode, and the Windows console](https://stackoverflow.com/q/5419/4279)
Here's complete code, for reference:
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
print(u'юникод')
```
Note: no `.decode()`, no `unicode()`. If you are using a literal to create a string, you should use Unicode literals if the string contains text. It is the only option on Python 3, where you can't put non-ascii characters inside a `bytes` literal, and it is a good practice (using Unicode for text instead of bytestrings) on Python 2 too.
If you are given a bytestring as an input (not literal) by some API then its encoding has *nothing* to do with the encoding declaration. What specific encoding to use depends on the source of the data.
|
Just use the following, but **ensure** you save the source code in the declared encoding. It can be *any* encoding that supports the characters you want to print. The terminal can be in a different encoding, as long as it *also* supports the characters you want to print:
```
#coding:utf8
print u'юникод'
```
The advantage is that you don't need to know the terminal's encoding. Python will normally¹ detect the terminal encoding and encode the print output correctly.
¹ Unless your terminal is misconfigured.
|
58,959,226
|
I am trying to install a package which needs `psycopg2` as a dependency, so I installed `psycopg2-binary` using `pip install psycopg2-binary` but when I try to `pip install django-tenant-schemas` I get this error:
```
In file included from psycopg/psycopgmodule.c:27:0:
./psycopg/psycopg.h:34:10: fatal error: Python.h: No such file or directory
#include <Python.h>
^~~~~~~~~~
compilation terminated.
You may install a binary package by installing 'psycopg2-binary' from PyPI.
If you want to install psycopg2 from source, please install the packages required for the build and try again.
For further information please check the 'doc/src/install.rst' file (also at <http://initd.org/psycopg/docs/install.html>).
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
ERROR: Command errored out with exit status 1:
/home/david/PycharmProjects/clearpath/venv/bin/python -u
-c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-ckbbq00w/psycopg2/setup.py'"'"'; __file__='"'"'/tmp/pip-install-ckbbq00w/psycopg2/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install
--record /tmp/pip-record-pi6j7x5l/install-record.txt
--single-version-externally-managed
--compile
--install-headers /home/david/PycharmProjects/clearpath/venv/include/site/python3.7/psycopg2 Check the logs for full command output.
```
When I go into my project's settings (using PyCharm) I can see psycopg2-binary is installed. I assume this has something to do with the PATH, but I can't seem to figure out how to solve the issue.
`which psql`: /usr/bin/psql
`which pg_config`: /usr/bin/pg\_config
I am not comfortable doing much in the Environment variables as I really don't want to break something.
|
2019/11/20
|
[
"https://Stackoverflow.com/questions/58959226",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10796680/"
] |
This takes one whole line fewer. Whether it's cleaner or easier to understand is up to you ....
```
int sides[3];
for (int i = 0; i < 3; i++)
{
    cout << "Enter side " << i + 1 << endl;
    cin >> sides[i];
}
```
It's good to write short code where it makes it clearer, so do keep considering how you can do that. Making it look pretty is a worthy consideration too - again as long as it makes what you're doing clearer.
Clarity is everything!!
|
To make the code more maintainable and readable:
1) Use more meaningful variable names, or if you would name them consecutively, use an array,
e.g. `int numbers[3]`
2) Similarly, when you are taking prompts like this, consider keeping the prompts in a parallel array of questions, or if they are all the same prompt, use something similar to [noelicus's](https://stackoverflow.com/a/58959334/8760895) answer.
I would do something like this:
```
int numbers[3];
string prompts[3] = {"put your", "prompts", "here"};
for (int i = 0; i < 3; i++) {
    cout << prompts[i] << endl;
    cin >> numbers[i];
}
//do math
//print output
```
Also, you may want to check that the user has actually entered a number, using [this](https://stackoverflow.com/questions/5655142/how-to-check-if-input-is-numeric-in-c).
|
58,959,226
|
I am trying to install a package which needs `psycopg2` as a dependency, so I installed `psycopg2-binary` using `pip install psycopg2-binary` but when I try to `pip install django-tenant-schemas` I get this error:
```
In file included from psycopg/psycopgmodule.c:27:0:
./psycopg/psycopg.h:34:10: fatal error: Python.h: No such file or directory
#include <Python.h>
^~~~~~~~~~
compilation terminated.
You may install a binary package by installing 'psycopg2-binary' from PyPI.
If you want to install psycopg2 from source, please install the packages required for the build and try again.
For further information please check the 'doc/src/install.rst' file (also at <http://initd.org/psycopg/docs/install.html>).
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
ERROR: Command errored out with exit status 1:
/home/david/PycharmProjects/clearpath/venv/bin/python -u
-c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-ckbbq00w/psycopg2/setup.py'"'"'; __file__='"'"'/tmp/pip-install-ckbbq00w/psycopg2/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install
--record /tmp/pip-record-pi6j7x5l/install-record.txt
--single-version-externally-managed
--compile
--install-headers /home/david/PycharmProjects/clearpath/venv/include/site/python3.7/psycopg2 Check the logs for full command output.
```
When I go into my project's settings (using PyCharm) I can see psycopg2-binary is installed. I assume this has something to do with the PATH, but I can't seem to figure out how to solve the issue.
`which psql`: /usr/bin/psql
`which pg_config`: /usr/bin/pg\_config
I am not comfortable doing much in the Environment variables as I really don't want to break something.
|
2019/11/20
|
[
"https://Stackoverflow.com/questions/58959226",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10796680/"
] |
If you really want to shorten your program and make it "cleaner", you can do it this way.
```
#include <iostream>
#include <string>

int main()
{
    std::string str = "string"; std::cin >> str;
    return 0;
}
```
You could write `cout` and `cin` on a single line like you wanted, but I don't think it's "clean" to write it that way; it's better to drop to a new line.
And please don't use `using namespace std;` - [it's a bad practice](https://stackoverflow.com/questions/1452721/why-is-using-namespace-std-considered-bad-practice).
|
To make the code more maintainable and readable:
1) Use more meaningful variable names, or if you would name them consecutively, use an array,
e.g. `int numbers[3]`
2) Similarly, when you are taking prompts like this, consider keeping the prompts in a parallel array of questions, or if they are all the same prompt, use something similar to [noelicus's](https://stackoverflow.com/a/58959334/8760895) answer.
I would do something like this:
```
int numbers[3];
string prompts[3] = {"put your", "prompts", "here"};
for (int i = 0; i < 3; i++) {
    cout << prompts[i] << endl;
    cin >> numbers[i];
}
//do math
//print output
```
Also, you may want to check that the user has actually entered a number, using [this](https://stackoverflow.com/questions/5655142/how-to-check-if-input-is-numeric-in-c).
|
61,643,039
|
When I run the cv.Canny edge detector on drawings, it detects hundreds of little edges densely packed in the shaded areas. How can I get it to stop doing that, while still detecting lighter features like eyes and nose? I tried blurring too.
Here's an example, compared with an [online photo tool](https://online.rapidresizer.com/photograph-to-pattern.php).
[Original image](https://i.stack.imgur.com/8VcUa.jpg).
[Output of online tool](https://i.stack.imgur.com/JYhjJ.png).
[My python program](https://i.stack.imgur.com/cFJ5E.png)
Here's my code:
```
def outline(image, sigma = 5):
    image = cv.GaussianBlur(image, (11, 11), sigma)
    ratio = 2
    lower = .37 * 255
    upper = lower * ratio
    outlined = cv.Canny(image, lower, upper)
    return outlined
```
How can I improve it?
|
2020/05/06
|
[
"https://Stackoverflow.com/questions/61643039",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12346436/"
] |
Here is one way to do that in Python/OpenCV.
**Morphologic edge out is the absolute difference between a mask and the dilated mask**
* Read the input
* Convert to gray
* Threshold (as mask)
* Dilate the thresholded image
* Compute the absolute difference
* Invert its polarity as the edge image
* Save the result
Input:
[](https://i.stack.imgur.com/bM0Wn.jpg)
```
import cv2
import numpy as np
# read image
img = cv2.imread("cartoon.jpg")
# convert to gray
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# threshold
thresh = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)[1]
# morphology edgeout = dilated_mask - mask
# morphology dilate
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
dilate = cv2.morphologyEx(thresh, cv2.MORPH_DILATE, kernel)
# get absolute difference between dilate and thresh
diff = cv2.absdiff(dilate, thresh)
# invert
edges = 255 - diff
# write result to disk
cv2.imwrite("cartoon_thresh.jpg", thresh)
cv2.imwrite("cartoon_dilate.jpg", dilate)
cv2.imwrite("cartoon_diff.jpg", diff)
cv2.imwrite("cartoon_edges.jpg", edges)
# display it
cv2.imshow("thresh", thresh)
cv2.imshow("dilate", dilate)
cv2.imshow("diff", diff)
cv2.imshow("edges", edges)
cv2.waitKey(0)
```
Thresholded image:
[](https://i.stack.imgur.com/KejXu.jpg)
Dilated threshold image:
[](https://i.stack.imgur.com/kUZjt.jpg)
Difference image:
[](https://i.stack.imgur.com/HXtdh.jpg)
Edge image:
[](https://i.stack.imgur.com/3xC95.jpg)
|
I was able to make `cv.Canny` give satisfactory results by changing the blur kernel size from (11, 11) to (0, 0), letting the kernel be determined dynamically from sigma. By doing this and tuning sigma, I got pretty good results. Also, `cv.imshow` distorts images, so when I was using it to test, the results looked significantly worse than they actually were.
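For reference, a minimal sketch of that change applied to the `outline` function from the question (only the blur call differs):
```
import cv2 as cv

def outline(image, sigma=5):
    # (0, 0) lets OpenCV derive the kernel size from sigma
    image = cv.GaussianBlur(image, (0, 0), sigma)
    lower = .37 * 255
    upper = lower * 2
    return cv.Canny(image, lower, upper)
```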
|
38,451,831
|
I am using Zeppelin and matplotlib to visualize some data. I tried them, but failed with the error below. Could you give me some guidance on how to fix it?
```
%pyspark
import matplotlib.pyplot as plt
plt.plot([1,2,3,4])
plt.ylabel('some numbers')
plt.show()
```
And here is the error I've got
```
Traceback (most recent call last):
File "/tmp/zeppelin_pyspark-3580576524078731606.py", line 235, in <module>
eval(compiledCode)
File "<string>", line 1, in <module>
File "/usr/lib64/python2.6/site-packages/matplotlib/pyplot.py", line 78, in <module>
new_figure_manager, draw_if_interactive, show = pylab_setup()
File "/usr/lib64/python2.6/site-packages/matplotlib/backends/__init__.py", line 25, in pylab_setup
globals(),locals(),[backend_name])
File "/usr/lib64/python2.6/site-packages/matplotlib/backends/backend_gtkagg.py", line 10, in <module>
from matplotlib.backends.backend_gtk import gtk, FigureManagerGTK, FigureCanvasGTK,\
File "/usr/lib64/python2.6/site-packages/matplotlib/backends/backend_gtk.py", line 8, in <module>
import gtk; gdk = gtk.gdk
File "/usr/lib64/python2.6/site-packages/gtk-2.0/gtk/__init__.py", line 64, in <module>
_init()
File "/usr/lib64/python2.6/site-packages/gtk-2.0/gtk/__init__.py", line 52, in _init
_gtk.init_check()
RuntimeError: could not open display
```
I also tried adding these lines, but it still doesn't work
```
import matplotlib
matplotlib.use('Agg')
```
|
2016/07/19
|
[
"https://Stackoverflow.com/questions/38451831",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6151388/"
] |
The following works for me with Spark & Python 3:
```
%pyspark
import matplotlib
import io
# If you use the use() function, this must be done before importing matplotlib.pyplot. Calling use() after pyplot has been imported will have no effect.
# see: http://matplotlib.org/faq/usage_faq.html#what-is-a-backend
matplotlib.use('Agg')
import matplotlib.pyplot as plt
def show(p):
    img = io.StringIO()
    p.savefig(img, format='svg')
    img.seek(0)
    print("%html <div style='width:600px'>" + img.getvalue() + "</div>")
plt.plot([1,2,3,4])
plt.ylabel('some numbers')
show(plt)
```
The Zeppelin [documentation](https://zeppelin.apache.org/docs/0.6.0/interpreter/python.html#matplotlib-integration) suggests that the following should work:
```
%python
import matplotlib.pyplot as plt
plt.figure()
(.. ..)
z.show(plt)
plt.close()
```
This doesn't work for me with Python 3, but looks to be addressed with the soon-to-be-merged [PR #1213](https://github.com/apache/zeppelin/pull/1213).
|
As per @eddies' suggestion, I tried it, and this is what worked for me on Zeppelin 0.6.1 with Python 2.7:
```
%python
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
plt.figure()
plt.plot([1,2,3,4])
plt.ylabel('some numbers')
z.show(plt, width='500px')
plt.close()
```
|
38,451,831
|
I am using Zeppelin and matplotlib to visualize some data. I tried them, but failed with the error below. Could you give me some guidance on how to fix it?
```
%pyspark
import matplotlib.pyplot as plt
plt.plot([1,2,3,4])
plt.ylabel('some numbers')
plt.show()
```
And here is the error I've got
```
Traceback (most recent call last):
File "/tmp/zeppelin_pyspark-3580576524078731606.py", line 235, in <module>
eval(compiledCode)
File "<string>", line 1, in <module>
File "/usr/lib64/python2.6/site-packages/matplotlib/pyplot.py", line 78, in <module>
new_figure_manager, draw_if_interactive, show = pylab_setup()
File "/usr/lib64/python2.6/site-packages/matplotlib/backends/__init__.py", line 25, in pylab_setup
globals(),locals(),[backend_name])
File "/usr/lib64/python2.6/site-packages/matplotlib/backends/backend_gtkagg.py", line 10, in <module>
from matplotlib.backends.backend_gtk import gtk, FigureManagerGTK, FigureCanvasGTK,\
File "/usr/lib64/python2.6/site-packages/matplotlib/backends/backend_gtk.py", line 8, in <module>
import gtk; gdk = gtk.gdk
File "/usr/lib64/python2.6/site-packages/gtk-2.0/gtk/__init__.py", line 64, in <module>
_init()
File "/usr/lib64/python2.6/site-packages/gtk-2.0/gtk/__init__.py", line 52, in _init
_gtk.init_check()
RuntimeError: could not open display
```
I also tried adding these lines, but it still doesn't work
```
import matplotlib
matplotlib.use('Agg')
```
|
2016/07/19
|
[
"https://Stackoverflow.com/questions/38451831",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6151388/"
] |
The following works for me with Spark & Python 3:
```
%pyspark
import matplotlib
import io
# If you use the use() function, this must be done before importing matplotlib.pyplot. Calling use() after pyplot has been imported will have no effect.
# see: http://matplotlib.org/faq/usage_faq.html#what-is-a-backend
matplotlib.use('Agg')
import matplotlib.pyplot as plt
def show(p):
    img = io.StringIO()
    p.savefig(img, format='svg')
    img.seek(0)
    print("%html <div style='width:600px'>" + img.getvalue() + "</div>")
plt.plot([1,2,3,4])
plt.ylabel('some numbers')
show(plt)
```
The Zeppelin [documentation](https://zeppelin.apache.org/docs/0.6.0/interpreter/python.html#matplotlib-integration) suggests that the following should work:
```
%python
import matplotlib.pyplot as plt
plt.figure()
(.. ..)
z.show(plt)
plt.close()
```
This doesn't work for me with Python 3, but looks to be addressed with the soon-to-be-merged [PR #1213](https://github.com/apache/zeppelin/pull/1213).
|
Note that as of Zeppelin 0.7.3, matplotlib integration is much more seamless, so the methods described here are no longer necessary. <https://zeppelin.apache.org/docs/latest/interpreter/python.html#matplotlib-integration>
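Assuming a Zeppelin 0.7.3+ `%python` interpreter with that integration enabled, the snippet from the question should then render inline as-is; a minimal sketch:
```
%python
import matplotlib.pyplot as plt

plt.plot([1, 2, 3, 4])
plt.ylabel('some numbers')
plt.show()  # rendered inline by Zeppelin's matplotlib integration
```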
|
38,451,831
|
I am using Zeppelin and matplotlib to visualize some data. I tried them, but failed with the error below. Could you give me some guidance on how to fix it?
```
%pyspark
import matplotlib.pyplot as plt
plt.plot([1,2,3,4])
plt.ylabel('some numbers')
plt.show()
```
And here is the error I've got
```
Traceback (most recent call last):
File "/tmp/zeppelin_pyspark-3580576524078731606.py", line 235, in <module>
eval(compiledCode)
File "<string>", line 1, in <module>
File "/usr/lib64/python2.6/site-packages/matplotlib/pyplot.py", line 78, in <module>
new_figure_manager, draw_if_interactive, show = pylab_setup()
File "/usr/lib64/python2.6/site-packages/matplotlib/backends/__init__.py", line 25, in pylab_setup
globals(),locals(),[backend_name])
File "/usr/lib64/python2.6/site-packages/matplotlib/backends/backend_gtkagg.py", line 10, in <module>
from matplotlib.backends.backend_gtk import gtk, FigureManagerGTK, FigureCanvasGTK,\
File "/usr/lib64/python2.6/site-packages/matplotlib/backends/backend_gtk.py", line 8, in <module>
import gtk; gdk = gtk.gdk
File "/usr/lib64/python2.6/site-packages/gtk-2.0/gtk/__init__.py", line 64, in <module>
_init()
File "/usr/lib64/python2.6/site-packages/gtk-2.0/gtk/__init__.py", line 52, in _init
_gtk.init_check()
RuntimeError: could not open display
```
I also tried adding these lines, but it still doesn't work
```
import matplotlib
matplotlib.use('Agg')
```
|
2016/07/19
|
[
"https://Stackoverflow.com/questions/38451831",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6151388/"
] |
The following works for me with Spark & Python 3:
```
%pyspark
import matplotlib
import io
# If you use the use() function, this must be done before importing matplotlib.pyplot. Calling use() after pyplot has been imported will have no effect.
# see: http://matplotlib.org/faq/usage_faq.html#what-is-a-backend
matplotlib.use('Agg')
import matplotlib.pyplot as plt
def show(p):
    img = io.StringIO()
    p.savefig(img, format='svg')
    img.seek(0)
    print("%html <div style='width:600px'>" + img.getvalue() + "</div>")
plt.plot([1,2,3,4])
plt.ylabel('some numbers')
show(plt)
```
The Zeppelin [documentation](https://zeppelin.apache.org/docs/0.6.0/interpreter/python.html#matplotlib-integration) suggests that the following should work:
```
%python
import matplotlib.pyplot as plt
plt.figure()
(.. ..)
z.show(plt)
plt.close()
```
This doesn't work for me with Python 3, but looks to be addressed with the soon-to-be-merged [PR #1213](https://github.com/apache/zeppelin/pull/1213).
|
Change this:
```
import matplotlib
matplotlib.use('Agg')
```
to this:
```
import matplotlib.pyplot as plt; plt.rcdefaults()
plt.switch_backend('agg')
```
Complete code example Spark 2.2.0 + python3(anaconda3.5):
```
%spark.pyspark
import matplotlib.pyplot as plt; plt.rcdefaults()
plt.switch_backend('agg')
import numpy as np
import io
def show(p):
    img = io.StringIO()
    p.savefig(img, format='svg')
    img.seek(0)
    print("%html <div style='width:600px'>" + img.getvalue() + "</div>")
# Example data
people=('Tom', 'Dick', 'Harry', 'Slim', 'Jim')
y_pos=np.arange(len(people))
performance=3 + 10 * np.random.rand(len(people))
error=np.random.rand(len(people))
plt.barh(y_pos, performance, xerr=error, align='center', alpha=0.4)
plt.yticks(y_pos, people)
plt.xlabel('Performance')
plt.title('How fast do you want to go today?')
show(plt)
```
|
38,451,831
|
I am using Zeppelin and matplotlib to visualize some data. I tried them, but failed with the error below. Could you give me some guidance on how to fix it?
```
%pyspark
import matplotlib.pyplot as plt
plt.plot([1,2,3,4])
plt.ylabel('some numbers')
plt.show()
```
And here is the error I've got
```
Traceback (most recent call last):
File "/tmp/zeppelin_pyspark-3580576524078731606.py", line 235, in <module>
eval(compiledCode)
File "<string>", line 1, in <module>
File "/usr/lib64/python2.6/site-packages/matplotlib/pyplot.py", line 78, in <module>
new_figure_manager, draw_if_interactive, show = pylab_setup()
File "/usr/lib64/python2.6/site-packages/matplotlib/backends/__init__.py", line 25, in pylab_setup
globals(),locals(),[backend_name])
File "/usr/lib64/python2.6/site-packages/matplotlib/backends/backend_gtkagg.py", line 10, in <module>
from matplotlib.backends.backend_gtk import gtk, FigureManagerGTK, FigureCanvasGTK,\
File "/usr/lib64/python2.6/site-packages/matplotlib/backends/backend_gtk.py", line 8, in <module>
import gtk; gdk = gtk.gdk
File "/usr/lib64/python2.6/site-packages/gtk-2.0/gtk/__init__.py", line 64, in <module>
_init()
File "/usr/lib64/python2.6/site-packages/gtk-2.0/gtk/__init__.py", line 52, in _init
_gtk.init_check()
RuntimeError: could not open display
```
I also tried adding these lines, but it still doesn't work
```
import matplotlib
matplotlib.use('Agg')
```
|
2016/07/19
|
[
"https://Stackoverflow.com/questions/38451831",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6151388/"
] |
The following works for me with Spark & Python 3:
```
%pyspark
import matplotlib
import io
# If you use the use() function, this must be done before importing matplotlib.pyplot. Calling use() after pyplot has been imported will have no effect.
# see: http://matplotlib.org/faq/usage_faq.html#what-is-a-backend
matplotlib.use('Agg')
import matplotlib.pyplot as plt
def show(p):
    img = io.StringIO()
    p.savefig(img, format='svg')
    img.seek(0)
    print("%html <div style='width:600px'>" + img.getvalue() + "</div>")
plt.plot([1,2,3,4])
plt.ylabel('some numbers')
show(plt)
```
The Zeppelin [documentation](https://zeppelin.apache.org/docs/0.6.0/interpreter/python.html#matplotlib-integration) suggests that the following should work:
```
%python
import matplotlib.pyplot as plt
plt.figure()
(.. ..)
z.show(plt)
plt.close()
```
This doesn't work for me with Python 3, but looks to be addressed with the soon-to-be-merged [PR #1213](https://github.com/apache/zeppelin/pull/1213).
|
I would suggest using the IPython/IPySpark interpreter in Zeppelin 0.8.0, which will be released soon. The matplotlib integration in IPython is almost the same as in Jupyter. There's a tutorial: <https://www.zepl.com/viewer/notebooks/bm90ZTovL3pqZmZkdS9lN2Q3ODNiODRkNjA0ZjVjODM1OWZlMWExZjM4OTk3Zi9ub3RlLmpzb24>
|
38,451,831
|
I am using Zeppelin and matplotlib to visualize some data. I tried them, but failed with the error below. Could you give me some guidance on how to fix it?
```
%pyspark
import matplotlib.pyplot as plt
plt.plot([1,2,3,4])
plt.ylabel('some numbers')
plt.show()
```
And here is the error I've got
```
Traceback (most recent call last):
File "/tmp/zeppelin_pyspark-3580576524078731606.py", line 235, in <module>
eval(compiledCode)
File "<string>", line 1, in <module>
File "/usr/lib64/python2.6/site-packages/matplotlib/pyplot.py", line 78, in <module>
new_figure_manager, draw_if_interactive, show = pylab_setup()
File "/usr/lib64/python2.6/site-packages/matplotlib/backends/__init__.py", line 25, in pylab_setup
globals(),locals(),[backend_name])
File "/usr/lib64/python2.6/site-packages/matplotlib/backends/backend_gtkagg.py", line 10, in <module>
from matplotlib.backends.backend_gtk import gtk, FigureManagerGTK, FigureCanvasGTK,\
File "/usr/lib64/python2.6/site-packages/matplotlib/backends/backend_gtk.py", line 8, in <module>
import gtk; gdk = gtk.gdk
File "/usr/lib64/python2.6/site-packages/gtk-2.0/gtk/__init__.py", line 64, in <module>
_init()
File "/usr/lib64/python2.6/site-packages/gtk-2.0/gtk/__init__.py", line 52, in _init
_gtk.init_check()
RuntimeError: could not open display
```
I also tried adding these lines, but it still doesn't work
```
import matplotlib
matplotlib.use('Agg')
```
|
2016/07/19
|
[
"https://Stackoverflow.com/questions/38451831",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6151388/"
] |
Note that as of Zeppelin 0.7.3, matplotlib integration is much more seamless, so the methods described here are no longer necessary. <https://zeppelin.apache.org/docs/latest/interpreter/python.html#matplotlib-integration>
|
As per @eddies' suggestion, I tried it, and this is what worked for me on Zeppelin 0.6.1 with Python 2.7:
```
%python
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
plt.figure()
plt.plot([1,2,3,4])
plt.ylabel('some numbers')
z.show(plt, width='500px')
plt.close()
```
|
38,451,831
|
I am using Zeppelin and matplotlib to visualize some data. I tried them, but failed with the error below. Could you give me some guidance on how to fix it?
```
%pyspark
import matplotlib.pyplot as plt
plt.plot([1,2,3,4])
plt.ylabel('some numbers')
plt.show()
```
And here is the error I've got
```
Traceback (most recent call last):
File "/tmp/zeppelin_pyspark-3580576524078731606.py", line 235, in <module>
eval(compiledCode)
File "<string>", line 1, in <module>
File "/usr/lib64/python2.6/site-packages/matplotlib/pyplot.py", line 78, in <module>
new_figure_manager, draw_if_interactive, show = pylab_setup()
File "/usr/lib64/python2.6/site-packages/matplotlib/backends/__init__.py", line 25, in pylab_setup
globals(),locals(),[backend_name])
File "/usr/lib64/python2.6/site-packages/matplotlib/backends/backend_gtkagg.py", line 10, in <module>
from matplotlib.backends.backend_gtk import gtk, FigureManagerGTK, FigureCanvasGTK,\
File "/usr/lib64/python2.6/site-packages/matplotlib/backends/backend_gtk.py", line 8, in <module>
import gtk; gdk = gtk.gdk
File "/usr/lib64/python2.6/site-packages/gtk-2.0/gtk/__init__.py", line 64, in <module>
_init()
File "/usr/lib64/python2.6/site-packages/gtk-2.0/gtk/__init__.py", line 52, in _init
_gtk.init_check()
RuntimeError: could not open display
```
I also tried adding these lines, but it still doesn't work
```
import matplotlib
matplotlib.use('Agg')
```
|
2016/07/19
|
[
"https://Stackoverflow.com/questions/38451831",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6151388/"
] |
As per @eddies' suggestion, I tried it, and this is what worked for me on Zeppelin 0.6.1 with Python 2.7:
```
%python
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
plt.figure()
plt.plot([1,2,3,4])
plt.ylabel('some numbers')
z.show(plt, width='500px')
plt.close()
```
|
I would suggest using the IPython/IPySpark interpreter in Zeppelin 0.8.0, which will be released soon. The matplotlib integration in IPython is almost the same as in Jupyter. There's a tutorial: <https://www.zepl.com/viewer/notebooks/bm90ZTovL3pqZmZkdS9lN2Q3ODNiODRkNjA0ZjVjODM1OWZlMWExZjM4OTk3Zi9ub3RlLmpzb24>
|
38,451,831
|
I am using Zeppelin and matplotlib to visualize some data. I tried them, but failed with the error below. Could you give me some guidance on how to fix it?
```
%pyspark
import matplotlib.pyplot as plt
plt.plot([1,2,3,4])
plt.ylabel('some numbers')
plt.show()
```
And here is the error I've got
```
Traceback (most recent call last):
File "/tmp/zeppelin_pyspark-3580576524078731606.py", line 235, in <module>
eval(compiledCode)
File "<string>", line 1, in <module>
File "/usr/lib64/python2.6/site-packages/matplotlib/pyplot.py", line 78, in <module>
new_figure_manager, draw_if_interactive, show = pylab_setup()
File "/usr/lib64/python2.6/site-packages/matplotlib/backends/__init__.py", line 25, in pylab_setup
globals(),locals(),[backend_name])
File "/usr/lib64/python2.6/site-packages/matplotlib/backends/backend_gtkagg.py", line 10, in <module>
from matplotlib.backends.backend_gtk import gtk, FigureManagerGTK, FigureCanvasGTK,\
File "/usr/lib64/python2.6/site-packages/matplotlib/backends/backend_gtk.py", line 8, in <module>
import gtk; gdk = gtk.gdk
File "/usr/lib64/python2.6/site-packages/gtk-2.0/gtk/__init__.py", line 64, in <module>
_init()
File "/usr/lib64/python2.6/site-packages/gtk-2.0/gtk/__init__.py", line 52, in _init
_gtk.init_check()
RuntimeError: could not open display
```
I also tried adding these lines, but it still doesn't work
```
import matplotlib
matplotlib.use('Agg')
```
|
2016/07/19
|
[
"https://Stackoverflow.com/questions/38451831",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6151388/"
] |
Note that as of Zeppelin 0.7.3, matplotlib integration is much more seamless, so the methods described here are no longer necessary. <https://zeppelin.apache.org/docs/latest/interpreter/python.html#matplotlib-integration>
|
Change this:
```
import matplotlib
matplotlib.use('Agg')
```
to this:
```
import matplotlib.pyplot as plt; plt.rcdefaults()
plt.switch_backend('agg')
```
Complete code example Spark 2.2.0 + python3(anaconda3.5):
```
%spark.pyspark
import matplotlib.pyplot as plt; plt.rcdefaults()
plt.switch_backend('agg')
import numpy as np
import io
def show(p):
    img = io.StringIO()
    p.savefig(img, format='svg')
    img.seek(0)
    print("%html <div style='width:600px'>" + img.getvalue() + "</div>")
# Example data
people=('Tom', 'Dick', 'Harry', 'Slim', 'Jim')
y_pos=np.arange(len(people))
performance=3 + 10 * np.random.rand(len(people))
error=np.random.rand(len(people))
plt.barh(y_pos, performance, xerr=error, align='center', alpha=0.4)
plt.yticks(y_pos, people)
plt.xlabel('Performance')
plt.title('How fast do you want to go today?')
show(plt)
```
|
38,451,831
|
I am using Zeppelin and matplotlib to visualize some data. I tried them, but failed with the error below. Could you give me some guidance on how to fix it?
```
%pyspark
import matplotlib.pyplot as plt
plt.plot([1,2,3,4])
plt.ylabel('some numbers')
plt.show()
```
And here is the error I've got
```
Traceback (most recent call last):
File "/tmp/zeppelin_pyspark-3580576524078731606.py", line 235, in <module>
eval(compiledCode)
File "<string>", line 1, in <module>
File "/usr/lib64/python2.6/site-packages/matplotlib/pyplot.py", line 78, in <module>
new_figure_manager, draw_if_interactive, show = pylab_setup()
File "/usr/lib64/python2.6/site-packages/matplotlib/backends/__init__.py", line 25, in pylab_setup
globals(),locals(),[backend_name])
File "/usr/lib64/python2.6/site-packages/matplotlib/backends/backend_gtkagg.py", line 10, in <module>
from matplotlib.backends.backend_gtk import gtk, FigureManagerGTK, FigureCanvasGTK,\
File "/usr/lib64/python2.6/site-packages/matplotlib/backends/backend_gtk.py", line 8, in <module>
import gtk; gdk = gtk.gdk
File "/usr/lib64/python2.6/site-packages/gtk-2.0/gtk/__init__.py", line 64, in <module>
_init()
File "/usr/lib64/python2.6/site-packages/gtk-2.0/gtk/__init__.py", line 52, in _init
_gtk.init_check()
RuntimeError: could not open display
```
I also tried adding these lines, but it still doesn't work
```
import matplotlib
matplotlib.use('Agg')
```
|
2016/07/19
|
[
"https://Stackoverflow.com/questions/38451831",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6151388/"
] |
Note that as of Zeppelin 0.7.3, matplotlib integration is much more seamless, so the methods described here are no longer necessary: <https://zeppelin.apache.org/docs/latest/interpreter/python.html#matplotlib-integration>
|
I would suggest using the IPython/IPySpark interpreter in Zeppelin 0.8.0, which will be released soon. The matplotlib integration in IPython is almost the same as in Jupyter. There's a tutorial: <https://www.zepl.com/viewer/notebooks/bm90ZTovL3pqZmZkdS9lN2Q3ODNiODRkNjA0ZjVjODM1OWZlMWExZjM4OTk3Zi9ub3RlLmpzb24>
|
38,451,831
|
I am using Zeppelin and matplotlib to visualize some data. I tried them, but failed with the error below. Could you give me some guidance on how to fix it?
```
%pyspark
import matplotlib.pyplot as plt
plt.plot([1,2,3,4])
plt.ylabel('some numbers')
plt.show()
```
And here is the error I've got
```
Traceback (most recent call last):
File "/tmp/zeppelin_pyspark-3580576524078731606.py", line 235, in <module>
eval(compiledCode)
File "<string>", line 1, in <module>
File "/usr/lib64/python2.6/site-packages/matplotlib/pyplot.py", line 78, in <module>
new_figure_manager, draw_if_interactive, show = pylab_setup()
File "/usr/lib64/python2.6/site-packages/matplotlib/backends/__init__.py", line 25, in pylab_setup
globals(),locals(),[backend_name])
File "/usr/lib64/python2.6/site-packages/matplotlib/backends/backend_gtkagg.py", line 10, in <module>
from matplotlib.backends.backend_gtk import gtk, FigureManagerGTK, FigureCanvasGTK,\
File "/usr/lib64/python2.6/site-packages/matplotlib/backends/backend_gtk.py", line 8, in <module>
import gtk; gdk = gtk.gdk
File "/usr/lib64/python2.6/site-packages/gtk-2.0/gtk/__init__.py", line 64, in <module>
_init()
File "/usr/lib64/python2.6/site-packages/gtk-2.0/gtk/__init__.py", line 52, in _init
_gtk.init_check()
RuntimeError: could not open display
```
I also tried adding these lines, but it still doesn't work
```
import matplotlib
matplotlib.use('Agg')
```
|
2016/07/19
|
[
"https://Stackoverflow.com/questions/38451831",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6151388/"
] |
Replace this:
```
import matplotlib
matplotlib.use('Agg')
```
with
```
import matplotlib.pyplot as plt; plt.rcdefaults()
plt.switch_backend('agg')
```
Complete code example (Spark 2.2.0 + Python 3, Anaconda 3.5):
```
%spark.pyspark
import matplotlib.pyplot as plt; plt.rcdefaults()
plt.switch_backend('agg')
import numpy as np
import io
def show(p):
    # write the figure to an in-memory buffer as SVG text
    img = io.StringIO()
    p.savefig(img, format='svg')
    img.seek(0)
    # hand the SVG markup to Zeppelin's %html display system
    print("%html <div style='width:600px'>" + img.getvalue() + "</div>")
# Example data
people=('Tom', 'Dick', 'Harry', 'Slim', 'Jim')
y_pos=np.arange(len(people))
performance=3 + 10 * np.random.rand(len(people))
error=np.random.rand(len(people))
plt.barh(y_pos, performance, xerr=error, align='center', alpha=0.4)
plt.yticks(y_pos, people)
plt.xlabel('Performance')
plt.title('How fast do you want to go today?')
show(plt)
```
|
I would suggest you use the IPython/IPySpark interpreter in Zeppelin 0.8.0, which will be released soon. The matplotlib integration in IPython is almost the same as in Jupyter. There's a tutorial here: <https://www.zepl.com/viewer/notebooks/bm90ZTovL3pqZmZkdS9lN2Q3ODNiODRkNjA0ZjVjODM1OWZlMWExZjM4OTk3Zi9ub3RlLmpzb24>
|
8,510,615
|
I have Ubuntu 11.10. I installed pypy via apt-get from this launchpad repository: <https://launchpad.net/~pypy>. The computer already has Python on it, and Python has its own pip. How can I install pip for pypy, and how can I use it separately from Python's pip?
|
2011/12/14
|
[
"https://Stackoverflow.com/questions/8510615",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1098562/"
] |
To keep a separate installation, you might want to create a [virtualenv](http://pypi.python.org/pypi/virtualenv) for PyPy. Within the virtualenv, you can then just run `pip install whatever` and it will install it for PyPy. When you create a virtualenv, it automatically installs pip for you.
Otherwise, you will need to work out where PyPy will import from and install distribute and pip in one of those locations. [pip's installer](http://www.pip-installer.org/en/latest/installing.html) should do this automatically when run with PyPy. Be careful with this option - if it decides to install in your system Python directories, it could break other things.
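A minimal sketch of the virtualenv route, assuming `pypy` lives at `/usr/bin/pypy` (adjust the path to your install):
```
$ virtualenv -p /usr/bin/pypy pypy-env   # create an env whose interpreter is PyPy
$ source pypy-env/bin/activate           # pip now targets PyPy
(pypy-env)$ pip install pygments         # installs into the PyPy env only
```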
|
The problem with the `pip` installed by `pypy` (at least when installing `pypy` via `apt-get`) is that it lands on the system path:
```
$ whereis pip
pip: /usr/local/bin/pip /usr/bin/pip
```
So after such an install, the `pypy` pip (/usr/local/bin/pip) is executed by default instead of the `python` pip (/usr/bin/pip), which may break subsequent updates of the whole Ubuntu system.
The problem with `virtualenv` is that you have to remember where and which env you created.
A convenient alternative is `conda` (miniconda), which manages more than just Python deployments: <http://conda.pydata.org/miniconda.html>.
Comparison of `conda`, `pip` and `virtualenv`:
<http://conda.pydata.org/docs/_downloads/conda-pip-virtualenv-translator.html>
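A minimal sketch of the conda route (assuming miniconda is installed and that your channel, e.g. conda-forge, provides a `pypy` package, which may not hold for every version):
```
$ conda create -c conda-forge -n pypy-env pypy   # isolated env with PyPy
$ conda activate pypy-env
$ pypy -m ensurepip                              # bootstrap pip if it isn't bundled
$ pypy -m pip install pygments
```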
|
8,510,615
|
I have Ubuntu 11.10. I installed pypy via apt-get from this launchpad repository: <https://launchpad.net/~pypy>. The computer already has Python on it, and Python has its own pip. How can I install pip for pypy, and how can I use it separately from Python's pip?
|
2011/12/14
|
[
"https://Stackoverflow.com/questions/8510615",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1098562/"
] |
To keep a separate installation, you might want to create a [virtualenv](http://pypi.python.org/pypi/virtualenv) for PyPy. Within the virtualenv, you can then just run `pip install whatever` and it will install it for PyPy. When you create a virtualenv, it automatically installs pip for you.
Otherwise, you will need to work out where PyPy will import from and install distribute and pip in one of those locations. [pip's installer](http://www.pip-installer.org/en/latest/installing.html) should do this automatically when run with PyPy. Be careful with this option - if it decides to install in your system Python directories, it could break other things.
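A minimal sketch of the virtualenv route, assuming `pypy` lives at `/usr/bin/pypy` (adjust the path to your install):
```
$ virtualenv -p /usr/bin/pypy pypy-env   # create an env whose interpreter is PyPy
$ source pypy-env/bin/activate           # pip now targets PyPy
(pypy-env)$ pip install pygments         # installs into the PyPy env only
```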
|
If you want to use pip with pypy:
```
pypy -m pip install [package]
```
pip is included with pypy, so just target pip with the `-m` flag.
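If your PyPy build does not ship pip, the bundled `ensurepip` module can usually bootstrap it first (a hedged suggestion; availability depends on the PyPy version):
```
pypy -m ensurepip
pypy -m pip install [package]
```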
|
8,510,615
|
I have Ubuntu 11.10. I installed pypy via apt-get from this launchpad repository: <https://launchpad.net/~pypy>. The computer already has Python on it, and Python has its own pip. How can I install pip for pypy, and how can I use it separately from Python's pip?
|
2011/12/14
|
[
"https://Stackoverflow.com/questions/8510615",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1098562/"
] |
Quoting (with minor changes) from the [pypy website](http://doc.pypy.org/en/latest/install.html):
>
> If you want to install 3rd party libraries, the most convenient way is
> to install pip:
>
>
>
> ```
> $ curl -O https://bootstrap.pypa.io/get-pip.py
> $ ./pypy-2.1/bin/pypy get-pip.py
> $ ./pypy-2.1/bin/pip install pygments # for example
>
> ```
>
>
To use it conveniently, you might want to add an alias to e.g. `~/.bashrc`:
```
alias pypy_pip='./pypy-2.1/bin/pip'
```
The actual location of the pip executable has to be taken from the output of `pypy get-pip.py`.
|
The problem with the `pip` installed by `pypy` (at least when installing `pypy` via `apt-get`) is that it lands on the system path:
```
$ whereis pip
pip: /usr/local/bin/pip /usr/bin/pip
```
So after such an install, the `pypy` pip (/usr/local/bin/pip) is executed by default instead of the `python` pip (/usr/bin/pip), which may break subsequent updates of the whole Ubuntu system.
The problem with `virtualenv` is that you have to remember where and which env you created.
A convenient alternative is `conda` (miniconda), which manages more than just Python deployments: <http://conda.pydata.org/miniconda.html>.
Comparison of `conda`, `pip` and `virtualenv`:
<http://conda.pydata.org/docs/_downloads/conda-pip-virtualenv-translator.html>
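A minimal sketch of the conda route (assuming miniconda is installed and that your channel, e.g. conda-forge, provides a `pypy` package, which may not hold for every version):
```
$ conda create -c conda-forge -n pypy-env pypy   # isolated env with PyPy
$ conda activate pypy-env
$ pypy -m ensurepip                              # bootstrap pip if it isn't bundled
$ pypy -m pip install pygments
```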
|
8,510,615
|
I have Ubuntu 11.10. I installed pypy via apt-get from this launchpad repository: <https://launchpad.net/~pypy>. The computer already has Python on it, and Python has its own pip. How can I install pip for pypy, and how can I use it separately from Python's pip?
|
2011/12/14
|
[
"https://Stackoverflow.com/questions/8510615",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1098562/"
] |
Quoting (with minor changes) from the [pypy website](http://doc.pypy.org/en/latest/install.html):
>
> If you want to install 3rd party libraries, the most convenient way is
> to install pip:
>
>
>
> ```
> $ curl -O https://bootstrap.pypa.io/get-pip.py
> $ ./pypy-2.1/bin/pypy get-pip.py
> $ ./pypy-2.1/bin/pip install pygments # for example
>
> ```
>
>
To use it conveniently, you might want to add an alias to e.g. `~/.bashrc`:
```
alias pypy_pip='./pypy-2.1/bin/pip'
```
The actual location of the pip executable has to be taken from the output of `pypy get-pip.py`.
|
If you want to use pip with pypy:
```
pypy -m pip install [package]
```
pip is included with pypy, so just target pip with the `-m` flag.
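If your PyPy build does not ship pip, the bundled `ensurepip` module can usually bootstrap it first (a hedged suggestion; availability depends on the PyPy version):
```
pypy -m ensurepip
pypy -m pip install [package]
```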
|
8,510,615
|
I have Ubuntu 11.10. I installed pypy via apt-get from this launchpad repository: <https://launchpad.net/~pypy>. The computer already has Python on it, and Python has its own pip. How can I install pip for pypy, and how can I use it separately from Python's pip?
|
2011/12/14
|
[
"https://Stackoverflow.com/questions/8510615",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1098562/"
] |
If you want to use pip with pypy:
```
pypy -m pip install [package]
```
pip is included with pypy, so just target pip with the `-m` flag.
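If your PyPy build does not ship pip, the bundled `ensurepip` module can usually bootstrap it first (a hedged suggestion; availability depends on the PyPy version):
```
pypy -m ensurepip
pypy -m pip install [package]
```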
|
The problem with the `pip` installed by `pypy` (at least when installing `pypy` via `apt-get`) is that it lands on the system path:
```
$ whereis pip
pip: /usr/local/bin/pip /usr/bin/pip
```
So after such an install, the `pypy` pip (/usr/local/bin/pip) is executed by default instead of the `python` pip (/usr/bin/pip), which may break subsequent updates of the whole Ubuntu system.
The problem with `virtualenv` is that you have to remember where and which env you created.
A convenient alternative is `conda` (miniconda), which manages more than just Python deployments: <http://conda.pydata.org/miniconda.html>.
Comparison of `conda`, `pip` and `virtualenv`:
<http://conda.pydata.org/docs/_downloads/conda-pip-virtualenv-translator.html>
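A minimal sketch of the conda route (assuming miniconda is installed and that your channel, e.g. conda-forge, provides a `pypy` package, which may not hold for every version):
```
$ conda create -c conda-forge -n pypy-env pypy   # isolated env with PyPy
$ conda activate pypy-env
$ pypy -m ensurepip                              # bootstrap pip if it isn't bundled
$ pypy -m pip install pygments
```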
|
63,191,779
|
I previously created a Python script that builds an author index.
To spare you the details (extracting text from a PDF was pretty hard), I created
a minimal reproducible example. My current status is that I get a new line for each author and
a comma-separated list of the pages on which the author appears.
However, I would like to sort the list of pages in ascending order.
```
import pandas as pd
import csv
words = ["Autor1","Max Mustermann","Max Mustermann","Autor1","Bertha Musterfrau","Author2"]
pages = [15,13,5,1,17,20]
str_pages = list(map(str, pages))
df = pd.DataFrame({"Autor":words,"Pages":str_pages})
df = df.drop_duplicates().sort_values(by="Autor").reset_index(drop=True)
df = df.groupby("Autor")['Pages'].apply(lambda x: ','.join(x)).reset_index()
df
```
This produces the desired output (except the sorting of the pages).
```
Autor Pages
0 Author2 20
1 Autor1 15,1
2 Bertha Musterfrau 17
3 Max Mustermann 13,5
```
I tried converting the `Pages` column to string, splitting on the comma, and applying a lambda function that is supposed to sort the resulting list.
```
df["Pages"] = df["Pages"].str.split(",").apply(lambda x: sorted(x))
df
```
However, this only worked for `Autor1` but not for `Max Mustermann`.
I can't seem to figure out why this is the case:
```
Autor Pages
0 Author2 [20]
1 Autor1 [1, 15]
2 Bertha Musterfrau [17]
3 Max Mustermann [13, 5]
```
|
2020/07/31
|
[
"https://Stackoverflow.com/questions/63191779",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7318488/"
] |
`str.split` returns lists of strings, so `lambda x: sorted(x)` still sorts strings, not integers. Lexicographically, `'13' < '5'`, which is why `['1', '15']` happens to look right while `['13', '5']` does not.
You can try:
```
df['Pages'] = (df.Pages.str.split(',')
.explode().astype(int)
.sort_values()
.groupby(level=0).agg(list)
)
```
Output:
```
Autor Pages
0 Author2 [20]
1 Autor1 [1, 15]
2 Bertha Musterfrau [17]
3 Max Mustermann [5, 13]
```
|
If you want to use your existing approach,
```
df.Pages = (
    df.Pages.str.split(",")
    .apply(lambda x: sorted(x, key=int))
)
```
---
```
Autor Pages
0 Author2 [20]
1 Autor1 [1, 15]
2 Bertha Musterfrau [17]
3 Max Mustermann [5, 13]
```
|
53,796,705
|
Why does this happen in Python 3.6.1 with simple code like this?
```
print(f'\xe4')
```
Result:
```
Traceback (most recent call last):
File "<pyshell#16>", line 1, in <module>
print(f'\xe4')
File "<pyshell#13>", line 1, in <lambda>
print = lambda text, end='\n', file=sys.stdout: print(text, end=end, file=file)
File "<pyshell#13>", line 1, in <lambda>
print = lambda text, end='\n', file=sys.stdout: print(text, end=end, file=file)
File "<pyshell#13>", line 1, in <lambda>
print = lambda text, end='\n', file=sys.stdout: print(text, end=end, file=file)
[Previous line repeated 990 more times]
RecursionError: maximum recursion depth exceeded
```
|
2018/12/15
|
[
"https://Stackoverflow.com/questions/53796705",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10540454/"
] |
So let's recap: you have overridden the built-in `print` function with this:
```
print = lambda text, end='\n', file=sys.stdout: print(text, end=end, file=file)
```
Which is the same as
```
def print(text, end='\n', file=sys.stdout):
print(text, end=end, file=file)
```
As you can see, this function calls itself recursively, but there is no base case, no condition under which it finishes. You end up with a classic example of infinite recursion.
This has absolutely nothing to do with Unicode or formatting. Simply do not name your functions after builtins:
```
def my_print(text, end='\n', file=sys.stdout):
print(text, end=end, file=file)
my_print('abc') # works
```
Or at least keep the reference to the original:
```
print_ = print
def print(text, end='\n', file=sys.stdout):
print_(text, end=end, file=file)
print('abc') # works as well
```
Note: if the function is already overwritten, you will have to run `del print` (or restart the interpreter) to get back the original builtin.
|
It works for me as well, but maybe this will work for you:
```
print(chr(0xe4))
```
|