| qid | question | date | metadata | response_j | response_k |
|---|---|---|---|---|---|
9,002,569
|
I am using Python 2.7 and I would like to do some web2py stuff. I am trying to run web2py from Eclipse and I am getting the following error when trying to run the web2py.py script:
WARNING:web2py:GUI not available because Tk library is not installed
I am running on Windows 7 (64-bit).
Any suggestions?
Thank you
|
2012/01/25
|
[
"https://Stackoverflow.com/questions/9002569",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/945446/"
] |
Simplest solution: The Oracle client is not installed on the remote server where the SSIS package is being executed.
Slightly less simple solution: The Oracle client is installed on the remote server, but in the wrong bit-count for the SSIS installation. For example, if the 64-bit Oracle client is installed but SSIS is being executed with the 32-bit `dtexec` executable, SSIS will not be able to find the Oracle client.
The solution in this case would be to install the 32-bit Oracle client side-by-side with the 64-bit client.
|
After you install the Oracle client components on the remote server, restart SQL Server Agent from the Computer Management console or directly from SQL Server Management Studio. This allows the service to correctly load the path to the Oracle components. Otherwise your package will work at design time but fail at run time.
|
9,002,569
|
I am using Python 2.7 and I would like to do some web2py stuff. I am trying to run web2py from Eclipse and I am getting the following error when trying to run the web2py.py script:
WARNING:web2py:GUI not available because Tk library is not installed
I am running on Windows 7 (64-bit).
Any suggestions?
Thank you
|
2012/01/25
|
[
"https://Stackoverflow.com/questions/9002569",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/945446/"
] |
Simplest solution: The Oracle client is not installed on the remote server where the SSIS package is being executed.
Slightly less simple solution: The Oracle client is installed on the remote server, but in the wrong bit-count for the SSIS installation. For example, if the 64-bit Oracle client is installed but SSIS is being executed with the 32-bit `dtexec` executable, SSIS will not be able to find the Oracle client.
The solution in this case would be to install the 32-bit Oracle client side-by-side with the 64-bit client.
|
In my case this was because a file named **ociw32.dll** had been placed in **c:\windows\system32**. However, it is only allowed to exist in **c:\oracle\11.2.0.3\bin**.
Deleting the file from system32, which had been placed there by an installation of Crystal Reports, fixed the issue.
|
9,002,569
|
I am using Python 2.7 and I would like to do some web2py stuff. I am trying to run web2py from Eclipse and I am getting the following error when trying to run the web2py.py script:
WARNING:web2py:GUI not available because Tk library is not installed
I am running on Windows 7 (64-bit).
Any suggestions?
Thank you
|
2012/01/25
|
[
"https://Stackoverflow.com/questions/9002569",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/945446/"
] |
Simplest solution: The Oracle client is not installed on the remote server where the SSIS package is being executed.
Slightly less simple solution: The Oracle client is installed on the remote server, but in the wrong bit-count for the SSIS installation. For example, if the 64-bit Oracle client is installed but SSIS is being executed with the 32-bit `dtexec` executable, SSIS will not be able to find the Oracle client.
The solution in this case would be to install the 32-bit Oracle client side-by-side with the 64-bit client.
|
Technology used: Windows 7, UFT 32-bit, an ODBC Data Source pointing to the 32-bit `C:\Windows\System32\odbcad32.exe`, and an Oracle client with both the 32-bit and 64-bit versions installed.
What worked for me:
1. Start -> search for `Edit the system environment variables`
2. System Variables -> `Edit Path`
3. Place the path for the `Oracle client 32 bit` in front of the path for the `Oracle Client 64 bit`.
Ex:
```
C:\APP\ORACLE\product\11.2.0\client_32\bin;C:\APP\ORACLE\product\11.2.0\client_64\bin
```
|
9,002,569
|
I am using Python 2.7 and I would like to do some web2py stuff. I am trying to run web2py from Eclipse and I am getting the following error when trying to run the web2py.py script:
WARNING:web2py:GUI not available because Tk library is not installed
I am running on Windows 7 (64-bit).
Any suggestions?
Thank you
|
2012/01/25
|
[
"https://Stackoverflow.com/questions/9002569",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/945446/"
] |
Technology used: Windows 7, UFT 32-bit, an ODBC Data Source pointing to the 32-bit `C:\Windows\System32\odbcad32.exe`, and an Oracle client with both the 32-bit and 64-bit versions installed.
What worked for me:
1. Start -> search for `Edit the system environment variables`
2. System Variables -> `Edit Path`
3. Place the path for the `Oracle client 32 bit` in front of the path for the `Oracle Client 64 bit`.
Ex:
```
C:\APP\ORACLE\product\11.2.0\client_32\bin;C:\APP\ORACLE\product\11.2.0\client_64\bin
```
|
1. Go to My Computer Properties.
2. Then click on Advanced settings.
3. Go to Environment Variables.
4. Set the path to
```
F:\oracle\product\10.2.0\db_2\perl\5.8.3\lib\MSWin32-x86;F:\oracle\product\10.2.0\db_2\perl\5.8.3\lib;F:\oracle\product\10.2.0\db_2\perl\5.8.3\lib\MSWin32-x86;F:\oracle\product\10.2.0\db_2\perl\site\5.8.3;F:\oracle\product\10.2.0\db_2\perl\site\5.8.3\lib;F:\oracle\product\10.2.0\db_2\sysman\admin\scripts;
```
Change the drive and folder depending on your requirements...
|
9,002,569
|
I am using Python 2.7 and I would like to do some web2py stuff. I am trying to run web2py from Eclipse and I am getting the following error when trying to run the web2py.py script:
WARNING:web2py:GUI not available because Tk library is not installed
I am running on Windows 7 (64-bit).
Any suggestions?
Thank you
|
2012/01/25
|
[
"https://Stackoverflow.com/questions/9002569",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/945446/"
] |
Technology used: Windows 7, UFT 32-bit, an ODBC Data Source pointing to the 32-bit `C:\Windows\System32\odbcad32.exe`, and an Oracle client with both the 32-bit and 64-bit versions installed.
What worked for me:
1. Start -> search for `Edit the system environment variables`
2. System Variables -> `Edit Path`
3. Place the path for the `Oracle client 32 bit` in front of the path for the `Oracle Client 64 bit`.
Ex:
```
C:\APP\ORACLE\product\11.2.0\client_32\bin;C:\APP\ORACLE\product\11.2.0\client_64\bin
```
|
After you install the Oracle client components on the remote server, restart SQL Server Agent from the Computer Management console or directly from SQL Server Management Studio. This allows the service to correctly load the path to the Oracle components. Otherwise your package will work at design time but fail at run time.
|
9,002,569
|
I am using Python 2.7 and I would like to do some web2py stuff. I am trying to run web2py from Eclipse and I am getting the following error when trying to run the web2py.py script:
WARNING:web2py:GUI not available because Tk library is not installed
I am running on Windows 7 (64-bit).
Any suggestions?
Thank you
|
2012/01/25
|
[
"https://Stackoverflow.com/questions/9002569",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/945446/"
] |
Technology used: Windows 7, UFT 32-bit, an ODBC Data Source pointing to the 32-bit `C:\Windows\System32\odbcad32.exe`, and an Oracle client with both the 32-bit and 64-bit versions installed.
What worked for me:
1. Start -> search for `Edit the system environment variables`
2. System Variables -> `Edit Path`
3. Place the path for the `Oracle client 32 bit` in front of the path for the `Oracle Client 64 bit`.
Ex:
```
C:\APP\ORACLE\product\11.2.0\client_32\bin;C:\APP\ORACLE\product\11.2.0\client_64\bin
```
|
In my case this was because a file named **ociw32.dll** had been placed in **c:\windows\system32**. However, it is only allowed to exist in **c:\oracle\11.2.0.3\bin**.
Deleting the file from system32, which had been placed there by an installation of Crystal Reports, fixed the issue.
|
72,539,738
|
```
import RPi.GPIO as GPIO
import paho.mqtt.client as mqtt
import time

def privacyfunc():
    # Pin Definitions:
    led_pin_1 = 7
    led_pin_2 = 21
    but_pin = 18

    # blink LED 2 quickly 5 times when button pressed
    def blink(channel):
        x = GPIO.input(18)
        print("blinked")
        for i in range(1):
            GPIO.output(led_pin_2, GPIO.HIGH)
            time.sleep(0.5)
            GPIO.output(led_pin_2, GPIO.LOW)
            time.sleep(0.5)
        mqttBroker = "mqtt.fluux.io"  # MQTT broker to connect to
        client = mqtt.Client("privacybtn")  # create a client and give it a name
        client.connect_async(mqttBroker)  # connect the client to the broker
        while True:
            client.publish("privacy", x)  # publish the button state to the topic called privacy
            print("Just published " + str(x) + " to topic privacy")
            break

    def main():
        # Pin Setup:
        GPIO.setmode(GPIO.BOARD)  # BOARD pin-numbering scheme
        GPIO.setup([led_pin_1, led_pin_2], GPIO.OUT)  # LED pins set as output
        GPIO.setup(but_pin, GPIO.IN)  # button pin set as input
        # Initial state for LEDs:
        GPIO.output(led_pin_1, GPIO.LOW)
        GPIO.output(led_pin_2, GPIO.LOW)
        GPIO.add_event_detect(but_pin, GPIO.FALLING, callback=blink, bouncetime=10)
        print("Starting demo now! Press CTRL+C to exit")
        try:
            while True:
                x = GPIO.input(18)
                print(x)
                # blink LED 1 slowly
                GPIO.output(led_pin_1, GPIO.HIGH)
                time.sleep(2)
        finally:
            GPIO.cleanup()  # cleanup all GPIOs

    if __name__ == '__main__':
        main()
```
I need to take this whole code into a function that can be accessed from another Python file. Please help me with this coding part.
|
2022/06/08
|
[
"https://Stackoverflow.com/questions/72539738",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19295287/"
] |
For the function call you happened to post, the following regex seems to work:
```
(?:\w+ \w+|\w+\(\w*\))
```
This pattern matches, alternatively, a type followed by a variable name, or a function name with optional parameters. Here is a working [demo](https://regex101.com/r/IAep1Z/1).
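To try it from Python with the stdlib `re` module, here is a minimal sketch; the sample call below is made up, since the original test string only lives in the linked demo:
```
import re

pattern = r'(?:\w+ \w+|\w+\(\w*\))'
# Hypothetical function call to extract arguments from.
call = 'foo(int a, bar(x), char b)'
print(re.findall(pattern, call))
# ['int a', 'bar(x)', 'char b']
```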
|
You can use
```none
(?:\G(?!^)\s*,\s*|\()\K\w+(?:\s+\w+|\s*\([^()]*\))
```
See the [regex demo](https://regex101.com/r/OZZi09/1). *Details*:
* `(?:\G(?!^)\s*,\s*|\()` - either the end of the preceding match and then a comma enclosed with zero or more whitespaces, or a `(` char
* `\K` - omit the text matched so far
* `\w+` - one or more word chars
* `(?:\s+\w+|\s*\([^()]*\))` - a non-capturing group matching one of two alternatives
+ `\s+\w+` - one or more whitespaces and then one or more word chars
+ `|` - or
+ `\s*` - zero or more whitespaces
+ `\([^()]*\)` - a `(` char, then zero or more chars other than `(` and `)` and then a `)` char.
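Note this pattern relies on `\G` and `\K`, which Python's stdlib `re` does not support; the third-party `regex` package does. A minimal sketch (the sample call is made up):
```
import regex  # pip install regex; stdlib re lacks \G and \K

pattern = r'(?:\G(?!^)\s*,\s*|\()\K\w+(?:\s+\w+|\s*\([^()]*\))'
call = 'foo(int a, bar(x), char b)'
print(regex.findall(pattern, call))
# ['int a', 'bar(x)', 'char b']
```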
|
72,539,738
|
```
import RPi.GPIO as GPIO
import paho.mqtt.client as mqtt
import time
def privacyfunc():
# Pin Definitions:
led_pin_1 = 7
led_pin_2 = 21
but_pin = 18
# blink LED 2 quickly 5 times when button pressed
def blink(channel):
x=GPIO.input(18)
print("blinked")
for i in range(1):
GPIO.output(led_pin_2, GPIO.HIGH)
time.sleep(0.5)
GPIO.output(led_pin_2, GPIO.LOW)
time.sleep(0.5)
mqttBroker="mqtt.fluux.io"#connect mqtt broker
client=mqtt.Client("privacybtn") #create a client and give a name
client.connect_async(mqttBroker)#from the client -connect broker
while True:
client.publish("privacy", x)#publish this random number to the topic called temparature
print("Just published"+str(x)+"to Topc to privacy")#just print random no to topic temparature
break
def main():
# Pin Setup:
GPIO.setmode(GPIO.BOARD) # BOARD pin-numbering scheme
GPIO.setup([led_pin_1, led_pin_2], GPIO.OUT) # LED pins set as output
GPIO.setup(but_pin, GPIO.IN) # button pin set as input
# Initial state for LEDs:
GPIO.output(led_pin_1, GPIO.LOW)
GPIO.output(led_pin_2, GPIO.LOW)
GPIO.add_event_detect(but_pin, GPIO.FALLING, callback=blink, bouncetime=10)
print("Starting demo now! Press CTRL+C to exit")
try:
while True:
x=GPIO.input(18)
print(x)
# blink LED 1 slowly
GPIO.output(led_pin_1, GPIO.HIGH)
time.sleep(2)
finally:
GPIO.cleanup() # cleanup all GPIOs
if __name__ == '__main__':
main()
```
need to take this whole code into a function that can be accessed from another python file.plz, help me with this coding part.
|
2022/06/08
|
[
"https://Stackoverflow.com/questions/72539738",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19295287/"
] |
For the function call you happened to post, the following regex seems to work:
```
(?:\w+ \w+|\w+\(\w*\))
```
This pattern matches, alternatively, a type followed by variable name, or a function name, with optional parameters. Here is a working [demo](https://regex101.com/r/IAep1Z/1).
|
Try this one
```
((?<=\(|,\s)((\w+\s)+\w+)|(\w+\((\w+(\,\s){0,1})*\)))
```
Check demo <https://regex101.com/r/Bv1Y5L/1>
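One caveat: Python's stdlib `re` rejects this pattern because the lookbehind branches have different widths ("look-behind requires fixed-width pattern"); the third-party `regex` module accepts it. A minimal sketch (the sample call is made up):
```
import regex  # stdlib re rejects the variable-width lookbehind here

pattern = r'((?<=\(|,\s)((\w+\s)+\w+)|(\w+\((\w+(\,\s){0,1})*\)))'
call = 'foo(int a, bar(x), char b)'
print([m.group(0) for m in regex.finditer(pattern, call)])
# ['int a', 'bar(x)', 'char b']
```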
|
4,574,605
|
I wrote a simple Python script using SocketServer. It works well on Windows, but when I execute it on a remote Linux machine (Ubuntu), it doesn't work at all...
The script is like below:
```
# -*- coding: utf-8 -*-
import SocketServer

class MyHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        data_rcv = self.request.recv(1024).strip()
        print data_rcv

myServer = SocketServer.ThreadingTCPServer(('127.0.0.1', 7777), MyHandler)
myServer.serve_forever()
```
I upload it to the remote machine via SSH, then run the command `python server.py` there, and try to access `xxx.xxx.xxx.xxx:7777/test` with my browser, but nothing is printed on the remote machine's terminal... any ideas?
**UPDATE: Problem solved, it's a firewall issue, thank you all.**
|
2011/01/01
|
[
"https://Stackoverflow.com/questions/4574605",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/325241/"
] |
You are binding to 127.0.0.1:7777 but then trying to access it through the server's external IP (I'll use your placeholder - xxx.xxx.xxx.xxx). 127.0.0.1:7777 and xxx.xxx.xxx.xxx:7777 are *different endpoints* and can be bound by different processes, IIRC.
If that doesn't fix it, check your firewall; many hosts set up firewalls that block everything but the handful of ports you are likely to use.
|
Try with telnet or nc first: telnet to your public IP with your port and see what response you get. Also, why are you accessing /test from the browser? I don't see that part in the code. I hope you have taken care of that.
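If you would rather test from Python than telnet/nc, here is a minimal client sketch (Python 2 to match the question; the IP is your placeholder):
```
import socket

# Hypothetical smoke test: connect to the server and send one line.
s = socket.create_connection(('xxx.xxx.xxx.xxx', 7777), timeout=5)
s.sendall('hello server\n')
s.close()
```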
|
4,574,605
|
I wrote a simple Python script using SocketServer. It works well on Windows, but when I execute it on a remote Linux machine (Ubuntu), it doesn't work at all...
The script is like below:
```
# -*- coding: utf-8 -*-
import SocketServer

class MyHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        data_rcv = self.request.recv(1024).strip()
        print data_rcv

myServer = SocketServer.ThreadingTCPServer(('127.0.0.1', 7777), MyHandler)
myServer.serve_forever()
```
I upload it to the remote machine via SSH, then run the command `python server.py` there, and try to access `xxx.xxx.xxx.xxx:7777/test` with my browser, but nothing is printed on the remote machine's terminal... any ideas?
**UPDATE: Problem solved, it's a firewall issue, thank you all.**
|
2011/01/01
|
[
"https://Stackoverflow.com/questions/4574605",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/325241/"
] |
You are binding the server to `127.0.0.1`, the IP address for localhost. This means the server will only accept connections originating from the same machine; it won't recognize ones coming from another machine.
You need to either bind to your external IP address, or bind to a wildcard address (i.e. don't bind to any particular IP address, just a port). Try:
```
myServer = SocketServer.ThreadingTCPServer(('0.0.0.0', 7777), MyHandler)
```
|
Try with telnet or nc first: telnet to your public IP with your port and see what response you get. Also, why are you accessing /test from the browser? I don't see that part in the code. I hope you have taken care of that.
|
4,574,605
|
I wrote a simple Python script using SocketServer. It works well on Windows, but when I execute it on a remote Linux machine (Ubuntu), it doesn't work at all...
The script is like below:
```
# -*- coding: utf-8 -*-
import SocketServer

class MyHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        data_rcv = self.request.recv(1024).strip()
        print data_rcv

myServer = SocketServer.ThreadingTCPServer(('127.0.0.1', 7777), MyHandler)
myServer.serve_forever()
```
I upload it to the remote machine via SSH, then run the command `python server.py` there, and try to access `xxx.xxx.xxx.xxx:7777/test` with my browser, but nothing is printed on the remote machine's terminal... any ideas?
**UPDATE: Problem solved, it's a firewall issue, thank you all.**
|
2011/01/01
|
[
"https://Stackoverflow.com/questions/4574605",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/325241/"
] |
You are binding the server to `127.0.0.1`, the IP address for localhost. This means the server will only accept connections originating from the same machine; it won't recognize ones coming from another machine.
You need to either bind to your external IP address, or bind to a wildcard address (i.e. don't bind to any particular IP address, just a port). Try:
```
myServer = SocketServer.ThreadingTCPServer(('0.0.0.0', 7777), MyHandler)
```
|
You are binding to 127.0.0.1:7777 but then trying to access it through the server's external IP (I'll use your placeholder - xxx.xxx.xxx.xxx). 127.0.0.1:7777 and xxx.xxx.xxx.xxx:7777 are *different endpoints* and can be bound by different processes, IIRC.
If that doesn't fix it, check your firewall; many hosts set up firewalls that block everything but the handful of ports you are likely to use.
|
17,548,865
|
I followed these links: "<https://pypi.python.org/pypi/bottle-mysql/0.1.1>"
and "<http://bottlepy.org/docs/dev/>"
This is my py file:
```
import bottle
from bottle import route, run, template
import bottle_mysql

app = bottle.Bottle()
# dbhost is optional, default is localhost
plugin = bottle_mysql.Plugin(dbuser='root', dbpass='root', dbname='delhipoc')
app.install(plugin)

@route('/hai/<name>')
def show(name, dbname):
    dbname.execute('SELECT id from poc_people where name="%s"', (name))
    print "i am in show"
    return template('<b>Hello {{name}}</b>!', name=name)

run(host='localhost', port=8080)
```
This is my code and it is throwing an error like:
```
Traceback (most recent call last):
  File "C:\Python27\lib\site-packages\bottle.py", line 764, i
    return route.call(**args)
  File "C:\Python27\lib\site-packages\bottle.py", line 1575,
    rv = callback(*a, **ka)
TypeError: show() takes exactly 2 arguments (1 given)
```
Please help me.
|
2013/07/09
|
[
"https://Stackoverflow.com/questions/17548865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2049591/"
] |
Here is the complete behavior you wanted.
Use `$(document)` instead of `$('body')`:
```
$(document).not('#menu, #menu-trigger').click(function(event) {
    event.preventDefault();
    if (menu.is(":visible"))
    {
        menu.slideUp(400);
    }
});
```
[### Updated Fiddle](http://jsfiddle.net/yGMZt/6/)
And also use `event.stopPropagation()` in the `click()` of the menu trigger.
|
You need to add `event.stopPropagation();` [demo](http://jsfiddle.net/ruddog2003/yGMZt/3/)
> Description: Prevents the event from bubbling up the DOM tree, preventing any parent handlers from being notified of the event.
> [more](http://api.jquery.com/event.stopPropagation/)
|
17,548,865
|
I followed these links: "<https://pypi.python.org/pypi/bottle-mysql/0.1.1>"
and "<http://bottlepy.org/docs/dev/>"
This is my py file:
```
import bottle
from bottle import route, run, template
import bottle_mysql

app = bottle.Bottle()
# dbhost is optional, default is localhost
plugin = bottle_mysql.Plugin(dbuser='root', dbpass='root', dbname='delhipoc')
app.install(plugin)

@route('/hai/<name>')
def show(name, dbname):
    dbname.execute('SELECT id from poc_people where name="%s"', (name))
    print "i am in show"
    return template('<b>Hello {{name}}</b>!', name=name)

run(host='localhost', port=8080)
```
This is my code and it is throwing an error like:
```
Traceback (most recent call last):
  File "C:\Python27\lib\site-packages\bottle.py", line 764, i
    return route.call(**args)
  File "C:\Python27\lib\site-packages\bottle.py", line 1575,
    rv = callback(*a, **ka)
TypeError: show() takes exactly 2 arguments (1 given)
```
Please help me.
|
2013/07/09
|
[
"https://Stackoverflow.com/questions/17548865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2049591/"
] |
Found a very good answer by Frederik in [Document click not in elements jQuery](https://stackoverflow.com/questions/10851312/document-click-not-in-elements-jquery/10851375#10851375)
Modified it a bit to support elements not yet present in the document, for example elements fetched by ajax calls.
```
$(document).on('click', function(event) {
    if (!$(event.target).closest("#selector").length) {
        if ($('#selector').is(":visible"))
            $('#selector').slideUp();
    }
});
```
|
You need to add `event.stopPropagation();` [demo](http://jsfiddle.net/ruddog2003/yGMZt/3/)
> Description: Prevents the event from bubbling up the DOM tree, preventing any parent handlers from being notified of the event.
> [more](http://api.jquery.com/event.stopPropagation/)
|
17,548,865
|
I followed these links: "<https://pypi.python.org/pypi/bottle-mysql/0.1.1>"
and "<http://bottlepy.org/docs/dev/>"
This is my py file:
```
import bottle
from bottle import route, run, template
import bottle_mysql

app = bottle.Bottle()
# dbhost is optional, default is localhost
plugin = bottle_mysql.Plugin(dbuser='root', dbpass='root', dbname='delhipoc')
app.install(plugin)

@route('/hai/<name>')
def show(name, dbname):
    dbname.execute('SELECT id from poc_people where name="%s"', (name))
    print "i am in show"
    return template('<b>Hello {{name}}</b>!', name=name)

run(host='localhost', port=8080)
```
This is my code and it is throwing an error like:
```
Traceback (most recent call last):
  File "C:\Python27\lib\site-packages\bottle.py", line 764, i
    return route.call(**args)
  File "C:\Python27\lib\site-packages\bottle.py", line 1575,
    rv = callback(*a, **ka)
TypeError: show() takes exactly 2 arguments (1 given)
```
Please help me.
|
2013/07/09
|
[
"https://Stackoverflow.com/questions/17548865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2049591/"
] |
Here is the complete behavior you wanted.
Use `$(document)` instead of `$('body')`:
```
$(document).not('#menu, #menu-trigger').click(function(event) {
    event.preventDefault();
    if (menu.is(":visible"))
    {
        menu.slideUp(400);
    }
});
```
[### Updated Fiddle](http://jsfiddle.net/yGMZt/6/)
And also use `event.stopPropagation()` in the `click()` of the menu trigger.
|
Use `event.stopPropagation()` to stop the event from bubbling up to the document:
<http://api.jquery.com/event.stopPropagation/>
|
17,548,865
|
I followed these links: "<https://pypi.python.org/pypi/bottle-mysql/0.1.1>"
and "<http://bottlepy.org/docs/dev/>"
This is my py file:
```
import bottle
from bottle import route, run, template
import bottle_mysql

app = bottle.Bottle()
# dbhost is optional, default is localhost
plugin = bottle_mysql.Plugin(dbuser='root', dbpass='root', dbname='delhipoc')
app.install(plugin)

@route('/hai/<name>')
def show(name, dbname):
    dbname.execute('SELECT id from poc_people where name="%s"', (name))
    print "i am in show"
    return template('<b>Hello {{name}}</b>!', name=name)

run(host='localhost', port=8080)
```
This is my code and it is throwing an error like:
```
Traceback (most recent call last):
  File "C:\Python27\lib\site-packages\bottle.py", line 764, i
    return route.call(**args)
  File "C:\Python27\lib\site-packages\bottle.py", line 1575,
    rv = callback(*a, **ka)
TypeError: show() takes exactly 2 arguments (1 given)
```
Please help me.
|
2013/07/09
|
[
"https://Stackoverflow.com/questions/17548865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2049591/"
] |
Found a very good answer by Frederik in [Document click not in elements jQuery](https://stackoverflow.com/questions/10851312/document-click-not-in-elements-jquery/10851375#10851375)
Modified it a bit to support elements not yet present in the document, for example elements fetched by ajax calls.
```
$(document).on('click', function(event) {
    if (!$(event.target).closest("#selector").length) {
        if ($('#selector').is(":visible"))
            $('#selector').slideUp();
    }
});
```
|
Use `event.stopPropagation()` to stop the event from bubbling up to the document:
<http://api.jquery.com/event.stopPropagation/>
|
17,548,865
|
I followed these links: "<https://pypi.python.org/pypi/bottle-mysql/0.1.1>"
and "<http://bottlepy.org/docs/dev/>"
This is my py file:
```
import bottle
from bottle import route, run, template
import bottle_mysql

app = bottle.Bottle()
# dbhost is optional, default is localhost
plugin = bottle_mysql.Plugin(dbuser='root', dbpass='root', dbname='delhipoc')
app.install(plugin)

@route('/hai/<name>')
def show(name, dbname):
    dbname.execute('SELECT id from poc_people where name="%s"', (name))
    print "i am in show"
    return template('<b>Hello {{name}}</b>!', name=name)

run(host='localhost', port=8080)
```
This is my code and it is throwing an error like:
```
Traceback (most recent call last):
  File "C:\Python27\lib\site-packages\bottle.py", line 764, i
    return route.call(**args)
  File "C:\Python27\lib\site-packages\bottle.py", line 1575,
    rv = callback(*a, **ka)
TypeError: show() takes exactly 2 arguments (1 given)
```
Please help me.
|
2013/07/09
|
[
"https://Stackoverflow.com/questions/17548865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2049591/"
] |
Here is the complete behavior you wanted.
Use `$(document)` instead of `$('body')`:
```
$(document).not('#menu, #menu-trigger').click(function(event) {
    event.preventDefault();
    if (menu.is(":visible"))
    {
        menu.slideUp(400);
    }
});
```
[### Updated Fiddle](http://jsfiddle.net/yGMZt/6/)
And also use `event.stopPropagation()` in the `click()` of the menu trigger.
|
Found a very good answer by Frederik in [Document click not in elements jQuery](https://stackoverflow.com/questions/10851312/document-click-not-in-elements-jquery/10851375#10851375)
Modified it a bit to support elements not yet present in the document, for example elements fetched by ajax calls.
```
$(document).on('click', function(event) {
    if (!$(event.target).closest("#selector").length) {
        if ($('#selector').is(":visible"))
            $('#selector').slideUp();
    }
});
```
|
17,548,865
|
I followed these links: "<https://pypi.python.org/pypi/bottle-mysql/0.1.1>"
and "<http://bottlepy.org/docs/dev/>"
This is my py file:
```
import bottle
from bottle import route, run, template
import bottle_mysql

app = bottle.Bottle()
# dbhost is optional, default is localhost
plugin = bottle_mysql.Plugin(dbuser='root', dbpass='root', dbname='delhipoc')
app.install(plugin)

@route('/hai/<name>')
def show(name, dbname):
    dbname.execute('SELECT id from poc_people where name="%s"', (name))
    print "i am in show"
    return template('<b>Hello {{name}}</b>!', name=name)

run(host='localhost', port=8080)
```
This is my code and it is throwing an error like:
```
Traceback (most recent call last):
  File "C:\Python27\lib\site-packages\bottle.py", line 764, i
    return route.call(**args)
  File "C:\Python27\lib\site-packages\bottle.py", line 1575,
    rv = callback(*a, **ka)
TypeError: show() takes exactly 2 arguments (1 given)
```
Please help me.
|
2013/07/09
|
[
"https://Stackoverflow.com/questions/17548865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2049591/"
] |
Here is the complete behavior you wanted.
Use `$(document)` instead of `$('body')`:
```
$(document).not('#menu, #menu-trigger').click(function(event) {
    event.preventDefault();
    if (menu.is(":visible"))
    {
        menu.slideUp(400);
    }
});
```
[### Updated Fiddle](http://jsfiddle.net/yGMZt/6/)
And also use `event.stopPropagation()` in the `click()` of the menu trigger.
|
This is what you want:
```
$(document).ready(
    function() {
        var expanded = false;

        // When clicked on the menu-trigger
        $("#menu-trigger").click(
            function(event) {
                // Slide down menu if hidden
                if (!expanded) {
                    event.stopPropagation();
                    $("#menu").slideDown();
                    expanded = true;
                }
                // Slide up menu if shown
                else {
                    $("#menu").slideUp();
                    expanded = false;
                }
            }
        );

        // Hide if clicked anywhere on the page
        $(document).click(
            function () {
                if (expanded) {
                    $("#menu").slideUp();
                    expanded = false;
                }
            }
        );

        // Prevent slideUp if clicked on the Menu div itself
        // (You can omit this part if you don't need it)
        $("#menu").click(
            function(event) {
                event.stopPropagation();
            }
        );
    }
);
```
|
17,548,865
|
I followed this link : "<https://pypi.python.org/pypi/bottle-mysql/0.1.1>"
and "<http://bottlepy.org/docs/dev/>"
this is my py file:
```
import bottle
from bottle import route, run, template
import bottle_mysql
app = bottle.Bottle()
# # dbhost is optional, default is localhost
plugin = bottle_mysql.Plugin(dbuser='root', dbpass='root', dbname='delhipoc')
app.install(plugin)
@route('/hai/<name>')
def show(name,dbname):
dbname.execute('SELECT id from poc_people where name="%s"', (name))
print "i am in show"
return template('<b>Hello {{name}}</b>!',name=name)
run(host='localhost', port=8080)
```
this is my code and it is throwing error like:
```
Traceback (most recent call last):
File "C:\Python27\lib\site-packages\bottle.py", line 764, i
return route.call(**args)
File "C:\Python27\lib\site-packages\bottle.py", line 1575,
rv = callback(*a, **ka)
TypeError: show() takes exactly 2 arguments (1 given)
```
please help me
|
2013/07/09
|
[
"https://Stackoverflow.com/questions/17548865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2049591/"
] |
Found a very good answer by Frederik in [Document click not in elements jQuery](https://stackoverflow.com/questions/10851312/document-click-not-in-elements-jquery/10851375#10851375)
Modified it a bit to support elements not yet present in the document, for example elements fetched by ajax calls.
```
$(document).on('click', function(event) {
    if (!$(event.target).closest("#selector").length) {
        if ($('#selector').is(":visible"))
            $('#selector').slideUp();
    }
});
```
|
This is what you want:
```
$(document).ready(
    function() {
        var expanded = false;

        // When clicked on the menu-trigger
        $("#menu-trigger").click(
            function(event) {
                // Slide down menu if hidden
                if (!expanded) {
                    event.stopPropagation();
                    $("#menu").slideDown();
                    expanded = true;
                }
                // Slide up menu if shown
                else {
                    $("#menu").slideUp();
                    expanded = false;
                }
            }
        );

        // Hide if clicked anywhere on the page
        $(document).click(
            function () {
                if (expanded) {
                    $("#menu").slideUp();
                    expanded = false;
                }
            }
        );

        // Prevent slideUp if clicked on the Menu div itself
        // (You can omit this part if you don't need it)
        $("#menu").click(
            function(event) {
                event.stopPropagation();
            }
        );
    }
);
```
|
46,647,167
|
I am working on a simple chat app in Python 3.6.1 for personal use. I get this error with select.select:
```
Traceback (most recent call last):
  File "C:\Users\Nathan Glover\Google Drive\MAGENTA Chat\chat_server.py", line 27, in <module>
    ready_to_read,ready_to_write,in_error = select.select(SOCKET_LIST,[],[],0)
ValueError: file descriptor cannot be a negative integer (-1)
```
Here is the code:
```
ready_to_read,ready_to_write,in_error = select.select(SOCKET_LIST,[],[],0)
```
This is entirely because I don't understand select very well, and the documentation was no help. Could someone explain why this is happening?
|
2017/10/09
|
[
"https://Stackoverflow.com/questions/46647167",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8099482/"
] |
I know it's been a long time since this question was asked, but I wanted to let the OP and others know about the problem here.
The problem is that SOCKET\_LIST must contain a no-longer-existing socket connection which was disconnected earlier. If you pass such a connection to select, it gives this error:
```
ValueError: file descriptor cannot be a negative integer (-1)
```
A simple solution is to put the `select` call inside a try-except block and catch the error. When the error is caught, the dead connection can be removed from SOCKET\_LIST.
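A minimal sketch of that approach (assuming `SOCKET_LIST` holds your client sockets; after dropping the dead sockets, the next loop iteration calls select again):
```
import select

try:
    ready_to_read, ready_to_write, in_error = select.select(SOCKET_LIST, [], [], 0)
except ValueError:
    # A closed socket (fileno() == -1) slipped into the list; drop it.
    SOCKET_LIST = [s for s in SOCKET_LIST if s.fileno() >= 0]
```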
|
I happened to have this problem too, and I have solved it. I hope this answer will help everyone.
The fileno of a closed socket becomes -1, so this is most likely because there is a closed socket in one of the input lists passed to select.
That is to say, within the select loop, the logic that adds sockets to rlist, wlist, and xlist may be flawed.
The simple and crude fix is a try-except that removes any socket with a negative fileno, but I recommend reorganizing your logic instead. If you are not sure, use sets rather than lists for rlist, wlist, and xlist to avoid duplicate elements.
|
46,647,167
|
I am working on a simple chat app in Python 3.6.1 for personal use. I get this error with select.select:
```
Traceback (most recent call last):
  File "C:\Users\Nathan Glover\Google Drive\MAGENTA Chat\chat_server.py", line 27, in <module>
    ready_to_read,ready_to_write,in_error = select.select(SOCKET_LIST,[],[],0)
ValueError: file descriptor cannot be a negative integer (-1)
```
Here is the code:
```
ready_to_read,ready_to_write,in_error = select.select(SOCKET_LIST,[],[],0)
```
This is entirely because I don't understand select very well, and the documentation was no help. Could someone explain why this is happening?
|
2017/10/09
|
[
"https://Stackoverflow.com/questions/46647167",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8099482/"
] |
I know it's been a long time since this question was asked, but I wanted to let the OP and others know about the problem here.
The problem is that SOCKET\_LIST must contain a no-longer-existing socket connection which was disconnected earlier. If you pass such a connection to select, it gives this error:
```
ValueError: file descriptor cannot be a negative integer (-1)
```
A simple solution is to put the `select` call inside a try-except block and catch the error. When the error is caught, the dead connection can be removed from SOCKET\_LIST.
|
It might not work if you did not properly initialize your server socket.
Try this code:
```
HOST = '127.0.0.1' # '' = all available interfaces
PORT = 9009 # Arbitrary non-privileged port
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.setblocking(0) # non-blocking I/O
sock.bind((HOST, PORT))
sock.listen(5) # queue up as many as 5 connect requests
SOCKET_LIST = [sock]
```
|
46,647,167
|
I am working on a simple chat app in Python 3.6.1 for personal use. I get this error with select.select:
```
Traceback (most recent call last):
  File "C:\Users\Nathan Glover\Google Drive\MAGENTA Chat\chat_server.py", line 27, in <module>
    ready_to_read,ready_to_write,in_error = select.select(SOCKET_LIST,[],[],0)
ValueError: file descriptor cannot be a negative integer (-1)
```
Here is the code:
```
ready_to_read,ready_to_write,in_error = select.select(SOCKET_LIST,[],[],0)
```
This is entirely because I don't understand select very well, and the documentation was no help. Could someone explain why this is happening?
|
2017/10/09
|
[
"https://Stackoverflow.com/questions/46647167",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8099482/"
] |
I know it's been a long time since this question was asked, but I wanted to let the OP and others know about the problem here.
The problem is that SOCKET\_LIST must contain a no-longer-existing socket connection which was disconnected earlier. If you pass such a connection to select, it gives this error:
```
ValueError: file descriptor cannot be a negative integer (-1)
```
A simple solution is to put the `select` call inside a try-except block and catch the error. When the error is caught, the dead connection can be removed from SOCKET\_LIST.
|
I know I am late to the party, but this problem is caused by a closed file descriptor in the `SOCKET_LIST` being passed to `select`; another solution is to remove closed sockets from `SOCKET_LIST` before calling `select`.
**This adds a linear scan before each call**, but it's less hacky than try and except.
```py
# Drop any socket whose file descriptor is already closed (fileno() returns -1).
SOCKET_LIST = [sock for sock in SOCKET_LIST if sock.fileno() >= 0]
ready_to_read, ready_to_write, in_error = select.select(SOCKET_LIST, [], [], 0)
```
|
46,647,167
|
I am working on a simple chat app in Python 3.6.1 for personal use. I get this error with select.select:
```
Traceback (most recent call last):
  File "C:\Users\Nathan Glover\Google Drive\MAGENTA Chat\chat_server.py", line 27, in <module>
    ready_to_read,ready_to_write,in_error = select.select(SOCKET_LIST,[],[],0)
ValueError: file descriptor cannot be a negative integer (-1)
```
Here is the code:
```
ready_to_read,ready_to_write,in_error = select.select(SOCKET_LIST,[],[],0)
```
This is entirely because I don't understand select very well, and the documentation was no help. Could someone explain why this is happening?
|
2017/10/09
|
[
"https://Stackoverflow.com/questions/46647167",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8099482/"
] |
I happened to have this problem too, and I have solved it. I hope this answer will help everyone.
The fileno of a closed socket becomes -1, so this is most likely because there is a closed socket in one of the input lists passed to select.
That is to say, within the select loop, the logic that adds sockets to rlist, wlist, and xlist may be flawed.
The simple and crude fix is a try-except that removes any socket with a negative fileno, but I recommend reorganizing your logic instead. If you are not sure, use sets rather than lists for rlist, wlist, and xlist to avoid duplicate elements.
|
It might not work if you did not properly initialize your server socket.
Try this code:
```
HOST = '127.0.0.1' # '' = all available interfaces
PORT = 9009 # Arbitrary non-privileged port
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.setblocking(0) # non-blocking I/O
sock.bind((HOST, PORT))
sock.listen(5) # queue up as many as 5 connect requests
SOCKET_LIST = [sock]
```
|
46,647,167
|
I am working on a simple chat app in Python 3.6.1 for personal use. I get this error with select.select:
```
Traceback (most recent call last):
  File "C:\Users\Nathan Glover\Google Drive\MAGENTA Chat\chat_server.py", line 27, in <module>
    ready_to_read,ready_to_write,in_error = select.select(SOCKET_LIST,[],[],0)
ValueError: file descriptor cannot be a negative integer (-1)
```
Here is the code:
```
ready_to_read,ready_to_write,in_error = select.select(SOCKET_LIST,[],[],0)
```
This is entirely because I don't understand select very well, and the documentation was no help. Could someone explain why this is happening?
|
2017/10/09
|
[
"https://Stackoverflow.com/questions/46647167",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8099482/"
] |
I happened to have this problem too, and I have solved it. I hope this answer will help everyone.
The fileno of a closed socket becomes -1, so this is most likely because there is a closed socket in one of the input lists passed to select.
That is to say, within the select loop, the logic that adds sockets to rlist, wlist, and xlist may be flawed.
The simple and crude fix is a try-except that removes any socket with a negative fileno, but I recommend reorganizing your logic instead. If you are not sure, use sets rather than lists for rlist, wlist, and xlist to avoid duplicate elements.
|
I know I am late to the party, but this problem is caused by a closed file descriptor in the `SOCKET_LIST` being passed to `select`; another solution is to remove closed sockets from `SOCKET_LIST` before calling `select`.
**This adds a linear scan before each call**, but it's less hacky than try and except.
```py
# Drop any socket whose file descriptor is already closed (fileno() returns -1).
SOCKET_LIST = [sock for sock in SOCKET_LIST if sock.fileno() >= 0]
ready_to_read, ready_to_write, in_error = select.select(SOCKET_LIST, [], [], 0)
```
|
63,182,541
|
I am trying to send a message from a Raspberry Pi (Ubuntu 20) to a laptop (VirtualBox Ubuntu 20) via a UDP socket, using the simple code from <https://wiki.python.org/moin/UdpCommunication>
Sending (from the Raspberry Pi):
```
import socket

UDP_IP = "127.0.0.1"
UDP_PORT = 5005
MESSAGE = b"Hello, World!"

print("UDP target IP: %s" % UDP_IP)
print("UDP target port: %s" % UDP_PORT)
print("message: %s" % MESSAGE)

sock = socket.socket(socket.AF_INET, # Internet
                     socket.SOCK_DGRAM) # UDP
sock.sendto(MESSAGE, (UDP_IP, UDP_PORT))
```
Receiving (on the laptop):
```
import socket

UDP_IP = "127.0.0.1"
UDP_PORT = 5005

sock = socket.socket(socket.AF_INET, # Internet
                     socket.SOCK_DGRAM) # UDP
sock.bind((UDP_IP, UDP_PORT))

while True:
    data, addr = sock.recvfrom(1024) # buffer size is 1024 bytes
    print("received message: %s" % data)
```
I tried `UDP_IP = "0.0.0.0"` on both ends.
I tried the IP address of my laptop at the RPi end.
I tried both machines' IP addresses at both ends.
I tried `sock.bind(("", UDP_PORT))` to bind to all interfaces.
I tried adding the UDP port number in the firewall settings.
I checked multiple questions from this forum related to this.
Still, I cannot get any packets at the laptop's receiving side. I do not know what is wrong.
Please advise.
|
2020/07/30
|
[
"https://Stackoverflow.com/questions/63182541",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13118262/"
] |
The problem may be in the IP address: you are using 127.0.0.1 (localhost) to reach an outside device. Find out the IPs of your devices, for example with the `ifconfig` Linux command. Also, check that nothing is blocking your connection.
Note that for socket.bind(address) you can use '0.0.0.0', while for socket.sendto(bytes, address) you should use the IP of the device you want to send to.
I recommend downloading a program called Hercules, which lets you create a UDP peer to figure out what is not working properly. For example, you could use Python on one side and Hercules on the other to rule out mistakes on one side of the code. You could also try two Hercules peers and see if you can establish communication, in which case the problem is most likely in the code; on the other hand, if you cannot establish a connection between the two Hercules UDP peers, the problem is most likely with the devices or the network itself.
|
If you are using a static IP on the RPi, you need to add a static route:
```
sudo ip route add 236.0.0.0/8 dev eth0
```
|
63,182,541
|
I am trying to send a message from a Raspberry Pi (Ubuntu 20) to a laptop (VirtualBox Ubuntu 20) via a UDP socket, using the simple code from <https://wiki.python.org/moin/UdpCommunication>
Sending (from the Raspberry Pi):
```
import socket

UDP_IP = "127.0.0.1"
UDP_PORT = 5005
MESSAGE = b"Hello, World!"

print("UDP target IP: %s" % UDP_IP)
print("UDP target port: %s" % UDP_PORT)
print("message: %s" % MESSAGE)

sock = socket.socket(socket.AF_INET, # Internet
                     socket.SOCK_DGRAM) # UDP
sock.sendto(MESSAGE, (UDP_IP, UDP_PORT))
```
Receiving (on the laptop):
```
import socket

UDP_IP = "127.0.0.1"
UDP_PORT = 5005

sock = socket.socket(socket.AF_INET, # Internet
                     socket.SOCK_DGRAM) # UDP
sock.bind((UDP_IP, UDP_PORT))

while True:
    data, addr = sock.recvfrom(1024) # buffer size is 1024 bytes
    print("received message: %s" % data)
```
I tried `UDP_IP = "0.0.0.0"` on both ends.
I tried the IP address of my laptop at the RPi end.
I tried both machines' IP addresses at both ends.
I tried `sock.bind(("", UDP_PORT))` to bind to all interfaces.
I tried adding the UDP port number in the firewall settings.
I checked multiple questions from this forum related to this.
Still, I cannot get any packets at the laptop's receiving side. I do not know what is wrong.
Please advise.
|
2020/07/30
|
[
"https://Stackoverflow.com/questions/63182541",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13118262/"
] |
The problem may be in the IP address: you are using 127.0.0.1 (localhost) to reach an outside device. Find out the IPs of your devices, for example with the `ifconfig` Linux command. Also, check that nothing is blocking your connection.
Note that for socket.bind(address) you can use '0.0.0.0', while for socket.sendto(bytes, address) you should use the IP of the device you want to send to.
I recommend downloading a program called Hercules, which lets you create a UDP peer to figure out what is not working properly. For example, you could use Python on one side and Hercules on the other to rule out mistakes on one side of the code. You could also try two Hercules peers and see if you can establish communication, in which case the problem is most likely in the code; on the other hand, if you cannot establish a connection between the two Hercules UDP peers, the problem is most likely with the devices or the network itself.
|
**Make sure the port you are using has UDP enabled in Windows 10.**
To open any UDP ports, you can do the following:
1. Go to Control Panel > System and Security > Windows Firewall.
2. Advanced settings > right-click Inbound Rules and select New Rule.
3. Add the port(s) you want to open and click Next.
4. Select the UDP protocol and enter the port number(s) in the next window, then click Next.
5. Select Allow the connection and hit Next.
6. Select the network type and click Next.
7. Give the rule a name and click Finish.
|
7,213,965
|
I am using Windows 7, Apache 2.28 and Web Developer Server Suite for my server.
All files are stored under C:/www/vhosts
I downloaded Portable Python 2.7 from <http://www.portablepython.com/> and have installed it to
C:/www/portablepython
I'm trying to find mod\_wsgi to get it to work with 2.7 - but how can I do this?
The reason I'm doing all this is to get a basic site running that uses Python coding, with a view to using Django, in the same way that <http://www.heart.co.uk/westmids/> or <http://www.capitalfm.com/birmingham> do. Obviously my site won't be as advanced as theirs, but you get the gist of it; I'm using Python/Django as a sort of CMS for a news/articles website.
In any case, here's my code from C:/www/vhosts/localhost/testing.py:
```
#!/www/portablepython
print "Content-type: text/html"
print
print "<html><head>"
print ""
print "</head><body>"
print "Hello."
print "</body></html>"
```
This generates a 403 Forbidden error, i.e.:
**You don't have permission to access /testing.py on this server.**
I followed <http://code.google.com/p/modwsgi/wiki/InstallationOnWindows> but renamed modwsgi-version-number-datedownload.so to modwsgi.so; did that cause the error?
What do I need to do to prevent this from re-occurring?
I used the Portable version for testing purposes, thinking that I could just delete the folder and install again if necessary without adding to environment variables (I think portable versions avoid this; correct me if I'm wrong).
What changes, if any, do I need to make? Do I need to make them to the vhosts in httpd-vhosts.conf [my virtual hosts] or elsewhere?
Any help is appreciated; I'll post more as this situation develops.
|
2011/08/27
|
[
"https://Stackoverflow.com/questions/7213965",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/847047/"
] |
The script you have at C:/www/vhosts/localhost/testing.py is a CGI script and not a WSGI script. Follow the instructions for configuring mod\_wsgi and what a WSGI script file for hello world should look like at:
<http://code.google.com/p/modwsgi/wiki/QuickConfigurationGuide>
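For reference, a minimal sketch of a hello-world WSGI script (this mirrors the example in that guide; note it defines an `application` callable instead of printing to stdout):
```
def application(environ, start_response):
    status = '200 OK'
    output = 'Hello World!'
    response_headers = [('Content-type', 'text/plain'),
                        ('Content-Length', str(len(output)))]
    start_response(status, response_headers)
    return [output]
```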
|
Also, you should look into using a system install of Python from python.org, plus pip, distribute, and virtualenv, to keep contained Python environments for your different sites. This will give you maximum portability.
|
7,213,965
|
I am using Windows 7, Apache 2.28 and Web Developer Server Suite for my server.
All files are stored under C:/www/vhosts
I downloaded Portable Python 2.7 from <http://www.portablepython.com/> and have installed it to
C:/www/portablepython
I'm trying to find mod\_wsgi to get it to work with 2.7 - but how can I do this?
The reason I'm doing all this is to get a basic site running that uses Python coding, with a view to using Django, in the same way that <http://www.heart.co.uk/westmids/> or <http://www.capitalfm.com/birmingham> do. Obviously my site won't be as advanced as theirs, but you get the gist of it; I'm using Python/Django as a sort of CMS for a news/articles website.
In any case, here's my code from C:/www/vhosts/localhost/testing.py:
```
#!/www/portablepython
print "Content-type: text/html"
print
print "<html><head>"
print ""
print "</head><body>"
print "Hello."
print "</body></html>"
```
This generates a 403 Forbidden error, i.e.:
**You don't have permission to access /testing.py on this server.**
I followed <http://code.google.com/p/modwsgi/wiki/InstallationOnWindows> but renamed modwsgi-version-number-datedownload.so to modwsgi.so; did that cause the error?
What do I need to do to prevent this from re-occurring?
I used the Portable version for testing purposes, thinking that I could just delete the folder and install again if necessary without adding to environment variables (I think portable versions avoid this; correct me if I'm wrong).
What changes, if any, do I need to make? Do I need to make them to the vhosts in httpd-vhosts.conf [my virtual hosts] or elsewhere?
Any help is appreciated; I'll post more as this situation develops.
|
2011/08/27
|
[
"https://Stackoverflow.com/questions/7213965",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/847047/"
] |
The script you have at C:/www/vhosts/localhost/testing.py is a CGI script and not a WSGI script. Follow the instructions for configuring mod\_wsgi and what a WSGI script file for hello world should look like at:
<http://code.google.com/p/modwsgi/wiki/QuickConfigurationGuide>
|
Here is a sample - [PortableWinPy](https://github.com/yn-coder/PortableWinPy) - showing how to get Django working with Apache and `mod_wsgi` in a portable way.
|
28,097,471
|
**Update**: Here is a more specific example
Suppose I want to compile some statistical data from a sizable set of files:
I can make a generator `(line for line in fileinput.input(files))` and some processor:
```
from collections import defaultdict

scores = defaultdict(int)

def process(line):
    if 'Result' in line:
        res = line.split('\"')[1].split('-')[0]
        scores[res] += 1
```
The question is how to handle this when one gets to the `multiprocessing.Pool`.
Of course it's possible to define a `multiprocessing.sharedctypes` as well as a custom `struct` instead of a `defaultdict` but this seems rather painful. On the other hand I can't think of a pythonic way to instantiate something before the process or to return something after a generator has run out to the main thread.
|
2015/01/22
|
[
"https://Stackoverflow.com/questions/28097471",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3467349/"
] |
So you are basically creating a histogram. This can easily be parallelized, because histograms can be merged without complication. One might say that this problem is trivially parallelizable or ["embarrassingly parallel"](http://en.wikipedia.org/wiki/Embarrassingly_parallel). That is, you do not need to worry about communication among workers.
Just split your data set into multiple chunks, let your workers work on these chunks **independently**, collect the histogram of each worker, and then merge the histograms.
In practice, this problem is best handled by letting each worker process/read its own file. That is, a "task" could be a file name. You should not start pickling file contents and sending them around between processes through pipes. Let each worker process retrieve the bulk data *directly* from files. Otherwise your architecture spends too much time on inter-process communication instead of doing real work.
Do you need an example or can you figure this out yourself?
Edit: example implementation
----------------------------
I have a number of data files with file names in this format: `data0.txt`, `data1.txt`, ... .
Example contents:
```
wolf
wolf
cat
blume
eisenbahn
```
The goal is to create a histogram over the words contained in the data files. This is the code:
```
from multiprocessing import Pool
from collections import Counter
import glob

def build_histogram(filepath):
    """This function is run by a worker process.

    The `filepath` argument is communicated to the worker
    through a pipe. The return value of this function is
    communicated to the manager through a pipe.
    """
    hist = Counter()
    with open(filepath) as f:
        for line in f:
            hist[line.strip()] += 1
    return hist

def main():
    """This function runs in the manager (main) process."""
    # Collect paths to data files.
    datafile_paths = glob.glob("data*.txt")

    # Create a pool of worker processes and distribute work.
    # The input to worker processes (function argument) as well
    # as the output by worker processes is transmitted through
    # pipes, behind the scenes.
    pool = Pool(processes=3)
    histograms = pool.map(build_histogram, datafile_paths)

    # Properly shut down the pool of worker processes, and
    # wait until all of them have finished.
    pool.close()
    pool.join()

    # Merge sub-histograms. Do not create too many intermediate
    # objects: update the first sub-histogram with the others.
    # Relevant docs: collections.Counter.update
    merged_hist = histograms[0]
    for h in histograms[1:]:
        merged_hist.update(h)

    for word, count in merged_hist.items():
        print "%s: %s" % (word, count)

if __name__ == "__main__":
    main()
```
Test output:
```
python countwords.py
eisenbahn: 12
auto: 6
cat: 1
katze: 10
stadt: 1
wolf: 3
zug: 4
blume: 5
herbert: 14
destruction: 4
```
|
I had to modify the original pool.py (the trouble is that `worker` is defined as a plain function, with no class to inherit from) to get what I want, but it's not so bad, and probably better than writing a new pool entirely.
```
class worker(object):
    def __init__(self, inqueue, outqueue, initializer=None, initargs=(), maxtasks=None,
                 wrap_exception=False, finalizer=None, finargs=()):
        assert maxtasks is None or (type(maxtasks) == int and maxtasks > 0)
        put = outqueue.put
        get = inqueue.get
        self.completed = 0
        if hasattr(inqueue, '_writer'):
            inqueue._writer.close()
            outqueue._reader.close()
        if initializer is not None:
            initializer(self, *initargs)

        def run(self):
            while maxtasks is None or (maxtasks and self.completed < maxtasks):
                try:
                    task = get()
                except (EOFError, OSError):
                    util.debug('worker got EOFError or OSError -- exiting')
                    break
                if task is None:
                    util.debug('worker got sentinel -- exiting')
                    break
                job, i, func, args, kwds = task
                try:
                    result = (True, func(*args, **kwds))
                except Exception as e:
                    if wrap_exception:
                        e = ExceptionWithTraceback(e, e.__traceback__)
                    result = (False, e)
                try:
                    put((job, i, result))
                except Exception as e:
                    wrapped = MaybeEncodingError(e, result[1])
                    util.debug("Possible encoding error while sending result: %s" % (
                        wrapped))
                    put((job, i, (False, wrapped)))
                self.completed += 1
            if finalizer:
                finalizer(self, *finargs)
            util.debug('worker exiting after %d tasks' % self.completed)

        run(self)
```
|
13,335,679
|
I want to run a system command in the background using Python 2.7; this is what I have:
```
import commands
path = '/fioverify.fio'
cmd= "/usr/local/bin/fio" + path + " "+ " &"
print cmd
handle = commands.getstatusoutput(cmd)
```
This fails. If I remove the ampersand `&` it works. I need to run a command (`/usr/local/bin/fio/fioverifypath`) in the background.
Any pointers on how to accomplish this?
|
2012/11/11
|
[
"https://Stackoverflow.com/questions/13335679",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1618238/"
] |
Don't use `commands`; it's deprecated and not actually useful for your purposes. Use `subprocess` instead.
```
fio = subprocess.Popen(["/usr/local/bin/fio", path])
```
runs the `fio` command in parallel with your process and binds the variable `fio` to a handle to the process. You can then call `fio.wait()` to wait for the process to finish and retrieve its return status.
|
Use the [subprocess](http://docs.python.org/2/library/subprocess.html) module, [`subprocess.Popen`](http://docs.python.org/2/library/subprocess.html#subprocess.Popen) allows you to run a command as a subprocess (in the background) and check its status.
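A minimal sketch of that approach (reusing the fio path from the question):
```
import subprocess

# Launch fio in the background; Popen returns immediately.
proc = subprocess.Popen(["/usr/local/bin/fio", "/fioverify.fio"])
print proc.poll()    # None while the process is still running
proc.wait()          # block until it finishes
print proc.returncode
```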
|
13,335,679
|
I want to run a system command in the background using Python 2.7; this is what I have:
```
import commands
path = '/fioverify.fio'
cmd= "/usr/local/bin/fio" + path + " "+ " &"
print cmd
handle = commands.getstatusoutput(cmd)
```
This fails. If I remove the ampersand `&` it works. I need to run a command (`/usr/local/bin/fio/fioverifypath`) in the background.
Any pointers on how to accomplish this?
|
2012/11/11
|
[
"https://Stackoverflow.com/questions/13335679",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1618238/"
] |
Don't use `commands`; it's deprecated and not actually useful for your purposes. Use `subprocess` instead.
```
fio = subprocess.Popen(["/usr/local/bin/fio", path])
```
runs the `fio` command in parallel with your process and binds the variable `fio` to a handle to the process. You can then call `fio.wait()` to wait for the process to finish and retrieve its return status.
|
You can also try [sh.py](http://amoffat.github.com/sh/), it has support for background commands:
```
import sh
bin = sh.Command("/usr/local/bin/fio/fioverify.fio")
handle = bin(_bg=True)
# ... do other code ...
handle.wait()
```
|
13,335,679
|
I want to run a system command in the background using python 2.7, this is what I have:
```
import commands
path = '/fioverify.fio'
cmd= "/usr/local/bin/fio" + path + " "+ " &"
print cmd
handle = commands.getstatusoutput(cmd)
```
This fails. If I remove the ampersand `&` it works. I need to run a command (`/usr/local/bin/fio/fioverifypath`) in the background.
Any pointers on how to accomplish this?
|
2012/11/11
|
[
"https://Stackoverflow.com/questions/13335679",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1618238/"
] |
Don't use `commands`; it's deprecated and not actually useful for your purposes. Use `subprocess` instead.
```
import subprocess

fio = subprocess.Popen(["/usr/local/bin/fio", path])
```
runs the `fio` command in parallel with your process and binds the variable `fio` to a handle to the process. You can then call `fio.wait()` to wait for the process to finish and retrieve its return status.
|
Running processes in the background using Python 2.7
----------------------------------------------------
`commands.getstatusoutput(...)` isn't smart enough to handle background processes; use `subprocess.Popen` or `os.system` instead.
**Reproducing the error of how `commands.getstatusoutput` fails on background processes:**
```
import commands
import subprocess
#This sleeps for 2 seconds, then stops;
#commands.getstatusoutput handles "sleep 2" in the foreground okay
print(commands.getstatusoutput("sleep 2"))
#This tries to sleep for 2 seconds in the background; it fails with the error:
#sh: -c: line 0: syntax error near unexpected token `;'
print(commands.getstatusoutput("sleep 2 &"))
```
**A demo of how subprocess.Popen succeeds on background processes:**
```
#subprocess handles the sleep 2 in foreground okay:
proc = subprocess.Popen(["sleep", "2"], stdout=subprocess.PIPE)
output = proc.communicate()[0]
print(output)
#alternate way subprocess handles the sleep 2 in foreground perfectly fine:
proc = subprocess.Popen(['/bin/sh', '-c', 'sleep 2'], stdout=subprocess.PIPE)
output = proc.communicate()[0]
print("we hung for 2 seconds in foreground, now we are done")
print(output)
#And subprocess handles the sleep 2 in background as well:
proc = subprocess.Popen(['/bin/sh', '-c', 'sleep 2'], stdout=subprocess.PIPE)
print("Broke out of the sleep 2, sleep 2 is in background now")
print("twiddling our thumbs while we wait.\n")
proc.wait()
print("Okay now sleep is done, resume shenanigans")
output = proc.communicate()[0]
print(output)
```
**A demo of how os.system can handle background processes:**
```
import os
#sleep 2 in the foreground with os.system works as expected
os.system("sleep 2")
import os
#sleep 2 in the background with os.system works as expected
os.system("sleep 2 &")
print("breaks out immediately, sleep 2 continuing on in background")
```
|
53,055,671
|
How do you run the command line in order to read and write a csv file in python on a MacBook?
|
2018/10/30
|
[
"https://Stackoverflow.com/questions/53055671",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10387703/"
] |
The first APK you linked to isn't a valid APK. It's just a plain text file, with the following text repeated over and over:
```
HTTP/1.1 200 OK
Date: Sat, 27 Oct 2018 17:35:36 GMT
Strict-Transport-Security: max-age=31536000;includeSubDomains; preload
Last-Modified: Sat, 28 Jul 2018 11:40:03 GMT
ETag: "23b1fe5-5720db0636ac0"
Accept-Ranges: bytes
Content-Length: 37429221
Keep-Alive: timeout=20
Connection: Keep-Alive
```
Obviously, just HTTP response headers repeated don't form a valid APK. The reason that your tools are failing on that file isn't that it's encrypted/obfuscated/hardened, but that it's not really an APK at all, and wouldn't work if you tried to install it.
---
The second APK you linked to extracts for me fine when I `unzip` it.
My conclusion is that the "hardening" you mention doesn't exist (it seemed to only due to mixing up valid and invalid APKs), and that any APK that successfully installs can also be successfully extracted.
|
That's a Java class encryption feature (like DexGuard or Bangcle), and it's also protected with Native Library Encryption (NLE) plus JNI obfuscation, from something like DexProtector (I found that using dynamic analysis tools).
Many thanks to Semantic Scholar for [this](https://pdfs.semanticscholar.org/2b02/44a2944d11f16da3c38ece8aa617ded090d4.pdf) article and [this one](https://pdfs.semanticscholar.org/b725/5db6f0aa10aa8d6fa83ac8ab7129e692e927.pdf)
|
61,861,185
|
Let's say I have the following class.
```py
class A:
def __init__(self, x=0, y=0):
self.x = x
self.y = y
obj_A = A(y=2) # Dynamically pass the parameters
```
My question is, how can I dynamically create an object of a certain class by passing only one (or several, but not all) parameters? Also, I have prior knowledge of the attribute names, i.e. **'x'** and **'y'**, and they all have a default value.
Edit: To clear up some confusion. The class can have any number of attributes. Is it possible to create an instance by passing any subset of the parameters during runtime?
I know I can use `getters` and `setters` to achieve this, but I'm working on OpenCV python, and these functions are not implemented for some algorithms.
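For reference, passing a runtime-chosen subset of parameters can be sketched with keyword-argument unpacking (using the class `A` from above; whether this fits the OpenCV wrappers is exactly the open question):
```py
class A:
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

# build the subset of parameters dynamically, then unpack it
params = {"y": 2}           # any subset of {"x", "y"}
obj_A = A(**params)
print(obj_A.x, obj_A.y)     # -> 0 2
```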
|
2020/05/18
|
[
"https://Stackoverflow.com/questions/61861185",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1474204/"
] |
There is an `asSequence()` function for iterators, but it returns a sequence that can be iterated only once. The point is to use the same iterator for each iteration.
```
// I don't know how to name the function...
public fun <T> Iterable<T>.asIteratorSequence(): Sequence<T> {
val iterator = this.iterator()
return Sequence { iterator }
}
fun main() {
val seq = listOf(0, 1, 2, 3, 4, 5, 6, 7, 8, 9).asIteratorSequence()
println(seq.take(4).toList().toString()) // [0, 1, 2, 3]
println(seq.toList().toString()) // [4, 5, 6, 7, 8, 9]
println(seq.toList().toString()) // []
}
```
|
I figured out we can use iterators for this:
```
fun main() {
val seq = listOf(0, 1, 2, 3, 4, 5, 6, 7, 8, 9).asSequence().iterator()
println(seq.asSequence().take(4).toList().toString());
println(seq.asSequence().toList().toString())
}
```
This way the sequences advance the inner iterator. If the sequence was originally obtained from an `Iterable`, as is the case above, we can simplify even further:
```
fun main() {
val seq = listOf(0, 1, 2, 3, 4, 5, 6, 7, 8, 9).iterator()
println(seq.asSequence().take(4).toList().toString());
println(seq.asSequence().toList().toString())
}
```
|
61,861,185
|
Let's say I have the following class.
```py
class A:
def __init__(self, x=0, y=0):
self.x = x
self.y = y
obj_A = A(y=2) # Dynamically pass the parameters
```
My question is, how can I dynamically create an object of a certain class by passing only one (or several, but not all) parameters? Also, I have prior knowledge of the attribute names, i.e. **'x'** and **'y'**, and they all have a default value.
Edit: To clear up some confusion. The class can have any number of attributes. Is it possible to create an instance by passing any subset of the parameters during runtime?
I know I can use `getters` and `setters` to achieve this, but I'm working on OpenCV python, and these functions are not implemented for some algorithms.
|
2020/05/18
|
[
"https://Stackoverflow.com/questions/61861185",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1474204/"
] |
You can create two subsequences from the original:
```
val seq = listOf(0, 1, 2, 3, 4, 5, 6, 7, 8, 9).asSequence()
val firstCount = 4
val first = seq.take(firstCount)
val second = seq.drop(firstCount)
println(first.toList())
println(second.toList())
```
|
I figured out we can use iterators for this:
```
fun main() {
val seq = listOf(0, 1, 2, 3, 4, 5, 6, 7, 8, 9).asSequence().iterator()
println(seq.asSequence().take(4).toList().toString());
println(seq.asSequence().toList().toString())
}
```
This way the sequences advance the inner iterator. If the sequence was originally obtained from an `Iterable`, as is the case above, we can simplify even further:
```
fun main() {
val seq = listOf(0, 1, 2, 3, 4, 5, 6, 7, 8, 9).iterator()
println(seq.asSequence().take(4).toList().toString());
println(seq.asSequence().toList().toString())
}
```
|
40,946,211
|
I've installed my Django app on an Ubuntu server with Apache2.4.7 and configured it to use py3.5.2 from a virtual environment.
However, from what I can see in the error, the 3.5 site-packages directory is on the path but the interpreter actually running is 3.4.
Please explain why this is happening:
```
/var/www/venv/lib/python3.5/site-packages
/usr/lib/python3.4
```
See the full error below:
```
SyntaxError at /
invalid syntax (forms.py, line 2)
Request Method: GET
Request URL: http://intranet.example.com/
Django Version: 1.10.1
Exception Type: SyntaxError
Exception Value:
invalid syntax (forms.py, line 2)
Exception Location: /var/www/intranet/formater/views.py in <module>, line 7
Python Executable: /usr/bin/python3
Python Version: 3.4.3
Python Path:
['/var/www/intranet',
'/var/www/venv/lib/python3.5/site-packages',
'/usr/lib/python3.4',
'/usr/lib/python3.4/plat-x86_64-linux-gnu',
'/usr/lib/python3.4/lib-dynload',
'/usr/local/lib/python3.4/dist-packages',
'/usr/lib/python3/dist-packages',
'/var/www/intranet',
'/var/www/intranet/venv/lib/python3.5/site-packages']
```
Here's my apache2.conf file:
```
WSGIScriptAlias / /var/www/intranet/intranet/wsgi.py
#WSGIPythonPath /var/www/intranet/:/var/www/intranet/venv/lib/python3.5/site-packages
WSGIDaemonProcess intranet.example.com python-path=/var/www/intranet:/var/www/venv/lib/python3.5/site-packages
WSGIProcessGroup intranet.example.com
<Directory /var/www/intranet/intranet>
<Files wsgi.py>
Order deny,allow
Allow from all
</Files>
</Directory>
```
What am I doing wrong here?
|
2016/12/03
|
[
"https://Stackoverflow.com/questions/40946211",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2591242/"
] |
The mod\_wsgi module for Apache is compiled for a specific Python version. You cannot make it run under a different Python version simply by pointing it at a virtual environment for that version. This is clearly mentioned in the mod\_wsgi documentation on the use of Python virtual environments at:
* <http://modwsgi.readthedocs.io/en/develop/user-guides/virtual-environments.html>
The only way you can have mod\_wsgi run as Python 3.5, if it was originally compiled for Python 3.4, is to uninstall that version of mod\_wsgi and build/install a version of mod\_wsgi compiled for Python 3.5.
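As a quick diagnostic, a minimal WSGI app (illustrative, not from the original answer) can confirm which interpreter mod\_wsgi is actually embedding:
```
import sys

def application(environ, start_response):
    # report the interpreter version mod_wsgi runs this app under
    body = ("Running Python %s\n" % sys.version).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```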
|
The source of the problem was what Graham Dumpleton describes in his answer.
I just want to give some more information in case it helps someone facing the same problem as me.
There's no official repo for Python 3.5.2 in Ubuntu server 14.04.
Rather than using some unsupported repo like [this one](https://launchpad.net/~fkrull/+archive/ubuntu/deadsnakes), I compiled Python 3.5.2 from source using this very simple tutorial [here](http://tecadmin.net/install-python-3-5-on-ubuntu/).
After jumping through many hoops, I couldn't install mod\_wsgi for Python 3.5.2 because of a library path that was different.
Having already spent too much time on this, I uninstalled everything: Python, Apache, libraries and installed everything from scratch, using Python 3.4 this time.
It's officially supported on Ubuntu 14.04, and for my project I noticed no compatibility issues.
So here's my shortlist for what to install:
**from apt:**
python3,
python3-pip,
apache2,
apache2-dev,
libapache2-mod-wsgi-py3 and
**from pip:**
Django,
mod-wsgi,
virtualenv (if you plan to use a venv).
Then just configure "/etc/apache2/apache2.conf", run "apache2ctl configtest" and restart the service.
For extra help see this guide [here](https://docs.djangoproject.com/en/1.10/howto/deployment/wsgi/modwsgi/).
|
42,621,190
|
I am following a tutorial on using python v3.6 to do decision tree with machine learning using scikit-learn.
Here is the code;
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import mglearn
import graphviz
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target, stratify=cancer.target, random_state=42)
tree = DecisionTreeClassifier(random_state=0)
tree.fit(X_train, y_train)
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(X_train, y_train)
from sklearn.tree import export_graphviz
export_graphviz(tree, out_file="tree.dot", class_names=["malignant", "benign"],feature_names=cancer.feature_names, impurity=False, filled=True)
import graphviz
with open("tree.dot") as f:
dot_graph = f.read()
graphviz.Source(dot_graph)
```
How do I use Graphviz to see what is inside dot\_graph? Presumably, it should look something like this;
[](https://i.stack.imgur.com/WNZ8q.jpg)
|
2017/03/06
|
[
"https://Stackoverflow.com/questions/42621190",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3848207/"
] |
`graphviz.Source(dot_graph)` returns a [`graphviz.files.Source`](http://graphviz.readthedocs.io/en/latest/api.html#source) object.
```
g = graphviz.Source(dot_graph)
```
Use [`g.render()`](http://graphviz.readthedocs.io/en/latest/api.html#graphviz.Source.render) to create an image file. When I ran it on your code without an argument I got a `Source.gv.pdf`, but you can specify a different file name. There is also a shortcut [`g.view()`](http://graphviz.readthedocs.io/en/latest/api.html#graphviz.Source.view), which saves the file and opens it in an appropriate viewer application.
If you paste the code as it is in a rich terminal (such as Spyder/IPython with inline graphics or a Jupyter notebook) it will automagically display the image instead of the object's Python representation.
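For example, a short sketch (the file name `tree` here is illustrative):
```
g = graphviz.Source(dot_graph)
g.render("tree")      # writes tree.pdf (plus the DOT source) next to the script
g.format = "png"
g.render("tree")      # same, but renders tree.png instead
```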
|
You can use display from IPython.display. Here is an example:
```
import graphviz  # needed for graphviz.Source below
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
model = DecisionTreeClassifier()
model.fit(X, y)  # X, y: your training data
from IPython.display import display
display(graphviz.Source(tree.export_graphviz(model)))
```
|
42,621,190
|
I am following a tutorial on using python v3.6 to do decision tree with machine learning using scikit-learn.
Here is the code;
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import mglearn
import graphviz
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target, stratify=cancer.target, random_state=42)
tree = DecisionTreeClassifier(random_state=0)
tree.fit(X_train, y_train)
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(X_train, y_train)
from sklearn.tree import export_graphviz
export_graphviz(tree, out_file="tree.dot", class_names=["malignant", "benign"],feature_names=cancer.feature_names, impurity=False, filled=True)
import graphviz
with open("tree.dot") as f:
dot_graph = f.read()
graphviz.Source(dot_graph)
```
How do I use Graphviz to see what is inside dot\_graph? Presumably, it should look something like this;
[](https://i.stack.imgur.com/WNZ8q.jpg)
|
2017/03/06
|
[
"https://Stackoverflow.com/questions/42621190",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3848207/"
] |
`graphviz.Source(dot_graph)` returns a [`graphviz.files.Source`](http://graphviz.readthedocs.io/en/latest/api.html#source) object.
```
g = graphviz.Source(dot_graph)
```
Use [`g.render()`](http://graphviz.readthedocs.io/en/latest/api.html#graphviz.Source.render) to create an image file. When I ran it on your code without an argument I got a `Source.gv.pdf`, but you can specify a different file name. There is also a shortcut [`g.view()`](http://graphviz.readthedocs.io/en/latest/api.html#graphviz.Source.view), which saves the file and opens it in an appropriate viewer application.
If you paste the code as it is in a rich terminal (such as Spyder/IPython with inline graphics or a Jupyter notebook) it will automagically display the image instead of the object's Python representation.
|
I'm working in Windows 10.
I solved this by adding to the 'path' environment variable, but I initially added the wrong path:
Drive:\Users\User.Name\AppData\Local\Continuum\anaconda3\envs\MyVirtualEnv\lib\site-packages\graphviz
when I should have used
Drive:\Users\User.Name\AppData\Local\Continuum\anaconda3\envs\MyVirtualEnv\Library\bin\graphviz
In the end I used both, then restarted Python/Anaconda.
I also added the pydotplus path, which is in ....MyVirtualEnv\lib\site-packages\pydotplus.
|
42,621,190
|
I am following a tutorial on using python v3.6 to do decision tree with machine learning using scikit-learn.
Here is the code;
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import mglearn
import graphviz
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target, stratify=cancer.target, random_state=42)
tree = DecisionTreeClassifier(random_state=0)
tree.fit(X_train, y_train)
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(X_train, y_train)
from sklearn.tree import export_graphviz
export_graphviz(tree, out_file="tree.dot", class_names=["malignant", "benign"],feature_names=cancer.feature_names, impurity=False, filled=True)
import graphviz
with open("tree.dot") as f:
dot_graph = f.read()
graphviz.Source(dot_graph)
```
How do I use Graphviz to see what is inside dot\_graph? Presumably, it should look something like this;
[](https://i.stack.imgur.com/WNZ8q.jpg)
|
2017/03/06
|
[
"https://Stackoverflow.com/questions/42621190",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3848207/"
] |
`graphviz.Source(dot_graph)` returns a [`graphviz.files.Source`](http://graphviz.readthedocs.io/en/latest/api.html#source) object.
```
g = graphviz.Source(dot_graph)
```
Use [`g.render()`](http://graphviz.readthedocs.io/en/latest/api.html#graphviz.Source.render) to create an image file. When I ran it on your code without an argument I got a `Source.gv.pdf`, but you can specify a different file name. There is also a shortcut [`g.view()`](http://graphviz.readthedocs.io/en/latest/api.html#graphviz.Source.view), which saves the file and opens it in an appropriate viewer application.
If you paste the code as it is in a rich terminal (such as Spyder/IPython with inline graphics or a Jupyter notebook) it will automagically display the image instead of the object's Python representation.
|
Jupyter will show the graph as is, but if you want to zoom in more you can save the file and inspect it further:
```
import pydotplus
from IPython.display import Image

# dot_data is the DOT string from export_graphviz(..., out_file=None)
# Draw graph
graph = pydotplus.graph_from_dot_data(dot_data)
# Show graph
Image(graph.create_png())
```
|
42,621,190
|
I am following a tutorial on using python v3.6 to do decision tree with machine learning using scikit-learn.
Here is the code;
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import mglearn
import graphviz
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target, stratify=cancer.target, random_state=42)
tree = DecisionTreeClassifier(random_state=0)
tree.fit(X_train, y_train)
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(X_train, y_train)
from sklearn.tree import export_graphviz
export_graphviz(tree, out_file="tree.dot", class_names=["malignant", "benign"],feature_names=cancer.feature_names, impurity=False, filled=True)
import graphviz
with open("tree.dot") as f:
dot_graph = f.read()
graphviz.Source(dot_graph)
```
How do I use Graphviz to see what is inside dot\_graph? Presumably, it should look something like this;
[](https://i.stack.imgur.com/WNZ8q.jpg)
|
2017/03/06
|
[
"https://Stackoverflow.com/questions/42621190",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3848207/"
] |
You can use display from IPython.display. Here is an example:
```
import graphviz  # needed for graphviz.Source below
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
model = DecisionTreeClassifier()
model.fit(X, y)  # X, y: your training data
from IPython.display import display
display(graphviz.Source(tree.export_graphviz(model)))
```
|
I'm working in Windows 10.
I solved this by adding to the 'path' environment variable, but I initially added the wrong path:
Drive:\Users\User.Name\AppData\Local\Continuum\anaconda3\envs\MyVirtualEnv\lib\site-packages\graphviz
when I should have used
Drive:\Users\User.Name\AppData\Local\Continuum\anaconda3\envs\MyVirtualEnv\Library\bin\graphviz
In the end I used both, then restarted Python/Anaconda.
I also added the pydotplus path, which is in ....MyVirtualEnv\lib\site-packages\pydotplus.
|
42,621,190
|
I am following a tutorial on using python v3.6 to do decision tree with machine learning using scikit-learn.
Here is the code;
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import mglearn
import graphviz
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target, stratify=cancer.target, random_state=42)
tree = DecisionTreeClassifier(random_state=0)
tree.fit(X_train, y_train)
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(X_train, y_train)
from sklearn.tree import export_graphviz
export_graphviz(tree, out_file="tree.dot", class_names=["malignant", "benign"],feature_names=cancer.feature_names, impurity=False, filled=True)
import graphviz
with open("tree.dot") as f:
dot_graph = f.read()
graphviz.Source(dot_graph)
```
How do I use Graphviz to see what is inside dot\_graph? Presumably, it should look something like this;
[](https://i.stack.imgur.com/WNZ8q.jpg)
|
2017/03/06
|
[
"https://Stackoverflow.com/questions/42621190",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3848207/"
] |
In a Jupyter notebook, the following plots the decision tree:
```py
import graphviz  # needed for graphviz.Source below
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
model = DecisionTreeClassifier()
model.fit(X, y)  # X, y: your training data
dot_data = tree.export_graphviz(model,
feature_names=feature_names,
class_names=class_names,
filled=True, rounded=True,
special_characters=True,
out_file=None,
)
graph = graphviz.Source(dot_data)
graph
```
if you want to save it as png:
```py
graph.format = "png"
graph.render("file_name")
```
|
You can use display from IPython.display. Here is an example:
```
import graphviz  # needed for graphviz.Source below
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
model = DecisionTreeClassifier()
model.fit(X, y)  # X, y: your training data
from IPython.display import display
display(graphviz.Source(tree.export_graphviz(model)))
```
|
42,621,190
|
I am following a tutorial on using python v3.6 to do decision tree with machine learning using scikit-learn.
Here is the code;
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import mglearn
import graphviz
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target, stratify=cancer.target, random_state=42)
tree = DecisionTreeClassifier(random_state=0)
tree.fit(X_train, y_train)
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(X_train, y_train)
from sklearn.tree import export_graphviz
export_graphviz(tree, out_file="tree.dot", class_names=["malignant", "benign"],feature_names=cancer.feature_names, impurity=False, filled=True)
import graphviz
with open("tree.dot") as f:
dot_graph = f.read()
graphviz.Source(dot_graph)
```
How do I use Graphviz to see what is inside dot\_graph? Presumably, it should look something like this;
[](https://i.stack.imgur.com/WNZ8q.jpg)
|
2017/03/06
|
[
"https://Stackoverflow.com/questions/42621190",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3848207/"
] |
You can use display from IPython.display. Here is an example:
```
import graphviz  # needed for graphviz.Source below
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
model = DecisionTreeClassifier()
model.fit(X, y)  # X, y: your training data
from IPython.display import display
display(graphviz.Source(tree.export_graphviz(model)))
```
|
Jupyter will show the graph as is, but if you want to zoom in more you can save the file and inspect it further:
```
import pydotplus
from IPython.display import Image

# dot_data is the DOT string from export_graphviz(..., out_file=None)
# Draw graph
graph = pydotplus.graph_from_dot_data(dot_data)
# Show graph
Image(graph.create_png())
```
|
42,621,190
|
I am following a tutorial on using python v3.6 to do decision tree with machine learning using scikit-learn.
Here is the code;
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import mglearn
import graphviz
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target, stratify=cancer.target, random_state=42)
tree = DecisionTreeClassifier(random_state=0)
tree.fit(X_train, y_train)
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(X_train, y_train)
from sklearn.tree import export_graphviz
export_graphviz(tree, out_file="tree.dot", class_names=["malignant", "benign"],feature_names=cancer.feature_names, impurity=False, filled=True)
import graphviz
with open("tree.dot") as f:
dot_graph = f.read()
graphviz.Source(dot_graph)
```
How do I use Graphviz to see what is inside dot\_graph? Presumably, it should look something like this;
[](https://i.stack.imgur.com/WNZ8q.jpg)
|
2017/03/06
|
[
"https://Stackoverflow.com/questions/42621190",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3848207/"
] |
In a Jupyter notebook, the following plots the decision tree:
```py
import graphviz  # needed for graphviz.Source below
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
model = DecisionTreeClassifier()
model.fit(X, y)  # X, y: your training data
dot_data = tree.export_graphviz(model,
feature_names=feature_names,
class_names=class_names,
filled=True, rounded=True,
special_characters=True,
out_file=None,
)
graph = graphviz.Source(dot_data)
graph
```
if you want to save it as png:
```py
graph.format = "png"
graph.render("file_name")
```
|
I'm working in Windows 10.
I solved this by adding to the 'path' environment variable, but I initially added the wrong path:
Drive:\Users\User.Name\AppData\Local\Continuum\anaconda3\envs\MyVirtualEnv\lib\site-packages\graphviz
when I should have used
Drive:\Users\User.Name\AppData\Local\Continuum\anaconda3\envs\MyVirtualEnv\Library\bin\graphviz
In the end I used both, then restarted Python/Anaconda.
I also added the pydotplus path, which is in ....MyVirtualEnv\lib\site-packages\pydotplus.
|
42,621,190
|
I am following a tutorial on using python v3.6 to do decision tree with machine learning using scikit-learn.
Here is the code;
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import mglearn
import graphviz
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target, stratify=cancer.target, random_state=42)
tree = DecisionTreeClassifier(random_state=0)
tree.fit(X_train, y_train)
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(X_train, y_train)
from sklearn.tree import export_graphviz
export_graphviz(tree, out_file="tree.dot", class_names=["malignant", "benign"],feature_names=cancer.feature_names, impurity=False, filled=True)
import graphviz
with open("tree.dot") as f:
dot_graph = f.read()
graphviz.Source(dot_graph)
```
How do I use Graphviz to see what is inside dot\_graph? Presumably, it should look something like this;
[](https://i.stack.imgur.com/WNZ8q.jpg)
|
2017/03/06
|
[
"https://Stackoverflow.com/questions/42621190",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3848207/"
] |
In a Jupyter notebook, the following plots the decision tree:
```py
import graphviz  # needed for graphviz.Source below
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
model = DecisionTreeClassifier()
model.fit(X, y)  # X, y: your training data
dot_data = tree.export_graphviz(model,
feature_names=feature_names,
class_names=class_names,
filled=True, rounded=True,
special_characters=True,
out_file=None,
)
graph = graphviz.Source(dot_data)
graph
```
if you want to save it as png:
```py
graph.format = "png"
graph.render("file_name")
```
|
Jupyter will show the graph as is, but if you want to zoom in more you can save the file and inspect it further:
```
import pydotplus
from IPython.display import Image

# dot_data is the DOT string from export_graphviz(..., out_file=None)
# Draw graph
graph = pydotplus.graph_from_dot_data(dot_data)
# Show graph
Image(graph.create_png())
```
|
30,560,266
|
I have a router configuration file. The router has thousands of "Interfaces", and I'm trying to make sure it DOES have *two* certain lines in EACH Interface configuration section. The typical router configuration file would look like this:
```
<whitespaces> Interface_1
<whitespaces> description Interface_1
<whitespaces> etc etc'
<whitespaces> etc etc etc
<whitespaces> <config line that i am searching to confirm is present>
<whitespaces> etc etc etc etc
!
!
! random number of router lines
<whitespaces> <second config line that i am searching to confirm is
present>
!
<whitespaces> Interface_2
<whitespaces> description Interface_2
<whitespaces> etc etc
<whitespaces> etc etc etc
<whitespaces> <config line that I am searching to confirm is present>
<whitespaces> etc etc etc etc
! random number of router lines
<whitespaces> <second config line that i am searching to confirm is
present>
etc
```
So effectively I want this logic:
- go through the router config. When you see Interface\_X, be on the lookout for two lines AFTER that, and make sure they are present in the config. Then move on to the next interface, and do the same thing, over and over again.
Here is the tricky part:
- I want the two lines to be in the interface config, and Python needs to know that the search 'area' is any line AFTER Interface\_X and BEFORE the next Interface\_Y config.
- The two lines I'm searching for are RANDOMLY spaced in the config; they aren't, say, the 10th line and the 12th line after Interface\_X. They can be present anywhere between Interface\_X and Interface\_Y (the next interface definition).
Dying. Been on this for four days and can't seem to match it correctly.
Effectively I just want the Python script to spit out output that says, for example:
"Interface\_22 and Interface\_89 are missing the two lines you are looking for, Tom". I don't care about the interfaces that are right, honestly; I really only care about when the interface configuration is WRONG.
```
file_name = 'C:\\PYTHON\\router.txt'
f = open(file_name, 'r')
output_string_obj = f.read()
a = output_string_obj.splitlines()
for line in a:
if re.match("(.*)(Interface)(.*)", line):
# logic to search for the presence of two lines after matching
# Interface, but prior to next instance of a Interface
# definition
if re.match("(.*)(check-1-line-present)(.*)", line + ???):
if re.match("(.*)(check-2-line-present)(.*)", line ?):
print ("This Interface has the appropriate config")
else:
print ("This Interface is missing the key two lines")
```
Dying here, guys. This is the first question I've ever posted to this board. I'll take any thoughts, logic flow, statements, or ideas anyone has. I get into this type of search situation a lot, where I don't know where in the router config something is missing, but I know it has to be somewhere between point A and point B, and I have nothing else to 'grip' onto...
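For reference, a minimal sketch of the section-scanning logic described above (the `REQUIRED` patterns and the `description` guard are illustrative assumptions, not the OP's actual lines):
```
# Illustrative sketch: group lines into per-interface sections, then
# check that each section contains both required lines.
import re

REQUIRED = ["check-1-line-present", "check-2-line-present"]  # placeholder patterns

def check_config(text):
    current, sections = None, {}
    for line in text.splitlines():
        m = re.search(r"\b(Interface_\S+)", line)
        if m and "description" not in line:
            current = m.group(1)          # start of a new interface section
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    for name, body in sections.items():
        missing = [p for p in REQUIRED if not any(p in l for l in body)]
        if missing:
            print("%s is missing: %s" % (name, ", ".join(missing)))

with open(r"C:\PYTHON\router.txt") as f:
    check_config(f.read())
```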
|
2015/05/31
|
[
"https://Stackoverflow.com/questions/30560266",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4958929/"
] |
I solved this issue using [collection.distinct](https://mongodb.github.io/node-mongodb-native/api-generated/collection.html#distinct):
```
collection.distinct("DataFields", function(err, docs) {
    console.log(docs);
    assert.equal(null, err);
    db.close();
});
```
|
You could also do this by way of **[aggregation framework](http://docs.mongodb.org/manual/core/aggregation-introduction/)**. The suitable aggregation pipeline would consist of the [**`$group`**](http://docs.mongodb.org/manual/reference/operator/aggregation/group/#pipe._S_group) operator stage which groups the document by the `DataFields` key and create an aggregated field `count` which stores the documents produced by this grouping. The accumulator operator for this is [**`$sum`**](http://docs.mongodb.org/manual/reference/operator/aggregation/sum/#grp._S_sum).
The next pipeline stage would be the [**`$match`**](http://docs.mongodb.org/manual/reference/operator/aggregation/match/#pipe._S_match) operator which then filters the documents having a `count` of 1 to depict distinction.
The last pipeline operator [**`$group`**](http://docs.mongodb.org/manual/reference/operator/aggregation/group/#pipe._S_group) then group those distinct elements and adds them to a list by way of [**`$push`**](http://docs.mongodb.org/manual/reference/operator/aggregation/push) operator.
In the end your aggregation would look like this:
```
var collection = db.collection('Fixed_Asset_Register'),
pipeline = [
{
"$group": {
"_id": "$DataFields",
"count": {"$sum": 1}
}
},
{
"$match": { "count": 1 }
},
{
"$group": {
"_id": 0,
"distinct": { "$push": "$_id" }
}
}],
    result = collection.aggregate(pipeline).toArray(), // async in the Node.js driver: await it (or use a callback) before indexing
    distinctDataFieldsValue = result[0].distinct;
```
|
10,839,302
|
I have a django application that requires PIL and I have been getting errors so I decided to play around with PIL on my hosting server.
PIL is installed in my virtual environment. However, when running the following I get an error, and when I run it outside the virtual environment it works.
```
Python 2.7.3 (default, Apr 16 2012, 15:47:14)
[GCC 4.4.6 20110731 (Red Hat 4.4.6-3)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import Image
>>> im = Image.open('test.png')
>>> im
<PngImagePlugin.PngImageFile image mode=RGBA size=28x22 at 0x7F477CFFAE18>
>>> im.convert('RGB')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/python27/lib/python2.7/site-packages/PIL-1.1.7-py2.7-linux-x86_64.egg/Image.py", line 679, in convert
self.load()
File "/opt/python27/lib/python2.7/site-packages/PIL-1.1.7-py2.7-linux-x86_64.egg/ImageFile.py", line 189, in load
d = Image._getdecoder(self.mode, d, a, self.decoderconfig)
File "/opt/python27/lib/python2.7/site-packages/PIL-1.1.7-py2.7-linux-x86_64.egg/Image.py", line 385, in _getdecoder
raise IOError("decoder %s not available" % decoder_name)
IOError: decoder zip not available
>>>
```
|
2012/05/31
|
[
"https://Stackoverflow.com/questions/10839302",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/269106/"
] |
Most likely the Python you are using in your virtualenv was built by yourself rather than being the system Python - is that right? If so, your problem is that the header (.h) files for zlib were not installed on your system when Python itself was built.
You have to have the "zlib-devel" (or equivalent) package installed in Linux when building the Python you will use in your virtualenv. You can test whether it works by trying to `import zlib` from the Python console.
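For example, a quick check, run with the virtualenv's interpreter (illustrative):
```
# If the ImportError branch fires, the interpreter was built without zlib.
try:
    import zlib
    print("zlib OK: " + zlib.ZLIB_VERSION)
except ImportError:
    print("zlib support is missing from this Python build")
```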
As an alternative to rebuilding Python you can find your system's Python zip-related files and copy them to the Python used in your virtualenv (if they are the same Python version).
|
You could try [Pillow](http://pypi.python.org/pypi/Pillow) which is a repackaged PIL which plays much nicer with virtualenv.
|
46,423,582
|
I'm attempting to install cvxopt using Conda (which comes with the Anaconda python distribution), and I received the error message below. Apparently my Anaconda installation is using python 3.6, whereas cvxopt wants python 3.5\*. How can I fix this and install cvxopt using Conda?
After typing conda install cvxopt at the Anaconda prompt, the message I received was:
>
> Fetching package metadata ...........
>
>
> Solving package specifications: .
>
>
> UnsatisfiableError: The following specifications were found to be in
> conflict:
>
>
>
> ```
> - cvxopt -> python 3.5*
> - python 3.6*
>
> ```
>
> Use "conda info < package >" to see the dependencies for each package.
>
>
>
Here's a screenshot of the error message:
[](https://i.stack.imgur.com/oEbpa.png)
|
2017/09/26
|
[
"https://Stackoverflow.com/questions/46423582",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1854748/"
] |
It would appear that `cvxopt` requires Python 3.5. Easiest solution would be to use `conda` to create a separate environment for python 3.5 and then install cvxopt (and any other desired python packages). For example...
```
conda create -n cvxopt-env python=3.5 cvxopt numpy scipy matplotlib jupyter
```
...depending on your operating system you can then activate this environment using either...
```
source activate cvxopt-env
```
...or...
```
activate cvxopt-env
```
...you can then switch back to your default python install using...
```
deactivate
```
...check out the [`conda`](https://conda.io/docs/) docs for more details. In particular the docs for the [`conda create`](https://conda.io/docs/commands/conda-create.html) command.
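Once inside the environment, a quick sanity check might look like this (assuming `cvxopt` exposes `__version__`, which releases of that era do):
```
import sys
import cvxopt

print(sys.version_info[:2])   # expect (3, 5) inside cvxopt-env
print(cvxopt.__version__)     # assumes cvxopt exposes __version__
```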
|
Try
```
conda install cvxopt=1.1.8
```
It's the newer version and the only one with support for Python 3.6.
|
55,358,337
|
How can we apply conditions to a dataset in Python, specifically so that applying them fetches a column name as the output?
Let's say the below is the dataframe; my question is how we can retrieve a column name (let's say "name") as an output by applying conditions on this dataframe.
```
name salary jobDes
______________________________________
store1 | daniel | 50k | datascientist
store2 | paladugu | 55k | datascientist
store3 | theodor | 53k | dataEngineer
```
fetch a column name as a result, let's say "name"
|
2019/03/26
|
[
"https://Stackoverflow.com/questions/55358337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11260410/"
] |
(edit: added a slightly simplified non-recursive solution below)
You can do it like this: for each iteration, consider whether the item should be included or excluded.
```
def f(maxK,K, N, L, S):
if L == 0 or not N or K == 0:
return S
#either element is included
included = f(maxK,maxK, N[1:], L-1, S + N[0] )
#or excluded
excluded = f(maxK,K-1, N[1:], L, S )
return max(included, excluded)
assert f(2,2,[10,1,1,1,1,10],3,0) == 12
assert f(3,3,[8, 3, 7, 6, 2, 1, 9, 2, 5, 4],4,0) == 30
```
If N is very long you can consider changing to a table version; you could also change the input to tuples and use memoization, as in the sketch below.
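A sketch of that memoized variant (the same recursion reindexed by position so `functools.lru_cache` can key on small values; `N` stays outside the cache key):
```
from functools import lru_cache

def f_memo(maxK, N, L):
    @lru_cache(maxsize=None)
    def go(K, i, L):
        # best additional sum from position i with L picks left,
        # K exclusions allowed before the chain breaks
        if L == 0 or i == len(N) or K == 0:
            return 0
        included = N[i] + go(maxK, i + 1, L - 1)
        excluded = go(K - 1, i + 1, L)
        return max(included, excluded)
    return go(maxK, 0, L)

assert f_memo(2, [10, 1, 1, 1, 1, 10], 3) == 12
assert f_memo(3, [8, 3, 7, 6, 2, 1, 9, 2, 5, 4], 4) == 30
```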
Since the OP later included the information that N can be 100 000, we can't really use recursive solutions like this. So here is a solution that runs in O(n*K*L) time, with O(n*L) memory:
```
import numpy as np
def f(n,K,L):
t = np.zeros((len(n),L+1))
for l in range(1,L+1):
for i in range(len(n)):
t[i,l] = n[i] + max( (t[i-k,l-1] for k in range(1,K+1) if i-k >= 0), default = 0 )
return np.max(t)
assert f([10,1,1,1,1,10],2,3) == 12
assert f([8, 3, 7, 6, 2, 1, 9],3,4) == 30
```
Explanation of the non-recursive solution: each cell t[i, l] in the table holds the value of the max subsequence with exactly l elements that uses the element at position i, drawing only on elements at position i or lower, with chosen elements at most K apart.
Subsequences of length 1 (those in t[i, 1]) consist of only the single element, n[i].
Longer subsequences are n[i] plus a subsequence of l-1 elements that starts at most K rows earlier; we pick the one with the maximal value. By iterating this way, we ensure that value is already calculated.
Further improvement in memory is possible by considering that you only ever look at most K steps back, as sketched below.
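A sketch of that memory improvement, keeping only the previous column of the table (O(n) extra memory instead of O(n*L)); it mirrors the table version above, including its "best over any length up to L" return value:
```
import numpy as np

def f_lowmem(n, K, L):
    prev = np.zeros(len(n))          # column for l-1 picks (l=0: all zeros)
    best = 0
    for l in range(1, L + 1):
        cur = np.empty(len(n))
        for i in range(len(n)):
            w = prev[max(0, i - K):i]            # at most K steps back
            cur[i] = n[i] + (w.max() if w.size else 0)
        best = max(best, cur.max())
        prev = cur
    return best

assert f_lowmem([10, 1, 1, 1, 1, 10], 2, 3) == 12
assert f_lowmem([8, 3, 7, 6, 2, 1, 9], 3, 4) == 30
```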
|
Extending the code for `itertools.combinations` shown at the [docs](https://docs.python.org/3/library/itertools.html#itertools.combinations), I built a version that includes an argument for the maximum index distance (`K`) between two values. It only needed an additional `and indices[i] - indices[i-1] < K` check in the iteration:
```
def combinations_with_max_dist(iterable, r, K):
# combinations('ABCD', 2) --> AB AC AD BC BD CD
# combinations(range(4), 3) --> 012 013 023 123
pool = tuple(iterable)
n = len(pool)
if r > n:
return
indices = list(range(r))
yield tuple(pool[i] for i in indices)
while True:
for i in reversed(range(r)):
if indices[i] != i + n - r and indices[i] - indices[i-1] < K:
break
else:
return
indices[i] += 1
for j in range(i+1, r):
indices[j] = indices[j-1] + 1
yield tuple(pool[i] for i in indices)
```
Using this you can bruteforce over all combinations with regards to K, and then find the one that has the maximum value sum:
```
def find_subseq(a, L, K):
return max((sum(values), values) for values in combinations_with_max_dist(a, L, K))
```
Results:
```
print(*find_subseq([10, 1, 1, 1, 1, 10], L=3, K=2))
# 12 (10, 1, 1)
print(*find_subseq([8, 3, 7, 6, 2, 1, 9, 2, 5, 4], L=4, K=3))
# 30 (8, 7, 6, 9)
```
Not sure about the performance if your value lists become very long though...
|
55,358,337
|
How can we apply conditions to a dataset in Python, specifically so that applying them fetches a column name as the output?
Let's say the below is the dataframe; my question is how we can retrieve a column name (let's say "name") as an output by applying conditions on this dataframe.
```
name salary jobDes
______________________________________
store1 | daniel | 50k | datascientist
store2 | paladugu | 55k | datascientist
store3 | theodor | 53k | dataEngineer
```
fetch a column name as a result, let's say "name"
|
2019/03/26
|
[
"https://Stackoverflow.com/questions/55358337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11260410/"
] |
(edit: added a slightly simplified non-recursive solution below)
You can do it like this: for each iteration, consider whether the item should be included or excluded.
```
def f(maxK,K, N, L, S):
if L == 0 or not N or K == 0:
return S
#either element is included
included = f(maxK,maxK, N[1:], L-1, S + N[0] )
#or excluded
excluded = f(maxK,K-1, N[1:], L, S )
return max(included, excluded)
assert f(2,2,[10,1,1,1,1,10],3,0) == 12
assert f(3,3,[8, 3, 7, 6, 2, 1, 9, 2, 5, 4],4,0) == 30
```
If N is very long you can consider changing to a table version; you could also change the input to tuples and use memoization.
Since the OP later included the information that N can be 100 000, we can't really use recursive solutions like this. So here is a solution that runs in O(n*K*L) time, with O(n*L) memory:
```
import numpy as np
def f(n,K,L):
t = np.zeros((len(n),L+1))
for l in range(1,L+1):
for i in range(len(n)):
t[i,l] = n[i] + max( (t[i-k,l-1] for k in range(1,K+1) if i-k >= 0), default = 0 )
return np.max(t)
assert f([10,1,1,1,1,10],2,3) == 12
assert f([8, 3, 7, 6, 2, 1, 9],3,4) == 30
```
Explanation of the non-recursive solution: each cell t[i, l] in the table holds the value of the max subsequence with exactly l elements that uses the element at position i, drawing only on elements at position i or lower, with chosen elements at most K apart.
Subsequences of length 1 (those in t[i, 1]) consist of only the single element, n[i].
Longer subsequences are n[i] plus a subsequence of l-1 elements that starts at most K rows earlier; we pick the one with the maximal value. By iterating this way, we ensure that value is already calculated.
Further improvement in memory is possible by considering that you only ever look at most K steps back.
|
---
Algorithm
---------
**Basic idea:**
* **Iteration** on input array, choose each index as the first taken element.
* Then **Recursion** on each first taken element, mark the index as `firstIdx`.
+ The next possible index would be in range `[firstIdx + 1, firstIdx + K]`, both inclusive.
+ Loop on the range to call each index recursively, with `L - 1` as the new L.
* Optionally, for each pair of (`firstIdx`, `L`), **cache** its max sum for reuse.
This is likely necessary for large input.
**Constraints**:
* `array length` <= `1 << 17` // `131072`
* `K` <= `1 << 6` // `64`
* `L` <= `1 << 8` // `256`
**Complexity:**
* **Time:** `O(n * L * K)`
Since each `(firstIdx, L)` pair is only calculated once, and each calculation contains an iteration over `K`.
* **Space**: `O(n * L)`
For cache, and method stack in recursive call.
**Tips:**
* Depth of recursion is related to `L`, ***not*** `array length`.
* The defined constraints are not the actual limits; they could be larger, though I didn't test how large they can be.
Basically:
+ Both `array length` and `K` could actually be of any size as long as there is enough memory, since they are handled via iteration.
+ `L` is handled via recursion, thus it does have a limit.
---
Code - in `Java`
----------------
**SubSumLimitedDistance.java:**
```
import java.util.HashMap;
import java.util.Map;
public class SubSumLimitedDistance {
public static final long NOT_ENOUGH_ELE = -1; // sum that indicate not enough element, should be < 0,
public static final int MAX_ARR_LEN = 1 << 17; // max length of input array,
public static final int MAX_K = 1 << 6; // max K, should not be too long, otherwise slow,
public static final int MAX_L = 1 << 8; // max L, should not be too long, otherwise stackoverflow,
/**
* Find max sum of sum array.
*
* @param arr
* @param K
* @param L
* @return max sum,
*/
public static long find(int[] arr, int K, int L) {
if (K < 1 || K > MAX_K)
throw new IllegalArgumentException("K should be between [1, " + MAX_K + "], but get: " + K);
if (L < 0 || L > MAX_L)
throw new IllegalArgumentException("L should be between [0, " + MAX_L + "], but get: " + L);
if (arr.length > MAX_ARR_LEN)
throw new IllegalArgumentException("input array length should <= " + MAX_ARR_LEN + ", but get: " + arr.length);
Map<Integer, Map<Integer, Long>> cache = new HashMap<>(); // cache,
long maxSum = NOT_ENOUGH_ELE;
for (int i = 0; i < arr.length; i++) {
long sum = findTakeFirst(arr, K, L, i, cache);
if (sum == NOT_ENOUGH_ELE) break; // not enough elements,
if (sum > maxSum) maxSum = sum; // larger found,
}
return maxSum;
}
/**
* Find max sum of sum array, with index of first taken element specified,
*
* @param arr
* @param K
* @param L
* @param firstIdx index of first taken element,
* @param cache
* @return max sum,
*/
private static long findTakeFirst(int[] arr, int K, int L, int firstIdx, Map<Integer, Map<Integer, Long>> cache) {
// System.out.printf("findTakeFirst(): K = %d, L = %d, firstIdx = %d\n", K, L, firstIdx);
if (L == 0) return 0; // done,
if (firstIdx + L > arr.length) return NOT_ENOUGH_ELE; // not enough elements,
// check cache,
Map<Integer, Long> map = cache.get(firstIdx);
Long cachedResult;
if (map != null && (cachedResult = map.get(L)) != null) {
// System.out.printf("hit cache, cached result = %d\n", cachedResult);
return cachedResult;
}
// cache not exists, calculate,
long maxRemainSum = NOT_ENOUGH_ELE;
for (int i = firstIdx + 1; i <= firstIdx + K; i++) {
long remainSum = findTakeFirst(arr, K, L - 1, i, cache);
if (remainSum == NOT_ENOUGH_ELE) break; // not enough elements,
if (remainSum > maxRemainSum) maxRemainSum = remainSum;
}
if ((map = cache.get(firstIdx)) == null) cache.put(firstIdx, map = new HashMap<>());
if (maxRemainSum == NOT_ENOUGH_ELE) { // not enough elements,
map.put(L, NOT_ENOUGH_ELE); // cache - as not enough elements,
return NOT_ENOUGH_ELE;
}
long maxSum = arr[firstIdx] + maxRemainSum; // max sum,
map.put(L, maxSum); // cache - max sum,
return maxSum;
}
}
```
**SubSumLimitedDistanceTest.java:**
*(test case, via `TestNG`)*
```
import org.testng.Assert;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;
import java.util.concurrent.ThreadLocalRandom;
public class SubSumLimitedDistanceTest {
private int[] arr;
private int K;
private int L;
private int maxSum;
private int[] arr2;
private int K2;
private int L2;
private int maxSum2;
private int[] arrMax;
private int KMax;
private int KMaxLargest;
private int LMax;
private int LMaxLargest;
@BeforeClass
private void setUp() {
// init - arr,
arr = new int[]{10, 1, 1, 1, 1, 10};
K = 2;
L = 3;
maxSum = 12;
// init - arr2,
arr2 = new int[]{8, 3, 7, 6, 2, 1, 9, 2, 5, 4};
K2 = 3;
L2 = 4;
maxSum2 = 30;
// init - arrMax,
arrMax = new int[SubSumLimitedDistance.MAX_ARR_LEN];
ThreadLocalRandom rd = ThreadLocalRandom.current();
long maxLongEle = Long.MAX_VALUE / SubSumLimitedDistance.MAX_ARR_LEN;
int maxEle = maxLongEle > Integer.MAX_VALUE ? Integer.MAX_VALUE : (int) maxLongEle;
for (int i = 0; i < arrMax.length; i++) {
arrMax[i] = rd.nextInt(maxEle);
}
KMax = 5;
LMax = 10;
KMaxLargest = SubSumLimitedDistance.MAX_K;
LMaxLargest = SubSumLimitedDistance.MAX_L;
}
@Test
public void test() {
Assert.assertEquals(SubSumLimitedDistance.find(arr, K, L), maxSum);
Assert.assertEquals(SubSumLimitedDistance.find(arr2, K2, L2), maxSum2);
}
@Test(timeOut = 6000)
public void test_veryLargeArray() {
run_printDuring(arrMax, KMax, LMax);
}
@Test(timeOut = 60000) // takes seconds,
public void test_veryLargeArrayL() {
run_printDuring(arrMax, KMax, LMaxLargest);
}
@Test(timeOut = 60000) // takes seconds,
public void test_veryLargeArrayK() {
run_printDuring(arrMax, KMaxLargest, LMax);
}
// run find once, and print during,
private void run_printDuring(int[] arr, int K, int L) {
long startTime = System.currentTimeMillis();
long sum = SubSumLimitedDistance.find(arr, K, L);
long during = System.currentTimeMillis() - startTime; // during in milliseconds,
System.out.printf("arr length = %5d, K = %3d, L = %4d, max sum = %15d, running time = %.3f seconds\n", arr.length, K, L, sum, during / 1000.0);
}
@Test
public void test_corner_notEnoughEle() {
Assert.assertEquals(SubSumLimitedDistance.find(new int[]{1}, 2, 3), SubSumLimitedDistance.NOT_ENOUGH_ELE); // not enough element,
Assert.assertEquals(SubSumLimitedDistance.find(new int[]{0}, 1, 3), SubSumLimitedDistance.NOT_ENOUGH_ELE); // not enough element,
}
@Test
public void test_corner_ZeroL() {
Assert.assertEquals(SubSumLimitedDistance.find(new int[]{1, 2, 3}, 2, 0), 0); // L = 0,
Assert.assertEquals(SubSumLimitedDistance.find(new int[]{0}, 1, 0), 0); // L = 0,
}
@Test(expectedExceptions = IllegalArgumentException.class)
public void test_invalid_K() {
// SubSumLimitedDistance.find(new int[]{1, 2, 3}, 0, 2); // K = 0,
// SubSumLimitedDistance.find(new int[]{1, 2, 3}, -1, 2); // K = -1,
SubSumLimitedDistance.find(new int[]{1, 2, 3}, SubSumLimitedDistance.MAX_K + 1, 2); // K = SubSumLimitedDistance.MAX_K+1,
}
@Test(expectedExceptions = IllegalArgumentException.class)
public void test_invalid_L() {
// SubSumLimitedDistance.find(new int[]{1, 2, 3}, 2, -1); // L = -1,
SubSumLimitedDistance.find(new int[]{1, 2, 3}, 2, SubSumLimitedDistance.MAX_L + 1); // L = SubSumLimitedDistance.MAX_L+1,
}
@Test(expectedExceptions = IllegalArgumentException.class)
public void test_invalid_tooLong() {
SubSumLimitedDistance.find(new int[SubSumLimitedDistance.MAX_ARR_LEN + 1], 2, 3); // input array too long,
}
}
```
**Output of test case for large input:**
```
arr length = 131072, K = 5, L = 10, max sum = 20779205738, running time = 0.303 seconds
arr length = 131072, K = 64, L = 10, max sum = 21393422854, running time = 1.917 seconds
arr length = 131072, K = 5, L = 256, max sum = 461698553839, running time = 9.474 seconds
```
|
55,358,337
|
How can we apply conditions to a dataset in Python, specifically so that applying them fetches a column name as the output?
Let's say the below is the dataframe; my question is how we can retrieve a column name (let's say "name") as an output by applying conditions on this dataframe.
```
name salary jobDes
______________________________________
store1 | daniel | 50k | datascientist
store2 | paladugu | 55k | datascientist
store3 | theodor | 53k | dataEngineer
```
fetch a column name as a result, let's say "name"
|
2019/03/26
|
[
"https://Stackoverflow.com/questions/55358337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11260410/"
] |
Extending the code for `itertools.combinations` shown at the [docs](https://docs.python.org/3/library/itertools.html#itertools.combinations), I built a version that includes an argument for the maximum index distance (`K`) between two values. It only needed an additional `and indices[i] - indices[i-1] < K` check in the iteration:
```
def combinations_with_max_dist(iterable, r, K):
# combinations('ABCD', 2) --> AB AC AD BC BD CD
# combinations(range(4), 3) --> 012 013 023 123
pool = tuple(iterable)
n = len(pool)
if r > n:
return
indices = list(range(r))
yield tuple(pool[i] for i in indices)
while True:
for i in reversed(range(r)):
if indices[i] != i + n - r and indices[i] - indices[i-1] < K:
break
else:
return
indices[i] += 1
for j in range(i+1, r):
indices[j] = indices[j-1] + 1
yield tuple(pool[i] for i in indices)
```
Using this you can bruteforce over all combinations with regards to K, and then find the one that has the maximum value sum:
```
def find_subseq(a, L, K):
return max((sum(values), values) for values in combinations_with_max_dist(a, L, K))
```
Results:
```
print(*find_subseq([10, 1, 1, 1, 1, 10], L=3, K=2))
# 12 (10, 1, 1)
print(*find_subseq([8, 3, 7, 6, 2, 1, 9, 2, 5, 4], L=4, K=3))
# 30 (8, 7, 6, 9)
```
Not sure about the performance if your value lists become very long though...
|
---
Algorithm
---------
**Basic idea:**
* **Iteration** on input array, choose each index as the first taken element.
* Then **Recursion** on each first taken element, mark the index as `firstIdx`.
+ The next possible index would be in range `[firstIdx + 1, firstIdx + K]`, both inclusive.
+ Loop on the range to call each index recursively, with `L - 1` as the new L.
* Optionally, for each pair of (`firstIdx`, `L`), **cache** its max sum for reuse.
This is likely necessary for large input.
**Constraints**:
* `array length` <= `1 << 17` // `131072`
* `K` <= `1 << 6` // `64`
* `L` <= `1 << 8` // `256`
**Complexity:**
* **Time:** `O(n * L * K)`
Since each `(firstIdx, L)` pair is only calculated once, and each calculation contains an iteration over `K`.
* **Space**: `O(n * L)`
For cache, and method stack in recursive call.
**Tips:**
* Depth of recursion is related to `L`, ***not*** `array length`.
* The defined constraints are not the actual limits; they could be larger, though I didn't test how large they can be.
Basically:
+ Both `array length` and `K` could actually be of any size as long as there is enough memory, since they are handled via iteration.
+ `L` is handled via recursion, thus it does have a limit.
---
Code - in `Java`
----------------
**SubSumLimitedDistance.java:**
```
import java.util.HashMap;
import java.util.Map;
public class SubSumLimitedDistance {
public static final long NOT_ENOUGH_ELE = -1; // sum that indicate not enough element, should be < 0,
public static final int MAX_ARR_LEN = 1 << 17; // max length of input array,
public static final int MAX_K = 1 << 6; // max K, should not be too long, otherwise slow,
public static final int MAX_L = 1 << 8; // max L, should not be too long, otherwise stackoverflow,
/**
* Find max sum of sum array.
*
* @param arr
* @param K
* @param L
* @return max sum,
*/
public static long find(int[] arr, int K, int L) {
if (K < 1 || K > MAX_K)
throw new IllegalArgumentException("K should be between [1, " + MAX_K + "], but get: " + K);
if (L < 0 || L > MAX_L)
throw new IllegalArgumentException("L should be between [0, " + MAX_L + "], but get: " + L);
if (arr.length > MAX_ARR_LEN)
throw new IllegalArgumentException("input array length should <= " + MAX_ARR_LEN + ", but get: " + arr.length);
Map<Integer, Map<Integer, Long>> cache = new HashMap<>(); // cache,
long maxSum = NOT_ENOUGH_ELE;
for (int i = 0; i < arr.length; i++) {
long sum = findTakeFirst(arr, K, L, i, cache);
if (sum == NOT_ENOUGH_ELE) break; // not enough elements,
if (sum > maxSum) maxSum = sum; // larger found,
}
return maxSum;
}
/**
* Find max sum of sum array, with index of first taken element specified,
*
* @param arr
* @param K
* @param L
* @param firstIdx index of first taken element,
* @param cache
* @return max sum,
*/
private static long findTakeFirst(int[] arr, int K, int L, int firstIdx, Map<Integer, Map<Integer, Long>> cache) {
// System.out.printf("findTakeFirst(): K = %d, L = %d, firstIdx = %d\n", K, L, firstIdx);
if (L == 0) return 0; // done,
if (firstIdx + L > arr.length) return NOT_ENOUGH_ELE; // not enough elements,
// check cache,
Map<Integer, Long> map = cache.get(firstIdx);
Long cachedResult;
if (map != null && (cachedResult = map.get(L)) != null) {
// System.out.printf("hit cache, cached result = %d\n", cachedResult);
return cachedResult;
}
// cache miss, calculate,
long maxRemainSum = NOT_ENOUGH_ELE;
for (int i = firstIdx + 1; i <= firstIdx + K; i++) {
long remainSum = findTakeFirst(arr, K, L - 1, i, cache);
if (remainSum == NOT_ENOUGH_ELE) break; // not enough elements,
if (remainSum > maxRemainSum) maxRemainSum = remainSum;
}
if ((map = cache.get(firstIdx)) == null) cache.put(firstIdx, map = new HashMap<>());
if (maxRemainSum == NOT_ENOUGH_ELE) { // not enough elements,
map.put(L, NOT_ENOUGH_ELE); // cache - as not enough elements,
return NOT_ENOUGH_ELE;
}
long maxSum = arr[firstIdx] + maxRemainSum; // max sum,
map.put(L, maxSum); // cache - max sum,
return maxSum;
}
}
```
**SubSumLimitedDistanceTest.java:**
*(test case, via `TestNG`)*
```
import org.testng.Assert;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;
import java.util.concurrent.ThreadLocalRandom;
public class SubSumLimitedDistanceTest {
private int[] arr;
private int K;
private int L;
private int maxSum;
private int[] arr2;
private int K2;
private int L2;
private int maxSum2;
private int[] arrMax;
private int KMax;
private int KMaxLargest;
private int LMax;
private int LMaxLargest;
@BeforeClass
private void setUp() {
// init - arr,
arr = new int[]{10, 1, 1, 1, 1, 10};
K = 2;
L = 3;
maxSum = 12;
// init - arr2,
arr2 = new int[]{8, 3, 7, 6, 2, 1, 9, 2, 5, 4};
K2 = 3;
L2 = 4;
maxSum2 = 30;
// init - arrMax,
arrMax = new int[SubSumLimitedDistance.MAX_ARR_LEN];
ThreadLocalRandom rd = ThreadLocalRandom.current();
long maxLongEle = Long.MAX_VALUE / SubSumLimitedDistance.MAX_ARR_LEN;
int maxEle = maxLongEle > Integer.MAX_VALUE ? Integer.MAX_VALUE : (int) maxLongEle;
for (int i = 0; i < arrMax.length; i++) {
arrMax[i] = rd.nextInt(maxEle);
}
KMax = 5;
LMax = 10;
KMaxLargest = SubSumLimitedDistance.MAX_K;
LMaxLargest = SubSumLimitedDistance.MAX_L;
}
@Test
public void test() {
Assert.assertEquals(SubSumLimitedDistance.find(arr, K, L), maxSum);
Assert.assertEquals(SubSumLimitedDistance.find(arr2, K2, L2), maxSum2);
}
@Test(timeOut = 6000)
public void test_veryLargeArray() {
run_printDuring(arrMax, KMax, LMax);
}
@Test(timeOut = 60000) // takes seconds,
public void test_veryLargeArrayL() {
run_printDuring(arrMax, KMax, LMaxLargest);
}
@Test(timeOut = 60000) // takes seconds,
public void test_veryLargeArrayK() {
run_printDuring(arrMax, KMaxLargest, LMax);
}
// run find once and print the elapsed time,
private void run_printDuring(int[] arr, int K, int L) {
long startTime = System.currentTimeMillis();
long sum = SubSumLimitedDistance.find(arr, K, L);
long during = System.currentTimeMillis() - startTime; // elapsed time in milliseconds,
System.out.printf("arr length = %5d, K = %3d, L = %4d, max sum = %15d, running time = %.3f seconds\n", arr.length, K, L, sum, during / 1000.0);
}
@Test
public void test_corner_notEnoughEle() {
Assert.assertEquals(SubSumLimitedDistance.find(new int[]{1}, 2, 3), SubSumLimitedDistance.NOT_ENOUGH_ELE); // not enough elements,
Assert.assertEquals(SubSumLimitedDistance.find(new int[]{0}, 1, 3), SubSumLimitedDistance.NOT_ENOUGH_ELE); // not enough elements,
}
@Test
public void test_corner_ZeroL() {
Assert.assertEquals(SubSumLimitedDistance.find(new int[]{1, 2, 3}, 2, 0), 0); // L = 0,
Assert.assertEquals(SubSumLimitedDistance.find(new int[]{0}, 1, 0), 0); // L = 0,
}
@Test(expectedExceptions = IllegalArgumentException.class)
public void test_invalid_K() {
// SubSumLimitedDistance.find(new int[]{1, 2, 3}, 0, 2); // K = 0,
// SubSumLimitedDistance.find(new int[]{1, 2, 3}, -1, 2); // K = -1,
SubSumLimitedDistance.find(new int[]{1, 2, 3}, SubSumLimitedDistance.MAX_K + 1, 2); // K = SubSumLimitedDistance.MAX_K+1,
}
@Test(expectedExceptions = IllegalArgumentException.class)
public void test_invalid_L() {
// SubSumLimitedDistance.find(new int[]{1, 2, 3}, 2, -1); // L = -1,
SubSumLimitedDistance.find(new int[]{1, 2, 3}, 2, SubSumLimitedDistance.MAX_L + 1); // L = SubSumLimitedDistance.MAX_L+1,
}
@Test(expectedExceptions = IllegalArgumentException.class)
public void test_invalid_tooLong() {
SubSumLimitedDistance.find(new int[SubSumLimitedDistance.MAX_ARR_LEN + 1], 2, 3); // input array too long,
}
}
```
**Output of test case for large input:**
```
arr length = 131072, K = 5, L = 10, max sum = 20779205738, running time = 0.303 seconds
arr length = 131072, K = 64, L = 10, max sum = 21393422854, running time = 1.917 seconds
arr length = 131072, K = 5, L = 256, max sum = 461698553839, running time = 9.474 seconds
```
|
55,358,337
|
How can we apply conditions to a dataset in Python, and in particular, how can we fetch a column name as the output after applying them?
Let's say the one below is the dataframe; my question is how we can retrieve a column name (let's say "name") as an output by applying conditions on this dataframe:
```
         name       salary   jobDes
______________________________________
store1 | daniel   | 50k    | datascientist
store2 | paladugu | 55k    | datascientist
store3 | theodor  | 53k    | dataEngineer
```
i.e., fetch a column name such as "name" as the result.
|
2019/03/26
|
[
"https://Stackoverflow.com/questions/55358337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11260410/"
] |
Here is a bottom-up (i.e. no recursion) dynamic programming solution in Python. It takes `O(l * n)` memory and `O(l * n * k)` time.
```
def max_subseq_sum(k, l, values):
    # table[i][j] will be the highest value from a sequence of length j
    # ending at position i
    table = []
    for i in range(len(values)):
        # We have no sum from 0, and i from len 1.
        table.append([0, values[i]])
        # By length of previous subsequence
        for subseq_len in range(1, l):
            # We look back up to k for the best.
            prev_val = None
            for last_i in range(i-k, i):
                # We don't look back if the sequence was not that long.
                if subseq_len <= last_i+1:
                    # Is this better?
                    this_val = table[last_i][subseq_len]
                    if prev_val is None or prev_val < this_val:
                        prev_val = this_val
            # Do we have a best to offer?
            if prev_val is not None:
                table[i].append(prev_val + values[i])
    # Now we look for the best entry of length l.
    best_val = None
    for row in table:
        # If the row has entries for lengths 0..l, it will have len > l.
        if l < len(row):
            if best_val is None or best_val < row[l]:
                best_val = row[l]
    return best_val
print(max_subseq_sum(2, 3, [10, 1, 1, 1, 1, 10]))
print(max_subseq_sum(3, 4, [8, 3, 7, 6, 2, 1, 9, 2, 5, 4]))
```
If I wanted to be slightly clever I could make this memory `O(n)` pretty easily by calculating one layer at a time and throwing away the previous one. It takes a lot of cleverness to reduce the running time to `O(l*n*log(k))`, but that is doable. (Use a priority queue for your best value in the last k. It is `O(log(k))` to update it for each element, but it naturally grows. Every `k` values you throw it away and rebuild it, for an `O(k)` cost incurred `O(n/k)` times, i.e. a total `O(n)` rebuild cost.)
And here is the clever version. Memory `O(n)`. Time `O(n*l*log(k))` worst case, with an average case of `O(n*l)`. You hit the worst case when the input is sorted in ascending order.
```
import heapq

def max_subseq_sum(k, l, values):
    prev_best = [0 for _ in values]
    # i represents how many in prev subsequences
    # It ranges from 0..(l-1).
    for i in range(l):
        # We are building subsequences of length i+1.
        # We will have no way to find one that ends
        # before the i'th element at position i-1
        best = [None for _ in range(i)]
        # Our heap will be (-sum, index). It is a min_heap so the
        # minimum element has the largest sum. We track the index
        # so that we know when it is in the last k.
        min_heap = [(-prev_best[i-1], i-1)]
        for j in range(i, len(values)):
            # Remove best elements that are more than k back.
            while min_heap[0][-1] < j-k:
                heapq.heappop(min_heap)
            # We append this value + (best prev sum) using -(-..) = +.
            best.append(values[j] - min_heap[0][0])
            heapq.heappush(min_heap, (-prev_best[j], j))
            # And now keep min_heap from growing too big.
            if 2*k < len(min_heap):
                # Filter out elements too far back.
                min_heap = [_ for _ in min_heap if j - k < _[1]]
                # And make into a heap again.
                heapq.heapify(min_heap)
        # And now finish this layer.
        prev_best = best
    return max(prev_best)
```
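For example, re-running the same two test cases as above on the clever version (the expected outputs, matching the bottom-up version, are shown as comments):
```
print(max_subseq_sum(2, 3, [10, 1, 1, 1, 1, 10]))            # 12
print(max_subseq_sum(3, 4, [8, 3, 7, 6, 2, 1, 9, 2, 5, 4]))  # 30
```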
|
Extending the code for `itertools.combinations` shown at the [docs](https://docs.python.org/3/library/itertools.html#itertools.combinations), I built a version that includes an argument for the maximum index distance (`K`) between two values. It only needed an additional `and indices[i] - indices[i-1] < K` check in the iteration:
```
def combinations_with_max_dist(iterable, r, K):
    # combinations('ABCD', 2) --> AB AC AD BC BD CD
    # combinations(range(4), 3) --> 012 013 023 123
    pool = tuple(iterable)
    n = len(pool)
    if r > n:
        return
    indices = list(range(r))
    yield tuple(pool[i] for i in indices)
    while True:
        for i in reversed(range(r)):
            if indices[i] != i + n - r and indices[i] - indices[i-1] < K:
                break
        else:
            return
        indices[i] += 1
        for j in range(i+1, r):
            indices[j] = indices[j-1] + 1
        yield tuple(pool[i] for i in indices)
```
Using this you can brute-force over all combinations with regard to `K`, and then find the one that has the maximum value sum:
```
def find_subseq(a, L, K):
    return max((sum(values), values) for values in combinations_with_max_dist(a, L, K))
```
Results:
```
print(*find_subseq([10, 1, 1, 1, 1, 10], L=3, K=2))
# 12 (10, 1, 1)
print(*find_subseq([8, 3, 7, 6, 2, 1, 9, 2, 5, 4], L=4, K=3))
# 30 (8, 7, 6, 9)
```
Not sure about the performance if your value lists become very long though...
|
55,358,337
|
How can we apply conditions to a dataset in Python, and in particular, how can we fetch a column name as the output after applying them?
Let's say the one below is the dataframe; my question is how we can retrieve a column name (let's say "name") as an output by applying conditions on this dataframe:
```
         name       salary   jobDes
______________________________________
store1 | daniel   | 50k    | datascientist
store2 | paladugu | 55k    | datascientist
store3 | theodor  | 53k    | dataEngineer
```
i.e., fetch a column name such as "name" as the result.
|
2019/03/26
|
[
"https://Stackoverflow.com/questions/55358337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11260410/"
] |
Here is a bottom-up (i.e. no recursion) dynamic programming solution in Python. It takes `O(l * n)` memory and `O(l * n * k)` time.
```
def max_subseq_sum(k, l, values):
    # table[i][j] will be the highest value from a sequence of length j
    # ending at position i
    table = []
    for i in range(len(values)):
        # We have no sum from 0, and i from len 1.
        table.append([0, values[i]])
        # By length of previous subsequence
        for subseq_len in range(1, l):
            # We look back up to k for the best.
            prev_val = None
            for last_i in range(i-k, i):
                # We don't look back if the sequence was not that long.
                if subseq_len <= last_i+1:
                    # Is this better?
                    this_val = table[last_i][subseq_len]
                    if prev_val is None or prev_val < this_val:
                        prev_val = this_val
            # Do we have a best to offer?
            if prev_val is not None:
                table[i].append(prev_val + values[i])
    # Now we look for the best entry of length l.
    best_val = None
    for row in table:
        # If the row has entries for lengths 0..l, it will have len > l.
        if l < len(row):
            if best_val is None or best_val < row[l]:
                best_val = row[l]
    return best_val
print(max_subseq_sum(2, 3, [10, 1, 1, 1, 1, 10]))
print(max_subseq_sum(3, 4, [8, 3, 7, 6, 2, 1, 9, 2, 5, 4]))
```
If I wanted to be slightly clever I could make this memory `O(n)` pretty easily by calculating one layer at a time and throwing away the previous one. It takes a lot of cleverness to reduce the running time to `O(l*n*log(k))`, but that is doable. (Use a priority queue for your best value in the last k. It is `O(log(k))` to update it for each element, but it naturally grows. Every `k` values you throw it away and rebuild it, for an `O(k)` cost incurred `O(n/k)` times, i.e. a total `O(n)` rebuild cost.)
And here is the clever version. Memory `O(n)`. Time `O(n*l*log(k))` worst case, with an average case of `O(n*l)`. You hit the worst case when the input is sorted in ascending order.
```
import heapq

def max_subseq_sum(k, l, values):
    prev_best = [0 for _ in values]
    # i represents how many in prev subsequences
    # It ranges from 0..(l-1).
    for i in range(l):
        # We are building subsequences of length i+1.
        # We will have no way to find one that ends
        # before the i'th element at position i-1
        best = [None for _ in range(i)]
        # Our heap will be (-sum, index). It is a min_heap so the
        # minimum element has the largest sum. We track the index
        # so that we know when it is in the last k.
        min_heap = [(-prev_best[i-1], i-1)]
        for j in range(i, len(values)):
            # Remove best elements that are more than k back.
            while min_heap[0][-1] < j-k:
                heapq.heappop(min_heap)
            # We append this value + (best prev sum) using -(-..) = +.
            best.append(values[j] - min_heap[0][0])
            heapq.heappush(min_heap, (-prev_best[j], j))
            # And now keep min_heap from growing too big.
            if 2*k < len(min_heap):
                # Filter out elements too far back.
                min_heap = [_ for _ in min_heap if j - k < _[1]]
                # And make into a heap again.
                heapq.heapify(min_heap)
        # And now finish this layer.
        prev_best = best
    return max(prev_best)
```
|
---
Algorithm
---------
**Basic idea:**
* **Iteration** on input array, choose each index as the first taken element.
* Then **Recursion** on each first taken element, mark the index as `firstIdx`.
+ The next possible index would be in range `[firstIdx + 1, firstIdx + K]`, both inclusive.
+ Loop on the range to call each index recursively, with `L - 1` as the new L.
* Optionally, for each pair of (`firstIdx`, `L`), **cache** its max sum for reuse.
This may well be necessary for large inputs.
**Constraints**:
* `array length` <= `1 << 17` // `131072`
* `K` <= `1 << 6` // `64`
* `L` <= `1 << 8` // `256`
**Complexity:**
* **Time:** `O(n * L * K)`
Since each `(firstIdx, L)` pair is calculated only once, and each calculation contains an iteration over `K` candidates.
* **Space:** `O(n * L)`
For the cache, plus the method stack of the recursive calls.
**Tips:**
* Depth of recursion is related to `L`, ***not*** `array length`.
* The defined constraints are not the actual limits; they could be larger, though I didn't test how large they can be.
Basically:
+ Both `array length` and `K` could actually be of any size as long as there is enough memory, since they are handled via iteration.
+ `L` is handled via recursion, thus it does have a limit.
---
Code - in `Java`
----------------
**SubSumLimitedDistance.java:**
```
import java.util.HashMap;
import java.util.Map;
public class SubSumLimitedDistance {
public static final long NOT_ENOUGH_ELE = -1; // sentinel sum indicating not enough elements; must be < 0,
public static final int MAX_ARR_LEN = 1 << 17; // max length of input array,
public static final int MAX_K = 1 << 6; // max K; should not be too large, otherwise slow,
public static final int MAX_L = 1 << 8; // max L; should not be too large, otherwise stack overflow,
/**
* Find max sum of sum array.
*
* @param arr
* @param K
* @param L
* @return max sum,
*/
public static long find(int[] arr, int K, int L) {
if (K < 1 || K > MAX_K)
throw new IllegalArgumentException("K should be between [1, " + MAX_K + "], but get: " + K);
if (L < 0 || L > MAX_L)
throw new IllegalArgumentException("L should be between [0, " + MAX_L + "], but get: " + L);
if (arr.length > MAX_ARR_LEN)
throw new IllegalArgumentException("input array length should <= " + MAX_ARR_LEN + ", but get: " + arr.length);
Map<Integer, Map<Integer, Long>> cache = new HashMap<>(); // cache,
long maxSum = NOT_ENOUGH_ELE;
for (int i = 0; i < arr.length; i++) {
long sum = findTakeFirst(arr, K, L, i, cache);
if (sum == NOT_ENOUGH_ELE) break; // not enough elements,
if (sum > maxSum) maxSum = sum; // larger found,
}
return maxSum;
}
/**
* Find max sum of sum array, with index of first taken element specified,
*
* @param arr
* @param K
* @param L
* @param firstIdx index of first taken element,
* @param cache
* @return max sum,
*/
private static long findTakeFirst(int[] arr, int K, int L, int firstIdx, Map<Integer, Map<Integer, Long>> cache) {
// System.out.printf("findTakeFirst(): K = %d, L = %d, firstIdx = %d\n", K, L, firstIdx);
if (L == 0) return 0; // done,
if (firstIdx + L > arr.length) return NOT_ENOUGH_ELE; // not enough elements,
// check cache,
Map<Integer, Long> map = cache.get(firstIdx);
Long cachedResult;
if (map != null && (cachedResult = map.get(L)) != null) {
// System.out.printf("hit cache, cached result = %d\n", cachedResult);
return cachedResult;
}
// cache miss, calculate,
long maxRemainSum = NOT_ENOUGH_ELE;
for (int i = firstIdx + 1; i <= firstIdx + K; i++) {
long remainSum = findTakeFirst(arr, K, L - 1, i, cache);
if (remainSum == NOT_ENOUGH_ELE) break; // not enough elements,
if (remainSum > maxRemainSum) maxRemainSum = remainSum;
}
if ((map = cache.get(firstIdx)) == null) cache.put(firstIdx, map = new HashMap<>());
if (maxRemainSum == NOT_ENOUGH_ELE) { // not enough elements,
map.put(L, NOT_ENOUGH_ELE); // cache - as not enough elements,
return NOT_ENOUGH_ELE;
}
long maxSum = arr[firstIdx] + maxRemainSum; // max sum,
map.put(L, maxSum); // cache - max sum,
return maxSum;
}
}
```
**SubSumLimitedDistanceTest.java:**
*(test case, via `TestNG`)*
```
import org.testng.Assert;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;
import java.util.concurrent.ThreadLocalRandom;
public class SubSumLimitedDistanceTest {
private int[] arr;
private int K;
private int L;
private int maxSum;
private int[] arr2;
private int K2;
private int L2;
private int maxSum2;
private int[] arrMax;
private int KMax;
private int KMaxLargest;
private int LMax;
private int LMaxLargest;
@BeforeClass
private void setUp() {
// init - arr,
arr = new int[]{10, 1, 1, 1, 1, 10};
K = 2;
L = 3;
maxSum = 12;
// init - arr2,
arr2 = new int[]{8, 3, 7, 6, 2, 1, 9, 2, 5, 4};
K2 = 3;
L2 = 4;
maxSum2 = 30;
// init - arrMax,
arrMax = new int[SubSumLimitedDistance.MAX_ARR_LEN];
ThreadLocalRandom rd = ThreadLocalRandom.current();
long maxLongEle = Long.MAX_VALUE / SubSumLimitedDistance.MAX_ARR_LEN;
int maxEle = maxLongEle > Integer.MAX_VALUE ? Integer.MAX_VALUE : (int) maxLongEle;
for (int i = 0; i < arrMax.length; i++) {
arrMax[i] = rd.nextInt(maxEle);
}
KMax = 5;
LMax = 10;
KMaxLargest = SubSumLimitedDistance.MAX_K;
LMaxLargest = SubSumLimitedDistance.MAX_L;
}
@Test
public void test() {
Assert.assertEquals(SubSumLimitedDistance.find(arr, K, L), maxSum);
Assert.assertEquals(SubSumLimitedDistance.find(arr2, K2, L2), maxSum2);
}
@Test(timeOut = 6000)
public void test_veryLargeArray() {
run_printDuring(arrMax, KMax, LMax);
}
@Test(timeOut = 60000) // takes seconds,
public void test_veryLargeArrayL() {
run_printDuring(arrMax, KMax, LMaxLargest);
}
@Test(timeOut = 60000) // takes seconds,
public void test_veryLargeArrayK() {
run_printDuring(arrMax, KMaxLargest, LMax);
}
// run find once and print the elapsed time,
private void run_printDuring(int[] arr, int K, int L) {
long startTime = System.currentTimeMillis();
long sum = SubSumLimitedDistance.find(arr, K, L);
long during = System.currentTimeMillis() - startTime; // elapsed time in milliseconds,
System.out.printf("arr length = %5d, K = %3d, L = %4d, max sum = %15d, running time = %.3f seconds\n", arr.length, K, L, sum, during / 1000.0);
}
@Test
public void test_corner_notEnoughEle() {
Assert.assertEquals(SubSumLimitedDistance.find(new int[]{1}, 2, 3), SubSumLimitedDistance.NOT_ENOUGH_ELE); // not enough elements,
Assert.assertEquals(SubSumLimitedDistance.find(new int[]{0}, 1, 3), SubSumLimitedDistance.NOT_ENOUGH_ELE); // not enough elements,
}
@Test
public void test_corner_ZeroL() {
Assert.assertEquals(SubSumLimitedDistance.find(new int[]{1, 2, 3}, 2, 0), 0); // L = 0,
Assert.assertEquals(SubSumLimitedDistance.find(new int[]{0}, 1, 0), 0); // L = 0,
}
@Test(expectedExceptions = IllegalArgumentException.class)
public void test_invalid_K() {
// SubSumLimitedDistance.find(new int[]{1, 2, 3}, 0, 2); // K = 0,
// SubSumLimitedDistance.find(new int[]{1, 2, 3}, -1, 2); // K = -1,
SubSumLimitedDistance.find(new int[]{1, 2, 3}, SubSumLimitedDistance.MAX_K + 1, 2); // K = SubSumLimitedDistance.MAX_K+1,
}
@Test(expectedExceptions = IllegalArgumentException.class)
public void test_invalid_L() {
// SubSumLimitedDistance.find(new int[]{1, 2, 3}, 2, -1); // L = -1,
SubSumLimitedDistance.find(new int[]{1, 2, 3}, 2, SubSumLimitedDistance.MAX_L + 1); // L = SubSumLimitedDistance.MAX_L+1,
}
@Test(expectedExceptions = IllegalArgumentException.class)
public void test_invalid_tooLong() {
SubSumLimitedDistance.find(new int[SubSumLimitedDistance.MAX_ARR_LEN + 1], 2, 3); // input array too long,
}
}
```
**Output of test case for large input:**
```
arr length = 131072, K = 5, L = 10, max sum = 20779205738, running time = 0.303 seconds
arr length = 131072, K = 64, L = 10, max sum = 21393422854, running time = 1.917 seconds
arr length = 131072, K = 5, L = 256, max sum = 461698553839, running time = 9.474 seconds
```
|
71,737,316
|
So, I'm having the classic trouble installing lxml.
Initially I was just pip installing, but when I tried to free up memory using `Element.clear()` I was getting the following error:
```
Python(58695,0x1001b4580) malloc: *** error for object 0x600000bc3f60: pointer being freed was not allocated
```
I thought this must be because lxml is using the system's libxml2 which is probably out of date.
So I used Homebrew to install libxml2 and libxslt, and I force-linked them both.
I then tried to install using the following command:
```
❯ STATIC_DEPS=true pip install lxml --no-cache-dir
Collecting lxml
Downloading lxml-4.8.0.tar.gz (3.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.2/3.2 MB 5.4 MB/s eta 0:00:00
Preparing metadata (setup.py) ... done
Building wheels for collected packages: lxml
Building wheel for lxml (setup.py) ... done
Created wheel for lxml: filename=lxml-4.8.0-cp310-cp310-macosx_12_0_arm64.whl size=1683935 sha256=47912c1ba66d274c3ad7b2a2db00243f96d334a3fd5e439725f5005a7a72a602
Stored in directory: /private/var/folders/g9/lqph46sj36n9kkvjt1pzdxhm0000gn/T/pip-ephem-wheel-cache-4_v4ov7s/wheels/e4/52/34/64064e2e2f1ce84d212a6dde6676f3227846210a7996fc2530
Successfully built lxml
Installing collected packages: lxml
Successfully installed lxml-4.8.0
```
...but then when I tried to import etree I got this error:
```
Traceback (most recent call last):
File "/Users/human/Code/ia_book_images/viewer/book_image_downloader.py", line 4, in <module>
from lxml import etree as ET
ImportError: dlopen(/Users/human/.virtualenvs/ia_book_images/lib/python3.10/site-packages/lxml/etree.cpython-310-darwin.so, 0x0002): symbol not found in flat namespace '___htmlDefaultSAXHandler'
```
So then I thought I'd make 100% sure that it's using the right version of libxml2 via CFLAGS, and got the following result:
```
❯ CFLAGS="-I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include" STATIC_DEPS=true pip install lxml --no-cache-dir
Collecting lxml
Downloading lxml-4.8.0.tar.gz (3.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.2/3.2 MB 4.4 MB/s eta 0:00:00
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [199 lines of output]
Checking for gcc...
Checking for shared library support...
Building shared library libz.1.2.12.dylib with gcc.
Checking for size_t... Yes.
Checking for off64_t... No.
Checking for fseeko... Yes.
Checking for strerror... Yes.
Checking for unistd.h... Yes.
Checking for stdarg.h... Yes.
Checking whether to use vs[n]printf() or s[n]printf()... using vs[n]printf().
Checking for vsnprintf() in stdio.h... Yes.
Checking for return value of vsnprintf()... Yes.
Checking for attribute(visibility) support... Yes.
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -I. -c -o example.o test/example.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o adler32.o adler32.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o crc32.o crc32.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o deflate.o deflate.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o infback.o infback.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o inffast.o inffast.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o inflate.o inflate.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o inftrees.o inftrees.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o trees.o trees.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o zutil.o zutil.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o compress.o compress.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o uncompr.o uncompr.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o gzclose.o gzclose.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o gzlib.o gzlib.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o gzread.o gzread.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o gzwrite.o gzwrite.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -I. -c -o minigzip.o test/minigzip.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/adler32.o adler32.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/crc32.o crc32.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/deflate.o deflate.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/infback.o infback.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/inflate.o inflate.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/inffast.o inffast.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/inftrees.o inftrees.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/trees.o trees.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/zutil.o zutil.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/gzclose.o gzclose.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/uncompr.o uncompr.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/compress.o compress.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/gzlib.o gzlib.c
libtool -o libz.a adler32.o crc32.o deflate.o infback.o inffast.o inflate.o inftrees.o trees.o zutil.o compress.o uncompr.o gzclose.o gzlib.o gzread.o gzwrite.o
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/gzread.o gzread.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/gzwrite.o gzwrite.c
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -o example example.o -L. libz.a
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -o minigzip minigzip.o -L. libz.a
gcc -dynamiclib -install_name /private/var/folders/g9/lqph46sj36n9kkvjt1pzdxhm0000gn/T/pip-install-kl4hmrrk/lxml_4ecb3c255ad049e39a89a66ee0a50e76/build/tmp/libxml2/lib/libz.1.dylib -compatibility_version 1 -current_version 1.2.12 -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -o libz.1.2.12.dylib adler32.lo crc32.lo deflate.lo infback.lo inffast.lo inflate.lo inftrees.lo trees.lo zutil.lo compress.lo uncompr.lo gzclose.lo gzlib.lo gzread.lo gzwrite.lo -lc -arch x86_64
ld: warning: ignoring file crc32.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64
ld: warning: ignoring file adler32.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64
ld: warning: ignoring file deflate.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64
ld: warning: ignoring file infback.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64
ld: warning: ignoring file inffast.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64
ld: warning: ignoring file inflate.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64
ld: warning: ignoring file inftrees.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64
ld: warning: ignoring file trees.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64
ld: warning: ignoring file compress.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64
ld: warning: ignoring file zutil.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64
ld: warning: ignoring file uncompr.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64
ld: warning: ignoring file gzread.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64
ld: warning: ignoring file gzlib.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64
ld: warning: ignoring file gzclose.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64
ld: warning: ignoring file gzwrite.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64
rm -f libz.dylib libz.1.dylib
ln -s libz.1.2.12.dylib libz.dylib
ln -s libz.1.2.12.dylib libz.1.dylib
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -o examplesh example.o -L. libz.1.2.12.dylib
gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -o minigzipsh minigzip.o -L. libz.1.2.12.dylib
ld: warning: ignoring file libz.1.2.12.dylib, building for macOS-arm64 but attempting to link with file built for macOS-x86_64
ld: warning: ignoring file libz.1.2.12.dylib, building for macOS-arm64 but attempting to link with file built for macOS-x86_64
Undefined symbols for architecture arm64:
"_gzclose", referenced from:
_gz_compress in minigzip.o
_gz_uncompress in minigzip.o
"_gzdopen", referenced from:
_main in minigzip.o
"_gzerror", referenced from:
_gz_compress in minigzip.o
_gz_uncompress in minigzip.o
"_gzopen", referenced from:
_file_compress in minigzip.o
_file_uncompress in minigzip.o
_main in minigzip.o
"_gzread", referenced from:
_gz_uncompress in minigzip.o
"_gzwrite", referenced from:
_gz_compress in minigzip.o
ld: symbol(s) not found for architecture arm64
Undefined symbols for architecture arm64:
"_compress", referenced from:
_test_compress in example.o
(maybe you meant: _test_compress)
"_deflate", referenced from:
_test_deflate in example.o
_test_large_deflate in example.o
_test_flush in example.o
_test_dict_deflate in example.o
(maybe you meant: _test_large_deflate, _test_deflate , _test_dict_deflate )
"_deflateEnd", referenced from:
_test_deflate in example.o
_test_large_deflate in example.o
_test_flush in example.o
_test_dict_deflate in example.o
"_deflateInit_", referenced from:
_test_deflate in example.o
_test_large_deflate in example.o
_test_flush in example.o
_test_dict_deflate in example.o
"_deflateParams", referenced from:
_test_large_deflate in example.o
"_deflateSetDictionary", referenced from:
_test_dict_deflate in example.o
"_gzclose", referenced from:
_test_gzio in example.o
"_gzerror", referenced from:
_test_gzio in example.o
"_gzgetc", referenced from:
_test_gzio in example.o
"_gzgets", referenced from:
_test_gzio in example.o
"_gzopen", referenced from:
_test_gzio in example.o
"_gzprintf", referenced from:
_test_gzio in example.o
"_gzputc", referenced from:
_test_gzio in example.o
"_gzputs", referenced from:
_test_gzio in example.o
"_gzread", referenced from:
_test_gzio in example.o
"_gzseek", referenced from:
_test_gzio in example.o
"_gztell", referenced from:
_test_gzio in example.o
"_gzungetc", referenced from:
_test_gzio in example.o
"_inflate", referenced from:
_test_inflate in example.o
_test_large_inflate in example.o
_test_sync in example.o
_test_dict_inflate in example.o
(maybe you meant: _test_large_inflate, _test_inflate , _test_dict_inflate )
"_inflateEnd", referenced from:
_test_inflate in example.o
_test_large_inflate in example.o
_test_sync in example.o
_test_dict_inflate in example.o
"_inflateInit_", referenced from:
_test_inflate in example.o
_test_large_inflate in example.o
_test_sync in example.o
_test_dict_inflate in example.o
"_inflateSetDictionary", referenced from:
_test_dict_inflate in example.o
"_inflateSync", referenced from:
_test_sync in example.o
"_uncompress", referenced from:
_test_compress in example.o
"_zlibCompileFlags", referenced from:
_main in example.o
"_zlibVersion", referenced from:
_main in example.o
clang: error: linker command failed with exit code 1 (use -v to see invocation)
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [minigzipsh] Error 1
make: *** Waiting for unfinished jobs....
make: *** [examplesh] Error 1
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/private/var/folders/g9/lqph46sj36n9kkvjt1pzdxhm0000gn/T/pip-install-kl4hmrrk/lxml_4ecb3c255ad049e39a89a66ee0a50e76/setup.py", line 270, in <module>
**setup_extra_options()
File "/private/var/folders/g9/lqph46sj36n9kkvjt1pzdxhm0000gn/T/pip-install-kl4hmrrk/lxml_4ecb3c255ad049e39a89a66ee0a50e76/setup.py", line 162, in setup_extra_options
ext_modules = setupinfo.ext_modules(
File "/private/var/folders/g9/lqph46sj36n9kkvjt1pzdxhm0000gn/T/pip-install-kl4hmrrk/lxml_4ecb3c255ad049e39a89a66ee0a50e76/setupinfo.py", line 74, in ext_modules
XML2_CONFIG, XSLT_CONFIG = build_libxml2xslt(
File "/private/var/folders/g9/lqph46sj36n9kkvjt1pzdxhm0000gn/T/pip-install-kl4hmrrk/lxml_4ecb3c255ad049e39a89a66ee0a50e76/buildlibxml.py", line 428, in build_libxml2xslt
cmmi(zlib_configure_cmd, zlib_dir, multicore, **call_setup)
File "/private/var/folders/g9/lqph46sj36n9kkvjt1pzdxhm0000gn/T/pip-install-kl4hmrrk/lxml_4ecb3c255ad049e39a89a66ee0a50e76/buildlibxml.py", line 352, in cmmi
call_subprocess(
File "/private/var/folders/g9/lqph46sj36n9kkvjt1pzdxhm0000gn/T/pip-install-kl4hmrrk/lxml_4ecb3c255ad049e39a89a66ee0a50e76/buildlibxml.py", line 335, in call_subprocess
raise Exception('Command "%s" returned code %s' % (cmd_desc, returncode))
Exception: Command "make -j6" returned code 2
Building lxml version 4.8.0.
Latest version of zlib is 1.2.12
Downloading zlib into libs/zlib-1.2.12.tar.gz from https://zlib.net/zlib-1.2.12.tar.gz
Unpacking zlib-1.2.12.tar.gz into build/tmp
Latest version of libiconv is 1.16
Downloading libiconv into libs/libiconv-1.16.tar.gz from https://ftp.gnu.org/pub/gnu/libiconv/libiconv-1.16.tar.gz
Unpacking libiconv-1.16.tar.gz into build/tmp
Latest version of libxml2 is 2.9.12
Downloading libxml2 into libs/libxml2-2.9.12.tar.gz from http://xmlsoft.org/sources/libxml2-2.9.12.tar.gz
Unpacking libxml2-2.9.12.tar.gz into build/tmp
Latest version of libxslt is 1.1.34
Downloading libxslt into libs/libxslt-1.1.34.tar.gz from http://xmlsoft.org/sources/libxslt-1.1.34.tar.gz
Unpacking libxslt-1.1.34.tar.gz into build/tmp
Starting build in build/tmp/zlib-1.2.12
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
Do I need to do something special to build lxml on an M1 Mac?
|
2022/04/04
|
[
"https://Stackoverflow.com/questions/71737316",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/311220/"
] |
One solution that works is to build it from source:
```
git clone https://github.com/lxml/lxml
cd lxml
git checkout tags/lxml-4.9.1
python3 setup.py bdist_wheel
cd dist/
sudo pip3 install lxml-4.9.1-cp310-cp310-macosx_12_0_arm64.whl
```
Note that the exact wheel filename will vary with your lxml, Python, and macOS versions. For additional resources:
<https://lxml.de/installation.html>
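As a quick sanity check after installing, you can confirm that the module imports cleanly and report the versions it was actually built against (these attributes are part of lxml's public API):
```
from lxml import etree

# Should import without the dlopen error seen above.
print(etree.LXML_VERSION)    # e.g. (4, 9, 1, 0)
print(etree.LIBXML_VERSION)  # the libxml2 version lxml was compiled against
```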
|
It turned out that installing lxml with a simple `pip install` *was* working fine.
The reason for my malloc error was that I was trying to clear the element before its end tag had been seen. It turns out this isn't possible: you need to wait for the end tag even if you already know you aren't interested in the element.
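For illustration, a minimal sketch of the safe pattern (the file name, tag name, and `handle` function are hypothetical): do the work and call `clear()` only on `end` events, once the closing tag has been parsed.
```
from lxml import etree

def handle(elem):
    pass  # hypothetical per-element work

# Clearing on "end" events is safe: the element is complete by then.
for event, elem in etree.iterparse("data.xml", events=("end",), tag="record"):
    handle(elem)
    elem.clear()
    # Optionally also drop references kept alive by earlier siblings:
    while elem.getprevious() is not None:
        del elem.getparent()[0]
```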
|
4,944,331
|
I wonder about the best/easiest way to authenticate the user for the Google Data API in a desktop app.
I read through the [docs](http://code.google.com/intl/de/apis/contacts/docs/3.0/developers_guide_python.html) and it seems that my options are ClientLogin or OAuth.
For ClientLogin, it seems I have to implement the UI for login/password (and related things like saving this somewhere etc.) myself. I really wonder if there is some more support there which may pop up some default login/password screen and uses the OS keychain to store the password, etc. I wonder why such support isn't there? Wouldn't that be the standard procedure? By leaving that implementation to the dev (well, the possibility to leave that impl to the dev is good of course), I would guess that many people have come up with very ugly solutions here (when they just wanted to hack together a small script).
OAuth seems to be the better solution. However, there seems to be some code missing and/or most code I found seems only to be relevant for web applications. Esp., I followed the documentation and got [here](http://code.google.com/intl/de/apis/gdata/docs/auth/oauth.html#OAuthRequestToken). Already in the introduction, it speaks about web application. Then later on, I need to specify a callback URL which does not make sense for a desktop application. Also I wonder what consumer key/secret I should put as that also doesn't really make sense for a desktop app (esp. not for an open-source one). I searched a bit around and it was said [here (on SO)](https://stackoverflow.com/questions/2569968/googles-oauth-for-installed-apps-vs-oauth-for-web-apps/2572795#2572795) that I should use "anonymous"/"anonymous" as the consumer key/secret; but where does it say that in the Google documentation? And how do I get the token after the user has authenticated itself?
Is there some sample code? (Not with a hardcoded username/password but with a reusable full authentication method.)
Thanks,
Albert
---
My code so far:
```
import gdata.gauth
import gdata.contacts.client
CONSUMER_KEY = 'anonymous'
CONSUMER_SECRET = 'anonymous'
SCOPES = [ "https://www.google.com/m8/feeds/" ] # contacts
client = gdata.contacts.client.ContactsClient(source='Test app')
import BaseHTTPServer
import SocketServer
Handler = BaseHTTPServer.BaseHTTPRequestHandler
httpd = BaseHTTPServer.HTTPServer(("", 0), Handler)
_,port = httpd.server_address
oauth_callback_url = 'http://localhost:%d/get_access_token' % port
request_token = client.GetOAuthToken(
    SCOPES, oauth_callback_url, CONSUMER_KEY, consumer_secret=CONSUMER_SECRET)
loginurl = request_token.generate_authorization_url(google_apps_domain=None)
loginurl = str(loginurl)
import webbrowser
webbrowser.open(loginurl)
```
However, this does not work. I get this error:
>
> Sorry, you've reached a login page for a domain that isn't using Google Apps. Please check the web address and try again.
>
>
>
I don't quite understand that. Of course I don't use Google Apps.
---
Ah, that error came from `google_apps_domain=None` in `generate_authorization_url`. Leave that out (i.e. just `loginurl = request_token.generate_authorization_url()`) and it works so far.
My current code:
```
import gdata.gauth
import gdata.contacts.client
CONSUMER_KEY = 'anonymous'
CONSUMER_SECRET = 'anonymous'
SCOPES = [ "https://www.google.com/m8/feeds/" ] # contacts
client = gdata.contacts.client.ContactsClient(source='Test app')
import BaseHTTPServer
import SocketServer
httpd_access_token_callback = None
class Handler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/get_access_token?"):
            global httpd_access_token_callback
            httpd_access_token_callback = self.path
            self.send_response(200)
    def log_message(self, format, *args): pass
httpd = BaseHTTPServer.HTTPServer(("", 0), Handler)
_,port = httpd.server_address
oauth_callback_url = 'http://localhost:%d/get_access_token' % port
request_token = client.GetOAuthToken(
    SCOPES, oauth_callback_url, CONSUMER_KEY, consumer_secret=CONSUMER_SECRET)
loginurl = request_token.generate_authorization_url()
loginurl = str(loginurl)
print "opening oauth login page ..."
import webbrowser; webbrowser.open(loginurl)
print "waiting for redirect callback ..."
while httpd_access_token_callback == None:
    httpd.handle_request()
print "done"
request_token = gdata.gauth.AuthorizeRequestToken(request_token, httpd_access_token_callback)
# Upgrade the token and save in the user's datastore
access_token = client.GetAccessToken(request_token)
client.auth_token = access_token
```
That will open the Google OAuth page with the hint at the bottom:
>
> This website has not registered with Google to establish a secure connection for authorization requests. We recommend that you deny access unless you trust the website.
>
>
>
It still doesn't work, though. When I try to access the contacts (i.e. just a `client.GetContacts()`), I get this error:
```
gdata.client.Unauthorized: Unauthorized - Server responded with: 401, <HTML>
<HEAD>
<TITLE>Token invalid - AuthSub token has wrong scope</TITLE>
</HEAD>
<BODY BGCOLOR="#FFFFFF" TEXT="#000000">
<H1>Token invalid - AuthSub token has wrong scope</H1>
<H2>Error 401</H2>
</BODY>
</HTML>
```
---
Ok, it seems that I really had set the wrong scope. When I use `http` instead of `https` (i.e. `SCOPES = [ "http://www.google.com/m8/feeds/" ]`), it works.
But I really would like to use https. I wonder how I can do that.
---
Also, another problem with this solution:
In the list of Authorized Access to my Google Account, I now have a bunch of such localhost entries:
>
> localhost:58630 — Google Contacts [ Revoke Access ]
>
> localhost:58559 — Google Contacts [ Revoke Access ]
>
> localhost:58815 — Google Contacts [ Revoke Access ]
>
> localhost:59174 — Google Contacts [ Revoke Access ]
>
> localhost:58514 — Google Contacts [ Revoke Access ]
>
> localhost:58533 — Google Contacts [ Revoke Access ]
>
> localhost:58790 — Google Contacts [ Revoke Access ]
>
> localhost:59012 — Google Contacts [ Revoke Access ]
>
> localhost:59191 — Google Contacts [ Revoke Access ]
>
>
>
I wonder how I can prevent it from creating such entries.
When I use `xoauth_displayname`, it displays that name instead but still creates multiple entries (probably because the URL differs each time, due to the port). How can I avoid that?
---
My current code is now on [Github](https://github.com/albertz/google-contacts-sync/blob/master/goauth.py).
---
I also wonder where, how, and for how long I should store the access token and/or the request token, so that the user is not asked again each time the application starts.
|
2011/02/09
|
[
"https://Stackoverflow.com/questions/4944331",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/133374/"
] |
You can use the file streaming ability available on the MediaPlayer class to do the above.
I have not tested the following code, but it should be something along these lines:
```
MediaPlayer mp = new MediaPlayer();
mp.setDataSource("http://somedomain/some.pls");
mp.prepare();
mp.start();
```
But remember to do the following:
1. Surround each of the statements above with `try`/`catch` blocks and print copiously to `LogCat`; it will make debugging much easier.
2. Release the resources once you are done with them.
HTH,
Sriram
|
You can try downloading the .pls file, parsing the URLs that are in there, and passing them to MediaPlayer, as sketched below.
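Sketching that idea (in Python for brevity, to show the parsing logic only; on Android you would do the same in Java): a `.pls` playlist is an INI-style file whose `FileN=` entries carry the stream URLs.
```
import urllib.request

def stream_urls_from_pls(pls_url):
    # Download the playlist and collect the File1=..., File2=... entries.
    text = urllib.request.urlopen(pls_url).read().decode("utf-8", "replace")
    urls = []
    for line in text.splitlines():
        line = line.strip()
        if line.lower().startswith("file") and "=" in line:
            urls.append(line.split("=", 1)[1].strip())
    return urls

# Each returned URL can then be handed to MediaPlayer.setDataSource().
```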
|
4,944,331
|
I wonder about the best/easiest way to authenticate the user for the Google Data API in a desktop app.
I read through the [docs](http://code.google.com/intl/de/apis/contacts/docs/3.0/developers_guide_python.html) and it seems that my options are ClientLogin or OAuth.
For ClientLogin, it seems I have to implement the UI for login/password (and related things like saving this somewhere etc.) myself. I really wonder if there is some more support there which may pop up some default login/password screen and uses the OS keychain to store the password, etc. I wonder why such support isn't there? Wouldn't that be the standard procedure? By leaving that implementation to the dev (well, the possibility to leave that impl to the dev is good of course), I would guess that many people have come up with very ugly solutions here (when they just wanted to hack together a small script).
OAuth seems to be the better solution. However, there seems to be some code missing and/or most code I found seems only to be relevant for web applications. Esp., I followed the documentation and got [here](http://code.google.com/intl/de/apis/gdata/docs/auth/oauth.html#OAuthRequestToken). Already in the introduction, it speaks about web application. Then later on, I need to specify a callback URL which does not make sense for a desktop application. Also I wonder what consumer key/secret I should put as that also doesn't really make sense for a desktop app (esp. not for an open-source one). I searched a bit around and it was said [here (on SO)](https://stackoverflow.com/questions/2569968/googles-oauth-for-installed-apps-vs-oauth-for-web-apps/2572795#2572795) that I should use "anonymous"/"anonymous" as the consumer key/secret; but where does it say that in the Google documentation? And how do I get the token after the user has authenticated itself?
Is there some sample code? (Not with a hardcoded username/password but with a reusable full authentication method.)
Thanks,
Albert
---
My code so far:
```
import gdata.gauth
import gdata.contacts.client
CONSUMER_KEY = 'anonymous'
CONSUMER_SECRET = 'anonymous'
SCOPES = [ "https://www.google.com/m8/feeds/" ] # contacts
client = gdata.contacts.client.ContactsClient(source='Test app')
import BaseHTTPServer
import SocketServer
Handler = BaseHTTPServer.BaseHTTPRequestHandler
httpd = BaseHTTPServer.HTTPServer(("", 0), Handler)
_,port = httpd.server_address
oauth_callback_url = 'http://localhost:%d/get_access_token' % port
request_token = client.GetOAuthToken(
    SCOPES, oauth_callback_url, CONSUMER_KEY, consumer_secret=CONSUMER_SECRET)
loginurl = request_token.generate_authorization_url(google_apps_domain=None)
loginurl = str(loginurl)
import webbrowser
webbrowser.open(loginurl)
```
However, this does not work. I get this error:
>
> Sorry, you've reached a login page for a domain that isn't using Google Apps. Please check the web address and try again.
>
>
>
I don't quite understand that. Of course I don't use Google Apps.
---
Ah, that error came from `google_apps_domain=None` in `generate_authorization_url`. Leave that out (i.e. just `loginurl = request_token.generate_authorization_url()`) and it works so far.
My current code:
```
import gdata.gauth
import gdata.contacts.client
CONSUMER_KEY = 'anonymous'
CONSUMER_SECRET = 'anonymous'
SCOPES = [ "https://www.google.com/m8/feeds/" ] # contacts
client = gdata.contacts.client.ContactsClient(source='Test app')
import BaseHTTPServer
import SocketServer
httpd_access_token_callback = None
class Handler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/get_access_token?"):
            global httpd_access_token_callback
            httpd_access_token_callback = self.path
            self.send_response(200)
    def log_message(self, format, *args): pass
httpd = BaseHTTPServer.HTTPServer(("", 0), Handler)
_,port = httpd.server_address
oauth_callback_url = 'http://localhost:%d/get_access_token' % port
request_token = client.GetOAuthToken(
    SCOPES, oauth_callback_url, CONSUMER_KEY, consumer_secret=CONSUMER_SECRET)
loginurl = request_token.generate_authorization_url()
loginurl = str(loginurl)
print "opening oauth login page ..."
import webbrowser; webbrowser.open(loginurl)
print "waiting for redirect callback ..."
while httpd_access_token_callback == None:
    httpd.handle_request()
print "done"
request_token = gdata.gauth.AuthorizeRequestToken(request_token, httpd_access_token_callback)
# Upgrade the token and save in the user's datastore
access_token = client.GetAccessToken(request_token)
client.auth_token = access_token
```
That will open the Google OAuth page with the hint at the bottom:
>
> This website has not registered with Google to establish a secure connection for authorization requests. We recommend that you deny access unless you trust the website.
>
>
>
It still doesn't work, though. When I try to access the contacts (i.e. just a `client.GetContacts()`), I get this error:
```
gdata.client.Unauthorized: Unauthorized - Server responded with: 401, <HTML>
<HEAD>
<TITLE>Token invalid - AuthSub token has wrong scope</TITLE>
</HEAD>
<BODY BGCOLOR="#FFFFFF" TEXT="#000000">
<H1>Token invalid - AuthSub token has wrong scope</H1>
<H2>Error 401</H2>
</BODY>
</HTML>
```
---
Ok, it seems that I really had set the wrong scope. When I use `http` instead of `https` (i.e. `SCOPES = [ "http://www.google.com/m8/feeds/" ]`), it works.
But I really would like to use https. I wonder how I can do that.
---
Also, another problem with this solution:
In the list of Authorized Access to my Google Account, I now have a bunch of such localhost entries:
>
> localhost:58630 — Google Contacts [ Revoke Access ]
>
> localhost:58559 — Google Contacts [ Revoke Access ]
>
> localhost:58815 — Google Contacts [ Revoke Access ]
>
> localhost:59174 — Google Contacts [ Revoke Access ]
>
> localhost:58514 — Google Contacts [ Revoke Access ]
>
> localhost:58533 — Google Contacts [ Revoke Access ]
>
> localhost:58790 — Google Contacts [ Revoke Access ]
>
> localhost:59012 — Google Contacts [ Revoke Access ]
>
> localhost:59191 — Google Contacts [ Revoke Access ]
>
>
>
I wonder how I can prevent it from creating such entries.
When I use `xoauth_displayname`, it displays that name instead but still creates multiple entries (probably because the URL differs each time, due to the port). How can I avoid that?
---
My current code is now on [Github](https://github.com/albertz/google-contacts-sync/blob/master/goauth.py).
---
I also wonder where, how, and for how long I should store the access token and/or the request token, so that the user is not asked again each time the application starts.
|
2011/02/09
|
[
"https://Stackoverflow.com/questions/4944331",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/133374/"
] |
Android's MediaPlayer API supports streaming a .pls file, but the API is not the best option and the documentation is poor. The life-cycle diagram in the official documentation gives valuable information but may be confusing at first glance.
A sample code snippet:
```
// Example URL of a .pls file: http://50.xx.xxx.xx:xx40/
MediaPlayer mp = MediaPlayer.create(getApplicationContext(), Uri.parse(url));
mp.start();
mp.pause();
mp.release(); // or mp.reset() as applicable
```
http://developer.android.com/reference/android/media/MediaPlayer.html#create(android.content.Context, android.net.Uri)
There are listeners/callbacks available with the MediaPlayer API, but they are genuinely problematic for an app developer working on audio streaming.
The static factory method used in the code snippet is the suggested (better) approach to create the media object from a URI (such as an HTTP URL with host and port).
However, the callback functions can't be used by the developer in this case, because `prepare()` is called by the factory method itself.
The created object can be played/stopped in an async thread (the AsyncTask API).
The official Android documentation does not provide any getter to query the status of the MediaPlayer.
I welcome Android app developers to discuss this,
and I wish the framework developers provided more accurate documentation: how much the MediaPlayer API supports for audio streaming with URLs, and what Google Inc. and the Android framework team's vision for it is.
If someone needs help from me or wants to share their experience, let's talk about the MediaPlayer API for audio streaming at <http://stackoverflow.com> or the Android developers blog (android-developers.blogspot.com).
Regards
Sree Ramakrishna
Program Manager & Android developer @ New Mek Solutions, Hyderabad.
|
You can try downloading the .pls file, parsing the URLs that are in there, and passing them to MediaPlayer.
|
4,944,331
|
I wonder about the best/easiest way to authenticate the user for the Google Data API in a desktop app.
I read through the [docs](http://code.google.com/intl/de/apis/contacts/docs/3.0/developers_guide_python.html) and it seems that my options are ClientLogin or OAuth.
For ClientLogin, it seems I have to implement the UI for login/password (and related things like saving this somewhere etc.) myself. I really wonder if there is some more support there which may pop up some default login/password screen and uses the OS keychain to store the password, etc. I wonder why such support isn't there? Wouldn't that be the standard procedure? By leaving that implementation to the dev (well, the possibility to leave that impl to the dev is good of course), I would guess that many people have come up with very ugly solutions here (when they just wanted to hack together a small script).
OAuth seems to be the better solution. However, there seems to be some code missing and/or most code I found seems only to be relevant for web applications. Esp., I followed the documentation and got [here](http://code.google.com/intl/de/apis/gdata/docs/auth/oauth.html#OAuthRequestToken). Already in the introduction, it speaks about web application. Then later on, I need to specify a callback URL which does not make sense for a desktop application. Also I wonder what consumer key/secret I should put as that also doesn't really make sense for a desktop app (esp. not for an open-source one). I searched a bit around and it was said [here (on SO)](https://stackoverflow.com/questions/2569968/googles-oauth-for-installed-apps-vs-oauth-for-web-apps/2572795#2572795) that I should use "anonymous"/"anonymous" as the consumer key/secret; but where does it say that in the Google documentation? And how do I get the token after the user has authenticated itself?
Is there some sample code? (Not with a hardcoded username/password but with a reusable full authentication method.)
Thanks,
Albert
---
My code so far:
```
import gdata.gauth
import gdata.contacts.client
CONSUMER_KEY = 'anonymous'
CONSUMER_SECRET = 'anonymous'
SCOPES = [ "https://www.google.com/m8/feeds/" ] # contacts
client = gdata.contacts.client.ContactsClient(source='Test app')
import BaseHTTPServer
import SocketServer
Handler = BaseHTTPServer.BaseHTTPRequestHandler
httpd = BaseHTTPServer.HTTPServer(("", 0), Handler)
_,port = httpd.server_address
oauth_callback_url = 'http://localhost:%d/get_access_token' % port
request_token = client.GetOAuthToken(
    SCOPES, oauth_callback_url, CONSUMER_KEY, consumer_secret=CONSUMER_SECRET)
loginurl = request_token.generate_authorization_url(google_apps_domain=None)
loginurl = str(loginurl)
import webbrowser
webbrowser.open(loginurl)
```
However, this does not work. I get this error:
>
> Sorry, you've reached a login page for a domain that isn't using Google Apps. Please check the web address and try again.
>
>
>
I don't quite understand that. Of course I don't use Google Apps.
---
Ah, that error came from `google_apps_domain=None` in `generate_authorization_url`. Leave that out (i.e. just `loginurl = request_token.generate_authorization_url()`) and it works so far.
My current code:
```
import gdata.gauth
import gdata.contacts.client
CONSUMER_KEY = 'anonymous'
CONSUMER_SECRET = 'anonymous'
SCOPES = [ "https://www.google.com/m8/feeds/" ] # contacts
client = gdata.contacts.client.ContactsClient(source='Test app')
import BaseHTTPServer
import SocketServer
httpd_access_token_callback = None
class Handler(BaseHTTPServer.BaseHTTPRequestHandler):
def do_GET(self):
if self.path.startswith("/get_access_token?"):
global httpd_access_token_callback
httpd_access_token_callback = self.path
self.send_response(200)
def log_message(self, format, *args): pass
httpd = BaseHTTPServer.HTTPServer(("", 0), Handler)
_,port = httpd.server_address
oauth_callback_url = 'http://localhost:%d/get_access_token' % port
request_token = client.GetOAuthToken(
SCOPES, oauth_callback_url, CONSUMER_KEY, consumer_secret=CONSUMER_SECRET)
loginurl = request_token.generate_authorization_url()
loginurl = str(loginurl)
print "opening oauth login page ..."
import webbrowser; webbrowser.open(loginurl)
print "waiting for redirect callback ..."
while httpd_access_token_callback == None:
httpd.handle_request()
print "done"
request_token = gdata.gauth.AuthorizeRequestToken(request_token, httpd_access_token_callback)
# Upgrade the token and save in the user's datastore
access_token = client.GetAccessToken(request_token)
client.auth_token = access_token
```
That will open the Google OAuth page with the hint at the bottom:
>
> This website has not registered with Google to establish a secure connection for authorization requests. We recommend that you deny access unless you trust the website.
>
>
>
It still doesn't work, though. When I try to access the contacts (i.e. just a `client.GetContacts()`), I get this error:
```
gdata.client.Unauthorized: Unauthorized - Server responded with: 401, <HTML>
<HEAD>
<TITLE>Token invalid - AuthSub token has wrong scope</TITLE>
</HEAD>
<BODY BGCOLOR="#FFFFFF" TEXT="#000000">
<H1>Token invalid - AuthSub token has wrong scope</H1>
<H2>Error 401</H2>
</BODY>
</HTML>
```
---
Ok, it seems that I really had set the wrong scope. When I use `http` instead of `https` (i.e. `SCOPES = [ "http://www.google.com/m8/feeds/" ]`), it works.
But I really would like to use https. I wonder how I can do that.
---
Also, another problem with this solution:
In the list of Authorized Access to my Google Account, I now have a bunch of such localhost entries:
>
> localhost:58630 — Google Contacts [ Revoke Access ]
>
> localhost:58559 — Google Contacts [ Revoke Access ]
>
> localhost:58815 — Google Contacts [ Revoke Access ]
>
> localhost:59174 — Google Contacts [ Revoke Access ]
>
> localhost:58514 — Google Contacts [ Revoke Access ]
>
> localhost:58533 — Google Contacts [ Revoke Access ]
>
> localhost:58790 — Google Contacts [ Revoke Access ]
>
> localhost:59012 — Google Contacts [ Revoke Access ]
>
> localhost:59191 — Google Contacts [ Revoke Access ]
>
>
>
I wonder how I can avoid that it will make such entries.
When I use `xoauth_displayname`, it displays that name instead but still makes multiple entries (probably because the URL is still mostly different (because of the port) each time). How can I avoid that?
---
My current code is now on [Github](https://github.com/albertz/google-contacts-sync/blob/master/goauth.py).
---
I also wonder, where, how and for how long I should store the access token and/or the request token so that the user is not asked always again and again each time when the user starts the application.
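One sketch for the persistence question (assuming the `token_to_blob`/`token_from_blob` helpers in `gdata.gauth`, which exist in recent gdata-python-client releases; the file path is hypothetical) would be to serialize the access token to a file with restricted permissions and reload it on the next start:
```
import os
import gdata.gauth

TOKEN_FILE = os.path.expanduser('~/.myapp_gdata_token')  # hypothetical path

def save_token(access_token):
    # token_to_blob serializes the token object to a plain string.
    with open(TOKEN_FILE, 'w') as f:
        f.write(gdata.gauth.token_to_blob(access_token))
    os.chmod(TOKEN_FILE, 0o600)  # the blob is a credential; restrict access

def load_token():
    # Returns None when no token is stored yet, i.e. run the OAuth dance.
    if not os.path.exists(TOKEN_FILE):
        return None
    with open(TOKEN_FILE) as f:
        return gdata.gauth.token_from_blob(f.read())
```
As far as I know, OAuth 1.0 access tokens stay valid until the user revokes them, so the access token can be stored indefinitely; the request token is only needed for the duration of the dance and need not be persisted.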
|
2011/02/09
|
[
"https://Stackoverflow.com/questions/4944331",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/133374/"
] |
You can use the file streaming ability available on the MediaPlayer class to do the above.
I have not tested the following code, but it should be something on these lines:
```
MediaPlayer mp = new MediaPlayer();
mp.setDataSource("http://somedomain/some.pls");
mp.prepare();
mp.start();
```
But remember to do the following:
1. Surround each of the statements above with `try` and `catch` statements and print copiously to `LogCat`. It will help debug easily.
2. Release the resources once you are done with it.
HTH,
Sriram
|
Man, try these two things:
**1: the code**
```
mediaPlayer = MediaPlayer.create(this, Uri.parse("http://vprbbc.streamguys.net:80/vprbbc24.mp3"));
mediaPlayer.start();
```
**2: the .pls file**
This URL is from the BBC, just as an example. It was a .pls file that I downloaded on Linux with
```
wget http://foo.bar/file.pls
```
and then opened with vim (use your favorite editor ;) and saw the real URLs inside the file. Unfortunately not all .pls files are plain text like that.
have fun!
|
4,944,331
|
I wonder about the best/easiest way to authenticate the user for the Google Data API in a desktop app.
I read through the [docs](http://code.google.com/intl/de/apis/contacts/docs/3.0/developers_guide_python.html) and it seems that my options are ClientLogin or OAuth.
For ClientLogin, it seems I have to implement the UI for login/password (and related things like saving this somewhere etc.) myself. I really wonder if there is some more support there which may pop up some default login/password screen and uses the OS keychain to store the password, etc. I wonder why such support isn't there? Wouldn't that be the standard procedure? By leaving that implementation to the dev (well, the possibility to leave that impl to the dev is good of course), I would guess that many people have come up with very ugly solutions here (when they just wanted to hack together a small script).
OAuth seems to be the better solution. However, there seems to be some code missing and/or most code I found seems only to be relevant for web applications. Esp., I followed the documentation and got [here](http://code.google.com/intl/de/apis/gdata/docs/auth/oauth.html#OAuthRequestToken). Already in the introduction, it speaks about web application. Then later on, I need to specify a callback URL which does not make sense for a desktop application. Also I wonder what consumer key/secret I should put as that also doesn't really make sense for a desktop app (esp. not for an open-source one). I searched a bit around and it was said [here (on SO)](https://stackoverflow.com/questions/2569968/googles-oauth-for-installed-apps-vs-oauth-for-web-apps/2572795#2572795) that I should use "anonymous"/"anonymous" as the consumer key/secret; but where does it say that in the Google documentation? And how do I get the token after the user has authenticated itself?
Is there some sample code? (Not with a hardcoded username/password but with a reusable full authentication method.)
Thanks,
Albert
---
My code so far:
```
import gdata.gauth
import gdata.contacts.client
CONSUMER_KEY = 'anonymous'
CONSUMER_SECRET = 'anonymous'
SCOPES = [ "https://www.google.com/m8/feeds/" ] # contacts
client = gdata.contacts.client.ContactsClient(source='Test app')
import BaseHTTPServer
import SocketServer
Handler = BaseHTTPServer.BaseHTTPRequestHandler
httpd = BaseHTTPServer.HTTPServer(("", 0), Handler)
_,port = httpd.server_address
oauth_callback_url = 'http://localhost:%d/get_access_token' % port
request_token = client.GetOAuthToken(
SCOPES, oauth_callback_url, CONSUMER_KEY, consumer_secret=CONSUMER_SECRET)
loginurl = request_token.generate_authorization_url(google_apps_domain=None)
loginurl = str(loginurl)
import webbrowser
webbrowser.open(loginurl)
```
However, this does not work. I get this error:
>
> Sorry, you've reached a login page for a domain that isn't using Google Apps. Please check the web address and try again.
>
>
>
I don't quite understand that. Of course I don't use Google Apps.
---
Ah, that error came from `google_apps_domain=None` in `generate_authorization_url`. Leave that out (i.e. just `loginurl = request_token.generate_authorization_url()`) and it works so far.
My current code:
```
import gdata.gauth
import gdata.contacts.client
CONSUMER_KEY = 'anonymous'
CONSUMER_SECRET = 'anonymous'
SCOPES = [ "https://www.google.com/m8/feeds/" ] # contacts
client = gdata.contacts.client.ContactsClient(source='Test app')
import BaseHTTPServer
import SocketServer
httpd_access_token_callback = None
class Handler(BaseHTTPServer.BaseHTTPRequestHandler):
def do_GET(self):
if self.path.startswith("/get_access_token?"):
global httpd_access_token_callback
httpd_access_token_callback = self.path
self.send_response(200)
def log_message(self, format, *args): pass
httpd = BaseHTTPServer.HTTPServer(("", 0), Handler)
_,port = httpd.server_address
oauth_callback_url = 'http://localhost:%d/get_access_token' % port
request_token = client.GetOAuthToken(
SCOPES, oauth_callback_url, CONSUMER_KEY, consumer_secret=CONSUMER_SECRET)
loginurl = request_token.generate_authorization_url()
loginurl = str(loginurl)
print "opening oauth login page ..."
import webbrowser; webbrowser.open(loginurl)
print "waiting for redirect callback ..."
while httpd_access_token_callback == None:
httpd.handle_request()
print "done"
request_token = gdata.gauth.AuthorizeRequestToken(request_token, httpd_access_token_callback)
# Upgrade the token and save in the user's datastore
access_token = client.GetAccessToken(request_token)
client.auth_token = access_token
```
That will open the Google OAuth page with the hint at the bottom:
>
> This website has not registered with Google to establish a secure connection for authorization requests. We recommend that you deny access unless you trust the website.
>
>
>
It still doesn't work, though. When I try to access the contacts (i.e. just a `client.GetContacts()`), I get this error:
```
gdata.client.Unauthorized: Unauthorized - Server responded with: 401, <HTML>
<HEAD>
<TITLE>Token invalid - AuthSub token has wrong scope</TITLE>
</HEAD>
<BODY BGCOLOR="#FFFFFF" TEXT="#000000">
<H1>Token invalid - AuthSub token has wrong scope</H1>
<H2>Error 401</H2>
</BODY>
</HTML>
```
---
Ok, it seems that I really had set the wrong scope. When I use `http` instead of `https` (i.e. `SCOPES = [ "http://www.google.com/m8/feeds/" ]`), it works.
But I really would like to use https. I wonder how I can do that.
---
Also, another problem with this solution:
In the list of Authorized Access to my Google Account, I now have a bunch of such localhost entries:
>
> localhost:58630 — Google Contacts [ Revoke Access ]
>
> localhost:58559 — Google Contacts [ Revoke Access ]
>
> localhost:58815 — Google Contacts [ Revoke Access ]
>
> localhost:59174 — Google Contacts [ Revoke Access ]
>
> localhost:58514 — Google Contacts [ Revoke Access ]
>
> localhost:58533 — Google Contacts [ Revoke Access ]
>
> localhost:58790 — Google Contacts [ Revoke Access ]
>
> localhost:59012 — Google Contacts [ Revoke Access ]
>
> localhost:59191 — Google Contacts [ Revoke Access ]
>
>
>
I wonder how I can avoid that it will make such entries.
When I use `xoauth_displayname`, it displays that name instead but still makes multiple entries (probably because the URL is still mostly different (because of the port) each time). How can I avoid that?
---
My current code is now on [Github](https://github.com/albertz/google-contacts-sync/blob/master/goauth.py).
---
I also wonder, where, how and for how long I should store the access token and/or the request token so that the user is not asked always again and again each time when the user starts the application.
|
2011/02/09
|
[
"https://Stackoverflow.com/questions/4944331",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/133374/"
] |
The MediaPlayer API from Android supports streaming a .pls file, but the API is not the best option and the documentation is lacking. The life cycle diagram given in the official documentation provides valuable information but may be confusing at first glance.
A sample code snippet:
```
MediaPlayer mp;
// Example url of a .pls file: http://50.xx.xxx.xx:xx40/
mp = MediaPlayer.create(getApplicationContext(), Uri.parse(url));
mp.start();
mp.pause();
mp.release(); // or mp.reset() as applicable
```
<http://developer.android.com/reference/android/media/MediaPlayer.html#create(android.content.Context,%20android.net.Uri)>
There are listeners/callbacks available with the MediaPlayer API, but they are really problematic for an app developer working on audio streaming.
The static constructor used in the code snippet is the suggested/better approach to create the media object with a URI (like an http url with host and port).
But the callback functions can't be used by the developer, because prepare is called by the constructor itself.
The created object can be played/stopped in an async thread (AsyncTask API).
The official Android documentation does not give any 'get method' to query the status of the MediaPlayer.
I welcome Android app developers to talk about this;
and I wish framework developers would provide more accurate documentation - how much it supports, and what the importance & vision of Google Inc and the Android framework development team is for the MediaPlayer API for audio streaming with urls.
If someone needs help from me or wants to share their experience, let's talk about the MediaPlayer API for audio streaming @ <http://stackoverflow.com> or the android developers blog (android-developers.blogspot.com).
Regards
Sree Ramakrishna
Program Manager & Android developer @ New Mek Solutions,Hyderabad.
|
Man, try these two things:
**1: the code**
```
mediaPlayer = MediaPlayer.create(this, Uri.parse("http://vprbbc.streamguys.net:80/vprbbc24.mp3"));
mediaPlayer.start();
```
**2: the .pls file**
This URL is from the BBC, just as an example. It was a .pls file that I downloaded on Linux with
```
wget http://foo.bar/file.pls
```
and then opened with vim (use your favorite editor ;) and saw the real URLs inside the file. Unfortunately not all .pls files are plain text like that.
have fun!
|
48,013,074
|
So I am trying to use cffi to access a c library quickly in pypy.
I am using my Mac Pro with Command Line Tools 9.1.
Specifically, I am comparing a pure Python priority queue with heapq, cffi, and ctypes implementations for a project.
I have got code from roman10's website for a C implementation of a priority queue.
I am following the docs for cffi but I get an error when trying to call a function that is inside of the library.
I have 5 files: the .c/.h `cheap.c` `cheap.h` for the priority queue, the interface python file with cffi `heap_inter.py`, a wrapper for the program to access the queue `heap.py`, and a test script `test_heap.py`
This is the error (I tried with `export CC=gcc` and `export CC=clang`, and also my mpich):
`pypy test_heap.py`
AttributeError: cffi library '_heap_i' has no function, constant, or global variable named 'initQueue'
However, I do have a function called initQueue in the C library being compiled.
When I call pypy on heap_inter.py, the commands executed are:
```
building '_heap_i' extension
clang -DNDEBUG -O2 -fPIC -I/opt/local/include -I/opt/local/lib/pypy/include -c _heap_i.c -o ./_heap_i.o
cc -pthread -shared -undefined dynamic_lookup ./_heap_i.o -L/opt/local/lib -o ./_heap_i.pypy-41.so
clang: warning: argument unused during compilation: '-pthread' [-Wunused-command-line-argument]
```
here are my sources,
```
cheap.h
#include <stdio.h>
#include <stdlib.h>
/* priority Queue implimentation via roman10.net */
struct heapData {
//everything from event
//tx string
char tx[64];
//txID int
int txID;
//rx string
char rx[64];
//rxID int
int rxID;
//name string
char name[64];
//data Object
//time float
float time;
};
struct heapNode {
int value;
struct heapData data; //dummy
};
struct PQ {
struct heapNode* heap;
int size;
};
void insert(struct heapNode aNode, struct heapNode* heap, int size);
void shiftdown(struct heapNode* heap, int size, int idx);
struct heapNode removeMin(struct heapNode* heap, int size);
void enqueue(struct heapNode node, struct PQ *q);
struct heapNode dequeue(struct PQ *q);
struct heapNode peak(struct PQ *q);
void initQueue(struct PQ *q, int n);
int nn = 1000000;
struct PQ q;
int main(int argc, char **argv);
```
then `heap.c`
```
void insert(struct heapNode aNode, struct heapNode* heap, int size) {
int idx;
struct heapNode tmp;
idx = size + 1;
heap[idx] = aNode;
while (heap[idx].value < heap[idx/2].value && idx > 1) {
tmp = heap[idx];
heap[idx] = heap[idx/2];
heap[idx/2] = tmp;
idx /= 2;
}
}
void shiftdown(struct heapNode* heap, int size, int idx) {
int cidx; //index for child
struct heapNode tmp;
for (;;) {
cidx = idx*2;
if (cidx > size) {
break; //it has no child
}
if (cidx < size) {
if (heap[cidx].value > heap[cidx+1].value) {
++cidx;
}
}
//swap if necessary
if (heap[cidx].value < heap[idx].value) {
tmp = heap[cidx];
heap[cidx] = heap[idx];
heap[idx] = tmp;
idx = cidx;
} else {
break;
}
}
}
struct heapNode removeMin(struct heapNode* heap, int size) {
int cidx;
struct heapNode rv = heap[1];
    //printf("%d:%d:%d\n", size, heap[1].value, heap[size].value);
heap[1] = heap[size];
--size;
shiftdown(heap, size, 1);
return rv;
}
void enqueue(struct heapNode node, struct PQ *q) {
insert(node, q->heap, q->size);
++q->size;
}
struct heapNode dequeue(struct PQ *q) {
struct heapNode rv = removeMin(q->heap, q->size);
--q->size;
return rv;
}
struct heapNode peak(struct PQ *q) {
return q->heap[1];
}
void initQueue(struct PQ *q, int n) {
q->size = 0;
q->heap = (struct heapNode*)malloc(sizeof(struct heapNode)*(n+1));
}
int nn = 1000000;
struct PQ q;
int main(int argc, char **argv) {
int n;
int i;
struct PQ q;
struct heapNode hn;
n = atoi(argv[1]);
initQueue(&q, n);
srand(time(NULL));
for (i = 0; i < n; ++i) {
hn.value = rand()%10000;
printf("enqueue node with value: %dn", hn.value);
enqueue(hn, &q);
}
printf("ndequeue all values:n");
for (i = 0; i < n; ++i) {
hn = dequeue(&q);
printf("dequeued node with value: %d, queue size after removal: %dn", hn.value, q.size);
}
}
```
`heap_inter.py`:
```
from cffi import FFI
ffibuilder = FFI()
ffibuilder.set_source("_heap_i",
r"""//passed to C compiler
#include <stdio.h>
#include <stdlib.h>
#include "cheap.h"
""",
libraries=[])
ffibuilder.cdef("""
struct heapData {
char tx[64];
int txID;
char rx[64];
int rxID;
char name[64];
float time;
} ;
struct heapNode {
int value;
struct heapData data; //dummy
} ;
struct PQ {
struct heapNode* heap;
int size;
} ;
""")
#
# Rank (Simian Engine) has a Bin heap
if __name__ == "__main__":
ffibuilder.compile(verbose=True)
```
then `heap.py`
```
from _heap_i import ffi, lib
infTime = 0
# /* create heap */
def init():
#global infTime
#infTime = int(engine.infTime) + 1
cheap = ffi.new("struct PQ *")
lib.initQueue(cheap,nn)
return cheap
def push(arr, element):
hn = ffi.new("struct heapNode")
value = element["time"]
hn.value = value
rx = ffi.new("char[]", element["rx"])
hn.data.rx = rx
tx = ffi.new("char[]", element["tx"])
hn.data.tx = tx
txID = ffi.new("int", element["txID"])
hn.data.txID = txID
rxID = ffi.new("int", element["rxID"])
hn.data.rxID = rxID
name = ffi.new("int", element["name"])
hn.data.name = name
hn.data.time = value
result = lib.enqueue(hn, arr)
def pop(arr):
hn = lib.dequeue(arr)
element = {"time": hn.value,
"rx" : hn.data.rx,
"tx" : hn.data.tx,
"rxID" : hn.data.rxID,
"txID" : hn.data.txID,
"name" : hn.data.name,
"data" : hn.data.data,
}
return element
def annihilate(arr, event):
pass
def peak(arr):
hn = lib.peak(arr)
element = {"time": hn.value,
"rx" : hn.data.rx,
"tx" : hn.data.tx,
"rxID" : hn.data.rxID,
"txID" : hn.data.txID,
"name" : hn.data.name,
"data" : hn.data.data,
}
return element
def isEvent(arr):
if arr.size:
return 1
else:
return None
def size(arr):
return arr.size
```
and finally `test_heap.py`
```
import heap
pq = heap.init()
for x in range(1000) :
heap.push({"time" : x,
"rx" : "a",
"tx" : "b",
"txID" : 1,
"rxID" : 1,
"name" : "bob",
"data": "none",
},pq)
for x in range(100) :
heap.peak(pq)
for x in range(1000):
y = heap.dequeue(pq)
print y
```
Thanks anyone for looking and let me know if you have experienced this before.
|
2017/12/28
|
[
"https://Stackoverflow.com/questions/48013074",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5062570/"
] |
You must tell ffi.set_source() about `heap.c`. The easiest way is
```
with open('heap.c', 'r') as fid:
    ffi.set_source("_heap_i", fid.read())
```
(note that set_source() takes the extension module name as its first argument), like in [this example](http://cffi.readthedocs.io/en/latest/overview.html#purely-for-performance-api-level-out-of-line). The way it works is:
* declarations of functions, structs, enums, typedefs that will be accessible from python appear in your ffi.cdef(...)
* the source code that implements those functions is given as a string to ffi.set_source(...) (you can also use various `distutils` keywords there)
* ffi.compile() builds a shared object that exports the things defined in ffi.cdef() and arranges it so those things will be exposed to python
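Putting those three bullets together for this project, a minimal build-script sketch (declarations copied from `cheap.h`; you may need to prepend the missing `#include` lines to `heap.c` and drop its `main()` so the source string compiles standalone):
```
# build_heap.py -- out-of-line API-mode cffi build (a sketch)
from cffi import FFI

ffibuilder = FFI()

# 1. Everything callable from Python must be declared in cdef().
ffibuilder.cdef("""
    struct heapData { char tx[64]; int txID; char rx[64];
                      int rxID; char name[64]; float time; };
    struct heapNode { int value; struct heapData data; };
    struct PQ { struct heapNode* heap; int size; };
    void initQueue(struct PQ *q, int n);
    void enqueue(struct heapNode node, struct PQ *q);
    struct heapNode dequeue(struct PQ *q);
    struct heapNode peak(struct PQ *q);
""")

# 2. The implementation source is handed to the C compiler.
with open('heap.c') as f:
    ffibuilder.set_source("_heap_i", f.read())

# 3. compile() builds _heap_i.*.so exporting the cdef()'d names.
if __name__ == "__main__":
    ffibuilder.compile(verbose=True)
```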
|
Your ffi.cdef() does not have any function declarations, so the resulting `_heap_i` library doesn't export any functions. Try copy-pasting the function declarations from the .h file.
|
71,395,163
|
I have a txt file of hundreds of thousands of words. I need to get into some format (I think dictionary is the right thing?) where I can put into my script something along the lines of;
```
for i in word_list:
word_length = len(i)
print("Length of " + i + word_length, file=open("LengthOutput.txt", "a"))
```
Currently, the txt file of words is separated by each word being on a new line, if that helps. I've tried importing it to my python script with
```
From x import y
```
.... and similar, but it seems like it needs to be in some format to actually get imported? I've been looking around stackoverflow for a while now and nothing seems to really cover this specifically, but apologies if this is super-beginner stuff that I'm just really not understanding.
|
2022/03/08
|
[
"https://Stackoverflow.com/questions/71395163",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15604077/"
] |
A list would be the correct way to store the words. A dictionary requires key-value pairs, and you don't need that in this case.
```
with open('filename.txt', 'r') as file:
x = [word.strip('\n') for word in file.readlines()]
```
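The loop from the question would then look like this (a sketch; file names follow the snippets above, and the output file is opened once rather than once per word):
```
with open('filename.txt', 'r') as file:
    word_list = [word.strip('\n') for word in file.readlines()]

with open('LengthOutput.txt', 'a') as out:
    for word in word_list:
        # str() is needed because len() returns an int.
        print("Length of " + word + ": " + str(len(word)), file=out)
```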
|
What you are trying to do is to read a file. An `import` statement is used when you want to, loosely speaking, use python code from another file.
The [docs](https://docs.python.org/3/tutorial/inputoutput.html#reading-and-writing-files) have a good introduction on reading and writing files -
To read a file, you first open the file, load the contents to memory and finally close the file.
```py
f = open('my_wordfile.txt', 'r')
for line in f:
print(len(line))
f.close()
```
A better way is to use the `with` statement and you can find more about that in the docs as well.
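For completeness, the same loop with a `with` block (a sketch; the filename is the placeholder used above):
```py
with open('my_wordfile.txt', 'r') as f:
    for line in f:
        # rstrip drops the trailing newline so it is not counted.
        print(len(line.rstrip('\n')))
# The file is closed automatically when the with-block exits,
# even if an exception is raised inside it.
```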
|
11,729,120
|
I am having a problem reading specific lines. It's similar to the question answered here:
[python - Read file from and to specific lines of text](https://stackoverflow.com/questions/7559397/python-read-file-from-and-to-specific-lines-of-text)
The difference, I don't have a fixed end mark. Let me show an example:
```
--------------------------------
\n
***** SOMETHING ***** # i use this as my start
\n
--------------------------------
\n
data of interest
data of interest
data of interest
\n
----------------------- #this will either be dashes, or EOF
***** SOMETHING *****
-----------------------
```
I tried doing something similar to the above link, but I can't create an if statement to break the loop since I don't know if it will be the EOF or not.
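For what it's worth, a Python sketch of one way around that (marker and dash strings are assumptions based on the example above): collect after the header block and stop at the next dashed line, letting the loop end naturally at EOF.
```
def extract_block(path, start_marker="***** SOMETHING *****"):
    data = []
    state = "search"  # search -> header -> collect
    with open(path) as f:
        for line in f:
            stripped = line.strip()
            if state == "search":
                if start_marker in stripped:
                    state = "header"  # skip the dashed line after the marker
            elif state == "header":
                if stripped.startswith('-'):
                    state = "collect"
            else:  # collecting data of interest
                if stripped.startswith('-'):
                    break  # the next dashed line ends the block
                if stripped:
                    data.append(stripped)
    # Hitting EOF first simply ends the loop -- no sentinel is needed.
    return data
```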
|
2012/07/30
|
[
"https://Stackoverflow.com/questions/11729120",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1443368/"
] |
Take a look at [this file](https://github.com/void256/nifty-gui/blob/1.3/nifty-renderer-lwjgl/src/main/java/de/lessvoid/nifty/renderer/lwjgl/input/LwjglKeyboardInputEventCreator.java) of the source code of [NiftyGUI](https://github.com/void256/nifty-gui), which should contain this text handling code.
|
Just delete your shift handling line and add:
```
if(Keyboard.isKeyDown(Keyboard.KEY_LSHIFT) && !Keyboard.isKeyDown(Keyboard.KEY_RSHIFT))
shift=true;
```
before the beginning of the while loop.
|
39,142,168
|
I just started learning Python a few days ago and I have been using Grok Learning. For the challenge I have everything working as far as I can see, but when I submit it I am told "Testing yet another case that starts with a vowel. Your submission raised an exception of type IndexError. This occurred on line 8 of your submission." I am not sure how to solve this or even what I am doing wrong. By the way, I am making a program to check if the message starts with a vowel; if so it multiplies the first letter by 10, and if not it multiplies the second letter by 10.
```
msg = input("Enter a word: ")
h = " "
half =" "
first = msg[0]
second = msg[1]
msg2 = "gg"
length = len(msg)
third = msg[2]
if first not in "aeiou":
if second != third:
print(msg.replace(msg[1], msg[1] * 10))
elif second == third:
msg2 = third * 6
msg3 = (msg.replace(msg[2], msg2))
msg4 = first + msg3[2:]
print(msg4)
else:
half = first * 10
msg10 = msg[1:length]
print((half) + msg10)
```
|
2016/08/25
|
[
"https://Stackoverflow.com/questions/39142168",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Use the query below:
```
WITH cte_data
AS (
SELECT cost, (cost*round(rand()+rand(),2)) origCost
FROM [dbo].[x])
UPDATE a
SET a.cost=a.origCost
FROM cte_data a
```
If you need different numbers for the calculation, use the script below:
```
WITH cte_data
AS (
SELECT cost, ROW_NUMBER() OVER (ORDER BY cost) * cost origCost
FROM [dbo].[x])
UPDATE a
SET a.cost=a.origCost
FROM cte_data a
```
|
To get a random decimal within 0..2, use `CAST (ABS(CHECKSUM(NewId())) % 200 /100. AS DECIMAL(3,2))` instead of `round(rand()+rand(),2)`
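In Python terms, the arithmetic behind that expression looks like this (a sketch; `checksum` stands in for the non-negative value of `ABS(CHECKSUM(NewId()))`):
```
import random

# ABS(CHECKSUM(NewId())) behaves like a large non-negative pseudo-random int.
checksum = random.randrange(2**31)

# % 200 keeps 0..199; dividing by 100.0 scales it to 0.00..1.99,
# which fits DECIMAL(3,2) in the SQL expression.
factor = (checksum % 200) / 100.0
print(round(factor, 2))
```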
|
39,142,168
|
I just started learning Python a few days ago and I have been using Grok Learning. For the challenge I have everything working as far as I can see, but when I submit it I am told "Testing yet another case that starts with a vowel. Your submission raised an exception of type IndexError. This occurred on line 8 of your submission." I am not sure how to solve this or even what I am doing wrong. By the way, I am making a program to check if the message starts with a vowel; if so it multiplies the first letter by 10, and if not it multiplies the second letter by 10.
```
msg = input("Enter a word: ")
h = " "
half =" "
first = msg[0]
second = msg[1]
msg2 = "gg"
length = len(msg)
third = msg[2]
if first not in "aeiou":
if second != third:
print(msg.replace(msg[1], msg[1] * 10))
elif second == third:
msg2 = third * 6
msg3 = (msg.replace(msg[2], msg2))
msg4 = first + msg3[2:]
print(msg4)
else:
half = first * 10
msg10 = msg[1:length]
print((half) + msg10)
```
|
2016/08/25
|
[
"https://Stackoverflow.com/questions/39142168",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Use the query below:
```
WITH cte_data
AS (
SELECT cost, (cost*round(rand()+rand(),2)) origCost
FROM [dbo].[x])
UPDATE a
SET a.cost=a.origCost
FROM cte_data a
```
If you need different numbers for the calculation, use the script below:
```
WITH cte_data
AS (
SELECT cost, ROW_NUMBER() OVER (ORDER BY cost) * cost origCost
FROM [dbo].[x])
UPDATE a
SET a.cost=a.origCost
FROM cte_data a
```
|
```
declare @origCost float
declare table_cursor cursor for
select cost from [dbo].[x] where cost is not null
open table_cursor;
Fetch next from table_cursor into @origCost
while @@FETCH_STATUS = 0
BEGIN
update [dbo].[x]
set cost=@origCost* round(rand()+rand(),2) where cost is not null
Fetch Next from table_cursor into @origCost
END;
CLOSE table_cursor;
DEALLOCATE table_cursor;
```
Declare one variable with the xxxxxxxxx value plus a random number,
and add 1 to it for each statement.
|
39,142,168
|
I just started learning Python a few days ago and I have been using Grok Learning. For the challenge I have everything working as far as I can see, but when I submit it I am told "Testing yet another case that starts with a vowel. Your submission raised an exception of type IndexError. This occurred on line 8 of your submission." I am not sure how to solve this or even what I am doing wrong. By the way, I am making a program to check if the message starts with a vowel; if so it multiplies the first letter by 10, and if not it multiplies the second letter by 10.
```
msg = input("Enter a word: ")
h = " "
half =" "
first = msg[0]
second = msg[1]
msg2 = "gg"
length = len(msg)
third = msg[2]
if first not in "aeiou":
if second != third:
print(msg.replace(msg[1], msg[1] * 10))
elif second == third:
msg2 = third * 6
msg3 = (msg.replace(msg[2], msg2))
msg4 = first + msg3[2:]
print(msg4)
else:
half = first * 10
msg10 = msg[1:length]
print((half) + msg10)
```
|
2016/08/25
|
[
"https://Stackoverflow.com/questions/39142168",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
To get a random decimal within 0..2, use `CAST (ABS(CHECKSUM(NewId())) % 200 /100. AS DECIMAL(3,2))` instead of `round(rand()+rand(),2)`
|
```
declare @origCost float
declare table_cursor cursor for
select cost from [dbo].[x] where cost is not null
open table_cursor;
Fetch next from table_cursor into @origCost
while @@FETCH_STATUS = 0
BEGIN
update [dbo].[x]
set cost=@origCost* round(rand()+rand(),2) where cost is not null
Fetch Next from table_cursor into @origCost
END;
CLOSE table_cursor;
DEALLOCATE table_cursor;
```
Declare one variable with the xxxxxxxxx value plus a random number,
and add 1 to it for each statement.
|
72,010,560
|
I am running [a model from github](https://github.com/wentaoyuan/pcn) and I have already encountered several errors with pathing etc. After fixing those, I think the main error for now is TensorFlow. This repo was probably written when TF 1.x was current, and now with the change to TF 2 I might need to migrate everything.
Mainly, I get the following error:
```
@ops.RegisterShape('ApproxMatch')
AttributeError: module 'tensorflow.python.framework.ops' has no attribute 'RegisterShape'
```
in:
```
import tensorflow as tf
from tensorflow.python.framework import ops
import os.path as osp
base_dir = osp.dirname(osp.abspath(__file__))
approxmatch_module = tf.load_op_library(osp.join(base_dir, 'tf_approxmatch_so.so'))
def approx_match(xyz1,xyz2):
'''
input:
xyz1 : batch_size * #dataset_points * 3
xyz2 : batch_size * #query_points * 3
returns:
match : batch_size * #query_points * #dataset_points
'''
return approxmatch_module.approx_match(xyz1,xyz2)
ops.NoGradient('ApproxMatch')
#@tf.RegisterShape('ApproxMatch')
@ops.RegisterShape('ApproxMatch')
def _approx_match_shape(op):
shape1=op.inputs[0].get_shape().with_rank(3)
shape2=op.inputs[1].get_shape().with_rank(3)
return [tf.TensorShape([shape1.dims[0],shape2.dims[1],shape1.dims[1]])]
```
2 main things I am not understanding:
1. I read that this probably will make me have to create the `ops` [C++ routines](https://www.tensorflow.org/guide/create_op), but at the same time, I can see that these are done in here: `@ops.RegisterShape('ApproxMatch')`. [Like this](https://stackoverflow.com/questions/59043649/attributeerror-module-tensorflow-python-framework-ops-has-no-attribute-regis) and [this](https://stackoverflow.com/questions/59643248/attributeerror-module-has-no-attribute-registershape) with `REGISTER_OP(...).SetShapeFn(...)`. But I don't think I am understanding the process, and I have seen other questions with the same issue but no real implementation/answer.
2. If I go to the location of the tf\_approxmatch shared library ( `approxmatch_module = tf.load_op_library(osp.join(base_dir, 'tf_approxmatch_so.so'))` ), I cannot open it or edit it with `gedit`, so I am assuming I am not supposed to change anything in there (?).
There are `py`, `cpp` and `cu` files in that folder (I already did `make` yesterday and everything ran smoothly).
```
__init__.py tf_approxmatch.cu.o tf_nndistance.cu.o
makefile tf_approxmatch.py tf_nndistance.py
__pycache__ tf_approxmatch_so.so tf_nndistance_so.so
tf_approxmatch.cpp tf_nndistance.cpp
tf_approxmatch.cu tf_nndistance.cu
```
My main guess is that I should register the shape somehow in the cpp file, as it already has some registered ops, but I am a bit lost because **I am not even sure if I am understanding the problem I have**. I will just show the first lines of the file:
```
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/op_kernel.h"
#include <algorithm>
#include <vector>
#include <math.h>
using namespace tensorflow;
REGISTER_OP("ApproxMatch")
.Input("xyz1: float32")
.Input("xyz2: float32")
.Output("match: float32");
REGISTER_OP("MatchCost")
.Input("xyz1: float32")
.Input("xyz2: float32")
.Input("match: float32")
.Output("cost: float32");
REGISTER_OP("MatchCostGrad")
.Input("xyz1: float32")
.Input("xyz2: float32")
.Input("match: float32")
.Output("grad1: float32")
.Output("grad2: float32");
```
|
2022/04/26
|
[
"https://Stackoverflow.com/questions/72010560",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7396613/"
] |
**Disclaimer**: Unless there is no other choice, I would strongly recommend sticking to TensorFlow 1.x. Migrating code from TF 1.x to 2.x can be incredibly time consuming.
---
Since TF 1.0, registering a shape is done in C++ using `SetShapeFn`, not in Python. However, the private Python API was kept throughout TF 1.x (I presume for backward compatibility reasons), but was completely removed in TF 2.0.
The [Create an Op](https://www.tensorflow.org/guide/create_op) guide is really useful to migrate the code in that case, and I would really recommend reading it.
First of all, why is it needed to register a shape? It's needed for shape inference, a feature that allows TensorFlow to know the shape of the inputs and outputs in the computation graph without running actual code. Shape inference allows, for example, error handling when trying to use an op on Tensors that don't have a compatible shape.
In your specific case, you need to convert the python code that uses `ops.RegisterShape` to c++ code using `SetShapeFn`. Thankfully, the github repository you are working with provides comments that are going to be helpful.
Let's start with the `approx_match` function. The python code is the following:
```
def approx_match(xyz1,xyz2):
'''
input:
xyz1 : batch_size * #dataset_points * 3
xyz2 : batch_size * #query_points * 3
returns:
match : batch_size * #query_points * #dataset_points
'''
return approxmatch_module.approx_match(xyz1,xyz2)
@ops.RegisterShape('ApproxMatch')
def _approx_match_shape(op):
shape1=op.inputs[0].get_shape().with_rank(3)
shape2=op.inputs[1].get_shape().with_rank(3)
return [tf.TensorShape([shape1.dims[0],shape2.dims[1],shape1.dims[1]])]
```
Reading the code and the comments, we understand the following:
* There are 2 inputs, `xyz1` and `xyz2`
* `xyz1` has the shape `(batch_size, dataset_points, 3)`
* `xyz2` has the shape `(batch_size, query_points, 3)`
* There is 1 output: `match`
* `match` has the shape `(batch_size, query_points, dataset_points)`
This would translate to the following C++ code:
```cpp
#include "tensorflow/core/framework/shape_inference.h"
using namespace tensorflow;
REGISTER_OP("ApproxMatch")
.Input("xyz1: float32")
.Input("xyz2: float32")
.Output("match: float32")
.SetShapeFn([](shape_inference::InferenceContext* c) {
shape_inference::ShapeHandle xyz1_shape = c->input(0);
shape_inference::ShapeHandle xyz2_shape = c->input(1);
// batch_size is the first dimension
shape_inference::DimensionHandle batch_size = c->Dim(xyz1_shape, 0);
// dataset_points points is the 2nd dimension of the first input
shape_inference::DimensionHandle dataset_points = c->Dim(xyz1_shape, 1);
// query_points points is the 2nd dimension of the second input
shape_inference::DimensionHandle query_points = c->Dim(xyz2_shape, 1);
// Creating the new shape (batch_size, query_points, dataset_points)
// and setting it to the output
c->set_output(0, c->MakeShape({batch_size, query_points, dataset_points}));
// Returning a status telling that everything went well
return Status::OK();
});
```
**Warning**: This code does not contain any error handling (for example, checking that the first dimensions of the 2 inputs are the same, or that the last dimension of both inputs is 3). I let that as an exercise to the reader, you could look at the aforementioned guide or directly to the [source code of some ops](https://github.com/tensorflow/tensorflow/blob/v2.8.0/tensorflow/core/ops/math_ops.cc) to have an idea on how to do error handling, with, for example, the macro `TF_RETURN_IF_ERROR`.
---
The same steps can be applied to the `match_cost` function, which could look like that:
```cpp
REGISTER_OP("MatchCost")
.Input("xyz1: float32")
.Input("xyz2: float32")
.Input("match: float32")
.Output("cost: float32")
.SetShapeFn([](shape_inference::InferenceContext* c) {
shape_inference::DimensionHandle batch_size = c->Dim(c->input(0), 0);
c->set_output(0, c->Vector(batch_size));
return Status::OK();
});
```
---
You then need to recompile the `.so` library with the makefile included in the project. You might have to change some flags; for example, TF 2.8 uses the C++ standard c++14 instead of c++11, so the flag `-std=c++14` is needed. Once the library is compiled, you can test importing it in Python:
```
>>> import tensorflow as tf
>>> approxmatch_module = tf.load_op_library('./tf_approxmatch_so.so')
>>> a = tf.random.uniform((10,20,3))
>>> b = tf.random.uniform((10,50,3))
>>> c = approxmatch_module.approx_match(a,b)
>>> c.shape
TensorShape([10, 50, 20])
```
|
According to TensorFlow's release log, `RegisterShape` is deprecated; you should use `SetShapeFn` to define the shape when registering your operator in the C++ source file.
|
29,570,085
|
I am using a Makefile to provide consistent single commands for setting up a virtualenv, running tests, etc. I have configured my Jenkins instance to pull from a mercurial repo and then run "make virtualenv", which does this:
```
virtualenv --python=/usr/bin/python2.7 --no-site-packages . && . ./bin/activate && pip install -r requirements.txt
```
But for some reason it insists on using the system-installed pip and trying to install my package dependencies in the system site-packages rather than the virtualenv:
```
error: could not create '/usr/local/lib/python2.7/dist-packages/flask': Permission denied
```
If I add some debugging commands and explicitly point to the pip in my virtualenv, things get even more confusing:
```
virtualenv --python=/usr/bin/python2.7 --no-site-packages . && . ./bin/activate && ls -l bin && which pip && pwd && ./bin/pip install -r requirements.txt
```
Which generates the following output:
```
New python executable in ./bin/python2.7
Not overwriting existing python script ./bin/python (you must use ./bin/python2.7)
Installing setuptools, pip...done.
Running virtualenv with interpreter /usr/bin/python2.7
```
It appears Jenkins doesn't rebuild the environment from scratch for each build, which strikes me as an odd choice, but that shouldn't affect my immediate issue.
The output from the "ls -l bin" shows pip to be installed in the virtualenv and executable:
```
-rw-r--r-- 1 jenkins jenkins 2248 Apr 9 21:14 activate
-rw-r--r-- 1 jenkins jenkins 1304 Apr 9 21:14 activate.csh
-rw-r--r-- 1 jenkins jenkins 2517 Apr 9 21:14 activate.fish
-rw-r--r-- 1 jenkins jenkins 1129 Apr 9 21:14 activate_this.py
-rwxr-xr-x 1 jenkins jenkins 278 Apr 9 21:14 easy_install
-rwxr-xr-x 1 jenkins jenkins 278 Apr 9 21:14 easy_install-2.7
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip2
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip2.7
lrwxrwxrwx 1 jenkins jenkins 9 Apr 10 19:31 python -> python2.7
lrwxrwxrwx 1 jenkins jenkins 9 Apr 10 19:31 python2 -> python2.7
-rwxr-xr-x 1 jenkins jenkins 3349512 Apr 10 19:31 python2.7
```
The output of "which pip" seems to want to use the correct one:
```
/var/lib/jenkins/jobs/Run Tests/workspace/bin/pip
```
My current working directory is what I expect it to be:
```
/var/lib/jenkins/jobs/Run Tests/workspace
```
But... wtf?
```
/bin/sh: 1: ./bin/pip: Permission denied
make: *** [virtualenv] Error 126
Build step 'Execute shell' marked build as failure
Finished: FAILURE
```
|
2015/04/10
|
[
"https://Stackoverflow.com/questions/29570085",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1332304/"
] |
Jenkins pipelines can be made to run with virtual environments but there are multiple things to consider.
* The default shell that Jenkins uses is `/bin/sh` - this is configurable in **Manage Jenkins -> Configure System -> Shell -> Shell executable**. Setting this to `/bin/bash` will make `source` work.
* An activated venv simply changes environment variables, and environment variables do not persist between stages in jenkins. See [withEnv](https://jenkins.io/doc/pipeline/steps/workflow-basic-steps/#code-withenv-code-set-environment-variables)
* If you are using version controlled multibranch pipelines jenkins creates a workspace with the branch name and a commit hash in the path - which can be quite long. venv scripts (e.g. pip) all start with a hashbang line which includes the full path to the python interpreter in the venv (the python interpreter itself is a symlink). E.g.,
```
~/workspace/ink_feature-use-jenkinsfile-VGRPYD53GGGDDSBIJDLSUDYPJ34QR63ITGMC5VJNB56W6ID244AA/env/bin$ cat pip
#!/var/jenkins_home/workspace/ink_feature-use-jenkinsfile-VGRPYD53GGGDDSBIJDLSUDYPJ34QR63ITGMC5VJNB56W6ID244AA/env/bin/python3.5
```
Bash only reads the first `N` characters of any executable file - which I found did not quite include the full venv path:
```
bash: ./pip: /var/jenkins_home/workspace/ink_feature-use-jenkinsfile-VGRPYD53GGGDDSBIJDLSU: bad interpreter: No such file or directory
```
This particular problem can be avoided by executing the script with Python instead. E.g. `python3.5 ./pip`
|
[@hardbyte](https://stackoverflow.com/users/809692/hardbyte)'s answer
*The default shell that Jenkins uses is /bin/sh - this is configurable in Manage Jenkins -> Configure System -> Shell -> Shell executable. Setting this to /bin/bash will make source work.*
plus:
* <https://stackoverflow.com/a/70812248/1497139>
* [pyvenv-3.4 returned non-zero exit status 1](https://stackoverflow.com/questions/24123150/pyvenv-3-4-returned-non-zero-exit-status-1)
got me working
```bash
sudo apt install python3.10-venv
```
and then in jenkins in the execute shell step:
```bash
python3.10 -m venv .venv
source .venv/bin/activate
...
```
|
29,570,085
|
I am using a Makefile to provide consistent single commands for setting up a virtualenv, running tests, etc. I have configured my Jenkins instance to pull from a mercurial repo and then run "make virtualenv", which does this:
```
virtualenv --python=/usr/bin/python2.7 --no-site-packages . && . ./bin/activate && pip install -r requirements.txt
```
But for some reason it insists on using the system-installed pip and trying to install my package dependencies in the system site-packages rather than the virtualenv:
```
error: could not create '/usr/local/lib/python2.7/dist-packages/flask': Permission denied
```
If I add some debugging commands and explicitly point to the pip in my virtualenv, things get even more confusing:
```
virtualenv --python=/usr/bin/python2.7 --no-site-packages . && . ./bin/activate && ls -l bin && which pip && pwd && ./bin/pip install -r requirements.txt
```
Which generates the following output:
```
New python executable in ./bin/python2.7
Not overwriting existing python script ./bin/python (you must use ./bin/python2.7)
Installing setuptools, pip...done.
Running virtualenv with interpreter /usr/bin/python2.7
```
It appears Jenkins doesn't rebuild the environment from scratch for each build, which strikes me as an odd choice, but that shouldn't affect my immediate issue.
The output from the "ls -l bin" shows pip to be installed in the virtualenv and executable:
```
-rw-r--r-- 1 jenkins jenkins 2248 Apr 9 21:14 activate
-rw-r--r-- 1 jenkins jenkins 1304 Apr 9 21:14 activate.csh
-rw-r--r-- 1 jenkins jenkins 2517 Apr 9 21:14 activate.fish
-rw-r--r-- 1 jenkins jenkins 1129 Apr 9 21:14 activate_this.py
-rwxr-xr-x 1 jenkins jenkins 278 Apr 9 21:14 easy_install
-rwxr-xr-x 1 jenkins jenkins 278 Apr 9 21:14 easy_install-2.7
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip2
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip2.7
lrwxrwxrwx 1 jenkins jenkins 9 Apr 10 19:31 python -> python2.7
lrwxrwxrwx 1 jenkins jenkins 9 Apr 10 19:31 python2 -> python2.7
-rwxr-xr-x 1 jenkins jenkins 3349512 Apr 10 19:31 python2.7
```
The output of "which pip" seems to want to use the correct one:
```
/var/lib/jenkins/jobs/Run Tests/workspace/bin/pip
```
My current working directory is what I expect it to be:
```
/var/lib/jenkins/jobs/Run Tests/workspace
```
But... wtf?
```
/bin/sh: 1: ./bin/pip: Permission denied
make: *** [virtualenv] Error 126
Build step 'Execute shell' marked build as failure
Finished: FAILURE
```
|
2015/04/10
|
[
"https://Stackoverflow.com/questions/29570085",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1332304/"
] |
I have been using Python virtualenvs with Jenkins every day for the last two years, at multiple companies and for small side projects, and cannot say I found "THE" answer. Still, I hope that sharing my experience will help others save time. Hopefully I will get further feedback to make the decision easier.
* **Avoid ShiningPanda** - it is not well maintained, is incompatible with Jenkins2 pipelines, and prevents execution of jobs in parallel. It also has the bad habit of leaving orphan environments on disk.
* **DIY via bash and virtualenv** is my current favourite. Create it inside $WORKSPACE and, if not always cleaning, run [relocatable](https://virtualenv.pypa.io/en/stable/userguide/#making-environments-relocatable) before activating them. This is because jenkins workspace folder disk location can change between executions of job N and N+1.
If you use multiple builders that do need the same virtualenv, the easiest way is to dump your environment to a file and source it at the beginning of the new builder.
To ease the maintenance I am planning to investigate these:
* direnv
* virtualenv-wrapper (mkvirtualenv)
* pyenv
If you hit the shebang command line limits, the best thing to do is to change your Jenkins home directory to just `/j`.
|
I had the same problem. As I can see, your project is named 'Run Tests', so the name contains a space. That was the problem for me: I just renamed the project to something like RunTests, and the venv is working now! Note that Jenkins will ask you to confirm renaming the project.
|
29,570,085
|
I am using a Makefile to provide consistent single commands for setting up a virtualenv, running tests, etc. I have configured my Jenkins instance to pull from a mercurial repo and then run "make virtualenv", which does this:
```
virtualenv --python=/usr/bin/python2.7 --no-site-packages . && . ./bin/activate && pip install -r requirements.txt
```
But for some reason it insists on using the system-installed pip and trying to install my package dependencies in the system site-packages rather than the virtualenv:
```
error: could not create '/usr/local/lib/python2.7/dist-packages/flask': Permission denied
```
If I add some debugging commands and explicitly point to the pip in my virtualenv, things get even more confusing:
```
virtualenv --python=/usr/bin/python2.7 --no-site-packages . && . ./bin/activate && ls -l bin && which pip && pwd && ./bin/pip install -r requirements.txt
```
Which generates the following output:
```
New python executable in ./bin/python2.7
Not overwriting existing python script ./bin/python (you must use ./bin/python2.7)
Installing setuptools, pip...done.
Running virtualenv with interpreter /usr/bin/python2.7
```
It appears Jenkins doesn't rebuild the environment from scratch for each build, which strikes me as an odd choice, but that shouldn't affect my immediate issue.
The output from the "ls -l bin" shows pip to be installed in the virtualenv and executable:
```
-rw-r--r-- 1 jenkins jenkins 2248 Apr 9 21:14 activate
-rw-r--r-- 1 jenkins jenkins 1304 Apr 9 21:14 activate.csh
-rw-r--r-- 1 jenkins jenkins 2517 Apr 9 21:14 activate.fish
-rw-r--r-- 1 jenkins jenkins 1129 Apr 9 21:14 activate_this.py
-rwxr-xr-x 1 jenkins jenkins 278 Apr 9 21:14 easy_install
-rwxr-xr-x 1 jenkins jenkins 278 Apr 9 21:14 easy_install-2.7
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip2
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip2.7
lrwxrwxrwx 1 jenkins jenkins 9 Apr 10 19:31 python -> python2.7
lrwxrwxrwx 1 jenkins jenkins 9 Apr 10 19:31 python2 -> python2.7
-rwxr-xr-x 1 jenkins jenkins 3349512 Apr 10 19:31 python2.7
```
The output of "which pip" seems to want to use the correct one:
```
/var/lib/jenkins/jobs/Run Tests/workspace/bin/pip
```
My current working directory is what I expect it to be:
```
/var/lib/jenkins/jobs/Run Tests/workspace
```
But... wtf?
```
/bin/sh: 1: ./bin/pip: Permission denied
make: *** [virtualenv] Error 126
Build step 'Execute shell' marked build as failure
Finished: FAILURE
```
|
2015/04/10
|
[
"https://Stackoverflow.com/questions/29570085",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1332304/"
] |
Jenkins pipelines can be made to run with virtual environments but there are multiple things to consider.
* The default shell that Jenkins uses is `/bin/sh` - this is configurable in **Manage Jenkins -> Configure System -> Shell -> Shell executable**. Setting this to `/bin/bash` will make `source` work.
* An activated venv simply changes environment variables, and environment variables do not persist between stages in jenkins. See [withEnv](https://jenkins.io/doc/pipeline/steps/workflow-basic-steps/#code-withenv-code-set-environment-variables)
* If you are using version controlled multibranch pipelines jenkins creates a workspace with the branch name and a commit hash in the path - which can be quite long. venv scripts (e.g. pip) all start with a hashbang line which includes the full path to the python interpreter in the venv (the python interpreter itself is a symlink). E.g.,
```
~/workspace/ink_feature-use-jenkinsfile-VGRPYD53GGGDDSBIJDLSUDYPJ34QR63ITGMC5VJNB56W6ID244AA/env/bin$ cat pip
#!/var/jenkins_home/workspace/ink_feature-use-jenkinsfile-VGRPYD53GGGDDSBIJDLSUDYPJ34QR63ITGMC5VJNB56W6ID244AA/env/bin/python3.5
```
Bash only reads the first `N` characters of any executable file - which I found did not quite include the full venv path:
```
bash: ./pip: /var/jenkins_home/workspace/ink_feature-use-jenkinsfile-VGRPYD53GGGDDSBIJDLSU: bad interpreter: No such file or directory
```
This particular problem can be avoided by executing the script with Python instead. E.g. `python3.5 ./pip`
|
I had the same problem. As I can see, your project is named 'Run Tests', so the name contains a space. That was the problem for me: I just renamed the project to something like RunTests, and the venv is working now! Note that Jenkins will ask you to confirm renaming the project.
|
29,570,085
|
I am using a Makefile to provide consistent single commands for setting up a virtualenv, running tests, etc. I have configured my Jenkins instance to pull from a mercurial repo and then run "make virtualenv", which does this:
```
virtualenv --python=/usr/bin/python2.7 --no-site-packages . && . ./bin/activate && pip install -r requirements.txt
```
But for some reason it insists on using the system-installed pip and trying to install my package dependencies in the system site-packages rather than the virtualenv:
```
error: could not create '/usr/local/lib/python2.7/dist-packages/flask': Permission denied
```
If I add some debugging commands and explicitly point to the pip in my virtualenv, things get even more confusing:
```
virtualenv --python=/usr/bin/python2.7 --no-site-packages . && . ./bin/activate && ls -l bin && which pip && pwd && ./bin/pip install -r requirements.txt
```
Which generates the following output:
```
New python executable in ./bin/python2.7
Not overwriting existing python script ./bin/python (you must use ./bin/python2.7)
Installing setuptools, pip...done.
Running virtualenv with interpreter /usr/bin/python2.7
```
It appears Jenkins doesn't rebuild the environment from scratch for each build, which strikes me as an odd choice, but that shouldn't affect my immediate issue.
The output from the "ls -l bin" shows pip to be installed in the virtualenv and executable:
```
-rw-r--r-- 1 jenkins jenkins 2248 Apr 9 21:14 activate
-rw-r--r-- 1 jenkins jenkins 1304 Apr 9 21:14 activate.csh
-rw-r--r-- 1 jenkins jenkins 2517 Apr 9 21:14 activate.fish
-rw-r--r-- 1 jenkins jenkins 1129 Apr 9 21:14 activate_this.py
-rwxr-xr-x 1 jenkins jenkins 278 Apr 9 21:14 easy_install
-rwxr-xr-x 1 jenkins jenkins 278 Apr 9 21:14 easy_install-2.7
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip2
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip2.7
lrwxrwxrwx 1 jenkins jenkins 9 Apr 10 19:31 python -> python2.7
lrwxrwxrwx 1 jenkins jenkins 9 Apr 10 19:31 python2 -> python2.7
-rwxr-xr-x 1 jenkins jenkins 3349512 Apr 10 19:31 python2.7
```
The output of "which pip" seems to want to use the correct one:
```
/var/lib/jenkins/jobs/Run Tests/workspace/bin/pip
```
My current working directory is what I expect it to be:
```
/var/lib/jenkins/jobs/Run Tests/workspace
```
But... wtf?
```
/bin/sh: 1: ./bin/pip: Permission denied
make: *** [virtualenv] Error 126
Build step 'Execute shell' marked build as failure
Finished: FAILURE
```
|
2015/04/10
|
[
"https://Stackoverflow.com/questions/29570085",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1332304/"
] |
I have been using Python virtualenvs with Jenkins every day for the last two years, at multiple companies and for small side projects, and cannot say I found "THE" answer. Still, I hope that sharing my experience will help others save time. Hopefully I will get further feedback to make the decision easier.
* **Avoid ShiningPanda** - it is not well maintained, is incompatible with Jenkins2 pipelines, and prevents execution of jobs in parallel. It also has the bad habit of leaving orphan environments on disk.
* **DIY via bash and virtualenv** is my current favourite. Create it inside $WORKSPACE and, if not always cleaning, run [relocatable](https://virtualenv.pypa.io/en/stable/userguide/#making-environments-relocatable) before activating them. This is because jenkins workspace folder disk location can change between executions of job N and N+1.
If you use multiple builders that do need the same virtualenv, the easiest way is to dump your environment to a file and source it at the beginning of the new builder.
To ease the maintenance I am planning to investigate these:
* direnv
* virtualenv-wrapper (mkvirtualenv)
* pyenv
If you hit the shebang command line limits, the best thing to do is to change your Jenkins home directory to just `/j`.
|
[@hardbyte](https://stackoverflow.com/users/809692/hardbyte)'s answer
*The default shell that Jenkins uses is /bin/sh - this is configurable in Manage Jenkins -> Configure System -> Shell -> Shell executable. Setting this to /bin/bash will make source work.*
plus:
* <https://stackoverflow.com/a/70812248/1497139>
* [pyvenv-3.4 returned non-zero exit status 1](https://stackoverflow.com/questions/24123150/pyvenv-3-4-returned-non-zero-exit-status-1)
got me working
```bash
sudo apt install python3.10-venv
```
and then in jenkins in the execute shell step:
```bash
python3.10 -m venv .venv
source .venv/bin/activate
...
```
|
29,570,085
|
I am using a Makefile to provide consistent single commands for setting up a virtualenv, running tests, etc. I have configured my Jenkins instance to pull from a mercurial repo and then run "make virtualenv", which does this:
```
virtualenv --python=/usr/bin/python2.7 --no-site-packages . && . ./bin/activate && pip install -r requirements.txt
```
But for some reason it insists on using the system-installed pip and trying to install my package dependencies in the system site-packages rather than the virtualenv:
```
error: could not create '/usr/local/lib/python2.7/dist-packages/flask': Permission denied
```
If I add some debugging commands and explicitly point to the pip in my virtualenv, things get even more confusing:
```
virtualenv --python=/usr/bin/python2.7 --no-site-packages . && . ./bin/activate && ls -l bin && which pip && pwd && ./bin/pip install -r requirements.txt
```
Which generates the following output:
```
New python executable in ./bin/python2.7
Not overwriting existing python script ./bin/python (you must use ./bin/python2.7)
Installing setuptools, pip...done.
Running virtualenv with interpreter /usr/bin/python2.7
```
It appears Jenkins doesn't rebuild the environment from scratch for each build, which strikes me as an odd choice, but that shouldn't affect my immediate issue.
The output from the "ls -l bin" shows pip to be installed in the virtualenv and executable:
```
-rw-r--r-- 1 jenkins jenkins 2248 Apr 9 21:14 activate
-rw-r--r-- 1 jenkins jenkins 1304 Apr 9 21:14 activate.csh
-rw-r--r-- 1 jenkins jenkins 2517 Apr 9 21:14 activate.fish
-rw-r--r-- 1 jenkins jenkins 1129 Apr 9 21:14 activate_this.py
-rwxr-xr-x 1 jenkins jenkins 278 Apr 9 21:14 easy_install
-rwxr-xr-x 1 jenkins jenkins 278 Apr 9 21:14 easy_install-2.7
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip2
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip2.7
lrwxrwxrwx 1 jenkins jenkins 9 Apr 10 19:31 python -> python2.7
lrwxrwxrwx 1 jenkins jenkins 9 Apr 10 19:31 python2 -> python2.7
-rwxr-xr-x 1 jenkins jenkins 3349512 Apr 10 19:31 python2.7
```
The output of "which pip" seems to want to use the correct one:
```
/var/lib/jenkins/jobs/Run Tests/workspace/bin/pip
```
My current working directory is what I expect it to be:
```
/var/lib/jenkins/jobs/Run Tests/workspace
```
But... wtf?
```
/bin/sh: 1: ./bin/pip: Permission denied
make: *** [virtualenv] Error 126
Build step 'Execute shell' marked build as failure
Finished: FAILURE
```
|
2015/04/10
|
[
"https://Stackoverflow.com/questions/29570085",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1332304/"
] |
I'd recommend avoiding ShiningPanda.
I set up my virtual environments with [Anaconda](https://docs.continuum.io/anaconda/install)/[Miniconda](https://conda.io/miniconda.html). When installing conda, make sure you're running as the jenkins user.
```
your_user@$ sudo -u jenkins sh
jenkins@$ wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
jenkins@$ bash Miniconda3-latest-Linux-x86_64.sh
```
Since Jenkins runs `sh` rather than `bash`, I added conda path to `/etc/profile`:
```
export PATH="/var/lib/jenkins/miniconda3/bin:$PATH"
```
Then in Jenkinsfile you can create and *delete* conda environments. Here's an example that creates a new environment for each build:
```
pipeline {
agent any
stages {
stage('Unit tests') {
steps {
sh '''
conda create --yes -n ${BUILD_TAG} python
source activate ${BUILD_TAG}
          # example of unit test with nose2 (# is the shell comment character, not //)
pip install nose2
nose2
'''
}
}
}
post {
always {
sh 'conda remove --yes -n ${BUILD_TAG} --all'
}
}
}
```
|
[@hardbyte](https://stackoverflow.com/users/809692/hardbyte)'s answer
*The default shell that Jenkins uses is /bin/sh - this is configurable in Manage Jenkins -> Configure System -> Shell -> Shell executable. Setting this to /bin/bash will make source work.*
plus:
* <https://stackoverflow.com/a/70812248/1497139>
* [pyvenv-3.4 returned non-zero exit status 1](https://stackoverflow.com/questions/24123150/pyvenv-3-4-returned-non-zero-exit-status-1)
got it working for me:
```bash
sudo apt install python3.10-venv
```
and then in jenkins in the execute shell step:
```bash
python3.10 -m venv .venv
source .venv/bin/activate
...
```
|
29,570,085
|
I am using a Makefile to provide consistent single commands for setting up a virtualenv, running tests, etc. I have configured my Jenkins instance to pull from a mercurial repo and then run "make virtualenv", which does this:
```
virtualenv --python=/usr/bin/python2.7 --no-site-packages . && . ./bin/activate && pip install -r requirements.txt
```
But for some reason it insists on using the system-installed pip and trying to install my package dependencies in the system site-packages rather than the virtualenv:
```
error: could not create '/usr/local/lib/python2.7/dist-packages/flask': Permission denied
```
If I add some debugging commands and explicitly point to the pip in my virtualenv, things get even more confusing:
```
virtualenv --python=/usr/bin/python2.7 --no-site-packages . && . ./bin/activate && ls -l bin && which pip && pwd && ./bin/pip install -r requirements.txt
```
Which generates the following output:
```
New python executable in ./bin/python2.7
Not overwriting existing python script ./bin/python (you must use ./bin/python2.7)
Installing setuptools, pip...done.
Running virtualenv with interpreter /usr/bin/python2.7
```
It appears Jenkins doesn't rebuild the environment from scratch for each build, which strikes me as an odd choice, but that shouldn't affect my immediate issue.
The output from the "ls -l bin" shows pip to be installed in the virtualenv and executable:
```
-rw-r--r-- 1 jenkins jenkins 2248 Apr 9 21:14 activate
-rw-r--r-- 1 jenkins jenkins 1304 Apr 9 21:14 activate.csh
-rw-r--r-- 1 jenkins jenkins 2517 Apr 9 21:14 activate.fish
-rw-r--r-- 1 jenkins jenkins 1129 Apr 9 21:14 activate_this.py
-rwxr-xr-x 1 jenkins jenkins 278 Apr 9 21:14 easy_install
-rwxr-xr-x 1 jenkins jenkins 278 Apr 9 21:14 easy_install-2.7
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip2
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip2.7
lrwxrwxrwx 1 jenkins jenkins 9 Apr 10 19:31 python -> python2.7
lrwxrwxrwx 1 jenkins jenkins 9 Apr 10 19:31 python2 -> python2.7
-rwxr-xr-x 1 jenkins jenkins 3349512 Apr 10 19:31 python2.7
```
The output of "which pip" seems to want to use the correct one:
```
/var/lib/jenkins/jobs/Run Tests/workspace/bin/pip
```
My current working directory is what I expect it to be:
```
/var/lib/jenkins/jobs/Run Tests/workspace
```
But... wtf?
```
/bin/sh: 1: ./bin/pip: Permission denied
make: *** [virtualenv] Error 126
Build step 'Execute shell' marked build as failure
Finished: FAILURE
```
|
2015/04/10
|
[
"https://Stackoverflow.com/questions/29570085",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1332304/"
] |
I have been using python virtualenvs with Jenkins every day for the last two years, at multiple companies and for small side projects, and I cannot say I found "THE" answer. Still, I hope that sharing my experience will help others save time. Hopefully I will get further feedback in order to make the decision easier.
* **Avoid ShiningPanda** - it is not well maintained, is incompatible with Jenkins 2 pipelines, and prevents execution of jobs in parallel. It also has the bad habit of leaving orphan environments on disk.
* **DIY via bash and virtualenv** is my current favourite. Create the virtualenv inside $WORKSPACE and, if you are not always cleaning the workspace, run [relocatable](https://virtualenv.pypa.io/en/stable/userguide/#making-environments-relocatable) before activating it. This is because the Jenkins workspace folder's location on disk can change between executions of job N and N+1.
If you use multiple builders that need the same virtualenv, the easiest way is to dump your environment to a file and source it at the beginning of the new builder.
To ease the maintenance I am planning to investigate these:
* direnv
* virtualenvwrapper (mkvirtualenv)
* pyenv
If you hit the shebang command-line length limits, the best thing to do is to change your Jenkins home directory to something short, such as `/j`.
|
There are some issues with the venv-python plugin across different OS environments.
Here is how I call a python method manually. It is not best practice, but it works.
```groovy
// Put this stage at the top of the pipeline
stage('Prepare venv') {
    steps {
        script {
            if (isUnix()) {
                env.ISUNIX = "TRUE" // cache isUnix() to stop Blue Ocean from showing a duplicate "Checks if running on a Unix-like node" step for every Python() call below
                sh 'python3 -m venv pyenv'
                PYTHON_PATH = sh(script: 'echo ${WORKSPACE}/pyenv/bin/', returnStdout: true).trim()
            }
            else {
                env.ISUNIX = "FALSE"
                powershell(script:"py -3 -m venv pyenv") // Windows does not allow calling python3.exe with venv. https://github.com/msys2/MINGW-packages/issues/5001
                PYTHON_PATH = powershell(script: "Write-Output ${WORKSPACE}\\pyenv\\Scripts\\", returnStdout: true).trim() // use powershell here as well, since sh is usually unavailable on Windows agents
            }
            try {
                // An agent with an older pip version can sometimes fail due to incompatible packages.
                Python("-m pip install --upgrade pip")
            }
            catch (ignore) { } // upgrading pip can report failure when it is already the latest version
            // After this you can call Python() anywhere in the pipeline
            Python("-m pip install -r requirements.txt")
        }
    }
}
// Several plugins like WithPyenv do not work perfectly across platforms when using a virtualenv.
// Put this method outside the pipeline block
def Python(String command) {
    if (env.ISUNIX == "TRUE") {
        sh script:"source ${WORKSPACE}/pyenv/bin/activate && python ${command}", label: "python ${command}"
    }
    else {
        powershell script:"${WORKSPACE}\\pyenv\\Scripts\\Activate.ps1 ; python ${command}", label: "python ${command}"
    }
}
```
|
29,570,085
|
I am using a Makefile to provide consistent single commands for setting up a virtualenv, running tests, etc. I have configured my Jenkins instance to pull from a mercurial repo and then run "make virtualenv", which does this:
```
virtualenv --python=/usr/bin/python2.7 --no-site-packages . && . ./bin/activate && pip install -r requirements.txt
```
But for some reason it insists on using the system-installed pip and trying to install my package dependencies in the system site-packages rather than the virtualenv:
```
error: could not create '/usr/local/lib/python2.7/dist-packages/flask': Permission denied
```
If I add some debugging commands and explicitly point to the pip in my virtualenv, things get even more confusing:
```
virtualenv --python=/usr/bin/python2.7 --no-site-packages . && . ./bin/activate && ls -l bin && which pip && pwd && ./bin/pip install -r requirements.txt
```
Which generates the following output:
```
New python executable in ./bin/python2.7
Not overwriting existing python script ./bin/python (you must use ./bin/python2.7)
Installing setuptools, pip...done.
Running virtualenv with interpreter /usr/bin/python2.7
```
It appears Jenkins doesn't rebuild the environment from scratch for each build, which strikes me as an odd choice, but that shouldn't affect my immediate issue.
The output from the "ls -l bin" shows pip to be installed in the virtualenv and executable:
```
-rw-r--r-- 1 jenkins jenkins 2248 Apr 9 21:14 activate
-rw-r--r-- 1 jenkins jenkins 1304 Apr 9 21:14 activate.csh
-rw-r--r-- 1 jenkins jenkins 2517 Apr 9 21:14 activate.fish
-rw-r--r-- 1 jenkins jenkins 1129 Apr 9 21:14 activate_this.py
-rwxr-xr-x 1 jenkins jenkins 278 Apr 9 21:14 easy_install
-rwxr-xr-x 1 jenkins jenkins 278 Apr 9 21:14 easy_install-2.7
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip2
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip2.7
lrwxrwxrwx 1 jenkins jenkins 9 Apr 10 19:31 python -> python2.7
lrwxrwxrwx 1 jenkins jenkins 9 Apr 10 19:31 python2 -> python2.7
-rwxr-xr-x 1 jenkins jenkins 3349512 Apr 10 19:31 python2.7
```
The output of "which pip" seems to want to use the correct one:
```
/var/lib/jenkins/jobs/Run Tests/workspace/bin/pip
```
My current working directory is what I expect it to be:
```
/var/lib/jenkins/jobs/Run Tests/workspace
```
But... wtf?
```
/bin/sh: 1: ./bin/pip: Permission denied
make: *** [virtualenv] Error 126
Build step 'Execute shell' marked build as failure
Finished: FAILURE
```
|
2015/04/10
|
[
"https://Stackoverflow.com/questions/29570085",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1332304/"
] |
I'd recommend avoiding ShiningPanda.
I set up my virtual environments with [Anaconda](https://docs.continuum.io/anaconda/install)/[Miniconda](https://conda.io/miniconda.html). When installing conda, make sure you're running as the jenkins user.
```
your_user@$ sudo -u jenkins sh
jenkins@$ wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
jenkins@$ bash Miniconda3-latest-Linux-x86_64.sh
```
Since Jenkins runs `sh` rather than `bash`, I added conda path to `/etc/profile`:
```
export PATH="/var/lib/jenkins/miniconda3/bin:$PATH"
```
Then in Jenkinsfile you can create and *delete* conda environments. Here's an example that creates a new environment for each build:
```
pipeline {
agent any
stages {
stage('Unit tests') {
steps {
sh '''
conda create --yes -n ${BUILD_TAG} python
source activate ${BUILD_TAG}
          # example of unit test with nose2 (# is the shell comment character, not //)
pip install nose2
nose2
'''
}
}
}
post {
always {
sh 'conda remove --yes -n ${BUILD_TAG} --all'
}
}
}
```
|
I had the same problem. As I can see, your project is named 'Run Tests', so the name contains a space. That was the problem for me. I simply renamed the project to something like RunTests, and the venv works now. Note that Jenkins will ask you to confirm renaming the project.
|
29,570,085
|
I am using a Makefile to provide consistent single commands for setting up a virtualenv, running tests, etc. I have configured my Jenkins instance to pull from a mercurial repo and then run "make virtualenv", which does this:
```
virtualenv --python=/usr/bin/python2.7 --no-site-packages . && . ./bin/activate && pip install -r requirements.txt
```
But for some reason it insists on using the system-installed pip and trying to install my package dependencies in the system site-packages rather than the virtualenv:
```
error: could not create '/usr/local/lib/python2.7/dist-packages/flask': Permission denied
```
If I add some debugging commands and explicitly point to the pip in my virtualenv, things get even more confusing:
```
virtualenv --python=/usr/bin/python2.7 --no-site-packages . && . ./bin/activate && ls -l bin && which pip && pwd && ./bin/pip install -r requirements.txt
```
Which generates the following output:
```
New python executable in ./bin/python2.7
Not overwriting existing python script ./bin/python (you must use ./bin/python2.7)
Installing setuptools, pip...done.
Running virtualenv with interpreter /usr/bin/python2.7
```
It appears Jenkins doesn't rebuild the environment from scratch for each build, which strikes me as an odd choice, but that shouldn't affect my immediate issue.
The output from the "ls -l bin" shows pip to be installed in the virtualenv and executable:
```
-rw-r--r-- 1 jenkins jenkins 2248 Apr 9 21:14 activate
-rw-r--r-- 1 jenkins jenkins 1304 Apr 9 21:14 activate.csh
-rw-r--r-- 1 jenkins jenkins 2517 Apr 9 21:14 activate.fish
-rw-r--r-- 1 jenkins jenkins 1129 Apr 9 21:14 activate_this.py
-rwxr-xr-x 1 jenkins jenkins 278 Apr 9 21:14 easy_install
-rwxr-xr-x 1 jenkins jenkins 278 Apr 9 21:14 easy_install-2.7
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip2
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip2.7
lrwxrwxrwx 1 jenkins jenkins 9 Apr 10 19:31 python -> python2.7
lrwxrwxrwx 1 jenkins jenkins 9 Apr 10 19:31 python2 -> python2.7
-rwxr-xr-x 1 jenkins jenkins 3349512 Apr 10 19:31 python2.7
```
The output of "which pip" seems to want to use the correct one:
```
/var/lib/jenkins/jobs/Run Tests/workspace/bin/pip
```
My current working directory is what I expect it to be:
```
/var/lib/jenkins/jobs/Run Tests/workspace
```
But... wtf?
```
/bin/sh: 1: ./bin/pip: Permission denied
make: *** [virtualenv] Error 126
Build step 'Execute shell' marked build as failure
Finished: FAILURE
```
|
2015/04/10
|
[
"https://Stackoverflow.com/questions/29570085",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1332304/"
] |
There are some issues with the venv-python plugin across different OS environments.
Here is how I call a python method manually. It is not best practice, but it works.
```groovy
// Put this stage at the top of the pipeline
stage('Prepare venv') {
    steps {
        script {
            if (isUnix()) {
                env.ISUNIX = "TRUE" // cache isUnix() to stop Blue Ocean from showing a duplicate "Checks if running on a Unix-like node" step for every Python() call below
                sh 'python3 -m venv pyenv'
                PYTHON_PATH = sh(script: 'echo ${WORKSPACE}/pyenv/bin/', returnStdout: true).trim()
            }
            else {
                env.ISUNIX = "FALSE"
                powershell(script:"py -3 -m venv pyenv") // Windows does not allow calling python3.exe with venv. https://github.com/msys2/MINGW-packages/issues/5001
                PYTHON_PATH = powershell(script: "Write-Output ${WORKSPACE}\\pyenv\\Scripts\\", returnStdout: true).trim() // use powershell here as well, since sh is usually unavailable on Windows agents
            }
            try {
                // An agent with an older pip version can sometimes fail due to incompatible packages.
                Python("-m pip install --upgrade pip")
            }
            catch (ignore) { } // upgrading pip can report failure when it is already the latest version
            // After this you can call Python() anywhere in the pipeline
            Python("-m pip install -r requirements.txt")
        }
    }
}
// Several plugins like WithPyenv do not work perfectly across platforms when using a virtualenv.
// Put this method outside the pipeline block
def Python(String command) {
    if (env.ISUNIX == "TRUE") {
        sh script:"source ${WORKSPACE}/pyenv/bin/activate && python ${command}", label: "python ${command}"
    }
    else {
        powershell script:"${WORKSPACE}\\pyenv\\Scripts\\Activate.ps1 ; python ${command}", label: "python ${command}"
    }
}
```
|
[@hardbyte](https://stackoverflow.com/users/809692/hardbyte)'s answer
*The default shell that Jenkins uses is /bin/sh - this is configurable in Manage Jenkins -> Configure System -> Shell -> Shell executable. Setting this to /bin/bash will make source work.*
plus:
* <https://stackoverflow.com/a/70812248/1497139>
* [pyvenv-3.4 returned non-zero exit status 1](https://stackoverflow.com/questions/24123150/pyvenv-3-4-returned-non-zero-exit-status-1)
got it working for me:
```bash
sudo apt install python3.10-venv
```
and then in jenkins in the execute shell step:
```bash
python3.10 -m venv .venv
source .venv/bin/activate
...
```
|
29,570,085
|
I am using a Makefile to provide consistent single commands for setting up a virtualenv, running tests, etc. I have configured my Jenkins instance to pull from a mercurial repo and then run "make virtualenv", which does this:
```
virtualenv --python=/usr/bin/python2.7 --no-site-packages . && . ./bin/activate && pip install -r requirements.txt
```
But for some reason it insists on using the system-installed pip and trying to install my package dependencies in the system site-packages rather than the virtualenv:
```
error: could not create '/usr/local/lib/python2.7/dist-packages/flask': Permission denied
```
If I add some debugging commands and explicitly point to the pip in my virtualenv, things get even more confusing:
```
virtualenv --python=/usr/bin/python2.7 --no-site-packages . && . ./bin/activate && ls -l bin && which pip && pwd && ./bin/pip install -r requirements.txt
```
Which generates the following output:
```
New python executable in ./bin/python2.7
Not overwriting existing python script ./bin/python (you must use ./bin/python2.7)
Installing setuptools, pip...done.
Running virtualenv with interpreter /usr/bin/python2.7
```
It appears Jenkins doesn't rebuild the environment from scratch for each build, which strikes me as an odd choice, but that shouldn't affect my immediate issue.
The output from the "ls -l bin" shows pip to be installed in the virtualenv and executable:
```
-rw-r--r-- 1 jenkins jenkins 2248 Apr 9 21:14 activate
-rw-r--r-- 1 jenkins jenkins 1304 Apr 9 21:14 activate.csh
-rw-r--r-- 1 jenkins jenkins 2517 Apr 9 21:14 activate.fish
-rw-r--r-- 1 jenkins jenkins 1129 Apr 9 21:14 activate_this.py
-rwxr-xr-x 1 jenkins jenkins 278 Apr 9 21:14 easy_install
-rwxr-xr-x 1 jenkins jenkins 278 Apr 9 21:14 easy_install-2.7
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip2
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip2.7
lrwxrwxrwx 1 jenkins jenkins 9 Apr 10 19:31 python -> python2.7
lrwxrwxrwx 1 jenkins jenkins 9 Apr 10 19:31 python2 -> python2.7
-rwxr-xr-x 1 jenkins jenkins 3349512 Apr 10 19:31 python2.7
```
The output of "which pip" seems to want to use the correct one:
```
/var/lib/jenkins/jobs/Run Tests/workspace/bin/pip
```
My current working directory is what I expect it to be:
```
/var/lib/jenkins/jobs/Run Tests/workspace
```
But... wtf?
```
/bin/sh: 1: ./bin/pip: Permission denied
make: *** [virtualenv] Error 126
Build step 'Execute shell' marked build as failure
Finished: FAILURE
```
|
2015/04/10
|
[
"https://Stackoverflow.com/questions/29570085",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1332304/"
] |
I'd recommend avoiding ShiningPanda.
I set up my virtual environments with [Anaconda](https://docs.continuum.io/anaconda/install)/[Miniconda](https://conda.io/miniconda.html). When installing conda, make sure you're running as the jenkins user.
```
your_user@$ sudo -u jenkins sh
jenkins@$ wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
jenkins@$ bash Miniconda3-latest-Linux-x86_64.sh
```
Since Jenkins runs `sh` rather than `bash`, I added conda path to `/etc/profile`:
```
export PATH="/var/lib/jenkins/miniconda3/bin:$PATH"
```
Then in Jenkinsfile you can create and *delete* conda environments. Here's an example that creates a new environment for each build:
```
pipeline {
agent any
stages {
stage('Unit tests') {
steps {
sh '''
conda create --yes -n ${BUILD_TAG} python
source activate ${BUILD_TAG}
          # example of unit test with nose2 (# is the shell comment character, not //)
pip install nose2
nose2
'''
}
}
}
post {
always {
sh 'conda remove --yes -n ${BUILD_TAG} --all'
}
}
}
```
|
There are some issues with the venv-python plugin across different OS environments.
Here is how I call a python method manually. It is not best practice, but it works.
```groovy
// Put this stage at the top of the pipeline
stage('Prepare venv') {
    steps {
        script {
            if (isUnix()) {
                env.ISUNIX = "TRUE" // cache isUnix() to stop Blue Ocean from showing a duplicate "Checks if running on a Unix-like node" step for every Python() call below
                sh 'python3 -m venv pyenv'
                PYTHON_PATH = sh(script: 'echo ${WORKSPACE}/pyenv/bin/', returnStdout: true).trim()
            }
            else {
                env.ISUNIX = "FALSE"
                powershell(script:"py -3 -m venv pyenv") // Windows does not allow calling python3.exe with venv. https://github.com/msys2/MINGW-packages/issues/5001
                PYTHON_PATH = powershell(script: "Write-Output ${WORKSPACE}\\pyenv\\Scripts\\", returnStdout: true).trim() // use powershell here as well, since sh is usually unavailable on Windows agents
            }
            try {
                // An agent with an older pip version can sometimes fail due to incompatible packages.
                Python("-m pip install --upgrade pip")
            }
            catch (ignore) { } // upgrading pip can report failure when it is already the latest version
            // After this you can call Python() anywhere in the pipeline
            Python("-m pip install -r requirements.txt")
        }
    }
}
// Several plugins like WithPyenv do not work perfectly across platforms when using a virtualenv.
// Put this method outside the pipeline block
def Python(String command) {
    if (env.ISUNIX == "TRUE") {
        sh script:"source ${WORKSPACE}/pyenv/bin/activate && python ${command}", label: "python ${command}"
    }
    else {
        powershell script:"${WORKSPACE}\\pyenv\\Scripts\\Activate.ps1 ; python ${command}", label: "python ${command}"
    }
}
```
|
29,570,085
|
I am using a Makefile to provide consistent single commands for setting up a virtualenv, running tests, etc. I have configured my Jenkins instance to pull from a mercurial repo and then run "make virtualenv", which does this:
```
virtualenv --python=/usr/bin/python2.7 --no-site-packages . && . ./bin/activate && pip install -r requirements.txt
```
But for some reason it insists on using the system-installed pip and trying to install my package dependencies in the system site-packages rather than the virtualenv:
```
error: could not create '/usr/local/lib/python2.7/dist-packages/flask': Permission denied
```
If I add some debugging commands and explicitly point to the pip in my virtualenv, things get even more confusing:
```
virtualenv --python=/usr/bin/python2.7 --no-site-packages . && . ./bin/activate && ls -l bin && which pip && pwd && ./bin/pip install -r requirements.txt
```
Which generates the following output:
```
New python executable in ./bin/python2.7
Not overwriting existing python script ./bin/python (you must use ./bin/python2.7)
Installing setuptools, pip...done.
Running virtualenv with interpreter /usr/bin/python2.7
```
It appears Jenkins doesn't rebuild the environment from scratch for each build, which strikes me as an odd choice, but that shouldn't affect my immediate issue.
The output from the "ls -l bin" shows pip to be installed in the virtualenv and executable:
```
-rw-r--r-- 1 jenkins jenkins 2248 Apr 9 21:14 activate
-rw-r--r-- 1 jenkins jenkins 1304 Apr 9 21:14 activate.csh
-rw-r--r-- 1 jenkins jenkins 2517 Apr 9 21:14 activate.fish
-rw-r--r-- 1 jenkins jenkins 1129 Apr 9 21:14 activate_this.py
-rwxr-xr-x 1 jenkins jenkins 278 Apr 9 21:14 easy_install
-rwxr-xr-x 1 jenkins jenkins 278 Apr 9 21:14 easy_install-2.7
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip2
-rwxr-xr-x 1 jenkins jenkins 250 Apr 9 21:14 pip2.7
lrwxrwxrwx 1 jenkins jenkins 9 Apr 10 19:31 python -> python2.7
lrwxrwxrwx 1 jenkins jenkins 9 Apr 10 19:31 python2 -> python2.7
-rwxr-xr-x 1 jenkins jenkins 3349512 Apr 10 19:31 python2.7
```
The output of "which pip" seems to want to use the correct one:
```
/var/lib/jenkins/jobs/Run Tests/workspace/bin/pip
```
My current working directory is what I expect it to be:
```
/var/lib/jenkins/jobs/Run Tests/workspace
```
But... wtf?
```
/bin/sh: 1: ./bin/pip: Permission denied
make: *** [virtualenv] Error 126
Build step 'Execute shell' marked build as failure
Finished: FAILURE
```
|
2015/04/10
|
[
"https://Stackoverflow.com/questions/29570085",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1332304/"
] |
I'd recommend avoiding ShiningPanda.
I set up my virtual environments with [Anaconda](https://docs.continuum.io/anaconda/install)/[Miniconda](https://conda.io/miniconda.html). When installing conda, make sure you're running as the jenkins user.
```
your_user@$ sudo -u jenkins sh
jenkins@$ wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
jenkins@$ bash Miniconda3-latest-Linux-x86_64.sh
```
Since Jenkins runs `sh` rather than `bash`, I added conda path to `/etc/profile`:
```
export PATH="/var/lib/jenkins/miniconda3/bin:$PATH"
```
Then in Jenkinsfile you can create and *delete* conda environments. Here's an example that creates a new environment for each build:
```
pipeline {
agent any
stages {
stage('Unit tests') {
steps {
sh '''
conda create --yes -n ${BUILD_TAG} python
source activate ${BUILD_TAG}
          # example of unit test with nose2 (# is the shell comment character, not //)
pip install nose2
nose2
'''
}
}
}
post {
always {
sh 'conda remove --yes -n ${BUILD_TAG} --all'
}
}
}
```
|
After activating the virtualenv, try to run pip as a module:
```
python -m pip install ...
```
**`python -m pip` vs `pip`**
* `python -m pip`: executes the Python interpreter binary, which runs the `pip` module from its site-packages directory
* `pip`: executes the `pip` binary/script picked up from `$PATH`
I have found that using `python -m pip` solved most of the pip permission problems encountered.
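The same idea works from inside a script. Here is a minimal sketch (an illustration, not from the answer above) that runs pip through `sys.executable`, so packages always land in the environment of the interpreter that is currently running:
```python
import subprocess
import sys

# Run pip as a module of the *current* interpreter; inside a virtualenv this
# guarantees the install targets that virtualenv, not the system Python.
subprocess.check_call([sys.executable, "-m", "pip", "install", "requests"])
```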
|
55,980,027
|
I need to add to a .csv file based on user input. Other portions of the code add to the file, but I can't figure out how to have it add user input. I'm new to python and coding in general.
I have other portions of the code that can merge or draw the data from a .csv database and write it to the separate file, but can't figure out how to get it to take multiple user inputs to write or append to the outgoing file.
```
def manualentry():
    stock = input("Enter Stock #: ") #Generate data for each column to fill in to the output file.
    VIN = input("Enter Full VIN: ") #Each line asks the user to add data to the line.
    make = input("Enter Make: ")
    model = input("Enter Model: ")
    year = input("Enter Year: ")
    l8v = input("Enter L8V: ")
    print(stock, VIN, make, model, year, l8v) #Prints the line of user data
    input4 = input("Append to inventory list? Y/N") #Asks the user whether to append the data to the output file.
    if input4 == "Y" or input4 == "y":
        with open('INV.csv', 'a', newline='') as outfile: #Open a separate csv to write to, an output for collected data
            w = csv.writer(outfile) #Need to write the user input to the .csv file.
            w.writerow([stock, VIN, make, model, year, l8v]) #<-This is the portion that seems to fall apart.
        print("INVENTORY UPDATED")
        starter() #Restarts the whole program from the beginning.
    if input4 == "N" or input4 == "n":
        print("SKIPPING. RESTARTING....")
        starter() #Reset
    else:
        print("Invalid entry, restarting program.")
        starter() #Reset
    starter() #R E S E T !
```
Just need the user inputs to be applied to the .csv and saved there. Earlier portions of the code work perfectly; only this part, which should append to the .csv file, does not. It's to fill in missing data that would otherwise not be listed in a separate database.
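For reference, a minimal self-contained sketch of just the append step (assuming `import csv` at the top of the module; the sample values below are made up for illustration):
```python
import csv

def append_row(row, path="INV.csv"):
    """Append one row of user-supplied fields to the inventory csv."""
    with open(path, "a", newline="") as outfile:
        csv.writer(outfile).writerow(row)

# hypothetical example values for stock, VIN, make, model, year, l8v
append_row(["101", "1FTWW31P34EA00000", "Ford", "F-350", "2004", "A1"])
```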
|
2019/05/04
|
[
"https://Stackoverflow.com/questions/55980027",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11450864/"
] |
Trying to output a SimpleXMLElement using `var_dump()` isn't a good idea and, as you have seen, doesn't give you much.
If you just want to see the XML it has loaded, instead use...
```
echo $xml->asXML();
```
which will show you the XML has loaded OK; then, to output the field you're after, it is just
```
$xml = simplexml_load_string($response);
echo $xml->Header->AsyncReplyFlag;
```
|
Using the `XPath` query
```
$xml = new SimpleXMLElement($xmlData);
echo $xml->xpath('//AsyncReplyFlag')[0];
```
OR
You can use `xml_parser_create`
```
$p = xml_parser_create();
xml_parse_into_struct($p, $xml, $values, $indexes);// $xml containing the XML
xml_parser_free($p);
echo $values[12]['value'];
```
For other details, you can `print_r($values)`
|
55,980,027
|
I need to add to a .csv file based on user input. Other portions of the code add to the file, but I can't figure out how to have it add user input. I'm new to python and coding in general.
I have other portions of the code that can merge or draw the data from a .csv database and write it to the separate file, but can't figure out how to get it to take multiple user inputs to write or append to the outgoing file.
```
def manualentry():
    stock = input("Enter Stock #: ") #Generate data for each column to fill in to the output file.
    VIN = input("Enter Full VIN: ") #Each line asks the user to add data to the line.
    make = input("Enter Make: ")
    model = input("Enter Model: ")
    year = input("Enter Year: ")
    l8v = input("Enter L8V: ")
    print(stock, VIN, make, model, year, l8v) #Prints the line of user data
    input4 = input("Append to inventory list? Y/N") #Asks the user whether to append the data to the output file.
    if input4 == "Y" or input4 == "y":
        with open('INV.csv', 'a', newline='') as outfile: #Open a separate csv to write to, an output for collected data
            w = csv.writer(outfile) #Need to write the user input to the .csv file.
            w.writerow([stock, VIN, make, model, year, l8v]) #<-This is the portion that seems to fall apart.
        print("INVENTORY UPDATED")
        starter() #Restarts the whole program from the beginning.
    if input4 == "N" or input4 == "n":
        print("SKIPPING. RESTARTING....")
        starter() #Reset
    else:
        print("Invalid entry, restarting program.")
        starter() #Reset
    starter() #R E S E T !
```
Just need the user inputs to be applied to the .csv and saved there. Earlier portions of the code work perfectly; only this part, which should append to the .csv file, does not. It's to fill in missing data that would otherwise not be listed in a separate database.
|
2019/05/04
|
[
"https://Stackoverflow.com/questions/55980027",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11450864/"
] |
DOMDocument should have no problem extracting it, as a quick one-liner:
```
echo (@DOMDocument::loadXML($response))->getElementsByTagName("AsyncReplyFlag")->item(0)->textContent;
```
... or if you want to meticulously check for errors every step of the way,
```
$xml_errors=[];
set_error_handler(function(int $errno, string $errstr, string $errfile, int $errline, array $errcontext) use(&$xml_errors){
ob_start();
call_user_func_array('var_dump',func_get_args());
$xml_errors[]=ob_get_clean();
});
$domd=new DOMDocument();
$loaded=$domd->loadXML($response);
restore_error_handler();
if(!$loaded){
if(defined('STDERR')){
fprintf(STDERR,"%s",$response);
}
throw new \RuntimeException("errors parsing XML! xml printed in stderr, parsing errors: ".print_r($xml_errors,true));
}
$ele=$domd->getElementsByTagName("AsyncReplyFlag");
if($ele->length<1){
if(defined('STDERR')){
fprintf(STDERR,"%s",$response);
}
throw new \RuntimeException("did not get AsyncReplyFlag in response! (xml printed in stderr)");
}
echo $ele->item(0)->textContent;
```
|
Trying to output a SimpleXMLElement using `var_dump()` isn't a good idea and, as you have seen, doesn't give you much.
If you just want to see the XML it has loaded, instead use...
```
echo $xml->asXML();
```
which will show you the XML has loaded OK; then, to output the field you're after, it is just
```
$xml = simplexml_load_string($response);
echo $xml->Header->AsyncReplyFlag;
```
|
55,980,027
|
I need to add to a .csv file based on user input. Other portions of the code add to the file, but I can't figure out how to have it add user input. I'm new to python and coding in general.
I have other portions of the code that can merge or draw the data from a .csv database and write it to the separate file, but can't figure out how to get it to take multiple user inputs to write or append to the outgoing file.
```
def manualentry():
    stock = input("Enter Stock #: ") #Generate data for each column to fill in to the output file.
    VIN = input("Enter Full VIN: ") #Each line asks the user to add data to the line.
    make = input("Enter Make: ")
    model = input("Enter Model: ")
    year = input("Enter Year: ")
    l8v = input("Enter L8V: ")
    print(stock, VIN, make, model, year, l8v) #Prints the line of user data
    input4 = input("Append to inventory list? Y/N") #Asks the user whether to append the data to the output file.
    if input4 == "Y" or input4 == "y":
        with open('INV.csv', 'a', newline='') as outfile: #Open a separate csv to write to, an output for collected data
            w = csv.writer(outfile) #Need to write the user input to the .csv file.
            w.writerow([stock, VIN, make, model, year, l8v]) #<-This is the portion that seems to fall apart.
        print("INVENTORY UPDATED")
        starter() #Restarts the whole program from the beginning.
    if input4 == "N" or input4 == "n":
        print("SKIPPING. RESTARTING....")
        starter() #Reset
    else:
        print("Invalid entry, restarting program.")
        starter() #Reset
    starter() #R E S E T !
```
Just need the user inputs to be applied to the .csv and saved there. Earlier portions of the code work perfectly; only this part, which should append to the .csv file, does not. It's to fill in missing data that would otherwise not be listed in a separate database.
|
2019/05/04
|
[
"https://Stackoverflow.com/questions/55980027",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11450864/"
] |
DOMDocument should have no problem extracting it, as a quick one-liner:
```
echo (@DOMDocument::loadXML($response))->getElementsByTagName("AsyncReplyFlag")->item(0)->textContent;
```
... or if you want to meticulously check for errors every step of the way,
```
$xml_errors=[];
set_error_handler(function(int $errno, string $errstr, string $errfile, int $errline, array $errcontext) use(&$xml_errors){
ob_start();
call_user_func_array('var_dump',func_get_args());
$xml_errors[]=ob_get_clean();
});
$domd=new DOMDocument();
$loaded=$domd->loadXML($response);
restore_error_handler();
if(!$loaded){
if(defined('STDERR')){
fprintf(STDERR,"%s",$response);
}
throw new \RuntimeException("errors parsing XML! xml printed in stderr, parsing errors: ".print_r($xml_errors,true));
}
$ele=$domd->getElementsByTagName("AsyncReplyFlag");
if($ele->length<1){
if(defined('STDERR')){
fprintf(STDERR,"%s",$response);
}
throw new \RuntimeException("did not get AsyncReplyFlag in response! (xml printed in stderr)");
}
echo $ele->item(0)->textContent;
```
|
Using the `XPath` query
```
$xml = new SimpleXMLElement($xmlData);
echo $xml->xpath('//AsyncReplyFlag')[0];
```
OR
You can use `xml_parser_create`
```
$p = xml_parser_create();
xml_parse_into_struct($p, $xml, $values, $indexes);// $xml containing the XML
xml_parser_free($p);
echo $values[12]['value'];
```
For other details, you can `print_r($values)`
|
419,698
|
**Overall Plan**
Get my class information to automatically optimize and select my uni class timetable
Overall Algorithm
1. Log on to the website using its Enterprise Sign On Engine login
2. Find my current semester and its related subjects (pre-setup)
3. Navigate to the right page and get the data from each related subject (lecture, practical and workshop times)
4. Strip the data of useless information
5. Rank the classes which are closer to each other higher, and the ones on random days lower
6. Solve for the best timetable solution
7. Output me a detailed list of the BEST CASE information
8. Output me a detailed list of the possible class information (some might be full, for example)
9. Get the program to select the best classes automatically
10. Keep checking to see if we can achieve 7.
Step 6 in detail:
Get all the classes, using the lectures as a focus point; the lectures would be ranked highest (only one per subject), and the other classes arranged around them.
**Questions**
Can anyone supply me with links to something that might be similar to this, hopefully written in python?
In regards to 6: what data structure would you recommend to store this information in? A linked list where each node is a uniclass object?
Should I write all information to a text file?
I am thinking uniclass would be set up with the following attributes:
* Subject
* Rank
* Time
* Type
* Teacher
I am hardly experienced in Python and thought this would be a good learning project to try to accomplish.
Thanks for any help and links provided to help get me started, **open to edits to tag appropriately or whatever is necessary** (not sure what this falls under other than programming and python?)
EDIT: can't really get the proper formatting I want for this SO post ><
|
2009/01/07
|
[
"https://Stackoverflow.com/questions/419698",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/45211/"
] |
Depending on how far you plan on taking #6, and how big the dataset is, it may be non-trivial; it certainly smacks of NP-hard global optimisation to me...
Still, if you're talking about tens (rather than hundreds) of nodes, a fairly dumb algorithm should give good enough performance.
So, you have two constraints:
1. A total ordering on the classes by score;
this is flexible.
2. Class clashes; this is not flexible.
What I mean by flexible is that you *can* go to more spaced out classes (with lower scores), but you *cannot* be in two classes at once. Interestingly, there's likely to be a positive correlation between score and clashes; higher scoring classes are more likely to clash.
My first pass at an algorithm:
```
selected_classes = []
# sort highest-scoring classes first so the greedy pass prefers them
classes = sorted(classes, key=lambda c: c.score, reverse=True)
for clas in classes:
    if not clas.clashes_with(selected_classes):
        selected_classes.append(clas)
```
Working out clashes might be awkward if classes are of uneven lengths, start at strange times and so on. Mapping start and end times into a simplified representation of "blocks" of time (every 15 minutes / 30 minutes or whatever you need) would make it easier to look for overlaps between the start and end of different classes.
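For illustration, a minimal sketch of that block-mapping idea (mine, not part of the original answer), with an assumed `Clas` record and 30-minute slots:
```python
from dataclasses import dataclass

SLOT_MINUTES = 30  # granularity of the time grid

@dataclass
class Clas:
    day: int    # 0 = Monday, 1 = Tuesday, ...
    start: int  # minutes from midnight, e.g. 9:00 -> 540
    end: int
    score: float

def blocks(c):
    """Set of (day, slot) blocks the class occupies."""
    first = c.start // SLOT_MINUTES
    last = -(-c.end // SLOT_MINUTES)  # ceiling division
    return {(c.day, t) for t in range(first, last)}

def clashes(a, b):
    """Two classes clash if they share any time block."""
    return bool(blocks(a) & blocks(b))

# Example: two Monday classes overlapping between 10:00 and 10:30
print(clashes(Clas(0, 540, 630, 1.0), Clas(0, 600, 660, 0.5)))  # True
```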
|
[BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/) was mentioned here a few times, e.g [get-list-of-xml-attribute-values-in-python](https://stackoverflow.com/questions/87317/get-list-of-xml-attribute-values-in-python).
>
> Beautiful Soup is a Python HTML/XML parser designed for quick turnaround projects like screen-scraping. Three features make it powerful:
>
>
> 1. Beautiful Soup won't choke if you give it bad markup. It yields a parse tree that makes approximately as much sense as your original document. This is usually good enough to collect the data you need and run away.
> 2. Beautiful Soup provides a few simple methods and Pythonic idioms for navigating, searching, and modifying a parse tree: a toolkit for dissecting a document and extracting what you need. You don't have to create a custom parser for each application.
> 3. Beautiful Soup automatically converts incoming documents to Unicode and outgoing documents to UTF-8. You don't have to think about encodings, unless the document doesn't specify an encoding and Beautiful Soup can't autodetect one. Then you just have to specify the original encoding.
>
>
> Beautiful Soup parses anything you give it, and does the tree traversal stuff for you. You can tell it "Find all the links", or "Find all the links of class externalLink", or "Find all the links whose urls match "foo.com", or "Find the table heading that's got bold text, then give me that text."
>
>
> Valuable data that was once locked up in poorly-designed websites is now within your reach. Projects that would have taken hours take only minutes with Beautiful Soup.
>
>
>
|
419,698
|
**Overall Plan**
Get my class information to automatically optimize and select my uni class timetable
Overall Algorithm
1. Log on to the website using its Enterprise Sign On Engine login
2. Find my current semester and its related subjects (pre-setup)
3. Navigate to the right page and get the data from each related subject (lecture, practical and workshop times)
4. Strip the data of useless information
5. Rank the classes which are closer to each other higher, and the ones on random days lower
6. Solve for the best timetable solution
7. Output me a detailed list of the BEST CASE information
8. Output me a detailed list of the possible class information (some might be full, for example)
9. Get the program to select the best classes automatically
10. Keep checking to see if we can achieve 7.
Step 6 in detail:
Get all the classes, using the lectures as a focus point; the lectures would be ranked highest (only one per subject), and the other classes arranged around them.
**Questions**
Can anyone supply me with links to something that might be similar to this, hopefully written in python?
In regards to 6: what data structure would you recommend to store this information in? A linked list where each node is a uniclass object?
Should I write all information to a text file?
I am thinking uniclass would be set up with the following attributes:
* Subject
* Rank
* Time
* Type
* Teacher
I am hardly experienced in Python and thought this would be a good learning project to try to accomplish.
Thanks for any help and links provided to help get me started, **open to edits to tag appropriately or whatever is necessary** (not sure what this falls under other than programming and python?)
EDIT: can't really get the proper formatting I want for this SO post ><
|
2009/01/07
|
[
"https://Stackoverflow.com/questions/419698",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/45211/"
] |
Depending on how far you plan on taking #6, and how big the dataset is, it may be non-trivial; it certainly smacks of NP-hard global optimisation to me...
Still, if you're talking about tens (rather than hundreds) of nodes, a fairly dumb algorithm should give good enough performance.
So, you have two constraints:
1. A total ordering on the classes by score;
this is flexible.
2. Class clashes; this is not flexible.
What I mean by flexible is that you *can* go to more spaced out classes (with lower scores), but you *cannot* be in two classes at once. Interestingly, there's likely to be a positive correlation between score and clashes; higher scoring classes are more likely to clash.
My first pass at an algorithm:
```
selected_classes = []
# sort highest-scoring classes first so the greedy pass prefers them
classes = sorted(classes, key=lambda c: c.score, reverse=True)
for clas in classes:
    if not clas.clashes_with(selected_classes):
        selected_classes.append(clas)
```
Working out clashes might be awkward if classes are of uneven lengths, start at strange times and so on. Mapping start and end times into a simplified representation of "blocks" of time (every 15 minutes / 30 minutes or whatever you need) would make it easier to look for overlaps between the start and end of different classes.
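For illustration, a minimal sketch of that block-mapping idea (mine, not part of the original answer), with an assumed `Clas` record and 30-minute slots:
```python
from dataclasses import dataclass

SLOT_MINUTES = 30  # granularity of the time grid

@dataclass
class Clas:
    day: int    # 0 = Monday, 1 = Tuesday, ...
    start: int  # minutes from midnight, e.g. 9:00 -> 540
    end: int
    score: float

def blocks(c):
    """Set of (day, slot) blocks the class occupies."""
    first = c.start // SLOT_MINUTES
    last = -(-c.end // SLOT_MINUTES)  # ceiling division
    return {(c.day, t) for t in range(first, last)}

def clashes(a, b):
    """Two classes clash if they share any time block."""
    return bool(blocks(a) & blocks(b))

# Example: two Monday classes overlapping between 10:00 and 10:30
print(clashes(Clas(0, 540, 630, 1.0), Clas(0, 600, 660, 0.5)))  # True
```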
|
There are waaay too many questions here.
Please break this down into subject areas and ask specific questions on each subject. Please focus on one of these with specific questions. Please define your terms: "best" doesn't mean anything without some specific measurement to optimize.
Here's what I think I see in your list of topics.
1. Scraping HTML
   1. Logon to the website using its Enterprise Sign On Engine login
   2. Find my current semester and its related subjects (pre setup)
   3. Navigate to the right page and get the data from each related subject (lecture, practical and workshop times)
   4. Strip the data of useless information
2. Some algorithm to "rank" based on "closer to each other", looking for a "best time". Since these terms are undefined, it's nearly impossible to provide any help on this.
   5. Rank the classes which are closer to each other higher, the ones on random days lower
   6. Solve a best time table solution
3. Output something.
   7. Output me a detailed list of the BEST CASE information
   8. Output me a detailed list of the possible class information (some might be full for example)
4. Optimize something, looking for "best". Another undefinable term.
   9. Get the program to select the best classes automatically
   10. Keep checking to see if we can achieve 7.
BTW, Python has "[lists](http://docs.python.org/tutorial/introduction.html#lists)". Whether or not they're "linked" doesn't really enter into it.
|
5,997,027
|
I don't know if this is an obvious bug, but while running a Python script for varying the parameters of a simulation, I realized the results with delta = 0.29 and delta = 0.58 were missing. On investigation, I noticed that the following Python code:
```
for i_delta in range(0, 101, 1):
delta = float(i_delta) / 100
(...)
filename = 'foo' + str(int(delta * 100)) + '.dat'
```
generated identical files for delta = 0.28 and 0.29, same with .57 and .58, the reason being that python returns float(29)/100 as 0.28999999999999998. But that isn't a systematic error, not in the sense it happens to every integer. So I created the following Python script:
```
import sys
n = int(sys.argv[1])
for i in range(0, n + 1):
a = int(100 * (float(i) / 100))
if i != a: print i, a
```
And I can't see any pattern in the numbers for which this rounding error happens. Why does this happen with those particular numbers?
|
2011/05/13
|
[
"https://Stackoverflow.com/questions/5997027",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/752910/"
] |
Any number that can't be built from exact powers of two can't be represented exactly as a floating point number; it needs to be approximated. Sometimes the closest approximation will be less than the actual number.
Read [What Every Computer Scientist Should Know About Floating-Point Arithmetic](http://download.oracle.com/docs/cd/E19957-01/806-3568/ncg_goldberg.html).
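To make this concrete for the 0.29 case from the question, here is a short interactive check; each value follows directly from the behaviour the question reports:
```python
>>> delta = float(29) / 100      # the nearest double to 0.29 is slightly below it
>>> delta * 100 < 29             # so multiplying back gives a hair under 29
True
>>> int(delta * 100)             # int() truncates toward zero
28
>>> int(round(delta * 100))      # rounding first recovers the intended value
29
```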
|
It's very well known, due to the nature of [floating point numbers](http://download.oracle.com/docs/cd/E19957-01/806-3568/ncg_goldberg.html).
If you want to do decimal arithmetic rather than floating point arithmetic, there are [libraries](http://docs.python.org/library/decimal.html) for this.
E.g.,
```
>>> from decimal import Decimal
>>> Decimal(29)/Decimal(100)
Decimal('0.29')
>>> Decimal('0.29')*100
Decimal('29')
>>> int(Decimal('29'))
29
```
In general, Decimal is probably overkill, and it will still have rounding errors in the rare cases when a number does not have a finite decimal representation (for example, any fraction whose denominator in lowest terms has a prime factor other than 2 or 5, the prime factors of the decimal base 10). For example:
```
>>> s = Decimal(7)
>>> Decimal(1)/s/s/s/s/s/s/s*s*s*s*s*s*s*s
Decimal('0.9999999999999999999999999996')
>>> int(Decimal('0.9999999999999999999999999996'))
0
```
So it's best to always round before casting floats to ints, unless you actually want floor behaviour.
```
>>> int(1.9999)
1
>>> int(round(1.999))
2
```
Another alternative is to use the Fraction class from the [fractions](http://docs.python.org/library/fractions.html) library, which doesn't approximate. (It just keeps adding/subtracting and multiplying the integer numerators and denominators as necessary.)
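A minimal sketch of the Fraction approach, applied to the round-trip from the question (exact rational arithmetic, so nothing is lost):
```python
>>> from fractions import Fraction
>>> Fraction(29, 100) * 100
Fraction(29, 1)
>>> int(Fraction(29, 100) * 100)
29
```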
|
25,288,927
|
I have a dict, and behind each key a list is stored.
Looks like this:
```
dict with values: {
u'New_York': [(u'New_York', u'NY', datetime.datetime(2014, 8, 13, 0, 0), 10), (u'New_York', u'NY', datetime.datetime(2014, 8, 13, 0, 0), 4), (u'New_York', u'NY', datetime.datetime(2014, 8, 13, 0, 0), 3)],
u'Jersy': [(u'Jersy', u'JY', datetime.datetime(2014, 8, 13, 0, 0), 6), (u'Jersy', u'JY', datetime.datetime(2014, 8, 13, 0, 0), 7)],
u'Alameda': [(u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 1), (u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 2), (u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 3), (u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 1)]
}
```
What I want is to iterate through the dict's lists and, for each KEY, return the entry with the max value at a certain position in the list. The result should contain the KEY and the whole list element with the max value. Perfect would be to store the returned element in a dict as well.
Example:
The max value here is decided by the number in the last position of each tuple.
```
somedic = {
u'New_York': (u'New_York', u'NY', datetime.datetime(2014, 8, 13, 0, 0), 10)
u'Jersy': (u'Jersy', u'JY', datetime.datetime(2014, 8, 13, 0, 0), 7)
u'Alameda': (u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 3)
}
```
I tried a couple of things, by looking at these threads:
[Getting key with maximum value in dictionary?](https://stackoverflow.com/questions/268272/getting-key-with-maximum-value-in-dictionary)
[Max/Min value of Dictionary of List](https://stackoverflow.com/questions/14884376/max-min-value-of-dictionary-of-list)
[Python miminum value in dictionary of lists](https://stackoverflow.com/questions/11729670/python-miminum-value-in-dictionary-of-lists?lq=1)
[Python miminum length/max value in dictionary of lists](https://stackoverflow.com/questions/11730552/python-miminum-length-max-value-in-dictionary-of-lists)
But I cannot get my head around it. It is simply above my abilities. I just started to learn python. I tried something like this:
```
import operator
maxvalues = {}
maxvalues = max(countylist.iteritems(), key=operator.itemgetter(1))[0]
print "should be max values here: ", maxvalues
#gave me New York
```
Is that possible?
I am working on Python 2.7
It would be awesome if the code snippet one posts as an answer could be explained, since I want to learn something!
Btw, I am not looking for ready-to-use code. Some hints and a code snippet work for me. I will work my way through it from there; that's how I will learn the most.
|
2014/08/13
|
[
"https://Stackoverflow.com/questions/25288927",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3935035/"
] |
[`max()`](https://docs.python.org/2/library/functions.html#max) takes a second parameter, a callable `key` that lets you specify how to calculate the maximum. It is called for each entry in the input iterable and its return value is used to find the highest value. You need to apply this against each *value* in your dictionary; you are finding the maximum of each individual *list* here, not the maximum of all values in a dictionary.
Use that against the values; the rest is just formatting for the output; I've used a [dict comprehension](https://docs.python.org/2/reference/expressions.html#dictionary-displays) here to process each of your key-value pairs from the input and produce a dictionary again for the output:
```
{k: max(v, key=lambda i: i[-1]) for k, v in somedic.iteritems()}
```
You could also use the [`operator.itemgetter()` function](https://docs.python.org/2/library/operator.html#operator.itemgetter) to produce a callable for you, instead of using a `lambda`:
```
from operator import itemgetter
{k: max(v, key=itemgetter(-1)) for k, v in somedic.iteritems()}
```
Both grab the last element of each input tuple.
Demo:
```
>>> import datetime
>>> from pprint import pprint
>>> somedic = {
... u'New_York': [(u'New_York', u'NY', datetime.datetime(2014, 8, 13, 0, 0), 10), (u'New_York', u'NY', datetime.datetime(2014, 8, 13, 0, 0), 4), (u'New_York', u'NY', datetime.datetime(2014, 8, 13, 0, 0), 3)],
... u'Jersy': [(u'Jersy', u'JY', datetime.datetime(2014, 8, 13, 0, 0), 6), (u'Jersy', u'JY', datetime.datetime(2014, 8, 13, 0, 0), 7)],
... u'Alameda': [(u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 1), (u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 2), (u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 3), (u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 1)]
... }
>>> {k: max(v, key=lambda i: i[-1]) for k, v in somedic.iteritems()}
{u'New_York': (u'New_York', u'NY', datetime.datetime(2014, 8, 13, 0, 0), 10), u'Jersy': (u'Jersy', u'JY', datetime.datetime(2014, 8, 13, 0, 0), 7), u'Alameda': (u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 3)}
>>> pprint(_)
{u'Alameda': (u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 3),
u'Jersy': (u'Jersy', u'JY', datetime.datetime(2014, 8, 13, 0, 0), 7),
u'New_York': (u'New_York', u'NY', datetime.datetime(2014, 8, 13, 0, 0), 10)}
```
|
Can you have a look at <https://wiki.python.org/moin/HowTo/Sorting#Sorting_Mini-HOW_TO>?
The answer may be:
```
In [2]: import datetime
In [3]: d = {
u'New_York': [(u'New_York', u'NY', datetime.datetime(2014, 8, 13, 0, 0), 10), (u'New_York', u'NY', datetime.datetime(2014, 8, 13, 0, 0), 4), (u'New_York', u'NY', datetime.datetime(2014, 8, 13, 0, 0), 3)],
u'Jersy': [(u'Jersy', u'JY', datetime.datetime(2014, 8, 13, 0, 0), 6), (u'Jersy', u'JY', datetime.datetime(2014, 8, 13, 0, 0), 7)],
u'Alameda': [(u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 1), (u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 2), (u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 3), (u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 1)]
}
In [4]: def give_max_values(d):
...: res = {}
...: for key, vals in d.iteritems():
...: res[key] = max(vals, key=lambda x: x[3])
...: return res
In [5]: somedic = give_max_values(d)
In [6]: print somedic
{u'New_York': (u'New_York', u'NY', datetime.datetime(2014, 8, 13, 0, 0), 10), u'Jersy': (u'Jersy', u'JY', datetime.datetime(2014, 8, 13, 0, 0), 7), u'Alameda': (u'Alameda', u'CA', datetime.datetime(2014, 8, 13, 0, 0), 3)}
```
|
1,027,990
|
I'm trying to build my first facebook app, and it seems that the python facebook ([pyfacebook](http://code.google.com/p/pyfacebook/)) wrapper is really out of date, and the most relevant functions, like stream functions, are not implemented.
Are there any mature python frontends for facebook? If not, what's the best language for facebook development?
|
2009/06/22
|
[
"https://Stackoverflow.com/questions/1027990",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/51197/"
] |
The updated location of pyfacebook is [on github](http://github.com/sciyoshi/pyfacebook/tree/master). Plus, as [arstechnica](http://arstechnica.com/open-source/news/2009/04/how-to-using-the-new-facebook-stream-api-in-a-desktop-app.ars) well explains:
>
> PyFacebook is also very easy to extend
> when new Facebook API methods are
> introduced. Each Facebook API method
> is described in the PyFacebook library
> using a simple data structure that
> specifies the method's name and
> parameter types.
>
>
>
so, even should you be using a pyfacebook version that doesn't yet implement some brand-new thing you need, it's easy to add said thing, as Ryan Paul shows [here](http://bazaar.launchpad.net/~segphault/gwibber/template-facebook-stream/revision/289#gwibber/microblog/support/facelib.py) regarding some of the stream functions (back in April right after they were launched).
|
Try [this site](http://github.com/sciyoshi/pyfacebook/tree/master) instead.
It's pyfacebook's site on GitHub; the one you have is outdated.
|
1,027,990
|
I'm trying to build my first facebook app, and it seems that the python facebook ([pyfacebook](http://code.google.com/p/pyfacebook/)) wrapper is really out of date, and the most relevant functions, like stream functions, are not implemented.
Are there any mature python frontends for facebook? If not, what's the best language for facebook development?
|
2009/06/22
|
[
"https://Stackoverflow.com/questions/1027990",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/51197/"
] |
The updated location of pyfacebook is [on github](http://github.com/sciyoshi/pyfacebook/tree/master). Plus, as [arstechnica](http://arstechnica.com/open-source/news/2009/04/how-to-using-the-new-facebook-stream-api-in-a-desktop-app.ars) well explains:
>
> PyFacebook is also very easy to extend
> when new Facebook API methods are
> introduced. Each Facebook API method
> is described in the PyFacebook library
> using a simple data structure that
> specifies the method's name and
> parameter types.
>
>
>
so, even should you be using a pyfacebook version that doesn't yet implement some brand-new thing you need, it's easy to add said thing, as Ryan Paul shows [here](http://bazaar.launchpad.net/~segphault/gwibber/template-facebook-stream/revision/289#gwibber/microblog/support/facelib.py) regarding some of the stream functions (back in April right after they were launched).
|
If you're a facebook newbie, I'd suggest doing your first couple of apps in PHP. Facebook is written in PHP, the APIs are really designed around PHP (although they are language-neutral, theoretically.) The latest API support and most of the sample code is always in PHP.
Once you get the hang of it, you can definitely write FB apps in other languages, including Python, ActionScript, etc. But my experience with other platforms is that they never work "out of the box" with Facebook the way PHP does.
This is nothing against python! I like the language a lot.
|
1,027,990
|
I'm trying to build my first facebook app, and it seems that the python facebook ([pyfacebook](http://code.google.com/p/pyfacebook/)) wrapper is really out of date, and the most relevant functions, like stream functions, are not implemented.
Are there any mature python frontends for facebook? If not, what's the best language for facebook development?
|
2009/06/22
|
[
"https://Stackoverflow.com/questions/1027990",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/51197/"
] |
The updated location of pyfacebook is [on github](http://github.com/sciyoshi/pyfacebook/tree/master). Plus, as [arstechnica](http://arstechnica.com/open-source/news/2009/04/how-to-using-the-new-facebook-stream-api-in-a-desktop-app.ars) well explains:
>
> PyFacebook is also very easy to extend
> when new Facebook API methods are
> introduced. Each Facebook API method
> is described in the PyFacebook library
> using a simple data structure that
> specifies the method's name and
> parameter types.
>
>
>
so, even should you be using a pyfacebook version that doesn't yet implement some brand-new thing you need, it's easy to add said thing, as Ryan Paul shows [here](http://bazaar.launchpad.net/~segphault/gwibber/template-facebook-stream/revision/289#gwibber/microblog/support/facelib.py) regarding some of the stream functions (back in April right after they were launched).
|
Facebook's own Python-SDK covers the newer Graph API now:
<http://github.com/facebook/python-sdk/>
|
1,027,990
|
I'm trying to build my first facebook app, and it seems that the python facebook ([pyfacebook](http://code.google.com/p/pyfacebook/)) wrapper is really out of date, and the most relevant functions, like stream functions, are not implemented.
Are there any mature python frontends for facebook? If not, what's the best language for facebook development?
|
2009/06/22
|
[
"https://Stackoverflow.com/questions/1027990",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/51197/"
] |
Facebook's own Python-SDK covers the newer Graph API now:
<http://github.com/facebook/python-sdk/>
|
Try [this site](http://github.com/sciyoshi/pyfacebook/tree/master) instead.
It's pyfacebook's site on GitHub. The one you have is outdated.
|
1,027,990
|
I'm trying to build my first facebook app, and it seems that the python facebook ([pyfacebook](http://code.google.com/p/pyfacebook/)) wrapper is really out of date, and the most relevant functions, like stream functions, are not implemented.
Are there any mature python frontends for facebook? If not, what's the best language for facebook development?
|
2009/06/22
|
[
"https://Stackoverflow.com/questions/1027990",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/51197/"
] |
Facebook's own Python-SDK covers the newer Graph API now:
<http://github.com/facebook/python-sdk/>
|
If you're a facebook newbie, I'd suggest doing your first couple of apps in PHP. Facebook is written in PHP, the APIs are really designed around PHP (although they are language-neutral, theoretically.) The latest API support and most of the sample code is always in PHP.
Once you get the hang of it, you can definitely write FB apps in other languages, including Python, Actionscript, etc. But my experience with other platforms is that they never work "out of the box" with Facebook the way PHP does.
This is nothing against python! I like the language a lot.
|
42,802,194
|
Hi I am really new to JSON and Python, here is my dilemma, it's been bugging me for two days.
Here is the sample json that I want to parse.
```
{
"Tag1":"{
"TagX": [
{
"TagA": "A",
"TagB": 1.6,
"TagC": 1.4,
"TagD": 3.5,
"TagE": "01",
"TagF": null
},
{
"TagA": "A",
"TagB": 1.6,
"TagC": 1.4,
"TagD": 3.5,
"TagE": "02",
"TagF": null
}
],
"date": "10.03.2017 21:00:00"
}"
}
```
Here is my python code:
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import requests
import json
import urllib2
jaysonData = json.load(urllib2.urlopen('URL'))
print jaysonData["Tag1"]
```
How can I get values of TagB and TagC?
When I try to access them with
```
jaysonData = json.load(urllib2.urlopen('URL'))
print jaysonData["Tag1"]["TagX"]["TagB"]
```
The output is:
```
TypeError: string indices must be integers
```
When I do this:
```
print jaysonData["Tag1"]
```
The output is:
```
{
"TagX": [
{
"TagA": "A",
"TagB": 1.6,
"TagC": 1.4,
"TagD": 3.5,
"TagE": "01",
"TagF": null
},
{
"TagA": "A",
"TagB": 1.6,
"TagC": 1.4,
"TagD": 3.5,
"TagE": "02",
"TagF": null
}
],
"date": "10.03.2017 21:00:00"
}"
```
I need to reach TagX, TagD, TagE but the below code gives this error:
```
print jaysonData["Tag1"]["TagX"]
```
prints
```
print jaysonData["Tag1"]["TagX"]
TypeError: string indices must be integers
```
How can I access TagA to TagF with python?
Thanks in advance.
|
2017/03/15
|
[
"https://Stackoverflow.com/questions/42802194",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7672271/"
] |
In the returned JSON, the value of `Tag1` is a string, not more JSON. It does appear to be JSON encoded as a string though, so convert that to json once more:
```
jaysonData = json.load(urllib2.urlopen('URL'))
tag1JaysonData = json.loads(jaysonData['Tag1'])
print tag1JaysonData["TagX"]
```
Also note that `TagX` is a list, not a dictionary, so there are multiple `TagB`s in it:
```
print [x['TagB'] for x in tag1JaysonData['TagX']]
```
|
`TagX` is a `list` consisting of 2 dictionaries, and `TagB` is a key inside each of those dictionaries.
```
print jaysonData["Tag1"]["TagX"][0]["TagB"]
```
You also need to remove the double quotation marks before and after the curly braces of `Tag1`, so its value is an object rather than a string:
```
{
"Tag1":{
"TagX": [
{
"TagA": "A",
"TagB": 1.6,
"TagC": 1.4,
"TagD": 3.5,
"TagE": "01",
"TagF": null
},
{
"TagA": "A",
"TagB": 1.6,
"TagC": 1.4,
"TagD": 3.5,
"TagE": "02",
"TagF": null
}
],
"date": "10.03.2017 21:00:00"
}
}
```
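Once the value is a real object, here is a minimal sketch of pulling TagB and TagC out of every entry in the `TagX` list (the JSON literal below is abbreviated from the question; Python 2 syntax to match the question's code):

```
import json

raw = '''{"Tag1": {"TagX": [
    {"TagA": "A", "TagB": 1.6, "TagC": 1.4, "TagD": 3.5, "TagE": "01", "TagF": null},
    {"TagA": "A", "TagB": 1.6, "TagC": 1.4, "TagD": 3.5, "TagE": "02", "TagF": null}
], "date": "10.03.2017 21:00:00"}}'''

jaysonData = json.loads(raw)
# TagX is a list, so iterate over it rather than indexing by key
for item in jaysonData["Tag1"]["TagX"]:
    print item["TagB"], item["TagC"]
```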
|
42,802,194
|
Hi I am really new to JSON and Python, here is my dilemma, it's been bugging me for two days.
Here is the sample json that I want to parse.
```
{
"Tag1":"{
"TagX": [
{
"TagA": "A",
"TagB": 1.6,
"TagC": 1.4,
"TagD": 3.5,
"TagE": "01",
"TagF": null
},
{
"TagA": "A",
"TagB": 1.6,
"TagC": 1.4,
"TagD": 3.5,
"TagE": "02",
"TagF": null
}
],
"date": "10.03.2017 21:00:00"
}"
}
```
Here is my python code:
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import requests
import json
import urllib2
jaysonData = json.load(urllib2.urlopen('URL'))
print jaysonData["Tag1"]
```
How can I get values of TagB and TagC?
When I try to access them with
```
jaysonData = json.load(urllib2.urlopen('URL'))
print jaysonData["Tag1"]["TagX"]["TagB"]
```
The output is:
```
TypeError: string indices must be integers
```
When I do this:
```
print jaysonData["Tag1"]
```
The output is:
```
{
"TagX": [
{
"TagA": "A",
"TagB": 1.6,
"TagC": 1.4,
"TagD": 3.5,
"TagE": "01",
"TagF": null
},
{
"TagA": "A",
"TagB": 1.6,
"TagC": 1.4,
"TagD": 3.5,
"TagE": "02",
"TagF": null
}
],
"date": "10.03.2017 21:00:00"
}"
```
I need to reach TagX, TagD, TagE but the below code gives this error:
```
print jaysonData["Tag1"]["TagX"]
```
prints
```
print jaysonData["Tag1"]["TagX"]
TypeError: string indices must be integers
```
How can I access TagA to TagF with python?
Thanks in advance.
|
2017/03/15
|
[
"https://Stackoverflow.com/questions/42802194",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7672271/"
] |
In the returned JSON, the value of `Tag1` is a string, not more JSON. It does appear to be JSON encoded as a string though, so convert that to json once more:
```
jaysonData = json.load(urllib2.urlopen('URL'))
tag1JaysonData = json.loads(jaysonData['Tag1'])
print tag1JaysonData["TagX"]
```
Also note that `TagX` is a list, not a dictionary, so there are multiple `TagB`s in it:
```
print [x['TagB'] for x in tag1JaysonData['TagX']]
```
|
```
{
"Tag1":{
"TagX": [
{
"TagA": "A",
"TagB": 1.6,
"TagC": 1.4,
"TagD": 3.5,
"TagE": "01",
"TagF": null
},
{
"TagA": "A",
"TagB": 1.6,
"TagC": 1.4,
"TagD": 3.5,
"TagE": "02",
"TagF": null
}
],
"date": "10.03.2017 21:00:00"
}
}
```
You have to remove the double quotation marks before the curly bracket of `Tag1`; I have removed them in the sample above.
Then access it like this:
```
print jaysonData["Tag1"]["TagX"][0]["TagA"]
```
|
42,802,194
|
Hi I am really new to JSON and Python, here is my dilemma, it's been bugging me for two days.
Here is the sample json that I want to parse.
```
{
"Tag1":"{
"TagX": [
{
"TagA": "A",
"TagB": 1.6,
"TagC": 1.4,
"TagD": 3.5,
"TagE": "01",
"TagF": null
},
{
"TagA": "A",
"TagB": 1.6,
"TagC": 1.4,
"TagD": 3.5,
"TagE": "02",
"TagF": null
}
],
"date": "10.03.2017 21:00:00"
}"
}
```
Here is my python code:
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import requests
import json
import urllib2
jaysonData = json.load(urllib2.urlopen('URL'))
print jaysonData["Tag1"]
```
How can I get values of TagB and TagC?
When I try to access them with
```
jaysonData = json.load(urllib2.urlopen('URL'))
print jaysonData["Tag1"]["TagX"]["TagB"]
```
The output is:
```
TypeError: string indices must be integers
```
When I do this:
```
print jaysonData["Tag1"]
```
The output is:
```
{
"TagX": [
{
"TagA": "A",
"TagB": 1.6,
"TagC": 1.4,
"TagD": 3.5,
"TagE": "01",
"TagF": null
},
{
"TagA": "A",
"TagB": 1.6,
"TagC": 1.4,
"TagD": 3.5,
"TagE": "02",
"TagF": null
}
],
"date": "10.03.2017 21:00:00"
}"
```
I need to reach TagX, TagD, TagE but the below code gives this error:
```
print jaysonData["Tag1"]["TagX"]
```
prints
```
print jaysonData["Tag1"]["TagX"]
TypeError: string indices must be integers
```
How can I access TagA to TagF with python?
Thanks in advance.
|
2017/03/15
|
[
"https://Stackoverflow.com/questions/42802194",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7672271/"
] |
In the returned JSON, the value of `Tag1` is a string, not more JSON. It does appear to be JSON encoded as a string though, so convert that to json once more:
```
jaysonData = json.load(urllib2.urlopen('URL'))
tag1JaysonData = json.loads(jaysonData['Tag1'])
print tag1JaysonData["TagX"]
```
Also note that `TagX` is a list, not a dictionary, so there are multiple `TagB`s in it:
```
print [x['TagB'] for x in tag1JaysonData['TagX']]
```
|
You need to consider what kind of value you're about to access, e.g. a dict or a list: you always access a dict element by key and a list element by index.
To get rid of the embedded quoting, parse the inner string again after reading the data from the URL:
```
import json
# json.loads, not ast.literal_eval: JSON null is not a valid Python literal
JsonData = json.loads(jaysonData["Tag1"])
print JsonData["TagX"][0]["TagB"]
```
|
19,868,457
|
I have renamed a css class name in a number of (python-django) templates. The css files however are wide-spread across multiple files in multiple directories. I have a python snippet to start renaming from the root dir and then recursively rename all the css files.
```
from os import walk, curdir
import subprocess
COMMAND = "find %s -iname *.css | xargs sed -i s/[Ff][Oo][Oo]/bar/g"
test_command = 'echo "This is just a test. DIR: %s"'
def renamer(command):
print command # Please ignore the print commands.
proccess = subprocess.Popen(command.split(), stdout = subprocess.PIPE)
op = proccess.communicate()[0]
print op
for root, dirs, files in walk(curdir):
if root:
command = COMMAND % root
renamer(command)
```
It doesn't work, gives:
```
find ./cms/djangoapps/contentstore/management/commands/tests -iname *.css | xargs sed -i s/[Ee][Dd][Xx]/gurukul/g
find: paths must precede expression: |
Usage: find [-H] [-L] [-P] [-Olevel] [-D help|tree|search|stat|rates|opt|exec] [path...] [expression]
find ./cms/djangoapps/contentstore/views -iname *.css | xargs sed -i s/[Ee][Dd][Xx]/gurukul/g
find: paths must precede expression: |
Usage: find [-H] [-L] [-P] [-Olevel] [-D help|tree|search|stat|rates|opt|exec] [path...] [expression]
```
When I copy and run the same command (printed above), `find` doesn't error out and sed either gets no input files or it works.
What is wrong with the python snippet?
|
2013/11/08
|
[
"https://Stackoverflow.com/questions/19868457",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/860421/"
] |
You're not trying to run a single command, but a shell pipeline of multiple commands, and you're trying to do it without invoking the shell. That can't possibly work. The way you're doing this, `|` is just one of the arguments to `find`, which is why `find` is telling you that it doesn't understand that argument with that "paths must precede expression: |" error.
You *can* fix that by adding `shell=True` to your `Popen`.
But a better solution is to do the pipeline in Python and keep the shell out of it. See [Replacing Older Functions with the `subprocess` Module](http://docs.python.org/2/library/subprocess.html#replacing-older-functions-with-the-subprocess-module) in the docs for an explanation, but I'll show an example.
Meanwhile, you should never use `split` to split a command line. The best solution is to write the list of separate arguments instead of joining them up into a string just to split them out. If you must do that, use the `shlex` module; that's what it's for. But in your case, even that won't help you, because you're inserting random strings verbatim, which could easily have spaces or quotes in them, and there's no way anything—`shlex` or otherwise—can reconstruct the data in the first place.
So:
```
pfind = Popen(['find', root, '-iname', '*.css'], stdout=PIPE)
pxargs = Popen(['xargs', 'sed', '-i', 's/[Ff][Oo][Oo]/bar/g'],
stdin=pfind.stdout, stdout=PIPE)
pfind.stdout.close()
output = pxargs.communicate()
```
---
But there's an even better solution here.
Python has `os.walk` to do the same thing as `find`, you can simulate `xargs` easily, but there's really no need to do so, and it has its own `re` module to use instead of `sed`. So, why not use them?
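A minimal sketch of that pure-Python route, assuming a case-insensitive `foo` → `bar` rewrite like the original `sed` expression:

```
import os
import re

pattern = re.compile('foo', re.IGNORECASE)

for root, dirs, files in os.walk(os.curdir):
    for name in files:
        if name.lower().endswith('.css'):
            path = os.path.join(root, name)
            with open(path) as f:
                text = f.read()
            new_text = pattern.sub('bar', text)
            if new_text != text:  # only rewrite files that actually changed
                with open(path, 'w') as f:
                    f.write(new_text)
```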
Or, conversely, bash is much better at driving and connecting up simple commands than Python, so if you'd rather use `find` and `sed` instead of `os.walk` and `re.sub`, why write the driving script in Python in the first place?
|
The problem is the pipe. To use a pipe with the subprocess module, you have to pass `shell=True`.
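For example, a minimal sketch — note that with `shell=True` the command is passed as a single string, never `split()`, and the glob is quoted so the shell doesn't expand it prematurely:

```
import subprocess

# `root` comes from the question's os.walk loop
command = "find %s -iname '*.css' | xargs sed -i 's/[Ff][Oo][Oo]/bar/g'" % root
process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE)
output = process.communicate()[0]
```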
|
28,125,214
|
I'm very new to python and I'm trying to print the URL of an open website in Chrome. Here is what I could gather from this page and googling a bit:
```
import win32gui, win32con
def getWindowText(hwnd):
buf_size = 1 + win32gui.SendMessage(hwnd, win32con.WM_GETTEXTLENGTH, 0, 0)
buf = win32gui.PyMakeBuffer(buf_size)
win32gui.SendMessage(hwnd, win32con.WM_GETTEXT, buf_size, buf)
return str(buf)
hwnd = win32gui.FindWindow(None, "Chrome_WidgetWin_1" )
omniboxHwnd = win32gui.FindWindowEx(hwnd, 0, 'Chrome_OmniboxView', None)
print(getWindowText(hwnd))
```
I get this as a result:
```
<memory at 0x00CA37A0>
```
I don't really know what goes wrong: whether it gets into the window and the way I try to print it is wrong, or whether it doesn't get into the window at all.
Thanks for the help
|
2015/01/24
|
[
"https://Stackoverflow.com/questions/28125214",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4489629/"
] |
You are exactly right: at that point the view will not have been drawn yet, so you have to use a `ViewTreeObserver`, like this:
```
final ViewTreeObserver treeObserver = viewToMeasure.getViewTreeObserver();
treeObserver.addOnGlobalLayoutListener(new OnGlobalLayoutListener() {
    @Override
    public void onGlobalLayout() {
        // the view has been laid out, so getHeight() now returns a real value
        height = viewToMeasure.getHeight();
    }
});
```
The observer's callback runs after the view has been created and laid out, so you can take your measurements there and act on the result.
|
You can use a custom view to set the view's height to be the same as its width by overriding onMeasure:
```
public class SquareButton extends Button {
public SquareButton (Context context) {
super(context);
}
public SquareButton (Context context, AttributeSet attrs) {
super(context, attrs);
}
public SquareButton (Context context, AttributeSet attrs, int defStyleAttr) {
super(context, attrs, defStyleAttr);
}
@Override
protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
super.onMeasure(widthMeasureSpec, heightMeasureSpec);
setMeasuredDimension(getMeasuredWidth(), getMeasuredWidth());
}
}
```
All you have to do is use the custom button in your xml layout; you don't have to do anything in the activity:
```
<com.package.name.SquareButton
android:layout_width="0dp"
android:layout_height="wrap_content"
android:layout_weight="1" />
```
|
28,125,214
|
I'm very new to python and I'm trying to print the URL of an open website in Chrome. Here is what I could gather from this page and googling a bit:
```
import win32gui, win32con
def getWindowText(hwnd):
buf_size = 1 + win32gui.SendMessage(hwnd, win32con.WM_GETTEXTLENGTH, 0, 0)
buf = win32gui.PyMakeBuffer(buf_size)
win32gui.SendMessage(hwnd, win32con.WM_GETTEXT, buf_size, buf)
return str(buf)
hwnd = win32gui.FindWindow(None, "Chrome_WidgetWin_1" )
omniboxHwnd = win32gui.FindWindowEx(hwnd, 0, 'Chrome_OmniboxView', None)
print(getWindowText(hwnd))
```
I get this as a result:
```
<memory at 0x00CA37A0>
```
I don't really know what goes wrong: whether it gets into the window and the way I try to print it is wrong, or whether it doesn't get into the window at all.
Thanks for the help
|
2015/01/24
|
[
"https://Stackoverflow.com/questions/28125214",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4489629/"
] |
There are many ways to achieve this. If you need to find the real width of an element, you can:
1) attach an [`OnGlobalLayoutListener`](http://developer.android.com/reference/android/view/ViewTreeObserver.OnGlobalLayoutListener.html) to the view's [`ViewTreeObserver`](http://developer.android.com/reference/android/view/ViewTreeObserver.html) (but remember to remove it when you are done)
2) or you can **manually measure** what you need:
```
if(view.getMeasuredHeight() == 0){
WindowManager manager = (WindowManager) context.getSystemService(Context.WINDOW_SERVICE);
DisplayMetrics metrics = new DisplayMetrics();
manager.getDefaultDisplay().getMetrics(metrics);
view.measure( metrics.widthPixels, metrics.heightPixels );
}
int realHeight = view.getMeasuredHeight();
//...your code
```
|
You are exactly right: at that point the view will not have been drawn yet, so you have to use a `ViewTreeObserver`, like this:
```
final ViewTreeObserver treeObserver = viewToMeasure.getViewTreeObserver();
treeObserver.addOnGlobalLayoutListener(new OnGlobalLayoutListener() {
    @Override
    public void onGlobalLayout() {
        // the view has been laid out, so getHeight() now returns a real value
        height = viewToMeasure.getHeight();
    }
});
```
The observer's callback runs after the view has been created and laid out, so you can take your measurements there and act on the result.
|
28,125,214
|
I'm very new to python and I'm trying to print the URL of an open website in Chrome. Here is what I could gather from this page and googling a bit:
```
import win32gui, win32con
def getWindowText(hwnd):
buf_size = 1 + win32gui.SendMessage(hwnd, win32con.WM_GETTEXTLENGTH, 0, 0)
buf = win32gui.PyMakeBuffer(buf_size)
win32gui.SendMessage(hwnd, win32con.WM_GETTEXT, buf_size, buf)
return str(buf)
hwnd = win32gui.FindWindow(None, "Chrome_WidgetWin_1" )
omniboxHwnd = win32gui.FindWindowEx(hwnd, 0, 'Chrome_OmniboxView', None)
print(getWindowText(hwnd))
```
I get this as a result:
```
<memory at 0x00CA37A0>
```
I don't really know what goes wrong: whether it gets into the window and the way I try to print it is wrong, or whether it doesn't get into the window at all.
Thanks for the help
|
2015/01/24
|
[
"https://Stackoverflow.com/questions/28125214",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4489629/"
] |
There are many ways to achieve this. If you need to find the real width of an element, you can:
1) attach an [`OnGlobalLayoutListener`](http://developer.android.com/reference/android/view/ViewTreeObserver.OnGlobalLayoutListener.html) to the view's [`ViewTreeObserver`](http://developer.android.com/reference/android/view/ViewTreeObserver.html) (but remember to remove it when you are done)
2) or you can **manually measure** what you need:
```
if(view.getMeasuredHeight() == 0){
WindowManager manager = (WindowManager) context.getSystemService(Context.WINDOW_SERVICE);
DisplayMetrics metrics = new DisplayMetrics();
manager.getDefaultDisplay().getMetrics(metrics);
view.measure( metrics.widthPixels, metrics.heightPixels );
}
int realHeight = view.getMeasuredHeight();
//...your code
```
|
You can use a custom view to set the view's height to be the same as its width by overriding onMeasure:
```
public class SquareButton extends Button {
public SquareButton (Context context) {
super(context);
}
public SquareButton (Context context, AttributeSet attrs) {
super(context, attrs);
}
public SquareButton (Context context, AttributeSet attrs, int defStyleAttr) {
super(context, attrs, defStyleAttr);
}
@Override
protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
super.onMeasure(widthMeasureSpec, heightMeasureSpec);
setMeasuredDimension(getMeasuredWidth(), getMeasuredWidth());
}
}
```
All you have to do is use the custom button in your xml layout; you don't have to do anything in the activity:
```
<com.package.name.SquareButton
android:layout_width="0dp"
android:layout_height="wrap_content"
android:layout_weight="1" />
```
|
70,945,717
|
I have Snowflake tasks that run every 30 minutes. Currently, when a task fails due to an underlying data issue in the stored procedure that the task calls, there is no way to notify the users of the failure.
```
SELECT *
FROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY());
```
How can notifications be set up for a Snowflake Task failure? The design plan I have in mind is to build a python application that runs every 30 mins and looks for any error in the TASK\_HISTORY table. Please advise if there are any better approaches to handle failure notifications
|
2022/02/01
|
[
"https://Stackoverflow.com/questions/70945717",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11252662/"
] |
I think currently a python script would be the best way to address this.
You can use this SQL to query the last runs, read the results into a data frame, and filter out the errors:
```
select *
from table(information_schema.task_history(scheduled_time_range_start=>dateadd(minutes, -30,current_timestamp())))
```
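A minimal sketch of that polling script, assuming the `snowflake-connector-python` package and placeholder credentials (`my_account`, `my_user`, `my_password`):

```
import snowflake.connector

conn = snowflake.connector.connect(
    account='my_account', user='my_user', password='my_password')  # placeholders
cur = conn.cursor()
cur.execute("""
    select name, error_code, error_message, scheduled_time
    from table(information_schema.task_history(
        scheduled_time_range_start => dateadd(minutes, -30, current_timestamp())))
    where state = 'FAILED'
""")
for name, code, message, scheduled in cur.fetchall():
    print(name, code, message)  # hand the failures off to email/Slack/etc. here
```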
|
It is possible to create a Notification Integration and send a message when an error occurs. As of May 2022 this feature is in preview, supported by accounts on Amazon Web Services.
#### [Enabling Error Notifications for Tasks](https://docs.snowflake.com/en/user-guide/tasks-errors.html#enabling-error-notifications-for-tasks)
>
> This topic provides instructions for configuring error notification support for tasks using cloud messaging. **This feature triggers a notification describing the errors encountered when a task executes SQL code**
>
>
>
> ```
> Currently, error notifications rely on cloud messaging provided by the
> Amazon Simple Notification Service service;
> support for Google Cloud Pub/Sub queue and Microsoft Azure Event Grid is planned.
>
> ```
>
> New Tasks
>
>
> Create a new task using CREATE TASK. For descriptions of all available task parameters, see the SQL command topic:
>
>
>
> ```
> CREATE TASK <name>
> [...]
> ERROR_INTEGRATION = <integration_name>
> AS <sql>
>
> ```
>
> Existing tasks:
>
>
>
> ```
> ALTER TASK <name> SET ERROR_INTEGRATION = <integration_name>;
>
> ```
>
>
|
70,945,717
|
I have Snowflake tasks that run every 30 minutes. Currently, when a task fails due to an underlying data issue in the stored procedure that the task calls, there is no way to notify the users of the failure.
```
SELECT *
FROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY());
```
How can notifications be set up for a Snowflake Task failure? The design plan I have in mind is to build a python application that runs every 30 mins and looks for any error in the TASK\_HISTORY table. Please advise if there are any better approaches to handle failure notifications
|
2022/02/01
|
[
"https://Stackoverflow.com/questions/70945717",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11252662/"
] |
I think currently a python script would be the best way to address this.
You can use this SQL to query the last runs, read the results into a data frame, and filter out the errors:
```
select *
from table(information_schema.task_history(scheduled_time_range_start=>dateadd(minutes, -30,current_timestamp())))
```
|
A new Snowflake feature was announced for Task Error Notifications on AWS via SNS. This doc walks through how to set this up for task failures.
<https://docs.snowflake.com/en/user-guide/tasks-errors.html>
|
70,945,717
|
I have Snowflake tasks that run every 30 minutes. Currently, when a task fails due to an underlying data issue in the stored procedure that the task calls, there is no way to notify the users of the failure.
```
SELECT *
FROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY());
```
How can notifications be set up for a Snowflake Task failure? The design plan I have in mind is to build a python application that runs every 30 mins and looks for any error in the TASK\_HISTORY table. Please advise if there are any better approaches to handle failure notifications
|
2022/02/01
|
[
"https://Stackoverflow.com/questions/70945717",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11252662/"
] |
It is possible to create a Notification Integration and send a message when an error occurs. As of May 2022 this feature is in preview, supported by accounts on Amazon Web Services.
#### [Enabling Error Notifications for Tasks](https://docs.snowflake.com/en/user-guide/tasks-errors.html#enabling-error-notifications-for-tasks)
>
> This topic provides instructions for configuring error notification support for tasks using cloud messaging. **This feature triggers a notification describing the errors encountered when a task executes SQL code**
>
>
>
> ```
> Currently, error notifications rely on cloud messaging provided by the
> Amazon Simple Notification Service service;
> support for Google Cloud Pub/Sub queue and Microsoft Azure Event Grid is planned.
>
> ```
>
> New Tasks
>
>
> Create a new task using CREATE TASK. For descriptions of all available task parameters, see the SQL command topic:
>
>
>
> ```
> CREATE TASK <name>
> [...]
> ERROR_INTEGRATION = <integration_name>
> AS <sql>
>
> ```
>
> Existing tasks:
>
>
>
> ```
> ALTER TASK <name> SET ERROR_INTEGRATION = <integration_name>;
>
> ```
>
>
|
A new Snowflake feature was announced for Task Error Notifications on AWS via SNS. This doc walks through how to set this up for task failures.
<https://docs.snowflake.com/en/user-guide/tasks-errors.html>
|
67,030,593
|
**Problem**
When trying to access and purchase a specific item from store X, which releases limited quantities randomly throughout the week, trying to load the page via the browser is essentially pointless. 99 out of 100 requests time out. By the time 1 page loads, the stock is sold out.
**Question**
What would be the fastest way to load these pages from a website -- one that is currently under high amounts of stress and timing out regularly -- programmatically, or even via the browser?
For example, is it better to send multiple requests and wait until a "timed out" response is received? Is it best to retry the request after X seconds has passed regardless? Etc, etc.
**Tried**
I've tried both solutions above in browser without much luck, so I'm thinking of putting together a python or javascript solution in order to better my chances, but couldn't find an answer to my question via Google.
EDIT:
Just to clarify, the website in question doesn't sporadically time out -- it is strictly when new stock is released and the website is bombarded with visitors. Once stock is bought up, the site returns to normal. New stock releases last anywhere from 5 minutes to 25 minutes.
|
2021/04/10
|
[
"https://Stackoverflow.com/questions/67030593",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15596095/"
] |
You don't need to `JSON.stringify` your 'params'.
Change it into:
```js
const params = {
email: "someEmail@gmail.com",
name: "Some Name",
password: "123qwe!!"
}
```
|
Use this one:
```
axios.post('http://localhost:8080/users', params, {
  headers: {
    'Content-Type': 'application/json'
  }
})
```
|
67,030,593
|
**Problem**
When trying to access and purchase a specific item from store X, which releases limited quantities randomly throughout the week, trying to load the page via the browser is essentially pointless. 99 out of 100 requests time out. By the time 1 page loads, the stock is sold out.
**Question**
What would be the fastest way to load these pages from a website -- one that is currently under high amounts of stress and timing out regularly -- programmatically, or even via the browser?
For example, is it better to send multiple requests and wait until a "timed out" response is received? Is it best to retry the request after X seconds has passed regardless? Etc, etc.
**Tried**
I've tried both solutions above in browser without much luck, so I'm thinking of putting together a python or javascript solution in order to better my chances, but couldn't find an answer to my question via Google.
EDIT:
Just to clarify, the website in question doesn't sporadically time out -- it is strictly when new stock is released and the website is bombarded with visitors. Once stock is bought up, the site returns to normal. New stock releases last anywhere from 5 minutes to 25 minutes.
|
2021/04/10
|
[
"https://Stackoverflow.com/questions/67030593",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15596095/"
] |
You don't need to `JSON.stringify` your 'params'.
Change it into:
```js
const params = {
email: "someEmail@gmail.com",
name: "Some Name",
password: "123qwe!!"
}
```
|
The axios signature for post is `axios.post(url[, data[, config]])`, so your code must be like this:
```
const loginData= {
"email": "someEmail@gmail.com",
"name": "Some Name",
"password": "123qwe!!"
}
axios.post('http://localhost:8080/users', null, {
params:loginData,
headers: {
'Content-Type': 'application/json'
}
}
)
.then(response => {
console.log("RESPONSE RECEIVED: ",response);
})
.catch((err) => {
console.log("AXIOS ERROR: ", err);
})
```
|
4,135,261
|
I am having a problem connecting to a device with a Paramiko (version 1.7.6-2) ssh client:
```
$ python
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import paramiko
>>> ssh = paramiko.SSHClient()
>>> ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
>>> ssh.connect("123.0.0.1", username="root", password=None)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 327, in connect
self._auth(username, password, pkey, key_filenames, allow_agent, look_for_keys)
File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 481, in _auth
raise saved_exception
paramiko.AuthenticationException: Authentication failed.
>>>
```
When I use ssh from the command line, it works fine:
```
ssh root@123.0.0.1
BusyBox v1.12.1 (2010-11-03 13:18:46 EDT) built-in shell (ash)
Enter 'help' for a list of built-in commands.
#
```
Anyone seen this before?
**Edit 1**
Here is the verbose output of the ssh command:
```
:~$ ssh -v root@123.0.0.1
OpenSSH_5.3p1 Debian-3ubuntu4, OpenSSL 0.9.8k 25 Mar 2009
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug1: Connecting to 123.0.0.1 [123.0.0.1] port 22.
debug1: Connection established.
debug1: identity file /home/waffleman/.ssh/identity type -1
debug1: identity file /home/waffleman/.ssh/id_rsa type -1
debug1: identity file /home/waffleman/.ssh/id_dsa type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1
debug1: match: OpenSSH_5.1 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.3p1 Debian-3ubuntu4
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-md5 none
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Host '123.0.0.1' is known and matches the RSA host key.
debug1: Found key in /home/waffleman/.ssh/known_hosts:3
debug1: ssh_rsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentication succeeded (none).
debug1: channel 0: new [client-session]
debug1: Requesting no-more-sessions@openssh.com
debug1: Entering interactive session.
debug1: Sending environment.
debug1: Sending env LANG = en_US.utf8
```
**Edit 2**
Here is the python output with debug output:
```
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import paramiko, os
>>> paramiko.common.logging.basicConfig(level=paramiko.common.DEBUG)
>>> ssh = paramiko.SSHClient()
>>> ssh.load_system_host_keys()
>>> ssh.load_host_keys(os.path.expanduser('~/.ssh/known_hosts'))
>>> ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
>>> ssh.connect("123.0.0.1", username='root', password=None)
DEBUG:paramiko.transport:starting thread (client mode): 0x928756cL
INFO:paramiko.transport:Connected (version 2.0, client OpenSSH_5.1)
DEBUG:paramiko.transport:kex algos:['diffie-hellman-group-exchange-sha256', 'diffie-hellman-group-exchange-sha1', 'diffie-hellman-group14-sha1', 'diffie-hellman-group1-sha1'] server key:['ssh-rsa', 'ssh-dss'] client encrypt:['aes128-cbc', '3des-cbc', 'blowfish-cbc', 'cast128-cbc', 'arcfour128', 'arcfour256', 'arcfour', 'aes192-cbc', 'aes256-cbc', 'rijndael-cbc@lysator.liu.se', 'aes128-ctr', 'aes192-ctr', 'aes256-ctr'] server encrypt:['aes128-cbc', '3des-cbc', 'blowfish-cbc', 'cast128-cbc', 'arcfour128', 'arcfour256', 'arcfour', 'aes192-cbc', 'aes256-cbc', 'rijndael-cbc@lysator.liu.se', 'aes128-ctr', 'aes192-ctr', 'aes256-ctr'] client mac:['hmac-md5', 'hmac-sha1', 'umac-64@openssh.com', 'hmac-ripemd160', 'hmac-ripemd160@openssh.com', 'hmac-sha1-96', 'hmac-md5-96'] server mac:['hmac-md5', 'hmac-sha1', 'umac-64@openssh.com', 'hmac-ripemd160', 'hmac-ripemd160@openssh.com', 'hmac-sha1-96', 'hmac-md5-96'] client compress:['none', 'zlib@openssh.com'] server compress:['none', 'zlib@openssh.com'] client lang:[''] server lang:[''] kex follows?False
DEBUG:paramiko.transport:Ciphers agreed: local=aes128-ctr, remote=aes128-ctr
DEBUG:paramiko.transport:using kex diffie-hellman-group1-sha1; server key type ssh-rsa; cipher: local aes128-ctr, remote aes128-ctr; mac: local hmac-sha1, remote hmac-sha1; compression: local none, remote none
DEBUG:paramiko.transport:Switch to new keys ...
DEBUG:paramiko.transport:Trying discovered key b945197b1de1207d9aa0663f01888c3c in /home/waffleman/.ssh/id_rsa
DEBUG:paramiko.transport:userauth is OK
INFO:paramiko.transport:Authentication (publickey) failed.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 327, in connect
self._auth(username, password, pkey, key_filenames, allow_agent, look_for_keys)
File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 481, in _auth
raise saved_exception
paramiko.AuthenticationException: Authentication failed.
>>>
```
|
2010/11/09
|
[
"https://Stackoverflow.com/questions/4135261",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/197108/"
] |
As a very late follow-up on this matter, I believe I was running into the same issue as waffleman, in the context of a confined network.
The hint about using `auth_none` on the `Transport` object turned out quite helpful, but I found myself a little puzzled as to how to implement that. Thing is, as of today at least, I can't get the `Transport` object of an `SSHClient` object until it has connected; but it won't connect in the first place...
So, in case this is useful to others, my workaround is below. I just override the `_auth` method.
OK, this is fragile, as `_auth` is a private thing. My other alternatives were - actually still are - to manually create the `Transport` and `Channel` objects, but for the time being I feel like I'm much better off with all this still under the hood.
```
from paramiko import SSHClient, BadAuthenticationType
class SSHClient_try_noauth(SSHClient):
def _auth(self, username, *args):
try:
self._transport.auth_none(username)
except BadAuthenticationType:
super()._auth(username, *args)
```
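Usage is then the same as with a plain `SSHClient`; the host and username below are placeholders:

```
from paramiko import AutoAddPolicy

client = SSHClient_try_noauth()
client.set_missing_host_key_policy(AutoAddPolicy())
client.connect('123.0.0.1', username='root')  # tries "none" auth first
```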
|
[paramiko's SSHClient](http://www.lag.net/paramiko/docs/paramiko.SSHClient-class.html) has [`load_system_host_keys`](http://www.lag.net/paramiko/docs/paramiko.SSHClient-class.html#load_system_host_keys) method which you could use to load user specific set of keys. As example in the docs explain, it needs to be run before connecting to a server.
|
4,135,261
|
I am having a problem connecting to a device with a Paramiko (version 1.7.6-2) ssh client:
```
$ python
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import paramiko
>>> ssh = paramiko.SSHClient()
>>> ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
>>> ssh.connect("123.0.0.1", username="root", password=None)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 327, in connect
self._auth(username, password, pkey, key_filenames, allow_agent, look_for_keys)
File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 481, in _auth
raise saved_exception
paramiko.AuthenticationException: Authentication failed.
>>>
```
When I use ssh from the command line, it works fine:
```
ssh root@123.0.0.1
BusyBox v1.12.1 (2010-11-03 13:18:46 EDT) built-in shell (ash)
Enter 'help' for a list of built-in commands.
#
```
Anyone seen this before?
**Edit 1**
Here is the verbose output of the ssh command:
```
:~$ ssh -v root@123.0.0.1
OpenSSH_5.3p1 Debian-3ubuntu4, OpenSSL 0.9.8k 25 Mar 2009
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug1: Connecting to 123.0.0.1 [123.0.0.1] port 22.
debug1: Connection established.
debug1: identity file /home/waffleman/.ssh/identity type -1
debug1: identity file /home/waffleman/.ssh/id_rsa type -1
debug1: identity file /home/waffleman/.ssh/id_dsa type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1
debug1: match: OpenSSH_5.1 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.3p1 Debian-3ubuntu4
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-md5 none
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Host '123.0.0.1' is known and matches the RSA host key.
debug1: Found key in /home/waffleman/.ssh/known_hosts:3
debug1: ssh_rsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentication succeeded (none).
debug1: channel 0: new [client-session]
debug1: Requesting no-more-sessions@openssh.com
debug1: Entering interactive session.
debug1: Sending environment.
debug1: Sending env LANG = en_US.utf8
```
**Edit 2**
Here is the python output with debug output:
```
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import paramiko, os
>>> paramiko.common.logging.basicConfig(level=paramiko.common.DEBUG)
>>> ssh = paramiko.SSHClient()
>>> ssh.load_system_host_keys()
>>> ssh.load_host_keys(os.path.expanduser('~/.ssh/known_hosts'))
>>> ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
>>> ssh.connect("123.0.0.1", username='root', password=None)
DEBUG:paramiko.transport:starting thread (client mode): 0x928756cL
INFO:paramiko.transport:Connected (version 2.0, client OpenSSH_5.1)
DEBUG:paramiko.transport:kex algos:['diffie-hellman-group-exchange-sha256', 'diffie-hellman-group-exchange-sha1', 'diffie-hellman-group14-sha1', 'diffie-hellman-group1-sha1'] server key:['ssh-rsa', 'ssh-dss'] client encrypt:['aes128-cbc', '3des-cbc', 'blowfish-cbc', 'cast128-cbc', 'arcfour128', 'arcfour256', 'arcfour', 'aes192-cbc', 'aes256-cbc', 'rijndael-cbc@lysator.liu.se', 'aes128-ctr', 'aes192-ctr', 'aes256-ctr'] server encrypt:['aes128-cbc', '3des-cbc', 'blowfish-cbc', 'cast128-cbc', 'arcfour128', 'arcfour256', 'arcfour', 'aes192-cbc', 'aes256-cbc', 'rijndael-cbc@lysator.liu.se', 'aes128-ctr', 'aes192-ctr', 'aes256-ctr'] client mac:['hmac-md5', 'hmac-sha1', 'umac-64@openssh.com', 'hmac-ripemd160', 'hmac-ripemd160@openssh.com', 'hmac-sha1-96', 'hmac-md5-96'] server mac:['hmac-md5', 'hmac-sha1', 'umac-64@openssh.com', 'hmac-ripemd160', 'hmac-ripemd160@openssh.com', 'hmac-sha1-96', 'hmac-md5-96'] client compress:['none', 'zlib@openssh.com'] server compress:['none', 'zlib@openssh.com'] client lang:[''] server lang:[''] kex follows?False
DEBUG:paramiko.transport:Ciphers agreed: local=aes128-ctr, remote=aes128-ctr
DEBUG:paramiko.transport:using kex diffie-hellman-group1-sha1; server key type ssh-rsa; cipher: local aes128-ctr, remote aes128-ctr; mac: local hmac-sha1, remote hmac-sha1; compression: local none, remote none
DEBUG:paramiko.transport:Switch to new keys ...
DEBUG:paramiko.transport:Trying discovered key b945197b1de1207d9aa0663f01888c3c in /home/waffleman/.ssh/id_rsa
DEBUG:paramiko.transport:userauth is OK
INFO:paramiko.transport:Authentication (publickey) failed.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 327, in connect
self._auth(username, password, pkey, key_filenames, allow_agent, look_for_keys)
File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 481, in _auth
raise saved_exception
paramiko.AuthenticationException: Authentication failed.
>>>
```
|
2010/11/09
|
[
"https://Stackoverflow.com/questions/4135261",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/197108/"
] |
The ssh server on the remote device denied your authentication. Make sure you're using the correct key, the public key is present in `authorized_keys`, `.ssh` directory permissions are correct, `authorized_keys` permissions are correct, and the device doesn't have any other access restrictions. It's hard to say what's going on without logs from the server.
[EDIT] I just looked back through your output, you are authenticating using `None` authentication. This usually isn't ever permitted, and is used to determine what auth methods are allowed by the server. It's possible your server is using host based authentication (or none at all!).
Since `auth_none()` is rarely used, it's not accessible from the `SSHClient` class, so you will need to use `Transport` directly.
```
transport.auth_none('root')
```
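A minimal sketch of driving `Transport` directly that way; the host, port, and command are placeholders:

```
import paramiko

transport = paramiko.Transport(('123.0.0.1', 22))
transport.start_client()
transport.auth_none('root')          # "none" authentication, as described above
channel = transport.open_session()
channel.exec_command('uname -a')
print(channel.recv(4096))
transport.close()
```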
|
Make sure that the permissions on the public and private key files (and possibly the containing folder) are set to be very restrictive (e.g. `chmod 600 id\_rsa`). It turns out this is required (by the operating system?) to use the files as ssh keys. Found this out from my helpful colleague :)
Also make sure that you are using the correct username for the given ssh key.
|
4,135,261
|
I am having a problem connecting to a device with a Paramiko (version 1.7.6-2) ssh client:
```
$ python
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import paramiko
>>> ssh = paramiko.SSHClient()
>>> ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
>>> ssh.connect("123.0.0.1", username="root", password=None)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 327, in connect
self._auth(username, password, pkey, key_filenames, allow_agent, look_for_keys)
File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 481, in _auth
raise saved_exception
paramiko.AuthenticationException: Authentication failed.
>>>
```
When I use ssh from the command line, it works fine:
```
ssh root@123.0.0.1
BusyBox v1.12.1 (2010-11-03 13:18:46 EDT) built-in shell (ash)
Enter 'help' for a list of built-in commands.
#
```
Anyone seen this before?
**Edit 1**
Here is the verbose output of the ssh command:
```
:~$ ssh -v root@123.0.0.1
OpenSSH_5.3p1 Debian-3ubuntu4, OpenSSL 0.9.8k 25 Mar 2009
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug1: Connecting to 123.0.0.1 [123.0.0.1] port 22.
debug1: Connection established.
debug1: identity file /home/waffleman/.ssh/identity type -1
debug1: identity file /home/waffleman/.ssh/id_rsa type -1
debug1: identity file /home/waffleman/.ssh/id_dsa type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1
debug1: match: OpenSSH_5.1 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.3p1 Debian-3ubuntu4
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-md5 none
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Host '123.0.0.1' is known and matches the RSA host key.
debug1: Found key in /home/waffleman/.ssh/known_hosts:3
debug1: ssh_rsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentication succeeded (none).
debug1: channel 0: new [client-session]
debug1: Requesting no-more-sessions@openssh.com
debug1: Entering interactive session.
debug1: Sending environment.
debug1: Sending env LANG = en_US.utf8
```
**Edit 2**
Here is the python output with debug output:
```
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import paramiko, os
>>> paramiko.common.logging.basicConfig(level=paramiko.common.DEBUG)
>>> ssh = paramiko.SSHClient()
>>> ssh.load_system_host_keys()
>>> ssh.load_host_keys(os.path.expanduser('~/.ssh/known_hosts'))
>>> ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
>>> ssh.connect("123.0.0.1", username='root', password=None)
DEBUG:paramiko.transport:starting thread (client mode): 0x928756cL
INFO:paramiko.transport:Connected (version 2.0, client OpenSSH_5.1)
DEBUG:paramiko.transport:kex algos:['diffie-hellman-group-exchange-sha256', 'diffie-hellman-group-exchange-sha1', 'diffie-hellman-group14-sha1', 'diffie-hellman-group1-sha1'] server key:['ssh-rsa', 'ssh-dss'] client encrypt:['aes128-cbc', '3des-cbc', 'blowfish-cbc', 'cast128-cbc', 'arcfour128', 'arcfour256', 'arcfour', 'aes192-cbc', 'aes256-cbc', 'rijndael-cbc@lysator.liu.se', 'aes128-ctr', 'aes192-ctr', 'aes256-ctr'] server encrypt:['aes128-cbc', '3des-cbc', 'blowfish-cbc', 'cast128-cbc', 'arcfour128', 'arcfour256', 'arcfour', 'aes192-cbc', 'aes256-cbc', 'rijndael-cbc@lysator.liu.se', 'aes128-ctr', 'aes192-ctr', 'aes256-ctr'] client mac:['hmac-md5', 'hmac-sha1', 'umac-64@openssh.com', 'hmac-ripemd160', 'hmac-ripemd160@openssh.com', 'hmac-sha1-96', 'hmac-md5-96'] server mac:['hmac-md5', 'hmac-sha1', 'umac-64@openssh.com', 'hmac-ripemd160', 'hmac-ripemd160@openssh.com', 'hmac-sha1-96', 'hmac-md5-96'] client compress:['none', 'zlib@openssh.com'] server compress:['none', 'zlib@openssh.com'] client lang:[''] server lang:[''] kex follows?False
DEBUG:paramiko.transport:Ciphers agreed: local=aes128-ctr, remote=aes128-ctr
DEBUG:paramiko.transport:using kex diffie-hellman-group1-sha1; server key type ssh-rsa; cipher: local aes128-ctr, remote aes128-ctr; mac: local hmac-sha1, remote hmac-sha1; compression: local none, remote none
DEBUG:paramiko.transport:Switch to new keys ...
DEBUG:paramiko.transport:Trying discovered key b945197b1de1207d9aa0663f01888c3c in /home/waffleman/.ssh/id_rsa
DEBUG:paramiko.transport:userauth is OK
INFO:paramiko.transport:Authentication (publickey) failed.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 327, in connect
self._auth(username, password, pkey, key_filenames, allow_agent, look_for_keys)
File "/usr/lib/pymodules/python2.6/paramiko/client.py", line 481, in _auth
raise saved_exception
paramiko.AuthenticationException: Authentication failed.
>>>
```
|
2010/11/09
|
[
"https://Stackoverflow.com/questions/4135261",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/197108/"
] |
The ssh server on the remote device denied your authentication. Make sure you're using the correct key, the public key is present in `authorized_keys`, `.ssh` directory permissions are correct, `authorized_keys` permissions are correct, and the device doesn't have any other access restrictions. It's hard to say what's going on without logs from the server.
[EDIT] I just looked back through your output, you are authenticating using `None` authentication. This usually isn't ever permitted, and is used to determine what auth methods are allowed by the server. It's possible your server is using host based authentication (or none at all!).
Since `auth_none()` is rarely used, it's not accessible from the `SSHClient` class, so you will need to use `Transport` directly.
```
transport.auth_none('root')
```
|
I get a similar error when the server uses AD authentication. I think this is a bug in paramiko. I have learned that I have to set up ssh keys before using paramiko.
|