Dataset schema, as reported by the dataset viewer (the two numbers are the viewer's lower and upper values per column):

| column | dtype | min | max |
| --- | --- | --- | --- |
| `qid` | int64 | 46k | 74.7M |
| `question` | stringlengths | 54 | 37.8k |
| `date` | stringlengths | 10 | 10 |
| `metadata` | listlengths | 3 | 3 |
| `response_j` | stringlengths | 29 | 22k |
| `response_k` | stringlengths | 26 | 13.4k |
| `__index_level_0__` | int64 | 0 | 17.8k |
61,011,373
I'm trying to install indy-node on a fresh Ubuntu 18.04 machine in order to create a small network with 4 nodes. When following the [installation instructions](https://github.com/hyperledger/indy-node/blob/master/docs/source/start-nodes.md) I get the following error: ``` localhost:~$ sudo apt-get install indy-node The following packages have unmet dependencies: indy-node : Depends: indy-plenum (= 1.12.2) but it is not going to be installed Depends: libsodium18 but it is not installable E: Unable to correct problems, you have held broken packages. ``` I've tried installing `libsodium18` but I get the following error: ``` Package libsodium18 is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source E: Package 'libsodium18' has no installation candidate ``` Likewise, trying to install `indy-plenum` produces a bunch of 'unmet dependency' errors. **Note:** this all happens with the following repo: ``` sudo bash -c 'echo "deb https://repo.sovrin.org/deb xenial stable" >> /etc/apt/sources.list' ``` When I try to add the packages for bionic instead of xenial, I get the following error: ``` repository 'https://repo.sovrin.org/deb bionic InRelease' doesn't have the component 'stable' ``` I have also tried to install indy-node via python + pip, but that doesn't seem to work either. **Has anyone successfully installed indy-node, and if so, can you share the secret?**
2020/04/03
[ "https://Stackoverflow.com/questions/61011373", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1786712/" ]
We've generated Docker images for Indy-Node using Ubuntu 18.04, but had to build libsodium from source. You can see the source dockerfile here, although there are git URLs that get replaced by the build script: <https://github.com/PSPC-SPAC-buyandsell/von-image/blob/master/node-1.9/Dockerfile.ubuntu> The final images are at <https://hub.docker.com/r/bcgovimages/von-image/tags> (the `node-*` images; the latest is `bcgovimages/von-image:node-1.12-2`). I believe the Indy-Node team will shortly be updating to 20.04 for the image used in testing.
The solution in the end was to downgrade to Ubuntu 16.04.
14,049
49,738,443
I'm attempting to convert an array of strings to an array of floats using: ``` arr_str = '[1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1]' a1 = arr_str.split() [int(x) for x in a1] ``` but it throws an error: ``` <ipython-input-57-f7f1eaba7ebd> in <listcomp>(.0) 3 a1 = arr_str.split() 4 ----> 5 [int(x) for x in a1] 6 7 # for a in arr_str.split(): ValueError: invalid literal for int() with base 10: '[1' ``` Should I pre-process the string and remove '[' and ']'?
2018/04/09
[ "https://Stackoverflow.com/questions/49738443", "https://Stackoverflow.com", "https://Stackoverflow.com/users/470184/" ]
One way is to use `ast.literal_eval`. If you need a `numpy` integer array, the conversion is trivial. ``` import numpy as np from ast import literal_eval arr_str = '[1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1]' res = literal_eval(arr_str.replace(' ', ',')) # [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] res_np = np.array(res) # array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) ```
``` arr_str = arr_str.strip("[]") voila = [int(x) for x in arr_str.split()] ``` Edit 1: Being pedantic about variable assignment.
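For the floats the question's first sentence asks about, a one-line sketch in the same style (an editorial addition, not part of the original answer):

```
floats = [float(x) for x in arr_str.strip('[]').split()]
# [1.0, 1.0, ..., 1.0]
```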
14,050
27,532,112
I'm using two python packages that have the same name. * <http://www.alembic.io/updates.html> * <https://pypi.python.org/pypi/alembic> Is there a canonical or pythonic way to handle installing two packages with conflicting names? So far, I've only occasionally needed one of the packages during development/building, so I've been using a separate virtualenv to deal with the conflict, but it makes the build step more complex and I wonder if there isn't a better way to handle it.
2014/12/17
[ "https://Stackoverflow.com/questions/27532112", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1547004/" ]
You could use the `--target` option for pip and install to an alternate location: ``` pip install --target=/tmp/test/lib/python3.6/site-packages/alt_alembic alembic ``` Then when you import in Python, do the first as usual, and for the alternate one do an import from that namespace like this: ``` import alembic # alembic.io version from alt_alembic import alembic as alt_alembic # pip version ``` Then when you're making calls, you can call `alt_alembic.function()` for that one, and `alembic.function()` for the one that isn't on PyPI. My target path has /tmp/test as I was using a virtualenv. You would need to replace that path with the correct one for your Python installation.
How about **absolute and relative imports**? <https://docs.python.org/2/whatsnew/2.5.html#pep-328-absolute-and-relative-imports>
14,052
43,429,018
I have the following link: <https://webcache.googleusercontent.com/search?q=cache:jAc7OJyyQboJ>:**<https://cooking.nytimes.com/learn-to-cook>**+&cd=5&hl=en&ct=clnk I have multiple links in a dataset, each of the same pattern. I want to get a specific part of each link; for the above link it would be the bold part. I want the text starting from the 2nd "http" up to just before the first "+" sign. I don't know how to do so using regex. I am working in Python. Kindly help me out.
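A minimal regex sketch of what the question describes (an editorial addition, with the URL reassembled from the question):

```
import re

url = ("https://webcache.googleusercontent.com/search?q=cache:jAc7OJyyQboJ:"
       "https://cooking.nytimes.com/learn-to-cook+&cd=5&hl=en&ct=clnk")

# Skip past the first "http", then capture from the second one up to the first "+".
match = re.search(r'http.*?(https?://[^+]+)\+', url)
if match:
    print(match.group(1))  # https://cooking.nytimes.com/learn-to-cook
```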
2017/04/15
[ "https://Stackoverflow.com/questions/43429018", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7872059/" ]
Instead of `var c = new Audio(src);` use `var c = document.createElement('audio'); c.src=src; c.play();`
You have to wait for the DOM to be ready. Since you are using jQuery, please encapsulate your code in this: ``` $(document).ready(function () { // Your code... }); ``` You can also use this syntax: ``` $(function () { // Your code... }); ``` *(Bonus tip: use the `switch` statement in your code. `RoNBeta.js` is a bit scary...)*
14,053
50,706,987
I've been trying for a couple of days, with limited success, to use TCP to make two Ruby programs on the same or different machines communicate. I'm looking for example 'client' and 'server' scripts that will work straight away, once I've chosen ports that work. Client code I found that seems to work is shown below, but I haven't got a server working with it. ``` # Seems to work for some reason require 'socket' tcp_client=TCPSocket.new('localhost',1540) while grab_string=tcp_client.gets puts(grab_string) end tcp_client.close ``` I'm interested in the simplest possible solution that works on my machine. All it has to do is send a string. The answer I'm looking for is just like [this](https://stackoverflow.com/questions/27241804/sending-a-file-over-tcp-sockets-in-python) but with Ruby instead of Python. Feel free to change the code for client and server; with only half the puzzle in place I'm not sure whether it works or not. Server code: ``` # Server require 'socket' sock = TCPSocket.new('localhost', 1540) sock.write 'GETHELLO' puts sock.read(5) # Since the response message has 5 bytes. sock.close ``` Using the code suggested by kennycoc I get the following error message: ``` Server.rb:3:in `initialize': Only one usage of each socket address (protocol/network address/port) is normally permitted. - bind(2) for nil port 1540 (Errno::EADDRINUSE) from Server.rb:3:in `new' from Server.rb:3:in `<main>' ```
2018/06/05
[ "https://Stackoverflow.com/questions/50706987", "https://Stackoverflow.com", "https://Stackoverflow.com/users/336879/" ]
@adjam, you haven't created a TCP server; [TCPSocket](https://ruby-doc.org/stdlib-1.9.3/libdoc/socket/rdoc/TCPSocket.html) is used to create a TCP/IP client socket. To create a TCP/IP server you have to use [TCPServer](https://ruby-doc.org/stdlib-1.9.3/libdoc/socket/rdoc/TCPServer.html). For example, TCP/IP server code: ``` require 'socket' server = TCPServer.open(2000) client = server.accept # Accept a client while (response = client.gets) # read data sent by the client print response end ``` TCP/IP client code: ``` require 'socket' client = TCPSocket.open('localhost', 2000) client.puts "hello" client.close ```
Taking the documentation from <https://ruby-doc.org/stdlib-2.5.1/libdoc/socket/rdoc/Socket.html>, you seem to be looking for something like this: ``` require 'socket' server = TCPServer.new(1540) client = server.accept client.puts "GETHELLO" client.close server.close ``` More generally, if you'd like the server accessible for multiple clients to request data from, you'd have a loop running like ``` loop do client = server.accept client.puts "gethello" client.close end ```
14,056
6,844,863
Relative imports are not working properly in Python 2.6.5; I am getting "ValueError: Attempted relative import in non-package". All of the `__init__.py` files are in the proper place.
2011/07/27
[ "https://Stackoverflow.com/questions/6844863", "https://Stackoverflow.com", "https://Stackoverflow.com/users/865438/" ]
I have seen that error before when running a script that is actually *inside* a package. To the interpreter, it appears as though the package is not a package. Try taking the script into another directory, putting your package inside your `pythonpath`, and import absolutely. Then, relative imports inside your package will work. NOTE: you still cannot use relative imports inside the end script; the easiest thing to do in this case is to make a "wrapper" script that simply calls some entry point in your package. You can go even further here by using `setuptools` to create a [`setup.py`](http://peak.telecommunity.com/DevCenter/setuptools) for your package, to make it distributable. Then, as a part of that, [entry points](http://peak.telecommunity.com/DevCenter/setuptools#automatic-script-creation) would allow you to autogenerate scripts that call your package's code. **EDIT**: From your comment, it appears as though I wasn't quite clear. I'm not 100% sure of your directory structure because your comment above wasn't formatted, but I took it to be like this: ``` PythonEvent/ main.py __init__.py DBConnector/ __init__.py connector.py service/ __init__.py myservice.py ``` When in `myservice.py` you have the line `from ..DBConnector.connector import DBUpdate`, the interpreter tries to import it relatively, **UNLESS** you are running `myservice.py` directly, which is what it appears you are doing. Try making another dummy script outside of `PythonEvent/` that is simply as follows: ``` from PythonEvent.service import myservice if __name__ == '__main__': myservice.main() # or whatever the entry point is called in myservice. ``` Then, set your `PYTHONPATH` environment variable to point to the parent directory of `PythonEvent/` (or move `PythonEvent/` to your site-packages).
``` main.py setup.py Main Package/ -> __init__.py subpackage_a/ -> __init__.py module_a.py subpackage_b/ -> __init__.py module_b.py ``` i) ``` 1. You run python main.py 2. main.py does: import app.package_a.module_a 3. module_a.py does import app.package_b.module_b ``` ii) ``` Alternatively 2 or 3 could use: from app.package_a import module_a ``` > That will work as long as you have app in your PYTHONPATH; main.py could be anywhere then. So you write a setup.py to copy (install) the whole app package and subpackages to the target system's Python folders, and main.py to the target system's script folders. Thanks to <https://stackoverflow.com/a/1083169>
14,059
45,301,335
I am using Python + IPython for data science. I made a folder that contains all the modules I wrote, organised in packages, something like ``` python_workfolder | |---a | |---__init__.py | |---a1.py | |---a2.py | |---b | |---__init__.py | |---b1.py | |---b2.py | |---c | |---__init__.py | |---c1.py | |---c2.py | | |---script1.py |---script2.py ``` At the beginning of each session I ask IPython to autoreload modules: ``` %load_ext autoreload %autoreload 2 ``` Now... let's say a1.py contains a class, `A1`, that I want to call from one of the scripts. In the `__init__.py` of package `a` I import the module ``` import a1 ``` Then in the script I import the class I need ``` from a.a1 import A1 ``` If there is some error in class A1 and I modify it, there is no way to have Python reload it without restarting the kernel. I tried `del a1`, `del sys.modules['a1']`, and `del sys.modules['a']`. Each time it uses the old version of the class until I restart the kernel... Can anyone give me some suggestions?
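For reference, a minimal sketch of forcing the reload by hand, assuming Python 3's `importlib` (an editorial addition, not part of the original question):

```
import importlib
import a.a1

importlib.reload(a.a1)    # re-executes a/a1.py in place
from a.a1 import A1       # re-bind the name to the reloaded class
```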
2017/07/25
[ "https://Stackoverflow.com/questions/45301335", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2558671/" ]
Just type: ``` last root ``` This will give you details of the IP addresses of machines where users logged in as root.
Without knowing your `Input_file` I am providing this solution; could you please try the following and let me know if it helps. ``` awk '{match($0,/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/);array[substr($0,RSTART,RLENGTH)]} END{for(i in array){print i,array[i]}}' Input_file ``` If the above does not help, kindly show us a sample `Input_file` and the expected output in code tags, so that we can help you further.
14,060
32,031,111
I am trying to run a simple Python script with crontab, but I can’t get it to work. I can run a simple program in crontab when not using Python though. Here is the line I have in my crontab file that does work: ``` * * * * * echo “cron test” >> /home/ftpuser/dev/mod_high_lows/hello.txt ``` I can also run this Python script, testit.py, directly from the command line. This is my testit.py file that outputs a csv file: ``` #!/usr/bin/env python import f_file_handling _data = [(12,15,17)] f_file_handling.csv_out('my_file_test',_data) ``` The above file has a function I made, but I know it works since it does what I expect when I run testit.py from the command line like this: ``` python testit.py ``` So I got crontab to work on its own and the testit.py file to work on its own; then I tried to run the testit.py file with crontab. I did make the testit.py file executable with the command: ``` chmod +x testit.py ``` And I can see it is executable because the file shows up in green in my Linux command window when I’m in the proper directory. Now in the same crontab file I used to run the earlier crontab test, I added the following line: ``` * * * * * /home/ftpuser/dev/mod_high_lows/testit.py ``` Yes, I am trying to get this to execute every minute, just trying to run the simplest test possible to get crontab and Python to work together. Here is what I am using: * Ubuntu 14.04.2 LTS (GNU/Linux 3.13.0-52-generic x86_64) * Python 2.7 The above are on a Linux server I have set up. You see the shebang line at the top of my testit.py file; from my research this should work. As for my testit.py Python file, I wrote it on a Windows machine and then transferred it to the server, but when crontab and Python did not work together I also coded the file from the Linux command window using the Nano text editor. This makes no difference when trying to run the testit.py file through crontab: it does not run even when I write the testit.py code directly on the Linux server (just in case Windows created hidden characters in my file).
2015/08/16
[ "https://Stackoverflow.com/questions/32031111", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1087809/" ]
* cron runs commands in a limited environment. Only a few environment variables are automatically set. It loads the environment specified by `/etc/environment` and `/etc/security/pam_env.conf`, but not the environment variables you might have set in your `.bashrc` or `.profile`. Set the crontab entry ``` * * * * * /usr/bin/env > /tmp/out ``` to take a look at what environment variables are actually set. Don't forget to remove the crontab entry once you have /tmp/out. * When running Python scripts, one important environment variable that you would probably need to set is PYTHONPATH. So at the top of your crontab add a PYTHONPATH setting such as: ``` PYTHONPATH=/home/ftpuser/dev/mod_high_lows ``` Be sure to add the directory which contains the `f_file_handling` module so Python will find the module when it runs the statement ``` import f_file_handling ``` * Finally, also note that cron [runs commands in your home directory](https://unix.stackexchange.com/a/38956/3330) by default. It would be better to be explicit, however, and supply a full path whenever you specify a file in your script: ``` f_file_handling.csv_out('/path/to/my_file_test',_data) ```
I'm not sure if this will help, but I've always successfully managed to get python scripts to run successfully from cron by adding this line to the end of the crontab file: ``` @reboot python /home/ftpuser/dev/mod_high_lows/testit.py & ``` The `&` is necessary at the end of the line. If this is what you need, and you would like for this script to execute every minute, you could put the whole script in a loop and then put a one minute sleep at the end of each loop's iteration. You may also need to put a `sudo` before the python, although crontabs should run as root anyway so this probably won't be necessary. This method has worked for running scripts on boot up for my Raspberry Pi though.
14,061
62,714,282
In the [ThreadPoolExecutor documentation](https://docs.python.org/3/library/concurrent.futures.html) it says: > Changed in version 3.5: If `max_workers` is `None` or not given, it will default to the number of processors on the machine, multiplied by 5, assuming that `ThreadPoolExecutor` is often used to overlap I/O instead of CPU work and the number of workers should be higher than the number of workers for `ProcessPoolExecutor`. Is there any reason why ThreadPoolExecutor uses a factor of 5, and is it important to use it for other threading applications in Python?
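A minimal sketch of the default the quoted docs describe (an editorial addition; note the formula changed again in Python 3.8):

```
import os
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor()  # max_workers defaults to os.cpu_count() * 5 here
print(os.cpu_count() * 5)        # the documented default worker count
```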
2020/07/03
[ "https://Stackoverflow.com/questions/62714282", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13859123/" ]
You are writing `binascii.hexlify(self._public_key.exportKey(format='DER')).decode('ascii')` on the next line. Try writing it after the `return` keyword. Hopefully your error will go away.
You should define a Client instance and then get its `_public_key`: ``` binascii.hexlify(Client._public_key.exportKey(format='DER')).decode('ascii') ```
14,063
38,044,788
Its login is fine, but I am not able to track down the issue. Here is the code: ``` while True: time.sleep(10) browser.get("https://www.instagram.com/accounts/edit/?wo=1") ``` I am getting this error when I run project.py: ``` Superuser$ python project.py user diabruxaneas1989 with proxy 192.126.184.130:8800 running Traceback (most recent call last): File "project.py", line 158, in <module> Main() File "project.py", line 94, in Main username = browser.find_element_by_id('id_username') File "/Library/Python/2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 269, in find_element_by_id return self.find_element(by=By.ID, value=id_) File "/Library/Python/2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 745, in find_element {'using': by, 'value': value})['value'] File "/Library/Python/2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 236, in execute self.error_handler.check_response(response) File "/Library/Python/2.7/site-packages/selenium/webdriver/remote/errorhandler.py", line 194, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: {"method":"id","selector":"id_username"} Stacktrace: at FirefoxDriver.prototype.findElementInternal_ (file:///var/folders/rc/vx_d14f14p97l02f35j6_dvw0000gn/T/tmp6I93xt/extensions/fxdriver@googlecode.com/components/driver-component.js:10770) at FirefoxDriver.prototype.findElement (file:///var/folders/rc/vx_d14f14p97l02f35j6_dvw0000gn/T/tmp6I93xt/extensions/fxdriver@googlecode.com/components/driver-component.js:10779) at DelayedCommand.prototype.executeInternal_/h (file:///var/folders/rc/vx_d14f14p97l02f35j6_dvw0000gn/T/tmp6I93xt/extensions/fxdriver@googlecode.com/components/command-processor.js:12661) at DelayedCommand.prototype.executeInternal_ (file:///var/folders/rc/vx_d14f14p97l02f35j6_dvw0000gn/T/tmp6I93xt/extensions/fxdriver@googlecode.com/components/command-processor.js:12666) at DelayedCommand.prototype.execute/< (file:///var/folders/rc/vx_d14f14p97l02f35j6_dvw0000gn/T/tmp6I93xt/extensions/fxdriver@googlecode.com/components/command-processor.js:12608) ``` I have tried other solutions as well but still get the same error. Any help would be appreciated, thanks.
2016/06/27
[ "https://Stackoverflow.com/questions/38044788", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6445447/" ]
You forgot to close the `href` attribute (double-quotes): ``` echo '<a href="directMessageRoom.php?directMessageRoomID='.$row3['id'].'"></a>'; right here ---^ ```
Be aware of lots of white-space in your form fields. Take your submit button for example: you wrote `<input type = "submit" ...>`. You are accidentally inserting white space. It should be `<input type="submit" ...>`.
14,065
55,549,014
I get the error FileNotFoundError: [WinError 2] The system cannot find the file specified when running the code below. It is a little hard to find a good solution for this problem on Windows, which I am running, as compared to UNIX, for which I can find working code. ``` from subprocess import Popen, check_call p1 = Popen('start http://stackoverflow.com/') p2 = Popen('start http://www.google.com/') p3 = Popen('start http://www.facebook.com/') time.sleep(60) for pid in [p1.pid,p2.pid,p3.pid]: check_call(['taskkill', '/F', '/T', '/PID', str(pid)]) ``` I want the code to open the pages for 60 seconds and then close them. I know there is a similar topic at the link: , but firstly it is for Python 2, and I have tried the codes using the subprocess module; they are identical to the code I am using, which does not work.
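For reference, a hedged sketch of the usual explanation: `start` is a cmd.exe built-in, not an executable, so `Popen` cannot find it unless the command goes through the shell (an editorial addition, not part of the original post):

```
from subprocess import Popen

p1 = Popen('start http://stackoverflow.com/', shell=True)
# Caveat: p1.pid is now the pid of the short-lived shell, not the browser,
# so the taskkill loop above will not close the opened tabs.
```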
2019/04/06
[ "https://Stackoverflow.com/questions/55549014", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6104634/" ]
A big problem there is that your width will be zero. The X and Y scales are factors, as in multipliers, and anything times zero is zero. Hence ``` ScaleTransform(0, -1); ``` will give you something with no width. You presumably want the same width and hence: ``` ScaleTransform(1, -1); ``` That might still have another problem if you want the thing to be flipped about its centre, but at least it ought to show up when you use it. The CenterY calculation is perhaps less than obvious. You can work out the height of a geometry using its bounds. Since you're creating a new pathgeometry, maybe you want to retain the original without any transform. I put some code together that manipulates a geometry from resources and uses it to add a path to a canvas. Markup: ``` <Window.Resources> <Geometry x:Key="Star"> M16.001007,0L20.944,10.533997 32,12.223022 23.998993,20.421997 25.889008,32 16.001007,26.533997 6.1109924,32 8,20.421997 0,12.223022 11.057007,10.533997z </Geometry> </Window.Resources> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="100"/> <ColumnDefinition Width="*"/> </Grid.ColumnDefinitions> <Button x:Name="myButton" Click="MyButton_Click"> </Button> <Canvas Grid.Column="1" Name="myCanvas"/> </Grid> ``` Code: ``` private void MyButton_Click(object sender, RoutedEventArgs e) { Geometry geom = this.Resources["Star"] as Geometry; Geometry flipped = geom.Clone(); var bounds = geom.Bounds; double halfY = (bounds.Bottom - bounds.Top) / 2.0; flipped.Transform = new ScaleTransform(1, -1, 0, halfY ); PathGeometry pg = PathGeometry.CreateFromGeometry(flipped); var path = new System.Windows.Shapes.Path {Data=pg, Fill= System.Windows.Media.Brushes.Red }; this.myCanvas.Children.Add(path); } ```
Just set the PathGeometry's `Transform` property: ``` var myPathGeometry = new PathGeometry(); myPathGeometry.Figures.Add(myPathFigure); myPathGeometry.Transform = new ScaleTransform(1, -1); ``` Note that you may also need to set the ScaleTransform's `CenterY` property for a correct vertical alignment.
14,066
63,614,832
I've faced a puzzling issue with global variables recently and I have no idea why this behavior occurs in Python: ``` # declaring some global variables variable = 'peter' list_variable_1 = ['a','b'] list_variable_2 = ['c','d'] def update_global_variables(): """without using global line""" variable = 'PETER' # won't update in global scope list_variable_1 = ['A','B'] # won't get updated in global scope list_variable_2[0]= 'C' # updated in global scope surprisingly this way list_variable_2[1]= 'D' # updated in global scope surprisingly this way update_global_variables() print('variable is: %s'%variable) # prints peter print('list_variable_1 is: %s'%list_variable_1) # prints ['a', 'b'] print('list_variable_2 is: %s'%list_variable_2) # prints ['C', 'D'] ``` Why is `list_variable_2` updated in the global scope while the other variables are not?
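A short editorial sketch of the distinction at play: assigning to a bare name inside a function creates a local variable unless the name is declared `global`, whereas `list_variable_2[0] = 'C'` mutates the existing list object in place, so no new name is created:

```
def update_global_variables():
    global variable, list_variable_1
    variable = 'PETER'            # now rebinds the module-level name
    list_variable_1 = ['A', 'B']  # likewise
    list_variable_2[0] = 'C'      # in-place mutation never needed a declaration
    list_variable_2[1] = 'D'
```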
2020/08/27
[ "https://Stackoverflow.com/questions/63614832", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6756530/" ]
From the [exitBeforeEnter](https://www.framer.com/api/motion/animate-presence/#animatepresenceprops.exitbeforeenter) docs: > If set to `true`, `AnimatePresence` will only render one component at a time. The exiting component will finish its exit animation before the entering component is rendered. You have to specify `exit` animations for all components that you are using inside `AnimatePresence`. On route changes, `AnimatePresence` will execute `exit` animations first, and only then will it render the next component. If a component does not have an `exit` animation and it is a child of `AnimatePresence`, then a route change will only change the URL, not the view.
That's normal if you do not add an `exit` animation to each and every route. Main route with AnimatePresence: ``` <AnimatePresence exitBeforeEnter> <Switch location={window.location} key={window.location.pathname}> <Route exact path='/' component={Home} /> <Route exact path='/about' component={About} /> <Route path="*">Page not found</Route> </Switch> </AnimatePresence> ``` About.jsx: ``` const exit = { exit: { opacity: 0, }, } export default function About() { return <motion.h1 {...exit}>Hello</motion.h1> } ``` Home.jsx: ``` const exit = { exit: { opacity: 0, }, } export default function Home() { return <motion.h1 {...exit}>Hello</motion.h1> } ```
14,069
11,767,757
This issue just started, last week I had no issues with the particular source file. I'm using SQLAlchemy and Geoalchemy and the particular block of code that triggers Eclipse and Aptana to start pegging the cpu while simply editing the file is: ``` obsRecs = db.session.query(multi_obs)\ .join(sensor,sensor.row_id == multi_obs.sensor_id)\ .join(platform,platform.row_id == sensor.platform_id)\ .join(m_type,m_type.row_id == multi_obs.m_type_id)\ .join(m_scalar_type,m_scalar_type.row_id == m_type.m_scalar_type_id)\ .join(obs_type,obs_type.row_id == m_scalar_type.obs_type_id)\ .join(uom_type,uom_type.row_id == m_scalar_type.uom_type_id)\ .filter(multi_obs.m_date > dateOffset)\ .filter(multi_obs.m_type_id.in_(mTypes))\ .filter(multi_obs.d_top_of_hour == 1)\ .filter(platform.active < 3)\ .filter(platform.the_geom.within(WKTSpatialElement(bboxPoly, -1)))\ .order_by(platform.row_id)\ .all() ``` As soon as I start editing anything in that block, the issue occurs. I've cut out that bit of code and edited other areas in the file, and I have no problems. I've edited other python files with no problems as well. At first I thought there was some issue with code completion, so I turned that off and still get the issue. I was using Eclipse Indigo and if I did not Force Quit the app, an out of memory Java error would get thrown. In Aptana the cpu spikes, then will go back to an idle usage, then spike again if I started editing again. My setup: OS X Lion Java(TM) SE Runtime Environment (build 1.6.0\_31-b04-415-11M3646) Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01-415, mixed mode) i7 Quad core, 8Gb RAM I thought the issue may have been triggered by the latest java update from Apple, so I rolled back the entire machine via Time Machine to a pre-update state and still have the issue. I'd appreciate any pointers, I am at the point of trying to find a non-PyDev based solution. 
**Edit** Allowing Eclipse to run until erroring out, Console.App does show the following: ``` 8/1/12 9:14:01.114 PM [0x0-0x182182].org.eclipse.eclipse: Exception in thread "[Timer] - Main Queue Handler" Exception in thread "Poller SunPKCS11-Darwin" 8/1/12 9:14:01.114 PM [0x0-0x182182].org.eclipse.eclipse: java.lang.OutOfMemoryError: Java heap space 8/1/12 9:14:01.114 PM [0x0-0x182182].org.eclipse.eclipse: at sun.security.pkcs11.wrapper.PKCS11.C_GetSlotInfo(Native Method) 8/1/12 9:14:01.115 PM [0x0-0x182182].org.eclipse.eclipse: at sun.security.pkcs11.SunPKCS11.initToken(SunPKCS11.java:767) 8/1/12 9:14:01.115 PM [0x0-0x182182].org.eclipse.eclipse: at sun.security.pkcs11.SunPKCS11.access$100(SunPKCS11.java:42) 8/1/12 9:14:01.115 PM [0x0-0x182182].org.eclipse.eclipse: at sun.security.pkcs11.SunPKCS11$TokenPoller.run(SunPKCS11.java:700) 8/1/12 9:14:01.115 PM [0x0-0x182182].org.eclipse.eclipse: at java.lang.Thread.run(Thread.java:680) 8/1/12 9:14:01.115 PM [0x0-0x182182].org.eclipse.eclipse: java.lang.OutOfMemoryError: Java heap space 8/1/12 9:14:01.115 PM [0x0-0x182182].org.eclipse.eclipse: at java.lang.AbstractStringBuilder.<init>(AbstractStringBuilder.java:45) 8/1/12 9:14:01.115 PM [0x0-0x182182].org.eclipse.eclipse: at java.lang.StringBuffer.<init>(StringBuffer.java:103) 8/1/12 9:14:01.115 PM [0x0-0x182182].org.eclipse.eclipse: at org.eclipse.equinox.internal.util.impl.tpt.threadpool.ThreadPoolFactoryImpl.execute0(ThreadPoolFactoryImpl.java:94) 8/1/12 9:14:01.115 PM [0x0-0x182182].org.eclipse.eclipse: at org.eclipse.equinox.internal.util.impl.tpt.timer.TimerImpl.run(TimerImpl.java:110) 8/1/12 9:14:01.115 PM [0x0-0x182182].org.eclipse.eclipse: at java.lang.Thread.run(Thread.java:680) ``` **Edit 2** Grabbed the Oracle JDK, set the system to use it. Same issue. **EDIT 3** Similar problem is still occurring after a restore from backups. This issue must have been lurking and the code block managed to be just right to trigger it. Code completion engine is still prime suspect.
2012/08/01
[ "https://Stackoverflow.com/questions/11767757", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1569769/" ]
I was running into the same problem but for a different, long query. I tried disabling auto-complete, tried the `-clean` thing, no luck. To fix, I waited for the memory leak to explode and used `jmap.exe` to dump the heap. I then ran Eclipse Memory Analyzer to see where my memory was going, the screenshot is attached below. There was something about `com.python.pydev.refactoring.markoccurrences.MarkOccurrencesJob` in the job, so I disabled that option in Preferences->Pydev->Editor. Ever since I did that Eclipse is a speed demon and no more crashing. Someone with some street cred should post this as a bug at pydev. ![enter image description here](https://i.stack.imgur.com/l1WOW.png)
In the past I have had some success solving Eclipse insanity by starting with a clean workspace. It's kind of a shot in the dark, but try the following one by one: 1. Start Eclipse with the `-clean` option and the existing workspace. 2. If the above does not work, try editing the same file in a new workspace. If you look in the Console.app when editing the file, do you see any relevant log messages? Does the Error Log View inside Eclipse throw up any error messages when you try to edit the file in Aptana? **Update** I just tried reproducing this in Eclipse Juno on Windows 7 64 Bit and had no problems with it: ![enter image description here](https://i.stack.imgur.com/3bpGt.png) I think it might be a time-consuming exercise, but would it be possible for you to upgrade to Eclipse Juno?
14,075
394,500
In the Python program I'm writing, I've got a thread which iterates over a large structure in memory and writes it incrementally into a file-like object. I've got another thread which takes a file-like object and writes it to disk. Is there an easy way to connect the two, such that any data input from the first thread will be buffered for the second? Specifically, I'm trying to pass data to subprocess.Popen(). The process will read from stdin, but you cannot pass a "file-like" object to Popen because it calls stdin.fileno() and blows up unless you have a real file. Instead, you need to pass the PIPE argument to Popen, which allows you to use proc.stdin as a file-like object. But if you've already got a file-like object, there doesn't seem to be a great way to yoke the two of them together.
2008/12/27
[ "https://Stackoverflow.com/questions/394500", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9161/" ]
You should use the [Queue](http://docs.python.org/library/queue.html) module for sharing sequential data across threads. You would have to make a file-like Queue subclass, where .read and .write mutually block each other, with a buffer in-between. OTOH, I wonder why the first thread can't write to the real file in the first place.
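A minimal sketch of such a file-like wrapper (an editorial addition, using the Python 3 module name; the original answer predates it):

```
import queue

class QueueFile:
    """File-like adapter over a Queue: writes enqueue data, reads dequeue it."""
    def __init__(self):
        self._q = queue.Queue()
    def write(self, data):
        self._q.put(data)
    def read(self):
        return self._q.get()  # blocks until the writer has put something
```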
I'm not clear what you're trying to do here. This sounds like a job for a regular old pipe, which is a file-like object. I'm guessing, however, that you mean you've got a stream of some other sort. It also sounds a lot like what you want is a Python [Queue](http://docs.python.org/library/queue.html), or maybe a [tempfile](http://docs.python.org/library/tempfile.html#module-tempfile).
14,080
25,387,286
When I do `pip install statsmodels` it gives me `ImportError: statsmodels requires patsy. http://patsy.readthedocs.org`, but then I run `pip install patsy` and it says it's successful, yet running `pip install statsmodels` still gives me the same error about requiring patsy. How can this be? --- ``` $ sudo pip install patsy Requirement already satisfied (use --upgrade to upgrade): patsy in /Library/Python/2.7/site-packages/patsy-0.3.0-py2.7.egg Requirement already satisfied (use --upgrade to upgrade): numpy in /Library/Python/2.7/site-packages/numpy-1.8.2-py2.7-macosx-10.9-intel.egg (from patsy) Cleaning up... $ sudo pip install statsmodels Downloading/unpacking statsmodels Downloading statsmodels-0.5.0.tar.gz (5.5MB): 5.5MB downloaded Running setup.py (path:/private/tmp/pip_build_root/statsmodels/setup.py) egg_info for package statsmodels Traceback (most recent call last): File "<string>", line 17, in <module> File "/private/tmp/pip_build_root/statsmodels/setup.py", line 463, in <module> check_dependency_versions(min_versions) File "/private/tmp/pip_build_root/statsmodels/setup.py", line 122, in check_dependency_versions raise ImportError("statsmodels requires patsy. http://patsy.readthedocs.org") ImportError: statsmodels requires patsy. http://patsy.readthedocs.org Complete output from command python setup.py egg_info: Traceback (most recent call last): File "<string>", line 17, in <module> File "/private/tmp/pip_build_root/statsmodels/setup.py", line 463, in <module> check_dependency_versions(min_versions) File "/private/tmp/pip_build_root/statsmodels/setup.py", line 122, in check_dependency_versions raise ImportError("statsmodels requires patsy. http://patsy.readthedocs.org") ImportError: statsmodels requires patsy. http://patsy.readthedocs.org ---------------------------------------- Cleaning up... Command python setup.py egg_info failed with error code 1 in /private/tmp/pip_build_root/statsmodels Storing debug log for failure in /Users/Jacob/Library/Logs/pip.log ```
2014/08/19
[ "https://Stackoverflow.com/questions/25387286", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3391108/" ]
What the error message doesn't tell you is that the module `six` not being there is really the problem. Found this out by doing `import patsy` and having it fail and tell me that I needed `six`. So I did `pip install six` and now the patsy import worked, as did the `pip install statsmodels`.
For me: ``` $python3 -m pip install --upgrade patsy $python3 -m pip install statsmodels ``` worked!
14,085
32,085,019
I am very new to Google App Engine and was trying to understand blob storage and its API, but can't get it working. I followed the tutorial below from Google on using the Blobstore API: <https://cloud.google.com/appengine/docs/python/blobstore/> GitHub: <https://github.com/GoogleCloudPlatform/appengine-blobstore-python/blob/master/main.py> I always get a 404 Not Found error; the image is getting uploaded to the blob store but is not being retrieved. Any help is greatly appreciated.
2015/08/19
[ "https://Stackoverflow.com/questions/32085019", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4822241/" ]
You are dealing with a jQuery object; methods like removeChild() and [appendChild()](https://developer.mozilla.org/en-US/docs/Web/API/Node/appendChild) belong to DOM elements, not to the jQuery object. To remove all contents of an element you can use [.empty()](http://api.jquery.com/empty), and to set the text content of an element you can use [.text()](http://api.jquery.com/text). Using .text() will replace existing content, so in your case you can just use: ```js var $resourceSpan = $("#divname" + dynamic_variable); var stringVar = functionThatReturnsString(); $resourceSpan.text(stringVar); ```
Did you want to do something like this? ``` <html> <head> <title>STACK OVERFLOW TESTS</title> <style> </style> </head> <body> <span>HI, IM SOME TEXT</span> <input type = 'button' value = 'Click me!' onClick = 'changeText()'></input> <!-- Change the text with a button for example... --> <script> var text = 'I AM THE NEW TEXT'; // The text to go in the span element function changeText(){ var span = document.querySelector('span'); span.innerHTML = text; } </script> </body> </html> ```
14,090
45,031,524
I have a melted DataFrame I would like to pivot, but I cannot manage to do so using 2 columns as the index. ``` import pandas as pd from numpy import nan df = pd.DataFrame({'A': {0: 'XYZ', 1: 'XYZ', 2: 'XYZ', 3: 'XYZ', 4: 'XYZ', 5: 'XYZ', 6: 'XYZ', 7: 'XYZ', 8: 'XYZ', 9: 'XYZ', 10: 'ABC', 11: 'ABC', 12: 'ABC', 13: 'ABC', 14: 'ABC', 15: 'ABC', 16: 'ABC', 17: 'ABC', 18: 'ABC', 19: 'ABC'}, 'B': {0: '01/01/2017', 1: '02/01/2017', 2: '03/01/2017', 3: '04/01/2017', 4: '05/01/2017', 5: '01/01/2017', 6: '02/01/2017', 7: '03/01/2017', 8: '04/01/2017', 9: '05/01/2017', 10: '01/01/2017', 11: '02/01/2017', 12: '03/01/2017', 13: '04/01/2017', 14: '05/01/2017', 15: '01/01/2017', 16: '02/01/2017', 17: '03/01/2017', 18: '04/01/2017', 19: '05/01/2017'}, 'C': {0: 'Price', 1: 'Price', 2: 'Price', 3: 'Price', 4: 'Price', 5: 'Trading', 6: 'Trading', 7: 'Trading', 8: 'Trading', 9: 'Trading', 10: 'Price', 11: 'Price', 12: 'Price', 13: 'Price', 14: 'Price', 15: 'Trading', 16: 'Trading', 17: 'Trading', 18: 'Trading', 19: 'Trading'}, 'D': {0: '100', 1: '101', 2: '102', 3: '103', 4: '104', 5: 'Yes', 6: 'Yes', 7: 'Yes', 8: 'Yes', 9: 'Yes', 10: '50', 11: nan, 12: '48', 13: '47', 14: '46', 15: 'Yes', 16: 'No', 17: 'Yes', 18: 'Yes', 19: 'Yes'}}) ``` So: ``` A B C D XYZ 01/01/2017 Price 100 XYZ 02/01/2017 Price 101 XYZ 03/01/2017 Price 102 XYZ 04/01/2017 Price 103 XYZ 05/01/2017 Price 104 XYZ 01/01/2017 Trading Yes XYZ 02/01/2017 Trading Yes XYZ 03/01/2017 Trading Yes XYZ 04/01/2017 Trading Yes XYZ 05/01/2017 Trading Yes ABC 01/01/2017 Price 50 ABC 02/01/2017 Price ABC 03/01/2017 Price 48 ABC 04/01/2017 Price 47 ABC 05/01/2017 Price 46 ABC 01/01/2017 Trading Yes ABC 02/01/2017 Trading No ABC 03/01/2017 Trading Yes ABC 04/01/2017 Trading Yes ABC 05/01/2017 Trading Yes ``` would become: ``` A B Trading Price ABC 01/01/2017 Yes 50 02/01/2017 No 03/01/2017 Yes 48 04/01/2017 Yes 47 05/01/2017 Yes 46 XYZ 01/01/2017 Yes 100 02/01/2017 Yes 101 03/01/2017 Yes 102 04/01/2017 Yes 103 05/01/2017 Yes 104 ``` or: ``` ABC XYZ Trading Price Trading Price 01/01/2017 Yes 50 Yes 100 02/01/2017 No Yes 101 03/01/2017 Yes 48 Yes 102 04/01/2017 Yes 47 Yes 103 05/01/2017 Yes 46 Yes 104 ``` I thought this could simply be done with pivot, but I get an error: ``` df.pivot(index=['A', 'B'], columns = ['C'], values = ['D'] ) Traceback (most recent call last): File "<ipython-input-41-afcc34979ff8>", line 1, in <module> df.pivot(index=['A', 'B'], columns = ['C'], values = ['D'] ) File "C:\Miniconda\lib\site-packages\pandas\core\frame.py", line 3951, in pivot return pivot(self, index=index, columns=columns, values=values) File "C:\Miniconda\lib\site-packages\pandas\core\reshape\reshape.py", line 377, in pivot index=MultiIndex.from_arrays([index, self[columns]])) File "C:\Miniconda\lib\site-packages\pandas\core\series.py", line 248, in __init__ raise_cast_failure=True) File "C:\Miniconda\lib\site-packages\pandas\core\series.py", line 3027, in _sanitize_array raise Exception('Data must be 1-dimensional') Exception: Data must be 1-dimensional ``` In R this would quickly be done with gather/spread. Thanks!
2017/07/11
[ "https://Stackoverflow.com/questions/45031524", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4947923/" ]
Is that what you want? ``` In [23]: df.pivot_table(index=['A','B'], columns='C', values='D', aggfunc='first') Out[23]: C Price Trading A B ABC 01/01/2017 50 Yes 02/01/2017 NaN No 03/01/2017 48 Yes 04/01/2017 47 Yes 05/01/2017 46 Yes XYZ 01/01/2017 100 Yes 02/01/2017 101 Yes 03/01/2017 102 Yes 04/01/2017 103 Yes 05/01/2017 104 Yes ```
I found the following is possible: ``` df.set_index(['A', 'C', 'B']).unstack().T Out[59]: A ABC XYZ C Price Trading Price Trading B D 01/01/2017 50 Yes 100 Yes 02/01/2017 NaN No 101 Yes 03/01/2017 48 Yes 102 Yes 04/01/2017 47 Yes 103 Yes 05/01/2017 46 Yes 104 Yes ``` And: ``` df.set_index(['A', 'B', 'C']).unstack() Out[61]: D C Price Trading A B ABC 01/01/2017 50 Yes 02/01/2017 NaN No 03/01/2017 48 Yes 04/01/2017 47 Yes 05/01/2017 46 Yes XYZ 01/01/2017 100 Yes 02/01/2017 101 Yes 03/01/2017 102 Yes 04/01/2017 103 Yes 05/01/2017 104 Yes ```
14,091
40,712,887
I am confused about when a queryset uses its `_result_cache` and when it directly hits the database. For example (in the Python shell): ``` user = User.objects.all() # User is one of my models print(user) # shows data from the database (hitting the database) print(user._result_cache) # output is None len(user) # output is a nonzero number print(user._result_cache) # this time, the output is not None ``` So, my questions are: 1. Why is `_result_cache` None after hitting the database? 2. Does "the queryset is evaluated" mean that `_result_cache` is not None? 3. When does the queryset use its `_result_cache` instead of hitting the database?
2016/11/21
[ "https://Stackoverflow.com/questions/40712887", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6751999/" ]
A queryset will cache its data in `self._result_cache` whenever the *complete* queryset is evaluated. This includes iterating over the queryset, calling `bool()`, `len()` or `list()`, or pickling the queryset. The `print()` function indirectly calls `repr()` on the queryset. `repr()` will evaluate the queryset to include the data in the string representation, but it will not evaluate the *complete* queryset. Instead, it will [get a slice](https://github.com/django/django/blob/74742aa956d9cef0493b57f50f1cb7dc0f987fc6/django/db/models/query.py#L228) of the queryset and use that in the representation. This prevents huge queries when all you want is a simple string representation. Since only a slice is evaluated, it will not cache the results. When the cache is filled, every method that does not create a new queryset object will use the cache instead of making a new query. In your specific example, if you switch the `print()` and `len()` statement, your queryset will only hit the database once: ``` user = User.objects.all() len(user) # complete queryset is evaluated print(user._result_cache) # cache is filled print(user) # slicing an evaluated queryset will not hit the database print(user._result_cache) ```
There is an explanation for this behavior: when you use `User.objects.all()`, the database is not hit. When you do not iterate through the queryset, the `_result_cache` is always None. But when you invoke the len() function, the queryset is iterated, the database is hit, and the result also sets the `_result_cache` for that queryset. Here is the source code for Django's `__len__`: ``` def __len__(self): # Since __len__ is called quite frequently (for example, as part of # list(qs), we make some effort here to be as efficient as possible # whilst not messing up any existing iterators against the QuerySet. if self._result_cache is None: if self._iter: self._result_cache = list(self._iter) else: self._result_cache = list(self.iterator()) elif self._iter: self._result_cache.extend(self._iter) return len(self._result_cache) ``` Hope this clarifies all your questions. Thanks.
14,092
46,994,144
I am a beginner in python. I have written the following python code: ``` import subprocess PIPE = subprocess.PIPE process = subprocess.Popen(['git', 'status'], stdout=PIPE, stderr=PIPE, cwd='my\git-repo\path',shell=True) stdout_str, stderr_str = process.communicate() print (stdout_str) print (stderr_str) ``` Upon execution of the python script, I get the following output: ``` b'' b'"git" is not recognized as an internal or external command,\r\noperable program or batch file.\r\n' ``` I have added '..../git/bin' to my PATH variable, and since then git commands run fine in cmd.exe. Since 'shell=True', I thought this would run too. I can't seem to get a hold of the issue. Help appreciated.
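For what it's worth, a hedged sketch of a common fix (an editorial addition; the repo path below is a placeholder): with a list of arguments there is no need for `shell=True`, and a raw string keeps the backslashes in `cwd` from being read as escapes.

```
import subprocess

process = subprocess.Popen(
    ['git', 'status'],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    cwd=r'C:\my\git-repo\path',  # placeholder path; raw string avoids escape surprises
)
stdout_str, stderr_str = process.communicate()
```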
2017/10/28
[ "https://Stackoverflow.com/questions/46994144", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8849445/" ]
You have to specify the full php binary path inside the cronjob. Assuming your php binary full path is `/usr/bin/php` then your cronjob will look like this: ``` /usr/bin/php -q /home/user/tracker.domain.com/cron/3.php ```
You should use `php` before `-q` in your cron jobs, like this: `php -q /home/user/tracker.domain.com/cron/3.php`
14,093
11,860,252
In a Python script I make a gobject call. I need to know when it's finished. Are there any possible ways to check this? Are there functions or something similar to check? My code is: ``` gobject.idle_add(main.process) class main: def process(): <-- needs some time to finish --> next.call.if.finished() ``` I want to start another call, pending on the first one finishing. I looked through the gobject reference, but I didn't find anything suitable. Thanks
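A minimal sketch of the usual idle-callback pattern, assuming PyGTK's `gobject` (an editorial addition): the callback runs once the main loop is idle, and returning False removes it, so the follow-up can be triggered at the end of the work.

```
import gobject

def process():
    # ... the slow work ...
    on_finished()  # hypothetical follow-up call, runs once the work is done
    return False   # False => run once, then remove the idle callback

gobject.idle_add(process)
```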
2012/08/08
[ "https://Stackoverflow.com/questions/11860252", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1508490/" ]
``` for($i = 0; $i < count($key); $i++){ $query.=" AND (dealTitle rlike '[[:<:]]".$key[$i]."[[:>:]]' )"; } ``` You should use AND. EDITED: ``` for($i = 0; $i < count($key); $i++){ $query.=" AND (dealTitle like '% ".$key[$i]." %' or dealTitle like '% ".$key[$i]."' or dealTitle like '".$key[$i]."%') "; } ```
try this: ``` AND (CONCAT(' ',dealTitle,' ') LIKE '% car %' and CONCAT(' ',dealTitle,' ') LIKE '% wash %' ) ```
14,094
20,289,373
I am developing a program in Python and have reached a point I don't know how to get past. My intention is to use a `with` statement and avoid the use of try/except. So far, my idea is to use something like a `continue` statement, as it would be used inside an `except`. However, I don't seem to succeed. Let's suppose this is my code: ``` class A(object): def __enter__(self): return self def __exit__(self, exc_type, exc_value, traceback): return True with A(): print "Ok" raise Exception("Excp") print "I want to get here" print "Outside" ``` Reading the docs I have found that by returning True from the `__exit__` method, I can prevent the exception from propagating, much like a `pass` statement would. However, this will immediately skip everything left in the with block, which I'm trying to avoid: I want everything to be executed, even if an exception is raised. So far I haven't been able to find a way to do this. Any advice would be appreciated. Thank you very much.
2013/11/29
[ "https://Stackoverflow.com/questions/20289373", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
It's not possible. The only two options are (a) let the exception propagate by returning a false-y value or (b) swallow the exception by returning True. There is no way to resume the code block from where the exception was thrown. Either way, your `with` block is over.
You can't. The `with` statement's purpose is to handle cleanup automatically (which is why exceptions can be suppressed *when exiting it*), not to act as Visual Basic's infamous `On Error Resume Next`. If you want to continue the execution of a block after an exception is raised, you need to wrap whatever raises the exception in a `try/except` statement.
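A minimal sketch of that wrapping, in the question's Python 2 style (an editorial addition):

```
with A():
    print "Ok"
    try:
        raise Exception("Excp")
    except Exception:
        pass                      # swallow it and keep going
    print "I want to get here"
print "Outside"
```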
14,100
73,642,679
I have a function in C which is integrated into Python as a library. The Python looks something like this: ``` import ctypes import numpy as np lib=ctypes.cdll.LoadLibrary("./array.so") lib.eq.argtypes=(ctypes.c_int, ctypes.POINTER(ctypes.c_float), ctypes.POINTER(ctypes.c_float)) params = parameters.ctypes.data_as(ctypes.POINTER(ctypes.c_float)) results = np.zeros(5).ctypes.data_as(ctypes.POINTER(ctypes.c_float)) lib.get_results(ctypes.c_int(5),params,results) ``` And the C code in array.c looks something like: ``` void get_results(int size, float params[], float results[5]) { /* do some calculations which use params */ results[0] = ... results[1] = ... results[2] = ... results[3] = ... results[4] = ... } ``` How would one access the array (pointer) `results` in Python? I couldn't figure it out, since the void function in C doesn't actually return any value.
2022/09/08
[ "https://Stackoverflow.com/questions/73642679", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18335909/" ]
You were close. The input arrays need to be `dtype=np.float32` (or `ct.c_float`) to match the C 32-bit `float` parameters. `results` is changed in-place, so print the updated `results` after the function call. You can also use `ndpointer` to declare the *exact* type of arrays expected, so `ctypes` can check that the correct array type is passed. In your original code `np.zeros(5)` defaults to `np.float64` type. **test.c** ```c #ifdef _WIN32 # define API __declspec(dllexport) #else # define API #endif API void get_results(int size, float params[], float results[5]) { // Make some kind of calculation for(int i = 0; i < size; ++i) { results[0] += params[i]; results[1] += params[i] * 2; results[2] += params[i] * 3; results[3] += params[i] * 4; results[4] += params[i] * 5; } } ``` **test.py** ```py import ctypes as ct import numpy as np lib = ct.CDLL('./test') # Since you are using numpy, we can use ndpointer to declare # the exact type of numpy array expected. In this case, # params is a one-dimensional array of any size, and # results is a one-dimensional array of exactly 5 elements. # Both need to be of np.float32 type to match C float. lib.get_results.argtypes = (ct.c_int, np.ctypeslib.ndpointer(dtype=np.float32, ndim=1), np.ctypeslib.ndpointer(dtype=np.float32, shape=(5,))) lib.get_results.restype = None params = np.array([1,2,3], dtype=np.float32) results = np.zeros(5, dtype=np.float32) lib.get_results(len(params), params, results) print(results) ``` Output: ```none [ 6. 12. 18. 24. 30.] ```
My suggestion: ```py import ctypes import numpy as np from os.path import abspath from ctypes import cdll, c_void_p, c_int params = np.ascontiguousarray(np.zeros(5, dtype=np.float64)) results = np.ascontiguousarray(np.zeros(5, dtype=np.float64)) lib = cdll.LoadLibrary(abspath('array.so')) # loading the compiled binary shared C library, which should be located in the same directory as this Python script; absolute path is more important for Linux — not necessary for MacOS f = lib.get_results # assigning the C interface function to this Python variable "f" f.arguments = [ c_int, c_void_p, c_void_p ] # declaring the data types for C function arguments f.restype = c_int # declaring the data types for C function return value res = f(params.size, c_void_p(params.ctypes.data), c_void_p(results.ctypes.data)) # calling the C interface function print(res) print(results) ``` ```c int get_results(int size, double * params, double * results) { /* do some calculations which use params */ results[0] = ...; results[1] = ...; results[2] = ...; results[3] = ...; results[4] = ...; return whatever; } ```
14,103
73,224,353
**Context** An empty list: `my_list = []` I also have a list of lists of strings: `words_list = [['this', '', 'is'], ['a', 'list', ''], ['of', 'lists']]` But note that there are some elements in the lists that are null. **Ideal output** I want to randomly choose a non-null element from each list in `words_list` and append that to `my_list` as an element. e.g. ``` >> my_list ['this', 'list', 'of'] ``` **What I currently have** ``` for i in words_list: my_list.append(random.choice(words)) ``` **My issue** But it throws this error: ``` File "random_word_match.py", line 56, in <module> get_random_word(lines) File "random_word_match.py", line 51, in get_random_word word_list.append(random.choice(words)) File "/Users/nathancahn/miniconda3/envs/linguafranca/lib/python3.7/random.py", line 261, in choice raise IndexError('Cannot choose from an empty sequence') from None IndexError: Cannot choose from an empty sequence ``` **What I don't want** I don't want to only append the first non-null element. I don't want null values in `my_list`.
2022/08/03
[ "https://Stackoverflow.com/questions/73224353", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10587373/" ]
Maybe you can try to start thinking this way. The idea is quite simple: just go through the list of word lists, and filter each one down to its non-empty words before passing it to *choice*. ```py >>> for word in words_list: wd = choice([w for w in word if w]) # only non-null words reach choice print(wd) # then just add each non-null word into your my_list, like this: >>> for word in words_list: wd = choice([w for w in word if w]) my_list.append(wd) >>> my_list ['this', 'a', 'lists'] # Later, you could even simplify this into a list comprehension. ```
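The list-comprehension version the answer alludes to, as a one-line sketch:

```
my_list = [choice([w for w in words if w]) for words in words_list]
```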
Try the code below and see if it works for you: ``` import random my_list = [] words_list = [['this', '', 'is'], ['a', 'list', ''], ['of', 'lists']] for sublist in words_list: filtered_list = list(filter(None, sublist)) my_list.append(random.choice(filtered_list)) print(my_list) ```
14,104
46,600,413
New to programming and currently working with Python. I am trying to take a user-inputted string (containing letters, numbers and special characters), then split it multiple times at different points to form new strings. I have done research on the splitting of strings (and lists) and feel I understand it, but I know there must be a better way to do this than what I can think of. This is what I currently have: ``` ass=input("Enter Assembly Number: ") #Sample Input 1 - BF90UQ70321-14 #Sample Input 2 - BS73OA91136-43 ass0=ass[0] ass1=ass[1] ass2=ass[2] ass3=ass[3] ass4=ass[4] ass5=ass[5] ass6=ass[6] ass7=ass[7] ass8=ass[8] ass9=ass[9] ass10=ass[10] ass11=ass[11] ass12=ass[12] ass13=ass[13] code1=ass0+ass2+ass3+ass4+ass5+ass6+ass13 code2=ass0+ass2+ass3+ass4+ass5+ass6+ass9 code3=ass1+ass4+ass6+ass7+ass12+ass6+ass13 code4=ass1+ass2+ass4+ass5+ass6+ass9+ass12 # require 21 different code variations ``` Please tell me that there is a better way to do this. Thank you
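A minimal sketch of one tidier approach (an editorial addition), using the four index patterns visible in the question; the remaining 17 would be added the same way:

```
ass = input("Enter Assembly Number: ")

# Each code is just a tuple of character positions; join the picked characters.
patterns = [
    (0, 2, 3, 4, 5, 6, 13),
    (0, 2, 3, 4, 5, 6, 9),
    (1, 4, 6, 7, 12, 6, 13),
    (1, 2, 4, 5, 6, 9, 12),
    # ... the remaining 17 patterns
]
codes = [''.join(ass[i] for i in p) for p in patterns]
```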
2017/10/06
[ "https://Stackoverflow.com/questions/46600413", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8660214/" ]
Change your jQuery code to: ``` $('#forgotPassword').click(function() { var base_url = '<?php echo base_url()?>'; $('#forgotPasswordEmailError').text(''); var email = $('#forgotPasswordEmail').val(); console.log(email); if(email == ''){ $('#forgotPasswordEmailError').text('Email is required'); }else{ $.ajax({ url : base_url + 'Home/forgotPassword', type : 'POST', data : {email : email}, dataType:'json', success: function(data) { console.log(data); //location.reload(); } }); } }); ``` Change your controller code to: ``` public function forgotPassword() { $email = $this->input->post('email'); $response = ["email" => $email]; echo json_encode($response); } ```
Instead of ``` echo $email; ``` use: ``` $response = ["email" => $email]; return json_encode($response); ``` And parse the JSON on the client side using `JSON.parse`.
14,108
33,146,316
Sorry, I'm a nodejs newbie. I'd like to try the package `win32ole` in nodejs under Windows 7, but when I run the installation command `npm install win32ole` in a command prompt window opened as administrator, many errors pop up. My configuration is: * Windows 7 64 bit (version 6.1.7601) * Microsoft Visual Studio Express 2015 for Windows Desktop - ENU (imho having to install 20GB of software to try to make node-gyp work is like a certification of failure for a certain IT model) * Microsoft .NET Framework 4.6 * both Python 2.7.9 and 3.4.3 installed, but I made the `python` command point to 2.7.9 * nodejs version 4.2.1 * npm version 2.14.7 * node-gyp 3.0.3 * `PYTHON` environment variable set to `C:\Python27\python.exe` * told `node-gyp` where to find Python with the command `node-gyp --python C:\Python27\` * told `npm` where to find Python with the command `npm config set python C:\Python27\python.exe` Here's the console output: ``` C:\Windows\system32 >npm install win32ole Impossibile trovare il percorso specificato. npm WARN engine win32ole@0.1.3: wanted: {"node":">= 0.8.18 && < 0.9.0"} (current: {"node":"4.2.1","npm":"2.14.7"}) \ > ref@1.2.0 install C:\Windows\system32\node_modules\win32ole\node_modules\ref > node-gyp rebuild C:\Windows\system32\node_modules\win32ole\node_modules\ref >if not defined npm_config_node_gyp (node "C:\Program Files\nodejs\node_modules\npm\bin\node-gyp-bin\\..\..\node_modules\node-gyp\bin\node-gyp.js" rebuild ) else (node rebuild ) Impossibile trovare il percorso specificato. gyp: Call to 'node -e "require('nan')"' returned exit status 1. while trying to load binding.gyp gyp ERR! configure error gyp ERR! stack Error: `gyp` failed with exit code: 1 gyp ERR! stack at ChildProcess.onCpExit (C:\Program Files\nodejs\node_modules\npm\node_modules\node-gyp\lib\configure.js:355:16) gyp ERR! stack at emitTwo (events.js:87:13) gyp ERR! stack at ChildProcess.emit (events.js:172:7) gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:200:12) gyp ERR! System Windows_NT 6.1.7601 gyp ERR! command "C:\\Program Files\\nodejs\\node.exe" "C:\\Program Files\\nodejs\\node_modules\\npm\\node_modules\\node-gyp\\bin\\node-gyp.js" "rebuild" gyp ERR! cwd C:\Windows\system32\node_modules\win32ole\node_modules\ref gyp ERR! node -v v4.2.1 gyp ERR! node-gyp -v v3.0.3 gyp ERR! not ok npm ERR! Windows_NT 6.1.7601 npm ERR! argv "C:\\Program Files\\nodejs\\node.exe" "C:\\Program Files\\nodejs\\node_modules\\npm\\bin\\npm-cli.js" "install" "win32ole" npm ERR! node v4.2.1 npm ERR! npm v2.14.7 npm ERR! code ELIFECYCLE npm ERR! ref@1.2.0 install: `node-gyp rebuild` npm ERR! Exit status 1 npm ERR! npm ERR! Failed at the ref@1.2.0 install script 'node-gyp rebuild'. npm ERR! This is most likely a problem with the ref package, npm ERR! not with npm itself. npm ERR! Tell the author that this fails on your system: npm ERR! node-gyp rebuild npm ERR! You can get their info via: npm ERR! npm owner ls ref npm ERR! There is likely additional logging output above. npm ERR! Please include the following file with any support request: npm ERR! C:\Windows\system32\npm-debug.log ``` Any clue on what I'm doing wrong? **FOLLOW UP** I guess there is nothing I'm doing wrong; the package `node-gyp` under Windows, as pointed out in a comment, has some issues: <https://github.com/nodejs/node-gyp/issues/629>
2015/10/15
[ "https://Stackoverflow.com/questions/33146316", "https://Stackoverflow.com", "https://Stackoverflow.com/users/694360/" ]
It's not possible with up to date node.js versions. Use [node winax](https://www.npmjs.com/package/winax)
First of all you have a **warning** due to the node version:

```
npm WARN engine win32ole@0.1.3: wanted: {"node":">= 0.8.18 && < 0.9.0"} (current: {"node":"4.2.1","npm":"2.14.7"})
```

The package wants a node version lower than 0.9.0.

Have you installed **node-gyp**? I'm seeing a lot of errors complaining about it. If not, you can install it with this command:

```
npm install -g node-gyp
```
14,110
6,119,038
I'm writing a bash script that fires up python and then enters some simple commands before exiting. I've got it firing up python ok, but how do I make the script simulate keyboard input in the python shell, as though a person were doing it?
2011/05/25
[ "https://Stackoverflow.com/questions/6119038", "https://Stackoverflow.com", "https://Stackoverflow.com/users/193601/" ]
Use a "here" document. It looks like this ``` command << HERE text that someone types in more text HERE ``` You don'th have to use "HERE", you can use something that has a little more meaning relative to the context of your code.
Have you tried `echo "Something for input" | python myPythonScript.py` ?
14,111
28,377,690
Complete noob with python 3. I have some code and can't figure out for the life of me why I keep getting the output I do. For some reason the elif statements aren't getting recognized. Here is the output first and the code down below:

```
3
Your fortune for today is:
Please press enter to end
```

```
#Program for fortune cookies
var1 = "It's going to be a good day"
var2 = "You'll have a long life"
var3 = "Your life will be short"
var4 = "Things will be good"
var5 = "Life will be fun"

import random
randNum = random.randint(1, 5)
statement = ""
print(randNum)

if randNum == 1:
    statement = var1
elif randNum == 2:
    statement = var2
elif randNum == 3:
    statement == var3
elif randNum == 4:
    statement == var4
elif randNum == 5:
    statement == var5

print("Your fortune for today is: ", statement)
input("Please press enter to end")
```
2015/02/07
[ "https://Stackoverflow.com/questions/28377690", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2288612/" ]
You are not assigning `varX` to `statement` in the 3rd, 4th and 5th cases, but comparing. Just change every

```
statement == varX
```

to

```
statement = varX
```

(for example, `statement = var3` in the third branch). Apart from that, it seems to work.
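A quick illustration of why the comparison runs without error but does nothing (using one of the variables from the question):

```python
statement = ""
var3 = "Your life will be short"

statement == var3   # comparison: evaluates to False, result is discarded
print(statement)    # -> "" (unchanged)

statement = var3    # assignment: rebinds the name
print(statement)    # -> Your life will be short
```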
The problem is that lines like

```
statement == var5
```

are comparisons, not assignments, so `statement` is never updated. Use a single `=` to assign. Good luck!
14,116
35,437,380
I installed Dato's `GraphLab Create` to run with `python 27` first directly from its executable then manually via `pip` ([instructions here](https://dato.com/products/create/)) for troubleshooting. Code: ``` import graphlab graphlab.SFrame() ``` Output: ``` [INFO] Start server at: ipc:///tmp/graphlab_server-4908 - Server binary: C:\Users\Remi\Anaconda2\envs\dato-env\lib\site-packages\graphlab\unity_server.exe - Server log: C:\Users\Remi\AppData\Local\Temp\graphlab_server_1455637156.log.0 [INFO] GraphLab Server Version: 1.8.1 ``` Now, attempt to load a .csv file as an Sframe: ``` csvsf = graphlab.Sframe('file.csv') ``` complains: ``` AttributeError Traceback (most recent call last) <ipython-input-5-68278493c023> in <module>() ----> 1 sf = graphlab.Sframe('file.csv') AttributeError: 'module' object has no attribute 'Sframe' ``` Any idea(s) how to pinpoint the issue? Thanks so much. Note: Uninstalled an already present `python 34` version
2016/02/16
[ "https://Stackoverflow.com/questions/35437380", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4992716/" ]
Capitalization error: it should be `SFrame`, not `Sframe`.
You can only load a GraphLab package ('file.gl') directly with `graphlab.SFrame()`. To load a csv file, use

`csvf = graphlab.SFrame.read_csv('file.csv')`

For more information on this and other data types, read the docs: <https://dato.com/products/create/docs/graphlab.data_structures.html>
14,119
49,651,351
I have a similar issue to [How to upload a bytes image on Google Cloud Storage from a Python script](https://stackoverflow.com/questions/46078088/how-to-upload-a-bytes-image-on-google-cloud-storage-from-a-python-script/47140336#comment86305324_47140336). I tried this:

```
from google.cloud import storage
import cv2
from tempfile import TemporaryFile
import google.auth

credentials, project = google.auth.default()
client = storage.Client()
# https://console.cloud.google.com/storage/browser/[bucket-id]/
bucket = client.get_bucket('document')
# Then do other things...
image = cv2.imread('/Users/santhoshdc/Documents/Realtest/15.jpg')
with TemporaryFile() as gcs_image:
    image.tofile(gcs_image)
    blob = bucket.get_blob(gcs_image)
    print(blob.download_as_string())
    blob.upload_from_string('New contents!')
    blob2 = bucket.blob('document/operations/15.png')
    blob2.upload_from_filename(filename='gcs_image')
```

This is the error that pops up:

```
Traceback (most recent call last):
  File "/Users/santhoshdc/Documents/ImageShapeSize/imageGcloudStorageUpload.py", line 13, in <module>
    blob = bucket.get_blob(gcs_image)
  File "/Users/santhoshdc/.virtualenvs/test/lib/python3.6/site-packages/google/cloud/storage/bucket.py", line 388, in get_blob
    **kwargs)
  File "/Users/santhoshdc/.virtualenvs/test/lib/python3.6/site-packages/google/cloud/storage/blob.py", line 151, in __init__
    name = _bytes_to_unicode(name)
  File "/Users/santhoshdc/.virtualenvs/test/lib/python3.6/site-packages/google/cloud/_helpers.py", line 377, in _bytes_to_unicode
    raise ValueError('%r could not be converted to unicode' % (value,))
ValueError: <_io.BufferedRandom name=7> could not be converted to unicode
```

Can anyone guide me as to what's going wrong or what I'm doing incorrectly?
2018/04/04
[ "https://Stackoverflow.com/questions/49651351", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9596737/" ]
As suggested by @A.Queue [in this pastebin](https://pastebin.com/Jr6k3SW2) (the paste gets deleted after 29 days):

```
from google.cloud import storage
import cv2
from tempfile import TemporaryFile

client = storage.Client()
bucket = client.get_bucket('test-bucket')
image = cv2.imread('example.jpg')
with TemporaryFile() as gcs_image:
    image.tofile(gcs_image)
    gcs_image.seek(0)
    blob = bucket.blob('example.jpg')
    blob.upload_from_file(gcs_image)
```

The file got uploaded, but uploading a raw `numpy ndarray` this way does not get saved as a valid image file on `google-cloud-storage`.

PS: the `numpy` array has to be converted into an image format before saving. This is fairly simple — use a temp file to store the encoded image. Here's the code:

```
from tempfile import NamedTemporaryFile

with NamedTemporaryFile() as temp:
    # Build a name for the temp file with a .jpg extension
    iName = "".join([str(temp.name), ".jpg"])

    # Save the image (duplicate_image is the ndarray to upload) to the temp file
    cv2.imwrite(iName, duplicate_image)

    # Store the image temp file inside the bucket
    blob = bucket.blob('ImageTest/Example1.jpg')
    blob.upload_from_filename(iName, content_type='image/jpeg')

    # Get the public_url of the saved image
    url = blob.public_url
```
You are calling `blob = bucket.get_blob(gcs_image)` which makes no sense. `get_blob()` is supposed to get a string argument, namely the name of the blob you want to get. A *name*. But you pass a file object. I propose this code: ``` with TemporaryFile() as gcs_image: image.tofile(gcs_image) gcs_image.seek(0) blob = bucket.blob('documentation-screenshots/operations/15.png') blob.upload_from_file(gcs_image) ```
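If the goal is to upload the in-memory array directly, without any temp file, one option is encoding it with OpenCV first. A minimal sketch (the bucket and object names are just placeholders):

```python
import cv2
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket('test-bucket')   # assumed bucket name

image = cv2.imread('example.jpg')           # numpy ndarray (BGR)

# Encode the array into JPEG bytes in memory
ok, buf = cv2.imencode('.jpg', image)
if not ok:
    raise RuntimeError('JPEG encoding failed')

blob = bucket.blob('example.jpg')
blob.upload_from_string(buf.tobytes(), content_type='image/jpeg')
```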
14,120
58,134,808
I'd like to install some special sub-package from a package. For example, I want to create package with pkg\_a and pkg\_b. But I want to allow the user to choose which he wants to install. What I'd like to do: ``` git clone https://github.com/pypa/sample-namespace-packages.git cd sample-namespace-packages touch setup.py ``` setup-py: ``` import setuptools setup( name='native', version='1', packages=setuptools.find_packages() ) ``` ``` # for all packages pip install -e native #Successfully installed native # for specific # Throws ERROR: native.pkg_a is not a valid editable requirement. # It should either be a path to a local project pip install -e native.pkg_a native.pkg_b # for specific cd native pip install -e pkg_a # Successfully installed example-pkg-a ``` I've seen this in another questions answer so it must be possible: [Python install sub-package from package](https://stackoverflow.com/questions/45324189/python-install-sub-package-from-package) Also I've read the [Packaging namespace packages documentation](https://packaging.python.org/guides/packaging-namespace-packages/) and tried to do the trick with the [repo](https://github.com/pypa/sample-namespace-packages) I've tried some variants with an additional setup.py in the native directory, but I can't handle it and I am thankful for all help. Update ------ In addition to the answer from sinoroc I've made an own repo. I created a package Nmspc, with subpackages NmspcPing and NmspcPong. But I want to allow the user to choose which he wants to install. Also it must to be possible to install the whole package. What I'd like to do is something like that: ``` git clone https://github.com/cj-prog/Nmspc.git cd Nmspc # for all packages pip install Nmspc # Test import python3 -c "import nmspc; import nmspc.pong" ``` ``` # for specific pip install -e Nmspc.pong # or pip install -e pong # Test import python3 -c "import pong;" ```
2019/09/27
[ "https://Stackoverflow.com/questions/58134808", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5872512/" ]
A solution for your use case seems to be similar to the one I gave here: <https://stackoverflow.com/a/58024830/11138259>, as well as the one you linked in your question: [Python install sub-package from package](https://stackoverflow.com/questions/45324189/python-install-sub-package-from-package). Here is an example... The directory tree might look like this: ``` . ├── Nmspc │   ├── nmspc │   │   └── _nmspc │   │   └── __init__.py │   └── setup.py ├── NmspcPing │   ├── nmspc │   │   └── ping │   │   └── __init__.py │   └── setup.py └── NmspcPong ├── nmspc │   └── pong │   └── __init__.py └── setup.py ``` 3 Python projects: * *NmspcPing* provides `nmspc.ping` * *NmspcPong* provides `nmspc.pong` * *Nmspc* depends on the other two projects (and also provides `nmspc._nmspc` see below for details) They are all *namespace packages*. They are built using the instructions from the [Python Packaging User Guide on "Packaging namespace packages, Native namespace packages"](https://packaging.python.org/guides/packaging-namespace-packages/#native-namespace-packages). There is another example [here](https://github.com/pypa/sample-namespace-packages/tree/master/native). The project *Nmspc* is basically empty, no actual code, but the important part is to add the other two *NmspcPing* and *NmspcPong* as *installation requirements*. Another thing to note, is that for convenience it is also a *namespace package* with `nmspc._nmspc` being kind of hidden (the leading underscore is the naming convention for hidden things in Python). `NmspcPing/setup.py` (and similarly `NmspcPong/setup.py`): ``` #!/usr/bin/env python3 import setuptools setuptools.setup( name='NmspcPing', version='1.2.3', packages=['nmspc.ping',], ) ``` `Nmspc/setup.py`: ``` #!/usr/bin/env python3 import setuptools setuptools.setup( name='Nmspc', version='1.2.3', packages=['nmspc._nmspc',], install_requires=['NmspcPing', 'NmspcPong',], ) ``` Assuming you are in the root directory, you can install these for development like this: ``` $ python3 -m pip install -e NmspcPing $ python3 -m pip install -e NmspcPong $ python3 -m pip install -e Nmspc ``` And then you should be able to use them like this: ``` $ python3 -c "import nmspc.ping; import nmspc.pong; import nmspc._nmspc;" ``` --- **Update** This can be simplified: ``` . ├── NmspcPing │   ├── nmspc │   │   └── ping │   │   └── __init__.py │   └── setup.py ├── NmspcPong │   ├── nmspc │   │   └── pong │   │   └── __init__.py │   └── setup.py └── setup.py ``` `setup.py` ``` #!/usr/bin/env python3 import setuptools setuptools.setup( name='Nmspc', version='1.2.3', install_requires=['NmspcPing', 'NmspcPong',], ) ``` Use it like this: ``` $ python3 -m pip install ./NmspcPing ./NmspcPong/ . $ python3 -c "import nmspc.ping; import nmspc.pong;" ```
If the projects are not installed from an index such as *PyPI*, it is not possible to take advantage of the `install_requires` feature. Something like this could be done instead: ``` . ├── NmspcPing │   ├── nmspc.ping │   │   └── __init__.py │   └── setup.py ├── NmspcPong │   ├── nmspc.pong │   │   └── __init__.py │   └── setup.py └── setup.py ``` `NmspcPing/setup.py` (and similarly `NmspcPong/setup.py`) ``` import setuptools setuptools.setup( name='NmspcPing', version='1.2.3', package_dir={'nmspc.ping': 'nmspc.ping'}, packages=['nmspc.ping'], ) ``` `setup.py` ``` import setuptools setuptools.setup( name='Nmspc', version='1.2.3', package_dir={ 'nmspc.ping': 'NmspcPing/nmspc.ping', 'nmspc.pong': 'NmspcPong/nmspc.pong', }, packages=['nmspc.ping', 'nmspc.pong'], ) ``` This allows to install from the root folder in any of the following combinations: ``` $ python3 -m pip install . $ python3 -m pip install ./NmspcPing $ python3 -m pip install ./NmspcPong $ python3 -m pip install ./NmspcPing ./NmspcPong ```
14,121
61,933,414
```
import tkinter as tk
from tkinter import filedialog, Text
from subprocess import call
import os

root = tk.Tk()

def buttonClick():
    print('Button is clicked')

def openAgenda():
    call("cd '/media/emilia/Linux/Programming/PycharmProjects/SmartschoolSelenium' && python3 SeleniumMain.py", shell=True)
    return

canvas = tk.Canvas(root, height=700, width=700, bg='#263D42')
canvas.pack()

frame = tk.Frame(root, bg='white')
frame.place(relwidth=0.8, relheight=0.8, relx=0.1, rely=0.1)

openFile = tk.Button(root, text='Open file', padx=10, pady=5, fg="white", bg='#263D42', command=openAgenda)
openFile.pack()

root.mainloop()
```

The script it calls opens a new browser window; after finishing entering text in that window, it opens a new browser window and loops. Meanwhile, the tkinter button stays visually stuck in the clicked state.
2020/05/21
[ "https://Stackoverflow.com/questions/61933414", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13588940/" ]
You should include the necessary JavaScript resource like this: ``` <script src='https://www.google.com/recaptcha/api.js'></script> ``` Besides, set the right site key in the `data-sitekey` attribute. The result is like this in IE: [![enter image description here](https://i.stack.imgur.com/mnuEK.png)](https://i.stack.imgur.com/mnuEK.png) If it still doesn't work in IE 11, you could use F12 dev tools to check if there's any error in console.
Actually it was loading the first time in IE. So I tried making the URL different every time by appending the current datetime:

`"https://www.google.com/recaptcha/api.js?render=" + EncodeUrl(SiteKey) + "&time=" + CurrTime()`

It started working.
14,122
1,046,656
I have a very simple python script that **should** scan a text file, which contains lines formatted as *id*='*value*', and put them into a dict. The python module is called chval.py and the input file is in.txt. Here's the code:

```
import os,sys
from os import *
from sys import *

vals = {}

f = open(sys.argv[1], 'r')

for line in val_f:
    t = line.split('=')
    t[1].strip('\'')
    vals.append(t[0], t[1])

print vals
f.close()
```

When I try to run it I get:

```
Traceback (most recent call last):
  File "chval.py", line 9, in ?
    f = open(sys.argv[1], 'r')
TypeError: an integer is required
```

I'm using python 2.4... because I've been challenged to not use anything newer. Is there something about open() that I don't know about? Why does it want an integer?

Anything after that line is untested. In short: why is it giving me the error and how do I fix it?
2009/06/25
[ "https://Stackoverflow.com/questions/1046656", "https://Stackoverflow.com", "https://Stackoverflow.com/users/125946/" ]
Also of note is that starting with Python 2.6 the built-in function open() is now an alias for the io.open() function. It was even considered removing the built-in open() in Python 3 and requiring the usage of io.open, in order to avoid accidental namespace collisions resulting from things such as "from blah import \*". In Python 2.6+ you can write (and can also consider this style to be good practice): ``` import io filehandle = io.open(sys.argv[1], 'r') ```
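For completeness, the direct cause of the original error is almost certainly the star import: `from os import *` shadows the builtin `open` with `os.open`, whose second argument is an integer flags value rather than a mode string. A small sketch of the difference:

```python
import os

# os.open expects integer flags, not a mode string:
fd = os.open('in.txt', os.O_RDONLY)   # works
os.close(fd)

# After "from os import *", a plain open() call resolves to os.open, so
# open('in.txt', 'r') raises: TypeError: an integer is required
```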
Providing these parameters resolved my issue: ``` with open('tomorrow.txt', mode='w', encoding='UTF-8', errors='strict', buffering=1) as file: file.write(result) ```
14,123
1,597,732
Folks, I know there have been lots of threads about forcing the download dialog to pop up, but none of the solutions worked for me yet. My app sends mail to the user's email account, notifying them that "another user sent them a message". Those messages might have links to Excel files. When the user clicks on a link in their GMail/Yahoo Mail/Outlook to that Excel file, I want the File Save dialog to pop up. Problem: when I right-click and do "Save As" on IE, i get a Save As dialog. When I just click the link (which many of my clients will do as they are not computer-savvy), I get an IE error message: "IE cannot download file ... from ...". May be relevant: on GMail where I'm testing this, every link is a "target=\_blank" link (forced by Google). All other browsers work fine in all cases. Here are my headers (captured through Fiddler): ``` HTTP/1.1 200 OK Proxy-Connection: Keep-Alive Connection: Keep-Alive Content-Length: 15872 Via: **** // proxy server name Expires: 0 Date: Tue, 20 Oct 2009 22:41:37 GMT Content-Type: application/vnd.ms-excel Server: Apache/2.2.11 (Unix) DAV/2 mod_ssl/2.2.11 OpenSSL/0.9.8i mod_python/3.3.1 Python/2.5.2 SVN/1.4.6 mod_apreq2-20051231/2.6.0 mod_perl/2.0.4 Perl/v5.10.0 Cache-Control: private Pragma: no-cache Last-Modified: Tue, 20 Oct 2009 22:41:37 GMT Content-Disposition: attachment; filename="myFile.xls" Vary: Accept-Encoding Keep-Alive: timeout=5, max=100 ``` I want IE's regular left-click behavior to work. Any ideas?
2009/10/20
[ "https://Stackoverflow.com/questions/1597732", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16668/" ]
This will check for versions of IE and set headers accordingly.

```
// assume you have a full path to file stored in $filename
if (!is_file($filename)) {
    die('The file appears to be invalid.');
}

$filepath = str_replace('\\', '/', realpath($filename));
$filesize = filesize($filepath);
$filename = substr(strrchr('/'.$filepath, '/'), 1);
$extension = strtolower(substr(strrchr($filepath, '.'), 1));

// use this unless you want to find the mime type based on extension
$mime = 'application/octet-stream';

header('Content-Type: '.$mime);
header('Content-Disposition: attachment; filename="'.$filename.'"');
header('Content-Transfer-Encoding: binary');
header('Content-Length: '.sprintf('%d', $filesize));
header('Expires: 0');

// check for IE only headers
if (preg_match('~MSIE|Internet Explorer~i', $_SERVER['HTTP_USER_AGENT']) || (strpos($_SERVER['HTTP_USER_AGENT'], 'Trident/7.0; rv:11.0') !== false)) {
    header('Cache-Control: must-revalidate, post-check=0, pre-check=0');
    header('Pragma: public');
} else {
    header('Pragma: no-cache');
}

$handle = fopen($filepath, 'rb');
fpassthru($handle);
fclose($handle);
```
If you're trying to get the file to download every time, change the content type to 'application/octet-stream'. Try it without the pragma statement.
14,133
12,460,943
I have a list of log files, where each line in each file has a timestamp and the lines are pre-sorted ascending within each file. The different files can have overlapping time ranges, and my goal is to blend them together into one large file, sorted by timestamp. There can be ties in the sorting, in which case I want to next line to come from whatever file is listed first in my input list. I've seen examples of how to do this using `fileinput` (see [here](https://stackoverflow.com/questions/6653371/merging-and-sorting-log-files-in-python)), but this seems to read all the files into memory. Owing to the large size of my files this will be a problem. Because my files are pre-sorted, it seems there should be a way to merge them using a method that only has to consider the most recent unexplored line from each file.
2012/09/17
[ "https://Stackoverflow.com/questions/12460943", "https://Stackoverflow.com", "https://Stackoverflow.com/users/233446/" ]
Why roll your own if there is `heapq.merge()` in the standard library? Unfortunately it doesn't provide a key argument -- you have to do the decorate - merge - undecorate dance yourself:

```
from itertools import imap
from operator import itemgetter
import heapq

def extract_timestamp(line):
    """Extract timestamp and convert to a form that gives the
    expected result in a comparison
    """
    return line.split()[1] # for example

with open("log1.txt") as f1, open("log2.txt") as f2:
    sources = [f1, f2]
    with open("merged.txt", "w") as dest:
        decorated = [
            ((extract_timestamp(line), line) for line in f)
            for f in sources]
        merged = heapq.merge(*decorated)
        undecorated = imap(itemgetter(-1), merged)
        dest.writelines(undecorated)
```

Every step in the above is "lazy". As I avoid `file.readlines()` the lines in the files are read as needed. Likewise the decoration process which uses generator expressions rather than list-comps. `heapq.merge()` is lazy, too -- it needs one item per input iterator simultaneously to do the necessary comparisons. Finally I'm using `itertools.imap()`, the lazy variant of the map() built-in to undecorate. (In Python 3 map() has become lazy, so you can use that)
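Worth noting: on Python 3.5 and newer, `heapq.merge()` does accept a `key=` argument, so the decorate/undecorate dance is no longer necessary. A sketch using the same (hypothetical) file names:

```python
import heapq

def extract_timestamp(line):
    return line.split()[1]

with open("log1.txt") as f1, open("log2.txt") as f2, \
     open("merged.txt", "w") as dest:
    # heapq.merge is lazy and compares lines by their extracted timestamp
    dest.writelines(heapq.merge(f1, f2, key=extract_timestamp))
```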
You want to implement a file-based [merge sort](http://en.wikipedia.org/wiki/Merge_sort). Read a line from both files, output the older line, then read another line from that file. Once one of the files is exhausted, output all the remaining lines from the other file.
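A minimal Python sketch of that two-file merge (illustrative; `key` would extract the timestamp, and `<=` sends ties to the first file as required):

```python
def merge_two(path_a, path_b, dest_path, key=lambda line: line):
    with open(path_a) as a, open(path_b) as b, open(dest_path, "w") as dest:
        line_a, line_b = a.readline(), b.readline()
        while line_a and line_b:
            if key(line_a) <= key(line_b):   # ties go to the first file
                dest.write(line_a)
                line_a = a.readline()
            else:
                dest.write(line_b)
                line_b = b.readline()
        # One input is exhausted; flush the pending line and the remainder
        rest, pending = (a, line_a) if line_a else (b, line_b)
        if pending:
            dest.write(pending)
        dest.writelines(rest)
```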
14,139
63,214,706
I'm working on a database with a graphical interface, I made an insert and delete method connected to the database, now I'm working on creating a search method but unfortunately not working for an unexpected error. The code Is a little bit long : ``` import sqlite3 from Tkinter import * global all,root, main_text, num_ent, nom_ent, search_ent def showall(): all = True con = sqlite3.connect("repertoire.db") cur = con.cursor() request = 'select * from blinta' cur.execute(request) table = str(cur.fetchall()).replace(')', '\n').replace('(', '').replace(',', '').replace('[', '').replace(']', '') return table con.commit() con.close() def delete(): con = sqlite3.connect("repertoire.db") cur = con.cursor() request = ' DELETE FROM blinta WHERE id=?' cur.execute(request, (ident.get(),)) con.commit() con.close() main_text.configure(state='normal') main_text.delete(1.0, END) main_text.insert(1.0, showall()) main_text.configure(state='disabled') ident.delete(0,END) def insert(): con = sqlite3.connect("repertoire.db") cur = con.cursor() request = 'insert into blinta (nom,numero) values(?,?)' cur.execute(request, (nom_ent.get(), num_ent.get())) con.commit() con.close() main_text.config(state='normal') main_text.delete(1.0, END) main_text.insert(1.0, showall()) main_text.config(state='disabled') num_ent.delete(0, END) nom_ent.delete(0, END) def search(): all=False con = sqlite3.connect("repertoire.db") cur = con.cursor() request = "select * from blinta where nom = ?" noun = search_ent.get() args=(noun,) cur.execute(request,args) selected = str(cur.fetchall()).replace(')', '\n').replace('(', '').replace(',', '').replace('[', '').replace(']', '') return selected con.commit() con.close() root = Tk() root.config(bg='#D2B024') root.geometry('450x650+900+0') root.minsize(450, 650) root.maxsize(450, 650) main_text = Text(root, bg='#CBCAC5', fg='black', width=30, height=40, state='normal') main_text.grid(column=1, row=1, rowspan=50, padx=9, pady=3) main_text.delete(1.0, END) if all == True : main_text.insert(1.0, showall()) else : main_text.insert(1.0, search()) main_text.config(state='disabled') nomlbl=Label(root,text='Enter noun to insert',bg='#D2B024').grid(row=1,column=2) nom_ent = Entry(root) nom_ent.grid(row=2, column=2) num_lbl=Label(root, text='Enter number to insert', bg='#D2B024').grid(row=3, column=2) num_ent = Entry(root) num_ent.grid(row=4, column=2) insert_btn = Button(relief='flat', fg='black', width=7, text='insert', font=("heveltica Bold", 15), command=insert).grid(row=5, column=2, columnspan=2, padx=50) Label(root,text='_______________________',bg='#D2B024').grid(row=6,column=2) idlbl=Label(root,text='Enter id to delete',bg='#D2B024').grid(row=7,column=2) ident=Entry(root) ident.grid(row=8,column=2) delete_btn = Button(relief='flat', fg='black', width=7, text='delete', font=("heveltica Bold", 15), command=delete).grid(row=9, column=2, columnspan=2, padx=0) Label(root,text='_______________________',bg='#D2B024').grid(row=10,column=2) search_lbl=Label(root,text='Enter noun to search',bg='#D2B024').grid(row=11,column=2) search_btn = Button(relief='flat', fg='black', width=7, text='search', font=("heveltica Bold", 15), command=search).grid(row=13, column=2, columnspan=2, padx=0) global search_ent search_ent = Entry(root) search_ent.grid(row=12, column=2) root.mainloop() ``` All the problem is related to the search function where the entry search\_ent is not defined and I m sure it is on the global scope The error is: ``` Traceback (most recent call last): File 
"C:/Users/asus/Documents/python/Projet/managment system/main.py", line 70, in <module> main_text.insert(1.0, search()) File "C:/Users/asus/Documents/python/Projet/managment system/main.py", line 51, in search noun = search_ent.get() NameError: name 'search_ent' is not defined ```
2020/08/02
[ "https://Stackoverflow.com/questions/63214706", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13421322/" ]
you can give your search function an input. global statement is for an outer scope. for example when you want to make a function in another function. you can [check here](https://www.python-course.eu/python3_global_vs_local_variables.php). and now about your code. here is a simple way that I have said the idea: ``` import sqlite3 from tkinter import * global all,root, main_text, num_ent, nom_ent, search_ent def showall(): all = True con = sqlite3.connect("repertoire.db") cur = con.cursor() request = 'select * from blinta' cur.execute(request) table = str(cur.fetchall()).replace(')', '\n').replace('(', '').replace(',', '').replace('[', '').replace(']', '') return table con.commit() con.close() def delete(): con = sqlite3.connect("repertoire.db") cur = con.cursor() request = ' DELETE FROM blinta WHERE id=?' cur.execute(request, (ident.get(),)) con.commit() con.close() main_text.configure(state='normal') main_text.delete(1.0, END) main_text.insert(1.0, showall()) main_text.configure(state='disabled') ident.delete(0,END) def insert(): con = sqlite3.connect("repertoire.db") cur = con.cursor() request = 'insert into blinta (nom,numero) values(?,?)' cur.execute(request, (nom_ent.get(), num_ent.get())) con.commit() con.close() main_text.config(state='normal') main_text.delete(1.0, END) main_text.insert(1.0, showall()) main_text.config(state='disabled') num_ent.delete(0, END) nom_ent.delete(0, END) def search(search_ent): all=False con = sqlite3.connect("repertoire.db") cur = con.cursor() request = "select * from blinta where nom = ?" noun = search_ent.get() args=(noun,) cur.execute(request,args) selected = str(cur.fetchall()).replace(')', '\n').replace('(', '').replace(',', '').replace('[', '').replace(']', '') return selected con.commit() con.close() root = Tk() root.config(bg='#D2B024') root.geometry('450x650+900+0') root.minsize(450, 650) root.maxsize(450, 650) main_text = Text(root, bg='#CBCAC5', fg='black', width=30, height=40, state='normal') main_text.grid(column=1, row=1, rowspan=50, padx=9, pady=3) main_text.delete(1.0, END) search_ent = Entry(root) search_ent.grid(row=12, column=2) if all == True : main_text.insert(1.0, showall()) else : main_text.insert(1.0, search(search_ent)) main_text.config(state='disabled') nomlbl=Label(root,text='Enter noun to insert',bg='#D2B024').grid(row=1,column=2) nom_ent = Entry(root) nom_ent.grid(row=2, column=2) num_lbl=Label(root, text='Enter number to insert', bg='#D2B024').grid(row=3, column=2) num_ent = Entry(root) num_ent.grid(row=4, column=2) insert_btn = Button(relief='flat', fg='black', width=7, text='insert', font=("heveltica Bold", 15), command=insert).grid(row=5, column=2, columnspan=2, padx=50) Label(root,text='_______________________',bg='#D2B024').grid(row=6,column=2) idlbl=Label(root,text='Enter id to delete',bg='#D2B024').grid(row=7,column=2) ident=Entry(root) ident.grid(row=8,column=2) delete_btn = Button(relief='flat', fg='black', width=7, text='delete', font=("heveltica Bold", 15), command=delete).grid(row=9, column=2, columnspan=2, padx=0) Label(root,text='_______________________',bg='#D2B024').grid(row=10,column=2) search_lbl=Label(root,text='Enter noun to search',bg='#D2B024').grid(row=11,column=2) search_btn = Button(relief='flat', fg='black', width=7, text='search', font=("heveltica Bold", 15), command=lambda: search(search_ent)).grid(row=13, column=2, columnspan=2, padx=0) root.mainloop() ``` check `search()` carefully.
I slightly redisigned your code, and it seems to work. I changed the location of the `root`, `search_ent` and `main_text`, now it's before the functions, so there won't be an error. ``` import sqlite3 from tkinter import * global all,root, main_text, num_ent, nom_ent, search_ent global search_ent root = Tk() root.config(bg='#D2B024') root.geometry('450x650+900+0') root.minsize(450, 650) root.maxsize(450, 650) search_ent = Entry(root) search_ent.grid(row=12, column=2) main_text = Text(root, bg='#CBCAC5', fg='black', width=30, height=40, state='normal') main_text.grid(column=1, row=1, rowspan=50, padx=9, pady=3) main_text.delete(1.0, END) def showall(): all = True con = sqlite3.connect("repertoire.db") cur = con.cursor() request = 'select * from blinta' cur.execute(request) table = str(cur.fetchall()).replace(')', '\n').replace('(', '').replace(',', '').replace('[', '').replace(']', '') return table con.commit() con.close() def delete(): con = sqlite3.connect("repertoire.db") cur = con.cursor() request = ' DELETE FROM blinta WHERE id=?' cur.execute(request, (ident.get(),)) con.commit() con.close() main_text.configure(state='normal') main_text.delete(1.0, END) main_text.insert(1.0, showall()) main_text.configure(state='disabled') ident.delete(0,END) def insert(): con = sqlite3.connect("repertoire.db") cur = con.cursor() request = 'insert into blinta (nom,numero) values(?,?)' cur.execute(request, (nom_ent.get(), num_ent.get())) con.commit() con.close() main_text.config(state='normal') main_text.delete(1.0, END) main_text.insert(1.0, showall()) main_text.config(state='disabled') num_ent.delete(0, END) nom_ent.delete(0, END) def search(): all=False con = sqlite3.connect("repertoire.db") cur = con.cursor() request = "select * from blinta where nom = ?" noun = search_ent.get() args=(noun,) cur.execute(request,args) selected = str(cur.fetchall()).replace(')', '\n').replace('(', '').replace(',', '').replace('[', '').replace(']', '') return selected con.commit() con.close() insert_btn = Button(relief='flat', fg='black', width=7, text='insert', font=("heveltica Bold", 15), command=insert).grid(row=5, column=2, columnspan=2, padx=50) Label(root,text='_______________________',bg='#D2B024').grid(row=6,column=2) idlbl=Label(root,text='Enter id to delete',bg='#D2B024').grid(row=7,column=2) ident=Entry(root) ident.grid(row=8,column=2) delete_btn = Button(relief='flat', fg='black', width=7, text='delete', font=("heveltica Bold", 15), command=delete).grid(row=9, column=2, columnspan=2, padx=0) Label(root,text='_______________________',bg='#D2B024').grid(row=10,column=2) search_lbl=Label(root,text='Enter noun to search',bg='#D2B024').grid(row=11,column=2) search_btn = Button(relief='flat', fg='black', width=7, text='search', font=("heveltica Bold", 15), command=search).grid(row=13, column=2, columnspan=2, padx=0) if all == True : main_text.insert(1.0, showall()) else : main_text.insert(1.0, search()) main_text.config(state='disabled') nomlbl=Label(root,text='Enter noun to insert',bg='#D2B024').grid(row=1,column=2) nom_ent = Entry(root) nom_ent.grid(row=2, column=2) num_lbl=Label(root, text='Enter number to insert', bg='#D2B024').grid(row=3, column=2) num_ent = Entry(root) num_ent.grid(row=4, column=2) root.mainloop() ```
14,140
63,940,493
I want to compare every element of two matrices with the same dimensions. I want to know if an element in the first matrix is smaller than the element with the same indices in the second one. I want to fill a third matrix with the values of the first, but every entry where my criterion applies should be a 0. Below I will show my approach:

```
a = ([1, 2, 3], [3, 4, 5], [5, 6, 7])
b = ([2, 3, 1], [3, 5, 4], [4, 4, 4])

c = np.zeros(a.shape)

for i, a_i in enumerate(a):
    a_1 = a_i
    for i, b_i in enumerate(b):
        b_1 = b_i
        if a_1 < b_1:
            c[i] = 0
        else:
            c[i] = a_1
```

I am very sorry if I made any simple mistakes here, but I am new to python and SO. I found some posts on how to find entries that match, but I don't know if there is a solution where I could use that method. I appreciate any help, thank you.
2020/09/17
[ "https://Stackoverflow.com/questions/63940493", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14285969/" ]
When you `map` over `question`: ``` question.map(function (quest){ ``` The `quest` variable will be each element of that array. Which in this case that element is: ``` [ { questions: "question1", answer1: "answer1", answer2: "answer2" }, { questions: "question2", answer1: "answer1", answer2: "answer2" } ] ``` An array of objects. So referencing an element of that array (such as `quest[0]`) would be: ``` { questions: "question1", answer1: "answer1", answer2: "answer2" } ``` Which indeed isn't an array and has no `.map()`. It sounds like you wanted to `map` over `quest`, not an element of it: ``` quest.map(function(ques){ return quest.questions; }) ``` Ultimately it looks like your variable naming is confusing you here. You have something called `question` which contains an array, each of which contains an array, each of which contains a property called `questions`. The plurality/singularity of those is dizzying. Perhaps `question` should really be `questionGroups`? It's an array of arrays. Each "group" is an array of questions. Each of which should have a property called `question`. Variable naming is important, and helps prevent confusion when writing your own code. So in this case it might be something like: ``` const [questionGroups, setQuestionGroups] = useState([]); // then later... questionGroups.map(function (questionGroup){ questionGroup.map(function (question){ return question.question; }) }) ```
In short, plain objects don't have a `.map()` function. Do this instead:

`question.map(function (quest){ return quest.questions; });`
14,141
19,405,223
I hope not to make a fool of myself by re-asking this question, but I just can't figure out why my fixtures are not loaded when running test. I am using python 2.7.5 and Django 1.5.3. I can load my fixtures with `python manage.py testserver test_winning_answers`, with a location of `survey/fixtures/test_winning_answers.json`. ``` Creating test database for alias 'default'... Installed 13 object(s) from 1 fixture(s) Validating models... 0 errors found ``` My test class is doing the correct import: ``` from django.test import TestCase class QuestionWinningAnswersTest(TestCase): fixtures = ['test_winning_answers.json'] ... ``` But when trying to run the test command, it cannot find them: ``` python manage.py test survey.QuestionWinningAnswersTest -v3 ... Checking '/django/mysite/survey/fixtures' for fixtures... ... No json fixture 'initial_data' in '/django/mysite/survey/fixtures'. ... Installed 0 object(s) from 0 fixture(s) ... ``` I suspect I am missing something obvious, but cannot quite figure it out... any suggestion would be appreciated. Thanks!
2013/10/16
[ "https://Stackoverflow.com/questions/19405223", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1014862/" ]
This will perform well if you have an index on VersionColumn

```
SELECT *
FROM MainTable m
INNER JOIN JoinedTable j ON j.ForeignID = m.ID
CROSS APPLY (SELECT TOP 1 *
             FROM SubQueryTable sq
             WHERE sq.ForeignID = j.ID
             ORDER BY VersionColumn DESC) sj
```
**Answer:** Hi, below is a query I created as per your requirement, using Country, State and City tables as a worked example:

```
SELECT * FROM (
    SELECT m.countryName, j.StateName, c.CityName,
           ROW_NUMBER() OVER (PARTITION BY c.stateid ORDER BY c.cityid DESC) AS 'x'
    FROM CountryMaster m
    INNER JOIN StateMaster j ON j.CountryID = m.CountryID
    INNER JOIN dbo.CityMaster c ON j.StateID = c.StateID
) AS numbered WHERE x = 1
```

**Below is your solution; the query above is only for your reference.**

```
SELECT * FROM (
    SELECT m.MainTablecolumnNm, j.JoinedTablecolumnNm, sj.SubQueryTableColumnName,
           ROW_NUMBER() OVER (PARTITION BY sj.ForeignID ORDER BY sj.ID DESC) AS 'abc'
    FROM MainTable m
    INNER JOIN JoinedTable j ON j.ForeignID = m.ID
    INNER JOIN SubQueryTable sj ON sj.ForeignID = j.ID
) AS numbered WHERE abc = 1
```

Thank you, Vishal Patel
14,142
8,373,710
I have recently started using a Mac OS X Lion system and tried to use Vim in terminal. I previously had a .vimrc file in my Ubuntu system and had `F2` and `F5` keys mapped to pastetoggle and run python interpreter. Here are the two lines I have for it: ``` set pastetoggle=<F2> map <buffer> <F5> :wa<CR>:!/usr/bin/env python % <CR> ``` It's working just fine in Ubuntu but no longer works in Mac. (The above two lines are in .vimrc under my home dir.) I have turned off the Mac specific functions in my preference so the function keys are not been used for things like volume. Right now pressing `F5` seems to capitalize all letters until next word, and `F2` seems to delete next line and insert O..... Is there something else I need to do to have it working as expected? In addition, I had been using solarized as my color scheme and tried to have the same color scheme now in Mac. It seems that the scheme command is being read from .vimrc, but the colors are stil the default colors. Even though the .vim/colors files are just the same as before. Is this a related error that I need to fix? Perhaps another setting file being read after my own? (I looked for \_vimrc and .gvimrc, none exists.) Thanks!
2011/12/04
[ "https://Stackoverflow.com/questions/8373710", "https://Stackoverflow.com", "https://Stackoverflow.com/users/501330/" ]
I finally got my function mappings working by resorting to adding mappings like this: ``` if has('mac') && ($TERM == 'xterm-256color' || $TERM == 'screen-256color') map <Esc>OP <F1> map <Esc>OQ <F2> map <Esc>OR <F3> map <Esc>OS <F4> map <Esc>[16~ <F5> map <Esc>[17~ <F6> map <Esc>[18~ <F7> map <Esc>[19~ <F8> map <Esc>[20~ <F9> map <Esc>[21~ <F10> map <Esc>[23~ <F11> map <Esc>[24~ <F12> endif ``` Answers to these questions were helpful, if you need to verify that these escape sequences match your terminal's or set your own: [mapping function keys in vim](https://stackoverflow.com/questions/3519532/mapping-function-keys-in-vim) [Binding special keys as vim shortcuts](https://stackoverflow.com/questions/9950944/binding-special-keys-as-vim-shortcuts) It probably depends on terminal emulators behaving consistently (guffaw), but @Mark Carey's suggestion wasn't enough for me (I wish it was so simple). With iTerm2 on OS X, I'd already configured it for `xterm-256color` and tmux for `screen-256color`, and function mappings still wouldn't work. So the `has('mac')` might be unnecessary if these sequences from iTerm2 are xterm-compliant, I haven't checked yet so left it in my own config for now. You might want some `imap` versions too. Note that you shouldn't use `noremap` variants since you **do** want these mappings to cascade (to trigger whatever you've mapped `<Fx>` to).
Regarding your colorscheme/solarized question - make sure you set up Terminal (or iTerm2, which I prefer) with the solarized profiles available in the full solarized distribution that you can download here: <http://ethanschoonover.com/solarized/files/solarized.zip>. Then the only other issue you may run into is making sure you set your $TERM `xterm-256color` or `screen-256color` if you use screen or tmux. You can take a look at my [dotfiles](https://github.com/prognostikos/dotfiles) for a working setup, but don't forget to setup your Terminal/iTerm color profiles as a first step.
14,143
67,712,314
Can anyone see what im doing wrong here? works fine locally, but when running it in docker it cant seem to find my modules ... The **init**.py files are emtpy if that info could help. Im no expert in docker and non of the tips I've googled/stackoverflowed so far has panned out, such as adding pythonpath env in the dockerfile. Error log from docker: ``` Traceback (most recent call last): File "/usr/local/bin/uvicorn", line 8, in <module> sys.exit(main()) File "/usr/local/lib/python3.7/site-packages/click/core.py", line 829, in __call__ return self.main(*args, **kwargs) File "/usr/local/lib/python3.7/site-packages/click/core.py", line 782, in main rv = self.invoke(ctx) File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File "/usr/local/lib/python3.7/site-packages/click/core.py", line 610, in invoke return callback(*args, **kwargs) File "/usr/local/lib/python3.7/site-packages/uvicorn/main.py", line 331, in main run(**kwargs) File "/usr/local/lib/python3.7/site-packages/uvicorn/main.py", line 354, in run server.run() File "/usr/local/lib/python3.7/site-packages/uvicorn/main.py", line 382, in run loop.run_until_complete(self.serve(sockets=sockets)) File "uvloop/loop.pyx", line 1456, in uvloop.loop.Loop.run_until_complete File "/usr/local/lib/python3.7/site-packages/uvicorn/main.py", line 389, in serve config.load() File "/usr/local/lib/python3.7/site-packages/uvicorn/config.py", line 288, in load self.loaded_app = import_from_string(self.app) File "/usr/local/lib/python3.7/site-packages/uvicorn/importer.py", line 23, in import_from_string raise exc from None File "/usr/local/lib/python3.7/site-packages/uvicorn/importer.py", line 20, in import_from_string module = importlib.import_module(module_str) File "/usr/local/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1006, in _gcd_import File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 677, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 728, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "./main.py", line 3, in <module> from routers.user import router as user_router ModuleNotFoundError: No module named 'routers' ``` Dockerfile: ``` FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7 RUN apt-get update RUN apt-get install -y --no-install-recommends # install requirements RUN pip3 install fastapi[all] uvicorn[standard] # Move files COPY ./* /app # attemt to fix a python ModuleNotFoundError WORKDIR /app #ENV PATH=$PATH:/app ENV PYTHONPATH "${PYTHONPATH}:/app" CMD [ "uvicorn", "main:app", "--host", "0.0.0.0", "--port", "15400"] ``` Docker-compose.yml: ``` version: '3' services: core_api: build: . container_name: "core-api-container" ports: - "8000:15400" volumes: - ./app/:/app ``` file structure: ``` API/ --__init__.py --Dockerfile --docker-compose.yml --main.py --routers/ --__init__.py --user.py --x.py ``` main.py: ``` from fastapi import FastAPI from routers.user import router as user_router from routers.x import router as x_router app = FastAPI() app.include_router(user_router) app.include_router(x_router) ```
2021/05/26
[ "https://Stackoverflow.com/questions/67712314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14994913/" ]
After running `docker container run -it your-docker-container bash` and checking the files, it turned out Docker did not copy the file hierarchy as I expected: all files ended up directly under the same folder, /app. None of the subfolders from my local project were added, just the files they contained — with multiple sources, `COPY ./* /app` copies the *contents* of each matched directory, flattening the tree. No wonder I got ModuleNotFoundError.

To fix this I simply made a new project, put all my python files in the same directory, and edited them all to import correctly. There is probably a simpler way to fix this — most likely `COPY . /app`, which preserves the directory structure — but I couldn't be bothered to figure that out at the time.
I had almost the same issue, and I tried to fix it not by changing the directory structure of my application or how I copy the files, but by running the application directly using gunicorn.

I run it with the following command: `gunicorn -k uvicorn.workers.UvicornWorker run:app` — it can go either in the entrypoint of your Dockerfile or in the command section of your compose file if you are using docker-compose.
14,148
71,907,288
Question ======== What is the shortest and most efficient (preferrable most efficient) way of reading in a single depth folder and creating a file tree that consists of the longest substrings of each file? start with this --------------- ``` . ├── hello ├── lima_peru ├── limabeans ├── limes ├── limit ├── what_are_those ├── what_is_bofa └── what_is_up ``` end with this ------------- ``` . ├── hello ├── lim │   ├── lima │   │   ├── lima_peru │   │   └── limabeans │   ├── limes │   └── limit └── what_ ├── what_are_those └── what_is_ ├── what_is_bofa └── what_is_up ``` Framing ======= I feel like I know how to do the naive version of this problem but I wanted to see what the best way to do this in **python** would be. Format ====== ```py def longest_substrings_filetree(old_directory: str, new_directory: str): ... ```
2022/04/18
[ "https://Stackoverflow.com/questions/71907288", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17356304/" ]
You are using the same input `ID` in `colorId` when you select the same group the second time. However, you need unique input `ID`s in shiny. I have added a counter to change it. Try this. ``` library(shinyjs) # useShinyjs library(ggplot2) library(RColorBrewer) library(shiny) ui <- fluidPage( titlePanel("Reprex"), sidebarLayout( sidebarPanel( useShinyjs(), fluidRow(column(9, fileInput('manifest', 'Choose File', accept = c('text/csv', 'text/comma-separated-values,text/plain', '.csv'))), column(3, actionButton("load_button", "Load", width = "100%"))), fluidRow(column(5, selectInput(inputId = "group_palette_input", label = "Palette Selector", choices = NULL)), column(5, selectInput(inputId = "column_input", label = "Column Selector", choices = rownames(brewer.pal.info)))), uiOutput("group_colors"), width=4), mainPanel( tabsetPanel(id = "tabs", tabPanel("Plot") ) ) ) ) server <- function(input, output, session) { cntr <- reactiveValues(val=0) # Load the demo data on initialization and press the load button to update the file # data <- reactiveValues() # observeEvent(eventExpr = input$load_button, ignoreInit = FALSE, ignoreNULL = FALSE, { # manifestName = ifelse(is.null(input$manifest$datapath), "file.csv", input$manifest$datapath) # man = read.table(manifestName, sep = "\t", header = T, check.names=F, strip.white = T) # data$manifest <- man[man$include, ] # }) datamanifest <- reactive({ req(input$manifest) read.csv(input$manifest$datapath, header = TRUE) }) # Update column selector observeEvent(datamanifest(), { freezeReactiveValue(input, "column_input") updateSelectInput(inputId = "column_input", choices = names(datamanifest()), selected = "group") # All files should have a group column }) # Update palette selector observeEvent(datamanifest(), { freezeReactiveValue(input, "group_palette_input") updateSelectInput(inputId = "group_palette_input", choices = rownames(brewer.pal.info), selected = "Dark2") }) groupIncludeManualPaletteInput <- eventReactive(groups(), { req(input$group_palette_input) cntr$val <- cntr$val + 1 fullColors = brewer.pal(length(groups()), input$group_palette_input) lapply(1:length(groups()), function(groupIndex) { colorId = paste0(groups()[groupIndex], "_color", cntr$val) fluidRow(column(5, textInput(inputId = colorId, label = NULL, value = fullColors[groupIndex])), column(1, checkboxInput(inputId = as.character(groups()[groupIndex]), # Numeric column causes issues, need to wrap with as.character label = groups()[groupIndex], value = TRUE), style='padding:0px;')) # Removing padding puts the two columns closer together }) # End lapply }) groups <- eventReactive(input$column_input, {sort(unique(datamanifest()[[input$column_input]]))}) # Update groupIncludeManualPaletteInput observeEvent(input$group_palette_input, { groupColorIds = paste0(groups(), "_color",cntr$val) fullColors = brewer.pal(length(groups()), input$group_palette_input) for (groupColorIndex in seq_along(groupColorIds)) { updateTextInput(session, groupColorIds[groupColorIndex], value = fullColors[groupColorIndex]) } }) # Vector of booleans to mask the included groups includedGroups <- eventReactive(groups(), {unlist(map(as.character(groups()), ~input[[.x]]))}) # unlist() allows includedGroups to be used as an index variable # Vector of characters of group names that are included currentGroups <- eventReactive(includedGroups(), {groups()[includedGroups()]}) # Vector of characters of color names that are included currentColors <- reactive(unlist(map(groups(), ~input[[paste0(.x, "_color")]])[includedGroups()])) 
output$group_colors <- renderUI(groupIncludeManualPaletteInput()) # Make the borders of these textboxes match the color they describe # This is run twice when it works, but only once when it doesn't as a simple observe # As an observeEvent on groups(), only activates once and temporarily shows the border color; Adding in the req(input[[colorId]]) doesn't help # Same thing when the observation is on the groupIncludeManualPaletteInput() # Doubling the js code doesn't work # In the html, I can see that the border color is not being set # observing currentColors() is a dud too # Adding an if statement in the javascript doesn't change behavior. observe({ lapply(seq_along(groups()), function(groupIndex) { colorId = paste0(groups()[groupIndex], "_color",cntr$val) cat("Made it into Loop,", input[[colorId]], '\n') cat(colorId, ': ', input[[colorId]], '\n\n') runjs(paste0("document.getElementById('", colorId, "').style.borderColor ='", input[[colorId]] ,"'")) runjs(paste0("document.getElementById('", colorId, "').style.borderWidth = 'thick'")) }) }) } shinyApp(ui, server) ```
Following up on @YBS's response to my comment, I resorted to precomputing all the fields that could be made with the input file and using `conditionalPanel()` to selectively show them. This has the theoretical disadvantage that another file cannot be loaded in, but in practice I'm not seeing it. And I can use @YBS's answer — a counter — to fix that should it ever stop being theoretical.

I also changed the section that makes the fields into a regular reactive dependent solely on data$manifest, and changed the update function for the colors so it fits inside an `observe` instead of an `observeEvent`. No other changes were necessary, but I hit a snag when I over-engineered around what I thought was a big update.
14,149
3,510,846
Sorry if the question is a bit confusing. This is similar to [this question](https://stackoverflow.com/questions/2553668/how-to-remove-list-of-words-from-strings). I think the above question is close to what I want, but in Clojure.

There is [another](https://stackoverflow.com/questions/3136689/find-and-replace-string-values-in-python-list) question — I need something like that, but instead of '[br]' in that question, there is a list of strings that need to be searched for and removed.

Hope I made myself clear. I think this is due to the fact that strings in python are immutable.

I have a list of noise words that need to be removed from a list of strings. If I use the list comprehension, I end up searching the same string again and again, so only "of" gets removed and not "the". My modified list looks like this:

```
places = ['New York', 'the New York City', 'at Moscow' and many more]

noise_words_list = ['of', 'the', 'in', 'for', 'at']

for place in places:
    stuff = [place.replace(w, "").strip() for w in noise_words_list if place.startswith(w)]
```

I would like to know what mistake I'm making.
2010/08/18
[ "https://Stackoverflow.com/questions/3510846", "https://Stackoverflow.com", "https://Stackoverflow.com/users/24382/" ]
Here is my stab at it. This uses regular expressions. ``` import re pattern = re.compile("(of|the|in|for|at)\W", re.I) phrases = ['of New York', 'of the New York'] map(lambda phrase: pattern.sub("", phrase), phrases) # ['New York', 'New York'] ``` Sans `lambda`: ``` [pattern.sub("", phrase) for phrase in phrases] ``` **Update** Fix for the bug pointed out by [gnibbler](https://stackoverflow.com/users/174728/gnibbler) (thanks!): ``` pattern = re.compile("\\b(of|the|in|for|at)\\W", re.I) phrases = ['of New York', 'of the New York', 'Spain has rain'] [pattern.sub("", phrase) for phrase in phrases] # ['New York', 'New York', 'Spain has rain'] ``` @prabhu: the above change avoids snipping off the trailing "*in*" from "Spain". To verify run both versions of the regular expressions against the phrase "Spain has rain".
```
>>> import re
>>> noise_words_list = ['of', 'the', 'in', 'for', 'at']
>>> phrases = ['of New York', 'of the New York']
>>> noise_re = re.compile('\\b(%s)\\W'%('|'.join(map(re.escape,noise_words_list))),re.I)
>>> [noise_re.sub('',p) for p in phrases]
['New York', 'New York']
```
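An alternative sketch without regular expressions, splitting on whitespace and filtering against a set (this assumes the noise words only ever need to match whole tokens):

```python
noise = {'of', 'the', 'in', 'for', 'at'}
phrases = ['of New York', 'of the New York', 'Spain has rain']

# Keep only the tokens that are not noise words, then rejoin
cleaned = [' '.join(w for w in p.split() if w.lower() not in noise)
           for p in phrases]
# ['New York', 'New York', 'Spain has rain']
```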
14,150
56,401,685
### Concepts of objects in Python classes

While reading about old-style and new-style classes in Python, the term *object* occurs many times. What exactly is an object? Is it a base class, simply an object, or a parameter? For example, the new style for creating a class in Python:

```
class Class_name(object):
    pass
```

If `object` is just another class that serves as the base class for `Class_name` (inheritance), then what is termed an *object* in Python?
2019/05/31
[ "https://Stackoverflow.com/questions/56401685", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5084760/" ]
All objects in Python are ultimately derived from `object`. You don't need to be explicit about it in Python 3, but it's common to derive from `object` explicitly anyway.
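A quick sketch showing this (the class name is arbitrary):

```python
class C:                           # no explicit base class in Python 3
    pass

print(issubclass(C, object))       # True: C still derives from object
print(C.__mro__)                   # the method resolution order ends at object
print(isinstance(42, object))      # True: ints, strings, functions are all objects
```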
"Object" is a generic term. It could be a class instance, a string, or a value of any type (and probably many other things).

As an example, look at the term OOP, "object-oriented programming". *Object* has the same meaning there.
14,156
63,046,777
I have the following functions in my utils.py, which I use for debugging info:

```
say = print
log = print
```

I want to declare them in such a way that I can switch them ON/OFF, if possible on a per-module basis. For example, let's say I want to test something and enable/disable printing...

I don't want to use logging, because it is too cumbersome and requires more typing. I'm using this for quick debugging and will eventually delete those prints.

---

in utils.py

```
say = print
log = print

def nope(*args, **kwargs):
    return None
```

in blah.py

```
from utils import *

class ABC:
    def abc(self):
        say(111)
```

in ipython:

```
from blah import *

a = ABC()
a.abc()
111
say(222)
222

say = nope

a.abc(111)
111
say(222)
None
```
2020/07/23
[ "https://Stackoverflow.com/questions/63046777", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1019129/" ]
You can always redefine say/log to do nothing later on.

```
Python 3.8.2 (default, Apr 27 2020, 15:53:34)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> say = print
>>> say("hello")
hello
>>> def say(*args, **kwargs):
...     return None
...
>>> say("hello")
>>>
```
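If you want a per-module ON/OFF switch rather than rebinding the name, a sketch using a module-level flag checked at call time (this also sidesteps the star-import pitfall: rebinding `say` in `utils` would not affect modules that already did `from utils import *`, whereas the flag is looked up on every call):

```python
# utils.py -- minimal sketch of a switchable debug printer
DEBUG = True

def say(*args, **kwargs):
    if DEBUG:                    # checked in utils' globals at call time
        print(*args, **kwargs)

log = say
```

Turning it off from anywhere is then just `import utils; utils.DEBUG = False`.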
You could use `sys.stdout.write("\033[K")` to overwrite/clear the previous line at the terminal when you don't want it logged, for example:

```
import sys

def removeLine(dontWant):
    if dontWant:
        sys.stdout.write("\033[K")
```

where `dontWant` is set within the module or class (or even as a global) depending on when you want it, and `removeLine` is called after every log event.
14,161
55,437,583
I would like to convert HTML code to JavaScript. Currently I can send a message from the HTML file to a python server, which is then reversed and sent back to the HTML through socket io. I used this tutorial: <https://tutorialedge.net/python/python-socket-io-tutorial/> What I want to do now is rather than send the message by clicking a button on a web page I can instead run a JavaScript file from the command line, so > > node **index.js** > > > My **index.js** is below: ``` <script src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/2.2.0/socket.io.js"></script> <script> const socket = io("http://localhost:8080"); function sendMsg() { socket.emit("message", "HELLO WORLD"); } socket.on("message", function(data) { console.log(data); }); </script> ``` When running **index.js** I receive this error: ``` /home/name/Desktop/Research/server_practice/name/tutorialedge/js_communicate/index_1.js:1 (function (exports, require, module, __filename, __dirname) { <script src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/2.2.0/socket.io.js"></script> SyntaxError: Unexpected token < at createScript (vm.js:80:10) at Object.runInThisContext (vm.js:139:10) at Module._compile (module.js:616:28) at Object.Module._extensions..js (module.js:663:10) at Module.load (module.js:565:32) at tryModuleLoad (module.js:505:12) at Function.Module._load (module.js:497:3) at Function.Module.runMain (module.js:693:10) at startup (bootstrap_node.js:188:16) at bootstrap_node.js:609:3 ``` **In the error output, the caret is pointing at the second part of quotes:** ``` socket.io/2.2.0/socket.io.js"> ^ ``` I am pretty sure this is a syntax issue in combination with using socket io, but I am not sure what exactly is the problem. I believe I am using some sort of pseudo HTML/JavaScript code, which is why I am getting an error. I am new to JavaScript, but I need to use it because it contains APIs that I need. 
For clarity, this is the working HTML code from the tutorial, **index.html**:

```
<!-- index.html -->
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <meta http-equiv="X-UA-Compatible" content="ie=edge" />
    <title>Document</title>
  </head>
  <body>
    <button onClick="sendMsg()">Hit Me</button>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/2.2.0/socket.io.js"></script>
    <script>
      const socket = io("http://localhost:8080");

      function sendMsg() {
        socket.emit("message", "HELLO WORLD");
      }

      socket.on("message", function(data) {
        console.log(data);
      });
    </script>
  </body>
</html>
```

And here is the python server code, **server.py**

```
from aiohttp import web
import socketio

# creates a new Async Socket IO Server
sio = socketio.AsyncServer()
# Creates a new Aiohttp Web Application
app = web.Application()
# Binds our Socket.IO server to our Web App instance
sio.attach(app)

# we can define aiohttp endpoints just as we normally
# would with no change
async def index(request):
    with open('index.html') as f:
        return web.Response(text=f.read(), content_type='text/html')

# If we wanted to create a new websocket endpoint,
# use this decorator, passing in the name of the
# event we wish to listen out for
@sio.on('message')
async def print_message(sid, message):
    print("Socket ID: ", sid)
    print(message)
    # await a successful emit of our reversed message
    # back to the client
    await sio.emit('message', message[::-1])

# We bind our aiohttp endpoint to our app router
app.router.add_get('/', index)

# We kick off our server
if __name__ == '__main__':
    web.run_app(app)
```

The end goal is to have multiple data streams from JavaScript being sent to Python to be analyzed and sent back to JavaScript to be outputted through an API.
2019/03/31
[ "https://Stackoverflow.com/questions/55437583", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8973785/" ]
Script tags don't work in node. You need to use require to import modules. You can use the [socket.io client](https://www.npmjs.com/package/socket.io-client) module to connect to socket io using node. Note that you'll have to `npm install` it before use.

Example connection code adapted from the socket.io client readme:

```js
var socket = require('socket.io-client')('http://localhost:8080');

socket.on('connect', function(){});

function sendMsg() {
  socket.emit("message", "HELLO WORLD");
}

socket.on("message", function(data) {
  console.log(data);
});

socket.on('disconnect', function(){});
```
`<script>` tags are HTML tags; you can't have them in a JavaScript file. Just place your JavaScript:

```
const socket = io("http://localhost:8080");

function sendMsg() {
  socket.emit("message", "HELLO WORLD");
}

socket.on("message", function(data) {
  console.log(data);
});
```

(Note that under node you would still need to obtain `io` from the socket.io client package via require, as in the other answer; in a browser, the script tag provides it.)
14,163
27,925,447
I have the following dataframe.

```
   c1 c2  v1  v2
0   a  a   1   2
1   a  a   2   3
2   b  a   3   1
3   b  a   4   5
5   c  d   5   0
```

I wish to have the following output.

```
   c1 c2  v1  v2
0   a  a   2   3
1   b  a   4   5
2   c  d   5   0
```

The rule: first group the dataframe by c1 and c2; then, within each group, keep the row with the maximum value in column v2; finally, output the original dataframe with all the rows not satisfying the previous rule dropped.

What is the best way to obtain this result? Thanks.

Looking around, I have also found [this solution based on the apply method](https://stackoverflow.com/questions/27488080/python-pandas-filter-rows-after-groupby)
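A minimal sketch of one common way to express this with groupby/idxmax (added for illustration; it is not from the original post):

```python
import pandas as pd

df = pd.DataFrame({'c1': ['a', 'a', 'b', 'b', 'c'],
                   'c2': ['a', 'a', 'a', 'a', 'd'],
                   'v1': [1, 2, 3, 4, 5],
                   'v2': [2, 3, 1, 5, 0]})

# index of the row holding the max v2 inside each (c1, c2) group
idx = df.groupby(['c1', 'c2'])['v2'].idxmax()
out = df.loc[idx].reset_index(drop=True)
print(out)
#   c1 c2  v1  v2
# 0  a  a   2   3
# 1  b  a   4   5
# 2  c  d   5   0
```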
2015/01/13
[ "https://Stackoverflow.com/questions/27925447", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2063033/" ]
I wanted to add a comment, but my reputation doesn't allow me :). So I will risk posting an incomplete answer here. As a common practice, when you have a form to submit, you should use `@using (Html.BeginForm())`. This will take care of all the technicalities of having a form and avoid unnecessary errors (which I can't identify in your code, sorry!). Also, you can try putting a breakpoint at the Create action to see if it gets called properly.
The proper way to redirect is to return RedirectResult. In your case it seems to be:

```
[HttpGet]
public ActionResult Create(Clients client)
{
    if (ModelState.IsValid)
    {
        _db.Clients.Add(client);
        _db.SaveChanges();
    }
    return new RedirectResult("Index");
}
```

Update: To allow redirection you should use HTTP GET instead of POST. Add FormMethod.Get to your form parameters and change the attribute:

```
@using (Html.BeginForm("Login", "Account", FormMethod.Get))
{
    // ...
    <input type="submit" value="Sign In">
}
```
14,164
15,587,311
```
def parabola(h, k, xCoordinates):
```

h is the x coordinate where the parabola touches the x axis, k is the y coordinate where the parabola intersects the y axis, and xCoordinates is a list of x coordinates along the major axis. The function returns a list of y coordinates using the equation shown below. There will be one y coordinate for each x coordinate in the list of x coordinates.

```
y(x, h, k) = a(x − h)^2, where a = k/h^2
```

I know how to work in Python, as I have already computed the area:

```
def computeArea(y_vals, h):
    i = 1
    total = y_vals[0] + y_vals[-1]
    for y in y_vals[1:-1]:
        if i % 2 == 0:
            total += 2*y
        else:
            total += 4*y
        i += 1
    return total*(h/3.0)

y_values = [13, 45.3, 12, 1, 476, 0]
interval = 1.2
area = computeArea(y_values, interval)
print "The area is", area
```

But the question above is hurting me because it's pure mathematics; I just want a little bit of help.
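A minimal sketch of the function, directly transcribing the formula above (names follow the question's signature; this is one straightforward reading, not necessarily the intended grading solution):

```python
def parabola(h, k, xCoordinates):
    # a = k / h^2, so that y(0) = a*h^2 = k and y(h) = 0
    a = k / float(h ** 2)
    return [a * (x - h) ** 2 for x in xCoordinates]

# e.g. parabola(2, 4, [0, 1, 2]) -> [4.0, 1.0, 0.0]
```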
2013/03/23
[ "https://Stackoverflow.com/questions/15587311", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
```
pricing::pricing(void)
{
    m(10,0.0,0.01,50);
}
```

This attempts to *call* `m` as though it were a function (if it had overloaded `operator()`, you would be able to do this, which is what the error is talking about). To initialise `m` instead, use the member initialization list:

```
pricing::pricing(void)
    : m(10,0.0,0.01,50)
{
}
```

This colon syntax is used to initialise members of an object in the constructor. You simply list members by their names and initialize them with either `( expression-list )` or `{ initializer-list }` syntax.
pricing.cpp

```
#include "pricing.h"

pricing::pricing()
    : m(10,0.0,0.01,50)
{
}

double pricing::expectedValue()
{
    return m.samplePaths[2][3];
}
```

pricing.h

```
#ifndef PRICING_H
#define PRICING_H

#include "monteCarlo.h"
#include <vector>

class pricing
{
public:
    pricing();
    double euroCall();
    std::vector<double> samplePathing;
    double expectedValue();

private:
    monteCarlo m;
};

#endif
```

montecarlo.cpp looks like:

```
#include "monteCarlo.h"
#include "randomWalk.h"
#include <iostream>

monteCarlo::monteCarlo(int trails, double drift, double volidatity, int density)
{
    for (int i = 0; i < trails; i++)
    {
        std::cout << "Trail number " << i+1 << std::endl;
        randomWalk r(drift,volidatity,density);
        r.seed();
        samplePaths.emplace_back(r.samplePath);
        std::cout << "\n" << std::endl;
    }
}
```

and finally montecarlo.h is:

```
#ifndef MONTECARLO_H
#define MONTECARLO_H

#include <vector>

class monteCarlo
{
public:
    monteCarlo(int, double, double, int);
    std::vector< std::vector<double> > samplePaths;
};

#endif
```

I would use some very basic rules:

1. Use include guards.
2. Include only headers that are really needed in headers.
3. Include the header as the first file in the implementation.
4. Do not use "using namespace" in a header.
5. If possible, use forward declarations instead of includes.

Rule 3 makes sure that the header contains all the necessary include files.
14,165
15,820,247
Raymond Hettinger surprised quite a few people when he showed slides 36 and 37. <https://speakerdeck.com/pyconslides/transforming-code-into-beautiful-idiomatic-python-by-raymond-hettinger> -- Many people knew that the with statement could be used for opening files, but not these new things. Looking at the Python 3.3 docs on threading, only at the very bottom, section 16.2.8, is it even mentioned. From the lecture it was implied that using the 'with' statement was best practice.

* How is one supposed to figure out if 'with' is supported, what it can be tied to, etc.?
* Also, how should 'with' be referred to (threading with statement, python threading lock with statement, ...)? What is the vernacular to search and see if 'with' is supported (we can ask if something is iterable; do we ask if it's 'withable')?

ref:

* <http://docs.python.org/2/reference/compound_stmts.html#with> 7.5
* <http://docs.python.org/2/reference/datamodel.html#context-managers> 3.4.10
* <http://docs.python.org/3.1/library/threading.html> 16.2.8
2013/04/04
[ "https://Stackoverflow.com/questions/15820247", "https://Stackoverflow.com", "https://Stackoverflow.com/users/411046/" ]
First, you don't ask if something is "withable", you ask if it's a "context manager".\* For example, in the docs you linked (which are from 3.1, not 3.3, by the way):

> Currently, `Lock`, `RLock`, `Condition`, `Semaphore`, and `BoundedSemaphore` objects may be used as `with` statement context managers.

Meanwhile, if you want to search in the interactive interpreter, there are two obvious things to do:

```
if hasattr(x, '__exit__'):
    print('x is a context manager')

try:
    with x:
        pass
except AttributeError:
    pass
else:
    print('x is a context manager')
```

Meanwhile:

> `help(open)` … makes no mention of it

Well, yeah, because `open` isn't a context manager, it's a function that happens to return something that is a context manager. In 3.3, it can return a variety of different things depending on its parameters; in 2.7, it only returns one thing (a `file`), but `help` tells you exactly what it returns, and you can then use `help` on whichever one is appropriate for your use case, or just look at its attributes, to see that it defines `__exit__`.

At any rate, realistically, just remember that EAFP applies to debugging and prototyping as well as to your final code. Try writing something with a `with` statement first. If the expression you're trying to use as a context manager isn't one, you'll get an exception as soon as you try to run that code, which is pretty easy to debug. (It will generally be an `AttributeError` about the lack of `__exit__`, but even if it isn't, the fact that the traceback says it's from your `with` line ought to tell you the problem.) And if you have an object that seems like it *should* be usable as a context manager, and isn't, you might want to consider filing a bug/bringing it up on the mailing lists/etc. (There are some classes in the stdlib that weren't context managers until someone complained.)

One last thing: If you're using a type that has a `close` method, but isn't a context manager, just use `contextlib.closing` around it:

```
with closing(legacy_file_like_object):
```

… or

```
with closing(legacy_file_like_object_producer()) as f:
```

In fact, you should really look at everything in [`contextlib`](http://docs.python.org/2/library/contextlib.html). `@contextmanager` is very nifty, and `nested` is handy if you need to backport 2.7/3.x code to 2.5, and, while `closing` is trivial to write (if you have `@contextmanager`), using the stdlib function makes your intentions clear.

---

\* Actually, there was a bit of a debate about the naming, and it recurs every so often on the mailing lists. But the docs and `help('with')` both give a nearly-precise definition: the "context manager" is the result of evaluating the "context expression". So, in `with foo(bar) as baz, qux as quux:`, `foo(bar)` and `qux` are both context managers. (Or maybe in some way the two of them make up a single context manager.)
afaik any class/object that implements the `__exit__` method (you will also need to implement `__enter__`):

```
>>> dir(file)  # notice it includes __enter__ and __exit__
```

so

```
def supportsWith(some_ob):
    # could just as easily use hasattr(some_ob, "__exit__")
    return "__exit__" in dir(some_ob)
```
14,166
46,439,557
I have a .txt file where each line is in a format like this:

```
1 2,10 3,20
2 6,87
...
```

This file actually represents a graph. Line 1 says that vertex 1 has a directed edge to vertex 2 with length 10, and vertex 1 also has a directed edge to vertex 3 with length 20. Line 2 says that vertex 2 has only one directed edge, to vertex 6, with length 87.

I want to do Dijkstra's shortest path algorithm. I don't want to define a vertex class, edge class, etc.; rather, I want to store each line in a 2-d array, so that by using an index I can get the graph info. If it were in python, I would store the whole file into a nested list [[(2,10), (3,20)], [(6,87)], ...], so that without making vertex and edge classes I can easily access all necessary graph info by indexing the list.

My question is, how to do this in Java? A 2-D array is not good because each line might have a different number of integers. ArrayList might be good, but how do I read the whole txt file into such an ArrayList efficiently?
2017/09/27
[ "https://Stackoverflow.com/questions/46439557", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8680259/" ]
Providing you have a method that takes `[FromBody]TestStatus status` as a parameter:

* Click on the **Body** tab and select **raw**, then JSON(application/json).
* Use this Json:

```
{
    "TestStatus": "expiredTest"
}
```

* Send!

I think the above is your case, as you stated: "take enum object as a body". Below are some more trivial ingredients:

If you have a parameter like `[FromBody]MyClass class` and its definition as

```
public class MyClass
{
    public Guid Id { get; set; }
    public TestStatus ClassStatus { get; set; }
}
```

Then you modify your Json as:

```
{
    "Id": "28fa119e-fd61-461e-a727-08d504b9ee0b",
    "ClassStatus": "expiredTest"
}
```
Just pass a 0, 1, 2, ... integer in the JSON body to pass enum values; choose 0 if you need to pass the first enum member. Example:

```
{
    "employee": 0
}
```
14,168
47,483,015
I have these settings

```
EMAIL_HOST = 'smtpout.secureserver.net'
EMAIL_HOST_USER = 'username@domain.com'
EMAIL_HOST_PASSWORD = 'password'
DEFAULT_FROM_EMAIL = 'username@domain.com'
SERVER_EMAIL = 'username@domain.com'
EMAIL_PORT = 465
EMAIL_USE_TLS = True
SMTP_SSL = True
```

Speaking to Godaddy, I have found out these are the ports and settings:

```
smtpout.secureserver.net
ssl 465
587  TLS ON
3535 TLS ON
25   TLS ON
80   TLS ON or TLS OFF
```

I have tried all the combinations. If I set TLS to True I am getting

```
STARTTLS extension not supported by the server.
```

If I set to 465 I am getting

[![Connetion Closed](https://i.stack.imgur.com/koqnf.png)](https://i.stack.imgur.com/koqnf.png)

If I set other combinations like

```
EMAIL_HOST = 'smtpout.secureserver.net'
EMAIL_HOST_USER = 'username@domain.com'
EMAIL_HOST_PASSWORD = 'password'
DEFAULT_FROM_EMAIL = 'username@domain.com'
SERVER_EMAIL = 'username@domain.com'
EMAIL_PORT = 25
EMAIL_USE_TLS = False
```

[![Invalid Address](https://i.stack.imgur.com/iwXh1.png)](https://i.stack.imgur.com/iwXh1.png)

For verification, I used Google Mail settings to test if sending email via python works, and it is working. Now I want to switch to GoDaddy, and I know that for email we use TLS to log in, even for POP3 download, and it is working, so I am not sure why the python/Django option is not working. Can you please help? I have called Godaddy; they cannot help because it is a software issue - all their settings and ports are working - so I have no one to ask.
2017/11/25
[ "https://Stackoverflow.com/questions/47483015", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1333598/" ]
This worked for me with my GoDaddy email. Since GoDaddy sets up your email in Office365, you can use smtp.office365.com.

settings.py

```
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'smtp.office365.com'
EMAIL_HOST_USER = 'myemail@GoDaddyDomain.com'
DEFAULT_FROM_EMAIL = EMAIL_HOST_USER
EMAIL_HOST_PASSWORD = 'myPassword'
EMAIL_PORT = 587
EMAIL_USE_SSL = False
EMAIL_USE_TLS = True
```

method in views.py that handles sending email. The email variable is from a user form.

```
from django.conf import settings
from django.core.mail import EmailMessage

def send_email(subject, body, email):
    try:
        email_msg = EmailMessage(subject, body, settings.EMAIL_HOST_USER,
                                 [settings.EMAIL_HOST_USER], reply_to=[email])
        email_msg.send()
        return "Message sent :)"
    except:
        return "Message failed, try again later :("
```
I found this code worked for me; hope it will be useful to somebody. I was using GoDaddy SMTP webmail. You can put this code into your Django settings file. Note that you cannot set both SSL and TLS to True together; if you do, you get an error along the lines of "At one time either SSL or TLS can be true".

settings.py

```
# Emailing details
EMAIL_HOST = 'md-97.webhostbox.net'
EMAIL_HOST_USER = 'mailer@yourdomain'
EMAIL_HOST_PASSWORD = 'your login password'
EMAIL_PORT = 465
EMAIL_USE_SSL = True
```
14,170
11,027,749
What's going on? I tried iPython and the regular Python interpreter, both show ^[[A and ^[[B for the up and down arrows instead of previous commands.

**Platform:** Ubuntu 12.04.
**Python:** 2.7.3 installed with pythonbrew
**Terminal:** iTerm 2 on Mac OSX 10.6, connected over SSH. Has never worked in the Python shell over SSH, but works locally.

Running locale outputs:

```
LANG=en_US.UTF-8
LANGUAGE=en_US:en
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=
```
2012/06/14
[ "https://Stackoverflow.com/questions/11027749", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1377021/" ]
Since you installed Python with pythonbrew, you must install the `libreadline-dev` package in your package manager *then* recompile Python. The package is named `libreadline-dev` or something similar in most Linux distributions (Ubuntu, Debian, Fedora...). This step is not required on Gentoo or Arch systems, which always include dev support for libraries. This step is also not necessary for Python that you install from the package manager. **Footnote:** The locale is irrelevant. The terminal emulator is irrelevant. SSH is irrelevant. I have never seen these factors affect line editing capabilities, although I suppose anything's possible. **Footnote 2:** I'm going to submit a patch to the docs for pythonbrew, this is not the first time someone has complained about readline missing. **Update:** [Pull request](https://github.com/utahta/pythonbrew/pull/87) **Update 2:** Merged.
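One quick sanity check, sketched here for illustration, is to try importing the module; a Python built without readline support will fail the import:

```python
try:
    import readline  # noqa: F401
    print("readline support is compiled in")
except ImportError:
    print("no readline: install libreadline-dev and rebuild Python")
```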
`libreadline-dev` was not enough; what solved it for me was installing the `readline` package:

```
pip install readline
```
14,179
3,663,762
In my models.py, I want to have an optional field to a foreign key. I tried this:

```
field = models.ForeignKey(MyModel, null=True, blank=True, default=None)
```

I am getting this error:

```
model.mymodel_id may not be NULL
```

I am using sqlite. In case it is helpful, here is the exception location:

```
/usr/lib/python2.6/site-packages/django/db/backends/sqlite3/base.py in execute, line 200
```
2010/09/08
[ "https://Stackoverflow.com/questions/3663762", "https://Stackoverflow.com", "https://Stackoverflow.com/users/440221/" ]
If it was previously not null and you synced it before, then resyncing won't change it. Either drop the table, use a migration tool such as South, or alter the column in SQL directly.
I believe it has to be both `null=True` and `blank=True`: `null` controls whether the database column accepts NULL, while `blank` only affects form-level validation.
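A hedged sketch of such a field for modern Django versions (the model names are illustrative; note that `on_delete` became mandatory in Django 2.0 and was not required when this question was asked):

```python
from django.db import models

class Example(models.Model):
    # null=True: the DB column may store NULL; blank=True: forms may leave it empty
    field = models.ForeignKey(
        'MyModel', null=True, blank=True, default=None,
        on_delete=models.SET_NULL,  # SET_NULL itself requires null=True
    )
```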
14,180
36,648,800
I have a BaseEntity class, which defines a bunch (a lot) of non-required properties and has most of the functionality. I extend this class in two others, which have some extra methods, as well as initialize one required property.

```
class BaseEntity(object):
    def __init__(self, request_url):
        self.clearAllFilters()
        super(BaseEntity, self).__init__(request_url=request_url)

    @property
    def filter1(self):
        return self.filter1

    @filter1.setter
    def filter1(self, some_value):
        self.filter1 = some_value

    ...

    def clearAllFilters(self):
        self.filter1 = None
        self.filter2 = None
        ...

    def someCommonAction1(self):
        ...


class DefinedEntity1(BaseEntity):
    def __init__(self):
        super(BaseEntity, self).__init__(request_url="someUrl1")

    def foo():
        ...


class DefinedEntity2(BaseEntity):
    def __init__(self):
        super(ConsensusSequenceApi, self).__init__(request_url="someUrl2")

    def bar(self):
        ...
```

What I would like is to initialize a BaseEntity object once, with all the filters specified, and then use it to create each of the DefinedEntities, i.e.

```
baseObject = BaseEntity(None)
baseObject.filter1 = "boo"
baseObject.filter2 = "moo"
entity1 = baseObject.create(DefinedEntity1)
```

Looking for pythonic ideas, since I've just switched from a statically typed language and am still trying to grasp the power of python.
2016/04/15
[ "https://Stackoverflow.com/questions/36648800", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2521764/" ]
One way to do it:

```
import copy

class A(object):
    def __init__(self, sth, blah):
        self.sth = sth
        self.blah = blah

    def do_sth(self):
        print(self.sth, self.blah)


class B(A):
    def __init__(self, param):
        self.param = param

    def do_sth(self):
        print(self.param, self.sth, self.blah)


a = A("one", "two")
almost_b = copy.deepcopy(a)
almost_b.__class__ = B
B.__init__(almost_b, "three")
almost_b.do_sth()  # it would print "three one two"
```

Keep in mind that Python is an extremely open language with lots of dynamic modification possibilities, and it is better not to abuse them. From a clean code point of view, I would use just a plain old call to the superconstructor.
I had the same problem as the OP and was able to use the idea from Radosław Łazarz above of explicitly setting the **class** attribute of the object to the subclass, but without the deep copy:

```
class A:
    def __init__(a): pass
    def amethod(a): return 'aresult'

class B(A):
    def __init__(b): pass
    def bmethod(self): return 'bresult'

a = A()
print(f"{a} of class {a.__class__} is {'' if isinstance(a,B) else ' not'} an instance of B")
a.__class__ = B  # here is where the magic happens!
print(f"{a} of class {a.__class__} is {'' if isinstance(a,B) else ' not'} an instance of B")
print(f"a.amethod()={a.amethod()} a.bmethod()={a.bmethod()}")
```

Output:

```
<__main__.A object at 0x00000169F74DBE88> of class <class '__main__.A'> is  not an instance of B
<__main__.B object at 0x00000169F74DBE88> of class <class '__main__.B'> is  an instance of B
a.amethod()=aresult a.bmethod()=bresult
```
14,185
61,738,541
The first line works, but the second doesn't:

```
print(np.fromfunction(lambda x, y: 10 * x + y, (3, 5), dtype=int))
print(np.fromfunction(lambda x, y: str(10 * x + y), (3, 5), dtype=str))

[[ 0  1  2  3  4]
 [10 11 12 13 14]
 [20 21 22 23 24]]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/cygdrive/c/Users/pafh2/OneDrive/dev/reverb/t.py", line 439, in <module>
    print(np.fromfunction(lambda x, y: str(10 * x + y), (3, 5), dtype=str))
  File "/usr/lib/python3.7/site-packages/numpy/core/numeric.py", line 2027, in fromfunction
    args = indices(shape, dtype=dtype)
  File "/usr/lib/python3.7/site-packages/numpy/core/numeric.py", line 1968, in indices
    res[i] = arange(dim, dtype=dtype).reshape(
ValueError: no fill-function for data-type.
>>>
```

Which page of the Numpy documentation explains the difference?

StackOverflow won't let me post, telling me, "It looks like your post is mostly code; please add some more details." So now everyone has to read this pointless paragraph I've just added. Now it says I've still not written enough "details", though I really have, so here's more added spam to appease the StackOverflow spam-filter "A.I.".
2020/05/11
[ "https://Stackoverflow.com/questions/61738541", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
They are both text files with code in them. The difference isn't in the files, but in how your webserver treats them. It is configured to run files with a `.php` extension through a PHP engine, and to serve up `.html` files directly.
You have to open the php file through an apache (or other php-handling) server. For instance, if you use XAMPP, and have index.php in the XAMPP directory, you would open a browser and go to localhost/index.php. The server then converts it into html, which a browser can handle.
14,186
24,821,340
First of all, an introduction to my development environment:

```
OS: Windows.
SDK: Microsoft Visual Studio 2008.
```

Earlier today I was facing the problem of trying to define a timer inside a class. My class is interfacing a Python embedded module and a C++ backend, and my problem is that I need to receive timer events on the python module. Also, it is important to notice that there will be only one instance of this class.

The main problem is that when I define a timer using:

```
/* Null, 0, mseconds, CALLBACK_METHOD */
SetTimer(NULL, 0, 100, (TIMERPROC) OnTimer);
```

the method activated on the timer event (OnTimer) needs to be a static method on my class (and then I cannot access any non-static methods or variables inside that class).

Reading some code on codeproject I have found: <http://www.codeproject.com/Articles/4817/How-to-use-SetTimer-with-callback-to-a-non-static>

I have a similar implementation, but without the lines:

```
void * CSleeperThread::pObject;
```

and

```
CSleeperThread *pSomeClass = (CSleeperThread*)pObject;   // cast the void pointer
pSomeClass->TimerProc(hwnd, uMsg, idEvent, dwTime);      // call non-static function
```

Is this the only way to implement the functionality I'm looking for? Is there an easier way I may have skipped in my information gathering process?
2014/07/18
[ "https://Stackoverflow.com/questions/24821340", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3227543/" ]
Create a static map in your class:

```
static std::map<UINT_PTR, CMyClass*> m_CMyClassMap;  // declaration
```

At the time of object creation, insert the object into this map:

```
CMyClass myClassObj;
m_CMyClassMap.insert(std::pair<UINT_PTR, CMyClass*>(0, &myClassObj));
```

Now you can use it in static methods to access its non-static members:

```
int a = m_CMyClassMap[0]->m_someNonStaticMember;
```
So long as there is only one instance of this class, there is an easy (if somewhat ugly) solution: declare your class object and then store a pointer to it in a global object. E.g.,

```
MyClass myObject;
MyClass* self = &myObject;
```

Then inside your static member, you can use self->myMethod() or self->myData to refer to non-static items.
14,187
14,140,902
I've run into a nasty little problem connecting to an Oracle schema via SQLAlchemy using a service name. Here is my code as a script (items between angle brackets are placeholders for real values, for security reasons):

```
from sqlalchemy import create_engine

if __name__ == "__main__":
    engine = create_engine("oracle+cx_oracle://<username>:<password>@<host>/devdb")
    result = engine.execute("create table test_table (id NUMBER(6), name VARCHAR2(15) not NULL)")
    result = engine.execute("drop table test_table")
```

Where 'devdb' is a service name and not an SID. The result of running this script is the stack trace:

```
(oracle-test)[1]jgoodell@jgoodell-MBP:python$ python example.py
Traceback (most recent call last):
  File "example.py", line 8, in <module>
    result = engine.execute("create table test_table (id NUMBER(6), name VARCHAR2(15) not NULL)")
  File "/Users/jgoodell/.virtualenvs/oracle-test/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1621, in execute
    connection = self.contextual_connect(close_with_result=True)
  File "/Users/jgoodell/.virtualenvs/oracle-test/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1669, in contextual_connect
    self.pool.connect(),
  File "/Users/jgoodell/.virtualenvs/oracle-test/lib/python2.6/site-packages/sqlalchemy/pool.py", line 272, in connect
    return _ConnectionFairy(self).checkout()
  File "/Users/jgoodell/.virtualenvs/oracle-test/lib/python2.6/site-packages/sqlalchemy/pool.py", line 425, in __init__
    rec = self._connection_record = pool._do_get()
  File "/Users/jgoodell/.virtualenvs/oracle-test/lib/python2.6/site-packages/sqlalchemy/pool.py", line 777, in _do_get
    con = self._create_connection()
  File "/Users/jgoodell/.virtualenvs/oracle-test/lib/python2.6/site-packages/sqlalchemy/pool.py", line 225, in _create_connection
    return _ConnectionRecord(self)
  File "/Users/jgoodell/.virtualenvs/oracle-test/lib/python2.6/site-packages/sqlalchemy/pool.py", line 318, in __init__
    self.connection = self.__connect()
  File "/Users/jgoodell/.virtualenvs/oracle-test/lib/python2.6/site-packages/sqlalchemy/pool.py", line 368, in __connect
    connection = self.__pool._creator()
  File "/Users/jgoodell/.virtualenvs/oracle-test/lib/python2.6/site-packages/sqlalchemy/engine/strategies.py", line 80, in connect
    return dialect.connect(*cargs, **cparams)
  File "/Users/jgoodell/.virtualenvs/oracle-test/lib/python2.6/site-packages/sqlalchemy/engine/default.py", line 279, in connect
    return self.dbapi.connect(*cargs, **cparams)
sqlalchemy.exc.DatabaseError: (DatabaseError) ORA-12505: TNS:listener does not currently know of SID given in connect descriptor
 None None
```

If 'devdb' were an SID and not a service name, this example would work just fine. I've been trying different permutations of the connection string but haven't found anything that works. There also does not appear to be anything in the SQLAlchemy documentation that explicitly explains how to handle SIDs versus service names for Oracle connections.
2013/01/03
[ "https://Stackoverflow.com/questions/14140902", "https://Stackoverflow.com", "https://Stackoverflow.com/users/165435/" ]
I've found the answer: you have to use the same connection descriptor that would appear in a tnsnames.ora file, placed in the connection string after the '@', like so:

```
from sqlalchemy import create_engine

if __name__ == "__main__":
    engine = create_engine("oracle+cx_oracle://<username>:<password>@(DESCRIPTION = (LOAD_BALANCE=on) (FAILOVER=ON) (ADDRESS = (PROTOCOL = TCP)(HOST = <host>)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = devdb)))")
    result = engine.execute("create table test_table (id NUMBER(6), name VARCHAR2(15) not NULL)")
    result = engine.execute("drop table test_table")
```

This example runs just fine, and you can comment out the drop statement and check the DB to see that the table was created.
cx\_Oracle supports the passing of a service\_name to the makedsn function: <http://cx-oracle.sourceforge.net/html/module.html?highlight=makedsn#cx_Oracle.makedsn>

It would be nice if the create\_engine() API passed the service\_name through to the underlying call it makes to makedsn... something like this:

```
oracle = create_engine('oracle://user:pw@host:port', service_name='myservice')

TypeError: Invalid argument(s) 'service_name' sent to create_engine(),
using configuration OracleDialect_cx_oracle/QueuePool/Engine. Please
check that the keyword arguments are appropriate for this combination
of components.
```
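For what it's worth, later SQLAlchemy releases of the cx_oracle dialect do accept a service_name query parameter directly in the URL; a short sketch (check the dialect docs for the version actually in use, as this did not exist when the question was asked):

```python
from sqlalchemy import create_engine

# service_name passed in the URL query string instead of an SID
engine = create_engine(
    "oracle+cx_oracle://<username>:<password>@<host>:1521/?service_name=devdb"
)
```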
14,188
36,234,988
I feel it is a more general question, but here is an example I am considering: I have a python class which, during its initialization, goes through a zip archive and extracts some data. Should the code chunk below be written explicitly inside `__init__`, or should it be made a method outside, which is then called inside `__init__`? Which approach is the most 'Pythonic' one?

```
with ZipFile(filename, "r") as archive:
    for item in archive.namelist():
        match = self.pattern.match(item)
        if match:
            uid = match.group(2)
            time = match.group(3)
        else:
            raise BadZipFile("bad archive")
```
2016/03/26
[ "https://Stackoverflow.com/questions/36234988", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3286832/" ]
If you want to execute the statements you are showing in more than one place, then there's really no discussion. Without a method or a function for this task, you will be violating the [DRY](http://c2.com/cgi/wiki?DontRepeatYourself) principle.

Otherwise... well, I'd write a method regardless. The task you are showing is nicely self-contained and should be abstracted under a descriptive name (a sketch of such an extraction follows below). It will make your `__init__` method easier to maintain and easier to read. You should also consider writing the code you are showing as a module-level function accepting a pattern as an argument, because besides the `self.pattern` attribute the task does not seem to have a strong connection to the data and methods of your class instances (from what we know).
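A rough sketch of that extraction, under the assumption that the snippet from the question lives in an `__init__` (class and helper names here are illustrative, not from the original post):

```python
from zipfile import ZipFile, BadZipFile

class ArchiveIndex:
    def __init__(self, filename, pattern):
        self.pattern = pattern
        # the self-contained task lives behind a descriptive name
        self.entries = self._scan(filename)

    def _scan(self, filename):
        # walk the archive and collect (uid, time) pairs per matching entry
        entries = []
        with ZipFile(filename, "r") as archive:
            for item in archive.namelist():
                match = self.pattern.match(item)
                if not match:
                    raise BadZipFile("bad archive")
                entries.append((match.group(2), match.group(3)))
        return entries
```

Keeping `_scan` separate also makes it easy to unit-test the parsing logic without constructing the whole object by hand.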
It is perfectly fine for `__init__()` to call other functions, including methods of the same class.
14,193
52,827,722
What is the naming convention in the python community for project folders and subfolders?

```
my-great-python-project
my_great_python_project
myGreatPythonProject
MyGreatPythonProject
```

I find all of these mixed on GitHub. I'd appreciate your expert opinion.
2018/10/16
[ "https://Stackoverflow.com/questions/52827722", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5164382/" ]
There are three conventions, which you might find confusing.

1. The standard [PEP8](https://www.python.org/dev/peps/pep-0008/) defines a standard for how to name packages and modules:

> Modules should have short, all-lowercase names. Underscores can be used in the module name if it improves readability. Python packages should also have short, all-lowercase names, although the use of underscores is discouraged.

2. Actually, nobody cares about the recommendation about not using underscores.

Even though it's in PEP8, many packages use underscores and the community doesn't consider it poor practice. So you see many names like `sqlalchemy_searchable`, etc. Although you can create a folder with a name which does not match your package name, it's generally a bad idea to do so because it makes things more confusing. So you'll usually use all-lowercase names with underscores for your folders.

3. Package naming on pypi.

The name of a package when it's installed doesn't need to match the name it's published under on pypi (the source for `pip` installs). Packages on `pypi` tend to be named with hyphens, not underscores, e.g. [flask-cors](https://pypi.org/project/Flask-Cors/), which installs the package `flask_cors`. However, you'll note that if you follow up on this example, [flask-cors's GitHub repo](https://github.com/corydolphin/flask-cors) defines the package code in a `flask_cors/` directory. This is the norm.

It gets a bit messy though, because `pip` package installation is case-insensitive and treats underscores and hyphens equivalently. So `Flask-Cors`, `fLASK_cOrs`, etc are all "equivalent". Personally, I don't like playing games with this; I recommend just naming packages on pypi in all-lowercase with hyphens, which is what most people do.

---

Disclaimer: I don't own or maintain `sqlalchemy-searchable` or `flask-cors`, but at time of writing they're good examples of packages with underscores in their names.
> Python packages should also have short, all-lowercase names, although the use of underscores is discouraged. [PEP 8 Style Guide](https://www.python.org/dev/peps/pep-0008/#naming-conventions)

This is the recommendation for packages, i.e. the main folder containing the modules and the testing, setup, and script files (*.py and \_\_init\_\_.py). Therefore, I am assuming the folder is the package and, as such, should be all lowercase with no underscores (see [Some Package Github](https://github.com/ctb/SomePackage)).
14,194
58,475,837
I am trying to learn the functional programming way of doing things in python. I am trying to serialize a list of strings in python using the following code:

```
S = ["geeks", "are", "awesome"]
reduce(lambda x, y: (str(len(x)) + '~' + x) + (str(len(y)) + '~' + y), S)
```

I am expecting:

```
5~geeks3~are7~awesome
```

But I am seeing:

```
12~5~geeks3~are7~awesome
```

Can someone point out why? Thanks in advance!
2019/10/20
[ "https://Stackoverflow.com/questions/58475837", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6101835/" ]
The `reduce` function on each iteration relies on the previous item/calculation (the nature of all ***reduce*** routines); that's why you got `12` at the start of the resulting string: on the 1st pass the accumulated item was `5~geeks3~are`, with length `12`, and that length was prepended on the next iteration.

Instead, you can go with a simple *consecutive* approach:

```
lst = ["geeks", "are", "awesome"]
res = ''.join('{}~{}'.format(str(len(s)), s) for s in lst)
print(res)   # 5~geeks3~are7~awesome
```
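If one insists on `reduce`, the fix is to give it an empty-string initializer and transform only the new element on each step, so the accumulator is never re-measured. A short sketch (in Python 3, `reduce` lives in `functools`):

```python
from functools import reduce

S = ["geeks", "are", "awesome"]
# acc is the already-serialized prefix; only y gets a length prefix added
out = reduce(lambda acc, y: acc + str(len(y)) + '~' + y, S, '')
print(out)  # 5~geeks3~are7~awesome
```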
The `reduce` function is for aggregation. What you're trying to do is mapping instead. You can use the `map` function for the purpose:

```
''.join(map(lambda x: str(len(x)) + '~' + x, S))
```

This returns:

```
5~geeks3~are7~awesome
```
14,197
70,765,867
I have been trying to use GitHub Actions to deploy a docker image to AWS ECR, but there is a step that is consistently failing. Here is the portion that is failing:

```
- name: Pulling ECR for updates and instantiating new updated containers.
  uses: appleboy/ssh-action@master
  with:
    host: ${{secrets.STAGING_HOST}}
    username: ${{secrets.STAGING_USERNAME}}
    key: ${{secrets.STAGING_PEM}}
    port: ${{secrets.STAGING_PORT}}
    script: |
      cd staging
      aws ecr get-login-password --region us-east-2 | docker login -u AWS -p-stdin ***.dkr.ecr.us-east-2.amazonaws.com
      docker pull ***.dkr.ecr.us-east-2.amazonaws.com/*container name*:latest
      docker-compose -f docker-compose.staging.yml up -d
      docker rmi $(docker images --filter dangling=true -q 2>/dev/null) 2>/dev/null
      docker exec -i *** python manage.py makemigrations *dir name*
      docker exec -i *** python manage.py makemigrations accountsettings
      docker exec -i *** python manage.py makemigrations payment
      docker exec -i *** python manage.py runapscheduler
      docker exec -i *** python manage.py migrate
```

I am not sure why this is an issue, as GitHub Actions' virtual environments already have the AWS CLI installed (<https://github.com/actions/virtual-environments/blob/main/images/linux/Ubuntu2004-Readme.md>), and I am using the AWS CLI in other steps of my workflow with no issue, for example:

```
- name: Build, Tag and Push image to Amazon ECR.
  id: build-image
  env:
    ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
    ECR_REPOSITORY: *ecr name*
    IMAGE_TAG: latest
  run: |
    cd *dir name*
    docker build -f Dockerfile.staging -t *container name* .
    aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin ***.dkr.ecr.us-east-2.amazonaws.com
    docker tag *container name*:latest ***.dkr.ecr.us-east-2.amazonaws.com/*container name*:latest
    docker push ***.dkr.ecr.us-east-2.amazonaws.com/*container name*:latest
```

and the image successfully gets pushed to my AWS ECR.

I have tried to install the AWS CLI as suggested here: [GitHub Action - AWS CLI](https://stackoverflow.com/questions/59166099/github-action-aws-cli), but still to no avail. Here is the code I used to install the AWS CLI:

```
- name: Installing aws cli via python pip
  run: |
    python -m pip install --upgrade pip
    pip install awscli
```

Here is the full error I have been getting:

```
======END======
err: bash: line 2: aws: command not found
err: WARNING! Using -*** the CLI is insecure. Use --password-stdin.
err: Error response from daemon: login attempt to https://***.dkr.ecr.us-east-2.amazonaws.com/v2/ failed with status: 400 Bad Request
err: Error response from daemon: Head "https://***.dkr.ecr.us-east-2.amazonaws.com/v2/*ecr name*/manifests/latest": no basic auth credentials
err: Pulling web (***.dkr.ecr.us-east-2.amazonaws.com/*ecr-name*:latest)...
err: Head "https://***.dkr.ecr.us-east-2.amazonaws.com/v2/*ecr-name*/manifests/latest": no basic auth credentials
err: Error: No such container: ***
err: Error: No such container: ***
err: Error: No such container: ***
err: Error: No such container: ***
err: Error: No such container: ***
err: Error: No such container: ***
err: Error: No such container: ***
err: Error: No such container: ***
20***/01/19 04:59:42 Process exited with status 1
```
2022/01/19
[ "https://Stackoverflow.com/questions/70765867", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17970861/" ]
Welcome to StackOverflow and the joys of programming and the cloud!

It seems that the AWS CLI is failing to configure the access key id and secret on the pipeline. In order to solve this and make it easier to manage in the long run, I would recommend using the pre-built actions from AWS to ease your pipeline's setup process.

The most common way of building a GitHub Actions pipeline for pushing images to AWS ECR is by using the following actions:

* `aws-actions/configure-aws-credentials@v1`
* `aws-actions/amazon-ecr-login@v1`

Using the combination of these actions together enables us to configure the pipeline's shell session to store temporary credentials for the AWS CLI and the ECR credentials for the docker login.

```yaml
steps:
  - name: Checkout
    uses: actions/checkout@v2

  - name: Configure AWS credentials
    uses: aws-actions/configure-aws-credentials@v1
    with:
      aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
      aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      aws-region: ap-south-1

  - name: Login to Amazon ECR
    id: login-ecr
    uses: aws-actions/amazon-ecr-login@v1

  - name: Build, tag, and push the image to Amazon ECR
    id: build-image
    env:
      ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
      ECR_REPOSITORY: ${{ secrets.REPO_NAME }}
      IMAGE_TAG: 1.0
    run: |
      # Build a docker container and push it to ECR
      docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
      echo "Pushing image to ECR..."
      docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
      echo "::set-output name=image::$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG"
```

If the guide above is not sufficient and you need help in configuring the access keys and secrets, I would recommend following the blog written [here](https://aws.plainenglish.io/build-a-docker-image-and-publish-it-to-aws-ecr-using-github-actions-f20accd774c3).
Actually, I just had to install AWS CLI on my EC2 instance, but thank you so much for the help!
14,201
55,966,757
When (and why) was the Python `__new__()` function introduced?

There are three steps in creating an instance of a class, e.g. `MyClass()`:

* `MyClass.__call__()` is called. This method must be defined in the metaclass of `MyClass`.
* `MyClass.__new__()` is called (by `__call__`). Defined on `MyClass` itself. This creates the instance.
* `MyClass.__init__()` is called (also by `__call__`). This initializes the instance.

Creation of the instance can be influenced either by overloading `__call__` or `__new__`. There is usually little reason to overload `__call__` instead of `__new__` (e.g. [Using the \_\_call\_\_ method of a metaclass instead of \_\_new\_\_?](https://stackoverflow.com/questions/6966772/using-the-call-method-of-a-metaclass-instead-of-new)).

We have some old code (still running strong!) where `__call__` is overloaded. The reason given was that `__new__` was not available at the time. So I tried to learn more about the history of both Python and our code, but I could not figure out when `__new__` was introduced.

`__new__` appears in the [documentation for Python 2.4](https://docs.python.org/2.4/ref/customization.html) and not in that for [Python 2.3](https://docs.python.org/2.3/ref/customization.html), but it does not appear in the [whatsnew](https://docs.python.org/2/whatsnew/2.4.html) of any of the Python 2 versions. The [first commit that introduced `__new__`](https://github.com/python/cpython/commit/6d6c1a35e08b95a83dbe47dbd9e6474daff00354) (Merge of descr-branch back into trunk.) that I could find is from 2001, but the 'back into trunk' message is an indication that there was something before. [PEP 252 (Making Types Look More Like Classes)](https://www.python.org/dev/peps/pep-0252/) and [PEP 253 (Subtyping Built-in Types)](https://www.python.org/dev/peps/pep-0253/) from a few months earlier seem to be relevant.

Learning more about the introduction of `__new__` would teach us more about why Python is the way it is.

---

Edit for clarification: It seems that `class.__new__` duplicates functionality that is already provided by `metaclass.__call__`. It seems un-Pythonic to add a method only to replicate existing functionality in a better way.

`__new__` is one of the few class methods that you get out of the box (i.e. with `cls` as first argument), thereby introducing complexity that wasn't there before. If the class is the first argument of a function, then it can be argued that the function should be a normal method of the metaclass. But that method did already exist: `__call__()`. I feel like I'm missing something.

> There should be one-- and preferably only one --obvious way to do it.
2019/05/03
[ "https://Stackoverflow.com/questions/55966757", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2097/" ]
The blog post [**`The Inside Story on New-Style Classes`**](http://python-history.blogspot.com/2010/06/inside-story-on-new-style-classes.html) (from the aptly named **`http://python-history.blogspot.com`**), written by [**`Guido van Rossum`**](https://en.wikipedia.org/wiki/Guido_van_Rossum) (Python's BDFL), provides some good information regarding this subject.

Some relevant quotes:

> New-style classes introduced a new class method `__new__()` that lets the class author customize how new class instances are created. By overriding `__new__()` a class author can implement patterns like the Singleton Pattern, return a previously created instance (e.g., from a free list), or return an instance of a different class (e.g., a subclass). However, the use of `__new__` has other important applications. For example, in the pickle module, `__new__` is used to create instances when unserializing objects. In this case, instances are created, but the `__init__` method is not invoked.

> Another use of `__new__` is to help with the subclassing of immutable types. By the nature of their immutability, these kinds of objects can not be initialized through a standard `__init__()` method. Instead, any kind of special initialization must be performed as the object is created; for instance, if the class wanted to modify the value being stored in the immutable object, the `__new__` method can do this by passing the modified value to the base class `__new__` method.

You can read the entire post for more information on this subject. Another post, about [**`New-style Classes`**](http://python-history.blogspot.com/2010/06/new-style-classes.html), written along with the above quoted post, has some additional information.

**Edit:** In response to OP's edit and the quote from the Zen of Python, I would say this. The [Zen of Python](https://www.python.org/dev/peps/pep-0020/) was not written by the creator of the language but by Tim Peters, and was published only on August 19, 2004. We have to take into account the fact that `__new__` appears only in the documentation of Python 2.4 (which was released on [November 30, 2004](https://en.wikipedia.org/wiki/History_of_Python)), and this particular guideline (or aphorism) did not even exist **publicly** when `__new__` was introduced into the language. Even if such a document of guidelines existed *informally* before, I do not think that the author(s) intended them to be misinterpreted as a design document for an entire language and ecosystem.
I will not explain the history of `__new__` here, because I have only used Python since 2005, after it was introduced into the language. But here is the rationale behind it.

The *normal* configuration method for a new object is the `__init__` method of its class. The object has already been created (usually via an indirect call to `object.__new__`) and the method just *initializes* it. Simply put, if you have a truly immutable object, it is too late. In that use case the Pythonic way is the `__new__` method, which builds and returns the new object. The nice point with it is that it is still included in the class definition and does not require a specific metaclass. Standard documentation states:

> `__new__()` is intended mainly to allow subclasses of immutable types (like int, str, or tuple) to customize instance creation. It is also commonly overridden in custom metaclasses in order to customize class creation.

Defining a `__call__` method on the metaclass is indeed allowed, but is IMHO non-Pythonic, because `__new__` should be enough. In addition, `__init__`, `__new__` and metaclasses each dig deeper inside the internal Python machinery. So the rule should be: *do not use `__new__` if `__init__` is enough, and do not use metaclasses if `__new__` is enough*.
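A minimal sketch of the immutable-subclass use case described above (the `Point` class is illustrative, not from the original answer):

```python
class Point(tuple):
    """Immutable 2-D point built on tuple; __init__ would be too late
    to set the items, so the contents are fixed in __new__."""
    def __new__(cls, x, y):
        # the object is created with its contents in a single step
        return super(Point, cls).__new__(cls, (x, y))

p = Point(3, 4)
print(p[0], p[1])  # 3 4
```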
14,202
40,007,305
I am using kivy to create a small GUI for my python program. This GUI is not always visible, so I start it with these settings:

```
Config.set('graphics', 'borderless', True)
Config.set('graphics', 'resizable', False)
Config.set('graphics', 'window_state', 'hidden')
```

However, somewhere in my program I want to make the window visible again. How do I do that? I couldn't find anything that changes the configuration at runtime.
2016/10/12
[ "https://Stackoverflow.com/questions/40007305", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2129897/" ]
It seems that if you are using the SDL provider you have **hide & show** functions on the Window object, from the kivy.core.window docs:

```
hide()
    Added in 1.9.0
    Hides the window. This method should be used on desktop platforms only.
    Note: This feature requires the SDL2 window provider and is currently
    only supported on desktop platforms.

show()
    Added in 1.9.0
    Shows the window. This method should be used on desktop platforms only.
    Note: This feature requires the SDL2 window provider and is currently
    only supported on desktop platforms.
```
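In practice that would look something like this (a short sketch, assuming the SDL2 window provider as the docs above require):

```python
from kivy.core.window import Window

Window.hide()  # make the window invisible
# ... later, when the GUI is needed again
Window.show()
```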
I'm not familiar with Kivy, but it looks like you just need to set it to visible.

`window_state`: string, one of 'visible', 'hidden', 'maximized' or 'minimized'

from: <https://kivy.org/docs/_modules/kivy/config.html>

Looking at this github post: <https://github.com/kivy/kivy/issues/3637>

The method they're using is .hide() and .show().
14,203
9,724,872
I have a python (django) web application. It uses an external web service (Facebook Graph). All the code for making external (http) calls is wrapped in one extra function (called `facebook_api`), which takes some arguments and returns a parsed dict (it does some logging, checks for errors, etc.).

Around this function, I have written some code. I want to do some unittests of this code. How can I "mock" this one function with some known input? I.e. I want to provide some input & output for my facebook_api function (if you see this input, return this, and if it's this, return that, etc.). I then want to check that my code handles it OK.

I want to provide various different scenarios for my code (what if the API started returning this, or that) and check that my code does what I want. I don't want to actually make external (http) requests.

I tried looking at some mock libraries (e.g. http://www.voidspace.org.uk/python/mock/), but they always talk about creating class objects, or just checking that something was called. I can't see easy ways to say "Intercept this function, and if the input is A, return X, if it's like D, return B".
2012/03/15
[ "https://Stackoverflow.com/questions/9724872", "https://Stackoverflow.com", "https://Stackoverflow.com/users/161922/" ]
I think you are looking for [Mock's side\_effect](http://www.voidspace.org.uk/python/mock/mock.html#mock.Mock.side_effect). For example:

```
from mock import Mock  # the standalone mock package; unittest.mock in Python 3.3+

def my_facebook_api(input):
    if input == 'A':
        return 'X'
    elif input == 'B':
        return 'D'

facebook_api = Mock(side_effect=my_facebook_api)
```
I have been using mockito-python (<http://code.google.com/p/mockito-python/>) with good success. It allows you to specify the behaviour of mocks with a simple syntax (straight from their documentation):

```
>>> dummy = mock()
>>> when(dummy).reply("hi").thenReturn("hello")
>>> when(dummy).reply("bye").thenReturn("good-bye")
>>> dummy.hi()
>>> dummy.reply("hi")
'hello'
>>> dummy.reply("bye")
'good-bye'
```

This of course requires that you are able to change the object containing facebook\_api to a mock during testing.
14,204
11,021,853
The IPython documentation pages suggest that opening several different sessions of IPython notebook is the only way to interact with saved notebooks in different directories or subdirectories, but this is not explicitly confirmed anywhere. I am facing a situation where I might need to interact with hundreds of different notebooks, which are classified according to different properties and stored in subdirectories of a main directory. I have set that main directory (let's call it `/main`) in the `ipython_notebook_config.py` configuration file to be the default directory. When I launch IPython notebook, indeed it displays any saved notebooks that are within `/main` (but *not* saved notebooks within subdirectories within `/main`). How can I achieve one single IPython dashboard that shows me the notebooks within `/main` *and also* shows subdirectories, lets me expand a subdirectory and choose from its contents, or just shows all notebooks from all subdirectories? Doing this by launching new instances of IPython every time is completely out of the question. I'm willing to tinker with source code if I have to for this ability. It's an extremely basic sort of feature, we need it, and it's surprising that it's not just the default IPython behavior. For any amount of saved notebooks over maybe 10 or 15, this feature is *necessary*.
2012/06/13
[ "https://Stackoverflow.com/questions/11021853", "https://Stackoverflow.com", "https://Stackoverflow.com/users/567620/" ]
> The IPython documentation pages suggest that opening several different sessions of IPython notebook is the only way to interact with saved notebooks in different directories or subdirectories, but this is not explicitly confirmed anywhere.

Yes, this is a current (*temporary*) limitation of the Notebook server. Multi-directory support is very high on the notebook todo list (unfortunately that list is long, and devs are few and have day jobs); it is just not there yet. By 0.14 (Fall, probably), you should have no reason to be running more than one nb server, but for now that's the only option for multiple directories.

All that is missing for a simple first draft is:

1. Associating individual notebooks with directories (fairly trivial), and
2. Web UI for simple filesystem navigation (slightly less trivial).

> I'm willing to tinker with source code if I have to for this ability

The limiting factor, if you want to poke around in the source, is the [NotebookManager](https://github.com/ipython/ipython/blob/rel-0.13/IPython/frontend/html/notebook/notebookmanager.py), which is associated with a particular directory. If you tweak the list\_notebooks() method to handle subdirectories, you are 90% there. I was curious about this as well, so I tossed together a quick example [here](https://github.com/minrk/ipython/tree/nbwalk) that allows you to at least read/run/edit/save notebooks in subdirs (walk depth is limited to 2, but easy to change). Any new notebooks will be in the top-level dir, and there is no UI for moving them around.
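For a feel of what such a tweak involves, here is a rough, hypothetical sketch of a depth-limited notebook walk; it is illustrative only and does not mirror the actual NotebookManager API:

```python
import os

def list_notebooks(root):
    # Hypothetical helper: yield .ipynb paths up to roughly two levels deep,
    # mirroring the depth limit of the linked example.
    for dirpath, dirnames, filenames in os.walk(root):
        depth = dirpath[len(root):].count(os.sep)
        if depth > 2:
            dirnames[:] = []  # stop descending further
            continue
        for name in filenames:
            if name.endswith('.ipynb'):
                yield os.path.join(dirpath, name)
```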
The interface and architecture design issues for multiple directory support (and more generally for "project" support) for iPython notebook are important to get right. A design is described in [IPEP 16: Notebook multi directory dashboard and URL mapping](https://github.com/ipython/ipython/wiki/IPEP-16%3A-Notebook-multi-directory-dashboard-and-URL-mapping) and is being discussed at [IPEP 16: Notebook multi directory dashboard and URL mapping · Issue #3166 · ipython/ipython](https://github.com/ipython/ipython/issues/3166)
14,206
13,586,153
**Objectives:** Implement a program (java or python) to retrieve data from videos that I published on my Youtube channel. This program will be launched daily (1:00 AM).

**Solutions:** To retrieve Youtube data, including the number of views per day, the YouTube Analytics API is in my opinion the best solution. I use a Google service account ("GoogleCredential") to authenticate:

```
static {
    // Build service account credential.
    try {
        // Create a listener for automatic refresh OAuthAccessToken
        List<CredentialRefreshListener> list = new ArrayList<CredentialRefreshListener>();
        list.add(new CredentialRefreshListener() {

            public void onTokenResponse(Credential credential,
                    TokenResponse tokenResponse) throws IOException {
                System.out.println(tokenResponse.toPrettyString());
            }

            public void onTokenErrorResponse(Credential credential,
                    TokenErrorResponse tokenErrorResponse) throws IOException {
                System.err.println("Error: " + tokenErrorResponse.toPrettyString());
            }
        });

        // Create a GoogleCredential to authenticate with the ServiceAccount service
        credential = new GoogleCredential.Builder()
                .setTransport(HTTP_TRANSPORT)
                .setJsonFactory(JSON_FACTORY)
                .setServiceAccountId(SERVICE_ACCOUNT_EMAIL)
                .setServiceAccountScopes(SCOPES)
                .setClock(Clock.SYSTEM)
                .setServiceAccountPrivateKeyFromP12File(new File("key.p12"))
                .setRefreshListeners(list).build();
        credential.refreshToken();
    } catch (GeneralSecurityException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
```

And I execute the Youtube Analytics query:

```
YoutubeAnalytics youtubeAnalytics = new YoutubeAnalytics.Builder(HTTP_TRANSPORT, JSON_FACTORY, credential)
        .setApplicationName("Test-YouTube-Analytics/1.0").build();

// Create request
credential.refreshToken();
YoutubeAnalyticsRequest<?> request = youtubeAnalytics.reports()
        .query("channel==" + channelId, "2012-10-01", "2012-12-01", "views")
        .setAlt("json")
        .setKey(API_KEY)
        .setDimensions("month")
        .setPrettyPrint(true);

System.out.println(request.buildHttpRequest().getUrl().toString());

ResultTable first = (ResultTable) request.execute();
```

But I get the following error:

```
com.google.api.client.googleapis.json.GoogleJsonResponseException: 500 Internal Server Error
{
  "code" : 500,
  "errors" : [ {
    "domain" : "global",
    "message" : "Unknown error occurred on the server.",
    "reason" : "internalError"
  } ],
  "message" : "Unknown error occurred on the server."
}
```

Thanks for your insight!
2012/11/27
[ "https://Stackoverflow.com/questions/13586153", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1856335/" ]
You can't use a service account when making a YouTube Analytics API request. You need to use an account that is either the owner of the YouTube channel or a content owner associated with the channel, and I don't believe a service account can be either of those things. Please go through the OAuth 2 flow once while signed in as the Google Account that owns the YouTube channel, and the saved OAuth 2 refresh token could then be used repeatedly in the future to get fresh access tokens which can be used to run reports. Could you please resolve that issue and then try running your report again?
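In practice that means doing the interactive OAuth 2 consent flow once, persisting the refresh token, and then exchanging it for a fresh access token on every scheduled run. A minimal Python sketch of that exchange against Google's standard token endpoint (the client ID, secret, and token values are placeholders you would load from your own storage):

```python
import requests

def refresh_access_token(client_id, client_secret, refresh_token):
    """Exchange a stored OAuth2 refresh token for a new access token."""
    resp = requests.post(
        "https://oauth2.googleapis.com/token",
        data={
            "client_id": client_id,
            "client_secret": client_secret,
            "refresh_token": refresh_token,
            "grant_type": "refresh_token",
        },
    )
    resp.raise_for_status()
    return resp.json()["access_token"]  # typically valid for about an hour
```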
Yes, you can authenticate for any of YouTube's APIs using a service account. The service account and the account you want to work with have to be in the same CMS. (Note: for YouTube Partner channels you will also need to set their content-owner ID when calling the API.)

How it works for me: I generate an access_token from the keyfile that I downloaded from GCloud when creating the service account's keys.

[You can read more about server-to-server authentication with OAuth2 here](https://developers.google.com/identity/protocols/oauth2/service-account)
14,207
53,259,674
Is it possible to put a variable into the path in Python/Linux? For example:

```
>>>counter = 0;
>>>image = ClImage(file_obj=open('/home/user/image'counter'.jpeg', 'rb'))
```

I get a syntax error when I do that.
2018/11/12
[ "https://Stackoverflow.com/questions/53259674", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9100401/" ]
You could use an [f-string](https://www.python.org/dev/peps/pep-0498/) if you're working in Python 3.6+. This is the most efficient method:

```
counter = 0
filepath = f"/home/user/image{counter}.jpeg"
image = ClImage(file_obj=open(filepath, 'rb'))
```

Otherwise, the second-best option is the [.format()](https://docs.python.org/3.4/library/functions.html#format) method:

```
counter = 0
filepath = "/home/user/image{0}.jpeg".format(counter)
image = ClImage(file_obj=open(filepath, 'rb'))
```
You can use Python's [.format()](https://realpython.com/python-string-formatting/) method: ``` counter = 0 filepath = '/home/user/image{0}.jpeg'.format(counter) image = ClImage(file_obj=open(filepath, 'rb')) ```
14,208
36,551,531
**My Flume configuration** ``` source_agent.sources = tail source_agent.sources.tail.type = exec source_agent.sources.tail.command = python loggen.py source_agent.sources.tail.batchSize = 1 source_agent.sources.tail.channels = memoryChannel #memory-channel source_agent.channels = memoryChannel source_agent.channels.memoryChannel.type = memory source_agent.channels.memoryChannel.capacity = 10000 source_agent.channels.memoryChannel.transactionCapacity=10000 source_agent.channels.memoryChannel.byteCapacityBufferPercentage = 20 source_agent.channels.memoryChannel.byteCapacity = 800000 # Send to Flume Collector on saprk sink source_agent.sinks = spark source_agent.sinks.spark.type=org.apache.spark.streaming.flume.sink.SparkSink source_agent.sinks.spark.batchSize=100 source_agent.sinks.spark.channel = memoryChannel source_agent.sinks.spark.hostname=localhost source_agent.sinks.spark.port=1234 ``` **My Spark-Scala Code** ``` package com.thanga.twtsteam import org.apache.spark.streaming.flume._ import org.apache.spark.streaming._ import org.apache.spark.streaming.StreamingContext._ import org.apache.spark.SparkConf object SampleStream { def main(args: Array[String]) { val conf = new SparkConf().setMaster("local[2]").setAppName("SampleStream") val ssc = new StreamingContext(conf, Seconds(1)) val flumeStream = FlumeUtils.createPollingStream(ssc, "localhost", 1234) ssc.stop() } } ``` **i am using SBT to build Jar my SBT configuration is below:** ``` name := "Flume" version := "1.0" scalaVersion := "2.10.4" publishMavenStyle := true libraryDependencies += "org.apache.spark" % "spark-core_2.10" % "1.4.1" libraryDependencies += "org.apache.spark" % "spark-streaming_2.10" % "1.4.1" libraryDependencies += "org.apache.spark" % "spark-streaming-flume_2.10" % "1.4.1" libraryDependencies += "org.apache.spark" % "spark-streaming-flume-sink_2.10" % "1.4.1" libraryDependencies += "org.scala-lang" % "scala-library" % "2.10.4" resolvers += "Akka Repository" at "http://repo.akka.io/releases/" ``` **The problem is now i can get build my jar without any error but while running i am getting the below error:** ``` 16/04/11 19:52:56 INFO BlockManagerMaster: Registered BlockManager Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/streaming/flume/FlumeUtils$ at com.thagna.twtsteam.SampleStream$.main(SampleStream.scala:10) at com.thanga.twtsteam.SampleStream.main(SampleStream.scala) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:665) at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:170) at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:193) at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:112) at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) Caused by: java.lang.ClassNotFoundException: org.apache.spark.streaming.flume.FlumeUtils$ at java.net.URLClassLoader$1.run(URLClassLoader.java:366) at java.net.URLClassLoader$1.run(URLClassLoader.java:355) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:354) at java.lang.ClassLoader.loadClass(ClassLoader.java:425) at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ... 
11 more 16/04/11 19:52:56 INFO SparkContext: Invoking stop() from shutdown hook ``` **can anyone help to get resolve**
2016/04/11
[ "https://Stackoverflow.com/questions/36551531", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3536400/" ]
You can use the --jars option if you are running the job using spark-submit.

For example:

```
spark-submit --jars ....../lib/spark-streaming_2.10-1.2.1.2.2.6.0-2800.jar
```

or add this to your SBT configuration:

```
libraryDependencies += "org.apache.spark" %% "spark-streaming-flume" % "2.1.0"
```

<https://spark.apache.org/docs/latest/streaming-flume-integration.html>
Add this to your build to get rid of this error: ``` <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-streaming-flume_2.10 --> <dependency> <groupId>org.apache.spark</groupId> <artifactId>spark-streaming-flume_2.10</artifactId> <version>2.0.0</version> </dependency> ```
14,210
27,218,638
I need to replace `\` into `\\` with python from pattern matching. For example, `$$\a\b\c$$` should be matched replaced with `$$\\a\\b\\c$$`. I couldn't use the regular expression to find a match. ``` >>> import re >>> p = re.compile("\$\$([^$]+)\$\$") >>> a = "$$\a\b\c$$" >>> m = p.search(a) >>> m.group(1) '\x07\x08\\c' ``` I can't simply make the input as raw string such as `a=r'$$\a\b\c$$'` because it's automatically processed with markdown processor. I also found that I couldn't use replace method: ``` >>> a.replace('\\','\\\\') '$$\x07\x08\\\\c$$' ``` How can I solve this issue?
2014/11/30
[ "https://Stackoverflow.com/questions/27218638", "https://Stackoverflow.com", "https://Stackoverflow.com/users/260127/" ]
The reason you're having trouble is that the string you're inputting is `$$\a\b\c$$`, which Python translates to `'$$\x07\x08\\c$$'`; the only backslash actually left in the string is in the segment `'\c'`. The best way to deal with this would be to input `a` as:

```
a=r'$$\a\b\c$$'
```

This tells Python to treat the string literal as raw characters. If you're reading the string in from a file, this is done automatically for you.
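A quick interactive check makes the difference visible (expected output shown as comments):

```python
s = '$$\a\b\c$$'       # \a and \b are interpreted as BEL and backspace
print(repr(s))         # '$$\x07\x08\\c$$'

r = r'$$\a\b\c$$'      # raw string: backslashes are kept literally
print(repr(r))         # '$$\\a\\b\\c$$'
print(r.replace('\\', '\\\\'))   # $$\\a\\b\\c$$
```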
Split the string with single backslashes, then join the resulting list with double backslashes. ``` s = r'$$\a\b\c$$' t = r'\\'.join(s.split('\\')) print('%s -> %s' % (s, t)) ```
14,211
67,687,962
I am trying to build a Word2vec model but when I try to reshape the vector for tokens, I am getting this error. Any idea ? ``` wordvec_arrays = np.zeros((len(tokenized_tweet), 100)) for i in range(len(tokenized_tweet)): wordvec_arrays[i,:] = word_vector(tokenized_tweet[i], 100) wordvec_df = pd.DataFrame(wordvec_arrays) wordvec_df.shape --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-101-71156bf1c4a3> in <module> 1 wordvec_arrays = np.zeros((len(tokenized_tweet), 100)) 2 for i in range(len(tokenized_tweet)): ----> 3 wordvec_arrays[i,:] = word_vector(tokenized_tweet[i], 100) 4 wordvec_df = pd.DataFrame(wordvec_arrays) 5 wordvec_df.shape <ipython-input-100-e3a82e60af93> in word_vector(tokens, size) 4 for word in tokens: 5 try: ----> 6 vec += model_w2v[word].reshape((1, size)) 7 count += 1. 8 except KeyError: # handling the case where the token is not in vocabulary TypeError: 'Word2Vec' object is not subscriptable ```
2021/05/25
[ "https://Stackoverflow.com/questions/67687962", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8020986/" ]
As of Gensim 4.0 and higher, the `Word2Vec` model doesn't support subscripted/indexed access (the `['...']` syntax) to individual words. (Previous versions would display a deprecation warning, `Method will be removed in 4.0.0, use self.wv.__getitem__() instead`, for such uses.)

So, when you want to access a specific word, do it via the `Word2Vec` model's `.wv` property, which holds just the word-vectors, instead.

So, your (unshown) `word_vector()` function should have the line highlighted in the error stack changed to:

```
vec += model_w2v.wv[word].reshape((1, size))
```
Use the `.wv` attribute of the model instead of indexing the model directly:

```
model.wv[word]            # or equivalently: model.wv.get_vector(word)
```
14,212
58,945,475
I'm somewhat new to python: I'm trying to write a text file into a different format. Given a file of format: ``` [header] rho = 1.1742817531 mu = 1.71997e-05 q = 411385.1046712013 ... ``` I want: ``` [header] 1.1742817531, 1.71997e-05, 411385.1046712013, ... ``` and be able to write successive lines below that. Right now, I have the following: ``` inFile = open('test.txt', 'r') f = open('test.txt').readlines() firstLine = f.pop(0) #removes the first line D = '' for line in f: D = line.strip('\n') b=D.rfind('=') c=D[b+2:] line = inFile.readline() ``` It returns only the last value, "3". How do I get it to return a string (which will be saved to a new txt file) in the format I want? Thanks in advance.
2019/11/20
[ "https://Stackoverflow.com/questions/58945475", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12400757/" ]
Cache the images on the filesystem when you first download them. When you load an image, check the cache, and download the images only if they're not yet cached. If they are, load them from the filesystem instead.
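A minimal Python sketch of that download-once pattern; the cache directory and helper name are invented for illustration:

```python
import hashlib
import os
import requests

CACHE_DIR = "/tmp/image_cache"  # illustrative location

def get_image(url):
    """Return image bytes, downloading only on a cache miss."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    key = hashlib.sha1(url.encode()).hexdigest()   # stable filename per URL
    path = os.path.join(CACHE_DIR, key)
    if os.path.exists(path):                       # cache hit: read from disk
        with open(path, "rb") as f:
            return f.read()
    data = requests.get(url).content               # cache miss: download once
    with open(path, "wb") as f:
        f.write(data)
    return data
```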
Try using Glide or Picasso to load images in the different list views. Glide internally caches images, using their URL as the key to retrieve them from the cache. That way, once your images are loaded in any of your list views, they can be cached for future use in the other list views. However, you will still need to create new instances of the image view, as you will be using a completely different ListView. You CAN create your own factory of image views with content (images) populated inside them and fetch such views based on unique keys (which you would define yourself, and that can be a pain), but that would be overkill for very little gain.
14,215
57,464,098
I am currently doing some exercises with Kernel Density Estimation and I am trying to run this piece of code: ```py from sklearn.datasets import load_digits from sklearn.model_selection import GridSearchCV digits = load_digits() bandwidths = 10 ** np.linspace(0, 2, 100) grid = GridSearchCV(KDEClassifier(), {'bandwidth': bandwidths}, cv=3) grid.fit(digits.data, digits.target) scores = [val.mean_validation_score for val in grid.cv_results_] ``` but as the title says I get an ```py --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-29-15a5f685e6d6> in <module> 8 grid.fit(digits.data, digits.target) 9 ---> 10 scores = [val.mean_validation_score for val in grid.cv_results_] <ipython-input-29-15a5f685e6d6> in <listcomp>(.0) 8 grid.fit(digits.data, digits.target) 9 ---> 10 scores = [val.mean_validation_score for val in grid.cv_results_] AttributeError: 'str' object has no attribute 'mean_validation_score' ``` regarding mean\_validation\_score and I don't understand why. The code is directly out of a book with a few changes due running an up to date scikit learn package. Here is the original code snipet: ```py from sklearn.datasets import load_digits from sklearn.grid_search import GridSearchCV digits = load_digits() bandwidths = 10 ** np.linspace(0, 2, 100) grid = GridSearchCV(KDEClassifier(), {'bandwidth': bandwidths}) grid.fit(digits.data, digits.target) scores = [val.mean_validation_score for val in grid.grid_scores_] ``` EDIT: Forgot to add how bandwiths is defined: ```py from sklearn.base import BaseEstimator, ClassifierMixin class KDEClassifier(BaseEstimator, ClassifierMixin): """Bayesian generative classification based on KDE Parameters ---------- bandwidth : float the kernel bandwidth within each class kernel : str the kernel name, passed to KernelDensity """ def __init__(self, bandwidth=1.0, kernel='gaussian'): self.bandwidth = bandwidth self.kernel = kernel def fit(self, X, y): self.classes_ = np.sort(np.unique(y)) training_sets = [X[y == yi] for yi in self.classes_] self.models_ = [KernelDensity(bandwidth=self.bandwidth, kernel=self.kernel).fit(Xi) for Xi in training_sets] self.logpriors_ = [np.log(Xi.shape[0] / X.shape[0]) for Xi in training_sets] return self def predict_proba(self, X): logprobs = np.array([model.score_samples(X) for model in self.models_]).T result = np.exp(logprobs + self.logpriors_) return result / result.sum(1, keepdims=True) def predict(self, X): return self.classes_[np.argmax(self.predict_proba(X), 1)] ```
2019/08/12
[ "https://Stackoverflow.com/questions/57464098", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11491321/" ]
It's simple; I also faced the same problem. Just replace this line:

```
scores = [val.mean_test_score for val in grid.cv_results_]
```

with

```
scores = grid.cv_results_.get('mean_test_score').tolist()
```

because `mean_validation_score` was removed along with `grid_scores_`, and `grid.cv_results_` is a dict.
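As a concrete illustration of that dictionary's shape (the keys shown are the standard ones scikit-learn produces after `fit`; output values are illustrative):

```python
# grid.cv_results_ maps string keys to arrays, one entry per parameter setting
print(sorted(grid.cv_results_.keys()))
# e.g. ['mean_fit_time', ..., 'mean_test_score', 'params', 'rank_test_score', ...]

scores = grid.cv_results_['mean_test_score']          # mean CV score per setting
for params, score in zip(grid.cv_results_['params'], scores):
    print(params, score)                              # e.g. {'bandwidth': 1.0} 0.96
```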
The [documentation](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) of the `GridSearchCV` object specifies that the attribute `cv_results_` is a dictionary; iterating over a Python dictionary therefore yields the string keys, as you can see [here](https://realpython.com/iterate-through-dictionary-python/).

My recommendation is to specify the `scoring` you want to use in the `GridSearchCV` constructor and then have a look at the `cv_results_` dictionary. Hope it helps.
14,216
20,386,727
Currently I have data in the following format:

```
A
A -> B -> C -> D -> Z
A -> B -> O
A -> X
```

This is stored in a list [line1, line2, and so forth]. Now I want to print this in the following manner:

```
A
|- X
|- B
|- O
|- C
|- D
|- Z
```

I'm new to Python, so I was thinking of finding '->' in each element of the array and replacing it with a space, but I don't know how to go forward from there.
2013/12/04
[ "https://Stackoverflow.com/questions/20386727", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2838679/" ]
1. You don't modify method parameters, you make copies of them. 2. You don't null-check/empty-check inside the loop, you do it first thing in the method. 3. The standard in a `for loop` is `i < size`, not `size > i`... meh ``` /** * Splits the string str into individual characters: Small becomes S m a l l */ public static String split(final String str) { String result = ""; // If parameter is null or empty, return an empty string if (str == null || str.isEmpty()) return result; // Go through the parameter's characters, and modify the result for (int i = 0; i < str.length(); i++) { // The new result will be the previous result, // plus the current character at position i, // plus a white space. result = result + str.charAt(i) + " "; } return result; } ``` --- 4. Go pro, use `StringBuilder` for the result, and static final constants for empty string and space character. Peace!
Ask yourself a question: where is **s** coming from?

```
char space = s.charAt(); ??? s ???
```

A second question: character at what index?

```
public static String split(String str){
    for(int i = 0; i < str.length(); i++) {
        if (str.length() > 0) {
            char space = str.charAt(i);
        }
    }
    return str;
}
```
14,217
54,174,950
**Context** I am trying to run my Django application and Postgres database in a docker development environment using docker-compose (it's my first time using Docker). I want to use my application with a custom role and database both named `teddycrepineau` (as opposed to using the default postgres user and db). **Goal** My goal is to deploy a web app powered on the front end by react and the backend by django restapi, the whole running in a docker. **System/Version** * python: 3.7 * django: 2.1 * OS: Mac OS High Sierra **What error am I getting** When running `docker-compose up` with my custom role and db, I am getting the following error `django.db.utils.OperationalError: FATAL: role "teddycrepineau" does not exist`. When running the same command with the default role and db `postgres` Django is able to start normally. My understanding was that running docker-compose up would create the role and db passed as environment variable. **What I have tried so far** I read multiple threat on this site, GitHub, and docker: * tried to delete my container and rebuilt it with formatting as suggested [here](https://stackoverflow.com/questions/49112545/postgres-and-docker-compose-cant-create-a-custom-role-and-database) * Went through [this](https://github.com/docker-library/postgres/issues/41) GitHub issue * Tried to move my environment variable from `.env` file the `environment` inside my `docker-compose.yml` file and rebuild my container --- Files ----- **docker-compose.yml** ``` version: '3' volumes: postgres_data: {} services: postgres: image: postgres volumes: - postgres_data:/var/lib/postgresql/data env_file: .env ports: - "5432" django: build: context: teddycrepineau-backend dockerfile: teddycrepineau-root/Dockerfile command: ./teddycrepineau-backend/teddycrepineau-root/start.sh env_file: .env volumes: - .:/teddycrepineau-backend ports: - "8000:8000" depends_on: - postgres ``` **Dockerfile** ``` FROM python:3.7 ENV PYTHONUNBUFFERED 1 WORKDIR /teddycrepineau-backend/ ADD ./teddycrepineau-root/requirements.txt /teddycrepineau-backend/ RUN pip install -r requirements.txt ADD . /teddycrepineau-backend/ RUN chmod +x ./teddycrepineau-root/start.sh ``` **start.sh** ``` #!/usr/bin/env bash python3 ./teddycrepineau-backend/teddycrepineau-root/manage.py runserver ``` **.env** ``` POSTGRES_PASSWORD= POSTGRES_USER=teddycrepineau POSTGRES_DB=teddycrepineau ``` --- **EDIT** My file structure is as follow ``` root |___ teddycrepineau-backend |___ teddycrepineau-root |___ teddycrepineau |___ Dockerfile |___ manage.py |___ start.sh |___ teddycrepineau-frontend |___ React-App |___ .env |___ docker-compose.yml ``` When I move my docker-compose.yml file inside my backend folder, it starts as expected (though I am not able to access my site when going to `127.0.0.1:8000` but that is mostly a different issue) with custom user and db. When I put my `docker-compose.yml` file to my root folder, I get the error `django.db.utils.OperationalError: FATAL: role "teddycrepineau" does not exist`
2019/01/14
[ "https://Stackoverflow.com/questions/54174950", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5022051/" ]
This happens because your pgsql db was launched without any envs. The pgsql docker image only uses the envs the first time you created the container, after that it won't recreate DB and users. The solution is to remove the pgsql volume so next time you `docker-compose up` you will have a fresh db with envs read. Simple way to do it is `docker-compose down -v`
Change your env order like this:

```
POSTGRES_DB=teddycrepineau
POSTGRES_USER=teddycrepineau
POSTGRES_PASSWORD=
```

I found it in [this issue](https://github.com/docker-library/postgres/issues/41#issuecomment-382925263). I hope it works.
14,221
47,031,382
I am using PyTorch with python3. I tried the following while in ipdb mode: ``` regions = np.zeros([107,4], dtype='uint8') torch.from_numpy(regions) ``` This prints the tensor. However when trying: ``` regions = np.zeros([107,107,4], dtype='uint8') torch.from_numpy(regions) ``` I get the following error: ``` *** UnicodeEncodeError: 'ascii' codec can't encode character '\u22ee' in position 72: ordinal not in range(128) ``` I'm am using: ``` numpy==1.11.3 torch==0.2.0.post4 torchvision==0.1.9 ``` and python3.5.3
2017/10/31
[ "https://Stackoverflow.com/questions/47031382", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8683130/" ]
Please make sure your AWS S3 CORS configuration looks like this:

```
<CORSConfiguration>
 <CORSRule>
   <AllowedOrigin>*</AllowedOrigin>
   <AllowedMethod>GET</AllowedMethod>
   <MaxAgeSeconds>3000</MaxAgeSeconds>
   <AllowedHeader>Authorization</AllowedHeader>
 </CORSRule>
</CORSConfiguration>
```
One of your uploads is failing. You will need to catch the error from s3Client.uploadPart() and retry. I recommend the following improvements on the simple code below. 1) Add an increasing timeout for each retry. 2) Process the type of error to determine if a retry will make sense. For some errors you should just report the error and abort. 3) Limit the number of retries to something like 10 to prevent a forever while loop. ``` // repeat the upload until it succeeds. boolean anotherPass; do { anotherPass = false; // assume everythings ok try { // Upload part and add response to our list. partETags.add(s3Client.uploadPart(uploadRequest).getPartETag()); } catch (Exception e) { anotherPass = true; // repeat } } while (anotherPass); ``` This Stack Overflow question has code for improving the error handling for your example. [Problems when uploading large files to Amazon S3](https://stackoverflow.com/questions/4698869/problems-when-uploading-large-files-to-amazon-s3)
14,226
55,210,888
I faced a problem when I installed python-pptx with conda in a clean environment: `conda install -c conda-forge python-pptx`. After the install finished successfully, I tried to import the pptx module and got the following error:

> ```
> >>> import pptx
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "C:\Users\SazonovEO\AppData\Local\Continuum\anaconda3\envs\new\lib\site-p
> ackages\pptx\__init__.py", line 13, in <module>
>     from pptx.api import Presentation  # noqa
>   File "C:\Users\SazonovEO\AppData\Local\Continuum\anaconda3\envs\new\lib\site-p
> ackages\pptx\api.py", line 17, in <module>
>     from .package import Package
>   File "C:\Users\SazonovEO\AppData\Local\Continuum\anaconda3\envs\new\lib\site-p
> ackages\pptx\package.py", line 13, in <module>
>     from .opc.package import OpcPackage
>   File "C:\Users\SazonovEO\AppData\Local\Continuum\anaconda3\envs\new\lib\site-p
> ackages\pptx\opc\package.py", line 13, in <module>
>     from .oxml import CT_Relationships, serialize_part_xml
>   File "C:\Users\SazonovEO\AppData\Local\Continuum\anaconda3\envs\new\lib\site-p
> ackages\pptx\opc\oxml.py", line 12, in <module>
>     from lxml import etree
> ImportError: DLL load failed: Не найден указанный модуль.
> ```

(The last line is Windows' DLL-load error in Russian: "The specified module could not be found.")

But if I install this library (python-pptx) with pip like this (also into a new clean environment):

```
pip install python-pptx
```

it works. I have the following versions: Python 3.7.1, python-pptx 0.6.17, lxml 4.3.0. Do you have any ideas about this issue?
2019/03/17
[ "https://Stackoverflow.com/questions/55210888", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9869122/" ]
If you are allowed to use built-in functions, you could do this: ``` idx = s[::-1].find(c[::-1]) return len(s) - (idx + len(c)) if idx >= 0 else -1 ```
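Wrapped up with a couple of checks (the function name `find_last` is just for illustration):

```python
def find_last(s, c):
    """Index of the last occurrence of substring c in s, or -1."""
    idx = s[::-1].find(c[::-1])          # search the reversed string
    return len(s) - (idx + len(c)) if idx >= 0 else -1

print(find_last("abcabc", "bc"))  # 4
print(find_last("abcabc", "zz"))  # -1
```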
Your problem is this line: ``` last_position = next_position + len(c) ``` This is skipping potential matches. As it is, your code considers only the first, third, and fifth positions for matches. As you say, the right answer comes from checking the fourth position (index == 3). But you're skipping that because you move the length of the test string each time, rather than moving forward by only one character. I think you want: ``` last_position = next_position + 1 ```
14,229
49,105,693
I have the following code:

```
import csv
import requests
from bs4 import BeautifulSoup
import datetime

with open("D:/python/sursa_alimentare.csv", "w+") as f:
    writer = csv.writer(f)
    writer.writerow(["Descriere", "Pret"])
```

Because I run this quite often, I want to save the csv file with a name that includes the datetime. Any help would be appreciated. Thank you.
2018/03/05
[ "https://Stackoverflow.com/questions/49105693", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8947024/" ]
You have to use `.strftime`:

```
filename = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M.csv")
with open(filename, "w+") as f:
    writer = csv.writer(f)
    writer.writerow(["Descriere", "Pret"])
```

Here are some details: <https://www.tutorialspoint.com/python/time_strftime.htm>
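For example, a run at 14:30 on 5 March 2018 would produce:

```python
import datetime

name = datetime.datetime(2018, 3, 5, 14, 30).strftime("%Y-%m-%d-%H-%M.csv")
print(name)   # 2018-03-05-14-30.csv
```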
I guess this might help you add the datetime to your filename:

```
import csv
import requests
from bs4 import BeautifulSoup
import datetime

# note: str(datetime.datetime.now()) contains ':' and spaces,
# which are not valid in Windows file names - prefer strftime there
file_name = 'sursa_alimentare-' + str(datetime.datetime.now()) + '.csv'

with open(file_name, "w+") as f:
    writer = csv.writer(f)
    writer.writerow(["Descriere", "Pret"])
```
14,231
21,265,633
I need to read a huge (larger than memory) unquoted TSV file. Fields may contain the string "\n". However, python tries to be clever and split that string in two. So for example a row containing: ``` cat dog fish\nchips 4.50 ``` gets split into two lines: ``` ['cat', 'dog', 'fish'] ['chips', 4.5] ``` What I want is a single line: ``` ['cat', 'dog', 'fish\nchips', 4.5] ``` How can I make python stop being clever and just split lines on 0x0a? My code is: ``` with open(path, 'r') as file: for line in file: row = line.split("\t") ``` Quoting the TSV file is not an option since I don't create it myself.
2014/01/21
[ "https://Stackoverflow.com/questions/21265633", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2400966/" ]
This already works correctly; a literal `\` followed by a literal `n` character (two bytes) in a file will **never** be seen by Python as a newline. What you have, then, is a single `\n` character, an actual newline. The *rest* of your file is separated by the conventional Windows `\r\n` line separator.

Use [`io.open()`](http://docs.python.org/2/library/io.html#io.open) to control how newlines are to be treated:

```
import io

with io.open(path, newline='\r\n') as infh:
    for line in infh:
        row = line.strip().split('\t')
```

Demo:

```
>>> import io
>>> with open('/tmp/test.txt', 'wb') as outfh:
...     outfh.write('cat\tdog\tfish\nchips\t4.50\r\nsnake\tegg\tspam\nham\t42.38\r\n')
... 
>>> with io.open('/tmp/test.txt', newline='\r\n') as infh:
...     for line in infh:
...         row = line.strip().split('\t')
...         print row
... 
[u'cat', u'dog', u'fish\nchips', u'4.50']
[u'snake', u'egg', u'spam\nham', u'42.38']
```

Note that `io.open()` also decodes your file data to unicode; you may need to specify an explicit encoding for non-ASCII file data.
If your problem is `.readline()` and splitting on `\t`, try using the built-in `csv` module:

```
import csv

with open(path, 'r') as file:
    reader = csv.reader(file, delimiter='\t')  # or csv.DictReader - I like DictReader
    for row in reader:
        ...  # each row is already split on tabs
```

It handles these things for us.
14,233
20,998,832
I've run the brown-clustering algorithm from <https://github.com/percyliang/brown-cluster> and also a python implementation <https://github.com/mheilman/tan-clustering>. They both give some sort of binary string and another integer for each unique token.

For example:

```
0 the 6
10 chased 3
110 dog 2
1110 mouse 2
1111 cat 2
```

**What do the binary string and the integer mean?**

From the first [link](https://github.com/percyliang/brown-cluster), the binary string is known as a `bit-string`; see <http://saffron.deri.ie/acl_acl/document/ACL_ANTHOLOGY_ACL_P11-1053/>

But how do I tell from the output that `dog`, `mouse` and `cat` form one cluster while `the` and `chased` are not in the same cluster?
2014/01/08
[ "https://Stackoverflow.com/questions/20998832", "https://Stackoverflow.com", "https://Stackoverflow.com/users/610569/" ]
If I understand correctly, the algorithm gives you a tree, and you need to truncate it at some level to get clusters. In the case of those bit strings, you should just take the first `L` characters.

For example, cutting at the second character gives you two clusters:

```
10 chased
11 dog
11 mouse
11 cat
```

At the third character you get:

```
110 dog
111 mouse
111 cat
```

The cutting strategy is a different subject though.
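A small sketch of that truncation in code, grouping the toy output above by the first `L` bits:

```python
from collections import defaultdict

rows = [("0", "the"), ("10", "chased"), ("110", "dog"),
        ("1110", "mouse"), ("1111", "cat")]

def clusters_at(rows, L):
    groups = defaultdict(list)
    for bits, word in rows:
        groups[bits[:L]].append(word)   # truncate the bit-string to depth L
    return dict(groups)

print(clusters_at(rows, 2))
# {'0': ['the'], '10': ['chased'], '11': ['dog', 'mouse', 'cat']}
```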
My guess is: According to Figure 2 in [Brown et al 1992](http://acl.ldc.upenn.edu/J/J92/J92-4003.pdf), the clustering is hierarchical and to get from the root to each word "leaf" you have to make an up/down decision. If up is 0 and down is 1, you can represent each word as a bit string. From <https://github.com/mheilman/tan-clustering/blob/master/class_lm_cluster.py> : ``` # the 0/1 bit to add when walking up the hierarchy # from a word to the top-level cluster ```
14,234
43,303,575
I am trying to install Cassandra on windows 10 localhost. I am getting error as `Can't detect Python version!` I am trying this way Downloaded and extracted Cassandra in `C:\wamp64\apache-cassandra-3.10` Set `Set-ExecutionPolicy Unrestricted` in Windows powershell From Windows CMD ``` cd C:\wamp64\apache-cassandra-3.10\bin C:\wamp64\apache-cassandra-3.10\bin>cassandra.bat -f ``` Cassandra is now running so I stopped it by `Control-C` Then I try to run `cqlsh` by following command ``` C:\wamp64\apache-cassandra-3.10\bin>cqlsh.bat ``` But I got errror `Can't detect Python version!` So I download and install Python 2.7.13 in `C:\wamp64\python` I have added environmental path for python in System Properties `C:\wamp64\python\` I extracted Thrift in `C:\wamp64\python\thrift-0.10.0` Then I install Python like this ``` C:\wamp64\python\thrift-0.10.0>python setup.py install ``` But again I am getting error on running `cqlsh` as ``` C:\wamp64\apache-cassandra-3.10\bin>cqlsh.bat Can't detect Python version! ``` Please see and suggest what step I have missed in installation of Cassandra for this error. Thanks **Edit** I reinstall everything from scratch again carefully and now I am getting this error ``` C:\wamp64\apache-cassandra-3.10\pylib>python setup.py install Traceback (most recent call last): File "setup.py", line 33, in <module> ext_modules=get_extensions(), File "setup.py", line 26, in get_extensions from Cython.Build import cythonize ImportError: No module named Cython.Build C:\wamp64\apache-cassandra-3.10\pylib>cd C:\wamp64\apache-cassandra-3.10\bin C:\wamp64\apache-cassandra-3.10\bin>python cqlsh localhost 9160 File "cqlsh", line 20 python -c 'import sys; sys.exit(not (0x020700b0 < sys.hexversion < 0x03000000))' 2>/dev/null \ ^ SyntaxError: invalid syntax C:\wamp64\apache-cassandra-3.10\bin> ``` Please see and suggest any possible way to resolve these error. Thanks
2017/04/09
[ "https://Stackoverflow.com/questions/43303575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
I have installed the latest version of Apache Cassandra (3.11.9) for Windows. My Python env variable was already set for Python 3 (Python 3.8), as I actively use Python 3.8. I was continuously getting the error; then I installed Python 2 inside 'Apache Cassandra 3.11.9\bin'. I did not need to reset my env variable to Python 2. More on the solution: <https://susant.medium.com/simple-way-to-install-cassandra-in-windows-10-6497e93989e6>
I think you are following the wrong Python installation procedure. **Please uninstall all the Python instances using the Programs and Features section in Control Panel, then install Python obtained from [python.org](https://www.python.org/). Ensure the "Add to PATH" option is checked at the time of installation. Verify the Python installation by typing `python` in a CMD window.**

After that, cd to your Cassandra installation bin folder and type `cassandra.bat -f`. It will successfully launch a Cassandra server instance. Never stop it, because cqlsh needs a running Cassandra instance.

Then open another CMD window, cd to your Cassandra installation bin folder and type `cqlsh`. It will successfully connect to the running Cassandra server instance, and the CMD window will switch to cqlsh console mode.

Successfully tested and verified on Win 7 64-bit with Python 2.7 64-bit.

*If you have time, please check it on Python 3.6 too...*
14,243
67,281,038
I have wrote a code for face recognition in python. My code works perfectly in `.py` file (without any errors or warning), but after making a `.exe` file out of it, through `pyinstaller` it won't work at all. I have searched through, for the same and tried the following methods, but it still won't work. first method i made the following changes in `.spec` file. (Windows OS) > > main.spec file > > > ``` block_cipher = None face_models = [ ('.\\face_recognition_models\\models\\dlib_face_recognition_resnet_model_v1.dat', './face_recognition_models/models'), ('.\\face_recognition_models\\models\\mmod_human_face_detector.dat', './face_recognition_models/models'), ('.\\face_recognition_models\\models\\shape_predictor_5_face_landmarks.dat', './face_recognition_models/models'), ('.\\face_recognition_models\\models\\shape_predictor_68_face_landmarks.dat', './face_recognition_models/models'), ] a = Analysis(['<your python script name.py>'], pathex=['<path to working directory>'], binaries=face_models, datas=[], hiddenimports=['scipy._lib.messagestream', 'scipy', 'scipy.signal', 'scipy.signal.bsplines', 'scipy.special', 'scipy.special._ufuncs_cxx', 'scipy.linalg.cython_blas', 'scipy.linalg.cython_lapack', 'scipy.integrate', 'scipy.integrate.quadrature', 'scipy.integrate.odepack', 'scipy.integrate._odepack', 'scipy.integrate.quadpack', 'scipy.integrate._quadpack', 'scipy.integrate._ode', 'scipy.integrate.vode', 'scipy.integrate._dop', 'scipy._lib', 'scipy._build_utils','scipy.__config__', 'scipy.integrate.lsoda', 'scipy.cluster', 'scipy.constants','scipy.fftpack','scipy.interpolate','scipy.io','scipy.linalg','scipy.misc','scipy.ndimage','scipy.odr','scipy.optimize','scipy.setup','scipy.sparse','scipy.spatial','scipy.special','scipy.stats','scipy.version'], hookspath=[], runtime_hooks=[], excludes=[], win_no_prefer_redirects=False, win_private_assemblies=False, cipher=block_cipher) a.datas += Tree('./scipy-extra-dll', prefix=None) pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher) exe = EXE(pyz, a.scripts, a.binaries, a.zipfiles, a.datas, name='<your python script name>', debug=False, strip=False, upx=True, runtime_tmpdir=None, console=True ) ``` another method ``` #replaced datas=[] #with datas=[('shape_predictor_68_face_landmarks.dat','./face_recognition_models/models'),('shape_predictor_5_face_landmarks.dat','./face_recognition_models/models'),('mmod_human_face_detector.dat','./face_recognition_models/models'),('dlib_face_recognition_resnet_model_v1.dat','./face_recognition_models/models')] ``` some people said it might me a problem with the `hook-scipy.py` file so here it is > > hook-scipy.py > > > ``` import os import glob from PyInstaller.utils.hooks import get_module_file_attribute from PyInstaller.compat import is_win from PyInstaller.utils.hooks import collect_submodules from PyInstaller.utils.hooks import collect_data_files hiddenimports = collect_submodules('scipy') datas = collect_data_files('scipy') binaries = [] # package the DLL bundle that official scipy wheels for Windows ship # The DLL bundle will either be in extra-dll on windows proper # and in .libs if installed on a virtualenv created from MinGW (Git-Bash # for example) if is_win: extra_dll_locations = ['extra-dll', '.libs'] for location in extra_dll_locations: dll_glob = os.path.join(os.path.dirname( get_module_file_attribute('scipy')), location, "*.dll") if glob.glob(dll_glob): binaries.append((dll_glob, ".")) # collect library-wide utility extension modules hiddenimports = ['scipy._lib.%s' % m for m in [ 'messagestream', 
"_ccallback_c", "_fpumode"]] ``` please help me to solve this.
2021/04/27
[ "https://Stackoverflow.com/questions/67281038", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14332805/" ]
First of all, make sure your function app can be compiled. Second, the format of your publish URL is fine, so this problem is probably not on the Visual Studio side. Please make sure the function app is not stopped or restarting, that the SCM site is not under networking protection, and that you have logged in with the right Microsoft account in VS.

If all of the above still doesn't work, you can try another deploy method, such as the [command line](https://learn.microsoft.com/en-us/azure/azure-functions/functions-run-local?tabs=windows%2Ccsharp%2Cbash#publish) or [FTP](https://learn.microsoft.com/en-us/azure/app-service/deploy-ftp?tabs=portal) and [so on](https://learn.microsoft.com/en-us/azure/azure-functions/functions-deployment-technologies#deployment-technology-details).

And for your situation, if you have only made a small change, [incremental deployment](https://learn.microsoft.com/en-us/azure/azure-functions/functions-continuous-deployment) may be a better choice.
In my case, opening the Azure Functions app in my browser helped. Until then, it was giving an error when I tried to publish it from Visual Studio.
14,245
45,457,324
I have set up a spark cluster and all the nodes have access to network shared storage where they can access a file to read. I am running this in a python jupyter notebook. It was working a few days ago, and now it stopped working but I'm not sure why, or what I have changed. I have tried restarting the nodes and master. I have also tried copying the csv file to a new directory and pointing the spark.read there, but it still gives the same error. When I delete the csv file, it gives a much shorter error saying 'File not found' Any help would be greatly appreciated. This is my code: ``` from pyspark.sql import SparkSession from pyspark.conf import SparkConf spark = SparkSession.builder \ .master("spark://IP:PORT") \ .appName("app_1") \ .config(conf=SparkConf()) \ .getOrCreate() df = spark.read.csv("/nas/file123.csv") string1 = df.rdd.map(lambda x: x.column1).collect() ``` However, I get this error: ``` --------------------------------------------------------------------------- Py4JJavaError Traceback (most recent call last) <ipython-input-2-12bd938122cd> in <module>() 29 30 ---> 31 string1 = df.rdd.map(lambda x: x.column1).collect() 32 33 /home/hjk/Downloads/spark-2.1.0-bin-hadoop2.7/python/pyspark/rdd.pyc in collect(self) 807 """ 808 with SCCallSiteSync(self.context) as css: --> 809 port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd()) 810 return list(_load_from_socket(port, self._jrdd_deserializer)) 811 /usr/local/lib/python2.7/dist-packages/py4j/java_gateway.pyc in __call__(self, *args) 1131 answer = self.gateway_client.send_command(command) 1132 return_value = get_return_value( -> 1133 answer, self.gateway_client, self.target_id, self.name) 1134 1135 for temp_arg in temp_args: /home/hjk/Downloads/spark-2.1.0-bin-hadoop2.7/python/pyspark/sql/utils.pyc in deco(*a, **kw) 61 def deco(*a, **kw): 62 try: ---> 63 return f(*a, **kw) 64 except py4j.protocol.Py4JJavaError as e: 65 s = e.java_exception.toString() /usr/local/lib/python2.7/dist-packages/py4j/protocol.pyc in get_return_value(answer, gateway_client, target_id, name) 317 raise Py4JJavaError( 318 "An error occurred while calling {0}{1}{2}.\n". --> 319 format(target_id, ".", name), value) 320 else: 321 raise Py4JError( Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe. : org.apache.spark.SparkException: Job aborted due to stage failure: Task 4 in stage 3.0 failed 4 times, most recent failure: Lost task 4.3 in stage 3.0 (TID 37, executor 2): java.io.FileNotFoundException: File file:/nas/file123.csv does not exist It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved. 
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:157) at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:102) at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source) at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.hasNext(SerDeUtil.scala:117) at scala.collection.Iterator$class.foreach(Iterator.scala:893) at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.foreach(SerDeUtil.scala:112) at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:504) at org.apache.spark.api.python.PythonRunner$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:328) at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1951) at org.apache.spark.api.python.PythonRunner$WriterThread.run(PythonRDD.scala:269) Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802) at scala.Option.foreach(Option.scala:257) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1918) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1931) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1944) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1958) at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:935) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112) at org.apache.spark.rdd.RDD.withScope(RDD.scala:362) at org.apache.spark.rdd.RDD.collect(RDD.scala:934) at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:453) at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:280) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.GatewayConnection.run(GatewayConnection.java:214) at java.lang.Thread.run(Thread.java:748) Caused by: java.io.FileNotFoundException: File file:/nas/file123.csv does not exist It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved. at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:157) at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:102) at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source) at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408) at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.hasNext(SerDeUtil.scala:117) at scala.collection.Iterator$class.foreach(Iterator.scala:893) at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.foreach(SerDeUtil.scala:112) at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:504) at org.apache.spark.api.python.PythonRunner$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:328) at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1951) at org.apache.spark.api.python.PythonRunner$WriterThread.run(PythonRDD.scala:269) ```
2017/08/02
[ "https://Stackoverflow.com/questions/45457324", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8236204/" ]
From the error it looks like Spark is checking for the file on each worker's local filesystem. Make sure the file is present at the specified path on all of your nodes, and try the suggestions below; a short PySpark illustration follows after the list.

1. Try with a file URI: `file:///nas/file123.csv`
2. Upload the file to HDFS and try to read the file from an HDFS URI like `hdfs:///...`

Hope this helps.

Regards,

Neeraj
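For illustration, the two suggestions translate to something like this in the asker's session (the HDFS path is a made-up example location):

```python
# 1. Explicit file:// URI; the path must exist on every worker node
df_local = spark.read.csv("file:///nas/file123.csv")

# 2. After copying the file into HDFS (cluster-wide storage)
df_hdfs = spark.read.csv("hdfs:///data/file123.csv")
```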
If you are loading the data from a local directory, remember to make sure the file exists on all of your worker nodes.
14,246
62,246,786
I would like to run my Scrapy spider from a Python script. I can call my spider with the following code:

```
subprocess.check_output(['scrapy crawl mySpider'])
```

Until here, all is well. But before that, I instantiate my spider class, initializing the start_urls, and then the call to `scrapy crawl` doesn't work, since it doesn't find the variable start_urls.

```
from flask import Flask, jsonify, request
import scrapy
import subprocess

class ClassSpider(scrapy.Spider):
    name = 'mySpider'
    #start_urls = []
    #pages = 0
    news = []

    def __init__(self, url, nbrPage):
        self.pages = nbrPage
        self.start_urls = url

    def parse(self):
        ...

    def run(self):
        subprocess.check_output(['scrapy crawl mySpider'])
        return self.news

app = Flask(__name__)
data = []

@app.route('/', methods=['POST'])
def getNews():
    mySpiderClass = ClassSpider(request.json['url'], 2)
    data.append(mySpider.run())
    return jsonify({'data': data})

if __name__ == "__main__":
    app.run(debug=True)
```

The error I get is: `TypeError: __init__ missing 1 required positional argument: 'start_url' and 'pages'`

Any help please?
2020/06/07
[ "https://Stackoverflow.com/questions/62246786", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13700256/" ]
```
stock = {'meat': 100, 'fish': 100, 'bread': 100, 'milk': 100, 'chips': 100}

total = 0
for v in stock.values():   # iterate over the quantities only
    total += v             # accumulate the total stock level
```
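Equivalently, the built-in `sum` does this in one line:

```python
total = sum(stock.values())   # 500 for the stock dict above
```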
``` >>> from statistics import mean >>> stock={'meat':100,'fish':100,'bread':100, 'milk':100,'chips':100} >>> print(f"Total stock level : {mean(stock.values())*len(stock)}") Total stock level : 500 ```
14,247
56,128,397
I pulled the official mongo image from the Docker website and started a mongo container named `dataiomongo`. I now want to connect to the mongodb inside the container using pymongo. This is the python script I wrote: ``` from pprint import pprint from pymongo import MongoClient client = MongoClient('localhost', port=27017) db = client.admin server = db.command("serverStatus") pprint(server) ``` The error that came is: ``` Traceback (most recent call last): File "D:/dataio/test_mongo.py", line 8, in <module> server = db.command("serverStatus") File "D:\dataio\venv\lib\site-packages\pymongo\database.py", line 655, in command read_preference) as (sock_info, slave_ok): File "C:\Python27\Lib\contextlib.py", line 17, in __enter__ return self.gen.next() File "D:\dataio\venv\lib\site-packages\pymongo\mongo_client.py", line 1135, in _socket_for_reads server = topology.select_server(read_preference) File "D:\dataio\venv\lib\site-packages\pymongo\topology.py", line 226, in select_server address)) File "D:\dataio\venv\lib\site-packages\pymongo\topology.py", line 184, in select_servers selector, server_timeout, address) File "D:\dataio\venv\lib\site-packages\pymongo\topology.py", line 200, in _select_servers_loop self._error_message(selector)) pymongo.errors.ServerSelectionTimeoutError: localhost:27017: [Errno 10061] No connection could be made because the target machine actively refused it ``` How do I go about connecting to the mongodb inside the docker container?
2019/05/14
[ "https://Stackoverflow.com/questions/56128397", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5684940/" ]
run mongo
=========

First you need to run mongo:

```
$ docker run --rm --name my-mongo -it -p 27017:27017 mongo:latest
```

as a daemon
===========

```
$ docker run --name my-mongo -d mongo:latest
```

connect to the previous container with another container
=========================================================

```
$ docker run -it --link my-mongo:mongo --rm mongo:latest sh -c 'exec mongo "$MONGO_PORT_27017_TCP_ADDR:$MONGO_PORT_27017_TCP_PORT/test"'
```

insert data into the db
=======================

Insert the data into the db.

connect to the db with Python
-----------------------------

```
from pymongo import MongoClient

client = MongoClient()
client.server_info()
db = client.yourdbname
```
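Since the first command maps the container's 27017 to the host, a client on the host can also connect with an explicit URI; a short sketch (the database and collection names are placeholders):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
print(client.server_info()["version"])        # verifies the connection works
db = client.yourdbname
db.items.insert_one({"hello": "world"})       # simple round-trip test
```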
Make sure you bind the 27017 container port to host port via -p 27017:27017 flag.
14,251
56,803,812
I want to include a cron task in a MariaDB container, based on the latest image `mariadb`, but I'm stuck with this. I tried many things without success because I can't launch both MariaDB and Cron. Here is my actual dockerfile: ``` FROM mariadb:10.3 # DB settings ENV MYSQL_DATABASE=beurre \ MYSQL_ROOT_PASSWORD=beurette COPY ./data /docker-entrypoint-initdb.d COPY ./keys/keys.enc home/mdb/ COPY ./config/encryption.cnf /etc/mysql/conf.d/encryption.cnf # Installations RUN apt-get update && apt-get -y install python cron # Cron RUN touch /etc/cron.d/bp-cron RUN printf '* * * * * root echo "Hello world" >> /var/log/cron.log 2>&1\n#' >> /etc/cron.d/bp-cron RUN touch /var/log/cron.log RUN chmod 0644 /etc/cron.d/bp-cron RUN cron ``` With its settings, the database starts correctly, but "Cron" is not initialized. To make it work, I have to get into the container and execute the "Cron" command, and everything works perfectly. So I'm looking for a way to launch both the db and cron from my Dockerfile used in my docker-compose. If this is not possible, maybe there is another way to do tasks planned? The purpose being to execute a script of the db.
2019/06/28
[ "https://Stackoverflow.com/questions/56803812", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9678258/" ]
Elaborating on @k0pernikus's comment, I would recommend to use a separate container that runs cron. The cronjobs in that container can then work with your mysql database. Here's how I would approach it: 1. Create a Cron Docker Container ================================= You can set up a cron container fairly simply. Here's an example Dockerfile that should do the job: ``` FROM alpine COPY ./crontab /etc/crontab RUN crontab /etc/crontab RUN touch /var/log/cron.log CMD crond -f ``` Just put your crontab into a `crontab` file next to that Dockerfile and you should have a working cron container. An example crontab file: ``` * * * * * mysql -h mysql --execute "INSERT INTO database.table VALUES 'v';" ``` 2. Add the cron container to your docker-compose.yml as a service ================================================================= Make sure you add your cron container to the docker-compose.yml, and put it in the same network as your mysql service: ``` networks: my_network: services: mysql: image: mariadb networks: - my_network cron: image: my_cron depends_on: - mysql build: context: ./path/to/my/cron-docker-folder networks: - my_network ```
I recommend the [solution provided by fjc](https://stackoverflow.com/a/56804227/457268). Treat this as nice-to-know, to understand why your approach is not working.

---

Docker has `RUN` commands that are only executed during build, not on container startup. It also has a `CMD` (or `ENTRYPOINT`) for executing a specific command on startup. Since you are using [mariadb](https://github.com/docker-library/mariadb/blob/5b833d3bb53298adea162c555f066233f74ff236/10.3/Dockerfile), your CMD is:

```
ENTRYPOINT ["docker-entrypoint.sh"]

EXPOSE 3306
CMD ["mysqld"]
```

(You can find the link to the actual Dockerfiles on [dockerhub](https://hub.docker.com/_/mariadb).)

This tells docker to run:

```
docker-entrypoint.sh mysqld
```

on startup. You'd have to override its `docker-entrypoint.sh` to allow for the startup of the cron job as well.

---

[See the relevant part of the Dockerfile for the CMD instruction](https://docs.docker.com/engine/reference/builder/):

> The CMD instruction has three forms:
>
> ```
> CMD ["executable","param1","param2"] (exec form, this is the preferred form)
> CMD ["param1","param2"] (as default parameters to ENTRYPOINT)
> CMD command param1 param2 (shell form)
> ```
>
> There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.
>
> The main purpose of a CMD is to provide defaults for an executing container. These defaults can include an executable, or they can omit the executable, in which case you must specify an ENTRYPOINT instruction as well.
>
> Note: If CMD is used to provide default arguments for the ENTRYPOINT instruction, both the CMD and ENTRYPOINT instructions should be specified with the JSON array format.
>
> Note: The exec form is parsed as a JSON array, which means that you must use double-quotes (") around words not single-quotes (').
>
> Note: Unlike the shell form, the exec form does not invoke a command shell. This means that normal shell processing does not happen. For example, `CMD [ "echo", "$HOME" ]` will not do variable substitution on `$HOME`. If you want shell processing then either use the shell form or execute a shell directly, for example: `CMD [ "sh", "-c", "echo $HOME" ]`. When using the exec form and executing a shell directly, as in the case for the shell form, it is the shell that is doing the environment variable expansion, not docker.
>
> When used in the shell or exec formats, the CMD instruction sets the command to be executed when running the image.
>
> If you use the shell form of the CMD, then the `<command>` will execute in `/bin/sh -c`:
>
> ```
> FROM ubuntu
> CMD echo "This is a test." | wc -
> ```
>
> If you want to run your `<command>` without a shell then you must express the command as a JSON array and give the full path to the executable. This array form is the preferred format of CMD. Any additional parameters must be individually expressed as strings in the array:
>
> ```
> FROM ubuntu
> CMD ["/usr/bin/wc","--help"]
> ```
>
> If you would like your container to run the same executable every time, then you should consider using ENTRYPOINT in combination with CMD. See ENTRYPOINT.
>
> If the user specifies arguments to `docker run` then they will override the default specified in CMD.
>
> Note: Don't confuse RUN with CMD. RUN actually runs a command and commits the result; CMD does not execute anything at build time, but specifies the intended command for the image.
14,254
62,827,871
I'm looking for a compiler to compile a `.py` file into a single `.exe` file. I've already tried **auto-py-to-exe**, but I'm not happy with it.

I've tried **PyInstaller**, but one of its dependencies (PyCrypto, which I need) is no longer working/maintained and fails to install. <https://pyinstaller.readthedocs.io/en/stable/usage.html#encrypting-python-bytecode>

I've also looked at **nuitka**, but it doesn't seem possible to set an icon for the exe.

Do you have any compiler recommendations that can obfuscate/encrypt the code to limit reverse engineering?
2020/07/10
[ "https://Stackoverflow.com/questions/62827871", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11943028/" ]
I had a similar issue to this, needing to run Python code on machines where Python could not be downloaded. I used py2exe, and it worked quite well. (<https://www.py2exe.org/>)
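For reference, py2exe is driven by a small `setup.py` as well; a minimal sketch (the script name `myscript.py` is a placeholder, and the single-file options are optional):

```python
# setup.py - build with: python setup.py py2exe
from distutils.core import setup
import py2exe  # registers the "py2exe" setup command

setup(
    console=['myscript.py'],  # use windows=['myscript.py'] for a GUI app
    options={'py2exe': {'bundle_files': 1, 'compressed': True}},
    zipfile=None,  # merge the library archive into the exe itself
)
```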
You could try these **steps to convert .py to .exe in Python 3.8**:

1. Install [Python 3.8](https://www.python.org/downloads/).
2. Install cx\_Freeze (open your command prompt and type `pip install cx_Freeze`).
3. Install idna (open your command prompt and type `pip install idna`).
4. Write a `.py` program named `myfirstprog.py`.
5. Create a new python file named `setup.py` in the current directory of your script.
6. In the `setup.py` file, copy the code below and save it.
7. With shift pressed, right click in the same directory, so you are able to open a command prompt window.
8. In the prompt, type `python setup.py build`.
9. If your script is error-free, there will be no problem creating the application.
10. Check the newly created folder `build`. It has another folder in it. Within that folder, you can find your application. Run it. Make yourself happy.

See the original answer [here](https://stackoverflow.com/a/44433442/10250028).
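The `setup.py` referenced in step 6 is not reproduced above; a minimal sketch of what it typically looks like (the name and description strings are placeholders):

```python
# setup.py - build with: python setup.py build
from cx_Freeze import setup, Executable

setup(
    name='myfirstprog',
    version='0.1',
    description='My first Python executable',
    executables=[Executable('myfirstprog.py')],
)
```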
14,255
64,764,650
Say that there are two iterators: ``` def genA(): while True: yield 1 def genB(): while True: yield 2 gA = genA() gB = genB() ``` According to [this SO answer](https://stackoverflow.com/a/8770796/3259896) they can be ***evenly*** interleaved using the [`itertools` recipes](https://docs.python.org/3/library/itertools.html#recipes): ``` def cycle(iterable): # cycle('ABCD') --> A B C D A B C D A B C D ... saved = [] for element in iterable: yield element saved.append(element) while saved: for element in saved: yield element def roundrobin(*iterables): "roundrobin('ABC', 'D', 'EF') --> A D E B F C" # Recipe credited to George Sakkis num_active = len(iterables) nexts = cycle(iter(it).__next__ for it in iterables) while num_active: try: for next in nexts: yield next() except StopIteration: # Remove the iterator we just exhausted from the cycle. num_active -= 1 nexts = cycle(islice(nexts, num_active)) aa = roundrobin(gA, gB) next(aa) ``` So `next(aa)` will shift the iterator output each time, so a bunch of `next` calls will result in `1, 2, 1, 2, 1, 2, 1` - `50%` will come from one iterator, and the other `50%` will come from the other. I am wondering how we can code it so that `x%` will come from one iterator, and `(1-x)%` from the other. For example, `75%` from the first iterator, and `25%` from the other. So several calls to `next(combinedIterator)` will result in something like this: ``` 1 1 1 2 1 1 1 2 1 1 1 2 ``` For my purpose, it doesn't matter if the output is strictly ordered like above, or if it is random, with the output determined by probability.
2020/11/10
[ "https://Stackoverflow.com/questions/64764650", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3259896/" ]
If you're okay with a deterministic approach (as I understand from your self-answer), you can add an argument which is the percentage of the first iterator and then just calculate each iterator's "part". For example, if you want `.75` from the first iterator - this translates to: *for every **three** elements from `iterator1`, yield **one** element from `iterator2`*. ```py def interleave(itt1, itt2, itt1_per): itt1_frac, total = itt1_per.as_integer_ratio() itt2_frac = total - itt1_frac while True: for _ in range(itt1_frac): yield next(itt1) for _ in range(itt2_frac): yield next(itt2) newGen = interleave(gA, gB, .75) for _ in range(12): print(next(newGen), end=' ') ``` This will print: ``` 1 1 1 2 1 1 1 2 1 1 1 2 ``` --- ***Watch out!*** This will only work well for "nice" fractions. For example: using this function with `.6` means that *for every **`5,404,319,552,844,595`** elements from `iterator1`, it will yield **`3,602,879,701,896,397`** elements from `iterator2`*. ***One way to overcome this*** is to use [`decimal.Decimal`](https://docs.python.org/3/library/decimal.html#decimal.Decimal) with ***[string arguments](https://stackoverflow.com/questions/55755444/why-is-the-precision-accurate-when-decimal-takes-in-a-string-instead-of-float)***: ```py from decimal import Decimal def interleave(itt1, itt2, itt1_per): itt1_frac, total = Decimal(str(itt1_per)).as_integer_ratio() ... ``` Using `Decimal` now means that passing `.6` translates to the more **sensible**: *for every **three** elements from `iterator1`, yield **two** elements from `iterator2`*. Using this revised code with `.6` as an argument, will print: ``` 1 1 1 2 2 1 1 1 2 2 1 1 ```
```
def genA():
    while True:
        yield 1

def genB():
    while True:
        yield 2

gA = genA()
gB = genB()

import random

def xyz(itt1, itt2):
    while True:
        if random.random() < .25:
            yield next(itt1)
        else:
            yield next(itt2)

newGen = xyz(gA, gB)
next(newGen)
```

This gives a probabilistic split: each call yields from the first iterator with probability .25 and from the second otherwise. I won't select this as the answer, so that someone can possibly give a non-probabilistic answer.
14,256
19,965,453
I'm making a multipart POST using the Python package requests. I'm using xlrd to change some values in an Excel file, save it, and then send that up in a multipart POST. This works fine when I run it locally on my Mac, but when I put the code on a remote machine and make the same request, the body content type is blank, whereas locally the body content type is application/vnd.ms-excel. So my question is: is there a way to enforce the content type using python requests so that in this case the body content type is application/vnd.ms-excel? Sorry, I can't post any code as I don't have it on this machine.
2013/11/13
[ "https://Stackoverflow.com/questions/19965453", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1946337/" ]
The `files` parameter accepts a dictionary of keys to tuples, with the following form:

```
files = {'name': (<filename>, <file object>, <content type>, <per-part headers>)}
```

In your specific case, you could write this (opening the file in binary mode so the bytes are sent untouched):

```
files = {'file': ('filename.xls', open('filename.xls', 'rb'), 'application/vnd.ms-excel', {})}
```

That should work fine.
I believe you can use the headers parameter, e.g ``` requests.post(url, data=my_data, headers={"Content-type": "application/vnd.ms-excel"}) ```
14,257
29,686,328
Edit: Rather than voting me down, can you provide a URL for where you would recommend a newbie learn Python? Be part of the solution rather than the problem. I'm trying to write a basic program (for a class) in which, when specific if/elif/else conditions are met, a specific Roman numeral is shown, though I'm a bit confused about why I'm getting my error. The error is included next.

```
Traceback (most recent call last):
  File "python", line 42
    print"The number is I"
                         ^
SyntaxError: invalid syntax
```

When I remove the else line altogether, the program runs, but none of the if/elif print branches ever actually print anything. I've been going off the tutorial below for learning if/else in Python, as the community college doesn't teach Python but, frustratingly, asks for projects to be coded in it. Tutorial: <http://www.tutorialspoint.com/python/python_if_else.htm> From it I've learned I need to put whatever I want printed in parentheses. Correct: print ("1 - Got a false expression value") Wrong: print "1 - Got a false expression value" Again, I'm lost on this error; any advice or direction to a tutorial where I can understand this would be much appreciated.

```
# ////////////////////// ALGORITHIM /////////////////////////
# 1 prompt the user for a number from range of 1-10
# 2 display roman numeral for that number
# 3 if outside range of 1-10 display error message invalid number try reentering
# ////////////////////// ALGORITHIM END /////////////////////////

# ////////////////////// PSEUDOCODE /////////////////////////
# IF <<number >= 0, 11
# Error the number you have entered is not in the range of 1-10
# Elif var = 1
# display I
# Elif var = 2
# display II
# Elif var = 2
# display II
# Elif var = 3
# display III
# Elif var = 4
# display IV
# Elif var = 5
# display V
# Elif var = 6
# display VI
# Elif var = 7
# display VII
# Elif var = 8
# display VIII
# Elif var = 9
# display IX
# Elif var = 10
# display X
#END IF

# MAIN MODULE
number = 0

#getInput Module
number = input(int("Please enter a number within the range of 1-10"))
print("Was your number " + number + "?")

#decisionPiece Module
if(number==1):
    print"The number is I"
elif(number==2):
    print"The number is II"
elif(number==3):
    print"The nubmer is III"
elif(number==4):
    print"The nubmer is IV"
elif(number==5):
    print"The nubmer is V"
elif(number==6):
    print"The nubmer is VI"
elif(number==7):
    print"The nubmer is VII"
elif(number==8):
    print"The nubmer is VIII"
elif(number==9):
    print"The nubmer is IX"
elif(number==10):
    print"The nubmer is X"
else:
    print"The number you entered is not within the range of 1-10"

print ("Good bye!")

#printNumber Module
```
2015/04/16
[ "https://Stackoverflow.com/questions/29686328", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4259649/" ]
```
else(number==3):
     print("The number is X")
```

is incorrect. You should use only:

```
else:
     print("The number is X")
```
Just use `else:` instead of `else(number==3)`. `else:` doesn't take a condition. Also, you don't need to put parentheses around the conditions in Python.
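For completeness, a minimal corrected sketch of the branching (shortened; the `int(input(...))` call is an assumption about how the number is read):

```python
number = int(input("Please enter a number within the range of 1-10: "))

if number == 1:
    print("The number is I")
elif number == 2:
    print("The number is II")
# ... further elif branches up to 10 ...
else:
    print("The number you entered is not within the range of 1-10")
```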
14,258
67,017,354
**My problem**: starting a threaded function and, **asynchronously**, act upon the returned value I know how to: * start a threaded function with `threading`. The problem: no simple way to get the result back * [get the return value](https://stackoverflow.com/questions/6893968/how-to-get-the-return-value-from-a-thread-in-python) from a threaded function. The problem: it is synchronous What I would like to achieve is similar to JavaScript's ``` aFunctionThatReturnsAPromise() .then(r => {// do something with the returned value when it is available}) // the code here runs synchronously right after aFunctionThatReturnsAPromise is started ``` In pseudo-Python, I would think about something like (modifying the example from [the answer](https://stackoverflow.com/a/58829816/903011) to the linked thread) ``` import time import concurrent.futures def foo(bar): print('hello {}'.format(bar)) time.sleep(10) return 'foo' def the_callback(something): print(f"the thread returned {something}") with concurrent.futures.ThreadPoolExecutor() as executor: # submit the threaded call ... future = executor.submit(foo, 'world!') # ... and set a callback future.callback(the_callback, future.result()) # ← this is the made up part # or, all in one: future = executor.submit(foo, 'world!', callback=the_callback) # in which case the parameters probably would need to be passed the JS way # the threaded call runs at its pace # the following line is ran right after the call above print("after submit") # after some time (~10 seconds) the callback is finished (and has printed out what was passed to it) # there should probably be some kind of join() so that the scripts waits until the thread is done ``` I want to stay if possible with threads (which do things at their own pace and I do not care when they are done), rather than `asyncio` (where I have to explicitly `await` things in a single thread)
2021/04/09
[ "https://Stackoverflow.com/questions/67017354", "https://Stackoverflow.com", "https://Stackoverflow.com/users/903011/" ]
You can use [`concurrent.futures.add_done_callback`](https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.Future.add_done_callback) as shown below. The callback must be a callable taking a single argument, the `Future` instance, and it must get the result from that as shown. The example also attaches some additional information to the future, which the callback function uses when printing its messages. Note that the callback functions will be called concurrently, so the usual mutex precautions should be taken if there are shared resources involved. That wasn't done in the example below, so sometimes the printed output will be jumbled.

```
from concurrent import futures
import random
import time

def foo(bar, delay):
    print(f'hello {bar} - {delay}')
    time.sleep(delay)
    return bar

def the_callback(fn):
    if fn.cancelled():
        print(f'args {fn.args}: canceled')
    elif fn.done():
        error = fn.exception()
        if error:
            print(f'args {fn.args}: caused error {error}')
        else:
            print(f'args {fn.args}: returned: {fn.result()}')

with futures.ThreadPoolExecutor(max_workers=2) as executor:
    for name in ('foo', 'bar', 'bas'):
        delay = random.randint(1, 5)
        f = executor.submit(foo, name, delay)
        f.args = name, delay
        f.add_done_callback(the_callback)

print('fini')
```

Sample output:

```
hello foo - 5
hello bar - 3
args ('bar', 3): returned: bar
hello bas - 4
args ('foo', 5): returned: foo
args ('bas', 4): returned: bas
fini
```
You can use [add\_done\_callback](https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.Future.add_done_callback) of `concurrent.futures` library, so you can modify your example like this: ```py def the_callback(something): print(f"the thread returned {something.result()}") with concurrent.futures.ThreadPoolExecutor() as executor: future = executor.submit(foo, 'world!') future.add_done_callback(the_callback) ```
14,261
6,969,222
Every time I run my code in Python IDLE development environment, I get a Visual C++ runtime error/unhandled exception in pythonw.exe. ``` Figure 1: pythonw.exe - Application Error The exception unknown software exception (0x40000015) occurred in the application at location 0x1e0e1379. ``` I am using networkx and matplotlib to display a graph: ``` import matplotlib.pyplot as plt import networkx as nx ``` I am running Windows XP. Any ideas how to resolve this? Or should I just quit using IDLE?
2011/08/06
[ "https://Stackoverflow.com/questions/6969222", "https://Stackoverflow.com", "https://Stackoverflow.com/users/264970/" ]
The easiest fix for this is to open IDLE from the Start menu and then open your code files from there.
The solution to this problem was indeed to quit using IDLE. I got the Python stuff for Eclipse; I'd recommend that setup.
14,262
45,530,741
I'm trying to run my code with a multiprocessing function but mongo keep returning > > "MongoClient opened before fork. Create MongoClient with > connect=False, or create client after forking." > > > I really doesn't understand how i can adapt my code to this. Basically the structure is: ``` db = MongoClient().database db.authenticate('user', 'password', mechanism='SCRAM-SHA-1') collectionW = db['words'] collectionT = db['sinMemo'] collectionL = db['sinLogic'] def findW(word): rows = collectionw.find({"word": word}) ind = 0 for row in rows: ind += 1 id = row["_id"] if ind == 0: a = ind else: a = id return a def trainAI(stri): ... if findW(word) == 0: _id = db['words'].insert( {"_id": getNextSequence(db.counters, "nodeid"), "word": word}) story = _id else: story = findW(word) ... def train(index): # searching progress progFile = "./train/progress{0}.txt".format(index) trainFile = "./train/small_file_{0}".format(index) if os.path.exists(progFile): f = open(progFile, "r") ind = f.read().strip() if ind != "": pprint(ind) i = int(ind) else: pprint("No progress saved or progress lost!") i = 0 f.close() else: i = 0 #get the number of line of the file rangeC = rawbigcount(trainFile) #fix unicode non_bmp_map = dict.fromkeys(range(0x10000, sys.maxunicode + 1), 0xfffd) files = io.open(trainFile, "r", encoding="utf8") str1 = "" str2 = "" filex = open(progFile, "w") with progressbar.ProgressBar(max_value=rangeC) as bar: for line in files: line = line.replace("\n", "") if i % 2 == 0: str1 = line.translate(non_bmp_map) else: str2 = line.translate(non_bmp_map) bar.update(i) trainAI(str1 + " " + str2) filex.seek(0) filex.truncate() filex.write(str(i)) i += 1 #multiprocessing function maxProcess = 3 def f(l, i): l.acquire() train(i + 1) l.release() if __name__ == '__main__': lock = Lock() for num in range(maxProcess): pprint("start " + str(num)) Process(target=f, args=(lock, num)).start() ``` This code is made for reading 4 different file in 4 different process and at the same time insert the data in the database. I copied only part of the code for make you understand the structure of it. I've tried to add connect=False to this code but nothing... ``` db = MongoClient(connect=False).database db.authenticate('user', 'password', mechanism='SCRAM-SHA-1') collectionW = db['words'] collectionT = db['sinMemo'] collectionL = db['sinLogic'] ``` then i've tried to move it in the f function (right before train() but what i get is that the program doesn't find collectionW,collectionT and collectionL. I'm not very expert of python or mongodb so i hope that this is not a silly question. The code is running under Ubuntu 16.04.2 with python 2.7.12
2017/08/06
[ "https://Stackoverflow.com/questions/45530741", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3476027/" ]
db.authenticate has to reach the mongo server, so it will try to open a connection. That means that even though connect=False is being used, db.authenticate still requires a connection to be opened. Why don't you create the MongoClient instance after the fork? That looks like the easiest solution.
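A minimal sketch of that idea (the connection string and names are placeholders, not from your code): build the client inside the function that each `Process` runs, so it is created after the fork.

```python
from multiprocessing import Process
from pymongo import MongoClient

def train(index):
    # The client is created in the child process, i.e. after the fork
    client = MongoClient('mongodb://user:password@localhost:27017')
    db = client.database
    collection_w = db['words']
    # ... do the work for this index using collection_w ...

if __name__ == '__main__':
    for num in range(3):
        Process(target=train, args=(num,)).start()
```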
Since `db.authenticate` must open the MongoClient and connect to the server, it creates connections which won't work in the forked subprocess. Hence, the error message. Try this instead: ``` db = MongoClient('mongodb://user:password@localhost', connect=False).database ``` Also, delete the Lock `l`. Acquiring a lock in one subprocess has no effect on other subprocesses.
14,263
21,687,643
I am working on a large scale project that involves giving a python script a first name and getting back a result as to what kind of gender it belongs to. My current program is written in Java and using Jython to interact with a Python script called "sex machine." It works great in most cases and I've tested it with smaller groups of users. However, when I attempt to test it with a large group of users the program gets about halfway in and then gives me the following error: ``` "Exception in thread "main" SyntaxError: No viable alternative to input '\\n'", ('<string>', 1, 22, "result = d.get_gender('Christinewazonek'')\n") ``` I am more accustomed to Java and have limited knowledge of Python so at the moment I don't know how to solve this problem. I tried to trim the string that I'm giving the get\_gender method but that didn't help any. I am not sure what the numbers 1, 22 even mean. Like I said since I'm using Jython my code would be the following: ``` static PythonInterpreter interp = new PythonInterpreter(); interp.exec("import sys, os.path"); interp.exec("sys.path.append('/Users/myname/Desktop/')"); interp.exec("import sexmachine.detector as gender"); interp.exec("d = gender.Detector()"); interp.exec("result = d.get_gender('"+WordUtils.capitalize(name).trim() +"')"); PyObject gendAnswer = interp.get("result"); ``` And this is pretty much the extent of Jython/Python interaction in my Java code. If someone sees something that's wrong or not right I would certainly appreciate if you could help me. As this is a large project it takes time to run the whole program again only to run into the same issue, so because of this I really need to fix this problem.
2014/02/10
[ "https://Stackoverflow.com/questions/21687643", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1461393/" ]
I don't know if it helps but this is what I did and it works for me. ``` public static void main(String[] args){ PythonInterpreter pI = new PythonInterpreter(); pI.exec("x = 3"); PyObject result = pI.get("x"); System.out.println(result); } ```
Not sure if you sorted this out, but there is an extra apostrophe in

```
d.get_gender('Christinewazonek'')
```

Just like in Java, everything you open you need to close; in this case the generated Python statement contains an unbalanced quote, so it never parses. Depending on the editor you are using, this can be flagged easily. Perhaps you might try a different editor.
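Independently of the stray quote, splicing user data into Python source breaks whenever a name itself contains an apostrophe (e.g. O'Brien). A sketch, assuming the question's setup, that passes the name as a variable via Jython's `interp.set` instead of string concatenation:

```java
PythonInterpreter interp = new PythonInterpreter();
interp.exec("import sys, os.path");
interp.exec("sys.path.append('/Users/myname/Desktop/')");
interp.exec("import sexmachine.detector as gender");
interp.exec("d = gender.Detector()");

// Bind the Java string to a Python variable; no quoting or escaping needed
interp.set("name", WordUtils.capitalize(name).trim());
interp.exec("result = d.get_gender(name)");
PyObject gendAnswer = interp.get("result");
```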
14,265
34,132,484
i have a large string like ``` res = ["FAV_VENUE_CITY_NAME == 'Mumbai' & EVENT_GENRE == 'KIDS' & count_EVENT_GENRE >= 1", "FAV_VENUE_CITY_NAME == 'Mumbai' & EVENT_GENRE == 'FANTASY' & count_EVENT_GENRE >= 1", "FAV_VENUE_CITY_NAME =='Mumbai' & EVENT_GENRE == 'FESTIVAL' & count_EVENT_GENRE >= 1", "FAV_VENUE_CITY_NAME == 'New Delhi' & EVENT_GENRE == 'WORKSHOP' & count_EVENT_GENRE >= 1", "FAV_VENUE_CITY_NAME == 'Mumbai' & EVENT_GENRE == 'EXHIBITION' & count_EVENT_GENRE >= 1", "FAV_VENUE_CITY_NAME == 'Bangalore' & FAV_GENRE == '|DRAMA|'", "FAV_VENUE_CITY_NAME = 'Mumbai' & & FAV_GENRE == '|ACTION|ADVENTURE|SCI-FI|'", "FAV_VENUE_CITY_NAME == 'Bangalore' & FAV_GENRE == '|COMEDY|'", "FAV_VENUE_CITY_NAME == 'Bangalore' & FAV_GENRE == 'DRAMA' & FAV_LANGUAGE == 'English'", "FAV_VENUE_CITY_NAME == 'New Delhi' & FAV_LANGUAGE == 'Hindi' & count_EVENT_LANGUAGE >= 1"] ``` now i am extracting fields by ``` res = [re.split(r'[(==)(>=)]', x)[0].strip() for x in re.split('[&($#$)]', whereFields)] res = [x for x in list(set(res)) if x] o/p:['FAV_GENRE', 'FAV_LANGUAGE', 'FAV_VENUE_CITY_NAME', 'count_EVENT_GENRE', 'EVENT_GENRE','count_EVENT_LANGUAGE'] ``` then by following this [filter out some items from a list and store in different arrays in python](https://stackoverflow.com/questions/34119386/filter-out-some-items-from-a-list-and-store-in-different-arrays-in-python?noredirect=1#comment55989982_34119386) i am getting values ``` FAV_VENUE_CITY_NAME = ['New Delhi', 'Mumbai', 'Bangalore'] FAV_GENRE = ['|DRAMA|', '|COMEDY|', '|ACTION|ADVENTURE|SCI-FI|', 'DRAMA'] EVENT_GENRE = ['FESTIVAL', 'WORKSHOP', 'FANTASY', 'KIDS', 'EXHIBITION'] FAV_LANGUAGE = ['English', 'Hindi'] count_on_field = ['EVENT_GENRE', 'EVENT_LANGUAGE'] ``` Now i want to make a dictionary whose key will be field name in res. and values will be the result from above link. Or is there a way to make items of list res as different different list by themselves. SOmething like ``` res = ['FAV_GENRE', 'FAV_LANGUAGE', 'FAV_VENUE_CITY_NAME', 'count_EVENT_GENRE', 'EVENT_GENRE','count_EVENT_LANGUAGE'] for i in range(len(res)): res[i] = list(res[i]) # make each item as an empty list with name as it is ``` so that they become like ``` FAV_VENUE_CITY_NAME = [] EVENT_GENRE = [] FAV_GENRE = [] FAV_LANGUAGE = [ ``` then get the value to each individual lists in res list by following the method in above link. Then make a dictionary like the below line making a dict with index as key ``` a = [51,27,13,56] b = dict(enumerate(a)) #####d = dict{key=each list name from res list, value = value in each ind. lists} ``` # or if possible suggest something like from top res list....how to form a dict having key as field names and values as values from each lines ``` o/p: d = {'FAV_VENUE_CITY_NAME':['Mumbai','New Delhi','Bangalore'], 'EVENT_GENRE':['KIDS','FANTASY','FESTIVAL','WORKSHOP','EXHIBITION'], 'FAV_GENRE':['|DRAMA|','|ACTION|ADVENTURE|SCI-FI|','|COMEDY|','DRAMA'], 'FAV_LANGUAGE':['English','Hindi']} ``` count\_EVENT\_GENRE>=1,count\_EVENT\_LANGUAGE>=1 should not be in that dictionary ,rather they should go to a list ``` count_on_fields = ['EVENT_GENRE','EVENT_LANGUAGE'] ``` Pease if anybody has a better idea or suggestion, do help.
2015/12/07
[ "https://Stackoverflow.com/questions/34132484", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5533254/" ]
The latest d.ts file now has the component method. Update yours with `tsd update -o`
We had a somewhat similar issue earlier today, and it was down to using Angular 1.5 Beta 1, which doesn't contain the component function. To fix it we had to upgrade to Angular 1.5 Beta 2, which does contain the component function.
14,266
70,921,901
I have been trying to fetch the metadata from a KDB+ Database using python, basically, I installed a library called **`qpython`** and using this library we connect and query the KDB+ Database. I want to store the metadata for all the appropriate cols for a table/view in KDB+ Database using python. I am unable to separate the metadata part, despite trying myriad different approaches. Namely a few to typecast the output to list/tuple, iterating using `for`, et cetera. ``` from qpython import qconnection def fetch_metadata_from_kdb(params): try: kdb_connection_obj = qconnection.QConnection(host=params['host'], port=params['port'], username=params['username'], password=params['password']) kdb_connection_obj.open() PREDICATE = "meta[{}]".format(params['table']) metadata = kdb_connection_obj(PREDICATE) kdb_connection_obj.close() return metadata except Exception as error_msg: return error_msg def fetch_tables_from_kdb(params): try: kdb_connection_obj = qconnection.QConnection(host=params['host'], port=params['port'], username=params['username'], password=params['password']) kdb_connection_obj.open() tables = kdb_connection_obj("tables[]") views = kdb_connection_obj("views[]") kdb_connection_obj.close() return [table.decode() for table in list(tables)], [view.decode() for view in list(views)] except Exception as error_msg: return error_msg parms_q = {'host':'localhost', 'port':5010, 'username':'kdb', 'password':'kdb', 'table':'testing'} print("fetch_tables_from_kdb:", fetch_tables_from_kdb(parms_q), "\n") print("fetch_metadata_from_kdb:", fetch_metadata_from_kdb(parms_q), "\n") ``` The output which I am currently getting is as follows; ``` fetch_tables_from_kdb: (['testing'], ['viewname']) fetch_metadata_from_kdb: [(b'time',) (b'sym',) (b'price',) (b'qty',)]![(b'p', b'', b'') (b's', b'', b'') (b'f', b'', b'') (b'j', b'', b'')] ``` I am not able to separate the columns part and the metadata part. How to store only the metadata for the appropriate column for a table/view in KDB using python?
2022/01/31
[ "https://Stackoverflow.com/questions/70921901", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12553730/" ]
The metadata that you have returned from kdb is correct but is being displayed in python in kdb's dictionary format, which I agree is not very useful. If you pass the pandas=True flag into your qconnection call then qPython will parse kdb data structures, such as tables, into pandas data structures or sensible python types, which in your case looks like it will be more useful. Please see an example below - kdb setup (all on localhost):

```
$ q -p 5000
q)testing:([]date:.z.d+0 1 2;`g#sym:`abc`def`ghi;num:`s#10 20 30)
q)testing
date       sym num
------------------
2022.01.31 abc 10
2022.02.01 def 20
2022.02.02 ghi 30
q)meta testing
c   | t f a
----| -----
date| d
sym | s   g
num | j   s
```

Python code

```
from qpython import qconnection

# create and open 2 connections to the kdb process - one without the pandas flag and one with it
q = qconnection.QConnection(host="localhost", port=5000)
qpandas = qconnection.QConnection(host="localhost", port=5000, pandas=True)
q.open()
qpandas.open()

# see what is returned with a q table
print(q("testing"))
[(8066, b'abc', 10) (8067, b'def', 20) (8068, b'ghi', 30)]

# the data is a qPython data object
type(q("testing"))
qpython.qcollection.QTable

# whereas using the pandas=True flag a dataframe is returned
print(qpandas("testing"))
        date     sym  num
0 2022-01-31  b'abc'   10
1 2022-02-01  b'def'   20
2 2022-02-02  b'ghi'   30

# this is the same for the meta of a table
print(q("meta testing"))
[(b'date',) (b'sym',) (b'num',)]![(b'd', b'', b'') (b's', b'', b'g') (b'j', b'', b's')]

print(qpandas("meta testing"))
         t    f    a
c
b'date'  d  b''  b''
b'sym'   s  b''  b'g'
b'num'   j  b''  b's'
```

With the above you can now access the columns and rows using pandas (the b'num' etc. is the qPython way of expressing a q symbol, which is written with a backtick ` in q).

Also, you now have the ability to use `DataFrame.info()` to extract datatypes if you are more interested in the python data structure rather than the kdb data structure/types. qPython will convert the q types to sensible python types automatically.

```
qpandas("testing").info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 3 entries, 0 to 2
Data columns (total 3 columns):
 #   Column  Non-Null Count  Dtype
---  ------  --------------  -----
 0   date    3 non-null      datetime64[ns]
 1   sym     3 non-null      object
 2   num     3 non-null      int64
dtypes: datetime64[ns](1), int64(1), object(1)
memory usage: 200.0+ bytes
```
In the meantime, I have checked quite a bit of KDB documentation and found that the metadata provides the following output. You can see that here: [kdb metadata](https://code.kx.com/q4m3/8_Tables/)

`c | t f a`

c - columns, t - type symbol, f - foreign key association, a - attributes associated with the column

We can access the metadata object (*<class 'qpython.qcollection.QKeyedTable'>*) by iterating over it with a `for` loop as shown below;

```
from qpython import qconnection

def fetch_metadata_from_kdb(params):
    try:
        col_list, metadata_list = [], []
        kdb_connection_obj = qconnection.QConnection(host=params['host'], port=params['port'], username=params['username'], password=params['password'])
        kdb_connection_obj.open()
        PREDICATE = "meta[{}]".format(params['table'])
        ############# FOR LOOP ##############
        for i,j in kdb_connection_obj(PREDICATE).items():
            col_list.append(i[0].decode())
            metadata_list.append(j[0].decode())
        kdb_connection_obj.close()
        return col_list, metadata_list
    except Exception as error_msg:
        return error_msg

parms_q = {'host':'localhost', 'port':5010, 'username':'kdb', 'password':'kdb', 'table':'testing'}
print(fetch_metadata_from_kdb(parms_q))
```

`Output: ['time', 'sym', 'price', 'qty'], ['p', 's', 'f', 'j']`

I also got the KDB char types / q data types from the documentation [here](https://code.kx.com/q4m3/2_Basic_Data_Types_Atoms/). Below is the implementation for the same;

```
import pandas as pd
from qpython import qconnection

kdb_type_char_dict = dict()
df = pd.read_html('https://code.kx.com/q4m3/2_Basic_Data_Types_Atoms/')[1].iloc[:17, 0:3][['Type', 'CharType']]
for i, j in zip(df.iloc[:, 0], df.iloc[:, 1]):
    kdb_type_char_dict[str(j)] = str(i)

####### Q DATA TYPES DICTIONARY #######
print("Char types/ q data types dictionary:", kdb_type_char_dict)

def fetch_metadata_from_kdb(params):
    try:
        col_list, metadata_list, temp_list = [], [], []
        kdb_connection_obj = qconnection.QConnection(host=params['host'], port=params['port'],
                                                     username=params['username'], password=params['password'])
        kdb_connection_obj.open()
        PREDICATE = "meta[{}]".format(params['table'])
        for i, j in kdb_connection_obj(PREDICATE).items():
            col_list.append(i[0].decode())
            temp_list.append(j[0].decode())
        for i in temp_list:
            metadata_list.append("{}".format(kdb_type_char_dict[i]))
        kdb_connection_obj.close()
        return col_list, metadata_list
    except Exception as error_msg:
        return error_msg

params = {'host': 'localhost', 'port': 5010, 'username': 'kdb', 'password': 'kdb', 'table': 'testing'}
print(fetch_metadata_from_kdb(params))
```

Output:

```
Char types/ q data types dictionary: {'b': 'boolean', 'x': 'byte', 'h': 'short', 'i': 'int', 'j': 'long', 'e': 'real', 'f': 'float', 'c': 'char', 's': 'symbol', 'p': 'timestamp', 'm': 'month', 'd': 'date', 'z': '(datetime)', 'n': 'timespan', 'u': 'minute', 'v': 'second', 't': 'time'}
(['time', 'sym', 'price', 'qty'], ['timestamp', 'symbol', 'float', 'long'])
```
14,267
62,503,638
I have a data frame as shown below, which is the sales data of two health care products from December 2016 to November 2018.

```
product  price  sale_date   discount
A        50     2016-12-01  5
A        50     2017-01-03  4
B        200    2016-12-24  10
A        50     2017-01-18  3
B        200    2017-01-28  15
A        50     2017-01-18  6
B        200    2017-01-28  20
A        50     2017-04-18  6
B        200    2017-12-08  25
A        50     2017-11-18  6
B        200    2017-08-21  20
B        200    2017-12-28  30
A        50     2018-03-18  10
B        300    2018-06-08  45
B        300    2018-09-20  50
A        50     2018-11-18  8
B        300    2018-11-28  35
```

From the above data I would like to plot the month-wise total sale price and total discount in a bar plot for each product using Python. So I would like to have two line plots for product A:

```
X axis  = year and month
Y axis1 = Total sale price
Y axis2 = Total discount price
```

The intention of these plots is to show the impact of discounts on sales.
2020/06/21
[ "https://Stackoverflow.com/questions/62503638", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8901845/" ]
The problem with the first query is that it returns no rows if there is 1 row (or less) in the table. It looks like they consider that an empty resultset is not the correct answer in this case. Instead, they always want one row as a result that contains a `null` value (which indicates the absence of the Nth salary in the table). This is what the second query does. It `SELECT`s the (scalar) result of the initial query - so it always produces one row. If the subquery returns something, you get that value as a result, else you get `null`. Consider this simple example: ``` select 1 where 0 = 1 -- returns no rows select (select 1 where 0 = 1) -- returns one row with a "null" value ```
This query: ``` SELECT DISTINCT Salary AS SecondHighestSalary -- (DISTINCT is really not needed) FROM Employee ORDER BY Salary DESC LIMIT 1 OFFSET 1 ``` in case the table has only 1 row, does not return `null`. It returns nothing (no rows). But when it is placed inside another query as a derived column: ``` SELECT ( SELECT DISTINCT Salary FROM Employee ORDER BY Salary DESC LIMIT 1 OFFSET 1 ) AS SecondHighestSalary ``` the value of that column returned will be `null` because every column in a query contains either a value or `null` which has the meaning of *unknown* or *missing*.
14,268
35,700,781
I have a small Python app that produces a form, the user enters some strings in and it collects them as an array and adds (or tries to) that array as a value of a key in Google's Memcache. This is the script: ``` import webapp2 from google.appengine.api import memcache MAIN_PAGE_HTML = """\ <html> <body> <form action="/get" method="post"> <div><input name="ID"/></div> <div><input name="param1"/></div> <div><input name="param2"/></div> <div><input name="param3"/></div> <div><input name="param4"/></div> <div><input name="param5"/></div> <div><input type="submit" value="Change Status"></div> </form> </body> </html> """ class MainPage(webapp2.RequestHandler): def get(self): self.response.write(MAIN_PAGE_HTML) class status(webapp2.RequestHandler): def post(self): paramarray= (self.request.get_all('param1'), self.request.get_all('param2'), self.request.get_all('param3'), self.request.get_all('param4'), self.request.get_all('param5')) array1 = tuple(paramarray) memcache.add(key=(self.request.get_all('ID')), value=array1, time=3600) app = webapp2.WSGIApplication([ ('/', MainPage), ('/get', status), ], debug=True) ``` I've tried setting `paramarray` as a tuple, not a list. Still getting the same error: ``` Traceback (most recent call last): File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1535, in __call__ rv = self.handle_exception(request, response, e) File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1529, in __call__ rv = self.router.dispatch(request, response) File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1278, in default_dispatcher return route.handler_adapter(request, response) File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1102, in __call__ return handler.dispatch() File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 572, in dispatch return self.handle_exception(e, self.app.debug) File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 570, in dispatch return method(*args, **kwargs) File "/base/data/home/apps/s~gtech-iot/1.391041688070473184/main.py", line 41, in post memcache.add(key=(self.request.get_all('ID')), value=array1, time=3600) File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/memcache/__init__.py", line 785, in add namespace=namespace) File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/memcache/__init__.py", line 868, in _set_with_policy rpc = self._set_multi_async_with_policy(policy, {key: value}, TypeError: unhashable type: 'list' ``` Tried to set curly braces, regular, squared, with or without the `array1=..` statement Please help, Thanks
2016/02/29
[ "https://Stackoverflow.com/questions/35700781", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3136727/" ]
Given that you work with system logs and their format is known and stable, my approach would be something like:

* identify a set of keywords (either common, or one per log)
* for each log, iterate line by line
* once a keyword matches, add the relevant information from that line to e.g. a dictionary

You could use shell tools (like `grep`, `cut` and/or `awk`) to pre-process the log and extract the relevant lines (I assume you only need e.g. error entries). You can use something like [this](https://stackoverflow.com/a/16017858/515948) as a starting point.
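A minimal sketch of that loop, assuming a keyword set and a `<timestamp> <message>` line layout (the keywords, file name, and regex are assumptions, not from the question):

```python
import re

KEYWORDS = ('ERROR', 'CRITICAL')   # assumed keywords of interest
entries = []

with open('system.log') as log:    # assumed log file name
    for line in log:
        if any(keyword in line for keyword in KEYWORDS):
            # split an assumed "<timestamp> <rest>" layout; adjust to the real format
            match = re.match(r'(\S+)\s+(.*)', line.strip())
            if match:
                entries.append({'timestamp': match.group(1),
                                'message': match.group(2)})

print(entries)
```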
If you want to use a tool, you can use the ELK stack (Elasticsearch, Logstash and Kibana). If not, you have to read the log file first and then apply regexes according to your requirements.
14,269
36,913,153
When I do the calculation `2*(5+5/(3+3))*3` I get 30 in Python (2.7). But it seems that `2*(5+5/(3+3))*3` is equal to `35`. Can someone tell me why Python gives me the answer 30 instead of 35? I've tested with JavaScript, Lua and the Mac Calculator and they show me 35. Why does Python calculate it wrong? <http://ideone.com/yiFJxS>
2016/04/28
[ "https://Stackoverflow.com/questions/36913153", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4841229/" ]
This happens because of the subexpression `5/(3 + 3)`: in Python 2, `/` between two integers performs integer (floor) division, so it evaluates to 0. You need to make one of the operands a float.
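For illustration, a couple of hedged ways to force float division here (the exact printed value may differ in the last decimal places due to floating-point rounding):

```python
# Python 2.7
print(2 * (5 + 5.0 / (3 + 3)) * 3)       # one float operand forces float division -> ~35.0
print(2 * (5 + float(5) / (3 + 3)) * 3)  # same idea, spelled out
```

Alternatively, put `from __future__ import division` at the very top of the module to make `/` behave as true division everywhere in Python 2, as it does by default in Python 3.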
Always assume it's an issue with something you're doing rather than with an entire coding language! It works fine for me in the Python shell: 35 is the expected answer and 35 is what we get! Most likely it's something on your end, a mistype, or you've mis-commented something out. This is from copy-pasting your code above.

Edit: Who is more likely to be wrong, an individual or the masses? Occam's razor. In this case I assumed he was using Python 3 and not 2.7, which explains the discrepancy, as the expression does give 35 in Python 3.
14,271
52,958,847
I am trying to calculate a DTW distance matrix over 150,000 time series, each having between 13 and 24 observations - that is, the produced distance matrix will be a list of approximately (150,000 x 150,000)/2 = 11,250,000,000 entries. I am running this on a big data cluster with 200GB of memory, but I am getting a memory error.

I am using the dtaidistance library and its **distance\_matrix\_fast** function, to which I could pass the entire list of time series at once, but I was getting a similar memory error coming out of the package. The error was thrown straight away as soon as I ran it. I also used the **block function** in the package, but it seems it is not able to take all the time series at once to start with.

So I decided to go through a loop and calculate the distance between every pair of time series and then append it to a list. However, I get the same memory error again, after running for a long while:

```
File "/root/anaconda2/test/final_clustering_2.py", line 93, in
    distance_matrix_scaled.append(dtw.distance_fast(Series_scaled[i], Series_scaled[j]))
MemoryError
```

This is my code below:

```
distance_matrix_scaled = []
m=len(Series_scaled)
#m=100000
for i in range(0, m - 1):
    for j in range(i + 1, m):
        distance_matrix_scaled.append(dtw.distance_fast(Series_scaled[i], Series_scaled[j]))

# save it to the disk
np.save('distance_entire', distance_matrix_scaled)
```

Could you please help answer why I am getting this memory error? Is it the Python list limit or my cluster size causing this? Is there a clever way or format in numpy I could use to navigate this problem?
2018/10/23
[ "https://Stackoverflow.com/questions/52958847", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6235045/" ]
Thanks to @KacperMadej for this solution on [github](https://github.com/highcharts/highcharts-angular/issues/89). To load a theme simply add the following somewhere in the project: ```js import * as Highcharts from 'highcharts'; require('highcharts/themes/dark-blue')(Highcharts); ```
The theme factory is now the default export of `highcharts/themes/<theme-name>` so this will work: ``` import * as Highcharts from 'highcharts'; import theme from 'highcharts/themes/dark-unica'; theme(Highcharts); ```
14,272
32,492,183
When I run `python manage.py runserver`, everything starts out fine, but then I get a `SystemCheckError` stating that Pillow is not installed; however, Pillow is definitely installed on this machine. This is the error I receive: > > Performing system checks... > > > Unhandled exception in thread started by Traceback (most recent call last): File > "/usr/local/lib/python2.7/dist-packages/django/utils/autoreload.py", > line 225, in wrapper > fn(\*args, \*\*kwargs) File "/usr/local/lib/python2.7/dist-packages/django/core/management/commands/runserver.py", > line 110, in inner\_run > self.validate(display\_num\_errors=True) File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", > line 468, in validate > return self.check(app\_configs=app\_configs, display\_num\_errors=display\_num\_errors) File > "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", > line 527, in check > raise SystemCheckError(msg) django.core.management.base.SystemCheckError: SystemCheckError: System > check identified some issues: > > > ERRORS: recipes.Recipes.primary\_image: (fields.E210) Cannot use > ImageField because Pillow is not installed. > HINT: Get Pillow at <https://pypi.python.org/pypi/Pillow> or run command "pip install Pillow". recipes.Recipes.thumbnail\_image: > (fields.E210) Cannot use ImageField because Pillow is not installed. > HINT: Get Pillow at <https://pypi.python.org/pypi/Pillow> or run command "pip install Pillow". > > > I'm running this on an Ubuntu machine. Any ideas what's up?
2015/09/10
[ "https://Stackoverflow.com/questions/32492183", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4501444/" ]
This should do the trick: `/(?| (")((?:\\"|[^"])+)\1 | (')((?:\\'|[^'])+)\1 )/xg` [Demo](https://regex101.com/r/cG3qR3/2) --------------------------------------- BTW: [regex101.com](https://regex101.com/r/rX4rL7/1) is a great resource to use (which is where I got the regex above) Update ------ The first one I posted works for PHP, [here is one](https://regex101.com/r/jB6hM0/1) for JS `/"([^"\\]*(?:\\.[^"\\]*)*)"|\w+|'([^'\\]*(?:\\.[^'\\]*)*)'/g`
Maybe I read your question incorrectly but this is working for me `/\".+\"/gm` <https://regex101.com/r/wF0yN4/1>
14,275
18,802,563
**Background**: My Python program handles relatively large quantities of data, which can be generated in-program, or imported. The data is then processed, and during one of these processes, the data is deliberately copied and then manipulated, cleaned for duplicates and then returned to the program for further use. The data I'm handling is very precise (up to 16 decimal places), and maintaining this accuracy to at least 14dp is vital. However, mathematical operations of course can return slight variations in my floats, such that two values are identical to 14dp, but may vary ever so slightly to 16dp, therefore meaning the built in `set()` function doesn't correctly remove such 'duplicates' (I used this method to prototype the idea, but it's not satisfactory for the finished program). I should also point out I may well be overlooking something simple! I am just interested to see what others come up with :) **Question:** What is the most efficient way to remove very-near-duplicates from a potentially very large data set? **My Attempts**: I have tried rounding the values themselves to 14dp, but this is of course not satisfactory as this leads to larger errors down the line. I have a potential solution to this problem, but I am not convinced it is as efficient or 'pythonic' as possible. My attempt involves finding the indices of list entries that match to x dp, and then removing one of the matching entries. Thank you in advance for any advice! Please let me know if there's anything you wish to be clarified, or of course if I'm overlooking something very simple (I may be at a point where I'm over-thinking it). **Clarification on 'Duplicates'**: Example of one of my 'duplicate' entries: 603.73066958946424, 603.73066958946460, the solution would remove one of these values. **Note on decimal.Decimal:** This could work if it was guaranteed that all *imported* data did not already have some near-duplicates (which it often does).
2013/09/14
[ "https://Stackoverflow.com/questions/18802563", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1644700/" ]
You really want to use NumPy if you're handling large quantities of data. Here's how I would do it : Import NumPy : ``` import numpy as np ``` Generate 8000 high-precision floats (128-bits will be enough for your purposes, but note that I'm converting the 64-bits output of `random` to 128 just to fake it. Use your real data here.) : ``` a = np.float128(np.random.random((8000,))) ``` Find the indexes of the unique elements in the rounded array : ``` _, unique = np.unique(a.round(decimals=14), return_index=True) ``` And take those indexes from the original (non-rounded) array : ``` no_duplicates = a[unique] ```
Why don't you create a dict that maps the 14dp values to the corresponding full 16dp values: ``` d = collections.defaultdict(list) for x in l: d[round(x, 14)].append(x) ``` Now if you just want "unique" (by your definition) values, you can do ``` unique = [v[0] for v in d.values()] ```
14,278
67,541,366
I have a set of filter objects, which inherit the properties of a `Filter` base class:

```
class Filter():
    def __init__(self):
        self.filterList = []

    def __add__(self,f):
        self.filterList += f.filterList

    def match(self, entry):
        for f in self.filterList:
            if not f(entry):
                return False
        return True

class thisFilter(Filter):
    def __init__(self, args):
        super().__init__()
        ....
        def thisFilterFunc(entry)->bool:
            return True
        self.filterList.append(thisFilterFunc)
```

These filter classes are used by various functions to filter entries:

```
def myfunc(myfilter, arg1, ...):
    ...
    for entry in entries:
        if myfilter.match(entry):
        ... do something
```

Multiple filters can be combined (logical and) by adding instances of these filters:

```
bigFilter = filter1 + filter2 + ...
```

This is all coming together quite well, but I would love to generalize it in a way that handles more complex logical constraints, e.g.

```
bigFilter = (filter1 and filter2) or (filter3 and not filter4)
```

It feels like this should somehow be possible by overriding `__bool__` on the class instead of using `__add__`, but the boolean value of the class is only known for a given entry and not during assembly of the filter. Any ideas how to make this possible? Or is there maybe a more pythonic way to do this?
2021/05/14
[ "https://Stackoverflow.com/questions/67541366", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4177926/" ]
I would go for something like this:

```py
from __future__ import annotations
from typing import Any, Callable

class Filter:
    def __init__(self, filter: Callable[[Any], bool]):
        self.filter = filter

    def __add__(self, added: Filter):
        return OrFilter(self, added)

    def __mul__(self, mult: Filter):
        return AndFilter(self, mult)

    def __invert__(self):
        return Filter(lambda x: not self.filter(x))

    def __call__(self, entry):
        return self.filter(entry)

class AndFilter(Filter):
    def __init__(self, left: Filter, right: Filter):
        self.left = left
        self.right = right

    def __call__(self, entry):
        return self.left(entry) and self.right(entry)

class OrFilter(Filter):
    def __init__(self, left: Filter, right: Filter):
        self.left = left
        self.right = right

    def __call__(self, entry):
        return self.left(entry) or self.right(entry)
```

Then you can create filters and use them as `(filterA + ~filterB) * filterC`.

You'll probably want to replace that `Any` with a generic type, so that your filter knows what it's dealing with.
Thanks, @njzk2, for the solution. In my code I used `|` and `&`. To be backwards compatible I also kept `.match()` instead of using `__call__()`, and added `__add__` back in as well.

```
from __future__ import annotations
from typing import Any, Callable

class Filter:
    def __init__(self, filter: Callable[[Any], bool]):
        self.filter = filter

    def __or__(self, ored: Filter):
        return OrFilter(self, ored)

    def __and__(self, anded: Filter):
        return AndFilter(self, anded)

    def __add__(self, added: Filter):
        # self as __and__
        return self.__and__(added)

    def __invert__(self):
        return Filter(lambda x: not self.filter(x))

    def match(self, entry):
        return self.filter(entry)

class AndFilter(Filter):
    def __init__(self, left: Filter, right: Filter):
        self.left = left
        self.right = right

    def filter(self, entry):
        return self.left.filter(entry) and self.right.filter(entry)

class OrFilter(Filter):
    def __init__(self, left: Filter, right: Filter):
        self.left = left
        self.right = right

    def filter(self, entry):
        return self.left.filter(entry) or self.right.filter(entry)

class MyFilter(Filter):
    def __init__(self, args):
        ...

        def ffunc(entry) -> bool:
            ...

        super().__init__(ffunc)
```
14,279