Dataset schema (per record):
- qid: int64 (46k – 74.7M)
- question: string (54 – 37.8k chars)
- date: string (10 chars)
- metadata: list (3 items)
- response_j: string (17 – 26k chars)
- response_k: string (26 – 26k chars)
35,780,768
I am getting this message when I try to install aws. Anyone have any ideas of what's going on? ``` Exception: Traceback (most recent call last): File "/Library/Python/2.7/site-packages/pip/basecommand.py", line 209, in main status = self.run(options, args) File "/Library/Python/2.7/site-packages/pip/commands/install.py", line 317, in run prefix=options.prefix_path, File "/Library/Python/2.7/site-packages/pip/req/req_set.py", line 725, in install requirement.uninstall(auto_confirm=True) File "/Library/Python/2.7/site-packages/pip/req/req_install.py", line 756, in uninstall paths_to_remove.remove(auto_confirm) File "/Library/Python/2.7/site-packages/pip/req/req_uninstall.py", line 115, in remove renames(path, new_path) File "/Library/Python/2.7/site-packages/pip/utils/__init__.py", line 266, in renames shutil.move(old, new) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 302, in move copy2(src, real_dst) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 131, in copy2 copystat(src, dst) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 103, in copystat os.chflags(dst, st.st_flags) OSError: [Errno 1] Operation not permitted: '/tmp/pip-TIuiKe-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info' ``` The command I'm using is pip install awscli
2016/03/03
[ "https://Stackoverflow.com/questions/35780768", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5505587/" ]
What's going on is the shared library was built with one version of the `A` class while the executable binary was built with a different version. At this point you've violated the one definition rule and the compiler is free to do anything, including compiling and linking your code successfully. There is no requirement for compiler diagnostics in this situation.
This may compile or build if the solution has been built previously, which means it is using stale object code. Do a clean build of your solution, then rebuild it; once the solution is cleaned, try to compile the inherited class again. Another thing that may be of concern is how you have declared your inherited class: > > class AImpl : A { ... }; > > > My question on this is how is `AImpl` meant to inherit from `A`? Are you intending `public`, `protected` or `private` inheritance? (With `class`, the default is `private`.) **EDIT** If you are linking to this as a library and your current solution does not show any compile, build, or link errors, that is because your current solution is using the old `lib` or `dll` that was already built. If you go back into your library solution and do a clean build, it should fail to compile, and thus you won't have a newer version of your library to link to.
33,312,175
I want to use [`re.MULTILINE`](https://docs.python.org/2/library/re.html#re.MULTILINE) but **NOT** [`re.DOTALL`](https://docs.python.org/2/library/re.html#re.DOTALL), so that I can have a regex that includes both an "any character" wildcard and the normal `.` wildcard that doesn't match newlines. Is there a way to do this? What should I use to match any character in those instances that I want to include newlines?
2015/10/23
[ "https://Stackoverflow.com/questions/33312175", "https://Stackoverflow.com", "https://Stackoverflow.com/users/44330/" ]
To match a newline, or "any symbol" without `re.S`/`re.DOTALL`, you may use any of the following: 1. `(?s).` - the [inline modifier group](https://www.regular-expressions.info/modifiers.html) with `s` flag on sets a scope where all `.` patterns match any char including line break chars 2. Any of the following work-arounds: ``` [\s\S] [\w\W] [\d\D] ``` The main idea is that the opposite shorthand classes inside a character class match any symbol there is in the input string. Comparing it to `(.|\s)` and other variations with alternation, the character class solution is much more efficient as it involves much less backtracking (when used with a `*` or `+` quantifier). Compare the small example: it takes [`(?:.|\n)+`](https://regex101.com/r/pX7lM6/1) 45 steps to complete, and it takes [`[\s\S]+`](https://regex101.com/r/pX7lM6/2) just 2 steps. See a [Python demo](https://ideone.com/GZEQNf) where I am matching a line starting with `123` and up to the first occurrence of `3` at the start of a line and including the rest of that line: ```py import re text = """abc 123 def 356 more text...""" print( re.findall(r"^123(?s:.*?)^3.*", text, re.M) ) # => ['123\ndef\n356'] print( re.findall(r"^123[\w\W]*?^3.*", text, re.M) ) # => ['123\ndef\n356'] ```
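The difference between the plain `.` and the character-class workaround is easy to see in a minimal stdlib `re` sketch (the sample string here is made up for illustration):

```python
import re

text = "first line\nsecond line"

# Without re.DOTALL, '.' stops at the newline.
print(re.findall(r"first.*", text))       # ['first line']

# The character-class workaround crosses it, no flags needed.
print(re.findall(r"first[\s\S]*", text))  # ['first line\nsecond line']

# The scoped inline flag (Python 3.6+) does the same, leaving '.' elsewhere unchanged.
print(re.findall(r"first(?s:.*)", text))  # ['first line\nsecond line']
```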
Match any character (including new line): ----------------------------------------- Regular Expression: (Note that a literal space ' ' is included in the class) ``` [\S\n\t\v ] ``` This class covers every non-whitespace character plus newline, tab, vertical tab and space; it still misses `\r` and `\f`, so `[\s\S]` remains the fully general choice. Example: -------- ``` import re text = 'abc def ###A quick brown fox.\nIt jumps over the lazy dog### ghi jkl' # We want to extract "A quick brown fox.\nIt jumps over the lazy dog" matches = re.findall('###([\S\n ]+)###', text) print(matches[0]) ``` **The 'matches[0]' will contain:** 'A quick brown fox.**\n**It jumps over the lazy dog' (the capturing group keeps the `###` delimiters out of the result) Description of '\S' Python docs: -------------------------------- `\S` Matches any character which is not a whitespace character. ( See: <https://docs.python.org/3/library/re.html#regular-expression-syntax> )
66,880,698
I have a notebook that runs overnight, and prints out a bunch of stuff, including images and such. I want to cause this output to be saved programmatically (perhaps at certain intervals). I also want to save the code that was run. In a Jupyter notebook, you could do: ``` from IPython.display import display, Javascript display(Javascript('IPython.notebook.save_checkpoint();')) # causes the current .ipynb file to save itself (same as hitting CTRL+s) ``` (from [Save an IPython notebook programmatically from within itself?](https://stackoverflow.com/questions/32237275/save-an-ipython-notebook-programmatically-from-within-itself)) However, I found that this JavaScript injection does not work in JupyterLab (`Jupyter` not found). My question is how to do the equivalent of the above code in JupyterLab. Upon inspecting the HTML of JupyterLab, I could not find the `Jupyter` object.
2021/03/31
[ "https://Stackoverflow.com/questions/66880698", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11632499/" ]
You can use [ipylab](https://github.com/jtpio/ipylab) to access JupyterLab API from Python. To save the notebook just invoke the `docmanager:save` command: ```py from ipylab import JupyterFrontEnd app = JupyterFrontEnd() app.commands.execute('docmanager:save') ``` You can get the full list of commands with `app.commands.list_commands()`.
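If the goal is saving at intervals (as the question suggests), the `docmanager:save` call can be wrapped in a small background timer. A minimal stdlib sketch; the `save` callable is a placeholder for `app.commands.execute('docmanager:save')` from the ipylab example above:

```python
import threading

def autosave_every(save, interval_s, stop_event):
    # Call save() every interval_s seconds until stop_event is set.
    def loop():
        while not stop_event.wait(interval_s):
            save()
    worker = threading.Thread(target=loop, daemon=True)
    worker.start()
    return worker

# Usage sketch (assumes `app` is an ipylab JupyterFrontEnd):
# stop = threading.Event()
# autosave_every(lambda: app.commands.execute('docmanager:save'), 600, stop)
```

The `stop_event.wait(interval_s)` idiom doubles as both the sleep and the shutdown check, so the thread exits promptly when the event is set.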
JupyterLab has a built-in auto-save function. You can configure the time interval using the Advanced Settings Editor, the Document Manager section (see screenshot below). [![autosave config](https://i.stack.imgur.com/01vR4.png)](https://i.stack.imgur.com/01vR4.png) However, if you *really* want a JavaScript solution you could just invoke the keyboard shortcut `Ctrl` + `s` with: ``` from IPython.display import display, Javascript display(Javascript( "document.body.dispatchEvent(" "new KeyboardEvent('keydown', {key:'s', keyCode: 83, ctrlKey: true}" "))" )) ``` This will only work as long as you do not change focus to a different notebook. However, you can always use an invisible HTML node such as input to reclaim the focus first: ``` from IPython.display import display, HTML script = """ this.nextElementSibling.focus(); this.dispatchEvent(new KeyboardEvent('keydown', {key:'s', keyCode: 83, ctrlKey: true})); """ display(HTML(( '<img src onerror="{}" style="display:none">' '<input style="width:0;height:0;border:0">' ).format(script))) ``` And you can always wrap the script in `window.setTimeout` or `window.setInterval` - but it should not be needed thanks to the built-in auto-save function of JupyterLab.
66,880,698
I have a notebook that runs overnight, and prints out a bunch of stuff, including images and such. I want to cause this output to be saved programmatically (perhaps at certain intervals). I also want to save the code that was run. In a Jupyter notebook, you could do: ``` from IPython.display import display, Javascript display(Javascript('IPython.notebook.save_checkpoint();')) # causes the current .ipynb file to save itself (same as hitting CTRL+s) ``` (from [Save an IPython notebook programmatically from within itself?](https://stackoverflow.com/questions/32237275/save-an-ipython-notebook-programmatically-from-within-itself)) However, I found that this JavaScript injection does not work in JupyterLab (`Jupyter` not found). My question is how to do the equivalent of the above code in JupyterLab. Upon inspecting the HTML of JupyterLab, I could not find the `Jupyter` object.
2021/03/31
[ "https://Stackoverflow.com/questions/66880698", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11632499/" ]
JupyterLab has a built-in auto-save function. You can configure the time interval using the Advanced Settings Editor, the Document Manager section (see screenshot below). [![autosave config](https://i.stack.imgur.com/01vR4.png)](https://i.stack.imgur.com/01vR4.png) However, if you *really* want a JavaScript solution you could just invoke the keyboard shortcut `Ctrl` + `s` with: ``` from IPython.display import display, Javascript display(Javascript( "document.body.dispatchEvent(" "new KeyboardEvent('keydown', {key:'s', keyCode: 83, ctrlKey: true}" "))" )) ``` This will only work as long as you do not change focus to a different notebook. However, you can always use an invisible HTML node such as input to reclaim the focus first: ``` from IPython.display import display, HTML script = """ this.nextElementSibling.focus(); this.dispatchEvent(new KeyboardEvent('keydown', {key:'s', keyCode: 83, ctrlKey: true})); """ display(HTML(( '<img src onerror="{}" style="display:none">' '<input style="width:0;height:0;border:0">' ).format(script))) ``` And you can always wrap the script in `window.setTimeout` or `window.setInterval` - but it should not be needed thanks to the built-in auto-save function of JupyterLab.
I just wanted to share a really small tweak of the other solution by krassowski that works for JupyterLab on macOS (using `metaKey`, i.e. the `Cmd` key, in place of `ctrlKey`): ``` from IPython.display import display, HTML script = """ this.nextElementSibling.focus(); this.dispatchEvent(new KeyboardEvent('keydown', {key:'s', keyCode: 83, metaKey: true})); """ display(HTML(( '<img src onerror="{}" style="display:none">' '<input style="width:0;height:0;border:0">' ).format(script))) ```
66,880,698
I have a notebook that runs overnight, and prints out a bunch of stuff, including images and such. I want to cause this output to be saved programmatically (perhaps at certain intervals). I also want to save the code that was run. In a Jupyter notebook, you could do: ``` from IPython.display import display, Javascript display(Javascript('IPython.notebook.save_checkpoint();')) # causes the current .ipynb file to save itself (same as hitting CTRL+s) ``` (from [Save an IPython notebook programmatically from within itself?](https://stackoverflow.com/questions/32237275/save-an-ipython-notebook-programmatically-from-within-itself)) However, I found that this JavaScript injection does not work in JupyterLab (`Jupyter` not found). My question is how to do the equivalent of the above code in JupyterLab. Upon inspecting the HTML of JupyterLab, I could not find the `Jupyter` object.
2021/03/31
[ "https://Stackoverflow.com/questions/66880698", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11632499/" ]
You can use [ipylab](https://github.com/jtpio/ipylab) to access JupyterLab API from Python. To save the notebook just invoke the `docmanager:save` command: ```py from ipylab import JupyterFrontEnd app = JupyterFrontEnd() app.commands.execute('docmanager:save') ``` You can get the full list of commands with `app.commands.list_commands()`.
I just wanted to share a really small tweak of the other solution by krassowski that works for JupyterLab on macOS (using `metaKey`, i.e. the `Cmd` key, in place of `ctrlKey`): ``` from IPython.display import display, HTML script = """ this.nextElementSibling.focus(); this.dispatchEvent(new KeyboardEvent('keydown', {key:'s', keyCode: 83, metaKey: true})); """ display(HTML(( '<img src onerror="{}" style="display:none">' '<input style="width:0;height:0;border:0">' ).format(script))) ```
42,103,367
I am using multiprocessing.Pool.imap to run many independent jobs in parallel using Python 2.7 on Windows 7. With the default settings, my total CPU usage is pegged at 100%, as measured by Windows Task Manager. This makes it impossible to do any other work while my code runs in the background. I've tried limiting the number of processes to be the number of CPUs minus 1, as described in [How to limit the number of processors that Python uses](https://stackoverflow.com/questions/5495203/how-to-limit-the-number-of-processors-that-python-uses): ``` pool = Pool(processes=max(multiprocessing.cpu_count()-1, 1)) for p in pool.imap(func, iterable): ... ``` This does reduce the total number of running processes. However, each process just takes up more cycles to make up for it. So my total CPU usage is still pegged at 100%. Is there a way to directly limit the total CPU usage - NOT just the number of processes - or failing that, is there any workaround?
2017/02/08
[ "https://Stackoverflow.com/questions/42103367", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4410133/" ]
The solution depends on what you want to do. Here are a few options: Lower priorities of processes ----------------------------- You can [`nice`](https://en.wikipedia.org/wiki/Nice_(Unix)) the subprocesses. This way, though they will still eat 100% of the CPU, when you start other applications, the OS gives preference to the other applications. If you want to let a work-intensive computation run in the background on your laptop and don't care about the CPU fan running all the time, then setting the nice value with `psutil` is your solution. This script is a test script which runs on all cores for enough time so you can see how it behaves. ``` from multiprocessing import Pool, cpu_count import math import psutil import os def f(i): return math.sqrt(i) def limit_cpu(): "is called at every process start" p = psutil.Process(os.getpid()) # set to lowest priority, this is windows only, on Unix use p.nice(19) p.nice(psutil.BELOW_NORMAL_PRIORITY_CLASS) if __name__ == '__main__': # start "number of cores" processes pool = Pool(None, limit_cpu) for p in pool.imap(f, range(10**8)): pass ``` The trick is that `limit_cpu` is run at the beginning of every process (see the [`initializer` argument in the doc](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool)). Whereas Unix has levels -20 (highest prio) to 19 (lowest prio), Windows has [a few distinct levels](https://msdn.microsoft.com/en-us/library/ms686219(v=vs.85).aspx) for giving priority. `BELOW_NORMAL_PRIORITY_CLASS` probably fits your requirements best; there is also `IDLE_PRIORITY_CLASS` which tells Windows to run your process only when the system is idle. 
You can view the priority if you switch to detail mode in Task Manager and right click on the process: [![enter image description here](https://i.stack.imgur.com/K9QY4.png)](https://i.stack.imgur.com/K9QY4.png) Lower number of processes ------------------------- Although you have rejected this option it still might be a good option: Say you limit the number of subprocesses to half the cpu cores using `pool = Pool(max(cpu_count()//2, 1))` then the OS initially runs those processes on half the cpu cores, while the others stay idle or just run the other applications currently running. After a short time, the OS reschedules the processes and might move them to other cpu cores etc. Both Windows and Unix-based systems behave this way. **Windows: Running 2 processes on 4 cores:** ![](https://i.stack.imgur.com/HNKNA.png) **OSX: Running 4 processes on 8 cores**: [![enter image description here](https://i.stack.imgur.com/rrRDD.png)](https://i.stack.imgur.com/rrRDD.png) You see that both OSes balance the processes between the cores, although not evenly, so you still see a few cores with higher percentages than others. Sleep ----- If you absolutely want to be sure that your processes never eat 100% of a certain core (e.g. if you want to prevent the cpu fan from spinning up), then you can run sleep in your processing function: ``` from time import sleep def f(i): sleep(0.01) return math.sqrt(i) ``` This makes the OS "schedule out" your process for `0.01` seconds per computation and makes room for other applications. If there are no other applications, then the cpu core is idle, thus it will never go to 100%. You'll need to play around with different sleep durations; it will also vary from computer to computer. If you want to make it very sophisticated you could adapt the sleep depending on what `cpu_times()` reports.
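The sleep idea generalizes to a small wrapper, so the worker function itself stays untouched. A sketch (the `pause` value is a knob to tune, not a recommendation):

```python
import math
import time

def throttled(func, pause=0.01):
    # Sleep before each call so the scheduler can hand the core to other work.
    def wrapper(*args, **kwargs):
        time.sleep(pause)
        return func(*args, **kwargs)
    return wrapper

slow_sqrt = throttled(math.sqrt, pause=0.005)
print(slow_sqrt(16))  # 4.0
```

Note that a locally defined wrapper like this is not picklable, so on Windows it cannot be passed to `Pool.imap` directly; there the sleep call goes inside the top-level worker function, as in the answer above. The wrapper form suits single-process use or fork-based platforms.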
On the OS level --------------- You can use `nice` to set the priority of a single command. You could also start a Python script with nice. (Below from: <http://blog.scoutapp.com/articles/2014/11/04/restricting-process-cpu-usage-using-nice-cpulimit-and-cgroups>) > > **nice** > > > The nice command tweaks the priority level of a process so that it runs less frequently. This is useful when you need to run a > CPU intensive task as a background or batch job. The niceness level > ranges from -20 (most favorable scheduling) to 19 (least favorable). > Processes on Linux are started with a niceness of 0 by default. The > nice command (without any additional parameters) will start a process > with a niceness of 10. At that level the scheduler will see it as a > lower priority task and give it less CPU resources. Start two > matho-primes tasks, one with nice and one without: > > > ``` nice matho-primes 0 9999999999 > /dev/null & matho-primes 0 9999999999 > /dev/null & ``` > > Now run top. > > > [![enter image description here](https://i.stack.imgur.com/CPuTD.jpg)](https://i.stack.imgur.com/CPuTD.jpg) As a function in Python ----------------------- Another approach is to use psutil to check your CPU load average for the past minute and then have your threads check the CPU load average and spool up another thread if you are below the specified CPU load target, and sleep or kill the thread if you are above the CPU load target. This will get out of your way when you are using your computer, but will maintain a constant CPU load. ``` # Import Python modules import time import os import multiprocessing import psutil import math from random import randint # Main task function def main_process(item_queue, args_array): # Go through each link in the array passed in. 
while not item_queue.empty(): # Get the next item in the queue item = item_queue.get() # Create a random number to simulate threads that # are not all going to be the same randomizer = randint(100, 100000) for i in range(randomizer): algo_seed = math.sqrt(math.sqrt(i * randomizer) % randomizer) # Check if the thread should continue based on current load balance if spool_down_load_balance(): print "Process " + str(os.getpid()) + " saying goodnight..." break # This function will build a queue and start the load balancer process def start_thread_process(queue_pile, args_array): # Create a Queue to hold link pile and share between threads item_queue = multiprocessing.Queue() # Put all the initial items into the queue for item in queue_pile: item_queue.put(item) # Append the load balancer thread to the loop load_balance_process = multiprocessing.Process(target=spool_up_load_balance, args=(item_queue, args_array)) # Loop through and start all processes load_balance_process.start() # This .join() function prevents the script from progressing further. load_balance_process.join() # Spool down the thread balance when load is too high def spool_down_load_balance(): # Get the count of CPU cores core_count = psutil.cpu_count() # Calculate the short term load average of past minute one_minute_load_average = os.getloadavg()[0] / core_count # If load balance above the max return True to kill the process if one_minute_load_average > args_array['cpu_target']: print "-Unacceptable load balance detected. Killing process " + str(os.getpid()) + "..." 
return True # Load balancer thread function def spool_up_load_balance(item_queue, args_array): print "[Starting load balancer...]" # Get the count of CPU cores core_count = psutil.cpu_count() # While there are still items in the queue while not item_queue.empty(): print "[Calculating load balance...]" # Check the 1 minute average CPU load balance # returns 1,5,15 minute load averages one_minute_load_average = os.getloadavg()[0] / core_count # If the load average is much less than the target, start a group of new threads if one_minute_load_average < args_array['cpu_target'] / 2: # Print message and log that load balancer is starting another thread print "Starting another thread group due to low CPU load balance of: " + str(one_minute_load_average * 100) + "%" time.sleep(5) # Start another group of threads for i in range(3): start_new_thread = multiprocessing.Process(target=main_process,args=(item_queue, args_array)) start_new_thread.start() # Allow the added threads to have an impact on the CPU balance # before checking the one minute average again time.sleep(20) # If load average is less than target, start a single thread elif one_minute_load_average < args_array['cpu_target']: # Print message and log that load balancer is starting another thread print "Starting another single thread due to low CPU load balance of: " + str(one_minute_load_average * 100) + "%" # Start another thread start_new_thread = multiprocessing.Process(target=main_process,args=(item_queue, args_array)) start_new_thread.start() # Allow the added threads to have an impact on the CPU balance # before checking the one minute average again time.sleep(20) else: # Print CPU load balance print "Reporting stable CPU load balance: " + str(one_minute_load_average * 100) + "%" # Sleep before checking the load balance again time.sleep(20) if __name__=="__main__": # Set the queue size queue_size = 10000 # Define an arguments array to pass around all the values args_array = { # Set some initial CPU load values as a CPU usage goal 
"cpu_target" : 0.60, # When CPU load is significantly low, start this number # of threads "thread_group_size" : 3 } # Create an array of fixed length to act as queue queue_pile = list(range(queue_size)) # Set main process start time start_time = time.time() # Start the main process start_thread_process(queue_pile, args_array) print '[Finished processing the entire queue! Time elapsed: {0} Time finished: {1}]'.format(time.time() - start_time, time.strftime("%c")) ```
42,103,367
I am using multiprocessing.Pool.imap to run many independent jobs in parallel using Python 2.7 on Windows 7. With the default settings, my total CPU usage is pegged at 100%, as measured by Windows Task Manager. This makes it impossible to do any other work while my code runs in the background. I've tried limiting the number of processes to be the number of CPUs minus 1, as described in [How to limit the number of processors that Python uses](https://stackoverflow.com/questions/5495203/how-to-limit-the-number-of-processors-that-python-uses): ``` pool = Pool(processes=max(multiprocessing.cpu_count()-1, 1)) for p in pool.imap(func, iterable): ... ``` This does reduce the total number of running processes. However, each process just takes up more cycles to make up for it. So my total CPU usage is still pegged at 100%. Is there a way to directly limit the total CPU usage - NOT just the number of processes - or failing that, is there any workaround?
2017/02/08
[ "https://Stackoverflow.com/questions/42103367", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4410133/" ]
The solution depends on what you want to do. Here are a few options: Lower priorities of processes ----------------------------- You can [`nice`](https://en.wikipedia.org/wiki/Nice_(Unix)) the subprocesses. This way, though they will still eat 100% of the CPU, when you start other applications, the OS gives preference to the other applications. If you want to let a work-intensive computation run in the background on your laptop and don't care about the CPU fan running all the time, then setting the nice value with `psutil` is your solution. This script is a test script which runs on all cores for enough time so you can see how it behaves. ``` from multiprocessing import Pool, cpu_count import math import psutil import os def f(i): return math.sqrt(i) def limit_cpu(): "is called at every process start" p = psutil.Process(os.getpid()) # set to lowest priority, this is windows only, on Unix use p.nice(19) p.nice(psutil.BELOW_NORMAL_PRIORITY_CLASS) if __name__ == '__main__': # start "number of cores" processes pool = Pool(None, limit_cpu) for p in pool.imap(f, range(10**8)): pass ``` The trick is that `limit_cpu` is run at the beginning of every process (see the [`initializer` argument in the doc](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool)). Whereas Unix has levels -20 (highest prio) to 19 (lowest prio), Windows has [a few distinct levels](https://msdn.microsoft.com/en-us/library/ms686219(v=vs.85).aspx) for giving priority. `BELOW_NORMAL_PRIORITY_CLASS` probably fits your requirements best; there is also `IDLE_PRIORITY_CLASS` which tells Windows to run your process only when the system is idle. 
You can view the priority if you switch to detail mode in Task Manager and right click on the process: [![enter image description here](https://i.stack.imgur.com/K9QY4.png)](https://i.stack.imgur.com/K9QY4.png) Lower number of processes ------------------------- Although you have rejected this option it still might be a good option: Say you limit the number of subprocesses to half the cpu cores using `pool = Pool(max(cpu_count()//2, 1))` then the OS initially runs those processes on half the cpu cores, while the others stay idle or just run the other applications currently running. After a short time, the OS reschedules the processes and might move them to other cpu cores etc. Both Windows and Unix-based systems behave this way. **Windows: Running 2 processes on 4 cores:** ![](https://i.stack.imgur.com/HNKNA.png) **OSX: Running 4 processes on 8 cores**: [![enter image description here](https://i.stack.imgur.com/rrRDD.png)](https://i.stack.imgur.com/rrRDD.png) You see that both OSes balance the processes between the cores, although not evenly, so you still see a few cores with higher percentages than others. Sleep ----- If you absolutely want to be sure that your processes never eat 100% of a certain core (e.g. if you want to prevent the cpu fan from spinning up), then you can run sleep in your processing function: ``` from time import sleep def f(i): sleep(0.01) return math.sqrt(i) ``` This makes the OS "schedule out" your process for `0.01` seconds per computation and makes room for other applications. If there are no other applications, then the cpu core is idle, thus it will never go to 100%. You'll need to play around with different sleep durations; it will also vary from computer to computer. If you want to make it very sophisticated you could adapt the sleep depending on what `cpu_times()` reports.
In Linux: use `nice()` with a numerical value: ``` # on Unix use p.nice(10) for a low priority p.nice(10) ``` [![enter image description here](https://i.stack.imgur.com/WeIMX.png)](https://i.stack.imgur.com/WeIMX.png) [https://en.wikipedia.org/wiki/Nice\_(Unix)#:~:text=nice%20is%20a%20program%20found,CPU%20time%20than%20other%20processes](https://en.wikipedia.org/wiki/Nice_(Unix)#:%7E:text=nice%20is%20a%20program%20found,CPU%20time%20than%20other%20processes).
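The same idea works without extra packages via `os.nice` in a pool initializer, mirroring the `psutil` variant from the other answer. A runnable sketch (Unix only; `work` is a stand-in workload):

```python
import os
from multiprocessing import Pool

def lower_priority():
    # Unix only: raise each worker's niceness by 10 so interactive
    # applications are scheduled ahead of the pool's workers.
    os.nice(10)

def work(x):
    return x * x

if __name__ == "__main__":
    with Pool(processes=2, initializer=lower_priority) as pool:
        print(pool.map(work, range(5)))  # [0, 1, 4, 9, 16]
```

(The `with Pool(...)` form needs Python 3.3+; on the asker's Python 2.7, call `pool.close()` and `pool.join()` explicitly instead.)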
42,103,367
I am using multiprocessing.Pool.imap to run many independent jobs in parallel using Python 2.7 on Windows 7. With the default settings, my total CPU usage is pegged at 100%, as measured by Windows Task Manager. This makes it impossible to do any other work while my code runs in the background. I've tried limiting the number of processes to be the number of CPUs minus 1, as described in [How to limit the number of processors that Python uses](https://stackoverflow.com/questions/5495203/how-to-limit-the-number-of-processors-that-python-uses): ``` pool = Pool(processes=max(multiprocessing.cpu_count()-1, 1)) for p in pool.imap(func, iterable): ... ``` This does reduce the total number of running processes. However, each process just takes up more cycles to make up for it. So my total CPU usage is still pegged at 100%. Is there a way to directly limit the total CPU usage - NOT just the number of processes - or failing that, is there any workaround?
2017/02/08
[ "https://Stackoverflow.com/questions/42103367", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4410133/" ]
On the OS level --------------- You can use `nice` to set the priority of a single command. You could also start a Python script with nice. (Below from: <http://blog.scoutapp.com/articles/2014/11/04/restricting-process-cpu-usage-using-nice-cpulimit-and-cgroups>) > > **nice** > > > The nice command tweaks the priority level of a process so that it runs less frequently. This is useful when you need to run a > CPU intensive task as a background or batch job. The niceness level > ranges from -20 (most favorable scheduling) to 19 (least favorable). > Processes on Linux are started with a niceness of 0 by default. The > nice command (without any additional parameters) will start a process > with a niceness of 10. At that level the scheduler will see it as a > lower priority task and give it less CPU resources. Start two > matho-primes tasks, one with nice and one without: > > > ``` nice matho-primes 0 9999999999 > /dev/null & matho-primes 0 9999999999 > /dev/null & ``` > > Now run top. > > > [![enter image description here](https://i.stack.imgur.com/CPuTD.jpg)](https://i.stack.imgur.com/CPuTD.jpg) As a function in Python ----------------------- Another approach is to use psutil to check your CPU load average for the past minute and then have your threads check the CPU load average and spool up another thread if you are below the specified CPU load target, and sleep or kill the thread if you are above the CPU load target. This will get out of your way when you are using your computer, but will maintain a constant CPU load. ``` # Import Python modules import time import os import multiprocessing import psutil import math from random import randint # Main task function def main_process(item_queue, args_array): # Go through each link in the array passed in. 
while not item_queue.empty(): # Get the next item in the queue item = item_queue.get() # Create a random number to simulate threads that # are not all going to be the same randomizer = randint(100, 100000) for i in range(randomizer): algo_seed = math.sqrt(math.sqrt(i * randomizer) % randomizer) # Check if the thread should continue based on current load balance if spool_down_load_balance(): print "Process " + str(os.getpid()) + " saying goodnight..." break # This function will build a queue and start the load balancer process def start_thread_process(queue_pile, args_array): # Create a Queue to hold link pile and share between threads item_queue = multiprocessing.Queue() # Put all the initial items into the queue for item in queue_pile: item_queue.put(item) # Append the load balancer thread to the loop load_balance_process = multiprocessing.Process(target=spool_up_load_balance, args=(item_queue, args_array)) # Loop through and start all processes load_balance_process.start() # This .join() function prevents the script from progressing further. load_balance_process.join() # Spool down the thread balance when load is too high def spool_down_load_balance(): # Get the count of CPU cores core_count = psutil.cpu_count() # Calculate the short term load average of past minute one_minute_load_average = os.getloadavg()[0] / core_count # If load balance above the max return True to kill the process if one_minute_load_average > args_array['cpu_target']: print "-Unacceptable load balance detected. Killing process " + str(os.getpid()) + "..." 
return True # Load balancer thread function def spool_up_load_balance(item_queue, args_array): print "[Starting load balancer...]" # Get the count of CPU cores core_count = psutil.cpu_count() # While there is still links in queue while not item_queue.empty(): print "[Calculating load balance...]" # Check the 1 minute average CPU load balance # returns 1,5,15 minute load averages one_minute_load_average = os.getloadavg()[0] / core_count # If the load average much less than target, start a group of new threads if one_minute_load_average < args_array['cpu_target'] / 2: # Print message and log that load balancer is starting another thread print "Starting another thread group due to low CPU load balance of: " + str(one_minute_load_average * 100) + "%" time.sleep(5) # Start another group of threads for i in range(3): start_new_thread = multiprocessing.Process(target=main_process,args=(item_queue, args_array)) start_new_thread.start() # Allow the added threads to have an impact on the CPU balance # before checking the one minute average again time.sleep(20) # If load average less than target start single thread elif one_minute_load_average < args_array['cpu_target']: # Print message and log that load balancer is starting another thread print "Starting another single thread due to low CPU load balance of: " + str(one_minute_load_average * 100) + "%" # Start another thread start_new_thread = multiprocessing.Process(target=main_process,args=(item_queue, args_array)) start_new_thread.start() # Allow the added threads to have an impact on the CPU balance # before checking the one minute average again time.sleep(20) else: # Print CPU load balance print "Reporting stable CPU load balance: " + str(one_minute_load_average * 100) + "%" # Sleep for another minute while time.sleep(20) if __name__=="__main__": # Set the queue size queue_size = 10000 # Define an arguments array to pass around all the values args_array = { # Set some initial CPU load values as a CPU usage goal 
"cpu_target" : 0.60, # When CPU load is significantly low, start this number # of threads "thread_group_size" : 3 } # Create an array of fixed length to act as queue queue_pile = list(range(queue_size)) # Set main process start time start_time = time.time() # Start the main process start_thread_process(queue_pile, args_array) print '[Finished processing the entire queue! Time consuming:{0} Time Finished: {1}]'.format(time.time() - start_time, time.strftime("%c")) ```
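For reference, the core throttling idea in the answer above can be distilled into a small, stdlib-only Python 3 sketch (the `cpu_target` of 0.60 and the 20-second back-off mirror the answer's settings; the function name `throttle` is my own, and `os.getloadavg()` is Unix-only):

```python
import os
import time

def throttle(cpu_target=0.60, backoff=20):
    """Sleep whenever the one-minute load average, normalised per core,
    exceeds cpu_target; return the measured per-core load."""
    cores = os.cpu_count() or 1
    load_per_core = os.getloadavg()[0] / cores
    if load_per_core > cpu_target:
        time.sleep(backoff)
    return load_per_core
```

Calling `throttle()` at the top of a worker loop gives you the same "back off when the machine is busy" behaviour without the full load-balancer scaffolding.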
In Linux: Use `nice()` with a numerical value:

```
# on Unix use p.nice(10) for very low priority
p.nice(10)
```

(Here `p` is a `psutil.Process` instance.)

[![enter image description here](https://i.stack.imgur.com/WeIMX.png)](https://i.stack.imgur.com/WeIMX.png)

[https://en.wikipedia.org/wiki/Nice\_(Unix)#:~:text=nice%20is%20a%20program%20found,CPU%20time%20than%20other%20processes](https://en.wikipedia.org/wiki/Nice_(Unix)#:%7E:text=nice%20is%20a%20program%20found,CPU%20time%20than%20other%20processes).
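A stdlib-only variant of the same idea, with no psutil dependency, is `os.nice`, which adds to the current process's niceness and returns the new value (Unix-only; a non-root process can only ever raise its niceness):

```python
import os

# Raise our own niceness by 10, i.e. lower our scheduling priority.
new_level = os.nice(10)
print(new_level)
```

This affects only the calling process; to renice a child, call `os.nice` after forking (or use psutil's `Process.nice` on its pid).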
16,867,347
From what I have read, there are two ways to debug code in Python:

* With a traditional debugger such as `pdb` or `ipdb`. This supports commands such as `c` for `continue`, `n` for `step-over`, `s` for `step-into`, etc., but you don't have direct access to an IPython shell, which can be extremely useful for object inspection.
* Using **IPython** by [embedding](http://ipython.org/ipython-doc/dev/interactive/reference.html#embedding-ipython) an IPython shell in your code. You can do `from IPython import embed`, and then use `embed()` in your code. When your program/script hits an `embed()` statement, you are dropped into an IPython shell. This allows the full inspection of objects and testing of Python code using all the IPython goodies. However, when using `embed()` you can't **step-by-step** through the code anymore with handy keyboard shortcuts.

Is there any way to combine the best of both worlds? I.e.:

1. Be able to **step-by-step** through your code with handy pdb/ipdb keyboard shortcuts.
2. At any such step (e.g. on a given statement), have access to a full-fledged **IPython shell**.

### IPython debugging *as in* MATLAB:

An example of this type of "enhanced debugging" can be found in MATLAB, where the user **always** has full access to the MATLAB engine/shell, and she can still **step-by-step** through her code, define conditional breakpoints, etc. From what I have discussed with other users, this is the debugging feature that people miss the most when moving from MATLAB to IPython.

### IPython debugging in Emacs and other editors:

I don't want to make the question too specific, but I work mostly in Emacs, so I wonder if there is any way to bring this functionality into it. **Ideally**, Emacs (or the editor) would allow the programmer to set breakpoints anywhere in the code and communicate with the interpreter or debugger to have it stop at the location of your choice, and bring up a full IPython interpreter at that location.
2013/05/31
[ "https://Stackoverflow.com/questions/16867347", "https://Stackoverflow.com", "https://Stackoverflow.com/users/283296/" ]
(Update on May 28, 2016) Using RealGUD in Emacs
===============================================

For anyone in Emacs, [this thread](https://github.com/rocky/emacs-dbgr/issues/96) shows how to accomplish everything described in the OP (and more) using

1. a new important debugger in Emacs called **RealGUD** which can operate with any debugger (including `ipdb`).
2. The Emacs package `isend-mode`.

The combination of these two packages is extremely powerful and allows one to recreate exactly the behavior described in the OP and do even more.

More info on [the wiki article](https://github.com/rocky/emacs-dbgr/wiki/ipdb-notes) of RealGUD for ipdb.

---

Original answer:
================

After having tried many different methods for debugging Python, including everything mentioned in this thread, one of my preferred ways of debugging Python with IPython is with embedded shells.

### Defining a custom embedded IPython shell:

Add the following to a script on your `PYTHONPATH`, so that the method `ipsh()` becomes available.

```
import inspect

# First import the embed function
from IPython.terminal.embed import InteractiveShellEmbed
from IPython.config.loader import Config

# Configure the prompt so that I know I am in a nested (embedded) shell
cfg = Config()
prompt_config = cfg.PromptManager
prompt_config.in_template = 'N.In <\\#>: '
prompt_config.in2_template = ' .\\D.: '
prompt_config.out_template = 'N.Out<\\#>: '

# Messages displayed when I drop into and exit the shell.
banner_msg = ("\n**Nested Interpreter:\n"
              "Hit Ctrl-D to exit interpreter and continue program.\n"
              "Note that if you use %kill_embedded, you can fully deactivate\n"
              "This embedded instance so it will never turn on again")
exit_msg = '**Leaving Nested interpreter'

# Wrap it in a function that gives me more context:
def ipsh():
    ipshell = InteractiveShellEmbed(config=cfg, banner1=banner_msg, exit_msg=exit_msg)

    frame = inspect.currentframe().f_back
    msg = 'Stopped at {0.f_code.co_filename} at line {0.f_lineno}'.format(frame)

    # Go back one level!
    # This is needed because the call to ipshell is inside the function ipsh()
    ipshell(msg, stack_depth=2)
```

Then, whenever I want to debug something in my code, I place `ipsh()` right at the location where I need to do object inspection, etc. For example, say I want to debug `my_function` below.

### Using it:

```
def my_function(b):
    a = b
    ipsh() # <- This will embed a full-fledged IPython interpreter
    a = 4
```

and then I invoke `my_function(2)` in one of the following ways:

1. Either by running a Python program that invokes this function from a Unix shell
2. Or by invoking it directly from IPython

Regardless of how I invoke it, the interpreter stops at the line that says `ipsh()`. Once you are done, you can do `Ctrl-D` and Python will resume execution (with any variable updates that you made). Note that, if you run the code from a regular IPython shell (case 2 above), the new IPython shell will be **nested** inside the one from which you invoked it, which is perfectly fine, but it's good to be aware of. Either way, once the interpreter stops on the location of `ipsh`, I can inspect the value of `a` (which will be `2`), see what functions and objects are defined, etc.

### The problem:

The solution above can be used to have Python stop anywhere you want in your code, and then drop you into a fully-fledged IPython interpreter. Unfortunately it does not let you add or remove breakpoints once you invoke the script, which is highly frustrating. In my opinion, this is the **only** thing that is preventing IPython from becoming a great debugging tool for Python.

### The best you can do for now:

A workaround is to place `ipsh()` a priori at the different locations where you want the Python interpreter to launch an IPython shell (i.e. a `breakpoint`). You can then "jump" between different pre-defined, hard-coded "breakpoints" with `Ctrl-D`, which would exit the current embedded IPython shell and stop again whenever the interpreter hits the next call to `ipsh()`.

If you go this route, one way to exit "debugging mode" and ignore all subsequent breakpoints is to use `ipshell.dummy_mode = True`, which will make Python ignore any subsequent instantiations of the `ipshell` object that we created above.
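If IPython is not installed, a rough stdlib-only approximation of the same embedding trick can be built from `code.interact` (this is my own sketch, not the answer's code; you get a plain Python REPL, without IPython's completion or magics):

```python
import code
import inspect

def plain_ipsh():
    # Stdlib stand-in for ipsh(): open a plain Python REPL with access
    # to the caller's variables. Exit with Ctrl-D to resume the program.
    frame = inspect.currentframe().f_back
    banner = 'Stopped at {0.f_code.co_filename} at line {0.f_lineno}'.format(frame)
    namespace = {**frame.f_globals, **frame.f_locals}
    code.interact(banner=banner, local=namespace)
```

Note that, unlike a real debugger, variables you create inside this REPL are not written back to the caller's frame; it is for inspection more than mutation.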
Running the code from inside Emacs' IPython shell, with a breakpoint set via `pdb.set_trace()`, should work.

Checked with python-mode.el, `M-x ipython RET` etc.
16,867,347
From what I have read, there are two ways to debug code in Python:

* With a traditional debugger such as `pdb` or `ipdb`. This supports commands such as `c` for `continue`, `n` for `step-over`, `s` for `step-into`, etc., but you don't have direct access to an IPython shell, which can be extremely useful for object inspection.
* Using **IPython** by [embedding](http://ipython.org/ipython-doc/dev/interactive/reference.html#embedding-ipython) an IPython shell in your code. You can do `from IPython import embed`, and then use `embed()` in your code. When your program/script hits an `embed()` statement, you are dropped into an IPython shell. This allows the full inspection of objects and testing of Python code using all the IPython goodies. However, when using `embed()` you can't **step-by-step** through the code anymore with handy keyboard shortcuts.

Is there any way to combine the best of both worlds? I.e.:

1. Be able to **step-by-step** through your code with handy pdb/ipdb keyboard shortcuts.
2. At any such step (e.g. on a given statement), have access to a full-fledged **IPython shell**.

### IPython debugging *as in* MATLAB:

An example of this type of "enhanced debugging" can be found in MATLAB, where the user **always** has full access to the MATLAB engine/shell, and she can still **step-by-step** through her code, define conditional breakpoints, etc. From what I have discussed with other users, this is the debugging feature that people miss the most when moving from MATLAB to IPython.

### IPython debugging in Emacs and other editors:

I don't want to make the question too specific, but I work mostly in Emacs, so I wonder if there is any way to bring this functionality into it. **Ideally**, Emacs (or the editor) would allow the programmer to set breakpoints anywhere in the code and communicate with the interpreter or debugger to have it stop at the location of your choice, and bring up a full IPython interpreter at that location.
2013/05/31
[ "https://Stackoverflow.com/questions/16867347", "https://Stackoverflow.com", "https://Stackoverflow.com/users/283296/" ]
Did you try [this tip](http://vjethava.blogspot.fr/2010/11/matlabs-keyboard-command-in-python.html)?

> Or better still, use ipython, and call:
>
> ```
> from IPython.Debugger import Tracer; debug_here = Tracer()
> ```
>
> then you can just use
>
> ```
> debug_here()
> ```
>
> whenever you want to set a breakpoint
Developing New Code
===================

*Debugging inside IPython*

1. Use Jupyter/IPython cell execution to speed up experiment iterations
2. Use `%%debug` to step through a cell

Cell example:

```
%%debug
...: for n in range(4):
...:     n > 2
```

Debugging Existing Code
=======================

*IPython inside debugging*

1. Debugging a broken unit test: `pytest ... --pdbcls=IPython.terminal.debugger:TerminalPdb --pdb`
2. Debugging outside of a test case: `breakpoint()`, `python -m ipdb`, etc.
3. `IPython.embed()` for full IPython functionality where needed while in the debugger

Thoughts on Python
==================

I agree with the OP that Python still lacks many things MATLAB does nicely, and it really should have them, since just about everything in the language favors development speed over production speed. Maybe someday I will contribute more than trivial bug fixes to CPython.

<https://github.com/ipython/ipython/commit/f042f3fea7560afcb518a1940daa46a72fbcfa68>

See also [Is it possible to run commands in IPython with debugging?](https://stackoverflow.com/questions/9689378/is-it-possible-to-run-commands-in-ipython-with-debugging/46333421#46333421)
16,867,347
From what I have read, there are two ways to debug code in Python:

* With a traditional debugger such as `pdb` or `ipdb`. This supports commands such as `c` for `continue`, `n` for `step-over`, `s` for `step-into`, etc., but you don't have direct access to an IPython shell, which can be extremely useful for object inspection.
* Using **IPython** by [embedding](http://ipython.org/ipython-doc/dev/interactive/reference.html#embedding-ipython) an IPython shell in your code. You can do `from IPython import embed`, and then use `embed()` in your code. When your program/script hits an `embed()` statement, you are dropped into an IPython shell. This allows the full inspection of objects and testing of Python code using all the IPython goodies. However, when using `embed()` you can't **step-by-step** through the code anymore with handy keyboard shortcuts.

Is there any way to combine the best of both worlds? I.e.:

1. Be able to **step-by-step** through your code with handy pdb/ipdb keyboard shortcuts.
2. At any such step (e.g. on a given statement), have access to a full-fledged **IPython shell**.

### IPython debugging *as in* MATLAB:

An example of this type of "enhanced debugging" can be found in MATLAB, where the user **always** has full access to the MATLAB engine/shell, and she can still **step-by-step** through her code, define conditional breakpoints, etc. From what I have discussed with other users, this is the debugging feature that people miss the most when moving from MATLAB to IPython.

### IPython debugging in Emacs and other editors:

I don't want to make the question too specific, but I work mostly in Emacs, so I wonder if there is any way to bring this functionality into it. **Ideally**, Emacs (or the editor) would allow the programmer to set breakpoints anywhere in the code and communicate with the interpreter or debugger to have it stop at the location of your choice, and bring up a full IPython interpreter at that location.
2013/05/31
[ "https://Stackoverflow.com/questions/16867347", "https://Stackoverflow.com", "https://Stackoverflow.com/users/283296/" ]
Prefixing a `!` symbol to commands you type in pdb seems to have the same effect as doing something in an IPython shell. This works for accessing help for a certain function, or even variable names. Maybe this will help you to some extent. For example,

```
ipdb> help(numpy.transpose)
*** No help on (numpy.transpose)
```

But `!help(numpy.transpose)` will give you the expected help page on `numpy.transpose`. Similarly for variable names: say you have a variable `l`; typing `l` in pdb lists the code, but `!l` prints the value of `l`.
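You can see the `!` behaviour non-interactively by driving `pdb` with a scripted command stream instead of a terminal (the command strings and variable names below are my own; passing `stdin`/`stdout` to `pdb.Pdb` makes it read commands from the stream):

```python
import io
import pdb

# "!l = [1, 2]" forces pdb to execute a Python statement (assigning to a
# variable named l), whereas a bare "l" would run pdb's list command.
commands = io.StringIO("!l = [1, 2]\np l\ncontinue\n")
output = io.StringIO()

debugger = pdb.Pdb(stdin=commands, stdout=output)
debugger.run("x = 0")   # stops before the first line; our commands then run

print("[1, 2]" in output.getvalue())  # True
```

The `p l` command then prints the value assigned by the `!` statement, mirroring the interactive session shown above.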
If you put `import ipdb; ipdb.set_trace()` in a cell outside a function, it will raise an error.

Using `%pdb` or `%debug`, you can only see the final error state; you cannot step through the code. I use the following technique:

```
%%writefile temp.py
.....cell code.....
```

This saves the code of the cell to the file **temp.py**. Then `%run -i -d temp.py` will run the cell code under pdb.

`-i`: run the file in IPython’s namespace instead of an empty one.

`-d`: run your program under the control of pdb, the Python debugger.
16,867,347
From what I have read, there are two ways to debug code in Python:

* With a traditional debugger such as `pdb` or `ipdb`. This supports commands such as `c` for `continue`, `n` for `step-over`, `s` for `step-into`, etc., but you don't have direct access to an IPython shell, which can be extremely useful for object inspection.
* Using **IPython** by [embedding](http://ipython.org/ipython-doc/dev/interactive/reference.html#embedding-ipython) an IPython shell in your code. You can do `from IPython import embed`, and then use `embed()` in your code. When your program/script hits an `embed()` statement, you are dropped into an IPython shell. This allows the full inspection of objects and testing of Python code using all the IPython goodies. However, when using `embed()` you can't **step-by-step** through the code anymore with handy keyboard shortcuts.

Is there any way to combine the best of both worlds? I.e.:

1. Be able to **step-by-step** through your code with handy pdb/ipdb keyboard shortcuts.
2. At any such step (e.g. on a given statement), have access to a full-fledged **IPython shell**.

### IPython debugging *as in* MATLAB:

An example of this type of "enhanced debugging" can be found in MATLAB, where the user **always** has full access to the MATLAB engine/shell, and she can still **step-by-step** through her code, define conditional breakpoints, etc. From what I have discussed with other users, this is the debugging feature that people miss the most when moving from MATLAB to IPython.

### IPython debugging in Emacs and other editors:

I don't want to make the question too specific, but I work mostly in Emacs, so I wonder if there is any way to bring this functionality into it. **Ideally**, Emacs (or the editor) would allow the programmer to set breakpoints anywhere in the code and communicate with the interpreter or debugger to have it stop at the location of your choice, and bring up a full IPython interpreter at that location.
2013/05/31
[ "https://Stackoverflow.com/questions/16867347", "https://Stackoverflow.com", "https://Stackoverflow.com/users/283296/" ]
What about `ipdb.set_trace()`? In your code:

`import ipdb; ipdb.set_trace()`

**update**: now in Python 3.7, we can write `breakpoint()`. It works the same, but it also obeys the `PYTHONBREAKPOINT` environment variable. This feature comes from [this PEP](https://www.python.org/dev/peps/pep-0553/).

This allows for full inspection of your code, and you have access to commands such as `c` (continue), `n` (execute next line), `s` (step into the method at point) and so on.

See the [ipdb repo](https://github.com/gotcha/ipdb) and [a list of commands](https://wangchuan.github.io/coding/2017/07/12/ipdb-cheat-sheet.html). [IPython](https://ipython.org/) is now called (edit: part of) [Jupyter](https://jupyter.org/).

---

ps: note that an ipdb command takes precedence over python code. So in order to write `list(foo)` you'd need `print(list(foo))`, or `!list(foo)`.

Also, if you like the ipython prompt (its emacs and vim modes, history, completions, …) it's easy to get the same for your project, since it's based on the [python prompt toolkit](https://github.com/jonathanslenders/python-prompt-toolkit/).
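Under the hood, `breakpoint()` just calls `sys.breakpointhook()`, which is what the `PYTHONBREAKPOINT` environment variable configures. You can see the plumbing directly by swapping in a toy hook (the hook and the `calls` list below are my own stand-ins for a debugger):

```python
import sys

calls = []

def recording_hook(*args, **kwargs):
    # Stand-in for a debugger: just record that breakpoint() fired.
    calls.append("hit")

sys.breakpointhook = recording_hook
breakpoint()   # runs recording_hook instead of pdb.set_trace()
print(calls)   # ['hit']
```

This is also how `PYTHONBREAKPOINT=0` disables breakpoints entirely: the default hook consults that variable on every call.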
Did you try [this tip](http://vjethava.blogspot.fr/2010/11/matlabs-keyboard-command-in-python.html)?

> Or better still, use ipython, and call:
>
> ```
> from IPython.Debugger import Tracer; debug_here = Tracer()
> ```
>
> then you can just use
>
> ```
> debug_here()
> ```
>
> whenever you want to set a breakpoint
16,867,347
From what I have read, there are two ways to debug code in Python:

* With a traditional debugger such as `pdb` or `ipdb`. This supports commands such as `c` for `continue`, `n` for `step-over`, `s` for `step-into`, etc., but you don't have direct access to an IPython shell, which can be extremely useful for object inspection.
* Using **IPython** by [embedding](http://ipython.org/ipython-doc/dev/interactive/reference.html#embedding-ipython) an IPython shell in your code. You can do `from IPython import embed`, and then use `embed()` in your code. When your program/script hits an `embed()` statement, you are dropped into an IPython shell. This allows the full inspection of objects and testing of Python code using all the IPython goodies. However, when using `embed()` you can't **step-by-step** through the code anymore with handy keyboard shortcuts.

Is there any way to combine the best of both worlds? I.e.:

1. Be able to **step-by-step** through your code with handy pdb/ipdb keyboard shortcuts.
2. At any such step (e.g. on a given statement), have access to a full-fledged **IPython shell**.

### IPython debugging *as in* MATLAB:

An example of this type of "enhanced debugging" can be found in MATLAB, where the user **always** has full access to the MATLAB engine/shell, and she can still **step-by-step** through her code, define conditional breakpoints, etc. From what I have discussed with other users, this is the debugging feature that people miss the most when moving from MATLAB to IPython.

### IPython debugging in Emacs and other editors:

I don't want to make the question too specific, but I work mostly in Emacs, so I wonder if there is any way to bring this functionality into it. **Ideally**, Emacs (or the editor) would allow the programmer to set breakpoints anywhere in the code and communicate with the interpreter or debugger to have it stop at the location of your choice, and bring up a full IPython interpreter at that location.
2013/05/31
[ "https://Stackoverflow.com/questions/16867347", "https://Stackoverflow.com", "https://Stackoverflow.com/users/283296/" ]
Looks like the approach in @gaborous's answer [is deprecated](https://github.com/ipython/ipython/pull/9731/).

The new approach seems to be:

```
from IPython.core import debugger
debug = debugger.Pdb().set_trace

def buggy_method():
    debug()
```
From Python 3.2, pdb has the `interact` command, which gives you access to the full python/ipython command space.
16,867,347
From what I have read, there are two ways to debug code in Python:

* With a traditional debugger such as `pdb` or `ipdb`. This supports commands such as `c` for `continue`, `n` for `step-over`, `s` for `step-into`, etc., but you don't have direct access to an IPython shell, which can be extremely useful for object inspection.
* Using **IPython** by [embedding](http://ipython.org/ipython-doc/dev/interactive/reference.html#embedding-ipython) an IPython shell in your code. You can do `from IPython import embed`, and then use `embed()` in your code. When your program/script hits an `embed()` statement, you are dropped into an IPython shell. This allows the full inspection of objects and testing of Python code using all the IPython goodies. However, when using `embed()` you can't **step-by-step** through the code anymore with handy keyboard shortcuts.

Is there any way to combine the best of both worlds? I.e.:

1. Be able to **step-by-step** through your code with handy pdb/ipdb keyboard shortcuts.
2. At any such step (e.g. on a given statement), have access to a full-fledged **IPython shell**.

### IPython debugging *as in* MATLAB:

An example of this type of "enhanced debugging" can be found in MATLAB, where the user **always** has full access to the MATLAB engine/shell, and she can still **step-by-step** through her code, define conditional breakpoints, etc. From what I have discussed with other users, this is the debugging feature that people miss the most when moving from MATLAB to IPython.

### IPython debugging in Emacs and other editors:

I don't want to make the question too specific, but I work mostly in Emacs, so I wonder if there is any way to bring this functionality into it. **Ideally**, Emacs (or the editor) would allow the programmer to set breakpoints anywhere in the code and communicate with the interpreter or debugger to have it stop at the location of your choice, and bring up a full IPython interpreter at that location.
2013/05/31
[ "https://Stackoverflow.com/questions/16867347", "https://Stackoverflow.com", "https://Stackoverflow.com/users/283296/" ]
From Python 3.2, pdb has the `interact` command, which gives you access to the full python/ipython command space.
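A scripted illustration of the `interact` command (the command streams below are my own): drive `pdb` from a `StringIO`, and let the REPL that `interact` opens consume a line from `sys.stdin`. Inside that REPL you have a plain Python prompt over the current frame's namespace:

```python
import contextlib
import io
import pdb
import sys

pdb_commands = io.StringIO("next\ninteract\ncontinue\n")
repl_input = io.StringIO("print(x * 2)\n")   # read by the interact REPL
pdb_output = io.StringIO()
captured = io.StringIO()

sys.stdin = repl_input
debugger = pdb.Pdb(stdin=pdb_commands, stdout=pdb_output)
with contextlib.redirect_stdout(captured):
    # 'next' executes x = 21, then 'interact' opens a REPL in the frame,
    # where print(x * 2) can see x; EOF exits it and 'continue' finishes.
    debugger.run("x = 21\ny = x")
sys.stdin = sys.__stdin__

print("42" in captured.getvalue())  # True
```

Interactively you would of course just type `interact` at the `(Pdb)` prompt and press Ctrl-D to return to the debugger.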
If you put `import ipdb; ipdb.set_trace()` in a cell outside a function, it will raise an error.

Using `%pdb` or `%debug`, you can only see the final error state; you cannot step through the code. I use the following technique:

```
%%writefile temp.py
.....cell code.....
```

This saves the code of the cell to the file **temp.py**. Then `%run -i -d temp.py` will run the cell code under pdb.

`-i`: run the file in IPython’s namespace instead of an empty one.

`-d`: run your program under the control of pdb, the Python debugger.
16,867,347
From what I have read, there are two ways to debug code in Python:

* With a traditional debugger such as `pdb` or `ipdb`. This supports commands such as `c` for `continue`, `n` for `step-over`, `s` for `step-into`, etc., but you don't have direct access to an IPython shell, which can be extremely useful for object inspection.
* Using **IPython** by [embedding](http://ipython.org/ipython-doc/dev/interactive/reference.html#embedding-ipython) an IPython shell in your code. You can do `from IPython import embed`, and then use `embed()` in your code. When your program/script hits an `embed()` statement, you are dropped into an IPython shell. This allows the full inspection of objects and testing of Python code using all the IPython goodies. However, when using `embed()` you can't **step-by-step** through the code anymore with handy keyboard shortcuts.

Is there any way to combine the best of both worlds? I.e.:

1. Be able to **step-by-step** through your code with handy pdb/ipdb keyboard shortcuts.
2. At any such step (e.g. on a given statement), have access to a full-fledged **IPython shell**.

### IPython debugging *as in* MATLAB:

An example of this type of "enhanced debugging" can be found in MATLAB, where the user **always** has full access to the MATLAB engine/shell, and she can still **step-by-step** through her code, define conditional breakpoints, etc. From what I have discussed with other users, this is the debugging feature that people miss the most when moving from MATLAB to IPython.

### IPython debugging in Emacs and other editors:

I don't want to make the question too specific, but I work mostly in Emacs, so I wonder if there is any way to bring this functionality into it. **Ideally**, Emacs (or the editor) would allow the programmer to set breakpoints anywhere in the code and communicate with the interpreter or debugger to have it stop at the location of your choice, and bring up a full IPython interpreter at that location.
2013/05/31
[ "https://Stackoverflow.com/questions/16867347", "https://Stackoverflow.com", "https://Stackoverflow.com/users/283296/" ]
(Update on May 28, 2016) Using RealGUD in Emacs
===============================================

For anyone in Emacs, [this thread](https://github.com/rocky/emacs-dbgr/issues/96) shows how to accomplish everything described in the OP (and more) using

1. a new important debugger in Emacs called **RealGUD** which can operate with any debugger (including `ipdb`).
2. The Emacs package `isend-mode`.

The combination of these two packages is extremely powerful and allows one to recreate exactly the behavior described in the OP and do even more.

More info on [the wiki article](https://github.com/rocky/emacs-dbgr/wiki/ipdb-notes) of RealGUD for ipdb.

---

Original answer:
================

After having tried many different methods for debugging Python, including everything mentioned in this thread, one of my preferred ways of debugging Python with IPython is with embedded shells.

### Defining a custom embedded IPython shell:

Add the following to a script on your `PYTHONPATH`, so that the method `ipsh()` becomes available.

```
import inspect

# First import the embed function
from IPython.terminal.embed import InteractiveShellEmbed
from IPython.config.loader import Config

# Configure the prompt so that I know I am in a nested (embedded) shell
cfg = Config()
prompt_config = cfg.PromptManager
prompt_config.in_template = 'N.In <\\#>: '
prompt_config.in2_template = ' .\\D.: '
prompt_config.out_template = 'N.Out<\\#>: '

# Messages displayed when I drop into and exit the shell.
banner_msg = ("\n**Nested Interpreter:\n"
              "Hit Ctrl-D to exit interpreter and continue program.\n"
              "Note that if you use %kill_embedded, you can fully deactivate\n"
              "This embedded instance so it will never turn on again")
exit_msg = '**Leaving Nested interpreter'

# Wrap it in a function that gives me more context:
def ipsh():
    ipshell = InteractiveShellEmbed(config=cfg, banner1=banner_msg, exit_msg=exit_msg)

    frame = inspect.currentframe().f_back
    msg = 'Stopped at {0.f_code.co_filename} at line {0.f_lineno}'.format(frame)

    # Go back one level!
    # This is needed because the call to ipshell is inside the function ipsh()
    ipshell(msg, stack_depth=2)
```

Then, whenever I want to debug something in my code, I place `ipsh()` right at the location where I need to do object inspection, etc. For example, say I want to debug `my_function` below.

### Using it:

```
def my_function(b):
    a = b
    ipsh() # <- This will embed a full-fledged IPython interpreter
    a = 4
```

and then I invoke `my_function(2)` in one of the following ways:

1. Either by running a Python program that invokes this function from a Unix shell
2. Or by invoking it directly from IPython

Regardless of how I invoke it, the interpreter stops at the line that says `ipsh()`. Once you are done, you can do `Ctrl-D` and Python will resume execution (with any variable updates that you made). Note that, if you run the code from a regular IPython shell (case 2 above), the new IPython shell will be **nested** inside the one from which you invoked it, which is perfectly fine, but it's good to be aware of. Either way, once the interpreter stops on the location of `ipsh`, I can inspect the value of `a` (which will be `2`), see what functions and objects are defined, etc.

### The problem:

The solution above can be used to have Python stop anywhere you want in your code, and then drop you into a fully-fledged IPython interpreter. Unfortunately it does not let you add or remove breakpoints once you invoke the script, which is highly frustrating. In my opinion, this is the **only** thing that is preventing IPython from becoming a great debugging tool for Python.

### The best you can do for now:

A workaround is to place `ipsh()` a priori at the different locations where you want the Python interpreter to launch an IPython shell (i.e. a `breakpoint`). You can then "jump" between different pre-defined, hard-coded "breakpoints" with `Ctrl-D`, which would exit the current embedded IPython shell and stop again whenever the interpreter hits the next call to `ipsh()`.

If you go this route, one way to exit "debugging mode" and ignore all subsequent breakpoints is to use `ipshell.dummy_mode = True`, which will make Python ignore any subsequent instantiations of the `ipshell` object that we created above.
Looks like the approach in @gaborous's answer [is deprecated](https://github.com/ipython/ipython/pull/9731/).

The new approach seems to be:

```
from IPython.core import debugger
debug = debugger.Pdb().set_trace

def buggy_method():
    debug()
```
16,867,347
From what I have read, there are two ways to debug code in Python:

* With a traditional debugger such as `pdb` or `ipdb`. This supports commands such as `c` for `continue`, `n` for `step-over`, `s` for `step-into`, etc., but you don't have direct access to an IPython shell, which can be extremely useful for object inspection.
* Using **IPython** by [embedding](http://ipython.org/ipython-doc/dev/interactive/reference.html#embedding-ipython) an IPython shell in your code. You can do `from IPython import embed`, and then use `embed()` in your code. When your program/script hits an `embed()` statement, you are dropped into an IPython shell. This allows the full inspection of objects and testing of Python code using all the IPython goodies. However, when using `embed()` you can't **step-by-step** through the code anymore with handy keyboard shortcuts.

Is there any way to combine the best of both worlds? I.e.:

1. Be able to **step-by-step** through your code with handy pdb/ipdb keyboard shortcuts.
2. At any such step (e.g. on a given statement), have access to a full-fledged **IPython shell**.

### IPython debugging *as in* MATLAB:

An example of this type of "enhanced debugging" can be found in MATLAB, where the user **always** has full access to the MATLAB engine/shell, and she can still **step-by-step** through her code, define conditional breakpoints, etc. From what I have discussed with other users, this is the debugging feature that people miss the most when moving from MATLAB to IPython.

### IPython debugging in Emacs and other editors:

I don't want to make the question too specific, but I work mostly in Emacs, so I wonder if there is any way to bring this functionality into it. **Ideally**, Emacs (or the editor) would allow the programmer to set breakpoints anywhere in the code and communicate with the interpreter or debugger to have it stop at the location of your choice, and bring up a full IPython interpreter at that location.
2013/05/31
[ "https://Stackoverflow.com/questions/16867347", "https://Stackoverflow.com", "https://Stackoverflow.com/users/283296/" ]
Prefixing an "!" symbol to commands you type in pdb seems to have the same effect as doing something in an IPython shell. This works for accessing help for a certain function, or even variable names. Maybe this will help you to some extent. For example, ``` ipdb> help(numpy.transpose) *** No help on (numpy.transpose) ``` But !help(numpy.transpose) will give you the expected help page on numpy.transpose. Similarly for variable names, say you have a variable l, typing "l" in pdb lists the code, but !l prints the value of l.
The [Pyzo](http://www.pyzo.org/) IDE has similar capabilities as the OP asked for. You don't have to start in debug mode. Similarly to MATLAB, the commands are executed in the shell. When you set up a break-point in some source code line, the IDE stops the execution there and you can debug and issue regular IPython commands as well. It does seem however that step-into doesn't (yet?) work well (i.e. stopping in one line and then stepping into another function) unless you set up another break-point. Still, coming from MATLAB, this seems the best solution I've found.
16,867,347
From what I have read, there are two ways to debug code in Python: * With a traditional debugger such as `pdb` or `ipdb`. This supports commands such as `c` for `continue`, `n` for `step-over`, `s` for `step-into` etc.), but you don't have direct access to an IPython shell which can be extremely useful for object inspection. * Using **IPython** by [embedding](http://ipython.org/ipython-doc/dev/interactive/reference.html#embedding-ipython) an IPython shell in your code. You can do `from IPython import embed`, and then use `embed()` in your code. When your program/script hits an `embed()` statement, you are dropped into an IPython shell. This allows the full inspection of objects and testing of Python code using all the IPython goodies. However, when using `embed()` you can't **step-by-step** through the code anymore with handy keyboard shortcuts. Is there any way to combine the best of both worlds? I.e. 1. Be able to **step-by-step** through your code with handy pdb/ipdb keyboard shortcuts. 2. At any such step (e.g. on a given statement), have access to a full-fledged **IPython shell**. ### IPython debugging *as in* MATLAB: An example of this type of "enhanced debugging" can be found in MATLAB, where the user **always** has full access to the MATLAB engine/shell, and she can still **step-by-step** through her code, define conditional breakpoints, etc. From what I have discussed with other users, this is the debugging feature that people miss the most when moving from MATLAB to IPython. ### IPython debugging in Emacs and other editors: I don't want to make the question too specific, but I work mostly in Emacs, so I wonder if there is any way to bring this functionality into it. **Ideally**, Emacs (or the editor) would allow the programmer to set breakpoints anywhere on the code and communicate with the interpreter or debugger to have it stop in the location of your choice, and bring to a full IPython interpreter on that location.
2013/05/31
[ "https://Stackoverflow.com/questions/16867347", "https://Stackoverflow.com", "https://Stackoverflow.com/users/283296/" ]
(Update on May 28, 2016) Using RealGUD in Emacs =============================================== For anyone in Emacs, [this thread](https://github.com/rocky/emacs-dbgr/issues/96) shows how to accomplish everything described in the OP (and more) using 1. a new important debugger in Emacs called **RealGUD** which can operate with any debugger (including `ipdb`). 2. The Emacs package `isend-mode`. The combination of these two packages is extremely powerful and allows one to recreate exactly the behavior described in the OP and do even more. More info on [the wiki article](https://github.com/rocky/emacs-dbgr/wiki/ipdb-notes) of RealGUD for ipdb. --- Original answer: ================ After having tried many different methods for debugging Python, including everything mentioned in this thread, one of my preferred ways of debugging Python with IPython is with embedded shells. ### Defining a custom embedded IPython shell: Add the following on a script to your `PYTHONPATH`, so that the method `ipsh()` becomes available. ``` import inspect # First import the embed function from IPython.terminal.embed import InteractiveShellEmbed from IPython.config.loader import Config # Configure the prompt so that I know I am in a nested (embedded) shell cfg = Config() prompt_config = cfg.PromptManager prompt_config.in_template = 'N.In <\\#>: ' prompt_config.in2_template = ' .\\D.: ' prompt_config.out_template = 'N.Out<\\#>: ' # Messages displayed when I drop into and exit the shell. 
banner_msg = ("\n**Nested Interpreter:\n" "Hit Ctrl-D to exit interpreter and continue program.\n" "Note that if you use %kill_embedded, you can fully deactivate\n" "This embedded instance so it will never turn on again") exit_msg = '**Leaving Nested interpreter' # Wrap it in a function that gives me more context: def ipsh(): ipshell = InteractiveShellEmbed(config=cfg, banner1=banner_msg, exit_msg=exit_msg) frame = inspect.currentframe().f_back msg = 'Stopped at {0.f_code.co_filename} at line {0.f_lineno}'.format(frame) # Go back one level! # This is needed because the call to ipshell is inside the function ipsh() ipshell(msg,stack_depth=2) ``` Then, whenever I want to debug something in my code, I place `ipsh()` right at the location where I need to do object inspection, etc. For example, say I want to debug `my_function` below ### Using it: ``` def my_function(b): a = b ipsh() # <- This will embed a full-fledged IPython interpreter a = 4 ``` and then I invoke `my_function(2)` in one of the following ways: 1. Either by running a Python program that invokes this function from a Unix shell 2. Or by invoking it directly from IPython Regardless of how I invoke it, the interpreter stops at the line that says `ipsh()`. Once you are done, you can do `Ctrl-D` and Python will resume execution (with any variable updates that you made). Note that, if you run the code from a regular IPython shell (case 2 above), the new IPython shell will be **nested** inside the one from which you invoked it, which is perfectly fine, but it's good to be aware of. Either way, once the interpreter stops on the location of `ipsh`, I can inspect the value of `a` (which will be `2`), see what functions and objects are defined, etc. ### The problem: The solution above can be used to have Python stop anywhere you want in your code, and then drop you into a fully-fledged IPython interpreter.
Unfortunately it does not let you add or remove breakpoints once you invoke the script, which is highly frustrating. In my opinion, this is the **only** thing that is preventing IPython from becoming a great debugging tool for Python. ### The best you can do for now: A workaround is to place `ipsh()` a priori at the different locations where you want the Python interpreter to launch an IPython shell (i.e. a `breakpoint`). You can then "jump" between different pre-defined, hard-coded "breakpoints" with `Ctrl-D`, which would exit the current embedded IPython shell and stop again whenever the interpreter hits the next call to `ipsh()`. If you go this route, one way to exit "debugging mode" and ignore all subsequent breakpoints, is to use `ipshell.dummy_mode = True` which will make Python ignore any subsequent instantiations of the `ipshell` object that we created above.
One option is to use an IDE like [Spyder](https://pythonhosted.org/spyder/) which should allow you to interact with your code while debugging (using an IPython console, in fact). In fact, Spyder is very MATLAB-like, which I presume was intentional. That includes variable inspectors, variable editing, built-in access to documentation, etc.
16,867,347
From what I have read, there are two ways to debug code in Python: * With a traditional debugger such as `pdb` or `ipdb`. This supports commands such as `c` for `continue`, `n` for `step-over`, `s` for `step-into` etc.), but you don't have direct access to an IPython shell which can be extremely useful for object inspection. * Using **IPython** by [embedding](http://ipython.org/ipython-doc/dev/interactive/reference.html#embedding-ipython) an IPython shell in your code. You can do `from IPython import embed`, and then use `embed()` in your code. When your program/script hits an `embed()` statement, you are dropped into an IPython shell. This allows the full inspection of objects and testing of Python code using all the IPython goodies. However, when using `embed()` you can't **step-by-step** through the code anymore with handy keyboard shortcuts. Is there any way to combine the best of both worlds? I.e. 1. Be able to **step-by-step** through your code with handy pdb/ipdb keyboard shortcuts. 2. At any such step (e.g. on a given statement), have access to a full-fledged **IPython shell**. ### IPython debugging *as in* MATLAB: An example of this type of "enhanced debugging" can be found in MATLAB, where the user **always** has full access to the MATLAB engine/shell, and she can still **step-by-step** through her code, define conditional breakpoints, etc. From what I have discussed with other users, this is the debugging feature that people miss the most when moving from MATLAB to IPython. ### IPython debugging in Emacs and other editors: I don't want to make the question too specific, but I work mostly in Emacs, so I wonder if there is any way to bring this functionality into it. **Ideally**, Emacs (or the editor) would allow the programmer to set breakpoints anywhere on the code and communicate with the interpreter or debugger to have it stop in the location of your choice, and bring to a full IPython interpreter on that location.
2013/05/31
[ "https://Stackoverflow.com/questions/16867347", "https://Stackoverflow.com", "https://Stackoverflow.com/users/283296/" ]
The right, easy, cool, exact answer for the question is to use the `%run` magic with the `-d` flag. ``` In [4]: run -d myscript.py NOTE: Enter 'c' at the ipdb> prompt to continue execution. > /cygdrive/c/Users/mycodefolder/myscript.py(4)<module>() 2 3 ----> 4 a=1 5 b=2 ```
If you put `import ipdb; ipdb.set_trace()` in a cell outside a function, it will raise an error. Using `%pdb` or `%debug`, you can only see the final error result; you cannot step through the code. I use the following trick: ``` %%writefile temp.py .....cell code..... ``` This saves the code of the cell to the file **temp.py**, and then `%run -i -d temp.py` runs the cell code under pdb. `-i`: run the file in IPython’s namespace instead of an empty one. `-d`: run your program under the control of pdb, the Python debugger.
43,149,637
I have a string in Python containing a large text file (over 1MiB). I need to split it into chunks. Constraints: * chunks can be split only at newline characters, and * len(chunk) must be as big as possible but smaller than LIMIT (i.e. 100KiB) Lines longer than LIMIT can be omitted. Any idea how to implement this nicely in Python? Thank you in advance.
2017/03/31
[ "https://Stackoverflow.com/questions/43149637", "https://Stackoverflow.com", "https://Stackoverflow.com/users/705676/" ]
Following the suggestion of Linuxios you could use rfind to find the last newline within the limit and split at this point. If no newline character is found, the chunk is too large and can be dismissed. ``` chunks = [] not_chunked_text = input_text while not_chunked_text: if len(not_chunked_text) <= LIMIT: chunks.append(not_chunked_text) break split_index = not_chunked_text.rfind("\n", 0, LIMIT) if split_index == -1: # The chunk is too big, so everything until the next newline is deleted try: not_chunked_text = not_chunked_text.split("\n", 1)[1] except IndexError: # No "\n" in not_chunked_text, i.e. the end of the input text was reached break else: chunks.append(not_chunked_text[:split_index+1]) not_chunked_text = not_chunked_text[split_index+1:] ``` `rfind("\n", 0, LIMIT)` returns the highest index where a newline character was found within the bounds of your LIMIT. `not_chunked_text[:split_index+1]` is needed so that the newline character is included in the chunk. I interpreted the LIMIT as the biggest length of a chunk that is allowed. If a chunk with the length of LIMIT should not be allowed you have to add a `-1` after every `LIMIT` in this code.
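For illustration, the loop above can be exercised end-to-end with small placeholder values; `LIMIT` and `input_text` below are assumptions chosen just for the demo, not values from the question.

```python
# Demo of the rfind-based chunking above, with small placeholder values.
LIMIT = 10  # assumed chunk-size limit for the demo
input_text = "aaa\nbbb\nccc\nthis-line-is-too-long\nddd\n"

chunks = []
not_chunked_text = input_text
while not_chunked_text:
    if len(not_chunked_text) <= LIMIT:
        chunks.append(not_chunked_text)
        break
    split_index = not_chunked_text.rfind("\n", 0, LIMIT)
    if split_index == -1:
        # The chunk is too big: drop everything up to the next newline
        try:
            not_chunked_text = not_chunked_text.split("\n", 1)[1]
        except IndexError:
            # No "\n" left, so the end of the input was reached
            break
    else:
        chunks.append(not_chunked_text[:split_index + 1])
        not_chunked_text = not_chunked_text[split_index + 1:]

print(chunks)  # ['aaa\nbbb\n', 'ccc\n', 'ddd\n'] -- the over-long line was dropped
```

Every emitted chunk ends with a newline and stays within the limit, and the line longer than LIMIT is silently skipped, matching the constraints in the question.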
Here is my not-so-pythonic solution: ``` def line_chunks(lines, chunk_limit): chunks = [] chunk = [] chunk_len = 0 for line in lines: if len(line) + chunk_len < chunk_limit: chunk.append(line) chunk_len += len(line) else: chunks.append(chunk) chunk = [line] chunk_len = len(line) chunks.append(chunk) return chunks chunks = line_chunks(data.split('\n'), 150) print '\n---new-chunk---\n'.join(['\n'.join(chunk) for chunk in chunks]) ```
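A quick sanity check of the `line_chunks` approach (the function is repeated here so the snippet runs standalone; the sample lines and limit are arbitrary):

```python
def line_chunks(lines, chunk_limit):
    # Same logic as above: greedily pack lines until the limit is reached
    chunks = []
    chunk = []
    chunk_len = 0
    for line in lines:
        if len(line) + chunk_len < chunk_limit:
            chunk.append(line)
            chunk_len += len(line)
        else:
            chunks.append(chunk)
            chunk = [line]
            chunk_len = len(line)
    chunks.append(chunk)
    return chunks

print(line_chunks(["aaa", "bbb", "ccc"], 7))  # [['aaa', 'bbb'], ['ccc']]
```

Note that the length check counts only the characters of the lines themselves, not the newline separators re-inserted when the chunks are joined back together, so a joined chunk can slightly exceed the limit.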
12,905,300
I am trying to develop an application that creates an image and fills it with color pixels using bilinear interpolation and then displays it. My code so far is the following: ``` #include <QtCore/QCoreApplication> #include <opencv/cv.h> #include <opencv/highgui.h> #include <iostream> #include <string> #include <sys/stat.h> using namespace cv; int main() { Mat image; image.create( 500, 500, CV_8UC3); //upper left corner Vec3b ul( 255, 0, 0 ); //upper right corner Vec3b ur( 0, 255, 0 ); //bottom left corner Vec3b bl( 0, 0, 255 ); //bottom right corner Vec3b br( 255, 0, 255 ); //for(int y=0;y<image.rows; y++) //for(int x=0;x<image.cols;x++) // call function to add noise namedWindow("Colored Pixels"); imshow("Colored Pixels", image); // shows image for 5 seconds waitKey(10000); return 0; } ``` When I run my program in debug mode I get the following two pop up windows: ``` Unexpected GDB Exit The gdb process exited unexpectedly (code 0) ``` and ``` Executable Failed During startup program exited with code 0xc0000138 ``` my debugger log screen shows the following: ``` sStarting debugger 'GdbEngine' for ABI 'x86-windows-msys-pe-32bit'... 
dStart parameters: 'pixelGradient' mode: 1 dABI: x86-windows-msys-pe-32bit dExecutable: C:\Users\encore\Desktop\Lectures\Year 3\QtCreator\pixelGradient-build-desktop-Qt_4_8_1_for_Desktop_-_MinGW__Qt_SDK__Debug\debug\pixelGradient.exe [terminal] dDirectory: C:\Users\encore\Desktop\Lectures\Year 3\QtCreator\pixelGradient-build-desktop-Qt_4_8_1_for_Desktop_-_MinGW__Qt_SDK__Debug dDebugger: C:\QtSDK\pythongdb\python_2.7based\gdb-i686-pc-mingw32.exe dProject: C:\Users\encore\Desktop\Lectures\Year 3\QtCreator\pixelGradient (built: C:\Users\encore\Desktop\Lectures\Year 3\QtCreator\pixelGradient-build-desktop-Qt_4_8_1_for_Desktop_-_MinGW__Qt_SDK__Debug) dQt: C:\QtSDK\Desktop\Qt\4.8.1\mingw dQML server: 127.0.0.1:3768 dSysroot: dDebug Source Loaction: dSymbol file: dDumper libraries: C:\QtSDK\Desktop\Qt\4.8.1\mingw\\qtc-debugging-helper\ C:\QtSDK\QtCreator\qtc-debugging-helper\168937759\ C:\Users\encore\AppData\Local\Nokia\QtCreator\qtc-debugging-helper\168937759\ d dDebugger settings: dUseAlternatingRowColours: false (default: false) dFontSizeFollowsEditor: false (default: false) dUseMessageBoxForSignals: true (default: true) dAutoQuit: false (default: false) dLogTimeStamps: false (default: false) dVerboseLog: false (default: false) dCloseBuffersOnExit: false (default: false) dSwitchModeOnExit: false (default: false) dUseDebuggingHelper: true (default: true) dUseCodeModel: true (default: true) dShowThreadNames: false (default: false) dUseToolTips: false (default: false) dUseToolTipsInLocalsView: false (default: false) dUseToolTipsInBreakpointsView: false (default: false) dUseAddressInBreakpointsView: false (default: false) dUseAddressInStackView: false (default: false) dRegisterForPostMortem: false (default: false) dLoadGdbInit: true (default: true) dScriptFile: (default: ) dWatchdogTimeout: 20 (default: 20) dAutoEnrichParameters: false (default: false) dTargetAsync: false (default: false) dMaximalStackDepth: 20 (default: 20) dAlwaysAdjustStackColumnWidths: false (default: 
false) dShowStandardNamespace: true (default: true) dShowQtNamespace: true (default: true) dSortStructMembers: true (default: true) dAutoDerefPointers: true (default: true) dAlwaysAdjustLocalsColumnWidths: false (default: false) dListSourceFiles: false (default: false) dSkipKnownFrames: false (default: false) dEnableReverseDebugging: false (default: false) dAllPluginBreakpoints: true (default: true) dSelectedPluginBreakpoints: false (default: false) dAdjustBreakpointLocations: true (default: true) dAlwaysAdjustBreakpointsColumnWidths: false (default: false) dNoPluginBreakpoints: false (default: false) dSelectedPluginBreakpointsPattern: .* (default: .*) dBreakOnThrow: false (default: false) dBreakOnCatch: false (default: false) dBreakOnWarning: false (default: false) dBreakOnFatal: false (default: false) dAlwaysAdjustRegistersColumnWidths: false (default: false) dAlwaysAdjustSnapshotsColumnWidths: false (default: false) dAlwaysAdjustThreadsColumnWidths: false (default: false) dAlwaysAdjustModulesColumnWidths: false (default: false) dState changed from DebuggerNotReady(0) to EngineSetupRequested(1). 
dQUEUE: SETUP ENGINE dCALL: SETUP ENGINE dTRYING TO START ADAPTER dENABLING TEST CASE: 0 dSTARTING C:/QtSDK/pythongdb/python_2.7based/gdb-i686-pc-mingw32.exe -i mi dGDB STARTED, INITIALIZING IT <1show version <2-list-features <3set print object on <4set breakpoint pending on <5set print elements 10000 <6set overload-resolution off <7handle SIGSEGV nopass stop print <8set unwindonsignal on <9pwd <10set width 0 <11set height 0 <12set auto-solib-add on <13-interpreter-exec console "maintenance set internal-warning quit no" <14-interpreter-exec console "maintenance set internal-error quit no" <15-interpreter-exec console "disassemble 0 0" <16-interpreter-exec console "python execfile('C:/QtSDK/QtCreator/share/qtcreator/dumper/bridge.py')" <17-interpreter-exec console "python execfile('C:/QtSDK/QtCreator/share/qtcreator/dumper/dumper.py')" <18-interpreter-exec console "python execfile('C:/QtSDK/QtCreator/share/qtcreator/dumper/qttypes.py')" <19-interpreter-exec console "bbsetup" dADAPTER SUCCESSFULLY STARTED dNOTE: ENGINE SETUP OK dState changed from EngineSetupRequested(1) to EngineSetupOk(3). dQUEUE: SETUP INFERIOR dState changed from EngineSetupOk(3) to InferiorSetupRequested(4). dQUEUE: SETUP INFERIOR dCALL: SETUP INFERIOR sSetting up inferior... <20set substitute-path C:/iwmake/build_mingw_opensource C:/QtSDK/Desktop/Qt/4.8.1/mingw <21set substitute-path C:/ndk_buildrepos/qt-desktop/src C:/QtSDK/Desktop/Qt/4.8.1/mingw <22set substitute-path C:/qt-greenhouse/Trolltech/Code_less_create_more/Trolltech/Code_less_create_more/Troll/4.6/qt C:/QtSDK/Desktop/Qt/4.8.1/mingw Attaching to 1260 (6412) dTaking notice of pid 1260 <23attach 1260 >=thread-group-added,id="i1" >~"GNU gdb (GDB) 7.2\n" >~"Copyright (C) 2010 Free Software Foundation, Inc.\n" >~"License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>\nThis is free software: you are free to change and redistribute it.\nThere is NO WARRANTY, to the extent permitted by law. 
Type \"show copying\"\nand \"show warranty\" for details.\n" >~"This GDB was configured as \"i686-pc-mingw32\".\nFor bug reporting instructions, please see:\n" >~"<http://www.gnu.org/software/gdb/bugs/>.\n" >&"show version\n" >~"GNU gdb (GDB) 7.2\n" >~"Copyright (C) 2010 Free Software Foundation, Inc.\n" >~"License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>\nThis is free software: you are free to change and redistribute it.\nThere is NO WARRANTY, to the extent permitted by law. Type \"show copying\"\nand \"show warranty\" for details.\n" >~"This GDB was configured as \"i686-pc-mingw32\".\nFor bug reporting instructions, please see:\n" >~"<http://www.gnu.org/software/gdb/bugs/>.\n" >1^done dPARSING VERSION: 1^done d dSUPPORTED GDB VERSION GNU gdb (GDB) 7.2 dCopyright (C) 2010 Free Software Foundation, Inc. dLicense GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> dThis is free software: you are free to change and redistribute it. dThere is NO WARRANTY, to the extent permitted by law. Type "show copying" dand "show warranty" for details. dThis GDB was configured as "i686-pc-mingw32". dFor bug reporting instructions, please see: d<http://www.gnu.org/software/gdb/bugs/>. dGNU gdb (GDB) 7.2 dCopyright (C) 2010 Free Software Foundation, Inc. dLicense GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> dThis is free software: you are free to change and redistribute it. dThere is NO WARRANTY, to the extent permitted by law. Type "show copying" dand "show warranty" for details. dThis GDB was configured as "i686-pc-mingw32". dFor bug reporting instructions, please see: d<http://www.gnu.org/software/gdb/bugs/>. 
d dUSING GDB VERSION: 70200, BUILD: 2010 >2^done,features=["frozen-varobjs","pending-breakpoints","thread-info","python"] dFEATURES: 2^done,data={features=["frozen-varobjs","pending-breakpoints","thread-info","python"]} d >&"set print object on\n" >3^done >&"set breakpoint pending on\n" >4^done >&"set print elements 10000\n" >5^done >&"set overload-resolution off\n" >6^done >&"handle SIGSEGV nopass stop print\n" >~"Signal Stop\tPrint\tPass to program\tDescription\n" >~"SIGSEGV Yes\tYes\tNo\t\tSegmentation fault\n" >7^done >&"set unwindonsignal on\n" >8^done >&"pwd\n" >~"Working directory C:\\Users\\encore.\n" >9^done >&"set width 0\n" >10^done >&"set height 0\n" >11^done >&"set auto-solib-add on\n" >12^done >13^done >14^done >&"A syntax error in expression, near `0'.\n" >15^error,msg="A syntax error in expression, near `0'." >16^done >17^done >18^done >~"dumpers=[{type=\"QLinkedList\",formats=\"\"},{type=\"QSize\",formats=\"\"},{type=\"QFileInfo\",formats=\"\"},{type=\"QAbstractItemModel\",formats=\"\"},{type=\"std__stack\",formats=\"\"},{type=\"QTextDocument\",formats=\"\"},{type=\"QTJSC__JSValue\",formats=\"\"},{type=\"__gnu_cxx__hash_set\",formats=\"\"},{type=\"QStringList\",formats=\"\"},{type=\"QRegion\",formats=\"\"},{type=\"std__wstring\",formats=\"\"},{type=\"QString\",formats=\"Inline,Separate 
Window\",editable=\"true\"},{type=\"QTextCodec\",formats=\"\"},{type=\"QBasicAtomicInt\",formats=\"\"},{type=\"QScriptValue\",formats=\"\"},{type=\"QTime\",formats=\"\"},{type=\"QSharedData\",formats=\"\"},{type=\"std__vector\",formats=\"\",editable=\"true\"},{type=\"QRegExp\",formats=\"\"},{type=\"QTextCursor\",formats=\"\"},{type=\"QxXmlAttributes\",formats=\"\"},{type=\"QDateTime\",formats=\"\"},{type=\"QList\",formats=\"\"},{type=\"QStandardItem\",formats=\"\"},{type=\"std__deque\",formats=\"\"},{type=\"QFixed\",formats=\"\"},{type=\"QHash\",formats=\"\"},{type=\"QSharedPointer\",formats=\"\"},{type=\"QUrl\",formats=\"\"},{type=\"std__set\",formats=\"\"},{type=\"std__list\",formats=\"\"},{type=\"std__basic_string\",formats=\"\"},{type=\"QPoint\",formats=\"\"},{type=\"QHostAddress\",formats=\"\"},{type=\"QStack\",formats=\"\"},{type=\"QScopedPointer\",formats=\"\"},{type=\"QRectF\",formats=\"\"},{type=\"QMultiMap\",formats=\"\"},{type=\"QMapNode\",formats=\"\"},{type=\"QModelIndex\",formats=\"Normal,Enhanced\"},{type=\"QLocale\",formats=\"\"},{type=\"QSharedDataPointer\",formats=\"\"},{type=\"QVariant\",formats=\"\"},{type=\"string\",formats=\"\",editable=\"true\"},{type=\"QBasicAtomicPointer\",formats=\"\"},{type=\"QVector\",formats=\"\",editable=\"true\"},{type=\"QDate\",formats=\"\"},{type=\"QFile\",formats=\"\"},{type=\"QAtomicInt\",formats=\"\"},{type=\"TBuf\",formats=\"\"},{type=\"QWeakPointer\",formats=\"\"},{type=\"QSizeF\",formats=\"\"},{type=\"__m128\",formats=\"As Floats,As 
Doubles\"},{type=\"boost__optional\",formats=\"\"},{type=\"wstring\",formats=\"\"},{type=\"QPointF\",formats=\"\"},{type=\"TLitC\",formats=\"\"},{type=\"QRect\",formats=\"\"},{type=\"QByteArray\",formats=\"\"},{type=\"QMap\",formats=\"\"},{type=\"boost__shared_ptr\",formats=\"\"},{type=\"QChar\",formats=\"\"},{type=\"QDir\",formats=\"\"},{type=\"QPixmap\",formats=\"\"},{type=\"QFlags\",formats=\"\"},{type=\"std__map\",formats=\"\"},{type=\"QHashNode\",formats=\"\"},{type=\"QTemporaryFile\",formats=\"\"},{type=\"QObject\",formats=\"\"},{type=\"Eigen__Matrix\",formats=\"\"},{type=\"std__string\",formats=\"\",editable=\"true\"},{type=\"QImage\",formats=\"Normal,Displayed\"},{type=\"QSet\",formats=\"\"},],hasInferiorThreadList=\"1\"\n" >19^done >&"set substitute-path C:/iwmake/build_mingw_opensource C:/QtSDK/Desktop/Qt/4.8.1/mingw\n" >20^done >&"set substitute-path C:/ndk_buildrepos/qt-desktop/src C:/QtSDK/Desktop/Qt/4.8.1/mingw\n" >21^done >&"set substitute-path C:/qt-greenhouse/Trolltech/Code_less_create_more/Trolltech/Code_less_create_more/Troll/4.6/qt C:/QtSDK/Desktop/Qt/4.8.1/mingw\n" >22^done >&"attach 1260\n" >~"Attaching to process 1260\n" >=thread-group-started,id="i1",pid="1260" sThread group i1 created >=thread-created,id="1",group-id="i1" sThread 1 created >~"[New Thread 1260.0x190c]\n" s[New Thread 1260.0x190c] >23^running Inferior attached, thread 6412 resumed sSetting breakpoints... dSetting breakpoints... <24maint print msymbols C:/Users/encore/AppData/Local/Temp/gdb_ns_.Dq6408 >*running,thread-id="all" >=thread-created,id="2",group-id="i1" sThread 2 created >~"[New Thread 1260.0xf9c]\n" s[New Thread 1260.0xf9c] >*running,thread-id="all" >=thread-exited,id="2",group-id="i1" sThread 2 in group i1 exited >=thread-exited,id="1",group-id="i1" sThread 1 in group i1 exited >=thread-group-exited,id="i1" sThread group i1 exited >&"During startup program exited with code 0xc0000138.\n" >23^error,msg="During startup program exited with code 0xc0000138." 
dCOOKIE FOR TOKEN 23 ALREADY EATEN (InferiorSetupRequested). TWO RESPONSES FOR ONE COMMAND? dNOTE: INFERIOR EXITED dState changed from InferiorSetupRequested(4) to InferiorExitOk(16). dState changed from InferiorExitOk(16) to InferiorShutdownOk(19). dState changed from InferiorShutdownOk(19) to EngineShutdownRequested(20). dQUEUE: SHUTDOWN ENGINE sExecutable failed: During startup program exited with code 0xc0000138. dCALL: SHUTDOWN ENGINE dINITIATE GDBENGINE SHUTDOWN IN STATE 0, PROC: 2 <25-gdb-exit >&"maint print msymbols C:/Users/encore/AppData/Local/Temp/gdb_ns_.Dq6408\n" >24^done dFOUND NON-NAMESPACED QT dNOTE: INFERIOR SETUP OK dState changed from EngineShutdownRequested(20) to InferiorSetupOk(6). dState changed from InferiorSetupOk(6) to EngineRunRequested(7). dQUEUE: RUN ENGINE dCALL: RUN ENGINE dNOTE: ENGINE RUN AND INFERIOR STOP OK dState changed from EngineRunRequested(7) to InferiorStopOk(14). dNOTE: INFERIOR RUN REQUESTED dState changed from InferiorStopOk(14) to InferiorRunRequested(10). sRunning requested... <26-exec-continue >25^exit dGDB CLAIMS EXIT; WAITING dGDB PROCESS FINISHED, status 0, code 0 dNOTE: ENGINE ILL ****** dState changed BY FORCE from InferiorRunRequested(10) to InferiorStopRequested(13). dATTEMPT TO INTERRUPT INFERIOR sStop requested... dTRYING TO INTERRUPT INFERIOR dCANNOT INTERRUPT 1260 ``` do you have any idea as to how I can handle this?
2012/10/15
[ "https://Stackoverflow.com/questions/12905300", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1178770/" ]
The following code is almost the same as in William's answer, but without using a 'for' loop. ``` subdirs := A B C .PHONY: all $(subdirs) all: $(subdirs) $(subdirs): $(MAKE) -C $@ ```
I'm rusty on makefiles and know for sure the following is not the best answer. But it might help for now... ``` TARGETS = A B C .PHONY: all all: @for subdir in $(TARGETS); do \ $(MAKE) -C $$subdir all || exit 1; \ done ``` Note that the indents must use a TAB, not spaces
23,332,259
I am trying to copy a sheet, `default_sheet`, into a new sheet `new_sheet` in the same workbook. I did manage to create a new sheet and to copy the values from the default sheet. How can I also copy the style of each cell into the new\_sheet cells? ```python new_sheet = workbook.create_sheet() new_sheet.title = sheetName default_sheet = workbook.get_sheet_by_name('default') new_sheet = workbook.get_sheet_by_name(sheetName) for row in default_sheet.rows: col_idx = float(default_sheet.get_highest_column()) starting_col = chr(65 + int(col_idx)) for row in default_sheet.rows: for cell in row: new_sheet[cell.get_coordinate()] = cell.value <copy also style of each cell> ``` I am at the moment using openpyxl 1.8.2, but I have in mind to switch to 1.8.5. One solution is with copy: ```python from copy import copy, deepcopy new_sheet._styles[cell.get_coordinate()] = copy( default_sheet._styles[cell.get_coordinate()]) ```
2014/04/28
[ "https://Stackoverflow.com/questions/23332259", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3458191/" ]
As of openpyxl 2.5.4, Python 3.4: (subtle changes over the older version below) ```python new_sheet = workbook.create_sheet(sheetName) default_sheet = workbook['default'] from copy import copy for row in default_sheet.rows: for cell in row: new_cell = new_sheet.cell(row=cell.row, column=cell.col_idx, value=cell.value) if cell.has_style: new_cell.font = copy(cell.font) new_cell.border = copy(cell.border) new_cell.fill = copy(cell.fill) new_cell.number_format = copy(cell.number_format) new_cell.protection = copy(cell.protection) new_cell.alignment = copy(cell.alignment) ``` For openpyxl 2.1 ```python new_sheet = workbook.create_sheet(sheetName) default_sheet = workbook['default'] for row in default_sheet.rows: for cell in row: new_cell = new_sheet.cell(row=cell.row, column=cell.col_idx, value=cell.value) if cell.has_style: new_cell.font = cell.font new_cell.border = cell.border new_cell.fill = cell.fill new_cell.number_format = cell.number_format new_cell.protection = cell.protection new_cell.alignment = cell.alignment ```
Maybe this is the most convenient way for most people. ``` from openpyxl import load_workbook from openpyxl import Workbook read_from = load_workbook('path/to/file.xlsx') read_sheet = read_from.active write_to = Workbook() write_sheet = write_to.active write_sheet['A1'] = read_sheet['A1'].value write_sheet['A1'].style = read_sheet['A1'].style write_to.save('save/to/file.xlsx') ```
23,332,259
I am trying to copy a sheet, `default_sheet`, into a new sheet `new_sheet` in the same workbook. I did manage to create a new sheet and to copy the values from the default sheet. How can I also copy the style of each cell into the new\_sheet cells? ```python new_sheet = workbook.create_sheet() new_sheet.title = sheetName default_sheet = workbook.get_sheet_by_name('default') new_sheet = workbook.get_sheet_by_name(sheetName) for row in default_sheet.rows: col_idx = float(default_sheet.get_highest_column()) starting_col = chr(65 + int(col_idx)) for row in default_sheet.rows: for cell in row: new_sheet[cell.get_coordinate()] = cell.value <copy also style of each cell> ``` I am at the moment using openpyxl 1.8.2, but I have in mind to switch to 1.8.5. One solution is with copy: ```python from copy import copy, deepcopy new_sheet._styles[cell.get_coordinate()] = copy( default_sheet._styles[cell.get_coordinate()]) ```
2014/04/28
[ "https://Stackoverflow.com/questions/23332259", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3458191/" ]
As of openpyxl 2.5.4, Python 3.4: (subtle changes over the older version below) ```python new_sheet = workbook.create_sheet(sheetName) default_sheet = workbook['default'] from copy import copy for row in default_sheet.rows: for cell in row: new_cell = new_sheet.cell(row=cell.row, column=cell.col_idx, value=cell.value) if cell.has_style: new_cell.font = copy(cell.font) new_cell.border = copy(cell.border) new_cell.fill = copy(cell.fill) new_cell.number_format = copy(cell.number_format) new_cell.protection = copy(cell.protection) new_cell.alignment = copy(cell.alignment) ``` For openpyxl 2.1 ```python new_sheet = workbook.create_sheet(sheetName) default_sheet = workbook['default'] for row in default_sheet.rows: for cell in row: new_cell = new_sheet.cell(row=cell.row, column=cell.col_idx, value=cell.value) if cell.has_style: new_cell.font = cell.font new_cell.border = cell.border new_cell.fill = cell.fill new_cell.number_format = cell.number_format new_cell.protection = cell.protection new_cell.alignment = cell.alignment ```
The `StyleableObject` implementation stores styles in a single list, `_style`, and style properties on a cell are actually getters and setters to this array. You can implement the copy for each style individually but this will be slow, especially if you're doing it in a busy inner loop like I was. If you're willing to dig into private class attributes there is a much faster way to clone styles: ``` if cell.has_style: new_cell._style = copy(cell._style) ``` FWIW this is how the optimized `WorksheetCopy` class does it in the `_copy_cells` method.
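Putting the private-attribute trick together, a full value-plus-style copy of a sheet looks roughly like this (a sketch against a recent openpyxl; `_style` is a private attribute, so it may change between versions):

```python
from copy import copy

from openpyxl import Workbook
from openpyxl.styles import Font

wb = Workbook()
src = wb.active
src["A1"] = "header"
src["A1"].font = Font(bold=True)

dst = wb.create_sheet("copy")
for row in src.rows:
    for cell in row:
        # copy the value, then clone the whole style array in one assignment
        new_cell = dst.cell(row=cell.row, column=cell.col_idx, value=cell.value)
        if cell.has_style:
            new_cell._style = copy(cell._style)
```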
23,332,259
I am trying to copy a sheet, `default_sheet`, into a new sheet `new_sheet` in the same workbook. I did manage to create a new sheet and to copy the values from the default sheet. How can I also copy the style of each cell into the new\_sheet cells? ```python new_sheet = workbook.create_sheet() new_sheet.title = sheetName default_sheet = workbook.get_sheet_by_name('default') new_sheet = workbook.get_sheet_by_name(sheetName) for row in default_sheet.rows: col_idx = float(default_sheet.get_highest_column()) starting_col = chr(65 + int(col_idx)) for row in default_sheet.rows: for cell in row: new_sheet[cell.get_coordinate()] = cell.value <copy also style of each cell> ``` I am at the moment using openpyxl 1.8.2, but I have in mind to switch to 1.8.5. One solution is with copy: ```python from copy import copy, deepcopy new_sheet._styles[cell.get_coordinate()] = copy( default_sheet._styles[cell.get_coordinate()]) ```
2014/04/28
[ "https://Stackoverflow.com/questions/23332259", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3458191/" ]
The `StyleableObject` implementation stores styles in a single list, `_style`, and style properties on a cell are actually getters and setters to this array. You can implement the copy for each style individually but this will be slow, especially if you're doing it in a busy inner loop like I was. If you're willing to dig into private class attributes there is a much faster way to clone styles: ``` if cell.has_style: new_cell._style = copy(cell._style) ``` FWIW this is how the optimized `WorksheetCopy` class does it in the `_copy_cells` method.
Maybe this is the most convenient way for most. ``` from openpyxl import load_workbook from openpyxl import Workbook read_from = load_workbook('path/to/file.xlsx') read_sheet = read_from.active write_to = Workbook() write_sheet = write_to.active write_sheet['A1'] = read_sheet['A1'].value write_sheet['A1'].style = read_sheet['A1'].style write_to.save('save/to/file.xlsx') ```
19,322,350
I am trying to use f2py to interface my python programs with my Fortran modules. I am on a Win7 platform. I use the latest Anaconda 64 (1.7) as a Python+NumPy stack. My Fortran compiler is the latest Intel Fortran compiler 64 (version 14.0.0.103 Build 20130728). I have been experiencing a number of issues when executing `f2py -c -m PyModule FortranModule.f90 --fcompiler=intelvem` The last one, which I can't seem to sort out, is that it looks like the sequence of flags f2py/distutils passes to the compiler does not match what ifort expects. I get a series of warning messages regarding unknown options when ifort is invoked. ``` ifort: command line warning #10006: ignoring unknown option '/LC:\Anaconda\libs' ifort: command line warning #10006: ignoring unknown option '/LC:\Anaconda\PCbuild\amd64' ifort: command line warning #10006: ignoring unknown option '/lpython27' ``` I suspect this is related to the errors I get from the linker at the end ``` error LNK2019: unresolved external symbol __imp_PyImport_ImportModule referenced in function _import_array error LNK2019... and so forth (there are about 30-40 lines like that, with different python modules missing) ``` and it concludes with a plain ``` fatal error LNK1120: 42 unresolved externals ``` My guess is that this is because the /link flag is missing in the sequence of options. Because of this, the /l /L options are not passed to the linker and the compiler believes these are addressed to it. The ifort command generated by f2py looks like this: ``` ifort.exe -dll -dll Pymodule.o fortranobject.o FortranModule.o module-f2pywrappers2.o -LC:\Anaconda\libs -LC:\Anaconda\PCbuild\amd64 -lPython27 ``` I have no idea why the "-dll" appears twice (I had to change that flag from an original "-shared"). Now, I have tried to look into the f2py and distutils codes but haven't figured out how to bodge an additional /link in the command output. I haven't even been able to locate where this output is generated. 
If anyone has encountered this problem in the past and/or has some suggestions, I would very much appreciate it. Thank you for your time.
2013/10/11
[ "https://Stackoverflow.com/questions/19322350", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2866568/" ]
I encountered similar problems with my own code some time ago. If I understand the comments correctly, you already used the approach that worked for me, so this is just meant as clarification and summary for all those that struggle with f2py and dependencies: f2py seems to have problems resolving dependencies on external source files. If the external dependencies get passed to f2py as already compiled object files though, the linking works fine and the python library gets built without problems. The easiest solution therefore seems to be: 1. compile all dependencies to object files (\*.o) using your preferred compiler and compiler settings 2. pass all object files to f2py, together with the **source file** of your main subroutine/ function/ module/ ... 3. use generated python library as expected A simple python script could look like this (pycompile.py): ``` #!python.exe # -*- coding: UTF-8 -*- import os import platform '''Uses f2py to compile needed library''' # build command-strings # command for compiling *.o and *.mod files fortran_exe = "gfortran " # fortran compiler settings fortran_flags = "<some_gfortran_flags> " # add path to source code fortran_source = ("./relative/path/to/source_1.f90 " "C:/absolute/path/to/source_2.f90 " "...") # assemble fortran command fortran_cmd = fortran_exe + fortran_flags + fortran_source # command for compiling main source file using f2py f2py_exe = "f2py -c " # special compiler-options for Linux/ Windows if (platform.system() == 'Linux'): f2py_flags = "--compiler=unix --fcompiler=gnu95 " elif (platform.system() == 'Windows'): f2py_flags = "--compiler=mingw32 --fcompiler=gnu95 " # add path to source code/ dependencies f2py_source = ("-m for_to_py_lib " "./path/to/main_source.f90 " "source_1.o " "source_2.o " "... " ) # assemble f2py command f2py_cmd = f2py_exe + f2py_flags + f2py_source # compile .o and .mod files print "compiling object- and module-files..." 
print print fortran_cmd os.system(fortran_cmd) # compile main_source.f90 with f2py print "================================================================" print "start f2py..." print print f2py_cmd os.system(f2py_cmd) ``` --- A more flexible solution for large projects could be provided via Makefile, as discussed by [@bdforbes](https://stackoverflow.com/users/336001/bdforbes) in the comments ([for reference](http://pastebin.com/ChSxLzSb)) or a custom CMake User Command in combination with the above script: ``` ############################################################################### # General project properties ################################################################################ # Set Project Name project (for_to_py_lib) # Set Version Number set (for_to_py_lib_VERSION_MAJOR 1) set (for_to_py_lib_VERSION_MINOR 0) # save folder locations for later use/ scripting (see pycompile.py) # relative to SOURCE folder set(source_root ${CMAKE_CURRENT_LIST_DIR}/SOURCE) # save top level source dir for later use set(lib_root ${CMAKE_CURRENT_LIST_DIR}/LIBRARIES) # save top level lib dir for later use # relative to BUILD folder set(build_root ${CMAKE_CURRENT_BINARY_DIR}) # save top level build dir for later use ### ### Fortran to Python library ### find_package(PythonInterp) if (PYTHONINTERP_FOUND) # copy python compile script file to build folder and substitute CMake variables configure_file(${source_root}/pycompile.py ${build_root}/pycompile.py @ONLY) # define for_to_py library ending if (UNIX) set(CMAKE_PYTHON_LIBRARY_SUFFIX .so) elseif (WIN32) set(CMAKE_PYTHON_LIBRARY_SUFFIX .pyd) endif() # add custom target to ALL, building the for_to_py python library (using f2py) add_custom_target(for_to_py ALL DEPENDS ${build_root}/for_to_py${CMAKE_PYTHON_LIBRARY_SUFFIX}) # build command for python library (execute python script pycompile.py containing the actual build commands) add_custom_command(OUTPUT ${build_root}/for_to_py${CMAKE_PYTHON_LIBRARY_SUFFIX} COMMAND
${PYTHON_EXECUTABLE} ${build_root}/pycompile.py WORKING_DIRECTORY ${build_root} DEPENDS ${build_root}/pycompile.py ${source_root}/path/to/source_1.f90 ${source_root}/path/to/source_2.f90 ${source_root}/INOUT/s4binout.f90 COMMENT "Generating fortran to python library") # post build command for python library (copying of generated files) add_custom_command(TARGET for_to_py POST_BUILD COMMAND ${CMAKE_COMMAND} -E copy_if_different ${build_root}/s4_to_py${CMAKE_PYTHON_LIBRARY_SUFFIX} ${lib_root}/for_to_py${CMAKE_PYTHON_LIBRARY_SUFFIX} COMMENT "\ ***************************************************************************************************\n\ copy of python library for_to_py${CMAKE_PYTHON_LIBRARY_SUFFIX} placed in ${lib_root}/for_to_py${CMAKE_PYTHON_LIBRARY_SUFFIX} \n\ ***************************************************************************************************" ) endif (PYTHONINTERP_FOUND) ``` with modified pycompile: ``` #!python.exe # -*- coding: UTF-8 -*- ... fortran_source = ("@source_root@/source_1.f90 " "@source_root@/source_2.f90 " "...") ... # add path to source code/ dependencies f2py_source = ("-m for_to_py_lib " "@build_root@/for_to_py.f90 " "source_1.o " "source_2.o " "... " ) ... # compile .o and .mod files ... ```
The library path is specified using /LIBPATH, not /L.
49,739,245
I have spent a good amount of time trying to determine what is going wrong exactly with the code I am using to convert pdf to docx (and doc to docx) using LibreOffice. I have used both the windows run interface to test-run some of the code I have found to be relevant, and have tried on python as well, neither of which works. I have LibreOffice v6.0.2 installed on windows. I have been using variations of this code to attempt to convert some pdf files to docx, of which the specific pdf file is not really relevant: ``` import subprocess lowriter='C://Program Files/LibreOffice/program/swriter.exe' subprocess.run('{} --invisible --convert-to docx --outdir "{}" "{}"' .format(lowriter,'dir', 'filepath.pdf',),shell=True) ``` I have tried code, again, in both the run interface on the windows os, and through python using the above code, with no luck. I have tried without the outdir as well, just in case I was writing that incorrectly, but always get a return code of 1: ``` CompletedProcess(args='C://Program Files/LibreOffice/program/swriter.exe --invisible --convert-to docx --outdir "{dir}" {filepath.pdf}"', returncode=1) ``` The dir and filepath.pdf are placeholders I have put. I have a similar problem with the doc to docx conversion.
2018/04/09
[ "https://Stackoverflow.com/questions/49739245", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8492478/" ]
There are a number of problems here. You should first get the `--convert-to` call to work from the command line as @CristiFati commented, and then implement in python. Here is the code that works on my system. No `//` in the path, and quotes are needed. Also, the folder is `LibreOffice 5` on my system. ``` import subprocess lowriter = 'C:/Program Files (x86)/LibreOffice 5/program/swriter.exe' subprocess.run( '"{}" --convert-to docx --outdir "{}" "{}"' .format(lowriter,'dir', 'filepath.doc',), shell=True) ``` Finally, it looks like converting from PDF to DOCX is not supported. LibreOffice Draw can open a PDF file and save as ODG format. **EDIT**: Here is working code to convert from PDF. I upgraded to LO 6, so the version number ("LibreOffice 5") is no longer required in the path. ``` import subprocess loffice = 'C:/Program Files/LibreOffice/program/soffice.exe' subprocess.run( '"{}" --convert-to odg --outdir "{}" "{}"' .format(loffice,'dir', 'filepath.pdf',), shell=True) ``` [![filepath.odg](https://i.stack.imgur.com/nlIhm.jpg)](https://i.stack.imgur.com/nlIhm.jpg)
Install the pdf2docx package in python ``` from pdf2docx import Converter source = r'C:\Users\sd\Desktop\New Project/Document2.pdf' destination = r'C:\Users\sd\Desktop\New Project/sample_6.docx' def Converter_pdf2docx(source, destination): pdf_file = source docx_file = destination cv = Converter(pdf_file) cv.convert(docx_file, start=0, end=None) cv.close() Converter_pdf2docx(source, destination) ```
63,648,764
I have a program folder for which paths are required: ``` export RBT_ROOT=/path/to/installation/ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$RBT_ROOT/lib export PATH=$PATH:$RBT_ROOT/bin ``` Then the command is run: ``` rbcavity -was -d -r <PRMFILE> ``` rbcavity - is an exe program contained in the program's bin folder PRMFILE - is the program contained in the current path (working folder not included in the program folder) This works from the command line, but not from python. How can I run this from a python script (3.5)? I tried subprocess.run but it doesn't find the command rbcavity... I'm new to linux and don't quite know how it works.
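To make the setup concrete, here is a sketch of what I have been trying on the Python side (`rbt_env` and `run_rbcavity` are names I made up; the installation path is the placeholder from above):

```python
import os
import subprocess

RBT_ROOT = "/path/to/installation"  # placeholder, as above

def rbt_env(rbt_root=RBT_ROOT):
    """Replicate the three `export` lines in a copy of the current environment."""
    env = os.environ.copy()
    env["RBT_ROOT"] = rbt_root
    env["LD_LIBRARY_PATH"] = env.get("LD_LIBRARY_PATH", "") + ":" + os.path.join(rbt_root, "lib")
    env["PATH"] = env.get("PATH", "") + ":" + os.path.join(rbt_root, "bin")
    return env

def run_rbcavity(prmfile, rbt_root=RBT_ROOT):
    # using the absolute path to the binary sidesteps the PATH lookup entirely
    exe = os.path.join(rbt_root, "bin", "rbcavity")
    return subprocess.run([exe, "-was", "-d", "-r", prmfile], env=rbt_env(rbt_root))
```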
2020/08/29
[ "https://Stackoverflow.com/questions/63648764", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12399859/" ]
The issue is that the website filters out requests without a proper `User-Agent`, so just use a random one from MDN: ```py requests.get("https://apis.digital.gob.cl/fl/feriados/2020", headers={ "User-Agent" : "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36" }) ```
It might be due to idle timeout. Overriding default socket options can help ``` import socket from urllib3.connection import HTTPConnection HTTPConnection.default_socket_options = ( HTTPConnection.default_socket_options + [ (socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1), (socket.SOL_TCP, socket.TCP_KEEPIDLE, 45), (socket.SOL_TCP, socket.TCP_KEEPINTVL, 10), (socket.SOL_TCP, socket.TCP_KEEPCNT, 6) ] ) ```
28,999,913
Is there a 'correct' or preferred manner for sending data over a web socket connection? In my case, I am sending the information from a C# application to a python (tornado) web server, and I am simply sending a string consisting of several elements separated by commas. In python, I use rudimentary techniques to split the string and then structure the elements into an object. e.g: ``` 'foo,0,bar,1' ``` becomes: ``` object = { 'foo': 0, 'bar': 1 } ``` In the other direction, I am sending the information as a JSON string which I then deserialise using Json.NET I imagine there is no strictly right or wrong way of doing this, but are there significant advantages and disadvantages that I should be thinking of? And, somewhat related, is there a consensus for using string vs. binary formats?
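For illustration, the rudimentary splitting I mentioned is roughly this (a sketch that assumes integer values and no commas inside keys or values):

```python
def parse_pairs(payload):
    # naive split of "k1,v1,k2,v2,..." into a dict; values are assumed to be ints
    parts = payload.split(",")
    return {parts[i]: int(parts[i + 1]) for i in range(0, len(parts), 2)}

parse_pairs("foo,0,bar,1")  # {'foo': 0, 'bar': 1}
```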
2015/03/12
[ "https://Stackoverflow.com/questions/28999913", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1190200/" ]
Writing a *custom* encoding (e.g., as "k,v,..") is *different* from 'using binary'. *It is still text*, just a rigid under-defined one-off hand-rolled format that must be manually replicated. (What happens if a key or value contains a comma? What happens if the data needs to contain nested objects? How can null be interpreted differently than '' or 'null'?) While JSON is definitely the most *ubiquitous* format for WebSockets, one shouldn't (for interchange purposes) write JSON by hand - *one uses an existing serialization library on both ends*. (There are many reasons why JSON is ubiquitous which are covered in other answers - this doesn't mean it is always the 'best' format, however.) To this end a *binary* serializer can also be used (BSON being a trivial example as it is effectively JSON-like in structure and operation). Just replace `JSON.parse` with `FORMATX.parse` as appropriate. The only requirements are then: * There is a suitable serializer/deserializer for *all* the clients and servers. JSON works well here because it is so popular and there is no shortage of implementations. There *are* various binary serialization libraries with *both* Python and C# libraries, but it will require finding a 'happy intersection'. * The serialization format can represent the data. JSON usually works sufficiently and it has a *very nice 1-1 correspondence with basic object graphs* and simple values. It is also inherently schema-less. Some formats are better at certain tasks and have different characteristics, features, or tool-chains. However most concepts (and arguably most DTOs) can be mapped onto JSON easily which makes it a good 'default' choice. The other differences between different *kinds* of binary and text serializations are mostly dressing - but if you'd like to start talking about schema vs.
schema-less, extensibility, external tooling, metadata, non-compressed encoded sizes (or size after transport compression), compliance with a specific existing protocol, etc. ... but the point to take away is **don't create a 'new' one-off format**. Unless, of course, you just like making wheels or there is a *very specific* use-case to fit.
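To make the comma pitfall above concrete (a standard-library-only sketch):

```python
import json

value = "hello, world"  # a value that itself contains a comma

# hand-rolled "k,v" encoding: the comma in the value corrupts the structure
encoded = ",".join(["greeting", value])
decoded = encoded.split(",")
# decoded is ['greeting', 'hello', ' world'] -- three fields instead of two

# JSON has no such problem: the value round-trips intact
payload = json.dumps({"greeting": value})
assert json.loads(payload)["greeting"] == "hello, world"
```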
First advice would be to use the same format for both ways, not plain text in one direction and JSON in the other. I personally think `{'foo':0,'bar':1}` is better than `foo,0,bar,1` because everybody understands JSON, but for your custom format they might not without some explanations. The idea is you are inventing a data interchange format when JSON is already one and @jfriend00 is right, pretty much every language now understands JSON, [Python included](https://docs.python.org/2/library/json.html). Regarding text vs binary, there isn't any consensus. As @user2864740 mentions in the comments to my answer, as long as the two sides understand each other, it doesn't really matter. This only becomes relevant if one of the sides has a preference for a format (consider for example opening the connection from the browser, using JavaScript - for that people might prefer JSON instead of binary). My advice is to go with something simple like JSON and design your app so that you can change the wire format by swapping in another implementation without affecting the logic of your application.
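A tiny sketch of that swap-friendly design (the names are made up; JSON comes from the standard library, and any other codec only needs to implement the same two methods):

```python
import json

class JsonCodec:
    """One wire format; a BSON or other codec could be dropped in instead."""
    def encode(self, obj):
        return json.dumps(obj)

    def decode(self, data):
        return json.loads(data)

class App:
    """Application logic only ever talks to the codec interface."""
    def __init__(self, codec):
        self.codec = codec

    def make_message(self, foo, bar):
        return self.codec.encode({"foo": foo, "bar": bar})

app = App(JsonCodec())
wire = app.make_message(0, 1)
```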
33,006,474
In the cloud, I have multiple instances, each running a container with a different random name, e.g.: ``` CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 5dc97950d924 aws_beanstalk/my-app:latest "/bin/sh -c 'python 3 hours ago Up 3 hours 80/tcp, 5000/tcp, 8080/tcp jolly_galileo ``` To enter them, I type: ``` sudo docker exec -it jolly_galileo /bin/bash ``` Is there a command or can you write a bash script to automatically execute the exec to enter the correct container?
2015/10/08
[ "https://Stackoverflow.com/questions/33006474", "https://Stackoverflow.com", "https://Stackoverflow.com/users/478354/" ]
"the correct container"? To determine what is the "correct" container, your bash script would still need either the id or the name of that container. For example, I [have a function in my `.bashrc`](https://github.com/VonC/b2d/blob/f9890cb6e1ee14842b8be2dd66a754550db793a9/.bash_aliases#L54): ``` deb() { docker exec -u git -it $1 bash; } ``` That way, I would type: ``` deb jolly_galileo ``` (it uses the account git, but you don't have to)
Here's my final solution. It edits the instance's .bashrc if it hasn't been edited yet, prints out docker ps, defines the dock function, and enters the container. A user can then type "exit" if they want to access the raw instances, and "exit" again to quit ssh. ``` commands: bashrc: command: if ! grep -Fxq "sudo docker ps" /home/ec2-user/.bashrc; then echo -e "dock() { sudo docker exec -it $(sudo docker ps -lq) bash; } \nsudo docker ps\ndock" >> /home/ec2-user/.bashrc; fi ```
33,006,474
In the cloud, I have multiple instances, each running a container with a different random name, e.g.: ``` CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 5dc97950d924 aws_beanstalk/my-app:latest "/bin/sh -c 'python 3 hours ago Up 3 hours 80/tcp, 5000/tcp, 8080/tcp jolly_galileo ``` To enter them, I type: ``` sudo docker exec -it jolly_galileo /bin/bash ``` Is there a command or can you write a bash script to automatically execute the exec to enter the correct container?
2015/10/08
[ "https://Stackoverflow.com/questions/33006474", "https://Stackoverflow.com", "https://Stackoverflow.com/users/478354/" ]
"the correct container"? To determine what is the "correct" container, your bash script would still need either the id or the name of that container. For example, I [have a function in my `.bashrc`](https://github.com/VonC/b2d/blob/f9890cb6e1ee14842b8be2dd66a754550db793a9/.bash_aliases#L54): ``` deb() { docker exec -u git -it $1 bash; } ``` That way, I would type: ``` deb jolly_galileo ``` (it uses the account git, but you don't have to)
As VonC indicated, usually you have to make some shell scripting of your own if you find yourself doing something repetitive. I made a tool myself [here](https://github.com/Pithikos/dockerint) which works if you have Bash 4+. Install ``` wget -qO- https://raw.githubusercontent.com/Pithikos/dockerint/master/docker_autoenter >> ~/.bashrc ``` Then you can enter a container by simply typing the first letters of the container. ``` $> docker ps CONTAINER ID IMAGE .. 807b1e7eab7e ubuntu .. 18e953015fa9 ubuntu .. 19bd96389d54 ubuntu .. $> 18 root@18e953015fa9:/# ``` This works by taking advantage of the function `command_not_found_handle` [introduced in Bash 4](http://tldp.org/LDP/abs/html/bashver4.html). If a command is not found, the script will try and see if what you typed is a container and if it is, it will run `docker exec <container> bash`.
32,488,029
I'm using django-twilio to try and respond to text messages coming from a Twilio account. As it recommends using the twilio\_View decorator to improve upon @csrf\_exempt, I'm using it. Problem is, it doesn't work. No matter what I try, I always get 403. Things I've done: 1. Twilio test account. Added TWILIO\_ACCOUNT\_SID and TWILIO\_AUTH\_TOKEN in settings, which match the test account values. 2. Double checked the url set in twilio exists, and not HTTPS. Set in the properties of the twilio SMS phone number settings. 3. Upgraded to South 1.0, run migrations. 4. Turn off DJANGO\_TWILIO\_FORGERY\_PROTECTION, and it works. 5. Number the text is coming from is verified in twilio. 6. Production settings, so debug=False Running Django 1.4.20, Twilio 4.5, django-twilio 0.8, python 2.7. View is stupid simple: ``` from django_twilio.decorators import twilio_view from twilio.twiml import Response @twilio_view def say_hello(request): r = Response() r.message('hello there') return r ``` If you replace the twilio decorator with the @csrf\_exempt decorator, it all works fine. I get the response back to my phone. In the decorator it looks for the HTTP\_X\_TWILIO\_SIGNATURE value in the request. Looking in my twilio alerts I can't see that value in the request. Don't know if it will show up there or not?
2015/09/09
[ "https://Stackoverflow.com/questions/32488029", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1907157/" ]
Twilio team member here. Are you importing twiml? Try making the first line of your say_hello function: ``` r = twiml.Response() ```
I don't see anything obviously wrong in what you are doing, so I would suggest removing the `@twilio_view` decorator and logging the X-Twilio-Signature header in your view to see what it is and manually checking to see if it's correct. (Basically, redoing the logic of the `@twilio_view` decorator in your view, just to see what's going on). This [link](https://www.twilio.com/docs/security#validating-requests) kinda suggests that you should use an SSL URL before Twilio will add the X-Twilio-Signature to its request, but I'm not really sure if that's true.
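If you do redo that check by hand, the scheme Twilio documents is just HMAC-SHA1 over the full request URL plus the POST parameters (sorted by name and concatenated), base64-encoded. A standard-library-only sketch (the token, URL and params below are made-up values):

```python
import base64
import hmac
from hashlib import sha1

def twilio_signature(auth_token, url, post_params):
    # append each POST param name and value (sorted by name) to the URL,
    # HMAC-SHA1 the result with the auth token, then base64-encode it
    data = url + "".join(k + post_params[k] for k in sorted(post_params))
    digest = hmac.new(auth_token.encode("utf-8"), data.encode("utf-8"), sha1).digest()
    return base64.b64encode(digest).decode("ascii")

# compare the result against request.META['HTTP_X_TWILIO_SIGNATURE'] in the view
sig = twilio_signature("made-up-token", "http://example.com/sms/",
                       {"From": "+15551234567", "Body": "hi"})
```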
60,286,928
I am learning Python (python3) and am working with a text file containing semi-JSON format. It is not full JSON because the "keys" are not surrounded by quotes. I am looking to programmatically add quotes around all of these key names. My plan was to "open" this file and parse each "line" as an individual string. **From:** > > key\_name: { another\_key: "somevalue", second\_key: "anotherval" } > > > **Into:** > > "key\_name": { "another\_key": "somevalue", "second\_key": "anotherval" } > > > I'm sure regex would be the ideal way to do this - for the sake of learning I have been using arrays... I have some code that works partially, but not all of the keys get quotes placed around them. ``` str = "this is: a string: testing testing: blah blah more: test: hereis: test:" cp_str = list(str[::-1]) skip = False find_end = False for step in range(len(cp_str) - 1): if skip: skip = False continue if cp_str[step] == ':': cp_str.insert(step + 1, '"') skip = True find_end = True if not skip and find_end and not(ord(cp_str[step].lower()) > 95 and ord(cp_str[step].lower()) < 95+26): cp_str.insert(step, '"') skip = True find_end = False print(''.join(cp_str[::-1])) ``` **Outputs:** > > this is: a string": testing "testing": blah blah "more": > "test": "hereis": "test": > > > Any tips or help on the best ways to tackle this would be appreciated.
2020/02/18
[ "https://Stackoverflow.com/questions/60286928", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8729789/" ]
Avoid using regex to handle structured formats. It will almost always mis-handle certain corner cases. Since your input is valid YAML, you can install [PyYAML](https://pypi.org/project/PyYAML/), load the input as YAML, and dump the data structure as JSON instead: ``` import yaml import json s = 'key_name: { another_key: "somevalue", second_key: "anotherval" }' print(json.dumps(yaml.load(s))) ``` This outputs: ``` {"key_name": {"another_key": "somevalue", "second_key": "anotherval"}} ```
While the pattern `([{,]\s*)([^"]*?)(\s*:\s*)` isn't going to cover all corner cases, it should work fine for basic JSON content. Example usage: ``` >>> import re >>> data = '{ another_key: "somevalue", second_key: "anotherval" }' >>> repl_fn = lambda x: f'{x.group(1)}"{x.group(2)}"{x.group(3)}' >>> re.sub(r'([{,]\s*)([^"]*?)(\s*:\s*)', repl_fn, data) '{ "another_key": "somevalue", "second_key": "anotherval" }' ```
57,701,538
I have a `Jupyter` notebook and I'd like to convert it into a `Python` script using the `nbconvert` command from *within* the `Jupyter` notebook. I have included the following line at the end of the notebook: ``` !jupyter nbconvert --to script <filename>.ipynb ``` This creates a `Python` script. However, I'd like the resulting `.py` file to have the following properties: 1. No input statements, such as: > > # In[27]: > > > 2. No markdown, including statements such as: > > # coding: utf-8 > > > 3. Ignore `%magic` commands such as: 1. `%matplotlib inline` 2. `!jupyter nbconvert --to script <filename>.ipynb`, i.e. the command within the notebook that executes the `Python` conversion. Currently, the `%magic` commands get translated to the form: `get_ipython().magic(...)`, but these are not necessarily recognized in `Python`.
2019/08/29
[ "https://Stackoverflow.com/questions/57701538", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2957960/" ]
One way to get control of what appears in the output is to tag the cells that you don't want in the output and then use the TagRemovePreprocessor to remove the cells. [![enter image description here](https://i.stack.imgur.com/gvqcA.png)](https://i.stack.imgur.com/gvqcA.png) The code below also uses the exclude\_markdown function in the TemplateExporter to remove markdown. ``` !jupyter nbconvert \ --TagRemovePreprocessor.enabled=True \ --TagRemovePreprocessor.remove_cell_tags="['parameters']" \ --TemplateExporter.exclude_markdown=True \ --to python "notebook_with_parameters_removed.ipynb" ``` To remove the commented lines and the input statement markers (like # [1]), I believe you'll need to post-process the Python file with something like the following in the cell after the cell you call !jupyter nbconvert from (note that this is Python 3 code): ``` import re from pathlib import Path filename = Path.cwd() / 'notebook_with_parameters_removed.py' code_text = filename.read_text().split('\n') lines = [line for line in code_text if len(line) == 0 or (line[0] != '#' and 'get_ipython()' not in line)] clean_code = '\n'.join(lines) clean_code = re.sub(r'\n{2,}', '\n\n', clean_code) filename.write_text(clean_code.strip()) ```
Jupyter nbconvert has made this a little bit easier with a new [template structure](https://nbconvert.readthedocs.io/en/latest/customizing.html). Templates should be placed in the template path. This can be found by running `jupyter --paths` Each template should be placed in its own directory within the template directory and must contain a conf.json and index.py.j2 file. This [solution](https://stackoverflow.com/a/64144224/5530152) covers all the details for adding a template. This template will remove all the of the markdown, magic and cell numbers leaving a "runnable" .py file. Run this template from within a notebook with `!jupyter nbconvert --to python --template my_clean_python_template my_notebook.ipynb` **`index.py.j2`** ``` {%- extends 'null.j2' -%} ## set to python3 {%- block header -%} #!/usr/bin/env python3 # coding: utf-8 {% endblock header %} ## remove cell counts entirely {% block in_prompt %} {% if resources.global_content_filter.include_input_prompt -%} {% endif %} {% endblock in_prompt %} ## remove markdown cells entirely {% block markdowncell %} {% endblock markdowncell %} {% block input %} {{ cell.source | ipython2python }} {% endblock input %} ## remove magic statement completely {% block codecell %} {{'' if "get_ipython" in super() else super() }} {% endblock codecell%} ```
57,701,538
I have a `Jupyter` notebook and I'd like to convert it into a `Python` script using the `nbconvert` command from *within* the `Jupyter` notebook. I have included the following line at the end of the notebook: ``` !jupyter nbconvert --to script <filename>.ipynb ``` This creates a `Python` script. However, I'd like the resulting `.py` file to have the following properties: 1. No input statements, such as: > > # In[27]: > > > 2. No markdown, including statements such as: > > # coding: utf-8 > > > 3. Ignore `%magic` commands such as: 1. `%matplotlib inline` 2. `!jupyter nbconvert --to script <filename>.ipynb`, i.e. the command within the notebook that executes the `Python` conversion. Currently, the `%magic` commands get translated to the form: `get_ipython().magic(...)`, but these are not necessarily recognized in `Python`.
2019/08/29
[ "https://Stackoverflow.com/questions/57701538", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2957960/" ]
One way to get control of what appears in the output is to tag the cells that you don't want in the output and then use the TagRemovePreprocessor to remove the cells. [![enter image description here](https://i.stack.imgur.com/gvqcA.png)](https://i.stack.imgur.com/gvqcA.png) The code below also uses the exclude\_markdown function in the TemplateExporter to remove markdown. ``` !jupyter nbconvert \ --TagRemovePreprocessor.enabled=True \ --TagRemovePreprocessor.remove_cell_tags="['parameters']" \ --TemplateExporter.exclude_markdown=True \ --to python "notebook_with_parameters_removed.ipynb" ``` To remove the commented lines and the input statement markers (like # [1]), I believe you'll need to post-process the Python file with something like the following in the cell after the cell you call !jupyter nbconvert from (note that this is Python 3 code): ``` import re from pathlib import Path filename = Path.cwd() / 'notebook_with_parameters_removed.py' code_text = filename.read_text().split('\n') lines = [line for line in code_text if len(line) == 0 or (line[0] != '#' and 'get_ipython()' not in line)] clean_code = '\n'.join(lines) clean_code = re.sub(r'\n{2,}', '\n\n', clean_code) filename.write_text(clean_code.strip()) ```
The most obvious solution seems to work for me: ```sh jupyter nbconvert --to python a_notebook.ipynb --stdout | grep -v -e "^get_ipython" | python ``` Of course, you can't use something like `dirs = !ls` in your notebooks for this to work.
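As an editorial aside to the record above: the post-processing step both answers rely on can be packaged as a small stdlib-only helper. This is a sketch, not the answers' exact code; the function name `clean_converted_script` is invented here, and like the answer's snippet it is deliberately naive (it also strips legitimate `#` comments).

```python
import re

def clean_converted_script(source: str) -> str:
    """Strip '#' comment lines and get_ipython() magic shims
    from the output of `jupyter nbconvert --to script`."""
    kept = [
        line
        for line in source.splitlines()
        if not line.startswith("#") and "get_ipython()" not in line
    ]
    text = "\n".join(kept)
    # collapse the runs of blank lines left behind by the removals
    return re.sub(r"\n{3,}", "\n\n", text).strip()

if __name__ == "__main__":
    demo = (
        "# coding: utf-8\n\n# In[27]:\n\n"
        "get_ipython().magic('matplotlib inline')\nprint('hello')\n"
    )
    print(clean_converted_script(demo))
```

One could run this over the converted `.py` file (e.g. via `pathlib.Path.read_text` / `write_text`) instead of the in-notebook cell shown in the first answer.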
29,578,217
I made sure to try installing PyQt4 on Mac in many different ways, but I always get the error above. My attempts have in common installing Python 3.4 from the official website installer, then installing Qt4 from [here](https://download.qt.io/archive/qt/4.8/4.8.6/) and finally installing SIP from the package available on the Riverbank website. I've already tried to install PyQt4 by running configure-ng.py, configure.py without options, and configure.py with a reasonable number of different option combinations (out of empiricism/desperation), but I know the "pattern" for the options would be the "-q" option to indicate the qmake path, the "-d" option for the python path and "--use-arch x86\_64" to indicate, I guess, the machine architecture (I made sure to use "uname -a", something like that, to check if I really should use "x86\_64"). Simply nothing worked! After all that trouble, I tried to install SIP and PyQt4 on Python 2.7 and lastly I tried to use Homebrew to install all that stuff. Again it didn't work. Does anyone have an idea what could fix the problem? (Unfortunately I only have access to a Mac once a week, so I cannot test your solutions immediately.)
2015/04/11
[ "https://Stackoverflow.com/questions/29578217", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2157820/" ]
If you look at what Google has to say, there are several references to that problem. I see you are from Brasil so maybe this is your problem: <https://github.com/thoughtbot/capybara-webkit/issues/291> (which refers to: <https://github.com/thoughtbot/capybara-webkit/issues/224>) Also: * <https://github.com/thoughtbot/capybara-webkit/issues/682> * <https://github.com/thoughtbot/capybara-webkit/issues/291> * <https://trac.macports.org/ticket/22924>
I had the same problem using GCC installed via MacPorts (tested several versions up to gcc5). The solution for me was using g++ supplied with the XCode command line tools. I uninstalled all MacPorts GCC versions. Below version details of the g++ command that worked. ``` $ g++ --version Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1 Apple LLVM version 6.0 (clang-600.0.51) (based on LLVM 3.5svn) Target: x86_64-apple-darwin13.4.0 Thread model: posix ``` A similar question was asked here: [QT Creator adds -Xarch](https://stackoverflow.com/q/16701355/3922554)
56,387,349
I'm using Django Rest Framework and want to be able to delete a Content instance via `DELETE` to `/api/content/<int:pk>/`. I *don't* want to implement any method to respond to `GET` requests. When I include a `.retrieve()` method as follows, the `DELETE` request **works**: ``` class ContentViewSet(GenericViewSet): def get_queryset(self): return Content.objects.filter(user=self.request.user) def retrieve(self, request, pk=None): pass #this works, but I don't want .retrieve() at all def delete(self, request, pk=None): content = self.get_object() #look up some info here content.delete() return Response('return some info') ``` If I replace `.retrieve()` with `RetrieveModelMixin` it also works. However, if I remove both of these, which is what I want to do, I get the following error. > > django.urls.exceptions.NoReverseMatch: Reverse for 'content-detail' not found. 'content-detail' is not a valid view function or pattern name. > > > I haven't tested, but I assume the same thing would happen with `PUT` and `PATCH`. My questions are: 1. How can I allow `DELETE` without implementing a `.retrieve()` method, and 2. Why can't DRF create the urlconf without `.retrieve()` implemented? **UPDATE: Failing test and complete error traceback caused by removing `.retrieve()` method** ``` from rest_framework.test import APITestCase, APIClient from myapp.models import Content class ContentTestCase(APITestCase): def setUp(self): self.content = Content.objects.create(title='New content') self.client = APIClient() def test_DELETE_content(self): url = reverse('content-detail', kwargs={'pk':self.content.pk}) response = self.client.delete(url) self.assertEqual(response.status_code, 200) ``` Results in: ``` Traceback (most recent call last): File "myproject/myapp/tests.py", line 548, in test_DELETE_content url = reverse('content-detail', kwargs={'pk':self.content}) File "python3.6/site-packages/rest_framework/reverse.py", line 50, in reverse url = _reverse(viewname, args, kwargs, request, format, **extra) File "python3.6/site-packages/rest_framework/reverse.py", line 63, in _reverse url = django_reverse(viewname, args=args, kwargs=kwargs, **extra) File "python3.6/site-packages/django/urls/base.py", line 90, in reverse return iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs)) File "python3.6/site-packages/django/urls/resolvers.py", line 636, in _reverse_with_prefix raise NoReverseMatch(msg) django.urls.exceptions.NoReverseMatch: Reverse for 'content-detail' not found. 'content-detail' is not a valid view function or pattern name. ```
2019/05/31
[ "https://Stackoverflow.com/questions/56387349", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4400877/" ]
> > 1. How can I allow DELETE without implementing a `.retrieve()` method? > > > Just remove the **`retrieve()`** method from the view class. That is, the [**`GenericViewSet`**](https://www.django-rest-framework.org/api-guide/viewsets/#genericviewset) doesn't provide any ***HTTP Actions*** unless they are defined in your class. So, the following will be your code snippet, ``` class ContentViewSet(GenericViewSet): def get_queryset(self): return Content.objects.filter(user=self.request.user) def delete(self, request, pk=None): content = self.get_object() # look up some info here content.delete() return Response('return some info') ``` or you could use **`mixin classes`** here, ``` from rest_framework.mixins import DestroyModelMixin class ContentViewSet(DestroyModelMixin, GenericViewSet): def get_queryset(self): return Content.objects.filter(user=self.request.user) ``` --- > > 2. Why can't DRF create the urlconf without `.retrieve()` implemented? > > > I'm not sure how you've defined your URLs. When I tried with the ***DRF Router***, it only creates the URL conf for defined actions. You've got **`GET`** and **`DELETE`** actions on your end-point because you'd defined the **`retrieve()`** method in your view class. Hope this helps :)
My solution for part 1. is to include the mixin but restrict the `http_method_names`: ``` class ContentViewSet(RetrieveModelMixin, GenericViewSet): http_method_names = ['delete'] ... ``` However, I still don't know why I have to include `RetrieveModelMixin` at all.
56,387,349
I'm using Django Rest Framework and want to be able to delete a Content instance via `DELETE` to `/api/content/<int:pk>/`. I *don't* want to implement any method to respond to `GET` requests. When I include a `.retrieve()` method as follows, the `DELETE` request **works**: ``` class ContentViewSet(GenericViewSet): def get_queryset(self): return Content.objects.filter(user=self.request.user) def retrieve(self, request, pk=None): pass #this works, but I don't want .retrieve() at all def delete(self, request, pk=None): content = self.get_object() #look up some info here content.delete() return Response('return some info') ``` If I replace `.retrieve()` with `RetrieveModelMixin` it also works. However, if I remove both of these, which is what I want to do, I get the following error. > > django.urls.exceptions.NoReverseMatch: Reverse for 'content-detail' not found. 'content-detail' is not a valid view function or pattern name. > > > I haven't tested, but I assume the same thing would happen with `PUT` and `PATCH`. My questions are: 1. How can I allow `DELETE` without implementing a `.retrieve()` method, and 2. Why can't DRF create the urlconf without `.retrieve()` implemented? **UPDATE: Failing test and complete error traceback caused by removing `.retrieve()` method** ``` from rest_framework.test import APITestCase, APIClient from myapp.models import Content class ContentTestCase(APITestCase): def setUp(self): self.content = Content.objects.create(title='New content') self.client = APIClient() def test_DELETE_content(self): url = reverse('content-detail', kwargs={'pk':self.content.pk}) response = self.client.delete(url) self.assertEqual(response.status_code, 200) ``` Results in: ``` Traceback (most recent call last): File "myproject/myapp/tests.py", line 548, in test_DELETE_content url = reverse('content-detail', kwargs={'pk':self.content}) File "python3.6/site-packages/rest_framework/reverse.py", line 50, in reverse url = _reverse(viewname, args, kwargs, request, format, **extra) File "python3.6/site-packages/rest_framework/reverse.py", line 63, in _reverse url = django_reverse(viewname, args=args, kwargs=kwargs, **extra) File "python3.6/site-packages/django/urls/base.py", line 90, in reverse return iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs)) File "python3.6/site-packages/django/urls/resolvers.py", line 636, in _reverse_with_prefix raise NoReverseMatch(msg) django.urls.exceptions.NoReverseMatch: Reverse for 'content-detail' not found. 'content-detail' is not a valid view function or pattern name. ```
2019/05/31
[ "https://Stackoverflow.com/questions/56387349", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4400877/" ]
Wild guess here but did you use a `SimpleRouter` or a `DefaultRouter` to build your `urlpatterns`? If so, that's your problem. The router uses a viewset and expects to have all methods implemented. More info [here](https://www.django-rest-framework.org/api-guide/routers/#using-include-with-routers) What you can do is just add your `url` to `urlpatterns` like you would normally do on django using the `.as_view()` method.
My solution for part 1. is to include the mixin but restrict the `http_method_names`: ``` class ContentViewSet(RetrieveModelMixin, GenericViewSet): http_method_names = ['delete'] ... ``` However, I still don't know why I have to include `RetrieveModelMixin` at all.
64,784,079
In an attempt to create a JWT in python I have written the following code. ``` #Header header = str({"alg": "RS256"}) header_binary = header.encode() header_base64 = base64.urlsafe_b64encode(header_binary) print(header_base64) #Claims set (Pay Load) client_id="" username="" URL="" exp_time=str(round(time.time())+300) claims = str({"iss": client_id, "sub": username, "aud": URL, "exp": exp_time}) claims_binary = claims.encode() claims_base64 = base64.urlsafe_b64encode(claims_binary) print(claims_base64) ``` I understand there is still more to do but I have the following problem. If I concatenate the two strings created above with a "." and put the resulting string in a JWT debugger it seems the claims set works perfectly but the same cannot be said for the header. Please advise if this is the correct way to go about doing this and what my errors are.
2020/11/11
[ "https://Stackoverflow.com/questions/64784079", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14233404/" ]
Directly using LTPA in Tomcat is not possible unless you use 3rd party token services. The better way to have SSO experience between WebSphere and Tomcat is to use Windows ADFS as SSO server instead of LDAP. You can setup ADFS as either SAML identity provider and OpenID connect provider, and setup WebSphere and Tomcat as SAML or OIDC relying party to ADFS. ADFS server will be your SSO server, and user only need login to ADFS once, and will be authenticated to both WebSphere and Tomcat automatically. If you can not use ADFS, you could setup one dedicated WebSphere Liberty server as OpenID Connect server (which can be configured to use Windows LDAP as user registry), and use the OpenID Connect server as SSO server for both WebSphere and Tomcat. Similar to ADFS case, user is only required to login to Liberty once, and will be login to both WebSphere and Tomcat automatically. Note here Liberty OIDC server plays role to translate LTPA token to OIDC token which can be consumed by Tomcat.
The other, simpler approach would be just to use Open Liberty instead of Tomcat, which I suggested in another thread. There is usually no benefit to using Tomcat over Open Liberty, and the LTPA token will work via configuration in Liberty and can integrate with any older WebSpheres you have in your environment.
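Neither response attached to this record actually addresses the JWT question, so as a hedged aside: one likely problem in the question's code is that `str({...})` produces a single-quoted Python repr rather than JSON, which JWT debuggers reject. Below is a stdlib sketch of the standard base64url encoding (the `=` padding is stripped, RFC 7515 style); signing the result with the RS256 key is still left out, and the helper names are invented here.

```python
import base64
import json
import time

def b64url(data: bytes) -> str:
    """Base64url-encode without the trailing '=' padding (RFC 7515 style)."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def jwt_signing_input(header: dict, claims: dict) -> str:
    """Build the '<header>.<claims>' signing input; the RS256 signature
    over this string still has to be appended as a third segment."""
    return ".".join(
        b64url(json.dumps(part, separators=(",", ":")).encode("utf-8"))
        for part in (header, claims)
    )

if __name__ == "__main__":
    header = {"alg": "RS256"}
    claims = {"iss": "client_id", "sub": "username", "aud": "URL",
              "exp": int(time.time()) + 300}
    print(jwt_signing_input(header, claims))
```

Note also that `exp` is conventionally a JSON number, not a string as in the question's `str(round(time.time())+300)`.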
58,843,848
What I need is simple: a piece of code that will receive a GET request, process some data, and then generate a response. I'm completely new to python web development, so I've decided to use DRF for this purpose because it seemed like the most robust solution, but every example I found online consisted of CRUDs with models and views, and I figure what I need is something more simple (since the front-end is already created). Could anyone provide an example on how to do this on DRF? (or even some other viable solution, having in mind that it needs to be robust enough to take on multiple requests at the same time in production)
2019/11/13
[ "https://Stackoverflow.com/questions/58843848", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7742448/" ]
A simple way to do what you want is to use Django REST Framework's `APIView` (or the `@api_view` decorator). Here is an example of it in the docs: <https://www.django-rest-framework.org/api-guide/views/>. Besides the code on that page, you would need to register your view on an appropriate route, which is covered here: <https://www.django-rest-framework.org/api-guide/routers/>
Django and Django REST Framework are pretty heavy products out-of-the-box. If you want something more lightweight that can handle many incoming requests, you could create a simple Express server using Node.js. This would result in very few lines of code on your end. Sample Node server: ``` var express = require('express') var app = express() app.get('/', (req, res) => { res.send('hello world') }); app.listen(8000); ```
58,843,848
What I need is simple: a piece of code that will receive a GET request, process some data, and then generate a response. I'm completely new to python web development, so I've decided to use DRF for this purpose because it seemed like the most robust solution, but every example I found online consisted of CRUDs with models and views, and I figure what I need is something more simple (since the front-end is already created). Could anyone provide an example on how to do this on DRF? (or even some other viable solution, having in mind that it needs to be robust enough to take on multiple requests at the same time in production)
2019/11/13
[ "https://Stackoverflow.com/questions/58843848", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7742448/" ]
A simple way to do what you want is to use Django REST Framework's `APIView` (or the `@api_view` decorator). Here is an example of it in the docs: <https://www.django-rest-framework.org/api-guide/views/>. Besides the code on that page, you would need to register your view on an appropriate route, which is covered here: <https://www.django-rest-framework.org/api-guide/routers/>
For DRF: <https://www.django-rest-framework.org/tutorial/quickstart/> Another viable option: Flask: <https://flask.palletsprojects.com/en/1.1.x/quickstart/>
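Since the answers for this record suggest lightweight servers (Express, Flask), here is a stdlib-only sketch of the same idea: receive a GET, process some data, return a response, using `http.server`. The handler and helper names are invented, and for real production traffic you would still front this with a proper WSGI/ASGI stack.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import urlopen

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # "process some data": here we just echo the request path back as JSON
        payload = json.dumps({"path": self.path, "ok": True}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, fmt, *args):
        pass  # keep request logging quiet

def fetch_once(path):
    """Serve on an ephemeral port, issue one GET against it, shut down."""
    server = ThreadingHTTPServer(("127.0.0.1", 0), EchoHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        with urlopen(f"http://127.0.0.1:{server.server_port}{path}") as resp:
            return json.loads(resp.read().decode("utf-8"))
    finally:
        server.shutdown()
        server.server_close()

if __name__ == "__main__":
    print(fetch_once("/hello"))
```

`ThreadingHTTPServer` handles each request on its own thread, which covers the "multiple requests at the same time" requirement at a basic level.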
28,778,843
I apologize for my noobiness in java, but I am trying to make a very basic app for someone for their birthday and have only really done any programming in python. I have been trying to implement the code found in [android - how to make a button click play a sound file every time it been pressed?](https://stackoverflow.com/questions/19464782/android-how-to-make-a-button-click-play-a-sound-file-every-time-it-been-presse) and am having trouble. I have placed the assets folder in the main directory, the src directory, and the app directory to see if any helped and am still getting the error. Every time I attempt to run the program I get the following error ``` 02-27 23:06:48.896 25643-25643/com.app.bdking.mineturtle W/System.err﹕ java.io.FileNotFoundException: hello.mp3 ``` Here is my main activity ``` package com.app.bdking.mineturtle; import android.content.res.AssetFileDescriptor; import android.media.MediaPlayer; import android.os.Bundle; import android.app.Activity; import android.view.Menu; import android.view.View; import android.widget.Button; import java.io.IOException; public class MainActivity extends Activity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); final MediaPlayer mp = new MediaPlayer(); Button b = (Button) findViewById(R.id.button1); b.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { if(mp.isPlaying()) { mp.stop(); } try { mp.reset(); AssetFileDescriptor afd; afd = getAssets().openFd("hello.mp3"); mp.setDataSource(afd.getFileDescriptor(),afd.getStartOffset(),afd.getLength()); mp.prepare(); mp.start(); } catch (IllegalStateException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } } }); } @Override public boolean onCreateOptionsMenu(Menu menu) { // Inflate the menu; this adds items to the action bar if it is present. getMenuInflater().inflate(R.menu.main, menu); return true; } } ``` main xml is the same as on the aforementioned post. Thanks in advance.
2015/02/28
[ "https://Stackoverflow.com/questions/28778843", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4616988/" ]
You just need to put your file in the res/raw folder. Note that `MediaPlayer.create()` already calls `prepare()` for you, so calling `prepare()` again would throw an `IllegalStateException`: ``` public void onClick(View v) { if (mp != null && mp.isPlaying()) { mp.stop(); mp.release(); } mp = MediaPlayer.create(MainActivity.this, R.raw.hello); mp.start(); } ```
You can do it in another way. Put the .mp3 files under res/raw folder and use the following code: ``` MediaPlayer mediaPlayer = MediaPlayer.create(getApplicationContext(), R.raw.android); mediaPlayer.start(); ``` Refer this [link](http://www.tutorialspoint.com/android/android_mediaplayer.htm) for better example to understand
51,115,825
I have a numpy array and two python lists of indexes giving the positions at which to increase the array's elements by one. Does numpy have a method to vectorize this operation without the use of `for` loops? My current slow implementation: ``` a = np.zeros([4,5]) xs = [1,1,1,3] ys = [2,2,3,0] for x,y in zip(xs,ys): # how to do it in numpy way (efficiently)? a[x,y] += 1 print(a) ``` Output: ``` [[0. 0. 0. 0. 0.] [0. 0. 2. 1. 0.] [0. 0. 0. 0. 0.] [1. 0. 0. 0. 0.]] ```
2018/06/30
[ "https://Stackoverflow.com/questions/51115825", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
`np.add.at` will do just that: pass both index lists together as a single index: ``` a = np.zeros([4,5]) xs = [1, 1, 1, 3] ys = [2, 2, 3, 0] np.add.at(a, [xs, ys], 1) # in-place print(a) array([[0., 0., 0., 0., 0.], [0., 0., 2., 1., 0.], [0., 0., 0., 0., 0.], [1., 0., 0., 0., 0.]]) ```
``` >>> a = np.zeros([4,5]) >>> xs = [1, 1, 1, 3] >>> ys = [2, 2, 3, 0] >>> a[[xs,ys]] += 1 >>> a array([[ 0., 0., 0., 0., 0.], [ 0., 0., 1., 1., 0.], [ 0., 0., 0., 0., 0.], [ 1., 0., 0., 0., 0.]]) ```
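A pure-Python rendering of why the accepted `np.add.at` answer matters: fancy-index assignment (`a[[xs, ys]] += 1`) buffers its writes, so the repeated `(1, 2)` pair counts only once (as the second answer's output shows), whereas unbuffered accumulation counts every occurrence. This sketch needs no NumPy; the function name `add_at` is invented.

```python
from collections import Counter

def add_at(shape, xs, ys):
    """Unbuffered accumulation: every (x, y) pair increments its cell,
    duplicates included, mirroring the question's explicit loop."""
    rows, cols = shape
    counts = Counter(zip(xs, ys))
    return [[counts.get((r, c), 0) for c in range(cols)] for r in range(rows)]

if __name__ == "__main__":
    grid = add_at((4, 5), [1, 1, 1, 3], [2, 2, 3, 0])
    for row in grid:
        print(row)
```

With the record's sample indexes, cell `(1, 2)` ends up at 2, matching the question's expected output rather than the buffered result shown in the second answer.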
74,033,201
I have a python dictionary whose keys and values are actually image indexes. For example, the dictionary I have looks something like the one given below ``` { 1: [1, 2, 3], 2: [1, 2, 3], 3: [1, 2, 3], 4: [4, 5], 5: [4, 5], 6: [6] } ``` this means that 1 is related to 1, 2 & 3. Similarly 2 is related to 1, 2 & 3. 6 has only 6 in it so it's related to none of the other elements. This is leading me to perform extra operations in my programs. I want the dictionary to be filtered and look like ``` # Preferred output { 1: [2, 3], 4: [5], 6: [] } ``` So far I have tried ``` new_dict = dict() # source is the original dictionary for k,v in source.items(): ok = True for k1,v1 in new_dict.items(): if k in v1: ok = False if ok: new_dict[k] = v ``` This modifies the dictionary to ``` {1: [1, 2, 3], 4: [4, 5], 6: [6]} ``` I am looking for a more efficient and pythonic way to solve this problem.
2022/10/11
[ "https://Stackoverflow.com/questions/74033201", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7483151/" ]
How about this: use my modification below to remove the first item in the list during each loop. I commented the line which does it. ``` new_dict = dict() # source is the original dictionary for k,v in source.items(): ok = True for k1,v1 in new_dict.items(): if k in v1: ok = False if ok: new_dict[k] = v[1:] #this line removes the first element. ``` My output is: ``` {1: [2, 3], 4: [5], 6: []} ```
your modified code: **ver 1:** ``` new_dict = dict() for k, v in source.items(): if not any(k in v1 for v1 in new_dict.values()): new_dict[k] = v[1:] ``` **ver 2:** ``` tmp = dict() for k, v in source.items(): tmp[tuple(v)] = tmp.get(tuple(v), []) + [k] res = dict() for k, v in tmp.items(): res[v[0]] = v[1:] ``` **ver 3:** ``` new_dict = {v[0]: list(v[1:]) for v in set(map(tuple, source.values()))} ```
74,033,201
I have a python dictionary whose keys and values are actually image indexes. For example, the dictionary I have looks something like the one given below ``` { 1: [1, 2, 3], 2: [1, 2, 3], 3: [1, 2, 3], 4: [4, 5], 5: [4, 5], 6: [6] } ``` this means that 1 is related to 1, 2 & 3. Similarly 2 is related to 1, 2 & 3. 6 has only 6 in it so it's related to none of the other elements. This is leading me to perform extra operations in my programs. I want the dictionary to be filtered and look like ``` # Preferred output { 1: [2, 3], 4: [5], 6: [] } ``` So far I have tried ``` new_dict = dict() # source is the original dictionary for k,v in source.items(): ok = True for k1,v1 in new_dict.items(): if k in v1: ok = False if ok: new_dict[k] = v ``` This modifies the dictionary to ``` {1: [1, 2, 3], 4: [4, 5], 6: [6]} ``` I am looking for a more efficient and pythonic way to solve this problem.
2022/10/11
[ "https://Stackoverflow.com/questions/74033201", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7483151/" ]
This shows the preferred output: ``` source = { 1: [1, 2, 3], 2: [1, 2, 3], 3: [1, 2, 3], 4: [4, 5], 5: [4, 5], 6: [6] } new_dict = {} seen = set() for k, v in source.items(): if k in seen: continue seen.add(k) new_v = [i for i in v if i not in seen] seen.update(new_v) new_dict[k] = new_v print(new_dict) ```
your modified code: **ver 1:** ``` new_dict = dict() for k, v in source.items(): if not any(k in v1 for v1 in new_dict.values()): new_dict[k] = v[1:] ``` **ver 2:** ``` tmp = dict() for k, v in source.items(): tmp[tuple(v)] = tmp.get(tuple(v), []) + [k] res = dict() for k, v in tmp.items(): res[v[0]] = v[1:] ``` **ver 3:** ``` new_dict = {v[0]: list(v[1:]) for v in set(map(tuple, source.values()))} ```
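The seen-set approach from the accepted answer above, packaged as a self-contained function (the name `dedupe_groups` is invented) and run on the record's sample data:

```python
def dedupe_groups(source: dict) -> dict:
    """Keep one representative key per group; its value lists the
    remaining members of that group, the key itself excluded."""
    result, seen = {}, set()
    for key, members in source.items():
        if key in seen:          # already covered by an earlier group
            continue
        seen.add(key)
        rest = [m for m in members if m not in seen]
        seen.update(rest)
        result[key] = rest
    return result

if __name__ == "__main__":
    source = {1: [1, 2, 3], 2: [1, 2, 3], 3: [1, 2, 3],
              4: [4, 5], 5: [4, 5], 6: [6]}
    print(dedupe_groups(source))  # → {1: [2, 3], 4: [5], 6: []}
```

Unlike the `v[1:]` trick in the earlier answer, this does not assume each value list starts with its own key.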
37,584,629
I am trying to connect to an AWS Redshift server via SSL. I am using the psycopg2 library in python to establish the connection and used `sslmode='require'` as a parameter in the connect line. Unfortunately I got this error: ``` sslmode value "require" invalid when SSL support is not compiled in ``` I read many other similar cases for PostgreSQL which mention the problem exists with the PostgreSQL version, but I didn't find any solutions for Redshift using Psycopg2. Do I need to install any specific SSL certificate for Redshift? If yes, how do I do it with Psycopg2? Any help will be appreciated.
2016/06/02
[ "https://Stackoverflow.com/questions/37584629", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5610841/" ]
This worked for me: <https://stackoverflow.com/a/36489939/101266> > > had this same error, which turned out to be because I was using the Anaconda version of psycopg2. To fix it, I had adapt VictorF's solution from here and run: > > > ``` conda uninstall psycopg2 sudo ln -s /Users/YOURUSERNAME/anaconda/lib/libssl.1.0.0.dylib /usr/local/lib sudo ln -s /Users/YOURUSERNAME/anaconda/lib/libcrypto.1.0.0.dylib /usr/local/lib pip install psycopg2 ```
Is your cluster enabled for SSL connections? Your URL itself will have the SSL info, and you can use the same in your code.
16,946,051
I have a log file with an arbitrary number of lines. All I need to extract is one line of data from the log file which starts with the string “Total”. I do not want any other lines from the file. How do I write a simple python program for this? This is how my input file looks ``` TestName id eno TPS GRE FNP Test 1205 1 0 78.00 0.00 0.02 Test 1206 1 0 45.00 0.00 0.02 Test 1207 1 0 73400 0.00 0.02 Test 1208 1 0 34.00 0.00 0.02 Totals 64 0 129.61 145.64 1.12 ``` I am trying to get an output file which looks like ``` TestName id TPS GRE Totals 64 129.61 145.64 ``` Ok.. So I wanted only the 1st, 2nd, 4th and 5th column from the input file but not the others. I am trying list[index] to achieve this but getting an IndexError: (list index out of range). Also the spacing between columns is not the same so I am not sure how to split the columns and select the ones that I want. Can somebody please help me with this? Below is the program I used ``` newFile = open('sana.log','r') for line in newFile.readlines(): if ('TestName' in line) or ('Totals' in line): data = line.split('\t') print data[0]+data[1] ```
2013/06/05
[ "https://Stackoverflow.com/questions/16946051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2180817/" ]
``` theFile = open('thefile.txt','r') FILE = theFile.readlines() theFile.close() printList = [] for line in FILE: if ('TestName' in line) or ('Totals' in line): # here you may want to do some splitting/concatenation/formatting to your string printList.append(line) for item in printList: print item # or write it to another file... or whatever ```
``` for line in open('filename.txt', 'r'): if line.startswith('TestName') or line.startswith('Totals'): fields = line.rsplit(None, 5) print '\t'.join(fields[:2] + fields[3:5]) ```
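A consolidated sketch of what both answers gesture at: keep only the header and `Totals` lines, then select the 1st, 2nd, 4th and 5th columns (indices 0, 1, 3, 4 after a whitespace split). The function name is invented, and tab-joining matches the asker's attempted `split('\t')` output style.

```python
def extract_totals(log_text: str) -> list:
    """Return 'TestName id TPS GRE'-style rows for the header
    and Totals lines of the log, dropping the eno and FNP columns."""
    rows = []
    for line in log_text.splitlines():
        if line.startswith(("TestName", "Totals")):
            fields = line.split()  # handles uneven spacing between columns
            rows.append("\t".join(fields[i] for i in (0, 1, 3, 4)))
    return rows

if __name__ == "__main__":
    sample = (
        "TestName id eno TPS GRE FNP\n"
        "Test 1205 1 0 78.00 0.00 0.02\n"
        "Totals 64 0 129.61 145.64 1.12\n"
    )
    print("\n".join(extract_totals(sample)))
```

Plain `str.split()` with no argument collapses runs of spaces, which is what makes the uneven column spacing mentioned in the question a non-issue.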
34,254,594
I am trying to emulate a piano in python using mingus as suggested in [this question](https://stackoverflow.com/questions/6487180/synthesize-musical-notes-with-piano-sounds-in-python/ "this question"). I am running Ubuntu 14.04, and have already created an audio group and added myself to it. I am using alsa. I ran the code given in one of the answers to the aforementioned question and it ran fine in shell mode. However, when I wrote a python script and tried to run it, I did not get any sound whatsoever. Here is my code: ``` #!/usr/bin/env python from mingus.midi import fluidsynth DEF_FONT_PATH = '/usr/share/sounds/sf2/FluidR3_GM.sf2' def main(): fluidsynth.init(DEF_FONT_PATH, 'alsa') fluidsynth.play_Note(80, 0, 80) if __name__ == '__main__': main() ``` I have checked many other answers, and I cannot seem to find a solution.
2015/12/13
[ "https://Stackoverflow.com/questions/34254594", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2951705/" ]
The data should be passed like this :- ``` $('#event_delete').on('click', function () { $('#calendar').fullCalendar('removeEvents', calEvent.id); $.ajax({ data: {id: calEvent.id}, type: "POST", url: "http://localhost/book/js/delete_events.php" }); alert (calEvent.id); CloseModalBox(); }); ```
Instead of reloading the entire page you can call $('#calendar').fullCalendar( 'refetchEvents' ). It will get all the events from your database and rerender them.
63,687,990
I have a `setup.py` which contains the following: ``` from pip._internal.req import parse_requirements def load_requirements(fname): """Turn requirements.txt into a list""" reqs = parse_requirements(fname, session="test") return [str(ir.requirement) for ir in reqs] setup( name="Projectname", [...] python_requires='>=3.6', extras_require={ 'dev': load_requirements('./requirements/dev.txt') }, install_requires=load_requirements('./requirements/prod.txt') ) ``` My `./requirements/prod.txt` looks like this: ``` -r common.txt ``` and my `./requirements/dev.txt` is similar but with some development specific packages. My `./requirements/common.txt` contains a line to pip-install a package from a github link, like: ``` -e git://github.com/BioGeek/tta_wrapper.git@master#egg=tta_wrapper ``` However, since I added that line, the command `python setup.py build` fails with: ``` error in Projectname setup command: 'extras_require' must be a dictionary whose values are strings or lists of strings containing valid project/version requirement specifiers. ``` Versions of relevant packages: ``` pip 20.2.2 setuptools 50.0.0 ``` How do I modify my `setup.py` or my requirements files to fix this? **Edit** After modifying my `setup.py` as shown in [the answer](https://stackoverflow.com/a/63688209/50065) of [Martijn Pieters](https://stackoverflow.com/users/100297/martijn-pieters), I can confirm that `load_requirements` now turns my requirements files into a list with name@ url direct reference syntax where needed. ``` >>> load_requirements('./requirements/prod.txt') ['absl-py==0.8.1', 'GitPython==3.1.0', 'numpy==1.18.4', 'pip==20.2.2', 'protobuf==3.12.0', 'setuptools==41.0.0', 'scikit_learn==0.22', 'tensorflow_hub==0.8.0', 'importlib-metadata==1.6.1', 'keras-tuner==1.0.1', 'apache-beam==2.23.0', 'ml-metadata==0.23.0', 'pyarrow==0.17.0', 'tensorflow==2.3.0', 'tensorflow-data-validation==0.23.0', 'tensorflow-metadata==0.23.0', 'tensorflow-model-analysis==0.23.0', 'tensorflow-transform==0.23.0', 'tta_wrapper @ git://github.com/BioGeek/tta_wrapper.git@master'] ``` However, now I get the following error when I run `python setup.py build`: ``` $ python setup.py build /home/biogeek/code/programname/env/lib/python3.6/site-packages/_distutils_hack/__init__.py:30: UserWarning: Setuptools is replacing distutils. warnings.warn("Setuptools is replacing distutils.") running build Traceback (most recent call last): File "setup.py", line 91, in <module> install_requires=load_requirements('./requirements/prod.txt') File "/home/biogeek/code/programname/env/lib/python3.6/site-packages/setuptools/__init__.py", line 153, in setup return distutils.core.setup(**attrs) File "/home/biogeek/code/programname/env/lib/python3.6/site-packages/setuptools/_distutils/core.py", line 148, in setup dist.run_commands() File "/home/biogeek/code/programname/env/lib/python3.6/site-packages/setuptools/_distutils/dist.py", line 967, in run_commands self.run_command(cmd) File "/home/biogeek/code/programname/env/lib/python3.6/site-packages/setuptools/_distutils/dist.py", line 984, in run_command cmd_obj = self.get_command_obj(command) File "/home/biogeek/code/programname/env/lib/python3.6/site-packages/setuptools/_distutils/dist.py", line 859, in get_command_obj cmd_obj = self.command_obj[command] = klass(self) File "/usr/lib/python3.6/distutils/cmd.py", line 57, in __init__ raise TypeError("dist must be a Distribution instance") TypeError: dist must be a Distribution instance ``` **Edit 2** I finally made my installation succeed. I tried a few things, so not entirely sure what in the end resolved the issue, but I: * downgraded `setuptools` from `50.0.0` to `41.0.0` * put `setuptools` as the first line in my requirements file (see [here](https://github.com/pypa/setuptools/pull/784#issuecomment-248008457)) * added a crude, hacky one-off function to point to the zip archive with the name @ url syntax. ```py def _format_requirement(req): if str(req.requirement) == 'git://github.com/BioGeek/tta_wrapper.git@master#egg=tta_wrapper': return 'tta_wrapper @ https://github.com/BioGeek/tta_wrapper/archive/v0.0.1.zip' return str(req.requirement) ```
2020/09/01
[ "https://Stackoverflow.com/questions/63687990", "https://Stackoverflow.com", "https://Stackoverflow.com/users/50065/" ]
You can only use [PEP 508 - *Dependency specification for Python Software Packages*](https://www.python.org/dev/peps/pep-0508/) requirements. `git://github.com/BioGeek/tta_wrapper.git@master#egg=tta_wrapper` is not valid syntax according to that standard. `setuptools` does accept the [`name @ url` direct reference syntax](https://www.python.org/dev/peps/pep-0440/#direct-references): ```none tta_wrapper @ git://github.com/BioGeek/tta_wrapper.git ``` However, you can't put that in a requirements.txt file *and* use the `-e` switch. The latter can only take a VCS URL or a local file path, **not** a requirement specification; see the [*Requirements File Format* section](https://pip.pypa.io/en/stable/reference/pip_install/#requirements-file-format). So you have to translate between formats here. I'd check for the `is_editable` flag on the `ParsedRequirement` objects that `parse_requirements()` produces, and alter behaviour accordingly. You'd have to parse the requirement string as a URL, pull out the `#egg=` fragment and put that at the front: ``` from urllib.parse import urlparse def _format_requirement(req): if req.is_editable: # parse out egg=... fragment from VCS URL parsed = urlparse(req.requirement) egg_name = parsed.fragment.partition("egg=")[-1] without_fragment = parsed._replace(fragment="").geturl() return f"{egg_name} @ {without_fragment}" return req.requirement def load_requirements(fname): """Turn requirements.txt into a list""" reqs = parse_requirements(fname, session="test") return [_format_requirement(ir) for ir in reqs] ``` The above then turns `-e git:...#egg=tta_wrapper` into `tta_wrapper @ git:...`: ``` >>> load_requirements('./requirements/dev.txt') ['tta_wrapper @ git://github.com/BioGeek/tta_wrapper.git@master', 'black==20.08b1'] ```
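For a standalone illustration of that fragment-parsing step, here is a minimal sketch that operates on a plain requirement string rather than pip's `ParsedRequirement` objects; the helper name `to_direct_reference` is hypothetical, not part of pip or setuptools:

```python
from urllib.parse import urlparse

def to_direct_reference(vcs_url):
    """Turn a '-e' style VCS URL with an '#egg=' fragment into a
    PEP 440 'name @ url' direct reference."""
    parsed = urlparse(vcs_url)
    # the fragment looks like 'egg=tta_wrapper'; keep only the name
    egg_name = parsed.fragment.partition("egg=")[-1]
    # drop the fragment from the URL itself
    without_fragment = parsed._replace(fragment="").geturl()
    return f"{egg_name} @ {without_fragment}"

print(to_direct_reference(
    "git://github.com/BioGeek/tta_wrapper.git@master#egg=tta_wrapper"))
# tta_wrapper @ git://github.com/BioGeek/tta_wrapper.git@master
```

The same transformation applies to any `scheme://host/path#egg=name` string, since only the fragment and the remaining URL are needed.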
In my case there is no GitHub link in my requirements, but the line ``` -r common.txt ``` in `./requirements/prod.txt` caused the same error. I've added a stupid condition and now it works for me: ```py def load_requirements(filename) -> list: requirements = [] try: with open(filename) as req: requirements = [line for line in req.readlines() if line.strip() != "-r common.txt"] except Exception as e: print(e) return requirements ```
29,192,068
On a Windows 7 machine, PyCharm (Community or Professional) and Python 3.4 (I tried Anaconda 3 as well) were newly installed. There were no problems running Python scripts interactively in the main editor. However, when I tried to select *View > Tool Windows > Python Console*, it generated the following error messages and more. Basically, I couldn't bring up a console window in PyCharm. ``` C:\Users\user\Anaconda3\python.exe -u C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydevconsole.py 56743 56744 Traceback (most recent call last): File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydev_imports.py", line 21, in <module> from SimpleXMLRPCServer import SimpleXMLRPCServer ImportError: No module named 'SimpleXMLRPCServer' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydevconsole.py", line 20, in <module> import pydevd_vars File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydevd_vars.py", line 9, in <module> from pydevd_xml import * File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydevd_xml.py", line 7, in <module> from pydev_imports import quote File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydev_imports.py", line 23, in <module> from xmlrpc.server import SimpleXMLRPCServer File "C:\Users\user\Anaconda3\lib\xmlrpc\server.py", line 108, in <module> from http.server import BaseHTTPRequestHandler File "C:\Users\user\Anaconda3\lib\http\server.py", line 660, in <module> class SimpleHTTPRequestHandler(BaseHTTPRequestHandler): File "C:\Users\user\Anaconda3\lib\http\server.py", line 851, in SimpleHTTPRequestHandler mimetypes.init() # try to read system mime.types File "C:\Users\user\Anaconda3\lib\mimetypes.py", line 348, in init db.read_windows_registry() File "C:\Users\user\Anaconda3\lib\mimetypes.py", line 255, in read_windows_registry with _winreg.OpenKey(hkcr, subkeyname) as subkey: TypeError: OpenKey() argument 2 must be str without null characters or None, not str Process finished with exit code 1 Couldn't connect to console process. ``` -----------------these messages showed up in "Python Console"-------------
2015/03/22
[ "https://Stackoverflow.com/questions/29192068", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4018111/" ]
You need to change your working directory. Go to `File->Settings->Build, Execution, Deployment->Console->Python Console` and then change or provide a directory where you have read and write access in the `Working directory` box.
The configuring of PyCharm in the presence of various development configurations is a bit of a black art IMHO. The most effective mechanism I've found for pinning this down is to put random strings into the various settings dialogs (interpreters, consoles, tests, servers) and observe the command lines submitted to the interpreter VERY carefully. Hardly a satisfactory approach, but it will sort out what is going where and, to a certain degree, what affects what. The other thing that helps me is screenshots of the settings and testing dialogs of working installations. Again, a bit rough and ready, but it has got me up and running again after a long period of successful debugging followed by PyCharm amnesia.
29,192,068
On a Windows 7 machine, PyCharm (Community or Professional) and Python 3.4 (I tried Anaconda 3 as well) were newly installed. There were no problems running Python scripts interactively in the main editor. However, when I tried to select *View > Tool Windows > Python Console*, it generated the following error messages and more. Basically, I couldn't bring up a console window in PyCharm. ``` C:\Users\user\Anaconda3\python.exe -u C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydevconsole.py 56743 56744 Traceback (most recent call last): File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydev_imports.py", line 21, in <module> from SimpleXMLRPCServer import SimpleXMLRPCServer ImportError: No module named 'SimpleXMLRPCServer' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydevconsole.py", line 20, in <module> import pydevd_vars File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydevd_vars.py", line 9, in <module> from pydevd_xml import * File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydevd_xml.py", line 7, in <module> from pydev_imports import quote File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydev_imports.py", line 23, in <module> from xmlrpc.server import SimpleXMLRPCServer File "C:\Users\user\Anaconda3\lib\xmlrpc\server.py", line 108, in <module> from http.server import BaseHTTPRequestHandler File "C:\Users\user\Anaconda3\lib\http\server.py", line 660, in <module> class SimpleHTTPRequestHandler(BaseHTTPRequestHandler): File "C:\Users\user\Anaconda3\lib\http\server.py", line 851, in SimpleHTTPRequestHandler mimetypes.init() # try to read system mime.types File "C:\Users\user\Anaconda3\lib\mimetypes.py", line 348, in init db.read_windows_registry() File "C:\Users\user\Anaconda3\lib\mimetypes.py", line 255, in read_windows_registry with _winreg.OpenKey(hkcr, subkeyname) as subkey: TypeError: OpenKey() argument 2 must be str without null characters or None, not str Process finished with exit code 1 Couldn't connect to console process. ``` -----------------these messages showed up in "Python Console"-------------
2015/03/22
[ "https://Stackoverflow.com/questions/29192068", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4018111/" ]
The configuring of PyCharm in the presence of various development configurations is a bit of a black art IMHO. The most effective mechanism I've found for pinning this down is to put random strings into the various settings dialogs (interpreters, consoles, tests, servers) and observe the command lines submitted to the interpreter VERY carefully. Hardly a satisfactory approach, but it will sort out what is going where and, to a certain degree, what affects what. The other thing that helps me is screenshots of the settings and testing dialogs of working installations. Again, a bit rough and ready, but it has got me up and running again after a long period of successful debugging followed by PyCharm amnesia.
I had the same problem. I reinstalled Python and the default directories changed. Then I just refreshed the interpreter here: `File->Settings->Build, Execution, Deployment->Console->Python Console` **and here**: `File->Settings->Project: <YOUR_PROJECT>->Project Interpreter`. If you open new projects, the interpreter will need to be chosen again.
29,192,068
On a Windows 7 machine, PyCharm (Community or Professional) and Python 3.4 (I tried Anaconda 3 as well) were newly installed. There were no problems running Python scripts interactively in the main editor. However, when I tried to select *View > Tool Windows > Python Console*, it generated the following error messages and more. Basically, I couldn't bring up a console window in PyCharm. ``` C:\Users\user\Anaconda3\python.exe -u C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydevconsole.py 56743 56744 Traceback (most recent call last): File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydev_imports.py", line 21, in <module> from SimpleXMLRPCServer import SimpleXMLRPCServer ImportError: No module named 'SimpleXMLRPCServer' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydevconsole.py", line 20, in <module> import pydevd_vars File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydevd_vars.py", line 9, in <module> from pydevd_xml import * File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydevd_xml.py", line 7, in <module> from pydev_imports import quote File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydev_imports.py", line 23, in <module> from xmlrpc.server import SimpleXMLRPCServer File "C:\Users\user\Anaconda3\lib\xmlrpc\server.py", line 108, in <module> from http.server import BaseHTTPRequestHandler File "C:\Users\user\Anaconda3\lib\http\server.py", line 660, in <module> class SimpleHTTPRequestHandler(BaseHTTPRequestHandler): File "C:\Users\user\Anaconda3\lib\http\server.py", line 851, in SimpleHTTPRequestHandler mimetypes.init() # try to read system mime.types File "C:\Users\user\Anaconda3\lib\mimetypes.py", line 348, in init db.read_windows_registry() File "C:\Users\user\Anaconda3\lib\mimetypes.py", line 255, in read_windows_registry with _winreg.OpenKey(hkcr, subkeyname) as subkey: TypeError: OpenKey() argument 2 must be str without null characters or None, not str Process finished with exit code 1 Couldn't connect to console process. ``` -----------------these messages showed up in "Python Console"-------------
2015/03/22
[ "https://Stackoverflow.com/questions/29192068", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4018111/" ]
The configuring of PyCharm in the presence of various development configurations is a bit of a black art IMHO. The most effective mechanism I've found for pinning this down is to put random strings into the various settings dialogs (interpreters, consoles, tests, servers) and observe the command lines submitted to the interpreter VERY carefully. Hardly a satisfactory approach, but it will sort out what is going where and, to a certain degree, what affects what. The other thing that helps me is screenshots of the settings and testing dialogs of working installations. Again, a bit rough and ready, but it has got me up and running again after a long period of successful debugging followed by PyCharm amnesia.
I got it resolved by setting the interpreter in Preferences -> Project Interpreter.
29,192,068
On a Windows 7 machine, PyCharm (Community or Professional) and Python 3.4 (I tried Anaconda 3 as well) were newly installed. There were no problems running Python scripts interactively in the main editor. However, when I tried to select *View > Tool Windows > Python Console*, it generated the following error messages and more. Basically, I couldn't bring up a console window in PyCharm. ``` C:\Users\user\Anaconda3\python.exe -u C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydevconsole.py 56743 56744 Traceback (most recent call last): File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydev_imports.py", line 21, in <module> from SimpleXMLRPCServer import SimpleXMLRPCServer ImportError: No module named 'SimpleXMLRPCServer' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydevconsole.py", line 20, in <module> import pydevd_vars File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydevd_vars.py", line 9, in <module> from pydevd_xml import * File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydevd_xml.py", line 7, in <module> from pydev_imports import quote File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydev_imports.py", line 23, in <module> from xmlrpc.server import SimpleXMLRPCServer File "C:\Users\user\Anaconda3\lib\xmlrpc\server.py", line 108, in <module> from http.server import BaseHTTPRequestHandler File "C:\Users\user\Anaconda3\lib\http\server.py", line 660, in <module> class SimpleHTTPRequestHandler(BaseHTTPRequestHandler): File "C:\Users\user\Anaconda3\lib\http\server.py", line 851, in SimpleHTTPRequestHandler mimetypes.init() # try to read system mime.types File "C:\Users\user\Anaconda3\lib\mimetypes.py", line 348, in init db.read_windows_registry() File "C:\Users\user\Anaconda3\lib\mimetypes.py", line 255, in read_windows_registry with _winreg.OpenKey(hkcr, subkeyname) as subkey: TypeError: OpenKey() argument 2 must be str without null characters or None, not str Process finished with exit code 1 Couldn't connect to console process. ``` -----------------these messages showed up in "Python Console"-------------
2015/03/22
[ "https://Stackoverflow.com/questions/29192068", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4018111/" ]
You need to change your working directory. Go to `File->Settings->Build, Execution, Deployment->Console->Python Console` and then change or provide a directory where you have read and write access in the `Working directory` box.
I had the same problem. I reinstalled Python and the default directories changed. Then I just refreshed the interpreter here: `File->Settings->Build, Execution, Deployment->Console->Python Console` **and here**: `File->Settings->Project: <YOUR_PROJECT>->Project Interpreter`. If you open new projects, the interpreter will need to be chosen again.
29,192,068
On a Windows 7 machine, PyCharm (Community or Professional) and Python 3.4 (I tried Anaconda 3 as well) were newly installed. There were no problems running Python scripts interactively in the main editor. However, when I tried to select *View > Tool Windows > Python Console*, it generated the following error messages and more. Basically, I couldn't bring up a console window in PyCharm. ``` C:\Users\user\Anaconda3\python.exe -u C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydevconsole.py 56743 56744 Traceback (most recent call last): File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydev_imports.py", line 21, in <module> from SimpleXMLRPCServer import SimpleXMLRPCServer ImportError: No module named 'SimpleXMLRPCServer' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydevconsole.py", line 20, in <module> import pydevd_vars File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydevd_vars.py", line 9, in <module> from pydevd_xml import * File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydevd_xml.py", line 7, in <module> from pydev_imports import quote File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydev_imports.py", line 23, in <module> from xmlrpc.server import SimpleXMLRPCServer File "C:\Users\user\Anaconda3\lib\xmlrpc\server.py", line 108, in <module> from http.server import BaseHTTPRequestHandler File "C:\Users\user\Anaconda3\lib\http\server.py", line 660, in <module> class SimpleHTTPRequestHandler(BaseHTTPRequestHandler): File "C:\Users\user\Anaconda3\lib\http\server.py", line 851, in SimpleHTTPRequestHandler mimetypes.init() # try to read system mime.types File "C:\Users\user\Anaconda3\lib\mimetypes.py", line 348, in init db.read_windows_registry() File "C:\Users\user\Anaconda3\lib\mimetypes.py", line 255, in read_windows_registry with _winreg.OpenKey(hkcr, subkeyname) as subkey: TypeError: OpenKey() argument 2 must be str without null characters or None, not str Process finished with exit code 1 Couldn't connect to console process. ``` -----------------these messages showed up in "Python Console"-------------
2015/03/22
[ "https://Stackoverflow.com/questions/29192068", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4018111/" ]
You need to change your working directory. Go to `File->Settings->Build, Execution, Deployment->Console->Python Console` and then change or provide a directory where you have read and write access in the `Working directory` box.
I got it resolved by setting the interpreter in Preferences -> Project Interpreter.
11,782,147
I am trying to implement the algorithm found [here](http://www.m.cs.osakafu-u.ac.jp/cbdar2007/proceedings/papers/O1-1.pdf) in python with OpenCV. I am trying to implement the part of the algorithm that remove irrelevant edge boundaries based on the number of interior boundaries that they have. * If the current edge boundary has exactly one or two interior edge boundaries, the internal boundaries can be ignored * If the current edge boundary has more than two interior edge boundaries, it can be ignored I am having trouble determining the tree structure of the contours I have extracted from the image. My current source: ``` import cv2 # Load the image img = cv2.imread('test.png') cv2.copyMakeBorder(img, 50,50,50,50,cv2.BORDER_CONSTANT, img, (255,255,255)) # Split out each channel blue = cv2.split(img)[0] green = cv2.split(img)[1] red = cv2.split(img)[2] # Run canny edge detection on each channel blue_edges = cv2.Canny(blue, 1, 255) green_edges = cv2.Canny(green, 1, 255) red_edges = cv2.Canny(red, 1, 255) # Join edges back into image edges = blue_edges | green_edges | red_edges # Find the contours contours,hierarchy = cv2.findContours(edges.copy(),cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE) # For each contour, find the bounding rectangle and draw it for cnt in contours: x,y,w,h = cv2.boundingRect(cnt) cv2.rectangle(edges,(x,y),(x+w,y+h),(200,200,200),2) # Finally show the image cv2.imshow('img',edges) cv2.waitKey(0) cv2.destroyAllWindows() ``` I assumed that using `cv2.RETR_TREE` would give me a nice nested array of the contours but that doesn't seem to be the case. How do I retrieve the tree structure of my contours?
2012/08/02
[ "https://Stackoverflow.com/questions/11782147", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1128407/" ]
The main confusion here is probably the fact that the hierarchy returned is a numpy array with more dimensions than necessary. On top of that, it looks like the Python FindContours function returns a tuple that is a LIST of contours and an NDARRAY of the hierarchy... You can get a sensible array of hierarchy information that is more in line with the C docs by just taking `hierarchy[0]`. It would then be an appropriate shape to zip, for example, with the contours. Below is an example that will draw the outermost rectangles in green and the innermost rectangles in red on this image: ![enter image description here](https://i.stack.imgur.com/i7g0M.png) Output: ![enter image description here](https://i.stack.imgur.com/KwiGk.png) Note, by the way, that the wording in the OpenCV docs is a little ambiguous, but `hierarchyDataOfAContour[2]` describes the children of that contour (if it is negative, the contour has no children, i.e. it is an innermost contour), and `hierarchyDataOfAContour[3]` describes the parent of that contour (if it is negative, the contour has no parent, i.e. it is an outermost contour). Also note: I looked into implementing the algorithm that you referred to in the OCR paper, and I saw that FindContours was giving me a lot of repeats of near-identical contours. This would complicate the finding of "Edge Boxes" as the paper describes. This may be because the Canny thresholds were too low (note that I was playing around with them as described in the paper), but there may be some way to reduce that effect, or just look at the average deviation of the four corners of all the boxes and eliminate duplicates...
``` import cv2 import numpy # Load the image img = cv2.imread("/ContourTest.PNG") # Split out each channel blue, green, red = cv2.split(img) def medianCanny(img, thresh1, thresh2): median = numpy.median(img) img = cv2.Canny(img, int(thresh1 * median), int(thresh2 * median)) return img # Run canny edge detection on each channel blue_edges = medianCanny(blue, 0.2, 0.3) green_edges = medianCanny(green, 0.2, 0.3) red_edges = medianCanny(red, 0.2, 0.3) # Join edges back into image edges = blue_edges | green_edges | red_edges # Find the contours contours,hierarchy = cv2.findContours(edges, cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE) hierarchy = hierarchy[0] # get the actual inner list of hierarchy descriptions # For each contour, find the bounding rectangle and draw it for component in zip(contours, hierarchy): currentContour = component[0] currentHierarchy = component[1] x,y,w,h = cv2.boundingRect(currentContour) if currentHierarchy[2] < 0: # these are the innermost child components cv2.rectangle(img,(x,y),(x+w,y+h),(0,0,255),3) elif currentHierarchy[3] < 0: # these are the outermost parent components cv2.rectangle(img,(x,y),(x+w,y+h),(0,255,0),3) # Finally show the image cv2.imshow('img',img) cv2.waitKey(0) cv2.destroyAllWindows() ```
**Understanding Contour Hierarchies** When finding contours in a binary image using [`cv2.findContours()`](https://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html#findcontours), you can use contour hierarchy to select and extract specific contours within the image. Specifically, you can choose a [contour retrieval mode](https://docs.opencv.org/4.2.0/d3/dc0/group__imgproc__shape.html#ga819779b9857cc2f8601e6526a3a5bc71) to optionally return an output vector containing information about the image topology. There are four possible modes: * `cv2.RETR_EXTERNAL` - retrieves only the extreme outer contours (no hierarchy) * `cv2.RETR_LIST` - retrieves all of the contours without establishing any hierarchical relationships * `cv2.RETR_CCOMP` - retrieves all of the contours and organizes them into a two-level hierarchy. At the top level, there are external boundaries of the components. At the second level, there are boundaries of the holes. If there is another contour inside a hole of a connected component, it is still put at the top level * `cv2.RETR_TREE` - retrieves all of the contours and reconstructs a full hierarchy of nested contours **Contour Tree Structure** So with this information, we can use `cv2.RETR_CCOMP` or `cv2.RETR_TREE` to return a hierarchy list. Take for example this image: [![enter image description here](https://i.stack.imgur.com/JcY09.jpg)](https://i.stack.imgur.com/JcY09.jpg) When we use the `cv2.RETR_TREE` parameter, the contours are arranged in a hierarchy, with the outermost contours for each object at the top. Moving down the hierarchy, each new level of contours represents the next innermost contour for each object. In the image above, the contours in the image are colored to represent the hierarchical structure of the returned contours data. The outermost contours are red, and they are at the top of the hierarchy. The next innermost contours -- the dice pips, in this case -- are green.
We can get that information about the contour hierarchies via the hierarchy array from the `cv2.findContours` function call. Suppose we call the function like this: ``` (_, contours, hierarchy) = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) ``` The third return value, saved in the `hierarchy` variable in this code, is a three-dimensional NumPy array, with one row, `X` columns, and a "depth" of 4. The `X` columns correspond to the number of contours found by the function. The `cv2.RETR_TREE` parameter causes the function to find the internal contours as well as the outermost contours for each object. Column zero corresponds to the first contour, column one the second, and so on. Each of the columns has a four-element array of integers, representing indices of other contours, according to this scheme: ``` [next, previous, first child, parent] ``` The *next* index refers to the next contour in this contour's hierarchy level, while the *previous* index refers to the previous contour in this contour's hierarchy level. The *first child* index refers to the first contour that is contained inside this contour. The *parent* index refers to the contour containing this contour. In all cases, a value of `-1` indicates that there is no *next*, *previous*, *first child*, or *parent* contour, as appropriate. For a more concrete example, here are some example `hierarchy` values. The values are in square brackets, and the indices of the contours precede each entry.
If you printed out the hierarchy array you will get something like this ``` 0: [ 6 -1 1 -1] 18: [19 -1 -1 17] 1: [ 2 -1 -1 0] 19: [20 18 -1 17] 2: [ 3 1 -1 0] 20: [21 19 -1 17] 3: [ 4 2 -1 0] 21: [22 20 -1 17] 4: [ 5 3 -1 0] 22: [-1 21 -1 17] 5: [-1 4 -1 0] 23: [27 17 24 -1] 6: [11 0 7 -1] 24: [25 -1 -1 23] 7: [ 8 -1 -1 6] 25: [26 24 -1 23] 8: [ 9 7 -1 6] 26: [-1 25 -1 23] 9: [10 8 -1 6] 27: [32 23 28 -1] 10: [-1 9 -1 6] 28: [29 -1 -1 27] 11: [17 6 12 -1] 29: [30 28 -1 27] 12: [15 -1 13 11] 30: [31 29 -1 27] 13: [14 -1 -1 12] 31: [-1 30 -1 27] 14: [-1 13 -1 12] 32: [-1 27 33 -1] 15: [16 12 -1 11] 33: [34 -1 -1 32] 16: [-1 15 -1 11] 34: [35 33 -1 32] 17: [23 11 18 -1] 35: [-1 34 -1 32] ``` The entry for the first contour is `[6, -1, 1, -1]`. This represents the first of the outermost contours; note that there is no particular order for the contours, e.g., they are not stored left to right by default. The entry tells us that the next dice outline is the contour with index six, that there is no previous contour in the list, that the first contour inside this one has index one, and that there is no parent for this contour (no contour containing this one). We can visualize the information in the `hierarchy` array as seven trees, one for each of the dice in the image. [![enter image description here](https://i.stack.imgur.com/dC4Bn.png)](https://i.stack.imgur.com/dC4Bn.png) The seven outermost contours are all those that have no parent, i.e., those with a value of `-1` in the fourth field of their `hierarchy` entry. Each of the child nodes beneath one of the "roots" represents a contour inside the outermost contour. Note how contours 13 and 14 are beneath contour 12 in the diagram. These two contours represent the innermost contours, perhaps noise or some lost paint in one of the pips.
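To make the `[next, previous, first child, parent]` links concrete, here is a small sketch that walks those links to find the root contours and count each root's direct children. The hierarchy rows are hand-written stand-ins for illustration (plain lists rather than the numpy array a real `cv2.findContours` call returns), and the helper name `direct_children` is hypothetical:

```python
# Hand-written hierarchy rows in OpenCV's [next, previous, first child, parent]
# format: contours 0 and 3 are outermost; 0 contains 1 and 2, 3 contains 4.
hierarchy = [[
    [ 3, -1,  1, -1],  # 0: root, first child is 1
    [ 2, -1, -1,  0],  # 1: child of 0
    [-1,  1, -1,  0],  # 2: child of 0, last sibling
    [-1,  0,  4, -1],  # 3: root, first child is 4
    [-1, -1, -1,  3],  # 4: child of 3
]]

def direct_children(hierarchy, idx):
    """Follow the first-child link, then the next-sibling links."""
    rows = hierarchy[0]         # findContours wraps the rows in an extra dimension
    children = []
    child = rows[idx][2]        # first child, or -1 if there is none
    while child != -1:
        children.append(child)
        child = rows[child][0]  # the 'next' contour on the same level
    return children

roots = [i for i, row in enumerate(hierarchy[0]) if row[3] == -1]
print(roots)                                                   # [0, 3]
print({i: len(direct_children(hierarchy, i)) for i in roots})  # {0: 2, 3: 1}
```

On a real image this is exactly how you would count, say, the pips inside each die: the roots are the dice outlines, and each root's direct children are its pips.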
--- Once we understand how contours are arranged into a hierarchy, we can perform more sophisticated tasks, such as counting the number of contours within a shape in addition to the number of objects in an image. Depending on the retrieval mode you select, you will have full access to the topology of an image and be able to control the tree structure of the contours.
37,811,767
I like to use the Python interpreter, as it shows results instantly. But I sometimes make mistakes, like misspelling something or typing 'enter' twice while writing a class or function. It's really annoying to rewrite the code. Is it possible to add some code to a predefined class or function in the interpreter?
2016/06/14
[ "https://Stackoverflow.com/questions/37811767", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5894129/" ]
When you want to declare an id in XML you do it as `android:id="@+id/myId"`. R is a Java class. When you include the above line for an XML view, a `public static final int myId` field gets included in the R class. You can reference this from your own classes. `findViewById(int)` accepts an integer as a parameter; the R class contains integers, not the strings you entered as the XML id. Here is a sample from an R class. ``` public final class R { public static final class id { public static final int ReflectionsLevelText=0x7f0d00af; public static final int about=0x7f0d01b3; public static final int action0=0x7f0d014d; public static final int action_bar=0x7f0d005f; public static final int action_bar_activity_content=0x7f0d0000; public static final int action_bar_container=0x7f0d005e; } } ``` So if you want to access the view with the id `action_bar` you have to call `findViewById(R.id.action_bar)`. In the same way, the R class also includes drawables, dimensions and basically all the resources; they are static inner classes inside the R class. For example, when you add a drawable `ic_my_pic.png` to `res/drawable`, a field gets generated in the R class. It would look like, ``` public final class R{ public static final class drawable{ public static final int ic_my_pic=0x7f020000; } } ``` Now you can access this image from your classes by, ``` imageView.setImageResource(R.drawable.ic_my_pic); ``` You can find more info [here](https://stackoverflow.com/questions/6804053/understand-the-r-class-in-android) and [here](http://knowledgefolders.com/akc/display?url=DisplayNoteIMPURL&reportId=2883&ownerUserId=satya).
When Gradle builds your app, it generates the `R` class from your project's resources; every view you give an `android:id` in XML gets a corresponding integer constant in `R.id`. For example, given this layout: ``` <EditText android:layout_width="match_parent" android:layout_height="wrap_content" android:id="@+id/myEditText" /> ``` You would then reference it in your code using the generated R class: ``` EditText myEditText = (EditText) findViewById(R.id.myEditText); ```
37,811,767
I like to use the Python interpreter, as it shows results instantly. But I sometimes make mistakes, like misspelling something or typing 'enter' twice while writing a class or function. It's really annoying to rewrite the code. Is it possible to add some code to a predefined class or function in the interpreter?
2016/06/14
[ "https://Stackoverflow.com/questions/37811767", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5894129/" ]
When you want to declare an id in XML you do it as `android:id="@+id/myId"`. R is a Java class. When you include the above line for an XML view, a `public static final int myId` field gets included in the R class. You can reference this from your own classes. `findViewById(int)` accepts an integer as a parameter; the R class contains integers, not the strings you entered as the XML id. Here is a sample from an R class. ``` public final class R { public static final class id { public static final int ReflectionsLevelText=0x7f0d00af; public static final int about=0x7f0d01b3; public static final int action0=0x7f0d014d; public static final int action_bar=0x7f0d005f; public static final int action_bar_activity_content=0x7f0d0000; public static final int action_bar_container=0x7f0d005e; } } ``` So if you want to access the view with the id `action_bar` you have to call `findViewById(R.id.action_bar)`. In the same way, the R class also includes drawables, dimensions and basically all the resources; they are static inner classes inside the R class. For example, when you add a drawable `ic_my_pic.png` to `res/drawable`, a field gets generated in the R class. It would look like, ``` public final class R{ public static final class drawable{ public static final int ic_my_pic=0x7f020000; } } ``` Now you can access this image from your classes by, ``` imageView.setImageResource(R.drawable.ic_my_pic); ``` You can find more info [here](https://stackoverflow.com/questions/6804053/understand-the-r-class-in-android) and [here](http://knowledgefolders.com/akc/display?url=DisplayNoteIMPURL&reportId=2883&ownerUserId=satya).
All of the views in a window are arranged in a single tree. You can add views either from code or by specifying a tree of views in one or more XML layout files. **Views may have an integer id associated with them. These ids are typically assigned in the layout XML files, and are used to find specific views within the view tree.** A common pattern is to: *Define a Button in the layout file and assign it a unique ID.* ``` <Button android:id="@+id/my_button" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="@string/my_button_text"/> ``` From the onCreate method of an Activity, find the Button ``` Button myButton = (Button) findViewById(R.id.my_button); ``` View IDs need not be unique throughout the tree, but it is good practice to ensure that they are at least unique within the part of the tree you are searching. Reference : <https://developer.android.com/reference/android/view/View.html>
48,250,092
How do I upload a excel `.xlsx` file to python flask from angular2? I upload something but it can't be read when I open the excel file. html for upload dialog: ``` <mat-form-field> <input matInput placeholder="Filename" [(ngModel)]="filename"> </mat-form-field> <button type="button" mat-raised-button (click)="imgFileInput.click()" [disabled]="!is_file">Upload file</button> <input hidden type="file" #imgFileInput (change)="fileChange($event)" accept=".xlsx"/> <button type="button" mat-raised-button [disabled]="is_file">Submit</button> ``` ts code for sending post to the api: ``` fileChange(event){ let reader = new FileReader(); let file = event.target.files[0]; this.filename = file.name; reader.readAsDataURL(file); reader.onload = () => { let data = atob(reader.result.toString().split(',')[1]); this.is_file=false; var token = localStorage.getItem('token') let headers = new Headers({ 'Content-Type': 'application/json', 'Accept': 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet', 'Access-Control-Allow-Origin':'*', 'Access-Control-Allow-Headers': 'Content-Type', 'Access-Control-Allow-Methods': 'POST', 'Authorization': 'bearer ' + token }); let options = new RequestOptions({ headers: headers }); options.responseType = ResponseContentType.Blob; let ran = Math.floor(Math.random() * 10000) + 1; this.http.post(environment.API_URL + '/test/model/expsoure/excel/' + 'model_id' + '?cache=' + ran.toString(),data,options) .subscribe(service_data => { }); } }// end fileChange ``` Python code for saving data: ``` class exposureExcelId(Resource): method_decorators = [jwt_required()] def post(self,mid): user_id = str(current_identity.id) f = open('/Users/data/wtf.xlsx','wb') f.write(request.data) f.close() ``` Error I get when I try and load the file from the upload in excel is: > > excel could not open wtf.xlsx because some of the content is unreadable > > >
2018/01/14
[ "https://Stackoverflow.com/questions/48250092", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1203556/" ]
```
temp = (ds_list)(malloc(sizeof(ds_list)));
```

will be

```
temp = malloc(sizeof(*temp));
```

You want to allocate memory for `struct ds_list_element`, not `struct ds_list_element*`. Don't hide pointers behind a typedef name; it rarely helps. Also, you should check the return value of `malloc`, and the cast is not needed.
Use `ds_list` as structure not a pointer ``` typedef struct ds_list_element { char value[MAX]; struct ds_list_element *next; }ds_list; ``` and allocate memory for the structure not a pointer. Working program: ``` #include<stdio.h> #include<stdlib.h> #include<string.h> #define MAX 100 typedef struct ds_list_element { char value[MAX]; struct ds_list_element *next; }ds_list; int ds_list_empty(ds_list *id) { // id listy if (id == NULL) return 1; else return 0; } ds_list * ds_list_add(ds_list *id, char add[MAX]) { ds_list *temp; temp = (malloc(sizeof(ds_list))); strcpy(temp->value,add); temp->next = id; return temp; } void ds_list_print(ds_list *id) { if (ds_list_empty(id) == 0) { printf("%s\n",id->value); ds_list_print(id->next); } } int main () { ds_list *my_list = NULL; my_list = ds_list_add(my_list,"one"); my_list = ds_list_add(my_list,"two"); my_list = ds_list_add(my_list,"three"); my_list = ds_list_add(my_list,"four"); ds_list_print(my_list); return 0; } ``` Output: ``` four three two one ```
2,211,706
I've always preferred these: ``` not 'x' in 'abc' not 'x' is 'a' ``` (assuming, of course that everyone knows `in` and `is` out-prioritize `not` -- I probably should use parentheses) over the more (English) grammatical: ``` 'x' not in 'abc' 'x' is not 'a' ``` but didn't bother to think why until I realized they do not make syntactical sense ``` 'x' == not 'a' 'x' not == 'a' ``` both of course throw a syntax error. so I figured they were both two-word operators. However, the [documentation](http://docs.python.org/library/operator.html) only references `is not` and makes no mention of `not in` as an operator. Am I perhaps misinterpreting the syntax? If they both are operators, then are they at all different (even subtlety) from their non-grammatical counterparts? If they are the same, then why do they exist? It seems to be impious to the [Zen of Python](http://www.python.org/dev/peps/pep-0020/) (.."one -- and preferably only one -- obvious way"..) I apologize if this has been discussed to death already, I just had little luck finding it with search terms like "is not".
2010/02/06
[ "https://Stackoverflow.com/questions/2211706", "https://Stackoverflow.com", "https://Stackoverflow.com/users/31676/" ]
From the Python 2.6.4 docs at <http://docs.python.org/reference/expressions.html>:

> The operator `not in` is defined to have the inverse truth value of `in`.
>
> The operators `is` and `is not` test for object identity: `x is y` is true if and only if x and y are the same object. `x is not y` yields the inverse truth value.

e.g. "x not in y" is exactly the same as "not x in y", and "x is not y" is the same as "not x is y". "x not == y" doesn't parse, but "x != y" does, so there's an equivalence there too ... HTH.
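As a quick sanity check (my own sketch, not part of the quoted docs), the equivalences can be verified directly in the interpreter:

```python
# 'x not in y' is the same operation as 'not x in y'
assert ('x' not in 'abc') == (not 'x' in 'abc') == True
assert ('a' not in 'abc') == (not 'a' in 'abc') == False

# 'a is not b' is the same operation as 'not a is b'
a, b = object(), object()
assert (a is not b) == (not a is b) == True
assert (a is not a) == (not a is a) == False

print("all equivalences hold")
```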
The remainder of your question has been answered above, but I'll address the last question: the Zen of Python bit. "There should only be one way to do it" isn't meant in a mathematical sense. If it were, there'd be no `!=` operator, since that's just the inversion of `==`. Similarly, no `and` and `or` --- you can, after all, just use a single `nand` command. There is a limit to the "one way" mantra: there should only be one high-level way of doing it. Of course, that high-level way can be decomposed --- you can write your own `math.tan`, and you never need to use `urllib` --- `socket` is always there for you. But just as `urllib.urlopen` is a higher-level encapsulation of raw `socket` operations, so `not in` is a higher-level encapsulation of `not` and `in`. That's a bit banal, you might say. But you use `x != y` instead of `not (x == y)`.
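That last point is easy to spot-check: for built-in types, whose `__ne__` is consistent with `__eq__`, the two spellings always agree. A minimal sketch:

```python
# For built-in types, 'x != y' and 'not (x == y)' always agree.
pairs = [(1, 1), (1, 2), ('a', 'a'), ('a', 'b'), ([1], [1])]
for x, y in pairs:
    assert (x != y) == (not (x == y))

print("x != y matches not (x == y) for all pairs")
```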
2,211,706
I've always preferred these: ``` not 'x' in 'abc' not 'x' is 'a' ``` (assuming, of course that everyone knows `in` and `is` out-prioritize `not` -- I probably should use parentheses) over the more (English) grammatical: ``` 'x' not in 'abc' 'x' is not 'a' ``` but didn't bother to think why until I realized they do not make syntactical sense ``` 'x' == not 'a' 'x' not == 'a' ``` both of course throw a syntax error. so I figured they were both two-word operators. However, the [documentation](http://docs.python.org/library/operator.html) only references `is not` and makes no mention of `not in` as an operator. Am I perhaps misinterpreting the syntax? If they both are operators, then are they at all different (even subtlety) from their non-grammatical counterparts? If they are the same, then why do they exist? It seems to be impious to the [Zen of Python](http://www.python.org/dev/peps/pep-0020/) (.."one -- and preferably only one -- obvious way"..) I apologize if this has been discussed to death already, I just had little luck finding it with search terms like "is not".
2010/02/06
[ "https://Stackoverflow.com/questions/2211706", "https://Stackoverflow.com", "https://Stackoverflow.com/users/31676/" ]
From the Python 2.6.4 docs at <http://docs.python.org/reference/expressions.html>:

> The operator `not in` is defined to have the inverse truth value of `in`.
>
> The operators `is` and `is not` test for object identity: `x is y` is true if and only if x and y are the same object. `x is not y` yields the inverse truth value.

e.g. "x not in y" is exactly the same as "not x in y", and "x is not y" is the same as "not x is y". "x not == y" doesn't parse, but "x != y" does, so there's an equivalence there too ... HTH.
Your documentation reference has nothing to do with syntax. [Try this](http://docs.python.org/reference/expressions.html#notin). Both `is not` and `not in` are two-word operators.
2,211,706
I've always preferred these: ``` not 'x' in 'abc' not 'x' is 'a' ``` (assuming, of course that everyone knows `in` and `is` out-prioritize `not` -- I probably should use parentheses) over the more (English) grammatical: ``` 'x' not in 'abc' 'x' is not 'a' ``` but didn't bother to think why until I realized they do not make syntactical sense ``` 'x' == not 'a' 'x' not == 'a' ``` both of course throw a syntax error. so I figured they were both two-word operators. However, the [documentation](http://docs.python.org/library/operator.html) only references `is not` and makes no mention of `not in` as an operator. Am I perhaps misinterpreting the syntax? If they both are operators, then are they at all different (even subtlety) from their non-grammatical counterparts? If they are the same, then why do they exist? It seems to be impious to the [Zen of Python](http://www.python.org/dev/peps/pep-0020/) (.."one -- and preferably only one -- obvious way"..) I apologize if this has been discussed to death already, I just had little luck finding it with search terms like "is not".
2010/02/06
[ "https://Stackoverflow.com/questions/2211706", "https://Stackoverflow.com", "https://Stackoverflow.com/users/31676/" ]
From the Python 2.6.4 docs at <http://docs.python.org/reference/expressions.html>:

> The operator `not in` is defined to have the inverse truth value of `in`.
>
> The operators `is` and `is not` test for object identity: `x is y` is true if and only if x and y are the same object. `x is not y` yields the inverse truth value.

e.g. "x not in y" is exactly the same as "not x in y", and "x is not y" is the same as "not x is y". "x not == y" doesn't parse, but "x != y" does, so there's an equivalence there too ... HTH.
`x in a` tests whether the element x is in a (which in this case would probably be a list or a string or some such) and returns `True` if it is. `x not in a` is similar, but returns `False` if x is in a. On the other hand, `x is not a` is equivalent to `not x is a` (and only loosely analogous to `x != a`, since `is not` compares object identity rather than value). And like you said, `x == not 'a'` will give you a syntax error.
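To make the distinction concrete, here is a small sketch (my own example) contrasting membership, equality, and identity:

```python
a = [1, 2, 3]
b = [1, 2, 3]   # equal value, but a distinct object
c = a           # another name for the very same object

# membership
assert 2 in a
assert 5 not in a

# equality vs identity
assert a == b        # same value
assert a is not b    # different objects
assert a is c        # same object

print("membership, equality and identity all behave as described")
```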
2,211,706
I've always preferred these: ``` not 'x' in 'abc' not 'x' is 'a' ``` (assuming, of course that everyone knows `in` and `is` out-prioritize `not` -- I probably should use parentheses) over the more (English) grammatical: ``` 'x' not in 'abc' 'x' is not 'a' ``` but didn't bother to think why until I realized they do not make syntactical sense ``` 'x' == not 'a' 'x' not == 'a' ``` both of course throw a syntax error. so I figured they were both two-word operators. However, the [documentation](http://docs.python.org/library/operator.html) only references `is not` and makes no mention of `not in` as an operator. Am I perhaps misinterpreting the syntax? If they both are operators, then are they at all different (even subtlety) from their non-grammatical counterparts? If they are the same, then why do they exist? It seems to be impious to the [Zen of Python](http://www.python.org/dev/peps/pep-0020/) (.."one -- and preferably only one -- obvious way"..) I apologize if this has been discussed to death already, I just had little luck finding it with search terms like "is not".
2010/02/06
[ "https://Stackoverflow.com/questions/2211706", "https://Stackoverflow.com", "https://Stackoverflow.com/users/31676/" ]
The remainder of your question has been answered above, but I'll address the last question: the Zen of Python bit. "There should only be one way to do it" isn't meant in a mathematical sense. If it were, there'd be no `!=` operator, since that's just the inversion of `==`. Similarly, no `and` and `or` --- you can, after all, just use a single `nand` command. There is a limit to the "one way" mantra: there should only be one high-level way of doing it. Of course, that high-level way can be decomposed --- you can write your own `math.tan`, and you never need to use `urllib` --- `socket` is always there for you. But just as `urllib.urlopen` is a higher-level encapsulation of raw `socket` operations, so `not in` is a higher-level encapsulation of `not` and `in`. That's a bit banal, you might say. But you use `x != y` instead of `not (x == y)`.
`x in a` tests whether the element x is in a (which in this case would probably be a list or a string or some such) and returns `True` if it is. `x not in a` is similar, but returns `False` if x is in a. On the other hand, `x is not a` is equivalent to `not x is a` (and only loosely analogous to `x != a`, since `is not` compares object identity rather than value). And like you said, `x == not 'a'` will give you a syntax error.
2,211,706
I've always preferred these: ``` not 'x' in 'abc' not 'x' is 'a' ``` (assuming, of course that everyone knows `in` and `is` out-prioritize `not` -- I probably should use parentheses) over the more (English) grammatical: ``` 'x' not in 'abc' 'x' is not 'a' ``` but didn't bother to think why until I realized they do not make syntactical sense ``` 'x' == not 'a' 'x' not == 'a' ``` both of course throw a syntax error. so I figured they were both two-word operators. However, the [documentation](http://docs.python.org/library/operator.html) only references `is not` and makes no mention of `not in` as an operator. Am I perhaps misinterpreting the syntax? If they both are operators, then are they at all different (even subtlety) from their non-grammatical counterparts? If they are the same, then why do they exist? It seems to be impious to the [Zen of Python](http://www.python.org/dev/peps/pep-0020/) (.."one -- and preferably only one -- obvious way"..) I apologize if this has been discussed to death already, I just had little luck finding it with search terms like "is not".
2010/02/06
[ "https://Stackoverflow.com/questions/2211706", "https://Stackoverflow.com", "https://Stackoverflow.com/users/31676/" ]
It's easy to check if there's any difference, with the [dis](http://docs.python.org/library/dis.html?highlight=dis#module-dis) module:

```
>>> dis.dis(compile('not a in b','','exec'))
  1           0 LOAD_NAME                0 (a)
              3 LOAD_NAME                1 (b)
              6 COMPARE_OP               7 (not in)
              9 POP_TOP
             10 LOAD_CONST               0 (None)
             13 RETURN_VALUE
>>> dis.dis(compile('a not in b','','exec'))
  1           0 LOAD_NAME                0 (a)
              3 LOAD_NAME                1 (b)
              6 COMPARE_OP               7 (not in)
              9 POP_TOP
             10 LOAD_CONST               0 (None)
             13 RETURN_VALUE
```

See? The awkward first form (which often confuses those who can't immediately tell the relative priority of the `not` and `in` operators) gets compiled into exactly the same bytecode as the can't-confuse-anybody second form, with the single `not in` operator. Similarly:

```
>>> dis.dis(compile('not a is b','','exec'))
  1           0 LOAD_NAME                0 (a)
              3 LOAD_NAME                1 (b)
              6 COMPARE_OP               9 (is not)
              9 POP_TOP
             10 LOAD_CONST               0 (None)
             13 RETURN_VALUE
>>> dis.dis(compile('a is not b','','exec'))
  1           0 LOAD_NAME                0 (a)
              3 LOAD_NAME                1 (b)
              6 COMPARE_OP               9 (is not)
              9 POP_TOP
             10 LOAD_CONST               0 (None)
             13 RETURN_VALUE
```

Both these forms -- the goofy, confusing first one, and the elegant second one -- compile to exactly the same code, and both use the `is not` operator. Of course there's no good reason to use the forms that can confuse the reader (unless you *like* to set traps for those who read your code!-), but in terms of execution semantics and speed there's no difference whatsoever.
The remainder of your question has been answered above, but I'll address the last question: the Zen of Python bit. "There should only be one way to do it" isn't meant in a mathematical sense. If it were, there'd be no `!=` operator, since that's just the inversion of `==`. Similarly, no `and` and `or` --- you can, after all, just use a single `nand` command. There is a limit to the "one way" mantra: there should only be one high-level way of doing it. Of course, that high-level way can be decomposed --- you can write your own `math.tan`, and you never need to use `urllib` --- `socket` is always there for you. But just as `urllib.urlopen` is a higher-level encapsulation of raw `socket` operations, so `not in` is a higher-level encapsulation of `not` and `in`. That's a bit banal, you might say. But you use `x != y` instead of `not (x == y)`.
2,211,706
I've always preferred these: ``` not 'x' in 'abc' not 'x' is 'a' ``` (assuming, of course that everyone knows `in` and `is` out-prioritize `not` -- I probably should use parentheses) over the more (English) grammatical: ``` 'x' not in 'abc' 'x' is not 'a' ``` but didn't bother to think why until I realized they do not make syntactical sense ``` 'x' == not 'a' 'x' not == 'a' ``` both of course throw a syntax error. so I figured they were both two-word operators. However, the [documentation](http://docs.python.org/library/operator.html) only references `is not` and makes no mention of `not in` as an operator. Am I perhaps misinterpreting the syntax? If they both are operators, then are they at all different (even subtlety) from their non-grammatical counterparts? If they are the same, then why do they exist? It seems to be impious to the [Zen of Python](http://www.python.org/dev/peps/pep-0020/) (.."one -- and preferably only one -- obvious way"..) I apologize if this has been discussed to death already, I just had little luck finding it with search terms like "is not".
2010/02/06
[ "https://Stackoverflow.com/questions/2211706", "https://Stackoverflow.com", "https://Stackoverflow.com/users/31676/" ]
Your documentation reference has nothing to do with syntax. [Try this](http://docs.python.org/reference/expressions.html#notin). Both `is not` and `not in` are two-word operators.
`x in a` tests whether the element x is in a (which in this case would probably be a list or a string or some such) and returns `True` if it is. `x not in a` is similar, but returns `False` if x is in a. On the other hand, `x is not a` is equivalent to `not x is a` (and only loosely analogous to `x != a`, since `is not` compares object identity rather than value). And like you said, `x == not 'a'` will give you a syntax error.
2,211,706
I've always preferred these: ``` not 'x' in 'abc' not 'x' is 'a' ``` (assuming, of course that everyone knows `in` and `is` out-prioritize `not` -- I probably should use parentheses) over the more (English) grammatical: ``` 'x' not in 'abc' 'x' is not 'a' ``` but didn't bother to think why until I realized they do not make syntactical sense ``` 'x' == not 'a' 'x' not == 'a' ``` both of course throw a syntax error. so I figured they were both two-word operators. However, the [documentation](http://docs.python.org/library/operator.html) only references `is not` and makes no mention of `not in` as an operator. Am I perhaps misinterpreting the syntax? If they both are operators, then are they at all different (even subtlety) from their non-grammatical counterparts? If they are the same, then why do they exist? It seems to be impious to the [Zen of Python](http://www.python.org/dev/peps/pep-0020/) (.."one -- and preferably only one -- obvious way"..) I apologize if this has been discussed to death already, I just had little luck finding it with search terms like "is not".
2010/02/06
[ "https://Stackoverflow.com/questions/2211706", "https://Stackoverflow.com", "https://Stackoverflow.com/users/31676/" ]
It's easy to check if there's any difference, with the [dis](http://docs.python.org/library/dis.html?highlight=dis#module-dis) module:

```
>>> dis.dis(compile('not a in b','','exec'))
  1           0 LOAD_NAME                0 (a)
              3 LOAD_NAME                1 (b)
              6 COMPARE_OP               7 (not in)
              9 POP_TOP
             10 LOAD_CONST               0 (None)
             13 RETURN_VALUE
>>> dis.dis(compile('a not in b','','exec'))
  1           0 LOAD_NAME                0 (a)
              3 LOAD_NAME                1 (b)
              6 COMPARE_OP               7 (not in)
              9 POP_TOP
             10 LOAD_CONST               0 (None)
             13 RETURN_VALUE
```

See? The awkward first form (which often confuses those who can't immediately tell the relative priority of the `not` and `in` operators) gets compiled into exactly the same bytecode as the can't-confuse-anybody second form, with the single `not in` operator. Similarly:

```
>>> dis.dis(compile('not a is b','','exec'))
  1           0 LOAD_NAME                0 (a)
              3 LOAD_NAME                1 (b)
              6 COMPARE_OP               9 (is not)
              9 POP_TOP
             10 LOAD_CONST               0 (None)
             13 RETURN_VALUE
>>> dis.dis(compile('a is not b','','exec'))
  1           0 LOAD_NAME                0 (a)
              3 LOAD_NAME                1 (b)
              6 COMPARE_OP               9 (is not)
              9 POP_TOP
             10 LOAD_CONST               0 (None)
             13 RETURN_VALUE
```

Both these forms -- the goofy, confusing first one, and the elegant second one -- compile to exactly the same code, and both use the `is not` operator. Of course there's no good reason to use the forms that can confuse the reader (unless you *like* to set traps for those who read your code!-), but in terms of execution semantics and speed there's no difference whatsoever.
Your documentation reference has nothing to do with syntax. [Try this](http://docs.python.org/reference/expressions.html#notin). Both `is not` and `not in` are two-word operators.
2,211,706
I've always preferred these: ``` not 'x' in 'abc' not 'x' is 'a' ``` (assuming, of course that everyone knows `in` and `is` out-prioritize `not` -- I probably should use parentheses) over the more (English) grammatical: ``` 'x' not in 'abc' 'x' is not 'a' ``` but didn't bother to think why until I realized they do not make syntactical sense ``` 'x' == not 'a' 'x' not == 'a' ``` both of course throw a syntax error. so I figured they were both two-word operators. However, the [documentation](http://docs.python.org/library/operator.html) only references `is not` and makes no mention of `not in` as an operator. Am I perhaps misinterpreting the syntax? If they both are operators, then are they at all different (even subtlety) from their non-grammatical counterparts? If they are the same, then why do they exist? It seems to be impious to the [Zen of Python](http://www.python.org/dev/peps/pep-0020/) (.."one -- and preferably only one -- obvious way"..) I apologize if this has been discussed to death already, I just had little luck finding it with search terms like "is not".
2010/02/06
[ "https://Stackoverflow.com/questions/2211706", "https://Stackoverflow.com", "https://Stackoverflow.com/users/31676/" ]
It's easy to check if there's any difference, with the [dis](http://docs.python.org/library/dis.html?highlight=dis#module-dis) module:

```
>>> dis.dis(compile('not a in b','','exec'))
  1           0 LOAD_NAME                0 (a)
              3 LOAD_NAME                1 (b)
              6 COMPARE_OP               7 (not in)
              9 POP_TOP
             10 LOAD_CONST               0 (None)
             13 RETURN_VALUE
>>> dis.dis(compile('a not in b','','exec'))
  1           0 LOAD_NAME                0 (a)
              3 LOAD_NAME                1 (b)
              6 COMPARE_OP               7 (not in)
              9 POP_TOP
             10 LOAD_CONST               0 (None)
             13 RETURN_VALUE
```

See? The awkward first form (which often confuses those who can't immediately tell the relative priority of the `not` and `in` operators) gets compiled into exactly the same bytecode as the can't-confuse-anybody second form, with the single `not in` operator. Similarly:

```
>>> dis.dis(compile('not a is b','','exec'))
  1           0 LOAD_NAME                0 (a)
              3 LOAD_NAME                1 (b)
              6 COMPARE_OP               9 (is not)
              9 POP_TOP
             10 LOAD_CONST               0 (None)
             13 RETURN_VALUE
>>> dis.dis(compile('a is not b','','exec'))
  1           0 LOAD_NAME                0 (a)
              3 LOAD_NAME                1 (b)
              6 COMPARE_OP               9 (is not)
              9 POP_TOP
             10 LOAD_CONST               0 (None)
             13 RETURN_VALUE
```

Both these forms -- the goofy, confusing first one, and the elegant second one -- compile to exactly the same code, and both use the `is not` operator. Of course there's no good reason to use the forms that can confuse the reader (unless you *like* to set traps for those who read your code!-), but in terms of execution semantics and speed there's no difference whatsoever.
`x in a` tests whether the element x is in a (which in this case would probably be a list or a string or some such) and returns `True` if it is. `x not in a` is similar, but returns `False` if x is in a. On the other hand, `x is not a` is equivalent to `not x is a` (and only loosely analogous to `x != a`, since `is not` compares object identity rather than value). And like you said, `x == not 'a'` will give you a syntax error.
12,633,618
I am new to Python. I am working on someone else's project, but when I tried to run the code it gave me the error said above. All my pages work properly except those that contain images. Is there any library required for this? Any help will be appreciated. Thanks
2012/09/28
[ "https://Stackoverflow.com/questions/12633618", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1696228/" ]
You need to have the `cropresize` package (<http://pypi.python.org/pypi/cropresize/>) installed. If it is not there, install it from that link, or run `easy_install cropresize` or `pip install cropresize`.
Just do [`easy_install cropresize`](http://pypi.python.org/pypi/cropresize/).
12,633,618
I am new to Python. I am working on someone else's project, but when I tried to run the code it gave me the error said above. All my pages work properly except those that contain images. Is there any library required for this? Any help will be appreciated. Thanks
2012/09/28
[ "https://Stackoverflow.com/questions/12633618", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1696228/" ]
You need to have the `cropresize` package (<http://pypi.python.org/pypi/cropresize/>) installed. If it is not there, install it from that link, or run `easy_install cropresize` or `pip install cropresize`.
The cropresize package needs to be installed! First install pip (Ubuntu/Debian):

```
sudo apt-get install python-pip
```

Then install cropresize using pip:

```
pip install cropresize
```
12,633,618
I am new to Python. I am working on someone else's project, but when I tried to run the code it gave me the error said above. All my pages work properly except those that contain images. Is there any library required for this? Any help will be appreciated. Thanks
2012/09/28
[ "https://Stackoverflow.com/questions/12633618", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1696228/" ]
You need to have the `cropresize` package (<http://pypi.python.org/pypi/cropresize/>) installed. If it is not there, install it from that link, or run `easy_install cropresize` or `pip install cropresize`.
cropresize (all versions) depends on PIL. Unfortunately, an already-installed Pillow does not satisfy this dependency when using pip or easy\_install :(
12,633,618
I am new to Python. I am working on someone else's project, but when I tried to run the code it gave me the error said above. All my pages work properly except those that contain images. Is there any library required for this? Any help will be appreciated. Thanks
2012/09/28
[ "https://Stackoverflow.com/questions/12633618", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1696228/" ]
Just do [`easy_install cropresize`](http://pypi.python.org/pypi/cropresize/).
The cropresize package needs to be installed! First install pip (Ubuntu/Debian):

```
sudo apt-get install python-pip
```

Then install cropresize using pip:

```
pip install cropresize
```
12,633,618
I am new to Python. I am working on someone else's project, but when I tried to run the code it gave me the error said above. All my pages work properly except those that contain images. Is there any library required for this? Any help will be appreciated. Thanks
2012/09/28
[ "https://Stackoverflow.com/questions/12633618", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1696228/" ]
Just do [`easy_install cropresize`](http://pypi.python.org/pypi/cropresize/).
cropresize (all versions) depends on PIL. Unfortunately, an already-installed Pillow does not satisfy this dependency when using pip or easy\_install :(
33,168,308
I have the following directory structure:

```
-----root
|___docpd
|__docpd (contains settings.py, wsgi.py, uwsgi.ini)
|__static
```

During my vanilla Django setup in the dev environment, everything was fine (all static files used to load). But now, after setting up uwsgi, I found that none of my static files are being loaded (I get a 404 error).

**What have I tried?**

1. Read a lot of Stack Overflow questions about this error; none could solve my problem.
2. Adjusted the Python path with paths to my project. They seem to get added, as I printed them out in the settings.py file, but still no luck.

**Some code**

uwsgi.ini:

```
[uwsgi]
chdir=/home/ubuntu/docpad/Docpad/
wsgi-file=/home/ubuntu/docpad/Docpad/Docpad/wsgi.py
master=True
pidfile=/tmp/project-master.pid
vacuum=True
max-requests=5000
daemonize=/var/log/uwsgi/docpad.log
```

**wsgi.py**

```
import os,sys

sys.path.append('/home/ubuntu/docpad/Docpad/')
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "Docpad.settings")

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
```

**settings.py**

```
from sys import path
from os.path import abspath, dirname, join, normpath
import os

BASE_DIR = os.path.dirname(os.path.dirname(__file__))
print "Base dir = " , BASE_DIR

DJANGO_ROOT = dirname(abspath(__file__))
print "Root =",DJANGO_ROOT

SITE_ROOT = dirname(DJANGO_ROOT)
print "SITE =", SITE_ROOT

path.append(DJANGO_ROOT)
print "Path = ",path

print BASE_DIR

# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.6/howto/deployment/checklist/

# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'SECRET'

# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True

TEMPLATE_DEBUG = True

PROJECT_PATH = os.path.join(BASE_DIR, os.pardir)
PROJECT_PATH = os.path.abspath(PROJECT_PATH)
TEMPLATE_PATH = os.path.join(PROJECT_PATH, 'Docpad/templates/')
print "TEMPLATES = ",TEMPLATE_PATH

# TODO make separate template_paths
TEMPLATE_DIRS = (
    # Put strings here, like "/home/html/django_templates" or "C:/www/django/templates".
    # Always use forward slashes, even on Windows.
    # Don't forget to use absolute paths, not relative paths.
    TEMPLATE_PATH,
)

...
```

After I run my application, I get 404s for all static resources such as CSS/JS files.

**UPDATE**

When I do

```
python manage.py runserver 0.0.0.0:8000
```

the server starts serving the static files. But this command

```
uwsgi --ini uwsgi.ini --http :8000
```

gives me the problem (the static files are not loaded). I am totally clueless right now; I have been trying various alternatives but no luck. If anybody could help, it'll be great.
2015/10/16
[ "https://Stackoverflow.com/questions/33168308", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1270865/" ]
In settings:

```
STATIC_URL = '/static/'

STATICFILES_DIRS = (
    os.path.normpath(os.path.join(BASE_DIR, "static")),
)
```

In urls.py (note the two imports, which are easy to miss):

```
from django.conf import settings
from django.conf.urls.static import static

urlpatterns = [
    # urls
] + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
```
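For completeness: with `DEBUG = False` the `static()` helper serves nothing, so in production the files have to be collected and served by the web server. A minimal settings sketch (the `staticfiles` directory name and the explicit `STATIC_ROOT` are my assumptions, not part of the original answer):

```python
import os

# Assumed layout: this settings file sits next to a 'static' source directory.
BASE_DIR = os.path.dirname(os.path.abspath(__file__))

STATIC_URL = '/static/'
STATICFILES_DIRS = (
    os.path.join(BASE_DIR, 'static'),                # where your sources live
)
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')  # collectstatic target

print(STATIC_URL)
```

With that in place, `python manage.py collectstatic` copies everything into `STATIC_ROOT`, and uwsgi's `static-map` option (or a front-end server such as nginx) can serve that directory.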
You could also do it like so: ``` uwsgi --ini uwsgi.ini --http :8000 --static-map /static=/path/to/staticfiles/ ``` or just add this to your .ini file: ``` static-map = /static=/path/to/staticfiles ```
33,168,308
I have the following directory structure:

```
-----root
|___docpd
|__docpd (contains settings.py, wsgi.py, uwsgi.ini)
|__static
```

During my vanilla Django setup in the dev environment, everything was fine (all static files used to load). But now, after setting up uwsgi, I found that none of my static files are being loaded (I get a 404 error).

**What have I tried?**

1. Read a lot of Stack Overflow questions about this error; none could solve my problem.
2. Adjusted the Python path with paths to my project. They seem to get added, as I printed them out in the settings.py file, but still no luck.

**Some code**

uwsgi.ini:

```
[uwsgi]
chdir=/home/ubuntu/docpad/Docpad/
wsgi-file=/home/ubuntu/docpad/Docpad/Docpad/wsgi.py
master=True
pidfile=/tmp/project-master.pid
vacuum=True
max-requests=5000
daemonize=/var/log/uwsgi/docpad.log
```

**wsgi.py**

```
import os,sys

sys.path.append('/home/ubuntu/docpad/Docpad/')
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "Docpad.settings")

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
```

**settings.py**

```
from sys import path
from os.path import abspath, dirname, join, normpath
import os

BASE_DIR = os.path.dirname(os.path.dirname(__file__))
print "Base dir = " , BASE_DIR

DJANGO_ROOT = dirname(abspath(__file__))
print "Root =",DJANGO_ROOT

SITE_ROOT = dirname(DJANGO_ROOT)
print "SITE =", SITE_ROOT

path.append(DJANGO_ROOT)
print "Path = ",path

print BASE_DIR

# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.6/howto/deployment/checklist/

# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'SECRET'

# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True

TEMPLATE_DEBUG = True

PROJECT_PATH = os.path.join(BASE_DIR, os.pardir)
PROJECT_PATH = os.path.abspath(PROJECT_PATH)
TEMPLATE_PATH = os.path.join(PROJECT_PATH, 'Docpad/templates/')
print "TEMPLATES = ",TEMPLATE_PATH

# TODO make separate template_paths
TEMPLATE_DIRS = (
    # Put strings here, like "/home/html/django_templates" or "C:/www/django/templates".
    # Always use forward slashes, even on Windows.
    # Don't forget to use absolute paths, not relative paths.
    TEMPLATE_PATH,
)

...
```

After I run my application, I get 404s for all static resources such as CSS/JS files.

**UPDATE**

When I do

```
python manage.py runserver 0.0.0.0:8000
```

the server starts serving the static files. But this command

```
uwsgi --ini uwsgi.ini --http :8000
```

gives me the problem (the static files are not loaded). I am totally clueless right now; I have been trying various alternatives but no luck. If anybody could help, it'll be great.
2015/10/16
[ "https://Stackoverflow.com/questions/33168308", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1270865/" ]
in settings ``` STATIC_URL = '/static/' STATICFILES_DIRS = ( os.path.normpath(os.path.join(BASE_DIR, "static")), ) ``` in urls.py ``` urlpatterns = [ #urls ]+ static(settings.STATIC_URL, document_root=settings.STATIC_ROOT) ```
I don't know why, but it started working when I replaced this code: ``` if settings.DEBUG: urlpatterns += [ url(r'^(?P<path>.*)$', 'django.contrib.staticfiles.views.serve'), ] ``` with this: ``` if settings.DEBUG: from django.conf.urls.static import static urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT) urlpatterns += static(settings.STATIC_URL, document_root=settings.STATIC_ROOT) ``` in the file `urls.py`. I have Django 1.8.
33,168,308
I have the following directory structure ``` -----root |___docpd |__docpd (contains settings.py, wsgi.py, uwsgi.ini) |__static ``` During my vanilla Django setup in the dev environment, everything was fine (all static files used to load). But now, after setting up uWSGI, I found that none of my static files are being loaded (I get a 404 error). **What have I tried?** ``` 1. Read a lot of Stack Overflow questions about this error and none could solve my problem. 2. I adjusted the Python path with paths to my project and they seem to get added, as I printed them out in the settings.py file, but still no luck. ``` **Some code** ``` uwsgi.ini [uwsgi] chdir=/home/ubuntu/docpad/Docpad/ wsgi-file=/home/ubuntu/docpad/Docpad/Docpad/wsgi.py master=True pidfile=/tmp/project-master.pid vacuum=True max-requests=5000 daemonize=/var/log/uwsgi/docpad.log ``` **wsgi.py** ``` import os,sys sys.path.append('/home/ubuntu/docpad/Docpad/') os.environ.setdefault("DJANGO_SETTINGS_MODULE", "Docpad.settings") from django.core.wsgi import get_wsgi_application application = get_wsgi_application() ``` **settings.py** ``` from sys import path from os.path import abspath, dirname, join, normpath import os BASE_DIR = os.path.dirname(os.path.dirname(__file__)) print "Base dir = " , BASE_DIR DJANGO_ROOT = dirname(abspath(__file__)) print "Root =",DJANGO_ROOT SITE_ROOT = dirname(DJANGO_ROOT) print "SITE =", SITE_ROOT path.append(DJANGO_ROOT) print "Path = ",path print BASE_DIR # Quick-start development settings - unsuitable for production # See https://docs.djangoproject.com/en/1.6/howto/deployment/checklist/ # SECURITY WARNING: keep the secret key used in production secret! SECRET_KEY = 'SECRET' # SECURITY WARNING: don't run with debug turned on in production! DEBUG = True TEMPLATE_DEBUG = True PROJECT_PATH = os.path.join(BASE_DIR, os.pardir) PROJECT_PATH = os.path.abspath(PROJECT_PATH) TEMPLATE_PATH = os.path.join(PROJECT_PATH, 'Docpad/templates/') print "TEMPLATES = ",TEMPLATE_PATH # TODO make separate template_paths TEMPLATE_DIRS = ( # Put strings here, like "/home/html/django_templates" or "C:/www/django/templates". # Always use forward slashes, even on Windows. # Don't forget to use absolute paths, not relative paths. TEMPLATE_PATH, ) ... ``` After I run my application, I get 404s for all static resources like CSS/JS files. **UPDATE** When I do ``` python manage.py runserver 0.0.0.0:8000 ``` the server starts serving the static files. But this command ``` uwsgi --ini uwsgi.ini --http :8000 ``` gives me the problem (not loading the static files). I am totally clueless right now; I have been trying various alternatives but no luck. If anybody could help, it'll be great.
2015/10/16
[ "https://Stackoverflow.com/questions/33168308", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1270865/" ]
I solved the problem by adding `check-static = /your django dir/` to the uwsgi.ini file. It works for me! Suppose your static assets are under /customers/foobar/app001/public. You want to check that each request has a corresponding file in that directory before passing the request to your dynamic app. The --check-static option is for you: --check-static /customers/foobar/app001/public <http://uwsgi-docs.readthedocs.io/en/latest/StaticFiles.html>
You could also do it like so: ``` uwsgi --ini uwsgi.ini --http :8000 --static-map /static=/path/to/staticfiles/ ``` or just add this to your .ini file: ``` static-map = /static=/path/to/staticfiles ```
33,168,308
I have the following directory structure ``` -----root |___docpd |__docpd (contains settings.py, wsgi.py, uwsgi.ini) |__static ``` During my vanilla Django setup in the dev environment, everything was fine (all static files used to load). But now, after setting up uWSGI, I found that none of my static files are being loaded (I get a 404 error). **What have I tried?** ``` 1. Read a lot of Stack Overflow questions about this error and none could solve my problem. 2. I adjusted the Python path with paths to my project and they seem to get added, as I printed them out in the settings.py file, but still no luck. ``` **Some code** ``` uwsgi.ini [uwsgi] chdir=/home/ubuntu/docpad/Docpad/ wsgi-file=/home/ubuntu/docpad/Docpad/Docpad/wsgi.py master=True pidfile=/tmp/project-master.pid vacuum=True max-requests=5000 daemonize=/var/log/uwsgi/docpad.log ``` **wsgi.py** ``` import os,sys sys.path.append('/home/ubuntu/docpad/Docpad/') os.environ.setdefault("DJANGO_SETTINGS_MODULE", "Docpad.settings") from django.core.wsgi import get_wsgi_application application = get_wsgi_application() ``` **settings.py** ``` from sys import path from os.path import abspath, dirname, join, normpath import os BASE_DIR = os.path.dirname(os.path.dirname(__file__)) print "Base dir = " , BASE_DIR DJANGO_ROOT = dirname(abspath(__file__)) print "Root =",DJANGO_ROOT SITE_ROOT = dirname(DJANGO_ROOT) print "SITE =", SITE_ROOT path.append(DJANGO_ROOT) print "Path = ",path print BASE_DIR # Quick-start development settings - unsuitable for production # See https://docs.djangoproject.com/en/1.6/howto/deployment/checklist/ # SECURITY WARNING: keep the secret key used in production secret! SECRET_KEY = 'SECRET' # SECURITY WARNING: don't run with debug turned on in production! DEBUG = True TEMPLATE_DEBUG = True PROJECT_PATH = os.path.join(BASE_DIR, os.pardir) PROJECT_PATH = os.path.abspath(PROJECT_PATH) TEMPLATE_PATH = os.path.join(PROJECT_PATH, 'Docpad/templates/') print "TEMPLATES = ",TEMPLATE_PATH # TODO make separate template_paths TEMPLATE_DIRS = ( # Put strings here, like "/home/html/django_templates" or "C:/www/django/templates". # Always use forward slashes, even on Windows. # Don't forget to use absolute paths, not relative paths. TEMPLATE_PATH, ) ... ``` After I run my application, I get 404s for all static resources like CSS/JS files. **UPDATE** When I do ``` python manage.py runserver 0.0.0.0:8000 ``` the server starts serving the static files. But this command ``` uwsgi --ini uwsgi.ini --http :8000 ``` gives me the problem (not loading the static files). I am totally clueless right now; I have been trying various alternatives but no luck. If anybody could help, it'll be great.
2015/10/16
[ "https://Stackoverflow.com/questions/33168308", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1270865/" ]
I solved the problem by adding `check-static = /your django dir/` to the uwsgi.ini file. It works for me! Suppose your static assets are under /customers/foobar/app001/public. You want to check that each request has a corresponding file in that directory before passing the request to your dynamic app. The --check-static option is for you: --check-static /customers/foobar/app001/public <http://uwsgi-docs.readthedocs.io/en/latest/StaticFiles.html>
I don't know why, but it started working when I replaced this code: ``` if settings.DEBUG: urlpatterns += [ url(r'^(?P<path>.*)$', 'django.contrib.staticfiles.views.serve'), ] ``` with this: ``` if settings.DEBUG: from django.conf.urls.static import static urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT) urlpatterns += static(settings.STATIC_URL, document_root=settings.STATIC_ROOT) ``` in the file `urls.py`. I have Django 1.8.
33,168,308
I have the following directory structure ``` -----root |___docpd |__docpd (contains settings.py, wsgi.py, uwsgi.ini) |__static ``` During my vanilla Django setup in the dev environment, everything was fine (all static files used to load). But now, after setting up uWSGI, I found that none of my static files are being loaded (I get a 404 error). **What have I tried?** ``` 1. Read a lot of Stack Overflow questions about this error and none could solve my problem. 2. I adjusted the Python path with paths to my project and they seem to get added, as I printed them out in the settings.py file, but still no luck. ``` **Some code** ``` uwsgi.ini [uwsgi] chdir=/home/ubuntu/docpad/Docpad/ wsgi-file=/home/ubuntu/docpad/Docpad/Docpad/wsgi.py master=True pidfile=/tmp/project-master.pid vacuum=True max-requests=5000 daemonize=/var/log/uwsgi/docpad.log ``` **wsgi.py** ``` import os,sys sys.path.append('/home/ubuntu/docpad/Docpad/') os.environ.setdefault("DJANGO_SETTINGS_MODULE", "Docpad.settings") from django.core.wsgi import get_wsgi_application application = get_wsgi_application() ``` **settings.py** ``` from sys import path from os.path import abspath, dirname, join, normpath import os BASE_DIR = os.path.dirname(os.path.dirname(__file__)) print "Base dir = " , BASE_DIR DJANGO_ROOT = dirname(abspath(__file__)) print "Root =",DJANGO_ROOT SITE_ROOT = dirname(DJANGO_ROOT) print "SITE =", SITE_ROOT path.append(DJANGO_ROOT) print "Path = ",path print BASE_DIR # Quick-start development settings - unsuitable for production # See https://docs.djangoproject.com/en/1.6/howto/deployment/checklist/ # SECURITY WARNING: keep the secret key used in production secret! SECRET_KEY = 'SECRET' # SECURITY WARNING: don't run with debug turned on in production! DEBUG = True TEMPLATE_DEBUG = True PROJECT_PATH = os.path.join(BASE_DIR, os.pardir) PROJECT_PATH = os.path.abspath(PROJECT_PATH) TEMPLATE_PATH = os.path.join(PROJECT_PATH, 'Docpad/templates/') print "TEMPLATES = ",TEMPLATE_PATH # TODO make separate template_paths TEMPLATE_DIRS = ( # Put strings here, like "/home/html/django_templates" or "C:/www/django/templates". # Always use forward slashes, even on Windows. # Don't forget to use absolute paths, not relative paths. TEMPLATE_PATH, ) ... ``` After I run my application, I get 404s for all static resources like CSS/JS files. **UPDATE** When I do ``` python manage.py runserver 0.0.0.0:8000 ``` the server starts serving the static files. But this command ``` uwsgi --ini uwsgi.ini --http :8000 ``` gives me the problem (not loading the static files). I am totally clueless right now; I have been trying various alternatives but no luck. If anybody could help, it'll be great.
2015/10/16
[ "https://Stackoverflow.com/questions/33168308", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1270865/" ]
You could also do it like so: ``` uwsgi --ini uwsgi.ini --http :8000 --static-map /static=/path/to/staticfiles/ ``` or just add this to your .ini file: ``` static-map = /static=/path/to/staticfiles ```
I don't know why, but it started working when I replaced this code: ``` if settings.DEBUG: urlpatterns += [ url(r'^(?P<path>.*)$', 'django.contrib.staticfiles.views.serve'), ] ``` with this: ``` if settings.DEBUG: from django.conf.urls.static import static urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT) urlpatterns += static(settings.STATIC_URL, document_root=settings.STATIC_ROOT) ``` in the file `urls.py`. I have Django 1.8.
58,471,984
Could anyone here tell me how to properly append a series of missing values onto a Python list? For example: ``` > ls=[1,2,3] > ls += []*2 > ls [1,2,3] ``` but this is not the outcome I want. I want: ``` [1,2,3, , ] ``` where the blanks denote the missing values. (Note: what I also DON'T want is: ``` > ls [1,2,3,'',''] ``` ) Thanks,
2019/10/20
[ "https://Stackoverflow.com/questions/58471984", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12205961/" ]
Where did you set the `dataSource` and `delegate` of the table view? Use this code: ``` override func viewDidLoad() { super.viewDidLoad() // Do any additional setup after loading the view. self.tableView.dataSource = self self.tableView.delegate = self self.tableView.register(UITableViewCell.self, forCellReuseIdentifier: "cell") } ```
2 possible reasons: 1. If the cell is designed as prototype cell you **must not** register the cell. 2. `dataSource` and `delegate` of the table view must be connected to the controller in Interface Builder or set in code.
59,622,544
My data frame is below. I am trying to find the age. ``` customer_Id DOB 0 268408 1970-02-01 1 268408 1970-02-01 2 268408 1970-02-01 3 268408 1970-02-01 4 268408 1970-02-01 ``` The shape is `207518`. While converting the data I got `ValueError: unconverted data remains: 5`. The code to convert to age is below: ``` def cal_age(dob=str): x = dt.datetime.strptime(dob, "%Y-%d-%m") y = dt.date.today() age = y.year - x.year - ((y.month, x.day) < (y.month, x.day)) return age ``` `df_n_4['DOB'] = pd.to_datetime(df_n_4['DOB'],errors='coerce')` `df_n_4['DOB'] = df_n_4['DOB'].astype('str')` `df_n_4['Age'] = df_n_4.DOB.apply(lambda z: cal_age(z))` The error is below: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-113-90ef9d4c7002> in <module> ----> 1 df_n_4['Age'] = df_n_4.DOB.apply(lambda z: cal_age(z)) ~\Anaconda3\lib\site-packages\pandas\core\series.py in apply(self, func, convert_dtype, args, **kwds) 4040 else: 4041 values = self.astype(object).values -> 4042 mapped = lib.map_infer(values, f, convert=convert_dtype) 4043 4044 if len(mapped) and isinstance(mapped[0], Series): pandas\_libs\lib.pyx in pandas._libs.lib.map_infer() <ipython-input-113-90ef9d4c7002> in <lambda>(z) ----> 1 df_n_4['Age'] = df_n_4.DOB.apply(lambda z: cal_age(z)) <ipython-input-111-98546f386b50> in cal_age(dob) 1 def cal_age(dob=str): ----> 2 x = dt.datetime.strptime(dob, "%Y-%d-%m") 3 y = dt.date.today() 4 age = y.year - x.year - ((y.month, x.day) < (y.month, x.day)) 5 return age ~\Anaconda3\lib\_strptime.py in _strptime_datetime(cls, data_string, format) 575 """Return a class cls instance based on the input string and the 576 format string.""" --> 577 tt, fraction, gmtoff_fraction = _strptime(data_string, format) 578 tzname, gmtoff = tt[-2:] 579 args = tt[:6] + (fraction,) ~\Anaconda3\lib\_strptime.py in _strptime(data_string, format) 360 if len(data_string) != found.end(): 361 raise ValueError("unconverted data remains: %s" % --> 362 data_string[found.end():]) 363 364 iso_year = year = None ValueError: unconverted data remains: 5 ```
2020/01/07
[ "https://Stackoverflow.com/questions/59622544", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You can change your function to work directly with datetimes; converting to strings is not necessary: ``` def cal_age(x): y = dt.date.today() # subtract one year if this year's birthday has not happened yet age = y.year - x.year - ((y.month, y.day) < (x.month, x.day)) return age df_n_4['DOB'] = pd.to_datetime(df_n_4['DOB'], errors='coerce') df_n_4['Age'] = df_n_4.DOB.apply(cal_age) ```
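The year/month/day age arithmetic can be sanity-checked with a fixed reference date so the results are deterministic (a stdlib-only sketch; the sample dates are made up):

```python
import datetime as dt

def cal_age(x, today=None):
    # x: a datetime.date of birth; today defaults to the current date
    y = today or dt.date.today()
    # Subtract one year if this year's birthday has not happened yet
    return y.year - x.year - ((y.month, y.day) < (x.month, x.day))

today = dt.date(2020, 1, 7)
print(cal_age(dt.date(1970, 2, 1), today))  # 49: the February birthday hasn't occurred yet
print(cal_age(dt.date(1970, 1, 1), today))  # 50: the January 1st birthday already passed
```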
Try this code: ``` df['current_date'] = pd.Timestamp.now() df['age'] = (df.current_date - pd.to_datetime(df.DOB)).dt.days / 365 ```
66,484,870
So I was developing a Python program for my school project that asks a customer for their details, such as their first name, last name, age, etc. So I made a function called customerdetails. ``` def customerdetails(): Firstname = input("Enter your First name:") Lastname = input("Enter your last name:") Age = input("Age:") Address =input("Enter your address:") Postcode = input("Enter your Postcode:") Email = input("Email:") Phone = int(input("Phone Number:")) customerdetails() ``` Now how can I print those variables, such as Firstname, Lastname, Age, Address, etc.? I tried using the same logic we use to print normal variables, but it didn't work. This is the code: ``` print("Please check your details") print("***********") print(Firstname) print("***********") ``` It shows me an error that says "NameError: name 'Firstname' is not defined." What do I do? Any help will be appreciated.
2021/03/05
[ "https://Stackoverflow.com/questions/66484870", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13290732/" ]
You need to return the information from your `customerdetails` function. Since you have a bunch of different variables, you'll be returning a collection -- a tuple, a list, a dict, etc. Using a tuple or a list means you need to keep track of the order of all the elements, and using a dict means you need to keep track of the exact string literals that correspond to each variable. Personally, I prefer using `NamedTuple` for simple collections like this; it makes for code that's easy to read and type-check. ``` from typing import NamedTuple class CustomerDetails(NamedTuple): first_name: str last_name: str age: str address: str postcode: str email: str phone: int def get_customer_details() -> CustomerDetails: """Get customer details from the user.""" return CustomerDetails( input("Enter your First name:"), input("Enter your last name:"), input("Age:"), input("Enter your address:"), input("Enter your Postcode:"), input("Email:"), int(input("Phone Number:")), ) details = get_customer_details() print("Please check your details") print("***********") print(details.first_name) print("***********") ```
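As a small follow-on to the `NamedTuple` approach above, instances also provide `_asdict()`, which is handy for printing every field at once (the values here are hypothetical, hard-coded instead of read from `input()` so the sketch is deterministic):

```python
from typing import NamedTuple

class CustomerDetails(NamedTuple):
    first_name: str
    last_name: str
    age: str

# Hard-coded sample values stand in for the interactive prompts
details = CustomerDetails("Ada", "Lovelace", "36")
print(details.first_name)             # attribute access, as in the answer above
for field, value in details._asdict().items():
    print(field, "=", value)          # prints every field with its name
```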
By default, variables declared within a function are local only to that function. If you want a function to give back multiple values, you can `return` a tuple of values: ```py def customerdetails(): Firstname = input("Enter your First name:") Lastname = input("Enter your last name:") Age = input("Age:") Address =input("Enter your address:") Postcode = input("Enter your Postcode:") Email = input("Email:") Phone = int(input("Phone Number:")) return (Firstname, Lastname, Age, Address, Postcode, Email, Phone) (fn, ln, age, addr, postcode, email, phone) = customerdetails() ```
66,484,870
So I was developing a Python program for my school project that asks a customer for their details, such as their first name, last name, age, etc. So I made a function called customerdetails. ``` def customerdetails(): Firstname = input("Enter your First name:") Lastname = input("Enter your last name:") Age = input("Age:") Address =input("Enter your address:") Postcode = input("Enter your Postcode:") Email = input("Email:") Phone = int(input("Phone Number:")) customerdetails() ``` Now how can I print those variables, such as Firstname, Lastname, Age, Address, etc.? I tried using the same logic we use to print normal variables, but it didn't work. This is the code: ``` print("Please check your details") print("***********") print(Firstname) print("***********") ``` It shows me an error that says "NameError: name 'Firstname' is not defined." What do I do? Any help will be appreciated.
2021/03/05
[ "https://Stackoverflow.com/questions/66484870", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13290732/" ]
You need to return the information from your `customerdetails` function. Since you have a bunch of different variables, you'll be returning a collection -- a tuple, a list, a dict, etc. Using a tuple or a list means you need to keep track of the order of all the elements, and using a dict means you need to keep track of the exact string literals that correspond to each variable. Personally, I prefer using `NamedTuple` for simple collections like this; it makes for code that's easy to read and type-check. ``` from typing import NamedTuple class CustomerDetails(NamedTuple): first_name: str last_name: str age: str address: str postcode: str email: str phone: int def get_customer_details() -> CustomerDetails: """Get customer details from the user.""" return CustomerDetails( input("Enter your First name:"), input("Enter your last name:"), input("Age:"), input("Enter your address:"), input("Enter your Postcode:"), input("Email:"), int(input("Phone Number:")), ) details = get_customer_details() print("Please check your details") print("***********") print(details.first_name) print("***********") ```
I think you probably wrote the print calls in a different function from the one in which you defined the variable `Firstname`, etc.
66,484,870
So I was developing a Python program for my school project that asks a customer for their details, such as their first name, last name, age, etc. So I made a function called customerdetails. ``` def customerdetails(): Firstname = input("Enter your First name:") Lastname = input("Enter your last name:") Age = input("Age:") Address =input("Enter your address:") Postcode = input("Enter your Postcode:") Email = input("Email:") Phone = int(input("Phone Number:")) customerdetails() ``` Now how can I print those variables, such as Firstname, Lastname, Age, Address, etc.? I tried using the same logic we use to print normal variables, but it didn't work. This is the code: ``` print("Please check your details") print("***********") print(Firstname) print("***********") ``` It shows me an error that says "NameError: name 'Firstname' is not defined." What do I do? Any help will be appreciated.
2021/03/05
[ "https://Stackoverflow.com/questions/66484870", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13290732/" ]
You need to return the information from your `customerdetails` function. Since you have a bunch of different variables, you'll be returning a collection -- a tuple, a list, a dict, etc. Using a tuple or a list means you need to keep track of the order of all the elements, and using a dict means you need to keep track of the exact string literals that correspond to each variable. Personally, I prefer using `NamedTuple` for simple collections like this; it makes for code that's easy to read and type-check. ``` from typing import NamedTuple class CustomerDetails(NamedTuple): first_name: str last_name: str age: str address: str postcode: str email: str phone: int def get_customer_details() -> CustomerDetails: """Get customer details from the user.""" return CustomerDetails( input("Enter your First name:"), input("Enter your last name:"), input("Age:"), input("Enter your address:"), input("Enter your Postcode:"), input("Email:"), int(input("Phone Number:")), ) details = get_customer_details() print("Please check your details") print("***********") print(details.first_name) print("***********") ```
``` def customerdetails(): Firstname = input("Enter your First name:") Lastname = input("Enter your last name:") Age = input("Age:") Address =input("Enter your address:") Postcode = input("Enter your Postcode:") Email = input("Email:") Phone = int(input("Phone Number:")) return Firstname, Lastname, Age,Address, Postcode, Email, Phone ## Calling function Firstname, Lastname, Age,Address, Postcode, Email, Phone = customerdetails() ```
41,994,645
Is there a more functional way to create a 2d array in Javascript than what I have here? Perhaps using `.apply`? ``` generatePuzzle(size) { let puzzle = []; for (let i = 0; i < size; i++) { puzzle[i] = []; for (let j = 0; j < size; j++) { puzzle[i][j] = Math.floor((Math.random() * 200) + 1); } } return puzzle; } ``` For instance, in python, you can do something like `[[0]*4]*4` to create a 4x4 list
2017/02/02
[ "https://Stackoverflow.com/questions/41994645", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1718122/" ]
``` const repeat = (fn, n) => Array(n).fill(0).map(fn); const rand = () => Math.floor((Math.random() * 200) + 1); const puzzle = n => repeat(() => repeat(rand, n), n); ``` And then `puzzle(3)`, eg, will return a 3x3 matrix filled with random numbers.
With lodash just like below: ``` const _ = require('lodash'); function generatePuzzle(size) { return _.times(size, () => _.times(size, () => (Math.random() * 200) + 1)); } ```
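The question's Python aside (`[[0]*4]*4`) deserves a caveat: that idiom repeats references to a single inner list, so mutating one row mutates them all. A list comprehension builds independent rows (a quick sketch):

```python
# [[0]*4]*4 repeats *references* to one inner list
aliased = [[0] * 4] * 4
aliased[0][0] = 99
print(aliased[1][0])       # 99: every "row" is the same object

# A comprehension builds an independent inner list per row
independent = [[0] * 4 for _ in range(4)]
independent[0][0] = 99
print(independent[1][0])   # 0: other rows are untouched
```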
64,169,510
I am using the Xen hypervisor. For managing virtual machines I am using **virt-manager**. Whenever I want to start a virtual machine, once everything is ready and I click the Create button, I get the following error: ``` Unable to complete install: 'An error occurred, but the cause is unknown' Traceback (most recent call last): File "/usr/share/virt-manager/virtManager/asyncjob.py", line 75, in cb_wrapper callback(asyncjob, *args, **kwargs) File "/usr/share/virt-manager/virtManager/createvm.py", line 2089, in _do_async_install guest.installer_instance.start_install(guest, meter=meter) File "/usr/share/virt-manager/virtinst/install/installer.py", line 542, in start_install domain = self._create_guest( File "/usr/share/virt-manager/virtinst/install/installer.py", line 491, in _create_guest domain = self.conn.createXML(install_xml or final_xml, 0) File "/usr/lib/python3/dist-packages/libvirt.py", line 4034, in createXML if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self) libvirt.libvirtError: An error occurred, but the cause is unknown ```
2020/10/02
[ "https://Stackoverflow.com/questions/64169510", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12966156/" ]
You can iterate over the result of the dynamic query directly: ``` create or replace function gapandoverlapdetection ( table_name text, entity_ids bigint[]) returns table (entity_id bigint, valid tsrange, causes_overlap boolean, causes_gap boolean) as $$ declare var_r record; begin for var_r in EXECUTE format('select entity_id, valid from %I where entity_id = any($1) and registration > now()::timestamp order by valid ASC', table_name) using entity_ids loop ... do something with var_r -- return a row for the result -- this does not end the function -- it just appends this row to the result return query select var_r.entity_id, var_r.valid, true, false; end loop; end $$ language plpgsql; ``` The `%I` injects an identifier into a string, and the `$1` inside the dynamic SQL is then populated by passing the argument with the `using` keyword.
Firstly, decide whether you want to pass the table's name or oid. If you want to identify the table by name, then the parameter should be of `text` type and not `regclass`. Secondly, if you want the table name to change between executions then you need to [execute the SQL statement dynamically](https://www.postgresql.org/docs/13/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN) with the `EXECUTE` statement.
60,404,756
``` from sklearn.model_selection import StratifiedShuffleSplit split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42) for train_index, test_index in split.split(housing, housing["income_cat"]): strat_train_set = housing.loc[train_index] strat_test_set = housing.loc[test_index] ``` I am currently reading the book hands-on ML and I am having some issues with this code. I don't have much experience with python so that might be a reason but let me make my confusion clearer. In the book, the housing problem requires us to create stratums so the dataset has sufficient instances of each, and we do this with code that I didn't copy here, the code I am showing is used to create the test and train sets, using the specific income categories. The 1st and 2nd lines of code are clear, the 3rd is where I get lost. We create a split of test 0.2 train 0.8 but what exactly is happening from then on, what is the for loop used for? I have looked in a couple of pages for info but haven't really found anything that made the situation clear, so I would really appreciate the help. Thank you in advance for your answers.
2020/02/25
[ "https://Stackoverflow.com/questions/60404756", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12946446/" ]
That for loop just takes the indices produced by the split and uses them to select the corresponding rows of the original data, forming the training and test sets.
StratifiedShuffleSplit is better suited to K-fold cross-validation, where you divide the training and testing data in different ways and then average a result over K iterations. `n_splits` must equal the *K* value, and in your case *K* is one, which makes no sense for cross-validation. I think you'd be better off using sklearn.model_selection.train_test_split, which makes more sense here.
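To make the stratification idea concrete without scikit-learn: a stratified split samples the same fraction from every category, so the test set preserves the category proportions of the whole dataset. A stdlib-only sketch with a made-up label column:

```python
import random
from collections import defaultdict

def stratified_indices(labels, test_size=0.2, seed=42):
    # Group row indices by their stratum label
    by_label = defaultdict(list)
    for i, lab in enumerate(labels):
        by_label[lab].append(i)
    rng = random.Random(seed)
    test = []
    for lab, idxs in by_label.items():
        rng.shuffle(idxs)
        # Take the same fraction from each stratum
        k = int(round(len(idxs) * test_size))
        test.extend(idxs[:k])
    test_set = set(test)
    train = [i for i in range(len(labels)) if i not in test_set]
    return train, test

labels = ["low"] * 80 + ["high"] * 20            # 80/20 class balance
train, test = stratified_indices(labels)
print(len(test))                                  # 20 rows in the test set
print(sum(labels[i] == "high" for i in test))     # 4, i.e. 20% of the test set, same as overall
```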
70,310,388
I have a list of nested dictionaries (Python 3.9) that looks something like this: ``` records = [ {'Total:': {'Owner:': {'Available:': {'15 to 34 years': 1242}}}}, {'Total:': {'Owner:': {'Available:': {'35 to 64 years': 5699}}}}, {'Total:': {'Owner:': {'Available:': {'65 years and over': 2098}}}}, {'Total:': {'Owner:': {'No Service:': {'15 to 34 years': 43}}}}, {'Total:': {'Owner:': {'No Service:': {'35 to 64 years': 64}}}}, {'Total:': {'Owner:': {'No Service:': {'65 years and over': 5}}}}, {'Total:': {'Renter:': {'Available:': {'15 to 34 years': 1403}}}}, {'Total:': {'Renter:': {'Available:': {'35 to 64 years': 2059}}}}, {'Total:': {'Renter:': {'Available:': {'65 years and over': 395}}}}, {'Total:': {'Renter:': {'No Service:': {'15 to 34 years': 16}}}}, {'Total:': {'Renter:': {'No Service:': {'35 to 64 years': 24}}}}, {'Total:': {'Renter:': {'No Service:': {'65 years and over': 0}}}}, ] ``` The level of nesting is not always consistent. The example above has 4 levels (total, owner/renter, available/no service, age group), but there are some examples with a single level and others with as many as 5. I would like to merge the data in a way that doesn't replace the final dictionary the way `update()` or `{**dict_a, **dict_b}` does. The final output should look something like this: ``` combined = { 'Total': { 'Owner': { 'Available': { '15 to 34 years': 1242, '35 to 64 years': 5699, '65 years and over': 2098 }, 'No Service:': { '15 to 34 years': 43, '35 to 64 years': 64, '65 years and over': 5 } }, 'Renter': { 'Available': { '15 to 34 years': 1403, '35 to 64 years': 2059, '65 years and over': 395 }, 'No Service:': { '15 to 34 years': 16, '35 to 64 years': 24, '65 years and over': 0 } }, } } ```
2021/12/10
[ "https://Stackoverflow.com/questions/70310388", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12413845/" ]
Recursion is an easy way to navigate and operate on arbitrarily nested structures: ``` def combine_into(d: dict, combined: dict) -> None: for k, v in d.items(): if isinstance(v, dict): combine_into(v, combined.setdefault(k, {})) else: combined[k] = v combined = {} for record in records: combine_into(record, combined) print(combined) ``` ``` {'Total:': {'Owner:': {'Available:': {'15 to 34 years': 1242, '35 to 64 years': 5699, '65 years and over': 2098}, 'No Service:': {'15 to 34 years': 43, '35 to 64 years': 64, '65 years and over': 5}}, 'Renter:': {'Available:': {'15 to 34 years': 1403, '35 to 64 years': 2059, '65 years and over': 395}, 'No Service:': {'15 to 34 years': 16, '35 to 64 years': 24, '65 years and over': 0}}}} ``` The general idea here is that each call to `combine_into` takes one dict and combines it into the `combined` dict -- each value that is itself a dict results in another recursive call, while other values just get copied into `combined` as-is. Note that this will raise an exception (or clobber some data) if some of the `records` have disagreements about whether a particular node is a leaf or not!
Quick solution: `res = NDict.from_list_of_dict(records).raw_dict` Test: ```py >>> from naapc import NDict >>> records = [ ... {'Total:': {'Owner:': {'Available:': {'15 to 34 years': 1242}}}}, ... {'Total:': {'Owner:': {'Available:': {'35 to 64 years': 5699}}}}, ... {'Total:': {'Owner:': {'Available:': {'65 years and over': 2098}}}}, ... {'Total:': {'Owner:': {'No Service:': {'15 to 34 years': 43}}}}, ... {'Total:': {'Owner:': {'No Service:': {'35 to 64 years': 64}}}}, ... {'Total:': {'Owner:': {'No Service:': {'65 years and over': 5}}}}, ... {'Total:': {'Renter:': {'Available:': {'15 to 34 years': 1403}}}}, ... {'Total:': {'Renter:': {'Available:': {'35 to 64 years': 2059}}}}, ... {'Total:': {'Renter:': {'Available:': {'65 years and over': 395}}}}, ... {'Total:': {'Renter:': {'No Service:': {'15 to 34 years': 16}}}}, ... {'Total:': {'Renter:': {'No Service:': {'35 to 64 years': 24}}}}, ... {'Total:': {'Renter:': {'No Service:': {'65 years and over': 0}}}}, ... ] >>> a = NDict.from_list_of_dict(records) >>> print(a) { "Total:": { "Owner:": { "Available:": { "15 to 34 years": 1242, "35 to 64 years": 5699, "65 years and over": 2098 }, "No Service:": { "15 to 34 years": 43, "35 to 64 years": 64, "65 years and over": 5 } }, "Renter:": { "Available:": { "15 to 34 years": 1403, "35 to 64 years": 2059, "65 years and over": 395 }, "No Service:": { "15 to 34 years": 16, "35 to 64 years": 24, "65 years and over": 0 } } } } >>> type(a) <class 'naapc.ndict.NDict'> >>> normal_dict = a.raw_dict >>> type(normal_dict) <class 'dict'> >>> normal_dict is not a.raw_dict True >>> ``` Doc: <https://github.com/eiphy/naapc>
9,638,826
I'm using python to simulate some automation models, and with the help of matplotlib I'm producing plots like the one shown below. ![enter image description here](https://i.stack.imgur.com/SUeE2.png) I'm currently plotting with the following command: ``` ax.imshow(self.g, cmap=map, interpolation='nearest') ``` where `self.g` is the binary map (`0` -> blue, `1` -> red in my current plots). However, to include this in my report I would like the plot to be with black dots on white background instead of red on blue. How do I accomplish that?
2012/03/09
[ "https://Stackoverflow.com/questions/9638826", "https://Stackoverflow.com", "https://Stackoverflow.com/users/38055/" ]
You can change the color map you are using via the `cmap` keyword. The color map `'Greys'` provides the effect you want. You can find a list of [available maps on the scipy website](http://scipy-cookbook.readthedocs.io/items/Matplotlib_Show_colormaps.html). ``` import matplotlib.pyplot as plt import numpy as np np.random.seed(101) g = np.floor(np.random.random((100, 100)) + .5) plt.subplot(211) plt.imshow(g) plt.subplot(212) plt.imshow(g, cmap='Greys', interpolation='nearest') plt.savefig('blkwht.png') plt.show() ``` which results in: ![enter image description here](https://i.stack.imgur.com/rlSzf.png)
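A related sketch, assuming matplotlib is available: if you want the two values pinned to exact colors rather than the ends of a continuous map, `matplotlib.colors.ListedColormap` does that (here 0 maps to white and 1 to black):

```python
from matplotlib.colors import ListedColormap

# Two-entry colormap: data value 0 -> white, 1 -> black,
# regardless of the data range.
bw = ListedColormap(["white", "black"])

# Usage would then be: plt.imshow(g, cmap=bw, interpolation='nearest')
```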
There is an alternative method to Yann's answer that gives you finer control. Matplotlib's [imshow](http://matplotlib.sourceforge.net/api/pyplot_api.html#matplotlib.pyplot.imshow) can take a `MxNx3` matrix where each entry is the RGB color value - just set them to white `[1,1,1]` or black `[0,0,0]` accordingly. If you want three colors it's easy to expand this method. ``` import matplotlib.pyplot as plt import numpy as np # Z is your data set N = 100 Z = np.random.random((N,N)) # G is a NxNx3 matrix G = np.zeros((N,N,3)) # Where we set the RGB for each pixel G[Z>0.5] = [1,1,1] G[Z<0.5] = [0,0,0] plt.imshow(G,interpolation='nearest') plt.show() ``` ![enter image description here](https://i.stack.imgur.com/uyU7om.png)
18,080,556
I have written a project in python which I am now in the process of moving to google app engine. The problem that occurs is when I run this code on GAE: ``` import requests from google.appengine.api import urlfetch def retrievePage(url, id): response = 'http://online2.citybreak.com/Book/Package/Result.aspx?onlineid=%s' % id # Set the timeout to 60 seconds urlfetch.set_default_fetch_deadline(60) # Send the first request r1 = requests.get(url) cookies = r1.cookies print 'Cookies: %s' % r1.cookies # Retrieve the content r2 = requests.get(response, cookies=cookies) return r2.text ``` When running the code on GAE the cookies from the first request are missing. That is to say, `r1.cookies` is just an empty cookie jar. The same code works just fine on my django server, where the cookies contain an ASP.NET session id. The reason I have two requests is that the first one redirects the user and will only retrieve the correct page if the session cookie is the same. print output on GAE: ``` Cookies: <<class 'requests.cookies.RequestsCookieJar'>[]> ``` print output on Django: ``` Cookies: <<class 'requests.cookies.RequestsCookieJar'>[<Cookie ASP.NET_SessionId=dhmk1vt3ujgmhhhmbwsclukb for online2.citybreak.com/>]> ``` Anyone know what the problem might be? Is GAE stripping away the cookie information? I am also open to any suggestions on another way to retrieve the page; I just found that the requests module was easier than the alternatives I tried.
2013/08/06
[ "https://Stackoverflow.com/questions/18080556", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2656606/" ]
I think this will do it: ``` var text = "aaa bbb eee ccc <br>ddd eee fff ggg hhh iii jjj kkk"; var search = /eee [^e<>]*ggg/g; console.log(text.search(search)); // a) console.log(text.replace(search, "newinsertedstring $&")); // b) ```
``` var text = "whatever".replace("eee fff ggg", "newinsertedstring eee fff ggg"); ```
18,080,556
I have written a project in python which I am now in the process of moving to google app engine. The problem that occurs is when I run this code on GAE: ``` import requests from google.appengine.api import urlfetch def retrievePage(url, id): response = 'http://online2.citybreak.com/Book/Package/Result.aspx?onlineid=%s' % id # Set the timeout to 60 seconds urlfetch.set_default_fetch_deadline(60) # Send the first request r1 = requests.get(url) cookies = r1.cookies print 'Cookies: %s' % r1.cookies # Retrieve the content r2 = requests.get(response, cookies=cookies) return r2.text ``` When running the code on GAE the cookies from the first request are missing. That is to say, `r1.cookies` is just an empty cookie jar. The same code works just fine on my django server, where the cookies contain an ASP.NET session id. The reason I have two requests is that the first one redirects the user and will only retrieve the correct page if the session cookie is the same. print output on GAE: ``` Cookies: <<class 'requests.cookies.RequestsCookieJar'>[]> ``` print output on Django: ``` Cookies: <<class 'requests.cookies.RequestsCookieJar'>[<Cookie ASP.NET_SessionId=dhmk1vt3ujgmhhhmbwsclukb for online2.citybreak.com/>]> ``` Anyone know what the problem might be? Is GAE stripping away the cookie information? I am also open to any suggestions on another way to retrieve the page; I just found that the requests module was easier than the alternatives I tried.
2013/08/06
[ "https://Stackoverflow.com/questions/18080556", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2656606/" ]
> > i want to find the line where the pattern which contains eee AND ggg is > > > That's a different question, but OK. ``` var text = "aaa bbb eee ccc <br>ddd eee fff ggg hhh iii jjj kkk", search = "eee ggg"; var lines = text.split("<br>"), terms = search.split(" "), firstOccurence = new RegExp(terms.join("|")); function containedIn(l){ return function(t) { return l.indexOf(t) > -1; } } for (var i=0; i<lines.length; i++) if (terms.every(containedIn(lines[i]))) lines[i] = lines[i].replace(firstOccurence, "newinsertedString $&"); var result = lines.join("<br>"); ```
``` var text = "whatever".replace("eee fff ggg", "newinsertedstring eee fff ggg"); ```
18,080,556
I have written a project in python which I am now in the process of moving to google app engine. The problem that occurs is when I run this code on GAE: ``` import requests from google.appengine.api import urlfetch def retrievePage(url, id): response = 'http://online2.citybreak.com/Book/Package/Result.aspx?onlineid=%s' % id # Set the timeout to 60 seconds urlfetch.set_default_fetch_deadline(60) # Send the first request r1 = requests.get(url) cookies = r1.cookies print 'Cookies: %s' % r1.cookies # Retrieve the content r2 = requests.get(response, cookies=cookies) return r2.text ``` When running the code on GAE the cookies from the first request are missing. That is to say, `r1.cookies` is just an empty cookie jar. The same code works just fine on my django server, where the cookies contain an ASP.NET session id. The reason I have two requests is that the first one redirects the user and will only retrieve the correct page if the session cookie is the same. print output on GAE: ``` Cookies: <<class 'requests.cookies.RequestsCookieJar'>[]> ``` print output on Django: ``` Cookies: <<class 'requests.cookies.RequestsCookieJar'>[<Cookie ASP.NET_SessionId=dhmk1vt3ujgmhhhmbwsclukb for online2.citybreak.com/>]> ``` Anyone know what the problem might be? Is GAE stripping away the cookie information? I am also open to any suggestions on another way to retrieve the page; I just found that the requests module was easier than the alternatives I tried.
2013/08/06
[ "https://Stackoverflow.com/questions/18080556", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2656606/" ]
I think this will do it: ``` var text = "aaa bbb eee ccc <br>ddd eee fff ggg hhh iii jjj kkk"; var search = /eee [^e<>]*ggg/g; console.log(text.search(search)); // a) console.log(text.replace(search, "newinsertedstring $&")); // b) ```
> > i want to find the line where the pattern which contains eee AND ggg is > > > That's a different question, but OK. ``` var text = "aaa bbb eee ccc <br>ddd eee fff ggg hhh iii jjj kkk", search = "eee ggg"; var lines = text.split("<br>"), terms = search.split(" "), firstOccurence = new RegExp(terms.join("|")); function containedIn(l){ return function(t) { return l.indexOf(t) > -1; } } for (var i=0; i<lines.length; i++) if (terms.every(containedIn(lines[i]))) lines[i] = lines[i].replace(firstOccurence, "newinsertedString $&"); var result = lines.join("<br>"); ```
12,558,878
> > **Possible Duplicate:** > > [Python 3.2.3 programming…Almost had it working](https://stackoverflow.com/questions/12558717/python-3-2-3-programming-almost-had-it-working) > > > ``` x = float(input("What is/was the cost of the meal?")) y = float(input("What is/was the sales tax?")) z = float(input("What percentage tip would you like to leave?")) print ("Original Food Charge: ${}" .format(x))) print ("Sales Tax: ${}" .format((y/100)*x))) print ("Tip: ${}" .format(x*(z/100))) print ("Total Charge For Food: ${}" .format(x+((y/100)*x)+((z/100)*x))) ``` output error: line 10, in Syntax Error: .format(x))):line 1015 Someone reported that this worked in their earlier version of Python (v2.6 I think). I'm using the later 3.2.3 and racking my darn brain as to why this is not working in this version. It makes sense to me; someone please ENLIGHTEN me.
2012/09/24
[ "https://Stackoverflow.com/questions/12558878", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1693293/" ]
The parentheses are unbalanced: the first two `print` lines each end with an extra closing parenthesis. Each call should look like: ``` print ("Tip: ${}".format(x*(z/100))) ``` **UPDATE:** Not sure if you got it working or not yet, here is the complete code after fixing your unbalanced parentheses... ``` x = float(input("What is/was the cost of the meal?")) y = float(input("What is/was the sales tax?")) z = float(input("What percentage tip would you like to leave?")) print ("Original Food Charge: ${}" .format(x)) print ("Sales Tax: ${}" .format((y/100)*x)) print ("Tip: ${}" .format(x*(z/100))) print ("Total Charge For Food: ${}" .format(x+((y/100)*x)+((z/100)*x))) ```
The first two `print` calls each have an extra end bracket, e.g. `.format(x)))`. Here is what appears to be a working version after I fixed the brackets: ``` x = float(input("What is/was the cost of the meal?")) y = float(input("What is/was the sales tax?")) z = float(input("What percentage tip would you like to leave?")) print("Original Food Charge: ${}".format(x)) print("Sales Tax: ${}".format((y/100)*x)) print("Tip: ${}".format(x*(z/100))) print("Total Charge For Food: ${}".format(x+((y/100)*x)+((z/100)*x))) ``` Also there is no need for newlines if the line width is less than 79.
12,558,878
> > **Possible Duplicate:** > > [Python 3.2.3 programming…Almost had it working](https://stackoverflow.com/questions/12558717/python-3-2-3-programming-almost-had-it-working) > > > ``` x = float(input("What is/was the cost of the meal?")) y = float(input("What is/was the sales tax?")) z = float(input("What percentage tip would you like to leave?")) print ("Original Food Charge: ${}" .format(x))) print ("Sales Tax: ${}" .format((y/100)*x))) print ("Tip: ${}" .format(x*(z/100))) print ("Total Charge For Food: ${}" .format(x+((y/100)*x)+((z/100)*x))) ``` output error: line 10, in Syntax Error: .format(x))):line 1015 Someone reported that this worked in their earlier version of Python (v2.6 I think). I'm using the later 3.2.3 and racking my darn brain as to why this is not working in this version. It makes sense to me; someone please ENLIGHTEN me.
2012/09/24
[ "https://Stackoverflow.com/questions/12558878", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1693293/" ]
The parentheses are unbalanced: the first two `print` lines each end with an extra closing parenthesis. Each call should look like: ``` print ("Tip: ${}".format(x*(z/100))) ``` **UPDATE:** Not sure if you got it working or not yet, here is the complete code after fixing your unbalanced parentheses... ``` x = float(input("What is/was the cost of the meal?")) y = float(input("What is/was the sales tax?")) z = float(input("What percentage tip would you like to leave?")) print ("Original Food Charge: ${}" .format(x)) print ("Sales Tax: ${}" .format((y/100)*x)) print ("Tip: ${}" .format(x*(z/100))) print ("Total Charge For Food: ${}" .format(x+((y/100)*x)+((z/100)*x))) ```
You have an extra close paren at the end of the first two lines, e.g.: ``` .format(x))) ``` Remove one from each.
13,221,021
I'm trying to create a basic blogging application in Python using Web.Py. I have started without a directory structure, but soon I needed one. So I created this structure: ``` Blog/ ├── Application/ │   ├── App.py │   └── __init__.py │ ├── Engine/ │   ├── Connection/ │   │   ├── __init__.py │   │   └── MySQLConnection.py │   ├── Errors.py │   └── __init__.py ├── __init__.py ├── Models/ │   ├── BlogPostModel.py │   └── __init__.py ├── start.py └── Views/ ├── Home.py └── __init__.py ``` `start.py` imports `Application.App`, which contains Web.Py stuff and imports `Blog.Models.BlogPostModel`, which imports `Blog.Engine.Connection.MySQLConnection`. `Application.App` also imports `Engine.Errors` and `Views.Home`. All these imports happen inside constructors, and all code inside all files is in classes. When I run `python start.py`, which contains these three lines of code: ``` from Application import App app = App.AppInstance() app.run() ``` The following stack trace is printed: ``` Blog $ python start.py Traceback (most recent call last): File "start.py", line 2, in <module> Blog = App.AppInstance() File "/home/goktug/code/Blog/Application/App.py", line 4, in __init__ from Blog.Views import Home ImportError: No module named Blog.Views ``` But according to what I understand from some research, this should run, at least until it reaches something after App.py. *Can anyone tell me where I made the mistake?* (I can provide more code on request, but for now I'm stopping here, as this one is getting messier and messier).
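**Update:** a sketch of what I now believe is going on: running `python start.py` from inside `Blog/` puts `Blog/` itself on `sys.path`, so the importable top-level packages are `Application`, `Engine`, `Models` and `Views`; no package named `Blog` is visible, hence the `ImportError`. Two possible workarounds (untested):

```python
import os
import sys

# Hypothetical fix sketch: add Blog's parent directory to sys.path before
# any "Blog.*" import. (The simpler fix is to drop the prefix in App.py:
#     from Views import Home   # instead of: from Blog.Views import Home
# so packages are resolved relative to the Blog/ directory itself.)
parent = os.path.dirname(os.getcwd())  # assumes the cwd is Blog/
if parent not in sys.path:
    sys.path.insert(0, parent)
```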
2012/11/04
[ "https://Stackoverflow.com/questions/13221021", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
No, you simply can't. AutoBean requires everything to be *statically typed*: no polymorphism, and no mixed-typed lists of maps. You might be interested by RequestFactory's built-in support for JSON-RPC though.
Why do your params all need to be passed back in a list? Surely you're not going to do the same thing with a `String`, an `Integer`, and another `Object`! Just send them all back separately. Further, you're not sending a custom `Object` over the JSON, you're sending the `objid` of that object... so just send the `Integer id` and let the server handle it.
34,759,366
I have a python script that uses a `while True` loop and continues to run. In the while loop I have several if statements. Some of the if statements have a `time.sleep` in them. After using this for a while I noticed that all of the if statements below the first `time.sleep` are waiting. Is there a way to have the if statements below the first `time.sleep` run, or will they always wait? I could put all the if statements with a `time.sleep` at the bottom, so all the if statements without one get executed first. Unless there is another way, I will try to restructure the script that way if I can. ``` while True: temp1 = tempRead1() if temp1 < 65: GPIO.output(17, False) else: GPIO.output(17, True) if temp1 > 70 and GPIO.input(17): time.sleep(120) GPIO.output(27, False) else: GPIO.output(27, True) if temp1 > 72 and GPIO.input(17): time.sleep(120) GPIO.output(22, False) else: GPIO.output(22, True) if temp1 > 80 and GPIO.input(17): time.sleep(120) GPIO.output(5, False) else: GPIO.output(5, True) if temp1 < 55: GPIO.output(6, False) else: GPIO.output(6, True) time.sleep(60) ```
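**Update:** one alternative I'm sketching is to replace the blocking `time.sleep(120)` calls with deadlines that get checked on every pass of the loop, so the later if statements keep running (the GPIO calls and thresholds are stubbed out here; untested on the device):

```python
import time

def make_deadline(now, delay_s):
    """Absolute time at which a delayed action becomes due."""
    return now + delay_s

def is_due(now, deadline):
    """True once the current time has reached a previously set deadline."""
    return deadline is not None and now >= deadline

# Inside the loop it would look roughly like:
#   if temp1 > 70 and GPIO.input(17):
#       if fan_deadline is None:
#           fan_deadline = make_deadline(time.monotonic(), 120)
#       if is_due(time.monotonic(), fan_deadline):
#           GPIO.output(27, False)
#   else:
#       fan_deadline = None
#       GPIO.output(27, True)
```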
2016/01/13
[ "https://Stackoverflow.com/questions/34759366", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5729013/" ]
I have used a Javascript timer in the past to poll the server to test if the session is still active. If I receive a 403 then redirect to the login page. ``` var AutoLogout = {}; AutoLogout.pollInterval = 60000; // Default is 1 minute. // Force auto logout when session expires AutoLogout.start = function (heartBeatUrl, sessionExpiredUrl, interval) { if (interval) AutoLogout.pollInterval = interval; var timer = $.timer(function() { checkSession(heartBeatUrl, sessionExpiredUrl, timer); }); timer.set({ time: AutoLogout.pollInterval, autostart: true }); }; // Check the session serverside to see if we need to auto-logout // if the clientActivity flag is set then the session will be extended before checking. // if the session is still alive then set the next timer interval to be the time returned from the server. function checkSession(sessionUrl, sessionExpiredUrl, timer) { $.ajax(sessionUrl, { type: "post", contentType: "application/json", success: function (result) { // update the timer poll interval based on return value. try { var r = ko.toJS(result); timer.set({ time: r.TimeoutMilliseconds ? r.TimeoutMilliseconds : AutoLogout.pollInterval, autostart: true }); } catch(e) { } }, error: function(e) { // do nothing }, statusCode: { 403: function () { window.location.href = sessionExpiredUrl; }, 401: function () { window.location.href = sessionExpiredUrl; } } }); } ``` Then when you page loads call AutoLogout.start with the necessary url's for you application. Notice: I have used Knockout JS in this example to parse the data returned from the server request but that is up to you.
I will try to solve your problem. I suggest putting your logic in an action filter: ``` public class AuthorizeActionFilterAttribute : ActionFilterAttribute { public override void OnActionExecuting(ActionExecutingContext filterContext) { HttpSessionStateBase session = filterContext.HttpContext.Session; Controller controller = filterContext.Controller as Controller; if (controller != null) { if (session != null && session["authstatus"] == null) { filterContext.Result = new RedirectToRouteResult( new RouteValueDictionary{{ "controller", "Login" }, { "action", "Index" } }); } } base.OnActionExecuting(filterContext); } } ``` Also please refer to this: [Redirect From Action Filter Attribute](https://stackoverflow.com/questions/5453338/redirect-from-action-filter-attribute/5453371#5453371)
35,475,770
Every forum I have looked at says that: ``` pip install pillow ``` remedies issues with installing pyautogui, however, I have installed pillow and I am still receiving: ``` python setup.py egg_info failed with error code 1 ``` Any suggestions? I also tried installing PIL but that failed as well with the same error.
2016/02/18
[ "https://Stackoverflow.com/questions/35475770", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5365094/" ]
Some possible solutions: 1. Use an ajax method to get your resource by passing a key. 2. Use hidden input fields and load values. 3. Use a dedicated JSP page to declare JS variables, or even a JS function to get values according to a key, like this: > > > ``` > <script type="text/javascript"> > > var messageOne = '<%=bundle.getString("line1") %>'; > var messageTwo = '<%=bundle.getString("line2") %>'; > > </script> > > ``` > >
It is normally bad practice to use scriptlets `<% %>` inside your `jsp` files. You can use the `fmt` tag from the JSTL core library to fetch information from your resource bundles. ``` <fmt:bundle basename="bundle"> <fmt:message var="variableName" key="bundleKey" /> </fmt:bundle> <input type="hidden" name="hiddenLine2" id="hiddenLine2" value="${variableName}"> ``` should work. In fact, I think you can also directly embed it into the JavaScript with EL as well (quote the EL expression, since the value is a string): ``` var line2 = '${variableName}'; //instead of getting it from document.getElement(...) ```
35,475,770
Every forum I have looked at says that: ``` pip install pillow ``` remedies issues with installing pyautogui, however, I have installed pillow and I am still receiving: ``` python setup.py egg_info failed with error code 1 ``` Any suggestions? I also tried installing PIL but that failed as well with the same error.
2016/02/18
[ "https://Stackoverflow.com/questions/35475770", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5365094/" ]
It is normally bad practice to use scriptlets `<% %>` inside your `jsp` files. You can use the `fmt` tag from the JSTL core library to fetch information from your resource bundles. ``` <fmt:bundle basename="bundle"> <fmt:message var="variableName" key="bundleKey" /> </fmt:bundle> <input type="hidden" name="hiddenLine2" id="hiddenLine2" value="${variableName}"> ``` should work. In fact, I think you can also directly embed it into the JavaScript with EL as well (quote the EL expression, since the value is a string): ``` var line2 = '${variableName}'; //instead of getting it from document.getElement(...) ```
Here is a different solution. 1. Load bundle like OP did, with the method `getBundle()`. 2. Using the third option on Arun's answer, create a separate JSP file to create a custom JavaScript object. This is the content of said JSP: ``` <%@page import="com.tenea.intranet.conf.Conf" %> <%@page import="java.util.ResourceBundle, java.util.Enumeration" %> <script type="text/javascript"> var _get = function(ID){ if (this.hasOwnProperty(ID)) return this[ID]; else { console.warn("[Resources] Error getting key <"+ ID +">"); return "[ERROR]"; } }; var _search = function(text){ var elems = { }; Object.keys(this).map(e => { if (typeof (this[e]) !== "function" && this[e].includes(text)) { elems[e] = this[e]; } }); return elems; }; var Resources = { <% ResourceBundle labels = ResourceBundle.getBundle("content.test.Language", locale2); Enumeration<String> e = labels.getKeys(); while (e.hasMoreElements()) { String param = e.nextElement(); out.print(param +":\""+ labels.getString(param) +"\""); if (e.hasMoreElements()) out.println(","); } %> }; Resources._get = _get; Resources._search = _search; </script> ``` What this JSP does is: * Creates object Resources * Using some snippets (sorry Martin :p), and iterating on the list of keys from the resourceBundle, for each key I print a line like "key: value" with out.println(). The resulting object is something like this: ``` Resources { abrilAbrText: "Apr" abrilText: "April" ... } ``` For some extra functionality, I also added 2 functions inside Resources. \_get() returns the text related to the key passed as parameter. If said key doesn't exist, it returns the text '[ERROR]'. \_search() is a function I added for development purposes. It searches and returns a custom object with every key whose corresponding text contains the text passed as parameter. NOTE: since it uses "e => {}", it won't work on IE or Safari, so it's best to comment it out once the development phase has ended.
Once you have this JSP created, to use it you just have to import it to any JSP you want with this: ``` <%@include file="[relative_path]" %> ``` Hope it helps! :)
35,475,770
Every forum I have looked at says that: ``` pip install pillow ``` remedies issues with installing pyautogui, however, I have installed pillow and I am still receiving: ``` python setup.py egg_info failed with error code 1 ``` Any suggestions? I also tried installing PIL but that failed as well with the same error.
2016/02/18
[ "https://Stackoverflow.com/questions/35475770", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5365094/" ]
Some of the possible solutions. 1. Use an ajax method to get your resource by passing a key. 2. Use Hidden input fields and load values. 3. Use a dedicated jsp page to declare js variables or even a js function to get values according to key. like this. > > > ``` > <script type="text/javascript"> > > var messageOne = '<%=bundle.getString("line1") %>'; > var messageTwo = '<%=bundle.getString("line2") %>'; > > </script> > > ``` > >
Based on what I have tried, you can use the JSTL library to print the translated messages directly into JavaScript like: ``` alert("<fmt:message key='line1'/>"); ``` And if you are using struts2 for handling the locales you can easily define your bundles getting either the struts2 locale, saved by the i18nInterceptor present on the default stack, or the user request locale (the client's browser one): ``` <!-- //Import the required libraries --> <%@taglib prefix="fmt" uri="http://java.sun.com/jsp/jstl/fmt" %> <%@taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %> <!-- //Take the Locale from struts if it's present and from the user request if not --> <c:set var="locale" value="${not empty sessionScope.WW_TRANS_I18N_LOCALE ? sessionScope.WW_TRANS_I18N_LOCALE : pageContext.request.locale}"/> <!-- //Get the bundle based on the Locale --> <fmt:setLocale value="${locale}"/> <fmt:setBundle basename="content.test.Language"/> ``` But if you want to be able to extract that JavaScript code into an external .js file in the future, I recommend using one of the internationalization libraries available for JavaScript, like [Globalize](https://github.com/globalize/globalize) (it's the only one I have used, but there are plenty on the net). The downside of using an external JavaScript library for internationalization is that you will have to define the translation resources directly in .js files; it's impossible to access your .properties on the server from a client-based language like JavaScript.
35,475,770
Every forum I have looked at says that: ``` pip install pillow ``` remedies issues with installing pyautogui, however, I have installed pillow and I am still receiving: ``` python setup.py egg_info failed with error code 1 ``` Any suggestions? I also tried installing PIL but that failed as well with the same error.
2016/02/18
[ "https://Stackoverflow.com/questions/35475770", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5365094/" ]
Some of the possible solutions. 1. Use an ajax method to get your resource by passing a key. 2. Use Hidden input fields and load values. 3. Use a dedicated jsp page to declare js variables or even a js function to get values according to key. like this. > > > ``` > <script type="text/javascript"> > > var messageOne = '<%=bundle.getString("line1") %>'; > var messageTwo = '<%=bundle.getString("line2") %>'; > > </script> > > ``` > >
Here is a different solution. 1. Load bundle like OP did, with the method `getBundle()`. 2. Using the third option on Arun's answer, create a separate JSP file to create a custom JavaScript object. This is the content of said JSP: ``` <%@page import="com.tenea.intranet.conf.Conf" %> <%@page import="java.util.ResourceBundle, java.util.Enumeration" %> <script type="text/javascript"> var _get = function(ID){ if (this.hasOwnProperty(ID)) return this[ID]; else { console.warn("[Resources] Error getting key <"+ ID +">"); return "[ERROR]"; } }; var _search = function(text){ var elems = { }; Object.keys(this).map(e => { if (typeof (this[e]) !== "function" && this[e].includes(text)) { elems[e] = this[e]; } }); return elems; }; var Resources = { <% ResourceBundle labels = ResourceBundle.getBundle("content.test.Language", locale2); Enumeration<String> e = labels.getKeys(); while (e.hasMoreElements()) { String param = e.nextElement(); out.print(param +":\""+ labels.getString(param) +"\""); if (e.hasMoreElements()) out.println(","); } %> }; Resources._get = _get; Resources._search = _search; </script> ``` What this JSP does is: * Creates object Resources * Using some snippets (sorry Martin :p), and iterating on the list of keys from the resourceBundle, for each key I print a line like "key: value" with out.println(). The resulting object is something like this: ``` Resources { abrilAbrText: "Apr" abrilText: "April" ... } ``` For some extra functionality, I also added 2 functions inside Resources. \_get() returns the text related to the key passed as parameter. If said key doesn't exist, it returns the text '[ERROR]'. \_search() is a function I added for development purposes. It searches and returns a custom object with every key whose corresponding text contains the text passed as parameter. NOTE: since it uses "e => {}", it won't work on IE or Safari, so it's best to comment it out once the development phase has ended.
Once you have this JSP created, to use it you just have to import it to any JSP you want with this: ``` <%@include file="[relative_path]" %> ``` Hope it helps! :)
35,475,770
Every forum I have looked at says that: ``` pip install pillow ``` remedies issues with installing pyautogui, however, I have installed pillow and I am still receiving: ``` python setup.py egg_info failed with error code 1 ``` Any suggestions? I also tried installing PIL but that failed as well with the same error.
2016/02/18
[ "https://Stackoverflow.com/questions/35475770", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5365094/" ]
Based on what I have tried, you can use the JSTL library to print the translated messages directly into JavaScript like: ``` alert("<fmt:message key='line1'/>"); ``` And if you are using struts2 for handling the locales you can easily define your bundles getting either the struts2 locale, saved by the i18nInterceptor present on the default stack, or the user request locale (the client's browser one): ``` <!-- //Import the required libraries --> <%@taglib prefix="fmt" uri="http://java.sun.com/jsp/jstl/fmt" %> <%@taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %> <!-- //Take the Locale from struts if it's present and from the user request if not --> <c:set var="locale" value="${not empty sessionScope.WW_TRANS_I18N_LOCALE ? sessionScope.WW_TRANS_I18N_LOCALE : pageContext.request.locale}"/> <!-- //Get the bundle based on the Locale --> <fmt:setLocale value="${locale}"/> <fmt:setBundle basename="content.test.Language"/> ``` But if you want to be able to extract that JavaScript code into an external .js file in the future, I recommend using one of the internationalization libraries available for JavaScript, like [Globalize](https://github.com/globalize/globalize) (it's the only one I have used, but there are plenty on the net). The downside of using an external JavaScript library for internationalization is that you will have to define the translation resources directly in .js files; it's impossible to access your .properties on the server from a client-based language like JavaScript.
Here is a different solution. 1. Load bundle like OP did, with the method `getBundle()`. 2. Using the third option on Arun's answer, create a separate JSP file to create a custom JavaScript object. This is the content of said JSP: ``` <%@page import="com.tenea.intranet.conf.Conf" %> <%@page import="java.util.ResourceBundle, java.util.Enumeration" %> <script type="text/javascript"> var _get = function(ID){ if (this.hasOwnProperty(ID)) return this[ID]; else { console.warn("[Resources] Error getting key <"+ ID +">"); return "[ERROR]"; } }; var _search = function(text){ var elems = { }; Object.keys(this).map(e => { if (typeof (this[e]) !== "function" && this[e].includes(text)) { elems[e] = this[e]; } }); return elems; }; var Resources = { <% ResourceBundle labels = ResourceBundle.getBundle("content.test.Language", locale2); Enumeration<String> e = labels.getKeys(); while (e.hasMoreElements()) { String param = e.nextElement(); out.print(param +":\""+ labels.getString(param) +"\""); if (e.hasMoreElements()) out.println(","); } %> }; Resources._get = _get; Resources._search = _search; </script> ``` What this JSP does is: * Creates object Resources * Using some snippets (sorry Martin :p), and iterating on the list of keys from the resourceBundle, for each key I print a line like "key: value" with out.println(). The resulting object is something like this: ``` Resources { abrilAbrText: "Apr" abrilText: "April" ... } ``` For some extra functionality, I also added 2 functions inside Resources. \_get() returns the text related to the key passed as parameter. If said key doesn't exist, it returns the text '[ERROR]'. \_search() is a function I added for development purposes. It searches and returns a custom object with every key whose corresponding text contains the text passed as parameter. NOTE: since it uses "e => {}", it won't work on IE or Safari, so it's best to comment it out once the development phase has ended.
Once you have this JSP created, to use it you just have to import it to any JSP you want with this: ``` <%@include file="[relative_path]" %> ``` Hope it helps! :)
63,378,402
I am trying to stream Elasticsearch data into Snowflake. I am testing a Python script which ultimately will be deployed as a cloud function/Docker app on AWS. For historical data I am using the `scroll` API to write x amount of objects into a string and the string to a file. I've used Snowflake's `PUT file://file.json.gz @stage`, but that implies I need to write the file temporarily to disk before storing it on a stage. I have an insanely large amount of data to pull and am trying to eliminate as many steps as possible. Is there a cheeky way I can write files to the stage directly?
2020/08/12
[ "https://Stackoverflow.com/questions/63378402", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9972301/" ]
If you create a Snowflake stage linked to an S3 bucket, then whenever you save to that S3 bucket, with whatever tool you decide to use, the data will automatically be on your Snowflake stage. This way, you can just send a COPY INTO command and save a step or two. In my opinion, it's a simple and handy solution. If you need the steps, let me know and I'll be glad to post them here.
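A minimal Python sketch of this approach, under the assumption that an external stage already points at the bucket (the bucket, stage, file, and table names below are hypothetical): the scroll results are gzipped entirely in memory and loaded with a generated COPY INTO statement, so nothing is ever written to local disk.

```python
import gzip
import io
import json

def to_gzipped_bytes(docs):
    """Serialize a list of documents to newline-delimited JSON and gzip it in memory."""
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
        for doc in docs:
            gz.write((json.dumps(doc) + "\n").encode("utf-8"))
    return buf.getvalue()

def copy_into_sql(table, stage, filename):
    """Build the COPY INTO statement that loads one staged file into a table."""
    return (
        f"COPY INTO {table} FROM @{stage}/{filename} "
        "FILE_FORMAT = (TYPE = 'JSON')"
    )

# Upload without touching local disk (boto3 client assumed configured):
#   boto3.client("s3").put_object(Bucket="my-bucket", Key="es/batch1.json.gz",
#                                 Body=to_gzipped_bytes(docs))
# then execute copy_into_sql(...) through the Snowflake Python connector.
```

The boto3 upload and the connector call are left as comments since they need live credentials; only the in-memory serialization and the SQL construction are shown runnable.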
You can use Snowpipe. You need to create smaller files continuously and, using Snowpipe, keep uploading them. You can use Amazon Kinesis Firehose to manage the batches. Refer to the documentation at <https://docs.snowflake.com/en/user-guide/data-load-considerations-prepare.html#continuous-data-loads-i-e-snowpipe-and-file-sizing> and <https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html>
17,160,968
Can inputting and checking be done in the same line in Python? E.g., in C we have ``` if (scanf("%d",&a)) ``` The above if block executes if an integer input is given. But similarly, ``` if a=input(): ``` doesn't work in Python. Is there a way to do it?
2013/06/18
[ "https://Stackoverflow.com/questions/17160968", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2046858/" ]
No, Python can't do assignment as part of the condition of an `if` statement. The only way to do it is on two lines: ``` a = input() if a: # Your code here pass ``` This is by design, as it means that assignment is maintained as an atomic action, independent of comparison. This can help with the readability of the code, which in turn limits the potential introduction of bugs.
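One note worth adding for readers on current interpreters: since Python 3.8, the assignment expression ("walrus") operator does allow binding a name inside a condition, which is essentially the pattern the question asks for. A small sketch (a fixed list stands in for input() so it runs non-interactively; the function name is made up for illustration):

```python
# Python 3.8+: assignment expressions bind a name inside the condition itself,
# e.g. `if (a := input()):` assigns and tests in one line.
def first_long_word(words):
    for w in words:
        if (n := len(w)) > 4:       # assign n and test it in one expression
            return w, n
    return None, 0

word, length = first_long_word(["to", "brief", "a"])
print(word, length)  # brief 5
```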
You can't do it. This was a deliberate design choice for Python, because this construct makes it easy to introduce hard-to-find bugs. See @Jonathan's comment on the question for an example.
57,860,405
My understanding of Python asserts is that they are meant for debugging and that they don't get executed for "optimized" Python code (`python -O`). For production App Engine code, is `-O` used (thus stripping asserts), or will asserts get executed?
2019/09/09
[ "https://Stackoverflow.com/questions/57860405", "https://Stackoverflow.com", "https://Stackoverflow.com/users/136598/" ]
I ran a test on the platforms I use to know for sure. Asserts do get executed for: * GAE standard first generation * GAE flexible
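For anyone who wants to reproduce this kind of check locally, a small sketch that runs a failing assert through the interpreter with and without `-O` and compares the exit codes:

```python
import subprocess
import sys

def assert_fires(optimized):
    """Return True if `assert False` raises when run under the given mode."""
    cmd = [sys.executable]
    if optimized:
        cmd.append("-O")               # -O strips asserts (sets __debug__ to False)
    cmd += ["-c", "assert False"]
    return subprocess.run(cmd, capture_output=True).returncode != 0

print(assert_fires(optimized=False))  # True: the assert raises, non-zero exit
print(assert_fires(optimized=True))   # False: the assert was stripped
```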
As far as I understand Python asserts, once you run the interpreter with -O they all become "null operations": they still get compiled, but their conditional expressions won't be evaluated or executed. This happens at the Python interpreter level, so I don't think that GAE actually affects it.
25,888,828
I'm trying to make a script which takes all rows starting with 'HELIX', 'SHEET' and 'DBREF' from a .txt, takes some specific columns from those rows, and then saves the results in a new file. ``` #!/usr/bin/python import sys if len(sys.argv) != 3: print("2 Parameters expected: You must introduce your pdb file and a name for output file.") exit() for line in open(sys.argv[1]): if 'HELIX' in line: helix = line.split() cols_h = helix[0], helix[3:6:2], helix[6:9:2] elif 'SHEET'in line: sheet = line.split() cols_s = sheet[0], sheet[4:7:2], sheet[7:10:2], sheet [12:15:2], sheet[16:19:2] elif 'DBREF' in line: dbref = line.split() cols_id = dbref[0], dbref[3:5], dbref[8:10] modified_data = open(sys.argv[2],'w') modified_data.write(cols_id) modified_data.write(cols_h) modified_data.write(cols_s) ``` My problem is that when I try to write my final results it gives this error: ``` Traceback (most recent call last): File "funcional2.py", line 21, in <module> modified_data.write(cols_id) TypeError: expected a character buffer object ``` When I try to convert to a string using ''.join() it returns another error: ``` Traceback (most recent call last): File "funcional2.py", line 21, in <module> modified_data.write(' '.join(cols_id)) TypeError: sequence item 1: expected string, list found ``` What am I doing wrong? Also, if there is some easy way to simplify my code, it'll be great. PS: I'm no programmer so I'll probably need some explanation if you do something... Thank you very much.
2014/09/17
[ "https://Stackoverflow.com/questions/25888828", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4027271/" ]
cols\_id, cols\_h and cols\_s seem to be tuples that mix strings and lists, not plain strings. You can only write a string to your file, so you have to flatten them and convert them to strings. `'!'.join(a_list_of_things)` converts a list of strings into a single string, separating each element with an exclamation mark — but joining your tuples directly fails because they contain lists (the slices), which is the second error you saw. The pieces must be flattened into one list of strings first. EDIT: ``` #!/usr/bin/python import sys if len(sys.argv) != 3: print("2 Parameters expected: You must introduce your pdb file and a name for output file.") exit() cols_h, cols_s, cols_id = [], [], [] for line in open(sys.argv[1]): if 'HELIX' in line: helix = line.split() cols_h.append(' '.join([helix[0]] + helix[3:6:2] + helix[6:9:2])) elif 'SHEET' in line: sheet = line.split() cols_s.append(' '.join([sheet[0]] + sheet[4:7:2] + sheet[7:10:2] + sheet[12:15:2] + sheet[16:19:2])) elif 'DBREF' in line: dbref = line.split() cols_id.append(' '.join([dbref[0]] + dbref[3:5] + dbref[8:10])) modified_data = open(sys.argv[2], 'w') for col in cols_id + cols_h + cols_s: modified_data.write(col + '\n') ```
You need to convert the tuple created on the RHS of your assignment into a string. Note that `dbref[3:5]` and `dbref[8:10]` are lists, so they have to be concatenated with the single string `dbref[0]` into one flat list before joining: ``` # Replace this cols_id = dbref[0], dbref[3:5], dbref[8:10] # with a statement that creates a string out of the flattened pieces cols_id = ' '.join([dbref[0]] + dbref[3:5] + dbref[8:10]) ```
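To see the fix in isolation: `file.write()` needs a `str`, and the columns here mix one string with list slices, so everything has to be flattened into a single list before joining. A sketch (the sample DBREF line and the helper name are made up for illustration):

```python
def flatten_columns(first, *slices):
    """Combine one string field with any number of list slices into one flat list."""
    parts = [first]
    for s in slices:
        parts.extend(s)
    return parts

# A made-up DBREF-style record, split into whitespace-separated fields.
fields = "DBREF 1ABC A 1 120 UNP P12345 MYPROT_HUMAN 1 120".split()
# fields[0] is a string; fields[3:5] and fields[8:10] are lists.
line = " ".join(flatten_columns(fields[0], fields[3:5], fields[8:10]))
print(line)  # DBREF 1 120 1 120
```

Joining the flat list with `" ".join(...)` then yields a single writable string per record.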