| column | type | min | max |
| --- | --- | --- | --- |
| qid | int64 | 46k | 74.7M |
| question | stringlengths | 54 | 37.8k |
| date | stringlengths | 10 | 10 |
| metadata | listlengths | 3 | 3 |
| response_j | stringlengths | 29 | 22k |
| response_k | stringlengths | 26 | 13.4k |
| __index_level_0__ | int64 | 0 | 17.8k |
5,939,850
I'm writing some shell scripts to check if I can manipulate information on a web server. I use a bash shell script as a wrapper script to kick off multiple python scripts in sequential order. The wrapper script takes the web server's host name, user name, and password, and after performing some tasks hands them to the python scripts. I have no problem getting the information passed to the python scripts:

```
host% ./bash_script.sh host user passwd
```

Inside the wrapper script I call the python script, which prints the arguments it received:

```
/usr/dir/python_script.py $1 $2 $3

print "Parameters: "
print "Host: ", sys.argv[1]
print "User: ", sys.argv[2]
print "Passwd: ", sys.argv[3]
```

The values are printed correctly. Now, I try to pass the values to open a connection to the web server (I successfully opened a connection to the web server using the literal host name):

```
f_handler = urlopen("https://sys.argv[1]/path/menu.php")
```

Trying the variable substitution format throws the following error:

```
HTTP Response Arguments: (gaierror(8, 'nodename nor servname provided, or not known'),)
```

I tried different variations,

```
'single_quotes'
"double_quotes"
http://%s/path/file.php % value
```

for the variable substitution, but I always get the same error. I assume that the urlopen function does not convert the substituted variable correctly. Does anybody know a fix for this? Do I need to convert the host name variable first?

Roland
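For what it's worth, the failing line passes the literal string `"https://sys.argv[1]/path/menu.php"` to `urlopen`; Python does not substitute variables inside string literals, so the formatting has to be explicit. A minimal sketch, assuming Python 2's `urllib2` to match the `print` statements above:

```python
import sys
from urllib2 import urlopen  # Python 2; on Python 3 use urllib.request instead

host = sys.argv[1]
# build the URL explicitly: %s is replaced by the host value
url = "https://%s/path/menu.php" % host
f_handler = urlopen(url)
```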
2011/05/09
[ "https://Stackoverflow.com/questions/5939850", "https://Stackoverflow.com", "https://Stackoverflow.com/users/745512/" ]
Please try the following code. ``` <?php $thumbnail_id = get_post_thumbnail_id(get_the_ID()); if (!empty($thumbnail_id)) { $thumbnail = wp_get_attachment_image_src($thumbnail_id, 'full'); if (count ($thumbnail) >= 3) { $thumbnail_url = $thumbnail[0]; $thumbnail_width = $thumbnail[1]; $thumbnail_height = $thumbnail[2]; $thumbnail_w = 620; $thumbnail_h = floor($thumbnail_height * $thumbnail_w / $thumbnail_width); } } if (!empty ($thumbnail_url)): ?> <img class="thumbnail" src="<?php echo $thumbnail_url; ?>" alt="<?php the_title_attribute(); ?>" width="<?php echo $thumbnail_w; ?>" height="<?php echo $thumbnail_h; ?>" /> <?php endif; ?> ``` <http://www.boxoft.net/2011/10/display-the-wordpress-featured-image-without-stretching-it/>
What about putting a `<div>` around it and then using CSS to set the width and height? Something like this:

**CSS**

```
.deals img {
    width: 620px;
    max-height: 295px;
}
```

**HTML / PHP**

```
<div class="deals"><?php the_post_thumbnail( array(620,295) ); ?></div>
```
13,794
49,809,680
I'm extremely new to python; I've just got it set up in Visual Studio 2017CE version 15.6.6 on Windows 7 SP1 x64 Home Premium. I went through a couple of walkthrough tutorials and can verify that, at the least, Python is installed and working. I'm trying to follow instructions from the MPIR documentation on building the libraries needed for (c/c++) to run in Visual Studio. I have the tools needed: Python, VYASM, MPIR, MPFR and MPFRC++. I have all the newest versions of the libraries from the websites directly (no third party). These are default distributions.

While reading through the documentation for MPIR, it mentions that I should run the Python script (mpir\_config.py) from the build.vcN directory, where N is the version of Visual Studio that will be building the (static/dll, debug/release) versions of the libraries. It states that I should run the Python script first, and also that, if available, I should choose a custom build for my specific platform/architecture according to my CPU. Here is the list generated from running the Python script (module) in the Python shell without any arguments.

```
1. gc
2. p3 (win32)
3. p3_p3mmx (win32)
4. p4 (win32)
5. p4_mmx (win32)
6. p4_sse2 (win32)
7. p6 (win32)
8. p6_mmx (win32)
9. p6_p3mmx (win32)
10. pentium4 (win32)
11. pentium4_mmx (win32)
12. pentium4_sse2 (win32)
13. atom (x64)
14. bobcat (x64)
15. bulldozer (x64)
16. bulldozer_piledriver (x64)
17. core2 (x64)
18. core2_penryn (x64)
19. haswell (x64)
20. haswell_avx (x64)
21. k8 (x64)
22. k8_k10 (x64)
23. k8_k10_k102 (x64)
24. nehalem (x64)
25. nehalem_westmere (x64)
26. netburst (x64)
27. sandybridge (x64)
28. sandybridge_ivybridge (x64)
29. skylake (x64)
30. skylake_avx (x64)

Space separated list of builds (1..30, 0 to exit)?
```

My system is an Intel DP45SG motherboard with the P45 chipset, running a QuadCore Intel Core 2 Quad Q9650, 3.0Ghz (9x333). The alias or code names are Intel Skyburg for the motherboard, Intel Eaglelake for the chipset, and Yorkfield for the processor. I don't know what selection I should choose, if any...

That's the first half of the problem. The other half is, if I am to choose one that is appropriate, how do I run the mpir\_config.py file to set this? Does it accept an argument as you call it? Or do you run it in the shell and then give it a value? Or would the actual code within the script have to be changed? I'm a Python noobie... you can call me (worm); I haven't reached the status of a snake yet. Being new to Python, I have no idea what to do next.

Now, as for setting up projects in Visual Studio to actually build out the (c/c++) libraries from their solutions, setting the configurations and even setting environment variables is not a problem for me. Any and all help would be appreciated.

*All of this hassle because boost's multiprecision library uses GMP, which does not really support windows...*
2018/04/13
[ "https://Stackoverflow.com/questions/49809680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1757805/" ]
As the Intel Core 2 Quad Q9650 is a Yorkfield core from the Penryn family, option 18, `core2_penryn (x64)`, should be fine.
And for the second part of your question: mpir\_config.py will generate 2 projects in the mpir-3.0.0\build.vc15 solution directory, one for the dynamic lib and one for the static lib. Just open mpir.sln and build the desired one.
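For reference, a hypothetical console session (the build.vc15 location is assumed from the answer above, and 18 is the selection suggested for this CPU; the script prompts interactively rather than taking an argument):

```
cd mpir-3.0.0\build.vc15
python mpir_config.py
...the menu shown in the question prints here...
Space separated list of builds (1..30, 0 to exit)? 18
```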
13,795
53,311,249
I am reading a utf8 file with normal python text encoding. I also need to get rid of all the quotes in the file. However, the utf8 code has multiple types of quotes and I can't figure out how to get rid of all of them. The code below serves as an example of what I've been trying to do. ``` def change_things(string, remove): for thing in remove: string = string.replace(thing, remove[thing]) return string ``` where ``` remove = { '\'': '', '\"': '', } ``` Unfortunately, this code only removes normal quotes, not left or right facing quotes. Is there any way to remove all such quotes using a similar format to what I have done (I recognize that there are other, more efficient ways of removing items from strings but given the overall context of the code this makes more sense for my specific project)?
2018/11/15
[ "https://Stackoverflow.com/questions/53311249", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10655038/" ]
You can just type those sorts of quote characters into your file, and replace them the same as any other character.

```
utf8_quotes = "“”‘’‹›«»"
mystr = 'Text with “quotes”'
mystr = mystr.replace('“', '"').replace('”', '"')  # str.replace returns a new string
```

There are a few different single quote variants too.
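Fitting that into the question's `remove`-dict format — a minimal sketch; the set of quote characters covered here is an assumption, so extend it as needed:

```python
remove = {
    '\'': '',
    '\"': '',
    '\u2018': '',  # ' left single quotation mark
    '\u2019': '',  # ' right single quotation mark
    '\u201c': '',  # " left double quotation mark
    '\u201d': '',  # " right double quotation mark
}

def change_things(string, remove):
    for thing in remove:
        string = string.replace(thing, remove[thing])
    return string

print(change_things('He said \u201chello\u201d', remove))  # -> He said hello
```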
There's a list of unicode quote marks at <https://gist.github.com/goodmami/98b0a6e2237ced0025dd>. That should allow you to remove any type of quotes.
13,796
72,148,172
Is there a website or git repository where there is a clear listing of all the python packages and other dependencies installed on Google CoLaboratory? Is there a docker image that corresponds to a Google CoLab environment?
2022/05/06
[ "https://Stackoverflow.com/questions/72148172", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3659451/" ]
I see your question is about viewing the packages already installed in Google Colab. You can run this command to check for a specific package:

```
!pip list -v | grep [Kk]eras
```

You can also view all the packages with this command:

```
!pip list -v
```

For more info, visit [this link](https://colab.research.google.com/github/aviadr1/learn-advanced-python/blob/master/content/10_package_management/10-package_management.ipynb#scrollTo=HecBSOomHiF1).
You can find out about all the packages installed in Colab using

```
!pip list -v
```

This shows you each package's name, version, location, and installer.
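If the goal is a listing you can diff against another environment, one option — a sketch; the filename is arbitrary — is to snapshot the `pip freeze` output to a file:

```
!pip freeze > colab_packages.txt
```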
13,798
34,461,003
I'm new to programming and am learning Python with the book Learning Python the Hard Way. I'm at exercise 36, where we're asked to write our own simple game. <http://learnpythonthehardway.org/book/ex36.html>

The problem is, when I'm in the hallway (or more precisely, in 'choices') and I write 'gate', the game responds as if I said "man" or "oldman" etc. What am I doing wrong?

EDIT: I shouldn't have written 'if "oldman" or "man" in choice' but instead put 'in choice' after every option. However, next problem... The man never stands up and curses me; it keeps frowning. Why does the game not proceed to that elif?

```
experiences = []

def choices():
    while True:
        choice = raw_input("What do you wish to do? ").lower()
        if "oldman" in choice or "man" in choice or "ask" in choice:
            print oldman
        elif "gate" in choice or "fight" in choice or "try" in choice and not "pissedoff" in experiences:
            print "The old man frowns. \"I said, you shall not pass through this gate until you prove yourself worthy!\""
            experiences.append("pissedoff")
        elif "gate" in choice or "fight" in choice or "try" in choice and "pissedoff" in experiences and not "reallypissed" in experiences:
            print "The old man is fed up. \"I told you not to pass this gate until you are worthy! Try me again and you will die.\""
            experiences.append("reallypissed")
        elif "gate" in choice or "fight" in choice or "try" in choice and "reallypissed" in experiences:
            print "The old man stands up and curses you. You die"
            exit()
        elif "small" in choice:
            print "You go into the small room"
            firstroom()
        else:
            print "Come again?"
```

**EDIT: FIXED IT!!**

```
def choices():
    while True:
        choice = raw_input("What do you wish to do? ").lower()
        if choice in ['oldman', 'man', 'ask']:
            print oldman
        elif choice in ['gate', 'fight', 'try'] and not 'pissedoff' in experiences:
            print "The old man frowns. \"I said, you shall not pass through this gate until you prove yourself worthy!\""
            experiences.append("pissedoff")
        elif choice in ['gate', 'fight', 'try'] and not 'reallypissed' in experiences:
            print "The old man is fed up. \"I told you not to pass this gate until you are worthy! Try me again and you will die.\""
            experiences.append("reallypissed")
        elif choice in ['gate', 'fight', 'try'] and 'reallypissed' in experiences:
            print "The old man stands up and curses you. You die"
            exit()
        elif "small" in choice:
            print "You go into the small room"
            firstroom()
        else:
            print "Come again?"
```

Thanks for all your help :).
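As for the second problem (the man keeps frowning): in the pre-fix code the culprit is operator precedence, since `and` binds tighter than `or`. A small sketch demonstrating why the first gate branch always matched:

```python
choice = "gate"
experiences = ["pissedoff"]

# `and` binds tighter than `or`, so the condition parses as:
#   ("gate" in choice) or ("fight" in choice)
#       or (("try" in choice) and not "pissedoff" in experiences)
# With choice == "gate", the first disjunct is already True, so the
# experiences guard is never consulted and the first elif always wins.
cond = "gate" in choice or "fight" in choice or "try" in choice and not "pissedoff" in experiences
print(cond)  # True
```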
2015/12/25
[ "https://Stackoverflow.com/questions/34461003", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5713452/" ]
> > The problem is, when I'm in the hallway (or more precisely, in > 'choices') and I write 'gate' the game responds as if I said "man" of > "oldman" etc. > > > Explanation ----------- What you were doing was being seen by Python as: ``` if ("oldman") or ("man") or ("ask" in choice): ``` Which evaluates to True if **any** of those 3 conditions is of a ["truthy" value](https://docs.python.org/3/library/stdtypes.html#truth-value-testing). A non-empty string like "oldman" and "man" evaluates to True. So that explains why your code behaved the way it did. Test if a choice is in a list of choices: ----------------------------------------- You're looking for ``` if choice in ['oldman', 'man', 'ask']: ``` Or, at least: ``` if 'oldman' in choice or 'man' in choice or 'ask' in choice ```
Look carefully at the evaluation of your conditions:

```
if "oldman" or "man" or "ask" in choice:
```

This will first evaluate whether `"oldman"` is truthy, which happens to be the case, so the `if` condition evaluates to `True`. I think what you intended is:

```
if "oldman" in choice or "man" in choice or "ask" in choice:
```

Alternatively you could use:

```
if choice in ["oldman" , "man", "ask" ]:
```

The one caveat is that this method looks for an exact match, not a substring match.
13,800
49,978,967
I have a nested list like this : ``` [['Asan', '20180418', 'A', '17935.00'], ['Asan', '20180417', 'B', '17948.00'], ['Asan', '20180416', 'C', '17979.00'], ['Asan', '20180416', 'D', '17816.00'], . . . . ['Asan', '20180415', 'N', '18027.00']] ``` I need to do some calculations for the 3rd element in every list with the one before it, an example : ``` if (A/B) > 0.05: .... if (B/C) > 0.05: .... if (C/D) > 0.05: .... . . if (N-1/N) == 1: .... ``` and if the condition is not met, I pop that list. How can I approach this and what is the most pythonic and fastest way to do it. EDIT : let me clarify : A and B and so on are FLOATS, I just name them A, B,... for better understanding. the if statements are supposed to find if the A is how much greater than the previous float B.
2018/04/23
[ "https://Stackoverflow.com/questions/49978967", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8077586/" ]
[`zip`](https://docs.python.org/3.4/library/functions.html#zip) makes things easy here for pair-wise comparison: ``` l = [['Asan', '20180418', 'A', '17935.00'], ['Asan', '20180417', 'B', '17948.00'], ['Asan', '20180416', 'C', '17979.00'], ['Asan', '20180416', 'C', '200.00'], ['Asan', '20180416', 'D', '17816.00']] new_list = [] for x, y in zip(l, l[1:]): if x[2] == y[2]: new_list.extend([x, y]) print(new_list) # [['Asan', '20180416', 'C', '17979.00'], # ['Asan', '20180416', 'C', '200.00']] ``` **EDIT**: If `'A'`, `'B'`, `'C'`.. are `float`s, then to check for condition you required, you could replace the condition in the above example. And do whatever if that condition is satisfied.
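Per the edit in the question, the letters stand for floats. A sketch of the ratio version — assuming the float lives in the last field, and that a row is dropped when the pairwise test fails (adjust the comparison to your actual rule):

```python
data = [['Asan', '20180418', 'A', '17935.00'],
        ['Asan', '20180417', 'B', '17948.00'],
        ['Asan', '20180416', 'C', '17979.00']]

# keep the first row (it has no predecessor), then keep each row only
# if the previous value divided by the current one passes the test
filtered = data[:1]
for prev, cur in zip(data, data[1:]):
    if float(prev[3]) / float(cur[3]) > 0.05:
        filtered.append(cur)

print(filtered)
```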
Do you mean something like this? ``` l = [['Asan', '20180418', 'A', '17935.00'], ['Asan', '20180417', 'B', '17948.00'], ['Asan', '20180416', 'C', '17979.00'], ['Asan', '20180416', 'C', '200.00'], ['Asan', '20180416', 'D', '17816.00']] last_seen = l[0] new_list = [] for r in l[1:]: if r[2] == last_seen[2]: new_list.append(last_seen) new_list.append(r) last_seen = r print(new_list) ``` Which gives: ``` [['Asan', '20180416', 'C', '17979.00'], ['Asan', '20180416', 'C', '200.00']] ```
13,801
9,874,042
I'm trying to complete 100 model runs on my 8-processor 64-bit Windows 7 machine. I'd like to run 7 instances of the model concurrently to decrease my total run time (approx. 9.5 min per model run). I've looked at several threads pertaining to the Multiprocessing module of Python, but am still missing something. [Using the multiprocessing module](https://stackoverflow.com/questions/7780478/using-the-multiprocessing-module) [How to spawn parallel child processes on a multi-processor system?](https://stackoverflow.com/questions/884650/python-spawn-parallel-child-processes-on-a-multi-processor-system-use-multipr) [Python Multiprocessing queue](https://stackoverflow.com/questions/5506227/python-multiprocessing-queue) **My Process:** I have 100 different parameter sets I'd like to run through SEAWAT/MODFLOW to compare the results. I have pre-built the model input files for each model run and stored them in their own directories. What I'd like to be able to do is have 7 models running at a time until all realizations have been completed. There needn't be communication between processes or display of results. So far I have only been able to spawn the models sequentially: ``` import os,subprocess import multiprocessing as mp ws = r'D:\Data\Users\jbellino\Project\stJohnsDeepening\model\xsec_a' files = [] for f in os.listdir(ws + r'\fieldgen\reals'): if f.endswith('.npy'): files.append(f) ## def work(cmd): ## return subprocess.call(cmd, shell=False) def run(f,def_param=ws): real = f.split('_')[2].split('.')[0] print 'Realization %s' % real mf2k = r'c:\modflow\mf2k.1_19\bin\mf2k.exe ' mf2k5 = r'c:\modflow\MF2005_1_8\bin\mf2005.exe ' seawatV4 = r'c:\modflow\swt_v4_00_04\exe\swt_v4.exe ' seawatV4x64 = r'c:\modflow\swt_v4_00_04\exe\swt_v4x64.exe ' exe = seawatV4x64 swt_nam = ws + r'\reals\real%s\ss\ss.nam_swt' % real os.system( exe + swt_nam ) if __name__ == '__main__': p = mp.Pool(processes=mp.cpu_count()-1) #-leave 1 processor available for system and other processes tasks = range(len(files)) results = [] for f in files: r = p.map_async(run(f), tasks, callback=results.append) ``` I changed the `if __name__ == 'main':` to the following in hopes it would fix the lack of parallelism I feel is being imparted on the above script by the `for loop`. However, the model fails to even run (no Python error): ``` if __name__ == '__main__': p = mp.Pool(processes=mp.cpu_count()-1) #-leave 1 processor available for system and other processes p.map_async(run,((files[f],) for f in range(len(files)))) ``` Any and all help is greatly appreciated! **EDIT 3/26/2012 13:31 EST** Using the "Manual Pool" method in @J.F. Sebastian's answer below I get parallel execution of my external .exe. 
Model realizations are called up in batches of 8 at a time, but it doesn't wait for those 8 runs to complete before calling up the next batch and so on: ``` from __future__ import print_function import os,subprocess,sys import multiprocessing as mp from Queue import Queue from threading import Thread def run(f,ws): real = f.split('_')[-1].split('.')[0] print('Realization %s' % real) seawatV4x64 = r'c:\modflow\swt_v4_00_04\exe\swt_v4x64.exe ' swt_nam = ws + r'\reals\real%s\ss\ss.nam_swt' % real subprocess.check_call([seawatV4x64, swt_nam]) def worker(queue): """Process files from the queue.""" for args in iter(queue.get, None): try: run(*args) except Exception as e: # catch exceptions to avoid exiting the # thread prematurely print('%r failed: %s' % (args, e,), file=sys.stderr) def main(): # populate files ws = r'D:\Data\Users\jbellino\Project\stJohnsDeepening\model\xsec_a' wdir = os.path.join(ws, r'fieldgen\reals') q = Queue() for f in os.listdir(wdir): if f.endswith('.npy'): q.put_nowait((os.path.join(wdir, f), ws)) # start threads threads = [Thread(target=worker, args=(q,)) for _ in range(8)] for t in threads: t.daemon = True # threads die if the program dies t.start() for _ in threads: q.put_nowait(None) # signal no more files for t in threads: t.join() # wait for completion if __name__ == '__main__': mp.freeze_support() # optional if the program is not frozen main() ``` No error traceback is available. The `run()` function performs its duty when called upon a single model realization file as with mutiple files. The only difference is that with multiple files, it is called `len(files)` times though each of the instances immediately closes and only one model run is allowed to finish at which time the script exits gracefully (exit code 0). Adding some print statements to `main()` reveals some information about active thread-counts as well as thread status (note that this is a test on only 8 of the realization files to make the screenshot more manageable, theoretically all 8 files should be run concurrently, however the behavior continues where they are spawn and immediately die except one): ``` def main(): # populate files ws = r'D:\Data\Users\jbellino\Project\stJohnsDeepening\model\xsec_a' wdir = os.path.join(ws, r'fieldgen\test') q = Queue() for f in os.listdir(wdir): if f.endswith('.npy'): q.put_nowait((os.path.join(wdir, f), ws)) # start threads threads = [Thread(target=worker, args=(q,)) for _ in range(mp.cpu_count())] for t in threads: t.daemon = True # threads die if the program dies t.start() print('Active Count a',threading.activeCount()) for _ in threads: print(_) q.put_nowait(None) # signal no more files for t in threads: print(t) t.join() # wait for completion print('Active Count b',threading.activeCount()) ``` ![screenshot](https://i.stack.imgur.com/6yffC.png) \*\*The line which reads "`D:\\Data\\Users...`" is the error information thrown when I manually stop the model from running to completion. Once I stop the model running, the remaining thread status lines get reported and the script exits. **EDIT 3/26/2012 16:24 EST** SEAWAT does allow concurrent execution as I've done this in the past, spawning instances manually using iPython and launching from each model file folder. This time around, I'm launching all model runs from a single location, namely the directory where my script resides. It looks like the culprit may be in the way SEAWAT is saving some of the output. When SEAWAT is run, it immediately creates files pertaining to the model run. 
One of these files is not being saved to the directory in which the model realization is located, but in the top directory where the script is located. This is preventing any subsequent threads from saving the same file name in the same location (which they all want to do since these filenames are generic and non-specific to each realization). The SEAWAT windows were not staying open long enough for me to read or even see that there was an error message, I only realized this when I went back and tried to run the code using iPython which directly displays the printout from SEAWAT instead of opening a new window to run the program. I am accepting @J.F. Sebastian's answer as it is likely that once I resolve this model-executable issue, the threading code he has provided will get me where I need to be. **FINAL CODE** Added cwd argument in subprocess.check\_call to start each instance of SEAWAT in its own directory. Very key. ``` from __future__ import print_function import os,subprocess,sys import multiprocessing as mp from Queue import Queue from threading import Thread import threading def run(f,ws): real = f.split('_')[-1].split('.')[0] print('Realization %s' % real) seawatV4x64 = r'c:\modflow\swt_v4_00_04\exe\swt_v4x64.exe ' cwd = ws + r'\reals\real%s\ss' % real swt_nam = ws + r'\reals\real%s\ss\ss.nam_swt' % real subprocess.check_call([seawatV4x64, swt_nam],cwd=cwd) def worker(queue): """Process files from the queue.""" for args in iter(queue.get, None): try: run(*args) except Exception as e: # catch exceptions to avoid exiting the # thread prematurely print('%r failed: %s' % (args, e,), file=sys.stderr) def main(): # populate files ws = r'D:\Data\Users\jbellino\Project\stJohnsDeepening\model\xsec_a' wdir = os.path.join(ws, r'fieldgen\reals') q = Queue() for f in os.listdir(wdir): if f.endswith('.npy'): q.put_nowait((os.path.join(wdir, f), ws)) # start threads threads = [Thread(target=worker, args=(q,)) for _ in range(mp.cpu_count()-1)] for t in threads: t.daemon = True # threads die if the program dies t.start() for _ in threads: q.put_nowait(None) # signal no more files for t in threads: t.join() # wait for completion if __name__ == '__main__': mp.freeze_support() # optional if the program is not frozen main() ```
2012/03/26
[ "https://Stackoverflow.com/questions/9874042", "https://Stackoverflow.com", "https://Stackoverflow.com/users/698809/" ]
I don't see any computations in the Python code. If you just need to execute several external programs in parallel it is sufficient to use `subprocess` to run the programs and `threading` module to maintain constant number of processes running, but the simplest code is using `multiprocessing.Pool`: ``` #!/usr/bin/env python import os import multiprocessing as mp def run(filename_def_param): filename, def_param = filename_def_param # unpack arguments ... # call external program on `filename` def safe_run(*args, **kwargs): """Call run(), catch exceptions.""" try: run(*args, **kwargs) except Exception as e: print("error: %s run(*%r, **%r)" % (e, args, kwargs)) def main(): # populate files ws = r'D:\Data\Users\jbellino\Project\stJohnsDeepening\model\xsec_a' workdir = os.path.join(ws, r'fieldgen\reals') files = ((os.path.join(workdir, f), ws) for f in os.listdir(workdir) if f.endswith('.npy')) # start processes pool = mp.Pool() # use all available CPUs pool.map(safe_run, files) if __name__=="__main__": mp.freeze_support() # optional if the program is not frozen main() ``` If there are many files then `pool.map()` could be replaced by `for _ in pool.imap_unordered(safe_run, files): pass`. There is also `mutiprocessing.dummy.Pool` that provides the same interface as `multiprocessing.Pool` but uses threads instead of processes that might be more appropriate in this case. You don't need to keep some CPUs free. Just use a command that starts your executables with a low priority (on Linux it is a `nice` program). ### [`ThreadPoolExecutor` example](http://docs.python.org/dev/library/concurrent.futures.html#threadpoolexecutor-example) [`concurrent.futures.ThreadPoolExecutor`](http://docs.python.org/dev/library/concurrent.futures.html) would be both simple and sufficient but it requires [3rd-party dependency on Python 2.x](http://pypi.python.org/pypi/futures/) (it is in the stdlib since Python 3.2). ``` #!/usr/bin/env python import os import concurrent.futures def run(filename, def_param): ... # call external program on `filename` # populate files ws = r'D:\Data\Users\jbellino\Project\stJohnsDeepening\model\xsec_a' wdir = os.path.join(ws, r'fieldgen\reals') files = (os.path.join(wdir, f) for f in os.listdir(wdir) if f.endswith('.npy')) # start threads with concurrent.futures.ThreadPoolExecutor(max_workers=8) as executor: future_to_file = dict((executor.submit(run, f, ws), f) for f in files) for future in concurrent.futures.as_completed(future_to_file): f = future_to_file[future] if future.exception() is not None: print('%r generated an exception: %s' % (f, future.exception())) # run() doesn't return anything so `future.result()` is always `None` ``` Or if we ignore exceptions raised by `run()`: ``` from itertools import repeat ... # the same # start threads with concurrent.futures.ThreadPoolExecutor(max_workers=8) as executor: executor.map(run, files, repeat(ws)) # run() doesn't return anything so `map()` results can be ignored ``` ### `subprocess` + `threading` (manual pool) solution ``` #!/usr/bin/env python from __future__ import print_function import os import subprocess import sys from Queue import Queue from threading import Thread def run(filename, def_param): ... 
# define exe, swt_nam subprocess.check_call([exe, swt_nam]) # run external program def worker(queue): """Process files from the queue.""" for args in iter(queue.get, None): try: run(*args) except Exception as e: # catch exceptions to avoid exiting the # thread prematurely print('%r failed: %s' % (args, e,), file=sys.stderr) # start threads q = Queue() threads = [Thread(target=worker, args=(q,)) for _ in range(8)] for t in threads: t.daemon = True # threads die if the program dies t.start() # populate files ws = r'D:\Data\Users\jbellino\Project\stJohnsDeepening\model\xsec_a' wdir = os.path.join(ws, r'fieldgen\reals') for f in os.listdir(wdir): if f.endswith('.npy'): q.put_nowait((os.path.join(wdir, f), ws)) for _ in threads: q.put_nowait(None) # signal no more files for t in threads: t.join() # wait for completion ```
Here is my way to maintain the minimum x number of threads in the memory. Its an combination of threading and multiprocessing modules. It may be unusual to other techniques like respected fellow members have explained above BUT may be worth considerable. For the sake of explanation, I am taking a scenario of crawling a minimum of 5 websites at a time. so here it is:- ``` #importing dependencies. from multiprocessing import Process from threading import Thread import threading # Crawler function def crawler(domain): # define crawler technique here. output.write(scrapeddata + "\n") pass ``` Next is threadController function. This function will control the flow of threads to the main memory. It will keep activating the threads to maintain the threadNum "minimum" limit ie. 5. Also it won't exit until, all Active threads(acitveCount) are finished up. It will maintain a minimum of threadNum(5) startProcess function threads (these threads will eventually start the Processes from the processList while joining them with a time out of 60 seconds). After staring threadController, there would be 2 threads which are not included in the above limit of 5 ie. the Main thread and the threadController thread itself. thats why threading.activeCount() != 2 has been used. ``` def threadController(): print "Thread count before child thread starts is:-", threading.activeCount(), len(processList) # staring first thread. This will make the activeCount=3 Thread(target = startProcess).start() # loop while thread List is not empty OR active threads have not finished up. while len(processList) != 0 or threading.activeCount() != 2: if (threading.activeCount() < (threadNum + 2) and # if count of active threads are less than the Minimum AND len(processList) != 0): # processList is not empty Thread(target = startProcess).start() # This line would start startThreads function as a seperate thread ** ``` startProcess function, as a separate thread, would start Processes from the processlist. The purpose of this function (\*\*started as a different thread) is that It would become a parent thread for Processes. So when It will join them with a timeout of 60 seconds, this would stop the startProcess thread to move ahead but this won't stop threadController to perform. So this way, threadController will work as required. ``` def startProcess(): pr = processList.pop(0) pr.start() pr.join(60.00) # joining the thread with time out of 60 seconds as a float. if __name__ == '__main__': # a file holding a list of domains domains = open("Domains.txt", "r").read().split("\n") output = open("test.txt", "a") processList = [] # thread list threadNum = 5 # number of thread initiated processes to be run at one time # making process List for r in range(0, len(domains), 1): domain = domains[r].strip() p = Process(target = crawler, args = (domain,)) processList.append(p) # making a list of performer threads. # starting the threadController as a seperate thread. mt = Thread(target = threadController) mt.start() mt.join() # won't let go next until threadController thread finishes. output.close() print "Done" ``` Besides maintaining a minimum number of threads in the memory, my aim was to also have something which could avoid stuck threads or processes in the memory. I did this using the time out function. My apologies for any typing mistake. I hope this construction would help anyone in this world. Regards, Vikas Gautam
13,806
34,889,737
I am a new programmer who just learned python, so I made a small program just for fun:

```
year = int(input("Year Of Birth: "), 10)

if year < 2016:
    print("You passed! You may continue!")
else:
    break
```

I searched this up and tried using break, but I got an error saying that break can only be used inside a loop. Can anyone give a simple solution? Thanks.
2016/01/20
[ "https://Stackoverflow.com/questions/34889737", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4584838/" ]
`Break` is used in a `loop`. For example: ``` i = 3 while True: i = i + 1 #increments i by 1 print(i) if i == 5: break print("Done!") ``` Normally, `while True` will execute indefinitely because `True` is always equal to `True`. (This is equivalent to something like `1==1`). The `break` statement tells Python to stop executing the loop right then and there. Then, it will continue executing code where the while loop left off. *The output of this program will be:* ``` 4 5 Done! ``` First, `i=3`. Then, in the `while` loop, `i` becomes `4`, and `4` is outputted to the console. Since `i` is not equal to `5`, it does not `break`. Now, `i=4`. `i` increments to `5` and is printed to the console. This time, `i==5`, so it `break`s. Then, it prints `Done!` because that line is after the `while` loop we just broke out of.
`break` is used inside a loop, such as `while` or `for`, to get out of that loop; you are trying to use it in flow-control statements, i.e. `if`, `elif` and `else`, outside of any loop.
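Applied to the snippet in the question — a minimal sketch, assuming the intent is to keep asking until the check passes:

```python
while True:
    year = int(input("Year Of Birth: "), 10)
    if year < 2016:
        print("You passed! You may continue!")
        break  # legal here: we are inside the while loop
    print("Try again.")
```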
13,807
68,940,884
I have written some code in JavaScript that I don't know how to write in Python. What it does is rather simple. There are two values: Buy and Sell. E.g. given Buy, Buy, Sell, Buy, Sell, Sell, the JavaScript gives me a new list with Buy, Sell, Buy, Sell.

```
const fs = require('fs');

class Trade {
    price;
    side;
}

const array = console.log('AllTrades:');
console.log(array);

let trades = [];
for(let i = array.length - 1; i >= 0; i--) {
    let trade = array[i];
    let nextTrade = array[i - 1];
    if(nextTrade) {
        if(trade[0] == "BUY" && nextTrade[0] == "SELL") {
            trades[i] = trade;
        }
        if(trade[0] == "SELL" && nextTrade[0] == "BUY") {
            trades[i] = trade;
        }
    } else if(trade[0] == "BUY") {
        trades[i] = trade;
    }
}

trades = trades.filter(function (i) { return i != undefined; });

console.log('D Trades:');
for(let i = 0; i < trades.length; i++) {
    console.log(trades[i]);
}

fs.writeFile('./data/trades.json', JSON.stringify(trades), (err, data) => {
    if (err) return console.log(err);
    console.log('Trades > trades.txt');
});
```
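A direct Python translation of the loop above — a sketch, assuming `array` is a list of records whose first element is "BUY" or "SELL" (the sample data below is made up, since the original source of `array` is elided in the question):

```python
import json

# sample data; replace with the real trade source
array = [["BUY", 1.0], ["BUY", 1.1], ["SELL", 1.2],
         ["BUY", 1.0], ["SELL", 1.3], ["SELL", 1.1]]

trades = []
for i in range(len(array) - 1, -1, -1):
    trade = array[i]
    # guard i > 0 explicitly: Python's array[-1] would wrap around
    next_trade = array[i - 1] if i > 0 else None
    if next_trade is not None:
        if trade[0] == "BUY" and next_trade[0] == "SELL":
            trades.append(trade)
        if trade[0] == "SELL" and next_trade[0] == "BUY":
            trades.append(trade)
    elif trade[0] == "BUY":
        trades.append(trade)

trades.reverse()  # we appended while walking backwards

print('D Trades:')
for t in trades:
    print(t)

# assumes the data/ directory exists, as in the original script
with open('./data/trades.json', 'w') as f:
    json.dump(trades, f)
print('Trades > trades.txt')
```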
2021/08/26
[ "https://Stackoverflow.com/questions/68940884", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16480780/" ]
You can try:

1. `stack` all the dates into one column
2. `drop_duplicates`: effectively getting rid of equal end and start dates
3. Keep every second row and restructure the result

```
df = df.sort_values(["category", "start", "end"])

stacked = df.set_index("category").stack().droplevel(-1).rename("start")
output = stacked.reset_index().drop_duplicates(keep=False).set_index("category")
output["end"] = output["start"].shift(-1)
output = output.iloc[range(0, len(output.index), 2)]

>>> output
               start         end
category
A         2015-01-01  2016-12-01
B         2016-01-01  2016-07-01
B         2018-01-01  2018-08-01
```
You could do it like this: **Sample data:** ``` import pandas as pd d = {'category': {0: 'A', 1: 'A', 2: 'A', 3: 'B', 4: 'B'}, 'start': {0: '2015-01-01', 1: '2016-01-01', 2: '2016-06-01', 3: '2016-01-01', 4: '2018-01-01'}, 'end': {0: '2016-01-01', 1: '2016-06-01', 2: '2016-12-01', 3: '2016-07-01', 4: '2018-08-01'}} df = pd.DataFrame(d) ``` **Code:** ``` # Create a series of boolean essentially marking the start of a new continuous group df = df.sort_values(['category', 'start', 'end']) mask_continue = ~df['start'].eq(df['end'].shift()) mask_group = ~df['category'].eq(df['category'].shift()) df['group'] = (mask_continue | mask_group).cumsum() # group by that above generated group and get first and last value of it g = df.groupby('group') g_start = g.head(1).set_index('group')[['category', 'start']] g_end = g.tail(1).set_index('group')['end'] # combine both g_start['end'] = g_end ``` **Output:** ``` print(g_start) category start end group 1 A 2015-01-01 2016-12-01 2 B 2016-01-01 2016-07-01 3 B 2018-01-01 2018-08-01 ```
13,808
57,665,576
I’m new to nix and would like to be able to install openconnect with it. Darwin seems to be unsupported at this time via nix, though it can be installed with either brew or macports. I’ve tried both a basic install and allowing unsupported. I didn’t see anything on google or stackoverflow as yet that helped. Basic Install results --------------------- ```sh $ nix-env -i openconnect warning: there are multiple derivations named 'openconnect-8.03'; using the first one installing 'openconnect-8.03' error: Package ‘«name-missing»’ in /nix/store/bsxpbjipz85ws3wbznhpgc38779pzak5-nixpkgs-19.09pre186574.88d9f776091/nixpkgs/pkgs/tools/networking/openconnect/default.nix:28 is not supported on ‘x86_64-darwin’, refusing to evaluate. a) For `nixos-rebuild` you can set { nixpkgs.config.allowUnsupportedSystem = true; } in configuration.nix to override this. b) For `nix-env`, `nix-build`, `nix-shell` or any other Nix command you can add { allowUnsupportedSystem = true; } to ~/.config/nixpkgs/config.nix. (use '--show-trace' to show detailed location information) ``` Allow Unsupported Results ------------------------- ```sh $ NIXPKGS_ALLOW_UNSUPPORTED_SYSTEM=1 nix-env -i openconnect warning: there are multiple derivations named 'openconnect-8.03'; using the first one installing 'openconnect-8.03' these derivations will be built: /nix/store/rf9afn87lch36bj7q61ks6rr3igsbv83-builder.pl.drv /nix/store/25pr0lcpbgdx78vpv2jb99sngzwq6b44-nettools-1003.1-2008.drv /nix/store/v9i3z3bh3jjzs80wj4jbb5vn37mwflah-openresolv-3.9.0.drv /nix/store/1i8i9qww079gm1l82bnggkpxphg27v9w-vpnc-0.5.3-post-r550.drv /nix/store/1fzxm6cdd5d2mgizkvqqx0zi0vg9ilpk-openconnect-8.03.drv these paths will be fetched (157.54 MiB download, 811.71 MiB unpacked): /nix/store/02cjbymr9q0abw1vdy9i4c67dklgq4iq-autogen-5.18.12-lib /nix/store/09cdrnckixb7i5fsv2fc33rd7p5zx0n8-libjpeg-turbo-2.0.2 /nix/store/0cifmxw0r6lijng796a3z3nwq67ma5b3-llvm-7.1.0-lib /nix/store/0x4jdm1dhb593bpgazqclg7np5mb9yp2-vpnc-r550 /nix/store/1d1xq0r2zg4r1mk8f1ydfygw19i84lpq-fontconfig-2.12.6 /nix/store/1ki1m09g8wnd5vzadcxh0pjwfz27zk8z-expand-response-params /nix/store/1qkqjb02khxr13q8hhwicznz5zsvjvzv-gnused-4.7 /nix/store/28ciaclbgwii11yna529xim4w1cnc7bn-expat-2.2.7 /nix/store/29alwmfhca3y5gj0fki5raahi2s6330n-nettle-3.4.1-dev /nix/store/2ia88csiygmwq9bprdlwdpj9ajnjw1wd-gtk+3-3.24.8 /nix/store/2pj2hdfy8jqy41hbh5h6z7sqhmcpi7xy-cctools-binutils-darwin /nix/store/2x82drnhq64vpw0pj4gkgid0qhm7f20q-gnutls-3.6.8-dev /nix/store/33whmhh30l2c1cx65pakygv520fjsi96-jasper-2.0.16 /nix/store/4366mxm4p1032s9s8l6pyzwz56xr9inb-libxml2-2.9.9-py /nix/store/49nmvn7iaxlahhc66w8b4y9j66n8fjlb-autogen-5.18.12 /nix/store/4mjyhc65kdri28g59gpnjvimjkb44vpy-gawk-4.2.1 /nix/store/4zn384psyqihmm6fnmavxw5w3509qgnw-Security-osx-10.9.5 ... 
configuring building build flags: SHELL=/nix/store/d9wxsyxsj3ssjcbywwm35254d14yv87a-bash-4.4-p23/bin/bash sed -e 's:@SBINDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/sbin:g' -e 's:@SYSCONFDIR@:/etc:g' -e 's:@LIBEXECDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/libexec/resolvconf:g' \ -e 's:@VARDIR@:/run/resolvconf:g' \ -e 's:@RCDIR@::g' -e 's:@RESTARTCMD@:false:g' -e 's:@RCDIR@::g' -e 's:@STATUSARG@::g' \ resolvconf.in > resolvconf sed -e 's:@SBINDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/sbin:g' -e 's:@SYSCONFDIR@:/etc:g' -e 's:@LIBEXECDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/libexec/resolvconf:g' \ -e 's:@VARDIR@:/run/resolvconf:g' \ -e 's:@RCDIR@::g' -e 's:@RESTARTCMD@:false:g' -e 's:@RCDIR@::g' -e 's:@STATUSARG@::g' \ resolvconf.8.in > resolvconf.8 sed -e 's:@SBINDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/sbin:g' -e 's:@SYSCONFDIR@:/etc:g' -e 's:@LIBEXECDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/libexec/resolvconf:g' \ -e 's:@VARDIR@:/run/resolvconf:g' \ -e 's:@RCDIR@::g' -e 's:@RESTARTCMD@:false:g' -e 's:@RCDIR@::g' -e 's:@STATUSARG@::g' \ resolvconf.conf.5.in > resolvconf.conf.5 sed -e 's:@SBINDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/sbin:g' -e 's:@SYSCONFDIR@:/etc:g' -e 's:@LIBEXECDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/libexec/resolvconf:g' \ -e 's:@VARDIR@:/run/resolvconf:g' \ -e 's:@RCDIR@::g' -e 's:@RESTARTCMD@:false:g' -e 's:@RCDIR@::g' -e 's:@STATUSARG@::g' \ libc.in > libc sed -e 's:@SBINDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/sbin:g' -e 's:@SYSCONFDIR@:/etc:g' -e 's:@LIBEXECDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/libexec/resolvconf:g' \ -e 's:@VARDIR@:/run/resolvconf:g' \ -e 's:@RCDIR@::g' -e 's:@RESTARTCMD@:false:g' -e 's:@RCDIR@::g' -e 's:@STATUSARG@::g' \ dnsmasq.in > dnsmasq sed -e 's:@SBINDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/sbin:g' -e 's:@SYSCONFDIR@:/etc:g' -e 's:@LIBEXECDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/libexec/resolvconf:g' \ -e 's:@VARDIR@:/run/resolvconf:g' \ -e 's:@RCDIR@::g' -e 's:@RESTARTCMD@:false:g' -e 's:@RCDIR@::g' -e 's:@STATUSARG@::g' \ named.in > named sed -e 's:@SBINDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/sbin:g' -e 's:@SYSCONFDIR@:/etc:g' -e 's:@LIBEXECDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/libexec/resolvconf:g' \ -e 's:@VARDIR@:/run/resolvconf:g' \ -e 's:@RCDIR@::g' -e 's:@RESTARTCMD@:false:g' -e 's:@RCDIR@::g' -e 's:@STATUSARG@::g' \ pdnsd.in > pdnsd sed -e 's:@SBINDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/sbin:g' -e 's:@SYSCONFDIR@:/etc:g' -e 's:@LIBEXECDIR@:/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/libexec/resolvconf:g' \ -e 's:@VARDIR@:/run/resolvconf:g' \ -e 's:@RCDIR@::g' -e 's:@RESTARTCMD@:false:g' -e 's:@RCDIR@::g' -e 's:@STATUSARG@::g' \ unbound.in > unbound installing install flags: SHELL=/nix/store/d9wxsyxsj3ssjcbywwm35254d14yv87a-bash-4.4-p23/bin/bash SYSCONFDIR=\$\(out\)/etc install install -d /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/sbin install -m 0755 resolvconf /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/sbin install -d /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/etc test -e /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/etc/resolvconf.conf || \ install -m 0644 resolvconf.conf 
/nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/etc install -d /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/libexec/resolvconf install -m 0644 libc dnsmasq named pdnsd unbound /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/libexec/resolvconf install -d /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/share/man/man8 install -m 0444 resolvconf.8 /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/share/man/man8 install -d /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/share/man/man5 install -m 0444 resolvconf.conf.5 /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/share/man/man5 post-installation fixup gzipping man pages under /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/share/man/ strip is /nix/store/2pj2hdfy8jqy41hbh5h6z7sqhmcpi7xy-cctools-binutils-darwin/bin/strip stripping (with command strip and flags -S) in /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/libexec /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/sbin patching script interpreter paths in /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0 /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/sbin/.resolvconf-wrapped: interpreter directive changed from "/bin/sh" to "/nix/store/d9wxsyxsj3ssjcbywwm35254d14yv87a-bash-4.4-p23/bin/sh" moving /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/sbin/* to /nix/store/izy3kb6035gl46dkxlhl09g2ghcg2hqp-openresolv-3.9.0/bin building '/nix/store/25pr0lcpbgdx78vpv2jb99sngzwq6b44-nettools-1003.1-2008.drv'... created 6 symlinks in user environment building '/nix/store/1i8i9qww079gm1l82bnggkpxphg27v9w-vpnc-0.5.3-post-r550.drv'... unpacking sources unpacking source archive /nix/store/0x4jdm1dhb593bpgazqclg7np5mb9yp2-vpnc-r550 source root is vpnc-r550 patching sources applying patch /nix/store/cjmd5815vwl3kfhm8f5yzrnd0zcms7ki-makefile.patch patching file Makefile Hunk #1 succeeded at 20 with fuzz 1. Hunk #2 succeeded at 82 with fuzz 2 (offset 11 lines). applying patch /nix/store/9lq5151d73q7m5d291qayz73828mky1f-no_default_route_when_netmask.patch patching file vpnc-script configuring substituteStream(): WARNING: pattern '/usr/bin/perl' doesn't match anything in file 'pcf2vpnc' no configure script, doing nothing ... 
clang -O3 -g -W -Wall -Wmissing-declarations -Wwrite-strings -I/nix/store/f7pyl32f8jq8g2i62qqfrjwigi0znb1a-libgcrypt-1.8.4-dev/include -I/nix/store/5pnsm2flly4w5zi0d1qlvz0zs71jwlv9-libgpg-error-1.36-dev/include -DOPENSSL_GPL_VIOLATION -DCRYPTO_OPENSSL -DVERSION=\"0.5.3\" -c -o config.o config.c config.c:212:6: warning: variable 'pass' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized] if (WEXITSTATUS(status) != 0) { ^~~~~~~~~~~~~~~~~~~~~~~~ /nix/store/bhywddq18j79xr274n45byvqjb8fs52j-Libsystem-osx-10.12.6/include/sys/wait.h:144:24: note: expanded from macro 'WEXITSTATUS' #define WEXITSTATUS(x) ((_W_INT(x) >> 8) & 0x000000ff) ^ config.c:242:9: note: uninitialized use occurs here return pass; ^~~~ config.c:212:2: note: remove the 'if' if its condition is always false if (WEXITSTATUS(status) != 0) { ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ config.c:207:6: warning: variable 'pass' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized] if (!WIFEXITED(status)) { ^~~~~~~~~~~~~~~~~~ config.c:242:9: note: uninitialized use occurs here return pass; ^~~~ config.c:207:2: note: remove the 'if' if its condition is always false if (!WIFEXITED(status)) { ^~~~~~~~~~~~~~~~~~~~~~~~~ config.c:204:6: warning: variable 'pass' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized] if (r == -1) ^~~~~~~ config.c:242:9: note: uninitialized use occurs here return pass; ^~~~ config.c:204:2: note: remove the 'if' if its condition is always false if (r == -1) ^~~~~~~~~~~~ config.c:177:6: warning: variable 'pass' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized] if (pid == -1) ^~~~~~~~~ config.c:242:9: note: uninitialized use occurs here return pass; ^~~~ config.c:177:2: note: remove the 'if' if its condition is always false if (pid == -1) ^~~~~~~~~~~~~~ config.c:173:6: warning: variable 'pass' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized] if (pipe(fds) == -1) ^~~~~~~~~~~~~~~ config.c:242:9: note: uninitialized use occurs here return pass; ^~~~ config.c:173:2: note: remove the 'if' if its condition is always false if (pipe(fds) == -1) ^~~~~~~~~~~~~~~~~~~~ config.c:170:12: note: initialize the variable 'pass' to silence this warning char *pass; ^ = NULL 5 warnings generated. 
clang -O3 -g -W -Wall -Wmissing-declarations -Wwrite-strings -I/nix/store/f7pyl32f8jq8g2i62qqfrjwigi0znb1a-libgcrypt-1.8.4-dev/include -I/nix/store/5pnsm2flly4w5zi0d1qlvz0zs71jwlv9-libgpg-error-1.36-dev/include -DOPENSSL_GPL_VIOLATION -DCRYPTO_OPENSSL -DVERSION=\"0.5.3\" -c -o dh.o dh.c clang -O3 -g -W -Wall -Wmissing-declarations -Wwrite-strings -I/nix/store/f7pyl32f8jq8g2i62qqfrjwigi0znb1a-libgcrypt-1.8.4-dev/include -I/nix/store/5pnsm2flly4w5zi0d1qlvz0zs71jwlv9-libgpg-error-1.36-dev/include -DOPENSSL_GPL_VIOLATION -DCRYPTO_OPENSSL -DVERSION=\"0.5.3\" -c -o math_group.o math_group.c clang -O3 -g -W -Wall -Wmissing-declarations -Wwrite-strings -I/nix/store/f7pyl32f8jq8g2i62qqfrjwigi0znb1a-libgcrypt-1.8.4-dev/include -I/nix/store/5pnsm2flly4w5zi0d1qlvz0zs71jwlv9-libgpg-error-1.36-dev/include -DOPENSSL_GPL_VIOLATION -DCRYPTO_OPENSSL -DVERSION=\"0.5.3\" -c -o supp.o supp.c clang -O3 -g -W -Wall -Wmissing-declarations -Wwrite-strings -I/nix/store/f7pyl32f8jq8g2i62qqfrjwigi0znb1a-libgcrypt-1.8.4-dev/include -I/nix/store/5pnsm2flly4w5zi0d1qlvz0zs71jwlv9-libgpg-error-1.36-dev/include -DOPENSSL_GPL_VIOLATION -DCRYPTO_OPENSSL -DVERSION=\"0.5.3\" -c -o decrypt-utils.o decrypt-utils.c clang -O3 -g -W -Wall -Wmissing-declarations -Wwrite-strings -I/nix/store/f7pyl32f8jq8g2i62qqfrjwigi0znb1a-libgcrypt-1.8.4-dev/include -I/nix/store/5pnsm2flly4w5zi0d1qlvz0zs71jwlv9-libgpg-error-1.36-dev/include -DOPENSSL_GPL_VIOLATION -DCRYPTO_OPENSSL -DVERSION=\"0.5.3\" -c -o crypto.o crypto.c clang -O3 -g -W -Wall -Wmissing-declarations -Wwrite-strings -I/nix/store/f7pyl32f8jq8g2i62qqfrjwigi0znb1a-libgcrypt-1.8.4-dev/include -I/nix/store/5pnsm2flly4w5zi0d1qlvz0zs71jwlv9-libgpg-error-1.36-dev/include -DOPENSSL_GPL_VIOLATION -DCRYPTO_OPENSSL -DVERSION=\"0.5.3\" -c -o crypto-openssl.o crypto-openssl.c crypto-openssl.c:66:30: warning: unused parameter 'buf' [-Wunused-parameter] static int password_cb(char *buf, int size, int rwflag, void *userdata) ^ crypto-openssl.c:66:39: warning: unused parameter 'size' [-Wunused-parameter] static int password_cb(char *buf, int size, int rwflag, void *userdata) ^ crypto-openssl.c:66:49: warning: unused parameter 'rwflag' [-Wunused-parameter] static int password_cb(char *buf, int size, int rwflag, void *userdata) ^ crypto-openssl.c:66:63: warning: unused parameter 'userdata' [-Wunused-parameter] static int password_cb(char *buf, int size, int rwflag, void *userdata) ^ 4 warnings generated. clang -O3 -g -W -Wall -Wmissing-declarations -Wwrite-strings -I/nix/store/f7pyl32f8jq8g2i62qqfrjwigi0znb1a-libgcrypt-1.8.4-dev/include -I/nix/store/5pnsm2flly4w5zi0d1qlvz0zs71jwlv9-libgpg-error-1.36-dev/include -DOPENSSL_GPL_VIOLATION -DCRYPTO_OPENSSL -DVERSION=\"0.5.3\" -c -o vpnc.o vpnc.c vpnc.c:1901:39: warning: 'memset' call operates on objects of type 'unsigned char' while the size is based on a different type 'unsigned char *' [-Wsizeof-pointer-memaccess] memset(dh_shared_secret, 0, sizeof(dh_shared_secret)); ~~~~~~~~~~~~~~~~ ^~~~~~~~~~~~~~~~ vpnc.c:1901:39: note: did you mean to provide an explicit length? memset(dh_shared_secret, 0, sizeof(dh_shared_secret)); ^~~~~~~~~~~~~~~~ 1 warning generated. ... checking for gawk... gawk checking whether make sets $(MAKE)... yes checking whether make supports nested variables... yes checking whether UID '501' is supported by ustar format... yes checking whether GID '20' is supported by ustar format... yes checking how to create a ustar tar archive... gnutar checking whether make supports nested variables... 
(cached) yes checking for style of include used by make... GNU checking for gcc... clang checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether clang accepts -g... yes checking for clang option to accept ISO C89... none needed checking whether clang understands -c and -o together... yes checking dependency style of clang... none checking for fdevname_r... no checking for statfs... yes checking for getline... yes checking for strcasestr... yes checking for strndup... yes checking for asprintf... yes checking for vasprintf... yes checking for supported compiler flags... -Wall -Wextra -Wno-missing-field-initializers -Wno-sign-compare -Wno-unused-parameter -Werror=pointer-to-int-cast -Wdeclaration-after-statement -Werror-implicit-function-declaration -Wformat-nonliteral -Wformat-security -Winit-self -Wmissing-declarations -Wmissing-include-dirs -Wnested-externs -Wpointer-arith -Wwrite-strings checking For memset_s... yes checking for socket... yes checking for inet_aton... yes checking for IPV6_PATHMTU socket option... no checking for __android_log_vprint... no checking for __android_log_vprint in -llog... no checking for nl_langinfo... yes checking for ld used by clang... ld checking if the linker (ld) is GNU ld... no checking for shared library run path origin... done checking how to run the C preprocessor... clang -E checking for grep that handles long lines and -e... /nix/store/bzr287y1sy2qsrmywwkgxlkblz7vx61w-gnugrep-3.3/bin/grep checking for egrep... /nix/store/bzr287y1sy2qsrmywwkgxlkblz7vx61w-gnugrep-3.3/bin/grep -E checking for iconv... no, consider installing GNU libiconv checking for GNUTLS... yes checking for gnutls_system_key_add_x509... yes checking for gnutls_pkcs11_add_provider... yes checking for P11KIT... no checking for tss library... no checking for TASN1... no checking for LIBLZ4... no configure: WARNING: *** *** lz4 not found. *** checking for egrep... (cached) /nix/store/bzr287y1sy2qsrmywwkgxlkblz7vx61w-gnugrep-3.3/bin/grep -E checking how to print strings... printf checking for a sed that does not truncate output... /nix/store/1qkqjb02khxr13q8hhwicznz5zsvjvzv-gnused-4.7/bin/sed checking for fgrep... /nix/store/bzr287y1sy2qsrmywwkgxlkblz7vx61w-gnugrep-3.3/bin/grep -F checking for ld used by clang... ld checking if the linker (ld) is GNU ld... no checking for BSD- or MS-compatible name lister (nm)... nm checking the name lister (nm) interface... BSD nm checking whether ln -s works... yes checking the maximum length of command line arguments... 196608 checking how to convert x86_64-apple-darwin18.7.0 file names to x86_64-apple-darwin18.7.0 format... func_convert_file_noop checking how to convert x86_64-apple-darwin18.7.0 file names to toolchain format... func_convert_file_noop checking for ld option to reload object files... -r checking for objdump... no checking how to recognize dependent libraries... pass_all checking for dlltool... no checking how to associate runtime and link libraries... printf %s\n checking for archiver @FILE support... no checking for strip... strip checking for ranlib... ranlib checking command to parse nm output from clang object... ok checking for sysroot... no checking for a working dd... /nix/store/6964byz5cbs03s8zckqn72i6zq12ipqv-coreutils-8.31/bin/dd checking how to truncate binary pipes... 
/nix/store/6964byz5cbs03s8zckqn72i6zq12ipqv-coreutils-8.31/bin/dd bs=4096 count=1 checking for mt... no checking if : is a manifest tool... no checking for dsymutil... dsymutil checking for nmedit... no checking for lipo... lipo checking for otool... otool checking for otool64... no checking for -single_module linker flag... yes checking for -exported_symbols_list linker flag... yes checking for -force_load linker flag... yes checking for dlfcn.h... yes checking for objdir... .libs checking if clang supports -fno-rtti -fno-exceptions... yes checking for clang option to produce PIC... -fno-common -DPIC checking if clang PIC flag -fno-common -DPIC works... yes checking if clang static flag -static works... no checking if clang supports -c -o file.o... yes checking if clang supports -c -o file.o... (cached) yes checking whether the clang linker (ld) supports shared libraries... yes checking dynamic linker characteristics... darwin18.7.0 dyld checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... yes checking whether to build static libraries... no checking linker version script flag... unsupported checking for LIBXML2... yes checking for ZLIB... yes checking for LIBPROXY... no checking for libproxy... no checking for LIBSTOKEN... yes checking for LIBPSKC... no checking for krb5-config... no checking gssapi/gssapi.h usability... no checking gssapi/gssapi.h presence... no checking for gssapi/gssapi.h... no checking gssapi.h usability... no checking gssapi.h presence... no checking for gssapi.h... no configure: WARNING: Cannot find <gssapi/gssapi.h> or <gssapi.h> configure: WARNING: Building without GSSAPI support checking if_tun.h usability... no checking if_tun.h presence... no checking for if_tun.h... no checking linux/if_tun.h usability... no checking linux/if_tun.h presence... no checking for linux/if_tun.h... no checking net/if_tun.h usability... no checking net/if_tun.h presence... no checking for net/if_tun.h... no checking net/tun/if_tun.h usability... no checking net/tun/if_tun.h presence... no checking for net/tun/if_tun.h... no checking for net/if_utun.h... yes checking alloca.h usability... yes checking alloca.h presence... yes checking for alloca.h... yes checking endian.h usability... no checking endian.h presence... no checking for endian.h... no checking sys/endian.h usability... no checking sys/endian.h presence... no checking for sys/endian.h... no checking sys/isa_defs.h usability... no checking sys/isa_defs.h presence... no checking for sys/isa_defs.h... no checking for python3... no checking for python2... no checking for python... /usr/bin/python checking if groff can create UTF-8 XHTML... no. Not building HTML pages checking for CWRAP... no checking for nuttcp... no checking that generated files are newer than configure... 
done configure: creating ./config.status config.status: creating Makefile config.status: creating openconnect.pc config.status: creating po/Makefile config.status: creating www/Makefile config.status: creating libopenconnect.map config.status: creating openconnect.8 config.status: creating www/styles/Makefile config.status: creating www/inc/Makefile config.status: creating www/images/Makefile config.status: creating tests/Makefile config.status: creating tests/softhsm2.conf config.status: creating tests/configs/test-user-cert.config config.status: creating tests/configs/test-user-pass.config config.status: creating config.h config.status: executing depfiles commands config.status: executing libtool commands BUILD OPTIONS: SSL library: GnuTLS PKCS#11 support: no DTLS support: yes ESP support: yes libproxy support: no RSA SecurID support: yes PSKC OATH file support: no GSSAPI support: no Yubikey support: yes LZ4 compression: no Java bindings: no Build docs: no Unit tests: no Net namespace tests: no building build flags: SHELL=/nix/store/d9wxsyxsj3ssjcbywwm35254d14yv87a-bash-4.4-p23/bin/bash make all-recursive make[1]: Entering directory '/private/var/folders/6j/j96x43893xd3vst_44_w7pqm0000gn/T/nix-build-openconnect-8.03.drv-0/openconnect-8.03' CC libopenconnect_la-version.lo CC libopenconnect_la-ssl.lo CC libopenconnect_la-http.lo CC libopenconnect_la-http-auth.lo CC libopenconnect_la-auth-common.lo CC libopenconnect_la-library.lo CC libopenconnect_la-compat.lo CC libopenconnect_la-lzs.lo CC libopenconnect_la-mainloop.lo CC libopenconnect_la-script.lo CC libopenconnect_la-ntlm.lo CC libopenconnect_la-digest.lo CC libopenconnect_la-oncp.lo CC libopenconnect_la-lzo.lo CC libopenconnect_la-auth-juniper.lo CC libopenconnect_la-esp.lo CC libopenconnect_la-esp-seqno.lo CC libopenconnect_la-gnutls-esp.lo CC libopenconnect_la-auth.lo CC libopenconnect_la-cstp.lo CC libopenconnect_la-dtls.lo CC libopenconnect_la-gnutls-dtls.lo CC libopenconnect_la-oath.lo CC libopenconnect_la-gpst.lo CC libopenconnect_la-auth-globalprotect.lo CC libopenconnect_la-yubikey.lo yubikey.c:63:10: fatal error: 'PCSC/wintypes.h' file not found #include <PCSC/wintypes.h> ^~~~~~~~~~~~~~~~~ 1 error generated. make[1]: *** [Makefile:1163: libopenconnect_la-yubikey.lo] Error 1 make[1]: Leaving directory '/private/var/folders/6j/j96x43893xd3vst_44_w7pqm0000gn/T/nix-build-openconnect-8.03.drv-0/openconnect-8.03' make: *** [Makefile:697: all] Error 2 builder for '/nix/store/1fzxm6cdd5d2mgizkvqqx0zi0vg9ilpk-openconnect-8.03.drv' failed with exit code 2 error: build of '/nix/store/1fzxm6cdd5d2mgizkvqqx0zi0vg9ilpk-openconnect-8.03.drv’ failed ``` Naturally I’d expect the install to succeed. Does anyone know how to make this kind of install work on macos and how to configure it via nix?
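Given that the compile failure is in the optional Yubikey/PC-SC code path (note "Yubikey support: yes" in the configure summary above), one avenue is to override the derivation to disable it. This is only a sketch: `--without-libpcsclite` is an assumed configure switch, so check `./configure --help` in the openconnect source before relying on it.

```nix
# ~/.config/nixpkgs/config.nix -- sketch, not a verified fix
{
  allowUnsupportedSystem = true;
  packageOverrides = pkgs: {
    openconnect = pkgs.openconnect.overrideAttrs (old: {
      # assumed switch to skip the Yubikey/PCSC code that fails to find
      # PCSC/wintypes.h on darwin
      configureFlags = (old.configureFlags or []) ++ [ "--without-libpcsclite" ];
    });
  };
}
```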
2019/08/26
[ "https://Stackoverflow.com/questions/57665576", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1241878/" ]
Please check this:

```
let dataJobs = await page.evaluate(() => {
    var a = document.getElementById("task-listing-datatable").getAttribute("data-tasks"); //Search job list
    var ar = eval(a); //Convert string to arr
    var keyword = ['Image Annotation', 'What Is The Best Dialogue Category About Phones', 'Label Roads( With Sidewalks) In Images']; //This is the list where I'll find match
    let ret = [];
    ar.forEach(val => {
        if (keyword.includes(val[1])) {
            ret.push(`${val[0]} - ${val[1]} - Paga: ${val[3]} - Numero de Tasks: ${val[5]} @everyone`);
        }
    });
    return ret;
});
```

**PS: instead of using eval, please use JSON.parse(a) to prevent JavaScript code injection.**
Why not save the result of each match in an array as you find it, instead of returning on the first one, and return the whole array from the page context at the end? Something like this:

```
let dataJobs = await page.evaluate(() => {
    var a = document.getElementById("task-listing-datatable").getAttribute("data-tasks"); //Search job list
    var ar = eval(a); //Convert string to arr
    var keyword = ['Image Annotation', 'What Is The Best Dialogue Category About Phones', 'Label Roads( With Sidewalks) In Images']; //This is the list where I'll find match
    let array = [];
    for (let i = 0; i < ar.length; i++) {
        // search for a match
        for (let j = 0; j < keyword.length; j++) {
            if (ar[i][1] === keyword[j]) {
                let jobMatch = `${ar[i][0]} - ${ar[i][1]} - Paga: ${ar[i][3]} - Numero de Tasks: ${ar[i][5]} @everyone`;
                // save the match
                array.push(jobMatch);
            }
        }
    }
    // the array must be built and returned inside evaluate(),
    // since this callback runs in the browser context
    return array;
});
```
13,809
60,920,262
First I created an environment in Anaconda Navigator in which I installed packages like Keras, Tensorflow and Theano. Then I activated my environment through Anaconda Navigator and launched the Spyder application. Now when I try to import keras it shows an error stating:

```
Traceback (most recent call last):
  File "C:\Users\Quickhotshot612\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "C:\Users\Quickhotshot612\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "C:\Users\Quickhotshot612\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "C:\Users\Quickhotshot612\Anaconda3\envs\tensorflow\lib\imp.py", line 242, in load_module
    return load_dynamic(name, filename, file)
  File "C:\Users\Quickhotshot612\Anaconda3\envs\tensorflow\lib\imp.py", line 342, in load_dynamic
    return _load(spec)
ImportError: DLL load failed: The specified module could not be found.
```

Failed to load the native TensorFlow runtime.

See <https://www.tensorflow.org/install/errors> for some common reasons and solutions. Include the entire stack trace above this error message when asking for help.

And it keeps showing this error over and over. Where did I go wrong? Can anyone help me with this? [enter image description here](https://i.stack.imgur.com/XCybv.png)

But when I run `import theano` it works; it doesn't work in the case of tensorflow and keras.
2020/03/29
[ "https://Stackoverflow.com/questions/60920262", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13148984/" ]
You could use below aggregation for unique active users over day and month wise. I've assumed ct as timestamp field. ``` db.collection.aggregate( [ {"$match":{"ct":{"$gte":dateFrom,"$lt":dateTo}}}, {"$facet":{ "dau":[ {"$group":{ "_id":{ "user":"$user", "ymd":{"$dateToString":{"format":"%Y-%m-%d","date":"$ct"}} } }}, {"$group":{"_id":"$_id.ymd","dau":{"$sum":1}}} ], "mau":[ {"$group":{ "_id":{ "user":"$user", "ym":{"$dateToString":{"format":"%Y-%m","date":"$ct"}} } }}, {"$group":{"_id":"$_id.ym","mau":{"$sum":1}}} ] }} ]) ``` DAU ``` db.collection.aggregate( [ {"$match":{"ct":{"$gte":dateFrom,"$lt":dateTo}}}, {"$group":{ "_id":{ "user":"$user", "ymd":{"$dateToString":{"format":"%Y-%m-%d","date":"$ct"}} } }}, {"$group":{"_id":"$_id.ymd","dau":{"$sum":1}}} ]) ``` MAU ``` db.collection.aggregate( [ {"$match":{"ct":{"$gte":dateFrom,"$lt":dateTo}}}, {"$group":{ "_id":{ "user":"$user", "ym":{"$dateToString":{"format":"%Y-%m","date":"$ct"}} } }}, {"$group":{"_id":"$_id.ym","mau":{"$sum":1}}} ]) ```
I made a quick test on a large database of events, and counting with a distinct is much faster than an aggregate if you have the right indexes:

```
collection.distinct('user', { ct: { $gte: dateFrom, $lt: dateTo } }).length
```
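For completeness, a rough pymongo equivalent of that distinct-based count might look like the sketch below; the database and collection names and the date bounds are placeholders, not taken from the question.

```
# Hedged sketch: "mydb"/"events" and the dates are illustrative.
from datetime import datetime
from pymongo import MongoClient

coll = MongoClient()["mydb"]["events"]
date_from, date_to = datetime(2020, 3, 1), datetime(2020, 4, 1)

# distinct() returns the list of unique users in the window; its length is the DAU
dau = len(coll.distinct("user", {"ct": {"$gte": date_from, "$lt": date_to}}))
print(dau)
```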
13,810
51,549,234
I'm trying to spin up an EMR cluster with a Spark step using a Lambda function. Here is my lambda function (python 2.7):

```
import boto3

def lambda_handler(event, context):
    conn = boto3.client("emr")
    cluster_id = conn.run_job_flow(
        Name='LSR Batch Testrun',
        ServiceRole='EMR_DefaultRole',
        JobFlowRole='EMR_EC2_DefaultRole',
        VisibleToAllUsers=True,
        LogUri='s3n://aws-logs-171256445476-ap-southeast-2/elasticmapreduce/',
        ReleaseLabel='emr-5.16.0',
        Instances={
            "Ec2SubnetId": "<my-subnet>",
            'InstanceGroups': [
                {
                    'Name': 'Master nodes',
                    'Market': 'ON_DEMAND',
                    'InstanceRole': 'MASTER',
                    'InstanceType': 'm3.xlarge',
                    'InstanceCount': 1,
                },
                {
                    'Name': 'Slave nodes',
                    'Market': 'ON_DEMAND',
                    'InstanceRole': 'CORE',
                    'InstanceType': 'm3.xlarge',
                    'InstanceCount': 2,
                }
            ],
            'KeepJobFlowAliveWhenNoSteps': False,
            'TerminationProtected': False
        },
        Applications=[{
            'Name': 'Spark',
            'Name': 'Hive'
        }],
        Configurations=[
            {
                "Classification": "hive-site",
                "Properties": {
                    "hive.metastore.client.factory.class": "com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory"
                }
            },
            {
                "Classification": "spark-hive-site",
                "Properties": {
                    "hive.metastore.client.factory.class": "com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory"
                }
            }
        ],
        Steps=[{
            'Name': 'mystep',
            'ActionOnFailure': 'TERMINATE_CLUSTER',
            'HadoopJarStep': {
                'Jar': 's3://elasticmapreduce/libs/script-runner/script-runner.jar',
                'Args': [
                    "/home/hadoop/spark/bin/spark-submit", "--deploy-mode", "cluster",
                    "--master", "yarn-cluster", "--class", "org.apache.spark.examples.SparkPi",
                    "s3://support.elasticmapreduce/spark/1.2.0/spark-examples-1.2.0-hadoop2.4.0.jar", "10"
                ]
            }
        }],
    )
    return "Started cluster {}".format(cluster_id)
```

The cluster is starting up, but when trying to execute the step it fails. The error log contains the following exception:

```
Exception in thread "main" java.lang.RuntimeException: Local file does not exist.
    at com.amazon.elasticmapreduce.scriptrunner.ScriptRunner.fetchFile(ScriptRunner.java:30)
    at com.amazon.elasticmapreduce.scriptrunner.ScriptRunner.main(ScriptRunner.java:56)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
```

So it seems like the script-runner is not picking up the .jar file from S3? Any help appreciated...
2018/07/27
[ "https://Stackoverflow.com/questions/51549234", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5175894/" ]
I could solve the problem eventually. The main problem was the broken "Applications" configuration, which has to look like the following instead:

```
Applications=[{
    'Name': 'Spark'
},
{
    'Name': 'Hive'
}],
```

The final Steps element:

```
Steps=[{
    'Name': 'lsr-step1',
    'ActionOnFailure': 'TERMINATE_CLUSTER',
    'HadoopJarStep': {
        'Jar': 'command-runner.jar',
        'Args': [
            "spark-submit", "--class", "org.apache.spark.examples.SparkPi",
            "s3://support.elasticmapreduce/spark/1.2.0/spark-examples-1.2.0-hadoop2.4.0.jar", "10"
        ]
    }
}]
```
Not all EMR images come pre-built with the ability to copy your jar and scripts from S3, so you must do that in a bootstrap action:

```
BootstrapActions=[
    {
        'Name': 'Install additional components',
        'ScriptBootstrapAction': {
            'Path': code_dir + '/scripts' + '/emr_bootstrap.sh'
        }
    }
],
```

And here is what my bootstrap does:

```
#!/bin/bash
HADOOP="/home/hadoop"
BUCKET="s3://<yourbucket>/<path>"

# Sync jars libraries
aws s3 sync ${BUCKET}/jars/ ${HADOOP}/
aws s3 sync ${BUCKET}/scripts/ ${HADOOP}/

# Install python packages
sudo pip install --upgrade pip
sudo ln -s /usr/local/bin/pip /usr/bin/pip
sudo pip install psycopg2 numpy boto3 pythonds
```

Then you can call your script and jar like this:

```
{
    'Name': 'START YOUR STEP',
    'ActionOnFailure': 'TERMINATE_CLUSTER',
    'HadoopJarStep': {
        'Jar': 'command-runner.jar',
        'Args': [
            "spark-submit", "--jars", ADDITIONAL_JARS,
            "--py-files", "/home/hadoop/modules.zip",
            "/home/hadoop/<your code>.py"
        ]
    }
},
```
13,813
53,503,196
I want to separate the exact words of a text file (*text.txt*) ending in a certain string by using 'endswith'. The fact is that my variable ``` h=[w for w in 'text.txt' if w.endswith('os')] ``` does not give what I want when I call it. On the other hand, if I try naming the open file ``` f=open('text.txt') h=[w for w in f if w.endswith('os')] ``` does not work either. Should I convert the text into a list first? Comment: I hope this is not duplicate. It is not the same as this [former question](https://stackoverflow.com/questions/14676265/how-to-read-a-text-file-into-a-list-or-an-array-with-python) although close to it.
2018/11/27
[ "https://Stackoverflow.com/questions/53503196", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10502088/" ]
Open the file first and then process it like this: ``` with open('text.txt', 'r') as file: content = file.read() h=[w for w in content.split() if w.endswith('os')] ```
``` with open('text.txt') as f: words = [word for line in f for word in line.split() if word.endswith('os')] ``` Your first attempt does not read the file, or even open it. Instead, it loops over the characters of the string `'text.txt'` and checks each of them if it ends with `'os'`. Your second attempt iterates over lines of the file, not words -- that's how a `for` loop works with a file handle.
13,814
55,318,862
I want to try to make a little webserver where you can put location data into a database using a grid system in Javascript. I've found a setup that I really liked and made an example in CodePen: <https://codepen.io/anon/pen/XGxEaj>. A user can click in a grid to point out the location. In the console log I see the data that is generated as output. For every cell in the grid it has a dictionary that looks like this:

```js
click: 4
height: 50
width: 50
x: 1
xnum: 1
y: 1
ynum: 1
```

A complete overview of the output data looks like this, where there are 10 arrays for every row, 10 arrays for every column of that row, and then a dictionary with the values:

[![enter image description here](https://i.stack.imgur.com/IkYP3.png)](https://i.stack.imgur.com/IkYP3.png)

Now I want to get this information back to Python in a dictionary as well, so I can store particular information in a database. But I can't figure out what the best way is to do that. Hopefully you could provide a little push in the right direction, links to useful pages or code snippets to guide me. Thanks for any help in advance!
2019/03/23
[ "https://Stackoverflow.com/questions/55318862", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9606978/" ]
This can be achieved by converting your JS object into JSON:

```
var json = JSON.stringify(myJsObj);
```

And then parsing that JSON string on the Python side:

```
import json

my_python_obj = json.loads(json_string)  # json_string holds the JSON text sent from JS
```

(Note: don't name the Python variable `json`, or it will shadow the module you just imported.)

More information about the JSON package for Python can be found [here](https://www.w3schools.com/python/python_json.asp) and more information about converting JS objects to and from JSON can be found [here](https://www.w3schools.com/js/js_json_intro.asp)
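If part of the question is how that JSON actually reaches Python in the webserver, a minimal sketch with Flask could look like this; the `/grid` route, the payload shape (a list of rows of cell dictionaries, as in the CodePen output), and the `click` field check are all assumptions for illustration.

```
# Hedged sketch: assumes the browser POSTs JSON.stringify(gridData) to /grid.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/grid", methods=["POST"])
def grid():
    cells = request.get_json()  # parsed into nested Python lists/dicts
    # keep only the cells the user actually clicked
    clicked = [c for row in cells for c in row if c.get("click", 0) > 0]
    # ...store `clicked` in the database here...
    return jsonify(received=len(clicked))
```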
You should use the python [json](https://docs.python.org/3/library/json.html) module. Here is how you can perform such operation using the module. ``` import json json_data = '{"a": 1, "b": 2, "c": 3, "d": 4, "e": 5}' loaded_json = json.loads(json_data) ```
13,819
66,602,656
I am following the instruction (<https://github.com/huggingface/transfer-learning-conv-ai>) to install conv-ai from huggingface, but I got stuck on the docker build step: `docker build -t convai .` I am using Mac 10.15, python 3.8, increased Docker memory to 4G. I have tried the following ways to solve the issue: 1. add `numpy` in `requirements.txt` 2. add `RUN pip3 install --upgrade setuptools` in Dockerfile 3. add `--upgrade` to `RUN pip3 install -r /tmp/requirements.txt` in Dockerfile 4. add `RUN pip3 install numpy` before `RUN pip3 install -r /tmp/requirements.txt` in Dockerfile 5. add `RUN apt-get install python3-numpy` before `RUN pip3 install -r /tmp/requirements.txt` in Dockerfile 6. using python 3.6.13 because of this [post](https://github.com/huggingface/transfer-learning-conv-ai/issues/97), but it has exact same error. 7. I am currently working on debugging inside the container by entering right before the `RUN pip3 install requirements.txt` Can anyone help me on this? Thank you!! The error: ``` => [6/9] COPY . ./ 0.0s => [7/9] COPY requirements.txt /tmp/requirements.txt 0.0s => ERROR [8/9] RUN pip3 install -r /tmp/requirements.txt 98.2s ------ > [8/9] RUN pip3 install -r /tmp/requirements.txt: #12 1.111 Collecting torch (from -r /tmp/requirements.txt (line 1)) #12 1.754 Downloading https://files.pythonhosted.org/packages/46/99/8b658e5095b9fb02e38ccb7ecc931eb1a03b5160d77148aecf68f8a7eeda/torch-1.8.0-cp36-cp36m-manylinux1_x86_64.whl (735.5MB) #12 81.11 Collecting pytorch-ignite (from -r /tmp/requirements.txt (line 2)) #12 81.76 Downloading https://files.pythonhosted.org/packages/f8/d3/640f70d69393b415e6a29b27c735047ad86267921ad62682d1d756556d48/pytorch_ignite-0.4.4-py3-none-any.whl (200kB) #12 81.82 Collecting transformers==2.5.1 (from -r /tmp/requirements.txt (line 3)) #12 82.17 Downloading https://files.pythonhosted.org/packages/13/33/ffb67897a6985a7b7d8e5e7878c3628678f553634bd3836404fef06ef19b/transformers-2.5.1-py3-none-any.whl (499kB) #12 82.29 Collecting tensorboardX==1.8 (from -r /tmp/requirements.txt (line 4)) #12 82.50 Downloading https://files.pythonhosted.org/packages/c3/12/dcaf67e1312475b26db9e45e7bb6f32b540671a9ee120b3a72d9e09bc517/tensorboardX-1.8-py2.py3-none-any.whl (216kB) #12 82.57 Collecting tensorflow (from -r /tmp/requirements.txt (line 5)) #12 83.12 Downloading https://files.pythonhosted.org/packages/de/f0/96fb2e0412ae9692dbf400e5b04432885f677ad6241c088ccc5fe7724d69/tensorflow-1.14.0-cp36-cp36m-manylinux1_x86_64.whl (109.2MB) #12 95.24 Collecting spacy (from -r /tmp/requirements.txt (line 6)) #12 95.81 Downloading https://files.pythonhosted.org/packages/65/01/fd65769520d4b146d92920170fd00e01e826cda39a366bde82a87ca249db/spacy-3.0.5.tar.gz (7.0MB) #12 97.41 Complete output from command python setup.py egg_info: #12 97.41 Traceback (most recent call last): #12 97.41 File "<string>", line 1, in <module> #12 97.41 File "/tmp/pip-build-cc3a804w/spacy/setup.py", line 5, in <module> #12 97.41 import numpy #12 97.41 ModuleNotFoundError: No module named 'numpy' #12 97.41 #12 97.41 ---------------------------------------- #12 98.11 Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-cc3a804w/spacy/ ``` @Håken Lid The error I got if I `RUN pip3 install numpy` right before `RUN pip3 install -r tmp/requirements`: ``` => [ 8/10] RUN pip3 install numpy 10.1s => ERROR [ 9/10] RUN pip3 install -r /tmp/requirements.txt 112.4s ------ > [ 9/10] RUN pip3 install -r /tmp/requirements.txt: #13 1.067 Requirement already satisfied: numpy in 
/usr/local/lib/python3.6/dist-packages (from -r /tmp/requirements.txt (line 1)) #13 1.074 Collecting torch (from -r /tmp/requirements.txt (line 2)) #13 1.656 Downloading https://files.pythonhosted.org/packages/46/99/8b658e5095b9fb02e38ccb7ecc931eb1a03b5160d77148aecf68f8a7eeda/torch-1.8.0-cp36-cp36m-manylinux1_x86_64.whl (735.5MB) #13 96.46 Collecting pytorch-ignite (from -r /tmp/requirements.txt (line 3)) #13 97.02 Downloading https://files.pythonhosted.org/packages/f8/d3/640f70d69393b415e6a29b27c735047ad86267921ad62682d1d756556d48/pytorch_ignite-0.4.4-py3-none-any.whl (200kB) #13 97.07 Collecting transformers==2.5.1 (from -r /tmp/requirements.txt (line 4)) #13 97.32 Downloading https://files.pythonhosted.org/packages/13/33/ffb67897a6985a7b7d8e5e7878c3628678f553634bd3836404fef06ef19b/transformers-2.5.1-py3-none-any.whl (499kB) #13 97.43 Collecting tensorboardX==1.8 (from -r /tmp/requirements.txt (line 5)) #13 97.70 Downloading https://files.pythonhosted.org/packages/c3/12/dcaf67e1312475b26db9e45e7bb6f32b540671a9ee120b3a72d9e09bc517/tensorboardX-1.8-py2.py3-none-any.whl (216kB) #13 97.76 Collecting tensorflow (from -r /tmp/requirements.txt (line 6)) #13 98.27 Downloading https://files.pythonhosted.org/packages/de/f0/96fb2e0412ae9692dbf400e5b04432885f677ad6241c088ccc5fe7724d69/tensorflow-1.14.0-cp36-cp36m-manylinux1_x86_64.whl (109.2MB) #13 109.6 Collecting spacy (from -r /tmp/requirements.txt (line 7)) #13 110.0 Downloading https://files.pythonhosted.org/packages/65/01/fd65769520d4b146d92920170fd00e01e826cda39a366bde82a87ca249db/spacy-3.0.5.tar.gz (7.0MB) #13 111.6 Complete output from command python setup.py egg_info: #13 111.6 Traceback (most recent call last): #13 111.6 File "<string>", line 1, in <module> #13 111.6 File "/tmp/pip-build-t6n57csv/spacy/setup.py", line 10, in <module> #13 111.6 from Cython.Build import cythonize #13 111.6 ModuleNotFoundError: No module named 'Cython' #13 111.6 #13 111.6 ---------------------------------------- #13 112.3 Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-t6n57csv/spacy/ ------ executor failed running [/bin/sh -c pip3 install -r /tmp/requirements.txt]: exit code: 1 ``` requirements.txt: ``` torch pytorch-ignite transformers==2.5.1 tensorboardX==1.8 tensorflow # for tensorboardX spacy ``` Dockerfile: ``` FROM ubuntu:18.04 MAINTAINER Loreto Parisi loretoparisi@gmail.com ######################################## BASE SYSTEM # set noninteractive installation ARG DEBIAN_FRONTEND=noninteractive RUN apt-get update && apt-get install -y apt-utils RUN apt-get install -y --no-install-recommends \ build-essential \ pkg-config \ tzdata \ curl ######################################## PYTHON3 RUN apt-get install -y \ python3 \ python3-pip # set local timezone RUN ln -fs /usr/share/zoneinfo/America/New_York /etc/localtime && \ dpkg-reconfigure --frontend noninteractive tzdata # transfer-learning-conv-ai ENV PYTHONPATH /usr/local/lib/python3.6 COPY . ./ COPY requirements.txt /tmp/requirements.txt RUN pip3 install -r /tmp/requirements.txt # model zoo RUN mkdir models && \ curl https://s3.amazonaws.com/models.huggingface.co/transfer-learning-chatbot/finetuned_chatbot_gpt.tar.gz > models/finetuned_chatbot_gpt.tar.gz && \ cd models/ && \ tar -xvzf finetuned_chatbot_gpt.tar.gz && \ rm finetuned_chatbot_gpt.tar.gz CMD ["bash"] ``` Steps I ran so far: ``` git clone https://github.com/huggingface/transfer-learning-conv-ai cd transfer-learning-conv-ai pip install -r requirements.txt python -m spacy download en docker build -t convai . ```
2021/03/12
[ "https://Stackoverflow.com/questions/66602656", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11210924/" ]
It seems that pip does not install the pre-built wheel, but instead tries to build *spacy* from source. This is a fragile process and requires extra dependencies. To avoid this, you should ensure that the Python packages *pip*, *wheel* and *setuptools* are up to date before proceeding with the installation. ``` # replace RUN pip3 install -r /tmp/requirements.txt RUN python3 -m pip install --upgrade pip setuptools wheel RUN python3 -m pip install -r /tmp/requirements.txt ```
Did you try adding numpy into the requirements.txt? It looks to me that it is missing.
13,820
47,186,849
Given an iterable consisting of a finite set of elements:

> (a, b, c, d)

as an example, what would be a Pythonic way to generate the following (pseudo) ordered pairs from the above iterable:

```
ab
ac
ad
bc
bd
cd
```

A trivial way would be to use for loops, but I'm wondering if there is a Pythonic way of generating this list from the iterable above?
2017/11/08
[ "https://Stackoverflow.com/questions/47186849", "https://Stackoverflow.com", "https://Stackoverflow.com/users/962891/" ]
Try using [combinations](https://docs.python.org/2/library/itertools.html#itertools.combinations).

```
import itertools
combinations = itertools.combinations('abcd', n)
```

will give you an iterator; you can loop over it or convert it into a list with `list(combinations)`.

In order to only include pairs as in your example, you can pass 2 as the argument:

```
combinations = itertools.combinations('abcd', 2)

>>> print list(combinations)
[('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
```
You can accomplish this in a list comprehension.

```
data = [1, 2, 3, 4]
# pair each element with every element that comes after it,
# so self-pairs like (1, 1) are excluded
output = [(data[n], m) for n in range(len(data)) for m in data[n + 1:]]
print(repr(output))
```

Pulling the comprehension apart, it's making a tuple of one member of the iterable and each member of a list composed of all *later* members of the iterable. `data[n + 1:]` tells Python to take all members after the nth (`data[n:]` would also pair each element with itself, and `next` should be avoided as a variable name because it shadows the built-in).

[Here it is in action.](https://repl.it/NyYe)
13,821
54,551,016
I am trying to use a Cloud SQL / MySQL instance for my App Engine account. The app is a Python django-2.1.5 App Engine app. I have created an instance of MySQL in the Google cloud. I have added the following in my app.yaml file, copied from the SQL instance details:

```
beta_settings:
  cloud_sql_instances: <INSTANCE_CONNECTION_NAME>=tcp:<TCP_PORT>
```

* I have given my App Engine project xxx-app's owner `xxx-app@appspot.gserviceaccount.com` the `Cloud SQL Client` role. I have created a DB user account specific for the app `XYZ` which can connect to all hosts (\* option)
* My connection details in settings.py are the following:

```
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'my-db',
        'USER': 'appengine',
        'PASSWORD': 'xxx',
        'HOST': '111.111.11.11', # used actual ip
        'PORT': '3306'
    }
}
```

* I have also tried as per <https://github.com/GoogleCloudPlatform/appengine-django-skeleton/blob/master/mysite/settings.py>:

```
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'HOST': '/cloudsql/<your-project-id>:<your-cloud-sql-instance>',
        'NAME': '<your-database-name>',
        'USER': 'root',
    }
}
```

* I cannot connect from my local machine either. However, if I do an `Add Network` with my local IP and then try to connect, the local connection works. The app runs fine locally after adding the network of the local IP address using CIDR notation.

My problems:

* I am unable to connect to the Cloud SQL instance without adding the App Engine assigned IP address. It gives me an error:

```
OperationalError: (2003, "Can't connect to MySQL server on '0.0.0.0' ([Errno 111] Connection refused)")
```

* Where can I find App Engine's assigned IP address? I don't mind even if it is temporary. I understand that if I need a static IP address I will have to create a Compute VM instance.
2019/02/06
[ "https://Stackoverflow.com/questions/54551016", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3204942/" ]
App Engine doesn't have any guarantees regarding IP address of a particular instance, and may change at any time. Since it is a Serverless platform, it abstracts away the infrastructure to allow you to focus on your app. There are two options when using App Engine Flex: Unix domain socket and TCP port. Which one App Engine provides for you depends on how you specify it in your app.yaml: * `cloud_sql_instances: <INSTANCE_CONNECTION_NAME>` provides a Unix socket at `/cloudsql/<INSTANCE_CONNECTION_NAME>` * `cloud_sql_instances: <INSTANCE_CONNECTION_NAME>=tcp:<TCP_PORT>` provides a local tcp port (`127.0.0.1:<TCP_PORT>`). You can find more information about this on the [Connecting from App Engine](https://cloud-dot-devsite.googleplex.com/sql/docs/postgres/connect-app-engine#connecting_the_flexible_environment_to) page.
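Mapped onto the Django settings from the question, the two options would look roughly as below; the instance connection name, database name, and credentials are placeholders.

```
# Unix socket variant (app.yaml: cloud_sql_instances: <INSTANCE_CONNECTION_NAME>)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'HOST': '/cloudsql/<INSTANCE_CONNECTION_NAME>',  # socket path, no PORT
        'NAME': 'my-db',
        'USER': 'appengine',
        'PASSWORD': 'xxx',
    }
}

# TCP variant (app.yaml: cloud_sql_instances: <INSTANCE_CONNECTION_NAME>=tcp:3306)
DATABASES['default'].update({'HOST': '127.0.0.1', 'PORT': '3306'})
```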
I've encountered this issue before and after hours of scratching my head, all I needed to do was enable "Cloud SQL Admin API" and the deployed site connected to the database. This also sets permissions on your GAE service account for cloud proxy to connect to your GAE service.
13,823
60,765,225
Right now I use

```
foo()
import pdb; pdb.set_trace()
bar()
```

for debugging purposes. It's a tad cumbersome; is there a better way to implement a breakpoint in Python?

edit: I understand there are IDEs that are useful, but I want to do this programmatically. Furthermore, all the documentation I can find is for `pdb.set_trace()` instead of `breakpoint()`, so I would appreciate an explanation of how to use the latter.
2020/03/19
[ "https://Stackoverflow.com/questions/60765225", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11898750/" ]
Personally I prefer to use my editor for debugging. I can visually set break points and use keyboard shortcuts to step through my code and evaluate variables and expressions. Visual Studio Code, Eclipse and PyCharm are all popular choices. If you insist on using the command line, you can use the interactive debugger. See the [pdb documentation](https://docs.python.org/3/library/pdb.html) for details.
Python 3.7 added a [breakpoint() function](https://docs.python.org/3/library/functions.html#breakpoint) that, by default, performs the same job without the explicit import (to keep it lower overhead). It's [quite configurable](https://docs.python.org/3/whatsnew/3.7.html#whatsnew37-pep553) though, so you can have it do other things.
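As a minimal illustration (the file and function names here are made up):

```
# app.py -- breakpoint() drops into pdb by default (PEP 553),
# with no import needed at the call site.
def divide(a, b):
    breakpoint()  # same effect as: import pdb; pdb.set_trace()
    return a / b

if __name__ == "__main__":
    print(divide(4, 2))
```

Running `PYTHONBREAKPOINT=0 python app.py` skips the breakpoint entirely, and pointing `PYTHONBREAKPOINT` at another callable (e.g. `ipdb.set_trace`, if installed) swaps in a different debugger.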
13,824
50,885,291
Trying to upgrade pip in my terminal. Here's my code:

```
(env) macbook-pro83:zappatest zorgan$ pip install --upgrade pip
Could not fetch URL https://pypi.python.org/simple/pip/: There was a problem confirming the ssl certificate: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:645) - skipping
Requirement already up-to-date: pip in ./env/lib/python3.5/site-packages
You are using pip version 8.1.2, however version 10.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
```

Any idea what the problem is? I also get the same error `There was a problem confirming the ssl certificate` when I perform `pip install django`.

**Edit**

`pip install --upgrade pip -vvv` returns:

```
1 location(s) to search for versions of pip:
* https://pypi.python.org/simple/pip/
Getting page https://pypi.python.org/simple/pip/
Looking up "https://pypi.python.org/simple/pip/" in the cache
Returning cached "301 Moved Permanently" response (ignoring date and etag information)
Looking up "https://pypi.org/simple/pip/" in the cache
Current age based on date: 23811
Freshness lifetime from max-age: 600
Freshness lifetime from request max-age: 600
Starting new HTTPS connection (1): pypi.org
Could not fetch URL https://pypi.python.org/simple/pip/: There was a problem confirming the ssl certificate: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:645) - skipping
Installed version (8.1.2) is most up-to-date (past versions: none)
Requirement already up-to-date: pip in ./env/lib/python3.5/site-packages
Cleaning up...
You are using pip version 8.1.2, however version 10.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
```
2018/06/16
[ "https://Stackoverflow.com/questions/50885291", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6733153/" ]
a. Check your system date and time and see whether they are correct.

b. If the first solution doesn't fix it, try using a lower-security method:

```
pip install --index-url=http://pypi.python.org/simple/ linkchecker
```

This bypasses HTTPS and uses HTTP instead; try this workaround only if you are in a hurry.

c. Try downgrading pip to a version that does not use SSL verification by:

```
pip install pip==1.2.1
```

and then upgrade pip back again to the newer version by:

```
pip install --upgrade pip
```

d. If none of the above works, uninstall pip and reinstall it.
You can get more details about why the upgrade failed by using `$ pip install --upgrade pip -vvv`; the extra output will help you debug it.

---

Try `pip --trusted-host pypi.python.org install --upgrade pip`. Maybe useful.

---

Another solution: `$ pip install certifi`, then run the install.
13,825
1,195,494
Was wondering how you'd do the following in Windows: From a c shell script (extension csh), I'm running a Python script within an 'eval' method so that the output from the script affects the shell environment. Looks like this: ``` eval `python -c "import sys; run_my_code_here(); "` ``` Was wondering how I would do something like the eval statement in Windows using Windows' built in CMD shell. I want to run a Python script within a Windows script and have the script run what the Python script prints out. \*\* update: specified interest in running from CMD shell.
2009/07/28
[ "https://Stackoverflow.com/questions/1195494", "https://Stackoverflow.com", "https://Stackoverflow.com/users/101126/" ]
If it's in `cmd.exe`, using a temporary file is the only option [that I know of]: ``` python -c "print(\"Hi\")" > temp.cmd call temp.cmd del temp.cmd ```
(Making some guesses where details are missing from your question)

In CMD, when a batch script modifies the environment, the default behavior is that it modifies the environment of the CMD process that is executing it. Now, if you have a batch script that calls another batch script, there are 3 ways to do it.

1. execute the batch file directly:

```
REM call q.bat
q.bat
REM this line never runs
```

Usually you don't want this, because it won't return to the calling batch script. This is more like `goto` than `gosub`. The CMD process just switches from one script to another.

2. execute with `call`:

```
REM call q.bat
CALL q.bat
REM changes that q.bat affects will appear here.
```

This is the most common way for one batch file to call another. When `q.bat` exits, control will return to the caller. Since this is the same CMD process, changes to the environment will still be there.

Note: If `q.bat` uses the `EXIT` statement, it can cause the CMD process to terminate, without returning control to the calling script.

Note 2: If `q.bat` uses `EXIT /B`, then the CMD process will not exit. This is useful for setting `ERRORLEVEL`.

3. Execute in a new CMD process:

```
REM call q.bat
CMD /C q.bat
REM environment changes in q.bat don't affect me
```

Since q.bat runs in a new CMD process, it affects the environment of that process, and not the CMD that the caller is running in.

Note: If `q.bat` uses `EXIT`, it won't terminate the process of the caller.

---

The `SETLOCAL` CMD command will create a new environment for the current script. Changes in that environment won't affect the caller. In general, `SETLOCAL` is a good practice, to avoid leaking environment changes by accident.

To use `SETLOCAL` and still push environment changes to the calling script, end the script with:

```
ENDLOCAL && SET X=%X% && SET Y=%Y%
```

This will push the values of X and Y to the parent environment.

---

If on the other hand you want to run another process (not a CMD script) and have it affect the current script's environment, then have the tool generate a batch file that makes the changes you want, then execute that batch file.

```
REM q.exe will write %TEMP%\runme.cmd, which looks like:
REM set X=Y
q.exe
call "%TEMP%\runme.cmd"
```
13,828
36,510,859
I'm trying to use the CVXOPT qp solver to compute the Lagrange Multipliers for a Support Vector Machine

```
def svm(X, Y, c):
    m = len(X)
    P = matrix(np.dot(Y, Y.T) * np.dot(X, X.T))
    q = matrix(np.ones(m) * -1)
    g1 = np.asarray(np.diag(np.ones(m) * -1))
    g2 = np.asarray(np.diag(np.ones(m)))
    G = matrix(np.append(g1, g2, axis=0))
    h = matrix(np.append(np.zeros(m), (np.ones(m) * c), axis =0))
    A = np.reshape((Y.T), (1,m))
    b = matrix([0])
    print (A).shape
    A = matrix(A)
    sol = solvers.qp(P, q, G, h, A, b)
    print sol
```

Here `X` is a `1000 X 2` matrix and `Y` has the same number of labels. The solver throws the following error:

```
$ python svm.py
(1, 1000)
Traceback (most recent call last):
  File "svm.py", line 35, in <module>
    svm(X, Y, 50)
  File "svm.py", line 29, in svm
    sol = solvers.qp(P, q, G, h, A, b)
  File "/usr/local/lib/python2.7/site-packages/cvxopt/coneprog.py", line 4468, in qp
    return coneqp(P, q, G, h, None, A, b, initvals, options = options)
  File "/usr/local/lib/python2.7/site-packages/cvxopt/coneprog.py", line 1914, in coneqp
    %q.size[0])
TypeError: 'A' must be a 'd' matrix with 1000 columns
```

I printed the shape of A and it's a `(1,1000)` matrix after reshaping from a vector. What exactly is causing this error?
2016/04/08
[ "https://Stackoverflow.com/questions/36510859", "https://Stackoverflow.com", "https://Stackoverflow.com/users/681410/" ]
Your matrix elements have to be of the floating-point type as well. So the error is removed by using `A = A.astype('float')` to cast it.
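Applied to the structure of the question (here with toy data standing in for the real `X` and `Y`), the cast might look like the sketch below. Note that `b` may need the same treatment, since `matrix([0])` creates an integer ('i') matrix rather than a double ('d') one.

```
import numpy as np
from cvxopt import matrix, solvers

# Toy stand-ins for the question's data: integer labels and random points.
m = 4
Y = np.array([[1], [-1], [1], [-1]])
X = np.random.rand(m, 2)

P = matrix(np.dot(Y, Y.T) * np.dot(X, X.T))
q = matrix(np.ones(m) * -1)
G = matrix(np.append(np.diag(np.ones(m) * -1), np.diag(np.ones(m)), axis=0))
h = matrix(np.append(np.zeros(m), np.ones(m) * 50, axis=0))

A = matrix(np.reshape(Y.T, (1, m)).astype('float'))  # typecode becomes 'd'
b = matrix(np.zeros(1))                              # float zero, not matrix([0])
sol = solvers.qp(P, q, G, h, A, b)
```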
The error - `TypeError: 'A' must be a 'd' matrix with 1000 columns` - occurs under two conditions, namely:

1. if the typecode is not equal to '`d`'
2. if `A.size[1] != c.size[0]`.

Check for these conditions.
13,829
51,940,568
I'm trying to learn Python with [CS Dojo's tutorial](https://www.youtube.com/watch?v=NSbOtYzIQI0). I'm currently working on the task he gave us, which was building a "miles to kilometers converter". The task itself was very simple, but I want to build a program that does a little more than that. I want the output to be like "The distance" + "name (ex. from NewYork to Washington)" + " is" + (Value) + " in kilometers." but it is giving me the same error message over and over.

My code is:

```
# miles variables
name1 = "from_NewYork_to_Washington"
miles1 = 20

name2 = "from_Boston_to_Dallas"
miles2 = 30

name3 = "from_Tokyo_to_Seoul"
miles3 = 40

#define function
def conver(name, distance):
    d = distance * 1.6
    print("result: ")
    print(d)
    return "The distance" + name + "is" + d + "in kilometers."

# result variables
result1 = conver(name1, miles1)
result2 = conver(name2, miles2)
result3 = conver(name3, miles3)
```

The error message is the following:

```
result: 
32.0
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-135-4d54c2c21df2> in <module>()
     17 
     18 # result variables
---> 19 result1 = conver(name1, miles1)
     20 result2 = conver(name2, miles2)
     21 result3 = conver(name3, miles3)

<ipython-input-135-4d54c2c21df2> in conver(name, distance)
     14     print("result: ")
     15     print(d)
---> 16     return "The distance" + name + "is" + d + "in kilometers."
     17 
     18 # result variables

TypeError: must be str, not float
```

Thanks.
2018/08/21
[ "https://Stackoverflow.com/questions/51940568", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10252435/" ]
The value **d** needs to be converted into a string before it can be concatenated with the rest of the return statement: ``` d = str(distance * 1.6) ```
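For completeness, a sketch of the function from the question with that fix applied; note the extra spaces inside the return string, which the original concatenation was also missing:

```
def conver(name, distance):
    d = distance * 1.6
    print("result: ")
    print(d)
    # str(d) turns the float into a string so it can be concatenated
    return "The distance " + name + " is " + str(d) + " in kilometers."

print(conver("from_NewYork_to_Washington", 20))
```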
In Python 3.6+, f-strings save you from typing ugly quotes and plus signs and improve readability a lot:

```
return f"The distance {name} is {d} in kilometers."
```

You can also do formatting easily inside f-strings:

```
return f"The distance {name.title()} is {d:.2f} in kilometers."
```
13,834
9,885,071
I am reading a text file which I guess is encoded in utf-8. Some lines can only be decoded as latin-1 though. I would say this is very bad practice, but nevertheless I have to cope with it. I have the following questions: First: how can I "guess" the encoding of a file? I have tried `enca`, but in my machine: ``` enca --list languages belarussian: CP1251 IBM866 ISO-8859-5 KOI8-UNI maccyr IBM855 KOI8-U bulgarian: CP1251 ISO-8859-5 IBM855 maccyr ECMA-113 czech: ISO-8859-2 CP1250 IBM852 KEYBCS2 macce KOI-8_CS_2 CORK estonian: ISO-8859-4 CP1257 IBM775 ISO-8859-13 macce baltic croatian: CP1250 ISO-8859-2 IBM852 macce CORK hungarian: ISO-8859-2 CP1250 IBM852 macce CORK lithuanian: CP1257 ISO-8859-4 IBM775 ISO-8859-13 macce baltic latvian: CP1257 ISO-8859-4 IBM775 ISO-8859-13 macce baltic polish: ISO-8859-2 CP1250 IBM852 macce ISO-8859-13 ISO-8859-16 baltic CORK russian: KOI8-R CP1251 ISO-8859-5 IBM866 maccyr slovak: CP1250 ISO-8859-2 IBM852 KEYBCS2 macce KOI-8_CS_2 CORK slovene: ISO-8859-2 CP1250 IBM852 macce CORK ukrainian: CP1251 IBM855 ISO-8859-5 CP1125 KOI8-U maccyr chinese: GBK BIG5 HZ none: ``` Which is not enough to detect latin-1. By the way: how can I increase the list of available languages for `enca`? Second: is it possible to have a file partially encoded in different encodings? I think that, from a technical point of view, this is very much possible: just use different codecs for different parts of the file. Have you ever seen this? How would you handle reading such a file? Would you try to decode each line with a different encoding? How would you go about this? I am actually using python, and an example of handling multiple-encoding would be appreciated. Third: how do you usually go about files for which you do not know the encoding that they are using? I encounter this problem often when downloading text files from different sources. They are usually CSV files exported from spreadsheets (I guess excel), but depending on the exporting parameters, they are using one encoding or another, without hints about which one has been chosen, so I am forced to guess.
2012/03/27
[ "https://Stackoverflow.com/questions/9885071", "https://Stackoverflow.com", "https://Stackoverflow.com/users/647991/" ]
1. Guess the text encoding of a file - in what language? Python, Perl, Java, C? Note that this sometimes produces unexpected results (e.g. [the "bush hid the facts" bug](http://en.wikipedia.org/wiki/Bush_hid_the_facts)).
2. Sometimes a file will have text encoded in different encodings. Either the file format specifies different encodings for different parts of the file, or the format permits specifying an encoding switch. For example, in MIME the header keys are ASCII, the header values can have embedded sections encoded in other encodings, and the encoding of the content is declared in the Content-Type header.
3. If you don't know the encoding you just have to read bytes from the file (in a binary-safe manner) into a buffer until you can determine the encoding.
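For point 1 in Python specifically, one option is the third-party `chardet` package (`pip install chardet`); keep in mind its answer is a statistical guess, not a guarantee. A small sketch:

```
import chardet

with open("file.txt", "rb") as f:
    raw = f.read()

guess = chardet.detect(raw)  # e.g. {'encoding': 'utf-8', 'confidence': 0.99, ...}
# fall back to latin-1 if detection fails; every byte decodes in latin-1
text = raw.decode(guess["encoding"] or "latin-1", errors="replace")
print(guess)
```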
When you say "partially encoded in different encodings", are you sure it's not just UTF-8? UTF-8 mixes single-byte, 2-byte and more-byte encodings, depending on the complexity of the character, so part of it looks like ASCII / latin-1 and part of it looks like Unicode. <http://www.joelonsoftware.com/articles/Unicode.html> EDIT: for guessing encodings of downloaded plain-text files, I usually open them in Chrome or Firefox. They support lots of encodings and are very good at choosing the right one. Then, it is possible to copy the content into a Unicode-encoded file from there.
13,835
70,463,263
I am not able to serialize a custom object in Python 3. Below is the explanation and code:

```
packages= []

data = {
    "simplefield": "value1",
    "complexfield": packages,
}
```

where `packages` is a list of custom `Library` objects. The `Library` class is as below (I have also sub-classed `json.JSONEncoder` but it is not helping):

```
class Library(json.JSONEncoder):
    def __init__ (self, name):
        print("calling Library constructor ")
        self.name = name

    def default(self, object):
        print("calling default method ")
        if isinstance(object, Library):
            return {
                "name": object.name
            }
        else:
            raiseExceptions("Object is not instance of Library")
```

Now I am calling `json.dumps(data)` but it is throwing the below exception:

```
TypeError: Object of type `Library` is not JSON serializable
```

It seems `"calling default method"` is not printed, meaning the `Library.default` method is not called. Can anybody please help here? I have also referred to [Serializing class instance to JSON](https://stackoverflow.com/questions/10252010/serializing-class-instance-to-json) but it is not helping much.
2021/12/23
[ "https://Stackoverflow.com/questions/70463263", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1281182/" ]
Inheriting `json.JSONEncoder` will not make a class serializable. You should define your encoder separately and then provide the encoder instance in the `json.dumps` call. See an example approach below ``` import json from typing import Any class Library: def __init__(self, name): print("calling Library constructor ") self.name = name def __repr__(self): # I have just created a serialized version of the object return '{"name": "%s"}' % self.name # Create your custom encoder class CustomEncoder(json.JSONEncoder): def default(self, o: Any) -> Any: # Encode differently for different types return str(o) if isinstance(o, Library) else super().default(o) packages = [Library("package1"), Library("package2")] data = { "simplefield": "value1", "complexfield": packages, } #Use your encoder while serializing print(json.dumps(data, cls=CustomEncoder)) ``` Output was like: ``` {"simplefield": "value1", "complexfield": ["{\"name\": \"package1\"}", "{\"name\": \"package2\"}"]} ```
You can use the `default` parameter of `json.dumps`:

```
def default(obj):
    if isinstance(obj, Library):
        return {"name": obj.name}
    raise TypeError(f"Can't serialize {obj!r}.")

json.dumps(data, default=default)
```
13,836
57,584,229
I tried the following python code to assign a new value to a list. ``` a,b,c=[],[],[] def change(): return [1] for i in [a,b,c]: i=change() print(a) ``` The output is `[]`, but what I need is `[1]`
2019/08/21
[ "https://Stackoverflow.com/questions/57584229", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4917205/" ]
What you're doing here is re-assigning the variable `i` within the loop, but you're not actually changing `a`, `b`, or `c`. As an example, what would you expect the following source code to output?

```py
a = []
i = a
i = [1]
print(a)
```

Here, you are reassigning `i` after you've assigned `a` to it. `a` itself is not changing when you perform the `i = [1]` operation, and thus `[]` will be printed. This problem is the same one you're seeing in your loop.

You likely want something like this: `a, b, c = (change() for i in (a, b, c))`
You created a variable called i; to it you assigned (in turn) a, then [1], then b, then [1], then c, then [1]. Then you printed a, unaltered. If you want to change a, you need to assign to a. That is:

```
a = [1]
```
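If the goal really is to change the original lists through the loop variable, in-place mutation works where rebinding does not. A small sketch:

```
a, b, c = [], [], []

def change():
    return [1]

for i in (a, b, c):
    i[:] = change()  # slice assignment mutates the existing list object

print(a)  # [1]
```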
13,837
38,193,878
I am trying to automate my project create process and would like as part of it to create a new jupyter notebook and populate it with some cells and content that I usually have in every notebook (i.e., imports, titles, etc.) Is it possible to do this via python?
2016/07/05
[ "https://Stackoverflow.com/questions/38193878", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5983171/" ]
You can do it using **nbformat**. Below is an example taken from [Creating an IPython Notebook programatically](https://gist.github.com/fperez/9716279):

```py
import nbformat as nbf

nb = nbf.v4.new_notebook()
text = """\
# My first automatic Jupyter Notebook
This is an auto-generated notebook."""

code = """\
%pylab inline
hist(normal(size=2000), bins=50);"""

nb['cells'] = [nbf.v4.new_markdown_cell(text),
               nbf.v4.new_code_cell(code)]
fname = 'test.ipynb'

with open(fname, 'w') as f:
    nbf.write(nb, f)
```
This is absolutely possible. Notebooks are just json files. This [notebook](https://gist.github.com/anonymous/1c79fa2d43f656b4463d7ce7e2d1a03b "notebook") for example is just: ``` { "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Header 1" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "ExecuteTime": { "end_time": "2016-09-16T16:28:53.333738", "start_time": "2016-09-16T16:28:53.330843" }, "collapsed": false }, "outputs": [], "source": [ "def foo(bar):\n", " # Standard functions I want to define.\n", " pass" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Header 2" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true }, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.10" }, "toc": { "toc_cell": false, "toc_number_sections": true, "toc_threshold": 6, "toc_window_display": false } }, "nbformat": 4, "nbformat_minor": 0 } ``` While messy it's just a list of cell objects. I would probably create my template in an actual notebook and save it rather than trying to generate the initial template by hand. If you want to add titles or other variables programmatically, you could always copy the raw notebook text in the \*.ipynb file into a python file and insert values using string formatting.
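Following that idea, a bare-bones sketch that writes such a file directly with the `json` module (the cell contents are illustrative, and you lose nbformat's validation):

```
import json

nb = {
    "cells": [
        {"cell_type": "markdown", "metadata": {}, "source": ["# Header 1"]},
        {"cell_type": "code", "execution_count": None, "metadata": {},
         "outputs": [], "source": ["def foo(bar):\n", "    pass"]},
    ],
    "metadata": {},
    "nbformat": 4,
    "nbformat_minor": 0,
}

with open("generated.ipynb", "w") as f:
    json.dump(nb, f, indent=1)
```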
13,846
55,411,549
**EDIT:** I am trying to manipulate JSON files in Python. In my data some polygons have multiple related information: coordinates (`LineString`) and **area percent** and **area** (`Text` and `Area` in `Point`), I want to combine them to a single JSON object. As an example, the data from files are as follows: ``` data = { "type": "FeatureCollection", "name": "entities", "features": [{ "type": "Feature", "properties": { "Layer": "0", "SubClasses": "AcDbEntity:AcDbBlockReference", "EntityHandle": "2F1" }, "geometry": { "type": "LineString", "coordinates": [ [61.971069681118479, 36.504485105673659], [46.471068755199667, 36.504485105673659], [46.471068755199667, 35.954489281866685], [44.371068755199758, 35.954489281866685], [44.371068755199758, 36.10448936390457], [43.371069617387093, 36.104489150107824], [43.371069617387093, 23.904496401184584], [48.172716774891342, 23.904496401184584], [48.171892994728751, 17.404489374370311], [61.17106949647404, 17.404489281863786], [61.17106949647404, 19.404489281863786], [61.971069689453991, 19.404489282256687], [61.971069681118479, 36.504485105673659] ] } }, { "type": "Feature", "properties": { "Layer": "0", "SubClasses": "AcDbEntity:AcDbMText", "EntityHandle": "2F1", "Text": "6%" }, "geometry": { "type": "Point", "coordinates": [49.745686139884583, 28.11445704760262, 0.0] } }, { "type": "Feature", "properties": { "Layer": "0", "SubClasses": "AcDbEntity:AcDbMText", "EntityHandle": "2F1", "Area": "100" }, "geometry": { "type": "Point", "coordinates": [50.216857362443989, 63.981197759829229, 0.0] } }, { "type": "Feature", "properties": { "Layer": "0", "SubClasses": "AcDbEntity:AcDbBlockReference", "EntityHandle": "2F7" }, "geometry": { "type": "LineString", "coordinates": [ [62.37106968111857, 36.504489398648715], [62.371069689452725, 19.404489281863786], [63.171069496474047, 19.404489281863786], [63.171069496474047, 17.404489281863786], [77.921070051947027, 17.404489281863786], [77.921070051947027, 19.504489281855054], [78.671070051947027, 19.504489281855054], [78.671070051897914, 36.504485105717322], [62.37106968111857, 36.504489398648715] ] } }, { "type": "Feature", "properties": { "Layer": "0", "SubClasses": "AcDbEntity:AcDbMText", "EntityHandle": "2F7", "Text": "5.8%" }, "geometry": { "type": "Point", "coordinates": [67.27548061311245, 28.11445704760262, 0.0] } } ] } ``` I want to combine `Point`'s `Text` and `Area` key and values to `LineString` based on `EntityHandle`'s values, and also delete `Point` lines. 
The expected output is: ``` { "type": "FeatureCollection", "name": "entities", "features": [{ "type": "Feature", "properties": { "Layer": "0", "SubClasses": "AcDbEntity:AcDbBlockReference", "EntityHandle": "2F1", "Text": "6%", "Area": "100" }, "geometry": { "type": "LineString", "coordinates": [ [61.971069681118479, 36.504485105673659], [46.471068755199667, 36.504485105673659], [46.471068755199667, 35.954489281866685], [44.371068755199758, 35.954489281866685], [44.371068755199758, 36.10448936390457], [43.371069617387093, 36.104489150107824], [43.371069617387093, 23.904496401184584], [48.172716774891342, 23.904496401184584], [48.171892994728751, 17.404489374370311], [61.17106949647404, 17.404489281863786], [61.17106949647404, 19.404489281863786], [61.971069689453991, 19.404489282256687], [61.971069681118479, 36.504485105673659] ] } }, { "type": "Feature", "properties": { "Layer": "0", "SubClasses": "AcDbEntity:AcDbBlockReference", "EntityHandle": "2F7", "Text": "5.8%" }, "geometry": { "type": "LineString", "coordinates": [ [62.37106968111857, 36.504489398648715], [62.371069689452725, 19.404489281863786], [63.171069496474047, 19.404489281863786], [63.171069496474047, 17.404489281863786], [77.921070051947027, 17.404489281863786], [77.921070051947027, 19.504489281855054], [78.671070051947027, 19.504489281855054], [78.671070051897914, 36.504485105717322], [62.37106968111857, 36.504489398648715] ] } } ] } ``` Is it possible to get result above in Python? Thanks. **Updated solution**, thanks to @dodopy: ``` import json features = data["features"] point_handle_text = { i["properties"]["EntityHandle"]: i["properties"]["Text"] for i in features if i["geometry"]["type"] == "Point" } point_handle_area = { i["properties"]["EntityHandle"]: i["properties"]["Area"] for i in features if i["geometry"]["type"] == "Point" } combine_features = [] for i in features: if i["geometry"]["type"] == "LineString": i["properties"]["Text"] = point_handle_text.get(i["properties"]["EntityHandle"]) combine_features.append(i) data["features"] = combine_features combine_features = [] for i in features: if i["geometry"]["type"] == "LineString": i["properties"]["Area"] = point_handle_area.get(i["properties"]["EntityHandle"]) combine_features.append(i) data["features"] = combine_features with open('test.geojson', 'w+') as f: json.dump(data, f, indent=2) ``` But I get an error: ``` Traceback (most recent call last): File "<ipython-input-131-d132c8854a9c>", line 6, in <module> for i in features File "<ipython-input-131-d132c8854a9c>", line 7, in <dictcomp> if i["geometry"]["type"] == "Point" KeyError: 'Text' ```
2019/03/29
[ "https://Stackoverflow.com/questions/55411549", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8410477/" ]
Can you try the following: ``` >>> df = pd.DataFrame([[1, 2, 4], [4, 5, 6], [7, '[8', 9]]) >>> df = df.astype('str') >>> df 0 1 2 0 1 2 4 1 4 5 6 2 7 [8 9 >>> df[df.columns[[df[i].str.contains('[', regex=False).any() for i in df.columns]]] 1 0 2 1 5 2 [8 >>> ```
Use this: ``` s=df.applymap(lambda x: '[' in x).any() print(df[s[s].index]) ``` Output: ``` col2 col4 0 b [d 1 [f h 2 j l 3 n [pa ```
13,847
62,461,130
I am trying to update a Lex bot using the python SDK put_bot function. I need to pass a checksum value of the function as specified [here](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lex-models.html#LexModelBuildingService.Client.put_bot). How do I get this value? So far I have tried the functions below and the checksum values returned from them:

1. The get_bot with the prod alias
2. The get_bot_alias function
3. The get_bot_aliases function

Is the checksum value
2020/06/18
[ "https://Stackoverflow.com/questions/62461130", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13765073/" ]
The issue here seems to be that in the first example you got one `div` with multiple components in it: ``` <div id="controls"> <NavControl /> <NavControl /> <NavControl /> </div> ``` The problem is that in the second example you are creating multiple `div` elements with the same id and each of them have nested component inside of it. Here, in a loop, you create multiple `div` elements with `id="controls"` ``` <div id="controls" v-bind:key="control.id" v-for="control in controls"> <NavControl v-bind:control="control" /> </div> ``` It ends up being something similar to this: ``` <div id="controls"> <NavControl /> </div> <div id="controls"> <NavControl /> </div> <div id="controls"> <NavControl /> </div> ``` You would probably see it better if you inspected your code in the browser tools. *As a side note: please, keep in mind that if you for whatever reason wanted to do something like this then you would use `class` instead of `id`.* **Solution:** What you would want to do instead is to create multiple components inside your `<div id="controls"></div>`. To do that you would place `v-for` inside the component and not the outer `div`. ``` <NavControl v-for="control in controls" v-bind:control="control" :key="control.id"/> ```
I am unsure of the actual desired scope, so I will go over both. If we want multiple `controls` in the template, as laid out, switch `id="controls"` to `class="controls"`: ``` <template> <div id="navControlPanel"> <div class="controls" v-bind:key="control.id" v-for="control in controls"> <NavControl v-bind:control="control" /> </div> </div> </template> ... .controls { /* my styles */ } ``` If the `v-for` is out of scope, and we only want this ran on those items on `NavControl`: ``` <template> <div id="navControlPanel"> <div class="controls"> <NavControl v-for="control in controls" /> </div> </div> </template> ```
13,851
24,245,324
Something about the `id` of objects of type `str` (in python 2.7) puzzles me. The `str` type is immutable, so I would expect that once it is created, it will always have the same `id`. I believe I don't phrase myself so well, so instead I'll post an example of input and output sequence. ``` >>> id('so') 140614155123888 >>> id('so') 140614155123848 >>> id('so') 140614155123808 ``` so in the meanwhile, it changes all the time. However, after having a variable pointing at that string, things change: ``` >>> so = 'so' >>> id('so') 140614155123728 >>> so = 'so' >>> id(so) 140614155123728 >>> not_so = 'so' >>> id(not_so) 140614155123728 ``` So it looks like it freezes the id, once a variable holds that value. Indeed, after `del so` and `del not_so`, the output of `id('so')` start changing again. This is **not** the same behaviour as with (small) integers. I know there is not real connection between immutability and having the same `id`; still, I am trying to figure out the source of this behaviour. I believe that someone whose familiar with python's internals would be less surprised than me, so I am trying to reach the same point... Update ====== Trying the same with a different string gave different results... ``` >>> id('hello') 139978087896384 >>> id('hello') 139978087896384 >>> id('hello') 139978087896384 ``` Now it **is** equal...
2014/06/16
[ "https://Stackoverflow.com/questions/24245324", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1565828/" ]
This behavior is specific to the Python interactive shell. If I put the following in a .py file: ``` print id('so') print id('so') print id('so') ``` and execute it, I receive the following output: ``` 2888960 2888960 2888960 ``` In CPython, a string literal is treated as a constant, which we can see in the bytecode of the snippet above: ``` 2 0 LOAD_GLOBAL 0 (id) 3 LOAD_CONST 1 ('so') 6 CALL_FUNCTION 1 9 PRINT_ITEM 10 PRINT_NEWLINE 3 11 LOAD_GLOBAL 0 (id) 14 LOAD_CONST 1 ('so') 17 CALL_FUNCTION 1 20 PRINT_ITEM 21 PRINT_NEWLINE 4 22 LOAD_GLOBAL 0 (id) 25 LOAD_CONST 1 ('so') 28 CALL_FUNCTION 1 31 PRINT_ITEM 32 PRINT_NEWLINE 33 LOAD_CONST 0 (None) 36 RETURN_VALUE ``` The *same* constant (i.e. the same string object) is loaded 3 times, so the IDs are the same.
In your first example a new instance of the string `'so'` is created each time, hence different id. In the second example you are binding the string to a variable and Python can then maintain a shared copy of the string.
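Interning can also be forced explicitly, which guarantees a single shared object for equal strings. In Python 2.7 (the version in the question) the built-in is `intern()`; Python 3 moved it to `sys.intern()`. A small sketch in the Python 3 spelling:

```
import sys

s1 = "".join(["s", "o"])  # built at runtime, so not automatically interned
s2 = "".join(["s", "o"])
print(s1 is s2)           # typically False: two distinct objects

a = sys.intern(s1)
b = sys.intern(s2)
print(a is b)             # True: both point at the single interned copy
```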
13,852
7,927,904
How do I close a file in python after opening it this way: `line = open("file.txt", "r").readlines()[7]`
2011/10/28
[ "https://Stackoverflow.com/questions/7927904", "https://Stackoverflow.com", "https://Stackoverflow.com/users/442852/" ]
Best to use a context manager. This will close the file automatically at the end of the block and doesn't rely on implementation details of the garbage collection:

```
with open("file.txt", "r") as f:
    line = f.readlines()[7]
```
There are several ways, e.g. you could just use `f = open(...)` and `del f`. If your Python version supports it you can also use the `with` statement:

```
with open("file.txt", "r") as f:
    line = f.readlines()[7]
```

This automatically closes your file.

**EDIT:** I don't get why I'm downvoted for explaining how to create a correct way to deal with files. Maybe *"without assigning a variable"* is just not preferable.
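For contrast, the pre-`with` idiom makes the close explicit; this is essentially what the `with` block arranges for you:

```
f = open("file.txt", "r")
try:
    line = f.readlines()[7]
finally:
    f.close()  # runs even if readlines() raises
```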
13,861
37,582,199
How can I pass the data from my object-oriented code to MySQL in Python? Do I need to make a connection in every class?

Update: This is my object-oriented code:

```
class AttentionDataPoint(DataPoint):
    def __init__ (self, _dataValueBytes):
        DataPoint._init_(self, _dataValueBytes)
        self.attentionValue=self._dataValueBytes[0]

    def __str__(self):
        if(self.attentionValue):
            return "Attention Level: " + str(self.attentionValue)

class MeditationDataPoint(DataPoint):
    def __init__ (self, _dataValueBytes):
        DataPoint._init_(self, _dataValueBytes)
        self.meditationValue=self._dataValueBytes[0]

    def __str__(self):
        if(self.meditationValue):
            return "Meditation Level: " + str(self.meditationValue)
```

And I try to get the data into MySQL using this code:

```
import time
import smtplib
import datetime
import MySQLdb

db = MySQLdb.connect("192.168.0.101", "fyp", "123456", "system")

cur = db.cursor()

while True:
    Meditation_Level = meditationValue()
    Attention_Level = attentionValue()
    current_time = datetime.datetime.now()
    sql = "INSERT INTO table (id, Meditation_Level, Attention_Level, current_time) VALUES ('test1', %s, %s, %s)"
    data = (Meditation_Level, Attention_Level, current_time)
    cur.execute(sql, data)
    db.commit()
db.close()
```

Update: DataPoint

```
class DataPoint:
    def __init__(self, dataValueBytes):
        self._dataValueBytes = dataValueBytes
```
2016/06/02
[ "https://Stackoverflow.com/questions/37582199", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6341585/" ]
You asked a [similar question](https://stackoverflow.com/questions/37575008/adding-sheet2-to-existing-excelfile-from-data-of-sheet1-with-pandas-python) on adding data to a second excel sheet. Perhaps you can address there any issues around the `to_excel()` part. On the category count, you can do: ``` df.Manufacturer.value_counts().to_frame() ``` to get a `pd.Series` with the `counts`. You need to convert the result `.to_frame()` because only `DataFrame` has `to_excel()` method. So altogether, using my linked answer: ``` import pandas as pd from openpyxl import load_workbook book = load_workbook('Abc.xlsx') writer = pd.ExcelWriter('Abc.xlsx', engine='openpyxl') writer.book = book writer.sheets = dict((ws.title, ws) for ws in book.worksheets) df.Manufacturer.value_counts().to_frame().to_excel(writer, sheet_name='Categories') writer.save() ```
As answered by [Stefan](https://stackoverflow.com/users/1494637/stefan), using `value_counts()` over the designated column would do. Since you are saving multiple DataFrames to a single workbook, I'd use `pandas.ExcelWriter`:

```
import pandas as pd

writer = pd.ExcelWriter('file_name.xlsx')
df.to_excel(writer) # this one writes to 'Sheet1' by default
pd.Series.to_frame(df.Manufacturer.value_counts()).to_excel(writer, 'Sheet2')
writer.save()
```

Without necessarily using `openpyxl`. As noted in [`to_excel()`](http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.to_excel.html) documentation,

> 
> If passing an existing ExcelWriter object, then the sheet will be added to the existing workbook. This can be used to save different DataFrames to one workbook
> 
> 

Note that in order to use `to_excel()`, the `Series` (returned from `value_counts()`) must be cast to a `DataFrame`. This can be done as above (by `to_frame()`) or explicitly by using:

```
pd.DataFrame(df.Manufacturer.value_counts()).to_excel(writer, 'Sheet2')
```

While the first is usually a bit faster, the second may be considered more readable.
13,862
4,870,393
We have a gazillion spatial coordinates (x, y and z) representing atoms in 3d space, and I'm constructing a function that will translate these points to a new coordinate system. Shifting the coordinates to an arbitrary origin is simple, but I can't wrap my head around the next step: 3d point rotation calculations. In other words, I'm trying to translate the points from (x, y, z) to (x', y', z'), where x', y' and z' are in terms of i', j' and k', the new axis vectors I'm making with the help of the [euclid python module](http://pypi.python.org/pypi/euclid/0.01). I *think* all I need is a euclid quaternion to do this, i.e. ``` >>> q * Vector3(x, y, z) Vector3(x', y', z') ``` but to make THAT I believe I need a rotation axis vector and an angle of rotation. But I have no idea how to calculate these from i', j' and k'. This seems like a simple procedure to code from scratch, but I suspect something like this requires linear algebra to figure out on my own. Many thanks for a nudge in the right direction.
2011/02/02
[ "https://Stackoverflow.com/questions/4870393", "https://Stackoverflow.com", "https://Stackoverflow.com/users/572043/" ]
Using quaternions to represent rotation isn't difficult from an algebraic point of view. Personally, I find it hard to reason *visually* about quaternions, but the formulas involved in using them for rotations are not overly complicated. I'll provide a basic set of reference functions here.1 (See also this lovely answer by [hosolmaz](https://stackoverflow.com/a/42180896/577088), in which he packages these together to create a handy Quaternion class.) You can think of a quaternion (for our purposes) as a scalar plus a 3-d vector -- abstractly, `w + xi + yj + zk`, here represented by an ordinary tuple `(w, x, y, z)`. The space of 3-d rotations is represented in full by a sub-space of the quaternions, the space of *unit* quaternions, so you want to make sure that your quaternions are normalized. You can do so in just the way you would normalize any 4-vector (i.e. magnitude should be close to 1; if it isn't, scale down the values by the magnitude):

```
from math import acos, cos, sin, sqrt

def normalize(v, tolerance=0.00001):
    mag2 = sum(n * n for n in v)
    if abs(mag2 - 1.0) > tolerance:
        mag = sqrt(mag2)
        v = tuple(n / mag for n in v)
    return v
```

Please note that for simplicity, the following functions assume that quaternion values are *already normalized*. In practice, you'll need to renormalize them from time to time, but the best way to deal with that will depend on the problem domain. These functions provide just the very basics, for reference purposes only. Every rotation is represented by a unit quaternion, and concatenations of rotations correspond to *multiplications* of unit quaternions. The formula2 for this is as follows:

```
def q_mult(q1, q2):
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    w = w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2
    x = w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2
    y = w1 * y2 + y1 * w2 + z1 * x2 - x1 * z2
    z = w1 * z2 + z1 * w2 + x1 * y2 - y1 * x2
    return w, x, y, z
```

To rotate a *vector* by a quaternion, you need the quaternion's conjugate too:

```
def q_conjugate(q):
    w, x, y, z = q
    return (w, -x, -y, -z)
```

Quaternion-vector multiplication is then a matter of converting your vector into a quaternion (by setting `w = 0` and leaving `x`, `y`, and `z` the same) and then multiplying `q * v * q_conjugate(q)`:

```
def qv_mult(q1, v1):
    q2 = (0.0,) + v1
    return q_mult(q_mult(q1, q2), q_conjugate(q1))[1:]
```

Finally, you need to know how to convert from axis-angle rotations to quaternions and back. This is also surprisingly straightforward. It makes sense to "sanitize" input and output here by calling `normalize`.

```
def axisangle_to_q(v, theta):
    v = normalize(v)
    x, y, z = v
    theta /= 2
    w = cos(theta)
    x = x * sin(theta)
    y = y * sin(theta)
    z = z * sin(theta)
    return w, x, y, z
```

And back:

```
def q_to_axisangle(q):
    w, v = q[0], q[1:]
    theta = acos(w) * 2.0
    return normalize(v), theta
```

Here's a quick usage example. A sequence of 90-degree rotations about the x, y, and z axes will return a vector on the y axis to its original position. This code performs those rotations:

```
import numpy

x_axis_unit = (1, 0, 0)
y_axis_unit = (0, 1, 0)
z_axis_unit = (0, 0, 1)
r1 = axisangle_to_q(x_axis_unit, numpy.pi / 2)
r2 = axisangle_to_q(y_axis_unit, numpy.pi / 2)
r3 = axisangle_to_q(z_axis_unit, numpy.pi / 2)

v = qv_mult(r1, y_axis_unit)
v = qv_mult(r2, v)
v = qv_mult(r3, v)

print v
# output: (0.0, 1.0, 2.220446049250313e-16)
```

Keep in mind that this sequence of rotations won't return *all* vectors to the same position; for example, for a vector on the x axis, it will correspond to a 90 degree rotation about the y axis. 
(Think of the [right-hand-rule](https://en.wikipedia.org/wiki/Right-hand_rule) here; a positive rotation about the y axis pushes a vector on the x axis into the *negative* z region.) ``` v = qv_mult(r1, x_axis_unit) v = qv_mult(r2, v) v = qv_mult(r3, v) print v # output: (4.930380657631324e-32, 2.220446049250313e-16, -1.0) ``` As always, please let me know if you find any problems here. --- 1. These are adapted from an OpenGL tutorial [archived here](https://web.archive.org/web/20150515143824/http://content.gpwiki.org/index.php/OpenGL:Tutorials:Using_Quaternions_to_represent_rotation). 2. The quaternion multiplication formula looks like a horrible rat's nest at first, but the derivation is easy, albeit tedious. Working with pencil and paper, you can represent the two quaternions like so: `w + xi + yj + zk`. Then note that `ii = jj = kk = -1`; that `ij = k`, `jk = i`, `ki = j`; and that `ji = -k`, `kj = -i`, `ik = -j`. Finally, multiply the two quaternions, distributing out the terms and rearranging them based on the results of each of the 16 multiplications. This helps to illustrate *why* you can use quaternions to represent rotation; the last six identities follow the right-hand rule, creating bijections between rotations *from* `i` to `j` and rotations *around* `k`, and so on. If you do this, you'll see that the identity `ii = jj = kk = -1` explains the last three terms in the `w` formula; that `jk = i` explains the third term in the `x` formula; and that `kj = -i` explains the fourth term in the `x` formula. The `y` and `z` formulas work the same way. The remaining terms are all examples of ordinary scalar multiplication.
Note that matrix inversion is not that trivial at all! Firstly, all n (where n is the dimension of your space) points must be in general position (i.e. no individual point can be expressed as a linear combination of the rest of the points) [caveat: this may seem to be a simple requirement indeed, but in the realm of numerical linear algebra it's nontrivial; the final decision whether such a configuration really exists or not will eventually be based on 'actual domain'-specific knowledge]. Also the 'correspondence' of the new and old points may not be exact, and then you should utilize the best possible approximator of the 'true correspondence', i.e. the pseudo-inverse. The pseudo-inverse (instead of the plain inverse) is recommended whenever your lib provides it. The pseudo-inverse has the advantage that you'll be able to use more points for your transformation, hence increasing the probability that at least n points will be in general position. Here is an example, rotation of the unit square 90 deg. ccw in 2D (but obviously this approach works in any dimension), with `numpy`:

```
In []: P= matrix([[0, 0, 1, 1],
                  [0, 1, 1, 0]])
In []: Pn= matrix([[0, -1, -1, 0],
                   [0, 0, 1, 1]])
In []: T= Pn* pinv(P)
In []: (T* P).round()
Out[]:
matrix([[ 0., -1., -1., 0.],
        [ 0., 0., 1., 1.]])
```

P.S. `numpy` is also fast. Transformation of 1 million points on my modest computer:

```
In []: P= matrix(rand(2, 1e6))
In []: %timeit T* P
10 loops, best of 3: 37.7 ms per loop
```
13,863
6,931,112
I was wondering how random number functions work. I mean, is the server time used, or what other methods are used to generate random numbers? Are they really random numbers, or do they tend toward a certain pattern? Let's say in Python:

```
import random
number = random.randint(1,10)
```
2011/08/03
[ "https://Stackoverflow.com/questions/6931112", "https://Stackoverflow.com", "https://Stackoverflow.com/users/366121/" ]
Try: ``` links.parent().addClass('current') .closest('li').addClass('current'); ``` [jQuery Docs](http://api.jquery.com/parent/)
Use the `.parent()` API to do it. [Docs](http://api.jquery.com/parent/) **EDIT** Something like this:

```
$(this).parent().parent().addClass("current")
```
13,866
457,207
I was wondering if anyone had any experience in working programmatically with .pdf files. I have a .pdf file and I need to crop every page down to a certain size. After a quick Google search I found the pyPdf library for python but my experiments with it failed. When I changed the cropBox and trimBox attributes on a page object the results were not what I had expected and appeared to be quite random. Has anyone had any experience with this? Code examples would be well appreciated, preferably in python.
2009/01/19
[ "https://Stackoverflow.com/questions/457207", "https://Stackoverflow.com", "https://Stackoverflow.com/users/367099/" ]
Use this to get the dimensions of the PDF

```
from PyPDF2 import PdfWriter, PdfReader, PdfMerger

reader = PdfReader("/Users/user.name/Downloads/sample.pdf")
page = reader.pages[0]
print(page.cropbox.lower_left)
print(page.cropbox.lower_right)
print(page.cropbox.upper_left)
print(page.cropbox.upper_right)
```

After this, get the page reference and then apply the crop commands

```
page.mediabox.lower_right = (lower_right_new_x_coordinate, lower_right_new_y_coordinate)
page.mediabox.lower_left = (lower_left_new_x_coordinate, lower_left_new_y_coordinate)
page.mediabox.upper_right = (upper_right_new_x_coordinate, upper_right_new_y_coordinate)
page.mediabox.upper_left = (upper_left_new_x_coordinate, upper_left_new_y_coordinate)

# for example: my custom coordinates
# page.mediabox.lower_right = (611, 500)
# page.mediabox.lower_left = (0, 500)
# page.mediabox.upper_right = (611, 700)
# page.mediabox.upper_left = (0, 700)
```
Acrobat Javascript API has a setPageBoxes method, but Adobe doesn't provide any Python code samples. Only C++, C# and VB.
13,868
16,128,572
I have a number of molecules in SMILES format and I want to get the molecular name from the SMILES format of each molecule; I want to use Python for that conversion. For example:

```
CN1CCC[C@H]1c2cccnc2 - Nicotine
OCCc1c(C)[n+](=cs1)Cc2cnc(C)nc(N)2 - Thiamin
```

Which Python module will help me with such conversions? Kindly let me know.
2013/04/21
[ "https://Stackoverflow.com/questions/16128572", "https://Stackoverflow.com", "https://Stackoverflow.com/users/778942/" ]
I don't know of any one module that will let you do this, I had to play at data wrangler to try to get a satisfactory answer. I tackled this using Wikipedia which is being used more and more for structured bioinformatics / chemoinformatics data, but as it turned out my program reveals that a lot of that data is incorrect. I used urllib to submit a SPARQL query to dbpedia, first searching for the smiles string and failing that searching for the molecular weight of the compound. ``` import sys import urllib import urllib2 import traceback import pybel import json def query(q,epr,f='application/json'): try: params = {'query': q} params = urllib.urlencode(params) opener = urllib2.build_opener(urllib2.HTTPHandler) request = urllib2.Request(epr+'?'+params) request.add_header('Accept', f) request.get_method = lambda: 'GET' url = opener.open(request) return url.read() except Exception, e: traceback.print_exc(file=sys.stdout) raise e url = 'http://dbpedia.org/sparql' q1 = ''' select ?name where { ?s <http://dbpedia.org/property/smiles> "%s"@en. ?s rdfs:label ?name. FILTER(LANG(?name) = "" || LANGMATCHES(LANG(?name), "en")) } limit 10 ''' q2 = ''' select ?name where { ?s <http://dbpedia.org/property/molecularWeight> '%s'^^xsd:double. ?s rdfs:label ?name. FILTER(LANG(?name) = "" || LANGMATCHES(LANG(?name), "en")) } limit 10 ''' smiles = filter(None, ''' CN1CCC[C@H]1c2cccnc2 CN(CCC1)[C@@H]1C2=CC=CN=C2 OCCc1c(C)[n+](=cs1)Cc2cnc(C)nc(N)2 Cc1nnc2CN=C(c3ccccc3)c4cc(Cl)ccc4-n12 CN1C(=O)CN=C(c2ccccc2)c3cc(Cl)ccc13 CCc1nn(C)c2c(=O)[nH]c(nc12)c3cc(ccc3OCC)S(=O)(=O)N4CCN(C)CC4 CC(C)(N)Cc1ccccc1 CN(C)C(=O)Cc1c(nc2ccc(C)cn12)c3ccc(C)cc3 COc1ccc2[nH]c(nc2c1)S(=O)Cc3ncc(C)c(OC)c3C CCN(CC)C(=O)[C@H]1CN(C)[C@@H]2Cc3c[nH]c4cccc(C2=C1)c34 '''.splitlines()) OBMolecules = {} for smile in smiles: try: OBMolecules[smile] = pybel.readstring('smi', smile) except Exception as e: print e for smi in smiles: print '--------------' print smi try: print "searching by smiles string.." results = json.loads(query(q1 % smi, url)) if len(results['results']['bindings']) == 0: raise Exception('no results from smiles') else: print 'NAME: ', results['results']['bindings'][0]['name']['value'] except Exception as e: print e try: mol_weight = round(OBMolecules[smi].molwt, 2) print "search ing by molecular weight %s" % mol_weight results = json.loads(query(q2 % mol_weight, url)) if len(results['results']['bindings']) == 0: raise Exception('no results from molecular weight') else: print 'NAME: ', results['results']['bindings'][0]['name']['value'] except Exception as e: print e ``` output... ``` -------------- CN1CCC[C@H]1c2cccnc2 searching by smiles string.. no results from smiles search ing by molecular weight 162.23 NAME: Anabasine -------------- CN(CCC1)[C@@H]1C2=CC=CN=C2 searching by smiles string.. no results from smiles search ing by molecular weight 162.23 NAME: Anabasine -------------- OCCc1c(C)[n+](=cs1)Cc2cnc(C)nc(N)2 searching by smiles string.. no results from smiles search ing by molecular weight 267.37 NAME: Pipradrol -------------- Cc1nnc2CN=C(c3ccccc3)c4cc(Cl)ccc4-n12 searching by smiles string.. no results from smiles search ing by molecular weight 308.76 no results from molecular weight -------------- CN1C(=O)CN=C(c2ccccc2)c3cc(Cl)ccc13 searching by smiles string.. no results from smiles search ing by molecular weight 284.74 NAME: Mazindol -------------- CCc1nn(C)c2c(=O)[nH]c(nc12)c3cc(ccc3OCC)S(=O)(=O)N4CCN(C)CC4 searching by smiles string.. 
no results from smiles search ing by molecular weight 460.55 no results from molecular weight -------------- CC(C)(N)Cc1ccccc1 searching by smiles string.. no results from smiles search ing by molecular weight 149.23 NAME: Phenpromethamine -------------- CN(C)C(=O)Cc1c(nc2ccc(C)cn12)c3ccc(C)cc3 searching by smiles string.. no results from smiles search ing by molecular weight 307.39 NAME: Talastine -------------- COc1ccc2[nH]c(nc2c1)S(=O)Cc3ncc(C)c(OC)c3C searching by smiles string.. no results from smiles search ing by molecular weight 345.42 no results from molecular weight -------------- CCN(CC)C(=O)[C@H]1CN(C)[C@@H]2Cc3c[nH]c4cccc(C2=C1)c34 searching by smiles string.. no results from smiles search ing by molecular weight 323.43 NAME: Lysergic acid diethylamide ``` As you can see the first two results which *should* be nicotine come out wrong, this is because the wikipedia entry for nicotine reports the molecular mass in the molecular weight field.
Reference: [NCI/CADD](https://cactus.nci.nih.gov/chemical/structure)

```
from urllib.request import urlopen

def CIRconvert(smi):
    try:
        url = "https://cactus.nci.nih.gov/chemical/structure/" + smi + "/iupac_name"
        ans = urlopen(url).read().decode('utf8')
        return ans
    except:
        return 'Name Not Available'

smiles = 'CCCCC(C)CC'
print(smiles, CIRconvert(smiles))
```

Output: CCCCC(C)CC - **3-Methylheptane**
13,878
68,236,416
I am trying to install django using pipenv but failing to do so.

```
pipenv install django
Creating a virtualenv for this project...
Pipfile: /home/djangoTry/Pipfile
Using /usr/bin/python3.9 (3.9.5) to create virtualenv...
⠴ Creating virtual environment...RuntimeError: failed to query /usr/bin/python3.9 with code 1 err: 'Traceback (most recent call last):\n  File "/home/.local/lib/python3.6/site-packages/virtualenv/discovery/py_info.py", line 16, in <module>\n    from distutils import dist\nImportError: cannot import name \'dist\' from \'distutils\' (/usr/lib/python3.9/distutils/__init__.py)\n'

✘ Failed creating virtual environment

[pipenv.exceptions.VirtualenvCreationException]: Failed to create virtual environment.
```

Other info

```
python --version
Python 2.7.17

python3 --version
Python 3.6.9

python3.9 --version
Python 3.9.4

pip3 --version
pip 21.1 from /home/aman/.local/lib/python3.6/site-packages/pip (python 3.6)
```

Please help. Thanks
2021/07/03
[ "https://Stackoverflow.com/questions/68236416", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16091170/" ]
The problem isn't with installing Django, it's with creating your venv. I don't create it that way, so I'll share the way I do it. Step 1 - Install pip. Step 2 - Install virtualenv. [check here how to install both](https://gist.github.com/Geoyi/d9fab4f609e9f75941946be45000632b) After both are installed, run `python3 -m venv venv`; this will create a venv in your current directory. Activate it with `. venv/bin/activate` and then run `pip install django`. That should work.
Try `pipenv install --python 3.9`
13,882
72,664,661
I’m new to python and trying to get a loop working. I’m trying to iterate through a dataframe – for each unique value in the “Acct” column I want to create a csv file, with the account name. So there should be an Account1.csv, Account2.csv and Account3.csv created. When I run this code, only Account3.csv is being created (I’m guessing the others are being created but overwritten somehow?) ``` data = [('Account1', 'ibm', 20), ('Account1', 'aapl', 30), ('Account1', 'googl', 15), ('Account1', 'cvs', 18), ('Account2', 'wal', 24), ('Account2', 'abc', 65), ('Account2', 'bmw', 98), ('Account2', 'deo', 70), ('Account2', 'glen', 40), ('Account3', 'aapl', 50), ('Account3', 'googl', 68), ('Account3', 'orcl', 95)] data_df = pd.DataFrame(data, columns = ['Acct', 'ticker', 'qty']) acctlist = data_df["Acct"].drop_duplicates() save_to = "c:/temp/csv/" save_as = Acct + ".csv" for Acct in acctlist: #print(data_df[data_df["Acct"] == Acct]) Acct_df = (data_df[data_df["Acct"] == Acct]) Acct_df.to_csv(save_to + save_as) ```
2022/06/17
[ "https://Stackoverflow.com/questions/72664661", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19361388/" ]
You need to use a `TextInput` component to handle this kind of event. You can find the API here: <https://reactnative.dev/docs/textinput>. Bear in mind that the React library is built as an API so it can take advantage of different renderers. The issue is that when using the mobile native renderer you will not have the DOM properties you would have in a web view; it is quite different.
First, you add `onKeyDown={keyDownHandler}` to a React element. This will trigger your `keyDownHandler` function each time a key is pressed. Ex: ``` <div className='example-class' onKeyDown={keyDownHandler}> ``` You can set up your function like this: ``` const keyDownHandler = ( event: React.KeyboardEvent<HTMLDivElement> ) => { if (event.code === 'ArrowDown') { // do something } else if (event.code === 'A') { // do something } } ```
13,883
43,536,182
I have this huge file on HDFS, which is an extract of my DB. e.g.:

```
1||||||1||||||||||||||0002||01||1999-06-01 16:18:38||||2999-12-31 00:00:00||||||||||||||||||||||||||||||||||||||||||||||||||||||||2||||0||W.ISHIHARA||||1999-06-01 16:18:38||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||19155||||||||||||||1||1||NBV||||||||||||||U||||||||N||||||||||||||||||||||
1||||||8||2000-08-25 00:00:00||||||||3||||0001||01||1999-06-01 16:26:16||||1999-06-01 17:57:10||||||||||300||||||PH||400||Yes||PH||0255097�`||400||||1||103520||||||1||4||10||||20||||||||||2||||0||S.OSARI||1961-10-05 00:00:00||1999-06-01 16:26:16||�o��������������||�o��������������||1||||����||||1||1994-01-24 00:00:00||2||||||75||1999-08-25 00:00:00||1999-08-25 00:00:00||0||1||||4||||||�l��������������||�o��������������||�l��������������||||�o��������������||NP||||�l��������������||�l��������������||||||5||19055||||||||||1||||8||1||NBV||||||||||||||U||||||||N||||||||||||||||||||||
```

* Size of the file: 40GB
* Number of records: ~120 000 000
* Number of fields: 112
* Field sep: ||
* Line sep: \n
* Encoding: sjis

I want to load this file into hive using pyspark (1.6 with python 3), but my jobs keep failing. Here is my code:

```
toProcessFileDF = sc.binaryFiles("MyFile")\
        .flatMap(lambda x: x[1].split(b'\n'))\
        .map(lambda x: x.decode('sjis'))\
        .filter(lambda x: x.count('|')==sepCnt*2)\
        .map(lambda x: x.split('||'))\
        .toDF(schema=tableSchema) #tableSchema is the schema retrieved from hive

toProcessFileDF.write.saveAsTable(tableName, mode='append')
```

I received several errors, amongst others java exit code 143 (memory error), heartbeat timeout, and "kernel has died" (let me know if you need the exact log errors). Is this the right way to do it? Maybe there is a cleverer or more efficient way. Can you advise me on how to perform this?
2017/04/21
[ "https://Stackoverflow.com/questions/43536182", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5013752/" ]
I found the databricks csv reader quite useful for this.

```
toProcessFileDF_raw = sqlContext.read.format('com.databricks.spark.csv')\
                .options(header='false', inferschema='false', charset='shift-jis', delimiter='\t')\
                .load(toProcessFile) 
```

Unfortunately, the delimiter option can only split on a single character. Therefore my solution is to split on tab, because I am sure I do not have any tabs in my file, and then apply a split on each line. This is not perfect, but at least I get the right encoding and I do not put everything in memory.
Logs would have been helpful. `binaryFiles` will read this as a binary file from HDFS as a single record, returned in a key-value pair where the key is the path of each file and the value is the content of each file. Note from the Spark documentation: Small files are preferred, large file is also allowable, but may cause bad performance. It would be better if you used `textFile`

```
toProcessFileDF = sc.textFile("MyFile")\
     .map(lambda line: line.split("||"))\
     ....
```

Also, one option would be to specify a larger number of partitions when reading in the initial text file (e.g. sc.textFile(path, 200000)) rather than re-partitioning after reading. Another important thing is to make sure that your input file is splittable (some compression options make it not splittable, and in that case Spark may have to read it on a single machine causing OOMs).
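A minimal sketch combining these suggestions (the partition count is illustrative, `tableSchema` is assumed from the question, and note that `textFile` expects UTF-8 text, so the Shift-JIS decoding concern from the question still applies):

```
rdd = sc.textFile("MyFile", 2000)  # minPartitions=2000, read line by line
rows = rdd.map(lambda line: line.split("||")) \
          .filter(lambda fields: len(fields) == 112)  # keep only well-formed records
df = rows.toDF(schema=tableSchema)
```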
13,888
60,054,247
I have a python list of list that I would like to insert into a MySQL database. The list looks like this: ``` [[column_name_1,value_1],[column_name_2, value_2],.......] ``` I want to insert this into the database table which has column names as `column_name_1, column_name_2.....` so that the database table looks like this: ``` column_name_1 | column_name_2 | column_name_3 | column_name_4 value_1 | value_2 | value_3 | value_4 value_1 | value_2 | value_3 | value_4 value_1 | value_2 | value_3 | value_4 ``` How do I make this happen in python? Thanks in advance!
2020/02/04
[ "https://Stackoverflow.com/questions/60054247", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12675531/" ]
You have several options. ``` working_directory=$PWD python3.8 main.py "$working_directory" ``` This will pass the value of your variable as a command line argument. In Python you will use `sys.argv[1]` to access it. ``` export working_directory=$PWD python3.8 main.py ``` This will add an environment variable using `export`. In Python you can access this using `import os` and `os.getenv('working_directory')`. You also can use this syntax: ``` working_directory=$PWD python3.8 main.py ``` This results in the same (an environment variable) without cluttering the environment of the calling shell.
You can use the `pathlib` library to avoid putting it as a parameter:

```
import pathlib
working_directory = pathlib.Path().absolute()
```
13,889
7,521,623
I am trying to parse HTML with BeautifulSoup. The content I want is like this:

```
<a class="yil-biz-ttl" id="yil_biz_ttl-2" href="http://some-web-url/" title="some title">Title</a> 
```

I tried the following and got this error:

```
maxx = soup.findAll("href", {"class: "yil-biz-ttl"})
------------------------------------------------------------
   File "<ipython console>", line 1
     maxx = soup.findAll("href", {"class: "yil-biz-ttl"})
                                                 ^
 SyntaxError: invalid syntax 
```

What I want is the string: `http://some-web-url/`
2011/09/22
[ "https://Stackoverflow.com/questions/7521623", "https://Stackoverflow.com", "https://Stackoverflow.com/users/648866/" ]
You're missing a close-quote after `"class`: ``` maxx = soup.findAll("href", {"class: "yil-biz-ttl"}) ``` should be ``` maxx = soup.findAll("href", {"class": "yil-biz-ttl"}) ``` also, I don't think you can search for an attribute like `href` like that, I think you need to search for a tag: ``` maxx = [link['href'] for link in soup.findAll("a", {"class": "yil-biz-ttl"})] ```
Well first of all you have a syntax error. You have your quotes wrong in the `class` part. Try: `maxx = soup.findAll("href", {"class": "yil-biz-ttl"})`
13,890
4,302,201
I'll give a simplified example here. Suppose I have an iterator in python, and each object that this iterator returns is itself iterable. I want to take all the objects returned by this iterator and chain them together into one long iterator. Is there a standard utility to make this possible? Here is a contrived example. ``` x = iter([ xrange(0,5), xrange(5,10)]) ``` `x` is an iterator that returns iterators, and I want to chain all the iterators returned by x into one big iterator. The result of such an operation in this example should be equivalent to xrange(0,10), and it should be lazily evaluated.
2010/11/29
[ "https://Stackoverflow.com/questions/4302201", "https://Stackoverflow.com", "https://Stackoverflow.com/users/125921/" ]
You can use [`itertools.chain.from_iterable`](http://docs.python.org/library/itertools.html#itertools.chain.from_iterable) ``` >>> import itertools >>> x = iter([ xrange(0,5), xrange(5,10)]) >>> a = itertools.chain.from_iterable(x) >>> list(a) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] ``` If that's not available on your version (apparently, it's new in 2.6), you can just do it manually: ``` >>> x = iter([ xrange(0,5), xrange(5,10)]) >>> a = (i for subiter in x for i in subiter) >>> list(a) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] ```
Here's the old-fashioned way: ``` Python 2.2.3 (#42, May 30 2003, 18:12:08) [MSC 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> from __future__ import generators >>> def mychain(iterables): ... for it in iterables: ... for item in it: ... yield item ... >>> x = iter([ xrange(0,5), xrange(5,10)]) >>> a = mychain(x) >>> list(a) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> ``` I'm not sure what the purpose or benefit of using `iter()` is, in this case: ``` >>> x = [xrange(0,5), xrange(5,10)] >>> a = mychain(x) >>> list(a) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> ```
13,895
22,385,108
In the `operator` module, we have the `or_` function, [which is the bitwise or](http://docs.python.org/2/library/operator.html#operator.or_) (`|`). However I can't seem to find the logical or (`or`). The documentation [doesn't seem to list it](http://docs.python.org/2/library/operator.html#mapping-operators-to-functions). I'm wondering why it isn't included? Is it not considered an operator? Is there a builtin function that provides its behaviour?
2014/03/13
[ "https://Stackoverflow.com/questions/22385108", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1595865/" ]
The `or` operator *short circuits*; the right-hand expression is not evaluated when the left-hand returns a true value. This applies to the `and` operator as well; when the left-hand side expression returns a false value, the right-hand expression is not evaluated. You could not do this with a function; all operands *have* to be evaluated before the function can be called. As such, there is no equivalent function for it in the `operator` module. Compare: ``` foo = None result = foo and foo(bar) ``` with ``` foo = None result = operator.boolean_and(foo, foo(bar)) # hypothetical and implementation ``` The latter expression will fail, because you cannot use `None` as a callable. The first version works, because the `and` operator won't evaluate the `foo(bar)` expression.
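A minimal sketch (the function name is illustrative) showing why a function version cannot reproduce the short-circuit:

```
def logical_and(a, b):  # hypothetical eager version of `and`
    return a and b

foo = None
result = foo and foo(1)            # fine: None, foo(1) is never evaluated
result = logical_and(foo, foo(1))  # TypeError: foo(1) is evaluated first
```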
The reason you are getting `1` after executing `>>> 1 or 2` is because `1` is `true` so the expression has been satisfied. It might make more sense if you try executing `>>> 0 or 2`. It will return `2` because the first statement is `0` or `false` so the second statement gets evaluated. In this case `2` evaluates to a `true` boolean value. The `and` and `or` operators evaluate the first value and then plug the result into the "AND and OR" truth tables. In the case of `and` the second value is only considered if the first evaluates to `true`. In the case of the `or` operator, if the first value evaluates to `true` the expression is true and can return, if it isn't the second value is evaluated.
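A short interactive demonstration of how `or` and `and` return one of their operands rather than a plain boolean:

```
>>> 1 or 2
1
>>> 0 or 2
2
>>> 1 and 2
2
>>> 0 and 2
0
```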
13,896
12,866,612
I need to get distribute version 0.6.28 running on Heroku. I updated my requirements.txt, but that seems to have no effect. I'm trying to install a module from a tarball that requires this later version of the distribute package. During deploy I only get this:

```
Running setup.py egg_info for package from http://downloads.sourceforge.net/project/mysql-python/mysql-python-test/1.2.4b4/MySQL-python-1.2.4b4.tar.gz
 The required version of distribute (>=0.6.28) is not available,
 and can't be installed while this script is running. Please
 install a more recent version first, using
 'easy_install -U distribute'.

 (Currently using distribute 0.6.27 (/tmp/build_ibj6h3in4vgp/.heroku/venv/lib/python2.7/site-packages/distribute-0.6.27-py2.7.egg))
 Complete output from command python setup.py egg_info:
 The required version of distribute (>=0.6.28) is not available,
```
2012/10/12
[ "https://Stackoverflow.com/questions/12866612", "https://Stackoverflow.com", "https://Stackoverflow.com/users/591222/" ]
Ok, here's a solution that does work for the moment. Long-term fix I think is Heroku upgrading their version of distribute. 1. Fork the python buildpack: <https://github.com/heroku/heroku-buildpack-python/> 2. Add the requirements you need to the pack (I put them in bin/compile, just before the other pip install requirements step). See <https://github.com/buildingenergy/heroku-buildpack-python/commit/12635e22aa3a3651f9bedb3b326e2cb4fd1d2a4b> for that diff. 3. Change the buildpack for your app: heroku config:add BUILDPACK\_URL=git://github.com/heroku/heroku-buildpack-python.git 4. Push again. It should work.
Try adding distribute with the specific version to your dependencies on a first push, then adding the required dependency. ``` cat requirements.txt ... distribute==0.6.28 ... git push heroku master ... cat requirements.txt ... your deps here ```
13,902
37,192,957
I have a numpy array and a boolean array:

```
nparray = [ 12.66 12.75 12.01 13.51 13.67 ]

bool = [ True False False True True ]
```

I would like to replace all the values in `nparray` with the same value divided by 3 wherever `bool` is False. I am a student, and I'm reasonably new to python indexing. Any advice or suggestions are greatly appreciated!
2016/05/12
[ "https://Stackoverflow.com/questions/37192957", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6141885/" ]
Naming an array `bool` might not be the best idea, since it shadows the built-in. As ayhan did, try renaming it to `bl` or something else. You can use `numpy.where`; see the docs [here](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.where.html)

```
nparray2 = np.where(bl == False, nparray/3, nparray) 
```
Use [`boolean indexing`](http://docs.scipy.org/doc/numpy-1.10.1/user/basics.indexing.html#boolean-or-mask-index-arrays) with `~` as the negation operator: ``` arr = np.array([12.66, 12.75, 12.01, 13.51, 13.67 ]) bl = np.array([True, False, False, True ,True]) arr[~bl] = arr[~bl] / 3 array([ 12.66 , 4.25 , 4.00333333, 13.51 , 13.67 ]) ```
13,903
38,638,180
Running interpreter ``` >>> x = 5000 >>> y = 5000 >>> print(x is y) False ``` running the same in script using `python test.py` returns `True` What the heck?
2016/07/28
[ "https://Stackoverflow.com/questions/38638180", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3337819/" ]
You need to use the [EnhancedPatternLayout](https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/EnhancedPatternLayout.html); then you can use the `{GMT+0}` specifier as per the documentation. If instead you want to run your application in the UTC timezone, you can add the following JVM parameter:

```
-Duser.timezone=UTC
```
``` log4j.appender.A1.layout=org.apache.log4j.EnhancedPatternLayout log4j.appender.A1.layout.ConversionPattern=[%d{ISO8601}{GMT}] %-4r [%t] %-5p %c %x - %m%n ```
13,906
64,926,963
I used the official docker image of flask. And installed the rpi.gpio library in the container

```
pip install rpi.gpio
```

It succeeded:

```
root@e31ba5814e51:/app# pip install rpi.gpio
Collecting rpi.gpio
  Downloading RPi.GPIO-0.7.0.tar.gz (30 kB)
Building wheels for collected packages: rpi.gpio
  Building wheel for rpi.gpio (setup.py) ... done
  Created wheel for rpi.gpio: filename=RPi.GPIO-0.7.0-cp39-cp39-linux_armv7l.whl size=68495 sha256=0c2c43867c304f2ca21da6cc923b13e4ba22a60a77f7309be72d449c51c669db
  Stored in directory: /root/.cache/pip/wheels/09/be/52/39b324bfaf72ab9a47e81519994b2be5ddae1e99ddacd7a18e
Successfully built rpi.gpio
Installing collected packages: rpi.gpio
Successfully installed rpi.gpio-0.7.0
```

But it prompted the following error:

```
Traceback (most recent call last):
  File "/app/hello/app2.py", line 2, in <module>
    import RPi.GPIO as GPIO
  File "/usr/local/lib/python3.9/site-packages/RPi/GPIO/__init__.py", line 23, in <module>
    from RPi._GPIO import *
RuntimeError: This module can only be run on a Raspberry Pi!
```

I tried the method in this link, but it didn't work: [Docker Access to Raspberry Pi GPIO Pins](https://stackoverflow.com/questions/30059784/docker-access-to-raspberry-pi-gpio-pins) I want to know if this can be done, and if so, how to proceed.
2020/11/20
[ "https://Stackoverflow.com/questions/64926963", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12192583/" ]
You need to close the second file. You were missing the () at the end of `f2.close`, so the close method was never actually executed. In the example below, I am using `with`, which creates a context manager to automatically close the file.

```
with open('BestillingNr.txt', 'r') as f:
    bestillingNr = int(f.read())

bestillingNr += 1
with open('BestillingNr.txt', 'w') as f2:
    f2.write(f'{str(bestillingNr)}')
```
After this line, `f2.write(f'{str(bestillingNr)}')`, you should add a flush command: `f2.flush()`. This code works well:

```
f = open('BestillingNr.txt', 'r')
bestillingNr = int(f.read())
f.close()

bestillingNr += 1
f2 = open('BestillingNr.txt', 'w')
f2.write(f'{str(bestillingNr)}')
f2.flush()
f2.close()
```
13,907
10,505,851
I have a complex class and I want to simplify it by implementing a facade class (assume I have no control over the complex class). My problem is that the complex class has many methods; I will simplify just some of them and the rest will stay as they are. What I mean by "simplify" is explained below. I want to find a way that if a method is implemented in the facade, it is called; if not, the method in the complex class is called. The reason I want this is to write less code :) [less is more] **Example:**

```
Facade facade = // initialize
facade.simplified(); // defined in Facade class so call it

// not defined in Facade but exists in the complex class
// so call the one in the complex class
facade.alreadySimple();
```

The options that come to mind are: **Option 1:** Write a class holding a variable of the complex class; implement the complex ones, then implement the simple ones with direct delegation:

```
class Facade {
    private PowerfulButComplexClass realWorker = // initialize

    public void simplified() {
        // do stuff
    }
    public void alreadySimple() {
        realWorker.alreadySimple();
    }
    // more stuff
}
```

But with this approach I will need to implement all the simple methods with just a single delegation statement. So I need to write more code (it is simple though) **Option 2:** Extend the complex class and implement simplified methods, but then both simple and complex versions of these methods will be visible. In python I can achieve similar behaviour like this:

```
class PowerfulButComplexClass(object):
    def alreadySimple(self):
        # really simple
    # some very complex methods

class Facade(object):
    def __init__(self):
        self.realworker = PowerfulButComplexClass()

    def simplified(self):
        # simplified version of complex methods in PowerfulButComplexClass

    def __getattribute__(self, key):
        """For the rest of PowerfulButComplexClass' methods use them as they
        are because they are simple enough.
        """
        try:
            # return simplified version if we did
            attr = object.__getattribute__(self, key)
        except AttributeError:
            # return PowerfulButComplexClass' version because it is already simple
            attr = object.__getattribute__(self.realworker, key)
        return attr

obj = Facade()
obj.simplified()  # call the one we have defined
obj.alreadySimple()  # call the one defined in PowerfulButComplexClass
```

*So what is the Java way to achieve this?* Edit: What I mean by "simplify": A complex method can be either a method with too many arguments

```
void complexMethod(arg1, arg2, ..., argn) // n is sufficiently large
```

or a set of related methods that will almost always be called together to achieve a single task

```
outArg1 = someMethod(arg1, arg2);
outArg2 = someOtherMethod(outArg1, arg3);
actualResult = someAnotherMethod(outArg2);
```

so we want to have something like this:

```
String simplified(arg1, arg2, arg3) {
    outArg1 = someMethod(arg1, arg2);
    outArg2 = someOtherMethod(outArg1, arg3);
    return someAnotherMethod(outArg2);
}
```
2012/05/08
[ "https://Stackoverflow.com/questions/10505851", "https://Stackoverflow.com", "https://Stackoverflow.com/users/932278/" ]
It's called Inheritance. Consider the following code:

```java
class Complex {
    public void complex() { /* Complex method */ }
    public void simple() { /* Already simple method */ }
}

class SimplifiedComplex extends Complex {
    @Override
    public void complex() { /* Simplify here */ }
}
```

The `simple()` method will work on a `SimplifiedComplex` object.
Depending on your use case, you might want to create a facade in front of some of the functionality in the complex class by delegating (Option 1), and instead of supporting the rest of the functionality in the complex class, you provide a means to access the complex class directly (getComplexClass). This might make the design clearer. Consider, for example, a complex class handling most of the features of a bank system. Creating a class named "Account" that has access to the complex class but only uses the methods relevant for a bank account helps the design. For convenience, the Account class could have a method getBankSystemForAccount or something similar.
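A minimal Python sketch of that idea, reusing the method names from the question (the accessor name is illustrative):

```
class Facade(object):
    def __init__(self, realworker):
        self._realworker = realworker

    def simplified(self, arg1, arg2, arg3):
        # combine the related complex calls into one task
        out1 = self._realworker.someMethod(arg1, arg2)
        out2 = self._realworker.someOtherMethod(out1, arg3)
        return self._realworker.someAnotherMethod(out2)

    def get_realworker(self):
        # escape hatch for the methods that are already simple
        return self._realworker
```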
13,908
26,747,308
I have a Python script, say `myscript.py`, that uses relative module imports, ie `from .. import module1`, where my project layout is as follows: ``` project + outer_module - __init__.py - module1.py + inner_module - __init__.py - myscript.py - myscript.sh ``` And I have a Bash script, say `myscript.sh`, which is a wrapper for my python script, shown below: ``` #!/bin/bash python -m outer_module.inner_module.myscript $@ ``` This works to execute `myscript.py` and forwards the arguments to my script as desired, but it only works when I call `./outer_module/inner_module/myscript.sh` from within the `project` directory shown above. How can I make this script work from anywhere? For example, how can I make this work for a call like `bash /root/to/my/project/outer_module/inner_module/myscript.sh`? Here are my attempts: When executing `myscript.sh` from anywhere else, I get the error: `No module named outer_module.inner_module`. Then I tried another approach to execute the bash script from anywhere, by replacing `myscript.sh` with: ``` #!/bin/bash scriptdir=`dirname "$BASH_SOURCE"` python $scriptdir/myscript.py $@ ``` When I execute `myscript.sh` as shown above, I get the following: ``` Traceback (most recent call last): File "./inner_module/myscript.py", line 10, in <module> from .. import module1 ValueError: Attempted relative import in non-package ``` Which is due to the relative import on the first line in `myscript.py`, as mentioned earlier, which is `from .. import module1`.
2014/11/04
[ "https://Stackoverflow.com/questions/26747308", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1884158/" ]
You want to create a custom filter. It can be done very easily. This filter will test the name of each friend against the search.

```
app.filter('filterFriends', function() {
  return function(friends, search) {
    if (search === "") return friends;
    return friends.filter(function(friend) {
      return friend.name.match(search);
    });
  }
});
```

This should give you a good idea of how to create filters, and you can customize it as you like. In your HTML you would use it like this.

```
<input ng-model="search">

<tr ng-repeat="friend in friends | filterFriends:search">
    <td>{{friend.name}}</td>
    <td>{{friend.phone}}</td>
</tr>
```

Here is a [JSFiddle](http://jsfiddle.net/zargyle/dxzfku44/).
I think it would be easier to just do: ``` <tr ng-repeat="friend in friends" ng-show="([friend] | filter:search).length > 0"> <td>{{friend.name}}</td> <td>{{friend.phone}}</td> </tr> ``` Also this way the filter will apply to all the properties, not just name.
13,912
55,374,909
I am new to Python, so I'm trying to read a CSV with 700 lines (including a header) and get a list with the unique values of the first CSV column. Sample CSV:

```
SKU;PRICE;SUPPLIER
X100;100;ABC
X100;120;ADD
X101;110;ABV
X102;100;ABC
X102;105;ABV
X100;119;ABG
```

I used the example here [How to create a list in Python with the unique values of a CSV file?](https://stackoverflow.com/questions/24441606/how-to-create-a-list-in-python-with-the-unique-values-of-a-csv-file) so I did the following:

```
import csv

mainlist=[]

with open('final_csv.csv', 'r', encoding='utf-8') as csvf:
    rows = csv.reader(csvf, delimiter=";")
    for row in rows:
        if row[0] not in rows:
            mainlist.append(row[0])

print(mainlist)
```

I noticed in debugging that rows is 1 line, not 700, and I get only the ['SKU'] field. What did I do wrong? Thank you
2019/03/27
[ "https://Stackoverflow.com/questions/55374909", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
A solution using pandas. You'll need to call the `unique` method on the correct column; this will return a pandas series with the unique values in that column, which you then convert to a list using the `tolist` method. An example on the `SKU` column below.

```
import pandas as pd

df = pd.read_csv('final_csv.csv', sep=";")
sku_unique = df['SKU'].unique().tolist()
```

If you don't know / care about the column name you can use `iloc` on the correct column number. Note that the count index starts at 0:

```
df.iloc[:,0].unique().tolist()
```

If the intent is to get only the values occurring once, you can use the `value_counts` method. This will create a series with the values of `SKU` as the index and the counts as values; you must then convert the index of the series to a list in a similar manner. Using the first example:

```
import pandas as pd

df = pd.read_csv('final_csv.csv', sep=";")
sku_counts = df['SKU'].value_counts()
sku_single_counts = sku_counts[sku_counts == 1].index.tolist()
```
A solution using neither `pandas` nor `csv`:

```py
lines = open('file.csv', 'r').read().splitlines()[1:]
col0 = [v.split(';')[0] for v in lines]
uniques = filter(lambda x: col0.count(x) == 1, col0)
```

or, using `map` (but less readable):

```py
col0 = list(map(lambda line: line.split(';')[0], open('file.csv', 'r').read().splitlines()[1:]))
uniques = filter(lambda x: col0.count(x) == 1, col0)
```
13,913
42,854,598
If I reset the index of my pandas DataFrame with "inplace=True" (following [the documentation](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html)) it returns a class 'NoneType'. If I reset the index with "inplace=False" it returns the DataFrame with the new index. Why? ``` print(type(testDataframe)) print(testDataframe.head()) ``` returns: ``` <class 'pandas.core.frame.DataFrame'> ALandbouwBosbouwEnVisserij AantalInkomensontvangers AantalInwoners \ 0 73780.0 None 16979120 1 290.0 None 25243 2 20.0 None 3555 ``` Set\_index returns a new index: ``` testDataframe = testDataframe.set_index(['Codering']) print(type(testDataframe)) print(testDataframe.head()) ``` returns ``` <class 'pandas.core.frame.DataFrame'> ALandbouwBosbouwEnVisserij AantalInkomensontvangers \ Codering NL00 73780.0 None GM1680 290.0 None WK168000 20.0 None BU16800000 15.0 None ``` But the same set\_index with "inplace=True": ``` testDataframe = testDataframe.set_index(['Codering'], inplace=True) print(type(testDataframe)) print(testDataframe.head()) ``` returns ``` <class 'NoneType'> --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-50-0d6304ebaae1> in <module>() ``` Version info: ``` python: 3.4.4.final.0 python-bits: 64 pandas: 0.18.1 numpy: 1.11.1 IPython: 5.2.2 ```
2017/03/17
[ "https://Stackoverflow.com/questions/42854598", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6421290/" ]
Ok, now I understand, thanks for the comments! So inplace=True should return None **and** make the change in the original object. It seemed that on listing the dataframe again, no changes were present. But of course I should not have **assigned the return value** to the dataframe, i.e.

```
testDataframe = testDataframe.set_index(['Codering'], inplace=True)
```

should just be

```
testDataframe.set_index(['Codering'], inplace=True)
```

or

```
testDataframe = testDataframe.set_index(['Codering'], inplace=False)
```

otherwise the return value of the inplace index change (None) is the new content of the dataframe, which is of course not the intent. I am sure this is obvious to many and now it is to me as well, but it wasn't without your help, thanks!!!
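A minimal sketch demonstrating the point (the column name is taken from the question, the sample values are illustrative):

```
import pandas as pd

df = pd.DataFrame({'Codering': ['NL00', 'GM1680'], 'AantalInwoners': [16979120, 25243]})
result = df.set_index('Codering', inplace=True)
print(type(result))  # <class 'NoneType'> -- inplace=True returns None
print(df.index)      # but df itself now has the new index
```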
With `inplace=True` the change is always made in the original data\_frame. If you want a new, changed data\_frame returned, remove the second parameter, i.e. `inplace=True`:

```
new_data_frame = testDataframe.set_index(['Codering']) 
```

Then

```
print(new_data_frame) 
```
13,915
32,303,006
I need to compute the integral of the following function within ranges that start as low as `-150`: ``` import numpy as np from scipy.special import ndtr def my_func(x): return np.exp(x ** 2) * 2 * ndtr(x * np.sqrt(2)) ``` The problem is that this part of the function ``` np.exp(x ** 2) ``` tends toward infinity -- I get `inf` for values of `x` less than approximately `-26`. And this part of the function ``` 2 * ndtr(x * np.sqrt(2)) ``` which is equivalent to ``` from scipy.special import erf 1 + erf(x) ``` tends toward 0. So, a very, very large number times a very, very small number should give me a reasonably sized number -- but, instead of that, `python` is giving me `nan`. What can I do to circumvent this problem?
2015/08/31
[ "https://Stackoverflow.com/questions/32303006", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2623899/" ]
I think a combination of @askewchan's solution and [`scipy.special.log_ndtr`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.special.log_ndtr.html) will do the trick: ``` from scipy.special import log_ndtr _log2 = np.log(2) _sqrt2 = np.sqrt(2) def my_func(x): return np.exp(x ** 2) * 2 * ndtr(x * np.sqrt(2)) def my_func2(x): return np.exp(x * x + _log2 + log_ndtr(x * _sqrt2)) print(my_func(-150)) # nan print(my_func2(-150) # 0.0037611803122451198 ``` For `x <= -20`, `log_ndtr(x)` [uses a Taylor series expansion of the error function to iteratively compute the log CDF directly](https://github.com/scipy/scipy/blob/07c263874ca93312d9a54c1249ff3a7b9a88226e/scipy/special/cephes/ndtr.c#L296-L338), which is much more numerically stable than simply taking `log(ndtr(x))`. --- Update ====== As you mentioned in the comments, the `exp` can also overflow if `x` is sufficiently large. Whilst you could work around this using `mpmath.exp`, a simpler and faster method is to cast up to a `np.longdouble` which, on my machine, can represent values up to 1.189731495357231765e+4932: ``` import mpmath def my_func3(x): return mpmath.exp(x * x + _log2 + log_ndtr(x * _sqrt2)) def my_func4(x): return np.exp(np.float128(x * x + _log2 + log_ndtr(x * _sqrt2))) print(my_func2(50)) # inf print(my_func3(50)) # mpf('1.0895188633566085e+1086') print(my_func4(50)) # 1.0895188633566084842e+1086 %timeit my_func3(50) # The slowest run took 8.01 times longer than the fastest. This could mean that # an intermediate result is being cached 100000 loops, best of 3: 15.5 µs per # loop %timeit my_func4(50) # The slowest run took 11.11 times longer than the fastest. This could mean # that an intermediate result is being cached 100000 loops, best of 3: 2.9 µs # per loop ```
Not sure how helpful this will be, but here are a couple of thoughts that are too long for a comment. You need to calculate the integral of [![2 \cdot e^{x^2} \cdot f(\sqrt{2}x)](https://i.stack.imgur.com/Tt5t4.gif)](https://i.stack.imgur.com/Tt5t4.gif), which you [correctly identified](http://docs.scipy.org/doc/scipy/reference/generated/scipy.special.ndtr.html#scipy.special.ndtr) would be [![e^{x^2}*(1 + erf(x))](https://i.stack.imgur.com/qgWkq.gif)](https://i.stack.imgur.com/qgWkq.gif). Expanding the brackets, you can integrate both parts of the sum. [![enter image description here](https://i.stack.imgur.com/EcbfJ.png)](https://i.stack.imgur.com/EcbfJ.png) Scipy has this [imaginary error function implemented](http://docs.scipy.org/doc/scipy/reference/generated/scipy.special.erfi.html#scipy.special.erfi) The second part is harder: [![enter image description here](https://i.stack.imgur.com/nGtc7.png)](https://i.stack.imgur.com/nGtc7.png) This is a [generalized hypergeometric function](https://en.wikipedia.org/wiki/Generalized_hypergeometric_function). Sadly it looks like scipy [does not have an implementation of it](http://docs.scipy.org/doc/scipy/reference/special.html#hypergeometric-functions), but [this package](https://code.google.com/p/mpmath/) claims it does. Here I used indefinite integrals without constants; knowing the `from`/`to` values, it is clear how to use definite ones.
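For the first part, a minimal sketch using that scipy `erfi` function (the constant of integration is dropped; the bounds are illustrative):

```
import numpy as np
from scipy.special import erfi

# the antiderivative of exp(x**2) is (sqrt(pi)/2) * erfi(x)
def int_exp_x2(a, b):
    return np.sqrt(np.pi) / 2 * (erfi(b) - erfi(a))

print(int_exp_x2(0.0, 1.0))  # definite integral of exp(x**2) over [0, 1]
```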
13,916
46,409,846
I am trying to implement the `Kmeans` algorithm in python using `cosine distance` instead of euclidean distance as the distance metric. I understand that using a different distance function can be fatal and should be done carefully. Using cosine distance as the metric forces me to change the average function (the average in accordance with cosine distance must be an element-by-element average of the normalized vectors). I have seen [this](https://gist.github.com/jaganadhg/b3f6af86ad99bf6e9bb7be21e5baa1b5) elegant solution of manually overriding the distance function of sklearn, and I want to use the same technique to override the averaging section of the code, but I couldn't find it. Does anyone know how it can be done? How critical is it that the distance metric doesn't satisfy the triangle inequality? If anyone knows a different efficient implementation of kmeans that uses a cosine metric or accepts custom distance and averaging functions, that would also be really helpful. Thank you very much! **Edit:** After using the angular distance instead of cosine distance, the code looks something like this:

```
def KMeans_cosine_fit(sparse_data, nclust = 10, njobs=-1, randomstate=None):
    # Manually override euclidean
    def euc_dist(X, Y = None, Y_norm_squared = None, squared = False):
        #return pairwise_distances(X, Y, metric = 'cosine', n_jobs = 10)
        return np.arccos(cosine_similarity(X, Y))/np.pi
    k_means_.euclidean_distances = euc_dist
    kmeans = k_means_.KMeans(n_clusters = nclust, n_jobs = njobs, random_state = randomstate)
    _ = kmeans.fit(sparse_data)
    return kmeans
```

I noticed (with mathematical calculations) that if the vectors are normalized, the standard average works well for the angular metric. As far as I understand, I have to change `_mini_batch_step()` in [k\_means\_.py](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/cluster/k_means_.py). But the function is pretty complicated and I couldn't understand how to do it. Does anyone know of an alternative solution? Or maybe, does anyone know how I can edit this function into one that always forces the centroids to be normalized?
2017/09/25
[ "https://Stackoverflow.com/questions/46409846", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8671126/" ]
So it turns out you can just normalise X to be of unit length and use K-means as normal. The reason being if X1 and X2 are unit vectors, looking at the following equation, the term inside the brackets in the last line is cosine distance. [![vect_dist](https://i.stack.imgur.com/7amCq.png)](https://i.stack.imgur.com/7amCq.png) So in terms of using k-means, simply do: ```py length = np.sqrt((X**2).sum(axis=1))[:,None] X = X / length kmeans = KMeans(n_clusters=10, random_state=0).fit(X) ``` And if you need the centroids and distance matrix do: ```py len_ = np.sqrt(np.square(kmeans.cluster_centers_).sum(axis=1)[:,None]) centers = kmeans.cluster_centers_ / len_ dist = 1 - np.dot(centers, X.T) # K x N matrix of cosine distances ``` Notes: ------ * Just realised that you are trying to minimise the distance between the mean vector of the cluster, and its constituents. The mean vector has length of less than one when you simply average the vectors. But in practice, it's still worth running the normal sklearn algorithm and checking the length of the mean vector. In my case the mean vectors were close to unit length (averaging around 0.9, but this depends on how dense your data is). TLDR: Use the [spherecluster](https://github.com/jasonlaska/spherecluster) package as @σηγ pointed out.
Unfortunately no. Sklearn's current implementation of k-means only uses Euclidean distances. The reason is that k-means includes calculations to find the cluster center and to assign each sample to the closest center, and only with Euclidean distance does the mean of the samples carry that "center" meaning. If you want to use k-means with cosine distance, you need to make your own function or class. Or, try another clustering algorithm such as DBSCAN.
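For instance, a minimal sketch with scikit-learn's DBSCAN, which accepts a cosine metric directly (`X` is assumed to be the data matrix from the question; the parameter values are illustrative):

```
from sklearn.cluster import DBSCAN

# X: n_samples x n_features data matrix
db = DBSCAN(eps=0.3, min_samples=5, metric='cosine').fit(X)
labels = db.labels_  # -1 marks noise points
```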
13,918
73,326,374
I want to solve an NP-hard combinatorial optimization problem using quantum optimization. In this regard, I am using the "classiq" Python library, which is a high-level API for making hardware-compatible quantum circuits, with an IBMQ backend. To use "classiq", you first have to authenticate your machine (according to the official "classiq" website: <https://docs.classiq.io/latest/getting-started/python-sdk/>).
Unfortunately, whenever I run the statement (classiq.authenticate()), I get a runtime error as shown in the attached figure (with the full traceback).
[enter image description here](https://i.stack.imgur.com/vmqe2.png)
2022/08/11
[ "https://Stackoverflow.com/questions/73326374", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17990100/" ]
The single quotes around the placeholders made the prepared statement treat them as literal strings instead of parameters:

```java
private final String insertSQL = "INSERT INTO demo2 (contract_nr, name, surname, address, tel, date_of_birth, license_nr, nationality, driver1, driver2, pickup_date, drop_date, renting_days, pickup_time, drop_time, price, included_km, effected_km, mail, type, valid, created_by) "
            + "VALUES ('?','?','?','?','?','?','?','?','?','?','?','?','?','?','?','?','?','?','?','?','?','?')";
```

It needed to be replaced with:

```java
private final String insertSQL = "INSERT INTO demo2 (contract_nr, name, surname, address, tel, date_of_birth, license_nr, nationality, driver1, driver2, pickup_date, drop_date, renting_days, pickup_time, drop_time, price, included_km, effected_km, mail, type, valid, created_by) "
            + "VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)";
```
The answer to this question is that we do not need `'` around `?` in the insert SQL.
This is wrong:

```
private final String insertSQL = "INSERT INTO demo2 (contract_nr, name, surname, address, tel, date_of_birth, license_nr, nationality, driver1, driver2, pickup_date, drop_date, renting_days, pickup_time, drop_time, price, included_km, effected_km, mail, type, valid, created_by) VALUES ('?','?','?','?','?','?','?','?','?','?','?','?','?','?','?','?','?','?','?','?','?','?') ";
```

The correct one is:

```
private final String insertSQL = "INSERT INTO demo2 (contract_nr, name, surname, address, tel, date_of_birth, license_nr, nationality, driver1, driver2, pickup_date, drop_date, renting_days, pickup_time, drop_time, price, included_km, effected_km, mail, type, valid, created_by) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?) ";
```
13,921
63,532,068
I have my python3 path under `/usr/bin`, and my pip path under `/.local/bin` of my local repository. With some pip modules installed, I can run my code successfully through `python3 mycode.py`. But when I try to run this shell script:

```
#!/usr/bin
echo "starting now..."
nohup python3 mycode.py > log.txt 2>&1 &
echo $! > pid
echo "mycode.py started at pid: "
cat pid
```

with my Python code, mycode.py:

```
#!/usr/bin/env python3
from aiocqhttp import CQHttp, Event, Message, MessageSegment
...
...
...
```

it gives me:

```
Traceback (most recent call last):
  File "mycode.py", line 2, in <module>
    from aiocqhttp import CQHttp, Event, Message, MessageSegment
ModuleNotFoundError: No module named 'aiocqhttp'
```

I tried to replace the interpreter of the shell script with my local path, but it doesn't help. How should I run my Python code using the shell script?
2020/08/22
[ "https://Stackoverflow.com/questions/63532068", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9207332/" ]
Just use:

```
MsgBox Range("B1:B19").SpecialCells(xlCellTypeBlanks).Address
```
This solution adapts your code. ``` Dim cell As Range Dim emptyStr As String emptyStr = "" For Each cell In Range("B1:B19") If IsEmpty(cell) Then _ emptyStr = emptyStr & cell.Address(0, 0) & ", " Next cell If emptyStr <> "" Then MsgBox Left(emptyStr, Len(emptyStr) - 2) ``` If the `cell` is empty, it stores the address in `emptyStr`. The `if` condition can be condensed as `isEmpty` returns a Boolean.
13,922
52,302,767
On my PC, how can I listen for incoming connections as a DLNA server? On my TV there is an option to get media from a DLNA server, and I would like to write a simple Python script that grants the TV access to my files. The two ends are:

* LG webOS smartTV
* macOS
2018/09/12
[ "https://Stackoverflow.com/questions/52302767", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8954551/" ]
One project I've found is Cohen3. It does not look very actively maintained, but at least it supports Python 3: <https://github.com/opacam/Cohen3>
There's also the Coherence project (it seems Cohen3 is somehow derived from it, but I'm not sure): <https://gitlab.digitalcourage.de/coherence>
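If you'd rather experiment with the discovery half of DLNA yourself, below is a minimal standard-library sketch that listens for the SSDP `M-SEARCH` requests a TV multicasts when it searches for media servers. The multicast group `239.255.255.250:1900` is part of the UPnP standard; the rest (and the reply a real server must send) is only sketched in the comments:

```py
import socket
import struct

SSDP_GROUP = "239.255.255.250"
SSDP_PORT = 1900

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", SSDP_PORT))

# Join the UPnP multicast group so we receive discovery traffic
mreq = struct.pack("4sl", socket.inet_aton(SSDP_GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, addr = sock.recvfrom(1024)
    if data.startswith(b"M-SEARCH"):
        print("Discovery request from", addr)
        # A real DLNA server would answer with an HTTP/1.1 200 OK
        # pointing at its device-description XML, then serve the
        # ContentDirectory service over HTTP/SOAP.
```

This only shows discovery; implementing the full media server (device description, ContentDirectory, HTTP streaming) is exactly the work that Cohen3/Coherence or Serviio already do for you.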
There is already a server app that does this: [Serviio](http://serviio.org/)
I have been using it on Windows 10 and on Raspberry Pi 2 and 3, and it works well. It is easy to set up through its web app, and you'll save time compared to building your own solution.
13,926
10,496,649
Continued from [How to use wxSpellCheckerDialog in Django?](https://stackoverflow.com/questions/10474971/how-to-use-wxspellcheckerdialog-python-django/10476811#comment13562681_10476811)
I have added spell checking to a Django application using pyenchant. It works correctly on the first run, but when I call it again (or after several runs) it gives the following error.
![enter image description here](https://i.stack.imgur.com/IjzSy.png)

> PyAssertionError at /quiz/submit/
>
> C++ assertion "wxThread::IsMain()" failed at ....\src\msw\evtloop.cpp(244) in wxEventLoop::Dispatch(): only the main thread can process Windows messages

How to fix this?
2012/05/08
[ "https://Stackoverflow.com/questions/10496649", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1110394/" ]
You don't need wxPython to use pyenchant. And you certainly shouldn't be using wx stuff with Django: wxPython is for desktop GUIs, while Django is a web app framework.
As "uhz" pointed out, you can't call wxPython methods outside the main thread that wxPython runs in unless you use its thread-safe methods, such as wx.CallAfter. I don't know why you'd call wxPython from Django, though.
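For a server-side spell check, pyenchant can be used directly with no GUI at all; a minimal sketch:

```py
import enchant

d = enchant.Dict("en_US")
word = "helo"
if not d.check(word):
    # suggest() returns a list of candidate corrections
    print(word, "->", d.suggest(word))
```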
It seems you are trying to use wx controls from inside Django code; is that correct? If so, you are doing a very weird thing :)
When you write a GUI application with wxPython there is one main thread which can process window messages; the main thread is defined as the one where wx.App was created. You are trying to do a UI thing from a non-UI thread. So probably on the first run everything works (everything is performed in the GUI thread), but on the second attempt a different Python thread (spawned by Django?) is performing some illegal GUI actions.
You could try using wx.CallAfter, which executes the function passed to it in the GUI thread, but note this is non-blocking.
Also, I've found something you might consider: [wxAnyThread](http://pypi.python.org/pypi/wxAnyThread). But I didn't use it and I don't know if it applies in your case.
13,927
55,585,079
I tried running this code in TensorFlow 2.0 (alpha): ```py import tensorflow_hub as hub @tf.function def elmo(texts): elmo_module = hub.Module("https://tfhub.dev/google/elmo/2", trainable=True) return elmo_module(texts, signature="default", as_dict=True) embeds = elmo(tf.constant(["the cat is on the mat", "dogs are in the fog"])) ``` But I got this error: ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-1-c7f14c7ed0e9> in <module> 9 10 elmo(tf.constant(["the cat is on the mat", ---> 11 "dogs are in the fog"])) .../tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds) 417 # This is the first call of __call__, so we have to initialize. 418 initializer_map = {} --> 419 self._initialize(args, kwds, add_initializers_to=initializer_map) 420 if self._created_variables: 421 try: .../tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to) 361 self._concrete_stateful_fn = ( 362 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access --> 363 *args, **kwds)) 364 365 def invalid_creator_scope(*unused_args, **unused_kwds): .../tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs) 1322 if self.input_signature: 1323 args, kwargs = None, None -> 1324 graph_function, _, _ = self._maybe_define_function(args, kwargs) 1325 return graph_function 1326 .../tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs) 1585 or call_context_key not in self._function_cache.missed): 1586 self._function_cache.missed.add(call_context_key) -> 1587 graph_function = self._create_graph_function(args, kwargs) 1588 self._function_cache.primary[cache_key] = graph_function 1589 return graph_function, args, kwargs .../tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes) 1518 arg_names=arg_names, 1519 override_flat_arg_shapes=override_flat_arg_shapes, -> 1520 capture_by_value=self._capture_by_value), 1521 self._function_attributes) 1522 .../tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes) 705 converted_func) 706 --> 707 func_outputs = python_func(*func_args, **func_kwargs) 708 709 # invariant: `func_outputs` contains only Tensors, IndexedSlices, .../tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds) 314 # __wrapped__ allows AutoGraph to swap in a converted function. We give 315 # the function a weak reference to itself to avoid a reference cycle. 
--> 316 return weak_wrapped_fn().__wrapped__(*args, **kwds) 317 weak_wrapped_fn = weakref.ref(wrapped_fn) 318 .../tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs) 697 optional_features=autograph_options, 698 force_conversion=True, --> 699 ), args, kwargs) 700 701 # Wrapping around a decorator allows checks like tf_inspect.getargspec .../tensorflow/python/autograph/impl/api.py in converted_call(f, owner, options, args, kwargs) 355 356 if kwargs is not None: --> 357 result = converted_f(*effective_args, **kwargs) 358 else: 359 result = converted_f(*effective_args) /var/folders/wy/h39t6kb11pnbb0pzhksd_fqh0000gn/T/tmp4v3g2d_1.py in tf__elmo(texts) 11 retval_ = None 12 print('Eager:', ag__.converted_call('executing_eagerly', tf, ag__.ConversionOptions(recursive=True, force_conversion=False, optional_features=(), internal_convert_user_code=True), (), None)) ---> 13 elmo_module = ag__.converted_call('Module', hub, ag__.ConversionOptions(recursive=True, force_conversion=False, optional_features=(), internal_convert_user_code=True), ('https://tfhub.dev/google/elmo/2',), {'trainable': True}) 14 do_return = True 15 retval_ = ag__.converted_call(elmo_module, None, ag__.ConversionOptions(recursive=True, force_conversion=False, optional_features=(), internal_convert_user_code=True), (texts,), {'signature': 'default', 'as_dict': True}) .../tensorflow/python/autograph/impl/api.py in converted_call(f, owner, options, args, kwargs) 252 if tf_inspect.isclass(f): 253 logging.log(2, 'Permanently whitelisted: %s: constructor', f) --> 254 return _call_unconverted(f, args, kwargs) 255 256 # Other built-in modules are permanently whitelisted. .../tensorflow/python/autograph/impl/api.py in _call_unconverted(f, args, kwargs) 174 175 if kwargs is not None: --> 176 return f(*args, **kwargs) 177 else: 178 return f(*args) .../tensorflow_hub/module.py in __init__(self, spec, trainable, name, tags) 167 name=self._name, 168 trainable=self._trainable, --> 169 tags=self._tags) 170 # pylint: enable=protected-access 171 .../tensorflow_hub/native_module.py in _create_impl(self, name, trainable, tags) 338 trainable=trainable, 339 checkpoint_path=self._checkpoint_variables_path, --> 340 name=name) 341 342 def _export(self, path, variables_saver): .../tensorflow_hub/native_module.py in __init__(self, spec, meta_graph, trainable, checkpoint_path, name) 389 # TPU training code. 
390 with tf.init_scope(): --> 391 self._init_state(name) 392 393 def _init_state(self, name): .../tensorflow_hub/native_module.py in _init_state(self, name) 392 393 def _init_state(self, name): --> 394 variable_tensor_map, self._state_map = self._create_state_graph(name) 395 self._variable_map = recover_partitioned_variable_map( 396 get_node_map_from_tensor_map(variable_tensor_map)) .../tensorflow_hub/native_module.py in _create_state_graph(self, name) 449 meta_graph, 450 input_map={}, --> 451 import_scope=relative_scope_name) 452 453 # Build a list from the variable name in the module definition to the actual .../tensorflow/python/training/saver.py in import_meta_graph(meta_graph_or_file, clear_devices, import_scope, **kwargs) 1443 """ # pylint: disable=g-doc-exception 1444 return _import_meta_graph_with_return_elements( -> 1445 meta_graph_or_file, clear_devices, import_scope, **kwargs)[0] 1446 1447 .../tensorflow/python/training/saver.py in _import_meta_graph_with_return_elements(meta_graph_or_file, clear_devices, import_scope, return_elements, **kwargs) 1451 """Import MetaGraph, and return both a saver and returned elements.""" 1452 if context.executing_eagerly(): -> 1453 raise RuntimeError("Exporting/importing meta graphs is not supported when " 1454 "eager execution is enabled. No graph exists when eager " 1455 "execution is enabled.") RuntimeError: Exporting/importing meta graphs is not supported when eager execution is enabled. No graph exists when eager execution is enabled. ```
2019/04/09
[ "https://Stackoverflow.com/questions/55585079", "https://Stackoverflow.com", "https://Stackoverflow.com/users/38626/" ]
In TensorFlow 2.0 you should be using `hub.load()` or `hub.KerasLayer()`.
**[April 2019]** - For now, only TensorFlow 2.0 modules are loadable via them. In the future, many of the 1.x Hub modules should become loadable as well.
For the 2.x-only modules you can see examples in the notebooks created for the modules [here](https://tfhub.dev/s?q=tf2-preview)
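For illustration, a minimal TF2-style sketch with `hub.KerasLayer`; the module handle below is just one of the tf2-preview text-embedding modules from the link above, used as a stand-in since the ELMo module from the question is a 1.x module:

```py
import tensorflow as tf
import tensorflow_hub as hub

# Load a TF2-compatible module as a Keras layer (works eagerly)
embed = hub.KerasLayer("https://tfhub.dev/google/tf2-preview/nnlm-en-dim128/1")

embeddings = embed(tf.constant(["the cat is on the mat",
                                "dogs are in the fog"]))
print(embeddings.shape)  # (2, 128) for this particular module
```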
The `hub.load` function will work with TensorFlow 2:

```
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder-large/3")
```

instead of

```
embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder-large/3")
```

`hub.Module` is not accepted in TF2; use `hub.load()`.
13,928
16,138,090
What is the most efficient and portable way to generate a random number in `[0,1]` in Cython?
One approach is to use `INT_MAX` and `rand()` from the C library:

```
from libc.stdlib cimport rand
cdef extern from "limits.h":
    int INT_MAX
cdef float randnum = rand() / float(INT_MAX)
```

Is it OK to use `INT_MAX` in this way? I noticed that it's quite different from the constant you get from Python's max int:

```
import sys
print INT_MAX
print sys.maxint
```

yields:

```
2147483647 (C max int)
9223372036854775807 (python max int)
```

Which is the right "normalization" number for `rand()`?
**EDIT**
Additionally, how can the random seed be set (e.g. seeded based on the current time) if one uses the C approach of calling `rand()` from libc?
2013/04/22
[ "https://Stackoverflow.com/questions/16138090", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
The C standard says `rand` returns an `int` in the range 0 to RAND\_MAX inclusive, so dividing it by RAND\_MAX (from `stdlib.h`) is the proper way to normalise it. In practice, RAND\_MAX will often be equal to INT\_MAX, but don't rely on that.
Because `rand` has been part of ISO C since C89, it's guaranteed to be available everywhere, but no guarantees are made regarding the quality of its random numbers. If portability is your main concern, though, it's your best option, unless you're willing to use Python's `random` module.
Python's [`sys.maxint`](http://docs.python.org/2/library/sys.html#sys.maxint) is a different concept entirely; it's just the largest positive number Python can represent in *its own* int type; larger ones will have to be longs. Python's ints and longs aren't particularly related to C's.
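For the seeding question in the edit, a minimal Cython sketch; `libc.stdlib` already declares `rand`, `srand` and `RAND_MAX`, and `libc.time` declares `time`, so no extern blocks are needed:

```
from libc.stdlib cimport rand, srand, RAND_MAX
from libc.time cimport time

# Seed once, e.g. based on the current time
srand(time(NULL))

# Uniform float in [0, 1] (both endpoints included)
cdef double randnum = rand() / <double>RAND_MAX
```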
The C stdlib `rand()` returns a number between 0 and RAND\_MAX, which is guaranteed to be at least 32767 (and is exactly that on some platforms).
Is there any reason not to use Python's `random()`?
[Generate random integers between 0 and 9](https://stackoverflow.com/questions/3996904/python-generate-random-integers-between-0-and-9)
13,930
52,397,563
I've been pulling my hair out trying to figure this one out, hoping someone else has already encountered this and knows how to solve it :)
I'm trying to build a very simple Flask endpoint that just needs to call a long-running, blocking `php` script (think `while true {...}`). I've tried a few different methods to launch the script asynchronously, but the problem is my browser never actually receives the response back, even though the code for generating the response after running the script is executed.
I've tried using both `multiprocessing` and `threading`; neither seems to work:

```
# multiprocessing attempt
@app.route('/endpoint')
def endpoint():
  def worker():
    subprocess.Popen('nohup php script.php &', shell=True, preexec_fn=os.setpgrp)

  p = multiprocessing.Process(target=worker)
  print '111111'
  p.start()
  print '222222'
  return json.dumps({
    'success': True
  })

# threading attempt
@app.route('/endpoint')
def endpoint():
  def thread_func():
    subprocess.Popen('nohup php script.php &', shell=True, preexec_fn=os.setpgrp)

  t = threading.Thread(target=thread_func)
  print '111111'
  t.start()
  print '222222'
  return json.dumps({
    'success': True
  })
```

In both scenarios I see the `111111` and `222222`, yet my browser still hangs on the response from the endpoint. I've tried `p.daemon = True` for the process, as well as `p.terminate()`, but no luck. I had hoped launching a script with nohup in a different shell and a separate process/thread would just work, but somehow Flask or uWSGI is impacted by it.

### Update

Since this does work locally on my Mac when I start my Flask app directly with `python app.py` and hit it directly without going through my Nginx proxy and uWSGI, I'm starting to believe it may not be the code itself that is having issues. And because my Nginx just forwards the request to uWSGI, I believe it may possibly be something there that's causing it.
Here is my ini configuration for the domain for uWSGI, which I'm running in emperor mode:

```
[uwsgi]
protocol = uwsgi
max-requests = 5000
chmod-socket = 660
master = True
vacuum = True
enable-threads = True
auto-procname = True
procname-prefix = michael-
chdir = /srv/www/mysite.com
module = app
callable = app
socket = /tmp/mysite.com.sock
```
2018/09/19
[ "https://Stackoverflow.com/questions/52397563", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2407296/" ]
This kind of task is the actual, and probably main, use case for [Celery](https://docs.celeryproject.org/). As a general rule, do not run long-running CPU-bound jobs in the `wsgi` process. It's tricky, it's inefficient, and most importantly, it's more complicated than setting up an *async* task in a Celery worker.
If you just want to prototype, you can set the broker to `memory` and not use an external server, or run a single-threaded `redis` on the very same machine. This way you can launch the task and call `task.result()`, which is blocking, but it blocks in an **IO-bound fashion**. Or, even better, you can just return immediately by retrieving the `task_id` and building a second endpoint `/result?task_id=<task_id>` that checks if the result is available:

```
result = AsyncResult(task_id, app=app)
if result.state == "SUCCESS":
    return result.get()
else:
    return result.state # or do something else depending on the state
```

This way you have a non-blocking `wsgi` app that does what it is best suited for: short CPU-unbound calls that have IO calls at most, with OS-level scheduling. You can then rely directly on the `wsgi` server's `workers|processes|threads` (whatever you need) to scale the API in whichever wsgi server you use (uwsgi, gunicorn, etc.) for 99% of workloads, as Celery scales horizontally by increasing the number of worker processes.
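For completeness, a minimal sketch of the worker side this relies on; the module name, broker URL and task name are illustrative:

```py
# tasks.py -- a minimal sketch; broker/backend URLs are illustrative
import subprocess
from celery import Celery

app = Celery('tasks',
             broker='redis://localhost:6379/0',
             backend='redis://localhost:6379/0')

@app.task
def run_php_script():
    # Runs the long-lived PHP script to completion inside the worker,
    # keeping the wsgi process free to answer requests.
    return subprocess.call(['php', 'script.php'])
```

The Flask endpoint then just enqueues it and returns the id, e.g. `task = run_php_script.delay()` followed by `return json.dumps({'task_id': task.id})`.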
This approach works for me: it calls the `timeout` command (sleep 10s) on the command line and lets it run in the background, returning the response immediately.

```
@app.route('/endpoint1')
def endpoint1():
    subprocess.Popen('timeout 10', shell=True)
    return 'success1'
```

However, I have only tested this locally, not on a WSGI server.
13,933
28,150,433
I am trying to install Scrapy on Windows 7. I'm following these instructions: <http://doc.scrapy.org/en/0.24/intro/install.html#intro-install>
I've downloaded and installed python-2.7.5.msi for Windows following this tutorial <https://adesquared.wordpress.com/2013/07/07/setting-up-python-and-easy_install-on-windows-7/>, and I set up the environment variables as mentioned, but when I try to run python in my command prompt I get this error:

```
Microsoft Windows [Version 6.1.7600]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.

C:\>python
'python' is not recognized as an internal or external command,
operable program or batch file.

C:\> python ez_setup.py install
'python' is not recognized as an internal or external command,
operable program or batch file.

C:\>
```

Could you please help me solve this?
2015/01/26
[ "https://Stackoverflow.com/questions/28150433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3640056/" ]
`ur''` is Python 2 syntax; you are trying to install an [incompatible](http://doc.scrapy.org/en/latest/faq.html#does-scrapy-work-with-python-3) package meant for Python 2, not Python 3:

```
_ajax_crawlable_re = re.compile(ur'<meta\s+name=["\']fragment["\']\s+content=["\']!["\']/?>')
                                ^^ python2 syntax
```

Also, pip is installed by default with Python 3.4.
Step-by-step way to install Scrapy on Windows 7

1. Install Python 2.7 from the [Python download link](https://www.python.org/downloads) (be sure to install Python 2.7 only, because currently Scrapy is not available for Python 3 on Windows).
2. During the Python install there is a checkbox to add the Python path to the system variables; tick that option. Otherwise you can add the path variable manually. You need to adjust the PATH environment variable to include the paths to the Python executable and additional scripts. The following paths need to be added to PATH: `C:\Python27\;C:\Python27\Scripts\;`
   [![windows add path variable](https://i.stack.imgur.com/KvD96.jpg)](https://i.stack.imgur.com/KvD96.jpg)
   If you have any other problem in adding the path variable, please refer to this [link](https://stackoverflow.com/questions/3701646/how-to-add-to-the-pythonpath-in-windows-7)
3. To update the PATH, open a command prompt in administrator mode and run: `c:\python27\python.exe c:\python27\tools\scripts\win_add2path.py`. Close the command prompt window and reopen it so the changes take effect, then run the following commands to check everything was added to the path variable:
   `python --version`, which will give output such as `Python 2.7.12` (your version might be different than mine)
   `pip --version`, which will give output such as `pip 9.0.1` (your version might be different than mine)
4. You need to install the Microsoft Visual C++ Compiler for Python. You can download it from this [download link](http://www.microsoft.com/en-us/download/details.aspx?id=44266)
5. Then you need to install lxml, a Python library used by Scrapy. You can install it by running the command `pip install lxml` in a command prompt, but if you face some problem with the pip installation you can download it from <http://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml>
   *Download the **lxml** package according to your system architecture.* Open a command prompt in that download directory and run `pip install NAME_OF_PACKAGE.whl`
6. Install pywin32 from this [download link](http://sourceforge.net/projects/pywin32/). *Be sure you download the architecture (win32 or amd64) that matches your system.*
7. Then open a command prompt and run the command `pip install scrapy`.
   I hope this will help in installing Scrapy successfully.
8. For reference you can use these links: [Scrapy official page](https://doc.scrapy.org/en/latest/intro/install.html) and [blog on how to install Scrapy on Windows](http://www.blog.thoughttoast.com/crawling/scrapy/install-scrapy-in-windows/)
13,935
71,102,179
I have tried everything on the internet and nothing works. My code:

```py
@bot.command()
async def bug(ctx, bug):
    deleted_message_id = ctx.id
    await ctx.channel.send(str(deleted_message_id))
    await ctx.send(ctx.author.mention+", you're bug report hs been reported!")
```

I use Python version 3.10
2022/02/13
[ "https://Stackoverflow.com/questions/71102179", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18196675/" ]
If you don't want to store the translation on the elements html themselves, you'll need to have an object mapping the values to the names to be output. Here's a minimalist example: ```js const numberNameMap = { 1: 'one', 2: 'two', 3: 'three', 4: 'four', 5: 'five' }; const select = document.querySelector('select'); const output = document.getElementById('output'); select.addEventListener('change', ({ target: { value } }) => { output.textContent = numberNameMap[value]; }); ``` ```html <select> <option>1</option> <option>2</option> <option>3</option> <option>4</option> <option>5</option> </select> <div id="output"></div> ```
I've simplified your code to do exactly what you want. Try this ```js $(window).keydown(function(e) { if( [37, 38, 39, 40, 13].includes(e.which) ){ e.preventDefault(); let $col = $('.multiple li.selected'); let $row = $col.closest('.multiple'); let $nextCol = $col; let $nextRow = $row; if( [37, 38, 39, 40].includes(e.which) && $col.length === 0 ){ $('.multiple:first-child').find('li:first').addClass('selected'); return; } switch (e.which) { case 38: // up key $nextRow = $row.is(':first-child') ? $row.siblings(":last-child") : $row.prev(); break; case 40: // down key $nextRow = $row.is(':last-child') ? $row.siblings(":first-child") : $row.next(); break; case 37: // left key $nextCol = $col.is(':first-child') ? $col.siblings(":last-child") : $col.prev(); break; case 39: // right key $nextCol = $col.is(':last-child') ? $col.siblings(":first-child") : $col.next(); break; case 13: // enter key if($col.length > 0) $('#screen').text($col.find('a').trigger('click').text()); break; } $('.multiple li').removeClass('selected'); $nextRow.find('li').eq($nextCol.index()).addClass('selected'); } }); ``` ```css li.selected { background: green; } .multiple, .allRows li{ text-align:center; position:relative; display: flex; flex-wrap: nowrap; } .allRows a { text-decoration:none; padding: 3px; border: 1px solid #555; color: #222; margin: 0px; text-align: center; font-size:20px; } #screen { background:#222; color:#ddd; position:absolute; top: 100px; padding:30px 150px; } ``` ```html <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <ul class="allRows"> <div class="multiple"> <li><a href="#" patterns="one" id="one">One</a></li> <li><a href="#" patterns="two" id="two">Two</a></li> <li><a href="#" patterns="three" id="three">Three</a></li> <li><a href="#" patterns="four" id="four">Four</a></li> <li><a href="#" patterns="five" id="five">Five</a></li> <li><a href="#" patterns="six" id="six">Six</a></li> </div> <div class="multiple"> <li><a href="#" patterns="seven" id="seven">Seven</a></li> <li><a href="#" patterns="eight" id="eight">Eight</a></li> <li><a href="#" patterns="nine" id="nine">Nine</a></li> <li><a href="#" patterns="ten" id="ten">Ten</a></li> <li><a href="#" patterns="eleven" id="eleven">Eleven</a></li> <li><a href="#" patterns="twelve" id="twelve">Twelve</a></li> </div> </ul> <div id="screen">Numbers</div> ```
13,940
19,721,027
After reading the [Software Carpentry](http://software-carpentry.org/) essay on [Handling Configuration Files](http://software-carpentry.org/v4/essays/config.html) I'm interested in their *Method #5: put parameters in a dynamically-loaded code module*. Basically I want the power to do calculations within my input files to create my variables.
Based on this SO answer for [how to import a string as a module](https://stackoverflow.com/a/7548190/2530083) I've written the following function to import a string, an open file object, or a StringIO as a module. I can then access variables using the `.` operator:

```
import imp

def make_module_from_text(reader):
    """make a module from file, StringIO, text etc

    Parameters
    ----------
    reader : file_like object
        object to get text from

    Returns
    -------
    m: module
        text as module

    """
    #for making module out of strings/files see https://stackoverflow.com/a/7548190/2530083

    mymodule = imp.new_module('mymodule') #may need to randomise the name; not sure
    exec reader in mymodule.__dict__
    return mymodule
```

then

```
import textwrap

reader = textwrap.dedent("""\
    import numpy as np
    a = np.array([0,4,6,7], dtype=float)
    a_normalise = a/a[-1]
    """)
mymod = make_module_from_text(reader)
print(mymod.a_normalise)
```

gives

```
[ 0.          0.57142857  0.85714286  1.        ]
```

All well and good so far, but having looked around it seems that using Python's `eval` and `exec` introduces security holes if I don't trust the input. A common response is "Never use `eval` or `exec`; they are evil", but I really like the power and flexibility of executing the code. Using `{'__builtins__': None}` I don't think will work for me, as I will want to import other modules (e.g. `import numpy as np` in my above code). A number of people (e.g. [here](https://stackoverflow.com/a/9558001/2530083)) suggest using the `ast` module, but I am not at all clear on how to use it (can `ast` be used with `exec`?). Are there simple ways to whitelist/allow specific functionality (e.g. [here](https://stackoverflow.com/questions/10661079/restricting-pythons-syntax-to-execute-user-code-safely-is-this-a-safe-approach))? Are there simple ways to blacklist/disallow specific functionality? Is there a magic way to say "execute this, but don't do anything nasty"?
Basically, what are the options for making sure `exec` doesn't run any nasty malicious code?
**EDIT:**
My example above of normalising an array within my input/configuration file is perhaps a bit simplistic as to what computations I would want to perform within my input/configuration file (I could easily write a method/function in my program to do that). But say my program calculates a property at various times. The user needs to specify the times in some way. Should I only accept a list of explicit time values, so the user has to do some calculations before preparing the input file? (Note: even using a list as a configuration variable is not trivial; [see here](https://stackoverflow.com/a/6759256/2530083).) I think that is very limiting. Should I allow start-end-step values and then use `numpy.linspace` within my program? I think that is limiting too; what if I want to use `numpy.logspace` instead? What if I have some function that can accept a list of important time values and then nicely fills in other times to get well-spaced time values? Wouldn't it be good for the user to be able to import that function and use it? What if I want to input a list of user-defined objects?
The thing is, I don't want to code for all these specific cases when the functionality of Python is already there for me and my user to use. Once I accept that I do indeed want the power and functionality of executing code in my input/configuration file, I wonder if there is actually any difference, security-wise, between using `exec`, using `importlib`, using [imp.load\_source](https://stackoverflow.com/a/67692/2530083), and so on. To me there is the limited standard configparser or the all-powerful, all-dangerous `exec`. I just wish there was some middle ground with which I could say "execute this... without stuffing up my computer".
2013/11/01
[ "https://Stackoverflow.com/questions/19721027", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2530083/" ]
"Never use eval or exec; they are evil". This is the only answer that works here, I think. There is no fully safe way to use exec/eval on an untrusted string string or file. The best you can do is to come up with your own language, and either interpret it yourself or turn it into safe Python code before handling it to exec. Be careful to proceed from the ground up --- if you allow the whole Python language minus specific things you thought of as dangerous, it would never be really safe. For example, you can use the ast module if you want Python-like syntax; and then write a small custom ast interpreter that only recognizes a small subset of all possible nodes. That's the safest solution.
If you are willing to use PyPy, then its [sandboxing feature](http://doc.pypy.org/en/latest/sandbox.html) is specifically designed for running untrusted code, so it may be useful in your case. Note that there are some issues with CPython interoperability mentioned that you may need to check. Additionally, there is a link on this page to an abandoned project called pysandbox, explaining the [problems with sandboxing](https://mail.python.org/pipermail/python-dev/2013-November/130132.html) directly within python.
13,942
69,607,117
I am trying to train a DL model with tf.keras. I have 67 classes of images inside the image directory, like airport, bookstore, casino, and for each class I have at least 100 images. The data is from the [MIT indoor scene](http://web.mit.edu/torralba/www/indoor.html) dataset. But when I try to train the model, I constantly get this error:

```
tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
  (0) Invalid argument: Input size should match (header_size + row_size * abs_height) but they differ by 2
  [[{{node decode_image/DecodeImage}}]]
  [[IteratorGetNext]]
  (1) Invalid argument: Input size should match (header_size + row_size * abs_height) but they differ by 2
  [[{{node decode_image/DecodeImage}}]]
  [[IteratorGetNext]]
  [[IteratorGetNext/_7]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_1570]

Function call stack:
train_function -> train_function
```

I tried to resolve the problem by resizing the images with the resizing layer; I also included `labels='inferred'` and `label_mode='categorical'` in the `image_dataset_from_directory` method and `loss='categorical_crossentropy'` in the model compile method. Previously, labels and label_mode were not set and loss was sparse_categorical_crossentropy, which I think is not right, so I changed them as described above. But I am still having problems.
There is one question related to this on [Stack Overflow](https://stackoverflow.com/questions/65517669/invalidargumenterror-with-model-fit-in-tensorflow), but the person did not mention how he solved the problem; he just updated it with "My suggestion is to check the metadata of the dataset. It helped to fix my problem." He did not mention what metadata to look for or what he did to solve the problem.
The code that I am using to train the model:

```
import os
import PIL
import numpy as np
import pandas as pd
import tensorflow as tf

from tensorflow.keras.layers import Conv2D, Dense, MaxPooling2D, GlobalAveragePooling2D
from tensorflow.keras.layers import Flatten, Dropout, BatchNormalization, Rescaling
from tensorflow.keras.models import Sequential
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
from tensorflow.keras.regularizers import l1, l2

import matplotlib.pyplot as plt
import matplotlib.image as mpimg

from pathlib import Path

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

# define directory paths
PROJECT_PATH = Path.cwd()
DATA_PATH = PROJECT_PATH.joinpath('data', 'Images')

# create a dataset
batch_size = 32
img_height = 180
img_width = 180

train = tf.keras.utils.image_dataset_from_directory(
    DATA_PATH,
    validation_split=0.2,
    subset="training",
    labels="inferred",
    label_mode="categorical",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size
)

valid = tf.keras.utils.image_dataset_from_directory(
    DATA_PATH,
    validation_split=0.2,
    subset="validation",
    labels="inferred",
    label_mode="categorical",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size
)

class_names = train.class_names
for image_batch, label_batch in train.take(1):
    print("\nImage shape:", image_batch.shape)
    print("Label Shape", label_batch.shape)

# resize image
resize_layer = tf.keras.layers.Resizing(img_height, img_width)
train = train.map(lambda x, y: (resize_layer(x), y))
valid = valid.map(lambda x, y: (resize_layer(x), y))

# standardize the data
normalization_layer = tf.keras.layers.Rescaling(1./255)
train = train.map(lambda x, y: (normalization_layer(x), y))
valid = valid.map(lambda x, y: (normalization_layer(x), y))

image_batch, labels_batch = next(iter(train))
first_image = image_batch[0]
print("\nImage (min, max) value:", (np.min(first_image), np.max(first_image)))
print()

# configure the dataset for performance
AUTOTUNE = tf.data.AUTOTUNE
train = train.cache().prefetch(buffer_size=AUTOTUNE)
valid = valid.cache().prefetch(buffer_size=AUTOTUNE)

# create a basic model architecture
num_classes = len(class_names)

# initiate a sequential model
model = Sequential()

# CONV1
model.add(Conv2D(filters=64, kernel_size=3, activation="relu",
                 input_shape=(img_height, img_width, 3)))
model.add(BatchNormalization())

# CONV2
model.add(Conv2D(filters=64, kernel_size=3, activation="relu"))
model.add(BatchNormalization())

# Pool + Dropout
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.3))

# CONV3
model.add(Conv2D(filters=128, kernel_size=3, activation="relu"))
model.add(BatchNormalization())

# CONV4
model.add(Conv2D(filters=128, kernel_size=3, activation="relu"))
model.add(BatchNormalization())

# POOL + Dropout
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.3))

# FC5
model.add(Flatten())
model.add(Dense(128, activation="relu"))
model.add(Dense(num_classes, activation="softmax"))

# compile the model
model.compile(loss="categorical_crossentropy",
              optimizer="adam",
              metrics=['accuracy'])

# train the model
epochs = 25
early_stopping_cb = EarlyStopping(patience=10, restore_best_weights=True)
history = model.fit(train, validation_data=valid, epochs=epochs,
                    callbacks=[early_stopping_cb], verbose=2)

result = pd.DataFrame(history.history)
print()
print(result.head())
```

**Note -** I just modified the code to make it as simple as possible to reduce the error. The model ran for a few batches, then got the above error again.

```
Epoch 1/10
732/781 [===========================>..]
- ETA: 22s - loss: 3.7882Traceback (most recent call last): File ".\02_model1.py", line 139, in <module> model.fit(train, epochs=10, validation_data=valid) File "C:\Users\BHOLA\anaconda3\lib\site-packages\keras\engine\training.py", line 1184, in fit tmp_logs = self.train_function(iterator) File "C:\Users\BHOLA\anaconda3\lib\site-packages\tensorflow\python\eager\def_function.py", line 885, in __call__ result = self._call(*args, **kwds) File "C:\Users\BHOLA\anaconda3\lib\site-packages\tensorflow\python\eager\def_function.py", line 917, in _call return self._stateless_fn(*args, **kwds) # pylint: disable=not-callable File "C:\Users\BHOLA\anaconda3\lib\site-packages\tensorflow\python\eager\function.py", line 3039, in __call__ return graph_function._call_flat( File "C:\Users\BHOLA\anaconda3\lib\site-packages\tensorflow\python\eager\function.py", line 1963, in _call_flat return self._build_call_outputs(self._inference_function.call( File "C:\Users\BHOLA\anaconda3\lib\site-packages\tensorflow\python\eager\function.py", line 591, in call outputs = execute.execute( File "C:\Users\BHOLA\anaconda3\lib\site-packages\tensorflow\python\eager\execute.py", line 59, in quick_execute tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found. (0) Invalid argument: Input size should match (header_size + row_size * abs_height) but they differ by 2 [[{{node decode_image/DecodeImage}}]] [[IteratorGetNext]] (1) Invalid argument: Input size should match (header_size + row_size * abs_height) but they differ by 2 [[{{node decode_image/DecodeImage}}]] [[IteratorGetNext]] [[IteratorGetNext/_2]] 0 successful operations. 0 derived errors ignored. [Op:__inference_train_function_11840] Function call stack: train_function -> train_function ``` **Modified code -** ``` # create a dataset batch_size = 16 img_height = 256 img_width = 256 train = image_dataset_from_directory( DATA_PATH, validation_split=0.2, subset="training", labels="inferred", label_mode="categorical", seed=123, image_size=(img_height, img_width), batch_size=batch_size ) valid = image_dataset_from_directory( DATA_PATH, validation_split=0.2, subset="validation", labels="inferred", label_mode="categorical", seed=123, image_size=(img_height, img_width), batch_size=batch_size ) model = tf.keras.applications.Xception( weights=None, input_shape=(img_height, img_width, 3), classes=67) model.compile(optimizer='rmsprop', loss='categorical_crossentropy') model.fit(train, epochs=10, validation_data=valid) ```
2021/10/17
[ "https://Stackoverflow.com/questions/69607117", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12195048/" ]
I think it might be a corrupted file. It is throwing an exception after a data integrity check in the `DecodeBMPv2` function (<https://github.com/tensorflow/tensorflow/blob/0b6b491d21d6a4eb5fbab1cca565bc1e94ca9543/tensorflow/core/kernels/image/decode_image_op.cc#L594>)
If that's the issue and you want to find out which file(s) are throwing the exception, you can try something like the code below on the directory containing the files. Remove/replace any files it finds and it should train normally.

```py
import os
import glob

import tensorflow as tf

# assuming you point to the directory containing the label folders
img_paths = glob.glob(os.path.join('<path_to_dataset>', '*/*.*'))
bad_paths = []

for image_path in img_paths:
    try:
        img_bytes = tf.io.read_file(image_path)
        decoded_img = tf.io.decode_image(img_bytes)
    except tf.errors.InvalidArgumentError as e:
        print(f"Found bad path {image_path}...{e}")
        bad_paths.append(image_path)
    else:
        print(f"{image_path}: OK")

print("BAD PATHS:")
for bad_path in bad_paths:
    print(f"{bad_path}")
```
This is in fact a **corrupted file problem**. However, the underlying issue is far more subtle. Here is an explanation of what is going on and how to circumvent this obstacle.
I encountered the very same problem on the very same [MIT Indoor Scene Classification](https://www.kaggle.com/datasets/itsahmad/indoor-scenes-cvpr-2019) dataset. All the images are JPEG files (*spoiler alert: well, are they?*).
It has been correctly noted that the exception is raised exactly [here](https://github.com/tensorflow/tensorflow/blob/0b6b491d21d6a4eb5fbab1cca565bc1e94ca9543/tensorflow/core/kernels/image/decode_image_op.cc#L594), in a C++ file related to the [`tf.io.decode_image()`](https://www.tensorflow.org/api_docs/python/tf/io/decode_image) function. It is the `decode_image()` function where the issue lies; it is called by [`tf.keras.utils.image_dataset_from_directory()`](https://www.tensorflow.org/api_docs/python/tf/keras/utils/image_dataset_from_directory).
On the other hand, [`tf.keras.preprocessing.image.ImageDataGenerator().flow_from_directory()`](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator#flow_from_directory) relies on [Pillow](https://pillow.readthedocs.io/en/stable/) under the hood (shown [here](https://github.com/keras-team/keras/blob/07e13740fd181fc3ddec7d9a594d8a08666645f6/keras/utils/image_utils.py#L340), which is called from [here](https://github.com/keras-team/keras/blob/07e13740fd181fc3ddec7d9a594d8a08666645f6/keras/preprocessing/image.py#L324)). This is the reason why adopting the `ImageDataGenerator` class works.
After closer inspection of the corresponding C++ source file, one can observe that the function is actually called `DecodeBmpV2(...)`, as defined [here](https://github.com/tensorflow/tensorflow/blob/0b6b491d21d6a4eb5fbab1cca565bc1e94ca9543/tensorflow/core/kernels/image/decode_image_op.cc#L519). This raises the question of why a JPEG image is being treated as a BMP one. The aforementioned function is actually called [here](https://github.com/tensorflow/tensorflow/blob/0b6b491d21d6a4eb5fbab1cca565bc1e94ca9543/tensorflow/core/kernels/image/decode_image_op.cc#L211), as part of a basic switch statement whose aim is to direct further data conversion according to the determined type. Thus, the piece of code that determines the file type should be subjected to deeper analysis. The file type is determined according to the starting bytes of the file (see [here](https://github.com/tensorflow/tensorflow/blob/0b6b491d21d6a4eb5fbab1cca565bc1e94ca9543/tensorflow/core/kernels/image/decode_image_op.cc#L62)). Long story short, a simple comparison of so-called [magic bytes](https://en.wikipedia.org/wiki/List_of_file_signatures) that signify the file type is performed. Here is a code extract with the corresponding magic bytes.

```
static const char kPngMagicBytes[] = "\x89\x50\x4E\x47\x0D\x0A\x1A\x0A";
static const char kGifMagicBytes[] = "\x47\x49\x46\x38";
static const char kBmpMagicBytes[] = "\x42\x4d";
static const char kJpegMagicBytes[] = "\xff\xd8\xff";
```

After identifying which files raise the exception, I saw that they were supposed to be JPEG files; however, their starting bytes indicated a BMP format instead. Here is an example of 3 files and their first 10 bytes.

```
laundromat\laundry_room_area.jpg b'ffd8ffe000104a464946'
laundromat\Laundry_Room_Edens1A.jpg b'ffd8ffe000104a464946'
laundromat\Laundry_Room_bmp.jpg b'424d3800030000000000'
```

Look at the last one. It even contains the word *bmp* in the file name. Why is that so?
I do not know. The dataset **does contain corrupted image files**. Someone probably converted the file from BMP to JPEG, yet the tool used did not work correctly. We can just guess the real reason, but that is now irrelevant.
The method by which the file type is determined is different from the one used by the Pillow package; thus, there is nothing we can do about it. The recommendation is to identify the corrupted files, which is actually easy, or to rely on the `ImageDataGenerator`. However, I would advise against the latter, as this class has been marked as deprecated.
It is not a bug in the code per se, but rather bad data inadvertently introduced into the dataset.
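Building on the magic-byte table above, here is a small illustrative sketch for flagging files whose `.jpg` extension disagrees with their first bytes; the glob pattern is a placeholder for your dataset layout:

```py
import glob

JPEG_MAGIC = b"\xff\xd8\xff"
BMP_MAGIC = b"\x42\x4d"

for path in glob.glob("Images/*/*.jpg"):
    with open(path, "rb") as f:
        head = f.read(3)
    if not head.startswith(JPEG_MAGIC):
        looks_like = "BMP" if head.startswith(BMP_MAGIC) else "unknown"
        print(path, "does not start with JPEG magic bytes (looks like %s)" % looks_like)
```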
13,943
61,183,427
I am a beginner in Python and I am trying to insert a percentage sign in the output. Below is the code that I've got:

```
print('accuracy :', accuracy_score(y_true, y_pred)*100)
```

When I run this code I get 50.0001, and I would like to have a % sign at the end of the number, so I tried the following:

```
print('Macro average precision :', precision_score(y_true, y_pred, average='macro')*100"%\n")
```

I got an error saying `SyntaxError: invalid syntax`.
Can anyone help with this? Thank you!
2020/04/13
[ "https://Stackoverflow.com/questions/61183427", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10267370/" ]
Use f strings: ``` print(f"Macro average precision : {precision_score(y_true, y_pred, average='macro')*100}%\n") ``` Or convert the value to string, and add (*concatenate*) the strings: ``` print('Macro average precision : ' + str(precision_score(y_true, y_pred, average='macro')*100) + "%\n") ``` See [the discussion here](https://stackoverflow.com/questions/59180574/string-concatenation-with-vs-f-string_) of the merits of each; basically the first is more convenient; and the second is computationally faster, and perhaps more simple to understand.
The simple, "low-tech" way is to correct your (lack of) output expression. Convert the float to a string and concatenate. To make it easy to follow: ``` pct = precision_score(y_true, y_pred, average='macro')*100 print('Macro average precision : ' + str(pct) + "%\n") ``` This is inelegant, but easy to follow.
13,944
52,418,698
I have a Spark and Airflow cluster, and I want to send a Spark job from the Airflow container to the Spark container. But I am new to Airflow and I don't know which configuration I need to perform. I copied `spark_submit_operator.py` under the plugins folder.

```py
from airflow import DAG
from airflow.contrib.operators.spark_submit_operator import SparkSubmitOperator
from datetime import datetime, timedelta

args = {
    'owner': 'airflow',
    'start_date': datetime(2018, 7, 31)
}

dag = DAG('spark_example_new', default_args=args, schedule_interval="*/10 * * * *")

operator = SparkSubmitOperator(
    task_id='spark_submit_job',
    conn_id='spark_default',
    java_class='Simple',
    application='/spark/abc.jar',
    total_executor_cores='1',
    executor_cores='1',
    executor_memory='2g',
    num_executors='1',
    name='airflow-spark-example',
    verbose=False,
    driver_memory='1g',
    application_args=["1000"],
    conf={'master':'spark://master:7077'},
    dag=dag,
)
```

**master** is the hostname of our Spark master container. When I run the DAG, it produces the following error:

```
[2018-09-20 05:57:46,637] {{models.py:1569}} INFO - Executing <Task(SparkSubmitOperator): spark_submit_job> on 2018-09-20T05:57:36.756154+00:00
[2018-09-20 05:57:46,637] {{base_task_runner.py:124}} INFO - Running: ['bash', '-c', 'airflow run spark_example_new spark_submit_job 2018-09-20T05:57:36.756154+00:00 --job_id 4 --raw -sd DAGS_FOLDER/firstJob.py --cfg_path /tmp/tmpn2hznb5_']
[2018-09-20 05:57:47,002] {{base_task_runner.py:107}} INFO - Job 4: Subtask spark_submit_job [2018-09-20 05:57:47,001] {{settings.py:174}} INFO - setting.configure_orm(): Using pool settings. pool_size=5, pool_recycle=1800
[2018-09-20 05:57:47,312] {{base_task_runner.py:107}} INFO - Job 4: Subtask spark_submit_job [2018-09-20 05:57:47,311] {{__init__.py:51}} INFO - Using executor CeleryExecutor
[2018-09-20 05:57:47,428] {{base_task_runner.py:107}} INFO - Job 4: Subtask spark_submit_job [2018-09-20 05:57:47,428] {{models.py:258}} INFO - Filling up the DagBag from /usr/local/airflow/dags/firstJob.py
[2018-09-20 05:57:47,447] {{base_task_runner.py:107}} INFO - Job 4: Subtask spark_submit_job [2018-09-20 05:57:47,447] {{cli.py:492}} INFO - Running <TaskInstance: spark_example_new.spark_submit_job 2018-09-20T05:57:36.756154+00:00 [running]> on host e6dd59dc595f
[2018-09-20 05:57:47,471] {{logging_mixin.py:95}} INFO - [2018-09-20 05:57:47,470] {{spark_submit_hook.py:283}} INFO - Spark-Submit cmd: ['spark-submit', '--master', 'yarn', '--conf', 'master=spark://master:7077', '--num-executors', '1', '--total-executor-cores', '1', '--executor-cores', '1', '--executor-memory', '2g', '--driver-memory', '1g', '--name', 'airflow-spark-example', '--class', 'Simple', '/spark/ugur.jar', '1000']
[2018-09-20 05:57:47,473] {{models.py:1736}} ERROR - [Errno 2] No such file or directory: 'spark-submit': 'spark-submit'
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/airflow/models.py", line 1633, in _run_raw_task
    result = task_copy.execute(context=context)
  File "/usr/local/lib/python3.6/site-packages/airflow/contrib/operators/spark_submit_operator.py", line 168, in execute
    self._hook.submit(self._application)
  File "/usr/local/lib/python3.6/site-packages/airflow/contrib/hooks/spark_submit_hook.py", line 330, in submit
    **kwargs)
  File "/usr/local/lib/python3.6/subprocess.py", line 709, in __init__
    restore_signals, start_new_session)
  File "/usr/local/lib/python3.6/subprocess.py", line 1344, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'spark-submit': 'spark-submit'
```

It's running this command:

```
Spark-Submit cmd: ['spark-submit', '--master', 'yarn', '--conf', 'master=spark://master:7077', '--num-executors', '1', '--total-executor-cores', '1', '--executor-cores', '1', '--executor-memory', '2g', '--driver-memory', '1g', '--name', 'airflow-spark-example', '--class', 'Simple', '/spark/ugur.jar', '1000']
```

but I didn't use yarn.
2018/09/20
[ "https://Stackoverflow.com/questions/52418698", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4230665/" ]
If you use SparkSubmitOperator, the connection to the master will be set to "yarn" by default, regardless of the master you set in your Python code. However, you can override the master by specifying a `conn_id` through the operator's constructor, on the condition that you have already created that `conn_id` in the "Admin -> Connections" menu of the Airflow web interface.
I hope this helps.
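For illustration, a sketch of the operator call under that assumption; the connection name is made up and would have to exist under "Admin -> Connections" (e.g. with host `spark://master` and port `7077`):

```py
from airflow.contrib.operators.spark_submit_operator import SparkSubmitOperator

# `dag` is the DAG object from the question's script
operator = SparkSubmitOperator(
    task_id='spark_submit_job',
    conn_id='spark_master',  # overrides the default 'spark_default' (yarn)
    java_class='Simple',
    application='/spark/abc.jar',
    dag=dag,
)
```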
I guess you need to set the master in the extra options of your connection for this Spark conn id (`spark_default`). By default it is yarn, so try `{"master": "your-conn"}`.
Also, does your Airflow user have `spark-submit` on its PATH? From the logs it looks like it doesn't.
13,947
28,034,947
So I'm trying to do something like this:

```
#include <stdio.h>

int main(void)
{
    char string[] = "bobgetbob";
    int i = 0, count = 0;
    for(i; i < 10; ++i)
    {
        if(string[i] == 'b' && string[i+1] == 'o' && string[i+2] == 'b')
            count++;
    }
    printf("Number of 'bobs' is: %d\n",count);
}
```

but in Python terms, which works like this:

```
count = 0
s = "bobgetbob"
for i in range(0,len(s)):
    if s[i] == 'b' and s[i+1] == 'o' and s[i+2] == 'b':
        count += 1
print "Number of 'bobs' is: %d" % count
```

Any time I get a string that happens to end with a 'b', or where the second-to-last character is a 'b' followed by an 'o', I get an index out of range error. In C this is not an issue, because it will still perform the comparison, with a garbage value I'm assuming, which works in C. How do I go about doing this in Python without going outside of the range?
Could I iterate through the string itself, like so?

```
for letter in s:
    #compare stuff
```

How would I compare specific indexes in the string using the above method? If I try to use

```
letter == 'b' and letter + 1 == 'o'
```

this is invalid syntax in Python. My issue is that I'm thinking in terms of C, and I'm not completely sure of the right syntax to tackle this situation in Python. I know about string slicing, like so:

```
for i in range(0,len(s)):
    if s[i:i+3] == "bob":
        count += 1
```

This solves this specific problem, but I feel like using specific index positions to compare characters is a very powerful tool. I can't figure out for the life of me how to properly do this in Python without hitting situations that break it, like the first Python example above.
2015/01/19
[ "https://Stackoverflow.com/questions/28034947", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4123828/" ]
> Could i iterate through the string itself like so?
>
> ```
> for letter in s:
>     #compare stuff
>
> ```
>
> How would I compare specific indexes in the string using the above method?

The pythonic way of doing this kind of comparison without specifically referring to indexes would be:

```
for curr, nextt, nexttt in zip(s, s[1:], s[2:]):
    if curr == 'b' and nextt == 'o' and nexttt == 'b':
        count += 1
```

This avoids out-of-index errors. You can also use a comprehension, and in this way you eliminate the need to initialize and update the `count` variable. This line will do the same as your C code:

```
>>> sum(1 for curr, nextt, nexttt in zip(s, s[1:], s[2:]) if curr == 'b' and nextt == 'o' and nexttt == 'b')
2
```

**How it works:**
This is the result of the zip between the lists:

```
>>> s
'bobgetbob'
>>> s[1:]
'obgetbob'
>>> s[2:]
'bgetbob'
>>> zip(s, s[1:], s[2:])
[('b', 'o', 'b'), ('o', 'b', 'g'), ('b', 'g', 'e'), ('g', 'e', 't'), ('e', 't', 'b'), ('t', 'b', 'o'), ('b', 'o', 'b')]
```

In the loop you iterate over the list, unpacking each of the tuples into the three variables. Finally, if you really need the index, you can use [`enumerate`](https://docs.python.org/2/library/functions.html#enumerate):

```
>>> for i, c in enumerate(s): print i, c

0 b
1 o
2 b
3 g
4 e
5 t
6 b
7 o
8 b
```
Try this - i.e. go to len(s)-2 as you won't ever get a bob starting after that point ``` count = 0 s = "bobgetbob" for i in range(len(s) - 2): if s[i] == 'b' and s[i + 1] == 'o' and s[i + 2] == 'b': count += 1 print "Number of 'bobs' is: %d" % count ```
13,948
11,464,750
I am working on a Python script to check if a URL is working. The script writes the URL and response code to a log file. To speed up the check, I am using threading and a queue.
The script works well if the number of URLs to check is small, but when increasing the number of URLs to hundreds, some URLs will simply be missing from the log file. Is there anything I need to fix?
My script is:

```
#!/usr/bin/env python
import Queue
import threading
import urllib2,urllib,sys,cx_Oracle,os
import time
from urllib2 import HTTPError, URLError

queue = Queue.Queue()
##print_queue = Queue.Queue()

class NoRedirectHandler(urllib2.HTTPRedirectHandler):
    def http_error_302(self, req, fp, code, msg, headers):
        infourl = urllib.addinfourl(fp, headers, req.get_full_url())
        infourl.status = code
        infourl.code = code
        return infourl
    http_error_300 = http_error_302
    http_error_301 = http_error_302
    http_error_303 = http_error_302
    http_error_307 = http_error_302

class ThreadUrl(threading.Thread):
    #Threaded Url Grab
##    def __init__(self, queue, print_queue):
    def __init__(self, queue,error_log):
        threading.Thread.__init__(self)
        self.queue = queue
##        self.print_queue = print_queue
        self.error_log = error_log

    def do_something_with_exception(self,idx,url,error_log):
        exc_type, exc_value = sys.exc_info()[:2]
##        self.print_queue.put([idx,url,exc_type.__name__])
        with open( error_log, 'a') as err_log_f:
            err_log_f.write("{0},{1},{2}\n".format(idx,url,exc_type.__name__))

    def openUrl(self,pair):
        try:
            idx = pair[1]
            url = 'http://'+pair[2]
            opener = urllib2.build_opener(NoRedirectHandler())
            urllib2.install_opener(opener)
            request = urllib2.Request(url)
            request.add_header('User-Agent', 'Mozilla/5.0 (Windows NT 5.1; rv:13.0) Gecko/20100101 Firefox/13.0.1')

            #open urls of hosts
            resp = urllib2.urlopen(request, timeout=10)
##            self.print_queue.put([idx,url,resp.code])
            with open( self.error_log, 'a') as err_log_f:
                err_log_f.write("{0},{1},{2}\n".format(idx,url,resp.code))
        except:
            self.do_something_with_exception(idx,url,self.error_log)

    def run(self):
        while True:
            #grabs host from queue
            pair = self.queue.get()
            self.openUrl(pair)
            #signals to queue job is done
            self.queue.task_done()

def readUrlFromDB(queue,connect_string,column_name,table_name):
    try:
        connection = cx_Oracle.Connection(connect_string)
        cursor = cx_Oracle.Cursor(connection)
        query = 'select ' + column_name + ' from ' + table_name
        cursor.execute(query)

        #Count lines in the file
        rows = cursor.fetchall()
        total = cursor.rowcount

        #Loop through returned urls
        for row in rows:
            #print row[1],row[2]
##            url = 'http://'+row[2]
            queue.put(row)
        cursor.close()
        connection.close()
        return total
    except cx_Oracle.DatabaseError, e:
        print e[0].context
        raise

def main():
    start = time.time()
    error_log = "D:\\chkWebsite_Error_Log.txt"

    #Check if error_log file exists
    #If exists then deletes it
    if os.path.isfile(error_log):
        os.remove(error_log)

    #spawn a pool of threads, and pass them queue instance
    for i in range(10):
        t = ThreadUrl(queue,error_log)
        t.setDaemon(True)
        t.start()

    connect_string,column_name,table_name = "user/pass@db","*","T_URL_TEST"
    tn = readUrlFromDB(queue,connect_string,column_name,table_name)

    #wait on the queue until everything has been processed
    queue.join()
##    print_queue.join()

    print "Total retrived: {0}".format(tn)
    print "Elapsed Time: %s" % (time.time() - start)

main()
```
2012/07/13
[ "https://Stackoverflow.com/questions/11464750", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1516432/" ]
Python's threading module isn't really multithreaded because of the global interpreter lock, <http://wiki.python.org/moin/GlobalInterpreterLock>; as such you should really use `multiprocessing` <http://docs.python.org/library/multiprocessing.html> if you want to take advantage of multiple cores. Also, you seem to be accessing a file simultaneously from several threads: ``` with open( self.error_log, 'a') as err_log_f: err_log_f.write("{0},{1},{2}\n".format(idx,url,resp.code)) ``` This is really bad AFAIK: if two threads try to write to the same file at the same time, or almost at the same time (keep in mind they're not really multithreaded), the behavior tends to be undefined; imagine one thread writing while another has just closed the file... Anyway, you would need a third queue to handle writing to the file.
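A minimal sketch of that third-queue idea, assuming the original script's Python 2 environment (the names `write_queue`, `log_writer` and the `None` sentinel are illustrative, not from the original code):

```
import Queue
import threading

write_queue = Queue.Queue()

def log_writer(log_path):
    # A single thread owns the file, so no two writes ever interleave.
    with open(log_path, 'a') as f:
        while True:
            record = write_queue.get()
            if record is None:  # sentinel value: shut the writer down
                break
            f.write("{0},{1},{2}\n".format(*record))

writer = threading.Thread(target=log_writer, args=("chkWebsite_Error_Log.txt",))
writer.start()

# Worker threads then call write_queue.put((idx, url, code))
# instead of opening the file themselves; put None when done.
```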
At first glance this looks like a race condition, since many threads are trying to write to the log file at the same time. See [this question](https://stackoverflow.com/q/489861/520779) for some pointers on how to lock a file for writing (so only one thread can access it at a time).
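A minimal sketch of that locking approach (an editorial addition; names are illustrative, and it assumes the threads from the original script):

```
import threading

log_lock = threading.Lock()

def log_result(error_log, idx, url, code):
    # Only one thread at a time may append to the log file.
    with log_lock:
        with open(error_log, 'a') as f:
            f.write("{0},{1},{2}\n".format(idx, url, code))
```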
13,957
13,350,427
I am learning to use JavaScript, Ajax, Python and Django together. In my project, a user selects a language from a drop-down list. The selection is then sent back to the server, and the server sends the response back to the Django template. This is done by JavaScript. In the Django template, I need the response, for example German, to update the HTML code. How do I pass the response to the HTML code? The response can be seen in the range of .... How can I do it without reloading the HTML page? Thanks.
2012/11/12
[ "https://Stackoverflow.com/questions/13350427", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1165201/" ]
You could use jQuery to send the Ajax request, and the server could send the response with HTML content. For example, Server: when the server receives the Ajax request, it returns the HTML content, i.e. a rendered template, to the client via Ajax: ``` def update_html_on_client(request): language = request.GET.get('language', None) # for the selected language do something cal_1 = ... return render_to_response('my_template.html', {'cal': cal_1}, context_instance=template.RequestContext(request)) ``` Template: this is an example of the function you would use to generate the Ajax request. You can select the div to populate with the HTML response returned by the server. ``` function getServerResponse(){ $.ajax({ url: 'your_url_here', data: {language:'German'}, dataType:'html', success : function(data, status, xhr){ $('#server_response').html(data); } }); } ```
Because your templates are rendered on the server, your best bet is to simply reload the page (which re-renders the page with your newly selected language). An alternative to using ajax would be to store the language in a cookie, that way you don't have to maintain state on the client. Either way, the reload is still necessary. You could investigate client side templating. [Handlebars](http://handlebarsjs.com/) is a good choice.
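A sketch of that cookie idea in a Django view (an editorial addition; the view name, cookie name and redirect target are illustrative assumptions, not from the original answer):

```
from django.http import HttpResponseRedirect

def set_language(request):
    language = request.GET.get('language', 'en')
    # The reload re-renders the page server-side with the new language.
    response = HttpResponseRedirect('/')
    response.set_cookie('site_language', language)
    return response
```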
13,958
66,889,445
I am trying to push my app to heroku, I am following the tutorial from udemy - everything goes smooth as explained. Once I am at the last step - doing `git push heroku master` I get the following error in the console: ``` (flaskdeploy) C:\Users\dmitr\Desktop\jose\FLASK_heroku>git push heroku master Enumerating objects: 5, done. Counting objects: 100% (5/5), done. Delta compression using up to 8 threads remote: ! remote: ! git push heroku <branchname>:main remote: ! remote: ! This article goes into details on the behavior: remote: ! https://devcenter.heroku.com/articles/duplicate-build-version remote: remote: Verifying deploy... remote: remote: ! Push rejected to mitya-test-app. remote: To https://git.heroku.com/mitya-test-app.git ! [remote rejected] master -> master (pre-receive hook declined) error: failed to push some refs to 'https://git.heroku.com/mitya-test-app.git' ``` On a heroku site in the log area I have the following explanation of this error: ``` -----> Building on the Heroku-20 stack -----> Determining which buildpack to use for this app -----> Python app detected -----> Installing python-3.6.13 -----> Installing pip 20.1.1, setuptools 47.1.1 and wheel 0.34.2 -----> Installing SQLite3 -----> Installing requirements with pip Collecting certifi==2020.12.5 Downloading certifi-2020.12.5-py2.py3-none-any.whl (147 kB) Processing /home/linux1/recipes/ci/click_1610990599742/work ERROR: Could not install packages due to an EnvironmentError: [Errno 2] No such file or directory: '/home/linux1/recipes/ci/click_1610990599742/work' ! Push rejected, failed to compile Python app. ! Push failed ``` In my requirements.txt I have the following: ``` certifi==2020.12.5 click @ file:///home/linux1/recipes/ci/click_1610990599742/work Flask @ file:///home/ktietz/src/ci/flask_1611932660458/work gunicorn==20.0.4 itsdangerous @ file:///home/ktietz/src/ci/itsdangerous_1611932585308/work Jinja2 @ file:///tmp/build/80754af9/jinja2_1612213139570/work MarkupSafe @ file:///C:/ci/markupsafe_1607027406824/work Werkzeug @ file:///home/ktietz/src/ci/werkzeug_1611932622770/work wincertstore==0.2 ``` This is, in fact, only a super simple test app that consists of `app.py`: ``` from flask import Flask app = Flask(__name__) @app.route('/') def index(): return "THE SITE IS OK" if __name__ == '__main__': app.run() ``` and `Procfile` with this inside: ``` web: gunicorn app:app ```
2021/03/31
[ "https://Stackoverflow.com/questions/66889445", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11803958/" ]
You could use [repr](https://docs.python.org/3/library/functions.html#repr) ```py print(repr(var)) ```
Maybe you want to use an f-string? ``` var = "textlol" print(f"\"{var}\"") ```
13,959
16,206,224
I have a python list and I would like to export it to a csv file, but I don't want to store all the list in the same row. I would like to slice the list at a given point and start a new line. Something like this: ``` list = [x1,x2,x3,x4,y1,y2,y3,y4] ``` and I would like it to export it in this format ``` x1 x2 x3 x4 y1 y2 y3 y4 ``` So far I have this: ``` import csv A = ["1", "2", "3", "4", "5", "6", "7", "8"] result = open("newfile.csv",'wb') writer = csv.writer(result, dialect = 'excel') writer.writerow(A) result.close ``` And the output looks something like this: ``` 1 2 3 4 5 6 7 8 ``` I would like the output to be ``` 1 2 3 4 5 6 7 8 ``` Any suggestions? Any help is appreciated.
2013/04/25
[ "https://Stackoverflow.com/questions/16206224", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1956586/" ]
For a `list` (call it `seq`) and a target row length (call it `rowsize`), you would do something like this: ``` split_into_rows = [seq[i: i + rowsize] for i in range(0, len(seq), rowsize)] ``` You could then use the writer's `writerows` method to write elements to the file: ``` writer.writerows(split_into_rows) ``` For lazy evaluation, use a generator expression instead: ``` split_into_rows = (seq[i: i + rowsize] for i in range(0, len(seq), rowsize)) ```
I suggest using a temp array B. 1. Get the length of the original array A 2. Copy the first A.length/2 to array B 3. Add new line character to array B. 4. Append the rest of array A to B.
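A rough sketch of those four steps in Python (an editorial illustration; variable names follow the answer's A and B):

```
A = ["1", "2", "3", "4", "5", "6", "7", "8"]
half = len(A) // 2   # step 1: get the length of A
B = A[:half]         # step 2: copy the first half into B
B.append("\n")       # step 3: add a newline marker
B.extend(A[half:])   # step 4: append the rest of A
```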
13,963
13,219,585
im teaching myself python and im abit confused ``` #!/usr/bin/python def Age(): age_ = int(input("How old are you? ")) def Name(): name_ = raw_input("What is your name? ") def Sex(): sex_ = raw_input("Are you a man(1) or a woman(2)? ") if sex_ == 1: man = 1 elif sex_ == 2: woman = 2 else: print "Please enter 1 for man or 2 for woman " Age() Name() Sex() print "Your name is " + name_ + " you are " + age_ + " years old and you are a " + sex_ + "." ``` **Error** > > File "./functions2.py", line 25, in print "Your name is " + > name\_ + " you are " + age\_ + " years old and you are a " + sex\_ + "." > NameError: name 'name\_' is not defined > > > Surely it is defined in the `Name()` function? Im confused :( --- Arr I am now thanks for making it a bit more newb proof, Ive now got a problem in the Sex() function. It was returning a number in the print rather than a word "man" or "woman" so I change the code to try fix it. But im getting the following error now File ``` "./functions2.py", line 16 2 = woman SyntaxError: can't assign to literal ``` I tried to make the 2 an str(2) but it gave me another error. Thanks for you help so far appreciated ``` #!/usr/bin/python def Age(): age_ = raw_input("How old are you? ") return age_ def Name(): name_ = raw_input("What is your name? ") return name_ def Sex(): sex_ = str(raw_input("Are you a man or a woman? ")) if sex_ == 1: 1 = man return sex_ elif sex_ == 2: 2 = woman return sex_ else: print "Please enter man or woman " age_ = Age() name_ = Name() sex_ = Sex() print "Your name is " + name_ + " you are " + age_ + " years old and you are a " + sex_ + "." ```
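As an editorial aside (not part of the original post), a corrected sketch of the `Sex()` function: assignment goes `name = value`, not `value = name`, and `raw_input` returns a string, so the comparison should be against strings:

```
def Sex():
    # raw_input returns a string, so compare against string literals
    sex_ = raw_input("Are you a man or a woman? ")
    if sex_ in ("man", "woman"):
        return sex_
    print "Please enter man or woman"
    return Sex()
```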
2012/11/04
[ "https://Stackoverflow.com/questions/13219585", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1222298/" ]
Programming Lua has example on [non-preemptive multithreading](http://www.lua.org/pil/9.4.html) (using coroutines), which you can probably use almost directly. In my opinion coroutines will be a better solution for your use case. There is also [copas library](http://keplerproject.github.com/copas/index.html) that describes itself as "a dispatcher based on coroutines that can be used by TCP/IP servers", but you can actually use it for sending requests asynchronously too (using a combination of `addthread` and `step` calls).
Lua Lanes creates an entirely new (but minimal) Lua state for each lane. Any upvalues or arguments passed are copied, not referenced; this means your allClients table is being copied, along with the sockets it contains. The error is occurring because the sockets are userdata, which Lua Lanes does not know how to copy without some advice from a C module. Unfortunately, LuaSocket offers no such advice. (There are ways around this, but be cautious: LuaSocket is *not* thread-safe and synchronisation bugs are very difficult to track down.) Although it will not solve your problem, you should note that you need to require LuaSocket in your spawned lane; it is not copied by default. Solutions! ========== These are ordered from easy to hard (and mostly transcribed from my other answer [here](https://stackoverflow.com/a/12890914/837856)). Single-threaded Polling ----------------------- Repeatedly call polling functions in LuaSocket: * Blocking: call `socket.select` with no time argument and wait for the socket to be readable. * Non-blocking: call `socket.select` with a timeout argument of `0`, and use `sock:settimeout(0)` on the socket you're reading from. Simply call these repeatedly. I would suggest using a [coroutine scheduler](https://gist.github.com/1818054) for the non-blocking version, to allow other parts of the program to continue executing without causing too much delay. (If you go for this solution, I suggest reviewing [Paul's answer](https://stackoverflow.com/questions/13219570/lualanes-and-luasockets/13223515#13223515).) Dual-threaded Polling --------------------- Your main thread does not deal with sockets at all. Instead, it spawns another lane which requires LuaSocket, handles all clients and communicates with the main thread via a [linda](http://kotisivu.dnainternet.net/askok/bin/lanes/#lindas). This is probably the most viable option for you. Multi-threaded Polling ---------------------- Same as above, except each thread handles a subset of all the clients (1 to 1 is possible, but diminishing return will set in with large quantities of clients). I've made a simple example of this, available [here](https://gist.github.com/4026258). It relies on Lua Lanes 3.4.0 ([GitHub repo](https://github.com/LuaLanes/lanes/)) and a patched LuaSocket 2.0.2 ([source](http://luarocks.org/repositories/rocks/#luasocket), [patch](http://www.net-core.org/dl/luasocket-2.0.2-acceptfd.patch), [blog post re' patch](http://www.net-core.org/39/lua/patching-luasocket-to-make-it-compatible)) The results are promising, though you should definitely refactor my example code if you derive from it. [LuaJIT](http://luajit.org/luajit.html) + ENet ---------------------------------------------- [ENet](http://enet.bespin.org/) is a great library. It provides the perfect mix between TCP and UDP: reliable when desired, unreliable otherwise. It also abstracts operating system specific details, much like LuaSocket does. You can use the Lua API to bind it, or directly access it via [LuaJIT's FFI](http://luajit.org/ext_ffi.html) (recommended).
13,970
17,664,841
I have a set and dictionary and a value = 5 ``` v = s = {'a', 'b', 'c'} d = {'b':5 //<--new value} ``` If the key 'b' in dictionary d for example is in set s then I want to make that value equal to the new value when I return a dict comprehension or 0 if the key in set s is not in the dictionary d. So this is my code to do it where s['b'] = 5 and my new dictionary is is ... ``` {'a':0, 'b':5, 'c':0} ``` I wrote a dict comprehension ``` { k:d[k] if k in d else k:0 for k in s} ^ SyntaxError: invalid syntax ``` Why?! Im so furious it doesnt work. This is how you do if else in python isnt it?? So sorry everyone. For those who visited this page I originally put { k:d[k] if k in v else k:0 for k in v} and s['b'] = 5 was just a representation that the new dictionary i created would have a key 'b' equaling 5, but it isnt correct cus you cant iterate a set like that. So to reiterate v and s are equal. They just mean vector and set.
2013/07/15
[ "https://Stackoverflow.com/questions/17664841", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1509483/" ]
The expanded form of what you're trying to achieve is ``` a = {} for k in v: a[k] = d[k] if k in d else 0 ``` where `d[k] if k in d else 0` is [Python's version of the ternary operator](https://docs.python.org/dev/faq/programming.html#is-there-an-equivalent-of-c-s-ternary-operator). See? You need to drop `k:` from the right part of the expression: ``` {k: d[k] if k in d else 0 for k in v} # ≡ {k: (d[k] if k in d else 0) for k in v} ``` You can write it concisely like ``` a = {k: d.get(k, 0) for k in v} ```
You can't use a ternary `if` expression for a name:value pair, because a name:value pair isn't a value. You *can* use an `if` expression for the value or key separately, which seems to be exactly what you want here: ``` {k: (d[k] if k in v else 0) for k in v} ``` However, this is kind of silly. You're doing `for k in v`, so every `k` is in `v` by definition. Maybe you wanted this: ``` {k: (d[k] if k in v else 0) for k in d} ```
13,971
2,616,190
e.g., how can I find out that the executable has been installed in "/usr/bin/python" and the library files in "/usr/lib/python2.6"?
2010/04/11
[ "https://Stackoverflow.com/questions/2616190", "https://Stackoverflow.com", "https://Stackoverflow.com/users/116/" ]
In c# 6.0 you can now do ``` string username = SomeUserObject?.Username; ``` username will be set to null if SomeUserObject is null. If you want it to get the value "", you can do ``` string username = SomeUserObject?.Username ?? ""; ```
It is called null coalescing and is performed as follows: ``` string username = SomeUserObject.Username ?? "" ```
13,974
70,279,160
Trying to find out the correct syntax to traverse through the list to get all values and insert them into Oracle. Edit: below is the JSON structure: ``` [{ "publishTime" : "2021-11-02T20:18:36.223Z", "data" : { "DateTime" : "2021-11-01T15:10:17Z", "Name" : "x1", "frtb" : { "code" : "set1" }, "mainAccounts" : [ { "System" : { "identifier" : { "domain" : "System", "id" : "xxx" }, "name" : "TEST1" }, "Account" : "acc1" }, { "System" : { "identifier" : { "domain" : "System", "id" : "xxx" }, "name" : "TEST2" }, "Account" : "acc2" }, { "System" : { "identifier" : { "domain" : "System", "id" : "xxx" }, "name" : "TEST3" }, "Account" : "acc3" }], "sub" : { "ind" : false, "identifier" : { "domain" : "ops", "id" : "1", "version" : "1" }] ``` My python code : ``` insert_statement = """INSERT INTO mytable VALUES (:1,:2)""" r =requests.get(url, cert=(auth_certificate, priv_key), verify=root_cert, timeout=3600) data=json.loads(r.text) myrows = [] for item in data: try: name = (item.get("data").get("Name")) except AttributeError: name='' try: account= (item.get("data").get("mainAccounts")[0].get("Account") ) except TypeError: account='' rows=(name,account) myrows.append(rows) cursor.executemany(insert_statement,myrows) connection_target.commit() ``` With the above I only get the first value for 'account' in the list, i.e. ("acc1"); how do I get all the values, i.e. (acc1,acc2,acc3)? I have tried the below with no success: try: Account = (item.get("data").get("mainAccounts")[0].get("Account") for item in data["mainAccounts") except TypeError: Account= '' Please advise. Appreciate your help always.
2021/12/08
[ "https://Stackoverflow.com/questions/70279160", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17544527/" ]
Knative Serving does not support such a policy as of today. However, the community thinks that Kubernetes' Descheduler should work just fine, as is discussed in <https://github.com/knative/serving/issues/6176>.
Not kn-specific, but you could try adding `activeDeadlineSeconds` to the pod spec.
13,984
47,351,274
In summary, When provisioning my vagrant box using Ansible, I get thrown a mysterious error when trying to clone my bitbucket private repo using ssh. The error states that the "Host key verification failed". Yet if I vagrant ssh and then run the '*git clone*' command, the private repo is successfully cloned. This indicates that the ssh forward agent is indeed working and the vagrant box can access my private key associated with the bitbucket repo. I have been struggling for two days on this issue and am loosing my mind! Please, somebody help me!!! **Vagrantfile:** ``` Vagrant.configure("2") do |config| config.vm.box = "ubuntu/xenial64" config.vm.network "private_network", ip: "192.168.33.10" config.ssh.forward_agent = true # Only contains ansible dependencies config.vm.provision "shell", inline: "sudo apt-get install python-minimal -y" # Use ansible for all provisioning: config.vm.provision "ansible" do |ansible| ansible.playbook = "provisioning/playbook.yml" end end ``` My **playbook.yml** is as follows: ``` --- - hosts: all become: true tasks: - name: create /var/www/ directory file: dest=/var/www/ state=directory owner=www-data group=www-data mode=0755 - name: Add the user 'ubuntu' to group 'www-data' user: name: ubuntu shell: /bin/bash groups: www-data append: yes - name: Clone bitbucket repo git: repo: git@bitbucket.org:gustavmahler/example.com.git dest: /var/www/poo version: master accept_hostkey: yes ``` **Error Message:** `vagrant provision` > > TASK [common : Clone bitbucket repo] \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* > > > fatal: [default]: FAILED! => {"changed": false, "cmd": "/usr/bin/git clone --origin origin '' /var/www/poo", "failed": true, "msg": "Cloning into '/var/www/poo'...\nWarning: Permanently added the RSA host key for IP address '104.192.143.3' to the list of known hosts.\r\nPermission denied (publickey).\r\nfatal: Could not read from remote repository.\n\nPlease make sure you have the correct access rights\nand the repository exists.", "rc": 128, "stderr": "Cloning into '/var/www/poo'...\nWarning: Permanently added the RSA host key for IP address '104.192.143.3' to the list of known hosts.\r\nPermission denied (publickey).\r\nfatal: Could not read from remote repository.\n\nPlease make sure you have the correct access rights\nand the repository exists.\n", "stderr\_lines": ["Cloning into '/var/www/poo'...", "Warning: Permanently added the RSA host key for IP address '104.192.143.3' to the list of known hosts.", "Permission denied (publickey).", "fatal: Could not read from remote repository.", "", "Please make sure you have the correct access rights", "and the repository exists."], "stdout": "", "stdout\_lines": []} > > > **Additional Info:** * *ssh-add -l* on my machine does contain the associated bitbucket repo key. * *ssh-add -l* inside the vagrant box does also contain the associated bitbucket repo key (through ssh-forwarding). **Yet cloning works** if done manually inside the vagrant box **?**: ``` vagrant ssh git clone git@bitbucket.org:myusername/myprivaterepo.com.git Then type "yes" to allow the RSA fingerprint to be added to ~/.ssh/known_hosts (as its first connection with bitbucket) ``` **Possible solution?** I have seen in the Ansible documentation that there is a *[key\_file](http://docs.ansible.com/ansible/latest/git_module.html#options):* option. How would I reference the private key which is located outside the vagrant box and is passed in using ssh forwarding? 
I do have multiple ssh keys for different entities inside my ~/.ssh/ Perhaps the git clone command when run by Ansible provisioning isn't selecting the correct key? Any help is greatly appreciated and thanks for reading my nightmare.
2017/11/17
[ "https://Stackoverflow.com/questions/47351274", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4515324/" ]
Since you run the whole playbook with `become: true`, SSH key-forwarding (as well as troubleshooting) becomes irrelevant, because the user connecting to BitBucket from your play is `root`. Run the task connecting to BitBucket as the `ubuntu` user: * either specifying `become: false` in the `Clone bitbucket repo` task, * or removing `become: true` from the play and adding it only to tasks that require elevated permissions.
This answer comes direct from techraf's helpful comments. * I changed the owner of the /var/www directory from 'www-data' to 'ubuntu' (the username I use to login via ssh). * I also added "become: false" above the git task. NOTE: I have since been dealing with the following issue so this answer does not fully resolve my problems: [Ansible bitbucket clone repo provisioning ssh error](https://stackoverflow.com/questions/47784489/ansible-bitbucket-clone-repo-provisioning-ssh-error) **Updated working playbook.yml file**: ``` --- - hosts: all become: true tasks: - name: create /var/www/ directory file: dest=/var/www/ state=directory owner=ubuntu group=www-data mode=0755 - name: Add the user 'ubuntu' to group 'www-data' user: name: ubuntu shell: /bin/bash groups: www-data append: yes - name: Clone bitbucket repo become: false git: repo: git@bitbucket.org:[username]/example.com.git dest: /var/www/poo version: master accept_hostkey: yes ```
13,985
40,477,687
I'm getting `ImportError: No module named django` when trying to up my containers running `docker-compose up`. Here's my scenario: Dockerfile ``` FROM python:2.7 ENV PYTHONUNBUFFERED 1 RUN mkdir /code WORKDIR /code ADD requirements.txt /code/ RUN pip install -r requirements.txt ADD . /code/ ``` docker-compose.yml ``` version: '2' services: db: image: postgres web: build: . command: bash -c "python manage.py runserver" volumes: - .:/code expose: - "8000" links: - db:db ``` manage.py ``` import os import sys if __name__ == "__main__": os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp.settings") from django.core.management import execute_from_command_line execute_from_command_line(sys.argv) ``` * The containers were built successfully * My packages are present in the container (I get them when I run `pip freeze`) I've changed the manage.py content for ``` import django print django.get_version() ``` and it worked. This is basically the same example as in this [tutorial](https://docs.docker.com/compose/django/), except for starting a new project.
2016/11/08
[ "https://Stackoverflow.com/questions/40477687", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2812350/" ]
Maybe what you want is to do this: ``` void printHandResults(struct Card (*hand)[]); ``` and this: ``` void printHandResults(struct Card (*hand)[]) { } ``` What you were doing was passing a pointer to an array of struct variables in the main, BUT, the function was set to receive **an array of pointers to struct variables** and not **a pointer to an array of struct variables**! Now the type mismatch would not occur and thus, no warning. Note that `[]`, the (square) brackets have higher precedence than the (unary) dereferencing operator `*`, so we would need a set of parentheses `()` enclosing the array name and `*` operator to ensure what we are talking about here!
An easier way: ``` #define HAND_SIZE 5 typedef struct Cards{ char suit; char face; }card_t; void printHandResults( card_t*); int main(void) { card_t hand[HAND_SIZE]; card_t* p_hand = hand; ... printHandResults(p_hand); } void printHandResults( card_t *hand) { // Here you must use pointer arithmetic ... } ```
13,986
57,498,262
It seems so basic, but I can't work out how to achieve the following... Consider the scenario where I have the following data: ```py all_columns = ['A','B','C','D'] first_columns = ['A','B'] second_columns = ['C','D'] new_columns = ['E','F'] values = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]] df = pd.DataFrame(data = values, columns = all_columns) df A B C D 0 1 2 3 4 1 5 6 7 8 2 9 10 11 12 3 13 14 15 16 ``` How can I using this data subsequently subtract let's say column C - column A, then column D - column B and return two new columns E and F respectively to my df Pandas dataframe? I have multiple columns so writing the formula one by one is not an option. I imagine it should be something like that, but python thinks that I am trying to subtract list names rather than the values in the actual lists... ```py df[new_columns] = df[second_columns] - df[first_columns] ``` Expected output: ```py A B C D E F 0 1 2 3 4 2 2 1 5 6 7 8 2 2 2 9 10 11 12 2 2 3 13 14 15 16 2 2 ```
2019/08/14
[ "https://Stackoverflow.com/questions/57498262", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10750173/" ]
An option is `pivot_longer` from the dev version of `tidyr` ``` library(tidyr) library(dplyr) library(tibble) nm1 <- sub("\\.?\\d+$", "", names(df)) names(df) <- paste0(nm1, ":", ave(seq_along(nm1), nm1, FUN = seq_along)) df %>% rownames_to_column('rn') %>% pivot_longer(-rn, names_to= c(".value", "Group"), names_sep= ":") ```
You may use base R's `reshape`. The trick is the nested `varying` and to `cbind` an `id` column. ``` reshape(cbind(id=1:nrow(DF), DF), varying=list(c(2, 4), c(3, 5)), direction="long", timevar="Group", v.names=c("Gene", "AVG_EXPR"), times=c("Group1", "Group2")) # id Group Gene AVG_EXPR # 1.Group1 1 Group1 EMX1 0.55748718 # 2.Group1 2 Group1 EXO_C3L4 -0.71399573 # 3.Group1 3 Group1 FAF2P1 0.72595733 # 4.Group1 4 Group1 FAM224A 0.78106399 # 5.Group1 5 Group1 GATC 0.33296728 # 6.Group1 6 Group1 FAM43A -1.14699031 # 7.Group1 7 Group1 FAT4 0.64475462 # 8.Group1 8 Group1 EXO_FEZF1-AS1 0.88621529 # 1.Group2 1 Group2 EXO_BRPF3 -0.62445044 # 2.Group2 2 Group2 AFS 0.14577195 # 3.Group2 3 Group2 IJAS -1.15083794 # 4.Group2 4 Group2 CCDC187 0.95483371 # 5.Group2 5 Group2 CCDC200 -1.55217630 # 6.Group2 6 Group2 CCDC7 -0.51225644 # 7.Group2 7 Group2 CCL27 0.52270643 # 8.Group2 8 Group2 CD6 -0.08751736 ``` ### Data ``` DF <- structure(list(Group1 = structure(c(1L, 2L, 4L, 5L, 8L, 6L, 7L, 3L), .Label = c("EMX1", "EXO_C3L4", "EXO_FEZF1-AS1", "FAF2P1", "FAM224A", "FAM43A", "FAT4", "GATC"), class = "factor"), AVG_EXPR = c(0.557487175664929, -0.713995729705457, 0.725957334982896, 0.781063988053138, 0.332967281017435, -1.14699031084438, 0.644754618475526, 0.88621528791546), Group2 = structure(c(7L, 1L, 8L, 2L, 3L, 4L, 5L, 6L), .Label = c("AFS", "CCDC187", "CCDC200", "CCDC7", "CCL27", "CD6", "EXO_BRPF3", "IJAS"), class = "factor"), AVG_EXPR.1 = c(-0.624450436709626, 0.145771950049532, -1.15083793547685, 0.954833708527147, -1.55217630379434, -0.512256436764118, 0.522706429197282, -0.0875173629601027)), class = "data.frame", row.names = c(NA, -8L)) ```
13,988
67,080,686
So I need to enforce the type of a class variable, but more importantly, I need to enforce a list with its types. So if I have some code which looks like this: ``` class foo: def __init__(self, value: list[int]): self.value = value ``` Btw I'm using python version 3.9.4 So essentially my question is, how can I make sure that value is a list of integers. Thanks in advance * Zac
2021/04/13
[ "https://Stackoverflow.com/questions/67080686", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15017177/" ]
You can enable the system keyboard by configuring `OVRCameraRig` Oculus features. 1. From the Hierarchy view, select `OVRCameraRig` to open settings in the Inspector view. 2. Under `OVR Manager`, in the `Quest Features` section, select `Require System Keyboard`. [![enter image description here](https://i.stack.imgur.com/X84TD.png)](https://i.stack.imgur.com/X84TD.png) [![enter image description here](https://i.stack.imgur.com/DME4s.png)](https://i.stack.imgur.com/DME4s.png) For more: <https://developer.oculus.com/documentation/unity/unity-keyboard-overlay/>
What @Codemaker answered here is for Oculus Integration. For XR Interaction Toolkit, you can manually enable a keyboard instance, by doing the following. ``` private TouchScreenKeyboard keyboard; public void ShowKeyboard() { keyboard = TouchScreenKeyboard.Open("", TouchScreenKeyboardType.Default); } ``` Just add this `Showkeyboard()` under the Inputfield's `On Select ()` event. And you are done. You can find the source [here](https://docs.unity3d.com/ScriptReference/TouchScreenKeyboard.Open.html). I tried it and it works for me.
13,990
1,399,727
Assuming the text is typed at the same time in the same (Israeli) timezone, The following free text lines are equivalent: ``` Wed Sep 9 16:26:57 IDT 2009 2009-09-09 16:26:57 16:26:57 September 9th, 16:26:57 ``` Is there a python module that would convert all these text-dates to an (identical) `datetime.datetime` instance? I would like to use it in a command-line tool that would get a freetext date and time as an argument, and return the equivalent date and time in different time zones, e.g.: ``` ~$wdate 16:00 Israel Israel: 16:00 San Francisco: 06:00 UTC: 13:00 ``` or: ``` ~$wdate 18:00 SanFran San Francisco 18:00:22 Israel: 01:00:22 (Day after) UTC: 22:00:22 ``` Any Ideas? Thanks, Udi
2009/09/09
[ "https://Stackoverflow.com/questions/1399727", "https://Stackoverflow.com", "https://Stackoverflow.com/users/51197/" ]
The [python-dateutil](http://labix.org/python-dateutil) package sounds like it would be helpful. Your examples only use simple HH:MM timestamps with a (magically shortened) city identifier, but it seems able to handle more complicated formats like those earlier in the question, too.
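A sketch of what parsing with it might look like (an editorial addition; assumes the package is installed, and the timezone-abbreviation handling and city-name lookup from the question would still need extra work):

```
from dateutil import parser

for text in ("Wed Sep 9 16:26:57 2009",
             "2009-09-09 16:26:57",
             "16:26:57"):
    print parser.parse(text)
```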
You could use [`time.strptime`](http://docs.python.org/library/time.html#time.strptime), like this: ``` time.strptime("2009-09-09 16:26:57", "%Y-%m-%d %H:%M:%S") ``` It will return a struct\_time value (more info on the Python doc page).
13,991
62,767,387
I'm a noob with the Python BeautifulSoup library and I'm trying to scrape data from a website's Highcharts. I found that all the data I need is located in a script tag, however I don't know how to scrape it (please see the attached image). Is there a way to get the data from this script tag using Python BeautifulSoup? [script](https://i.stack.imgur.com/1Ogao.png)
2020/07/07
[ "https://Stackoverflow.com/questions/62767387", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13881565/" ]
Check out this working sandbox: <https://codesandbox.io/s/confident-taussig-tg3my?file=/src/App.js> One thing to note is you shouldn't base keys off of linear sequences [or indexes] because that can mess up React's tracking. Instead, pass the index value from map into the IssueRow component to track order, and give each issue its own serialized id. ``` const issueRows = issues.map((issue, index) => ( <IssueRow key={issue.id} issue={issue} index={index} /> )); ``` Also, the correct way to update state that depends on previous state, is to pass a function rather than an object: ``` setIssues(state => [...state, issue]); ``` You essentially iterate previous state, plus add the new issue.
This should be inside another `React.useEffect()`: ``` setTimeout(() => { createIssue(sampleIssue); }, 2000); ``` Otherwise, it will run every render in a loop. As a general rule of thumb there should be no side effects outside of useEffect.
13,993
63,438,737
I have an image in which the text is light and not clearly visible; I want to enhance the text and lighten the background using Python PIL or cv2. Help is needed. I need to print this image, and hence it has to be clearer. What I have done till now is below: ``` from PIL import Image, ImageEnhance im = Image.open('t.jpg') im2 = im.point(lambda p:p * 0.8) im2.save('t1.jpg') ``` The above has slightly darkened the text, but I still need to make the text clearer and the background whiter or lighter. The image is a photo of a printed document with black text on white paper, but that is not clearly visible. Below is a snap of the photo: [![enter image description here](https://i.stack.imgur.com/cRYkY.jpg)](https://i.stack.imgur.com/cRYkY.jpg) In the image above, behind and surrounding the letters and words there is some kind of noise of white and silver dots. How can I completely remove them so that the text borders are clearly visible? I have also inverted the image colours; a sample is given below: [![enter image description here](https://i.stack.imgur.com/R4tex.jpg)](https://i.stack.imgur.com/R4tex.jpg) I have tried the following: - Adaptive thresholding (failed) - Otsu method (failed) - Gaussian blurring (failed) - Colour segmentation (failed) - Connected component removal (failed) - Denoising (failed, because it also removed text) but nothing worked. All of the above methods made it even worse, giving me the image below. The text is also affected when the connected component method is used: [![enter image description here](https://i.stack.imgur.com/RT5uk.jpg)](https://i.stack.imgur.com/RT5uk.jpg) Please help. I also tried ImageMagick morphology editing, but it failed. How the noise actually appears is shown in the image below: it is a continuous rectangular bar of dots surrounding the text. ![enter image description here](https://i.stack.imgur.com/ZMHim.jpg)
2020/08/16
[ "https://Stackoverflow.com/questions/63438737", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Save first column in an array, too. Insert before your `END` section `{dates[FNR] = $1}` and replace both `$1` with `dates[n]`.
Could you please try the following? I have written it on mobile so couldn't test it as of now, but it should work. ``` awk ' { diff=$2-$3 a[$1]=(a[$1]!="" && a[$1]>diff?a[$1]:diff) } END{ for(i in a){ print i,a[i] } }' *.txt ```
13,995
58,771,456
i am installing connector of Psycopg2 but i am unable install, how do i install that successfully? in using this command line to install: ``` pip install psycopg2 (test) C:\Users\Shree.Shree-PC\Desktop\projects\purchasepo>pip install psycopg2 Collecting psycopg2 Using cached https://files.pythonhosted.org/packages/84/d7/6a93c99b5ba4d4d22daa3928b983cec66df4536ca50b22ce5dcac65e4e71/psycopg2-2.8.4.tar .gz ERROR: Command errored out with exit status 1: command: 'c:\users\shree.shree-pc\envs\test\scripts\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Shr ee.Shree-PC\\AppData\\Local\\Temp\\pip-install-_bgz1zs9\\psycopg2\\setup.py'"'"'; __file__='"'"'C:\\Users\\Shree.Shree-PC\\AppData\\Local\\T emp\\pip-install-_bgz1zs9\\psycopg2\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\Shree.Shree-PC\AppData\Local\Temp\pip- install-_bgz1zs9\psycopg2\pip-egg-info' cwd: C:\Users\Shree.Shree-PC\AppData\Local\Temp\pip-install-_bgz1zs9\psycopg2\ Complete output (23 lines): running egg_info creating C:\Users\Shree.Shree-PC\AppData\Local\Temp\pip-install-_bgz1zs9\psycopg2\pip-egg-info\psycopg2.egg-info writing C:\Users\Shree.Shree-PC\AppData\Local\Temp\pip-install-_bgz1zs9\psycopg2\pip-egg-info\psycopg2.egg-info\PKG-INFO writing dependency_links to C:\Users\Shree.Shree-PC\AppData\Local\Temp\pip-install-_bgz1zs9\psycopg2\pip-egg-info\psycopg2.egg-info\depe ndency_links.txt writing top-level names to C:\Users\Shree.Shree-PC\AppData\Local\Temp\pip-install-_bgz1zs9\psycopg2\pip-egg-info\psycopg2.egg-info\top_l evel.txt writing manifest file 'C:\Users\Shree.Shree-PC\AppData\Local\Temp\pip-install-_bgz1zs9\psycopg2\pip-egg-info\psycopg2.egg-info\SOURCES.t xt' Error: pg_config executable not found. pg_config is required to build psycopg2 from source. Please add the directory containing pg_config to the $PATH or specify the full executable path with the option: python setup.py build_ext --pg-config /path/to/pg_config build ... or with the pg_config option in 'setup.cfg'. If you prefer to avoid building psycopg2 from source, please install the PyPI 'psycopg2-binary' package instead. For further information please check the 'doc/src/install.rst' file (also at <http://initd.org/psycopg/docs/install.html>). ---------------------------------------- ``` ERROR: Command errored out with exit status 1: python setup.py egg\_info Check the logs for full command output.
2019/11/08
[ "https://Stackoverflow.com/questions/58771456", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10123621/" ]
It was such a dumb solution; it took me hours to find this: when I get the file from DocumentPicker I had to add the type of the file, because DocumentPicker returns an odd type called "success". When I changed it to 'image/jpeg' it worked :D It's not a full solution, because I will still need a way to know what type each file a user chooses is, but anyway, this code works c: ```js let response = await DocumentPicker.getDocumentAsync({type: 'image/jpeg'}) response.type = 'image/jpeg' // <- this is the line that fixed it const data = new FormData(); data.append('file', response); axios.post('http://192.168.0.3:8000/api/file', data , {headers: { 'Content-type': 'application/x-www-form-urlencoded' }} ) .then(res => { console.log("gooosh", res.data) }) .catch(error => { console.log("error", error, JSON.stringify(error)) }); ```
You should try to modify the content-type to ``` fetch("http://192.168.0.3:8000/api/file", { method: "POST", headers:{ 'Content-Type': 'multipart/form-data', }, body: data }) ``` As for x-www-form-urlencoded, fetch does not support it directly; you have to build the body yourself. You can see this [answer](https://stackoverflow.com/questions/35325370/post-a-x-www-form-urlencoded-request-from-react-native).
13,997
1,817,183
I'm trying to learn the super() function in Python. I thought I had a grasp of it until I came over this example (2.6) and found myself stuck. [http://www.cafepy.com/article/python\_attributes\_and\_methods/python\_attributes\_and\_methods.html#super-with-classmethod-example](https://web.archive.org/web/20170820065901/http://www.cafepy.com/article/python_attributes_and_methods/python_attributes_and_methods.html#super-with-classmethod-example) ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "test.py", line 9, in do_something do_something = classmethod(do_something) TypeError: unbound method do_something() must be called with B instance as first argument (got nothing instead) >>> ``` It wasn't what I expected when I read this line right before the example: > If we're using a class method, we don't have an instance to call super with. Fortunately for us, super works even with a type as the second argument. --- The type can be passed directly to super as shown below. Which is exactly what Python tells me is not possible by saying that do\_something() should be called with an instance of B.
2009/11/29
[ "https://Stackoverflow.com/questions/1817183", "https://Stackoverflow.com", "https://Stackoverflow.com/users/169854/" ]
Sometimes texts have to be read more for the flavor of the idea rather than for the details. This is one of those cases. In the [linked page](http://www.cafepy.com/article/python_attributes_and_methods/python_attributes_and_methods.html#a-super-solution), Examples 2.5, 2.6 and 2.7 should all use one method, `do_your_stuff`. (That is, `do_something` should be changed to `do_your_stuff`.) In addition, as [Ned Deily pointed out](https://stackoverflow.com/a/1817242/190597), `A.do_your_stuff` has to be a class method. ``` class A(object): @classmethod def do_your_stuff(cls): print 'This is A' class B(A): @classmethod def do_your_stuff(cls): super(B, cls).do_your_stuff() B.do_your_stuff() ``` `super(B, cls).do_your_stuff` returns a *bound* method (see [footnote 2](http://www.cafepy.com/article/python_attributes_and_methods/python_attributes_and_methods.html#a-super-solution)). Since `cls` was passed as the second argument to `super()`, it is `cls` that gets bound to the returned method. In other words, `cls` gets passed as the first argument to the method `do_your_stuff()` of class A. To reiterate: `super(B, cls).do_your_stuff()` causes `A`'s `do_your_stuff` method to be called with `cls` passed as the first argument. In order for that to work, `A`'s `do_your_stuff` has to be a class method. The linked page doesn't mention that, but that is definitively the case. PS. `do_something = classmethod(do_something)` is the old way of making a classmethod. The new(er) way is to use the @classmethod decorator. --- Note that `super(B, cls)` can not be replaced by `super(cls, cls)`. Doing so could lead to infinite loops. For example, ``` class A(object): @classmethod def do_your_stuff(cls): print('This is A') class B(A): @classmethod def do_your_stuff(cls): print('This is B') # super(B, cls).do_your_stuff() # CORRECT super(cls, cls).do_your_stuff() # WRONG class C(B): @classmethod def do_your_stuff(cls): print('This is C') # super(C, cls).do_your_stuff() # CORRECT super(cls, cls).do_your_stuff() # WRONG C.do_your_stuff() ``` will raise `RuntimeError: maximum recursion depth exceeded while calling a Python object`. If `cls` is `C`, then `super(cls, cls)` searches `C.mro()` for the class that comes after `C`. ``` In [161]: C.mro() Out[161]: [__main__.C, __main__.B, __main__.A, object] ``` Since that class is `B`, when `cls` is `C`, `super(cls, cls).do_your_stuff()` **always** calls `B.do_your_stuff`. Since `super(cls, cls).do_your_stuff()` is called inside `B.do_your_stuff`, you end up calling `B.do_your_stuff` in an infinite loop. In Python3, the [0-argument form of `super`](https://docs.python.org/3/library/functions.html#super) was added so `super(B, cls)` could be replaced by `super()`, and Python3 will figure out from context that `super()` in the definition of `class B` should be equivalent to `super(B, cls)`. But in no circumstance is `super(cls, cls)` (or for similar reasons, `super(type(self), self)`) ever correct.
The example from the web page seems to work as published. Did you create a `do_something` method for the superclass as well but not make it into a classmethod? Something like this will give you that error: ``` >>> class A(object): ... def do_something(cls): ... print cls ... # do_something = classmethod(do_something) ... >>> class B(A): ... def do_something(cls): ... super(B, cls).do_something() ... do_something = classmethod(do_something) ... >>> B().do_something() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 3, in do_something TypeError: unbound method do_something() must be called with B instance as first argument (got nothing instead) ```
13,998
10,269,860
Here is a simple code in python 2.7.2, which fetches site and gets all links from given site: ``` import urllib2 from bs4 import BeautifulSoup def getAllLinks(url): response = urllib2.urlopen(url) content = response.read() soup = BeautifulSoup(content, "html5lib") return soup.find_all("a") links1 = getAllLinks('http://www.stanford.edu') links2 = getAllLinks('http://med.stanford.edu/') print len(links1) print len(links2) ``` Problem is that it doesn't work in second scenario. It prints 102 and 0, while there are clearly links on the second site. BeautifulSoup doesn't throw parsing errors and it pretty prints markup ok. I suspect it maybe caused by first line from source of med.stanford.edu which says that it's xml (even though content-type is: text/html): ``` <?xml version="1.0" encoding="iso-8859-1"?> ``` I can't figure out how to set up Beautiful to disregard it, or workaround. I'm using html5lib as parser because I had problems with default one (incorrect markup).
2012/04/22
[ "https://Stackoverflow.com/questions/10269860", "https://Stackoverflow.com", "https://Stackoverflow.com/users/808271/" ]
When a document claims to be XML, I find the lxml parser gives the best results. Trying your code but using the lxml parser instead of html5lib finds the 300 links.
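A sketch of that one-line change against the question's code (an editorial addition; assumes lxml is installed):

```
import urllib2
from bs4 import BeautifulSoup

response = urllib2.urlopen('http://med.stanford.edu/')
# lxml copes with the leading <?xml ...?> declaration
soup = BeautifulSoup(response.read(), "lxml")
print len(soup.find_all("a"))
```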
You are precisely right that the problem is the `<?xml...` line. Disregarding it is very simple: just skip the first line of content, by replacing ``` content = response.read() ``` with something like ``` content = "\n".join(response.readlines()[1:]) ``` Upon this change, `len(links2)` becomes 300. ETA: You probably want to do this conditionally, so you don't always skip the first line of content. An example would be something like: ``` content = response.read() if content.startswith("<?xml"): content = "\n".join(content.split("\n")[1:]) ```
14,007
73,494,380
I have a script using the `docker` python library or [Docker Client API](https://docker-py.readthedocs.io/en/1.8.0/api/#containers). I would like to limit each docker container to use only 10 CPUs (total 30 CPUs in the instance), but I couldn't find a solution to achieve that. I know in docker there is the `--cpus` flag, but `docker` only has the `cpu_shares (int): CPU shares (relative weight)` parameter to use. Does anyone have experience in setting the limit on CPU usage using `docker`? ```py import docker client = docker.DockerClient(base_url='unix://var/run/docker.sock') container = client.containers.run(my_docker_image, mem_limit=30G) ``` Edits: I tried `nano_cpus` as [here](https://github.com/docker/docker-py/issues/1920) suggests, like `client.containers.run(my_docker_image, nano_cpus=10000000000)` to set 10 CPUs. When I inspected the container, it did show "NanoCpus": 10000000000". However, if I run R in the container and do `parallel::detectCores()`, it still shows 30, which confuses me. I have also added the R tag now. Thank you!
2022/08/25
[ "https://Stackoverflow.com/questions/73494380", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16729348/" ]
cPanel has a feature called Multi-PHP, which does what you need (if your host has it enabled). For each project, it puts a snippet like this in the `.htaccess` which sets the PHP version to use: ``` # php -- BEGIN cPanel-generated handler, do not edit # Set the “ea-php81” package as the default “PHP” programming language. <IfModule mime_module> AddHandler application/x-httpd-ea-php81 .php .php8 .phtml </IfModule> # php -- END cPanel-generated handler, do not edit ``` The example above is for PHP 8.1.
You can use as many different versions as you want. Consider: 1. Install PHP versions with PHP-FPM 2. Create a directory structure for each version (website) 3. Configure Apache for both websites. With the above configuration you have combined virtual hosts and PHP-FPM to serve multiple websites and multiple versions of PHP on a single server. The only practical limit on the number of PHP sites and PHP versions that your Apache service can handle is the processing power of your instance. There is a step by step guide here: [Multiple PHP versions on one server](https://www.digitalocean.com/community/tutorials/how-to-run-multiple-php-versions-on-one-server-using-apache-and-php-fpm-on-ubuntu-18-04)
14,008
7,073,557
Question: Is there a way to make a program run with out logging in that doesn't involve the long painful task of creating a windows service, or is there an easy way to make a simple service? --- Info: I'm working on a little project for college which is a simple distributed processing program. I'm going to harness the computers on campus that are currently sitting completely idle and make my own little super computer. but to do this I need the client running on my target machines. Through a little mucking around on the internet I find that what I need is a service, I also find that this isn't something quite as simple as making a scheduled task or dropping a .bat file into the start up folder. I don't need a lot. if I could just get it to call "python cClient.py" i'm completely set. Is there an easy way to do this? like adding a key in the registry some where? or do I have to go through the whole song and dance ritual that microsoft has made for setting up a service?
2011/08/16
[ "https://Stackoverflow.com/questions/7073557", "https://Stackoverflow.com", "https://Stackoverflow.com/users/186608/" ]
The Windows NT Resource Kit introduced the [Srvany.exe command-line utility](http://support.microsoft.com/kb/137890), which can be used to start any Windows NT/2000/2003 application as a service. You can download Srvany.exe [here](http://www.microsoft.com/download/en/details.aspx?id=17657).
Use Visual Studio and just create yourself a Windows Service project. I think you'll find it very easy.
14,009
26,044,173
I made a program in python which allows you to type commands (e.g: if you type clock, it shows you the date and time). But, I want it to be fullscreen. The problem is that my software doesnt have gui and I dont want it to so that probably means that I wont be using tkinter or pygame. Can some of you write a whole 'hello world' program in fullscreen for me? What I am aiming for is for my program to look a bit like ms-dos. Any help??? By the way, im very new to python (approximately 4 weeks). NOTE: I have python 3.4.1
2014/09/25
[ "https://Stackoverflow.com/questions/26044173", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4061311/" ]
Since Vista, cmd.exe can no longer go to full-screen mode. You'll need to implement a full-screen console emulator yourself or look for another existing solution. E.g. [ConEmu](http://www.hanselman.com/blog/conemuthewindowsterminalconsolepromptwevebeenwaitingfor.aspx) appears to be able to do it.
Solution -------- Use your operating system's console settings to configure the window parameters: ``` <_aMouseRightClick_>->[Properties]->[Layout] ``` Note that some of the `python` interpreter process window parameters are given in [char]-s, while others are in [px]: ``` size.Width [char]-s size.Height[char]-s loc.X [px] loc.Y [px] ``` Adjust these values to suit your needs. ![Settings before pseudo-FullScreenMode...](https://i.stack.imgur.com/YukIf.jpg) You may set negative values for `[loc.X, loc.Y]` to move/hide window edges "outside" the screen's left/top edges.
14,018
40,265,591
I have a data frame that in which every row represents a day of the week, and every column represents the serial number of an internet-connected device that failed to communicate with the server on that day. I am trying to get a Series of serial numbers that have failed to communicate for a full week. The code block: ``` counts = df.stack().value_counts() seven_day = counts[counts == 7] for a in seven_day: print(a) ``` The problem is that nothing is printed. What I want is a list of the serial numbers, and not the counts themselves. This question is a follow-up from: [Python Pandas -- Determine if Values in Column 0 Are Repeated in Each Subsequent Column](https://stackoverflow.com/questions/40244094/python-pandas-determine-if-values-in-column-0-are-repeated-in-each-subsequent)
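As an editorial note (not from the original post): iterating a pandas Series yields its values (all 7 after the filter), while the serial numbers themselves live in the index - a sketch:

```
counts = df.stack().value_counts()
seven_day = counts[counts == 7]

# The labels (serial numbers), not the counts, are what we want:
for serial in seven_day.index:
    print(serial)

serials = seven_day.index.tolist()  # or as a plain list
```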
2016/10/26
[ "https://Stackoverflow.com/questions/40265591", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3285817/" ]
You could try something like this, if you pass in the name of the controller as a string. This solution assumes that your models inherit from `ActiveRecord::Base` (Rails prior to 5); on Rails 5+, where `ApplicationRecord` is used to define models, just switch `ActiveRecord::Base` with `ApplicationRecord`. Also, if you have models that are plain old Ruby objects (POROs), this won't work for them. ``` def has_model?(controller_klass) begin class_string = controller_klass.to_s.gsub('Controller', '').singularize class_instance = class_string.constantize.new return class_instance.class.ancestors.include? ActiveRecord::Base rescue NameError => e return false end end ```
This method doesn't rely on exceptions, and works with input as Class or String. It should work for any Rails version : ``` def has_model?(controller_klass) all_models = ActiveRecord::Base.descendants.map(&:to_s) model_klass_string = controller_klass.to_s.sub(/Controller$/,'').singularize all_models.include?(model_klass_string) end ``` Note : you need to set ``` config.eager_load = true ``` in > > config/environments/development.rb > > > If you have non ActiveRecord models, you can ignore the previous note and use : ``` all_models = Dir[File.join(Rails.root,"app/models", "**","*.rb")].map{|f| File.basename(f,'.rb').camelize} ```
14,019
66,942,621
Im attempting to launch a python script from my Java program - The python script listens for socket connections from the Java program and responds with data. In order to do this I have attempted to use the ProcessBuilder API to: 1. activate a python virtualenv (located in my working directory) 2. run my python script predictprobability.py such that it starts listening for connections from my java program. this is what I have so far: ``` public class MLClassifierProcess{ //bash location //activate python venv //python interpreter //script final String[] command_to_run = new String[] { "/bin/bash", "-c", "/env/bin/activate;", "/env/bin/python","predictprobability.py;" }; public void startML(){ ProcessBuilder builder = new ProcessBuilder(command_to_run); Process pr = null; try { pr = builder.start(); } catch (IOException ioException) { System.out.println("error"); } } public static void main(String[] args){ MLClassifierProcess p = new MLClassifierProcess(); p.startML(); } } ``` however running the main function terminates straight away, when the script predictprobability.py should continue to run indefinitely. I'm new to the ProcessBuilder API so any pointers on how to proceed would be greatly appreciated
2021/04/04
[ "https://Stackoverflow.com/questions/66942621", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15539237/" ]
If your Java program exits, the Python process you launched will exit as well, since child processes are killed when their parent dies unless they have been detached from it. If you want your Java program to keep running until the Python program has completed execution, then you need to have the Java code wait appropriately. Here's how to have it do so (note that `waitFor()` throws a checked `InterruptedException`, so `main` must declare it): ``` public class MLClassifierProcess{ //bash location //activate python venv //python interpreter //script final String[] command_to_run = new String[] { "/bin/bash", "-c", "/env/bin/activate;", "/env/bin/python","predictprobability.py;" }; public Process startML(){ ProcessBuilder builder = new ProcessBuilder(command_to_run); Process pr = null; try { pr = builder.start(); } catch (IOException ioException) { System.out.println("error"); } return pr; } public static void main(String[] args) throws InterruptedException { MLClassifierProcess p = new MLClassifierProcess(); Process pr = p.startML(); if (pr != null) pr.waitFor(); } } ```
In the end the solution was simple: ``` Process process = Runtime.getRuntime().exec("<path/to/venv/python_interpreter> "+"path/to/script_to_run.py"); ``` In my case the successful command was ``` Process process = Runtime.getRuntime().exec( System.getProperty("user.dir")+"/env/bin/python "+System.getProperty("user.dir")+"/predictprobability.py"); process.waitFor(); ``` @CryptoFool was right to point out that the absolute path to resources is needed, hence the `System.getProperty("user.dir")` prepended to the relative paths.
14,020
51,783,232
I am using python regular expressions. I want all colon separated values in a line. e.g. ``` input = 'a:b c:d e:f' expected_output = [('a','b'), ('c', 'd'), ('e', 'f')] ``` But when I do ``` >>> re.findall('(.*)\s?:\s?(.*)','a:b c:d') ``` I get ``` [('a:b c', 'd')] ``` I have also tried ``` >>> re.findall('(.*)\s?:\s?(.*)[\s$]','a:b c:d') [('a', 'b')] ```
2018/08/10
[ "https://Stackoverflow.com/questions/51783232", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1217998/" ]
Use split instead of regex, and avoid giving variables names that shadow builtins like `input`: ``` inpt = 'a:b c:d e:f' k= [tuple(i.split(':')) for i in inpt.split()] print(k) # [('a', 'b'), ('c', 'd'), ('e', 'f')] ```
The easiest way is a `list comprehension` with `split`:

```
[tuple(ele.split(':')) for ele in input.split(' ')]
```

#driver values :

```
IN : input = 'a:b c:d e:f'
OUT : [('a', 'b'), ('c', 'd'), ('e', 'f')]
```
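If a regex is genuinely required, as the original question asked, a sketch that avoids the greedy `.*` problem is to match runs of non-space, non-colon characters on each side of the colon:

```
import re

s = 'a:b c:d e:f'
# [^\s:]+ cannot cross whitespace, so each match stays within one pair
pairs = re.findall(r'([^\s:]+):([^\s:]+)', s)
print(pairs)  # [('a', 'b'), ('c', 'd'), ('e', 'f')]
```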
14,021
36,563,002
I have a Python script. I need to execute the Python script from my C# program. After searching a bit about this, I came to know that there are mainly two ways of executing a Python script from C#: one by using the 'Process' command and the other by using IronPython. My question might seem dumb: is there any other way through which I can execute a Python script? To be more specific, can I create a class, let's say 'Python', in C# with a member function 'execute_script' which doesn't use any API like IronPython and doesn't create a process for executing the script, so that if I call 'execute_script(mypythonprogram.py)', my script gets executed? Sorry if this seems dumb. If this is possible, please do help me. Thanks in advance.
2016/04/12
[ "https://Stackoverflow.com/questions/36563002", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5743035/" ]
The constants you're looking for are not called `LONG_LONG_...`. Check your `limits.h` header. Most likely you're after `ULLONG_MAX`, `LLONG_MAX`, etc.
> 
> Why do the integers and long integers have the same limits? Shouldn't the long integers have a larger range of values?
> 
> 

You have stepped into one of the hallmarks of the C language - its adaptability. C defines the range of `int` to be at least as wide as `short` and the range of `long` to be at least as wide as `int`. All have minimum ranges. Rather than precisely defining the ranges of `short, int, long`, C opted for versatility.

On OP's platform, the range of `int` matches the range of `long` (32-bit). On many embedded processors of 2016 (and home computers of the 70s/80s), the range of `int` matches the range of `short` (16-bit). On some platforms (64-bit) the range of `int` exceeds `short` and is *narrower than* `long`.

So directly to OP's question: **`int` does not always have the same range as `long`.**

The trick is that `int` is not just another rung of the `signed char, short, int, long, long long` ladder. It is *the* integer type. Given the *usual integer promotions*, all narrower types promote to `int`. `int` is *often* the processor's native bit width. Much code has been written with `int` as 32-bit, and a large percentage as 16-bit. With 64-bit processors, having `int` as 64-bit is possible, but that leaves only two standard types (`signed char`, `short`) for 8, 16 and 32-bit.

Going forward, simply count on the `signed char` range `<=` the `short` range, the `short` range `<=` the `int` range, **the `int` range `<=` the `long` range**, etc. Also, `signed char` is at least 8-bit; `short, int` at least 16-bit; `long` at least 32-bit; `long long` at least 64-bit. If code needs explicit widths, use `int8_t`, `int16_t`, etc.

The fact that C is used 40+ years later attests that this versatility has/had merit.

[Discussion omits unsigned types, `_Bool` and `char` for brevity. Also omitted are rare non-power-of-2 types (9, 18, 24, 36 etc.)]
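To see what a given platform actually does, a quick C sketch printing the real limits from `<limits.h>`:

```
#include <stdio.h>
#include <limits.h>

int main(void) {
    /* Compare the actual ranges of int, long and long long here. */
    printf("INT_MAX   = %d\n", INT_MAX);
    printf("LONG_MAX  = %ld\n", LONG_MAX);
    printf("LLONG_MAX = %lld\n", LLONG_MAX);
    return 0;
}
```

On a typical 64-bit Linux (LP64) build the first two differ; on the 32-bit platform described above they match.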
14,026
19,940,549
I am using the [NakedMUD](http://homepages.uc.edu/~hollisgf/nakedmud.html) code base for a project. I am running into an issue in importing modules. In \*.py (Python files) they import modules with the following syntax: ``` import mudsys, mud, socket, char, hooks ``` and in C to embed Python they use: ``` mudmod = PyImport_ImportModule("char"); ``` Both of these methods seem to indicate to me that is some mudsys.py, mud.py ... files somewhere in a path findable by Python. I can not locate them. I am wondering where I might look to find how they are renaming the module other then the filename. I am unsure what else would be required to locate this. The issue is that the second import `PyImport_ImportModule()` in one case is not finding the modules and they reference a `null` pointer returned by this. The Python documentation mentions "[Changed in version 2.6: Always uses absolute imports.](http://docs.python.org/2/c-api/import.html)" and I am suspicious that this is the part of the issue. Of note, they override some builtin functions of Python which may also be affecting this in `__restricted_builtin__.py` and `__restricted_builtin_funcs__.py`. ``` ################################################################################ # # __restricted_builtin_funcs__.py # # This contains functions used by __restricted_builtin__ to do certain # potentially dangerous actions in a safe mode # ################################################################################ import __builtin__ def r_import(name, globals = {}, locals = {}, fromlist = []): '''Restricted __import__ only allows importing of specific modules''' ok_modules = ("mud", "obj", "char", "room", "exit", "account", "mudsock", "event", "action", "random", "traceback", "utils", "__restricted_builtin__") if name not in ok_modules: raise ImportError, "Untrusted module, %s" % name return __builtin__.__import__(name, globals, locals, fromlist) def r_open(file, mode = "r", buf = -1): if mode not in ('r', 'rb'): raise IOError, "can't open files for writing in restricted mode" return open(file, mode, buf) def r_exec(code): """exec is disabled in restricted mode""" raise NotImplementedError,"execution of code is disabled" def r_eval(code): """eval is disabled in restricted mode""" raise NotImplementedError,"evaluating code is disabled" def r_execfile(file): """executing files is disabled in restricted mode""" raise NotImplementedError,"executing files is disabled" def r_reload(module): """reloading modules is disabled in restricted mode""" raise NotImplementedError, "reloading modules is disabled" def r_unload(module): """unloading modules is disabled in restricted mode""" raise NotImplementedError, "unloading modules is disabled" ``` and ``` ################################################################################ # # __restricted_builtin__.py # # This module is designed to replace the __builtin__, but overwrite many of the # functions that would allow an unscrupulous scripter to take malicious actions # ################################################################################ from __builtin__ import * from __restricted_builtin_funcs__ import r_import, r_open, r_execfile, r_eval, \ r_reload, r_exec, r_unload # override some dangerous functions with their safer versions __import__ = r_import execfile = r_execfile open = r_open eval = r_eval reload = r_reload ``` EDIT: `PyObject *sys = PyImport_ImportModule("sys");` is returning NULL as well.
2013/11/12
[ "https://Stackoverflow.com/questions/19940549", "https://Stackoverflow.com", "https://Stackoverflow.com/users/583608/" ]
You may want to enable WCF Tracing and Message Logging, which will allow you to monitor and review communication to/from the WCF service and hopefully isolate the issue (which, based on the provided error message, may well be a timeout issue). The following link provides a good overview:

<http://msdn.microsoft.com/en-us/library/ms733025.aspx>

Regards,
adkSerenity and shambulator, thank you. I found the problem. It turned out to be a buffer size. I was pretty sure it wasn't a timeout because the shortest timeout was set to one minute and I could reproduce the error in thirty seconds.

I had been avoiding WCF Tracing and Message Logging because it was so intimidating. So much to read, so little time. ;-} When I generated the output and looked at it with the viewer, it might as well have been Martian. After studying it for a while in a fog, I finally got the impression that the loss of communication was not the fault of the service. Then the light bulb went on and I checked the client's config file. Sure enough, the MaxBufferSize was set to 65535, effectively cutting off the 70000-byte message.

Thanks again for pointing me in the right direction.
14,031
12,993,175
I'm having a problem with Python keyring after the installation. Here are my steps:

```
$ python
>>> import keyring
>>> keyring.set_password('something','otherSomething','lotOfMoreSomethings')
```

and then it throws this:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.6/dist-packages/keyring/core.py", line 42, in set_password
    _keyring_backend.set_password(service_name, username, password)
  File "/usr/local/lib/python2.6/dist-packages/keyring/backend.py", line 222, in set_password
    _, session = service_iface.OpenSession("plain", "")
  File "/usr/lib/pymodules/python2.6/dbus/proxies.py", line 68, in __call__
    return self._proxy_method(*args, **keywords)
  File "/usr/lib/pymodules/python2.6/dbus/proxies.py", line 140, in __call__
    **keywords)
  File "/usr/lib/pymodules/python2.6/dbus/connection.py", line 630, in call_blocking
    message, timeout)
dbus.exceptions.DBusException: org.freedesktop.DBus.Error.UnknownMethod: Method "OpenSession" with signature "ss" on interface "org.freedesktop.Secret.Service" doesn't exist
```

I have installed keyring from [here](http://pypi.python.org/pypi/keyring) using

```
easy_install keyring
```

Am I doing anything wrong? Is there any solution?

Edit: I've also installed python-keyring and python-keyring-gnome from the repos, and I can just import it like

```
>>> import gnome_keyring
```

and it works.
2012/10/20
[ "https://Stackoverflow.com/questions/12993175", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1715386/" ]
For each market where you have specific requirements due to market-specific licensing or legal issues, you can create a separate app in iTunes Connect and make it available for download only in the relevant market. And if you need to, this also allows you to provide a market-specific EULA. It's a big maintenance burden, but it would ensure that only users in a given market can access the app. Note that in XCode you can fairly easily build, deploy and publish multiple versions of your project built from different configurations (XCode calls this "Targets"), so you could still achieve this in a single codebase by simply adding some preprocessor definitions in the relevant target definitions and putting `#ifdef` in your code where you want differentiated logic.
A 3rd party app has no access whatsoever to any information about the user of the device or access to the iTunes account. There is no way to know the user's true country. At any given time, the device may not even be associated with any one person. An iPod touch, for example, may have no user logged into any iTunes account. The same device can ultimately be logged into one of several different accounts. All you have access to is the user's current GPS location (if the user allows your app to access that information) or their current locale. Basically, there is no way to do what you need. Of course you could prompt the user but obviously there is no way to verify this information.
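To illustrate the locale point, here is a small sketch (modern Swift for brevity; the era of this question would have used NSLocale in Objective-C). Note it reads a device preference, not the country of the signed-in iTunes account:

```
import Foundation

// Reads the user's region setting. This is a device preference and says
// nothing about which country's store the iTunes account belongs to.
let region = Locale.current.regionCode ?? "unknown"
print("Device region setting: \(region)")
```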
14,032
38,156,681
I created a custom authentication backend for my login system. The custom backend works when I try it in the Python shell. However, I get an error when I run it on the server. The error says "The following fields do not exist in this model or are m2m fields: last_login". Do I need to include the last_login field in the Customer model, or is there another solution to the problem? Here is my sample code:

```
In my models.py

class Customer(models.Model):
    yes_or_no = ((True, 'Yes'),(False, 'No'))
    male_or_female = ((True,'Male'),(False,'Female'))
    name = models.CharField(max_length=100)
    email = models.EmailField(max_length=100,blank = False, null = False)
    password = models.CharField(max_length=100)
    gender = models.BooleanField(default = True, choices = male_or_female)
    birthday = models.DateField(default =None,blank = False, null = False)
    created = models.DateTimeField(default=datetime.now, blank=True)
    _is_active = models.BooleanField(default = False,db_column="is_active")

    @property
    def is_active(self):
        return self._is_active

    # how to call setter method, how to pass value ?
    @is_active.setter
    def is_active(self,value):
        self._is_active = value

    def __str__(self):
        return self.name
```

In backends.py

```
from .models import Customer
from django.conf import settings

class CustomerAuthBackend(object):

    def authenticate(self, name=None, password=None):
        try:
            user = Customer.objects.get(name=name)
            if password == getattr(user,'password'):
                # Authentication success by returning the user
                user.is_active = True
                return user
            else:
                # Authentication fails if None is returned
                return None
        except Customer.DoesNotExist:
            return None

    def get_user(self, user_id):
        try:
            return Customer.objects.get(pk=user_id)
        except Customer.DoesNotExist:
            return None
```

In views.py

```
@login_required(login_url='/dataInfo/login/')
def login_view(request):
    if request.method == 'POST':
        username = request.POST['username']
        password = request.POST['password']

        user = authenticate(name=username,password=password)

        if user is not None:
            if user.is_active:
                login(request,user)
                #redirect to user profile
                print "successful login!"
                return HttpResponseRedirect('/dataInfo/userprofile')
            else:
                # return a disable account
                return HttpResponse("User account or password is incorrect")
        else:
            # Return an 'invalid login' error message.
            print "Invalid login details: {0}, {1}".format(username, password)

            # redirect to login page
            return HttpResponseRedirect('/dataInfo/login')
    else:
        login_form = LoginForm()

    return render_to_response('dataInfo/login.html', {'form': login_form}, context_instance=RequestContext(request))
```

In setting.py

```
AUTHENTICATION_BACKENDS = ('dataInfo.backends.CustomerAuthBackend', 'django.contrib.auth.backends.ModelBackend',)
```
2016/07/02
[ "https://Stackoverflow.com/questions/38156681", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5458894/" ]
This is happening because you are using django's [`login()`](https://docs.djangoproject.com/en/1.9/topics/auth/default/#django.contrib.auth.login) function to log the user in. Django's `login` function emits a signal named `user_logged_in` with the `user` instance you supplied as argument. [See `login()` source](https://docs.djangoproject.com/en/1.9/_modules/django/contrib/auth/#login). And this signal is listened in django's [`contrib.auth.models`](https://github.com/django/django/blob/09119dff14ad24d53ac0273e5cd2de24de0b0d81/django/contrib/auth/models.py#L19). It tries to update a field named `last_login` assuming that the user instance you have supplied is a subclass of django's default `AbstractUser` model. In order to fix this, you can do one of the following things: 1. Stop using the `login()` function shipped with django and create a custom one. 2. Disconnect the `user_logged_in` signal from `update_last_login` receiver. [Read how](https://docs.djangoproject.com/en/dev/topics/signals/#disconnecting-signals). 3. Add a field named `last_login` to your model 4. Extend your model from django's base auth models. [Read how](https://docs.djangoproject.com/en/1.9/topics/auth/customizing/#specifying-a-custom-user-model)
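A minimal sketch of option 2, assuming you want to keep using Django's stock `login()`: disconnect the receiver once at startup (for example in an `AppConfig.ready()`):

```
from django.contrib.auth.models import update_last_login
from django.contrib.auth.signals import user_logged_in

# Stop Django from trying to write last_login on a model that lacks the field.
user_logged_in.disconnect(update_last_login)
```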
Thanks. I defined a custom `login` method as follows to get through this issue in my automated tests, in which I keep the signals off by default. Here's a working code example (imports added for completeness).

```
from django.contrib.auth.models import User, update_last_login
from django.contrib.auth.signals import user_logged_in
from django.test import Client


def login(client: Client, user: User) -> None:
    """
    Disconnect the update_last_login signal and force_login as `user`

    Ref: https://stackoverflow.com/questions/38156681/error-about-django-custom-authentication-and-login

    Args:
        client: Django Test client instance to be used to login
        user: User object to be used to login
    """
    user_logged_in.disconnect(receiver=update_last_login)
    client.force_login(user=user)
    user_logged_in.connect(receiver=update_last_login)
```

This in turn is used in tests as follows:

```
import factory
from django.db.models import signals
from django.test import Client, TestCase


class TestSomething(TestCase):
    """
    Scenarios to validate:
    ....
    """

    @classmethod
    @factory.django.mute_signals(signals.pre_save, signals.post_save)
    def setUpTestData(cls):
        """ Helps keep tests execution time under control """
        cls.client = Client()
        cls.content_type = 'application/json'

    def test_a_scenario(self):
        """
        Scenario details...
        """
        login(client=self.client, user=<User object>)
        response = self.client.post(...)
        ...
```

Hope it helps.
14,033
50,092,608
I've defined a helper method to load json from a string or file like so: ``` def get_json_from_string_or_file(obj): if type(obj) is str: return json.loads(obj) return json.load(obj) ``` When I try it with a file it fails on the `load` call with the following exception: ``` File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 291, in load **kw) File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 339, in loads return _default_decoder.decode(s) File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 364, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 382, in raw_decode raise ValueError("No JSON object could be decoded") ValueError: No JSON object could be decoded ``` I have triple checked my json file and it is definitely not empty, it just has a simple foo bar key value mapping. Also, I'm feeding the json file into the method like so: `os.path.join(os.path.dirname(__file__), "..", "test.json")` Any idea whats going on here? EDIT: I changed my helper method to the following to fix up the mistakes addressed in the answers and comments and it's still not working, i'm getting the same error when I pass in an already open file. ``` def get_json_from_file(_file): if type(_file) is str: with open(_file) as json_file: return json.load(json_file) return json.load(_file) ``` EDIT #2: I FOUND THE PROBLEM! this error occurs if you call `json.load` on an open file twice in a row. It turns out that another part of my application had already got that file open. Here is some reproduction code: ``` with open("/test.json") as f: json.load(f) json.load(f) ```
2018/04/30
[ "https://Stackoverflow.com/questions/50092608", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2155605/" ]
Following code works. ``` import os import json def get_json_from_string_or_file(obj): if type(obj) is str: return json.loads(obj) return json.load(obj) filename = os.path.join(os.path.dirname(__file__), "..", "test.json") with open(filename, 'r') as f: result = get_json_from_string_or_file(f) print(result) ``` `ValueError: No JSON object could be decoded` is reproduced when I passed the filename to the function, like `get_json_from_string_or_file(filename)`. --- ### 2nd version ``` import os import json def get_json_from_file(_file): if type(_file) is str: # <-- unnecessary if with open(_file) as json_file: return json.load(json_file) return json.load(_file) filename = os.path.join(os.path.dirname(__file__), "..", "test.json") result = get_json_from_file(filename) print(result) # python2: {u'foo': u'bar'} # python3: {'foo': 'bar'} ```
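Given the OP's EDIT #2 (the handle had already been consumed by an earlier read), a sketch of the rewind fix: reading the same open handle twice works if you seek back to the start first.

```
import json

with open("test.json") as f:
    first = json.load(f)   # consumes the stream; file pointer is now at EOF
    f.seek(0)              # rewind so the second load sees the JSON again
    second = json.load(f)

assert first == second
```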
Sorry, I'd comment but I lack the rep. Can you paste a sample JSON file that doesn't work into a pastebin? I'll edit this into an answer after that if I can.
14,034
67,103,105
I'm trying to upload a PDF file to a Xero account using the Python requests library (POST method). Xero's Files API says "Requests must be formatted as multipart MIME" and has some required fields ([link](https://developer.xero.com/documentation/files-api/files#POST)), but I don't know how to do that exactly... If I do a GET request I get the list of files in the Xero account, but I'm having a problem while posting the file (POST request)...

My Code:

```
post_url = 'https://api.xero.com/files.xro/1.0/Files/'

files = {'file': open('/home/mobin/PycharmProjects/s3toxero/data/in_test_upload.pdf', 'rb')}

response = requests.post(
    post_url,
    headers={
        'Authorization': 'Bearer ' + new_tokens[0],
        'Xero-tenant-id': xero_tenant_id,
        'Accept': 'application/json',
        'Content-type': 'multipart/form-data; boundary=JLQPFBPUP0',
        'Content-Length': '1068',
    },
    files=files,
)

json_response = response.json()
print(f'Uploading Response ==> {json_response}')
print(f'Uploading Response ==> {response}')
```

Error Message/Response:

```
Uploading Response ==> [{'type': 'Validation', 'title': 'Validation failure', 'detail': 'No file is was attached'}]
Uploading Response ==> <Response [400]>
```
2021/04/15
[ "https://Stackoverflow.com/questions/67103105", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9025820/" ]
As far as I can see, you've set the boundary improperly. You set it in the headers but don't tell the `requests` library to use a custom boundary. Let me show you an example:

```
>>> import requests
>>> post_url = 'https://api.xero.com/files.xro/1.0/Files/'
>>> files = {'file': open('/tmp/test.txt', 'rb')}
>>> headers = {
...     'Authorization': 'Bearer secret',
...     'Xero-tenant-id': '42',
...     'Accept': 'application/json',
...     'Content-type': 'multipart/form-data; boundary=JLQPFBPUP0',
...     'Content-Length': '1068',
... }
>>> print(requests.Request('POST', post_url, files=files, headers=headers).prepare().body.decode('utf8'))
--f3e21ca5e554dd96430f07bb7a0d0e77
Content-Disposition: form-data; name="file"; filename="test.txt"


--f3e21ca5e554dd96430f07bb7a0d0e77--
```

As you can see, the real boundary (`f3e21ca5e554dd96430f07bb7a0d0e77`) is different from what was passed in the header (`JLQPFBPUP0`).

You can actually use the requests module directly to control the boundary like this:

Let's prepare a test file:

```
$ touch /tmp/test.txt
$ echo 'Hello, World!' > /tmp/test.txt
```

Test it:

```
>>> import requests
>>> post_url = 'https://api.xero.com/files.xro/1.0/Files/'
>>> files = {'file': open('/tmp/test.txt', 'rb')}
>>> headers = {
...     'Authorization': 'Bearer secret',
...     'Xero-tenant-id': '42',
...     'Accept': 'application/json',
...     'Content-Length': '1068',
... }
>>> body, content_type = requests.models.RequestEncodingMixin._encode_files(files, {})
>>> headers['Content-type'] = content_type
>>> print(requests.Request('POST', post_url, data=body, headers=headers).prepare().body.decode('utf8'))
--db57d23ff5dee7dc8dbab418e4bcb6dc
Content-Disposition: form-data; name="file"; filename="test.txt"

Hello, World!
--db57d23ff5dee7dc8dbab418e4bcb6dc--
>>> headers['Content-type']
'multipart/form-data; boundary=db57d23ff5dee7dc8dbab418e4bcb6dc'
```

Here the boundary is the same as in the header. Another alternative is using `requests-toolbelt`; the example below is taken from [this GitHub issue](https://github.com/requests/requests/issues/1997#issuecomment-40031999) thread:

```
from requests_toolbelt import MultipartEncoder

fields = {
    # your multipart form fields
}

m = MultipartEncoder(fields, boundary='my_super_custom_header')
r = requests.post(url, headers={'Content-Type': m.content_type}, data=m.to_string())
```

But it is better not to pass the `boundary` by hand at all and to entrust this work to the requests library.

---

**Update:**

A minimal working example using the Xero Files API and Python requests:

```
from os.path import abspath

import requests

access_token = 'secret'
tenant_id = 'secret'

filename = abspath('./example.png')

post_url = 'https://api.xero.com/files.xro/1.0/Files'
files = {'filename': open(filename, 'rb')}
values = {'name': 'Xero'}
headers = {
    'Authorization': f'Bearer {access_token}',
    'Xero-tenant-id': f'{tenant_id}',
    'Accept': 'application/json',
}

response = requests.post(
    post_url,
    headers=headers,
    files=files,
    data=values
)

assert response.status_code == 201
```
I've tested this with Xero's Files API to upload a file called "helloworld.rtf" in the same directory as my main app.py file. ``` var1 = "Bearer " var2 = YOUR_ACCESS_TOKEN access_token_header = var1 + var2 body = open('helloworld.rtf', 'rb') mp_encoder = MultipartEncoder( fields={ 'helloworld.rtf': ('helloworld.rtf', body), } ) r = requests.post( 'https://api.xero.com/files.xro/1.0/Files', data=mp_encoder, # The MultipartEncoder is posted as data # The MultipartEncoder provides the content-type header with the boundary: headers={ 'Content-Type': mp_encoder.content_type, 'xero-tenant-id': YOUR_XERO_TENANT_ID, 'Authorization': access_token_header } ) ```
14,035
19,829,952
Objective: Trying to run SL4A facade APIs from a Python shell on the host system (a Windows 7 PC).

My environment:

1. On my Windows 7 PC, I have Python 2.6.2
2. Android SDK tools rev 21, platform tools rev 16
3. API level 17 supported for JB 4.2
4. I have 2 devices (one running Android 2.3.3 and another Android 4.2.2), both running Python for Android and SL4A

I'm trying these commands as specified at <http://code.google.com/p/android-scripting/wiki/RemoteControl>

Here are the commands I'm trying in the Python shell:

Python 2.6.2 (r262:71605, Apr 14 2009, 22:40:02) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information.

```
>>> import android
>>> droid=android.Android
>>> droid.makeToast("Hello")
```

Traceback (most recent call last): File "", line 1, in AttributeError: type object 'Android' has no attribute 'makeToast'

Before this I'm performing the port forwarding and starting a private server as shown below

```
$ adb forward tcp:9999 tcp:4321
$ set AP_PORT=9999
```

I have also set the server on the target listening on port 9999 (through SL4A -> Preferences -> Server port).

Please help me understand where I'm making a mistake that gives the above error while trying droid.makeToast("Hello")?
2013/11/07
[ "https://Stackoverflow.com/questions/19829952", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2949720/" ]
***You can create your own Video Recording Screen*** Try like this, First Create a Custom Recorder using `SurfaceView` ``` public class VideoCapture extends SurfaceView implements SurfaceHolder.Callback { private MediaRecorder recorder; private SurfaceHolder holder; public Context context; private Camera camera; public static String videoPath = Environment.getExternalStorageDirectory() .getPath() +"/YOUR_VIDEO.mp4"; public VideoCapture(Context context) { super(context); this.context = context; init(); } public VideoCapture(Context context, AttributeSet attrs) { super(context, attrs); init(); } public VideoCapture(Context context, AttributeSet attrs, int defStyle) { super(context, attrs, defStyle); init(); } @SuppressLint("NewApi") public void init() { try { recorder = new MediaRecorder(); holder = getHolder(); holder.addCallback(this); holder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS); camera = getCameraInstance(); if(android.os.Build.VERSION.SDK_INT > 7) camera.setDisplayOrientation(90); camera.unlock(); recorder.setCamera(camera); recorder.setVideoSource(MediaRecorder.VideoSource.DEFAULT); recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4); recorder.setVideoEncoder(MediaRecorder.VideoEncoder.MPEG_4_SP); recorder.setOutputFile(videoPath); } catch (Exception e) { e.printStackTrace(); } } public void surfaceChanged(SurfaceHolder arg0, int arg1, int arg2, int arg3) { } public void surfaceCreated(SurfaceHolder mHolder) { try { recorder.setPreviewDisplay(mHolder.getSurface()); recorder.prepare(); recorder.start(); } catch (Exception e) { e.printStackTrace(); } } public void stopCapturingVideo() { try { recorder.stop(); camera.lock(); } catch (Exception e) { e.printStackTrace(); } } @TargetApi(5) public void surfaceDestroyed(SurfaceHolder arg0) { if (recorder != null) { stopCapturingVideo(); recorder.release(); camera.lock(); camera.release(); recorder = null; } } private Camera getCameraInstance() { Camera c = null; try { c = Camera.open(); // attempt to get a Camera instance } catch (Exception e) { // Camera is not available (in use or does not exist) } return c; } } ``` And you can use it inside your Layout for Activity Class ``` <your_package.VideoCapture android:id="@+id/videoView" android:layout_width="fill_parent" android:layout_height="fill_parent" android:background="#00000000" /> ``` EDITED:- ``` public class CaptureVideo extends Activity { private VideoCapture videoCapture; private Button stop; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.video_capture); videoCapture = (VideoCapture) findViewById(R.id.videoView); stop= (Button) findViewById(R.id.stop); stop.setOnClickListener(new OnClickListener() { public void onClick(View arg0) { videoCapture.stopCapturingVideo(); setResult(Activity.RESULT_OK); finish(); } }); } } ``` Hope this will help you
This way is using fragments: ``` public class CaptureVideo extends Fragment implements OnClickListener, SurfaceHolder.Callback{ private Button btnStartRec; MediaRecorder recorder; SurfaceHolder holder; boolean recording = false; private int randomNum; public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); } public void onActivityCreated(Bundle savedInstanceState) { super.onActivityCreated(savedInstanceState); } @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { View view001 = inflater.inflate(R.layout.capture_video,container,false); recorder = new MediaRecorder(); initRecorder(); btnStartRec = (Button) view001.findViewById(R.id.btnCaptureVideo); btnStartRec.setOnClickListener(this); SurfaceView cameraView = (SurfaceView)view001.findViewById(R.id.surfaceCamera); holder = cameraView.getHolder(); holder.addCallback(this); holder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS); cameraView.setClickable(true); cameraView.setOnClickListener(this); return view001; } @SuppressLint({ "SdCardPath", "NewApi" }) private void initRecorder() { Random rn = new Random(); int maximum = 10000000; int minimum = 00000001; int range = maximum - minimum + 1; randomNum = rn.nextInt(range) + minimum + 1 - 10; recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA); recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4); recorder.setVideoEncoder(MediaRecorder.VideoEncoder.MPEG_4_SP); if (this.getResources().getConfiguration().orientation != Configuration.ORIENTATION_LANDSCAPE) { recorder.setOrientationHint(90);//plays the video correctly }else{ recorder.setOrientationHint(180); } recorder.setOutputFile("/sdcard/MediaAppVideos/"+randomNum+".mp4"); } private void prepareRecorder() { recorder.setPreviewDisplay(holder.getSurface()); try { recorder.prepare(); } catch (IllegalStateException e) { e.printStackTrace(); //finish(); } catch (IOException e) { e.printStackTrace(); //finish(); } } public void onClick(View v) { switch (v.getId()) { case R.id.btnCaptureVideo: try{ if (recording) { recorder.stop(); recording = false; // Let's initRecorder so we can record again //initRecorder(); //prepareRecorder(); } else { recording = true; recorder.start(); } }catch(Exception e){ } default: break; } } public void surfaceCreated(SurfaceHolder holder) { prepareRecorder(); } public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) { } public void surfaceDestroyed(SurfaceHolder holder) { try { if (recording) { recorder.stop(); recording = false; } recorder.release(); // finish(); } catch (Exception e) { } } } ``` xml: ``` <?xml version="1.0" encoding="utf-8"?> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent" > <SurfaceView android:id="@+id/surfaceCamera" android:layout_width="match_parent" android:layout_height="match_parent" /> <Button android:id="@+id/btnCaptureVideo" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_alignParentBottom="true" android:layout_centerHorizontal="true" android:text="Start Recording" /> </RelativeLayout> ```
14,037
64,018,103
I've been working in a project for about 6 months, and I've been adding more urls each time. Right now, I'm coming into the problem that when I'm using `extend 'base.html'` into another pages, the CSS overlap, and I'm getting a mess. My question is: Which are the best practices when using `extend` for CSS files? Should every html file I create have it's own css? Or should I have one css that is called in the base.html file and from there all of the other html files take their styling from? This is what my base.html looks like: ``` <head> <meta charset="utf-8"> <meta http-equiv="cache-control" content="max-age=0" /> <meta http-equiv="cache-control" content="no-cache" /> <meta http-equiv="expires" content="0" /> <meta http-equiv="expires" content="Tue, 01 Jan 1980 1:00:00 GMT" /> <meta http-equiv="pragma" content="no-cache" /> </script> </head> <style> * { margin: 0; box-sizing: border-box; } html { font-size: 10px; font-family: sans-serif; } a { text-decoration: none; color: #ea0a8e; font-size: 18px; } header { width: 100%; height: auto; background: #121212; background-size: cover; position: relative; overflow: hidden; } .container { max-width: 120rem; width: 90%; margin: 0 auto; } .menu-toggle { position: fixed; top: 2.5rem; right: 2.5rem; color: #eeeeee; font-size: 3rem; cursor: pointer; z-index: 1000; display: none; } nav { padding-top: 5rem; display: flex; justify-content: space-between; align-items: center; text-transform: uppercase; font-size: 1.6rem; } .logo { font-size: 3rem; font-weight: 300; transform: translateX(-100rem); animation: slideIn .5s forwards; color: #ea0a8e; } .logo span { color: white; } nav ul { display: flex; padding: 0; } nav ul li { list-style: none; transform: translateX(100rem); animation: slideIn .5s forwards; } nav ul li:nth-child(1) { animation-delay: 0s; } nav ul li:nth-child(2) { animation-delay: .5s; } nav ul li:nth-child(3) { animation-delay: 1s; } nav ul li:nth-child(4) { animation-delay: 1.5s; } nav ul li:nth-child(5) { animation-delay: 1s; } nav ul li:nth-child(6) { animation-delay: .5s; } nav ul li:nth-child(7) { animation-delay: 0s; } nav ul li a { padding: 1rem 0; margin: 0 3rem; position: relative; } nav ul li a:last-child { margin-right: 0; } nav ul li a::before, nav ul li a::after { content: ''; position: absolute; width: 100%; height: 2px; background-color: crimson; left: 0; transform: scaleX(0); transition: all .5s; } nav ul li a::before { top: 0; transform-origin: left; } nav ul li a::after { bottom: 0; transform-origin: right; } nav ul li a:hover::before, nav ul li a:hover::after { transform: scaleX(1); } @keyframes slideIn { from {} to { transform: translateX(0); } } @media screen and (max-width: 700px) { .menu-toggle { display: block; } nav { padding-top: 0; display: none; flex-direction: column; justify-content: space-evenly; align-items: center; height: 100vh; text-align: center; } nav ul { flex-direction: column; } nav ul li { margin-top: 5rem; } nav ul li a { margin: 0; font-size: 2.5rem; } .logo { font-size: 5rem; } body::-webkit-scrollbar { width: 11px; } body { scrollbar-width: thin; scrollbar-color: var(--thumbBG) var(--scrollbarBG); } body::-webkit-scrollbar-track { background: var(--scrollbarBG); } body::-webkit-scrollbar-thumb { background-color: var(--thumbBG); border-radius: 6px; border: 3px solid var(--scrollbarBG); } } </style> <body> <div class="container"> <nav> <h1 class="logo">GP Recon<span>ciliation</span></a></h1> <ul> <li><a href="{% url 'dashboard' %}">Dashboard</a></li> <li><a href="{% url 'stats' %}">Stats</a></li> 
<li><a href="{% url 'addtransaction' %}">Add transaction</a></li> <li><a href="{% url 'upload' %}">Docs</a></li> <li><a href="{% url 'suggestion' %}">Suggestions</a></li> <li><a href="{% url 'about' %}">About</a></li> <li><a href="{% url 'logout' %}">Log out</a></li> </ul> </nav> </div> {% block content %} {% endblock %} </body> ``` This is the testing.html file I'm extending the base to: ``` {% extends 'pythonApp/base.html' %} {% load static %} {% block content %} <style> *{ margin: 0; padding: 0; background-color: #121212; } li{ list-style: none; position: relative; } ul.child{ display: none; width: 100%! important; } ul.parent > li{ background: #ea0a8e; width: 20%; padding: 10px; box-sizing: border-box; color: white; cursor: pointer; } </style> <body> <nav id="navigation"> <ul class="parent"> <li>Human Resources <ul class="child"> <li>Contact </li> <li>Escalation </li> <li>Performance</li> <li>Resource </li> <li>Benefits</li> </ul> </li> <li>Mision <ul class="child"> <li>Mission</li> </ul> </li> <li>Open <ul class="child"> <li>Field</li> <li>Office</li> </ul> </li> <li>Training <ul class="child"> <li>Training </li> <li>GP </li> <li>Mobile </li> <li>GP </li> <li>Field!</li> <li>Ready!</li> <li>Customer</li> </ul> </li> <li>Commissions <ul class="child"> <li>Mobile</li> <li>Quotas</li> </ul> </li> <li>Operations <ul class="child"> <li>Aud</li> <li>Ops</li> <li>To Documents</li> </ul> </li> <li>Help <ul class="child"> <li>Help</li> </ul> </li> <li>Documents <ul class="child"> <li>Documents</li> </ul> </li> <li>Resources <ul class="child"> <li>Resources</li> </ul> </li> <li>Dep <ul class="child"> <li>Dep</li> </ul> </li> <li>District <ul class="child"> <li>District</li> </ul> </li> <li>Store <ul class="child"> <li>Store</li> </ul> </li> <li>ME <ul class="child"> <li>ME</li> </ul> </li> </ul> </div> </body> </html> <script> $(function() { $('ul.parent > li').hover(function() { $(this).find('ul.child').show(200); }, function() { $(this).find('ul.child').hide(400); }); }); </script> {% endblock %} ``` What's happening is that the css effects that are in the styling of base.html are also applying to the test.html file
2020/09/22
[ "https://Stackoverflow.com/questions/64018103", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13821665/" ]
Django provides django.contrib.staticfiles, which is tasked with static files (CSS, JavaScript, media). In a nutshell, each template in the app will inherit the base static folder of your app. [Read the doc below and see how to configure the static files](https://docs.djangoproject.com/en/3.1/howto/static-files/)
You can add your static files in your template:

```
{% extends 'pythonApp/base.html' %}
{% load staticfiles %}

{# place the link inside a block that base.html defines, so the child template actually renders it #}
<link rel="stylesheet" type="text/css" href="{% static 'pathtostaticfile' %}" />
...
```
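A common pattern that directly addresses the leakage described in the question is a page-level hook block: shared rules live in one static stylesheet loaded by base.html, and each page adds its own sheet inside the block. This is a sketch; the file names css/base.css and css/testing.css are illustrative, and the child template still needs its usual {% extends %} line.

```
<!-- base.html: load shared rules once, and leave a hook for page styles -->
{% load static %}
<link rel="stylesheet" href="{% static 'css/base.css' %}">
{% block extra_css %}{% endblock %}

<!-- testing.html: only this page pulls in its own stylesheet -->
{% block extra_css %}
  <link rel="stylesheet" href="{% static 'css/testing.css' %}">
{% endblock %}
```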
14,040
65,278,114
I'm a beginner currently working on a small python project. It's a dice game in which there is player 1 and player 2. Players take turns to roll the dice and a game is 10 rounds in total(5 rounds for each player). The player with the highest number of points wins. I am trying to validate the first input so that when for example, the player enters a number instead of “Higher” or “Lower”, the program should respond with ‘Invalid input, please respond with “Higher” or “Lower”’. Here's my code so far ``` import random goAgain = "y" activePlayer = 1 player1Score = 0 player2Score = 0 maxGuesses = 9 while (goAgain == "y"): randNum = random.randint(1, 10) print(randNum) storedAnswer = input("Will you roll higher or lower?: ") while(storedAnswer.strip() != 'higher' or storedAnswer.strip() != 'lower'): print('Invalid input, please respond with "Higher" or "Lower"') storedAnswer = input("Will you roll higher or lower?: ") randNum2 = random.randint(1, 10) print(randNum2) if(storedAnswer.strip() == "lower" and randNum2 < randNum): if(activePlayer == 1): player1Score+=1 else: player2Score+=1 print("You Win!") maxGuesses+=1 # print(player1Score) # print(player2Score) elif (storedAnswer.strip() == "lower" and randNum2 > randNum): print("You Lose!") maxGuesses+=1 # print(player1Score) # print(player2Score) elif (storedAnswer.strip() == "higher" and randNum2 > randNum): if(activePlayer == 1): player1Score+=1 else: player2Score+=1 print("You Win!") maxGuesses+=1 # print(player1Score) # print(player2Score) elif (storedAnswer.strip() == "higher" and randNum2 < randNum): print("You Lose!") maxGuesses+=1 # print(player1Score) # print(player2Score) elif(randNum2 == randNum): print("Draw!") maxGuesses+=1 print(player1Score) print(player2Score) if(activePlayer == 1): activePlayer = 2 else: activePlayer = 1 if(maxGuesses == 10): if(player1Score > player2Score): print("Player 1 wins!") elif(player2Score > player1Score): print("Player 2 wins!") else: print("DRAW!") break else: goAgain = input("Do you want to play again: ") ``` The problem that occurs is that instead it print 'invalid input' regardless of whether the condition is true or not.
2020/12/13
[ "https://Stackoverflow.com/questions/65278114", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13824901/" ]
Wrap that section of input in a `while True` loop. Break the loop when the input is correct; otherwise keep looping for more input.

```
while True:
    b = int(input ("is player a Computer (0) or a human (1)?"))
    if b == 0:
        # player is a computer
        ...  # do computer stuff
        break
    elif b == 1:
        # player is a human
        ...  # do human stuff
        break
    else:
        print("That was not 1 or 0. Please try again")
```
Because `elif` is not a loop, the interpreter just checks the condition once and then moves forward to the next statement. You can solve this by adding a **while loop** before the first if condition, like this:

```
for i in range(players):
    a = input("name of player: ")
    b = int(input("is player a Computer (0) or a human (1)?"))
    while (b != 0 and b != 1):  # checking if b is not equal to both '0' and '1'
        b = int(input("is player a Computer (0) or a human (1)?"))
    if b == 0:
        list_2.append([a] + [True])
    elif b == 1:
        list_2.append([a] + [False])
```
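For the code in the question itself, the validation loop's condition uses `or`, which is always true (any string differs from at least one of the two words), so the loop can never exit on valid input. A sketch of the fix using membership testing:

```
storedAnswer = input("Will you roll higher or lower?: ").strip().lower()
# `x != 'higher' or x != 'lower'` is always True; test membership instead
while storedAnswer not in ('higher', 'lower'):
    print('Invalid input, please respond with "Higher" or "Lower"')
    storedAnswer = input("Will you roll higher or lower?: ").strip().lower()
```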
14,042
65,940,602
I'm new to python and I'm using the book called "Automate the Boring Stuff with Python". I was entering the following code (which was the same as the book): ``` while True: print('Please type your name.') name = input() if name == 'your name': break print('Thank you!') ``` And I got a 'break outside loop' error. I found out that a break could only be used within loops. Then I tried entering the following: ``` while True: print('Please type your name.') name = input() while name == 'your name': break print('Thank you!') ``` But it didn't work, it kept asking for a name. What do you think there was a mistake in the book or something?
2021/01/28
[ "https://Stackoverflow.com/questions/65940602", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14883326/" ]
It's perfectly fine; just check your indentation.
I think it's because of an **indentation** mistake. Copy and paste the code below and check whether it solves your problem.

```
while True:
    print('Please type your name.')
    name = input()
    if name == 'your name':
        break
print('Thank you!')
```

Indentation is critical in Python programming; you should always check it precisely.
14,044