| Column | Type | Min | Max |
| --- | --- | --- | --- |
| qid | int64 | 46k | 74.7M |
| question | stringlengths | 54 | 37.8k |
| date | stringlengths | 10 | 10 |
| metadata | listlengths | 3 | 3 |
| response_j | stringlengths | 29 | 22k |
| response_k | stringlengths | 26 | 13.4k |
| `__index_level_0__` | int64 | 0 | 17.8k |
19,017,840
I have two large Python dictionaries in the following form, and I want to compare them and report their mismatches at **identical indices**, for the **same key**. The dictionaries have the same keys, but the tuples do not have equal lengths. ``` d1 = {'a':(1,2,3,4,66,6,6,64), 'b':(3,2,5,3,2,1,1,1)} d2 = {'a':(1,2,4,3,66,6,6,64), 'b':(1,8,5,3,2,1,22,9)} ``` For example, for key 'a', indices 2 and 3 have different values. As the dictionaries are large and the tuple lengths are not necessarily equal, my amateur looping method doesn't work. ``` for k1,v1 in dict1: for k2, v2 in dict2: if k1 == k2: for i in range(len(v1)): for j in range(len(v2)): if i==j: if v1[i] != v2[j]: print k1, v1[i] print k2, v2[i] ```
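A minimal sketch (not from the thread) of one way to report mismatches at identical indices, assuming it is acceptable for the comparison to stop at the shorter tuple:

```python
d1 = {'a': (1, 2, 3, 4, 66, 6, 6, 64), 'b': (3, 2, 5, 3, 2, 1, 1, 1)}
d2 = {'a': (1, 2, 4, 3, 66, 6, 6, 64), 'b': (1, 8, 5, 3, 2, 1, 22, 9)}

for key in d1.keys() & d2.keys():                        # keys present in both dicts
    for i, (x, y) in enumerate(zip(d1[key], d2[key])):   # zip stops at the shorter tuple
        if x != y:
            print(key, i, x, y)
```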
2013/09/26
[ "https://Stackoverflow.com/questions/19017840", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2621263/" ]
**SOLUTION** The trick is to run the VM without GUI. With this you can easily run VM on WIN server like a service too. Prerequired is that exist some VM, you have some already. Below put its name instead `{vm_name}`. --- **1) At first we use build-in executable file "VBoxHeadless.exe".** Create file ``` vm.run.bat ``` with ``` cd "c:\Program Files\Oracle\VirtualBox\" VBoxHeadless.exe -s {vm_name} -v on ``` run and test it - with WIN "[Command Line Interface (CLI)](http://en.wikipedia.org/wiki/Command-line_interface)" called "[Command shell](http://technet.microsoft.com/en-us/library/bb490954.aspx)" - and VM will be opened running in background. ``` vm.run.bat ``` --- **2) Then we use "[Windows-based script host (WSCRIPT)](http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/wsh_runfromwindowsbasedhost.mspx?mfr=true)"** and language "[Microsoft Visual Basic Script (VBS)](http://en.wikipedia.org/wiki/VBScript)" and run above file "vm.run.bat". Create file ``` vm.run.vbs ``` put code ``` Set WshShell = WScript.CreateObject("WScript.Shell") obj = WshShell.Run("vm.run.bat", 0) set WshShell = Nothing ``` run and test it - CLI will be run in background. ``` wscript.exe vm.run.vbs ``` --- **Ref** * Thanks to [iain](http://web.archive.org/web/20150407100735/http://www.techques.com/question/2-188105/Virtualbox-Start-VM-Headless-on-Windows)
If you do not mind operating the application once manually, to end with OS running in background; here are the options: Open Virtual Box. Right Click on your Guest OS > Choose: Start Headless. Wait for a while till the OS boots. Then close the Virtual Box application.
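If you do not want to touch the GUI at all, a hedged alternative is the `VBoxManage startvm ... --type headless` command that ships with VirtualBox; the sketch below wraps it from Python (the install path and VM name are placeholders, not values from the question):

```python
import subprocess

# adjust to your VirtualBox install location and VM name
VBOXMANAGE = r"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe"
VM_NAME = "{vm_name}"

# programmatic equivalent of right-clicking the guest and choosing "Start Headless"
subprocess.run([VBOXMANAGE, "startvm", VM_NAME, "--type", "headless"], check=True)
```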
14,527
72,751,658
I have written a python program for printing a diamond. It is working properly except that it is printing an extra kite after printing a diamond. May someone please help me to remove this bug? I can't find it and please give a fix from this code please. CODE: ``` limitRows = int(input("Enter the maximum number of rows: ")) maxRows = limitRows + (limitRows - 1) while currentRow <= limitRows: spaces = limitRows - currentRow while spaces > 0: print(" ", end='') spaces -= 1 stars = currentRow while stars > 0: print("*", end=' ') stars -= 1 print() currentRow += 1 while currentRow <= maxRows: leftRows = maxRows - currentRow + 1 while leftRows > 0: spaces = limitRows - leftRows while spaces > 0: print(" ", end='') spaces -= 1 stars = leftRows while stars > 0: print("*", end=' ') stars -= 1 print() leftRows -= 1 currentRow += 1 ``` OUTPUT(Case 1): ``` D:\Python\diamond\venv\Scripts\python.exe D:/Python/diamond/main.py Enter the maximum number of rows: 4 * * * * * * * * * * * * * * * * * * * * Process finished with exit code 0 ``` OUTPUT(Case 2): ``` D:\Python\diamond\venv\Scripts\python.exe D:/Python/diamond/main.py Enter the maximum number of rows: 5 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * Process finished with exit code 0 ```
2022/06/25
[ "https://Stackoverflow.com/questions/72751658", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19409398/" ]
You have an extra while loop in the second half of your code. Try this ``` limitRows = int(input("Enter the maximum number of rows: ")) maxRows = limitRows + (limitRows - 1) currentRow = 0 while currentRow <= limitRows: spaces = limitRows - currentRow while spaces > 0: print(" ", end='') spaces -= 1 stars = currentRow while stars > 0: print("*", end=' ') stars -= 1 print() currentRow += 1 while currentRow <= maxRows: leftRows = maxRows - currentRow + 1 spaces = limitRows - leftRows # Removed unnecessary while loop here while spaces > 0: print(" ", end='') spaces -= 1 stars = leftRows while stars > 0: print("*", end=' ') stars -= 1 print() leftRows -= 1 currentRow += 1 ```
You can make this less complex by using just one loop as follows: ``` def make_row(n): return ' '.join(['*'] * n) rows = input('Number of rows: ') if (nrows := int(rows)) > 0: diamond = [make_row(nrows)] j = len(diamond[0]) for i in range(nrows-1, 0, -1): j -= 1 diamond.append(make_row(i).rjust(j)) print(*diamond[::-1], *diamond[1:], sep='\n') ``` **Example:** ``` Number of rows: 5 * * * * * * * * * * * * * * * * * * * * * * * * * ```
14,537
59,529,038
I am using [nameko](https://nameko.readthedocs.io/en/stable/) to build an ETL pipeline with a micro-service architecture, and I do not want to wait for a reply after making a RPC request. ``` from nameko.rpc import rpc, RpcProxy class Scheduler(object): name = "scheduler" task_runner = RpcProxy('task_runner') @rpc def schedule(self, task_type, group_id, time): return self.task_runner.start.async(task_type, group_id) ``` This code throws an error: ``` Traceback (most recent call last): File "/home/satnam-sandhu/.anaconda3/envs/etl/bin/nameko", line 8, in <module> sys.exit(main()) File "/home/satnam-sandhu/.anaconda3/envs/etl/lib/python3.8/site-packages/nameko/cli/main.py", line 112, in main args.main(args) File "/home/satnam-sandhu/.anaconda3/envs/etl/lib/python3.8/site-packages/nameko/cli/commands.py", line 110, in main main(args) File "/home/satnam-sandhu/.anaconda3/envs/etl/lib/python3.8/site-packages/nameko/cli/run.py", line 181, in main import_service(path) File "/home/satnam-sandhu/.anaconda3/envs/etl/lib/python3.8/site-packages/nameko/cli/run.py", line 46, in import_service __import__(module_name) File "./scheduler/service.py", line 15 return self.task_runner.start.async(task_type, group_id) ^ SyntaxError: invalid syntax ``` I am new with microservices and Nameko, and also I am using RabbitMQ as the queuing service.
2019/12/30
[ "https://Stackoverflow.com/questions/59529038", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8776121/" ]
I had the same problem; you need to replace the `async` method with the `call_async` one, and retrieve the data with `result()`. [Documentation](https://nameko.readthedocs.io/en/stable/built_in_extensions.html) [GitHub issue](https://github.com/nameko/nameko/pull/318)
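A minimal sketch (untested) of how the scheduler from the question might look with `call_async`; all names are taken from the question:

```python
from nameko.rpc import rpc, RpcProxy

class Scheduler(object):
    name = "scheduler"

    task_runner = RpcProxy('task_runner')

    @rpc
    def schedule(self, task_type, group_id, time):
        # fire the RPC without blocking; a reply proxy is returned immediately
        reply = self.task_runner.start.call_async(task_type, group_id)
        # reply.result() would block until task_runner responds, so only call it
        # if the return value is actually needed
        return None
```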
Use `call_async` instead of `async`, or for better results use events (`from nameko.events import EventDispatcher, event_handler`): ``` @event_handler("service_a", "event_emit_name") def get_result(self, payload): #do_something... ``` and in the other service ``` from nameko.events import EventDispatcher, event_handler @event_handler("service_a", "event_emit_name") def return_result(self, payload): #get payload and work over there ```
14,538
31,196,412
I am new to the world of map reduce, I have run a job and it seems to be taking forever to complete given that it is a relatively small task, I am guessing something has not gone according to plan. I am using hadoop version 2.6, here is some info gathered I thought could help. The map reduce programs themselves are straightforward so I won't bother adding those here unless someone really wants me to give more insight - the python code running for map reduce is identical to the one here - <http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/>. If someone can give a clue as to what has gone wrong or why that would be great. Thanks in advance. ``` Name: streamjob1669011192523346656.jar Application Type: MAPREDUCE Application Tags: State: ACCEPTED FinalStatus: UNDEFINED Started: 3-Jul-2015 00:17:10 Elapsed: 20mins, 57sec Tracking URL: UNASSIGNED Diagnostics: ``` this is what I get when running the program: ``` bin/hadoop jar share/hadoop/tools/lib/hadoop-streaming-2.6.0.jar - file python-files/mapper.py -mapper python-files/mapper.py -file python - files/reducer.py -reducer python-files/reducer.py -input /user/input/* - output /user/output 15/07/03 00:16:41 WARN streaming.StreamJob: -file option is deprecated, please use generic option -files instead. 2015-07-03 00:16:43.510 java[3708:1b03] Unable to load realm info from SCDynamicStore 15/07/03 00:16:44 WARN util.NativeCodeLoader: Unable to load native- hadoop library for your platform... using builtin-java classes where applicable packageJobJar: [python-files/mapper.py, python-files/reducer.py, /var/folders/4x/v16lrvy91ld4t8rqvnzbr83m0000gn/T/hadoop-unjar8212926403009053963/] [] /var/folders/4x/v16lrvy91ld4t8rqvnzbr83m0000gn/T/streamjob1669011192523346656.jar tmpDir=null 15/07/03 00:16:53 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032 15/07/03 00:16:55 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032 15/07/03 00:17:05 INFO mapred.FileInputFormat: Total input paths to process : 1 15/07/03 00:17:06 INFO mapreduce.JobSubmitter: number of splits:2 15/07/03 00:17:07 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1435852353333_0003 15/07/03 00:17:11 INFO impl.YarnClientImpl: Submitted application application_1435852353333_0003 15/07/03 00:17:11 INFO mapreduce.Job: The url to track the job: http://mymacbook.home:8088/proxy/application_1435852353333_0003/ 15/07/03 00:17:11 INFO mapreduce.Job: Running job: job_1435852353333_0003 ```
2015/07/02
[ "https://Stackoverflow.com/questions/31196412", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1285948/" ]
If a job stays in the `ACCEPTED` state for a long time and does not change to `RUNNING`, it could be due to the following reasons. The NodeManager (slave service) is either dead or unable to communicate with the ResourceManager: if `Active nodes` on the YARN ResourceManager [Web UI main page](http://mymacbook.home:8088/) is zero, you can confirm that no NodeManagers are connected to the ResourceManager. If so, you need to start the NodeManager. Another reason is that other running jobs might occupy the available slots, leaving no room for new jobs; check the values of `Memory Total`, `Memory Used`, `VCores Total` and `VCores Used` on the ResourceManager Web UI main page.
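A hedged way to check those numbers programmatically, assuming the standard ResourceManager REST API is reachable on port 8088 (the host name is taken from the question):

```python
import requests

# YARN ResourceManager cluster metrics endpoint (default web UI port 8088)
url = "http://mymacbook.home:8088/ws/v1/cluster/metrics"
metrics = requests.get(url).json()["clusterMetrics"]

# zero active nodes means no NodeManager is connected, so jobs sit in ACCEPTED
print("active nodes:", metrics["activeNodes"])
print("available memory (MB):", metrics["availableMB"])
print("available vcores:", metrics["availableVirtualCores"])
```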
Have you partitioned your data the same way you query it ? Basically, you don't want to query all your data, which is what you may be doing at the moment. That could explain why it's taking such a long time to run. You want to query a subset of your whole data set. For instance, if you partition over dates, you really want to write queries with a date constraint, otherwise the query will take forever to run. If you can, make your query with a constraint on the variable(s) used to partition your data.
14,539
32,959,770
In python, I can do this to get the current file's path: ``` os.path.dirname(os.path.abspath(__file__)) ``` But if I run this on a thread say: ``` def do_stuff(): class RunThread(threading.Thread): def run(self): print os.path.dirname(os.path.abspath(__file__)) a = RunThread() a.start() ``` I get this error: ``` Traceback (most recent call last): File "/usr/lib/python2.7/threading.py", line 551, in __bootstrap_inner self.run() File "readrss.py", line 137, in run print os.path.dirname(os.path.abspath(__file__)) NameError: global name '__file__' is not defined ```
2015/10/06
[ "https://Stackoverflow.com/questions/32959770", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1515864/" ]
``` import inspect print(inspect.stack()[0][1]) ``` [inspect](https://docs.python.org/2/library/inspect.html)
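Applied to the thread class from the question, a minimal sketch (untested) would be:

```python
import inspect
import os
import threading

class RunThread(threading.Thread):
    def run(self):
        # inspect.stack()[0][1] is the filename of the currently executing frame
        this_file = inspect.stack()[0][1]
        print(os.path.dirname(os.path.abspath(this_file)))

RunThread().start()
```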
I apologise for my previous answer. I was half asleep and replied stupidly. Every time I've done what you're trying to do, I have used it in the inverse order. E.g. `os.path.abspath(os.path.dirname(__file__))`
14,540
48,047,495
``` Collecting jws>=0.1.3 (from python-jwt==2.0.1->pyrebase) Using cached https://files.pythonhosted.org/packages/01/9e/1536d578ed50f5fe8196310ddcc921a3cd8e973312d60ac74488b805d395/jws-0.1.3.tar.gz Complete output from command python setup.py egg_info: Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\Wesely\AppData\Local\Temp\pip-install-w5z8dsub\jws\setup.py", line 17, in <module> long_description=read('README.md'), File "C:\Users\Wesely\AppData\Local\Temp\pip-install-w5z8dsub\jws\setup.py", line 5, in read return open(os.path.join(os.path.dirname(__file__), fname)).read() UnicodeDecodeError: 'cp949' codec can't decode byte 0xe2 in position 500: illegal multibyte sequence ---------------------------------------- ``` I tried easy\_install pyrebase, and using virtualenv. I'm using Korean Windows 10.
2018/01/01
[ "https://Stackoverflow.com/questions/48047495", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8272238/" ]
I've just solved this. [MyGitHub.io](https://wesely.github.io/pip,%20python,%20pip/Fix-'cp950'-Error-when-using-'pip-install'/) It's a bug in the `jws` package; its `setup.py` should handle the file encoding. My solution: install `jws` first * use `pip download jws` instead of `pip install` * use 7z to open the `filename.tar.gz` archive * edit the `setup.py` file * change this line ``` return open(os.path.join(os.path.dirname(__file__), fname)).read() ``` to ``` return open(os.path.join(os.path.dirname(__file__), fname), encoding="UTF-8").read() ``` * re-archive the tar file and run `pip install filename.tar` After `jws` is installed, run `pip install pyrebase`. It should work.
I solved this problem by uninstalling Visual Studio Community 2017, including the Python development option.
14,541
12,646,883
I've been trying to make a simple blackjack game in Python and I seem to be stuck; my code is as follows: ``` from random import choice def deck(): cards = range(1, 12) return choice(cards) def diack(): card1= deck() card2 = deck() hand = card1 + card2 print hand if hand < 21: print raw_input("Would you like to hit or stand?") if "hit": return hand + deck() elif "stand": return hand ``` When I run that it seems to work for "hit", but when I type in "stand" it seems to "hit" as well. As you can probably tell by now I am extremely new to programming. Could you guys help point me in the right direction on how to make my game work (I'd like to use as much of my code as possible)?
2012/09/28
[ "https://Stackoverflow.com/questions/12646883", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1689672/" ]
`if "hit"` just tests if the string `"hit"` exists, and it does. Thus, the `elif` statement is never executed. You need to capture the user input in a variable and test against that instead: ``` choice = raw_input("Would you like to hit or stand?") print choice if choice == "hit": return hand + deck() elif choice == "stand": return hand ```
Assuming you get the indentation right: ``` print raw_input("Would you like to hit or stand?") if "hit": return hand + deck() elif "stand": return hand ``` Your `if` is just checking whether the string `"hit"` is true. All non-empty strings are true, and `"hit"` is non-empty, so this will always succeed. What you want is something like this: ``` cmd = raw_input("Would you like to hit or stand?") if cmd == "hit": return hand + deck() elif cmd == "stand": return hand ``` Now you're checking whether the result of `raw_input` is the string `"hit"`, which is what you want.
14,543
43,983,127
I wish to find all words that start with "Am" and this is what I tried so far with python ``` import re my_string = "America's mom, American" re.findall(r'\b[Am][a-zA-Z]+\b', my_string) ``` but this is the output that I get ``` ['America', 'mom', 'American'] ``` Instead of what I want ``` ['America', 'American'] ``` I know that in regex `[Am]` means match either `A` or `m`, but is it possible to match `A` and `m` as well?
2017/05/15
[ "https://Stackoverflow.com/questions/43983127", "https://Stackoverflow.com", "https://Stackoverflow.com/users/863713/" ]
The `[Am]`, a positive [character class](http://www.regular-expressions.info/charclass.html), matches either `A` or `m`. To match a *sequence* of chars, you need to use them one after another. Remove the brackets: ``` import re my_string = "America's mom, American" print(re.findall(r'\bAm[a-zA-Z]+\b', my_string)) # => ['America', 'American'] ``` See the [Python demo](http://ideone.com/saGZrG) **This pattern details**: * `\b` - a word boundary * `Am` - a string of chars matched as a sequence `Am` * `[a-zA-Z]+` - 1 or more ASCII letters * `\b` - a word boundary.
Don't use character class: ``` import re my_string = "America's mom, American" re.findall(r'\bAm[a-zA-Z]+\b', my_string) ```
14,544
65,376,345
I started scrapy with the official tutorial, but I can't get it to run successfully. My code is exactly the same as the official one. ``` import scrapy class QuotesSpider(scrapy.Spider): name = 'Quotes'; def start_requests(self): urls = [ 'http://quotes.toscrape.com/page/1/', ] for url in urls: yield scrapy.Request(url=url,callback = self.parse); def parse(self, response): page = response.url.split('/')[-2]; print('--------------------------------->>>>'); for quote in response.css('div.quote'): yield { 'text': quote.css('span.text::text').get(), 'author': quote.css('small.author::text').get(), 'tags': quote.css('div.tags a.tag::text').getall(), } ``` When I execute it from CMD with the command `scrapy crawl Quotes`, the result looks like this: ``` 2020-12-20 10:00:25 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://quotes.toscrape.com/page/1/> (referer: None) 2020-12-20 10:00:26 [scrapy.core.scraper] ERROR: Spider error processing <GET http://quotes.toscrape.com/page/1/> (referer: None) Traceback (most recent call last): File "c:\users\a\appdata\local\programs\python\python38-32\lib\site-packages\twisted\internet\defer.py", line 1418, in _inlineCallbacks result = g.send(result) StopIteration: <200 http://quotes.toscrape.com/page/1/> During handling of the above exception, another exception occurred: Traceback (most recent call last): File "c:\users\a\appdata\local\programs\python\python38-32\lib\site-packages\scrapy\utils\defer.py", line 55, in mustbe_deferred result = f(*args, **kw) File "c:\users\a\appdata\local\programs\python\python38-32\lib\site-packages\scrapy\core\spidermw.py", line 58, in process_spider_input return scrape_func(response, request, spider) File "c:\users\a\appdata\local\programs\python\python38-32\lib\site-packages\scrapy\core\scraper.py", line 149, in call_spider warn_on_generator_with_return_value(spider, callback) File "c:\users\a\appdata\local\programs\python\python38-32\lib\site-packages\scrapy\utils\misc.py", line 245, in warn_on_generator_with_return_value if is_generator_with_return_value(callable): File "c:\users\a\appdata\local\programs\python\python38-32\lib\site-packages\scrapy\utils\misc.py", line 230, in is_generator_with_return_value tree = ast.parse(dedent(inspect.getsource(callable))) File "c:\users\a\appdata\local\programs\python\python38-32\lib\ast.py", line 47, in parse return compile(source, filename, mode, flags, File "<unknown>", line 1 def parse(self, response): ^ IndentationError: unexpected indent 2020-12-20 10:00:26 [scrapy.core.engine] INFO: Closing spider (finished) 2020-12-20 10:00:26 [scrapy.statscollectors] INFO: Dumping Scrapy stats: ``` --- I have checked it many times but I still do not know how to deal with it!
2020/12/20
[ "https://Stackoverflow.com/questions/65376345", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14482346/" ]
There is an `IndentationError`. You need to fix the code indentation; after that it works fine.
You might find a solution for your issue here > > [Scrapy installed, but won't run from the command line](https://stackoverflow.com/questions/37757233/scrapy-installed-but-wont-run-from-the-command-line) > > >
14,546
12,634,452
I have 2 dictionaries ``` a = {'I': [1,2], 'II': [1,2], 'III': [1,2]} b = {'I': [3,4], 'II': [3,4], 'IV': [3,4]} ``` How can I merge them such that I get the following result? ``` c = merge_dicts(a,b) ``` where c is `{'I': [1,2,3,4], 'II': [1,2,3,4], 'III': [1,2], 'IV': [3,4]}` Is there a good pythonic way of doing this? Note that I am a python newbie, even though I am using words like pythonic. Thanks in advance.
2012/09/28
[ "https://Stackoverflow.com/questions/12634452", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1742869/" ]
Are you sure they have the same keys? You could do: ``` c = dict( (k,a[k]+b[k]) for k in a ) ``` Addition of lists concatenates so `a[k] + b[k]` gives you something like `[1,2]+[3,4]` which equals `[1,2,3,4]`. The `dict` constructor can take a series of 2-element iterables which turn into `key` - `value` pairs. If they don't share the keys, you can use `set`s. ``` aset = set(a) bset = set(b) common_keys = aset & bset a_only_keys = aset - bset b_only_keys = bset - aset c = dict( (k,a[k]) for k in a_only_keys ) c.update( (k,b[k]) for k in b_only_keys ) c.update( (k,a[k]+b[k]) for k in common_keys ) ```
``` >>> a = {'I': [1,2], 'II': [1,2]} >>> b = {'I': [3,4], 'II': [3,4]} >>> {key:a[key]+b[key] for key in a} {'I': [1, 2, 3, 4], 'II': [1, 2, 3, 4]} ``` Note that this only works if they share keys exactly.
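A hedged alternative sketch, not from either answer, that merges shared and unshared keys in a single pass with `collections.defaultdict`:

```python
from collections import defaultdict

a = {'I': [1, 2], 'II': [1, 2], 'III': [1, 2]}
b = {'I': [3, 4], 'II': [3, 4], 'IV': [3, 4]}

c = defaultdict(list)
for d in (a, b):
    for key, values in d.items():
        c[key].extend(values)   # concatenates lists, creating missing keys as needed

print(dict(c))
# {'I': [1, 2, 3, 4], 'II': [1, 2, 3, 4], 'III': [1, 2], 'IV': [3, 4]}
```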
14,548
38,921,815
I am using Python 3.5. When I try to return a generator instance from another generator function, I get a StopIteration error. Why? Here is my code: ``` >>> def gen(start, end): ... '''generator function similar to range function''' ... while start <= end: ... yield start ... start += 1 ... >>> def check(ingen, flag=None): ... if flag: ... for n in ingen: ... yield n*2 ... else: ... return ingen ... >>> # Trigger else clause in check function >>> a = check(gen(1,3)) >>> next(a) Traceback (most recent call last): File "<stdin>", line 1, in <module> StopIteration: <generator object gen at 0x7f37dc46e828> ``` It looks like the generator is somehow exhausted before the else clause returns the generator. It works fine with this function: ``` >>> def check_v2(ingen): ... return ingen ... >>> b = check_v2(gen(1, 3)) >>> next(b) 1 >>> next(b) 2 >>> next(b) 3 ```
2016/08/12
[ "https://Stackoverflow.com/questions/38921815", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
In Python, if `yield` is present in a function, then Python treats it as a generator. In a generator, any return will raise `StopIteration` with the returned value. This is a new feature in Python 3.3: see [PEP 380](https://www.python.org/dev/peps/pep-0380/) and [here](https://stackoverflow.com/a/16780113/2097780). `check_v2` works because it doesn't contain a `yield` and is therefore a normal function. There are two ways to accomplish what you want: * Change the `return` to a `yield` in `check`. * Have the caller trap `StopIteration`, as shown below ``` try: next(a) except StopIteration as ex: print(ex.value) ```
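A minimal sketch of the first option above, using `yield from` delegation (available since Python 3.3) instead of a plain `yield` loop; `gen` is copied from the question:

```python
def gen(start, end):
    '''generator function similar to range'''
    while start <= end:
        yield start
        start += 1

def check(ingen, flag=None):
    if flag:
        for n in ingen:
            yield n * 2
    else:
        # delegate to the incoming generator instead of returning it,
        # so check() behaves as a generator in both branches
        yield from ingen

a = check(gen(1, 3))
print(next(a))  # 1
print(next(a))  # 2
```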
When a generator hits its `return` statement (explicit or not) it raises `StopIteration`. So when you `return ingen` you end the iteration. `check_v2` is not a generator, since it does not contain the `yield` statement, that's why it works.
14,553
57,484,399
I'm new to Python and am using Anaconda on Windows 10 to learn how to implement machine learning. Running this code on Spyder: ```py import sklearn as skl ``` Originally got me this: ``` Traceback (most recent call last): File "<ipython-input-1-7135d3f24347>", line 1, in <module> runfile('C:/Users/julia/.spyder-py3/temp.py', wdir='C:/Users/julia/.spyder-py3') File "C:\ProgramData\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile execfile(filename, namespace) File "C:\ProgramData\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "C:/Users/julia/.spyder-py3/temp.py", line 3, in <module> from sklearn.family import Model File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\__init__.py", line 76, in <module> from .base import clone File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\base.py", line 16, in <module> from .utils import _IS_32BIT File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\utils\__init__.py", line 20, in <module> from .validation import (as_float_array, File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\utils\validation.py", line 21, in <module> from .fixes import _object_dtype_isnan File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\utils\fixes.py", line 289, in <module> from scipy.sparse.linalg import lsqr as sparse_lsqr File "C:\ProgramData\Anaconda3\lib\site-packages\scipy\sparse\linalg\__init__.py", line 114, in <module> from .isolve import * File "C:\ProgramData\Anaconda3\lib\site-packages\scipy\sparse\linalg\isolve\__init__.py", line 6, in <module> from .iterative import * File "C:\ProgramData\Anaconda3\lib\site-packages\scipy\sparse\linalg\isolve\iterative.py", line 10, in <module> from . import _iterative ImportError: DLL load failed: The specified module could not be found. ``` I then went to the command line and did ``` pip uninstall scipy pip install scipy pip uninstall scikit-learn pip install scikit-learn ``` and got no errors when doing so, with scipy 1.3.1 (along with numpy 1.17.0) and scikit-learn 0.21.3 being installed according to the command line. 
However, now when I try to import sklearn I get a different error: ``` File "<ipython-input-2-7135d3f24347>", line 1, in <module> runfile('C:/Users/julia/.spyder-py3/temp.py', wdir='C:/Users/julia/.spyder-py3') File "C:\ProgramData\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile execfile(filename, namespace) File "C:\ProgramData\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "C:/Users/julia/.spyder-py3/temp.py", line 3, in <module> from sklearn.family import Model File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\__init__.py", line 76, in <module> from .base import clone File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\base.py", line 16, in <module> from .utils import _IS_32BIT File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\utils\__init__.py", line 20, in <module> from .validation import (as_float_array, File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\utils\validation.py", line 21, in <module> from .fixes import _object_dtype_isnan File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\utils\fixes.py", line 289, in <module> from scipy.sparse.linalg import lsqr as sparse_lsqr File "C:\ProgramData\Anaconda3\lib\site-packages\scipy\sparse\linalg\__init__.py", line 113, in <module> from .isolve import * File "C:\ProgramData\Anaconda3\lib\site-packages\scipy\sparse\linalg\isolve\__init__.py", line 6, in <module> from .iterative import * File "C:\ProgramData\Anaconda3\lib\site-packages\scipy\sparse\linalg\isolve\iterative.py", line 136, in <module> def bicg(A, b, x0=None, tol=1e-5, maxiter=None, M=None, callback=None, atol=None): File "C:\ProgramData\Anaconda3\lib\site-packages\scipy\_lib\_threadsafety.py", line 59, in decorator return lock.decorate(func) File "C:\ProgramData\Anaconda3\lib\site-packages\scipy\_lib\_threadsafety.py", line 47, in decorate return scipy._lib.decorator.decorate(func, caller) AttributeError: module 'scipy' has no attribute '_lib' ``` Any suggestions? I've uninstalled and reinstalled Anaconda and I'm still getting the same issue. 
EDIT: When I do ``` conda list --show-channel-urls ``` I get ``` # packages in environment at C:\ProgramData\Anaconda3: # # Name Version Build Channel _ipyw_jlab_nb_ext_conf 0.1.0 py37_0 defaults alabaster 0.7.12 py37_0 defaults anaconda-client 1.7.2 py37_0 defaults anaconda-navigator 1.9.7 py37_0 defaults asn1crypto 0.24.0 py37_0 defaults astroid 2.2.5 py37_0 defaults attrs 19.1.0 py37_1 defaults babel 2.7.0 py_0 defaults backcall 0.1.0 py37_0 defaults backports 1.0 py_2 defaults backports.functools_lru_cache 1.5 py_2 defaults backports.tempfile 1.0 py_1 defaults backports.weakref 1.0.post1 py_1 defaults beautifulsoup4 4.7.1 py37_1 defaults blas 1.0 mkl defaults bleach 3.1.0 py37_0 defaults bzip2 1.0.8 he774522_0 defaults ca-certificates 2019.5.15 1 defaults certifi 2019.6.16 py37_1 defaults cffi 1.12.3 py37h7a1dbc1_0 defaults chardet 3.0.4 py37_1003 defaults click 7.0 py37_0 defaults cloudpickle 1.2.1 py_0 defaults clyent 1.2.2 py37_1 defaults colorama 0.4.1 py37_0 defaults conda 4.7.11 py37_0 defaults conda-build 3.18.8 py37_0 defaults conda-env 2.6.0 1 defaults conda-package-handling 1.3.11 py37_0 defaults conda-verify 3.4.2 py_1 defaults console_shortcut 0.1.1 3 defaults cryptography 2.7 py37h7a1dbc1_0 defaults decorator 4.4.0 py37_1 defaults defusedxml 0.6.0 py_0 defaults docutils 0.15.1 py37_0 defaults entrypoints 0.3 py37_0 defaults filelock 3.0.12 py_0 defaults freetype 2.9.1 ha9979f8_1 defaults future 0.17.1 py37_0 defaults glob2 0.7 py_0 defaults icc_rt 2019.0.0 h0cc432a_1 defaults icu 58.2 ha66f8fd_1 defaults idna 2.8 py37_0 defaults imagesize 1.1.0 py37_0 defaults intel-openmp 2019.4 245 defaults ipykernel 5.1.1 py37h39e3cac_0 defaults ipython 7.7.0 py37h39e3cac_0 defaults ipython_genutils 0.2.0 py37_0 defaults ipywidgets 7.5.1 py_0 defaults isort 4.3.21 py37_0 defaults jedi 0.13.3 py37_0 defaults jinja2 2.10.1 py37_0 defaults joblib 0.13.2 py37_0 defaults jpeg 9b hb83a4c4_2 defaults json5 0.8.5 py_0 defaults jsonschema 3.0.1 py37_0 defaults jupyter_client 5.3.1 py_0 defaults jupyter_core 4.5.0 py_0 defaults jupyterlab 1.0.2 py37hf63ae98_0 defaults jupyterlab_server 1.0.0 py_1 defaults keyring 18.0.0 py37_0 defaults lazy-object-proxy 1.4.1 py37he774522_0 defaults libarchive 3.3.3 h0643e63_5 defaults libiconv 1.15 h1df5818_7 defaults liblief 0.9.0 ha925a31_2 defaults libpng 1.6.37 h2a8f88b_0 defaults libsodium 1.0.16 h9d3ae62_0 defaults libtiff 4.0.10 hb898794_2 defaults libxml2 2.9.9 h464c3ec_0 defaults lz4-c 1.8.1.2 h2fa13f4_0 defaults lzo 2.10 h6df0209_2 defaults m2w64-gcc-libgfortran 5.3.0 6 defaults m2w64-gcc-libs 5.3.0 7 defaults m2w64-gcc-libs-core 5.3.0 7 defaults m2w64-gmp 6.1.0 2 defaults m2w64-libwinpthread-git 5.0.0.4634.697f757 2 defaults markupsafe 1.1.1 py37he774522_0 defaults mccabe 0.6.1 py37_1 defaults menuinst 1.4.16 py37he774522_0 defaults mistune 0.8.4 py37he774522_0 defaults mkl 2019.4 245 defaults mkl-service 2.0.2 py37he774522_0 defaults mkl_fft 1.0.12 py37h14836fe_0 defaults mkl_random 1.0.2 py37h343c172_0 defaults msys2-conda-epoch 20160418 1 defaults navigator-updater 0.2.1 py37_0 defaults nbconvert 5.5.0 py_0 defaults nbformat 4.4.0 py37_0 defaults notebook 6.0.0 py37_0 defaults numpy 1.17.0 pypi_0 pypi numpy-base 1.16.4 py37hc3f5095_0 defaults numpydoc 0.9.1 py_0 defaults olefile 0.46 py37_0 defaults openssl 1.1.1c he774522_1 defaults packaging 19.0 py37_0 defaults pandas 0.25.0 py37ha925a31_0 defaults pandoc 2.2.3.2 0 defaults pandocfilters 1.4.2 py37_1 defaults parso 0.5.0 py_0 defaults pickleshare 0.7.5 py37_0 defaults pillow 6.1.0 py37hdc69c19_0 
defaults pip 19.2.2 pypi_0 pypi pkginfo 1.5.0.1 py37_0 defaults powershell_shortcut 0.0.1 2 defaults prometheus_client 0.7.1 py_0 defaults prompt_toolkit 2.0.9 py37_0 defaults psutil 5.6.3 py37he774522_0 defaults py-lief 0.9.0 py37ha925a31_2 defaults pycodestyle 2.5.0 py37_0 defaults pycosat 0.6.3 py37hfa6e2cd_0 defaults pycparser 2.19 py37_0 defaults pyflakes 2.1.1 py37_0 defaults pygments 2.4.2 py_0 defaults pylint 2.3.1 py37_0 defaults pyopenssl 19.0.0 py37_0 defaults pyparsing 2.4.0 py_0 defaults pyqt 5.9.2 py37h6538335_2 defaults pyrsistent 0.14.11 py37he774522_0 defaults pysocks 1.7.0 py37_0 defaults python 3.7.3 h8c8aaf0_1 defaults python-dateutil 2.8.0 py37_0 defaults python-libarchive-c 2.8 py37_13 defaults pytz 2019.1 py_0 defaults pywin32 223 py37hfa6e2cd_1 defaults pywinpty 0.5.5 py37_1000 defaults pyyaml 5.1.1 py37he774522_0 defaults pyzmq 18.0.0 py37ha925a31_0 defaults qt 5.9.7 vc14h73c81de_0 defaults qtawesome 0.5.7 py37_1 defaults qtconsole 4.5.2 py_0 defaults qtpy 1.8.0 py_0 defaults requests 2.22.0 py37_0 defaults rope 0.14.0 py_0 defaults ruamel_yaml 0.15.46 py37hfa6e2cd_0 defaults scikit-learn 0.21.3 pypi_0 pypi scipy 1.3.0 pypi_0 pypi send2trash 1.5.0 py37_0 defaults setuptools 41.0.1 py37_0 defaults sip 4.19.8 py37h6538335_0 defaults six 1.12.0 py37_0 defaults snowballstemmer 1.9.0 py_0 defaults soupsieve 1.9.2 py37_0 defaults sphinx 2.1.2 py_0 defaults sphinxcontrib-applehelp 1.0.1 py_0 defaults sphinxcontrib-devhelp 1.0.1 py_0 defaults sphinxcontrib-htmlhelp 1.0.2 py_0 defaults sphinxcontrib-jsmath 1.0.1 py_0 defaults sphinxcontrib-qthelp 1.0.2 py_0 defaults sphinxcontrib-serializinghtml 1.1.3 py_0 defaults spyder 3.3.6 py37_0 defaults spyder-kernels 0.5.1 py37_0 defaults sqlite 3.29.0 he774522_0 defaults terminado 0.8.2 py37_0 defaults testpath 0.4.2 py37_0 defaults tk 8.6.8 hfa6e2cd_0 defaults tornado 6.0.3 py37he774522_0 defaults tqdm 4.32.1 py_0 defaults traitlets 4.3.2 py37_0 defaults urllib3 1.24.2 py37_0 defaults vc 14.1 h0510ff6_4 defaults vs2015_runtime 14.15.26706 h3a45250_4 defaults wcwidth 0.1.7 py37_0 defaults webencodings 0.5.1 py37_1 defaults wheel 0.33.4 py37_0 defaults widgetsnbextension 3.5.0 py37_0 defaults win_inet_pton 1.1.0 py37_0 defaults wincertstore 0.2 py37_0 defaults winpty 0.4.3 4 defaults wrapt 1.11.2 py37he774522_0 defaults xz 5.2.4 h2fa13f4_4 defaults yaml 0.1.7 hc54c509_2 defaults zeromq 4.3.1 h33f27b4_3 defaults zlib 1.2.11 h62dcd97_3 defaults zstd 1.3.7 h508b16e_0 defaults ``` with the version of scipy not matching up with the version that pip installed. Not sure how significant it is but it seemed strange to me. 
EDIT 2: Doing `pip list` returns ``` Package Version ----------------------------- --------- -cipy 1.3.0 alabaster 0.7.12 anaconda-client 1.7.2 anaconda-navigator 1.9.7 asn1crypto 0.24.0 astroid 2.2.5 attrs 19.1.0 Babel 2.7.0 backcall 0.1.0 backports.functools-lru-cache 1.5 backports.tempfile 1.0 backports.weakref 1.0.post1 beautifulsoup4 4.7.1 bleach 3.1.0 certifi 2019.6.16 cffi 1.12.3 chardet 3.0.4 Click 7.0 cloudpickle 1.2.1 clyent 1.2.2 colorama 0.4.1 conda 4.7.11 conda-build 3.18.8 conda-package-handling 1.3.11 conda-verify 3.4.2 cryptography 2.7 decorator 4.4.0 defusedxml 0.6.0 docutils 0.15.1 entrypoints 0.3 filelock 3.0.12 future 0.17.1 glob2 0.7 idna 2.8 imagesize 1.1.0 ipykernel 5.1.1 ipython 7.7.0 ipython-genutils 0.2.0 ipywidgets 7.5.1 isort 4.3.21 jedi 0.13.3 Jinja2 2.10.1 joblib 0.13.2 json5 0.8.5 jsonschema 3.0.1 jupyter-client 5.3.1 jupyter-core 4.5.0 jupyterlab 1.0.2 jupyterlab-server 1.0.0 keyring 18.0.0 lazy-object-proxy 1.4.1 libarchive-c 2.8 MarkupSafe 1.1.1 mccabe 0.6.1 menuinst 1.4.16 mistune 0.8.4 mkl-fft 1.0.12 mkl-random 1.0.2 mkl-service 2.0.2 navigator-updater 0.2.1 nbconvert 5.5.0 nbformat 4.4.0 notebook 6.0.0 numpy 1.17.0 numpydoc 0.9.1 olefile 0.46 packaging 19.0 pandas 0.25.0 pandocfilters 1.4.2 parso 0.5.0 pickleshare 0.7.5 Pillow 6.1.0 pio 0.0.3 pip 19.2.2 pkginfo 1.5.0.1 prometheus-client 0.7.1 prompt-toolkit 2.0.9 psutil 5.6.3 pycodestyle 2.5.0 pycosat 0.6.3 pycparser 2.19 pyflakes 2.1.1 Pygments 2.4.2 pylint 2.3.1 pyOpenSSL 19.0.0 pyparsing 2.4.0 pyrsistent 0.14.11 PySocks 1.7.0 python-dateutil 2.8.0 pytz 2019.1 pywin32 223 pywinpty 0.5.5 PyYAML 5.1.1 pyzmq 18.0.0 QtAwesome 0.5.7 qtconsole 4.5.2 QtPy 1.8.0 requests 2.22.0 rope 0.14.0 ruamel-yaml 0.15.46 scikit-learn 0.21.3 scipy 1.3.1 Send2Trash 1.5.0 setuptools 41.0.1 six 1.12.0 snowballstemmer 1.9.0 soupsieve 1.9.2 Sphinx 2.1.2 sphinxcontrib-applehelp 1.0.1 sphinxcontrib-devhelp 1.0.1 sphinxcontrib-htmlhelp 1.0.2 sphinxcontrib-jsmath 1.0.1 sphinxcontrib-qthelp 1.0.2 sphinxcontrib-serializinghtml 1.1.3 spyder 3.3.6 spyder-kernels 0.5.1 terminado 0.8.2 testpath 0.4.2 tornado 6.0.3 tqdm 4.32.1 traitlets 4.3.2 urllib3 1.24.2 wcwidth 0.1.7 webencodings 0.5.1 wheel 0.33.4 widgetsnbextension 3.5.0 win-inet-pton 1.1.0 wincertstore 0.2 wrapt 1.11.2 ``` `pip list` says scipy is version 1.3.1, while `conda list` says it's version 1.3.0. 
Again, not sure how relevant it is, but seems strange EDIT 3: I got this error after putting the following lines (suggested by @Brennan) in my command prompt then running the file ``` pip uninstall scikit-learn pip uninstall scipy conda uninstall scikit-learn conda uninstall scipy conda update --all conda install scipy conda install scikit-learn ``` This is the new error I get when trying to import sklearn: ``` Traceback (most recent call last): File "<ipython-input-15-7135d3f24347>", line 1, in <module> runfile('C:/Users/julia/.spyder-py3/temp.py', wdir='C:/Users/julia/.spyder-py3') File "C:\ProgramData\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile execfile(filename, namespace) File "C:\ProgramData\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "C:/Users/julia/.spyder-py3/temp.py", line 2, in <module> import sklearn as skl File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\__init__.py", line 76, in <module> from .base import clone File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\base.py", line 13, in <module> import numpy as np File "C:\ProgramData\Anaconda3\lib\site-packages\numpy\__init__.py", line 140, in <module> from . import _distributor_init File "C:\ProgramData\Anaconda3\lib\site-packages\numpy\_distributor_init.py", line 34, in <module> from . import _mklinit ImportError: DLL load failed: The specified module could not be found. ``` A possible cause of this might be me deleting the mkl\_rt.dll file from my Anaconda/Library/bin after encountering the error described here: <https://github.com/ContinuumIO/anaconda-issues/issues/10182> This puts me in a predicament, because reinstalling Anaconda to repair this will get me the same "ordinal 242 could not be located" error that I faced earlier, but not repairing it will continue the issue with sklearn... **FINAL EDIT: Solved by installing old version of Anaconda. Will mark as solved when I am able to (2 days)**
2019/08/13
[ "https://Stackoverflow.com/questions/57484399", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7385274/" ]
I ended up fixing this by uninstalling my current version of Anaconda and installing a version from a few months ago. I didn't get the "ordinal 242" error nor the issues with scikit-learn.
I encountered the same error after letting my PC sit for 4 days unattended. Restarting the kernel solved it. This probably won't work for everyone, but it might save someone a little agony.
14,559
66,436,933
I am working with sequencing data and need to count the number of reads that match to a grna library in python. Simplified my data looks like this: ``` reads = ['abc', 'abc','def', 'ghi'] grnas = ['abc', 'ghi'] ``` The grnas list is unique, while the reads list can contain entries that are not of interest and don't match to the grnas or are repeat entries. What I want to do is to reduce the list of reads to only contain those entries which match to one entry in grnas. I am currently doing this with a list comprehension like this: ``` reads_matched = [read for read in reads if (read in grnas)] ``` in my example for reads\_matched this would return : ``` ['abc', 'abc', 'ghi'] ``` Since both of my lists are very large (6 million entries in reads and 80k entries in grnas) this of course takes some time to compute. Is there any way for me to speed this up further? I have tried writing it as a for loop or while loop in many different variations but this is much slower than the method I listed above. In general I am very inexperienced with runtime improvements and have come to this solution through trial and error so any tips to further improve would be appreciated!
2021/03/02
[ "https://Stackoverflow.com/questions/66436933", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8413867/" ]
Your case cannot be less than O(n). Using a single process, the best solution is to build the lookup structure once and reuse it: ``` grnas_set = set(grnas) [x for x in reads if x in grnas_set] ``` or ``` grnas_keys = dict.fromkeys(grnas) [x for x in reads if x in grnas_keys] ``` But this is a simple case to parallelize: you can split the input data into several chunks of work and append all the results.
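A hedged sketch of the parallel variant mentioned above: split `reads` into chunks and filter each chunk against the same set in a worker pool (chunk size and process count are arbitrary choices, and the sample data is a stand-in for the real lists):

```python
from multiprocessing import Pool

reads = ['abc', 'abc', 'def', 'ghi'] * 1000   # stand-in for the 6M-entry list
grnas_set = set(['abc', 'ghi'])               # stand-in for the 80k-entry library

def filter_chunk(chunk):
    # each worker filters its own slice against the set
    return [read for read in chunk if read in grnas_set]

if __name__ == '__main__':
    chunk_size = 1000
    chunks = [reads[i:i + chunk_size] for i in range(0, len(reads), chunk_size)]
    with Pool(4) as pool:
        # append all partial results back into one flat list
        reads_matched = [r for part in pool.map(filter_chunk, chunks) for r in part]
    print(len(reads_matched))
```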
As the worst-case complexity of a lookup in both `set` and `dict` in Python is `O(N)`, the complexity of the program would be `O(N * M)`, so it would not be efficient to use them. Instead, use the `Counter` object, which does the lookup in `O(1)`, so the whole program runs in `O(max(N, M))` complexity. ```py from collections import Counter reads = ['abc', 'abc','def', 'ghi'] grnas = ['abc', 'ghi'] count = Counter(reads) for g in grnas: print(g, count[g]) ```
14,560
58,774,718
I'm writing multi-process code, which runs perfectly in Python 3.7. Yet I want one of the parallel processes to execute an IO-bound step that takes forever, using AsyncIO in order to get better performance, but I have not been able to get it to run. Ubuntu 18.04, Python 3.7, AsyncIO, pipenv (all pip libraries installed) The method in particular runs as expected using multithreading, which is what I want to replace with AsyncIO. I have googled and tried running the loop in the main() function and now only in the intended coroutine, have looked at examples and read about this new async way of getting things done, with no results so far. The following is the app.py code which is executed: **python app.py** ``` import sys import traceback import logging import asyncio from config import DEBUG from config import log_config from <some-module> import <some-class> if DEBUG: logging.config.dictConfig(log_config()) else: logging.basicConfig( level=logging.DEBUG, format='%(relativeCreated)6d %(threadName)s %(message)s') logger = logging.getLogger(__name__) def main(): try: <some> = <some-class>([ 'some-data1.csv', 'some-data2.csv' ]) <some>.run() except: traceback.print_exc() pdb.post_mortem() sys.exit(0) if __name__ == '__main__': asyncio.run(main()) ``` Here is the code where I have the given class defined: ``` _sql_client = SQLServer() _blob_client = BlockBlobStore() _keys = KeyVault() _data_source = _keys.fetch('some-data') # Multiprocessing _manager = mp.Manager() _ns = _manager.Namespace() def __init__(self, list_of_collateral_files: list) -> None: @timeit def _get_filter_collateral(self, ns: mp.managers.NamespaceProxy) -> None: @timeit def _get_hours(self, ns: mp.managers.NamespaceProxy) -> None: @timeit def _load_original_bids(self, ns: mp.managers.NamespaceProxy) -> None: @timeit def _merge_bids_with_hours(self, ns: mp.managers.NamespaceProxy) -> None: @timeit def _get_collaterial_per_month(self, ns: mp.managers.NamespaceProxy) -> None: @timeit def _calc_bid_per_path(self) -> None: @timeit def run(self) -> None: ``` The method containing the async code is here: ``` def _get_filter_collateral(self, ns: mp.managers.NamespaceProxy) -> None: all_files = self._blob_client.download_blobs(self._list_of_blob_files) _all_dfs = pd.DataFrame() async def read_task(file_: str) -> None: nonlocal _all_dfs df = pd.read_csv(StringIO(file_.content)) _all_dfs = _all_dfs.append(df, sort=False) tasks = [] loop = asyncio.new_event_loop() for file_ in all_files: tasks.append(asyncio.create_task(read_task(file_))) loop.run_until_complete(asyncio.wait(tasks)) loop.close() _all_dfs['TOU'] = _all_dfs['TOU'].map(lambda x: 'OFFPEAK' if x == 'OFF' else 'ONPEAK') ns.dfs = _all_dfs ``` And the method that calls the particular sequence and this async method is: ``` def run(self) -> None: extract = [] extract.append(mp.Process(target=self._get_filter_collateral, args=(self._ns, ))) extract.append(mp.Process(target=self._get_hours, args=(self._ns, ))) extract.append(mp.Process(target=self._load_original_bids, args=(self._ns, ))) # Start the parallel processes for process in extract: process.start() # Await for database process to end extract[1].join() extract[2].join() # Merge both database results self._merge_bids_with_hours(self._ns) extract[0].join() self._get_collaterial_per_month(self._ns) self._calc_bid_per_path() self._save_reports() self._upload_data() ``` These are the errors I get: ``` Process Process-2: Traceback (most recent call last): File "<some-path>/.pyenv/versions/3.7.4/lib/python3.7/multiprocessing/process.py", line
297, in _bootstrap self.run() File "<some-path>/.pyenv/versions/3.7.4/lib/python3.7/multiprocessing/process.py", line 99, in run self._target(*self._args, **self._kwargs) File "<some-path>/src/azure/application/utils/lib.py", line 10, in timed result = method(*args, **kwargs) File "<some-path>/src/azure/application/caiso/main.py", line 104, in _get_filter_collateral tasks.append(asyncio.create_task(read_task(file_))) File "<some-path>/.pyenv/versions/3.7.4/lib/python3.7/asyncio/tasks.py", line 350, in create_task loop = events.get_running_loop() RuntimeError: no running event loop <some-path>/.pyenv/versions/3.7.4/lib/python3.7/multiprocessing/process.py:313: RuntimeWarning: coroutine '<some-class>._get_filter_collateral.<locals>.read_task' was never awaited traceback.print_exc() RuntimeWarning: Enable tracemalloc to get the object allocation traceback DEBUG Calculating monthly collateral... Traceback (most recent call last): File "app.py", line 25, in main caiso.run() File "<some-path>/src/azure/application/utils/lib.py", line 10, in timed result = method(*args, **kwargs) File "<some-path>/src/azure/application/caiso/main.py", line 425, in run self._get_collaterial_per_month(self._ns) File "<some-path>/src/azure/application/utils/lib.py", line 10, in timed result = method(*args, **kwargs) File "<some-path>/src/azure/application/caiso/main.py", line 196, in _get_collaterial_per_month credit_margin = ns.dfs File "<some-path>/.pyenv/versions/3.7.4/lib/python3.7/multiprocessing/managers.py", line 1122, in __getattr__ return callmethod('__getattribute__', (key,)) File "<some-path>/.pyenv/versions/3.7.4/lib/python3.7/multiprocessing/managers.py", line 834, in _callmethod raise convert_to_error(kind, result) AttributeError: 'Namespace' object has no attribute 'dfs' > <some-path>/.pyenv/versions/3.7.4/lib/python3.7/multiprocessing/managers.py(834)_callmethod() -> raise convert_to_error(kind, result) (Pdb) ```
2019/11/08
[ "https://Stackoverflow.com/questions/58774718", "https://Stackoverflow.com", "https://Stackoverflow.com/users/982446/" ]
As it seems from the *Traceback* log, it looks like you are trying to add tasks to an *event loop* that is not running. > > /.pyenv/versions/3.7.4/lib/python3.7/multiprocessing/process.py:313: > RuntimeWarning: coroutine > '.\_get\_filter\_collateral..read\_task' **was never > awaited** > > > The *loop* was just created and it's not running yet, therefore `asyncio` is unable to attach tasks to it. The following example will reproduce the same results, adding tasks and then trying to `await` for all of them to finish: ``` import asyncio async def func(num): print('My name is func {0}...'.format(num)) loop = asyncio.get_event_loop() tasks = list() for i in range(5): tasks.append(asyncio.create_task(func(i))) loop.run_until_complete(asyncio.wait(tasks)) loop.close() ``` This results in: ``` Traceback (most recent call last): File "C:/tmp/stack_overflow.py", line 42, in <module> tasks.append(asyncio.create_task(func(i))) File "C:\Users\Amiram\AppData\Local\Programs\Python\Python37-32\lib\asyncio\tasks.py", line 324, in create_task loop = events.get_running_loop() RuntimeError: no running event loop sys:1: RuntimeWarning: coroutine 'func' was never awaited ``` Nonetheless the solution is pretty simple: you just need to add the tasks to the created loop yourself, instead of asking `asyncio` to do it. The only change needed is in the following line: ``` tasks.append(asyncio.create_task(func(i))) ``` Change the creation of the task from `asyncio` to the newly created *loop*; you are able to do that because this is your loop, unlike `asyncio`, which searches for a running one. So the new line should look like this: ``` tasks.append(loop.create_task(func(i))) ``` Another solution could be running an *async* function and creating the tasks there (because that loop is already running, `asyncio` is able to attach tasks to it): ``` import asyncio async def func(num): print('Starting func {0}...'.format(num)) await asyncio.sleep(0.1) print('Ending func {0}...'.format(num)) loop = asyncio.get_event_loop() async def create_tasks_func(): tasks = list() for i in range(5): tasks.append(asyncio.create_task(func(i))) await asyncio.wait(tasks) loop.run_until_complete(create_tasks_func()) loop.close() ``` This simple change results in: ``` Starting func 0... Starting func 1... Starting func 2... Starting func 3... Starting func 4... Ending func 0... Ending func 2... Ending func 4... Ending func 1... Ending func 3... ```
Use `asyncio.ensure_future` instead. See <https://docs.python.org/3/library/asyncio-future.html#asyncio.ensure_future>
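A hedged sketch of what that might look like inside the question's `_get_filter_collateral` (untested; `read_task` and `all_files` stand for the question's own objects). Unlike `asyncio.create_task()`, `ensure_future()` does not require an already-running loop, provided the new loop has been installed with `set_event_loop`:

```python
import asyncio

async def read_task(file_):
    ...  # same body as in the question

def gather_files(all_files):
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    # ensure_future schedules the coroutines on the installed (not yet running) loop
    tasks = [asyncio.ensure_future(read_task(f)) for f in all_files]
    loop.run_until_complete(asyncio.wait(tasks))
    loop.close()
```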
14,561
6,286,579
Is there some module or command that'll let me send the current region to shell? I want to have something like Python-mode's `python-send-region` which sends the selected region to the currently running Python shell.
2011/06/08
[ "https://Stackoverflow.com/questions/6286579", "https://Stackoverflow.com", "https://Stackoverflow.com/users/419116/" ]
Ok, wrote an easy bit. Will probably spend some time to write a complete minor mode. For time being the following function will send current line (or region if the mark is active). Does quite a good job for me: ``` (defun sh-send-line-or-region (&optional step) (interactive ()) (let ((proc (get-process "shell")) pbuf min max command) (unless proc (let ((currbuff (current-buffer))) (shell) (switch-to-buffer currbuff) (setq proc (get-process "shell")) )) (setq pbuff (process-buffer proc)) (if (use-region-p) (setq min (region-beginning) max (region-end)) (setq min (point-at-bol) max (point-at-eol))) (setq command (concat (buffer-substring min max) "\n")) (with-current-buffer pbuff (goto-char (process-mark proc)) (insert command) (move-marker (process-mark proc) (point)) ) ;;pop-to-buffer does not work with save-current-buffer -- bug? (process-send-string proc command) (display-buffer (process-buffer proc) t) (when step (goto-char max) (next-line)) )) (defun sh-send-line-or-region-and-step () (interactive) (sh-send-line-or-region t)) (defun sh-switch-to-process-buffer () (interactive) (pop-to-buffer (process-buffer (get-process "shell")) t)) (define-key sh-mode-map [(control ?j)] 'sh-send-line-or-region-and-step) (define-key sh-mode-map [(control ?c) (control ?z)] 'sh-switch-to-process-buffer) ``` Enjoy.
`M-x` `append-to-buffer` `RET`
14,562
41,565,091
I'm calling xgboost via its scikit-learn-style Python interface: ``` model = xgboost.XGBRegressor() %time model.fit(trainX, trainY) testY = model.predict(testX) ``` Some sklearn models tell you which importance they assign to features via the attribute `feature_importances`. This doesn't seem to exist for the `XGBRegressor`: ``` model.feature_importances_ AttributeError Traceback (most recent call last) <ipython-input-36-fbaa36f9f167> in <module>() ----> 1 model.feature_importances_ AttributeError: 'XGBRegressor' object has no attribute 'feature_importances_' ``` The weird thing is: For a collaborator of mine the attribute `feature_importances_` is there! What could be the issue? These are the versions I have: ``` In [2]: xgboost.__version__ Out[2]: '0.6' In [4]: sklearn.__version__ Out[4]: '0.18.1' ``` ... and the xgboost C++ library from github, commit `ef8d92fc52c674c44b824949388e72175f72e4d1`.
2017/01/10
[ "https://Stackoverflow.com/questions/41565091", "https://Stackoverflow.com", "https://Stackoverflow.com/users/626537/" ]
How did you install xgboost? Did you build the package after cloning it from github, as described in the doc? <http://xgboost.readthedocs.io/en/latest/build.html> As in this answer: [Feature Importance with XGBClassifier](https://stackoverflow.com/questions/38212649/feature-importance-with-xgbclassifier) There always seems to be a problem with the pip-installation and xgboost. Building and installing it from your build seems to help.
This may be useful for you: `xgb.plot_importance(bst)` And this is the link: [plot](http://xgboost.readthedocs.io/en/latest/python/python_intro.html#plotting)
14,572
73,966,292
By "Google Batch" I'm referring to the new service Google launched about a month or so ago. <https://cloud.google.com/batch> I have a Python script which takes a few minutes to execute at the moment. However with the data it will soon be processing in the next few months this execution time will go from minutes to **hours**. This is why I am not using Cloud Function or Cloud Run to run this script, both of these have a max 60 minute execution time. Google Batch came about recently and I wanted to explore this as a possible method to achieve what I'm looking for **without** just using Compute Engine. However documentation is sparse across the internet and I can't find a method to "trigger" an already created Batch job by using Cloud Scheduler. I've already successfully manually created a batch job which runs my docker image. Now I need something to trigger this batch job 1x a day, **thats it**. It would be wonderful if Cloud Scheduler could serve this purpose. I've seen 1 article describing using GCP Workflow to create a a new Batch job on a cron determined by Cloud Scheduler. Issue with this is its creating a new batch job every time, not simply **re-running** the already existing one. To be honest I can't even re-run an already executed batch job on the GCP website itself so I don't know if its even possible. <https://www.intertec.io/resource/python-script-on-gcp-batch> Lastly, I've even explored the official Google Batch Python library and could not find anywhere in there some built in function which allows me to "call" a previously created batch job and just re-run it. <https://github.com/googleapis/python-batch>
2022/10/05
[ "https://Stackoverflow.com/questions/73966292", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13379101/" ]
EDIT -- added the "55 difference columns" part at the bottom. --- Adjusting data to be column pairs: ``` df <- data.frame(matrix(sample(1:10, 200, replace = TRUE), ncol = 20, nrow = 10)) names(df) <- paste0("var", rep(1:10, each = 2), "_", rep(c("apple", "banana"))) names(df) [1] "var1_apple" "var1_banana" "var2_apple" "var2_banana" "var3_apple" "var3_banana" [7] "var4_apple" "var4_banana" "var5_apple" "var5_banana" "var6_apple" "var6_banana" [13] "var7_apple" "var7_banana" "var8_apple" "var8_banana" "var9_apple" "var9_banana" [19] "var10_apple" "var10_banana" ``` --- ``` library(tidyverse) df %>% mutate(row = row_number()) %>% pivot_longer(-row, names_to = c("var", ".value"), names_sep = "_") # A tibble: 100 × 4 row var apple banana <int> <chr> <int> <int> 1 1 var1 8 7 2 1 var2 4 9 3 1 var3 7 3 4 1 var4 6 10 5 1 var5 10 10 6 1 var6 1 1 7 1 var7 2 10 8 1 var8 7 9 9 1 var9 3 8 10 1 var10 2 6 # … with 90 more rows # ℹ Use `print(n = ...)` to see more rows ``` --- Here a variation to add all the difference columns interspersed: ``` df %>% mutate(row = row_number()) %>% pivot_longer(-row, names_to = c("var", ".value"), names_sep = "_") %>% mutate(difference = banana - apple) %>% pivot_wider(names_from = var, values_from = apple:difference, names_glue = "{var}_{.value}", names_vary = "slowest") ``` Result (truncated) ``` # A tibble: 10 × 10 row var1_apple var1_banana var1_difference var2_apple var2_banana var2_difference var3_apple var3_banana var3_difference <int> <int> <int> <int> <int> <int> <int> <int> <int> <int> 1 1 7 10 3 5 3 -2 1 9 8 2 2 9 2 -7 3 6 3 8 1 -7 3 3 2 10 8 3 3 0 7 8 1 4 4 3 1 -2 8 3 -5 9 9 0 5 5 2 7 5 7 10 3 6 9 3 6 6 5 4 -1 2 1 -1 5 4 -1 7 7 4 5 1 10 3 -7 9 4 -5 8 8 10 7 -3 3 2 -1 5 9 4 9 9 5 5 0 7 3 -4 10 7 -3 10 10 10 6 -4 1 4 3 10 10 0 ```
I think @Tom's comment is spot-on. Restructuring the data probably makes sense if you are working with paired data. E.g.: ``` od <- names(df)[c(TRUE,FALSE)] ev <- names(df)[c(FALSE,TRUE)] data.frame( odd = unlist(df[od]), oddname = rep(od,each=nrow(df)), even = unlist(df[ev]), evenname = rep(ev,each=nrow(df)) ) ## odd oddname even evenname ##X11 7 X1 10 X2 ##X12 6 X1 1 X2 ##X13 2 X1 6 X2 ##X14 5 X1 2 X2 ##X15 3 X1 1 X2 ## ... ``` It is then trival to take one column from another in this structure. If you must have the matrix-like output, then that is also achievable: ``` od <- names(df)[c(TRUE,FALSE)] ev <- names(df)[c(FALSE,TRUE)] setNames(df[od] - df[ev], paste(od, ev, sep="_")) ## X1_X2 X3_X4 X5_X6 X7_X8 X9_X10 X11_X12 X13_X14 X15_X16 X17_X18 X19_X20 ##1 -3 2 4 4 -2 4 3 1 -3 9 ##2 5 5 4 3 -1 3 -1 -3 5 -2 ##3 -4 3 7 4 -5 1 1 5 -4 4 ##4 3 0 6 3 4 -5 6 6 -7 4 ##5 2 2 1 4 -6 -3 6 2 3 1 ##6 -6 -2 4 -2 0 1 3 0 0 -7 ##7 0 -6 3 7 -1 0 0 -5 3 1 ##8 -1 3 3 1 2 -2 -5 3 0 0 ##9 -4 1 -5 -2 -4 7 6 -2 4 -4 ##10 2 -7 4 -1 0 -6 -4 -4 0 0 ```
14,575
73,937,555
I have a folder named `deployment`; under deployment there are two sibling folders: `folder1` and `folder2`. I need to move folder2 with its sub-contents into folder1 with Python scripts, so from:

```
.../deployment/folder1/...
              /folder2/...
```

to

```
.../deployment/folder1/...
              /folder1/folder2/...
```

I know how to copy folders and jobs in Jenkins MANUALLY, but I need to copy tens of folders to a new folder programmatically, e.g. with Python scripting. I tried with the code:

```
import jenkins

server = jenkins.Jenkins('https://comp.com/job/deployment', username='xxxx', password='******')

server.copy_job('folder2', 'folder1/folder2')
```

The code returns:

**JenkinsException: copy[folder2 to folder1/folder2] failed, source and destination folder must be the same**

How can I get this done?
2022/10/03
[ "https://Stackoverflow.com/questions/73937555", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4483819/" ]
Settings are empty, maybe they are not exported correctly. Check your settings file.
I think you're not using MongoDB's find API call properly: find usually takes a filter object as its first argument and an object of properties (a projection) as the second argument. Check the syntax required for the find() function and you'll probably get through with it. Hope it helps. Happy coding!!
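For illustration, here is a minimal sketch of that two-argument form using pymongo (the database, collection, and field names below are made up):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
collection = client["mydb"]["places"]  # hypothetical database/collection

# First argument is the filter, second is the projection (fields to return).
cursor = collection.find({"rating": {"$gte": 4}},
                         {"placename": 1, "rating": 1, "_id": 0})
for doc in cursor:
    print(doc)
```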
14,577
41,836,353
I have a project in which I run multiple data sets through a specific function that `"cleans"` them. The cleaning function looks like this:

Misc.py

```
import sys

def clean(my_data):
    sys.stdout.write("Cleaning genes...\n")
    synonyms = FileIO("raw_data/input_data", 3, header=False).openSynonyms()

    clean_data = {}

    for g in my_data:
        if g in synonyms:
            # Found a data point which appears in the synonym list.

            #print synonyms[g]
            for synonym in synonyms[g]:
                if synonym in my_data:
                    del my_data[synonym]
                    clean_data[g] = synonym
                    sys.stdout.write("\t%s is also known as %s\n" % (g, clean_data[g]))

    return my_data
```

`FileIO` is a custom class I made to open files. My question is: this function will be called many times throughout the program's life cycle. What I want to achieve is not having to read input_data every time, since it's going to be the same every time. I know that I can just return it and pass it as an argument in this way:

```
def clean(my_data, synonyms=None):
    if synonyms is None:
        ...
    else:
        ...
```

But is there another, better-looking way of doing this? My file structure is the following:

```
lib
    Misc.py
    FileIO.py
    __init__.py
    ...
raw_data
runme.py
```

From `runme.py`, I do this `from lib import *` and call all the functions I made. Is there a Pythonic way to go about this? Like a 'memory' for the function.

Edit: this line: `synonyms = FileIO("raw_data/input_data", 3, header=False).openSynonyms()` returns a `collections.OrderedDict()` from `input_data`, using the 3rd column as the key of the dictionary. The dictionary for the following dataset:

```
column1 column2 key data
... ... A B|E|Z
... ... B F|W
... ... C G|P
```

Will look like this:

```
OrderedDict([('A',['B','E','Z']), ('B',['F','W']), ('C',['G','P'])])
```

This tells my script that `A` is also known as `B,E,Z`, `B` as `F,W`, etc... So these are the synonyms. Since the synonyms list will never change throughout the life of the code, I want to just read it once and re-use it.
2017/01/24
[ "https://Stackoverflow.com/questions/41836353", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3008400/" ]
Use a class with a \_\_call\_\_ operator. You can call objects of this class and store data between calls in the object. Some data probably can best be saved by the constructor. What you've made this way is known as a 'functor' or 'callable object'. Example: ``` class Incrementer: def __init__ (self, increment): self.increment = increment def __call__ (self, number): return self.increment + number incrementerBy1 = Incrementer (1) incrementerBy2 = Incrementer (2) print (incrementerBy1 (3)) print (incrementerBy2 (3)) ``` Output: ``` 4 5 ``` [EDIT] Note that you can combine the answer of @Tagc with my answer to create exactly what you're looking for: a 'function' with built-in memory. Name your class `Clean` rather than `DataCleaner` and the name the instance `clean`. Name the method `__call__` rather than `clean`.
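For concreteness, here is a rough sketch of that combination applied to the question's code — it assumes the asker's `FileIO` helper and input path are importable, so treat it as an outline rather than something that runs on its own:

```python
class Clean:
    def __init__(self, path="raw_data/input_data"):
        # The synonym file is read exactly once, when the object is built.
        self.synonyms = FileIO(path, 3, header=False).openSynonyms()

    def __call__(self, my_data):
        # Every call reuses the cached synonyms instead of re-reading the file.
        for g in list(my_data):
            if g in self.synonyms:
                for synonym in self.synonyms[g]:
                    if synonym in my_data:
                        del my_data[synonym]
        return my_data

clean = Clean()      # the file is read here, once
clean(first_batch)   # later calls just reuse self.synonyms
clean(second_batch)  # first_batch / second_batch are placeholder data sets
```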
I think the cleanest way to do this would be to decorate your "`clean`" (pun intended) function with another function that provides the `synonyms` local for the function. This is IMO cleaner and more concise than creating another custom class, yet still allows you to easily change the "input_data" file if you need to (factory function):

```
def defineSynonyms(datafile):
    def wrap(func):
        def wrapped(*args, **kwargs):
            kwargs['synonyms'] = FileIO(datafile, 3, header=False).openSynonyms()
            return func(*args, **kwargs)
        return wrapped
    return wrap

@defineSynonyms("raw_data/input_data")
def clean(my_data, synonyms={}):
    # do stuff with synonyms and my_data...
    pass
```
14,578
13,152,085
Hi, I'm trying to use regex in Python 2.7 to search for text in between two quotation marks, such as "hello there". Right now I'm using:

```
matchquotes = re.findall(r'"(?:\\"|.)*?"', text)
```

It works great but only finds quotes using this character: **"** However, I'm finding that some of the text I'm parsing uses these DIFFERENT characters: **“** ... **”** How can I modify my regex such that it will find either **"**..**"** or **“**..**”** or **"**..**”**?
2012/10/31
[ "https://Stackoverflow.com/questions/13152085", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1495000/" ]
Using character classes might work, or might break everything for you: ``` matchquotes = re.findall(r'[“”"](?:\\[“”"]|.)*?[“”"]', text) ``` If you don't care a lot about matching pairs always lining up, this will probably do what you want. The case where they use the third type inside the other two is always going to screw you unless you build a few patterns and find their intersection.
Depending on what other processing you are doing and where the text is coming from, it would be better to convert all quotation marks to " rather than handle each case.
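For example, a small sketch of that normalization step before running the original pattern:

```python
import re

def normalize_quotes(text):
    # Map the curly double quotes onto plain ASCII quotes first.
    return text.replace(u'\u201c', '"').replace(u'\u201d', '"')

text = u'She said \u201chello there\u201d and then "goodbye".'
print(re.findall(r'"(?:\\"|.)*?"', normalize_quotes(text)))
# ['"hello there"', '"goodbye"']
```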
14,580
71,232,402
First of all, thank you for the time you took to answer me. To give **a little example**, I have a huge dataset (n instances, 3 features) like this: `data = np.array([[7.0, 2.5, 3.1], [4.3, 8.8, 6.2], [1.1, 5.5, 9.9]])` It's labeled in another array: `label = np.array([0, 1, 0])`

**Questions**:

1. I know that I can solve my problem by looping in Python (for-loop), but I'm after a numpy way (without a for-loop) to reduce time consumption (do it as fast as possible).
2. If there isn't a way without a for-loop, which would be the best one (M1, M2, any other wizardry method)?

**My solution**:

```
clusters = []
for lab in range(label.max()+1):
    # M1: creating new object
    c = data[label == lab]
    clusters.append([c.min(axis=0), c.max(axis=0)])

    # M2: comparing multiple times (called views?)
    # clusters.append([data[label == lab].min(axis=0), data[label == lab].max(axis=0)])

print(clusters)
# [[array([1.1, 2.5, 3.1]), array([7. , 5.5, 9.9])], [array([4.3, 8.8, 6.2]), array([4.3, 8.8, 6.2])]]
```
2022/02/23
[ "https://Stackoverflow.com/questions/71232402", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18285419/" ]
You could start from and easier variant of this problem: ***Given `arr` and its label, could you find a minimum and maximum values of `arr` items in each group of labels?*** For instance: ``` arr = np.array([55, 7, 49, 65, 46, 75, 4, 54, 43, 54]) label = np.array([1, 3, 2, 0, 0, 2, 1, 1, 1, 2]) ``` Then you would expect that minimum and maximum values of `arr` in each label group were: ``` min_values = np.array([46, 4, 49, 7]) max_values = np.array([65, 55, 75, 7]) ``` Here is a numpy approach to this kind of problem: ``` def groupby_minmax(arr, label, return_groups=False): arg_idx = np.argsort(label) arr_sort = arr[arg_idx] label_sort = label[arg_idx] div_points = np.r_[0, np.flatnonzero(np.diff(label_sort)) + 1] min_values = np.minimum.reduceat(arr_sort, div_points) max_values = np.maximum.reduceat(arr_sort, div_points) if return_groups: return min_values, max_values, label_sort[div_points] else: return min_values, max_values ``` Now there's not much to change in order to adapt it to your use case: ``` def groupby_minmax_OP(arr, label, return_groups=False): arg_idx = np.argsort(label) arr_sort = arr[arg_idx] label_sort = label[arg_idx] div_points = np.r_[0, np.flatnonzero(np.diff(label_sort)) + 1] min_values = np.minimum.reduceat(arr_sort, div_points, axis=0) max_values = np.maximum.reduceat(arr_sort, div_points, axis=0) if return_groups: return min_values, max_values, label_sort[div_points] else: return np.array([min_values, max_values]).swapaxes(0, 1) groupby_minmax(data, label) ``` Output: ``` array([[[1.1, 2.5, 3.1], [7. , 5.5, 9.9]], [[4.3, 8.8, 6.2], [4.3, 8.8, 6.2]]]) ```
It has already been answered; you can go to this link for your answer: [python numpy access list of arrays without for loop](https://stackoverflow.com/questions/36530446/python-numpy-access-list-of-arrays-without-for-loop)
14,583
50,221,468
This question comes from the book "Automate the Boring Stuff with Python".

```
atRegex1 = re.compile(r'\w{1,2}at')
atRegex2 = re.compile(r'\w{1,2}?at')

atRegex1.findall('The cat in the hat sat on the flat mat.')
atRegex2.findall('The cat in the hat sat on the flat mat.')
```

I thought the question mark ? should conduct a non-greedy match, so \w{1,2}? should only return 1 character. But for both of these patterns, I get the same output: ['cat', 'hat', 'sat', 'flat', 'mat'] In the book,

```
nongreedyHaRegex = re.compile(r'(Ha){3,5}?')
mo2 = nongreedyHaRegex.search('HaHaHaHaHa')
mo2.group()
'HaHaHa'
```

Can anyone help me understand why there is a difference? Thanks!
2018/05/07
[ "https://Stackoverflow.com/questions/50221468", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8370526/" ]
There's nothing wrong (that is, ordinary assignment in P6 is designed to do as it has done) but at a guess you were hoping that making the structure on the two sides the same would result in `$a` getting `1`, `$b` getting `2` and `$c` getting `3`. For that, you want "binding assignment" (aka just "binding"), not ordinary assignment: ``` my ($a, $b, $c); :(($a, $b), $c) := ((1, 2), 3); ``` Note the colon before the list on the left, making it a signature literal, and the colon before the `=`, making it a binding operation.
If you want to have the result be `1, 2, 3`, you must `Slip` the list: ``` my ($a, $b, $c) = |(1, 2), 3; ``` This is a consequence of the single argument rule: <https://docs.raku.org/type/Signature#Single_Argument_Rule_Slurpy> This is also why this just works: ``` my ($a, $b, $c) = (1, 2, 3); ``` Even though `(1,2,3)` is a `List` with 3 elements, it will be auto-slipped because of the same single argument rule. You can of course also just remove the (superstitious) parentheses: ``` my ($a, $b, $c) = 1, 2, 3; ```
14,584
66,779,282
I would like to print the rating result for different user in separate array. It can be solved by creating many arrays, but I didn't want to do so, because I have a lot of user in my Json file, so how can I do this programmatically? python code ``` with open('/content/user_data.json') as f: rating = [] js = json.load(f) for a in js['Rating']: for rate in a['rating']: rating.append(rate['rating']) print(rating) ``` output ``` ['4', '2', '5', '1', '3', '5', '2', '5'] ``` my expected result ``` ['4', '2', '5'] ['1', '3'] ['5', '2', '5'] ``` json file ``` { "Rating" : [ { "user" : "john", "rating":[ { "placename" : "Kingstreet Café", "rating" : "4" }, { "placename" : "Royce Hotel", "rating" : "2" }, { "placename" : "The Cabinet", "rating" : "5" } ] }, { "user" : "emily", "rating":[ { "placename" : "abc", "rating" : "1" }, { "placename" : "def", "rating" : "3" } ] }, { "user" : "jack", "rating":[ { "placename" : "a", "rating" : "5" }, { "placename" : "b", "rating" : "2" }, { "placename" : "C", "rating" : "5" } ] } ] } ```
2021/03/24
[ "https://Stackoverflow.com/questions/66779282", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10275606/" ]
Don't only append all ratings to one list, but create a list for every user: ```py with open('a.json') as f: ratings = [] #to store ratings of all user js = json.load(f) for a in js['Rating']: rating = [] #to store ratings of single user for rate in a['rating']: rating.append(rate['rating']) ratings.append(rating) print(ratings) ```
A simple one-liner ``` all_ratings = [list(map(lambda x: x['rating'], r['rating'])) for r in js['Rating']] ``` Explanation ``` all_ratings = [ list( # Converts map to list map(lambda x: x['rating'], r['rating']) # Get attribute from list of dict ) for r in js['Rating'] # Iterate ratings ] ```
14,587
54,623,084
I'm trying to create a function in python that will print out the anagrams of words in a text file using dictionaries. I've looked at what feels like hundreds of similar questions, so I apologise if this is a repetition, but I can't seem to find a solution that fits my issue. I understand what I need to do (at least, I think so), but I'm stuck on the final part. This is what I have so far: ``` with open('words.txt', 'r') as fp: line = fp.readlines() def make_anagram_dict(line): dict = {} for word in line: key = ''.join(sorted(word.lower())) if key in dict.keys(): dict[key].append(word.lower()) else: dict[key] = [] dict[key].append(word.lower()) if line == key: print(line) make_anagram_dict(line) ``` I think I need something which compares the key of each value to the keys of other values, and then prints if they match, but I can't get something to work. At the moment, the best I can do is print out all the keys and values in the file, but ideally, I would be able to print all the anagrams from the file. Output: I don't have a concrete specified output, but something along the lines of: [cat: act, tac] for each anagram. Again, apologies if this is a repetition, but any help would be greatly appreciated.
2019/02/11
[ "https://Stackoverflow.com/questions/54623084", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11041870/" ]
I'm not sure about the output format. In my implementation, all anagrams are printed out in the end. ``` with open('words.txt', 'r') as fp: line = fp.readlines() def make_anagram_dict(line): d = {} # avoid using 'dict' as variable name for word in line: word = word.lower() # call lower() only once key = ''.join(sorted(word)) if key in d: # no need to call keys() d[key].append(word) else: d[key] = [word] # you can initialize list with the initial value return d # just return the mapping to process it later if __name__ == '__main__': d = make_anagram_dict(line) for words in d.values(): if len(words) > 1: # several anagrams in this group print('Anagrams: {}'.format(', '.join(words))) ``` --- Also, consider using `defaultdict` - it's a dictionary, that creates values of a specified type for fresh keys. ``` from collections import defaultdict with open('words.txt', 'r') as fp: line = fp.readlines() def make_anagram_dict(line): d = defaultdict(list) # argument is the default constructor for value for word in line: word = word.lower() # call lower() only once key = ''.join(sorted(word)) d[key].append(word) # now d[key] is always list return d # just return the mapping to process it later if __name__ == '__main__': d = make_anagram_dict(line) for words in d.values(): if len(words) > 1: # several anagrams in this group print('Anagrams: {}'.format(', '.join(words))) ```
Your code is pretty much there, just needs some tweaks:

```
import re

def make_anagram_dict(words):
    d = {}
    for word in words:
        word = word.lower()  # call lower() only once
        key = ''.join(sorted(word))  # make the key
        if key in d:  # check if it's in dictionary already
            if word not in d[key]:  # avoid duplicates
                d[key].append(word)
        else:
            d[key] = [word]  # initialize list with the initial value
    return d  # return the entire dictionary

if __name__ == '__main__':
    filename = 'words.txt'

    with open(filename) as file:
        # Use regex to extract words. You can adjust to include/exclude
        # characters, numbers, punctuation...
        # This returns a list of words
        words = re.findall(r"([a-zA-Z\-]+)", file.read())

    # Now process them
    d = make_anagram_dict(words)

    # Now print them
    for group in d.values():
        if len(group) > 1:  # we found anagrams
            print('Anagram group: {}'.format(', '.join(group)))
```
14,590
57,876,971
I have a project for one of my college classes that requires me to pull all URLs from a page on the U.S. census bureau website and store them in a CSV file. For the most part I've figured out how to do that but for some reason when the data gets appended to the CSV file, all the entries are being inserted horizontally. I would expect the data to be arranged vertically, meaning row 1 has the first item in the list, row 2 has the second item and so on. I have tried several approaches but the data always ends up as a horizontal representation. I am new to python and obviously don't have a firm enough grasp on the language to figure this out. Any help would be greatly fully appreciated. I am parsing the website using Beautifulsoup4 and the request library. Pulling all the 'a' tags from the website was easy enough and getting the URLs from those 'a' tags into a list was pretty clear as well. But when I append the list to my CSV file with a writerow function, all the data ends up in one row as opposed to one separate row for each URL. ``` import requests import csv requests.get from bs4 import BeautifulSoup from pprint import pprint page = requests.get('https://www.census.gov/programs-surveys/popest.html') soup = BeautifulSoup(page.text, 'html.parser') ## Create Link to append web data to links = [] # Pull text from all instances of <a> tag within BodyText div AllLinks = soup.find_all('a') for link in AllLinks: links.append(link.get('href')) with open("htmlTable.csv", "w") as f: writer = csv.writer(f) writer.writerow(links) pprint(links) ```
2019/09/10
[ "https://Stackoverflow.com/questions/57876971", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10912876/" ]
You can try: ``` colSums(df1[,2:4]>0) ``` Output: ``` var1 var2 var3 4 4 5 ```
One brutal solution is with `apply` function ``` apply(df1[ ,2:ncol(df1)], 2, function(x){sum(x != 0)}) ```
14,593
48,402,276
I am taking a Udemy course. The problem I am working on is to take two strings and determine if they are 'one edit away' from each other. That means you can make a single change -- change one letter, add one letter, delete one letter -- from one string and have it become identical to the other. Examples: ``` s1a = "abcde" s1b = "abfde" s2a = "abcde" s2b = "abde" s3a = "xyz" s3b = "xyaz" ``` * `s1a` changes the `'c'` to an `'f'`. * `s2a` deletes `'c'`. * `s3a` adds `'a'`. The instructors solution (and test suite): ``` def is_one_away(s1, s2): if len(s1) - len(s2) >= 2 or len(s2) - len(s1) >= 2: return False elif len(s1) == len(s2): return is_one_away_same_length(s1, s2) elif len(s1) > len(s2): return is_one_away_diff_lengths(s1, s2) else: return is_one_away_diff_lengths(s2, s1) def is_one_away_same_length(s1, s2): count_diff = 0 for i in range(len(s1)): if not s1[i] == s2[i]: count_diff += 1 if count_diff > 1: return False return True # Assumption: len(s1) == len(s2) + 1 def is_one_away_diff_lengths(s1, s2): i = 0 count_diff = 0 while i < len(s2): if s1[i + count_diff] == s2[i]: i += 1 else: count_diff += 1 if count_diff > 1: return False return True # NOTE: The following input values will be used for testing your solution. print(is_one_away("abcde", "abcd")) # should return True print(is_one_away("abde", "abcde")) # should return True print(is_one_away("a", "a")) # should return True print(is_one_away("abcdef", "abqdef")) # should return True print(is_one_away("abcdef", "abccef")) # should return True print(is_one_away("abcdef", "abcde")) # should return True print(is_one_away("aaa", "abc")) # should return False print(is_one_away("abcde", "abc")) # should return False print(is_one_away("abc", "abcde")) # should return False print(is_one_away("abc", "bcc")) # should return False ``` When I saw the problem I decided to tackle it using `set()`. I found this very informative: [Opposite of set.intersection in python?](https://stackoverflow.com/questions/29947844/opposite-of-set-intersection-in-python) This is my attempted solution: ``` def is_one_away(s1, s2): if len(set(s1).symmetric_difference(s2)) <= 1: return True if len(set(s1).symmetric_difference(s2)) == 2: if len(set(s1).difference(s2)) == len(set(s2).difference(s1)): return True return False return False ``` When I run my solution online (you can test within the course itself) I am failing on the last test suite item: ``` False != True : Input 1: abc Input 2: bcc Expected Result: False Actual Result: True ``` I have tried and tried but I can't get the last test item to work (at least not without breaking a bunch of other stuff). There is no guarantee that I can solve the full test suite with a `set()` based solution, but since I am one item away I was really wanting to see if I could get it done.
2018/01/23
[ "https://Stackoverflow.com/questions/48402276", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7535419/" ]
This fails to pass this test, because you only look at *unique characters*: ``` >>> s1 = 'abc' >>> s2 = 'bcc' >>> set(s1).symmetric_difference(s2) {'a'} ``` That's a set of length 1, but there are **two** characters changed. By converting to a set, you only see that there is at least one `'c'` character in the `s2` input, not that there are now two. Another way your approach would fail is if the *order* of the characters changed. `'abc'` is two changes away from `'cba'`, but your approach would fail to detect those changes too. You can't solve this problem using sets, because sets remove two important pieces of information: how many times a character appears, and in what order the characters are listed.
Here's a solution using differences found by list comprehension. ``` def one_away(s1, s2): diff1 = [el for el in s1 if el not in s2] diff2 = [el for el in s2 if el not in s1] if len(diff1) < 2 and len(diff2) < 2: return True return False ``` Unlike a set-based solution, this one doesn't lose vital information about non-unique characters.
14,598
54,938,607
I have already read answer of this question [Image.open() cannot identify image file - Python?](https://stackoverflow.com/q/19230991/9235408), that question was solved by using `from PIL import Image`, but my situation is different. I am using `image_slicer`, and there I am getting these errors: ``` Traceback (most recent call last): File "image_slice.py", line 17, in <module> j=image_slicer.slice('file_name' , n_k) File "/home/user_name/.local/lib/python3.5/site- packages/image_slicer/main.py", line 114, in slice im = Image.open(filename) File "/home/user_name/.local/lib/python3.5/site-packages/PIL/Image.py", line 2687, in open % (filename if filename else fp)) OSError: cannot identify image file 'file_name' ``` The full code is: ``` import os from PIL import Image import image_slicer import numpy as np import nibabel as nib img = nib.load('/home/user_name/volume-20.nii') img.shape epi_img_data = img.get_data() #epi_img_data.shape n_i, n_j, n_k = epi_img_data.shape center_i = (n_i - 1) // 2 center_j = (n_j - 1) // 2 center_k = (n_k - 1) // 2 centers = [center_i, center_j, center_k] print("Co-ordinates in the voxel array: ", centers) #for i in range(n_k): j=image_slicer.slice('/home/user_name/volume-20.nii' , n_k) ``` However `nib.load()`, works fine, but `image_slicer` is not working. All the nii images are **3D images**.
2019/03/01
[ "https://Stackoverflow.com/questions/54938607", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9235408/" ]
[Image slicer](https://image-slicer.readthedocs.io/en/latest/) is not intended for reading `nii` format. Here is the [list](https://pillow.readthedocs.io/en/5.1.x/handbook/image-file-formats.html#image-file-formats) of supported formats.
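If tiling the volume is still the goal, one workaround (sketched below, untested; the output path is made up) is to export individual 2-D slices of the nibabel array as ordinary images first, since image_slicer and PIL can handle those:

```python
import numpy as np
import nibabel as nib
from PIL import Image

img = nib.load('/home/user_name/volume-20.nii')
data = img.get_data()  # img.get_fdata() on newer nibabel versions

k = data.shape[2] // 2                 # one axial slice, as an example
slice_2d = data[:, :, k].astype(np.float64)
# Rescale to 0-255 so the slice can be saved as an 8-bit grayscale PNG.
rng = np.ptp(slice_2d) or 1.0
scaled = (255 * (slice_2d - slice_2d.min()) / rng).astype(np.uint8)
Image.fromarray(scaled).save('/home/user_name/slice_%03d.png' % k)

# image_slicer.slice() can then be pointed at the PNG files instead of the .nii.
```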
This error also occurs whenever the image file itself is corrupted. I was once accidentally part-way through deleting the subject image before canceling. TL;DR - open the image file to see if it's OK.
14,601
3,561,221
This is similar to the question in [merge sort in python](https://stackoverflow.com/questions/3559807/merge-sort-in-python). I'm restating it because I don't think I explained the problem very well over there. Basically I have a series of about 1000 files, all containing domain names. Altogether the data is > 1 gig, so I'm trying to avoid loading all the data into RAM. Each individual file has been sorted using .sort(get_tld), which has sorted the data according to its TLD (not according to its domain name; all the .com's together, .orgs together, etc.). A typical file might look like

```
something.ca
somethingelse.ca
somethingnew.com
another.net
whatever.org
etc.org
```

but obviously longer. I now want to merge all the files into one, maintaining the sort so that in the end the one large file will still have all the .coms together, .orgs together, etc. What I want to do basically is

```
open all the files
loop:
    read 1 line from each open file
    put them all in a list and sort with .sort(get_tld)
    write each item from the list to a new file
```

The problem I'm having is that I can't figure out how to loop over the files. I can't use **with open() as** because I don't have 1 file open to loop over, I have many. Also, they're all of variable length, so I have to make sure to get all the way through the longest one. Any advice is much appreciated.
2010/08/24
[ "https://Stackoverflow.com/questions/3561221", "https://Stackoverflow.com", "https://Stackoverflow.com/users/410296/" ]
Whether you're able to keep 1000 files at once is a separate issue and depends on your OS and its configuration; if not, you'll have to proceed in two steps -- merge groups of N files into temporary ones, then merge the temporary ones into the final-result file (two steps should suffice, as they let you merge a total of N squared files; as long as N is at least 32, merging 1000 files should therefore be possible). In any case, this is a separate issue from the "merge N input files into one output file" task (it's only an issue of whether you call that function once, or repeatedly). The general idea for the function is to keep a priority queue (module `heapq` is good at that;-) with small lists containing the "sorting key" (the current TLD, in your case) followed by the last line read from the file, and finally the open file ready for reading the next line (and something distinct in between to ensure that the normal lexicographical order won't accidentally end up trying to compare two open files, which would fail). I think some code is probably the best way to explain the general idea, so next I'll edit this answer to supply the code (however I have no time to *test* it, so take it as pseudocode intended to communicate the idea;-). ``` import heapq def merge(inputfiles, outputfile, key): """inputfiles: list of input, sorted files open for reading. outputfile: output file open for writing. key: callable supplying the "key" to use for each line. """ # prepare the heap: items are lists with [thekey, k, theline, thefile] # where k is an arbitrary int guaranteed to be different for all items, # theline is the last line read from thefile and not yet written out, # (guaranteed to be a non-empty string), thekey is key(theline), and # thefile is the open file h = [(k, i.readline(), i) for k, i in enumerate(inputfiles)] h = [[key(s), k, s, i] for k, s, i in h if s] heapq.heapify(h) while h: # get and output the lowest available item (==available item w/lowest key) item = heapq.heappop(h) outputfile.write(item[2]) # replenish the item with the _next_ line from its file (if any) item[2] = item[3].readline() if not item[2]: continue # don't reinsert finished files # compute the key, and re-insert the item appropriately item[0] = key(item[2]) heapq.heappush(h, item) ``` Of course, in your case, as the `key` function you'll want one that extracts the top-level domain given a line that's a domain name (with trailing newline) -- in a previous question you were already pointed to the urlparse module as preferable to string manipulation for this purpose. If you do insist on string manipulation, ``` def tld(domain): return domain.rsplit('.', 1)[-1].strip() ``` or something along these lines is probably a reasonable approach under this constraint. If you use Python 2.6 or better, [heapq.merge](http://docs.python.org/library/heapq.html#heapq.merge) is the obvious alternative, but in that case you need to prepare the iterators yourself (including ensuring that "open file objects" never end up being compared by accident...) with a similar "decorate / undecorate" approach from that I use in the more portable code above.
You want to use merge sort, e.g. `heapq.merge`. I'm not sure if your OS allows you to open 1000 files simultaneously. If not you may have to do it in 2 or more passes.
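If it helps, here is a rough sketch of that approach for Python 3.5+, where heapq.merge accepts a key function (the TLD key below is an assumption about how the lines should compare; on Python 2 you would need the decorate/undecorate approach from the other answer instead):

```python
import heapq
from contextlib import ExitStack

def tld(line):
    # Assumed sort key: compare lines by their top-level domain.
    return line.rsplit('.', 1)[-1].strip()

def merge_files(paths, out_path):
    with ExitStack() as stack, open(out_path, 'w') as out:
        files = [stack.enter_context(open(p)) for p in paths]
        out.writelines(heapq.merge(*files, key=tld))
```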
14,602
15,351,515
I wrote my own implementation of the `ISession` [interface](http://docs.pylonsproject.org/projects/pyramid/en/1.0-branch/_modules/pyramid/interfaces.html#ISession) of Pyramid which should store the Session in a database. Everything works real nice, but somehow `pyramid_tm` throws up on this. As soon as it is activated it says this: ``` DetachedInstanceError: Instance <Session at 0x38036d0> is not bound to a Session; attribute refresh operation cannot proceed ``` (Don't get confused here: The `<Session ...>` is the class name for the model, the "... to a Session" most likely refers to SQLAlchemy's Session (which I call `DBSession` to avoid confusion). I have looked through mailing lists and SO and it seems anytime someone has the problem, they are * spawning a new thread or * manually call `transaction.commit()` I do neither of those things. However, the specialty here is, that my session gets passed around by Pyramid a lot. First I do `DBSession.add(session)` and then `return session`. I can afterwards work with the session, flash new messages etc. However, it seems once the request finishes, I get this exception. Here is the full traceback: ``` Traceback (most recent call last): File "/home/javex/data/Arbeit/libraries/python/web_projects/pyramid/lib/python2.7/site-packages/waitress-0.8.1-py2.7.egg/waitress/channel.py", line 329, in service task.service() File "/home/javex/data/Arbeit/libraries/python/web_projects/pyramid/lib/python2.7/site-packages/waitress-0.8.1-py2.7.egg/waitress/task.py", line 173, in service self.execute() File "/home/javex/data/Arbeit/libraries/python/web_projects/pyramid/lib/python2.7/site-packages/waitress-0.8.1-py2.7.egg/waitress/task.py", line 380, in execute app_iter = self.channel.server.application(env, start_response) File "/home/javex/data/Arbeit/libraries/python/web_projects/pyramid/lib/python2.7/site-packages/pyramid/router.py", line 251, in __call__ response = self.invoke_subrequest(request, use_tweens=True) File "/home/javex/data/Arbeit/libraries/python/web_projects/pyramid/lib/python2.7/site-packages/pyramid/router.py", line 231, in invoke_subrequest request._process_response_callbacks(response) File "/home/javex/data/Arbeit/libraries/python/web_projects/pyramid/lib/python2.7/site-packages/pyramid/request.py", line 243, in _process_response_callbacks callback(self, response) File "/home/javex/data/Arbeit/libraries/python/web_projects/pyramid/miniblog/miniblog/models.py", line 218, in _set_cookie print("Setting cookie %s with value %s for session with id %s" % (self._cookie_name, self._cookie, self.id)) File "build/bdist.linux-x86_64/egg/sqlalchemy/orm/attributes.py", line 168, in __get__ return self.impl.get(instance_state(instance),dict_) File "build/bdist.linux-x86_64/egg/sqlalchemy/orm/attributes.py", line 451, in get value = callable_(passive) File "build/bdist.linux-x86_64/egg/sqlalchemy/orm/state.py", line 285, in __call__ self.manager.deferred_scalar_loader(self, toload) File "build/bdist.linux-x86_64/egg/sqlalchemy/orm/mapper.py", line 1668, in _load_scalar_attributes (state_str(state))) DetachedInstanceError: Instance <Session at 0x7f4a1c04e710> is not bound to a Session; attribute refresh operation cannot proceed ``` For this case, I deactivated the debug toolbar. The error gets thrown from there once I activate it. It seems the problem here is accessing the object at any point. 
I realize I could try to detach it somehow, but this doesn't seem like the right way as the element couldn't be modified without explicitly adding it to a session again. So when I don't spawn new threads and I don't explicitly call commit, I guess the transaction is committing before the request is fully gone and afterwards there is again access to it. How do I handle this problem?
2013/03/12
[ "https://Stackoverflow.com/questions/15351515", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1326104/" ]
I believe what you're seeing here is a quirk to the fact that response callbacks and finished callbacks are actually executed after tweens. They are positioned just between your app's egress, and middleware. `pyramid_tm`, being a tween, is committing the transaction before your response callback executes - causing the error upon later access. Getting the order of these things correct is difficult. A possibility off the top of my head is to register your own tween **under** `pyramid_tm` that performs a flush on the session, grabs the id, and sets the cookie on the response. I sympathize with this issue, as anything that happens after the transaction has been committed is a real gray area in Pyramid where it's not always clear that the session should not be touched. I'll make a note to continue thinking about how to improve this workflow for Pyramid in the future.
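Very roughly, the tween I have in mind could look something like the sketch below — completely untested, with `DBSession` standing in for your scoped session and the cookie handling simplified; the dotted names in the registration call would need to be checked against your project:

```python
def session_cookie_tween_factory(handler, registry):
    def session_cookie_tween(request):
        response = handler(request)
        # The view has run, but pyramid_tm (above this tween) has not
        # committed yet, so the session object is still attached.
        DBSession.flush()  # make sure the session row has its id
        response.set_cookie('session_id', str(request.session.id))
        return response
    return session_cookie_tween

# in the application configuration, register it under pyramid_tm so it
# runs inside the transaction:
# config.add_tween('myapp.tweens.session_cookie_tween_factory',
#                  under='pyramid_tm.tm_tween_factory')
```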
I first tried with registering a tween and it worked somehow, but the data did not get saved. I then stumpled upon the [SQLAlchemy Event System](http://docs.sqlalchemy.org/en/latest/core/event.html). I found the [after\_commit](http://docs.sqlalchemy.org/en/latest/orm/events.html#sqlalchemy.orm.events.SessionEvents.after_commit) event. Using this, I could set up the detaching of the session object after the commit was done by `pyramid_tm`. I think this provides the full fexibility and doesn't impose any requirements on the order. My final solution: ``` from sqlalchemy.event import listen from sqlalchemy.orm import Session as SASession def detach(db_session): from pyramid.threadlocal import get_current_request request = get_current_request() log.debug("Expunging (detaching) session for DBSession") db_session.expunge(request.session) listen(SASession, 'after_commit', detach) ``` Only drawback: It requires calling [get\_current\_request()](http://docs.pylonsproject.org/projects/pyramid/en/latest/api/threadlocal.html#pyramid.threadlocal.get_current_request) which is discouraged. However, I saw no way of passing the session in any way, as the event gets called by SQLAlchemy. I thought of some ugly wrapping stuff but I think that would have been way to risky and unstable.
14,611
51,118,801
I am very new to Python (and programming in general) and here is my issue. I would like to replace (or delete) a part of a string from a txt file which contains hundreds or thousands of lines. Each line starts with the very same string, which I want to delete. I have not found a method to delete it, so I tried to replace it with an empty string, but for some reason it doesn't work. Here is what I have written:

```
file = "C:/Users/experimental/Desktop/testfile siera.txt"

siera_log = open(file)

text_to_replace = "Chart: Bar Backtest: NQU8-CME [CB] 1 Min #1 | Study: free dll = 0 |"

for each_line in siera_log:
    new_line = each_line.replace("text_to_replace", " ")
    print(new_line)
```

When I print it to check if it was done, I can see that the lines are as they were before. No change was made. Can anyone help me to find out why?
2018/06/30
[ "https://Stackoverflow.com/questions/51118801", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8741601/" ]
> > each line starts with the very same string which i want to delete. > > > The problem is you're passing a string `"text_to_replace"` rather than the variable `text_to_replace`. But, for this specific problem, you could just remove the first *n* characters from each line: ``` text_to_replace = "Chart: Bar Backtest: NQU8-CME [CB] 1 Min #1 | Study: free dll = 0 |" n = len(text_to_replace) for each_line in siera_log: new_line = each_line[n:] print(new_line) ```
If you quote a variable it becomes a string literal and won't be evaluated as a variable. Change your line for replacement to: ``` new_line = each_line.replace(text_to_replace, " ") ```
14,612
69,054,921
I want to run a docker container for `Ganache` on my MacBook M1, but get the following error: ``` The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested ``` After this line nothing else will happen anymore and the whole process is stuck, although the qemu-system-aarch64 is running on 100% CPU according to Activity Monitor until I press `CTRL`+`C`. My docker-files come from [this repository](https://github.com/unlock-protocol/unlock/blob/master/docker/docker-compose.override.yml). After running into the same issues there I tried to isolate the root cause and came up with the smallest setup that will run into the same error. This is the output of `docker-compose up --build`: ``` Building ganache Sending build context to Docker daemon 196.6kB Step 1/17 : FROM trufflesuite/ganache-cli:v6.9.1 ---> 40b011a5f8e5 Step 2/17 : LABEL Unlock <ops@unlock-protocol.com> ---> Using cache ---> aad8a72dac4e Step 3/17 : RUN apk add --no-cache git openssh bash ---> Using cache ---> 4ca6312438bd Step 4/17 : RUN apk add --no-cache python python-dev py-pip build-base && pip install virtualenv ---> Using cache ---> 0be290f541ed Step 5/17 : RUN npm install -g npm@6.4.1 ---> Using cache ---> d906d229a768 Step 6/17 : RUN npm install -g yarn ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested ---> Running in 991c1d804fdf ``` **docker-compose.yml:** ``` version: '3.2' services: ganache: restart: always build: context: ./development dockerfile: ganache.dockerfile env_file: ../.env.dev.local ports: - 8545:8545 ganache-standup: image: ganache-standup build: context: ./development dockerfile: ganache.dockerfile env_file: ../.env.dev.local entrypoint: ['node', '/standup/prepare-ganache-for-unlock.js'] depends_on: - ganache ``` **ganache.dockerfile:** The ganache.dockerfile can be found [here](https://github.com/unlock-protocol/unlock/blob/master/docker/development/ganache.dockerfile). Running the whole project on an older iMac with Intel-processor works fine.
2021/09/04
[ "https://Stackoverflow.com/questions/69054921", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6727976/" ]
On M1 MacBook Pro, I've had success using `docker run --platform linux/amd64` **Example** ``` docker run --platform linux/amd64 node ```
With docker-compose you also have the `platform` option. ``` version: "2.4" services: zookeeper: image: confluentinc/cp-zookeeper:7.1.1 hostname: zookeeper container_name: zookeeper platform: linux/amd64 ports: - "2181:2181" ```
14,613
49,924,302
I have a couple of date strings with the following pattern: MM DD(st, nd, rd, th) YYYY HH:MM am. What is the most Pythonic way for me to replace (st, nd, rd, th) with an empty string ''?

```
s = ['st', 'nd', 'rd', 'th']

string = 'Mar 1st 2017 00:00 am'
string = 'Mar 2nd 2017 00:00 am'
string = 'Mar 3rd 2017 00:00 am'
string = 'Mar 4th 2017 00:00 am'

for i in s:
    a = string.replace(i, '')

a = [string.replace(i, '') for i in s][0]
print(a)
```
2018/04/19
[ "https://Stackoverflow.com/questions/49924302", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6373357/" ]
The most pythonic way is to use `dateutil`. ``` from dateutil.parser import parse import datetime t = parse("Mar 2nd 2017 00:00 am") # you can access the month, hour, minute, etc: t.hour # 0 t.minute # 0 t.month # 3 ``` And then, you can use `t.strftime()`, where the formatting of the resulting string is any of these: <http://strftime.org/> If you want a more *appropriate representation* of the time(like for example in your proper locale), then you could do `t.strftime("%c")`, or you could easily format it to the answer you wanted above. This is much safer than a regex match because `dateutil` is a part of the standard library, and returns to you a concise `datetime` object.
You could use a regular expression as follows: ``` import re strings = ['Mar 1st 2017 00:00 am', 'Mar 2nd 2017 00:00 am', 'Mar 3rd 2017 00:00 am', 'Mar 4th 2017 00:00 am'] for string in strings: print(re.sub('(.*? \d+)(.*?)( .*)', r'\1\3', string)) ``` This would give you: ```none Mar 1 2017 00:00 am Mar 2 2017 00:00 am Mar 3 2017 00:00 am Mar 4 2017 00:00 am ``` If you want to restrict it do just `st` `nd` `rd` `th`: ``` print(re.sub('(.*? \d+)(st|nd|rd|th)( .*)', r'\1\3', string)) ```
14,623
30,522,420
I'm going through the new book "Data Science from Scratch: First Principles with Python" and I think I've found an errata. When I run the code I get `"TypeError: 'int' object has no attribute '__getitem__'".` I think this is because when I try to select `friend["friends"]`, `friend` is an integer that I can't subset. Is that correct? How can I continue the exercises so that I get the desired output? It should be a list of friend of friends (foaf). I know there's repetition problems but those are fixed later... ``` users = [ {"id": 0, "name": "Ashley"}, {"id": 1, "name": "Ben"}, {"id": 2, "name": "Conrad"}, {"id": 3, "name": "Doug"}, {"id": 4, "name": "Evin"}, {"id": 5, "name": "Florian"}, {"id": 6, "name": "Gerald"} ] #create list of tuples where each tuple represents a friendships between ids friendships = [(0,1), (0,2), (0,5), (1,2), (1,5), (2,3), (2,5), (3,4), (4,5), (4,6)] #add friends key to each user for user in users: user["friends"] = [] #go through friendships and add each one to the friends key in users for i, j in friendships: users[i]["friends"].append(j) users[j]["friends"].append(i) def friends_of_friend_ids_bad(user): #foaf is friend of friend return [foaf["id"] for friend in user["friends"] for foaf in friend["friends"]] print friends_of_friend_ids_bad(users[0]) ``` Full traceback: ``` Traceback (most recent call last): File "/Users/marlon/Desktop/test.py", line 57, in <module> print friends_of_friend_ids_bad(users[0]) File "/Users/marlon/Desktop/test.py", line 55, in friends_of_friend_ids_bad for foaf in friend["friends"]] TypeError: 'int' object has no attribute '__getitem__' [Finished in 0.6s with exit code 1] [shell_cmd: python -u "/Users/marlon/Desktop/test.py"] [dir: /Users/marlon/Desktop] [path: /usr/bin:/bin:/usr/sbin:/sbin] ``` How I think it can be fixed: I think you need users as a second argument and then do "for foaf in users[friend]["friends"]" instead of "for foaf in friend["friends"]
2015/05/29
[ "https://Stackoverflow.com/questions/30522420", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2469211/" ]
Yes, you've found an incorrect piece of code in the book. Implementation for `friends_of_friend_ids_bad` function should be like this: ``` def friends_of_friend_ids_bad(user): #foaf is friend of friend return [users[foaf]["id"] for friend in user["friends"] for foaf in users[friend]["friends"]] ``` `user["friends"]` is a list of integers, thus `friend` is an integer and `friend["friends"]` will raise `TypeError` exception --- **UPD** It seems, that the problem in the book was not about `friends_of_friend_ids_bad` function but about populating `friends` lists. Replace ``` for i, j in friendships: users[i]["friends"].append(j) users[j]["friends"].append(i) ``` with ``` for i, j in friendships: users[i]["friends"].append(users[j]) users[j]["friends"].append(users[i]) ``` Then `friends_of_friend_ids_bad` and `friends_of_friend_ids` will work as intended.
The error is on: ``` return [foaf["id"] for friend in user["friends"] for foaf in friend["friends"]] ``` In the second for loop, you're trying to access `__getitem__` of `users[0]["friends"]`, which is exactly 5 (ints don't have `__getitem__`). You're trying to store on the list `foaf["id"]` for each friend in `user["friends"]` and for each foaf `friend["friends"]`. The problem is that foaf gets from `friend["friends"]` the number from a tuple inside friendships that was stored on users, and then you try to access `["id"]` from it, trying to call `__getitem__` from an integer value. That's the exact cause of your problem.
14,624
65,154,521
When I try to click this button with Selenium, it gives me the error shown below.

This is my code:

```
#LOGIN IN WEBSITE
browser = webdriver.Firefox()
browser.get("http://class.apphafez.ir/")
username_input = browser.find_element_by_css_selector("input[name='UserName']")
password_input = browser.find_element_by_css_selector("input[name='Password']")
username_input.send_keys(username_entry.get())
password_input.send_keys(password_entry.get())
button_go = browser.find_element_by_xpath("//button[@type='submit']")
button_go.click()

#GO CLASS
wait = WebDriverWait(browser , 10)
go_to_class = wait.until(EC.element_to_be_clickable((By.XPATH , ("//div[@class='btn btn- palegreen enterClassBtn'"))))
go_to_class.click()
```

This is the site's code:

```
<div class="databox-row padding-10">
<button data-bind="attr: { 'data-weekscheduleId' : Id}" style="width:100%" class="btn btn-palegreen enterClassBtn" data-weekscheduleid="320">"i want to ckick here"</button>
```

This is my program's error:

```
File "hafezlearn.py", line 33, in login_use
    go_to_class = wait.until(EC.element_to_be_clickable((By.XPATH , ("//div[@class='btn btn- palegreen enterClassBtn'"))))
File "/usr/local/lib/python3.8/dist-packages/selenium/webdriver/support/wait.py", line 80, in until
    raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
```
2020/12/05
[ "https://Stackoverflow.com/questions/65154521", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13937766/" ]
You were close enough. The value of the *class* attribute is **`btn btn-palegreen enterClassBtn`** but not `btn btn- palegreen enterClassBtn` and you can't add extra spaces within the attribute value. --- Solution -------- To click on the element you need to induce [WebDriverWait](https://stackoverflow.com/questions/59130200/selenium-wait-until-element-is-present-visible-and-interactable/59130336#59130336) for the `element_to_be_clickable()` and you can use either of the following [Locator Strategies](https://stackoverflow.com/questions/48369043/official-locator-strategies-for-the-webdriver/48376890#48376890): * Using `CSS_SELECTOR`: ``` WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "button.btn.btn-palegreen.enterClassBtn[data-bind*='data-weekscheduleId']"))).click() ``` * Using `XPATH`: ``` WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//button[@class='btn btn-palegreen enterClassBtn' and text()='i want to ckick here'][contains(@data-bind, 'data-weekscheduleId')]"))).click() ``` * **Note**: You have to add the following imports : ``` from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC ```
Multiple class names for css values are tough to handle. usually easiest way is to use a css selector: ``` button.btn.btn-palegreen.enterClassBtn ``` Specifically: ``` go_to_class = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR , ("button.btn.btn-palegreen.enterClassBtn")))) ``` See also [How to get elements with multiple classes](https://stackoverflow.com/questions/7184562/how-to-get-elements-with-multiple-classes/7184581)
14,625
33,801,170
Let's say I have an ndarray with 100 elements, and I want to select the first 4 elements, skip 6 and go ahead like this (in other words, select the first 4 elements every 10 elements). I tried with python slicing with step but I think it's not working in my case. How can I do that? I'm using Pandas and numpy, can they help? I searched around but I have found nothing like that kind of slicing. Thanks!
2015/11/19
[ "https://Stackoverflow.com/questions/33801170", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5580662/" ]
You could reshape the array to a `10x10`, then use slicing to pick the first 4 elements of each row. Then flatten the reshaped, sliced array: ``` In [46]: print a [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99] In [47]: print a.reshape((10,-1))[:,:4].flatten() [ 0 1 2 3 10 11 12 13 20 21 22 23 30 31 32 33 40 41 42 43 50 51 52 53 60 61 62 63 70 71 72 73 80 81 82 83 90 91 92 93] ```
Use `% 10`: ``` print [i for i in range(100) if i % 10 in (0, 1, 2, 3)] [0, 1, 2, 3, 10, 11, 12, 13, 20, 21, 22, 23, 30, 31, 32, 33, 40, 41, 42, 43, 50, 51, 52, 53, 60, 61, 62, 63, 70, 71, 72, 73, 80, 81, 82, 83, 90, 91, 92, 93] ```
14,626
51,500,519
I can't use boto3 to connect to S3 with a role arn provided 100% programmatically. ```python session = boto3.Session(role_arn="arn:aws:iam::****:role/*****", RoleSessionName="****") s3_client = boto3.client('s3', aws_access_key_id="****", aws_secret_access_key="****") for b in s3_client.list_buckets()["Buckets"]: print (b["Name"]) ``` I can't provide arn info to Session and also client and there is no assume\_role() on a client based on s3. I found a way with a sts temporary token but I don't like that. ```python sess = boto3.Session(aws_access_key_id="*****", aws_secret_access_key="*****") sts_connection = sess.client('sts') assume_role_object = sts_connection.assume_role(RoleArn="arn:aws:iam::***:role/******", RoleSessionName="**", DurationSeconds=3600) session = boto3.Session( aws_access_key_id=assume_role_object['Credentials']['AccessKeyId'], aws_secret_access_key=assume_role_object['Credentials']['SecretAccessKey'], aws_session_token=assume_role_object['Credentials']['SessionToken']) s3_client = session.client('s3') for b in s3_client.list_buckets()["Buckets"]: print (b["Name"]) ``` Do you have any idea ?
2018/07/24
[ "https://Stackoverflow.com/questions/51500519", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6227500/" ]
You need to understand how temporary credentials are created. First you need to create a client using your current access keys. These credentials are then used to verify that you have the permissions to call assume\_role and have the rights to issue credentials from the IAM role. If someone could do it your way, there would be a HUGE security hole with assume\_role. Your rights must be validated first, then you can issue temporary credentials.
Firstly, *never* put an Access Key and Secret Key in your code. Always store credentials in a `~/.aws/credentials` file (eg via `aws configure`). This avoids embarrassing situations where your credentials are accidentally released to the world. Also, if you are running on an Amazon EC2 instance, then simply assign an IAM Role to the instance and it will automatically obtain credentials. An easy way to assume a role in `boto3` is to store the role details in the credentials file with a separate **profile**. You can then reference the profile when creating a client and boto3 will automatically call `assume-role` on your behalf. See: [boto3: Assume Role Provider](https://boto3.readthedocs.io/en/latest/guide/configuration.html#assume-role-provider)
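For illustration, a small sketch of that profile-based approach (the profile name, role ARN, and account id below are placeholders):

```python
# ~/.aws/config
# [profile assumed-role]
# role_arn = arn:aws:iam::123456789012:role/my-role
# source_profile = default

import boto3

session = boto3.Session(profile_name='assumed-role')
s3_client = session.client('s3')
for b in s3_client.list_buckets()['Buckets']:
    print(b['Name'])
```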
14,635
9,433,021
Is there a way to group names together in python, to repeatedly assign to them *en masse*? While we can do: ``` a,b,c = (1,2,3) ``` I would like to be able to do something like: ``` names = a,b,c *names = (3,2,1) # this syntax doesn't work a,b,c == (3,2,1) #=> True ``` Is there a built-in syntax for this? If not, I assume it would be possible with an object that overloads its assignment operator. In that case, is there an existing implementation, and would this concept have any unexpected failure modes? The point is not to use the names as data, but rather to be able to use the actual names as variables that each refer to their own individual item, and to be able to use the list as a list, and to avoid code like: ``` a = 1 b = 2 c = 3 sequence = (a,b,c) ```
2012/02/24
[ "https://Stackoverflow.com/questions/9433021", "https://Stackoverflow.com", "https://Stackoverflow.com/users/21640/" ]
You should go one level up in your data abstraction. You are not trying to access the entries by their individual names -- you rather use `names` to denote the whole collection of values, so a simple list might be what you want. If you want both, a name for the collection *and* names for the individual items, then a dictionary might be the way to go: ``` names = "a b c".split() d = dict(zip(names, (1, 2, 3))) d.update(zip(names, (3, 2, 1))) ``` If you need something like this repeatedly, you might want to define a class with the names as attributes: ``` class X(object): def __init__(self, a, b, c): self.update(a, b, c) def update(self, a, b, c) self.a, self.b, self.c = a, b, c x = X(1, 2, 3) x.update(3, 2, 1) print x.a, x.b. x.c ``` This reflects that you want to block `a`, `b` and `c` to some common structure, but keep the option to access them individually by name.
You should use a [**`dict`**](http://docs.python.org/library/stdtypes.html#mapping-types-dict): ``` >>> d = {"a": 1, "b": 2, "c": 3} >>> d.update({"a": 8}) >>> print(d) {"a": 8, "c": 3, "b": 2} ```
14,636
40,367,569
I am trying to set up a Python extension (Gambit, <http://gambit.sourceforge.net/gambit13/build.html>) and am getting an error when trying to build setup.py: > > Traceback (most recent call last): File "setup.py", line 32, in <module> > > > m.Extension.**dict** = m.\_Extension.**dict** > > > AttributeError: attribute '**dict**' of 'type' objects is not writable > > > This seems to be an issue with a certain type of (older) setup.py file. I created a minimal example based on <https://pypi.python.org/pypi/setuptools_cython/0.2>: ``` #Using Python 3.6 on Windows 10 (64-bit) from setuptools import setup #from distutils.extension import Extension #^That line can be included or excluded without changing the error import sys if 'setuptools.extension' in sys.modules: m = sys.modules['setuptools.extension'] m.Extension.__dict__ = m._Extension.__dict__ ``` Other packages have had similar problems in the past (see arcitc issue #17 on Github) and apparently fixed it by some Python magic which goes above my head (arctic's setup.py no longer includes the relevant lines). Any thoughts on what could cause the issue? If so, are there any changes I can make to setup.py to avoid this error without breaking the underlying functionality?
2016/11/01
[ "https://Stackoverflow.com/questions/40367569", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2537443/" ]
You are getting **[NullPointerException](https://docs.oracle.com/javase/7/docs/api/java/lang/NullPointerException.html)** at ***[android.support.v4.widget.drawerlayout](https://developer.android.com/reference/android/support/v4/widget/DrawerLayout.html)*** > > NullPointerException is thrown when an application attempts to use an > object reference that has the null value. > > > How can a NullPointerException launches only on release apk? ------------------------------------------------------------ > > When you prepare your application for release, you configure, build, > and test a release version of your application. The configuration > tasks are straightforward, involving basic code cleanup and code > modification tasks that help optimize your application. > > > 1. Read **[Prepare for Release](https://developer.android.com/studio/publish/preparing.html)** 2. set **`minifyEnabled false`** You can customise your `build.gradle` like this ``` buildTypes { debug { minifyEnabled false proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro' } release { minifyEnabled false proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro' debuggable false zipAlignEnabled true jniDebuggable false renderscriptDebuggable false } } ``` Make sure using stable **support library and build tools** . ``` compileSdkVersion 24 buildToolsVersion "24.0.2" compile 'com.android.support:appcompat-v7:24.2.0' compile 'com.android.support:design:24.2.0' ``` **Project Level** ``` classpath 'com.android.tools.build:gradle:2.1.2' // or 2.2.2 ``` **Then** > > On the main menu, choose File | Invalidate Caches/Restart. The > Invalidate Caches message appears informing you that the caches will > be invalidated and rebuilt on the next start. Use buttons in the > dialog to invalidate caches, restart Android Studio . > > > **Note :** You can provide us your `build.gradle` . Disable `"instant run"` Facility . .
```
android{
    buildTypes{
        release{
            minifyEnabled false
        }
    }
}
```

Try this in your build.gradle. Or try restarting your Android Studio as well as your computer. As is well known, Android Studio may occasionally misbehave.
14,646
55,373,867
I have very basic producer-consumer code written with pika framework in python. The problem is - consumer side runs too slow on messages in queue. I ran some tests and found out that i can speed up the workflow up to 27 times with multiprocessing. The problem is - I don't know what is the right way to add multiprocessing functionality to my code. ```py import pika import json from datetime import datetime from functions import download_xmls def callback(ch, method, properties, body): print('Got something') body = json.loads(body) type = body[-1]['Type'] print('Object type in work currently ' + type) cnums = [x['cadnum'] for x in body[:-1]] print('Got {} cnums to work with'.format(len(cnums))) date_start = datetime.now() download_xmls(type,cnums) date_end = datetime.now() ch.basic_ack(delivery_tag=method.delivery_tag) print('Download complete in {} seconds'.format((date_end-date_start).total_seconds())) def consume(queue_name = 'bot-test'): parameters = pika.URLParameters('server@address') connection = pika.BlockingConnection(parameters) channel = connection.channel() channel.queue_declare(queue=queue_name, durable=True) channel.basic_qos(prefetch_count=1) channel.basic_consume(callback, queue='bot-test') print(' [*] Waiting for messages. To exit press CTRL+C') channel.start_consuming() ``` How do I start with adding multiprocessing functionality from here?
2019/03/27
[ "https://Stackoverflow.com/questions/55373867", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7047471/" ]
Pika has extensive [example code](https://github.com/pika/pika/blob/0.13.1/examples/basic_consumer_threaded.py) that I recommend you check out. Note that this code is for **example** use only. In the case of doing work on threads, you will have to use a more intelligent way to manage your threads. The goal is to not block the thread that runs Pika's IO loop, and to call back into the IO loop correctly from your worker threads. That's why `add_callback_threadsafe` exists and is used in that code. --- **NOTE:** the RabbitMQ team monitors the `rabbitmq-users` [mailing list](https://groups.google.com/forum/#!forum/rabbitmq-users) and only sometimes answers questions on StackOverflow.
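A rough sketch of that pattern, adapted to the asker's pika 0.13-style consumer (the worker body and the `functools.partial` wiring are illustrative only -- see the linked example for the full, supported version):

```
import functools
import threading

def do_work(connection, channel, delivery_tag, body):
    # long-running job goes here (e.g. the asker's download_xmls call)
    ...
    # never touch the channel from this worker thread directly;
    # schedule the ack back onto the connection's I/O loop instead
    ack = functools.partial(channel.basic_ack, delivery_tag=delivery_tag)
    connection.add_callback_threadsafe(ack)

def callback(ch, method, properties, body, connection):
    worker = threading.Thread(
        target=do_work, args=(connection, ch, method.delivery_tag, body))
    worker.start()

# when registering the consumer, bind the connection into the callback:
# on_message = functools.partial(callback, connection=connection)
# channel.basic_consume(on_message, queue='bot-test')
```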
```
import pika
import json

from multiprocessing import Process
from datetime import datetime
from functions import download_xmls
import multiprocessing
import concurrent.futures

def do_job(body):
    # runs in a separate process and only works on the message body
    body = json.loads(body)
    type = body[-1]['Type']
    print('Object type in work currently ' + type)
    cnums = [x['cadnum'] for x in body[:-1]]
    print('Got {} cnums to work with'.format(len(cnums)))
    date_start = datetime.now()
    download_xmls(type,cnums)
    date_end = datetime.now()
    print('Download complete in {} seconds'.format((date_end-date_start).total_seconds()))


def callback(ch, method, properties, body):
    print('Got something')
    # note the trailing comma: args must be a tuple
    p = Process(target=do_job, args=(body,))
    p.start()
    p.join()
    # ack on the consumer's channel, not inside the worker process
    ch.basic_ack(delivery_tag=method.delivery_tag)

def consume(queue_name = 'bot-test'):
    parameters = pika.URLParameters('server@address')
    connection = pika.BlockingConnection(parameters)
    channel = connection.channel()
    channel.queue_declare(queue=queue_name, durable=True)
    channel.basic_qos(prefetch_count=1)
    channel.basic_consume(callback, queue='bot-test')

    print(' [*] Waiting for messages. To exit press CTRL+C')
    channel.start_consuming()

def get_workers():
    try:
        return multiprocessing.cpu_count()
    except NotImplementedError:
        return 4

workers = get_workers()

with concurrent.futures.ProcessPoolExecutor(max_workers=workers) as executor:
    for i in range(workers):
        executor.submit(consume)
```

The above is just a simple demo of how you can include multiprocessing here. I recommend you go through the documentation to further optimise the code and achieve what you require.

<https://docs.python.org/3/library/multiprocessing.html#the-process-class>
14,651
42,740,284
I have a question, and I am having a hard time understanding what the code might look like, so I will explain the best I can. I am trying to view and search for a NUL byte and replace it with another NUL-type byte, but the computer needs to be able to tell the difference between the different NUL bytes. An example would be: hex code 00 would equal NUL and hex code 01 equals SOH. Let's say I wanted to create code to replace those with each other.

code example

```
TextFile1 = Line.Replace('NUL','SOH')
TextFile2.write(TextFile1)
```

Yes, I have read a LOT of different posts just trying to understand how to put it into working code. The first problem is I can't just copy and paste the output of hex 00 into the Python module; it just won't paste. Reading on that shows 0x00-type formats are used to represent that, but I'm having issues finding the correct representation for Python 3.x

```
Print (\x00) output = nothing shows  #I'm trying to get output of 'NUL' or as hex would show '.' either works fine --Edited 
```

So how do I get the module to understand that I'm trying to represent HEX 00 or 'NUL' and show it as '.', and do the same for SOH? I am not limited to just those NUL-type characters; I am only using them as an example, because I want to use all 256 HEX characters but be able to tell the difference when pasting into another program, just like a hex editor would do. Maybe I need to get the two programs on the same encoding type, not really sure. I just need a very simple example of how I would search for non-representable hexadecimal characters and find and replace them in Notepad or Notepad++; from what I have read, only Notepad++ has the ability to do so.
2017/03/11
[ "https://Stackoverflow.com/questions/42740284", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7620511/" ]
If you are on Python 3, you should really work with `bytes` objects. Python 3 strings are sequences of unicode code points. To work with byte-strings, use `bytes` (which is pretty much the same as a Python 2 string, which used the "sequence of bytes" model). ``` >>> bytes([97, 98, 99]) b'abc' >>> ``` Note, to write a `bytes` literal, prepend a `b` before the opening quote in your string. To answer your question, to find the representation of `0x00` and `0x01` just look at: ``` >>> bytes([0x00, 0x01]) b'\x00\x01' ``` Note, `0x00` and `0` are the same type, they are just different literal syntaxes (hex literal versus decimal literal). ``` >>> bytes([0, 1]) b'\x00\x01' ``` I have no idea what you mean with regards to Notepad++. Here is an example, though, of replacing a null byte with something else: ``` >>> byte_string = bytes([97, 98, 0, 99]) >>> byte_string b'ab\x00c' >>> print(byte_string) b'ab\x00c' >>> byte_string.replace(b'\x00', b'NONE') b'abNONEc' >>> print(byte_string.replace(b'\x00', b'NONE')) b'abNONEc' ```
Another equivalent way to get the value of `\x00` in Python is `chr(0)`. I like that a little better than the literal version.
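To illustrate the equivalence (a minimal interpreter sketch; nothing here is specific to the asker's files):

```
>>> chr(0)
'\x00'
>>> chr(0) == '\x00'
True
>>> bytes([0]) == b'\x00'   # the bytes equivalent on Python 3
True
```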
14,652
33,697,263
I am trying to install snap7 (to read from an S7-1200) with its python-snap7 0.4 wrapper, but I always get a traceback with the following simple code.

```
from time import sleep
import snap7
from snap7.util import *
import struct

plc = snap7.client.Client()
```

Traceback:

```
>>> 
Traceback (most recent call last):
  File "Y:\Lonnox\Projekte\Bibliothek\Python und SPS\S7-1200 Test.py", line 6, in <module>
    plc = snap7.client.Client()
  File "C:\Python34\lib\site-packages\snap7\client.py", line 30, in __init__
    self.library = load_library()
  File "C:\Python34\lib\site-packages\snap7\common.py", line 54, in load_library
    return Snap7Library(lib_location).cdll
  File "C:\Python34\lib\site-packages\snap7\common.py", line 46, in __init__
    raise Snap7Exception(msg)
snap7.snap7exceptions.Snap7Exception: can't find snap7 library. If installed, try running ldconfig
```

The steps I take to install snap7 and the Python wrapper are:

1. Download snap7 from SourceForge and copy snap7.dll and snap7.lib to the system32 folder of Windows 8
2. Install the wrapper by using pip install python-snap7

How do I install snap7 on Windows correctly?

[log of pip install][1]
2015/11/13
[ "https://Stackoverflow.com/questions/33697263", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4801693/" ]
After some trial-and-error experiments, and with some info from the developers involved with snap7, I fixed the problem. The folder where the snap7.dll and .lib files are located must be present in the Environment Variables of Windows. Alternatively, you can copy the files to the Python install dir if you have checked the "add path" option from the Python installer.

See the picture for details: Edit Environment Vars

[edit environment vars](http://i.stack.imgur.com/mwaLI.png)

To give a good starting point for everyone who is a greenhorn like me, here is a minimal snap7 tutorial to read variables of a DB from an S7 1212C PLC with Python 3:

```
import snap7
from snap7.util import *
import struct

plc = snap7.client.Client()
plc.connect("10.112.115.10",0,1)

#---Read DB---
db = plc.db_read(1234,0,14)

real = struct.iter_unpack("!f",db[:12] )
print( "3 x Real Vars:", [f for f, in real] )

print( "3 x Bool Vars:", db[12]&1==1, db[12]&2==2, db[12]&4==4 )

plc.disconnect()
```

IP and Subnet Mask
------------------

The IP of the PLC must be in the range of the subnet mask of the PC LAN device. If the IP of the LAN device is 10.112.115.1 and the subnet mask is 255.255.255.0, this gives you a range of 10.112.115.2 to 10.112.115.255 for your PLC. Every PLC IP outside this range will give you an "Unreachable peer" error.

Firewall
--------

Make sure that your firewall allows the communication between your PC and PLC.

PLC Data Location
-----------------

If you are unfamiliar with STEP 7 / TIA Portal, look for the "Online Diagnostics" button and see the pictures to find the location of your data.

[DB Number and Variable Offsets](http://i.stack.imgur.com/X0zXH.png)

PLC Configuration
-----------------

Besides a PLC program that uses the variables you want to read, the PLC needs no additional parts to communicate with snap7. The services that are needed to communicate with snap7 are started by the firmware on power-on.
Try this:

Search the snap7 folder for the snap7.dll and snap7.lib files. Copy snap7.dll and snap7.lib into the "C:/PythonXX/site-packages/snap7" directory and run your code again. You can figure this out from the common.py file in the same directory.
14,653
39,457,209
I am trying to do some white blob detection using OpenCV. But my script failed to detect the big white block which is my goal while some small blobs are detected. I am new to OpenCV, and am i doing something wrong when using simpleblobdetection in OpenCV? [Solved partially, please read below] And here is the script: ``` #!/usr/bin/python # Standard imports import cv2 import numpy as np; from matplotlib import pyplot as plt # Read image im = cv2.imread('whiteborder.jpg', cv2.IMREAD_GRAYSCALE) imfiltered = cv2.inRange(im,255,255) #OPENING kernel = np.ones((5,5)) opening = cv2.morphologyEx(imfiltered,cv2.MORPH_OPEN,kernel) #write out the filtered image cv2.imwrite('colorfiltered.jpg',opening) # Setup SimpleBlobDetector parameters. params = cv2.SimpleBlobDetector_Params() params.blobColor= 255 params.filterByColor = True # Create a detector with the parameters ver = (cv2.__version__).split('.') if int(ver[0]) < 3 : detector = cv2.SimpleBlobDetector(params) else : detector = cv2.SimpleBlobDetector_create(params) # Detect blobs. keypoints = detector.detect(opening) # Draw detected blobs as green circles. # cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures # the size of the circle corresponds to the size of blob print str(keypoints) im_with_keypoints = cv2.drawKeypoints(opening, keypoints, np.array([]), (0,255,0), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS) # Show blobs ##cv2.imshow("Keypoints", im_with_keypoints) cv2.imwrite('Keypoints.jpg',im_with_keypoints) cv2.waitKey(0) ``` **EDIT**: By adding a bigger value of area maximum value, i am able to identify a big blob but my end goal is to identify the big white rectangle exist or not. And the white blob detection i did returns not only the rectangle but also the surrounding areas as well. [This part solved] **EDIT 2:** Based on the answer from @PSchn, i update my code to apply the logic, first set the color filter to only get the white pixels and then remove the noise point using opening. It works for the sample data and i can successfully get the keypoint after blob detection. [![enter image description here](https://i.stack.imgur.com/U2TRP.jpg)](https://i.stack.imgur.com/U2TRP.jpg)
2016/09/12
[ "https://Stackoverflow.com/questions/39457209", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6779632/" ]
If you just want to detect the white rectangle, you can try to set a higher threshold, e.g. 253, erase small objects with an opening, and take the biggest blob. I first smoothed your image, then thresholded it:

[![enter image description here](https://i.stack.imgur.com/UrrBT.png)](https://i.stack.imgur.com/UrrBT.png)

and the opening:

[![enter image description here](https://i.stack.imgur.com/LNEBf.png)](https://i.stack.imgur.com/LNEBf.png)

Now you just have to use `findContours` and take the `boundingRect`. If your rectangle is always that white, it should work. If you go lower than 251 with your threshold, the other small blobs will appear and your region merges with them, like here:

[![enter image description here](https://i.stack.imgur.com/GB6iM.png)](https://i.stack.imgur.com/GB6iM.png)

Then you could still do an opening several times and you get this:

[![enter image description here](https://i.stack.imgur.com/oKWJT.png)](https://i.stack.imgur.com/oKWJT.png)

But I don't think that it is the fastest idea ;)
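A minimal sketch of the threshold / opening / `findContours` / `boundingRect` chain described above (the threshold value, kernel size and iteration count are just the ones suggested here, and `whiteborder.jpg` is the asker's file -- adjust as needed):

```
import cv2
import numpy as np

im = cv2.imread('whiteborder.jpg', cv2.IMREAD_GRAYSCALE)
blur = cv2.GaussianBlur(im, (5, 5), 0)                        # smooth first
_, thresh = cv2.threshold(blur, 253, 255, cv2.THRESH_BINARY)  # keep only near-white pixels

kernel = np.ones((5, 5), np.uint8)
opened = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=3)  # erase small blobs

# findContours returns 2 or 3 values depending on the OpenCV version,
# so take the second-to-last element to get the contour list either way
contours = cv2.findContours(opened, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
if contours:
    biggest = max(contours, key=cv2.contourArea)   # the big white rectangle
    x, y, w, h = cv2.boundingRect(biggest)
    print('white rectangle found at', (x, y), 'with size', (w, h))
else:
    print('no white rectangle found')
```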
You could try setting params.maxArea to something obnoxiously large (somewhere in the tens of thousands): the default may be something lower than the area of the rectangle you're trying to detect. Also, I don't know how true this is or not, but I've heard that detection by color is bugged with a logic error, so it may be worth a try disabling it just in case that is causing problems (this has probably been fixed in later versions, but it could still be worth a try)
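In the asker's parameter block, that suggestion would look roughly like this (the `100000` ceiling is an arbitrary large value, not a recommended constant):

```
params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.minArea = 10
params.maxArea = 100000       # raise the ceiling so a large blob is not filtered out
params.filterByColor = False  # try disabling colour filtering in case it misbehaves
detector = cv2.SimpleBlobDetector_create(params)
```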
14,658
68,010,585
I can edit python code in a folder located in a Docker Volume. I use Visual Studio Code and in general lines it works fine. The only problem that I have is that the libraries (such as pandas and numpy) are not installed in the container that Visual Studio creates to mount the volume, so I get warning errors. How to install these libraries in Visual Studio Code container? \*\* UPDATE \*\* This is my application `Dockerfile`, see that the libraries are included in the image, not the volume: ``` FROM daskdev/dask RUN /opt/conda/bin/conda create -p /pyenv -y RUN /opt/conda/bin/conda install -p /pyenv scikit-learn flask waitress gunicorn \ pytest apscheduler matplotlib pyodbc -y RUN /opt/conda/bin/conda install -p /pyenv -c conda-forge dask-ml pyarrow -y RUN /opt/conda/bin/conda install -p /pyenv pip -y RUN /pyenv/bin/pip install pydrill ``` And the application is started with `docker compose`: ``` version: '3' services: web: image: img-python container_name: cont_flask volumes: - vol_py_code:/code ports: - "5000:5000" working_dir: /code entrypoint: - /pyenv/bin/gunicorn command: - -b 0.0.0.0:5000 - --reload - app.frontend.app:app ```
2021/06/16
[ "https://Stackoverflow.com/questions/68010585", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1362485/" ]
You can do it this way: put the img in a div tag and use text-align: center. There are many ways you can do this.

```css
.fotos-block{
   text-align: center;
 }
```

```html
<div class="fotos-block">
<img src = "https://www.imagemhost.com.br/images/2021/06/13/mail.png" class="fotos" id="foto1f">
</div>
```
And you can also use this way to center the img. ```css .fotos{ display: block; margin: auto; text-align: center; } ```
14,659
69,726,911
I need to return within this FOR only values equal to or less than 6 in each column. ``` colunas = list(df2.columns[8:19]) colunas ['Satisfação geral', 'Comunicação', 'Expertise da industria', 'Inovação', 'Parceira', 'Proatividade', 'Qualidade', 'responsividade', 'Pessoas', 'Expertise técnico', 'Pontualidade'] lista = [] for coluna in colunas: nome_coluna = coluna #total_parcial = df2[coluna].count() df2.loc[df2[coluna]<=6].shape[0] percentual = df2[coluna].count() / df2[coluna].count() lista.append([nome_coluna,total_parcial,percentual]) df_new = pd.DataFrame(data=lista, columns=['nome_coluna','total_parcial','percentual']) ``` But returns the error ``` TypeError Traceback (most recent call last) <ipython-input-120-364994f742fd> in <module>() 4 nome_coluna = coluna 5 #total_parcial = df2[coluna].count() ----> 6 df2.loc[df2[coluna]<=6].shape[0] 7 percentual = df2[coluna].count() / df2[coluna].count() 8 lista.append([nome_coluna,total_parcial,percentual]) 3 frames /usr/local/lib/python3.7/dist-packages/pandas/core/ops/array_ops.py in comp_method_OBJECT_ARRAY(op, x, y) 54 result = libops.vec_compare(x.ravel(), y.ravel(), op) 55 else: ---> 56 result = libops.scalar_compare(x.ravel(), y, op) 57 return result.reshape(x.shape) 58 pandas/_libs/ops.pyx in pandas._libs.ops.scalar_compare() TypeError: '<=' not supported between instances of 'str' and 'int' ``` If I put the code that is giving the error alone in a line it works ``` df2.loc[df2['Pontualidade'] <= 6].shape[0] 1537 ``` What is the correct syntax? Thanks
2021/10/26
[ "https://Stackoverflow.com/questions/69726911", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17161157/" ]
I found the solution: seems like AWS is using the term "Subnet Group" in multiple services. I created the group in the service "ElastiCache" but it needs to be created in service "DocumentDB" (see screenshot below). [![enter image description here](https://i.stack.imgur.com/NGPT3.png)](https://i.stack.imgur.com/NGPT3.png)
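For anyone who prefers to create it programmatically rather than through the console, the same thing can be done with boto3's DocumentDB client (a sketch only -- the region, group name, description and subnet IDs below are placeholders, so double-check the call against the current boto3 docs):

```
import boto3

docdb = boto3.client('docdb', region_name='eu-central-1')  # hypothetical region

docdb.create_db_subnet_group(
    DBSubnetGroupName='my-docdb-subnet-group',              # placeholder name
    DBSubnetGroupDescription='Subnets for the DocumentDB cluster',
    SubnetIds=['subnet-0123456789abcdef0', 'subnet-0fedcba9876543210'],  # placeholder IDs
)
```

The point is the same as above: the subnet group must be created in the DocumentDB service, not in ElastiCache.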
I had a similar issue. Before you create the cluster, you need to have a Security Group setup, and there, you should be able to change the VPC selected by default. [![enter image description here](https://i.stack.imgur.com/lsPTl.png)](https://i.stack.imgur.com/lsPTl.png) Additional info [here](https://docs.aws.amazon.com/documentdb/latest/developerguide/get-started-guide.html)
14,660
68,736,258
We successfully trained a TensorFlow model based on five climate features and one binary (0 or 1) label. We want an output for an outside input of five new climate variable values that will be inputted into model.predict(). However, we got an error when we tried to input an array of five values. Thanks in advance! ``` def split_dataset(dataset, test_ratio=0.10): """Splits a panda dataframe in two.""" test_indices = np.random.rand(len(dataset)) < test_ratio return dataset[~test_indices], dataset[test_indices] train_ds_pd, test_ds_pd = split_dataset(dataset_df) print("{} examples in training, {} examples for testing.".format( len(train_ds_pd), len(test_ds_pd))) label = "Presence" train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(train_ds_pd, label=label) test_ds = tfdf.keras.pd_dataframe_to_tf_dataset(test_ds_pd, label=label) model_1 = tfdf.keras.RandomForestModel() model_1.compile( metrics=["accuracy"]) with sys_pipes(): model_1.fit(x=train_ds) evaluation = model_1.evaluate(test_ds, return_dict=True) print() for name, value in evaluation.items(): print(f"{name}: {value:.4f}") model_1.save("tfmodelmosquito") import numpy as np model_1=tf.keras.models.load_model ("tfmodelmosquito") import pandas as pd prediction = model_1.predict([9.0, 10.0, 11.0, 12.0, 13.0]) print (prediction) ``` Error: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-67-be5f2b7bc739> in <module>() 3 import pandas as pd 4 ----> 5 prediction = model.predict([[9.0,10.0,11.0,12.0,13.0]]) 6 print (prediction) 9 frames /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs) 984 except Exception as e: # pylint:disable=broad-except 985 if hasattr(e, "ag_error_metadata"): --> 986 raise e.ag_error_metadata.to_exception(e) 987 else: 988 raise ValueError: in user code: /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:1569 predict_function * return step_function(self, iterator) /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:1559 step_function ** outputs = model.distribute_strategy.run(run_step, args=(data,)) /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:1285 run return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:2833 call_for_each_replica return self._call_for_each_replica(fn, args, kwargs) /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:3608 _call_for_each_replica return fn(*args, **kwargs) /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:1552 run_step ** outputs = model.predict_step(data) /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:1525 predict_step return self(x, training=False) /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer.py:1030 __call__ outputs = call_fn(inputs, *args, **kwargs) /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/saving/saved_model/utils.py:69 return_outputs_and_add_losses outputs, losses = fn(*args, **kwargs) /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/saving/saved_model/utils.py:167 wrap_with_training_arg lambda: replace_training_and_call(False)) /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/utils/control_flow_util.py:110 smart_cond pred, true_fn=true_fn, 
false_fn=false_fn, name=name) /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/smart_cond.py:56 smart_cond return false_fn() /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/saving/saved_model/utils.py:167 <lambda> lambda: replace_training_and_call(False)) /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/saving/saved_model/utils.py:163 replace_training_and_call return wrapped_call(*args, **kwargs) /usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py:889 __call__ result = self._call(*args, **kwds) /usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py:933 _call self._initialize(args, kwds, add_initializers_to=initializers) /usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py:764 _initialize *args, **kwds)) /usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py:3050 _get_concrete_function_internal_garbage_collected graph_function, _ = self._maybe_define_function(args, kwargs) /usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py:3444 _maybe_define_function graph_function = self._create_graph_function(args, kwargs) /usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py:3289 _create_graph_function capture_by_value=self._capture_by_value), /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py:999 func_graph_from_py_func func_outputs = python_func(*func_args, **func_kwargs) /usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py:672 wrapped_fn out = weak_wrapped_fn().__wrapped__(*args, **kwds) /usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/function_deserialization.py:291 restored_function_body "\n\n".join(signature_descriptions))) ValueError: Could not find matching function to call loaded from the SavedModel. 
Got: Positional arguments (2 total): * Tensor("inputs:0", shape=(None, 5), dtype=float32) * False Keyword arguments: {} Expected these arguments to match one of the following 4 option(s): Option 1: Positional arguments (2 total): * {'Humidity': TensorSpec(shape=(None, 1), dtype=tf.float32, name='inputs/Humidity'), 'Cloud_Cover': TensorSpec(shape=(None, 1), dtype=tf.float32, name='inputs/Cloud_Cover'), 'Temperature': TensorSpec(shape=(None, 1), dtype=tf.float32, name='inputs/Temperature'), 'Pressure': TensorSpec(shape=(None, 1), dtype=tf.float32, name='inputs/Pressure'), 'Precipitation': TensorSpec(shape=(None, 1), dtype=tf.float32, name='inputs/Precipitation')} * False Keyword arguments: {} Option 2: Positional arguments (2 total): * {'Temperature': TensorSpec(shape=(None, 1), dtype=tf.float32, name='inputs/Temperature'), 'Precipitation': TensorSpec(shape=(None, 1), dtype=tf.float32, name='inputs/Precipitation'), 'Cloud_Cover': TensorSpec(shape=(None, 1), dtype=tf.float32, name='inputs/Cloud_Cover'), 'Humidity': TensorSpec(shape=(None, 1), dtype=tf.float32, name='inputs/Humidity'), 'Pressure': TensorSpec(shape=(None, 1), dtype=tf.float32, name='inputs/Pressure')} * True Keyword arguments: {} Option 3: Positional arguments (2 total): * {'Cloud_Cover': TensorSpec(shape=(None, 1), dtype=tf.float32, name='Cloud_Cover'), 'Humidity': TensorSpec(shape=(None, 1), dtype=tf.float32, name='Humidity'), 'Precipitation': TensorSpec(shape=(None, 1), dtype=tf.float32, name='Precipitation'), 'Temperature': TensorSpec(shape=(None, 1), dtype=tf.float32, name='Temperature'), 'Pressure': TensorSpec(shape=(None, 1), dtype=tf.float32, name='Pressure')} * False Keyword arguments: {} Option 4: Positional arguments (2 total): * {'Temperature': TensorSpec(shape=(None, 1), dtype=tf.float32, name='Temperature'), 'Precipitation': TensorSpec(shape=(None, 1), dtype=tf.float32, name='Precipitation'), 'Humidity': TensorSpec(shape=(None, 1), dtype=tf.float32, name='Humidity'), 'Cloud_Cover': TensorSpec(shape=(None, 1), dtype=tf.float32, name='Cloud_Cover'), 'Pressure': TensorSpec(shape=(None, 1), dtype=tf.float32, name='Pressure')} * True Keyword arguments: {} ```
2021/08/11
[ "https://Stackoverflow.com/questions/68736258", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16637888/" ]
This is because of the non-blocking, asynchronous nature of the `con.query()` function call. It starts the asynchronous operation and then executes the lines of code after it. Then, sometime LATER, it calls its callback. So, in this code of yours with my adding logging: ``` router.post('/login', (req, res) => { con.query("SELECT id, username, password FROM authorizedusers;", (err, result) => { if(err) throw err; for(var i = 0; i < result.length; i++) { if(req.body.username === result[i].username && req.body.password === result[i].password){ con.query(`SELECT id FROM students WHERE user_id = ${result[i].id};`, (err, result) => { if(err) throw err; console.log("got student_id"); req.session.student_id = result[0].id; }); req.session.is_logged_in = true; req.session.user_id = result[i].id; console.log("redirecting and finishing request"); return res.redirect('/'); } } return res.render('login', { msg: "Error! Invalid Credentials!" }); }); }); ``` You would get this logging: ``` redirecting and finishing request got student_id ``` So, you finished the request BEFORE you got the `student_id` and set it into the session. Thus, when you go to immediately try to use the data from the session, it isn't there yet. --- This is not a particularly easy problem to solve without promises because you have an older-style asynchronous call inside a `for` loop. This would be much easier if you use the promise interface for your SQL library as then you can sequence things and can use `await` to run only one query at a time - your current loop is running all the queries from the `for` loop in parallel which is not going to make it easy to select the one winner and stop everything else. If you switch to `mysql2` and then use the promise version: ``` const mysql = require('mysql2/promise'); ``` Then, you can do something like this (I'm no expert on mysql2, but hopefully you can see the idea for how you use a promise interface to solve your problem): ``` router.post('/login', async (req, res) => { try { const [rows, fields] = await con.query("SELECT id, username, password FROM authorizedusers;"); for (let i = 0; i < rows.length; i++) { if (req.body.username === rows[i].username && req.body.password === rows[i].password) { let [students, fields] = await con.query(`SELECT id FROM students WHERE user_id = ${rows[i].id};`); req.session.student_id = students[0].id; req.session.is_logged_in = true; req.session.user_id = rows[i].id; return res.redirect('/'); } } return res.render('login', { msg: "Error! Invalid Credentials!" }); } catch (e) { console.log(e); res.sendStatus(500); } }); ``` Your existing implementation has other deficiencies too that are corrected with this code: 1. The loop variable declared with `var` will never be correct inside the asynchronous callback function. 2. Your `if (err) throw err` error handling is insufficient. You need to capture the error, log it and send an error response when you get an error in a response handler. 3. You will always call `res.render()` before any of the database calls in the loop complete.
Sometimes the express session is only saved once the outer/direct handler function has finished. In that situation, if you want to save your session within a new async function, you should add the `next` function parameter to your handler. Then use it as the callback function to save your session. It should look like this:

```js
router.post('/login', (req, res, next) => {
    con.query(..., (err, result) => {
        for(...) {
            if(...){
                res.redirect('/');
                next();
                return;
            }
        }
        res.render('login', { msg: "Error! Invalid Credentials!" });
        next();
        return;
    });
});
```

Before that happens, make sure your code is executed correctly.
14,661
55,392,952
I have a Python script that runs selenium webdriver that executes in the following steps: 1) Execute a for loop that runs for x number of times 2) Within the main for loop, selenium web driver finds buttons on the page using xpath 3) For each button found by selenium, the nested for loop clicks each button 4) Once a button is clicked, a popup window opens, that redirects random websites within the popup 5) Further, the selenium webdriver finds other buttons within the popup and clicks the button, closes the popup and returns to main window to click the second button on the main website This code works fine while executing, but the issue occurs while selenium exceptions happen. 1) If the popup window has blank page, then selenium exception occurs, but the code written for that exception is not executing 2) If the popup closes by the main website after timeout(not closed by selenium webdriver), then NoSuchWindowException occours, but the under this exception never executes I have tried changing the code several times by adding if else condition, but not able to resolve NoSuchWindowException exceptio Below is the code: ``` for _ in range(100): print("main loop pass") fb_buttons = driver.find_elements_by_xpath('//a[contains(@class,"pages_button profile_view")]') for button in fb_buttons: try: time.sleep(10) button.click() driver.implicitly_wait(5) driver.switch_to.window(driver.window_handles[1]) driver.execute_script("window.scrollTo(0, 2500)") print("wiindow scrolled") like_right = driver.find_elements_by_xpath( "/html[1]/body[1]/div[1]/div[1]/div[4]/div[1]/div[1]/div[1]/div[1]/div[2]/div[2]/div[1]/div[1]/div[3]/div[1]/div[1]") like_left = driver.find_elements_by_xpath( "/html/body/div[1]/div/div[2]/div/div[1]/div[1]/div[2]/div/div[2]/table/tbody/tr/td[1]/a[1]") while like_right: for right in like_right: right.click() break while like_left: for left in like_left: left.click() break while like_post: for like in like_post: like.click() break time.sleep(5) driver.close() driver.implicitly_wait(5) driver.switch_to.window(driver.window_handles[0]) print("clicks executed successfully") continue except StaleElementReferenceException as e: driver.close() driver.switch_to.window(driver.window_handles[0]) popunder = driver.find_element_by_xpath("/html/body/div[1]/div[2]/div[3]/p[2]/a") if popunder is True: popunder.click() driver.implicitly_wait(5) else: continue print("exception occured-element is not attached to the page document") except ElementNotVisibleException as e: driver.close() driver.switch_to.window(driver.window_handles[0]) popunder = driver.find_element_by_xpath("/html/body/div[1]/div[2]/div[3]/p[2]/a") if popunder is True: popunder.click() driver.implicitly_wait(5) else: continue print("Exception occured - ElementNotVisibleException") except WebDriverException as e: driver.close() driver.switch_to.window(driver.window_handles[0]) popunder = driver.find_element_by_xpath("/html/body/div[1]/div[2]/div[3]/p[2]/a") if popunder is True: popunder.click() driver.implicitly_wait(5) else: continue print("Exception occured - WebDriverException") except NoSuchWindowException as e: driver.switch_to.window(driver.window_handles[0]) popunder = driver.find_element_by_xpath("/html/body/div[1]/div[2]/div[3]/p[2]/a") if popunder is True: popunder.click() driver.implicitly_wait(5) else: continue print("Exception - NoSuchWindowException - Switched to main window") else: time.sleep(5) refresh.click() print("refreshed") ``` I am trying to handle the NoSuchWindowException by the python code itself as everytime 
the popup window closes by main website, this exception occurs and python script stops to execute the next for loop: ``` File "C:\Program Files (x86)\Python37-32\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.NoSuchWindowException: Message: no such window: target window already closed from unknown error: web view not found (Session info: chrome=73.0.3683.86) (Driver info: chromedriver=73.0.3683.68 (47787ec04b6e38e22703e856e101e840b65afe72),platform=Windows NT 6.1.7601 SP1 x86_64) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:/Users/javed/PycharmProjects/clicks/test/fb-click-perfect-working.py", line 98, in <module> driver.close() File "C:\Program Files (x86)\Python37-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 688, in close self.execute(Command.CLOSE) File "C:\Program Files (x86)\Python37-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute self.error_handler.check_response(response) File "C:\Program Files (x86)\Python37-32\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.NoSuchWindowException: Message: no such window: target window already closed from unknown error: web view not found (Session info: chrome=73.0.3683.86) (Driver info: chromedriver=73.0.3683.68 (47787ec04b6e38e22703e856e101e840b65afe72),platform=Windows NT 6.1.7601 SP1 x86_64) Process finished with exit code 1 ```
2019/03/28
[ "https://Stackoverflow.com/questions/55392952", "https://Stackoverflow.com", "https://Stackoverflow.com/users/601787/" ]
Finally instead of: ```py conn2 = conn.connect_as_project(project_id) ``` I used: ```py conn2 = openstack.connection.Connection( region_name='RegionOne', auth=dict( auth_url='http://controller:5000/v3', username=u_name, password=password, project_id=project_id, user_domain_id='default'), compute_api_version='2', identity_interface='internal') ``` and it worked.
I did this just fine...the only difference is that the project is a new project and I have to give credentials to the user I was using. It was something like that: ```py project = sconn.create_project( name=name, domain_id='default') user_id = conn.current_user_id user = conn.get_user(user_id) roles = conn.list_roles() for r in roles: conn.identity.assign_project_role_to_user( project.id, user.id, r.id ) # Make sure the roles are correctly assigned to the user before proceed conn2 = self.conn.connect_as_project(project.name) ``` After that, anything created (servers, keypairs, networks, etc) is under the new project.
14,662
40,307,635
In the R xgboost package, I can specify `predictions=TRUE` to save the out-of-fold predictions during cross-validation, e.g.: ``` library(xgboost) data(mtcars) xgb_params = list( max_depth = 1, eta = 0.01 ) x = model.matrix(mpg~0+., mtcars) train = xgb.DMatrix(x, label=mtcars$mpg) res = xgb.cv(xgb_params, train, 100, prediction=TRUE, nfold=5) print(head(res$pred)) ``` How would I do the equivalent in the python package? I can't find a `prediction` argument for `xgboost.cv`in python.
2016/10/28
[ "https://Stackoverflow.com/questions/40307635", "https://Stackoverflow.com", "https://Stackoverflow.com/users/345660/" ]
I'm not sure if this is what you want, but you can accomplish this by using the sklearn wrapper for xgboost: (I know I'm using iris dataset as regression problem -- which it isn't but this is for illustration). ``` import xgboost as xgb from sklearn.cross_validation import cross_val_predict as cvp from sklearn import datasets X = datasets.load_iris().data[:, :2] y = datasets.load_iris().target xgb_model = xgb.XGBRegressor() y_pred = cvp(xgb_model, X, y, cv=3, n_jobs = 1) y_pred array([ 9.07209516e-01, 1.84738374e+00, 1.78878939e+00, 1.83672094e+00, 9.07209516e-01, 9.07209516e-01, 1.77482617e+00, 9.07209516e-01, 1.75681138e+00, 1.83672094e+00, 9.07209516e-01, 1.77482617e+00, 1.84738374e+00, 1.84738374e+00, 1.12216723e+00, 9.96944368e-01, 9.07209516e-01, 9.07209516e-01, 9.96944368e-01, 9.07209516e-01, 9.07209516e-01, 9.07209516e-01, 1.77482617e+00, 8.35850239e-01, 1.77482617e+00, 9.87186074e-01, 9.07209516e-01, 9.07209516e-01, 9.07209516e-01, 1.78878939e+00, 1.83672094e+00, 9.07209516e-01, 9.07209516e-01, 8.91427517e-01, 1.83672094e+00, 9.09049034e-01, 8.91427517e-01, 1.83672094e+00, 1.84738374e+00, 9.07209516e-01, 9.07209516e-01, 1.01038718e+00, 1.78878939e+00, 9.07209516e-01, 9.07209516e-01, 1.84738374e+00, 9.07209516e-01, 1.78878939e+00, 9.07209516e-01, 8.35850239e-01, 1.99947178e+00, 1.99947178e+00, 1.99947178e+00, 1.94922602e+00, 1.99975276e+00, 1.91500926e+00, 1.99947178e+00, 1.97454870e+00, 1.99947178e+00, 1.56287444e+00, 1.96453893e+00, 1.99947178e+00, 1.99715066e+00, 1.99947178e+00, 2.84575284e-01, 1.99947178e+00, 2.84575284e-01, 2.00303388e+00, 1.99715066e+00, 2.04597521e+00, 1.99947178e+00, 1.99975276e+00, 2.00527954e+00, 1.99975276e+00, 1.99947178e+00, 1.99947178e+00, 1.99975276e+00, 1.99947178e+00, 1.99947178e+00, 1.91500926e+00, 1.95735490e+00, 1.95735490e+00, 2.00303388e+00, 1.99975276e+00, 5.92201948e-04, 1.99947178e+00, 1.99947178e+00, 1.99715066e+00, 2.84575284e-01, 1.95735490e+00, 1.89267385e+00, 1.99947178e+00, 2.00303388e+00, 1.96453893e+00, 1.98232651e+00, 2.39597082e-01, 2.39597082e-01, 1.99947178e+00, 1.97454870e+00, 1.91500926e+00, 9.99531507e-01, 1.00023842e+00, 1.00023842e+00, 1.00023842e+00, 1.00023842e+00, 1.00023842e+00, 9.22234297e-01, 1.00023842e+00, 1.00100708e+00, 1.16144836e-01, 1.00077248e+00, 1.00023842e+00, 1.00023842e+00, 1.00100708e+00, 1.00023842e+00, 1.00077248e+00, 1.00023842e+00, 1.13711983e-01, 1.00023842e+00, 1.00135887e+00, 1.00077248e+00, 1.00023842e+00, 1.00023842e+00, 1.00023842e+00, 9.99531507e-01, 1.00077248e+00, 1.00023842e+00, 1.00023842e+00, 1.00023842e+00, 1.00023842e+00, 1.00023842e+00, 1.13711983e-01, 1.00023842e+00, 1.00023842e+00, 1.00023842e+00, 1.00023842e+00, 9.78098869e-01, 1.00023842e+00, 1.00023842e+00, 1.00023842e+00, 1.00023842e+00, 1.00023842e+00, 1.00023842e+00, 1.00077248e+00, 9.99531507e-01, 1.00023842e+00, 1.00100708e+00, 1.00023842e+00, 9.78098869e-01, 1.00023842e+00], dtype=float32) ```
This is possible with `xgboost.cv()` but it is a bit hacky. It uses the callbacks and ... a global variable which I'm told is not desirable.

```
def oof_prediction():
    """
    Dirty global variable callback hack.
    """

    global cv_prediction_dict

    def callback(env):
        """internal function"""        
        cv_prediction_list = []
        for i in [0, 1, 2, 3, 4]:
            cv_prediction_list.append([env.cvfolds[i].bst.predict(env.cvfolds[i].dtest)])
        cv_prediction_dict['cv'] = cv_prediction_list

    return callback
```

Now we can call the callback from `xgboost.cv()` as follows.

```
cv_prediction_dict = {}
xgb.cv(xgb_params, train, 100, callbacks=[oof_prediction()], nfold=5)
pos_oof_predictions = cv_prediction_dict.copy()
```

It will return the out-of-fold prediction for the last iteration/num_boost_round, even if early_stopping is used. I believe this is something the R `predictions=TRUE` functionality [does/did](https://github.com/dmlc/xgboost/issues/1188) not do correctly.

---

*Hack disclaimer: I know this is rather hacky but it is a work around my poor understanding of how the callback is working. If anyone knows how to make this better then please comment.*
14,663
33,879,523
Is there a way in Python to generate a continuous series of beeps in increasing amplitude and export it into a WAV file?
2015/11/23
[ "https://Stackoverflow.com/questions/33879523", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5192982/" ]
I've based this on the answer to the previous question and added a lot of comments. Hopefully this makes it clear. You'll probably want to introduce a for loop to control the number of beeps and the increasing volume. ``` #!/usr/bin/python # based on : www.daniweb.com/code/snippet263775.html import math import wave import struct # Audio will contain a long list of samples (i.e. floating point numbers describing the # waveform). If you were working with a very long sound you'd want to stream this to # disk instead of buffering it all in memory list this. But most sounds will fit in # memory. audio = [] sample_rate = 44100.0 def append_silence(duration_milliseconds=500): """ Adding silence is easy - we add zeros to the end of our array """ num_samples = duration_milliseconds * (sample_rate / 1000.0) for x in range(int(num_samples)): audio.append(0.0) return def append_sinewave( freq=440.0, duration_milliseconds=500, volume=1.0): """ The sine wave generated here is the standard beep. If you want something more aggresive you could try a square or saw tooth waveform. Though there are some rather complicated issues with making high quality square and sawtooth waves... which we won't address here :) """ global audio # using global variables isn't cool. num_samples = duration_milliseconds * (sample_rate / 1000.0) for x in range(int(num_samples)): audio.append(volume * math.sin(2 * math.pi * freq * ( x / sample_rate ))) return def save_wav(file_name): # Open up a wav file wav_file=wave.open(file_name,"w") # wav params nchannels = 1 sampwidth = 2 # 44100 is the industry standard sample rate - CD quality. If you need to # save on file size you can adjust it downwards. The stanard for low quality # is 8000 or 8kHz. nframes = len(audio) comptype = "NONE" compname = "not compressed" wav_file.setparams((nchannels, sampwidth, sample_rate, nframes, comptype, compname)) # WAV files here are using short, 16 bit, signed integers for the # sample size. So we multiply the floating point data we have by 32767, the # maximum value for a short integer. NOTE: It is theortically possible to # use the floating point -1.0 to 1.0 data directly in a WAV file but not # obvious how to do that using the wave module in python. for sample in audio: wav_file.writeframes(struct.pack('h', int( sample * 32767.0 ))) wav_file.close() return append_sinewave(volume=0.25) append_silence() append_sinewave(volume=0.5) append_silence() append_sinewave() save_wav("output.wav") ```
I added minor improvements to the [JCx](https://stackoverflow.com/users/3818191/jcx) code above. As author said, its not cool to use global variables. So I wrapped his solution into class, and it works just fine: ``` import math import wave import struct class BeepGenerator: def __init__(self): # Audio will contain a long list of samples (i.e. floating point numbers describing the # waveform). If you were working with a very long sound you'd want to stream this to # disk instead of buffering it all in memory list this. But most sounds will fit in # memory. self.audio = [] self.sample_rate = 44100.0 def append_silence(self, duration_milliseconds=500): """ Adding silence is easy - we add zeros to the end of our array """ num_samples = duration_milliseconds * (self.sample_rate / 1000.0) for x in range(int(num_samples)): self.audio.append(0.0) return def append_sinewave( self, freq=440.0, duration_milliseconds=500, volume=1.0): """ The sine wave generated here is the standard beep. If you want something more aggresive you could try a square or saw tooth waveform. Though there are some rather complicated issues with making high quality square and sawtooth waves... which we won't address here :) """ num_samples = duration_milliseconds * (self.sample_rate / 1000.0) for x in range(int(num_samples)): self.audio.append(volume * math.sin(2 * math.pi * freq * ( x / self.sample_rate ))) return def save_wav(self, file_name): # Open up a wav file wav_file=wave.open(file_name,"w") # wav params nchannels = 1 sampwidth = 2 # 44100 is the industry standard sample rate - CD quality. If you need to # save on file size you can adjust it downwards. The stanard for low quality # is 8000 or 8kHz. nframes = len(self.audio) comptype = "NONE" compname = "not compressed" wav_file.setparams((nchannels, sampwidth, self.sample_rate, nframes, comptype, compname)) # WAV files here are using short, 16 bit, signed integers for the # sample size. So we multiply the floating point data we have by 32767, the # maximum value for a short integer. NOTE: It is theortically possible to # use the floating point -1.0 to 1.0 data directly in a WAV file but not # obvious how to do that using the wave module in python. for sample in self.audio: wav_file.writeframes(struct.pack('h', int( sample * 32767.0 ))) wav_file.close() return if __name__ == "__main__": bg = BeepGenerator() bg.append_sinewave(volume=0.25, duration_milliseconds=100) bg.append_silence() bg.append_sinewave(volume=0.5, duration_milliseconds=700) bg.append_silence() bg.save_wav("output.wav") ```
14,666
3,263,665
I am trying to create a line-by-line filter in python. However, `stdin.readlines()` reads all lines in before starting to process, and python runs out of memory (`MemoryError`). How can I have *just one line* in memory at a time? The kind of code I have: ``` for line in sys.stdin.readlines(): if( filter.apply( line ) ): print( line ) ``` (note: I'm on 2.6)
2010/07/16
[ "https://Stackoverflow.com/questions/3263665", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6610/" ]
``` for line in sys.stdin: ... ``` Or call `.readline()` in a loop.
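Put together with the asker's filter, that looks roughly like this (a minimal sketch -- `filter.apply` is the asker's existing object, assumed to be defined elsewhere):

```
import sys

for line in sys.stdin:          # iterates lazily, one line in memory at a time
    if filter.apply(line):      # the asker's filter object
        sys.stdout.write(line)  # `line` already ends with '\n'
```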
``` import sys while 1: line = sys.stdin.readline() if not line: break if (filter.apply(line)): print(line) ```
14,667
62,232,923
I'm pretty new to python but I need some help parsing a string with a unique structure. I have a CSV file with a column with the following structure: ``` [Chakroff, Alek; Young, Liane] Boston Coll, Chestnut Hill, MA 02167 USA; [Russell, Pascale Sophie] Univ Surrey, Guildford, Surrey, England; [Piazza, Jared] Univ Lancaster, Lancaster, England ``` I want to just pull the country name present right before the semicolons. So for the above, I want "USA, England, England". The overall structure of the string is: ``` [last name, first name], university, address, zip code, country; ``` How do I get just the countries with this string layout? Is there a way to specify that I want the country name which right before the semicolon? Or maybe an even easier way to pull the information I need? Please go easy on me, I'm not the best programmer by any means :)
2020/06/06
[ "https://Stackoverflow.com/questions/62232923", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13694393/" ]
You can take advantage of the unique substring before the elements you want: ``` # split string on substring '; [' for i in s.split('; ['): # split each resulting string on space char, return last element of array print(i.split()[-1]) USA England England ```
You can use the split() method for strings ``` states = [person_record.split(",")[-1] for person_record in records.split("; [")] ``` Where records is the string you get from your input.
14,668
51,688,822
Can anybody help me please? I am new to Machine Learning Studio. I am using the free Azure Machine Learning Studio workspace; when I try to use "Run All" in a cell, I get the following error.

```
ValueError                                Traceback (most recent call last)
<ipython-input-1-17afe06b8f16> in <module>()
      1 from azureml import Workspace
      2 
----> 3 ws = Workspace()
      4 ds = ws.datasets['Lemonadecsv.csv']

home/nbuser/anaconda3_23/lib/python3.4/site-packages/azureml/__init__.py in __init__(self, workspace_id, authorization_token, endpoint)
    883             endpoint = https://studio.azureml.net
    884         """
--> 885         workspace_id, authorization_token, endpoint, management_endpoint = _get_workspace_info(workspace_id, authorization_token, endpoint, None)
    886 
    887         _not_none_or_empty('workspace_id', workspace_id)

/home/nbuser/anaconda3_23/lib/python3.4/site-packages/azureml/__init__.py in _get_workspace_info(workspace_id, authorization_token, endpoint, management_endpoint)
    849 
    850     if workspace_id is None:
--> 851         raise ValueError('workspace_id not provided and not available via config')
    852     if authorization_token is None:
    853         raise ValueError('authorization_token not provided and not available via config')

ValueError: workspace_id not provided and not available via config

      5 frame = ds.to_dataframe()
```
2018/08/04
[ "https://Stackoverflow.com/questions/51688822", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5892761/" ]
I have the same problem as you. I have contacted tech support, so once I get an answer, I will update this post.

Meanwhile, you can use this **WORKAROUND**: get the missing parameters and pass them in as strings.

```
ws = Workspace("[WORKSPACE_ID]", "[AUTH_TOKEN]")
```

Where to get them:

[WORKSPACE_ID]: Azure ML Studio -> Settings -> Name Tab -> WorkspaceId

[AUTH_TOKEN]: Azure ML Studio -> Settings -> Authorization Token Tab -> Primary AUTH Token.
The easiest way is to right-click on the data set you have and choose "Generate Data Access Code". The system will do it for you, and all you have to do is copy it into the frame and it will all be there. I hope this helps!
14,670
46,230,413
I'm trying to run DMelt programs (<http://jwork.org/dmelt/>) using Java 9 (JDK 9), and it gives me errors such as:

```
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.python.core.PySystemState (file:/dmelt/jehep/lib/jython/jython.jar) to method java.io.Console.encoding()
WARNING: Please consider reporting this to the maintainers of org.python.core.PySystemState
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
```

How can I fix it? I was trying to add `--illegal-access=permit` to the last line of the script "dmelt.sh" (I'm using bash in Linux), but this did not solve the problem.

I'm very frustrated with this. I was using this program very often, for a very long time. Maybe I should never have moved to JDK 9.
2017/09/15
[ "https://Stackoverflow.com/questions/46230413", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8612074/" ]
To avoid this error, you need to redefine `maven-war-plugin` to a newer one. For example: ```xml <plugins> . . . <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-war-plugin</artifactId> <version>3.2.2</version> </plugin> </plugins> ``` --- Works for `jdk-12`.
Since the Java update 9, the "illegal reflective access operation has occurred" warning occurs. To remove the warning message. You can replace maven-compiler-plugin with maven-war-plugin and/or updating the maven-war-plugin with the latest version in your pom.xml. Following are 2 examples: Change version from: ```xml <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-war-plugin</artifactId> <version>2.4</version> ... ... </plugin> ``` To: ```xml <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-war-plugin</artifactId> <version>3.3.1</version> ... ... </plugin> ``` Change the artifactId and version From: ```xml <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>3.5.1</version> <configuration> <source>1.8</source> <target>1.8</target> </configuration> </plugin> ``` TO: ```xml <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-war-plugin</artifactId> <version>3.3.1</version> <executions> <execution> <id>prepare-war</id> <phase>prepare-package</phase> <goals> <goal>exploded</goal> </goals> </execution> </executions> </plugin> ``` When i re-run Maven Build or Maven Install, the "illegal reflective access operation has occurred" is gone.
14,672
17,586,599
Using win32com.client, I'm attempting to create a simple shortcut in a folder. However, I would like the shortcut to have arguments, but I keep getting the following error.

```
Traceback (most recent call last):
  File "D:/Projects/Ms/ms.py", line 153, in <module>
    scut.TargetPath = '"C:/python27/python.exe" "D:/Projects/Ms/msd.py" -b ' + str(loop7)
  File "C:\Python27\lib\site-packages\win32com\client\dynamic.py", line 570, in __setattr__
    raise AttributeError("Property '%s.%s' can not be set." % (self._username_, attr))
AttributeError: Property '<unknown>.TargetPath' can not be set.
```

My code looks like this. I've tried multiple different variants but can't seem to get it right. What am I doing wrong?

```
ws = win32com.client.Dispatch("wscript.shell")
scut = ws.CreateShortcut("D:/Projects/Ms/TestDir/testlink.lnk")
scut.TargetPath = '"C:/python27/python.exe" "D:/Projects/Ms/msd.py" -b 0'
scut.Save()
```
2013/07/11
[ "https://Stackoverflow.com/questions/17586599", "https://Stackoverflow.com", "https://Stackoverflow.com/users/721386/" ]
Your code works for me without error. (Windows XP 32bit, Python 2.7.5, pywin32-216). (I slightly modified your code because `TargetPath` should contain only executable path.) ``` import win32com.client ws = win32com.client.Dispatch("wscript.shell") scut = ws.CreateShortcut('run_idle.lnk') scut.TargetPath = '"c:/python27/python.exe"' scut.Arguments = '-m idlelib.idle' scut.Save() ``` I got AttributeError similar to yours when I tried following (assign list to `Arguments` property.) ``` >>> scut.Arguments = [] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "c:\python27\lib\site-packages\win32com\client\dynamic.py", line 570, in __setattr__ raise AttributeError("Property '%s.%s' can not be set." % (self._username_, attr)) AttributeError: Property '<unknown>.Arguments' can not be set. ```
"..TargetPath should contain only [an] executable path." is incorrect in two ways : 1. The target may also contain the executable's arguments. For instance, I have a file [ D:\DATA\CCMD\Expl.CMD ] whose essential line of code is START Explorer.exe "%Target%" An example of its use is D:\DATA\CCMD\Expl.CMD "D:\DATA\SYSTEM - NEW INSTALL PROGS" This entire line is the "executable" you are referring to. 1. The target doesn't have to be an "executable" at all. It may be *any* file in which the OS can act upon, such as those file types whose default actions run executable with the files as its arguments, such as : "My File.txt" The "default action" on this file type is to open it with a text editor. The actual executable file run isn't explicit.
14,682
59,209,756
I'm new to Django 1.11 LTS and I'm trying to solve this error from a very long time. Here is my code where the error is occurring: model.py: ``` name = models.CharField(db_column="name", db_index=True, max_length=128) description = models.TextField(db_column="description", null=True, blank=True) created = models.DateTimeField(db_column="created", auto_now_add=True, blank=True) updated = models.DateTimeField(db_column="updated", auto_now=True, null=True) active = models.BooleanField(db_column="active", default=True) customer = models.ForeignKey(Customer, db_column="customer_id") class Meta(object): db_table="customer_build" unique_together = ("name", "customer") def __unicode__(self): return u"%s [%s]" % (self.name, self.customer) def get(self,row,customer): build_name = row['build'] return self._default_manager.filter(name = build_name, customer_id = customer.id).first() def add(self,row): pass ``` Views.py block: ``` for row in others: rack_name = row['rack'] build = Build().get(row,customer) try: rack = Rack().get(row,customer) except Exception as E: msg = {'exception': str(E), 'where':'Non-server device portmap creation', 'doing_what': 'Rack with name {} does not exist in build {}'.format(rack_name,build.name), 'current_row': row, 'status': 417} log_it('An error occurred: {}'.format(msg)) return JsonResponse(msg, status = 417) ``` Error traceback: ``` File "/usr/local/lib/python3.6/dist-packages/django/core/handlers/exception.py", line 41, in inner response = get_response(request) File "/usr/local/lib/python3.6/dist-packages/django/core/handlers/base.py", line 249, in _legacy_get_response response = self._get_response(request) File "/usr/local/lib/python3.6/dist-packages/django/core/handlers/base.py", line 187, in _get_response response = self.process_exception_by_middleware(e, request) File "/usr/local/lib/python3.6/dist-packages/django/core/handlers/base.py", line 185, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "./customers/views.py", line 2513, in create_rack add_rack_status = add_rack(customer, csv_file) File "./customers/views.py", line 1803, in add_rack build = Build().get(row,customer) File "./customers/models.py", line 69, in get return self._default_manager.filter(name = build_name, customer_id = customer.id).first() AttributeError: 'Build' object has no attribute '_default_manager' ``` I'm trying to understand the issue so that I can fix it. Thanks in advance. Regards, Bharath
2019/12/06
[ "https://Stackoverflow.com/questions/59209756", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5368168/" ]
<https://www.anylogic.com/files/anylogic-professional-8.3.3.exe> For any other version, just put the version number you want in the URL and you will likely be able to download it. If using a Mac: <https://www.anylogic.com/files/anylogic-professional-8.3.3.dmg>
In addition to Felipe's answer, you can always ask > > support@anylogic.com > > > if you need *very* old versions. I believe that AL7.x is not available online anymore but they happily send the installers if you need them.
14,683
7,454,590
I'm trying to unit test a handler with webapp2 and am running into what has to be just a stupid little error. I'd like to be able to use webapp2.uri\_for in the test, but I can't seem to do that: ``` def test_returns_200_on_home_page(self): response = main.app.get_response(webapp2.uri_for('index')) self.assertEqual(200, response.status_int) ``` If I just do `main.app.get_response('/')` it works just fine. The exception received is: ``` Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/unittest/case.py", line 318, in run testMethod() File "tests.py", line 27, in test_returns_200_on_home_page webapp2.uri_for('index') File "/Users/.../webapp2_example/lib/webapp2.py", line 1671, in uri_for return request.app.router.build(request, _name, args, kwargs) File "/Users/.../webapp2_example/lib/webapp2_extras/local.py", line 173, in __getattr__ return getattr(self._get_current_object(), name) File "/Users/.../webapp2_example/lib/webapp2_extras/local.py", line 136, in _get_current_object raise RuntimeError('no object bound to %s' % self.__name__) RuntimeError: no object bound to request ``` Is there some silly setup I'm missing?
2011/09/17
[ "https://Stackoverflow.com/questions/7454590", "https://Stackoverflow.com", "https://Stackoverflow.com/users/233242/" ]
I think the only option is to set a dummy request just to be able to create URIs for the test:

```
def test_returns_200_on_home_page(self):
    # Set a dummy request just to be able to use uri_for().
    req = webapp2.Request.blank('/')
    req.app = main.app
    main.app.set_globals(app=main.app, request=req)

    response = main.app.get_response(webapp2.uri_for('index'))
    self.assertEqual(200, response.status_int)
```

Never use `set_globals()` outside of tests. It is called by the WSGI application to set the active app and request in a thread-safe manner.
`webapp2.uri_for()` assumes that you are in a web request context, and it fails because it cannot find the `request` object. Instead of working around this, you could think of your application as a black box and call it using literal URIs, like the `'/'` you mention. After all, you want to simulate a normal web request, and a web browser will also use URIs, not internal routing shortcuts.
14,684
45,949,105
I created a GUI with wxPython to run a stats model using statsmodels SARIMAX(). I put all five scripts in one file and tried to use

```
pyinstaller --onedir <mainscript.py>
```

to create a compiled application. After the pyinstaller process completed, I ran the generated application in the dist folder but it gave this error:

```
c:\users\appdata\local\temp\pip-build-dm6yoc\pyinstaller\PyInstaller\loader\pyimod03_importers.py:389: Traceback (most recent call last):
  File "envs\conda_env1\myApp\mainscript.py", line 2, in <module>
  File "c:\users\appdata\local\temp\pip-build-dm6yoc\pyinstaller\PyInstaller\loader\pyimod03_importers.py", line 389, in load_module
  File "envs\conda_env1\myApp\my_algorithm.py", line 3, in <module>
  File "c:\users\appdata\local\temp\pip-builddm6yoc\pyinstaller\PyInstaller\loader\pyimod03_importers.py", line 389, in load_module
  File "site-packages\statsmodels\api.py", line 22, in <module>
  File "c:\users\appdata\local\temp\pip-builddm6yoc\pyinstaller\PyInstaller\loader\pyimod03_importers.py", line 389, in load_module
  File "site-packages\statsmodels\__init__.py", line 8, in <module>
ImportError: No module named tools.sm_exceptions
Failed to execute script mainscript
```

I used Python 2.7 on Windows 8 to create the GUI and the statsmodels algorithm in a conda environment, but PyInstaller was installed with pip. I wonder if this is what caused the error? Any advice or a link to an associated discussion would be appreciated! (I don't even know which topic this problem falls under ...)
2017/08/29
[ "https://Stackoverflow.com/questions/45949105", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7644284/" ]
If you have dark background in your application and want to use light colors for your ngx charts then you can use this method. It will use official code for ngx dark theme and show light colors for the chart labels. You can also change the color code in sccss variables and things work as you need. I solved it using the way used on the official website. In you application SCSS file for styles, add the following styles: ``` .dark { /** * Backgrounds */ $color-bg-darkest: #13141b; $color-bg-darker: #1b1e27; $color-bg-dark: #232837; $color-bg-med: #2f3646; $color-bg-light: #455066; $color-bg-lighter: #5b6882; /** * Text */ $color-text-dark: #72809b; $color-text-med-dark: #919db5; $color-text-med: #A0AABE; $color-text-med-light: #d9dce1; $color-text-light: #f0f1f6; $color-text-lighter: #fff; background: $color-bg-darker; .ngx-charts { text { fill: $color-text-med; } .tooltip-anchor { fill: rgb(255, 255, 255); } .gridline-path { stroke: $color-bg-med; } .refline-path { stroke: $color-bg-light; } .reference-area { fill: #fff; } .grid-panel { &.odd { rect { fill: rgba(255, 255, 255, 0.05); } } } .force-directed-graph { .edge { stroke: $color-bg-light; } } .number-card { p { color: $color-text-light; } } .gauge { .background-arc { path { fill: $color-bg-med; } } .gauge-tick { path { stroke: $color-text-med; } text { fill: $color-text-med; } } } .linear-gauge { .background-bar { path { fill: $color-bg-med; } } .units { fill: $color-text-dark; } } .timeline { .brush-background { fill: rgba(255, 255, 255, 0.05); } .brush { .selection { fill: rgba(255, 255, 255, 0.1); stroke: #aaa; } } } .polar-chart .polar-chart-background { fill: rgb(30, 34, 46); } } .chart-legend { .legend-labels { background: rgba(255, 255, 255, 0.05) !important; } .legend-item { &:hover { color: #fff; } } .legend-label { &:hover { color: #fff !important; } .active { .legend-label-text { color: #fff !important; } } } .scale-legend-label { color: $color-text-med; } } .advanced-pie-legend { color: $color-text-med; .legend-item { &:hover { color: #fff !important; } } } .number-card .number-card-label { font-size: 0.8em; color: $color-text-med; } } ``` Once this has been added make sure you have this scss file linked in your angular.json file. After that you just have to add class dark in the first wrapping component of your ngx chart like this for example: ``` <div class="areachart-wrapper dark"> <ngx-charts-area-chart [view]="view" [scheme]="colorScheme" [results]="data" [gradient]="gradient" [xAxis]="showXAxis" [yAxis]="showYAxis" [legend]="showLegend" [showXAxisLabel]="showXAxisLabel" [showYAxisLabel]="showYAxisLabel" [xAxisLabel]="xAxisLabel" [yAxisLabel]="yAxisLabel" [autoScale]="autoScale" [curve]="curve" (select)="onSelect($event)"> </ngx-charts-area-chart> </div> ``` This will make your charts look exactly as shown on the official website with dark theme for the charts: <https://swimlane.github.io/ngx-charts/#/ngx-charts/bar-vertical>. [![change ngx chart label color](https://i.stack.imgur.com/Gphgs.png)](https://i.stack.imgur.com/Gphgs.png)
Axis tick formatting can be done as shown here: <https://github.com/swimlane/ngx-charts/blob/master/demo/app.component.html>; this demo has individual element classes.
14,685
30,489,449
How can I see a warning again without restarting python. Now I see them only once. Consider this code for example: ``` import pandas as pd pd.Series([1]) / 0 ``` I get ``` RuntimeWarning: divide by zero encountered in true_divide ``` But when I run it again it executes silently. **How can I see the warning again without restarting python?** --- I have tried to do ``` del __warningregistry__ ``` but that doesn't help. Seems like only some types of warnings are stored there. For example if I do: ``` def f(): X = pd.DataFrame(dict(a=[1,2,3],b=[4,5,6])) Y = X.iloc[:2] Y['c'] = 8 ``` then this will raise warning only first time when `f()` is called. However, now when if do `del __warningregistry__` I can see the warning again. --- What is the difference between first and second warning? Why only the second one is stored in this `__warningregistry__`? Where is the first one stored?
2015/05/27
[ "https://Stackoverflow.com/questions/30489449", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3549680/" ]
> > How can I see the warning again without restarting python? > > > As long as you do the following at the beginning of your script, you will not need to restart. ``` import pandas as pd import numpy as np import warnings np.seterr(all='warn') warnings.simplefilter("always") ``` At this point every time you attempt to divide by zero, it will display ``` RuntimeWarning: divide by zero encountered in true_divide ``` --- Explanation: We are setting up a couple warning filters. The first ([`np.seterr`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.seterr.html)) is telling NumPy how it should handle warnings. I have set it to show warnings on *all*, but if you are only interested in seeing the Divide by zero warnings, change the parameter from `all` to `divide`. Next we change how we want the `warnings` module to always display warnings. We do this by setting up a [warning filter](https://docs.python.org/2/library/warnings.html#the-warnings-filter). > > What is the difference between first and second warning? Why only the second one is stored in this \_\_warningregistry\_\_? Where is the first one stored? > > > This is described in the [bug report](https://bugs.python.org/msg75117) reporting this issue: > > If you didn't raise the warning before using the simple filter, this > would have worked. The undesired behavior is because of > \_\_warningsregistry\_\_. It is set the first time the warning is emitted. > When the second warning comes through, the filter isn't even looked at. > I think the best way to fix this is to invalidate \_\_warningsregistry\_\_ > when a filter is used. It would probably be best to store warnings data > in a global then instead of on the module, so it is easy to invalidate. > > > Incidentally, the [bug](https://bugs.python.org/issue4180) has been closed as fixed for versions 3.4 and 3.5.
`warnings` is a pretty awesome standard library module. You're going to enjoy getting to know it :) A little background ------------------- The default behavior of `warnings` is to only show a particular warning, coming from a particular line, on its first occurrence. For instance, the following code will result in two warnings shown to the user: ```py import numpy as np # 10 warnings, but only the first copy will be shown for i in range(10): np.true_divide(1, 0) # This is on a separate line from the other "copies", so its warning will show np.true_divide(1, 0) ``` You have a few options to change this behavior. Option 1: Reset the warnings registry ------------------------------------- when you want python to "forget" what warnings you've seen before, you can use [`resetwarnings`](https://docs.python.org/3/library/warnings.html#warnings.resetwarnings): ```py # warns every time, because the warnings registry has been reset for i in range(10): warnings.resetwarnings() np.true_divide(1, 0) ``` Note that this also resets any warning configuration changes you've made. Which brings me to... Option 2: Change the warnings configuration ------------------------------------------- The [warnings module documentation](https://docs.python.org/3/library/warnings.html) covers this in greater detail, but one straightforward option is just to use a `simplefilter` to change that default behavior. ```py import warnings import numpy as np # Show all warnings warnings.simplefilter('always') for i in range(10): # Now this will warn every loop np.true_divide(1, 0) ``` Since this is a global configuration change, it has global effects which you'll likely want to avoid (all warnings anywhere in your application will show every time). A less drastic option is to use the context manager: ```py with warnings.catch_warnings(): warnings.simplefilter('always') for i in range(10): # This will warn every loop np.true_divide(1, 0) # Back to normal behavior: only warn once for i in range(10): np.true_divide(1, 0) ``` There are also more granular options for changing the configuration on specific types of warnings. For that, check out the [docs](https://docs.python.org/3/library/warnings.html#overriding-the-default-filter).
14,690
15,784,537
Purpose: Given a PDB file, prints out all pairs of Cysteine residues forming disulfide bonds in the tertiary protein structure. Licence: GNU GPL Written By: Eric Miller ``` #!/usr/bin/env python import math def getDistance((x1,y1,z1),(x2,y2,z2)): d = math.sqrt(pow((x1-x2),2)+pow((y1-y2),2)+pow((z1-z2),2)); return round(d,3); def prettyPrint(dsBonds): print "Residue 1\tResidue 2\tDistance"; for (r1,r2,d) in dsBonds: print " {0}\t\t {1}\t\t {2}".format(r1,r2,d); def main(): pdbFile = open('2v5t.pdb','r'); maxBondDist = 2.5; isCysLine = lambda line: (line[0:4] == "ATOM" and line[13:15] == "SG"); cysLines = [line for line in pdbFile if isCysLine(line)]; pdbFile.close(); getCoords = lambda line:(float(line[31:38]), float(line[39:46]),float(line[47:54])); cysCoords = map(getCoords, cysLines); dsBonds = []; for i in range(len(cysCoords)-1): for j in range(i+1,len(cysCoords)): dist = getDistance(cysCoords[i],cysCoords[j]); residue1 = int(cysLines[i][23:27]); residue2 = int(cysLines[j][23:27]); if (dist < maxBondDist): dsBonds.append((residue1,residue2,dist)); prettyPrint(dsBonds); if __name__ == "__main__": main() ``` When I try to run this script I get indentation problem. I have 2v5t.pdb (required to run the above script) in my working directory. Any solution?
2013/04/03
[ "https://Stackoverflow.com/questions/15784537", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2176228/" ]
For me the indentation is broken within 'prettyPrint' and in '`__main__`'. Also, there is no need to use ';' in Python. Try this:

```
#!/usr/bin/env python
import math

# Input: Two 3D points of the form (x,y,z).
# Output: Euclidean distance between the points.
def getDistance((x1, y1, z1), (x2, y2, z2)):
    d = math.sqrt(pow((x1 - x2), 2) + pow((y1 - y2), 2) + pow((z1 - z2), 2))
    return round(d, 3)

# Purpose: Prints a list of 3-tuples (r1,r2,d). R1 and r2 are
# residue numbers, and d is the distance between their respective
# gamma sulfur atoms.
def prettyPrint(dsBonds):
    print "Residue 1\tResidue 2\tDistance"
    for r1, r2, d in dsBonds:
        print " {0}\t\t {1}\t\t {2}".format(r1, r2, d)

# Purpose: Find all pairs of cysteine residues whose gamma sulfur atoms
# are within maxBondDist of each other.
def main():
    pdbFile = open('2v5t.pdb','r')

    #Max distance to consider a disulfide bond.
    maxBondDist = 2.5

    # Anonymous function to check if a line from the PDB file is a gamma
    # sulfur atom from a cysteine residue.
    isCysLine = lambda line: (line[0:4] == "ATOM" and line[13:15] == "SG")
    cysLines = [line for line in pdbFile if isCysLine(line)]
    pdbFile.close()

    # Anonymous function to get (x,y,z) coordinates in angstroms for
    # the location of a cysteine residue's gamma sulfur atom.
    getCoords = lambda line:(float(line[31:38]),
                             float(line[39:46]), float(line[47:54]))
    cysCoords = map(getCoords, cysLines)

    # Make a list of all residue pairs classified as disulfide bonds.
    dsBonds = []
    for i in range(len(cysCoords)-1):
        for j in range(i+1, len(cysCoords)):
            dist = getDistance(cysCoords[i], cysCoords[j])
            residue1 = int(cysLines[i][23:27])
            residue2 = int(cysLines[j][23:27])
            if dist < maxBondDist:
                dsBonds.append((residue1,residue2,dist))

    prettyPrint(dsBonds)

if __name__ == "__main__":
    main()
```
This: ``` if __name__ == "__main__": main() ``` Should be: ``` if __name__ == "__main__": main() ``` Also, the python interpreter will give you information on the IndentationError *down to the line*. I strongly suggest reading the error messages provided, as developers write them for a reason.
14,691
72,060,798
In python I am trying to lookup the relevant price depending on qty from a list of scale prices. For example when getting a quotation request: ``` Product Qty Price 0 A 6 1 B 301 2 C 1 3 D 200 4 E 48 ``` Price list with scale prices: ``` Product Scale Qty Scale Price 0 A 1 48 1 A 5 43 2 A 50 38 3 B 1 10 4 B 10 9 5 B 50 7 6 B 100 5 7 B 150 2 8 C 1 300 9 C 2 250 10 C 3 200 11 D 1 5 12 D 100 3 13 D 200 1 14 E 1 100 15 E 10 10 16 E 100 1 ``` Output that I would like: ``` Product Qty Price 0 A 6 43 1 B 301 2 2 C 1 300 3 D 200 1 4 E 48 10 ```
2022/04/29
[ "https://Stackoverflow.com/questions/72060798", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18991198/" ]
Try with `merge_asof`: ``` output = (pd.merge_asof(df2.sort_values("Qty"),df1.sort_values("Scale Qty"),left_on="Qty",right_on="Scale Qty",by="Product") .sort_values("Product", ignore_index=True) .drop("Scale Qty", axis=1) .rename(columns={"Scale Price":"Price"})) >>> output Product Qty Price 0 A 6 43 1 B 301 2 2 C 1 300 3 D 200 1 4 E 48 10 ``` ###### Inputs: ``` df1 = pd.DataFrame({'Product': ['A','A','A','B','B','B','B','B','C','C','C','D','D','D','E','E','E'], 'Scale Qty': [1, 5, 50, 1, 10, 50, 100, 150, 1, 2, 3, 1, 100, 200, 1, 10, 100], 'Scale Price': [48, 43, 38, 10, 9, 7, 5, 2, 300, 250, 200, 5, 3, 1, 100, 10, 1]}) df2 = pd.DataFrame({"Product": list("ABCDE"), "Qty": [6,301,1,200,48]}) ```
Assuming `df1` and `df2`, use `merge_asof`: ``` pd.merge_asof(df1.sort_values(by='Qty'), df2.sort_values(by='Scale Qty').rename(columns={'Scale Price': 'Price'}), by='Product', left_on='Qty', right_on='Scale Qty') ``` output: ``` Product Qty Scale Qty Price 0 C 1 1 300 1 A 6 5 43 2 E 48 10 10 3 D 200 200 1 4 B 301 150 2 ```
14,693
58,143,742
I'm working on a project using Keras (Python 3), and I've encountered a problem: I've installed tensorflow using pip and imported it into my project, but whenever I try to run it, I get an error saying:

```
ModuleNotFoundError: No module named 'tensorflow'
```

It seems my installation completed successfully, and I think I have the right PATH since I installed a few other things such as numpy and their installation worked well. Does anyone have a clue what I did wrong? Thank you!
2019/09/28
[ "https://Stackoverflow.com/questions/58143742", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9126289/" ]
Use [`Series.str.replace`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.replace.html) to replace each uppercase letter with the same value preceded by a space, then strip the leading space:

```
df = pd.DataFrame({'U.N.Region':['WestAfghanistan','NorthEastAfghanistan']})

df['U.N.Region'] = df['U.N.Region'].str.replace( r"([A-Z])", r" \1").str.strip()
print (df)
               U.N.Region
0        West Afghanistan
1  North East Afghanistan
```
Another option would be, ``` import pandas as pd import re df = pd.DataFrame({'U.N.Region': ['WestAfghanistan', 'NorthEastAfghanistan']}) df['U.N.Region'] = df['U.N.Region'].str.replace( r"(?<=[a-z])(?=[A-Z])", " ") print(df) ```
14,694
40,138,090
My data is organized in a dataframe: ``` import pandas as pd import numpy as np data = {'Col1' : [4,5,6,7], 'Col2' : [10,20,30,40], 'Col3' : [100,50,-30,-50], 'Col4' : ['AAA', 'BBB', 'AAA', 'CCC']} df = pd.DataFrame(data=data, index = ['R1','R2','R3','R4']) ``` Which looks like this (only much bigger): ``` Col1 Col2 Col3 Col4 R1 4 10 100 AAA R2 5 20 50 BBB R3 6 30 -30 AAA R4 7 40 -50 CCC ``` My algorithm loops through this table rows and performs a set of operations. For cleaness/lazyness sake, I would like to work on a single row at each iteration without typing `df.loc['row index', 'column name']` to get each cell value I have tried to follow the [right style](http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy) using for example: ``` row_of_interest = df.loc['R2', :] ``` However, I still get the warning when I do: ``` row_of_interest['Col2'] = row_of_interest['Col2'] + 1000 SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame ``` And it is not working (as I intended) it is making a copy ``` print df Col1 Col2 Col3 Col4 R1 4 10 100 AAA R2 5 20 50 BBB R3 6 30 -30 AAA R4 7 40 -50 CCC ``` Any advice on the proper way to do it? Or should I just stick to work with the data frame directly? Edit 1: Using the replies provided the warning is removed from the code but the original dataframe is not modified: The "row of interest" `Series` is a copy not part of the original dataframe. For example: ``` import pandas as pd import numpy as np data = {'Col1' : [4,5,6,7], 'Col2' : [10,20,30,40], 'Col3' : [100,50,-30,-50], 'Col4' : ['AAA', 'BBB', 'AAA', 'CCC']} df = pd.DataFrame(data=data, index = ['R1','R2','R3','R4']) row_of_interest = df.loc['R2'] row_of_interest.is_copy = False new_cell_value = row_of_interest['Col2'] + 1000 row_of_interest['Col2'] = new_cell_value print row_of_interest Col1 5 Col2 1020 Col3 50 Col4 BBB Name: R2, dtype: object print df Col1 Col2 Col3 Col4 R1 4 10 100 AAA R2 5 20 50 BBB R3 6 30 -30 AAA R4 7 40 -50 CCC ``` Edit 2: This is an example of the functionality I would like to replicate. In python a list of lists looks like: ``` a = [[1,2,3],[4,5,6]] ``` Now I can create a "label" ``` b = a[0] ``` And if I change an entry in b: ``` b[0] = 7 ``` Both a and b change. ``` print a, b [[7,2,3],[4,5,6]], [7,2,3] ``` Can this behavior be replicated between a pandas dataframe labeling one of its rows a pandas series?
2016/10/19
[ "https://Stackoverflow.com/questions/40138090", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2301970/" ]
The tint property does not affect the color of the title. To set the title color (along with other attributes like font) globally, you can set the `titleTextAttributes` property of the `UINavigationBar` appearance to suit your needs. Just place this code in your AppDelegate or somewhere else appropriate that gets called on launch: **Swift 3:** ``` UINavigationBar.appearance().titleTextAttributes = [NSForegroundColorAttributeName: UIColor.white] ``` **Swift 2** ``` UINavigationBar.appearance().titleTextAttributes = [NSForegroundColorAttributeName: UIColor.whiteColor()] ```
No, your code is correct, but you should set the color for the second view as well. You can use the code below to solve your problem. In the second view, write this code to set the color and font for your navigation title:

```
navigationController!.navigationBar.titleTextAttributes = ([NSFontAttributeName: UIFont(name: "Helvetica", size: 25)!,NSForegroundColorAttributeName: UIColor.white])
```
14,697
29,035,115
I am working with an existing SQLite database and experiencing errors due to the data being encoded in CP-1252, when Python is expecting it to be UTF-8. ``` >>> import sqlite3 >>> conn = sqlite3.connect('dnd.sqlite') >>> curs = conn.cursor() >>> result = curs.execute("SELECT * FROM dnd_characterclass WHERE id=802") Traceback (most recent call last): File "<input>", line 1, in <module> OperationalError: Could not decode to UTF-8 column 'short_description_html' with text ' <p>Over a dozen deities have worshipers who are paladins, promoting law and good across Faer�n, but it is the Weave itself that ``` The offending character is `\0xfb` which decodes to `û`. Other offending texts include `“?nd and slay illithids.”` which uses "smart quotes" `\0x93` and `\0x94`. [SQLite, python, unicode, and non-utf data](https://stackoverflow.com/questions/2392732/sqlite-python-unicode-and-non-utf-data) details how this problem can be solved when using `sqlite3` on its own. **However, I am using SQLAlchemy.** How can I deal with CP-1252 encoded data in an SQLite database, when I am using SQLAlchemy? --- Edit: This would also apply for any other funny encodings in an SQLite `TEXT` field, like `latin-1`, `cp437`, and so on.
2015/03/13
[ "https://Stackoverflow.com/questions/29035115", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1191425/" ]
SQLAlchemy and SQLite are behaving normally. The solution is to fix the non-UTF-8 data in the database. I wrote the below, drawing inspiration from <https://stackoverflow.com/a/2395414/1191425> . It: * loads up the target SQLite database * lists all columns in all tables * if the column is a `text`, `char`, or `clob` type - including variants like `varchar` and `longtext` - it re-encodes the data from the `INPUT_ENCODING` to UTF-8. --- ``` INPUT_ENCODING = 'cp1252' # The encoding you want to convert from import sqlite3 db = sqlite3.connect('dnd_fixed.sqlite') db.create_function('FIXENCODING', 1, lambda s: str(s).decode(INPUT_ENCODING)) cur = db.cursor() tables = cur.execute('SELECT name FROM sqlite_master WHERE type="table"').fetchall() tables = [t[0] for t in tables] for table in tables: columns = cur.execute('PRAGMA table_info(%s)' % table ).fetchall() # Note: pragma arguments can't be parameterized. for column_id, column_name, column_type, nullable, default_value, primary_key in columns: if ('char' in column_type) or ('text' in column_type) or ('clob' in column_type): # Table names and column names can't be parameterized either. db.execute('UPDATE "{0}" SET "{1}" = FIXENCODING(CAST("{1}" AS BLOB))'.format(table, column_name)) ``` --- After this script runs, all `*text*`, `*char*`, and `*clob*` fields are in UTF-8 and no more Unicode decoding errors will occur. I can now `Faerûn` to my heart's content.
If you have a connection URI then you can add the following options to your DB connection URI:

```
DB_CONNECTION = mysql+pymysql://{username}:{password}@{host}/{db_name}?{options}

DB_OPTIONS = {
    "charset": "cp-1252",
    "use_unicode": 1,
}

connection_uri = DB_CONNECTION.format(
    username=???,
    ...,
    options=urllib.urlencode(DB_OPTIONS)
)
```

Assuming your SQLite driver can handle those options (pymysql can, but I don't know 100% about SQLite), then your queries will return unicode strings.
14,698
58,647,020
I am trying to run the cvxpy package in an AWS lambda function. This package isn't in the SDK, so I've read that I'll have to compile the dependencies into a zip, and then upload the zip into the lambda function. I've done some research and tried out the links below, but when I try to pip install cvxpy I get error messages - I'm on a Windows box, but I know that AWS Lambda runs on Linux. Appreciate the help! <http://i-systems.github.io/HSE545/machine%20learning%20all/cvxpy_install/CVXPY%2BInstallation%2BGuide%2Bfor%2BWindows.html> <https://programwithus.com/learn-to-code/Pip-and-virtualenv-on-Windows/> <https://medium.com/@manivannan_data/import-custom-python-packages-on-aws-lambda-function-5fbac36b40f8> <https://www.cvxpy.org/install/index.html>
2019/10/31
[ "https://Stackoverflow.com/questions/58647020", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10756193/" ]
Installing `cvxpy` on Windows requires the C++ build tools (please refer to <https://buildmedia.readthedocs.org/media/pdf/cvxpy/latest/cvxpy.pdf>).

On Windows:
-----------

* I created the Lambda layer Python directory structure `python/lib/python3.7/site-packages` (refer: <https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html>) and installed my pip packages in that site-packages directory.

```
pip install cvxpy --target python/lib/python3.7/site-packages
```

* Then, I zipped `python/lib/python3.7/site-packages` as cvxpy_layer.zip and uploaded it to an S3 bucket (the zipped layer file size limit is only 50 MB, see <https://docs.aws.amazon.com/lambda/latest/dg/limits.html>), to attach it to my Lambda layers.
* Now, the layer is ready, but the Lambda fails to import the packages because they were installed on a Windows machine. (refer: [AWS Lambda - unable to import module 'lambda_function'](https://stackoverflow.com/questions/49734744/aws-lambda-unable-to-import-module-lambda-function))

On Linux:
---------

* I created the same directory structure as earlier, `python/lib/python3.7/site-packages`, installed cvxpy, and zipped it as shown below.
* Later I uploaded the zip file to an S3 bucket and created a new Lambda layer.
* Attaching that Lambda layer to my Lambda function, I could resolve the import issues that were failing earlier and run a basic cvxpy program on Lambda.

```
mkdir -p alley/python/lib/python3.7/site-packages
pip install cvxpy --target alley/python/lib/python3.7/site-packages
cd alley
zip -rqvT cvxpy_layer.zip .
```

### Lambda layer Image:

[![enter image description here](https://i.stack.imgur.com/XVvQl.jpg)](https://i.stack.imgur.com/XVvQl.jpg)

### Lambda function execution:

[![enter image description here](https://i.stack.imgur.com/qJtSI.jpg)](https://i.stack.imgur.com/qJtSI.jpg)
You can wrap all your dependencies along with the Lambda source into a single zipfile and deploy it. Doing this, you will end up with repetitive code across multiple Lambda functions: if more than one of your Lambda functions needs the same package `cvxpy`, you will have to package it twice, once for each function. Instead, a better option would be to try `Lambda Layers`, where you put all your dependencies into a package and deploy it as a layer, then attach that layer to your function so it fetches its dependencies from there. Layers can even be versioned. :) Please refer to the links below, and see the handler sketch after them:

* <https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html>
* <https://dev.to/vealkind/getting-started-with-aws-lambda-layers-4ipk>
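As a minimal sketch (the handler name and the optimization problem are purely illustrative), once a layer containing cvxpy is attached, the function's own code only needs the import:

```python
import cvxpy as cp  # resolved from the attached layer at runtime

def lambda_handler(event, context):
    # Tiny illustrative problem: minimize (x - 1)^2 subject to x >= 0.
    x = cp.Variable()
    problem = cp.Problem(cp.Minimize(cp.square(x - 1)), [x >= 0])
    problem.solve()
    return {"optimal_x": float(x.value)}
```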
14,699
13,391,444
I'm trying to use [PySide](http://qt-project.org/wiki/PySideDocumentation) so I did a `brew install pyside pyside-tools`. However, I get the following error: ``` >>> from PySide.QtGui import QApplication Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: dlopen(/Library/Python/2.7/site-packages/PySide/QtGui.so, 2): Library not loaded: QtGui.framework/Versions/4/QtGui Referenced from: /Library/Python/2.7/site-packages/PySide/QtGui.so Reason: image not found ``` [This](https://stackoverflow.com/questions/6970319/installing-pyside-on-osx-10-6-8) SO question says to install python 27 and then reinstall pyside but I'm using the native python on mac osx 10.8 and it is already 2.7.2. The [Homebrew](https://github.com/mxcl/homebrew/issues/15450) recipe for PySide seems to indicate that this should have been fixed but I'm still getting the errors. I made sure libpng is installed as well. Looking at the path, I know that the QtGui.so file is there. Since I'm new to Python, PySide, and Qt, it is hard for me to Google and further troubleshoot. If anyone knows why and can provide directions, I will be very grateful. It can involve uninstalling a bunch of stuff and reinstalling. Please give detailed instructions. I did uninstall and try to reinstall and got the same result. Thank you.
2012/11/15
[ "https://Stackoverflow.com/questions/13391444", "https://Stackoverflow.com", "https://Stackoverflow.com/users/384964/" ]
I was getting the same error, and I'm using Python installed via Homebrew. I found two PySide libraries in /Library/Python/2.7/site-packages/ . Moving them out of the way, and re-building/installing PySide through Homebrew worked.
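If it helps to confirm which copy is being picked up before moving anything, a quick check (assuming the top-level `PySide` package imports at all, even if `QtGui` fails) is:

```python
import PySide
print(PySide.__file__)  # shows which installation Python is actually importing
```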
I tried the import you gave - I am using the same system environment - and it worked fine. Try `brew update` and re-install.
14,700
61,698,002
From Python in a Nutshell,

> Where C is a class, the statement `x=C(23)` is equivalent to:
>
> ```
> x = C.__new__(C, 23)
> if isinstance(x, C): type(x).__init__(x, 23)
> ```

From my understanding, `object.__new__` creates a new, uninitialized instance of the class it receives as its first argument. Why does there need to be a check with `isinstance()`? Isn't it obvious that `__new__` will return an object of type `C`? If it is, what happens if this test fails? Since `classes` are `callables`, is the call to `__new__` done in the class's `__call__()` method? Am I missing something here? Please clarify this for me.
2020/05/09
[ "https://Stackoverflow.com/questions/61698002", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13432122/" ]
> > Isn't it obvious that `__new__` will return a object of type C. > > > Not at all. The following is valid: ``` >>> class C: ... def __new__(cls): ... return "not a C" ... def __init__(self): ... print("Never called") ... >>> C() 'not a C' ``` When you override `__new__`, you will *probably* return an instance of `cls`, but you aren't *required* to. This is useful for defining factory classes, which don't create instances of themselves but rather instances of other classes. Note that you quote is not quite accurate. `x = C(32)` is equivalent, at the first step, to `x = type.__call__(C, 32)`. It is `type.__call__` that calls `C.__new__`, then decides whether to invoke the return value's `__init__` method. You can think of `type.__call__` as being defined something like ``` def __call__(cls, *args, **kwargs): obj = cls.__new__(cls, *args, **kwargs) if isinstance(obj, cls): obj.__init__(*args, **kwargs) return obj ``` Applying this to `C`, we see that `obj` is set to the `str` value `'not a C'`, not an instance of `C`, so that `'not a C'.__init__` is not called before returning the string.
[isinstance(a, b)](https://docs.python.org/3/library/functions.html#isinstance) is used to check whether a is an instance of b. I am not sure why you would check it after creation yourself, but magic methods can be redefined, so in the general case isinstance() is needed to check dynamic data like this.
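For instance, a small sketch of what that runtime check accepts and rejects (the class names are illustrative):

```python
class Base:
    pass

class Child(Base):
    pass

print(isinstance(Child(), Base))  # True: instances of subclasses also pass
print(isinstance("text", Base))   # False: for such a result, __init__ would be skipped
```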
14,702
44,718,204
I'm new to Python's logging module. I want to create a new log file every day while my application is running.

```
log file name - my_app_20170622.log
log file entries within time - 00:00:01 to 23:59:59
```

On the next day I want to create a new log file with the next day's date -

```
log file name - my_app_20170623.log
log file entries within time - 00:00:01 to 23:59:59
```

I'm using the logging module of Python like below -

```
log_level = int(log_level)
logger = logging.getLogger('simple')
logger.setLevel(log_level)

fh = logging.FileHandler(log_file_name)
fh.setLevel(log_level)

formatter = logging.Formatter(log_format)
fh.setFormatter(formatter)
logger.addHandler(fh)
```

Is there any configuration in the logging module of Python to create a log on a daily basis?
2017/06/23
[ "https://Stackoverflow.com/questions/44718204", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6102259/" ]
You have to create a `TimedRotatingFileHandler`: ``` from logging.handlers import TimedRotatingFileHandler logname = "my_app.log" handler = TimedRotatingFileHandler(logname, when="midnight", interval=1) handler.suffix = "%Y%m%d" logger.addHandler(handler) ``` This piece of code will create a `my_app.log` but the log will be moved to a new log file named `my_app.log.20170623` when the current day ends at midnight. I hope this helps.
I suggest you take a look at `logging.handlers.TimedRotatingFileHandler`. I think it's what you're looking for.
14,703
60,949,588
I have a python script that does some GUI test on a chromium application. Sometimes this application does not load up correctly and for this reason the GUI test will not pass, but a simple restart of this application can fix the problem. What I currently have is something like this: ``` def test(): ...do some settings... ...SystemOperator.restartController()... ...Login(My.PinCode)... ...GoToDeviceUI()... ...undo settings... ...SystemOperator.restartController()... ``` When doing this login, in case the app did not load correctly an exception is thrown and my test is failing. What I want to do is something like this: ``` def test(): def testBody(): ...do some settings... ...SystemOperator.restartController()... ...Login(My.PinCode)... ...GoToDeviceUI()... ...undo settings... ...SystemOperator.restartController()... try_cnt = 3 for i in range(try_cnt): try: testBody() break except: ...SystemOperator.restartController()... ``` But without using a for/while loop. Thank you!
2020/03/31
[ "https://Stackoverflow.com/questions/60949588", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12200507/" ]
In OpenCV, the given threshold options (e.g. cv.THRESH\_BINARY or cv.THRESH\_BINARY\_INV) are actually constant integer values. You are trying to use strings instead of these integer values. That is the reason why you get the Type Error. If you want to apply all these different thresholds in a loop, one option is to create a different list for these options, like this: ```py threshold_options = [cv.THRESH_BINARY, cv.THRESH_BINARY_INV, ...] ``` That way, you can then use the values of this list in the loop as follows: ```py retval, thresh = cv.threshold(img, 127, 255, threshold_options[i]) ``` The entire code would be as follows: ```py titles = [ 'THRESH_BINARY', 'THRESH_BINARY_INV', 'THRESH_MASK', 'THRESH_OTSU', 'THRESH_TOZERO', 'THRESH_TOZERO_INV', 'THRESH_TRIANGLE', 'THRESH_TRUNC'] threshold_options = [ cv.THRESH_BINARY, cv.THRESH_BINARY_INV, cv.THRESH_MASK, cv.THRESH_OTSU, cv.THRESH_TOZERO, cv.THRESH_TOZERO_INV, cv.THRESH_TRIANGLE, cv.THRESH_TRUNC] for i in range(len(titles)): retval, thresh = cv.threshold(img, 127, 255, threshold_options[i]) plt.subplot(2,3,i+1), plt.title(titles[i]), plt.imshow(thresh, 'gray') plt.show() ```
This might be related: [OpenCV Thresholding example](https://docs.opencv.org/master/d7/d4d/tutorial_py_thresholding.html) First off, there is no need to use `range`; you can simply do `for flag in titles:` and pass `flag`. Have you checked whether your image is loaded correctly? Are you sure that your flag is responsible for your error? For future posts, please include a minimal reproducible example.
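As a quick sanity check of the image-loading point (the file path is only an example), `cv.imread` silently returns `None` when the file cannot be read:

```python
import cv2 as cv

img = cv.imread('input.png', cv.IMREAD_GRAYSCALE)
if img is None:
    raise FileNotFoundError('Image failed to load - check the path')
```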
14,712
29,959,550
I'm trying to fetch forms for floorplans for individual properties. I can check that the object exists in the database, but when I try to create a form with an instance of it I receive this error:

```
Traceback:
File "/Users/balrog911/Desktop/mvp/mvp_1/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
 130. % (callback.__module__, view_name))

Exception Type: ValueError at /dashboard-property/253/
Exception Value: The view properties.views.dashboard_single_property didn't return an HttpResponse object. It returned None instead.
```

My models.py:

```
class Property(models.Model):
    user = models.ForeignKey(settings.AUTH_USER_MODEL, null=True, blank=True, related_name='user')
    name = models.CharField(max_length=120, help_text="This is the name that will display on your profile")
    image = models.ImageField(upload_to='properties/', null=True, blank=True)
    options=(('House', 'House'),('Condo','Condo'),('Apartment','Apartment'))
    rental_type = models.CharField(max_length=120, blank=True, null=True, choices=options, default='Apartment')
    address = models.CharField(max_length=120)
    phone_number = models.CharField(max_length=120, blank=True, null=True)
    email = models.EmailField(max_length=120, blank=True, null=True)
    website = models.CharField(max_length=250, blank=True, null=True)
    description = models.CharField(max_length=500, blank=True, null=True)
    lat = models.CharField(max_length=120, blank=True, null=True)
    lng = models.CharField(max_length=120, blank=True, null=True)
    coordinates =models.CharField(max_length=120, blank=True, null=True)
    slug = models.SlugField(unique=True, max_length=501)
    active = models.BooleanField(default= True)
    date_added = models.DateTimeField(auto_now_add=True)

    def save(self):
        super(Property, self).save()
        max_length = Property._meta.get_field('slug').max_length
        slug_name = slugify(self.name)
        self.slug = '%s-%d' % (slug_name, self.id)
        self.coordinates = geo_lat_lng(self.address)
        self.lat = self.coordinates[0]
        self.lng = self.coordinates[1]
        super(Property, self).save()

    def __unicode__(self):
        return '%s-%s-%s' % (self.id, self.name, self.address)

    def get_absolute_url(self):
        return reverse("single_property", kwargs={"slug": self.slug})

    def get_dashboard_url(self):
        return reverse("dashboard_single_property", kwargs={"id": self.id})


class FloorPlan(models.Model):
    property_name = models.ForeignKey(Property, related_name='property_name')
    floor_plan_name = models.CharField(max_length=120, blank=True, null=True)
    numbers = (('0','0'),('1','1'),('2','2'),('3','3'),('4','4'),('5','5'),('6+','6+'),)
    bedrooms = models.CharField(max_length=120, blank=True, null=True, choices=numbers)
    bathrooms = models.CharField(max_length=120, blank=True, null=True, choices=numbers)
    sqft = models.IntegerField(max_length=120, blank=True, null=True)
    min_price = models.IntegerField(max_length=120, blank=True, null=True)
    max_price = models.IntegerField(max_length=120, blank=True, null=True)
    availability = models.DateField(null=True, blank=True, help_text='Use mm/dd/yyyy format')
    image = models.ImageField(upload_to='floor_plans/', null=True, blank=True)

    def __unicode__(self):
        return '%s' % (self.property_name)
```

My views.py:

```
def dashboard_single_property(request, id):
    if request.user.is_authenticated():
        user = request.user
        try:
            single_property = Property.objects.get(id=id)
            user_properties = Property.objects.filter(user=user)
            if single_property in user_properties:
                user_property = Property.objects.get(id=id)

                #Beginning of Pet Policy Instances
                user_floor_plan = FloorPlan.objects.select_related('Property').filter(property_name=user_property)
                if user_floor_plan:
                    print user_floor_plan
                    plans = user_floor_plan.count()
                    plans = plans + 1
                    FloorPlanFormset = inlineformset_factory(Property, FloorPlan, extra=plans)
                    formset_floor_plan = FloorPlanFormset(instance=user_floor_plan)
                    print "formset_floor_plan is True"
                else:
                    floor_plan_form = FloorPlanForm(request.POST or None)
                    formset_floor_plan = False
                    print 'formset is %s' % (formset_floor_plan)
                #End

                #Beginning of Pet Policy Instances
                user_pet_policy = PetPolicy.objects.select_related('Property').filter(property_name=user_property)
                print user_pet_policy
                if user_pet_policy:
                    print user_pet_policy
                    #pet_policy_form = PetPolicyForm(request.POST or None, instance=user_pet_policy)
                    pet_policy_form = PetPolicyForm(request.POST or None)
                else:
                    pet_policy_form = PetPolicyForm(request.POST or None)
                #End

                basic_form = PropertyForm(request.POST or None, instance=user_property)

                context = {
                    'user_property': user_property,
                    'basic_form': basic_form,
                    'floor_plan_form': floor_plan_form,
                    'formset_floor_plan': formset_floor_plan,
                    'pet_policy_form': pet_policy_form,
                }
                template = 'dashboard/dashboard_single_property.html'
                return render(request, template, context)
            else:
                return HttpResponseRedirect(reverse('dashboard'))
        except Exception as e:
            raise e
            #raise Http404
            print "whoops"
    else:
        return HttpResponseRedirect(reverse('dashboard'))
```

**EDIT:** Took Vishen's tip to make sure the error was raised, updated the views and now I'm getting this error. Here's the full traceback:

```
Environment:

Request Method: GET
Request URL: http://localhost:8080/dashboard-property/253/

Django Version: 1.7.4
Python Version: 2.7.5
Installed Applications:
('django.contrib.admin',
 'django.contrib.auth',
 'django.contrib.contenttypes',
 'django.contrib.sessions',
 'django.contrib.messages',
 'django.contrib.sites',
 'django.contrib.sitemaps',
 'django.contrib.staticfiles',
 'base',
 'properties',
 'renters',
 'allauth',
 'allauth.account',
 'crispy_forms',
 'datetimewidget',
 'djrill',
 'import_export',
 'multiselectfield')
Installed Middleware:
('django.contrib.sessions.middleware.SessionMiddleware',
 'django.middleware.common.CommonMiddleware',
 'django.middleware.csrf.CsrfViewMiddleware',
 'django.contrib.auth.middleware.AuthenticationMiddleware',
 'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
 'django.contrib.messages.middleware.MessageMiddleware',
 'django.middleware.clickjacking.XFrameOptionsMiddleware',
 'django.middleware.locale.LocaleMiddleware')

Traceback:
File "/Users/balrog911/Desktop/mvp/mvp_1/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
 111. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/Users/balrog911/Desktop/mvp/mvp_1_live/src/properties/views.py" in dashboard_single_property
 82. raise e

Exception Type: AttributeError at /dashboard-property/253/
Exception Value: 'QuerySet' object has no attribute 'pk'
```

**EDIT:** Per Vishen's suggestion, removed the try statement to see if the error would be made more clear. It looks like the issue is with line 51:

```
formset_floor_plan = FloorPlanFormset(instance=user_floor_plan)
```

Here's the traceback:

```
Traceback:
File "/Users/balrog911/Desktop/mvp/mvp_1/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
 111. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/Users/balrog911/Desktop/mvp/mvp_1_live/src/properties/views.py" in dashboard_single_property
 51. formset_floor_plan = FloorPlanFormset(instance=user_floor_plan)
File "/Users/balrog911/Desktop/mvp/mvp_1/lib/python2.7/site-packages/django/forms/models.py" in __init__
 855. if self.instance.pk is not None:

Exception Type: AttributeError at /dashboard-property/253/
Exception Value: 'QuerySet' object has no attribute 'pk'
```
2015/04/30
[ "https://Stackoverflow.com/questions/29959550", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4847831/" ]
``` NSString *n = [NSString stringWithFormat:@"%@",@"http://somedomain.com/api/x?q={\"order_by\":[{\"field\":\"t\",\"direction\":\"desc\"}]}"]; NSURL *url = [NSURL URLWithString:[n stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding]]; NSLog(@"%@",url); ```
The proper way to compose a URL from strings is to use [NSURLComponents](https://developer.apple.com/library/mac/documentation/Foundation/Reference/NSURLComponents_class/index.html) helper class. The reason for this seemingly elaborate approach is that each component of a URL (see [RFC 3986](https://www.rfc-editor.org/rfc/rfc3986#section-3)) requires slightly different percent encodings or possibly none. The exact structure of the *query* component is not defined in RFC 3986, though. Usually, its an array of key/value pairs that will be escaped as described at w3.org: [x-www-form-urlencoded-encoding-algorithm](http://www.w3.org/TR/html5/forms.html#application/x-www-form-urlencoded-encoding-algorithm). `NSURLComponents` provides a method to encode the query component as well.
14,714
58,472,090
I am trying to load a pickle object in R, using the following process found online. First, I create a Python file called: "pickle\_reader.py": ```py import pandas as pd def read_pickle_file(file): pickle_data = pd.read_pickle(file) return pickle_data ``` Then, I run the following R code: ``` install.packages('reticulate') require("reticulate") source_python("pickle_reader.py") pickle_data <- read_pickle_file("pathname") ``` but I get an error that says: > > Error in py\_run\_file\_impl(file, local, convert) : > ImportError: No module named pandas > > > N.B. I tried installing pandas again but this doesn't change the issue. Do you know how should I proceed? Thank you in advance
2019/10/20
[ "https://Stackoverflow.com/questions/58472090", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11898786/" ]
If you want to use a Python package from a different environment, which in this case is R, you must look up how to install Python packages in R. Looking at the [CRAN webpage](https://cran.r-project.org/web/packages/reticulate/vignettes/python_packages.html), you can see that in order to install pandas in the environment R uses, you need the command *py_install('pandas')*. Hope it helps!
Make sure that pandas is installed. I suggest using a conda environment. I read the pickle by applying the steps below:

* Create a conda environment and install the necessary packages.
* Then in R, you can set the right Python (which is the Python in your conda env):

```
Sys.setenv(RETICULATE_PYTHON = "~/anaconda3/envs/your_env/bin/python")
library(reticulate)
```

You can check with `py_config()`

* Now you can read your pickle files in R:

```
loadData = function(file_path){
  require("reticulate")
  source_python("pickle_reader.py")
  pd <- import("pandas")
  return (pd$read_pickle(file_path))
}

features = loadData(features_path)
```
14,715
19,427,685
I have problems with the array indexes in Python. At the function readfile it crashes and prints: **"list index out of range"**

```
inputarr = []

def readfile(filename):
    lines = readlines(filename)
    with open(filename, 'r') as f:
        i = 0
        j= 0
        k = 0
        for line in f:
            line = line.rstrip("\n")
            if not line == '':
                inputarr[j][k] = line
                k += 1
                #print("\tnew entry\tj=%d\tk=%d" % (j, k))
            elif line == '':
                k = 0
                j += 1
                #print("new block!\tj=%d\tk=%d" % (j, k))
            i += 1
    processing(i, lines)
```
2013/10/17
[ "https://Stackoverflow.com/questions/19427685", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2890530/" ]
I added the javafx runtime separately to the pom as below and it worked: ``` <dependency> <groupId>javafx</groupId> <artifactId>jfxrt</artifactId> <version>${javafx.min.version}</version> <scope>system</scope> <systemPath>${java.home}\lib\jfxrt.jar</systemPath> </dependency> ```
From [*What is JavaFX?*](http://docs.oracle.com/javafx/2/overview/jfxpub-overview.htm#A1095238): > > JavaFX 2.2 and later releases are fully integrated with the Java SE 7 Runtime Environment (JRE) and the Java Development Kit (JDK). > > > This means you should be able to just use the `javafx.*` packages without adding any library besides the JDK. It seems that Eclipse and Maven are being stupid in your case. (The JavaFX library and a bunch of others are in `$JDK_HOME/jre/lib/*`, Eclipse only seems to add what's in `$JDK_HOME/lib`. IntelliJ IDEA does the right thing here.)
14,718
53,961,912
Using django and python, I am building a web app that tracks prices. The user is a manufacturer and will want reports. Each of their products has a recommended price. Each product could have more than one seller, and each seller could have more than one product. My question is, where do I store the prices, especially the seller's price? Right now I have my database schema so the product table stores the recommended price and the seller's price, and that means one single product is repeated a lot of times. Is there a better way to do this? [![DB Schema](https://i.stack.imgur.com/HJPmc.jpg)](https://i.stack.imgur.com/HJPmc.jpg) Per the recommendations below this is the correct db schema: [![enter image description here](https://i.stack.imgur.com/xCs6J.jpg)](https://i.stack.imgur.com/xCs6J.jpg)
2018/12/28
[ "https://Stackoverflow.com/questions/53961912", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5096832/" ]
You're not adequately representing the many-to-many relationship between products and sellers. Your product table has the seller_id and the seller_price, but if one product is sold by many sellers, it cannot represent that. Instead of duplicating product entries so the same product can have multiple sellers, what you need is a junction table between products and sellers.

```
CREATE TABLE seller_products (
  seller_id integer,
  product_id integer,
  price decimal
);
```

I'll leave the indexes, foreign keys, etc. to you. Seller ID and product ID might be a unique combination (historical data is best removed from active datasets for performance longevity), but of course any given product will be listed once for each seller that sells it, and any given seller will be listed once per product it sells (along with its unique price). Then you can join the table back to products to get the data you currently store denormalized in the `products` table directly:

```
SELECT *
FROM products
LEFT JOIN seller_products ON ( seller_products.product_id = products.id)
```
This is a Data Warehouse question. I would recommend putting prices on a Fact as measures and having only attributes on the Dimensions.

Dimensions:

* Product
* Seller
* Manufacturer

Fact (columns):

* Seller Price
* MSRP
* Product ID
* Seller ID
* Manufacturer ID
* Timestamp
14,720
8,584,377
G'Day, I have a number of Django projects and a number of other Python projects as git repositories. I have a pre-commit hook that runs Pylint on my code before allowing me to commit it - this hook doesn't know whether the project is a Django application or a vanilla Python project. For all my Django projects, I have a structure like:

```
> my_django_project
  |-- manage.py
  |-- settings.py
  |--> apps
       |--> my_django_app
            |-- models.py
            |-- admin.py
```

Now, when I run pylint on this project, it gives me errors like:

```
F: 4,0: Unable to import 'my_django_app.models'
```

for the `my_django_app.admin` module, for example. How do I configure Pylint, so that when it is going over my Django projects (not vanilla Python projects), it knows that `my_django_project/apps` should also be in `sys.path`? Normally, `manage.py` adds it to `sys.path`. Thanks!
2011/12/21
[ "https://Stackoverflow.com/questions/8584377", "https://Stackoverflow.com", "https://Stackoverflow.com/users/47825/" ]
Take a look at init\_hook in pylint configuration file. ``` init-hook=import sys; sys.path.insert(0, 'my_django_project/apps'); ``` You will obviously need a configuration file per Django application, and run pylint as, e.g. ``` pylint --rcfile=pylint.conf my_django_project ```
Maybe this doesn't fully answer your question, but I suggest using [django-lint](http://chris-lamb.co.uk/projects/django-lint/) to avoid warnings like:

```
F: 4: Unable to import 'myapp.views'
E: 15: MyClass.my_function: Class 'MyClass' has no 'objects' member
E: 77: MyClass.__unicode__: Instance of 'MyClass' has no 'id' member
```
14,723
61,986,195
I have this Python code that predicts trade calls from the Bollinger band values and the Close Price.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

lr1 = LogisticRegression()
x = df[['Lower_Band','Upper_Band','MA_14','Close Price']]
y = df['Call']
x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.3)
lr1.fit(x_train,y_train)
y_pred = lr1.predict(x_test)

print("Accuracy=",accuracy_score(y_test,y_pred,normalize=True))
```

Each time I run this code, different accuracy values are printed. The accuracy values range anywhere from 0.3 to 0.8. So how do I measure the accuracy of this model reliably? Is there something wrong in my code?
2020/05/24
[ "https://Stackoverflow.com/questions/61986195", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9471113/" ]
As described by KolaB, you should use the `random_state` parameter of `train_test_split` to make results reproducible. But actually, you mentioned that your results vary between 0.3 and 0.8 in accuracy score. This is a strong indicator that your results depend on the particular random choice of the test set. I would, therefore, suggest using k-fold cross-validation as a countermeasure.

```
from numpy import mean
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

lr1 = LogisticRegression()
x = df[['Lower_Band','Upper_Band','MA_14','Close Price']]
y = df['Call']

print(f'Accuracy = {mean(cross_val_score(lr1, x, y, cv=5))}')
```

Here `cross_val_score` returns an array of scores from 5 train/test iterations, so that each sample is used in the test set once. By taking the average of these 5 runs, you get a better estimate of your model's performance.
Your problem is most probably in `train_test_split`. You are not initialising the random state that ensures you get reproducible results. Try changing the line with this function to: ``` x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.3, random_state=1) ``` Also see scikit learn documentation on the [train\_test\_split function](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html)
14,725
26,450,336
I have this Python code, and whenever I start the webserver and go to the website I don't get the message " test ", just an internal server error. How come? What am I doing wrong? Whenever I go to the website it's a GET request, right? So it should go into the domain() function and give me the text " test ".

```
@app.route("/", methods=['GET', 'POST'])
def hello():
    if request.method == 'GET':
        domain()
    else:
        test()

def domain():
    return "test"

def test():
    data = request.get_json()
    with open("text.txt", "w") as text_file:
        pickle.dump(data, text_file)

if __name__ == "__main__":
    app.run()
```
2014/10/19
[ "https://Stackoverflow.com/questions/26450336", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4153021/" ]
I am also new to Android Reversing , and I have spent some time searching for simple understanding of Smali code and found this : note class structure is L; ========================== ``` Lcom/breakapp/dd/mymod/Processor;->l:I ``` original java file name ======================= ``` .source "example.java" ``` these are class instance variables ================================== ``` .field private someString:Ljava/lang/String; ``` This assigns a string value to v0 ================================= ``` const-string v0, "get_value_one" ``` *Finals are not actually used directly, because references to them are replaced by the value itself primitive cheat sheet:* V - void, B - byte, S - short, C - char, I - int ================================================ J - long (uses two registers), F - float, D - double ==================================================== ``` .field public final someInt:I # the :I means integer .field public final someBool:Z # the :Z means boolean ``` Taken From : [Android Cracking](http://androidcracking.blogspot.in/2010/09/examplesmali.html) !
You may want to read the dalvik bytecode doc's since they are more detailed then the documentation you can find about smali. Anyway, I am also in the process of learning smali so, probably, I can't give you the best answer but maybe this will help. Let's start by looking at what iput does: > > iput vx,vy, field\_id > Puts vx into an instance field. The instance is referenced by vy. > > > source: <http://pallergabor.uw.hu/androidblog/dalvik_opcodes.html> from the dalvik opcodes > > > The same happens here. You are affecting the v2 register with the v0 register. That being said the change you made was misguided. You changed the 'I' to '10' but that is not a value. The I means integer in this case. Furthermore, this is not even the place where you want to make a change in your code. Let's see: ``` const-string v0, "get_value_one" ``` the reg v0 now has the value of the string "get\_value\_one" (value may not be the best word to describe it since it is a string but I think i get my point across) ``` invoke-virtual {p0, v0}, Lorg/json/JSONObject;->getInt(Ljava/lang/String;)I move-result v0 ``` now you invoked the method getInt(String) on the JSONObject that you receive via parameter. You know this since the {p0, v0} means that you are passing v0 to the method of the object referenced by p0 which you know is a parameter since it follows the p\* rule. (You can read about it here: <https://code.google.com/p/smali/wiki/Registers>). By now you must be starting to understand that invoking this method won't help if you want to assing a cont value to your variable 'l'. ``` iput v0, v2, Lcom/breakapp/dd/mymod/Processor;->l:I ``` This last instruction takes your v2 register and puts the value of v0 in it. v0, before this line is executed, has the value that comes out of the JSONObject getInt(String) method while v2 references the Object MyProcessor and the "Lcom/breakapp/dd/mymod/Processor;->l" references the variable 'l' contained in that said obj. The ' :I ' let's you know the type of the variable. Since Java is strongly typed there is always an associated type to a variable as I'm sure you know. This has, of course, to be referenced in the bytecode and this is the way it's done. I hope this gave some information to be able to do the changes you want but I'll try to help out a little more by suggesting that you change the code you showed to something like this: ``` const/4 v0, 0xA iput v0, v2, Lcom/breakapp/dd/mymod/Processor;->l:I ``` The first line assings a constant (0xA hexa = 10 decimal) to v0 and then passes it as I referenced before. Good luck with learning smali and I hope it helped at least a little
14,726
61,442,421
I am using the combination of **request** and **beautifulsoup** to develop a web-scraping program in python. Unfortunately, I got 403 problem (even using **header**). Here my code: ``` from bs4 import BeautifulSoup from requests import get headers_m = ({'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36'}) sapo_m = "https://www.idealista.it/vendita-case/milano-milano/" response_m = get(sapo_m, headers=headers_m) ```
2020/04/26
[ "https://Stackoverflow.com/questions/61442421", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13378861/" ]
This is not a general Python question. The site blocks such straightforward **scraping** attempts; you need to find a set of headers (specific to this site) that will pass its validation. Regards,
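For illustration only, a minimal sketch of experimenting with a richer, browser-like header set using `requests` — the header values below are placeholders, not a combination known to pass this particular site's checks:

```
import requests

# hypothetical browser-like headers; adjust until the site accepts the request
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "it-IT,it;q=0.9,en;q=0.8",
    "Referer": "https://www.google.com/",
}

response = requests.get(
    "https://www.idealista.it/vendita-case/milano-milano/",
    headers=headers,
)
print(response.status_code)  # 403 means the site is still rejecting the request
```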
Simply use `Chrome` as the `User-Agent`.

```
import requests
from bs4 import BeautifulSoup

BeautifulSoup(requests.get("https://...", headers={"User-Agent": "Chrome"}).content, 'html.parser')
```
14,727
66,254,984
I have a list of dicts in python which look like these: ```py [{'day' : 'Wednesday' , 'workers' : ['John' , 'Smith']} , {'day' : 'Monday' , 'workers' : ['Kelly']}] ``` I want to sort them by day of week such that the result is ```py [{'day' : 'Monday' , 'workers' : ['Kelly']}, {'day' : 'Wednesday' , 'workers' : ['John' , 'Smith']}] ``` I can use this answer to sort list of just weekday names: [Sort week day texts](https://stackoverflow.com/questions/13844158/sort-week-day-texts) but is there a way to sort the above dict?
2021/02/18
[ "https://Stackoverflow.com/questions/66254984", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6415973/" ]
Use a lambda function that extracts the weekday name from the dictionary and then returns its index, as in your linked question. Note that the entries in the reference list have to match the day strings in your dicts exactly.

```
weekdays = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]
list_of_dicts = [{'day' : 'Wednesday' , 'workers' : ['John' , 'Smith']} , {'day' : 'Monday' , 'workers' : ['Kelly']}]

list_of_dicts.sort(key = lambda d: weekdays.index(d['day']))
```
The same basic approach that the example you link to uses will work for your list of dictionaries case. The trick is, you need to extract the day value from the dictionaries within the list to make it work. A `lambda` expression used for the `key` parameter is one way to do that. Example: ``` day_order = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"] data = [{'day' : 'Wednesday' , 'workers' : ['John' , 'Smith']} , {'day' : 'Monday' , 'workers' : ['Kelly']}] sorted(data, key=lambda d: day_order.index(d["day"])) ``` Output: ``` [{'day': 'Monday', 'workers': ['Kelly']}, {'day': 'Wednesday', 'workers': ['John', 'Smith']}] ```
14,728
12,343,261
OK, so I went on <http://wiki.vg/Protocol>, but I don't understand how to send the packets through a socket to a Minecraft server. I would like to know if it is possible, and if it is how, to send Minecraft packets through a Python socket to a Minecraft server, as if the socket was the Minecraft client. I want to see if there is a way to make a minecraft person appear on a server using python and make him walk in a straight line, for a certain amount of time (probably through a python for loop), than log out. Is there a python package that allows you to do this? Thanks!
2012/09/09
[ "https://Stackoverflow.com/questions/12343261", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1983840/" ]
Well I'd start with [this](https://gist.github.com/1209061) which sends a packet. It's linked to from the same page you mention. Then adjust the packet ID and the data you add to the stream.
No, as a Minecraft server is nothing but a host listening to a TCP socket. You're better off looking for a Python TCP/sockets tutorial in general, or a Minecraft client/bot library.
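To make that point concrete, here is a bare-bones sketch (the host is an assumption — 25565 is just the default Minecraft port) showing that the connection itself is ordinary TCP; everything interesting happens in the protocol bytes you would then have to send:

```
import socket

# connect to a (hypothetical) locally running server; speaking the actual
# Minecraft protocol over this socket is the hard part the answer refers to
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("localhost", 25565))
sock.close()
```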
14,734
59,416,899
We use ndb datastore in our current python 2.7 standard environment. We migrating this application to python 3.7 standard environment with firestore (native mode). We use pagination on ndb datastore and construct our query using fetch. ``` query_results , next_curs, more_flag = query_structure.fetch_page(10) ``` The next\_curs and more\_flag are very useful to indicate if there is more data to be fetched after the current query (to fetch 10 elements). We use this to flag the front end for "Next Page" / "Previous Page". We can't find an equivalent of this in Firestore. Can someone help how to achieve this?
2019/12/19
[ "https://Stackoverflow.com/questions/59416899", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3647998/" ]
There is no direct equivalent in Firestore pagination. What you can do instead is fetch one more document than the N documents that the page requires, then use the presence of the N+1 document to determine if there is "more". You would omit the N+1 document from the displayed page, then start the next page at that N+1 document.
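A rough sketch of that idea with the Python Firestore client — the collection and field names here are made up for illustration:

```
from google.cloud import firestore

db = firestore.Client()
page_size = 10

# ask for one document more than the page needs
query = db.collection("workers").order_by("name").limit(page_size + 1)
docs = list(query.stream())

page = docs[:page_size]           # what you actually display
more = len(docs) > page_size      # equivalent of the old "more" flag

if more:
    # the N+1th snapshot doubles as the cursor for the next page
    next_page_query = (
        db.collection("workers")
        .order_by("name")
        .start_at(docs[page_size])
        .limit(page_size + 1)
    )
```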
I build a custom firestore API not long ago to fetch records with pagination. You can take a look at the [repository](https://github.com/vwt-digital/firestore-api). This is the story of the learning cycle I went through: My first attempt was to use limit and offset, this seemed to work like a charm, but then I walked into the issue that it ended up being very costly to fetch like 200.000 records. Because when using offset, google charges you also for the reads on **all** the records before that. The Google Firestore [Pricing Page](https://firebase.google.com/docs/firestore/pricing) clearly states this: > > There are no additional costs for using cursors, page tokens, and > limits. In fact, these features can help you save money by reading > only the documents that you actually need. > > > However, when you send a query that includes an offset, you are > charged a read for each skipped document. For example, if your query > uses an offset of 10, and the query returns 1 document, you are > charged for 11 reads. Because of this additional cost, you should use > cursors instead of offsets whenever possible. > > > My second attempt was using a cursor to minimize those reads. I ended up fetching N+1 documents and place the cursor like so: ``` collection = 'my-collection' cursor = 'we3adoipjcjweoijfec93r04' # N+1th doc id q = db.collection(collection) snapshot = db.collection(collection).document(cursor).get() q = q.start_at(snapshot) # Place cursor at this document docs = q.stream() ``` Google wrote a whole [page](https://firebase.google.com/docs/firestore/query-data/query-cursors) on pagination in Firestore. Some useful query methods when implementing pagination: * `limit()` limits the query to a fixed set of documents. * `start_at()` includes the cursor document. * `start_after()` starts right after the cursor document. * `order_by()` ensures all documents are ordered by a specific field.
14,735
1,668,223
I am in the process of coding an app which has to get the metadata (author,version..etc) out of several modules (.py files) and display them. The user selects a script and the script is executed. (New scripts can be added and old ones be taken of from the target folder just like a plugin system). Firstly I import a script and I take out the metadata, then I go for the next. But I want to de-import all the other modules except for the one that the user has selected. How can I implement this? I tried these ``` 1. del module 2. del sys.modules['module'] ``` The latter did not work. I tried #python and got the answer that it was not good to de-import modules, but I want to know a clean way of implementing this. Any ideas/suggestions would help.
2009/11/03
[ "https://Stackoverflow.com/questions/1668223", "https://Stackoverflow.com", "https://Stackoverflow.com/users/200816/" ]
i think [this post](http://mail.python.org/pipermail/tutor/2006-August/048596.html) should help you edit: to secure the availability of this information (in case the link dies or something similar) i will include the original message from the [tutor mailing list](http://mail.python.org/mailman/listinfo/tutor) here: --- On 8/14/06, Dick Moores wrote: > > Actually, my question is, after using IDLE to do some importing of > modules and initializing of variables, how to return it to it's > initial condition without closing and reopening it. > > > So, for example after I've done > > > > ``` > >>> import math, psyco > >>> a = 4**23 > > ``` > > How can I wipe those out without closing IDLE? > (I used to know how, but I've forgotten.) > > > Hi Dick, Usually this entails removing from the "module registry and removing references to it in the code. If you have a *really* well used module (like one with config parameters imported into every module), then you'll have an additional step of removing it from every module. Also, if you have use "from psyco import ...", then you will not be able to free up the module and the reference to the module easily (is it from that module, or imported from a third module? see "if paranoid: code below). The function below deletes a module by name from the Python interpreter, the "paranoid" parameter is a list of variable names to remove from every other module (supposedly being deleted with the module). Be VERY careful with the paranoid param; it could cause problems for your interpreter if your functions and classes are named the same in different modules. One common occurrance of this is "error" for exceptions. A lot of libraries have one "catch-all" exception called "error" in the module. If you also named your exception "error" and decided to include that in the paranoid list... there go a lot of other exception objects. ``` def delete_module(modname, paranoid=None): from sys import modules try: thismod = modules[modname] except KeyError: raise ValueError(modname) these_symbols = dir(thismod) if paranoid: try: paranoid[:] # sequence support except: raise ValueError('must supply a finite list for paranoid') else: these_symbols = paranoid[:] del modules[modname] for mod in modules.values(): try: delattr(mod, modname) except AttributeError: pass if paranoid: for symbol in these_symbols: if symbol[:2] == '__': # ignore special symbols continue try: delattr(mod, symbol) except AttributeError: pass ``` Then you should be able to use this like: ``` delete_module('psyco') ``` or ``` delete_module('psyco', ['Psycho', 'KillerError']) # only delete these symbols from every other module # (for "from psyco import Psycho, KillerError" statements) ``` -Arcege
Suggestion: Import your modules dynamically using `__import__` E.g. ``` module_list = ['os', 'decimal', 'random'] for module in module_list: x = __import__(module) print 'name: %s - module_obj: %s' % (x.__name__, x) ``` Will produce: ``` name: os - module_obj: <module 'os' from '/usr/lib64/python2.4/os.pyc'> name: decimal - module_obj: <module 'decimal' from '/usr/lib64/python2.4/decimal.pyc'> name: random - module_obj: <module 'random' from '/usr/lib64/python2.4/random.pyc'> ``` Though, this won't really remove it from the modules registry. The next time that it is imported, it won't re-read the package/module file and it won't execute it. To accomplish that, you can simply modify the above code snippet like this: ``` import sys module_list = ['os', 'decimal', 'random', 'test1'] for module_name in module_list: x = __import__(module_name) print 'name: %s - module_obj: %s' % (x.__name__, x) del x sys.modules.pop(module_name) ```
14,736
55,767,411
I have a potentially infinite python 'while' loop that I would like to keep running even after the main script/process execution has been completed. Furthermore, I would like to be able to later kill this loop from a unix CLI if needed (i.e. kill -SIGTERM PID), so I will need the pid of the loop as well. How would I accomplish this? Thanks!

Loop:

```
args = 'ping -c 1 1.2.3.4'

while True:
    time.sleep(60)
    return_code = subprocess.Popen(args, shell=True, stdout=subprocess.PIPE)
    if return_code == 0:
        break
```
2019/04/19
[ "https://Stackoverflow.com/questions/55767411", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1998671/" ]
In python, parent processes attempt to kill all their daemonic child processes when they exit. However, you can use `os.fork()` to create a completely new process: ``` import os pid = os.fork() if pid: #parent print("Parent!") else: #child print("Child!") ```
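As a rough sketch (not a hardened daemonization recipe), the fork can be combined with the ping loop from the question so that the parent prints the child's PID — usable later with `kill -SIGTERM <pid>` — and exits, while the child keeps looping:

```
import os
import sys
import time
import subprocess

pid = os.fork()
if pid:
    # parent: `pid` is the child's process id
    print("background loop running with pid", pid)
    sys.exit(0)
else:
    # child: keeps running after the parent has exited
    while True:
        time.sleep(60)
        # subprocess.call returns the command's exit status
        if subprocess.call(["ping", "-c", "1", "1.2.3.4"]) == 0:
            break
```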
`Popen` returns an object which has the `pid`. According to the [doc](https://docs.python.org/3/library/subprocess.html#subprocess.Popen) > > Popen.pid > The process ID of the child process. > > > Note that if you set the shell argument to True, this is the process ID of the spawned shell. > > > You would need to turnoff the `shell=True` to get the pid of the process, otherwise it gives the pid of the shell. ``` args = 'ping -c 1 1.2.3.4' while True: time.sleep(60) with subprocess.Popen(args, shell=False, stdout=subprocess.PIPE) as proc: print('PID: {}'.format(proc.pid)) ... ```
14,741
10,649,623
I have a web app that uses google app engine .In ubuntu ,I start the app engine using ``` ./dev_appserver.py /home/me/dev/mycode ``` In the mycode folder ,I have app.yml and the python files of the web app.In the web app code,I have used logging to write values of some variables like ``` import logging LOG_FILENAME = '/home/me/logs/mylog.txt' logging.basicConfig(filename=LOG_FILENAME,level=logging.DEBUG) class Handler(webapp2.RequestHandler): .... class Welcome(Handler): def get(self): if self.user: logging.debug('rendering welcome page for user') self.render('welcome.html',username= self.user.name) else: logging.debug('redirect to signup') self.redirect('/signup') class MainPage(Handler): def get(self): self.redirect('/welcome') app = webapp2.WSGIApplication([('/', MainPage),('/signup', Register),('/welcome', Welcome)], debug=True) ``` I have set `chmod 777` for the logs directory..Still no logs are created there.I am at a loss as to how any log can be viewed when google app engine runs..Since it is in linux,I don't have a `launcher with gui` ..which makes it difficult to see the app engine logs if any are generated If someone can help me solve this,it would be great.
2012/05/18
[ "https://Stackoverflow.com/questions/10649623", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1291096/" ]
Did you read this: <https://developers.google.com/appengine/articles/logging>? As I understand it, you must not declare your own log file.
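A minimal sketch of what that article implies for the code in the question — keep the `logging` calls but drop the `LOG_FILENAME`/`basicConfig(filename=...)` part; the dev server console and the App Engine admin console collect the records themselves (you may still need to raise the log level, as the other answer shows):

```
import logging

logging.getLogger().setLevel(logging.DEBUG)  # no filename= handler anywhere

class Welcome(Handler):
    def get(self):
        if self.user:
            logging.debug('rendering welcome page for user')
            self.render('welcome.html', username=self.user.name)
        else:
            logging.debug('redirect to signup')
            self.redirect('/signup')
```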
I have the same environment (Ubuntu, python, gae) and ran into similar issues with logging. You can't log to local file as stated here: <https://developers.google.com/appengine/docs/python/overview> > > "The sandbox ensures that apps can only perform actions that do not interfere with the performance and scalability of other apps. For instance, an app cannot write data to the local file system or make arbitrary network connections." > > > "The development server runs your application on your local computer for testing your application. The server simulates the App Engine datastore, services and sandbox restrictions. " > > > I was able to getting console logging to work as follows: ``` import logging logging.getLogger().setLevel(logging.DEBUG) ```
14,742
6,069,690
I have a basic python question. I'm working on a class `foo` and I use `__init__():` to do some actions on a value: ``` class foo(): def __init__(self,bar=None): self.bar=bar if bar is None: isSet=False else: isSet=True print isSet ``` When I execute the code I get: `NameError: name 'isSet' is not defined`. How can I access `isSet`? What am I doing wrong? regards, martin
2011/05/20
[ "https://Stackoverflow.com/questions/6069690", "https://Stackoverflow.com", "https://Stackoverflow.com/users/760797/" ]
Wrong indentation, it should be this instead, otherwise you're exiting the function. ``` class foo(): def __init__(self,bar=None): self.bar=bar if bar is None: isSet=False else: isSet=True print isSet ```
The indentation of the final line makes it execute in the context of the class and not `__init__`. Indent it one more time to make your program work.
14,743
37,906,459
I have a text file. The guts of it look like this — all of it looks like this (it has been edited, and this is also not what it initially looked like):

```
(0, 16, 0)
(0, 17, 0)
(0, 18, 0)
(0, 19, 0)
(0, 20, 0)
(0, 21, 0)
(0, 22, 0)
(0, 22, 1)
(0, 22, 2)
(0, 23, 0)
(0, 23, 4)
(0, 24, 0)
(0, 25, 0)
(0, 25, 1)
(0, 26, 0)
(0, 26, 3)
(0, 26, 4)
(0, 26, 5)
(0, 26, 9)
(0, 27, 0)
(0, 27, 1)
```

Anyway, how do I put these values into a set on python 2? My most recent attempt was

```
om_set = set(open('Rye Grass.txt').read()
```

EDIT: This is the code I used to get my text file.

```
import cv2
import numpy as np
import time

om=cv2.imread('spectrum1.png')
om=om.reshape(1,-1,3)
om_list=om.tolist()
om_tuple={tuple(item) for item in om_list[0]}
om_set=set(om_tuple)

im=cv2.imread('1.jpg')
im=cv2.resize(im,(100,100))
im= im.reshape(1,-1,3)
im_list=im.tolist()
im_tuple={tuple(item) for item in im_list[0]}

ColourCount= om_set & set(im_tuple)

with open('Weedlist', 'a') as outputfile:
    output = ', '.join([str(tup) for tup in sorted(ColourCount)])
    outputfile.write(output)

print 'done'

im=cv2.imread('2.jpg')
im=cv2.resize(im,(100,100))
im= im.reshape(1,-1,3)
im_list=im.tolist()
im_tuple={tuple(item) for item in im_list[0]}

ColourCount= om_set & set(im_tuple)

with open('Weedlist', 'a') as outputfile:
    output = ', '.join([str(tup) for tup in sorted(ColourCount)])
    outputfile.write(output)

print 'done'
```
2016/06/19
[ "https://Stackoverflow.com/questions/37906459", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6306510/" ]
As @TimPietzcker suggested and trusting the file to only have these fixed representations of integers in comma separated triplets, surrounded by parentheses, a simple parser in one go (OP's question also had a greed "read" of file into memors): ``` #! /usr/bin/env python from __future__ import print_function infile = 'pixel_int_tuple_reps.txt' split_pits = None with open(infile, 'rt') as f_i: split_pits = [z.strip(' ()') for z in f_i.read().strip().split('),')] if split_pits: on_set = set(tuple(int(z.strip()) for z in tup.split(', ')) for tup in split_pits) print(on_set) ``` tramsforms: ``` (0, 19, 0), (0, 20, 0), (0, 21, 1), (0, 22, 0), (0, 24, 3), (0, 27, 0), (0, 29, 2), (0, 35, 2), (0, 36, 1) ``` into: ``` set([(0, 27, 0), (0, 36, 1), (0, 21, 1), (0, 22, 0), (0, 24, 3), (0, 19, 0), (0, 35, 2), (0, 29, 2), (0, 20, 0)]) ``` The small snippet: 1. splits the pixel integer triplets into substrings of `0, 19, 0` cleansing a bit the stray parens and spaces away (also taking care of the closing parentheses at the end. 2. if that "worked" - further feeds the rgb split with integer conversion tuples into a set. I would really think twice, before using eval/exec on that kind of deserialization task. **Update** as suggested by comments from OP (please update the question!): 1. The file at the OP's site seems to be too big to print (keep in memory)? 2. It is **not** written, as the advertised in question ... ... so until we have further info from OP: For a theoretical clean 3-int-tuple dump file this answer works (if not too big to load at once and map into a set). For the concrete task, I may update the answer if sufficient new info has been added to the question ;-) One way, if the triple "lines" are concat from previous stages with or without a newline separating, but alwayss missing the comma, to change the file reading part either: 1. into a line based reader (when newlines separate) and pull the set generation into the loop always making a union of the new harvested set with the existing (accumulating one) like `s = s | fresh` that is tackling them in "isolation" or if these "chunks" are added like so `(0, 1, 230)(13, ...` that is `)(` "hitting hard": 2. modify the existing code inside reader and instead of: `f_i.read().strip().split('),')` write `f_i.read().replace(')('), (', ').strip().split('),')` ... that is "fixing" the `)(`part into a `), (`part to be able to continue as if it would be a homogene "structure". **Update** now parsing the version 2 of the dataset (updated question): File `pixel_int_tuple_reps_v2.txt`now has: ``` (0, 16, 0) (0, 17, 0) (0, 18, 0) (0, 19, 0) (0, 20, 0) (0, 21, 0) (0, 22, 0) (0, 22, 1) (0, 22, 2) (0, 23, 0) (0, 23, 4) (0, 24, 0) (0, 25, 0) (0, 25, 1) (0, 26, 0) (0, 26, 3) (0, 26, 4) (0, 26, 5) (0, 26, 9) (0, 27, 0) (0, 27, 1) ``` The code: ``` #! 
/usr/bin/env python from __future__ import print_function infile = 'pixel_int_tuple_reps_v2.txt' on_set = set() with open(infile, 'rt') as f_i: for line in f_i.readlines(): rgb_line = line.strip().lstrip('(').rstrip(')') try: rgb = set([tuple(int(z.strip()) for z in rgb_line.split(', '))]) on_set = on_set.union(rgb) except: print("Ignored:" + rgb_line) pass print(len(on_set)) for rgb in sorted(on_set): print(rgb) ``` Now parses this file and first dumps the length of the set and (as is the elements in sorted order): ``` 21 (0, 16, 0) (0, 17, 0) (0, 18, 0) (0, 19, 0) (0, 20, 0) (0, 21, 0) (0, 22, 0) (0, 22, 1) (0, 22, 2) (0, 23, 0) (0, 23, 4) (0, 24, 0) (0, 25, 0) (0, 25, 1) (0, 26, 0) (0, 26, 3) (0, 26, 4) (0, 26, 5) (0, 26, 9) (0, 27, 0) (0, 27, 1) ``` HTH. Note that there are no duplicates in the provided sample input. Doubling the last data line I still rceived 21 unique elements as output, so I guess now it works as designed ;-)
You only need a small modification. You can try this.

```
om_set = set(eval(open('abc.txt').read()))
```

**Result**

```
{(0, 19, 0), (0, 20, 0), (0, 21, 1), (0, 22, 0), (0, 24, 3), (0, 27, 0), (0, 29, 2), (0, 35, 2)}
```

**Edit**

Here is how the code works in the `IPython` prompt.

```
In [1]: file_ = open('abc.txt')

In [2]: text_read = file_.read()

In [3]: print eval(text_read)
((0, 19, 0), (0, 20, 0), (0, 21, 1), (0, 22, 0), (0, 24, 3), (0, 27, 0), (0, 29, 2), (0, 35, 2), (0, 36, 1))

In [4]: type(eval(text_read))
Out[1]: tuple

In [5]: print set(eval(text_read))
set([(0, 27, 0), (0, 36, 1), (0, 21, 1), (0, 22, 0), (0, 24, 3), (0, 19, 0), (0, 35, 2), (0, 29, 2), (0, 20, 0)])
```
14,751
59,538,746
A use case of the `super()` builtin in python is to call an overridden method. Here is a simple example of using `super()` to call `Parent` class's `echo` function: ```py class Parent(): def echo(self): print("in Parent") class Child(Parent): def echo(self): super().echo() print("in Child") ``` I've seen code that passes 2 parameters to `super()`. In that case, the signature looks somehing like `super(subClass, instance)` where `subClass` is the sub class calling `super()` from, and `instance` is the instance the call being made from, ie `self`. So in the above example, the `super()` line would become: ```py super(Child, self).echo() ``` Looking at [python3 docs](https://docs.python.org/3/library/functions.html#super), these 2 use cases are the same when calling from inside of a class. Is calling `super()` with 2 parameters completely deprecated as of python3? If this is only deprecated for calling overridden functions, can you show an example why they're needed for other cases? I'm also interested to know why python needed those 2 arguments? Are they injected/evaluated when making `super()` calls in python3, or they're just not needed in that case?
2019/12/31
[ "https://Stackoverflow.com/questions/59538746", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3411556/" ]
If you don't pass the arguments, Python 3 makes an effort to provide them for you. It's a little kludgy, but it *usually* works. Essentially, it just assumes the first parameter to your method is `self` (the second argument to `super`), and when the class definition completes, it provides a virtual closure scope for any function that refers to `super` or `__class__` that defines `__class__` as the class you just defined, so no-arg `super()` can check the stack frame to find `__class__` as the first argument. This usually works, but there are cases where it doesn't: * In `staticmethod`s (since the first argument isn't actually the class), though `staticmethod`s aren't really supposed to participate in the inheritance hierarchy the same way, so that's not exactly unexpected. * In functions that take `*args` as the first argument (which was the only safe way to implement any method that accepted arbitrary keyword arguments like `dict` subclasses prior to 3.8, when they introduced positional-only arguments). * If the `super` call is in a nested scope, which can be implicit, e.g. a generator expression, or comprehension of any kind (`list`, `set`, `dict`), since the nested scope isn't directly attached to the class, so it doesn't get the `__class__` defining magic that methods attached to the class itself get. There are also rare cases where you might want to explicitly bypass a particular class for resolving the parent method, in which case you'd need to explicitly pass a different class from later in the MRO than the one it would pull by default. This is deep and terrible magic, so I don't recommend it, but I think I've had cause to do it once. **All of those are relatively rare cases, so most of the time, for code purely targeting Python 3, you're fine using no-arg `super`, and only falling back to explicitly passing arguments when you *can't* use it for whatever reason.** No-arg `super` is cleaner, faster, and during live development can be more correct (if you replace `MyClass`, instances of the old version of the class will have the old version cached and no-arg `super` will continue to work, where looking it up manually in your globals would find the new version of the class and explode), so use it whenever you can.
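A small illustrative sketch (the class names here are invented) of one such case — a comprehension inside a method — where the explicit two-argument form is still the way to go:

```
class Base:
    def value(self):
        return 1

class Child(Base):
    def values(self, n):
        # the comprehension runs in its own nested scope, which has no
        # implicit __class__ cell, so zero-arg super() would typically fail
        # here; spelling out super(Child, self) works fine
        return [super(Child, self).value() + i for i in range(n)]

print(Child().values(3))  # [1, 2, 3]
```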
In Python 3, `super()` with zero arguments is already the shortcut for `super(__class__, self)`. See [PEP3135](https://www.python.org/dev/peps/pep-3135/) for complete explanation. This is not the case for Python 2, so I guess that most code examples you found were actually written for Python 2 (or Python2+3 compatible)
14,752
65,784,777
I have a python script which pushes data to another system. If it cannot push data for whatever reason then it will exit with a non-zero status code; otherwise it will not. I am using my python script in my below shell script.

```
export NAME="${CI_COMMIT_REF_NAME//\//-}-1${CI_PIPELINE_IID}"

for environment in dev stage; do
  FILE_NAME="${NAME}-${environment}.tgz";
  python helper.py push ${environment} master ${FILE_NAME};
  python helper.py push ${environment} slave ${FILE_NAME};
done
```

As of now, if any one python line fails for any one environment then it doesn't move forward to the next iteration at all (or if the first python line fails then it doesn't run the second python line), but I want to run all iterations of my for loop and also run the second python line if the first one fails for whatever reason. Is there any way I can run all iterations of my for loop irrespective of whether my python script fails for any environment or not, also run the second python line if the previous one fails, and then at the end fail if any exit status code was non-zero (after running everything)? Is this possible to do? Basically I somehow need to store the exit codes of all the possible python line executions (total 4) and then exit at the end later on.
2021/01/19
[ "https://Stackoverflow.com/questions/65784777", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14431930/" ]
Use a conditional operator to set a variable. ``` failed=false for environment in dev stage; do FILE_NAME="${NAME}-${environment}.tgz"; python helper.py push ${environment} master "${FILE_NAME}" || failed=true python helper.py push ${environment} slave "${FILE_NAME}" || failed=true done if [ "$failed" = true ] then exit 1 fi ``` BTW, you probably don't need to export `NAME`. Exporting is only needed if a child process uses the variable, not for variables uses within the script itself.
> 
> Basically somehow I need to store exit code of all the possible python line
> 
> 

Try this

```
declare -A result
for env in dev stage; do
    FILE_NAME="$NAME-$env.tgz"
    # use distinct keys per run so the slave result does not overwrite the master result
    python helper.py push "$env" master "$FILE_NAME"; result["$FILE_NAME-master"]=$?
    python helper.py push "$env" slave "$FILE_NAME"; result["$FILE_NAME-slave"]=$?
done

for key in "${!result[@]}"; do
    echo "Exit code for $key is: ${result[$key]}"
    ((err+=${result[$key]}))
done

exit $err
```
14,753
44,771,837
I looked at some answers, including [this](https://stackoverflow.com/questions/37457277/remove-non-ascii-characters-from-csv-file-using-python) but none seem to answer my question. Here are some example lines from CSV: ``` _id category ObjectId(56266da778d34fdc048b470b) [{"group":"Home","id":"53cea0be763f4a6f4a8b459e","name":"Cleaning Services","name_singular":"Cleaning Service"}] ObjectId(56266e0c78d34f22058b46de) [{"group":"Local","id":"5637a1b178d34f20158b464f","name":"Balloon Dí©cor","name_singular":"Balloon Dí©cor"}] ``` Here is my code: ``` import csv import sys from sys import argv import json def ReadCSV(csvfile): with open('newCSVFile.csv','wb') as g: filewriter = csv.writer(g) #, delimiter=',', quotechar='|', quoting=csv.QUOTE_MINIMAL) with open(csvfile, 'rb') as f: reader = csv.reader(f) # ceate reader object next(reader) # skip first row for row in reader: #go trhough all the rows listForExport = [] #initialize list that will have two items: id and list of categories # ID section vendorId = str(row[0]) #pull the raw vendor id out of the first column of the csv vendorId = vendorId[9:33] # slice to remove objectdId lable and parenthases listForExport.append(vendorId) #add evendor ID to first item in list # categories section tempCatList = [] #temporarly list of categories for scond item in listForExport #this is line 41 where the error stems categories = json.loads(row[1]) #create's a dict with the categoreis from a given row for names in categories: # loop through the categorie names using the key 'name' print names['name'] ``` Here's what I get: ``` Cleaning Services Traceback (most recent call last): File "csvtesting.py", line 57, in <module> ReadCSV(csvfile) File "csvtesting.py", line 41, in ReadCSV categories = json.loads(row[1]) #create's a dict with the categoreis from a given row File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 338, in loads return _default_decoder.decode(s) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 366, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 382, in raw_decode obj, end = self.scan_once(s, idx) UnicodeDecodeError: 'utf8' codec can't decode bytes in position 9-10: invalid continuation byte ``` So the code pulls out the fist category `Cleaning Services`, but then fails when we get to the non ascii characters. How do I deal with this? I'm happy to just remove any non-ascii items.
2017/06/27
[ "https://Stackoverflow.com/questions/44771837", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1472743/" ]
As you open the input csv file in `rb` mode, I assume that you are using a Python 2.x version. The good news is that you have no problem in the csv part, because the csv reader will read plain bytes without trying to interpret them. But the `json` module will insist on decoding the text into unicode, and by default it uses utf8. As your input file is not utf8 encoded, it chokes and raises a UnicodeDecodeError.

Latin1 has a nice property: the unicode value of any byte is just the value of the byte, so you are sure to decode anything - whether it then makes sense depends on the actual encoding being Latin1...

So you could just do:

```
categories = json.loads(row[1], encoding="Latin1")
```

Alternatively, if you want to ignore non ascii characters, you could first convert the byte string to unicode ignoring errors and only then load the json:

```
categories = json.loads(row[1].decode(errors='ignore'))  # ignore all non ascii characters
```
Most probably you have certain non-ascii characters in your csv content. ``` import re def remove_unicode(text): if not text: return text if isinstance(text, str): text = str(text.decode('ascii', 'ignore')) else: text = text.encode('ascii', 'ignore') remove_ctrl_chars_regex = re.compile(r'[^\x20-\x7e]') return remove_ctrl_chars_regex.sub('', text) ... vendorId = remove_unicode(row[0]) ... categories = json.loads(remove_unicode(row[1])) ```
14,754
36,222,454
Trying to solve a problem I asked earlier that couldn't be done with postgres sql query. So I moved on to trying to find another way to do it. Essentially - what I have directory lets call it **server** that has multiple CSV files in it with the UUID as the name of the csv. ``` localhost Server]$ tree . ├── 503336947449727e6f99de97c0a22a98.csv ├── 503340d499169677a0ad8c97f4c75a6d.csv ├── 5033b53f9462e04e4665f7b18993193d.csv └── 529993499c47a442cab8a6ccba00dee4.csv ``` Inside that CSV looks like the following: ``` 2016-03-24 20:04:00,0.405 2016-03-24 20:05:00,0.769 2016-03-24 20:06:00,1.217 2016-03-24 20:07:00,1.355 2016-03-24 20:08:00,0.369 2016-03-24 20:09:00,0.338 2016-03-24 20:10:00,4.443 2016-03-24 20:11:00,1.195 2016-03-24 20:12:00,0.342 2016-03-24 20:13:00,0.351 2016-03-24 20:14:00,0.646 2016-03-24 20:15:00,0.879 ``` Now I am trying to do a couple of things. First take a reference file that has all of the server names and UUID mappings in it. servers.csv contains the following: ``` server3,50337efc58f19945205d89e9e5a8a3c1 desktop1,503336947449727e6f99de97c0a22a98 serv4,50330e69efa7c4470061855358d11610 server02,52f7df2e6641e211a33f7ff1ffd95514 small-8k,5033b53f9462e04e4665f7b18993193d small-9k,5033af5b3616a679d20abe9001a7e897 large-64k,5033009b928e1903e3a39ae78a8e2d25 ``` Ideally what i need to do is read the servers.csv file into an array then search through the folder and rename files to match the server name. So an example would be as follows: ``` localhost Server]$ tree . ├── desktop1.csv ├── 503340d499169677a0ad8c97f4c75a6d.csv ├── small-8k.csv └── 529993499c47a442cab8a6ccba00dee4.csv ``` Additionally - I need to add the headers to each file as the first row to look like this date,server. So ideally the newly renamed CSV example desktop1.csv would look like the following: Inside that CSV looks like the following: ``` date,desktop1 2016-03-24 20:04:00,0.405 2016-03-24 20:05:00,0.769 2016-03-24 20:06:00,1.217 2016-03-24 20:07:00,1.355 2016-03-24 20:08:00,0.369 2016-03-24 20:09:00,0.338 2016-03-24 20:10:00,4.443 2016-03-24 20:11:00,1.195 2016-03-24 20:12:00,0.342 2016-03-24 20:13:00,0.351 2016-03-24 20:14:00,0.646 2016-03-24 20:15:00,0.879 ``` I have been struggling with this for a couple days... trying to develop which language would be the easiest to do from shell. My guess is a combination of awk and sed will get me there, but struggling with both. I started researching on python which is a possible solution that could make the renaming possible with glob and all the files renamed. However not versed in python. I was able to clean up some of the files that were part of the servers.csv file. ``` cut -d, -f1-2 VMInfo.csv | awk 'BEGIN{FS=OFS=","} {for (i=2;i<=NF;i++) gsub(/\-/,"",$i)} 1' | sed 's/"//g' > servers.csv ``` Any help would be greatly appreciated. UPDATE - @Ed - this is what the output looks like for me. 
``` localhost output]$ sh -x testin.sh + mv server server.bk + mkdir server + awk -F, ' NR==FNR { map[$2]=$1; next } FNR==1 { close(out); out="server/"map[FILENAME]".csv"; print "date,"map[FILENAME] > out } { print > out } ' servers.csv server.bk/503336947449727e6f99de97c0a22a98.csv server.bk/503340d499169677a0ad8c97f4c75a6d.csv server.bk/50335a21c2507fc702864fa9ee7e2563.csv server.bk/50335ab3ab5411f88b77900736338bc6.csv server.bk/50338e29d3414fc4c04baa95772e8454.csv server.bk/5033c14e463120a8dcace7baaee17577.csv server.bk/5033c52713310df05c3ab04f6c4cf293.csv server.bk/5033d82b24982b4a8ac9fd73ec1880f7.csv server.bk/5033d9951846c1841437b437f5a97f0a.csv server.bk/5033db62b38f86605f0baeccae5e6cbc.csv server.bk/5033dc788480a7eab4fd0a586477f856.csv server.bk/5033f3c162b5e0e3bd01db1b3faa542d.csv server.bk/529993499c47a442cab8a6ccba00dee4.csv ``` Update @Ed - Still running into the same thing. ``` [localhost output]$ sh -x testin.sh + mv server server.bk + mkdir server + awk -F, ' NR==FNR { map[$2".csv"]=$1; next } FNR==1 { close(out); out="server/"map[FILENAME]".csv"; print "date,"map[FILENAME] > out } { print > out } ' servers.csv server.bk/503336947449727e6f99de97c0a22a98.csv server.bk/503340d499169677a0ad8c97f4c75a6d.csv server.bk/50335a21c2507fc702864fa9ee7e2563.csv server.bk/50335ab3ab5411f88b77900736338bc6.csv server.bk/50338e29d3414fc4c04baa95772e8454.csv server.bk/5033c14e463120a8dcace7baaee17577.csv server.bk/5033c52713310df05c3ab04f6c4cf293.csv server.bk/5033d82b24982b4a8ac9fd73ec1880f7.csv server.bk/5033d9951846c1841437b437f5a97f0a.csv server.bk/5033db62b38f86605f0baeccae5e6cbc.csv server.bk/5033dc788480a7eab4fd0a586477f856.csv server.bk/5033f3c162b5e0e3bd01db1b3faa542d.csv server.bk/529993499c47a442cab8a6ccba00dee4.csv [localhost output]$ cat servers.csv | grep 5033f3c162b5e0e3bd01db1b3faa542d vpool02,5033f3c162b5e0e3bd01db1b3faa542d ``` It doesn't seem to be renaming the file to vpool02.csv
2016/03/25
[ "https://Stackoverflow.com/questions/36222454", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5992247/" ]
It sounds like you need something like this (untested): ``` mv server server.bk && mkdir server && awk -F, ' NR==FNR { map["server.bk/"$2".csv"]=$1; next } FNR==1 { close(out); out="server/"map[FILENAME]".csv"; print "date,"map[FILENAME] > out } { print > out } ' servers.csv server.bk/*.csv ``` At the end of running the above, your original CSV files will be in the directory named "server.bk" which you can remove if you like by adding `&& rm -rf server.bk` at the end so it's only removed if the awk script succeeded. If you're considering using a shell loop for this, then read <https://unix.stackexchange.com/questions/169716/why-is-using-a-shell-loop-to-process-text-considered-bad-practice> first.
Here's something I threw together in Python: ``` import csv import os import sys # This is mostly for convenience. Python convention is that all caps # are "constants", but there's nothing that enforces that FILEPATH = 'servers' SERVER_FILE = sys.argv[1] if len(sys.argv) > 1 else 'servers.csv' with open(SERVER_FILE) as f: reader = csv.reader(f) for row in reader: # This is called unpacking, the csv reader will read # row as a list of 2 elements, since that's how many you # have in your csv file. If you change your server.csv file # format, this line will have to change. name, uuid = row try: # Python needs the `\` line continuation marker to let this # go over two lines. Note that there can be *no* spaces after # the `\` with open(os.path.join(FILEPATH, uuid+'.csv')) as infile,\ open(os.path.join(FILEPATH, name+'.csv'), 'w') as outfile: # Note that I'm not creating a csv reader or writer here, # because I'm assuming that there is no comma in the server # name. If there is, you'd want to create a writer just to # avoid having to manually escape the field. outfile.write('date,'+name+'\n') # After writing the header we can just write the contents # of the csv file outfile.write(infile.read()) # And now we delete the old file that has the wrong name os.unlink(infile.name) except FileNotFoundError: # Technically this exception could also be raised if # the path you were writing to did not exist, but # since we're writing to the same directory, this should be fine. print('Warning, no {}.csv file found'.format(uuid)) ```
14,755
41,237,314
I have a python function (**pyfunc**): ``` def pyfunc(x): ... return someString ``` I want to apply this function to every item in a mysql table column, something like: ``` UPDATE tbl SET mycol=pyfunc(mycol); ``` This update includes tens of millions of records. Is there an efficient way to do this? **Note: I cannot rewrite this function in sql or any other programming language.**
2016/12/20
[ "https://Stackoverflow.com/questions/41237314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7319653/" ]
Even simpler (but for PHP 5.5 and PHP 7):

```
$numery = array_column( $command->queryAll(), 'phone_number' );
```
Use the loop below to get the desired result:

```
$numery = $command->queryAll();
$number_arr = array();

foreach($numery as $number)
{
    array_push($number_arr,$number['phone_number']);
}

print_r($number_arr);
```
14,757
72,274,548
When I run any kubectl command I get following WARNING: ``` W0517 14:33:54.147340 46871 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead. To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke ``` I have followed the instructions in [the link](https://cloud.google.com/blog/products/containers-kubernetes) several times but the WARNING keeps appearing making kubectl output uncomfortable to read. OS: ``` cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=22.04 DISTRIB_CODENAME=jammy DISTRIB_DESCRIPTION="Ubuntu 22.04 LTS" ``` kubectl version: ``` Client Version: v1.24.0 Kustomize Version: v4.5.4 ``` gke-gcloud-auth-plugin: ``` Kubernetes v1.23.0-alpha+66064c62c6c23110c7a93faca5fba668018df732 ``` gcloud version: ``` Google Cloud SDK 385.0.0 alpha 2022.05.06 beta 2022.05.06 bq 2.0.74 bundled-python3-unix 3.9.12 core 2022.05.06 gsutil 5.10 ``` I "login" with: ``` gcloud init ``` and then: ``` gcloud container clusters get-credentials cluster_name --region my-region ``` finally: ``` myyser@mymachine:/$ k get pods -n madeupns W0517 14:50:10.570103 50345 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead. To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke No resources found in madeupns namespace. ``` **How can I remove the WARNING or fix the problem?** Removing my `.kube/config` and re-running get-credentials didn't work.
2022/05/17
[ "https://Stackoverflow.com/questions/72274548", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1869399/" ]
I fixed this problem by adding the correct export in `.bashrc`

```
export USE_GKE_GCLOUD_AUTH_PLUGIN=True
```

After sourcing `.bashrc` with `. ~/.bashrc` and reloading the cluster config with:

```
gcloud container clusters get-credentials clustername
```

the warning disappeared:

```
user@laptop:/$ k get svc -A
NAMESPACE     NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP
kube-system   default-http-backend   NodePort    10.10.13.157   <none>
kube-system   kube-dns               ClusterIP   10.10.0.10     <none>
kube-system   kube-dns-upstream      ClusterIP   10.10.13.92    <none>
kube-system   metrics-server         ClusterIP   10.10.2.191    <none>
```
Got a similar issue while connecting to a fresh Kubernetes cluster with version `v1.22.10-gke.600`

```
gcloud container clusters get-credentials my-cluster --zone europe-west6-b --project project
```

and got the below error; it seems this has now become an error for the newer version

```
Fetching cluster endpoint and auth data.
CRITICAL: ACTION REQUIRED: gke-gcloud-auth-plugin, which is needed for continued use of kubectl, was not found or is not executable. Install gke-gcloud-auth-plugin for use with kubectl by following https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
```

[![enter image description here](https://i.stack.imgur.com/pPp6N.png)](https://i.stack.imgur.com/pPp6N.png)

**fix** that worked for me

```
gcloud components install gke-gcloud-auth-plugin
```
14,758
56,335,217
I have already installed MySQL 5.1 on my Windows 10 machine, and I can connect to MySQL from python by:

```
import pymysql
conn=pymysql.connect(host='localhost',user='root',password='MYSQLTB',db='shfuture')
```

Then I downloaded the Django framework and tried to use it to connect to MySQL. What I do is: create a my.cnf file whose content is:

```
[client]
database = shfuture
host = localhost
user = root
password = MYSQLTB
default-character-set = utf8
```

change settings.py to:

```
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'OPTIONS': {
            'read_default_file': os.path.join(BASE_DIR, 'my.cnf'),
        },
    }
}
```

then run:

```
python manage.py runserver
```

but got an error:

```
django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module. Did you install the MySQL client?
```

Do I still need to install an additional MySQL in the Django virtual env, or can I use the existing MySQL instead?
2019/05/28
[ "https://Stackoverflow.com/questions/56335217", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2031764/" ]
Yes, of course. You need to install the 'mysqlclient' package or the 'mysql-connector-python' package with pip.
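For example (assuming the virtual environment is activated), something like:

```
pip install mysqlclient
```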
I guess you don't have the `mysqlclient` Python library installed in the virtual environment. Since you are using Windows, you need to download and install the `mysqlclient` Python library from [here](https://www.lfd.uci.edu/~gohlke/pythonlibs/#mysqlclient).
14,760
36,188,632
This code works on the command line.

```
python -c 'import base64,sys; u,p=sys.argv[1:3]; print base64.encodestring("%s\x00%s\x00%s" % (u,u,p))' user pass
```

output is dXNlcgB1c2VyAHBhc3M=

I am trying to get this to work in my script

```
test = base64.encodestring("{0}{0}{1}").format(acct_name,pw)
print test
```

output is ezB9ezB9ezF9

Anyone know what I am doing wrong? Thank you.
2016/03/23
[ "https://Stackoverflow.com/questions/36188632", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3954080/" ]
You have a mistake in the parentheses. Instead of:

```
test = base64.encodestring("{0}{0}{1}").format(acct_name,pw)
```

(which first encodes "{0}{0}{1}" in base64 and **then** tries to substitute variables using `format`), you should have

```
test = base64.encodestring("{0}{0}{1}".format(acct_name,pw))
```

(which first substitutes variables using `format` and **then** encodes in base64).
Thanks SZYM, I am all set. This is the code that gets it to work:

```
test = base64.encodestring("{0}\x00{0}\x00{1}".format(acct_name,pw))
```

Turns out the hex \x00 is needed so the program getting the hash knows where the username stops and the password begins. -ALF
14,768
14,800,708
I am using the PyScripter integrated development environment and taking courses using Python 2.7. Why does `number = input("some input text")` immediately display the python input dialog when the program is run? Wouldn't we have to execute it? Because really, it's just setting a variable to a python input. It never says to execute it? Is `number` not just any variable? There's a mini-forum on the site that I go to, but I have not received an answer there in 5 days, so I came here.
2013/02/10
[ "https://Stackoverflow.com/questions/14800708", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2059278/" ]
Indeed, `number` is a variable and nothing more. See the [documentation on input()](http://docs.python.org/release/3.2/library/functions.html#input).
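What actually triggers the dialog is not the variable but the call on the right-hand side: module-level statements run as soon as the program starts, so the `input()` call executes (and waits for input) the moment that line is reached. A tiny sketch:

```
# this assignment line runs when the script starts, so the prompt
# appears immediately; `number` just receives whatever input() returns
number = input("some input text")
print(number)
```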
Python is just kind of a simple language; it does not need variable declarations, for example. But it's better that it automatically asks for your input than requiring you to write the code to start the variable yourself.
14,769
69,633,739
I am pretty new to python, coming from Java and I want to update a variable in an initialized class This is my full code ``` import datetime import time import threading from tkinter import * from ibapi.client import EClient, TickAttribBidAsk from ibapi.wrapper import EWrapper, TickTypeEnum from ibapi.contract import Contract class TestApp(EWrapper, EClient): def __init__(self): EClient.__init__(self, self) def tickPrice(self, reqId, tickType, price, attrib): print("Tick price. Ticker Id:", reqId, "tickType:", TickTypeEnum.to_str(tickType), "Price:", price, end=' ') def tickByTickBidAsk(self, reqId: int, time: int, bidPrice: float, askPrice: float, bidSize: int, askSize: int, tickAttribBidAsk: TickAttribBidAsk): print(bidPrice) tkinterApp.price1 = bidPrice class Application: def runTest(self): app = TestApp() app.connect("127.0.0.1", 7497, 0) contract = Contract() contract.symbol = "PROG" contract.secType = "STK" contract.currency = "USD" contract.exchange = "SMART" contract.primaryExchange = "NASDAQ" time.sleep(1) app.reqMarketDataType(1) app.reqTickByTickData(19003, contract, "BidAsk", 0, True) app.run() def __init__(self): t = threading.Thread(target=self.runTest) t.start() self.runTest() class TkinterClass: ibkrConnection = Application() root = Tk() root.title("test") root.grid_columnconfigure((0, 1), weight=1) titleTicker = Label(root, text="TICKER", bg='black', fg='white', width=100) titleRating = Label(root, text="PRICE", bg='black', fg='white', width=100) ticker1 = Label(root, text="PROG", bg='black', fg='white', width=100) price1 = Label(root, text=0, bg='black', fg='white', width=100) # To be changed with every tick titleTicker.grid(row=1, column=1) titleRating.grid(row=1, column=2) ticker1.grid(row=2, column=1) price1.grid(row=2, column=2) root.mainloop() tkinterApp = TkinterClass() ``` The def `tickByTickBidAsk` is a callback and is called every ~2 sec I want to update the `price1` variable in the class `TkinterClass`, but when I try to execute my code, the line `tkinterApp.price1 = bidPrice` gives me a name error: `TkinterClass is not defined` This is probably a noob mistake I know :)
2021/10/19
[ "https://Stackoverflow.com/questions/69633739", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12496189/" ]
I played with tk a few years ago, this is how I structured my code. I make a tkinter window and connect to TWS from the tkinter class. ``` from tkinter import * import threading from ibapi import wrapper from ibapi.client import EClient from ibapi.utils import iswrapper #just for decorator from ibapi.common import * from ibapi.ticktype import * from ibapi.contract import Contract class TkWdow(): def __init__(self): root = Tk() frame = Frame(root) frame.pack() button = Button(frame, text="START", fg="green", command=self.start) button.pack(side=LEFT) button = Button(frame, text="ReqData", command=self.reqData) button.pack(side=LEFT) button = Button(frame, text="QUIT", fg="red", command=self.quit) button.pack(side=LEFT) self.output = Text(root, height=50, width=100) self.output.pack(side=BOTTOM) self.log("This is where output goes") root.mainloop() #root.destroy() def start(self): self.client = TestApp(self) self.log("starting") self.client.connect("127.0.0.1", 7497, clientId=123) thread = threading.Thread(target = self.client.run) thread.start() def log(self, *args): for s in args: self.output.insert(END, str(s) + " ") self.output.insert(END, "\n") def quit(self): self.log("quitting") self.client.disconnect() def reqData(self): self.log("reqData") cont = Contract() cont.symbol = "cl" cont.secType = "FUT" cont.currency = "USD" cont.exchange = "nymex" cont.lastTradeDateOrContractMonth = "202112" self.client.reqMktData(1, cont, "233", False, False, None) def cancelMktData(self, reqId:TickerId): super().cancelMktData(reqId) self.log('sub cancel') class TestApp(wrapper.EWrapper, EClient): def __init__(self, wdow): self.wdow = wdow wrapper.EWrapper.__init__(self) EClient.__init__(self, wrapper=self) @iswrapper def nextValidId(self, orderId:int): self.wdow.log("setting nextValidOrderId: " , orderId) self.nextValidOrderId = orderId @iswrapper def error(self, reqId:TickerId, errorCode:int, errorString:str): self.wdow.log("Error. Id: " , reqId, " Code: " , errorCode , " Msg: " , errorString) @iswrapper def tickString(self, reqId:TickerId, tickType:TickType, value:str): if tickType == TickTypeEnum.RT_VOLUME: self.wdow.log(value)#price,size,time #if __name__ == "__main__": TkWdow() ```
It would probably help if you did something like this:

```
class TkinterClass:
    def __init__(self):
        self.ibkrConnection = Application()

        self.root = Tk()
        self.root.title("test")
        self.root.grid_columnconfigure((0, 1), weight=1)

        self.titleTicker = Label(self.root, text="TICKER", bg='black', fg='white', width=100)
        self.titleRating = Label(self.root, text="PRICE", bg='black', fg='white', width=100)
        self.ticker1 = Label(self.root, text="PROG", bg='black', fg='white', width=100)
        self.price1 = Label(self.root, text=0, bg='black', fg='white', width=100)  # To be changed with every tick

        self.titleTicker.grid(row=1, column=1)
        self.titleRating.grid(row=1, column=2)
        self.ticker1.grid(row=2, column=1)
        self.price1.grid(row=2, column=2)

    def run(self):
        self.root.mainloop()

tkinterApp = TkinterClass()
tkinterApp.run()
```

However, there are still issues:

1. Overwriting the `tkinterApp.price1` `Label` with a number value

To set the label, use `tkinterApp.price1.config(text=str(value))`, or use a `tkinter` `StringVar` to store the `price1` text and use that `StringVar` as the `Label` value.

2. Using the `tkinterApp.price1` variable directly in two threads

Tk is likely to be unhappy if you muck about with Tk variables from a background thread. I'd suggest getting some kind of timer running in the foreground thread and polling a variable updated in the background, so you're only updating the Tk variable from the foreground. Use `root.after(ms, callback)` to schedule a callback in the foreground (before invoking `root.mainloop()`).

I don't believe a `threading.Lock` is required when reading a Python value updated in another thread, but it would be safer to add an `acquire()`/`release()` around both the update and access logic to be sure.
14,770
56,697,108
I am trying to read a shapefile using geopandas, for which I used `gp.read_file` ``` import geopandas as gp fl="M:/rathore/vic_5km/L2_data/L2_data/DAMSELFISH_distributions.shp" data=gp.read_file(fl) ``` I am getting the following error: `TypeError: invalid path: UnparsedPath(path='M:/rathore/vic_5km/L2_data/L2_data/DAMSELFISH_distributions.shp')` The traceback to the problem is: ``` ----> 1 data=gp.read_file(fl) c:\python27\lib\site-packages\geopandas\io\file.pyc in read_file(filename, bbox, **kwargs) 75 76 with fiona_env(): ---> 77 with reader(path_or_bytes, **kwargs) as features: 78 79 # In a future Fiona release the crs attribute of features will c:\python27\lib\site-packages\fiona\fiona\env.pyc in wrapper(*args, **kwargs) 395 def wrapper(*args, **kwargs): 396 if local._env: --> 397 return f(*args, **kwargs) 398 else: 399 if isinstance(args[0], str): c:\python27\lib\site-packages\fiona\__init__.pyc in open(fp, mode, driver, schema, crs, encoding, layer, vfs, enabled_drivers, crs_wkt, **kwargs) 255 if mode in ('a', 'r'): 256 c = Collection(path, mode, driver=driver, encoding=encoding, --> 257 layer=layer, enabled_drivers=enabled_drivers, **kwargs) 258 elif mode == 'w': 259 if schema: c:\python27\lib\site-packages\fiona\fiona\collection.pyc in __init__(self, path, mode, driver, schema, crs, encoding, layer, vsi, archive, enabled_drivers, crs_wkt, ignore_fields, ignore_geometry, **kwargs) 54 55 if not isinstance(path, (string_types, Path)): ---> 56 raise TypeError("invalid path: %r" % path) 57 if not isinstance(mode, string_types) or mode not in ('r', 'w', 'a'): 58 raise TypeError("invalid mode: %r" % mode) TypeError: invalid path: UnparsedPath(path='M:/rathore/vic_5km/L2_data/L2_data/DAMSELFISH_distributions.shp') ``` There is some problem with fiona I guess but I do not have much idea about. I have installed `fiona 1.8.6` and `geopandas 0.5.0` version installed in my system. I am using python 2.7
2019/06/21
[ "https://Stackoverflow.com/questions/56697108", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10705248/" ]
After many tries, I created an [issue](https://issuetracker.google.com/issues/135865377) on the Google Issue Tracker under this [component](https://issuetracker.google.com/issues?q=componentid:409906) and also submitted the code sample to the team. They replied:

> Your Worker is package protected, and hence we cannot instantiate it using the default WorkerFactory.

If you looked at Logcat, you would see something like:

```
2019-06-24 10:49:18.501 14687-14786/com.example.workmanager.periodicworksample E/WM-WorkerFactory: Could not instantiate com.example.workmanager.periodicworksample.MyWorker
java.lang.IllegalAccessException: java.lang.Class<com.example.workmanager.periodicworksample.MyWorker> is not accessible from java.lang.Class<androidx.work.WorkerFactory>
 at java.lang.reflect.Constructor.newInstance0(Native Method)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:334)
 at androidx.work.WorkerFactory.createWorkerWithDefaultFallback(WorkerFactory.java:97)
 at androidx.work.impl.WorkerWrapper.runWorker(WorkerWrapper.java:228)
 at androidx.work.impl.WorkerWrapper.run(WorkerWrapper.java:127)
 at androidx.work.impl.utils.SerialExecutor$Task.run(SerialExecutor.java:75)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1162)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:636)
 at java.lang.Thread.run(Thread.java:764)
2019-06-24 10:49:18.501 14687-14786/com.example.workmanager.periodicworksample E/WM-WorkerWrapper: Could not create Worker com.example.workmanager.periodicworksample.MyWorker
```

> Your Worker needs to be public

By making my Worker class public, I resolved the issue.

Reference to Google's reply on the issue: <https://issuetracker.google.com/issues/135865377#comment4>
This seems something similar to what has already been reported on some devices from this OEM. Here's a similar bug on [WorkManager's issuetracker](https://issuetracker.google.com/113676489), there's not much that WorkManager can do in these cases. As commented in this bug: > > ...if a device manufacturer has decided to modify stock Android to force-stop the app, WorkManager will stop working (as will JobScheduler, alarms, broadcast receivers, etc.). There is no way to work around this. Some device manufacturers do this, unfortunately, so in those cases WorkManager will stop working until the next time the app is launched. > > > Your best option is to open a [new issue](https://issuetracker.google.com/issues/new?component=409906&template=1094197) adding some details and possibly a small sample to reproduce the issue.
14,771
6,548,996
Even though I write in Python, I think the abstract concept is more interesting to me and others. So pseudocode please, if you like :)

I have a list with items from one of my classes. Let's do it with strings and numbers here, it really doesn't matter. It's nested to any depth. (It's not really a list but a container class which is based on a list.)

**Example**: *[1, 2, 3, ['a', 'b', 'c'], 4, ['d', 'e', [100, 200, 300]], 5, ['a', 'b', 'c'], 6]*

Note that both ['a', 'b', 'c'] are really the same container. If you change one you change the other. The containers and items can be edited, items inserted and, most importantly, containers can be used multiple times. To avoid redundancy it's not possible to flatten the list (I think!) because you lose the ability to insert items in one container and have them automatically appear in all other containers.

**The Problem:** For the frontend (just the command line with the Python "cmd" module) I want to navigate through this structure with a cursor which always points to the current item so it can be read or edited. The cursor can go left and right (from the user's point of view) and should behave like the list is not a nested list but a flat one. For a human this is super easy to do. You just pretend that the sublists in the list above don't exist and simply go from left to right and back.

For example, if you are at the position of "3" in the list above and go right, you get 'a' as the next item, then 'b', 'c', and then "4" etc. Or if you go right from the "300" you get the "5" next. And backwards: if you go left from "6" the next is 'c'. If you go left from "5" it's "300".

So how do I do that in principle? I have one approach here but it's wrong and the question is already so long that I fear most people will not read it :(. I can post it later.

P.S. Even if it's hard to resist: the answer to this question is not "Why do you want to do this, why do you organize your data this way, why don't you [flatten the list | something out of my imagination] first?" The problem is exactly what I've described here, nothing else. The data is structured this way by the nature of the problem.
2011/07/01
[ "https://Stackoverflow.com/questions/6548996", "https://Stackoverflow.com", "https://Stackoverflow.com/users/824997/" ]
One solution would be to store current index and/or depth information and use it to traverse the nested list. But that seems like a solution that would do a lot of complicated forking -- testing for ends of lists, and so on. Instead, I came up with a compromise. Instead of flattening the list of lists, I created a generator that creates a flat list of indices into the list of lists: ``` def enumerate_nested(nested, indices): for i, item in enumerate(nested): if isinstance(item, collections.Iterable) and not isinstance(item, basestring): for new_indices in enumerate_nested(item, indices + (i,)): yield new_indices else: yield indices + (i,) ``` Then a simple function that extracts an innermost item from the list of lists based on an index tuple: ``` def tuple_index(nested_list, index_tuple): for i in index_tuple: nested_list = nested_list[i] return nested_list ``` Now all you have to do is traverse the flat index list, in whatever way you like. ``` >>> indices = list(enumerate_nested(l, tuple())) >>> print l [1, 2, 3, ['a', 'b', 'c'], 4, ['d', 'e', [100, 200, 300]], 5, ['a', 'b', 'c'], 6] >>> for i in indices: ... print tuple_index(l, i), ... 1 2 3 a b c 4 d e 100 200 300 5 a b c 6 ``` --- Since this answer was accepted for the stack-based solution that I posted on ideone in the comments, and since it's preferable not to use external pastebins for answer code, please note that [this answer](https://stackoverflow.com/questions/6558365/convert-list-of-positions-4-1-2-of-arbitrary-length-to-an-index-for-a-nested/6618179#6618179) also contains my stack-based solution.
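To tie this back to the cursor in the question, here is a hedged sketch of how a left/right cursor could sit on top of that flat index list. It assumes the `enumerate_nested` and `tuple_index` helpers defined above, and its index list would need recomputing whenever items are inserted or removed:

```
class FlatCursor:
    """Walk a nested list as if it were flat, using precomputed index tuples."""

    def __init__(self, nested):
        self.nested = nested
        self.indices = list(enumerate_nested(nested, tuple()))
        self.pos = 0

    def current(self):
        # Resolve the current index tuple back to the underlying item.
        return tuple_index(self.nested, self.indices[self.pos])

    def right(self):
        if self.pos < len(self.indices) - 1:
            self.pos += 1
        return self.current()

    def left(self):
        if self.pos > 0:
            self.pos -= 1
        return self.current()
```

Note that a shared sub-container appears once per occurrence in the flattened index list, which matches the traversal described in the question.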
Essentially I would base my own solution on recursion. I would extend the container class with the following:

1. `cursor_position` - Property that stores the index of the highlighted element (or the element that contains the element that contains the highlighted element, or any level of recursion beyond that).
2. `repr_with_cursor` - This method should return a printable version of the container's content, already highlighting the item currently selected.
3. `mov_right` - This method should be invoked when the cursor moves right. Returns the new index of the cursor within the element, or `None` if the cursor falls "outside" the current container (if you move past the last element in the container).
4. `mov_left` - Idem, but towards the left.

The way the recursion should work is that for each method, depending on the *type* of the highlighted item, you should have two different behaviours:

* if the cursor is on a **container** it should invoke the method of the "pointed" container.
* if the cursor is on a **non-container** it should perform the 'real thing'.

**EDIT**

I had a spare half an hour so I threw together an example class that implements my idea. It's not feature complete (for example it doesn't handle well when it reaches either end of the largest container, and requires each instance of the class to be used only once in the largest sequence) but it works enough to demonstrate the concept.

**I shall repeat before people comment on that: this is proof-of-concept code, it's not in any way ready to be used!**

```
#!/usr/bin/env python
# -*- coding: utf-8 -*-

class C(list):

    def __init__(self, *args):
        self.cursor_position = None
        super(C, self).__init__(*args)

    def _pointed(self):
        '''Return currently pointed item'''
        if self.cursor_position == None:
            return None
        return self[self.cursor_position]

    def _recursable(self):
        '''Return True if pointed item is a container [C class]'''
        return (type(self._pointed()) == C)

    def init_pointer(self, end):
        '''
        Recursively set the pointers of containers in a way to point
        to the first non-container item of the nested hierarchy.
        '''
        assert end in ('left', 'right')
        val = 0 if end == 'left' else len(self)-1
        self.cursor_position = val
        if self._recursable():
            self._pointed().init_pointer(end)

    def repr_with_cursor(self):
        '''
        Return a representation of the container with highlighted item.
        '''
        composite = '['
        for i, elem in enumerate(self):
            if type(elem) == C:
                composite += elem.repr_with_cursor()
            else:
                if i != self.cursor_position:
                    composite += str(elem)
                else:
                    composite += '**' + str(elem) + '**'
            if i != len(self)-1:
                composite += ', '
        composite += ']'
        return composite

    def mov_right(self):
        '''
        Move pointer to the right.
        '''
        if self._recursable():
            if self._pointed().mov_right() == -1:
                if self.cursor_position != len(self)-1:
                    self.cursor_position += 1
        else:
            if self.cursor_position != len(self)-1:
                self.cursor_position += 1
                if self._recursable():
                    self._pointed().init_pointer('left')
            else:
                self.cursor_position = None
                return -1

    def mov_left(self):
        '''
        Move pointer to the left.
''' if self._recursable(): if self._pointed().mov_left() == -1: if self.cursor_position != 0: self.cursor_position -= 1 else: if self.cursor_position != 0: self.cursor_position -= 1 if self._recursable(): self._pointed().init_pointer('right') else: self.cursor_position = None return -1 ``` A simple test script: ``` # Create the nested structure LevelOne = C(('I say',)) LevelTwo = C(('Hello', 'Bye', 'Ciao')) LevelOne.append(LevelTwo) LevelOne.append('!') LevelOne.init_pointer('left') # The container's content can be seen as both a regualar list or a # special container. print(LevelOne) print(LevelOne.repr_with_cursor()) print('---') # Showcase the effect of moving the cursor to right for i in range(5): print(LevelOne.repr_with_cursor()) LevelOne.mov_right() print('---') # Showcase the effect of moving the cursor to left LevelOne.init_pointer('right') for i in range(5): print(LevelOne.repr_with_cursor()) LevelOne.mov_left() ``` It outputs: ``` ['I say', ['Hello', 'Bye', 'Ciao'], '!'] [**I say**, [Hello, Bye, Ciao], !] --- [**I say**, [Hello, Bye, Ciao], !] [I say, [**Hello**, Bye, Ciao], !] [I say, [Hello, **Bye**, Ciao], !] [I say, [Hello, Bye, **Ciao**], !] [I say, [Hello, Bye, Ciao], **!**] --- [I say, [Hello, Bye, Ciao], **!**] [I say, [Hello, Bye, **Ciao**], !] [I say, [Hello, **Bye**, Ciao], !] [I say, [**Hello**, Bye, Ciao], !] [**I say**, [Hello, Bye, Ciao], !] ``` Fun problem! My favourite OS question of the day! :)
14,774
46,942,411
I'm analyzing revision histories, using `git-archive` to get the files at a particular revision (see <https://stackoverflow.com/a/40811494/1168342>). The approach works, but I'm trying to optimize for projects with many revisions. Much processing is wasted archiving (via tar) and then extracting back to files in another directory (via tar again). I'm looking for a way to do this without involving `tar`, something like a `git cp $revision $dest/`.

Here's what I've explored so far:

* I could use the `git reset $revision --hard` approach with a file copy, but it renders parallelization of the analysis void, unless I create multiple copies of the repo (one for each thread/process).
* There is a [Java project using JGit called Doris](https://github.com/gingerswede/doris) that accomplishes this with low-level operations, but it breaks when there are weird files (e.g., links to other repos). As git has evolved, there are a lot of special cases, so I don't want to do this at a low level if possible.
* I know there's a git API for Python, but its [archive feature](http://gitpython.readthedocs.io/en/stable/reference.html#module-git.repo.base) also uses tar. For the same reasons as above, I didn't want to code this at too low a level.
2017/10/25
[ "https://Stackoverflow.com/questions/46942411", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1168342/" ]
Use: ``` mkdir <path> && GIT_INDEX_FILE=<path>/.git git --work-tree=<path> checkout <revision> -- . && rm <path>/.git ``` The `git checkout` step will overwrite the index, so to make this parallelize well, we can just point the index file into the target. There's one file name that's pretty sure to be safe: `.git`! (This is like a lighter weight version of `git worktree add` that also avoids recording the new extracted tree as an active work-tree.) Edit to add a side note (I expect the OP is aware of this, but for future reference): note that `git archive` applies certain `.gitattributes` filters that this technique will not apply. In particular, `git checkout` will not obey `export-ignore` and `export-subst` directives.
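If it helps, here is a hedged Python sketch of how that checkout trick might be driven in parallel from a script. The repository path, revision list, and output layout are placeholders, not something taken from the question or the command above:

```
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor

REPO = "/path/to/repo"  # hypothetical repository location

def extract(revision, dest):
    # Give each extraction its own throwaway index file so parallel
    # checkouts don't fight over .git/index.
    os.makedirs(dest, exist_ok=True)
    index = os.path.join(os.path.abspath(dest), ".git")
    env = dict(os.environ, GIT_INDEX_FILE=index)
    subprocess.run(
        ["git", "--work-tree", os.path.abspath(dest), "checkout", revision, "--", "."],
        cwd=REPO, env=env, check=True,
    )
    os.remove(index)  # drop the temporary index

revisions = ["HEAD~2", "HEAD~1", "HEAD"]  # illustrative revisions
with ThreadPoolExecutor() as pool:
    for rev in revisions:
        pool.submit(extract, rev, os.path.join("/tmp/out", rev.replace("~", "_")))
```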
In JGit the `ArchiveCommand` implements what `git archive` does and also provides several archive file formats out of the box. However, the `ArchiveCommand` can be extended with custom archive formats. A custom format needs to implement the `Format` interface and register it with `ArchiveCommand::registerFormat`. Even though the corresponding API seems to be designed with a single output file in mind, it should be possible to output the contents to a directory.
14,779
4,234,823
I am trying to open a serial port with Python. This is on Ubuntu. I import openinterface.py and enter this:

```
ser = openinterface.CreateBot(com_port="/dev/ttyUSB1", mode="full")
```

I get an error saying "unsupported operand types for -: 'str' and 'int'". I tried the same call with single quotes instead of double, and with no quotes at all. How can I fix this? Or is there an alternative function to use? I only know the basics of Python, so maybe it's some small syntax thing I am not noticing?

Any help would be appreciated, thanks.
2010/11/20
[ "https://Stackoverflow.com/questions/4234823", "https://Stackoverflow.com", "https://Stackoverflow.com/users/508530/" ]
According to [this page in Russian](http://rus-linux.net/lib.php?name=/MyLDP/hard/irobot/irobot.html), there's a bug with the `openinterface.py` file that tries to subtract one from the port argument. It suggests making this change (removing the `- 1` on line 803) with `sed`: ``` sed -ie "803s/ - 1//" openinterface.py ``` Either try that, or see if there's an updated version of `openinterface.py`.
This is what you want if you are using Python 3:

```
import serial  # import the pyserial lib

ser = serial.Serial("/dev/ttyS0", 9600)  # specify your port and baud rate
data = ser.read()  # read one byte from the serial device
print(data)  # display the byte that was read
```
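For completeness, a slightly fuller pyserial sketch with a read timeout; the port name and baud rate below are assumptions and should be adjusted for your device:

```
import serial  # pyserial

# Hypothetical port and baud rate -- replace with your device's settings.
ser = serial.Serial("/dev/ttyUSB1", 57600, timeout=1)
try:
    line = ser.readline()  # returns b"" if nothing arrives within the timeout
    print(line.decode(errors="replace"))
finally:
    ser.close()
```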
14,780
44,777,408
I want to mock a method of a class and use `wraps`, so that it is actually called, but I can inspect the arguments passed to it. I have seen at several places ([here](https://stackoverflow.com/questions/25608107/python-mock-patching-a-method-without-obstructing-implementation) for example) that the usual way to do that is as follows (adapted to show my point): ``` from unittest import TestCase from unittest.mock import patch class Potato(object): def foo(self, n): return self.bar(n) def bar(self, n): return n + 2 class PotatoTest(TestCase): spud = Potato() @patch.object(Potato, 'foo', wraps=spud.foo) def test_something(self, mock): forty_two = self.spud.foo(n=40) mock.assert_called_once_with(n=40) self.assertEqual(forty_two, 42) ``` However, this instantiates the class `Potato`, in order to bind the mock to the instance method `spud.foo`. What I need is to mock the method `foo` in *all instances of `Potato`*, and wrap them around the original methods. I.e, I need the following: ``` from unittest import TestCase from unittest.mock import patch class Potato(object): def foo(self, n): return self.bar(n) def bar(self, n): return n + 2 class PotatoTest(TestCase): @patch.object(Potato, 'foo', wraps=Potato.foo) def test_something(self, mock): self.spud = Potato() forty_two = self.spud.foo(n=40) mock.assert_called_once_with(n=40) self.assertEqual(forty_two, 42) ``` This of course doesn't work. I get the error: ``` TypeError: foo() missing 1 required positional argument: 'self' ``` It works however if `wraps` is not used, so the problem is not in the mock itself, but in the way it calls the wrapped function. For example, this works (but of course I had to "fake" the returned value, because now `Potato.foo` is never actually run): ``` from unittest import TestCase from unittest.mock import patch class Potato(object): def foo(self, n): return self.bar(n) def bar(self, n): return n + 2 class PotatoTest(TestCase): @patch.object(Potato, 'foo', return_value=42)#, wraps=Potato.foo) def test_something(self, mock): self.spud = Potato() forty_two = self.spud.foo(n=40) mock.assert_called_once_with(n=40) self.assertEqual(forty_two, 42) ``` This works, but it does not run the original function, which I need to run because the return value is used elsewhere (and I cannot fake it from the test). Can it be done? **Note** The actual reason behind my needs is that I'm testing a rest api with webtest. From the tests I perform some wsgi requests to some paths, and my framework instantiates some classes and uses their methods to fulfill the request. I want to capture the parameters sent to those methods to do some `asserts` about them in my tests.
2017/06/27
[ "https://Stackoverflow.com/questions/44777408", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1264820/" ]
In short, you can't do this using `Mock` instances alone.

`patch.object` creates `Mock`s for the specified class (`Potato`), i.e. it replaces `Potato.foo` with a single Mock the moment it is called. Therefore, there is no way to pass instances to the `Mock`, as the mock is created before any instances are. To my knowledge, getting instance information to the `Mock` at runtime is also very difficult.

To illustrate:

```
from unittest.mock import MagicMock, patch

class MyMock(MagicMock):
    def __init__(self, *a, **kw):
        super(MyMock, self).__init__(*a, **kw)
        print('Created Mock instance a={}, kw={}'.format(a, kw))

with patch.object(Potato, 'foo', new_callable=MyMock, wrap=Potato.foo):
    print('no instances created')
    spud = Potato()
    print('instance created')
```

The output is:

```
Created Mock instance a=(), kw={'name': 'foo', 'wrap': <function Potato.foo at 0x7f5d9bfddea0>}
no instances created
instance created
```

I would suggest monkey-patching your class in order to add the `Mock` to the correct location.

```
from unittest.mock import MagicMock

class PotatoTest(TestCase):
    def test_something(self):
        old_foo = Potato.foo
        try:
            mock = MagicMock(wraps=Potato.foo, return_value=42)
            Potato.foo = lambda *a, **kw: mock(*a, **kw)

            self.spud = Potato()
            forty_two = self.spud.foo(n=40)

            mock.assert_called_once_with(self.spud, n=40)  # Now needs the self instance
            self.assertEqual(forty_two, 42)
        finally:
            Potato.foo = old_foo
```

Note that using `called_with` is problematic here, as you are now calling your functions with the instance as the first argument.
Do you control creation of `Potato` instances, or at least have access to these instances after creating them? You should, else you'd not be able to check particular arg lists. If so, you can wrap methods of individual instances using ``` spud = dig_out_a_potato() with mock.patch.object(spud, "foo", wraps=spud.foo) as mock_spud: # do your thing. mock_spud.assert_called... ```
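Not something either answer above proposes, but for reference: a hedged sketch of the `autospec` plus `side_effect` pattern that is often suggested for exactly this situation. It lets the real method run for every instance while still recording calls; `Potato` is redefined locally just to keep the snippet self-contained:

```
from unittest.mock import patch

class Potato:
    def foo(self, n):
        return self.bar(n)

    def bar(self, n):
        return n + 2

# autospec=True makes the patched attribute behave like a real method
# (so `self` is passed through), and side_effect runs the original code.
with patch.object(Potato, "foo", autospec=True, side_effect=Potato.foo) as mock_foo:
    spud = Potato()
    assert spud.foo(n=40) == 42                   # the original implementation still runs
    mock_foo.assert_called_once_with(spud, n=40)  # note: the instance shows up in the call
```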
14,781
57,943,053
When the EMR machine tries to run a step that includes boto3 initialisation, it sometimes gets the following error: `ValueError: Invalid endpoint: https://s3..amazonaws.com`

When I set up a new machine, it can suddenly work. Attached is the full error:

```
self.client = boto3.client("s3")
 File "/usr/local/lib/python3.6/site-packages/boto3/__init__.py", line 83, in client
 return _get_default_session().client(*args, **kwargs)
 File "/usr/local/lib/python3.6/site-packages/boto3/session.py", line 263, in client
 aws_session_token=aws_session_token, config=config)
 File "/usr/local/lib/python3.6/site-packages/botocore/session.py", line 861, in create_client
 client_config=config, api_version=api_version)
 File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 76, in create_client
 verify, credentials, scoped_config, client_config, endpoint_bridge)
 File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 285, in _get_client_args
 verify, credentials, scoped_config, client_config, endpoint_bridge)
 File "/usr/local/lib/python3.6/site-packages/botocore/args.py", line 79, in get_client_args
 timeout=(new_config.connect_timeout, new_config.read_timeout))
 File "/usr/local/lib/python3.6/site-packages/botocore/endpoint.py", line 297, in create_endpoint
 raise ValueError("Invalid endpoint: %s" % endpoint_url)
ValueError: Invalid endpoint: https://s3..amazonaws.com
```

Any idea why it happens? (Versions: boto3==1.7.29, botocore==1.10.29)
2019/09/15
[ "https://Stackoverflow.com/questions/57943053", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6359988/" ]
It looks like you have an invalid region. [Check](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/setup-credentials.html) your ~/.aws/config
In my case, even though `~/.aws/config` had the region set, ``` $ cat ~/.aws/config [default] region = us-east-1 ``` the env var `AWS_REGION` was set to an empty string ``` $ env | grep -i aws AWS_REGION= ``` unset this env var and all was good again ``` $ unset AWS_REGION $ aws sts get-caller-identity --output text --query Account 777***234534 ``` (*apologies for posting on a really old question, it did pop up in a Google search*)
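If you would rather not depend on the machine's config or environment variables at all, the region can also be pinned in code. A small hedged sketch (the region and the listing call are only illustrative):

```
import boto3

# Passing region_name explicitly sidesteps an empty AWS_REGION variable
# or a missing region entry in ~/.aws/config.
s3 = boto3.client("s3", region_name="us-east-1")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```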
14,783
28,572,764
I've been reading through the source for the cpython HTTP package for fun and profit, and noticed that in server.py they have the `__all__` variable set but also use a leading underscore for the function `_quote_html(html)`. Isn't this redundant? Don't both serve to limit what's imported by `from HTTP import *`? Why do they do both?
2015/02/17
[ "https://Stackoverflow.com/questions/28572764", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1029146/" ]
Aside from the *"private-by-convention"* functions with `_leading_underscores`, there are: * Quite a few `import`ed names; * Four class names; * Three function names *without* leading underscores; * Two string *"constants"*; and * One local variable (`nobody`). If `__all__` wasn't defined to cover only the classes, all of these would also be added to your namespace by a wildcard `from server import *`. Yes, you could just use one method or the other, but I think the leading underscore is a stronger sign than the exclusion from `__all__`; the latter says *"you probably won't need this often"*, the former says *"keep out unless you know what you're doing"*. They both have their place.
`__all__` indeed serves as a limit when doing `from HTTP import *`; prefixing `_` to the name of a function or method is a convention for informing the user that that item should be considered private and thus used at his/her own risk.
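A tiny, hypothetical module makes the interaction between the two mechanisms easy to see (all names below are made up for illustration):

```
# mymodule.py
__all__ = ["public_func"]          # only this name is exported by "from mymodule import *"

def public_func():
    return "exported"

def helper():                      # omitted from __all__: hidden from "import *", still importable directly
    return "not exported by *"

def _private_func():               # leading underscore: skipped by "import *" when __all__ is absent,
    return "private by convention" # and a "keep out" signal to readers

# elsewhere:
#   from mymodule import *                      -> only public_func is bound
#   from mymodule import helper, _private_func  -> explicit imports still work for both
```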
14,785