Columns: qid (int64, 46k to 74.7M), question (string, 54 to 37.8k chars), date (string, 10 chars), metadata (list of 3), response_j (string, 17 to 26k chars), response_k (string, 26 to 26k chars)
18,314,228
I have a list of strings and I would like to split that list into different "sublists" based on the character length of the words in the list, e.g.: ``` List = [a, bb, aa, ccc, dddd] Sublist1 = [a] Sublist2 = [bb, aa] Sublist3 = [ccc] Sublist4 = [dddd] ``` How can I achieve this in Python? Thank you
2013/08/19
[ "https://Stackoverflow.com/questions/18314228", "https://Stackoverflow.com", "https://Stackoverflow.com/users/413734/" ]
by using `itertools.groupby`: ``` values = ['a', 'bb', 'aa', 'ccc', 'dddd', 'eee'] from itertools import groupby output = [list(group) for key,group in groupby(sorted(values, key=len), key=len)] ``` The result is: ``` [['a'], ['bb', 'aa'], ['ccc', 'eee'], ['dddd']] ``` If your list is already sorted by string length and you just need to do grouping, then you can simplify the code to: ``` output = [list(group) for key,group in groupby(values, key=len)] ```
Assuming you're happy with a list of lists, indexed by length, how about something like ``` by_length = [] for word in List: wl = len(word) while len(by_length) <= wl: by_length.append([]) by_length[wl].append(word) print "The words of length 3 are %s" % by_length[3] ```
18,314,228
I have a list of strings and I would like to split that list into different "sublists" based on the character length of the words in the list, e.g.: ``` List = [a, bb, aa, ccc, dddd] Sublist1 = [a] Sublist2 = [bb, aa] Sublist3 = [ccc] Sublist4 = [dddd] ``` How can I achieve this in Python? Thank you
2013/08/19
[ "https://Stackoverflow.com/questions/18314228", "https://Stackoverflow.com", "https://Stackoverflow.com/users/413734/" ]
``` >>> from collections import defaultdict >>> l = ["a", "bb", "aa", "ccc", "dddd"] >>> d = defaultdict(list) >>> for elem in l: ... d[len(elem)].append(elem) ... >>> sublists = list(d.values()) >>> print(sublists) [['a'], ['bb', 'aa'], ['ccc'], ['dddd']] ```
Assuming you're happy with a list of lists, indexed by length, how about something like ``` by_length = [] for word in List: wl = len(word) while len(by_length) <= wl: by_length.append([]) by_length[wl].append(word) print "The words of length 3 are %s" % by_length[3] ```
18,314,228
I have a list of strings and I would like to split that list into different "sublists" based on the character length of the words in the list, e.g.: ``` List = [a, bb, aa, ccc, dddd] Sublist1 = [a] Sublist2 = [bb, aa] Sublist3 = [ccc] Sublist4 = [dddd] ``` How can I achieve this in Python? Thank you
2013/08/19
[ "https://Stackoverflow.com/questions/18314228", "https://Stackoverflow.com", "https://Stackoverflow.com/users/413734/" ]
by using `itertools.groupby`: ``` values = ['a', 'bb', 'aa', 'ccc', 'dddd', 'eee'] from itertools import groupby output = [list(group) for key,group in groupby(sorted(values, key=len), key=len)] ``` The result is: ``` [['a'], ['bb', 'aa'], ['ccc', 'eee'], ['dddd']] ``` If your list is already sorted by string length and you just need to do grouping, then you can simplify the code to: ``` output = [list(group) for key,group in groupby(values, key=len)] ```
``` >>> from collections import defaultdict >>> l = ["a", "bb", "aa", "ccc", "dddd"] >>> d = defaultdict(list) >>> for elem in l: ... d[len(elem)].append(elem) ... >>> sublists = list(d.values()) >>> print(sublists) [['a'], ['bb', 'aa'], ['ccc'], ['dddd']] ```
29,997,120
I have a problem with a little server-client assignment in python 2.7. The client can send 5 types of requests to the server: 1. get the server's IP 2. get contents of a directory on the server 3. run cmd command on the server and get the output 4. open a calculator on the server 5. disconnect This is the error I get: ``` error: msg_type, data_len = unpack("BH", client_structs[:3]) struct.error: unpack requires a string argument of length 4 ``` Code: ``` client_structs = client_soc.recv(1024) msg_type, data_len = unpack("BH", client_structs[:3]) ``` Doesn't the substring contain 4 chars including the null? Would appreciate explanation about this error + how to solve it. Entire server code: ``` __author__ = 'eyal' from struct import pack, unpack, calcsize import socket from os import listdir from subprocess import check_output, call def server(): ser_soc = socket.socket() ser_soc.bind(("0.0.0.0", 8080)) ser_soc.listen(1) while True: accept_flag = raw_input("Would you like to wait for a client? (y/n) ") if accept_flag == "y": client_soc, client_address = ser_soc.accept() while True: client_structs = client_soc.recv(1024) data_size = calcsize(client_structs) - 3 data_str = 'c' * data_size unpacked_data = unpack("BH" + data_str, client_structs) if unpacked_data[0] == 1: ip = socket.gethostbyname(socket.gethostname()) ip_data = 'c' * len(ip) to_send = pack("BH" + str(len(ip)) + ip_data, unpacked_data[0], len(ip), ip) elif unpacked_data[0] == 2: content = listdir(str(unpacked_data[2])) content_str = "\r\n".join(content) content_data = 'c' * len(content_str) to_send = pack("BH" + str(len(content_str)) + content_data, unpacked_data[0], len(content_str), content_str) elif unpacked_data[0] == 3: command = str(unpacked_data[2:]).split() output = check_output(command) message_data = 'c' * len(output) to_send = pack("BH" + message_data, unpacked_data[0], len(output), output) elif unpacked_data[0] == 4: call("gnome-calculator") msg_data = 'c' * len("The calculator is open.") to_send = pack("BH" + msg_data, unpacked_data[0], len("The calculator is open."), "The calculator is open.") elif unpacked_data[0] == 5: client_soc.close() break else: to_send = pack("BH" + 'c' * len("invalid message type, try again"), unpacked_data[0], len("invalid message type, try again"), "invalid message type, try again") if unpacked_data[0] != 5: client_soc.send(to_send) else: break ser_soc.close() def main(): server() if __name__ == "__main__": main() ``` Entire client code: ``` __author__ = 'eyal' from struct import pack, unpack, calcsize import socket def client(): my_soc = socket.socket() my_soc.connect(("127.0.0.1", 8080)) while True: send_flag = raw_input("Would you like to send the server a request? (y/n) ") if send_flag == "y": msg_code = input("What type of request would you like to send?\n" "1. Get the server's IP address.\n" "2. Get content of a directory on the server.\n" "3. Run a terminal command on the server and get the output.\n" "4. Open a calculator on the server.\n" "5. 
Disconnect from the server.\n" "Your choice: ") if msg_code == 1 or msg_code == 4 or msg_code == 5: to_send = pack("BH", msg_code, 0) elif msg_code == 2: path = raw_input("Enter path of wanted directory to get content of: ") to_send = pack("BH" + 'c' * len(path), msg_code, len(path), path) elif msg_code == 3: command = raw_input("Enter the wanted terminal command, including arguments: ") to_send = pack("BH" + 'c' * len(command), msg_code, len(command), command) else: print "Invalid message code, try again\n" if 1 <= msg_code <= 5: my_soc.send(to_send) else: break data = my_soc.recv(1024) unpacked_data = unpack("BH" + 'c' * (calcsize(data) - 3), data) print "The server's response to your type-" + str(msg_code) + " request:" print unpacked_data[2] my_soc.close() def main(): client() if __name__ == "__main__": main() ```
2015/05/02
[ "https://Stackoverflow.com/questions/29997120", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3554255/" ]
In the code shown in your question: ``` HashTable::HashTable(int buckets) { this->buckets = buckets; vector<Entry>* table = new vector<Entry>[buckets]; } ``` you create a local variable `table`, which is a pointer to `vector<Entry>`, and then leak that memory. Then in `HashTable::insert` you try to access the member variable `table`, which is uninitialized.
``` HashTable::HashTable(int buckets) { this->buckets = buckets; vector<Entry>* table = new vector<Entry>[buckets]; // this table is local to this function, and also a memory leak. } ``` As I can see in your `HashTable` constructor, you are initializing a local `vector<Entry>* table` inside the constructor. ``` Entry HashTable::insert(GameBoard board, int number) { int index = compress(board.hashCode()); Entry entry = Entry(board, number); table[index].push_back(entry); return entry; } ``` and I can see you are calling `push_back()` on some other `table` in the `insert` method. The seg. fault occurs because you are calling `push_back` on an uninitialized `table`. Do you have a `vector<Entry>* table` member in your `HashTable` class? If you do, change your `HashTable::HashTable(int buckets)` to initialize that table as shown below. ``` HashTable::HashTable(int buckets) { this->buckets = buckets; table = new vector<Entry>[buckets]; // Init class attribute `vector<Entry>* table` } ``` If you don't have a `vector<Entry>* table` in your `HashTable` class, add it to your class and use the above `HashTable::HashTable(int buckets)`. This will resolve your issue.
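Returning to the `struct` error in the question above: a minimal sketch (Python 3 syntax, standard library only, with made-up packet bytes) of why `unpack("BH", data[:3])` wants 4 bytes and how an explicit byte-order prefix avoids the padding:

```python
import struct

packet = b"\x01\x00\x05hello"  # 1-byte message type, 2-byte length, payload

# Native alignment ("BH") pads the 1-byte field so the H sits on a 2-byte
# boundary, which is why the format expects 4 bytes instead of 3.
print(struct.calcsize("BH"))   # 4
print(struct.calcsize("=BH"))  # 3 (standard size, no padding)

try:
    struct.unpack("BH", packet[:3])      # mirrors the failing call
except struct.error as exc:
    print(exc)                           # complains about needing a 4-byte buffer

# With an explicit byte order ("=", "<", ">" or "!") the header really is
# 3 bytes, so the 3-byte slice unpacks cleanly.
msg_type, data_len = struct.unpack("!BH", packet[:3])
print(msg_type, data_len)                # 1 5
```

Whichever format string is chosen, the client's `pack`, the server's `unpack`, and any `calcsize` arithmetic have to agree on it.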
54,233,559
I am generating a doc using the python-docx module. I want to bold a specific cell of a row in python-docx; here is the code: ``` book_title = '\n-:\n {}\n\n'.format(book_title) book_desc = '-: {}\n\n:\n{}\n\n :\n{}'.format(book.author,book_description,sales_point) row1.cells[1].text = (book_title + book_desc) ``` I just want to bold the book\_title. If I apply a style it automatically applies to the whole document.
2019/01/17
[ "https://Stackoverflow.com/questions/54233559", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6892109/" ]
Here is how I understand it: a paragraph holds the run objects, and styles (bold, italic) are attributes of a run. So, following this logic, here is what might solve your question: ``` row1.cells[0].paragraphs[0].add_run(book_title + book_desc).bold=True ``` This is just an example for the first cell of the table. Please amend it in your code.
Since you are using the docx module, you can style your text/paragraph by explicitly defining the style. In order to apply a style, use the following code snippet referenced from docx documentation [here](https://python-docx.readthedocs.io/en/latest/user/styles-using.html#apply-a-style). ``` >>> from docx import Document >>> document = Document() >>> style = document.styles['Normal'] >>> font = style.font >>> font.bold= True ``` This will change the font style to bold for the applied paragraph.
54,233,559
I am generating a doc using the python-docx module. I want to bold a specific cell of a row in python-docx; here is the code: ``` book_title = '\n-:\n {}\n\n'.format(book_title) book_desc = '-: {}\n\n:\n{}\n\n :\n{}'.format(book.author,book_description,sales_point) row1.cells[1].text = (book_title + book_desc) ``` I just want to bold the book\_title. If I apply a style it automatically applies to the whole document.
2019/01/17
[ "https://Stackoverflow.com/questions/54233559", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6892109/" ]
A cell does not have a character style; character style can only be applied to text, and in particular to a *run* of text. This is in fact the defining characteristic of a run, being a sequence of characters that share the same character formatting, also known as *font* in `python-docx`. To get the book title with a different font than the description, they need to appear in separate runs. Assigning to `Cell.text` (as you have) results in all the text being in a single run. This might work for you, but assumes the cell is empty as you start: ``` paragraph = row1.cells[1].paragraphs[0] title_run = paragraph.add_run(book_title) description_run = paragraph.add_run(book_desc) title_run.bold = True ``` This code can be made more compact: ``` paragraph = row1.cells[1].paragraphs[0] paragraph.add_run(book_title).bold = True paragraph.add_run(book_desc) ``` but perhaps the former version makes it more clear just what you're doing in each step.
Since you are using the docx module, you can style your text/paragraph by explicitly defining the style. In order to apply a style, use the following code snippet referenced from docx documentation [here](https://python-docx.readthedocs.io/en/latest/user/styles-using.html#apply-a-style). ``` >>> from docx import Document >>> document = Document() >>> style = document.styles['Normal'] >>> font = style.font >>> font.bold= True ``` This will change the font style to bold for the applied paragraph.
54,233,559
I am generating a doc using the python-docx module. I want to bold a specific cell of a row in python-docx; here is the code: ``` book_title = '\n-:\n {}\n\n'.format(book_title) book_desc = '-: {}\n\n:\n{}\n\n :\n{}'.format(book.author,book_description,sales_point) row1.cells[1].text = (book_title + book_desc) ``` I just want to bold the book\_title. If I apply a style it automatically applies to the whole document.
2019/01/17
[ "https://Stackoverflow.com/questions/54233559", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6892109/" ]
Here is how I understand it: a paragraph holds the run objects, and styles (bold, italic) are attributes of a run. So, following this logic, here is what might solve your question: ``` row1.cells[0].paragraphs[0].add_run(book_title + book_desc).bold=True ``` This is just an example for the first cell of the table. Please amend it in your code.
In python-docx, the styling of any character in a docx template document can be overridden by the use of [Rich Text](https://docxtpl.readthedocs.io/en/latest/#richtext) styling. You should provide a context variable for the particular character/string that needs styling in your template, at the position of the character/string. This variable maps to the `RichText` object that has the style definition(that you define in your code), to style the character/string. To make things clearer, consider an example template doc "test.docx" that contains the following text: > > Hello {{r context\_var}}! > > > The `{{..}}` is the jinja2 tag syntax and `{{r` is the RichText tag that overrides the character styling. The `context_var` is a variable that maps the styling to your character string. We accomplish Rich Text styling like this: ``` from docxtpl import DocxTemplate, RichText doc = DocxTemplate("test.docx") rt = RichText() #create a RichText object rt.add('World', bold=True) #pass the text as an argument and the style, bold=True context = { 'context_var': rt } #add context variable to the context and map it to rt doc.render(context) #render the context doc.save("generated_doc.docx") #save as a new document ``` Let's look at the contents of "generated\_doc.docx": > > Hello **World**! > > > I'm not sure how your template is designed, but if you just want the `book_title` as bold, your template "test.docx" should have text like: > > Title:- > > > {{r book\_title\_var}} > > > The code should be modified to: ``` book_title = "Lord of the Rings" #or wherever you get the book title from rt.add(book_title, bold=True) context = { 'book_title_var': rt } ``` generated\_doc.docx: > > Title:- > > > **Lord of the Rings** > > >
54,233,559
I am generating a doc using the python-docx module. I want to bold a specific cell of a row in python-docx; here is the code: ``` book_title = '\n-:\n {}\n\n'.format(book_title) book_desc = '-: {}\n\n:\n{}\n\n :\n{}'.format(book.author,book_description,sales_point) row1.cells[1].text = (book_title + book_desc) ``` I just want to bold the book\_title. If I apply a style it automatically applies to the whole document.
2019/01/17
[ "https://Stackoverflow.com/questions/54233559", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6892109/" ]
A cell does not have a character style; character style can only be applied to text, and in particular to a *run* of text. This is in fact the defining characteristic of a run, being a sequence of characters that share the same character formatting, also known as *font* in `python-docx`. To get the book title with a different font than the description, they need to appear in separate runs. Assigning to `Cell.text` (as you have) results in all the text being in a single run. This might work for you, but assumes the cell is empty as you start: ``` paragraph = row1.cells[1].paragraphs[0] title_run = paragraph.add_run(book_title) description_run = paragraph.add_run(book_desc) title_run.bold = True ``` This code can be made more compact: ``` paragraph = row1.cells[1].paragraphs[0] paragraph.add_run(book_title).bold = True paragraph.add_run(book_desc) ``` but perhaps the former version makes it more clear just what you're doing in each step.
In python-docx, the styling of any character in a docx template document can be overridden by the use of [Rich Text](https://docxtpl.readthedocs.io/en/latest/#richtext) styling. You should provide a context variable for the particular character/string that needs styling in your template, at the position of the character/string. This variable maps to the `RichText` object that has the style definition(that you define in your code), to style the character/string. To make things clearer, consider an example template doc "test.docx" that contains the following text: > > Hello {{r context\_var}}! > > > The `{{..}}` is the jinja2 tag syntax and `{{r` is the RichText tag that overrides the character styling. The `context_var` is a variable that maps the styling to your character string. We accomplish Rich Text styling like this: ``` from docxtpl import DocxTemplate, RichText doc = DocxTemplate("test.docx") rt = RichText() #create a RichText object rt.add('World', bold=True) #pass the text as an argument and the style, bold=True context = { 'context_var': rt } #add context variable to the context and map it to rt doc.render(context) #render the context doc.save("generated_doc.docx") #save as a new document ``` Let's look at the contents of "generated\_doc.docx": > > Hello **World**! > > > I'm not sure how your template is designed, but if you just want the `book_title` as bold, your template "test.docx" should have text like: > > Title:- > > > {{r book\_title\_var}} > > > The code should be modified to: ``` book_title = "Lord of the Rings" #or wherever you get the book title from rt.add(book_title, bold=True) context = { 'book_title_var': rt } ``` generated\_doc.docx: > > Title:- > > > **Lord of the Rings** > > >
10,216,019
So I was developing an app in Django and needed a function from the 1.4 version so I decided to update. But then a weird error appeared when I wanted to do `syncdb` I am using the new `manage.py` and as You can see it makes some of the tables but then fails : ``` ./manage.py syncdb Creating tables ... Creating table auth_permission Creating table auth_group_permissions Creating table auth_group Creating table auth_user_user_permissions Creating table auth_user_groups Creating table auth_user Creating table django_content_type Creating table django_session Creating table django_site Traceback (most recent call last): File "./manage.py", line 9, in <module> execute_from_command_line(sys.argv) File "/usr/local/lib/python2.7/dist-packages/Django-1.4-py2.7.egg/django/core/management/__init__.py", line 443, in execute_from_command_line utility.execute() File "/usr/local/lib/python2.7/dist-packages/Django-1.4-py2.7.egg/django/core/management/__init__.py", line 382, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/usr/local/lib/python2.7/dist-packages/Django-1.4-py2.7.egg/django/core/management/base.py", line 196, in run_from_argv self.execute(*args, **options.__dict__) File "/usr/local/lib/python2.7/dist-packages/Django-1.4-py2.7.egg/django/core/management/base.py", line 232, in execute output = self.handle(*args, **options) File "/usr/local/lib/python2.7/dist-packages/Django-1.4-py2.7.egg/django/core/management/base.py", line 371, in handle return self.handle_noargs(**options) File "/usr/local/lib/python2.7/dist-packages/Django-1.4-py2.7.egg/django/core/management/commands/syncdb.py", line 91, in handle_noargs sql, references = connection.creation.sql_create_model(model, self.style, seen_models) File "/usr/local/lib/python2.7/dist-packages/Django-1.4-py2.7.egg/django/db/backends/creation.py", line 44, in sql_create_model col_type = f.db_type(connection=self.connection) TypeError: db_type() got an unexpected keyword argument 'connection' ```
2012/04/18
[ "https://Stackoverflow.com/questions/10216019", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1092459/" ]
I had the same issue; the definition of my custom field was missing the `connection` parameter. ``` from django.db import models class BigIntegerField(models.IntegerField): def db_type(self, connection): return "bigint" ```
Although this is an old, answered and accepted question, I am adding my understanding because I am not using a customized type and it is a Django Evolution error (from `evolve --hint --execute`, not from syncdb). I think it may be helpful for someone in the future. I am average in Python and new to Django. I also encountered the same issue when I added some new features to my existing project. To add the new feature I had to add some new fields of `models.CharField()` type, as follows. ``` included_domains = models.CharField( "set of comma(,) seprated list of domains in target emails", default="", max_length=it_len.EMAIL_LEN*5) excluded_domains = models.CharField( "set of comma(,) seprated list of domains NOT in target emails", default="", max_length=it_len.EMAIL_LEN*5) ``` The Django version I am using is 1.3.1: ``` $ python -c "import django; print django.get_version()" 1.3.1 <--------# version $python manage.py syncdb Project signature has changed - an evolution is required ``` [Django Evolution:](https://code.google.com/p/django-evolution/#Django_Evolution) Django Evolution is an extension to Django that allows you to track changes in your models over time, and to update the database to reflect those changes. ``` $ python manage.py evolve --hint #----- Evolution for messagingframework from django_evolution.mutations import AddField from django.db import models MUTATIONS = [ AddField('MessageConfiguration', 'excluded_domains', models.CharField, initial=u'', max_length=300), AddField('MessageConfiguration', 'included_domains', models.CharField, initial=u'', max_length=300) ] #---------------------- Trial evolution successful. Run './manage.py evolve --hint --execute' to apply evolution. ``` The trial was successful, but when I tried to apply the changes to the DB: ``` $ python manage.py evolve --hint --execute Traceback (most recent call last): File "manage.py", line 25, in <module> execute_manager(settings) File "/var/www/sites/www.taxspanner.com/django/core/management/__init__.py", line 362, in execute_manager utility.execute() File "/var/www/sites/www.taxspanner.com/django/core/management/__init__.py", line 303, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/var/www/sites/www.taxspanner.com/django/core/management/base.py", line 195, in run_from_argv self.execute(*args, **options.__dict__) File "/var/www/sites/www.taxspanner.com/django/core/management/base.py", line 222, in execute output = self.handle(*args, **options) File "/usr/local/lib/python2.7/dist-packages/django_evolution-0.6.9.dev_r225-py2.7.egg/django_evolution/management/commands/evolve.py", line 60, in handle self.evolve(*app_labels, **options) File "/usr/local/lib/python2.7/dist-packages/django_evolution-0.6.9.dev_r225-py2.7.egg/django_evolution/management/commands/evolve.py", line 140, in evolve database)) File "/usr/local/lib/python2.7/dist-packages/django_evolution-0.6.9.dev_r225-py2.7.egg/django_evolution/mutations.py", line 426, in mutate return self.add_column(app_label, proj_sig, database) File "/usr/local/lib/python2.7/dist-packages/django_evolution-0.6.9.dev_r225-py2.7.egg/django_evolution/mutations.py", line 438, in add_column sql_statements = evolver.add_column(model, field, self.initial) File "/usr/local/lib/python2.7/dist-packages/django_evolution-0.6.9.dev_r225-py2.7.egg/django_evolution/db/common.py", line 142, in add_column f.db_type(connection=self.connection), # <=== here f is field class object TypeError: db_type() got an unexpected keyword argument 'connection' ``` To understand this exception, I 
checked that this exception is similar to: ``` >>> def f(a): ... print a ... >>> f('b', b='a') Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: f() got an unexpected keyword argument 'b' >>> ``` So the function signature has been changed. But I had not added any *new customized or enum* fields, only two fields similar to ones that were already in the model, and a char type field is supported by most databases (I am using PostgreSQL), yet I was still getting this error! Then I read this from [Russell Keith-Magee's reply](http://python.6.x6.nabble.com/Django-1-4-TypeError-get-db-prep-value-got-an-unexpected-keyword-argument-connection-td4695088.html): > > What you've hit here is the end of the deprecation cycle for code that > doesn't support multiple databases. > > > In Django 1.2, we introduced multiple database support; in order to > support this, the prototype for `get_db_preb_lookup()` and > `get_db_prep_value()` was changed. > > > For backwards compatibility, we added a shim that would transparently > 'fix' these methods if they hadn't already been fixed by the > developer. > > > In Django 1.2, the usage of these shims raised a > PendingDeprecationWarning. In Django 1.3, they raised a > DeprecationWarning. > > > Under Django 1.4, the shim code was been removed -- so any code that > wasn't updated will now raise errors like the one you describe. > > > But I was not getting any DeprecationWarning, presumably because of the newer version of Django Evolution. From the above quote I could understand that, to support multiple databases, the function signature was changed and an extra argument `connection` is needed. I also checked the `db_type()` signature in my installation of Django as follows: ``` /django$ grep --exclude-dir=".svn" -n 'def db_type(' * -R contrib/localflavor/us/models.py:8: def db_type(self): contrib/localflavor/us/models.py:24: def db_type(self): : : ``` I also referred to the Django documentation: > > ### [Field.db\_type(self, connection):](https://docs.djangoproject.com/en/dev/howto/custom-model-fields/#django.db.models.Field.db_type) > > > Returns the *database column data type for the Field*, taking into account the connection > object, and the settings associated with it. > > > And then I could understand that to resolve this issue I have to inherit the `models.Field` class and override the `db_type()` function. And because I am using [PostgreSQL, in which I need a 300-char field](http://www.postgresql.org/docs/9.1/static/datatype-character.html), I need to return `'char(300)'`. In my models.py I added: ``` class CharMaxlengthN(models.Field): def db_type(self, connection): return 'char(%d)' % self.max_length # because I am using postgresql ``` If you encounter a similar problem, please check your underlying DB's manual for which type of column you need to create, and return that as a string. And I changed the definition of the new fields (that I needed to add); read the comments: ``` included_domains = CharMaxlengthN( # <--Notice change "set of comma(,) seprated list of domains in target emails", default="", max_length=it_len.EMAIL_LEN*5) excluded_domains = CharMaxlengthN( # <-- Notice change "set of comma(,) seprated list of domains NOT in target emails", default="", max_length=it_len.EMAIL_LEN*5) ``` Then I executed the same command that was failing previously: ``` t$ python manage.py evolve --hint --execute You have requested a database evolution. This will alter tables and data currently in the None database, and may result in IRREVERSABLE DATA LOSS. 
Evolutions should be *thoroughly* reviewed prior to execution. Are you sure you want to execute the evolutions? Type 'yes' to continue, or 'no' to cancel: yes Evolution successful. ``` I also checked my DB and tested my newly added features. It is now working perfectly, with no DB problems. If you want to create an ENUM field, read [`Specifying a mySQL ENUM in a Django model`](https://stackoverflow.com/a/19040441/1673391). **Edit:** I realized that instead of subclassing `models.Field` I should have inherited a more specific subclass, namely `models.CharField`. Similarly, I needed to create Decimal DB fields, so I added the following class to the model: ``` class DecimalField(models.DecimalField): def db_type(self, connection): d = { 'max_digits': self.max_digits, 'decimal_places': self.decimal_places, } return 'numeric(%(max_digits)s, %(decimal_places)s)' % d ```
35,689,139
I am writing a simple web application where I want to print out a few Korean characters. Although I changed the encoding in the header, the web application, when opened in Chrome, prints out gibberish instead of regular Korean characters. I also changed my Chrome language settings to display Korean as well. Here's my code: ``` #!/usr/bin/env python #-*- encoding: iso-8859-1 -*- import cgi import sys form = cgi.FieldStorage() print "Content-type: text/html; charset=iso-8859-1 " print "Accept-Language: fi, en, ko" print("Welcome") print("환영") print("Tervetuloa") ```
2016/02/28
[ "https://Stackoverflow.com/questions/35689139", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4590749/" ]
Change your encoding/charset to a charset that supports all the characters, for example by replacing both occurrences of `iso-8859-1` with `utf-8`. [UTF-8](https://en.wikipedia.org/wiki/UTF-8) can encode Korean characters and basically any writing system that exists.
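As a concrete illustration of that suggestion, here is a hedged sketch of the question's script with both `iso-8859-1` occurrences replaced by `utf-8` (Python 2, as in the question; the Korean greeting and the blank line that terminates the CGI headers are my assumptions, and `Accept-Language` is left out because it is a request header rather than a response header):

```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import cgi

form = cgi.FieldStorage()

# Declare UTF-8 so the browser decodes the Korean bytes correctly.
print "Content-type: text/html; charset=utf-8"
print  # blank line ends the HTTP headers
print "Welcome"
print "환영"        # assumed greeting; any UTF-8 text works here
print "Tervetuloa"
```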
You can use the [korean package](https://pypi.python.org/pypi/korean) : Example : ``` from korean import Noun fmt = u'{subj:์€} {obj:์„} ๋จน์—ˆ๋‹ค.' print fmt.format(subj=Noun(u'๋‚˜'), obj=Noun(u'๋ฐฅ')) print fmt.format(subj=Noun(u'ํ•™์ƒ'), obj=Noun(u'๋ˆ๊นŒ์Šค')) ``` Output : ``` ๋‚˜์€ ๋ฐฅ์„ ๋จน์—ˆ๋‹ค. ํ•™์ƒ์€ ๋ˆ๊นŒ์Šค์„ ๋จน์—ˆ๋‹ค. ```
67,663,059
I'm testing my incomplete Kivy app to get a suitable APK out of it. Using Buildozer on Ubuntu I generate the APK, but it crashes right after starting on an Android device. Is the buildozer spec file the root cause (should I change something inside it), or is it an incompatible version issue? Please share Kivy, KivyMD, Python and Buildozer versions that are compatible. The .py file runs fine in PyCharm with no errors.
2021/05/23
[ "https://Stackoverflow.com/questions/67663059", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13331490/" ]
Try using `kivy 2.0.0rc4`. Install it in plugins through settings in PyCharm. And your `buildozer.spec` should be like this: ``` requirements = python3,kivy==2.0.0rc4 ```
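If the crash persists after pinning the versions, one way to rule out toolchain problems is to package a minimal app first; a sketch (assuming stock Kivy APIs only, no KivyMD) might look like this:

```python
# main.py -- minimal smoke test: if this APK starts on the device, the
# buildozer requirements line is fine and the crash lives in the app code.
from kivy.app import App
from kivy.uix.label import Label


class SmokeTestApp(App):
    def build(self):
        return Label(text="Hello from Buildozer")


if __name__ == "__main__":
    SmokeTestApp().run()
```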
Refer to the requirements specified in the buildozer.spec file for the KivyMD kitchen\_sink app in the repo. This is the link -> [Kitchen\_Sink\_Repo](https://github.com/kivymd/KivyMD/blob/master/demos/kitchen_sink/buildozer.spec) **Tip** If, after changing the `requirements`, you still see your app crashing, run the following command(s): ``` buildozer android clean buildozer android debug deploy run ``` Why? Because when buildozer installs the earlier specified requirements, it is quite possible that it installs versions that do not match your app's specifications. So clean it and then run. You should now be good to go.
60,389,566
I am new to multi-processing and python, from the documentation, <https://docs.python.org/3/library/multiprocessing.html> I was able to run the below code. ``` from multiprocessing import Pool def f(x): return x*x if __name__ == '__main__': with Pool(5) as p: print(p.map(f, [1, 2, 3])) ``` But if `f(x)` is changed to `f(x,y,z)` such that ``` def f(x, y, z): return x*y*z ``` What is the syntax to pass the 3 arguments to `f(x, y, z)` from the `p.map` method?
2020/02/25
[ "https://Stackoverflow.com/questions/60389566", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9066431/" ]
Maybe you are looking for [`starmap`](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool.starmap)? > > `starmap(func, iterable[, chunksize])` > > > Like map() except that the elements of the iterable are expected to be iterables that are unpacked as arguments. > > > Hence an iterable of `[(1,2), (3, 4)]` results in `[func(1,2), func(3,4)]`. > > > New in version 3.3. > > >
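A minimal sketch of what that looks like for the three-argument `f` from the question (the argument tuples are made up for illustration):

```python
from multiprocessing import Pool

def f(x, y, z):
    return x * y * z

if __name__ == '__main__':
    with Pool(5) as p:
        # Each tuple is unpacked into f's three positional arguments.
        print(p.starmap(f, [(1, 2, 3), (4, 5, 6), (7, 8, 9)]))
        # -> [6, 120, 504]
```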
Use [`p.starmap()`](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool.starmap), it's meant for exactly this case.
70,074,369
I have a list of words like this: ``` word_list=[{"word": "python", "repeted": 4}, {"word": "awsome", "repeted": 3}, {"word": "frameworks", "repeted": 2}, {"word": "programing", "repeted": 2}, {"word": "stackoverflow", "repeted": 2}, {"word": "work", "repeted": 1}, {"word": "error", "repeted": 1}, {"word": "teach", "repeted": 1} ] ``` ,that comes from another list of notes: ``` note_list = [{"note_id":1, "note_txt":"A curated list of awesome Python frameworks"}, {"note_id":2, "note_txt":"what is awesome Python frameworks"}, {"note_id":3, "note_txt":"awesome Python is good to wok with it"}, {"note_id":4, "note_txt":"use stackoverflow to lern programing with python is awsome"}, {"note_id":5, "note_txt":"error in programing is good to learn"}, {"note_id":6, "note_txt":"stackoverflow is very useful to share our knoloedge"}, {"note_id":7, "note_txt":"teach, work"}, ] ``` I want to know how can I map every word to its own note: ``` maped_list=[{"word": "python", "notes_ids": [1,2,3,4]}, {"word": "awsome", "notes_ids": [1,2,3]}, {"word": "frameworks", "notes_ids": [1,2]}, {"word": "programing", "notes_ids": [4,5]}, {"word": "stackoverflow", "notes_ids": [4,6]}, {"word": "work", "notes_ids": [7]}, {"word": "error", "notes_ids": [5]}, {"word": "teach", "notes_ids": [7]} ] ``` my work: ``` # i started by appending all the notes text into one list notes_test = [] for note in note_list: notes_test.append(note['note_txt']) # calculate the reptition of each word dict = {} for sentence in notes_test: for word in re.split('\s', sentence): # split with whitespace try: dict[word] += 1 except KeyError: dict[word] = 1 word_list= [] for key in dict.keys(): word = {} word['word'] = key word['repeted'] = dict[key] word_list.append(word) ``` my question: 1. how can I map the word list and note list to get the mapped list 2. how do you find the quality of my code, any remarks
2021/11/23
[ "https://Stackoverflow.com/questions/70074369", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17483739/" ]
numpy broadcasting is so useful here: ```py bm = df_other.values[:, 0] == df.values ``` Output: ```py >>> bm array([[ True, True, False, False, False], [False, False, True, False, False], [False, False, False, True, False]]) ``` If you need it as ints: ```py >>> bm.astype(int) array([[1, 1, 0, 0, 0], [0, 0, 1, 0, 0], [0, 0, 0, 1, 0]]) ```
Another way to do this using pandas methods are as follows: ``` pd.crosstab(df_other['a'], df_other['c']).reindex(df['a']).to_numpy(dtype=int) ``` Output: ``` array([[1, 1, 0, 0, 0], [0, 0, 1, 0, 0], [0, 0, 0, 1, 0]]) ```
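Returning to the word-to-note mapping the question asks about, a minimal sketch (assuming the `word_list` and `note_list` structures shown there and a naive case-insensitive tokenisation; spelling differences in the sample data such as "awsome" vs "awesome" are not normalised):

```python
import re

maped_list = []
for entry in word_list:
    word = entry["word"]
    # Collect the ids of every note whose text contains this word.
    notes_ids = [
        note["note_id"]
        for note in note_list
        if word in re.findall(r"\w+", note["note_txt"].lower())
    ]
    maped_list.append({"word": word, "notes_ids": notes_ids})

print(maped_list)
```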
63,759,451
I'm trying to check the validity of comma-separated strings in python. That is, it's possible that the strings contain mistakes whereby there are more than one comma used. Here is a valid string: ``` foo = "a, b, c, d, e" ``` This is a valid string as it is comma-delimited; only one comma, not several or spaces only. Here is an invalid string: ``` invalid = "a,, b, c,,,,d, e,,; f, g" ``` The invalid string is invalid because (1) it uses more than one comma and (2) it also uses a semicolon `;`. What would be the most effective way to check that the strings are valid? My first attempt was to try something like: ``` def check_valid_string(input_string): if ",," in input_string or ";" in input_string: return "Not valid" ## or False else: return "Valid" ## or True ``` however, it's not clear that this would catch all possible invalid strings. It's also not clear to me that this approach is the most computationally efficient (i.e. quick).
2020/09/05
[ "https://Stackoverflow.com/questions/63759451", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5269850/" ]
It appears the best way to accomplish this is with regex: Here is a valid string: ``` valid = "a, b, c, foo, bar, dog, cat" ``` Here are various invalid strings: ``` ## invalid1 is invalid as it contains multiple , i.e. `,,` and : invalid1 = "a,, b, c,,,,d, e,,; f, g" ## invalid2 is invalid as it contains `, ,` invalid2 = "a b, ,c, d, e" ## invalid3 is invalid as it contains spaces between strings invalid3 = "a, b, d, elephant, f g" ``` Here is the regex to check whether the string is valid: ``` import re pattern = re.compile(r"^(\w+)(,\s*\w+)*$") def check_valid(input_string): if pattern.match(input_string) == None: return "Invalid" else: return "Valid" ``` Here is the function: ``` >>> check_valid(invalid) 'Invalid' >>> check_valid(invalid2) 'Invalid' >>> check_valid(invalid3) 'Invalid' >>> check_valid(valid) 'Valid' ```
Here you have one way to check if it's valid: ``` def is_valid(comma_sep_str): if ';' in comma_sep_str or ',,' in comma_sep_str: return 'Not valid' else: return 'Valid' myString1 = "a,, b, c,,,,d, e,,; f, g" myString2 = "a, b, c, d, e" print(is_valid(myString1)) print(is_valid(myString2)) ``` PS: Maybe it is not the most efficient, but it will check whether the string is valid or not. Note that in all the wrong cases you will always have at least one of these two: ",," or ";".
62,882,592
I have a `df` named `data` as follows: ``` id upper_ci lower_ci max_power_contractual 0 12858 60.19878860406808 49.827481214215204 0 1 12858 60.61189293066522 49.298784196530896 0 2 12858 60.34397624424309 49.718421137642885 70 3 12858 59.87472261936114 49.464255779713476 10 4 12858 60.2735279368527 49.41672240525131 0 ``` I am trying to create a new column named `up_threshold` as follows: * If the value of `max_power_contractual` is either zero (`0`) or `NaN`, then the value in the `up_threshold` should be the value in the `upper_ci` * If the value of `max_power_contractual` is not zero, and the condition: `max_power_contractual > upper_ci` is `True`, then the value in the `up_threshold` should be the value in the `upper_ci` * If the value of `max_power_contractual < upper_ci` is `True`, then the value in the `up_threshold` should be the value in the `max_power_contractual` I tried: ``` if (data['max_power_contractual'] in (0, np.nan)) or (data['max_power_contractual'] > data['upper_ci']): data['up_threshold'] = data['upper_ci'] elif (data['upper_ci'] > data['max_power_contractual'] == 0): data['up_threshold'] = data['max_power_contractual'] ``` But it gives me the following error: > > Traceback (most recent call last): > > > File "/home/cortex/.config/spyder-py3/temp.py", line 179, in > > data = cp\_detection(data, threshold) > > > File "/home/cortex/.config/spyder-py3/temp.py", line 146, in > cp\_detection > if data['max\_power\_contractual'] == 0: > > > File > "/home/cortex/.local/lib/python3.7/site-packages/pandas/core/generic.py", > line 1479, in **nonzero** > f"The truth value of a {type(self).**name**} is ambiguous. " > > > ValueError: The truth value of a Series is ambiguous. Use a.empty, > a.bool(), a.item(), a.any() or a.all(). > > > Can someone please tell me my mistake and how can I solve it? **Expected output:** ``` id upper_ci lower_ci max_power_contractual up_threshold 0 12858 60.19878860406808 49.827481214215204 0 60.19878860406808 (Since `max_power_contractual` value is 0) 1 12858 60.61189293066522 49.298784196530896 NaN 60.61189293066522 (Since `max_power_contractual` value is NaN) 2 12858 60.34397624424309 49.718421137642885 70 60.34397624424309 (Since `upper_ci < max_power_contractual`) 3 12858 59.87472261936114 49.464255779713476 10 10 (Since `upper_ci > max_power_contractual`) ```
2020/07/13
[ "https://Stackoverflow.com/questions/62882592", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11853632/" ]
You can use `np.where` to add the new column: ``` df['up_threshold'] = np.where(df['max_power_contractual'].fillna(0) == 0, df['upper_ci'], np.where(df['max_power_contractual'] > df['upper_ci'], df['upper_ci'], df['max_power_contractual']) ) print(df) ``` Prints: ``` id upper_ci lower_ci max_power_contractual up_threshold 0 12858 60.198789 49.827481 0.0 60.198789 1 12858 60.611893 49.298784 NaN 60.611893 2 12858 60.343976 49.718421 70.0 60.343976 3 12858 59.874723 49.464256 10.0 10.000000 4 12858 60.273528 49.416722 0.0 60.273528 ```
Not the efficient way but easier to understand ``` In [17]: def process(data): ...: result = None ...: if (data['max_power_contractual'] in (0, np.nan)) or (data['max_power_contractual'] > data['upper_ci']): ...: result = data['upper_ci'] ...: elif (data['upper_ci'] > data['max_power_contractual']): ...: result = data['max_power_contractual'] ...: ...: return result ...: In [18]: df.apply(process, axis=1) Out[18]: 0 60.198789 1 60.611893 2 60.343976 3 10.000000 4 60.273528 dtype: float64 In [19]: df["up_threshold"] = df.apply(process, axis=1) In [20]: df Out[20]: id upper_ci lower_ci max_power_contractual up_threshold 0 12858 60.198789 49.827481 0 60.198789 1 12858 60.611893 49.298784 0 60.611893 2 12858 60.343976 49.718421 70 60.343976 3 12858 59.874723 49.464256 10 10.000000 4 12858 60.273528 49.416722 0 60.273528 ```
62,882,592
I have a `df` named `data` as follows: ``` id upper_ci lower_ci max_power_contractual 0 12858 60.19878860406808 49.827481214215204 0 1 12858 60.61189293066522 49.298784196530896 0 2 12858 60.34397624424309 49.718421137642885 70 3 12858 59.87472261936114 49.464255779713476 10 4 12858 60.2735279368527 49.41672240525131 0 ``` I am trying to create a new column named `up_threshold` as follows: * If the value of `max_power_contractual` is either zero (`0`) or `NaN`, then the value in the `up_threshold` should be the value in the `upper_ci` * If the value of `max_power_contractual` is not zero, and the condition: `max_power_contractual > upper_ci` is `True`, then the value in the `up_threshold` should be the value in the `upper_ci` * If the value of `max_power_contractual < upper_ci` is `True`, then the value in the `up_threshold` should be the value in the `max_power_contractual` I tried: ``` if (data['max_power_contractual'] in (0, np.nan)) or (data['max_power_contractual'] > data['upper_ci']): data['up_threshold'] = data['upper_ci'] elif (data['upper_ci'] > data['max_power_contractual'] == 0): data['up_threshold'] = data['max_power_contractual'] ``` But it gives me the following error: > > Traceback (most recent call last): > > > File "/home/cortex/.config/spyder-py3/temp.py", line 179, in > > data = cp\_detection(data, threshold) > > > File "/home/cortex/.config/spyder-py3/temp.py", line 146, in > cp\_detection > if data['max\_power\_contractual'] == 0: > > > File > "/home/cortex/.local/lib/python3.7/site-packages/pandas/core/generic.py", > line 1479, in **nonzero** > f"The truth value of a {type(self).**name**} is ambiguous. " > > > ValueError: The truth value of a Series is ambiguous. Use a.empty, > a.bool(), a.item(), a.any() or a.all(). > > > Can someone please tell me my mistake and how can I solve it? **Expected output:** ``` id upper_ci lower_ci max_power_contractual up_threshold 0 12858 60.19878860406808 49.827481214215204 0 60.19878860406808 (Since `max_power_contractual` value is 0) 1 12858 60.61189293066522 49.298784196530896 NaN 60.61189293066522 (Since `max_power_contractual` value is NaN) 2 12858 60.34397624424309 49.718421137642885 70 60.34397624424309 (Since `upper_ci < max_power_contractual`) 3 12858 59.87472261936114 49.464255779713476 10 10 (Since `upper_ci > max_power_contractual`) ```
2020/07/13
[ "https://Stackoverflow.com/questions/62882592", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11853632/" ]
Use `np.select` ```py import numpy as np m1 = df.max_power_contractual.isin([np.NaN, 0]) m2 = df.max_power_contractual > df.upper_ci df['up_threshold'] = np.select([m1, m2], [df.upper_ci, df.upper_ci], default=df.max_power_contractual) print(df) ``` Output ``` id upper_ci lower_ci max_power_contractual up_threshold 0 12858 60.198789 49.827481 0 60.198789 1 12858 60.611893 49.298784 0 60.611893 2 12858 60.343976 49.718421 70 60.343976 3 12858 59.874723 49.464256 10 10.000000 4 12858 60.273528 49.416722 0 60.273528 ``` Since we're using the same columns as choices, we can also use ```py df['up_threshold'] = np.where((m1 | m2), df.upper_ci, df.max_power_contractual) ```
Not the efficient way but easier to understand ``` In [17]: def process(data): ...: result = None ...: if (data['max_power_contractual'] in (0, np.nan)) or (data['max_power_contractual'] > data['upper_ci']): ...: result = data['upper_ci'] ...: elif (data['upper_ci'] > data['max_power_contractual']): ...: result = data['max_power_contractual'] ...: ...: return result ...: In [18]: df.apply(process, axis=1) Out[18]: 0 60.198789 1 60.611893 2 60.343976 3 10.000000 4 60.273528 dtype: float64 In [19]: df["up_threshold"] = df.apply(process, axis=1) In [20]: df Out[20]: id upper_ci lower_ci max_power_contractual up_threshold 0 12858 60.198789 49.827481 0 60.198789 1 12858 60.611893 49.298784 0 60.611893 2 12858 60.343976 49.718421 70 60.343976 3 12858 59.874723 49.464256 10 10.000000 4 12858 60.273528 49.416722 0 60.273528 ```
39,666,183
I'm trying to extract all the images from a page. I have used Mechanize, urllib and Selenium to extract the HTML, but the part I want to extract is never there. Also, when I view the page source I am not able to see the part I want to extract. Instead of the description I want to extract, there is this: ``` <div class="loading32"></div> </div> </div> </div> ``` But if I try to view it using the inspect element option, it's there. Is there an easy way to figure out what this script does without any JavaScript knowledge, so I can bypass it? Or is there a way to get an equivalent of inspect element using Selenium in Python 2.7? What is the difference between view page source and inspect element anyway?
2016/09/23
[ "https://Stackoverflow.com/questions/39666183", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6736217/" ]
Possibly you're trying to get elements that are created with a client-side script. I don't think JavaScript runs when you just send a GET/POST request (which is what I'm assuming you mean by "view source"), so those elements never show up in the returned HTML.
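"View page source" shows the raw HTML the server sent; "inspect element" shows the live DOM after the page's JavaScript has run. A hedged sketch of reading that rendered DOM with Selenium (the URL is a placeholder, a `WebDriverWait` would be more robust than a fixed sleep, and `find_elements_by_tag_name` is the Selenium 2/3 spelling used around the time of this question):

```python
import time
from selenium import webdriver

driver = webdriver.Firefox()                 # or webdriver.Chrome()
driver.get("http://example.com/gallery")     # placeholder URL
time.sleep(5)                                # crude wait for the scripts to finish

rendered_html = driver.page_source           # post-JavaScript DOM, like "inspect element"
image_urls = [img.get_attribute("src")
              for img in driver.find_elements_by_tag_name("img")]
print(image_urls)
driver.quit()
```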
At the time I was not aware how much content is loaded in through JavaScript after the page loads. Mechanize does not have a JavaScript interpreter. The way I ended up solving this was extracting the links from the \*.js file and redoing the GET command with urllib, getting the required content that way.
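A sketch of that script-scraping approach (Python 2 to match the question; the script URL and the file-extension pattern are purely illustrative):

```python
import re
import urllib2

# Fetch the JavaScript file that builds the missing markup, then pull the
# image URLs out of it and download them directly.
js_source = urllib2.urlopen("http://example.com/static/gallery.js").read()
image_urls = re.findall(r'https?://[^"\']+\.(?:jpg|jpeg|png|gif)', js_source)

for url in image_urls:
    data = urllib2.urlopen(url).read()
    with open(url.split("/")[-1], "wb") as fh:
        fh.write(data)
```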
32,302,725
Hi there am new to OOP and python, I am currently trying to increment a User Id variable from a child Class, when I create an instance of the parent class using inheritance it doesn't seem to recognise the Id Variable from its parent class. Example here ``` class User: _ID = 0 def __init__(self, name): self.name = name self.id = self._ID self.__class__._ID += 1 class Customer(User): def __init__(self, name): def lastname(self): return "self.name.split()[-1]" ``` If i do i am able to access the class attribute ``` >> Chris = User("Christopher Allan") >> Chris.id >> 0 ``` When I try to run ``` >> Andy = Customer('Andy Smith') >> Andy.id >> Traceback (most recent call last): File "<pyshell#83>", line 1, in <module> Andy.id AttributeError: 'Customer' object has no attribute 'id' ``` **Update** I completed the rest of the Customer Class which was the cause of the code not working for me, sorry about that people I used pass before as for briefness of question I didn't test that it would work with pass in the Customer class.
2015/08/31
[ "https://Stackoverflow.com/questions/32302725", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2374021/" ]
First, the way to get hold of the ListView itself is relatively easy. In an Activity subclass, you would do this: ``` ListView itemList = (ListView) findViewById(R.id.ItemList); ``` In your example above, the ArrayAdapter needs a layout id in it's constructor. This layout should contain a single TextView element (or some subclass) which will be used to render the list item. ``` <TextView .... /> ``` In many cases, the value ``` android.R.layout.simple_list_item_1 ``` is sufficient. If you want different formatting, but still a single TextView, you can supply your own layout file in this constructor. If you want anything more complex than a straight TextView, then you can create a subclass of ArrayAdapter, and override the getView method. In this case, i'd recommend following the ViewHolder pattern as described here [How can I make my ArrayAdapter follow the ViewHolder pattern?](https://stackoverflow.com/questions/3832254/how-can-i-make-my-arrayadapter-follow-the-viewholder-pattern) The reasoning for the ViewHolder pattern can be seen here <http://developer.android.com/training/improving-layouts/smooth-scrolling.html> Finally, set the adapter on the ListView when you're all done: ``` itemList.setAdapter(adapter); ``` As for what "R" is, it's a file generated by your IDE (eclipse, intellij, android studio), that lives in the main package (as indicated in your AndroidManifest.xml). Everytime you create a new element in a layout file with a new id, an entry gets added to that class under the "R.id" scope. The same happens as you create layout files, drawables, dimension values, string values, etc... If you're outside you're main package, just use the IDE to help you import the class. Just take care you import the one from your package, as android it self has an "android.R" for it's own resources.
Writing apps for Android is much more complicated than writing apps for Windows in vb6. You should really study basics and do some tutorials. Start [here!](http://developer.android.com/training/index.html) But for your question, to get access to xml control in your code, first you have to make object of that control, e.g. ``` private Button button1; ``` then connect it with actual control from XML layout by method [findViewById()](http://developer.android.com/reference/android/app/Activity.html#findViewById(int)) ``` button1 = (Button) findViewById(R.id.your_button_id_in_xml_layout); ```
32,302,725
Hi there am new to OOP and python, I am currently trying to increment a User Id variable from a child Class, when I create an instance of the parent class using inheritance it doesn't seem to recognise the Id Variable from its parent class. Example here ``` class User: _ID = 0 def __init__(self, name): self.name = name self.id = self._ID self.__class__._ID += 1 class Customer(User): def __init__(self, name): def lastname(self): return "self.name.split()[-1]" ``` If i do i am able to access the class attribute ``` >> Chris = User("Christopher Allan") >> Chris.id >> 0 ``` When I try to run ``` >> Andy = Customer('Andy Smith') >> Andy.id >> Traceback (most recent call last): File "<pyshell#83>", line 1, in <module> Andy.id AttributeError: 'Customer' object has no attribute 'id' ``` **Update** I completed the rest of the Customer Class which was the cause of the code not working for me, sorry about that people I used pass before as for briefness of question I didn't test that it would work with pass in the Customer class.
2015/08/31
[ "https://Stackoverflow.com/questions/32302725", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2374021/" ]
You would have to use the findViewById() method which is inherited from the AppCompatActivity class. Then call the list views setAdapter method. ``` ListView listView = (ListView) findViewById(R.id.'the id-name of your list view'); listView.setAdapater(myAdapter); ``` The ArrayAdapter takes 3 parameters in to it's constructor. The first is context which you have provided 'this', this works because activities are subclasses of the Context class. The second parameter you supplied; *android.R.layout.simple\_list\_item\_1.* all this does is format your ListView, e.g. you could have checkboxes next to your list items if you liked by using android, *android.R.layout.simple\_list\_item\_checked*. The third is the array, the values at each index of your array will be used in the list view.
Writing apps for Android is much more complicated than writing apps for Windows in vb6. You should really study basics and do some tutorials. Start [here!](http://developer.android.com/training/index.html) But for your question, to get access to xml control in your code, first you have to make object of that control, e.g. ``` private Button button1; ``` then connect it with actual control from XML layout by method [findViewById()](http://developer.android.com/reference/android/app/Activity.html#findViewById(int)) ``` button1 = (Button) findViewById(R.id.your_button_id_in_xml_layout); ```
32,302,725
Hi there am new to OOP and python, I am currently trying to increment a User Id variable from a child Class, when I create an instance of the parent class using inheritance it doesn't seem to recognise the Id Variable from its parent class. Example here ``` class User: _ID = 0 def __init__(self, name): self.name = name self.id = self._ID self.__class__._ID += 1 class Customer(User): def __init__(self, name): def lastname(self): return "self.name.split()[-1]" ``` If i do i am able to access the class attribute ``` >> Chris = User("Christopher Allan") >> Chris.id >> 0 ``` When I try to run ``` >> Andy = Customer('Andy Smith') >> Andy.id >> Traceback (most recent call last): File "<pyshell#83>", line 1, in <module> Andy.id AttributeError: 'Customer' object has no attribute 'id' ``` **Update** I completed the rest of the Customer Class which was the cause of the code not working for me, sorry about that people I used pass before as for briefness of question I didn't test that it would work with pass in the Customer class.
2015/08/31
[ "https://Stackoverflow.com/questions/32302725", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2374021/" ]
First, the way to get hold of the ListView itself is relatively easy. In an Activity subclass, you would do this: ``` ListView itemList = (ListView) findViewById(R.id.ItemList); ``` In your example above, the ArrayAdapter needs a layout id in it's constructor. This layout should contain a single TextView element (or some subclass) which will be used to render the list item. ``` <TextView .... /> ``` In many cases, the value ``` android.R.layout.simple_list_item_1 ``` is sufficient. If you want different formatting, but still a single TextView, you can supply your own layout file in this constructor. If you want anything more complex than a straight TextView, then you can create a subclass of ArrayAdapter, and override the getView method. In this case, i'd recommend following the ViewHolder pattern as described here [How can I make my ArrayAdapter follow the ViewHolder pattern?](https://stackoverflow.com/questions/3832254/how-can-i-make-my-arrayadapter-follow-the-viewholder-pattern) The reasoning for the ViewHolder pattern can be seen here <http://developer.android.com/training/improving-layouts/smooth-scrolling.html> Finally, set the adapter on the ListView when you're all done: ``` itemList.setAdapter(adapter); ``` As for what "R" is, it's a file generated by your IDE (eclipse, intellij, android studio), that lives in the main package (as indicated in your AndroidManifest.xml). Everytime you create a new element in a layout file with a new id, an entry gets added to that class under the "R.id" scope. The same happens as you create layout files, drawables, dimension values, string values, etc... If you're outside you're main package, just use the IDE to help you import the class. Just take care you import the one from your package, as android it self has an "android.R" for it's own resources.
There are no syntactical similarities with VB6. In Android, what happens is that UI elements are created as an object hierarchy in memory. To get a particular object, we use the findViewById method with the id of that particular element as the argument. You can get the ListView: ``` ListView itemList = (ListView) findViewById(R.id.ItemList); ``` Then, ``` ArrayAdapter<String> myAdapter=new ArrayAdapter<String>( this, android.R.layout.simple_list_item_1, myStringArray); ``` Here, ``` android.R.layout.simple_list_item_1 ``` is not referencing your `ItemList`. This is for the Adapter. In the ListView you need to have a list of elements. ``` android.R.layout.simple_list_item_1 represents the template that is used to populate the list view. ``` Then you do: ``` itemList.setAdapter(myAdapter); ``` The R issue mainly happens due to a build failure; R is generated during the building of the project. To see the issue with R, can you add more code of your Activity class, with the imports?
32,302,725
Hi there am new to OOP and python, I am currently trying to increment a User Id variable from a child Class, when I create an instance of the parent class using inheritance it doesn't seem to recognise the Id Variable from its parent class. Example here ``` class User: _ID = 0 def __init__(self, name): self.name = name self.id = self._ID self.__class__._ID += 1 class Customer(User): def __init__(self, name): def lastname(self): return "self.name.split()[-1]" ``` If i do i am able to access the class attribute ``` >> Chris = User("Christopher Allan") >> Chris.id >> 0 ``` When I try to run ``` >> Andy = Customer('Andy Smith') >> Andy.id >> Traceback (most recent call last): File "<pyshell#83>", line 1, in <module> Andy.id AttributeError: 'Customer' object has no attribute 'id' ``` **Update** I completed the rest of the Customer Class which was the cause of the code not working for me, sorry about that people I used pass before as for briefness of question I didn't test that it would work with pass in the Customer class.
2015/08/31
[ "https://Stackoverflow.com/questions/32302725", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2374021/" ]
You would have to use the findViewById() method, which is inherited from the AppCompatActivity class, and then call the list view's setAdapter method. ``` ListView listView = (ListView) findViewById(R.id.'the id-name of your list view'); listView.setAdapter(myAdapter); ``` The ArrayAdapter takes 3 parameters into its constructor. The first is the context, which you have provided as 'this'; this works because activities are subclasses of the Context class. The second parameter you supplied is *android.R.layout.simple\_list\_item\_1*; all this does is format your ListView, e.g. you could have checkboxes next to your list items if you liked by using *android.R.layout.simple\_list\_item\_checked*. The third is the array; the values at each index of your array will be used in the list view.
There are no syntactical similarities with VB6. In Android, UI elements are created as an object hierarchy in memory. To get a particular object, we use the findViewById method with the id of that particular element as the argument. You can get the ListView: ``` ListView itemList = (ListView) findViewById(R.id.ItemList); ``` Then, ``` ArrayAdapter<String> myAdapter=new ArrayAdapter<String>( this, android.R.layout.simple_list_item_1, myStringArray); ``` Here, ``` android.R.layout.simple_list_item_1 ``` is not referencing your `ItemList`. This is for the Adapter. In the ListView you need to have a list of elements; `android.R.layout.simple_list_item_1` represents the template that is used to populate the list view. Then you do ``` itemList.setAdapter(myAdapter); ``` R issues mostly happen due to a build failure; R is generated during the building of the project. To see the issue with R, can you add more code of your Activity class with the imports?
51,420,803
I have been trying to install `python-poppler-qt4` but it shows the error `ModuleNotFoundError: No module name sipdistutils`. When I tried installing `sipdistutils`, it again showed the error. **Error Message** [![enter image description here](https://i.stack.imgur.com/m6Pdk.png)](https://i.stack.imgur.com/m6Pdk.png)
2018/07/19
[ "https://Stackoverflow.com/questions/51420803", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10104837/" ]
I have found a similar issue here: <https://github.com/wbsoft/python-poppler-qt5/issues/14> I think that `sipdistutils` should be part of the `sip` package. Please verify that you have it installed: ``` $ pip freeze | grep sip sip==4.19.1 ``` If there's no output, install it with `pip install sip`. If this doesn't work, some proposed solutions: 1. > > It seems like the pip version of sip does not install sipdistutils. To install it from source, you can do that: > `wget https://sourceforge.net/projects/pyqt/files/sip/sip-4.19.3/sip-4.19.3.tar.gz > tar zxvf sip-4.19.3.tar.gz > cd sip-4.19.3 > python configure.py > make > make install` > > > 2. > > You can get the sipdistutils.py from Riverbank's mercurial server, i.e. from [here](https://www.riverbankcomputing.com/hg/sip/file/14685a6e736e/sipdistutils.py). It is self-contained. Just place it into your Python site-packages folder > > >
It is true that using `sipdistutils` for building Python extensions is no longer the way to do things. So, the absolute fix is to modify the build procedure for the package, but since I am not in control of that (though I may try to find time to contribute to the project) I did find a work-around. In our case, on Ubuntu 20.04, we're using `pyenv` with the `virtualenv` plugin to create virtual environments for our applications, and it is within these environments that we run into the issue. I did a little digging around and figured out that if you have three source files in your build path, it can be made to work: * `sipdistutils.py` which is provided by `python3-sip-dev` (apt, v4.19.21) * `sipconfig.py` which is provided by `python3-sip` (apt, v4.19.21) * `sipconfig_nd8.py` which is provided by `python3-sip` (apt, v4.19.21) *note: these files are not present in versions of sip newer than v4.19* Now, simply installing those with `apt` would be enough if we were using the apt-installed Python; since we are not, I simply copied those three files from their default installed location to a custom path that we are using via `PYTHONPATH`.
52,750,669
I want to create a new list (V) from other lists (a, b, c) using a function, but I would like to take advantage of Python and apply the function to the three lists and not element by element. For example, I have the lists a, b and c; and the result after applying the function should be V. Thanks. ``` def mag(a, b, c): # something sophisticated return (a+b)*c a = [1, 5, 7] b = [4, 8, 3] c = [2, 6, 3] V = [10, 78, 30] ```
2018/10/11
[ "https://Stackoverflow.com/questions/52750669", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8766822/" ]
You want to first zip the arguments, then map the function on the unpacked tuples: ``` from itertools import starmap starmap(mag, zip(a,b,c)) ``` See [here](https://ideone.com/uTNa5L) for an example.
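Note that `starmap` returns a lazy iterator, so wrap it in `list` to get the list itself. A runnable check using the question's own `mag` and data:

```
from itertools import starmap

def mag(a, b, c):
    return (a + b) * c

a = [1, 5, 7]
b = [4, 8, 3]
c = [2, 6, 3]

V = list(starmap(mag, zip(a, b, c)))
print(V)  # [10, 78, 30]
```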
What about using only built-in functions? Like [zip](https://docs.python.org/3.3/library/functions.html#zip): ``` >>> [mag(a_, b_, c_) for a_,b_,c_ in zip(a, b, c)] [10, 78, 30] ``` Plus another Python built-in function, [map](https://docs.python.org/3.3/library/functions.html#map), which returns an iterator and thus produces results lazily, saving memory: ``` >>> gen = map(lambda uple:mag(*uple), zip(a, b, c)) >>> list(gen) [10, 78, 30] ```
52,750,669
I want to create a new list (V) from other lists (a, b, c) using a function, but I would like to take advantage of Python and apply the function to the three lists and not element by element. For example, I have the lists a, b and c; and the result after applying the function should be V. Thanks. ``` def mag(a, b, c): # something sophisticated return (a+b)*c a = [1, 5, 7] b = [4, 8, 3] c = [2, 6, 3] V = [10, 78, 30] ```
2018/10/11
[ "https://Stackoverflow.com/questions/52750669", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8766822/" ]
You can easily do this with the `map` function: ``` V = list(map(mag, a, b, c)) ```
What about using only built-in functions? Like [zip](https://docs.python.org/3.3/library/functions.html#zip): ``` >>> [mag(a_, b_, c_) for a_,b_,c_ in zip(a, b, c)] [10, 78, 30] ``` Plus another Python built-in function, [map](https://docs.python.org/3.3/library/functions.html#map), which returns an iterator and thus produces results lazily, saving memory: ``` >>> gen = map(lambda uple:mag(*uple), zip(a, b, c)) >>> list(gen) [10, 78, 30] ```
52,750,669
I want to create a new list (V) from other lists (a, b, c) using a function, but I would like to take advantage of Python and apply the function to the three lists and not element by element. For example, I have the lists a, b and c; and the result after applying the function should be V. Thanks. ``` def mag(a, b, c): # something sophisticated return (a+b)*c a = [1, 5, 7] b = [4, 8, 3] c = [2, 6, 3] V = [10, 78, 30] ```
2018/10/11
[ "https://Stackoverflow.com/questions/52750669", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8766822/" ]
You can easily do this with the `map` function: ``` V = list(map(mag, a, b, c)) ```
You want to first zip the arguments, then map the function on the unpacked tuples: ``` from itertools import starmap starmap(mag, zip(a,b,c)) ``` See [here](https://ideone.com/uTNa5L) for an example.
52,750,669
I want to create a new list (V) from other lists (a, b, c) using a function, but I would like to take advantage of Python and apply the function to the three lists and not element by element. For example, I have the lists a, b and c; and the result after applying the function should be V. Thanks. ``` def mag(a, b, c): # something sophisticated return (a+b)*c a = [1, 5, 7] b = [4, 8, 3] c = [2, 6, 3] V = [10, 78, 30] ```
2018/10/11
[ "https://Stackoverflow.com/questions/52750669", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8766822/" ]
You want to first zip the arguments, then map the function on the unpacked tuples: ``` from itertools import starmap starmap(mag, zip(a,b,c)) ``` See [here](https://ideone.com/uTNa5L) for an example.
An alternate solution is to use `map` and `lambda` ``` In [16]: list(map(lambda p: mag(*p), zip(a, b, c))) Out[16]: [10, 78, 30] ```
52,750,669
I want to create a new list (V) from other lists (a, b, c) using a function, but I would like to take advantage of Python and apply the function to the three lists and not element by element. For example, I have the lists a, b and c; and the result after applying the function should be V. Thanks. ``` def mag(a, b, c): # something sophisticated return (a+b)*c a = [1, 5, 7] b = [4, 8, 3] c = [2, 6, 3] V = [10, 78, 30] ```
2018/10/11
[ "https://Stackoverflow.com/questions/52750669", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8766822/" ]
You can easily do this with the `map` function: ``` V = list(map(mag, a, b, c)) ```
An alternate solution is to use `map` and `lambda` ``` In [16]: list(map(lambda p: mag(*p), zip(a, b, c))) Out[16]: [10, 78, 30] ```
12,246,908
What is my requirement? --> I need an exception notifier which will email some specific configured user about any sort of exception occurring in a plain Python app and web.py. I want something similar to this <http://matharvard.ca/posts/2011/jul/31/exception-notification-for-rails-3/> Is there anything of the same sort available? Please reply asap. Thanks.
2012/09/03
[ "https://Stackoverflow.com/questions/12246908", "https://Stackoverflow.com", "https://Stackoverflow.com/users/486565/" ]
You can get what you want by: * Wrapping your code in a `try..except` clause. * Using the `logging` module to log the exceptions with a certain level of severity, e.g. `ERROR`. * Setting an `SMTPHandler` for exceptions of and above a certain level. This way is quite flexible. Your messages can be sent to several places (like log files) and you can reconfigure your settings easily.
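For instance, a minimal sketch of wiring those three pieces together (the SMTP host, addresses, and `risky_operation` below are placeholders, not values from the question):

```
import logging
import logging.handlers

logger = logging.getLogger("app")
logger.setLevel(logging.ERROR)

# Placeholder SMTP settings -- replace with your own server and addresses.
mail_handler = logging.handlers.SMTPHandler(
    mailhost=("smtp.example.com", 25),
    fromaddr="app@example.com",
    toaddrs=["admin@example.com"],
    subject="Unhandled exception in the app",
)
mail_handler.setLevel(logging.ERROR)
logger.addHandler(mail_handler)

try:
    risky_operation()  # stand-in for your own code
except Exception:
    # logger.exception logs at ERROR level and appends the traceback,
    # so the email carries the full stack trace.
    logger.exception("Something went wrong")
    raise
```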
You can overwrite the `excepthook` function from the [`sys`](http://docs.python.org/library/sys.html#sys.excepthook) module, and handle any uncaught exceptions there.
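A minimal sketch of that approach (`notify_admin` is a hypothetical helper; plug in smtplib or the logging handler from the other answer):

```
import sys
import traceback

def notify_admin(text):
    # Hypothetical notification helper -- replace with an email call.
    sys.stderr.write(text)

def emailing_excepthook(exc_type, exc_value, exc_tb):
    details = "".join(traceback.format_exception(exc_type, exc_value, exc_tb))
    notify_admin(details)
    # Fall back to the default hook so the traceback is still printed.
    sys.__excepthook__(exc_type, exc_value, exc_tb)

sys.excepthook = emailing_excepthook
```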
12,246,908
What is my requirement? --> I need an exception notifier which will email some specific configured user about any sort of exception occurring in a plain Python app and web.py. I want something similar to this <http://matharvard.ca/posts/2011/jul/31/exception-notification-for-rails-3/> Is there anything of the same sort available? Please reply asap. Thanks.
2012/09/03
[ "https://Stackoverflow.com/questions/12246908", "https://Stackoverflow.com", "https://Stackoverflow.com/users/486565/" ]
If you are not using any heavyweight Python framework, try <https://github.com/fossilet/exception-notifier>; it seems to be similar to Rails' exception notification, but quite simple. If you are using Django, it seems you can use its built-in feature: <https://docs.djangoproject.com/en/dev/howto/error-reporting/> (and also see this: <https://github.com/pinax/django-notification>). If using Tornado, try this: <https://github.com/kanevski/tornadotoad>; this is the most similar solution in Python compared to Rails.
You can overwrite the `excepthook` function from the [`sys`](http://docs.python.org/library/sys.html#sys.excepthook) module, and handle any uncaught exceptions there.
12,246,908
What is my requirement? --> I need an exception notifier which will email some specific configured user about any sort of exception occurring in a plain Python app and web.py. I want something similar to this <http://matharvard.ca/posts/2011/jul/31/exception-notification-for-rails-3/> Is there anything of the same sort available? Please reply asap. Thanks.
2012/09/03
[ "https://Stackoverflow.com/questions/12246908", "https://Stackoverflow.com", "https://Stackoverflow.com/users/486565/" ]
You can get what you want by: * Wrapping your code in a `try..except` clause. * Using the `logging` module to log the exceptions with a certain level of severity, e.g. `ERROR`. * Setting an `SMTPHandler` for exceptions of and above a certain level. This way is quite flexible. Your messages can be sent to several places (like log files) and you can reconfigure your settings easily.
I have the same requirement as you. I wrote [a simple module](https://github.com/fossilet/exception-notifier) to mail uncaught exceptions to the developers, as well as to record them to log files. It is used in our teams' cron scripts written in Python. Hope it can be useful for you too.
12,246,908
What is my requirement? --> I need an exception notifier which will email some specific configured user about any sort of exception occurring in a plain Python app and web.py. I want something similar to this <http://matharvard.ca/posts/2011/jul/31/exception-notification-for-rails-3/> Is there anything of the same sort available? Please reply asap. Thanks.
2012/09/03
[ "https://Stackoverflow.com/questions/12246908", "https://Stackoverflow.com", "https://Stackoverflow.com/users/486565/" ]
If you are not using any heavyweight Python framework, try <https://github.com/fossilet/exception-notifier>; it seems to be similar to Rails' exception notification, but quite simple. If you are using Django, it seems you can use its built-in feature: <https://docs.djangoproject.com/en/dev/howto/error-reporting/> (and also see this: <https://github.com/pinax/django-notification>). If using Tornado, try this: <https://github.com/kanevski/tornadotoad>; this is the most similar solution in Python compared to Rails.
I have the same requirement as you. I wrote [a simple module](https://github.com/fossilet/exception-notifier) to mail uncaught exceptions to the developers, as well as to record them to log files. It is used in our teams' cron scripts written in Python. Hope it can be useful for you too.
3,027,394
what does the '~' mean in python? i found this BF interpreter in python a while ago. ``` import sys #c,i,r,p=0,0,[0]*255,raw_input() c=0 i=0 p=raw_input() r=[0]*255 while c<len(p): m,n,u=p[c],0,r[i] if m==">":i+=1 if m=="<":i-=1 if m=="+":r[i]+=1 if m=="-":r[i]-=1 if m==".":sys.stdout.write(chr(u)) if m=="[": if ~u: while 1: m=p[c] if m=="]":n-=1 if m=="[":n+=1 if ~n:break c+=1 if m=="]": if u: while 1: m=p[c] if m=="]":n-=1 if m=="[":n+=1 if ~n:break c-=1 c+=1 ``` and i want to know what it does because i want to make one on my ti 84 (and a PF one) BF is <http://en.wikipedia.org/wiki/Brainfuck> and PF is something similar
2010/06/12
[ "https://Stackoverflow.com/questions/3027394", "https://Stackoverflow.com", "https://Stackoverflow.com/users/365060/" ]
Bitwise NOT, just like in C. In two's complement representation, `~n` is equivalent to `-n - 1`.
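A quick interactive check of that identity (note that `~n` is zero, i.e. falsy, exactly when `n == -1`):

```
>>> ~0
-1
>>> ~5
-6
>>> ~-1
0
```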
`~` is bitwise-not. I can't really think of a good way to illustrate it (unless you know that `-1` is the bitwise negation of `0`), but [the wikipedia entry](http://en.wikipedia.org/wiki/Bitwise_NOT#NOT) is pretty good.
3,027,394
what does the '~' mean in python? i found this BF interpreter in python a while ago. ``` import sys #c,i,r,p=0,0,[0]*255,raw_input() c=0 i=0 p=raw_input() r=[0]*255 while c<len(p): m,n,u=p[c],0,r[i] if m==">":i+=1 if m=="<":i-=1 if m=="+":r[i]+=1 if m=="-":r[i]-=1 if m==".":sys.stdout.write(chr(u)) if m=="[": if ~u: while 1: m=p[c] if m=="]":n-=1 if m=="[":n+=1 if ~n:break c+=1 if m=="]": if u: while 1: m=p[c] if m=="]":n-=1 if m=="[":n+=1 if ~n:break c-=1 c+=1 ``` and i want to know what it does because i want to make one on my ti 84 (and a PF one) BF is <http://en.wikipedia.org/wiki/Brainfuck> and PF is something similar
2010/06/12
[ "https://Stackoverflow.com/questions/3027394", "https://Stackoverflow.com", "https://Stackoverflow.com/users/365060/" ]
In this particular context, just replace '~' with 'not'. PS. ok i guess i will have to explain - started getting slapped with -1's, probably on the premise i don't know the difference between logical and bitwise negation. The thing is, the code in the question is **broken**. There is a bug in it. If you check how Brainfuck should work, it loops within **[ ]** braces while the current memory cell is !=0 (this is checked as pre-condition when entering **[** and as optimization before returning from **]**). But instead of arguing, perhaps is easier to show with examples of the code not working. Let's take the simple program `'[+]'`. Trying to tun this should just exit (because current cell is 0, it won even enter the loop). Instead if you run it in this interpreter, it goes into infinite loop. So i'll kindly ask you to revert your -1 votes if my clarification makes sense now ;-) Here is the interpreter slightly beautified, with fixed `~` bug and i also added the missing `,` input: ``` from sys import stdin, stdout bfHelloWorld = '++++++++++[>+++++++>++++++++++>+++>+<<<<-]>++.>+.+++++++..+++.>++.<<+++++++++++++++.>.+++.------.--------.>+.>.' # http://esoteric.sange.fi/brainfuck/bf-source/prog/yapi.b bfPiDigits = '''> +++++ (5 digits) [<+>>>>>>>>++++++++++<<<<<<<-]>+++++[<+++++++++>-]+>>>>>>+[<<+++[>>[-<]<[>]<-]>> [>+>]<[<]>]>[[->>>>+<<<<]>>>+++>-]<[<<<<]<<<<<<<<+[->>>>>>>>>>>>[<+[->>>>+<<<<]> >>>>]<<<<[>>>>>[<<<<+>>>>-]<<<<<-[<<++++++++++>>-]>>>[<<[<+<<+>>>-]<[>+<-]<++<<+ >>>>>>-]<<[-]<<-<[->>+<-[>>>]>[[<+>-]>+>>]<<<<<]>[-]>+<<<-[>>+<<-]<]<<<<+>>>>>>> >[-]>[<<<+>>>-]<<++++++++++<[->>+<-[>>>]>[[<+>-]>+>>]<<<<<]>[-]>+>[<<+<+>>>-]<<< <+<+>>[-[-[-[-[-[-[-[-[-<->[-<+<->>]]]]]]]]]]<[+++++[<<<++++++++<++++++++>>>>-]< <<<+<->>>>[>+<<<+++++++++<->>>-]<<<<<[>>+<<-]+<[->-<]>[>>.<<<<[+.[-]]>>-]>[>>.<< -]>[-]>[-]>>>[>>[<<<<<<<<+>>>>>>>>-]<<-]]>>[-]<<<[-]<<<<<<<<]++++++++++. ''' code = bfPiDigits # the code data = [0] * 255 # data memory cp = 0 # code pointer dp = 0 # data pointer while cp < len(code): cmd = code[cp] if cmd == '>': dp += 1 elif cmd == '<': dp -= 1 elif cmd == '+': data[dp] += 1 elif cmd == '-': data[dp] -= 1 elif cmd == '.': stdout.write(chr(data[dp])) elif cmd == ',': data[dp] = ord(stdin.read(1)) elif cmd == '[' and not data[dp]: # skip loop if ==0 n = 0 while True: cmd = code[cp] if cmd == '[': n += 1 elif cmd == ']': n -= 1 if not n: break cp += 1 elif cmd == ']' and data[dp]: # loop back if !=0 n = 0 while True: cmd = code[cp] if cmd == '[': n+=1 elif cmd == ']': n-=1 if not n: break cp -= 1 cp += 1 ```
`~` is bitwise-not. I can't really think of a good way to illustrate it (unless you know that `-1` is the bitwise negation of `0`), but [the wikipedia entry](http://en.wikipedia.org/wiki/Bitwise_NOT#NOT) is pretty good.
3,027,394
what does the '~' mean in python? i found this BF interpreter in python a while ago. ``` import sys #c,i,r,p=0,0,[0]*255,raw_input() c=0 i=0 p=raw_input() r=[0]*255 while c<len(p): m,n,u=p[c],0,r[i] if m==">":i+=1 if m=="<":i-=1 if m=="+":r[i]+=1 if m=="-":r[i]-=1 if m==".":sys.stdout.write(chr(u)) if m=="[": if ~u: while 1: m=p[c] if m=="]":n-=1 if m=="[":n+=1 if ~n:break c+=1 if m=="]": if u: while 1: m=p[c] if m=="]":n-=1 if m=="[":n+=1 if ~n:break c-=1 c+=1 ``` and i want to know what it does because i want to make one on my ti 84 (and a PF one) BF is <http://en.wikipedia.org/wiki/Brainfuck> and PF is something similar
2010/06/12
[ "https://Stackoverflow.com/questions/3027394", "https://Stackoverflow.com", "https://Stackoverflow.com/users/365060/" ]
Bitwise NOT, just like in C. In two's complement representation, `~n` is equivalent to `-n - 1`.
And to bring up one thing none of the other answers mentioned: the behavior of `~` for user-defined classes can be changed by overriding the `__invert__` method (or the `nb_invert` slot if you're using the Python/C API).
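A small sketch of that hook (the `Signal` class here is purely illustrative):

```
class Signal(object):
    def __init__(self, bits):
        self.bits = bits

    def __invert__(self):
        # Invoked for the unary ~ operator.
        return Signal([1 - b for b in self.bits])

    def __repr__(self):
        return "Signal(%r)" % (self.bits,)

print(~Signal([0, 1, 1, 0]))  # Signal([1, 0, 0, 1])
```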
3,027,394
what does the '~' mean in python? i found this BF interpreter in python a while ago. ``` import sys #c,i,r,p=0,0,[0]*255,raw_input() c=0 i=0 p=raw_input() r=[0]*255 while c<len(p): m,n,u=p[c],0,r[i] if m==">":i+=1 if m=="<":i-=1 if m=="+":r[i]+=1 if m=="-":r[i]-=1 if m==".":sys.stdout.write(chr(u)) if m=="[": if ~u: while 1: m=p[c] if m=="]":n-=1 if m=="[":n+=1 if ~n:break c+=1 if m=="]": if u: while 1: m=p[c] if m=="]":n-=1 if m=="[":n+=1 if ~n:break c-=1 c+=1 ``` and i want to know what it does because i want to make one on my ti 84 (and a PF one) BF is <http://en.wikipedia.org/wiki/Brainfuck> and PF is something similar
2010/06/12
[ "https://Stackoverflow.com/questions/3027394", "https://Stackoverflow.com", "https://Stackoverflow.com/users/365060/" ]
In this particular context, just replace '~' with 'not'. PS. ok i guess i will have to explain - started getting slapped with -1's, probably on the premise i don't know the difference between logical and bitwise negation. The thing is, the code in the question is **broken**. There is a bug in it. If you check how Brainfuck should work, it loops within **[ ]** braces while the current memory cell is !=0 (this is checked as pre-condition when entering **[** and as optimization before returning from **]**). But instead of arguing, perhaps is easier to show with examples of the code not working. Let's take the simple program `'[+]'`. Trying to tun this should just exit (because current cell is 0, it won even enter the loop). Instead if you run it in this interpreter, it goes into infinite loop. So i'll kindly ask you to revert your -1 votes if my clarification makes sense now ;-) Here is the interpreter slightly beautified, with fixed `~` bug and i also added the missing `,` input: ``` from sys import stdin, stdout bfHelloWorld = '++++++++++[>+++++++>++++++++++>+++>+<<<<-]>++.>+.+++++++..+++.>++.<<+++++++++++++++.>.+++.------.--------.>+.>.' # http://esoteric.sange.fi/brainfuck/bf-source/prog/yapi.b bfPiDigits = '''> +++++ (5 digits) [<+>>>>>>>>++++++++++<<<<<<<-]>+++++[<+++++++++>-]+>>>>>>+[<<+++[>>[-<]<[>]<-]>> [>+>]<[<]>]>[[->>>>+<<<<]>>>+++>-]<[<<<<]<<<<<<<<+[->>>>>>>>>>>>[<+[->>>>+<<<<]> >>>>]<<<<[>>>>>[<<<<+>>>>-]<<<<<-[<<++++++++++>>-]>>>[<<[<+<<+>>>-]<[>+<-]<++<<+ >>>>>>-]<<[-]<<-<[->>+<-[>>>]>[[<+>-]>+>>]<<<<<]>[-]>+<<<-[>>+<<-]<]<<<<+>>>>>>> >[-]>[<<<+>>>-]<<++++++++++<[->>+<-[>>>]>[[<+>-]>+>>]<<<<<]>[-]>+>[<<+<+>>>-]<<< <+<+>>[-[-[-[-[-[-[-[-[-<->[-<+<->>]]]]]]]]]]<[+++++[<<<++++++++<++++++++>>>>-]< <<<+<->>>>[>+<<<+++++++++<->>>-]<<<<<[>>+<<-]+<[->-<]>[>>.<<<<[+.[-]]>>-]>[>>.<< -]>[-]>[-]>>>[>>[<<<<<<<<+>>>>>>>>-]<<-]]>>[-]<<<[-]<<<<<<<<]++++++++++. ''' code = bfPiDigits # the code data = [0] * 255 # data memory cp = 0 # code pointer dp = 0 # data pointer while cp < len(code): cmd = code[cp] if cmd == '>': dp += 1 elif cmd == '<': dp -= 1 elif cmd == '+': data[dp] += 1 elif cmd == '-': data[dp] -= 1 elif cmd == '.': stdout.write(chr(data[dp])) elif cmd == ',': data[dp] = ord(stdin.read(1)) elif cmd == '[' and not data[dp]: # skip loop if ==0 n = 0 while True: cmd = code[cp] if cmd == '[': n += 1 elif cmd == ']': n -= 1 if not n: break cp += 1 elif cmd == ']' and data[dp]: # loop back if !=0 n = 0 while True: cmd = code[cp] if cmd == '[': n+=1 elif cmd == ']': n-=1 if not n: break cp -= 1 cp += 1 ```
And to bring up one thing none of the other answers mentioned: the behavior of `~` for user-defined classes can be changed by overriding the `__invert__` method (or the `nb_invert` slot if you're using the Python/C API).
67,183,501
``` import os import numpy as np from scipy.signal import * import csv import matplotlib.pyplot as plt from scipy import signal from brainflow.board_shim import BoardShim, BrainFlowInputParams, LogLevels, BoardIds from brainflow.data_filter import DataFilter, FilterTypes, AggOperations, WindowFunctions, DetrendOperations from sklearn.cluster import KMeans #Options to read: 'EEG-IO', 'EEG-VV', 'EEG-VR', 'EEG-MB' data_folder = 'EEG-IO' # Parameters and bandpass filtering fs = 250.0 # Reading data files file_idx = 0 list_of_files = [f for f in os.listdir(data_folder) if os.path.isfile(os.path.join(data_folder, f)) and '_data' in f] #List of all the files, Lists are randomized, its only looking for file with _data in it print(list_of_files) file_sig = list_of_files[file_idx] # Data File file_stim = list_of_files[file_idx].replace('_data','_labels') #Label File, Replacing _data with _labels print ("Reading: ", file_sig, file_stim) # Loading data if data_folder == 'EEG-IO' or data_folder == 'EEG-MB': data_sig = np.loadtxt(open(os.path.join(data_folder,file_sig), "rb"), delimiter=";", skiprows=1, usecols=(0,1,2)) #data_sig would be a buffer elif data_folder == 'EEG-VR' or data_folder == 'EEG-VV': data_sig = np.loadtxt(open(os.path.join(data_folder,file_sig), "rb"), delimiter=",", skiprows=5, usecols=(0,1,2)) data_sig = data_sig[0:(int(200*fs)+1),:] # getting data ready -- not needed for previous 2 datasets data_sig = data_sig[:,0:3] # data_sig[:,0] = np.array(range(0,len(data_sig)))/fs ############ Calculating PSD ############ index, ch = data_sig.shape[0], data_sig.shape[1] # print(index) feature_vectors = [[], []] feature_vectorsa = [[], []] feature_vectorsb = [[], []] feature_vectorsc = [[], []] #for x in range(ch): #for x in range(1,3): #while x < #while x>0: x=1 while x>0 and x<3: if x==1: data_sig[:,1] = lowpass(data_sig[:,1], 10, fs, 4) elif x==2: data_sig[:,2] = lowpass(data_sig[:,2], 10, fs, 4) for y in range(500, 19328 ,500): #print(ch) if x==1: DataFilter.detrend(data_sig[y-500:y, 1], DetrendOperations.LINEAR.value) psd = DataFilter.get_psd_welch(data_sig[y-500:y, 1], nfft, nfft//2, 250, WindowFunctions.BLACKMAN_HARRIS.value) band_power_delta = DataFilter.get_band_power(psd, 1.0, 4.0) # Theta 4-8 band_power_theta = DataFilter.get_band_power(psd, 4.0, 8.0) #Alpha 8-12 band_power_alpha = DataFilter.get_band_power(psd, 8.0, 12.0) #Beta 12-30 band_power_beta = DataFilter.get_band_power(psd, 12.0, 30.0) # print(feature_vectors.shape) feature_vectors[x].insert(y, [band_power_delta, band_power_theta, band_power_alpha, band_power_beta]) feature_vectorsa[x].insert(y, [band_power_delta, band_power_theta]) elif x==2: DataFilter.detrend(data_sig[y-500:y, 2], DetrendOperations.LINEAR.value) psd = DataFilter.get_psd_welch(data_sig[y-500:y, 2], nfft, nfft//2, 250, WindowFunctions.BLACKMAN_HARRIS.value) band_power_delta = DataFilter.get_band_power(psd, 1.0, 4.0) # Theta 4-8 band_power_theta = DataFilter.get_band_power(psd, 4.0, 8.0) #Alpha 8-12 band_power_alpha = DataFilter.get_band_power(psd, 8.0, 12.0) #Beta 12-30 band_power_beta = DataFilter.get_band_power(psd, 12.0, 30.0) # print(feature_vectors.shape) # feature_vectorsc[x].insert(y, [band_power_delta, band_power_theta, band_power_alpha, band_power_beta]) # feature_vectorsd[x].insert(y, [band_power_delta, band_power_theta]) x = x+1 print(feature_vectorsa) powers = np.log10(np.asarray(feature_vectors, dtype=float)) powers1 = np.log10(np.asarray(feature_vectorsa, dtype=float)) # powers2 = np.log10(np.asarray(feature_vectorsb)) # powers3 = 
np.log10(np.asarray(feature_vectorsc)) print(powers.shape) print(powers1.shape) ``` Super confused. When I run my code, I keep on getting this error: > > ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part. > > > Traceback: > > File "/Users/mikaelhaji/Downloads/EEG-EyeBlinks/read\_data.py", line 170, in > powers = np.log10(np.asarray(feature\_vectors, dtype=float)) > File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/numpy/core/\_asarray.py", line 102, in asarray > return array(a, dtype, copy=False, order=order) > ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part. > > > If you have any thoughts/ answers as to why this may be occurring, please let me know. Thanks in advance for the responses.
2021/04/20
[ "https://Stackoverflow.com/questions/67183501", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15708550/" ]
Here's a simple case that produces your error message: ``` In [19]: np.asarray([[1,2,3],[4,5]],float) Traceback (most recent call last): File "<ipython-input-19-72fd80bc7856>", line 1, in <module> np.asarray([[1,2,3],[4,5]],float) File "/usr/local/lib/python3.8/dist-packages/numpy/core/_asarray.py", line 102, in asarray return array(a, dtype, copy=False, order=order) ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part. ``` If I omit the `float`, it makes an object dtype array - with warning. ``` In [20]: np.asarray([[1,2,3],[4,5]]) /usr/local/lib/python3.8/dist-packages/numpy/core/_asarray.py:102: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray. return array(a, dtype, copy=False, order=order) Out[20]: array([list([1, 2, 3]), list([4, 5])], dtype=object) ```
I was getting the same error. I was opening a txt file that contains a table of values and saving it into a NumPy array, defining the dtype as float since otherwise the numbers would be strings. ``` with open(dirfile) as fh: next(fh) header = next(fh)[2:] next(fh) data = np.array([line.strip().split() for line in fh], float) ``` For the previous files it worked perfectly; however, for the last file it did not: **The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (35351,) + inhomogeneous part.** However, when I ran `data = np.loadtxt(fh)` a new error appeared: **Wrong number of columns at line 35351** So, my problem was that the last line of the file was missing the values of the two last columns. I corrected it in the txt file since I wanted to keep the structure of a numpy.array(dtype=float), and everything worked fine.
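As an aside, if editing the file by hand is not an option, a sketch of one way to tolerate such a short row (assuming `np.genfromtxt`; the file name and `skip_header` count are placeholders matching the snippet above):

```
import numpy as np

# invalid_raise=False makes genfromtxt warn and skip rows whose column
# count does not match, instead of aborting with
# "Wrong number of columns at line ...".
data = np.genfromtxt("table.txt", skip_header=3, invalid_raise=False, dtype=float)
```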
67,183,501
``` import os import numpy as np from scipy.signal import * import csv import matplotlib.pyplot as plt from scipy import signal from brainflow.board_shim import BoardShim, BrainFlowInputParams, LogLevels, BoardIds from brainflow.data_filter import DataFilter, FilterTypes, AggOperations, WindowFunctions, DetrendOperations from sklearn.cluster import KMeans #Options to read: 'EEG-IO', 'EEG-VV', 'EEG-VR', 'EEG-MB' data_folder = 'EEG-IO' # Parameters and bandpass filtering fs = 250.0 # Reading data files file_idx = 0 list_of_files = [f for f in os.listdir(data_folder) if os.path.isfile(os.path.join(data_folder, f)) and '_data' in f] #List of all the files, Lists are randomized, its only looking for file with _data in it print(list_of_files) file_sig = list_of_files[file_idx] # Data File file_stim = list_of_files[file_idx].replace('_data','_labels') #Label File, Replacing _data with _labels print ("Reading: ", file_sig, file_stim) # Loading data if data_folder == 'EEG-IO' or data_folder == 'EEG-MB': data_sig = np.loadtxt(open(os.path.join(data_folder,file_sig), "rb"), delimiter=";", skiprows=1, usecols=(0,1,2)) #data_sig would be a buffer elif data_folder == 'EEG-VR' or data_folder == 'EEG-VV': data_sig = np.loadtxt(open(os.path.join(data_folder,file_sig), "rb"), delimiter=",", skiprows=5, usecols=(0,1,2)) data_sig = data_sig[0:(int(200*fs)+1),:] # getting data ready -- not needed for previous 2 datasets data_sig = data_sig[:,0:3] # data_sig[:,0] = np.array(range(0,len(data_sig)))/fs ############ Calculating PSD ############ index, ch = data_sig.shape[0], data_sig.shape[1] # print(index) feature_vectors = [[], []] feature_vectorsa = [[], []] feature_vectorsb = [[], []] feature_vectorsc = [[], []] #for x in range(ch): #for x in range(1,3): #while x < #while x>0: x=1 while x>0 and x<3: if x==1: data_sig[:,1] = lowpass(data_sig[:,1], 10, fs, 4) elif x==2: data_sig[:,2] = lowpass(data_sig[:,2], 10, fs, 4) for y in range(500, 19328 ,500): #print(ch) if x==1: DataFilter.detrend(data_sig[y-500:y, 1], DetrendOperations.LINEAR.value) psd = DataFilter.get_psd_welch(data_sig[y-500:y, 1], nfft, nfft//2, 250, WindowFunctions.BLACKMAN_HARRIS.value) band_power_delta = DataFilter.get_band_power(psd, 1.0, 4.0) # Theta 4-8 band_power_theta = DataFilter.get_band_power(psd, 4.0, 8.0) #Alpha 8-12 band_power_alpha = DataFilter.get_band_power(psd, 8.0, 12.0) #Beta 12-30 band_power_beta = DataFilter.get_band_power(psd, 12.0, 30.0) # print(feature_vectors.shape) feature_vectors[x].insert(y, [band_power_delta, band_power_theta, band_power_alpha, band_power_beta]) feature_vectorsa[x].insert(y, [band_power_delta, band_power_theta]) elif x==2: DataFilter.detrend(data_sig[y-500:y, 2], DetrendOperations.LINEAR.value) psd = DataFilter.get_psd_welch(data_sig[y-500:y, 2], nfft, nfft//2, 250, WindowFunctions.BLACKMAN_HARRIS.value) band_power_delta = DataFilter.get_band_power(psd, 1.0, 4.0) # Theta 4-8 band_power_theta = DataFilter.get_band_power(psd, 4.0, 8.0) #Alpha 8-12 band_power_alpha = DataFilter.get_band_power(psd, 8.0, 12.0) #Beta 12-30 band_power_beta = DataFilter.get_band_power(psd, 12.0, 30.0) # print(feature_vectors.shape) # feature_vectorsc[x].insert(y, [band_power_delta, band_power_theta, band_power_alpha, band_power_beta]) # feature_vectorsd[x].insert(y, [band_power_delta, band_power_theta]) x = x+1 print(feature_vectorsa) powers = np.log10(np.asarray(feature_vectors, dtype=float)) powers1 = np.log10(np.asarray(feature_vectorsa, dtype=float)) # powers2 = np.log10(np.asarray(feature_vectorsb)) # powers3 = 
np.log10(np.asarray(feature_vectorsc)) print(powers.shape) print(powers1.shape) ``` Super confused. When I run my code, I keep on getting this error: > > ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part. > > > Traceback: > > File "/Users/mikaelhaji/Downloads/EEG-EyeBlinks/read\_data.py", line 170, in > powers = np.log10(np.asarray(feature\_vectors, dtype=float)) > File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/numpy/core/\_asarray.py", line 102, in asarray > return array(a, dtype, copy=False, order=order) > ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part. > > > If you have any thoughts/ answers as to why this may be occurring, please let me know. Thanks in advance for the responses.
2021/04/20
[ "https://Stackoverflow.com/questions/67183501", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15708550/" ]
Here's a simple case that produces your error message: ``` In [19]: np.asarray([[1,2,3],[4,5]],float) Traceback (most recent call last): File "<ipython-input-19-72fd80bc7856>", line 1, in <module> np.asarray([[1,2,3],[4,5]],float) File "/usr/local/lib/python3.8/dist-packages/numpy/core/_asarray.py", line 102, in asarray return array(a, dtype, copy=False, order=order) ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part. ``` If I omit the `float`, it makes an object dtype array - with warning. ``` In [20]: np.asarray([[1,2,3],[4,5]]) /usr/local/lib/python3.8/dist-packages/numpy/core/_asarray.py:102: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray. return array(a, dtype, copy=False, order=order) Out[20]: array([list([1, 2, 3]), list([4, 5])], dtype=object) ```
You might have an issue like this: a list that mixes scalars and arrays, e.g. `[1, np.array([0,1,2]), 3, np.array([8,9,10])]`. A simple thing you can do is: put a breakpoint where the error is arising --> run the IDE in debug mode --> print the particular variable or line --> avoid this array-within-array scenario and it will work! (It worked for me, hope it works for people looking for the solution!)
67,183,501
``` import os import numpy as np from scipy.signal import * import csv import matplotlib.pyplot as plt from scipy import signal from brainflow.board_shim import BoardShim, BrainFlowInputParams, LogLevels, BoardIds from brainflow.data_filter import DataFilter, FilterTypes, AggOperations, WindowFunctions, DetrendOperations from sklearn.cluster import KMeans #Options to read: 'EEG-IO', 'EEG-VV', 'EEG-VR', 'EEG-MB' data_folder = 'EEG-IO' # Parameters and bandpass filtering fs = 250.0 # Reading data files file_idx = 0 list_of_files = [f for f in os.listdir(data_folder) if os.path.isfile(os.path.join(data_folder, f)) and '_data' in f] #List of all the files, Lists are randomized, its only looking for file with _data in it print(list_of_files) file_sig = list_of_files[file_idx] # Data File file_stim = list_of_files[file_idx].replace('_data','_labels') #Label File, Replacing _data with _labels print ("Reading: ", file_sig, file_stim) # Loading data if data_folder == 'EEG-IO' or data_folder == 'EEG-MB': data_sig = np.loadtxt(open(os.path.join(data_folder,file_sig), "rb"), delimiter=";", skiprows=1, usecols=(0,1,2)) #data_sig would be a buffer elif data_folder == 'EEG-VR' or data_folder == 'EEG-VV': data_sig = np.loadtxt(open(os.path.join(data_folder,file_sig), "rb"), delimiter=",", skiprows=5, usecols=(0,1,2)) data_sig = data_sig[0:(int(200*fs)+1),:] # getting data ready -- not needed for previous 2 datasets data_sig = data_sig[:,0:3] # data_sig[:,0] = np.array(range(0,len(data_sig)))/fs ############ Calculating PSD ############ index, ch = data_sig.shape[0], data_sig.shape[1] # print(index) feature_vectors = [[], []] feature_vectorsa = [[], []] feature_vectorsb = [[], []] feature_vectorsc = [[], []] #for x in range(ch): #for x in range(1,3): #while x < #while x>0: x=1 while x>0 and x<3: if x==1: data_sig[:,1] = lowpass(data_sig[:,1], 10, fs, 4) elif x==2: data_sig[:,2] = lowpass(data_sig[:,2], 10, fs, 4) for y in range(500, 19328 ,500): #print(ch) if x==1: DataFilter.detrend(data_sig[y-500:y, 1], DetrendOperations.LINEAR.value) psd = DataFilter.get_psd_welch(data_sig[y-500:y, 1], nfft, nfft//2, 250, WindowFunctions.BLACKMAN_HARRIS.value) band_power_delta = DataFilter.get_band_power(psd, 1.0, 4.0) # Theta 4-8 band_power_theta = DataFilter.get_band_power(psd, 4.0, 8.0) #Alpha 8-12 band_power_alpha = DataFilter.get_band_power(psd, 8.0, 12.0) #Beta 12-30 band_power_beta = DataFilter.get_band_power(psd, 12.0, 30.0) # print(feature_vectors.shape) feature_vectors[x].insert(y, [band_power_delta, band_power_theta, band_power_alpha, band_power_beta]) feature_vectorsa[x].insert(y, [band_power_delta, band_power_theta]) elif x==2: DataFilter.detrend(data_sig[y-500:y, 2], DetrendOperations.LINEAR.value) psd = DataFilter.get_psd_welch(data_sig[y-500:y, 2], nfft, nfft//2, 250, WindowFunctions.BLACKMAN_HARRIS.value) band_power_delta = DataFilter.get_band_power(psd, 1.0, 4.0) # Theta 4-8 band_power_theta = DataFilter.get_band_power(psd, 4.0, 8.0) #Alpha 8-12 band_power_alpha = DataFilter.get_band_power(psd, 8.0, 12.0) #Beta 12-30 band_power_beta = DataFilter.get_band_power(psd, 12.0, 30.0) # print(feature_vectors.shape) # feature_vectorsc[x].insert(y, [band_power_delta, band_power_theta, band_power_alpha, band_power_beta]) # feature_vectorsd[x].insert(y, [band_power_delta, band_power_theta]) x = x+1 print(feature_vectorsa) powers = np.log10(np.asarray(feature_vectors, dtype=float)) powers1 = np.log10(np.asarray(feature_vectorsa, dtype=float)) # powers2 = np.log10(np.asarray(feature_vectorsb)) # powers3 = 
np.log10(np.asarray(feature_vectorsc)) print(powers.shape) print(powers1.shape) ``` Super confused. When I run my code, I keep on getting this error: > > ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part. > > > Traceback: > > File "/Users/mikaelhaji/Downloads/EEG-EyeBlinks/read\_data.py", line 170, in > powers = np.log10(np.asarray(feature\_vectors, dtype=float)) > File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/numpy/core/\_asarray.py", line 102, in asarray > return array(a, dtype, copy=False, order=order) > ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part. > > > If you have any thoughts/ answers as to why this may be occurring, please let me know. Thanks in advance for the responses.
2021/04/20
[ "https://Stackoverflow.com/questions/67183501", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15708550/" ]
I was getting the same error. I was opening a txt file that contains a table of values and saving it into a NumPy array, defining the dtype as float since otherwise the numbers would be strings. ``` with open(dirfile) as fh: next(fh) header = next(fh)[2:] next(fh) data = np.array([line.strip().split() for line in fh], float) ``` For the previous files it worked perfectly; however, for the last file it did not: **The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (35351,) + inhomogeneous part.** However, when I ran `data = np.loadtxt(fh)` a new error appeared: **Wrong number of columns at line 35351** So, my problem was that the last line of the file was missing the values of the two last columns. I corrected it in the txt file since I wanted to keep the structure of a numpy.array(dtype=float), and everything worked fine.
You might have an issue like this: a list that mixes scalars and arrays, e.g. `[1, np.array([0,1,2]), 3, np.array([8,9,10])]`. A simple thing you can do is: put a breakpoint where the error is arising --> run the IDE in debug mode --> print the particular variable or line --> avoid this array-within-array scenario and it will work! (It worked for me, hope it works for people looking for the solution!)
68,654,663
I'm trying to aggregate my data by getting the sum every 30 seconds. I would like to know if the result of this aggregation is zero, this will happen if there are no rows in that 30s region. Here's a minimal working example illustrating the result I would like with pandas, and where it falls short with pyspark. Input data ========== ```py import pandas as pd from pyspark.sql import functions as F df = pd.DataFrame( [ (17, "2017-03-10T15:27:18+00:00"), (13, "2017-03-10T15:27:29+00:00"), (25, "2017-03-10T15:27:30+00:00"), (101, "2017-03-10T15:29:00+00:00"), (99, "2017-03-10T15:29:29+00:00") ], columns=["dollars", "timestamp"], ) df["timestamp"] = pd.to_datetime(df["timestamp"]) print(df) ``` ``` dollars timestamp 0 17 2017-03-10 15:27:18+00:00 1 13 2017-03-10 15:27:29+00:00 2 25 2017-03-10 15:27:30+00:00 3 101 2017-03-10 15:29:00+00:00 4 99 2017-03-10 15:29:29+00:00 ``` Pandas solution =============== With pandas, we can use resample to aggregate every 30 second window, and then apply the sum function over these windows (note the results for `2017-03-10 15:28:00+00:00`, and `2017-03-10 15:28:30+00:00`): ```py desired_result = df.set_index("timestamp").resample("30S").sum() desired_result ``` ``` dollars timestamp 2017-03-10 15:27:00+00:00 30 2017-03-10 15:27:30+00:00 25 2017-03-10 15:28:00+00:00 0 2017-03-10 15:28:30+00:00 0 2017-03-10 15:29:00+00:00 200 ``` PySpark near solution ===================== In pyspark, we can use [`pyspark.sql.functions.window`](http://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.functions.window.html) to window over every 30 seconds (adapted, with thanks from [this stack answer](https://stackoverflow.com/a/65839905/2550114)), but this will miss out the window where there are no rows: ```py spark: pyspark.sql.session.SparkSession # I expect you to have set up your session... sdf = spark.createDataFrame(df) sdf.groupby( F.window("timestamp", windowDuration="30 seconds", slideDuration="30 seconds") ).agg(F.sum("dollars")).display() ``` ``` window,sum(dollars) "{""start"":""2017-03-10T15:27:30.000+0000"",""end"":""2017-03-10T15:28:00.000+0000""}",25 "{""start"":""2017-03-10T15:27:00.000+0000"",""end"":""2017-03-10T15:27:30.000+0000""}",30 "{""start"":""2017-03-10T15:29:00.000+0000"",""end"":""2017-03-10T15:29:30.000+0000""}",200 ``` Question ======== How do I get pyspark to return window results for time window where there are no rows (like pandas)?
2021/08/04
[ "https://Stackoverflow.com/questions/68654663", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2550114/" ]
The HTML5 specification now has a whole section on [Button Layout](https://html.spec.whatwg.org/multipage/rendering.html#button-layout) Sometimes it's treated like a replaced element, and sometimes like an inline-block element. But it's never treated as a non-replaced inline element. In detail, it says that: > > Button layout is as follows: > > > The 'display' property is expected to act as follows: > > > If the computed value of 'display' is 'inline-grid', 'grid', > 'inline-flex', or 'flex', then behave as the computed value. > > > Otherwise, if the computed value of 'display' is a value such that > the outer display type is 'inline', then behave as 'inline-block'. > > > Otherwise, behave as 'flow-root'. > > > ... > > > If the element is absolutely-positioned, then for the purpose of the CSS > visual formatting model, act as if the element is a replaced element. [CSS] > > > If the computed value of 'inline-size' is 'auto', then the used value is > the fit-content inline size. > > > For the purpose of the 'normal' keyword of the 'align-self' property, act as if the element is a replaced element. > > > ... > > >
If you want more clarification...it seems that the **button element is a replaced element** in most modern browsers today and in the past, which means no matter how you style it, even after changing the default UA browser styles, it still retains width and height characteristics regardless of display properties. It therefore does have design characteristics tied to the browser and OS that override both the default UA style sheet in the browser and the author's styles, UNLIKE the non-replaced elements which can be changed. Take the following test, which demonstrates that: ``` <style type="text/css"> button, p, div { all:revert; all:unset; all:initial; display:initial; width:initial; height:initial; display:inline !important; width:100px; height:100px; background:green; color:white; text-align:center; } </style> <button>button</button> <br /> <p>paragraph</p> <br /> <div>div</div> ``` When the `<button>`, `<p>`, and `<div>` elements are completely cleared of their CSS properties (`all:revert` and `display:initial`) and then `display:inline` is set with a width and height, only `<p>` and `<div>` lose their dimensions. But the button element in modern browsers (Chrome and Firefox) still retains its "special" replaced ability to regain dimensions, regardless. Therefore, yes, its "replaced" status affects its width and height characteristics. Additional: If you set the dimensions above to "0px", the button element's background collapses but the "clickable" interface dimensions on the button element do not. The text area on the button is still clickable in most modern browsers. In Safari and Internet Explorer, the button becomes tiny but still exists with dimensions and is clickable. The point is, yes, these replaced elements have dimensions you can control but not entirely erase.
43,934,830
According to [PythonCentral](http://pythoncentral.io/pyside-pyqt-tutorial-qwebview/) : > > QWebView ... allows you to display web pages from URLs, arbitrary HTML, *XML with XSLT stylesheets*, web pages constructed as QWebPages, and other data whose MIME types it knows how to interpret > > > However, the XML contents are displayed as if they were interpreted as HTML, that is, with the tags filtered away and the text nodes shown without line breaks. **Question is: how do I show XML in QWebView with the XSL style sheet applied?** The same XML file opened in any stand-alone web browser shows fine. The HTML file resulting from the transformed XML (by lxml.etree) also displays well in QWebView. Here is my (abbreviated) XML file: ``` <?xml version='1.0' encoding='UTF-8'?> <?xml-stylesheet type="text/xsl" href="../../page.xsl"?> <specimen> ... </specimen> ```
2017/05/12
[ "https://Stackoverflow.com/questions/43934830", "https://Stackoverflow.com", "https://Stackoverflow.com/users/508402/" ]
Try converting to NSData and then storing it to NSUserDefaults like below: ``` func saveListIdArray(_ params: NSMutableArray = []) { let data = NSKeyedArchiver.archivedData(withRootObject: params) UserDefaults.standard.set(data, forKey: "test") UserDefaults.standard.synchronize() } ``` For retrieving the data, use ``` if let data = UserDefaults.standard.object(forKey: "test") as? Data { if let storedData = NSKeyedUnarchiver.unarchiveObject(with: data) as? NSMutableArray { // In here you can access your array } } ```
You are force unwrapping the NSMutableArray for a key. Don't force unwrap when you try to get the value from a dictionary or UserDefaults for a key, because there is a chance that the value does not exist for that key and force unwrapping will crash your app. Do this instead: ``` //to get array from user defaults if let array = UserDefaults.standard.object(forKey: "ArrayKey") as? NSMutableArray { print(array) } ```
43,934,830
According to [PythonCentral](http://pythoncentral.io/pyside-pyqt-tutorial-qwebview/) : > > QWebView ... allows you to display web pages from URLs, arbitrary HTML, *XML with XSLT stylesheets*, web pages constructed as QWebPages, and other data whose MIME types it knows how to interpret > > > However, the XML contents are displayed as if they were interpreted as HTML, that is, with the tags filtered away and the text nodes shown without line breaks. **Question is: how do I show XML in QWebView with the XSL style sheet applied?** The same XML file opened in any stand-alone web browser shows fine. The HTML file resulting from the transformed XML (by lxml.etree) also displays well in QWebView. Here is my (abbreviated) XML file: ``` <?xml version='1.0' encoding='UTF-8'?> <?xml-stylesheet type="text/xsl" href="../../page.xsl"?> <specimen> ... </specimen> ```
2017/05/12
[ "https://Stackoverflow.com/questions/43934830", "https://Stackoverflow.com", "https://Stackoverflow.com/users/508402/" ]
Try converting to NSData and then storing it to NSUserDefaults like below: ``` func saveListIdArray(_ params: NSMutableArray = []) { let data = NSKeyedArchiver.archivedData(withRootObject: params) UserDefaults.standard.set(data, forKey: "test") UserDefaults.standard.synchronize() } ``` For retrieving the data, use ``` if let data = UserDefaults.standard.object(forKey: "test") as? Data { if let storedData = NSKeyedUnarchiver.unarchiveObject(with: data) as? NSMutableArray { // In here you can access your array } } ```
I have 2 possible reasons for this: 1. You need to be 100% sure that you are retrieving the array with the same key as you saved it with. In your code you are saving the array with "ArrayKey" but retrieving it with NSUserDefaultsKey.LIST\_ID\_ARRAY; are you sure this is the same string? 2. What datatype is self.listId? If it's a custom class then you need to make that class conform to the NSCoding protocol, then encode it to Data and save that to the UserDefaults ([Save custom objects into NSUserDefaults](https://stackoverflow.com/questions/29986957/save-custom-objects-into-nsuserdefaults)) A 3rd reason is that you are trying to get an object from the defaults without ever writing anything to it. Try changing ``` let array = UserDefaults.standard.object(forKey: NSUserDefaultsKey.LIST_ID_ARRAY) as! NSMutableArray print(array) return array ``` to ``` if let array = UserDefaults.standard.object(forKey: "ArrayKey") as? NSMutableArray { print(array) return array } else { return NSMutableArray() } ```
39,010,366
While executing the code below, I'm getting `AttributeError: attribute '__doc__' of 'type' objects is not writable`. ``` from functools import wraps def memoize(f): """ Memoization decorator for functions taking one or more arguments. Saves repeated api calls for a given value, by caching it. """ @wraps(f) class memodict(dict): """memodict""" def __init__(self, f): self.f = f def __call__(self, *args): return self[args] def __missing__(self, key): ret = self[key] = self.f(*key) return ret return memodict(f) @memoize def a(): """blah""" pass ``` Traceback: ```none AttributeError Traceback (most recent call last) <ipython-input-37-2afb130b1dd6> in <module>() 17 return ret 18 return memodict(f) ---> 19 @memoize 20 def a(): 21 """blah""" <ipython-input-37-2afb130b1dd6> in memoize(f) 7 """ 8 @wraps(f) ----> 9 class memodict(dict): 10 """memodict""" 11 def __init__(self, f): /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/functools.pyc in update_wrapper(wrapper, wrapped, assigned, updated) 31 """ 32 for attr in assigned: ---> 33 setattr(wrapper, attr, getattr(wrapped, attr)) 34 for attr in updated: 35 getattr(wrapper, attr).update(getattr(wrapped, attr, {})) AttributeError: attribute '__doc__' of 'type' objects is not writable ``` Even though the doc string is provided, I don't know what's wrong with this. It's works fine if not wrapped, but I need to do this.
2016/08/18
[ "https://Stackoverflow.com/questions/39010366", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2264738/" ]
`@wraps(f)` is primarily designed to be used as a *function* decorator, rather than as a class decorator, so using it as the latter may lead to the occasional odd quirk. The specific error message you're receiving relates to a limitation of builtin types on Python 2: ``` >>> class C(object): pass ... >>> C.__doc__ = "Not allowed" Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: attribute '__doc__' of 'type' objects is not writable ``` If you use Python 3, switch to a classic class in Python 2 (by inheriting from `UserDict.UserDict` rather than the `dict` builtin), or use a closure to manage the result cache rather than a class instance, the decorator will be able to copy the docstring over from the underlying function.
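For example, a minimal sketch of the closure-based variant, where `@wraps(f)` decorates a plain function and the docstring copies over without issue:

```
from functools import wraps

def memoize(f):
    cache = {}

    @wraps(f)
    def wrapper(*args):
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]

    return wrapper

@memoize
def a():
    """blah"""

print(a.__doc__)  # blah
```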
The `wraps` decorator you're trying to apply to your class doesn't work because you can't modify the docstring of a class after it has been created. You can recreate the error with this code: ``` class Foo(object): """inital docstring""" Foo.__doc__ = """new docstring""" # raises an exception in Python 2 ``` The exception doesn't occur in Python 3 (I'm not exactly sure why it's changed). A workaround might be to assign the class variable `__doc__` in your class, rather than using `wraps` to set the docstring after the class exists: ``` def memoize(f): """ Memoization decorator for functions taking one or more arguments. Saves repeated api calls for a given value, by caching it. """ class memodict(dict): __doc__ = f.__doc__ # copy docstring to class variable def __init__(self, f): self.f = f def __call__(self, *args): return self[args] def __missing__(self, key): ret = self[key] = self.f(*key) return ret return memodict(f) ``` This won't copy any of the other attributes that `wraps` tries to copy (like `__name__`, etc.). You may want to fix those up yourself if they're important to you. The `__name__` attribute however needs to be set after the class is created (you can't assign it in the class definition): ``` class Foo(object): __name__ = "Bar" # this has no effect Foo.__name__ = "Bar" # this works ```
39,010,366
While executing the code below, I'm getting `AttributeError: attribute '__doc__' of 'type' objects is not writable`. ``` from functools import wraps def memoize(f): """ Memoization decorator for functions taking one or more arguments. Saves repeated api calls for a given value, by caching it. """ @wraps(f) class memodict(dict): """memodict""" def __init__(self, f): self.f = f def __call__(self, *args): return self[args] def __missing__(self, key): ret = self[key] = self.f(*key) return ret return memodict(f) @memoize def a(): """blah""" pass ``` Traceback: ```none AttributeError Traceback (most recent call last) <ipython-input-37-2afb130b1dd6> in <module>() 17 return ret 18 return memodict(f) ---> 19 @memoize 20 def a(): 21 """blah""" <ipython-input-37-2afb130b1dd6> in memoize(f) 7 """ 8 @wraps(f) ----> 9 class memodict(dict): 10 """memodict""" 11 def __init__(self, f): /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/functools.pyc in update_wrapper(wrapper, wrapped, assigned, updated) 31 """ 32 for attr in assigned: ---> 33 setattr(wrapper, attr, getattr(wrapped, attr)) 34 for attr in updated: 35 getattr(wrapper, attr).update(getattr(wrapped, attr, {})) AttributeError: attribute '__doc__' of 'type' objects is not writable ``` Even though the doc string is provided, I don't know what's wrong with this. It's works fine if not wrapped, but I need to do this.
2016/08/18
[ "https://Stackoverflow.com/questions/39010366", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2264738/" ]
`functools.wraps()` was designed to wrap function, not class objects. One of the things it does is attempt to assign the `__doc__` string of the wrapped (original) function to the wrapper function, which, as you've discovered, isn't allowed in Python 2. It also does the same for the `__name__` and `__module__` attributes. A simple way to work around this restriction is by manually doing it when the `MemoDict` class *is defined*. Here's what I mean. (Note for increased readability I always use `CamelCase` class names as per the [PEP 8 - Style Guide for Python Code](https://www.python.org/dev/peps/pep-0008/).) ``` def memoize(f): """ Memoization decorator for functions taking one or more arguments. Saves repeated api calls for a given value, by caching it. """ class MemoDict(dict): __doc__ = f.__doc__ __name__ = f.__name__ __module__ = f.__module__ def __init__(self, f): self.f = f def __call__(self, *args): return self[args] def __missing__(self, key): ret = self[key] = self.f(*key) return ret return MemoDict(f) @memoize def a(): """blah""" print('Hello world!') print(a.__doc__) # -> blah print(a.__name__) # -> a print(a.__module__) # -> __main__ a() # -> Hello world! ``` In fact, if you wished, you could create your own wrapper / class-decorating function to do it: ``` def wrap(f): """ Convenience function to copy function attributes to derived class. """ def class_decorator(cls): class Derived(cls): __doc__ = f.__doc__ __name__ = f.__name__ __module__ = f.__module__ return Derived return class_decorator def memoize(f): """ Memoization decorator for functions taking one or more arguments. Saves repeated api calls for a given value, by caching it. """ @wrap(f) class MemoDict(dict): def __init__(self, f): self.f = f def __call__(self, *args): return self[args] def __missing__(self, key): ret = self[key] = self.f(*key) return ret return MemoDict(f) @memoize def a(): """blah""" print('Hello world!') print(a.__doc__) # -> blah print(a.__name__) # -> a print(a.__module__) # -> __main__ a() # -> Hello world! ```
The `wraps` decorator you're trying to apply to your class doesn't work because you can't modify the docstring of a class after it has been created. You can recreate the error with this code: ``` class Foo(object): """inital docstring""" Foo.__doc__ = """new docstring""" # raises an exception in Python 2 ``` The exception doesn't occur in Python 3 (I'm not exactly sure why it's changed). A workaround might be to assign the class variable `__doc__` in your class, rather than using `wraps` to set the docstring after the class exists: ``` def memoize(f): """ Memoization decorator for functions taking one or more arguments. Saves repeated api calls for a given value, by caching it. """ class memodict(dict): __doc__ = f.__doc__ # copy docstring to class variable def __init__(self, f): self.f = f def __call__(self, *args): return self[args] def __missing__(self, key): ret = self[key] = self.f(*key) return ret return memodict(f) ``` This won't copy any of the other attributes that `wraps` tries to copy (like `__name__`, etc.). You may want to fix those up yourself if they're important to you. The `__name__` attribute however needs to be set after the class is created (you can't assign it in the class definition): ``` class Foo(object): __name__ = "Bar" # this has no effect Foo.__name__ = "Bar" # this works ```
65,460,702
I am having an issue with my personal project, my python skills are pretty basic but any help would be greatly appreciated Question: TASK 1 To simulate the monitoring required, write a routine that allows entry of the baby's temperature in degrees Celsius. The routine should check whether the temperature is within the acceptable range, too high or too low and output a suitable message in each case. TASK 2 Write another routine that stores the temperatures taken over a three hour period in an array. This routine should output the highest and lowest temperatures and calculate the difference between these temperatures. NOTE: MORE emphasis on task 2 my failed attempt:

```
from array import array
print("BABY TEMPERATURE CHECKER")

MinBbyTemp = float(36.0)
MaxBbyTemp = float(37.5)

routTemp = array("i", [])

BabyTemp = float(input("What is the temperature of the baby?"))

if BabyTemp < MinBbyTemp:
    print("The temperature of the baby is low/unusual and needs to be worked on")

elif BabyTemp > MaxBbyTemp:
    print("The temperature of the baby is too high and above the average")

else:
    print("The temperature inputted is out of range")
```
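For what it's worth, a minimal sketch of the two routines described above might look like the following (the function names, message wording and the sample readings are assumptions, not part of the assignment text):

```
MIN_TEMP = 36.0
MAX_TEMP = 37.5

def check_temperature():
    """Task 1: read one temperature and report whether it is in range."""
    temp = float(input("What is the temperature of the baby? "))
    if temp < MIN_TEMP:
        print("The temperature of the baby is too low")
    elif temp > MAX_TEMP:
        print("The temperature of the baby is too high")
    else:
        print("The temperature of the baby is within the acceptable range")

def summarise_readings(readings):
    """Task 2: report highest, lowest and the difference for stored readings."""
    highest = max(readings)
    lowest = min(readings)
    print("Highest:", highest)
    print("Lowest:", lowest)
    print("Difference:", highest - lowest)

# example readings taken over a three hour period
summarise_readings([36.4, 36.8, 37.1, 36.9, 37.3, 36.7])
```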
2020/12/26
[ "https://Stackoverflow.com/questions/65460702", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14861692/" ]
According to the discord.py docs [bot.run()](https://discordpy.readthedocs.io/en/latest/api.html#discord.Client.run) is "A blocking call that abstracts away the event loop initialisation from you." and further they said if we want more control over the loop we could use start() coroutine instead of run(). So now we should create a task for calling this coroutine and we know [discord.py](https://discordpy.readthedocs.io/en/latest/) and [FastAPI](https://fastapi.tiangolo.com/) all are [asynchronous](https://docs.python.org/3/library/asyncio.html) applications. For starting a FastAPI app you need an ASGI server to handle it. In this case, we're using [Uvicorn](https://www.uvicorn.org/). So far we have run FastAPI app, now we need to start our discord bot. According to FastAPI docs we could use [startup/shutdown event](https://fastapi.tiangolo.com/advanced/events/), for calling bot.start() coroutine before the main API starts. Here is an example of an app which has an API endpoint for sending a message to a discord's user: ``` import asyncio import discord import uvicorn from config import TOKEN, USER_ID from fastapi import FastAPI app = FastAPI() bot = discord.Client() @app.on_event("startup") async def startup_event(): #this fucntion will run before the main API starts asyncio.create_task(bot.start(TOKEN)) await asyncio.sleep(4) #optional sleep for established connection with discord print(f"{bot.user} has connected to Discord!") @app.get("/") async def root(msg: str): #API endpoint for sending a message to a discord's user user = await send_message(msg) return {"Message": f"'{msg}' sent to {user}"} async def send_message(message): user = await bot.fetch_user(USER_ID) await user.send(message) return user #for optional log in the response of endpoint if __name__ == "__main__": uvicorn.run(app, host="localhost", port=5000) ``` Tested with Python 3.7.4
You are not returning anything from your `send_message` function. Something like this should do the job.

```py
@app.post("/items/")
async def create_item(item: Item):
    msg = await send_message()
    return msg


async def send_message():
    user = await bot.fetch_user(USER_ID)
    return await user.send('')
```
65,460,702
I am having an issue with my personal project, my python skills are pretty basic but any help would be greatly appreciated Question: TASK 1 To simulate the monitoring required, write a routine that allows entry of the baby's temperature in degrees Celsius. The routine should check whether the temperature is within the acceptable range, too high or too low and output a suitable message in each case. TASK 2 Write another routine that stores the temperatures taken over a three hour period in an array. This routine should output the highest and lowest temperatures and calculate the difference between these temperatures. NOTE: MORE emphasis on task 2 my failed attempt:

```
from array import array
print("BABY TEMPERATURE CHECKER")

MinBbyTemp = float(36.0)
MaxBbyTemp = float(37.5)

routTemp = array("i", [])

BabyTemp = float(input("What is the temperature of the baby?"))

if BabyTemp < MinBbyTemp:
    print("The temperature of the baby is low/unusual and needs to be worked on")

elif BabyTemp > MaxBbyTemp:
    print("The temperature of the baby is too high and above the average")

else:
    print("The temperature inputted is out of range")
```
2020/12/26
[ "https://Stackoverflow.com/questions/65460702", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14861692/" ]
According to the discord.py docs [bot.run()](https://discordpy.readthedocs.io/en/latest/api.html#discord.Client.run) is "A blocking call that abstracts away the event loop initialisation from you." and further they said if we want more control over the loop we could use start() coroutine instead of run(). So now we should create a task for calling this coroutine and we know [discord.py](https://discordpy.readthedocs.io/en/latest/) and [FastAPI](https://fastapi.tiangolo.com/) all are [asynchronous](https://docs.python.org/3/library/asyncio.html) applications. For starting a FastAPI app you need an ASGI server to handle it. In this case, we're using [Uvicorn](https://www.uvicorn.org/). So far we have run FastAPI app, now we need to start our discord bot. According to FastAPI docs we could use [startup/shutdown event](https://fastapi.tiangolo.com/advanced/events/), for calling bot.start() coroutine before the main API starts. Here is an example of an app which has an API endpoint for sending a message to a discord's user: ``` import asyncio import discord import uvicorn from config import TOKEN, USER_ID from fastapi import FastAPI app = FastAPI() bot = discord.Client() @app.on_event("startup") async def startup_event(): #this fucntion will run before the main API starts asyncio.create_task(bot.start(TOKEN)) await asyncio.sleep(4) #optional sleep for established connection with discord print(f"{bot.user} has connected to Discord!") @app.get("/") async def root(msg: str): #API endpoint for sending a message to a discord's user user = await send_message(msg) return {"Message": f"'{msg}' sent to {user}"} async def send_message(message): user = await bot.fetch_user(USER_ID) await user.send(message) return user #for optional log in the response of endpoint if __name__ == "__main__": uvicorn.run(app, host="localhost", port=5000) ``` Tested with Python 3.7.4
Code `bot.run(...)` runs all time and it blocks next line which starts API. You would have to run one of them in separated thread or process. I tried to run `bot` in `thread` ``` if __name__ == "__main__": import threading print('Starting bot') t = threading.Thread(target=bot.start, args=(TOKEN,)) t.start() print('Starting API') uvicorn.run(app, host='0.0.0.0') ``` but it gives me message that `bot` should run in main thread. But I found question [Discord bot and bottle in the same time in Python](https://stackoverflow.com/questions/52410570/discord-bot-and-bottle-in-the-same-time-in-python) and base on it I create code which works for me ``` if __name__ == "__main__": import asyncio print('Starting bot') bot_app = bot.start(TOKEN) bot_task = asyncio.ensure_future(bot_app) print('Starting API') uvicorn.run(app, host='0.0.0.0') ``` But I'm not sure if this is ellegant method because `uvicorn` runs `ayncio` indirectly. --- Full version ``` import uvicorn from fastapi import FastAPI from pydantic import BaseModel from typing import Optional from discord.ext import commands app = FastAPI() #import os #TOKEN = os.getenv("DISCORD_TOKEN") TOKEN = 'MY_TOKEN' bot = commands.Bot(command_prefix='>') class Item(BaseModel): name: str description: Optional[str] = None price: float tax: Optional[float] = None @app.get("/") def hello(): return {"message":"Hello"} @app.post("/items/") async def create_item(item: Item): await send_message() return item @bot.event async def on_ready(): print(f'{bot.user.name} has connected to Discord!') async def send_message(): user = await bot.fetch_user(USER_ID) await user.send('') if __name__ == "__main__": import asyncio print('Starting bot') bot_app = bot.start(TOKEN) bot_task = asyncio.ensure_future(bot_app) print('Starting API') uvicorn.run(app, host='0.0.0.0') ```
57,445,907
I want to detect malicious sites using Python. I've tried using the `requests` module to get the contents of a website and then search for `malicious words` in it, but I didn't get it to work. [![here example images of red page](https://i.stack.imgur.com/f3vOj.png)](https://i.stack.imgur.com/f3vOj.png) This is my full code: [link code](https://pastebin.com/t43WqW8U)

```
req_check = requests.get(url)
if 'malicious words' in req_check.content:
     print ('[Your Site Detect Red Page] ===> '+url)
else:
     print ('[Your Site Not Detect Red Page] ===> '+url)
```
2019/08/10
[ "https://Stackoverflow.com/questions/57445907", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10675882/" ]
It doesn't work because you're using the `requests` library wrong. In your code, you essentially only get the HTML of the virus site (line of code: `req_check = requests.get(url, verify=False)` and `if 'example for detect ' in req_check.content:`{source: <https://pastebin.com/6x24SN6v>}) In Chrome, the browser runs through a database of known virus links (its more complicated than this) and sees if the link is safe. However, the `requests` library **does not** do this. Instead, you're better off using their API. If you want to see how the API can be used in conjunction with `requests`, you can see my answer on another question: [Is there a way to extract information from shadow-root on a Website?](https://stackoverflow.com/questions/57281787/is-there-a-way-to-extract-information-from-shadow-root-on-a-website/57299979#57299979) Sidenote, the `redage()` is never called?
Tell the user to enter a website, then use selenium or something to upload the url to virustotal.com
57,445,907
I want to detect malicious sites using Python. I've tried using the `requests` module to get the contents of a website and then search for `malicious words` in it, but I didn't get it to work. [![here example images of red page](https://i.stack.imgur.com/f3vOj.png)](https://i.stack.imgur.com/f3vOj.png) This is my full code: [link code](https://pastebin.com/t43WqW8U)

```
req_check = requests.get(url)
if 'malicious words' in req_check.content:
     print ('[Your Site Detect Red Page] ===> '+url)
else:
     print ('[Your Site Not Detect Red Page] ===> '+url)
```
2019/08/10
[ "https://Stackoverflow.com/questions/57445907", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10675882/" ]
It doesn't work because you're using the `requests` library wrong. In your code, you essentially only get the HTML of the virus site (line of code: `req_check = requests.get(url, verify=False)` and `if 'example for detect ' in req_check.content:`{source: <https://pastebin.com/6x24SN6v>}) In Chrome, the browser runs through a database of known virus links (its more complicated than this) and sees if the link is safe. However, the `requests` library **does not** do this. Instead, you're better off using their API. If you want to see how the API can be used in conjunction with `requests`, you can see my answer on another question: [Is there a way to extract information from shadow-root on a Website?](https://stackoverflow.com/questions/57281787/is-there-a-way-to-extract-information-from-shadow-root-on-a-website/57299979#57299979) Sidenote, the `redage()` is never called?
I would guess that your indentation might be messed up where there is other code. Otherwise, it should work flawlessly.

Edit 2
------

It appears that the OP was after a way to detect malicious sites in Python. This is the documentation from [VirusTotal](https://developers.virustotal.com/reference#url-scan) explaining how to leverage their `APIs`. Now, to give you a working example, this will print a list of engines reporting positive:

```
import requests

apikey = '<api_key>'

def main():
    scan_url('https://friborgerforbundet.no/')

def scan_url(url):
    params = {'apikey': apikey, 'url': url}
    response = requests.post('https://www.virustotal.com/vtapi/v2/url/scan', data=params)
    scan_id = response.json()['scan_id']

    report_params = {'apikey': apikey, 'resource': scan_id}
    report_response = requests.get('https://www.virustotal.com/vtapi/v2/url/report', params=report_params)
    scans = report_response.json()['scans']

    positive_sites = []
    for key, value in scans.items():
        if value['detected'] == True:
            positive_sites.append(key)
    print(positive_sites)

if __name__ == '__main__':
    main()
```
54,806,005
I have a list of dictionaries that looks something like this-> ``` list = [{"id":1,"path":"a/b", ........}, {"id":2,"path":"a/b/c", ........}, {"id":3,"path":"a/b/c/d", ........}] ``` Now I want to create a dict of path to id mapping. That should look something like this-> ``` d=dict() d["a/b"] = 1 d["a/b/c"] = 2 d["a/b/c/d"] = 3 ``` how to create it in pythonic way
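As a hedged aside, the mapping itself can be built with a single dict comprehension; the variable name `records` below is used instead of `list` only to avoid shadowing the built-in:

```
records = [{"id": 1, "path": "a/b"},
           {"id": 2, "path": "a/b/c"},
           {"id": 3, "path": "a/b/c/d"}]

# path -> id mapping
d = {item["path"]: item["id"] for item in records}
print(d)  # {'a/b': 1, 'a/b/c': 2, 'a/b/c/d': 3}
```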
2019/02/21
[ "https://Stackoverflow.com/questions/54806005", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9593015/" ]
Make the outer div `fixed`, then the 4 inner div elements will show next to each other.

```css
.groups {
  display: flex;
  display: -webkit-flex;
  position: fixed;
  margin: 0 auto;
  left: 0;
  right: 0;
  justify-content: space-around;
  max-width: 500px;
}

.g{
    height: 70px;
    width: 70px;
    background-color: black;
    margin: 5px;
}
```

```html
<div class="groups">
  <div class="g g1"></div>
  <div class="g g2"></div>
  <div class="g g3"></div>
  <div class="g g4"></div>
</div>
```
Otherwise you can do it with flex: ```css .g{ height: 70px; width: 70px; background-color: black; margin: 5px; } .groups { display: flex; justify-content: space-between; width: 400px } ``` ```html <div class="groups"> <div class="g g1"></div> <div class="g g2"></div> <div class="g g3"></div> <div class="g g4"></div> </div> ```
54,806,005
I have a list of dictionaries that looks something like this-> ``` list = [{"id":1,"path":"a/b", ........}, {"id":2,"path":"a/b/c", ........}, {"id":3,"path":"a/b/c/d", ........}] ``` Now I want to create a dict of path to id mapping. That should look something like this-> ``` d=dict() d["a/b"] = 1 d["a/b/c"] = 2 d["a/b/c/d"] = 3 ``` how to create it in pythonic way
2019/02/21
[ "https://Stackoverflow.com/questions/54806005", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9593015/" ]
Make the outer div `fixed`, then the 4 inner div elements will show next to each other.

```css
.groups {
  display: flex;
  display: -webkit-flex;
  position: fixed;
  margin: 0 auto;
  left: 0;
  right: 0;
  justify-content: space-around;
  max-width: 500px;
}

.g{
    height: 70px;
    width: 70px;
    background-color: black;
    margin: 5px;
}
```

```html
<div class="groups">
  <div class="g g1"></div>
  <div class="g g2"></div>
  <div class="g g3"></div>
  <div class="g g4"></div>
</div>
```
Try this fiddle [Fiddle here](https://jsfiddle.net/c1kjmsxv/) ``` .groups{ position: fixed; bottom: 15%; display:flex; justify-content:center; width:100%; } .g{ height: 70px; width: 70px; background-color: black; margin: 5px; } ```
54,806,005
I have a list of dictionaries that looks something like this-> ``` list = [{"id":1,"path":"a/b", ........}, {"id":2,"path":"a/b/c", ........}, {"id":3,"path":"a/b/c/d", ........}] ``` Now I want to create a dict of path to id mapping. That should look something like this-> ``` d=dict() d["a/b"] = 1 d["a/b/c"] = 2 d["a/b/c/d"] = 3 ``` how to create it in pythonic way
2019/02/21
[ "https://Stackoverflow.com/questions/54806005", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9593015/" ]
Make the outer div `fixed`, then the 4 inner div elements will show next to each other.

```css
.groups {
  display: flex;
  display: -webkit-flex;
  position: fixed;
  margin: 0 auto;
  left: 0;
  right: 0;
  justify-content: space-around;
  max-width: 500px;
}

.g{
    height: 70px;
    width: 70px;
    background-color: black;
    margin: 5px;
}
```

```html
<div class="groups">
  <div class="g g1"></div>
  <div class="g g2"></div>
  <div class="g g3"></div>
  <div class="g g4"></div>
</div>
```
Just add `float:left;` to the css class .g Of course you MUST remove `position: fixed;` You can also use `position: fixed;` but you must provide different values for position for each div. ``` <div class="groups"> <div class="g g1"></div> <div class="g g2"></div> <div class="g g3"></div> <div class="g g4"></div> </div> <div class="groups"> <div class="h h1"></div> <div class="h h2"></div> <div class="h h3"></div> <div class="h h4"></div> </div> .g { height: 70px; width: 70px; background-color: black; margin: 5px; float:left; /* add this */ border: 1px solid red; bottom: 15%; } .h { height: 70px; width: 70px; background-color: blue; margin: 5px; position:fixed; } .h1 { bottom: 50px; right: 20px; } .h2 { bottom: 50px; right: 100px; } .h3 { bottom: 50px; right: 180px; } .h4 { bottom: 50px; right: 260px; } ``` Here is a working [Fiddle](https://jsfiddle.net/Meziano/d4rvy0Lc/)
54,806,005
I have a list of dictionaries that looks something like this-> ``` list = [{"id":1,"path":"a/b", ........}, {"id":2,"path":"a/b/c", ........}, {"id":3,"path":"a/b/c/d", ........}] ``` Now I want to create a dict of path to id mapping. That should look something like this-> ``` d=dict() d["a/b"] = 1 d["a/b/c"] = 2 d["a/b/c/d"] = 3 ``` how to create it in pythonic way
2019/02/21
[ "https://Stackoverflow.com/questions/54806005", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9593015/" ]
Make the outer div `fixed`, then the 4 inner div elements will show next to each other.

```css
.groups {
  display: flex;
  display: -webkit-flex;
  position: fixed;
  margin: 0 auto;
  left: 0;
  right: 0;
  justify-content: space-around;
  max-width: 500px;
}

.g{
    height: 70px;
    width: 70px;
    background-color: black;
    margin: 5px;
}
```

```html
<div class="groups">
  <div class="g g1"></div>
  <div class="g g2"></div>
  <div class="g g3"></div>
  <div class="g g4"></div>
</div>
```
As the position is fixed, you may need to specify the positions of the classes g1, g2 and g3 like below.

```
.g1{
  left: 0px;
}
.g2{
  left: 100px;
}
.g3{
  left: 200px;
}
```
29,704,139
I am trying to apply `_pickle` to save data onto disk. But when calling `_pickle.dump`, I got an error ``` OverflowError: cannot serialize a bytes object larger than 4 GiB ``` Is this a hard limitation to use `_pickle`? (`cPickle` for python2)
2015/04/17
[ "https://Stackoverflow.com/questions/29704139", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2880978/" ]
Not anymore in Python 3.4, which has PEP 3154 and pickle protocol 4.0: <https://www.python.org/dev/peps/pep-3154/>

But you need to say you want to use version 4 of the protocol, and the file has to be opened in binary mode: <https://docs.python.org/3/library/pickle.html>

```
pickle.dump(d, open("file", 'wb'), protocol=4)
```
Yes, this is a hard-coded limit; from [`save_bytes` function](https://hg.python.org/cpython/file/2d8e4047c270/Modules/_pickle.c#l1958):

```c
else if (size <= 0xffffffffL) {
    // ...
}
else {
    PyErr_SetString(PyExc_OverflowError,
                    "cannot serialize a bytes object larger than 4 GiB");
    return -1;          /* string too large */
}
```

The protocol uses 4 bytes to write the size of the object to disk, which means you can only track sizes of up to 2^32 == 4GB. If you can break up the `bytes` object into multiple objects, each smaller than 4GB, you can still save the data to a pickle, of course.
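To illustrate that last point, a small sketch (file name and chunk size are arbitrary) of splitting the data before pickling, so that each individual `bytes` object stays under the 4 GiB cap even on the older protocols:

```
import pickle

def dump_chunked(data, path, chunk_size=2**30):  # 1 GiB chunks, well below 4 GiB
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with open(path, 'wb') as f:
        pickle.dump(chunks, f, protocol=2)

def load_chunked(path):
    with open(path, 'rb') as f:
        return b''.join(pickle.load(f))
```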
29,704,139
I am trying to apply `_pickle` to save data onto disk. But when calling `_pickle.dump`, I got an error ``` OverflowError: cannot serialize a bytes object larger than 4 GiB ``` Is this a hard limitation to use `_pickle`? (`cPickle` for python2)
2015/04/17
[ "https://Stackoverflow.com/questions/29704139", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2880978/" ]
Yes, this is a hard-coded limit; from [`save_bytes` function](https://hg.python.org/cpython/file/2d8e4047c270/Modules/_pickle.c#l1958):

```c
else if (size <= 0xffffffffL) {
    // ...
}
else {
    PyErr_SetString(PyExc_OverflowError,
                    "cannot serialize a bytes object larger than 4 GiB");
    return -1;          /* string too large */
}
```

The protocol uses 4 bytes to write the size of the object to disk, which means you can only track sizes of up to 2^32 == 4GB. If you can break up the `bytes` object into multiple objects, each smaller than 4GB, you can still save the data to a pickle, of course.
There are great answers above for why pickle doesn't work. But it still doesn't work for Python 2.7, which is a problem if you are still at Python 2.7 and want to support large files, especially NumPy (NumPy arrays over 4G fail).

You can use OC serialization, which has been updated to work for data over 4Gig. There is a Python C Extension module available from:

<http://www.picklingtools.com/Downloads>

Take a look at the Documentation:

<http://www.picklingtools.com/html/faq.html#python-c-extension-modules-new-as-of-picklingtools-1-6-0-and-1-3-3>

But, here's a quick summary: there's ocdumps and ocloads, very much like pickle's dumps and loads::

```
from pyocser import ocdumps, ocloads
ser = ocdumps(pyobject)   : Serialize pyobject into string ser
pyobject = ocloads(ser)   : Deserialize from string ser into pyobject
```

The OC Serialization is 1.5-2x faster and also works with C++ (if you are mixing languages). It works with all built-in types, but not classes (partly because it is cross-language and it's hard to build C++ classes from Python).
29,704,139
I am trying to apply `_pickle` to save data onto disk. But when calling `_pickle.dump`, I got an error ``` OverflowError: cannot serialize a bytes object larger than 4 GiB ``` Is this a hard limitation to use `_pickle`? (`cPickle` for python2)
2015/04/17
[ "https://Stackoverflow.com/questions/29704139", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2880978/" ]
Not anymore in Python 3.4, which has PEP 3154 and pickle protocol 4.0: <https://www.python.org/dev/peps/pep-3154/>

But you need to say you want to use version 4 of the protocol, and the file has to be opened in binary mode: <https://docs.python.org/3/library/pickle.html>

```
pickle.dump(d, open("file", 'wb'), protocol=4)
```
There are great answers above for why pickle doesn't work. But it still doesn't work for Python 2.7, which is a problem if you are still at Python 2.7 and want to support large files, especially NumPy (NumPy arrays over 4G fail).

You can use OC serialization, which has been updated to work for data over 4Gig. There is a Python C Extension module available from:

<http://www.picklingtools.com/Downloads>

Take a look at the Documentation:

<http://www.picklingtools.com/html/faq.html#python-c-extension-modules-new-as-of-picklingtools-1-6-0-and-1-3-3>

But, here's a quick summary: there's ocdumps and ocloads, very much like pickle's dumps and loads::

```
from pyocser import ocdumps, ocloads
ser = ocdumps(pyobject)   : Serialize pyobject into string ser
pyobject = ocloads(ser)   : Deserialize from string ser into pyobject
```

The OC Serialization is 1.5-2x faster and also works with C++ (if you are mixing languages). It works with all built-in types, but not classes (partly because it is cross-language and it's hard to build C++ classes from Python).
53,012,388
When I do `python -mzeep https://testingapi.ercot.com/2007-08/Nodal/eEDS/EWS/?WSDL` the operations are blank. When I pull that up in a browser I can find many things under an `<operation>` tag. What am I missing? I'm not sure if this is relevant but I hate to exclude this info if it is. The site has a zip file of XSDs and WSDL files that I don't know what to do with [here](http://www.ercot.com/content/wcm/lists/89535/External_Web_Services_XSD_V1.20K.zip).
2018/10/26
[ "https://Stackoverflow.com/questions/53012388", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1818713/" ]
> > I actually don't need those serialized into the JSON file > > > In JSON.NET there is a `[JsonIgnore]` attribute that you can decorate properties with. Related: [Newtonsoft ignore attributes?](https://stackoverflow.com/questions/6309725/newtonsoft-ignore-attributes)
To serialize and write it to a file, just do this: ``` string json = JsonConvert.SerializeObject(theme); System.IO.File.WriteAllText("yourfile.json", json); ```
53,356,449
I'm having trouble carrying out what I think should be a pretty straightforward task on a NIDAQ usb6002: I have a low frequency sine wave that I'm measuring at an analog input channel, and when it crosses zero I would like to light an LED for 1 second. I'm trying to use the nidaqmx Python API, but haven't been able to clear up some of my basic questions with the documentation. <https://nidaqmx-python.readthedocs.io/en/latest/> If anyone can offer any thoughts about the code or the basic logic of my setup, that would be hugely helpful. Here's what I have tried so far. I start with some imports and the definition of my channels: ``` import matplotlib.pyplot as plt from math import * import nidaqmx from nidaqmx import * from nidaqmx.constants import * import time V_PIN = "Dev1/ai6" LED_PIN = "Dev1/ao0" ``` I understand how tasks and things work generally- I can read and plot a signal of a given sampling rate and number of samples using task.ai\_channels methods without any trouble. But here's my best guess at how to carry out "detect zero and trigger output": ``` writeLED = nidaqmx.Task('LED') writeLED.ao_channels.add_ao_voltage_chan(LED_PIN) writeLED.timing.cfg_samp_clk_timing(1) writeLED.triggers.start_trigger.cfg_anlg_edge_start_trig(V_PIN,trigger_level = 0) writeLED.write([5], auto_start=True) ``` This gives me the error below at the cfg\_anlg\_edge line ``` DaqError: Requested value is not a supported value for this property. The property value may be invalid because it conflicts with another property. Property: DAQmx_StartTrig_Type Requested Value: DAQmx_Val_AnlgEdge Possible Values: DAQmx_Val_DigEdge, DAQmx_Val_None ``` I don't know why an analog input channel wouldn't be supported here. Page 245 of this document makes it sound like it should be: <https://media.readthedocs.org/pdf/nidaqmx-python/latest/nidaqmx-python.pdf> I'm sure there are other problems with the code, too. For example, it seems like the sample clock manipulations are quite a bit more complicated than what I've written above, but I haven't been able to find anything that explains how it would work in this situation. Thanks in advance for any help!
2018/11/17
[ "https://Stackoverflow.com/questions/53356449", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10397841/" ]
In short: web workers do not ignore messages even if the web worker thread is blocked. All browser events, including web worker `postMessage()`/`onmessage()` events, are queued. This is the fundamental philosophy of JavaScript (`onmessage()` is handled in JS even if you use WebAssembly). Have a look at ["Concurrency model and Event Loop" from MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/EventLoop) for further detail.

So what is going to happen in your case is that, while `onmessage()` is blocked, the events from the main thread's `postMessage()` calls are queued automatically. When a single `onmessage()` job is finished, the worker thread checks its event queue, sees whether `postMessage()` was called while it was busy, and picks up the message if there is one. So you don't need to worry about that case unless a single `onmessage()` job takes something like 10 seconds and you pile up hundreds of events in the queue. This is how asynchronous execution is done everywhere in the browser.
Considering you are targeting recent browsers (WebAssembly), you can most likely rely on SharedArrayBuffer and Atomics. Have a look at these solutions [Is it possible to pause/resume a web worker externally?](https://stackoverflow.com/questions/57701464/is-it-possible-to-pause-resume-a-web-worker-externally/71888014#71888014) , which in your case will need to be handled inside WebAssembly (`Atomics.wait` part)
30,019,283
I was wondering how to parse the CURL JSON output from the server into variables. Currently, I have - ``` curl -X POST -H "Content: agent-type: application/x-www-form-urlencoded" https://www.toontownrewritten.com/api/login?format=json -d username="$USERNAME" -d password="$PASSWORD" | python -m json.tool ``` But it only outputs the JSON from the server and then have it parsed, like so: ``` { "eta": "0", "position": "0", "queueToken": "6bee9e85-343f-41c7-a4d3-156f901da615", "success": "delayed" } ``` But how do I put - for example the success value above returned from the server into a variable $SUCCESS and have the value as delayed & have queueToken as a variable $queueToken and 6bee9e85-343f-41c7-a4d3-156f901da615 as a value? Then when I use- ``` echo "$SUCCESS" ``` it shows this as the output - ``` delayed ``` And when I use ``` echo "$queueToken" ``` and the output as ``` 6bee9e85-343f-41c7-a4d3-156f901da615 ``` Thanks!
2015/05/03
[ "https://Stackoverflow.com/questions/30019283", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3152204/" ]
Find and install `jq` (<https://stedolan.github.io/jq/>). `jq` is a JSON parser. JSON is not reliably parsed by line-oriented tools like `sed` because, like XML, JSON is not a line-oriented data format. In terms of your question: ``` source <( curl -X POST -H "$content_type" "$url" -d username="$USERNAME" -d password="$PASSWORD" | jq -r '. as $h | keys | map(. + "=\"" + $h[.] + "\"") | .[]' ) ``` The `jq` syntax is a bit weird, I'm still working on it. It's basically a series of filters, each pipe taking the previous input and transforming it. In this case, the end result is some lines that look like `variable="value"` This answer uses bash's "process substitution" to take the results of the `jq` command, treat it like a file, and `source` it into the current shell. The variables will then be available to use.
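If staying inside Python is an option instead of shell variables, a small sketch of the same idea with `requests` (the URL and field names are taken from the question; the credentials are placeholders):

```
import requests

USERNAME = "user"      # placeholder
PASSWORD = "secret"    # placeholder

url = "https://www.toontownrewritten.com/api/login?format=json"
data = requests.post(url, data={"username": USERNAME, "password": PASSWORD}).json()

success = data["success"]             # e.g. "delayed"
queue_token = data.get("queueToken")  # e.g. "6bee9e85-..."
print(success, queue_token)
```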
Here's an example of [Extract a JSON value from a BASH script](https://gist.github.com/cjus/1047794) ``` #!/bin/bash function jsonval { temp=`echo $json | sed 's/\\\\\//\//g' | sed 's/[{}]//g' | awk -v k="text" '{n=split($0,a,","); for (i=1; i<=n; i++) print a[i]}' | sed 's/\"\:\"/\|/g' | sed 's/[\,]/ /g' | sed 's/\"//g' | grep -w $prop` echo ${temp##*|} } json=`curl -s -X GET http://twitter.com/users/show/$1.json` prop='profile_image_url' picurl=`jsonval` `curl -s -X GET $picurl -o $1.png` ``` > > A bash script which demonstrates parsing a JSON string to extract a > property value. The script contains a jsonval function which operates > on two variables, json and prop. When the script is passed the name of > a twitter user it attempts to download the user's profile picture. > > >
30,019,283
I was wondering how to parse the CURL JSON output from the server into variables. Currently, I have - ``` curl -X POST -H "Content: agent-type: application/x-www-form-urlencoded" https://www.toontownrewritten.com/api/login?format=json -d username="$USERNAME" -d password="$PASSWORD" | python -m json.tool ``` But it only outputs the JSON from the server and then have it parsed, like so: ``` { "eta": "0", "position": "0", "queueToken": "6bee9e85-343f-41c7-a4d3-156f901da615", "success": "delayed" } ``` But how do I put - for example the success value above returned from the server into a variable $SUCCESS and have the value as delayed & have queueToken as a variable $queueToken and 6bee9e85-343f-41c7-a4d3-156f901da615 as a value? Then when I use- ``` echo "$SUCCESS" ``` it shows this as the output - ``` delayed ``` And when I use ``` echo "$queueToken" ``` and the output as ``` 6bee9e85-343f-41c7-a4d3-156f901da615 ``` Thanks!
2015/05/03
[ "https://Stackoverflow.com/questions/30019283", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3152204/" ]
Find and install `jq` (<https://stedolan.github.io/jq/>). `jq` is a JSON parser. JSON is not reliably parsed by line-oriented tools like `sed` because, like XML, JSON is not a line-oriented data format. In terms of your question: ``` source <( curl -X POST -H "$content_type" "$url" -d username="$USERNAME" -d password="$PASSWORD" | jq -r '. as $h | keys | map(. + "=\"" + $h[.] + "\"") | .[]' ) ``` The `jq` syntax is a bit weird, I'm still working on it. It's basically a series of filters, each pipe taking the previous input and transforming it. In this case, the end result is some lines that look like `variable="value"` This answer uses bash's "process substitution" to take the results of the `jq` command, treat it like a file, and `source` it into the current shell. The variables will then be available to use.
You could use a Perl module on the command line. First, ensure it is installed; on a Debian-based system, you could run:

```
sudo apt-get install libjson-xs-perl
```

For other OSes, you could install Perl modules via [CPAN (the *Comprehensive Perl Archive Network*)](http://www.cpan.org/modules/INSTALL.html):

```
cpan App::cpanminus
cpan JSON::XS
```

Note: You may have to run this with superuser privileges. Then:

```
curlopts=(-X POST -H "Content: apent-type: application/x-www-form-urlencoded"
    -d username="$USERNAME" -d password="$PASSWORD")
curlurl=https://www.toontownrewritten.com/api/login?format=json
. <(
    perl -MJSON::XS -e '
        $/=undef;my $a=JSON::XS::decode_json <> ;
        printf "declare -A Json=\047(%s)\047\n", join " ",map {
            "[".$_."]=\"".$a->{$_}."\""
        } qw|queueToken success eta position|;
      ' < <(
        curl "${curlopts[@]}" $curlurl
    )
)
```

The `qw|...|` list lets you specify which variables you want to be populated... This could be replaced by `keys $a`, but that might need debugging, as some characters are forbidden in *associative array* key names.

```
echo ${Json[queueToken]}
6bee9e85-343f-41c7-a4d3-156f901da615

echo ${Json[eta]}
0
```
13,774,443
I'm making a request in python to a web service which returns AMF. I don't know if it's AMF0 or AMF3 yet. ``` r = requests.post(url, data=data) >>> r.text u'\x00\x03...' ``` ([Full data here](http://pastebin.com/sdZnU8Ds)) How can I take `r.text` and convert it to a python object or similar? I found [amfast](http://code.google.com/p/amfast/) but it's `Decoder` class returns a `3.131513074181806e-294` assuming AMF0 and `None` for AMF3. (Both incorrect) ``` from amfast.decoder import Decoder decoder = Decoder(amf3=False) obj = decoder.decode(StringIO.StringIO(r.text)) ```
2012/12/08
[ "https://Stackoverflow.com/questions/13774443", "https://Stackoverflow.com", "https://Stackoverflow.com/users/246265/" ]
**EDIT:** I had calculated some of my probabilities wrongly. Also I've now mentioned that we need to randomly pick 2 distinct inputs for the function f in order to guarantee that, if f is balanced, then we know the probabilities of seeing the various possible outcomes. The fact that the prior probability of the function being constant is not known makes this question harder, because it means we can't directly calculate the probability of success for any algorithm. We will, however, be able to calculate *bounds on* this probability. I propose the following probabilistic algorithm: * Pick two distinct 4-bit values at random, and supply each to the function f. * If 0,0 or 1,1 is seen, output "constant" with probability 2/3 and "balanced" with probability 1/3. * Otherwise (if 0,1 or 1,0 is seen), always report "balanced". Let's start by looking at something we can actually calculate: conditional probabilities. 1. **"What is P(correct|constant), namely the probability that our algorithm gives the correct answer given that f is constant?"** When f is constant, our algorithm reports the right answer 2/3 of the time. 2. **"What is P(correct|balanced), namely the probability that our algorithm gives the correct answer given that f is balanced?"** When f is balanced, the probability of seeing 0,1 or 1,0 is 2\*(8/16 \* 8/15) = 8/15, in which case the correct answer will definitely be output. In the remaining 7/15 of cases -- i.e. those in which 0,0 or 1,1 is seen -- the correct answer will be output 1/3 of the time, so the total proportion of correct outputs will be 8/15 \* 1 + 7/15 \* 1/3 = 31/45 = 2/3 + 1/45 โ‰ˆ 0.6889. Now suppose that the prior probability of the function being constant is p. Then the probability that the algorithm gives the correct answer is pCorrect(p) = p\*P(correct|constant) + (1-p)\*P(correct|balanced). Given that 0 <= p <= 1, pCorrect(p) must be at least min(P(correct|constant), P(correct|balanced)), and at most max(P(correct|constant), P(correct|balanced)). The minimum of 2/3 and 31/45 is 2/3, **thus pCorrect is bounded from below at 2/3, for any prior probability of the function being constant.** (It might help to think of p as a "mixing lever" that controls how much of each term to include. If p = 0 or p = 1, then we effectively just have P(correct|balanced) or P(correct|constant), respectively, and for any in-between value of p, we will have an in-between total.)
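A quick arithmetic check of the conditional probabilities above, using exact fractions (purely illustrative):

```
from fractions import Fraction

# f is balanced over 16 inputs (8 zeros, 8 ones); two distinct inputs are sampled
p_diff = 2 * Fraction(8, 16) * Fraction(8, 15)   # P(outputs differ | balanced) = 8/15
p_same = 1 - p_diff                              # 7/15

p_correct_balanced = p_diff * 1 + p_same * Fraction(1, 3)   # 31/45
p_correct_constant = Fraction(2, 3)                         # constant f always gives equal outputs

print(p_correct_balanced)                            # 31/45
print(min(p_correct_constant, p_correct_balanced))   # 2/3, the lower bound on pCorrect(p)
```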
Look at the probabilities for the different types of functions to return different results for two given values:

```
constant 0,0  50%
constant 1,1  50%
balanced 0,0  4/8 * 3/7 = 21.4%
balanced 0,1  4/8 * 4/7 = 28.6%
balanced 1,0  4/8 * 4/7 = 28.6%
balanced 1,1  4/8 * 3/7 = 21.4%
```

If the results are 0,0 or 1,1 there is a 70% chance that the function is constant, while for the results 0,1 and 1,0 there is a 100% chance that the function is balanced. So, for the cases that occur 71.4% of the time we are 70% certain, and for the cases that occur 28.6% of the time we are 100% certain. On average we are 78.6% certain.
17,081,363
I'm trying to convert C++ code to python but I'm stuck original C++ code ``` int main(void) { int levels = 40; int xp_for_first_level = 1000; int xp_for_last_level = 1000000; double B = log((double)xp_for_last_level / xp_for_first_level) / (levels - 1); double A = (double)xp_for_first_level / (exp(B) - 1.0); for (int i = 1; i <= levels; i++) { int old_xp = round(A * exp(B * (i - 1))); int new_xp = round(A * exp(B * i)); std::cout << i << " " << (new_xp - old_xp) << std::endl; } } ``` python code ``` import math from math import log from math import exp levels = 40 xp_for_first_level = 1000 xp_for_last_level = 1000000 B = log(xp_for_last_level / xp_for_first_level) / (levels - 1) A = xp_for_first_level / (exp(B) - 1.0) for i in range(1, levels): old_xp = round(A * exp(B * (i - 1))) new_xp = round(A * exp(B * i)) print(i + " " + (new_xp - old_xp)) ``` Any help is appreciated I can't seem to completely get it to work, when I fix one bug I'm creating another one.
2013/06/13
[ "https://Stackoverflow.com/questions/17081363", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1934748/" ]
Change the `print` line to: ``` print("%i %i" % (i, new_xp - old_xp)) ``` Refer to this [list of allowed type conversion specifiers](http://docs.python.org/3/library/stdtypes.html#printf-style-string-formatting) for more informations. Or use the new [format](http://docs.python.org/3/library/functions.html#format) method.
Depending on the version of python you are using, the cast to double in the C++ code ``` (double)xp_for_last_level / xp_for_first_level ``` might need to be taken into account in the python code. In python 3 you will get a float, in older python you can do ``` from __future__ import division ``` then `xp_for_last_level / xp_for_first_level` will give you a float. See the [discussion here](https://stackoverflow.com/questions/1267869/how-can-i-force-division-to-be-floating-point-in-python)
17,081,363
I'm trying to convert C++ code to python but I'm stuck original C++ code ``` int main(void) { int levels = 40; int xp_for_first_level = 1000; int xp_for_last_level = 1000000; double B = log((double)xp_for_last_level / xp_for_first_level) / (levels - 1); double A = (double)xp_for_first_level / (exp(B) - 1.0); for (int i = 1; i <= levels; i++) { int old_xp = round(A * exp(B * (i - 1))); int new_xp = round(A * exp(B * i)); std::cout << i << " " << (new_xp - old_xp) << std::endl; } } ``` python code ``` import math from math import log from math import exp levels = 40 xp_for_first_level = 1000 xp_for_last_level = 1000000 B = log(xp_for_last_level / xp_for_first_level) / (levels - 1) A = xp_for_first_level / (exp(B) - 1.0) for i in range(1, levels): old_xp = round(A * exp(B * (i - 1))) new_xp = round(A * exp(B * i)) print(i + " " + (new_xp - old_xp)) ``` Any help is appreciated I can't seem to completely get it to work, when I fix one bug I'm creating another one.
2013/06/13
[ "https://Stackoverflow.com/questions/17081363", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1934748/" ]
For the last line, you can simply use: ``` print(i, new_xp - old_xp) ``` As @pfnuesel commented, you will need to adjust the range of your for loop slightly. Finally, you don't need `import math`. You can replace the first 3 lines with: ``` from math import log, exp ```
Depending on the version of python you are using, the cast to double in the C++ code ``` (double)xp_for_last_level / xp_for_first_level ``` might need to be taken into account in the python code. In python 3 you will get a float, in older python you can do ``` from __future__ import division ``` then `xp_for_last_level / xp_for_first_level` will give you a float. See the [discussion here](https://stackoverflow.com/questions/1267869/how-can-i-force-division-to-be-floating-point-in-python)
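Putting the two answers together, a sketch of the fully adjusted conversion might look like this (the inclusive loop bound, the float division and the print call are the only changes to the code in the question):

```
from math import log, exp

levels = 40
xp_for_first_level = 1000
xp_for_last_level = 1000000

B = log(xp_for_last_level / float(xp_for_first_level)) / (levels - 1)
A = xp_for_first_level / (exp(B) - 1.0)

for i in range(1, levels + 1):  # the C++ loop runs i = 1..levels inclusive
    old_xp = round(A * exp(B * (i - 1)))
    new_xp = round(A * exp(B * i))
    print(i, int(new_xp - old_xp))
```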
57,169,697
When I use the PIL.ImageTk library to load a png to my GUI and use logging to log some events, it creates some unwanted logs in DEBUG mode. I have tried changing the `level` of `logging` to `INFO` or `WARNING` (or higher). But that does not help: ``` logging.basicConfig(filename='mylog.log', filemode='a', format='%(asctime)s %(levelname)s: %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p', level=logging.INFO) ``` For example, the following code will create a log file with some unwanted lines: ``` from PIL import ImageTk, Image import logging try: import tkinter as tk # Python 3.x except ImportError: import Tkinter as tk # Python 2.x class Example(tk.Frame): def __init__(self, parent): tk.Frame.__init__(self, parent) for i in range(2): self.grid_rowconfigure(i, weight=1) self.grid_columnconfigure(0, weight=1) self.img = ImageTk.PhotoImage(Image.open('test.png')) logo = tk.Label(self, image = self.img) logo.grid(row=0, column=0, columnspan=2, sticky="nw", pady=5, padx=10) testLabel = tk.Label(self, width=8, text="This is a test") testLabel.grid(row=1, column=0, sticky='ew', padx=5, pady=5) logging.info("This is a test log...") if __name__ == "__main__": logging.basicConfig(filename='mylog.log', filemode='a', format='%(asctime)s %(levelname)s: %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p', level=logging.DEBUG) root = tk.Tk() Example(root).pack(side="top", fill="both", expand=True) root.mainloop() ``` Here is a sample image `test.png` ![enter image description here](https://i.stack.imgur.com/MTM60.png) This will create a log file with some unwanted lines like this: ``` 07/23/2019 01:34:23 PM DEBUG: STREAM b'IHDR' 16 13 07/23/2019 01:34:23 PM DEBUG: STREAM b'IDAT' 41 6744 07/23/2019 01:34:23 PM INFO: This is a test log... ``` It should have been only: ``` 07/23/2019 01:34:23 PM INFO: This is a test log... ``` If you remove the image from the GUI, the problem goes away. Is there any workaround for this? EDIT: I apologize for not going through the [docs](https://docs.python.org/3/library/logging.html#logging.basicConfig) carefully. This was happening because the root module was created with DEBUG level when I first ran the script in Spyder with `level=DEBUG` and it was never changed by `basicConfig` subsequently when I changed the level to INFO. If I reload all the modules and libs (only by restarting the kernel in Spyder), the problem goes away, which means `level=INFO` would work perfectly as I want.
2019/07/23
[ "https://Stackoverflow.com/questions/57169697", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2365866/" ]
The problem is that the module already created the root logger, and now `basicConfig` uses this logger but cannot change the level of an existing logger.

Doc: [basicConfig](https://docs.python.org/3/library/logging.html#logging.basicConfig)

> This function does nothing if the root logger already has handlers configured for it.

You have to create your own logger (you can use `__name__` to make it unique) and then you can set the root level and separate levels for the file and console handlers. In your own logger you will not see warnings from other loggers.

```
if __name__ == "__main__":

    logger = logging.getLogger(__name__)
    logger.setLevel(logging.DEBUG) # root level

    # console
    ch = logging.StreamHandler()
    ch.setLevel(logging.DEBUG) # level only for console (if not collide with root level)
    logger.addHandler(ch)

    # file
    fh = logging.FileHandler('mylog.log')
    fh.setLevel(logging.DEBUG) # level only for file (if not collide with root level)
    logger.addHandler(fh)

    root = tk.Tk()
    Example(root).pack(side="top", fill="both", expand=True)
    root.mainloop()
```

Doc: [logging-advanced-tutorial](https://docs.python.org/3/howto/logging.html#logging-advanced-tutorial)
Instead Of Using: ``` ... level=logging.DEBUG ... ``` Use: ``` ... level=logging.INFO ... ``` And Your File Will Be: > > DD/MM/YYYY HH:MM:SS PM INFO: This is a test log... > > >
57,169,697
When I use the PIL.ImageTk library to load a png to my GUI and use logging to log some events, it creates some unwanted logs in DEBUG mode. I have tried changing the `level` of `logging` to `INFO` or `WARNING` (or higher). But that does not help: ``` logging.basicConfig(filename='mylog.log', filemode='a', format='%(asctime)s %(levelname)s: %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p', level=logging.INFO) ``` For example, the following code will create a log file with some unwanted lines: ``` from PIL import ImageTk, Image import logging try: import tkinter as tk # Python 3.x except ImportError: import Tkinter as tk # Python 2.x class Example(tk.Frame): def __init__(self, parent): tk.Frame.__init__(self, parent) for i in range(2): self.grid_rowconfigure(i, weight=1) self.grid_columnconfigure(0, weight=1) self.img = ImageTk.PhotoImage(Image.open('test.png')) logo = tk.Label(self, image = self.img) logo.grid(row=0, column=0, columnspan=2, sticky="nw", pady=5, padx=10) testLabel = tk.Label(self, width=8, text="This is a test") testLabel.grid(row=1, column=0, sticky='ew', padx=5, pady=5) logging.info("This is a test log...") if __name__ == "__main__": logging.basicConfig(filename='mylog.log', filemode='a', format='%(asctime)s %(levelname)s: %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p', level=logging.DEBUG) root = tk.Tk() Example(root).pack(side="top", fill="both", expand=True) root.mainloop() ``` Here is a sample image `test.png` ![enter image description here](https://i.stack.imgur.com/MTM60.png) This will create a log file with some unwanted lines like this: ``` 07/23/2019 01:34:23 PM DEBUG: STREAM b'IHDR' 16 13 07/23/2019 01:34:23 PM DEBUG: STREAM b'IDAT' 41 6744 07/23/2019 01:34:23 PM INFO: This is a test log... ``` It should have been only: ``` 07/23/2019 01:34:23 PM INFO: This is a test log... ``` If you remove the image from the GUI, the problem goes away. Is there any workaround for this? EDIT: I apologize for not going through the [docs](https://docs.python.org/3/library/logging.html#logging.basicConfig) carefully. This was happening because the root module was created with DEBUG level when I first ran the script in Spyder with `level=DEBUG` and it was never changed by `basicConfig` subsequently when I changed the level to INFO. If I reload all the modules and libs (only by restarting the kernel in Spyder), the problem goes away, which means `level=INFO` would work perfectly as I want.
2019/07/23
[ "https://Stackoverflow.com/questions/57169697", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2365866/" ]
Add: ``` pil_logger = logging.getLogger('PIL') pil_logger.setLevel(logging.INFO) ``` (source: <https://github.com/camptocamp/pytest-odoo/issues/15#issuecomment-559203242>)
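In context, that snippet would sit right after the `basicConfig` call and before the GUI is built — a minimal sketch based on the code in the question (the `'PIL'` logger is the parent of the PNG plugin logger emitting the `STREAM` lines):

```
import logging

logging.basicConfig(filename='mylog.log', filemode='a',
                    format='%(asctime)s %(levelname)s: %(message)s',
                    datefmt='%m/%d/%Y %I:%M:%S %p',
                    level=logging.DEBUG)

# keep DEBUG for your own messages, but quieten PIL's loggers
logging.getLogger('PIL').setLevel(logging.INFO)

logging.info("This is a test log...")
```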
The problem is that the module already created the root logger, and now `basicConfig` uses this logger but cannot change the level of an existing logger.

Doc: [basicConfig](https://docs.python.org/3/library/logging.html#logging.basicConfig)

> This function does nothing if the root logger already has handlers configured for it.

You have to create your own logger (you can use `__name__` to make it unique) and then you can set the root level and separate levels for the file and console handlers. In your own logger you will not see warnings from other loggers.

```
if __name__ == "__main__":

    logger = logging.getLogger(__name__)
    logger.setLevel(logging.DEBUG) # root level

    # console
    ch = logging.StreamHandler()
    ch.setLevel(logging.DEBUG) # level only for console (if not collide with root level)
    logger.addHandler(ch)

    # file
    fh = logging.FileHandler('mylog.log')
    fh.setLevel(logging.DEBUG) # level only for file (if not collide with root level)
    logger.addHandler(fh)

    root = tk.Tk()
    Example(root).pack(side="top", fill="both", expand=True)
    root.mainloop()
```

Doc: [logging-advanced-tutorial](https://docs.python.org/3/howto/logging.html#logging-advanced-tutorial)
57,169,697
When I use the PIL.ImageTk library to load a png to my GUI and use logging to log some events, it creates some unwanted logs in DEBUG mode. I have tried changing the `level` of `logging` to `INFO` or `WARNING` (or higher). But that does not help: ``` logging.basicConfig(filename='mylog.log', filemode='a', format='%(asctime)s %(levelname)s: %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p', level=logging.INFO) ``` For example, the following code will create a log file with some unwanted lines: ``` from PIL import ImageTk, Image import logging try: import tkinter as tk # Python 3.x except ImportError: import Tkinter as tk # Python 2.x class Example(tk.Frame): def __init__(self, parent): tk.Frame.__init__(self, parent) for i in range(2): self.grid_rowconfigure(i, weight=1) self.grid_columnconfigure(0, weight=1) self.img = ImageTk.PhotoImage(Image.open('test.png')) logo = tk.Label(self, image = self.img) logo.grid(row=0, column=0, columnspan=2, sticky="nw", pady=5, padx=10) testLabel = tk.Label(self, width=8, text="This is a test") testLabel.grid(row=1, column=0, sticky='ew', padx=5, pady=5) logging.info("This is a test log...") if __name__ == "__main__": logging.basicConfig(filename='mylog.log', filemode='a', format='%(asctime)s %(levelname)s: %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p', level=logging.DEBUG) root = tk.Tk() Example(root).pack(side="top", fill="both", expand=True) root.mainloop() ``` Here is a sample image `test.png` ![enter image description here](https://i.stack.imgur.com/MTM60.png) This will create a log file with some unwanted lines like this: ``` 07/23/2019 01:34:23 PM DEBUG: STREAM b'IHDR' 16 13 07/23/2019 01:34:23 PM DEBUG: STREAM b'IDAT' 41 6744 07/23/2019 01:34:23 PM INFO: This is a test log... ``` It should have been only: ``` 07/23/2019 01:34:23 PM INFO: This is a test log... ``` If you remove the image from the GUI, the problem goes away. Is there any workaround for this? EDIT: I apologize for not going through the [docs](https://docs.python.org/3/library/logging.html#logging.basicConfig) carefully. This was happening because the root module was created with DEBUG level when I first ran the script in Spyder with `level=DEBUG` and it was never changed by `basicConfig` subsequently when I changed the level to INFO. If I reload all the modules and libs (only by restarting the kernel in Spyder), the problem goes away, which means `level=INFO` would work perfectly as I want.
2019/07/23
[ "https://Stackoverflow.com/questions/57169697", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2365866/" ]
Add: ``` pil_logger = logging.getLogger('PIL') pil_logger.setLevel(logging.INFO) ``` (source: <https://github.com/camptocamp/pytest-odoo/issues/15#issuecomment-559203242>)
Instead Of Using: ``` ... level=logging.DEBUG ... ``` Use: ``` ... level=logging.INFO ... ``` And Your File Will Be: > > DD/MM/YYYY HH:MM:SS PM INFO: This is a test log... > > >
60,414,356
I have a GUI application made using PySide2, and some of the major modules it uses are OpenVINO (2019), dlib, OpenCV-contrib (4.2.x) and Postgres (psycopg2). I am trying to freeze the application using PyInstaller (--debug is True). The program gets frozen without errors, but during execution I get the following error:

```
Fatal Python error: initfsencoding: unable to load the file system codec
ModuleNotFoundError: No module named 'encodings'
```

after which the application exits. I have tried many suggestions provided in other Stack Overflow questions/GitHub issues, but none of them have worked. I have Python version 3.7.6, but I have also tried with 3.6.8 (both a local installation and after creating a new venv in PyCharm). I have tried different versions of PyInstaller as well (below 3.5 it shows some other errors). I have tried PyInstaller 3.6 as well as both the develop branch and the master branch. I have checked my PYTHONPATH and PYTHONHOME env variables; they are pointing to Python's location. I have modified my spec file to include the necessary binaries, files, imports and folders. I can share it if needed, as well as any other logs from the build or execution. I would like to know what I should do to solve this, whether this issue is because of some component or is a PyInstaller issue, and if so, whether I should raise it on GitHub. My OS is Windows 10.
2020/02/26
[ "https://Stackoverflow.com/questions/60414356", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8243797/" ]
You changed the Python version, so you have to give a new path according to the Python version. Just remove all older versions and the current one, then reinstall a new Python v3.8.1.
You need to include `base_library.zip` in your application folder.
41,923,890
I'm attempting to create a simple selection sort program in Python without using any built-in functions. My problem right now is that my code only sorts the first element of the list. What's wrong? Here's my sort: ``` def selectionsort(list1): for x in range(len(list1)): tiniest = minimum(list1) swap(tiniest,x,list1) return(list1) ``` Here are the minimum and swap functions I'm using: ``` def swap(index1,index2,list1): TheList = list1 temp = TheList[index1] TheList[index1] = TheList[index2] TheList[index2] = temp return(TheList) def minimum(list1): small = list1[0] for i in list1: if i < small: small = i return small ``` An example of the output: List = [3,2,1,0], Output = [0,2,1,3]
2017/01/29
[ "https://Stackoverflow.com/questions/41923890", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6843896/" ]
Some simplification will make it more readable/comprehensible: ``` def swap(lst, i1, i2): lst[i1], lst[i2] = lst[i2], lst[i1] # easy-swapping by multi assignment def minimum(lst, s): # s: start index min_val, min_index = lst[s], s for i in range(s+1, len(lst)): if lst[i] < min_val: min_val, min_index = lst[i], i return min_index # return index of minimum, not minimum itself def selection_sort(lst): for i in range(len(lst)): swap(lst, i, minimum(lst, i)) # find min index starting from current and swap with current ```
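A quick usage check of the functions above, using the example list from the question:

```
lst = [3, 2, 1, 0]
selection_sort(lst)   # sorts the list in place
print(lst)            # [0, 1, 2, 3]
```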
It seems `minimum` returns the value of the smallest element in `list1`, but your `swap` expects an index instead. Try making `minimum` return the index instead of the value of the smallest element.
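A minimal sketch of that change; the extra `start` parameter is an addition here (not in the question's code) so that each pass only scans the still-unsorted tail of the list, which is also needed for the sort to finish correctly:

```
def minimum(list1, start):
    small = start                      # index of the smallest element seen so far
    for i in range(start + 1, len(list1)):
        if list1[i] < list1[small]:
            small = i
    return small                       # return the index, not the value

def selectionsort(list1):
    for x in range(len(list1)):
        tiniest = minimum(list1, x)    # index of the minimum of list1[x:]
        swap(tiniest, x, list1)        # swap() unchanged from the question
    return list1
```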
74,101,582
I am calculating concentrations from a file that has separate Date and Time columns. I set the date and time columns as indexes as they are not supposed to change. However, when I print the new dataframe it only prints the "date" one time, like this: ``` 55... Date Time 2020-12-30 10:37:04 0.000000 ... 10:37:07 0.000000 ... 10:37:10 0.000000 ... 10:37:13 0.000000 ... 10:37:16 0.000000 ... ``` What I need is for it to print the date for each row just like it is in the original dataframe. The date and time values should remain the same as in the original dataframe. The original dataframe is: ``` Date Time Accum. Scatter... 2020-12-30 10:37:04 3 0.789... 2020-12-30 10:37:07 3 0.814... 2020-12-30 10:37:10 3 0.787... 2020-12-30 10:37:13 3 0.803... 2020-12-30 10:37:16 3 0.798... 2020-12-30 10:37:19 3 0.818... 2020-12-30 10:37:22 3 0.809... ``` The code I have is: ``` df = pd.read_csv('UHSAS 20201230c.txt',delimiter=r'\s+',engine = 'python') df.set_index(['Date', 'Time'],inplace=True) concentration = df.iloc[:,13:].div(df.Sample,axis=0) print(concentration.to_string()) ``` I know it seems simple but I am new to pandas. Thank you in advance.
2022/10/17
[ "https://Stackoverflow.com/questions/74101582", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19096358/" ]
From the Pandas [to\_string() docs](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_string.html#pandas-dataframe-to-string). > > sparsify : bool, optional, default True > > Set to False for a DataFrame with a hierarchical index to print every multiindex key at each row. > > > --- ``` print(concentration.to_string(sparsify=False)) ``` --- ``` import pandas as pd d = {'a':[1,1,1,3,5,6], 'b':[1,2,3,4,5,6], 'c':[9,8,7,6,5,4]} df = pd.DataFrame(d) df = df.set_index(['a','b']) >>> print(df.to_string()) c a b 1 1 9 2 8 3 7 3 4 6 5 5 5 6 6 4 >>> print(df.to_string(sparsify=False)) c a b 1 1 9 1 2 8 1 3 7 3 4 6 5 5 5 6 6 4 ```
Remove `inplace=True` and chain `.reset_index()` on at the end. Here is the code (first two lines): ``` df = pd.read_csv('csv2.txt', delimiter=r'\s+', engine='python') df.set_index(['Date', 'Time']).reset_index() # cannot do the calculation step, as the Sample column is not present in the sample data ``` ``` Date Time Accum. Scatter... 0 2020-12-30 10:37:04 3 0.789... 1 2020-12-30 10:37:07 3 0.814... 2 2020-12-30 10:37:10 3 0.787... 3 2020-12-30 10:37:13 3 0.803... 4 2020-12-30 10:37:16 3 0.798... 5 2020-12-30 10:37:19 3 0.818... 6 2020-12-30 10:37:22 3 0.809... ```
65,463,877
I've installed Spark and components locally and I'm able to execute PySpark code in Jupyter, iPython and via spark-submit - however receiving the following WARNING's: ``` WARNING: An illegal reflective access operation has occurred WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/Users/ayubk/spark-3.0.1-bin-hadoop3.2/jars/spark-unsafe_2.12-3.0.1.jar) to constructor java.nio.DirectByteBuffer(long,int) WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations WARNING: All illegal access operations will be denied in a future release 20/12/27 07:54:01 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties ``` The .py file executes but should I be worried about these warnings? Don't want to start writing some code to later find that it doesn't execute down the line. FYI installed PySpark locally. Here's the code: `test.txt`: ``` This is a test file This is the second line - TEST This is the third line this IS THE fourth LINE - tEsT ``` `test.py`: ```py import pyspark sc = pyspark.SparkContext.getOrCreate() # sc = pyspark.SparkContext(master='local[*]') # or 'local[2]' ? lines = sc.textFile("test.txt") llist = lines.collect() for line in llist: print(line) print("SparkContext version:\t", sc.version) # return SparkContext version print("python version:\t", sc.pythonVer) # return python version print("master URL:\t", sc.master) # master URL to connect to print("path where spark is installed on worker nodes:\t", sc.sparkHome) # path where spark is installed on worker nodes print("name of spark user running SparkContext:\t", sc.sparkUser()) # name of spark user running SparkContext ``` PATHs: ```sh export SPARK_HOME=/Users/ayubk/spark-3.0.1-bin-hadoop3.2 export PATH=$SPARK_HOME:$PATH export PYTHONPATH=$SPARK_HOME/python:$PYTHONPATH export PYSPARK_DRIVER_PYTHON="jupyter" export PYSPARK_DRIVER_PYTHON_OPTS="notebook" export PYSPARK_PYTHON=python3 ``` bash terminal: ```sh $ spark-3.0.1-bin-hadoop3.2/bin/spark-submit test.py ``` ``` WARNING: An illegal reflective access operation has occurred WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/Users/ayubk/spark-3.0.1-bin-hadoop3.2/jars/spark-unsafe_2.12-3.0.1.jar) to constructor java.nio.DirectByteBuffer(long,int) WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations WARNING: All illegal access operations will be denied in a future release 20/12/27 08:00:00 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... 
using builtin-java classes where applicable Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties 20/12/27 08:00:01 INFO SparkContext: Running Spark version 3.0.1 20/12/27 08:00:01 INFO ResourceUtils: ============================================================== 20/12/27 08:00:01 INFO ResourceUtils: Resources for spark.driver: 20/12/27 08:00:01 INFO ResourceUtils: ============================================================== 20/12/27 08:00:01 INFO SparkContext: Submitted application: test.py 20/12/27 08:00:01 INFO SecurityManager: Changing view acls to: ayubk 20/12/27 08:00:01 INFO SecurityManager: Changing modify acls to: ayubk 20/12/27 08:00:01 INFO SecurityManager: Changing view acls groups to: 20/12/27 08:00:01 INFO SecurityManager: Changing modify acls groups to: 20/12/27 08:00:01 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ayubk); groups with view permissions: Set(); users with modify permissions: Set(ayubk); groups with modify permissions: Set() 20/12/27 08:00:02 INFO Utils: Successfully started service 'sparkDriver' on port 51254. 20/12/27 08:00:02 INFO SparkEnv: Registering MapOutputTracker 20/12/27 08:00:02 INFO SparkEnv: Registering BlockManagerMaster 20/12/27 08:00:02 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information 20/12/27 08:00:02 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up 20/12/27 08:00:02 INFO SparkEnv: Registering BlockManagerMasterHeartbeat 20/12/27 08:00:02 INFO DiskBlockManager: Created local directory at /private/var/folders/11/13mml0s91q39ckbt584szkp00000gn/T/blockmgr-a99e3df1-6d15-4158-8e09-568910c2b045 20/12/27 08:00:02 INFO MemoryStore: MemoryStore started with capacity 434.4 MiB 20/12/27 08:00:02 INFO SparkEnv: Registering OutputCommitCoordinator 20/12/27 08:00:02 INFO Utils: Successfully started service 'SparkUI' on port 4040. 20/12/27 08:00:02 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.1.101:4040 20/12/27 08:00:02 INFO Executor: Starting executor ID driver on host 192.168.1.101 20/12/27 08:00:02 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 51255. 
20/12/27 08:00:02 INFO NettyBlockTransferService: Server created on 192.168.1.101:51255 20/12/27 08:00:02 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy 20/12/27 08:00:02 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.1.101, 51255, None) 20/12/27 08:00:02 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.101:51255 with 434.4 MiB RAM, BlockManagerId(driver, 192.168.1.101, 51255, None) 20/12/27 08:00:02 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.1.101, 51255, None) 20/12/27 08:00:03 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.1.101, 51255, None) 20/12/27 08:00:03 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 175.8 KiB, free 434.2 MiB) 20/12/27 08:00:03 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 27.1 KiB, free 434.2 MiB) 20/12/27 08:00:03 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.1.101:51255 (size: 27.1 KiB, free: 434.4 MiB) 20/12/27 08:00:03 INFO SparkContext: Created broadcast 0 from textFile at NativeMethodAccessorImpl.java:0 20/12/27 08:00:04 INFO FileInputFormat: Total input files to process : 1 20/12/27 08:00:04 INFO SparkContext: Starting job: collect at /Users/ayubk/test.py:9 20/12/27 08:00:04 INFO DAGScheduler: Got job 0 (collect at /Users/ayubk/test.py:9) with 2 output partitions 20/12/27 08:00:04 INFO DAGScheduler: Final stage: ResultStage 0 (collect at /Users/ayubk/test.py:9) 20/12/27 08:00:04 INFO DAGScheduler: Parents of final stage: List() 20/12/27 08:00:04 INFO DAGScheduler: Missing parents: List() 20/12/27 08:00:04 INFO DAGScheduler: Submitting ResultStage 0 (test.txt MapPartitionsRDD[1] at textFile at NativeMethodAccessorImpl.java:0), which has no missing parents 20/12/27 08:00:04 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.0 KiB, free 434.2 MiB) 20/12/27 08:00:04 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.3 KiB, free 434.2 MiB) 20/12/27 08:00:04 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.1.101:51255 (size: 2.3 KiB, free: 434.4 MiB) 20/12/27 08:00:04 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1223 20/12/27 08:00:04 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (test.txt MapPartitionsRDD[1] at textFile at NativeMethodAccessorImpl.java:0) (first 15 tasks are for partitions Vector(0, 1)) 20/12/27 08:00:04 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks 20/12/27 08:00:04 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 192.168.1.101, executor driver, partition 0, PROCESS_LOCAL, 7367 bytes) 20/12/27 08:00:04 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, 192.168.1.101, executor driver, partition 1, PROCESS_LOCAL, 7367 bytes) 20/12/27 08:00:04 INFO Executor: Running task 0.0 in stage 0.0 (TID 0) 20/12/27 08:00:04 INFO Executor: Running task 1.0 in stage 0.0 (TID 1) 20/12/27 08:00:04 INFO HadoopRDD: Input split: file:/Users/ayubk/test.txt:52+52 20/12/27 08:00:04 INFO HadoopRDD: Input split: file:/Users/ayubk/test.txt:0+52 20/12/27 08:00:04 INFO Executor: Finished task 1.0 in stage 0.0 (TID 1). 956 bytes result sent to driver 20/12/27 08:00:04 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 
1003 bytes result sent to driver 20/12/27 08:00:04 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 156 ms on 192.168.1.101 (executor driver) (1/2) 20/12/27 08:00:04 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 142 ms on 192.168.1.101 (executor driver) (2/2) 20/12/27 08:00:04 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 20/12/27 08:00:04 INFO DAGScheduler: ResultStage 0 (collect at /Users/ayubk/test.py:9) finished in 0.241 s 20/12/27 08:00:04 INFO DAGScheduler: Job 0 is finished. Cancelling potential speculative or zombie tasks for this job 20/12/27 08:00:04 INFO TaskSchedulerImpl: Killing all running tasks in stage 0: Stage finished 20/12/27 08:00:04 INFO DAGScheduler: Job 0 finished: collect at /Users/ayubk/test.py:9, took 0.296115 s This is a test file This is the second line - TEST This is the third line this IS THE fourth LINE - tEsT SparkContext version: 3.0.1 python version: 3.7 master URL: local[*] path where spark is installed on worker nodes: None name of spark user running SparkContext: ayubk 20/12/27 08:00:04 INFO SparkContext: Invoking stop() from shutdown hook 20/12/27 08:00:04 INFO SparkUI: Stopped Spark web UI at http://192.168.1.101:4040 20/12/27 08:00:04 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped! 20/12/27 08:00:04 INFO MemoryStore: MemoryStore cleared 20/12/27 08:00:04 INFO BlockManager: BlockManager stopped 20/12/27 08:00:04 INFO BlockManagerMaster: BlockManagerMaster stopped 20/12/27 08:00:04 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped! 20/12/27 08:00:04 INFO SparkContext: Successfully stopped SparkContext 20/12/27 08:00:04 INFO ShutdownHookManager: Shutdown hook called 20/12/27 08:00:04 INFO ShutdownHookManager: Deleting directory /private/var/folders/11/13mml0s91q39ckbt584szkp00000gn/T/spark-eb41b5d5-16e2-4938-8049-8f923e6cb46c 20/12/27 08:00:04 INFO ShutdownHookManager: Deleting directory /private/var/folders/11/13mml0s91q39ckbt584szkp00000gn/T/spark-76d186fb-cf42-4898-92db-050a73f9fcb7 20/12/27 08:00:04 INFO ShutdownHookManager: Deleting directory /private/var/folders/11/13mml0s91q39ckbt584szkp00000gn/T/spark-eb41b5d5-16e2-4938-8049-8f923e6cb46c/pyspark-ee1fe6ab-a27f-4be6-b8d8-06594704da12 ``` **Edit:** Tried to install Java8: ```sh brew update brew tap adoptopenjdk/openjdk brew search jdk brew install --cask adoptopenjdk8 ``` Although when typing this `java -version`, I'm getting this: ``` openjdk version "13" 2019-09-17 OpenJDK Runtime Environment (build 13+33) OpenJDK 64-Bit Server VM (build 13+33, mixed mode, sharing) ```
2020/12/27
[ "https://Stackoverflow.com/questions/65463877", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9372996/" ]
Install Java 8 instead of Java 11, which is known to produce this sort of warning with Spark.
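If `java -version` keeps resolving to a newer JDK even after installing JDK 8 (as in the question's edit), one option is to point `JAVA_HOME` at the JDK 8 install before the SparkContext is created. A minimal sketch, assuming the AdoptOpenJDK 8 location below — check `/usr/libexec/java_home -V` on macOS for the actual path on your machine; when launching through `spark-submit`, export `JAVA_HOME` in the shell instead:

```
import os

# assumed AdoptOpenJDK 8 install path on macOS -- verify on your machine
os.environ["JAVA_HOME"] = "/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home"

import pyspark
sc = pyspark.SparkContext.getOrCreate()
print(sc.version)
```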
If you run your PySpark code often and you are tired (like me) of looking through all these warnings again and again before you see **your** output, bash/zsh process substitution comes to the rescue: ``` $ spark-3.0.1-bin-hadoop3.2/bin/spark-submit test.py 2> >(tail -n +8 >&2) | cat ``` Here we redirect STDERR of our process to `tail`, it skips everything before line #8 and redirects the result back to STDERR. For more details see [this Stack Overflow answer](https://stackoverflow.com/a/52575087/3206908). The output is then piped to `cat` that helps to wait for both our process and `tail` to finish, see [this Stack Exchange answer](https://unix.stackexchange.com/a/458218/417629) for more details. You can put it into shell script to avoid typing it each time you run `spark-submit`.
65,463,877
I've installed Spark and components locally and I'm able to execute PySpark code in Jupyter, iPython and via spark-submit - however receiving the following WARNING's: ``` WARNING: An illegal reflective access operation has occurred WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/Users/ayubk/spark-3.0.1-bin-hadoop3.2/jars/spark-unsafe_2.12-3.0.1.jar) to constructor java.nio.DirectByteBuffer(long,int) WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations WARNING: All illegal access operations will be denied in a future release 20/12/27 07:54:01 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties ``` The .py file executes but should I be worried about these warnings? Don't want to start writing some code to later find that it doesn't execute down the line. FYI installed PySpark locally. Here's the code: `test.txt`: ``` This is a test file This is the second line - TEST This is the third line this IS THE fourth LINE - tEsT ``` `test.py`: ```py import pyspark sc = pyspark.SparkContext.getOrCreate() # sc = pyspark.SparkContext(master='local[*]') # or 'local[2]' ? lines = sc.textFile("test.txt") llist = lines.collect() for line in llist: print(line) print("SparkContext version:\t", sc.version) # return SparkContext version print("python version:\t", sc.pythonVer) # return python version print("master URL:\t", sc.master) # master URL to connect to print("path where spark is installed on worker nodes:\t", sc.sparkHome) # path where spark is installed on worker nodes print("name of spark user running SparkContext:\t", sc.sparkUser()) # name of spark user running SparkContext ``` PATHs: ```sh export SPARK_HOME=/Users/ayubk/spark-3.0.1-bin-hadoop3.2 export PATH=$SPARK_HOME:$PATH export PYTHONPATH=$SPARK_HOME/python:$PYTHONPATH export PYSPARK_DRIVER_PYTHON="jupyter" export PYSPARK_DRIVER_PYTHON_OPTS="notebook" export PYSPARK_PYTHON=python3 ``` bash terminal: ```sh $ spark-3.0.1-bin-hadoop3.2/bin/spark-submit test.py ``` ``` WARNING: An illegal reflective access operation has occurred WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/Users/ayubk/spark-3.0.1-bin-hadoop3.2/jars/spark-unsafe_2.12-3.0.1.jar) to constructor java.nio.DirectByteBuffer(long,int) WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations WARNING: All illegal access operations will be denied in a future release 20/12/27 08:00:00 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... 
using builtin-java classes where applicable Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties 20/12/27 08:00:01 INFO SparkContext: Running Spark version 3.0.1 20/12/27 08:00:01 INFO ResourceUtils: ============================================================== 20/12/27 08:00:01 INFO ResourceUtils: Resources for spark.driver: 20/12/27 08:00:01 INFO ResourceUtils: ============================================================== 20/12/27 08:00:01 INFO SparkContext: Submitted application: test.py 20/12/27 08:00:01 INFO SecurityManager: Changing view acls to: ayubk 20/12/27 08:00:01 INFO SecurityManager: Changing modify acls to: ayubk 20/12/27 08:00:01 INFO SecurityManager: Changing view acls groups to: 20/12/27 08:00:01 INFO SecurityManager: Changing modify acls groups to: 20/12/27 08:00:01 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ayubk); groups with view permissions: Set(); users with modify permissions: Set(ayubk); groups with modify permissions: Set() 20/12/27 08:00:02 INFO Utils: Successfully started service 'sparkDriver' on port 51254. 20/12/27 08:00:02 INFO SparkEnv: Registering MapOutputTracker 20/12/27 08:00:02 INFO SparkEnv: Registering BlockManagerMaster 20/12/27 08:00:02 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information 20/12/27 08:00:02 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up 20/12/27 08:00:02 INFO SparkEnv: Registering BlockManagerMasterHeartbeat 20/12/27 08:00:02 INFO DiskBlockManager: Created local directory at /private/var/folders/11/13mml0s91q39ckbt584szkp00000gn/T/blockmgr-a99e3df1-6d15-4158-8e09-568910c2b045 20/12/27 08:00:02 INFO MemoryStore: MemoryStore started with capacity 434.4 MiB 20/12/27 08:00:02 INFO SparkEnv: Registering OutputCommitCoordinator 20/12/27 08:00:02 INFO Utils: Successfully started service 'SparkUI' on port 4040. 20/12/27 08:00:02 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.1.101:4040 20/12/27 08:00:02 INFO Executor: Starting executor ID driver on host 192.168.1.101 20/12/27 08:00:02 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 51255. 
20/12/27 08:00:02 INFO NettyBlockTransferService: Server created on 192.168.1.101:51255 20/12/27 08:00:02 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy 20/12/27 08:00:02 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.1.101, 51255, None) 20/12/27 08:00:02 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.101:51255 with 434.4 MiB RAM, BlockManagerId(driver, 192.168.1.101, 51255, None) 20/12/27 08:00:02 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.1.101, 51255, None) 20/12/27 08:00:03 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.1.101, 51255, None) 20/12/27 08:00:03 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 175.8 KiB, free 434.2 MiB) 20/12/27 08:00:03 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 27.1 KiB, free 434.2 MiB) 20/12/27 08:00:03 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.1.101:51255 (size: 27.1 KiB, free: 434.4 MiB) 20/12/27 08:00:03 INFO SparkContext: Created broadcast 0 from textFile at NativeMethodAccessorImpl.java:0 20/12/27 08:00:04 INFO FileInputFormat: Total input files to process : 1 20/12/27 08:00:04 INFO SparkContext: Starting job: collect at /Users/ayubk/test.py:9 20/12/27 08:00:04 INFO DAGScheduler: Got job 0 (collect at /Users/ayubk/test.py:9) with 2 output partitions 20/12/27 08:00:04 INFO DAGScheduler: Final stage: ResultStage 0 (collect at /Users/ayubk/test.py:9) 20/12/27 08:00:04 INFO DAGScheduler: Parents of final stage: List() 20/12/27 08:00:04 INFO DAGScheduler: Missing parents: List() 20/12/27 08:00:04 INFO DAGScheduler: Submitting ResultStage 0 (test.txt MapPartitionsRDD[1] at textFile at NativeMethodAccessorImpl.java:0), which has no missing parents 20/12/27 08:00:04 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.0 KiB, free 434.2 MiB) 20/12/27 08:00:04 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.3 KiB, free 434.2 MiB) 20/12/27 08:00:04 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.1.101:51255 (size: 2.3 KiB, free: 434.4 MiB) 20/12/27 08:00:04 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1223 20/12/27 08:00:04 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (test.txt MapPartitionsRDD[1] at textFile at NativeMethodAccessorImpl.java:0) (first 15 tasks are for partitions Vector(0, 1)) 20/12/27 08:00:04 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks 20/12/27 08:00:04 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 192.168.1.101, executor driver, partition 0, PROCESS_LOCAL, 7367 bytes) 20/12/27 08:00:04 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, 192.168.1.101, executor driver, partition 1, PROCESS_LOCAL, 7367 bytes) 20/12/27 08:00:04 INFO Executor: Running task 0.0 in stage 0.0 (TID 0) 20/12/27 08:00:04 INFO Executor: Running task 1.0 in stage 0.0 (TID 1) 20/12/27 08:00:04 INFO HadoopRDD: Input split: file:/Users/ayubk/test.txt:52+52 20/12/27 08:00:04 INFO HadoopRDD: Input split: file:/Users/ayubk/test.txt:0+52 20/12/27 08:00:04 INFO Executor: Finished task 1.0 in stage 0.0 (TID 1). 956 bytes result sent to driver 20/12/27 08:00:04 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 
1003 bytes result sent to driver 20/12/27 08:00:04 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 156 ms on 192.168.1.101 (executor driver) (1/2) 20/12/27 08:00:04 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 142 ms on 192.168.1.101 (executor driver) (2/2) 20/12/27 08:00:04 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 20/12/27 08:00:04 INFO DAGScheduler: ResultStage 0 (collect at /Users/ayubk/test.py:9) finished in 0.241 s 20/12/27 08:00:04 INFO DAGScheduler: Job 0 is finished. Cancelling potential speculative or zombie tasks for this job 20/12/27 08:00:04 INFO TaskSchedulerImpl: Killing all running tasks in stage 0: Stage finished 20/12/27 08:00:04 INFO DAGScheduler: Job 0 finished: collect at /Users/ayubk/test.py:9, took 0.296115 s This is a test file This is the second line - TEST This is the third line this IS THE fourth LINE - tEsT SparkContext version: 3.0.1 python version: 3.7 master URL: local[*] path where spark is installed on worker nodes: None name of spark user running SparkContext: ayubk 20/12/27 08:00:04 INFO SparkContext: Invoking stop() from shutdown hook 20/12/27 08:00:04 INFO SparkUI: Stopped Spark web UI at http://192.168.1.101:4040 20/12/27 08:00:04 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped! 20/12/27 08:00:04 INFO MemoryStore: MemoryStore cleared 20/12/27 08:00:04 INFO BlockManager: BlockManager stopped 20/12/27 08:00:04 INFO BlockManagerMaster: BlockManagerMaster stopped 20/12/27 08:00:04 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped! 20/12/27 08:00:04 INFO SparkContext: Successfully stopped SparkContext 20/12/27 08:00:04 INFO ShutdownHookManager: Shutdown hook called 20/12/27 08:00:04 INFO ShutdownHookManager: Deleting directory /private/var/folders/11/13mml0s91q39ckbt584szkp00000gn/T/spark-eb41b5d5-16e2-4938-8049-8f923e6cb46c 20/12/27 08:00:04 INFO ShutdownHookManager: Deleting directory /private/var/folders/11/13mml0s91q39ckbt584szkp00000gn/T/spark-76d186fb-cf42-4898-92db-050a73f9fcb7 20/12/27 08:00:04 INFO ShutdownHookManager: Deleting directory /private/var/folders/11/13mml0s91q39ckbt584szkp00000gn/T/spark-eb41b5d5-16e2-4938-8049-8f923e6cb46c/pyspark-ee1fe6ab-a27f-4be6-b8d8-06594704da12 ``` **Edit:** Tried to install Java8: ```sh brew update brew tap adoptopenjdk/openjdk brew search jdk brew install --cask adoptopenjdk8 ``` Although when typing this `java -version`, I'm getting this: ``` openjdk version "13" 2019-09-17 OpenJDK Runtime Environment (build 13+33) OpenJDK 64-Bit Server VM (build 13+33, mixed mode, sharing) ```
2020/12/27
[ "https://Stackoverflow.com/questions/65463877", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9372996/" ]
Install Java 8 instead of Java 11, which is known to produce this sort of warning with Spark.
If you upgrade to the latest JDBC driver, you won't see this warning with Java 11. I upgraded to org.postgresql:postgresql:42.3.3.
65,463,877
I've installed Spark and components locally and I'm able to execute PySpark code in Jupyter, iPython and via spark-submit - however receiving the following WARNING's: ``` WARNING: An illegal reflective access operation has occurred WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/Users/ayubk/spark-3.0.1-bin-hadoop3.2/jars/spark-unsafe_2.12-3.0.1.jar) to constructor java.nio.DirectByteBuffer(long,int) WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations WARNING: All illegal access operations will be denied in a future release 20/12/27 07:54:01 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties ``` The .py file executes but should I be worried about these warnings? Don't want to start writing some code to later find that it doesn't execute down the line. FYI installed PySpark locally. Here's the code: `test.txt`: ``` This is a test file This is the second line - TEST This is the third line this IS THE fourth LINE - tEsT ``` `test.py`: ```py import pyspark sc = pyspark.SparkContext.getOrCreate() # sc = pyspark.SparkContext(master='local[*]') # or 'local[2]' ? lines = sc.textFile("test.txt") llist = lines.collect() for line in llist: print(line) print("SparkContext version:\t", sc.version) # return SparkContext version print("python version:\t", sc.pythonVer) # return python version print("master URL:\t", sc.master) # master URL to connect to print("path where spark is installed on worker nodes:\t", sc.sparkHome) # path where spark is installed on worker nodes print("name of spark user running SparkContext:\t", sc.sparkUser()) # name of spark user running SparkContext ``` PATHs: ```sh export SPARK_HOME=/Users/ayubk/spark-3.0.1-bin-hadoop3.2 export PATH=$SPARK_HOME:$PATH export PYTHONPATH=$SPARK_HOME/python:$PYTHONPATH export PYSPARK_DRIVER_PYTHON="jupyter" export PYSPARK_DRIVER_PYTHON_OPTS="notebook" export PYSPARK_PYTHON=python3 ``` bash terminal: ```sh $ spark-3.0.1-bin-hadoop3.2/bin/spark-submit test.py ``` ``` WARNING: An illegal reflective access operation has occurred WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/Users/ayubk/spark-3.0.1-bin-hadoop3.2/jars/spark-unsafe_2.12-3.0.1.jar) to constructor java.nio.DirectByteBuffer(long,int) WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations WARNING: All illegal access operations will be denied in a future release 20/12/27 08:00:00 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... 
using builtin-java classes where applicable Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties 20/12/27 08:00:01 INFO SparkContext: Running Spark version 3.0.1 20/12/27 08:00:01 INFO ResourceUtils: ============================================================== 20/12/27 08:00:01 INFO ResourceUtils: Resources for spark.driver: 20/12/27 08:00:01 INFO ResourceUtils: ============================================================== 20/12/27 08:00:01 INFO SparkContext: Submitted application: test.py 20/12/27 08:00:01 INFO SecurityManager: Changing view acls to: ayubk 20/12/27 08:00:01 INFO SecurityManager: Changing modify acls to: ayubk 20/12/27 08:00:01 INFO SecurityManager: Changing view acls groups to: 20/12/27 08:00:01 INFO SecurityManager: Changing modify acls groups to: 20/12/27 08:00:01 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ayubk); groups with view permissions: Set(); users with modify permissions: Set(ayubk); groups with modify permissions: Set() 20/12/27 08:00:02 INFO Utils: Successfully started service 'sparkDriver' on port 51254. 20/12/27 08:00:02 INFO SparkEnv: Registering MapOutputTracker 20/12/27 08:00:02 INFO SparkEnv: Registering BlockManagerMaster 20/12/27 08:00:02 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information 20/12/27 08:00:02 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up 20/12/27 08:00:02 INFO SparkEnv: Registering BlockManagerMasterHeartbeat 20/12/27 08:00:02 INFO DiskBlockManager: Created local directory at /private/var/folders/11/13mml0s91q39ckbt584szkp00000gn/T/blockmgr-a99e3df1-6d15-4158-8e09-568910c2b045 20/12/27 08:00:02 INFO MemoryStore: MemoryStore started with capacity 434.4 MiB 20/12/27 08:00:02 INFO SparkEnv: Registering OutputCommitCoordinator 20/12/27 08:00:02 INFO Utils: Successfully started service 'SparkUI' on port 4040. 20/12/27 08:00:02 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.1.101:4040 20/12/27 08:00:02 INFO Executor: Starting executor ID driver on host 192.168.1.101 20/12/27 08:00:02 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 51255. 
20/12/27 08:00:02 INFO NettyBlockTransferService: Server created on 192.168.1.101:51255 20/12/27 08:00:02 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy 20/12/27 08:00:02 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.1.101, 51255, None) 20/12/27 08:00:02 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.101:51255 with 434.4 MiB RAM, BlockManagerId(driver, 192.168.1.101, 51255, None) 20/12/27 08:00:02 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.1.101, 51255, None) 20/12/27 08:00:03 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.1.101, 51255, None) 20/12/27 08:00:03 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 175.8 KiB, free 434.2 MiB) 20/12/27 08:00:03 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 27.1 KiB, free 434.2 MiB) 20/12/27 08:00:03 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.1.101:51255 (size: 27.1 KiB, free: 434.4 MiB) 20/12/27 08:00:03 INFO SparkContext: Created broadcast 0 from textFile at NativeMethodAccessorImpl.java:0 20/12/27 08:00:04 INFO FileInputFormat: Total input files to process : 1 20/12/27 08:00:04 INFO SparkContext: Starting job: collect at /Users/ayubk/test.py:9 20/12/27 08:00:04 INFO DAGScheduler: Got job 0 (collect at /Users/ayubk/test.py:9) with 2 output partitions 20/12/27 08:00:04 INFO DAGScheduler: Final stage: ResultStage 0 (collect at /Users/ayubk/test.py:9) 20/12/27 08:00:04 INFO DAGScheduler: Parents of final stage: List() 20/12/27 08:00:04 INFO DAGScheduler: Missing parents: List() 20/12/27 08:00:04 INFO DAGScheduler: Submitting ResultStage 0 (test.txt MapPartitionsRDD[1] at textFile at NativeMethodAccessorImpl.java:0), which has no missing parents 20/12/27 08:00:04 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.0 KiB, free 434.2 MiB) 20/12/27 08:00:04 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.3 KiB, free 434.2 MiB) 20/12/27 08:00:04 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.1.101:51255 (size: 2.3 KiB, free: 434.4 MiB) 20/12/27 08:00:04 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1223 20/12/27 08:00:04 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (test.txt MapPartitionsRDD[1] at textFile at NativeMethodAccessorImpl.java:0) (first 15 tasks are for partitions Vector(0, 1)) 20/12/27 08:00:04 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks 20/12/27 08:00:04 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 192.168.1.101, executor driver, partition 0, PROCESS_LOCAL, 7367 bytes) 20/12/27 08:00:04 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, 192.168.1.101, executor driver, partition 1, PROCESS_LOCAL, 7367 bytes) 20/12/27 08:00:04 INFO Executor: Running task 0.0 in stage 0.0 (TID 0) 20/12/27 08:00:04 INFO Executor: Running task 1.0 in stage 0.0 (TID 1) 20/12/27 08:00:04 INFO HadoopRDD: Input split: file:/Users/ayubk/test.txt:52+52 20/12/27 08:00:04 INFO HadoopRDD: Input split: file:/Users/ayubk/test.txt:0+52 20/12/27 08:00:04 INFO Executor: Finished task 1.0 in stage 0.0 (TID 1). 956 bytes result sent to driver 20/12/27 08:00:04 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 
1003 bytes result sent to driver 20/12/27 08:00:04 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 156 ms on 192.168.1.101 (executor driver) (1/2) 20/12/27 08:00:04 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 142 ms on 192.168.1.101 (executor driver) (2/2) 20/12/27 08:00:04 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 20/12/27 08:00:04 INFO DAGScheduler: ResultStage 0 (collect at /Users/ayubk/test.py:9) finished in 0.241 s 20/12/27 08:00:04 INFO DAGScheduler: Job 0 is finished. Cancelling potential speculative or zombie tasks for this job 20/12/27 08:00:04 INFO TaskSchedulerImpl: Killing all running tasks in stage 0: Stage finished 20/12/27 08:00:04 INFO DAGScheduler: Job 0 finished: collect at /Users/ayubk/test.py:9, took 0.296115 s This is a test file This is the second line - TEST This is the third line this IS THE fourth LINE - tEsT SparkContext version: 3.0.1 python version: 3.7 master URL: local[*] path where spark is installed on worker nodes: None name of spark user running SparkContext: ayubk 20/12/27 08:00:04 INFO SparkContext: Invoking stop() from shutdown hook 20/12/27 08:00:04 INFO SparkUI: Stopped Spark web UI at http://192.168.1.101:4040 20/12/27 08:00:04 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped! 20/12/27 08:00:04 INFO MemoryStore: MemoryStore cleared 20/12/27 08:00:04 INFO BlockManager: BlockManager stopped 20/12/27 08:00:04 INFO BlockManagerMaster: BlockManagerMaster stopped 20/12/27 08:00:04 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped! 20/12/27 08:00:04 INFO SparkContext: Successfully stopped SparkContext 20/12/27 08:00:04 INFO ShutdownHookManager: Shutdown hook called 20/12/27 08:00:04 INFO ShutdownHookManager: Deleting directory /private/var/folders/11/13mml0s91q39ckbt584szkp00000gn/T/spark-eb41b5d5-16e2-4938-8049-8f923e6cb46c 20/12/27 08:00:04 INFO ShutdownHookManager: Deleting directory /private/var/folders/11/13mml0s91q39ckbt584szkp00000gn/T/spark-76d186fb-cf42-4898-92db-050a73f9fcb7 20/12/27 08:00:04 INFO ShutdownHookManager: Deleting directory /private/var/folders/11/13mml0s91q39ckbt584szkp00000gn/T/spark-eb41b5d5-16e2-4938-8049-8f923e6cb46c/pyspark-ee1fe6ab-a27f-4be6-b8d8-06594704da12 ``` **Edit:** Tried to install Java8: ```sh brew update brew tap adoptopenjdk/openjdk brew search jdk brew install --cask adoptopenjdk8 ``` Although when typing this `java -version`, I'm getting this: ``` openjdk version "13" 2019-09-17 OpenJDK Runtime Environment (build 13+33) OpenJDK 64-Bit Server VM (build 13+33, mixed mode, sharing) ```
2020/12/27
[ "https://Stackoverflow.com/questions/65463877", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9372996/" ]
If you run your PySpark code often and you are tired (like me) of looking through all these warnings again and again before you see **your** output, bash/zsh process substitution comes to the rescue: ``` $ spark-3.0.1-bin-hadoop3.2/bin/spark-submit test.py 2> >(tail -n +8 >&2) | cat ``` Here we redirect STDERR of our process to `tail`, it skips everything before line #8 and redirects the result back to STDERR. For more details see [this Stack Overflow answer](https://stackoverflow.com/a/52575087/3206908). The output is then piped to `cat` that helps to wait for both our process and `tail` to finish, see [this Stack Exchange answer](https://unix.stackexchange.com/a/458218/417629) for more details. You can put it into shell script to avoid typing it each time you run `spark-submit`.
If you upgrade to the latest JDBC driver, you won't see this warning with Java 11. I upgraded to org.postgresql:postgresql:42.3.3.
68,959,506
Let's say I have a CSV file that looks like this: ``` name,country,email john,US,john@fake.com brad,UK,brad@fake.com James,US,james@fake.com ``` I want to search for any country that equals US and, if it exists, print their email address. How would I do this in Python without using pandas?
2021/08/27
[ "https://Stackoverflow.com/questions/68959506", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12260632/" ]
You can do something like: ``` with open('file.csv', 'r') as f: f.readline() for line in f: data = line.strip().split(',') if data[1] == 'US': print(data[2]) ``` The first `readline()` skips the header row; `data[1]` is the country column and `data[2]` is the email column, so this prints the email address of every US row.
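Since the file has a header row, the standard-library `csv` module can also do this without manual splitting. A small sketch, using the column names from the question's sample file:

```
import csv

with open('file.csv', newline='') as f:
    for row in csv.DictReader(f):       # each row becomes a dict keyed by the header
        if row['country'] == 'US':
            print(row['email'])
```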
So to read a CSV as a dataframe you need: ``` import pandas as pd df = pd.read_csv("filename.csv") ``` Next, I will generate a dummy df and show you how to get the US users and then print their emails in a list: ``` df = pd.DataFrame({"Name":['jhon','brad','james'], "Country":['US','UK','US'], "email":['john@fake.com','brad@fake.com','james@fake.com']}) US_USERS = df.loc[df['Country']=='US'] US_emails = US_USERS['email'].tolist() print(US_USERS) print(US_emails) ``` you should get: ``` Name Country email 0 jhon US john@fake.com 2 james US james@fake.com ['john@fake.com', 'james@fake.com'] ```
64,167,192
I am a beginner programmer in Python and when I run this code ``` from PIL import Image im = Image.open(r'C:\\images\\imagetest.png') width, height = im.size print(width, height) im.show() ``` I get this error: ``` im = Image.open(r'C:\\images\\imagetest.png') File "C:\Users\danie\AppData\Local\Programs\Python\Python38\lib\site-packages\PIL\Image.py", line 2878, in open fp = builtins.open(filename, "rb") FileNotFoundError: [Errno 2] No such file or directory: 'C:\\\\images\\\\imagetest.png' ``` PS C:\Users\danie\vscode projects>
2020/10/02
[ "https://Stackoverflow.com/questions/64167192", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14378032/" ]
Raw strings (the `r` prefix) don't need their backslashes escaped. Remove the `r` before the string literal, or remove the double backslashes.
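In other words, any of these spellings should resolve to the same path (a small sketch using the file from the question):

```
from PIL import Image

im = Image.open(r'C:\images\imagetest.png')    # raw string, single backslashes
im = Image.open('C:\\images\\imagetest.png')   # normal string, escaped backslashes
im = Image.open('C:/images/imagetest.png')     # forward slashes also work on Windows
```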
The path of the image is not correct. You have to write it like this: im = Image.open(r'C:\images\imagetest.png')
64,167,192
I am a beginner programmer in Python and when I run this code ``` from PIL import Image im = Image.open(r'C:\\images\\imagetest.png') width, height = im.size print(width, height) im.show() ``` I get this error: ``` im = Image.open(r'C:\\images\\imagetest.png') File "C:\Users\danie\AppData\Local\Programs\Python\Python38\lib\site-packages\PIL\Image.py", line 2878, in open fp = builtins.open(filename, "rb") FileNotFoundError: [Errno 2] No such file or directory: 'C:\\\\images\\\\imagetest.png' ``` PS C:\Users\danie\vscode projects>
2020/10/02
[ "https://Stackoverflow.com/questions/64167192", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14378032/" ]
Raw strings (the `r` prefix) don't need their backslashes escaped. Remove the `r` before the string literal, or remove the double backslashes.
As the error says, the path you gave is incorrect. This ``` im = Image.open(r'C:\\images\\imagetest.png') ``` has issues such as the doubled "\" inside a raw string, which is not the proper format. Sometimes you also have to give it the full path. Example of a path: ``` C:\Users\Hamza\Downloads\me.jpg ``` If you're not sure about the path, copy it from here: [example path for an image](https://i.stack.imgur.com/ncc9X.png)
44,517,641
I want to connect to my database from the Python shell ``` import MySQLdb db = MySQLdb.connect(host="localhost",user="milenko",passwd="********",db="classicmodels") ``` but I get ``` File "/home/milenko/anaconda3/lib/python3.6/site-packages/MySQLdb/connections.py", line 204, in __init__ super(Connection, self).__init__(*args, **kwargs2) _mysql_exceptions.OperationalError: (1045, "Access denied for user 'milenko'@'localhost' (using password: YES)") ``` I have created the user ``` CREATE USER 'milenko'@'localhost' IDENTIFIED BY '8888888'; ``` but the problem is still there. Databases: ``` mysql> show databases; +--------------------+ | Database | +--------------------+ | information_schema | | ap1 | | classicmodels | | mysql | | performance_schema | | sys | +--------------------+ 6 rows in set (0.00 sec) ``` What does this mean? How do I resolve this problem?
2017/06/13
[ "https://Stackoverflow.com/questions/44517641", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8006605/" ]
This generally means you do not have permission to access the server from that particular machine. To fix it, either create the user 'milenko'@'localhost' or 'milenko'@'%' using your root server user, or grant your user privileges on that particular db.
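A minimal sketch of the grant step, run through MySQLdb here only to stay in Python — normally you would type the GRANT statement directly in the mysql client; the root password below is a placeholder:

```
import MySQLdb

# connect as an account that is allowed to manage privileges (assumed: root)
admin = MySQLdb.connect(host="localhost", user="root", passwd="<root password>")
cur = admin.cursor()
cur.execute("GRANT ALL PRIVILEGES ON classicmodels.* TO 'milenko'@'localhost'")
cur.execute("FLUSH PRIVILEGES")
admin.close()
```

After that, the original `MySQLdb.connect(..., db="classicmodels")` call from the question should succeed, provided the password matches the one used in CREATE USER.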
Make sure your local server (e.g. WampServer) is turned on.
56,576,470
I am confused about the runtime of the binary operators on sets in Python, e.g. `set1 | set2`. Does it take linear time, like `set1 - set2`, or does it take quadratic time because each element in set1 has to do a bitwise or with each element of set2, or vice versa? I went through some websites but I was not able to find a clear answer on this. ref: <https://www.geeksforgeeks.org/sets-in-python/>
2019/06/13
[ "https://Stackoverflow.com/questions/56576470", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9072927/" ]
I had the same problem. Following command worked for me: After the `./gradlew bundleRelease` command we get a *.aab* version of our app. To get APK, you should run the app with release version on any device with the below command. * Make sure you have connected an android device * For the production ready app, firstly you have to remove the previous app from the device Run this command in `your-project/`: ``` react-native run-android --variant=release ``` Then APK can be found in `android/app/build/outputs/apk/release` Hope this helps
Step 1: `./gradlew bundleRelease` Step 2: `react-native run-android --variant=release` Make sure you have connected an Android device. For the production-ready app, you first have to remove the previous app from the device.
56,576,470
I am confused about the runtime of the binary operators on sets in Python, e.g. `set1 | set2`. Does it take linear time, like `set1 - set2`, or does it take quadratic time because each element in set1 has to do a bitwise or with each element of set2, or vice versa? I went through some websites but I was not able to find a clear answer on this. ref: <https://www.geeksforgeeks.org/sets-in-python/>
2019/06/13
[ "https://Stackoverflow.com/questions/56576470", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9072927/" ]
I had the same problem. Following command worked for me: After the `./gradlew bundleRelease` command we get a *.aab* version of our app. To get APK, you should run the app with release version on any device with the below command. * Make sure you have connected an android device * For the production ready app, firstly you have to remove the previous app from the device Run this command in `your-project/`: ``` react-native run-android --variant=release ``` Then APK can be found in `android/app/build/outputs/apk/release` Hope this helps
Use `gradlew bundleRelease` to generate an app bundle (.aab file) and `gradlew assembleRelease` to generate an apk (.apk file). To install a release on your emulator, use `react-native run-android --variant=release`. I hope this helps
56,576,470
I am confused about the runtime of the binary operators on sets in Python, e.g. `set1 | set2`. Does it take linear time, like `set1 - set2`, or does it take quadratic time because each element in set1 has to do a bitwise or with each element of set2, or vice versa? I went through some websites but I was not able to find a clear answer on this. ref: <https://www.geeksforgeeks.org/sets-in-python/>
2019/06/13
[ "https://Stackoverflow.com/questions/56576470", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9072927/" ]
I made a file called `build.sh`. When I want to release a new Android version of a React Native app, I just run `sh ./build.sh` in the terminal. My shell script in the `build.sh` file: ``` npx jetify && cd android && ./gradlew clean && ./gradlew assembleRelease && ./gradlew bundleRelease && cd .. ```
The following should solve your issue: Open Android Studio and select "build bundle(s)" from Build/Build Bundle(s)/Apk(s) ![Refer this image](https://i.stack.imgur.com/7BO2U.png) Then, open your console and run ``` cd android && ./gradlew bundleRelease ``` from your project's root. You may encounter an error such as > > Cannot add task 'wrapper' as a task with that name already exists., because overriding built-in tasks are deprecated in 4.8 and produces an error. > > > To prevent it, please update your `android/build.gradle`, as follows: FROM: ``` task wrapper(type: Wrapper) { gradleVersion = '4.4' distributionUrl = distributionUrl.replace("bin", "all") } ``` TO: ``` wrapper { gradleVersion = '4.4' distributionUrl = distributionUrl.replace("bin", "all") } ``` Lastly, don't forget to convert your `.aab` file to `.apk` through `bundletool` in order to test locally your app (exactly as an APK file)
56,576,470
I am confused about the runtime of the binary operators on sets in Python, e.g. `set1 | set2`. Does it take linear time, like `set1 - set2`, or does it take quadratic time because each element in set1 has to do a bitwise or with each element of set2, or vice versa? I went through some websites but I was not able to find a clear answer on this. ref: <https://www.geeksforgeeks.org/sets-in-python/>
2019/06/13
[ "https://Stackoverflow.com/questions/56576470", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9072927/" ]
I had the same problem. Following command worked for me: After the `./gradlew bundleRelease` command we get a *.aab* version of our app. To get APK, you should run the app with release version on any device with the below command. * Make sure you have connected an android device * For the production ready app, firstly you have to remove the previous app from the device Run this command in `your-project/`: ``` react-native run-android --variant=release ``` Then APK can be found in `android/app/build/outputs/apk/release` Hope this helps
I made a file called `build.sh`. When I want to release a new Android version of a React Native app, I just run `sh ./build.sh` in the terminal. My shell script in the `build.sh` file: ``` npx jetify && cd android && ./gradlew clean && ./gradlew assembleRelease && ./gradlew bundleRelease && cd .. ```
56,576,470
I am confused about the runtime of the binary operators on sets in Python, e.g. `set1 | set2`. Does it take linear time, like `set1 - set2`, or does it take quadratic time because each element in set1 has to do a bitwise or with each element of set2, or vice versa? I went through some websites but I was not able to find a clear answer on this. ref: <https://www.geeksforgeeks.org/sets-in-python/>
2019/06/13
[ "https://Stackoverflow.com/questions/56576470", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9072927/" ]
You can simply open the **Build Variants** window on the bottom left hand corner of Android Studio and choose *release* as the current variant: [![enter image description here](https://i.stack.imgur.com/fs3ln.png)](https://i.stack.imgur.com/fs3ln.png)
Step 1: `./gradlew bundleRelease` Step 2: `react-native run-android --variant=release` Make sure you have connected an Android device. For the production-ready app, you first have to remove the previous app from the device.
56,576,470
I am confused about the runtime of the binary operators on sets in Python, e.g. `set1 | set2`. Does it take linear time, like `set1 - set2`, or does it take quadratic time because each element in set1 has to do a bitwise or with each element of set2, or vice versa? I went through some websites but I was not able to find a clear answer on this. ref: <https://www.geeksforgeeks.org/sets-in-python/>
2019/06/13
[ "https://Stackoverflow.com/questions/56576470", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9072927/" ]
I made a file called `build.sh`. When I want to release a new Android version of a React Native app, I just run `sh ./build.sh` in the terminal. My shell script in the `build.sh` file: ``` npx jetify && cd android && ./gradlew clean && ./gradlew assembleRelease && ./gradlew bundleRelease && cd .. ```
You can simply open the **Build Variants** window on the bottom left hand corner of Android Studio and choose *release* as the current variant: [![enter image description here](https://i.stack.imgur.com/fs3ln.png)](https://i.stack.imgur.com/fs3ln.png)
56,576,470
I am confused about the runtime of the binary operators on sets in Python, e.g. `set1 | set2`. Does it take linear time, like `set1 - set2`, or does it take quadratic time because each element in set1 has to do a bitwise or with each element of set2, or vice versa? I went through some websites but I was not able to find a clear answer on this. ref: <https://www.geeksforgeeks.org/sets-in-python/>
2019/06/13
[ "https://Stackoverflow.com/questions/56576470", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9072927/" ]
Use `gradlew bundleRelease` to generate an app bundle (.aab file) and `gradlew assembleRelease` to generate an apk (.apk file). To install a release on your emulator, use `react-native run-android --variant=release`. I hope this helps
Step 1: `./gradlew bundleRelease` Step 2: `react-native run-android --variant=release` Make sure you have connected an Android device. For the production-ready app, you first have to remove the previous app from the device.
56,576,470
I am confused about the runtime of the binary operators on sets in Python, e.g. `set1 | set2`. Does it take linear time, like `set1 - set2`, or does it take quadratic time because each element in set1 has to do a bitwise or with each element of set2, or vice versa? I went through some websites but I was not able to find a clear answer on this. ref: <https://www.geeksforgeeks.org/sets-in-python/>
2019/06/13
[ "https://Stackoverflow.com/questions/56576470", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9072927/" ]
The short answer: use `gradlew assembleRelease` instead. The not-so-short answer :) The command you are using, `gradlew bundleRelease`, builds an Android App Bundle. Read this: [Difference between apk (.apk) and app bundle (.aab)](https://stackoverflow.com/questions/52059339/difference-between-apk-apk-and-app-bundle-aab) and this: <https://developer.android.com/guide/app-bundle> The AAB format is newer and not all stores support it.
Use `gradlew bundleRelease` to generate an app bundle (.aab file) and `gradlew assembleRelease` to generate an apk (.apk file). To install a release on your emulator, use `react-native run-android --variant=release`. I hope this helps
56,576,470
I am confused about the runtime of the binary operators on sets in Python, e.g. `set1 | set2`. Does it take linear time, like `set1 - set2`, or does it take quadratic time because each element in set1 has to do a bitwise or with each element of set2, or vice versa? I went through some websites but I was not able to find a clear answer on this. ref: <https://www.geeksforgeeks.org/sets-in-python/>
2019/06/13
[ "https://Stackoverflow.com/questions/56576470", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9072927/" ]
The short answer: use `gradlew assembleRelease` instead. The not-so-short answer :) The command you are using, `gradlew bundleRelease`, builds an Android App Bundle. Read this: [Difference between apk (.apk) and app bundle (.aab)](https://stackoverflow.com/questions/52059339/difference-between-apk-apk-and-app-bundle-aab) and this: <https://developer.android.com/guide/app-bundle> The AAB format is newer and not all stores support it.
I made a file called `build.sh`. When I want to release a new Android version of a React Native app, I just run `sh ./build.sh` in the terminal. My shell script in the `build.sh` file: ``` npx jetify && cd android && ./gradlew clean && ./gradlew assembleRelease && ./gradlew bundleRelease && cd .. ```
56,576,470
I am confused about the runtime of the binary operators on sets in Python, e.g. `set1 | set2`. Does it take linear time, like `set1 - set2`, or does it take quadratic time because each element in set1 has to do a bitwise or with each element of set2, or vice versa? I went through some websites but I was not able to find a clear answer on this. ref: <https://www.geeksforgeeks.org/sets-in-python/>
2019/06/13
[ "https://Stackoverflow.com/questions/56576470", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9072927/" ]
Use `gradlew bundleRelease` to generate an app bundle (.aab file) and `gradlew assembleRelease` to generate an apk (.apk file). To install a release on your emulator, use `react-native run-android --variant=release`. I hope this helps
To cut a long story short, the command `gradlew bundleRelease` is used to generate an .aab file, whereas the command `gradlew assembleRelease` is used to generate an .apk file, so use the command accordingly.
31,977,245
Let's say I have a web bot written in python that sends data via POST request to a web site. The data is pulled from a text file line by line and passed into an array. Currently, I'm testing each element in the array through a simple for-loop. How can I effectively implement multi-threading to iterate through the data more quickly? Let's say the text file is fairly large. Would attaching a thread to each request be smart? What do you think the best approach to this would be?

```
with open("c:\file.txt") as file:
    dataArr = file.read().splitlines()

dataLen = len(open("c:\file.txt").readlines())-1

def test(data):
    #This next part is pseudo code
    result = testData('www.example.com', data)
    if result == 'whatever':
        print 'success'

for i in range(0, dataLen):
    test(dataArr[i])
```

I was thinking of something along the lines of this, but I feel it would cause issues depending on the size of the text file. I know there is software that exists which allows the end-user to specify the number of threads when working with large amounts of data. I'm not entirely sure of how that works, but that's something I'd like to implement.

```
import threading

with open("c:\file.txt") as file:
    dataArr = file.read().splitlines()

dataLen = len(open("c:\file.txt").readlines())-1

def test(data):
    #This next part is pseudo code
    result = testData('www.example.com', data)
    if result == 'whatever':
        print 'success'

jobs = []
for x in range(0, dataLen):
    thread = threading.Thread(target=test, args=(dataArr[x]))
    jobs.append(thread)

for j in jobs:
    j.start()

for j in jobs:
    j.join()
```
2015/08/12
[ "https://Stackoverflow.com/questions/31977245", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2313602/" ]
This sounds like a recipe for `multiprocessing.Pool`. See here: <https://docs.python.org/2/library/multiprocessing.html#introduction>

```
from multiprocessing import Pool

def test(num):
    if num % 2 == 0:
        return True
    else:
        return False

if __name__ == "__main__":
    list_of_datas_to_test = [0, 1, 2, 3, 4, 5, 6, 7, 8]
    p = Pool(4)  # create 4 processes to do our work
    print(p.map(test, list_of_datas_to_test))  # distribute our work
```

Output looks like:

```
[True, False, True, False, True, False, True, False, True]
```
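A sketch of how the same worker-pool idea could be applied to the asker's own loop. `testData` here is only a placeholder for the asker's real POST logic (it was pseudo code in the question), and the pool size of 8 is an arbitrary example; the pool caps how many workers run at once instead of starting one thread per line:

```
from multiprocessing import Pool

def testData(url, data):
    # placeholder for the asker's real POST logic (pseudo code in the question)
    return 'whatever'

def test(data):
    return testData('www.example.com', data) == 'whatever'

if __name__ == "__main__":
    with open(r"c:\file.txt") as f:
        dataArr = f.read().splitlines()

    p = Pool(8)                     # cap the number of workers instead of one thread per line
    results = p.map(test, dataArr)  # distribute the lines across the workers
    p.close()
    p.join()
    print(sum(results), "successes out of", len(results))
```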
Threads are slow in python because of the [Global Interpreter Lock](https://wiki.python.org/moin/GlobalInterpreterLock). You should consider using multiple processes with the Python `multiprocessing` module instead of threads. Using multiple processes can increase the "ramp up" time of your code, as spawning a real process takes more time than a light thread, but due to the GIL, `threading` won't do what you're after. [Here](http://pymotw.com/2/multiprocessing/basics.html) and [here](http://sebastianraschka.com/Articles/2014_multiprocessing_intro.html) are a couple of basic resources on using the `multiprocessing` module. Here's an example from the second link:

```
import multiprocessing as mp
import random
import string

# Define an output queue
output = mp.Queue()

# define an example function
def rand_string(length, output):
    """ Generates a random string of numbers, lower- and uppercase chars. """
    rand_str = ''.join(random.choice(
                           string.ascii_lowercase
                           + string.ascii_uppercase
                           + string.digits)
                       for i in range(length))
    output.put(rand_str)

# Setup a list of processes that we want to run
processes = [mp.Process(target=rand_string, args=(5, output)) for x in range(4)]

# Run processes
for p in processes:
    p.start()

# Exit the completed processes
for p in processes:
    p.join()

# Get process results from the output queue
results = [output.get() for p in processes]

print(results)
```
31,458,813
I get a TemplateNotFound after I installed django-postman and django-messages. I obviously installed them separately - first django-postman, and then django-messages. This is so simple and yet I've spent hours trying to resolve this. I'm using Django 1.8, a fresh base install using pip. I then installed the two above packages. The TEMPLATES portion of my settings.py file is as follows:

```
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [
            os.path.join(BASE_DIR, 'templates'),
            #os.path.join(BASE_DIR, 'templates/django_messages'),
        ],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]
```

Within my INSTALLED_APPS tuple, I've also added the two packages above. Here's my addition to urls.py:

```
url(r'^messages/', include('django_messages.urls')),
```

No other changes were made to the system, and yet when I go to /messages I get the following error message:

```
TemplateDoesNotExist at /messages/inbox/
django_messages/inbox.html
Request Method: GET
Request URL:    http://localhost:8000/messages/inbox/
Django Version: 1.8.3
Exception Type: TemplateDoesNotExist
Exception Value: django_messages/inbox.html
Exception Location: /projects/.virtualenvs/blatter/lib/python2.7/site-packages/django/template/loader.py in render_to_string, line 138
Python Executable: /projects/.virtualenvs/blatter/bin/python
Python Version: 2.7.6
```
2015/07/16
[ "https://Stackoverflow.com/questions/31458813", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4431105/" ]
The issue is that it extends the site's base.html. This is also mentioned in the postman documentation: <https://django-postman.readthedocs.org/en/latest/quickstart.html#templates>

```
The postman/base.html template extends a base.html site template, in which some blocks are expected:

    title: in <html><head><title>, at least for a part of the entire title string
    extrahead: in <html><head>, to put some <script> and <link> elements
    content: in <html><body>, to put the page contents
    postman_menu: in <html><body>, to put a navigation menu
```

A possible solution can be found here: [django-postman extends a base.html that does not exist](https://stackoverflow.com/questions/12832891/django-postman-extends-a-base-html-that-does-not-exist)
The problem was resolved for django-messages after reviewing the template being loaded and changing its extends/inheritance declaration. The template being loaded, inbox.html, extended "django_messages/base.html", which worked fine. That "base.html" then extended "base.html" itself, so there was some circular logic here causing the error. This is the default and wasn't added by me. When I removed the extends declaration from "base.html" so that it didn't inherit from itself, django-messages worked. Perhaps Django 1.8 changed some template-resolution logic? Either way, issue resolved.
44,112,399
I run a Python Discord bot. I import some modules and have some events. Now and then, it seems like the script gets killed for some unknown reason - maybe because of an error/exception, or some connection issue? I'm no Python expert, but I managed to get my bot working pretty well; I just don't exactly understand how it works under the hood (since the program does nothing besides waiting for events). Either way, I'd like it to restart automatically after it stops. I use Windows 10 and just start my program either by double-clicking on it or through pythonw.exe if I don't want the window. What would be the best approach to verify if my program is still running (it doesn't have to be instant, the verification could be done every X minutes)? I thought of using a batch file or another Python script, but I have no idea how to do such a thing. Thanks for your help.
2017/05/22
[ "https://Stackoverflow.com/questions/44112399", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1689179/" ]
You can write another Python script (B) that calls your original Python script (A) using `Popen` from `subprocess`. In script B, `wait` for script A to finish. If A exits with an error code, relaunch it from B. Here is an example for python_code_B.py:

```
import subprocess

filename = 'my_python_code_A.py'

while True:
    # Be careful with the '.wait()'
    p = subprocess.Popen('python ' + filename, shell=True).wait()

    # If running 'my_python_code_A.py' returned an error code, the while loop
    # repeats and relaunches it; otherwise the program breaks out of the loop.
    if p != 0:
        continue
    else:
        break
```

This will generally work well on Unix / Windows systems. Tested on Win7/10 with the latest code update. Also, please run `python_code_B.py` from a 'real terminal', meaning a command prompt or terminal, and not inside IDLE.
For the problem you stated, I prefer either using a Python [subprocess](https://docs.python.org/3.4/library/subprocess.html) call to rerun the script or using [try blocks](https://docs.python.org/3.4/tutorial/errors.html). This might be helpful to you. Check this sample try block:

```
try:
    import xyz  # suppose this module does not exist, or any other failing code
except:
    pass  # continue with the next line of code
```
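Building on the try-block idea, here is a minimal sketch of a restart loop wrapped around the bot itself; the `run_bot` function and the 60-second delay are placeholders, not part of the asker's actual code:

```
import time
import traceback

def run_bot():
    # Placeholder: put whatever starts your bot here,
    # e.g. client.run(TOKEN) if you use discord.py.
    pass

while True:
    try:
        run_bot()              # blocks until the bot stops or an exception escapes
        break                  # clean exit: do not restart
    except Exception:
        traceback.print_exc()  # log the error so you can see why it died
        time.sleep(60)         # wait a bit before restarting
```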
55,290,527
How do I write a Python script to solve this?

```
l=[1,2,3] Length A
X=[one,two,three,.... ] length A
```

How do I print/write this to a file? The output should be:

```
1=one 2=two 3=three ....
```

I'm trying to use something like the following, but since the length A is variable this won't work:

```
logfile.write('%d=%s %d=%s %d=%s %d=%s \n' % (l[1], X[1], l[2], X[3], l[4], X[4]))
```
2019/03/21
[ "https://Stackoverflow.com/questions/55290527", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7038853/" ]
Use `zip`:

```
l = [1, 2, 3]
X = ['one', 'two', 'three']

' '.join('{}={}'.format(first, second) for first, second in zip(l, X))
```

Output:

```
'1=one 2=two 3=three'
```
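Since the asker wants to write the result to a file rather than just build the string, here is a small sketch of how this could be combined with a file write; the `log.txt` filename is only an example:

```
l = [1, 2, 3]
X = ['one', 'two', 'three']

line = ' '.join('{}={}'.format(num, word) for num, word in zip(l, X))

with open('log.txt', 'a') as logfile:
    logfile.write(line + '\n')   # works for any length, as long as both lists match
```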
You could also use an f-string to make this more concise:

```
numbers = [1, 2, 3]
strings = ['one', 'two', 'three']

print(' '.join(f'{n}={s}' for n, s in zip(numbers, strings)))
```
55,290,527
How do I write a Python script to solve this?

```
l=[1,2,3] Length A
X=[one,two,three,.... ] length A
```

How do I print/write this to a file? The output should be:

```
1=one 2=two 3=three ....
```

I'm trying to use something like the following, but since the length A is variable this won't work:

```
logfile.write('%d=%s %d=%s %d=%s %d=%s \n' % (l[1], X[1], l[2], X[3], l[4], X[4]))
```
2019/03/21
[ "https://Stackoverflow.com/questions/55290527", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7038853/" ]
Use `zip`:

```
l = [1, 2, 3]
X = ['one', 'two', 'three']

' '.join('{}={}'.format(first, second) for first, second in zip(l, X))
```

Output:

```
'1=one 2=two 3=three'
```
* zip(a, b) returns pairs from both lists until the shorter list runs out
* map(f, a) applies function f to every element of list a.
* join is the Python way to concatenate strings

All combined:

```
print(' '.join(map('{0[0]}={0[1]}'.format, zip(l, X))))
print(' '.join(map('='.join, zip(map(str, l), X))))
```

Since join only works on strings, map(str, l) converts [1, 2, ...] to ['1', '2', ...]. format works on any input, so the extra conversion is not required there.
3,929,096
A python program I created is IO bound. The majority of the time (over 90%) is spent in a single loop which repeats ~10,000 times. In this loop, ~100KB of data is generated and written to a temporary file; it is then read back out by another program and statistics about that data are collected. This is the only way to pass data into the second program. Since this is the main bottleneck, I thought that moving the location of the temporary file from my main HDD to a (~40MB) RAMdisk (inside of over 2GB of free RAM) would greatly increase the IO speed for this file and so reduce the run-time. However, I obtained the following results (each averaged over 20 runs):

* Test data 1: Without RAMdisk - 72.7s, With RAMdisk - 78.6s
* Test data 2: Without RAMdisk - 223.0s, With RAMdisk - 235.1s

It would appear that the RAMdisk is slower than my HDD. What could be causing this? Are there any other alternatives to using a RAMdisk in order to get faster file IO?
2010/10/14
[ "https://Stackoverflow.com/questions/3929096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/227567/" ]
Your operating system is almost certainly buffering/caching disk writes already. It's not surprising the RAM disk is so close in performance. Without knowing exactly what you're writing or how, we can only offer general suggestions. Some ideas:

* If you have 2 GB RAM you probably have a decent processor, so you could write this data to a filesystem that has compression. That would trade I/O operations for CPU time, assuming your data is amenable to that.
* If you're doing many small writes, combine them to write larger pieces at once. (Can we see the source code?)
* Are you removing the 100 KB file after use? If you don't need it, then delete it. Otherwise the OS may be forced to flush it to disk.
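To illustrate the "combine many small writes into one larger write" suggestion, here is a rough sketch; the chunk generator, sizes, and file path are made-up placeholders for whatever the asker's loop actually does:

```
def generate_chunks():
    # stand-in for the real data generation: ~100 small pieces totalling ~100 KB
    for i in range(100):
        yield b"x" * 1024

# Instead of calling write() once per small piece...
chunks = list(generate_chunks())

# ...join them in memory and write the whole ~100 KB in a single call.
with open("temp.dat", "wb") as f:
    f.write(b"".join(chunks))
```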
I know that Windows is very aggressive about caching disk data in RAM, and 100K would fit easily. The writes are going directly to cache and then perhaps being written to disk via a non-blocking write, which allows the program to continue. The RAM disk probably wouldn't support non-blocking operations because it expects those operations to be quick and not worth the bother. By reducing the amount of memory available to programs and caching, you're going to increase the amount of disk I/O for paging even if only slightly. This is all speculation on my part, since I'm not familiar with the kernel or drivers. I also speculate that Linux would operate similarly.
3,929,096
A python program I created is IO bound. The majority of the time (over 90%) is spent in a single loop which repeats ~10,000 times. In this loop, ~100KB of data is generated and written to a temporary file; it is then read back out by another program and statistics about that data are collected. This is the only way to pass data into the second program. Since this is the main bottleneck, I thought that moving the location of the temporary file from my main HDD to a (~40MB) RAMdisk (inside of over 2GB of free RAM) would greatly increase the IO speed for this file and so reduce the run-time. However, I obtained the following results (each averaged over 20 runs):

* Test data 1: Without RAMdisk - 72.7s, With RAMdisk - 78.6s
* Test data 2: Without RAMdisk - 223.0s, With RAMdisk - 235.1s

It would appear that the RAMdisk is slower than my HDD. What could be causing this? Are there any other alternatives to using a RAMdisk in order to get faster file IO?
2010/10/14
[ "https://Stackoverflow.com/questions/3929096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/227567/" ]
Your operating system is almost certainly buffering/caching disk writes already. It's not surprising the RAM disk is so close in performance. Without knowing exactly what you're writing or how, we can only offer general suggestions. Some ideas:

* If you have 2 GB RAM you probably have a decent processor, so you could write this data to a filesystem that has compression. That would trade I/O operations for CPU time, assuming your data is amenable to that.
* If you're doing many small writes, combine them to write larger pieces at once. (Can we see the source code?)
* Are you removing the 100 KB file after use? If you don't need it, then delete it. Otherwise the OS may be forced to flush it to disk.
I had the same mind-boggling experience, and after many tries I figured it out. When the ramdisk is formatted as FAT32, then even though benchmarks show high values, real-world use is actually slower than an NTFS-formatted SSD. But an NTFS-formatted ramdisk is faster in real life than the SSD.
3,929,096
A python program I created is IO bound. The majority of the time (over 90%) is spent in a single loop which repeats ~10,000 times. In this loop, ~100KB of data is generated and written to a temporary file; it is then read back out by another program and statistics about that data are collected. This is the only way to pass data into the second program. Since this is the main bottleneck, I thought that moving the location of the temporary file from my main HDD to a (~40MB) RAMdisk (inside of over 2GB of free RAM) would greatly increase the IO speed for this file and so reduce the run-time. However, I obtained the following results (each averaged over 20 runs):

* Test data 1: Without RAMdisk - 72.7s, With RAMdisk - 78.6s
* Test data 2: Without RAMdisk - 223.0s, With RAMdisk - 235.1s

It would appear that the RAMdisk is slower than my HDD. What could be causing this? Are there any other alternatives to using a RAMdisk in order to get faster file IO?
2010/10/14
[ "https://Stackoverflow.com/questions/3929096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/227567/" ]
I know that Windows is very aggressive about caching disk data in RAM, and 100K would fit easily. The writes are going directly to cache and then perhaps being written to disk via a non-blocking write, which allows the program to continue. The RAM disk probably wouldn't support non-blocking operations because it expects those operations to be quick and not worth the bother. By reducing the amount of memory available to programs and caching, you're going to increase the amount of disk I/O for paging even if only slightly. This is all speculation on my part, since I'm not familiar with the kernel or drivers. I also speculate that Linux would operate similarly.
In my tests I've found that not only the batch size affects overall performance, but also the nature of the data itself. I've managed to get 5 times better write times compared to the SSD in only one scenario: writing a 100MB chunk of pre-cooked random bytes to the RAM drive. Writing more "predictable" data like the letters "aaa" or the current datetime yields quite the opposite result - the SSD is always faster or equal. So my guess is that the operating system (Win 7 in my case) does lots of caching and optimization. It looks like the most hindering case for the RAM drive is when you perform lots of small writes instead of a few big ones, and the RAM drive shines at writing large amounts of hard-to-compress data.
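A rough way to reproduce this kind of observation is to time writes of hard-to-compress random bytes against writes of highly repetitive bytes. This is only a sketch with made-up sizes and a placeholder file path, and results will depend heavily on the OS, drive, and caching:

```
import os
import time

def time_write(path, payload, repeats=5):
    start = time.time()
    for _ in range(repeats):
        with open(path, "wb") as f:
            f.write(payload)
    return time.time() - start

size = 100 * 1024 * 1024             # 100 MB per write
random_bytes = os.urandom(size)      # hard-to-compress data
repetitive_bytes = b"a" * size       # very predictable data

print("random:     %.2fs" % time_write("ram_disk_test.bin", random_bytes))
print("repetitive: %.2fs" % time_write("ram_disk_test.bin", repetitive_bytes))
```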
3,929,096
A python program I created is IO bound. The majority of the time (over 90%) is spent in a single loop which repeats ~10,000 times. In this loop, ~100KB of data is generated and written to a temporary file; it is then read back out by another program and statistics about that data are collected. This is the only way to pass data into the second program. Since this is the main bottleneck, I thought that moving the location of the temporary file from my main HDD to a (~40MB) RAMdisk (inside of over 2GB of free RAM) would greatly increase the IO speed for this file and so reduce the run-time. However, I obtained the following results (each averaged over 20 runs):

* Test data 1: Without RAMdisk - 72.7s, With RAMdisk - 78.6s
* Test data 2: Without RAMdisk - 223.0s, With RAMdisk - 235.1s

It would appear that the RAMdisk is slower than my HDD. What could be causing this? Are there any other alternatives to using a RAMdisk in order to get faster file IO?
2010/10/14
[ "https://Stackoverflow.com/questions/3929096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/227567/" ]
Can you write the data out in batches rather than one item at a time? Are you caching resources like open file handles, or cleaning those up each time? Are your disk writes blocking, and could you use background threads to saturate IO without affecting compute performance? I would look at optimising the disk writes first, and then look at faster disks when that is complete.
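A minimal sketch of the background-writer idea mentioned above, using a queue so the generating loop is not blocked on disk; the file names, sizes, and loop count are placeholders rather than the asker's actual code:

```
import threading
import queue

write_queue = queue.Queue()

def writer():
    # Runs in the background: pulls finished chunks off the queue and writes them.
    while True:
        item = write_queue.get()
        if item is None:          # sentinel: no more data
            break
        path, payload = item
        with open(path, "wb") as f:
            f.write(payload)

t = threading.Thread(target=writer)
t.start()

for i in range(10):               # stand-in for the real ~10,000-iteration loop
    data = b"x" * (100 * 1024)    # ~100 KB generated per iteration
    write_queue.put(("temp_%d.dat" % i, data))

write_queue.put(None)             # tell the writer to finish
t.join()
```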
I join the people having problems with RAM disk speeds (only on Windows). The SSD I have can write 30 GiB (in one big block, dumping a 30 GiB RAM array) at a speed of 550 MiB/s (around 56 seconds to write 30 GiB) ... this is if the write is issued in one source-code statement. The RAM disk (imDisk) I have can write the same 30 GiB (in one big block, dumping a 30 GiB RAM array) at a bit less than 100 MiB/s (around 5 minutes and 13 seconds to write 30 GiB) ... again with the write issued in one source-code statement. I also did another RAM test: from source code, do a sequential direct write (one byte per loop pass) to a 30 GiB RAM array (I have 64 GiB of RAM), and I get a speed of nearly 1.3 GiB/s (1298 MiB per second). Why on earth (on Windows) is the RAM disk so slow for one big sequential write? This low write speed only happens on RAM disks on Windows; I tested the same 'concept' on Linux with the native Linux RAM disk, and the Linux RAM disk can write at nearly one gigabyte per second. Please note that I also tested SoftPerfect and other RAM disks on Windows; RAM disk speeds are about the same and cannot write at more than one hundred megabytes per second. Windows versions tested: 10 & 11 (both HOME & PRO, 64-bit), RAM disk formats exFAT & NTFS; since the RAM disk speed was too slow I was trying to find a Windows version where the RAM disk speed was normal, but found none. Linux kernel tested: only 5.15.11; since the Linux native RAM disk speed was normal, I did not test any other kernel. Hope this helps other people, since knowledge is the basis for solving a problem.
3,929,096
A python program I created is IO bound. The majority of the time (over 90%) is spent in a single loop which repeats ~10,000 times. In this loop, ~100KB of data is generated and written to a temporary file; it is then read back out by another program and statistics about that data are collected. This is the only way to pass data into the second program. Since this is the main bottleneck, I thought that moving the location of the temporary file from my main HDD to a (~40MB) RAMdisk (inside of over 2GB of free RAM) would greatly increase the IO speed for this file and so reduce the run-time. However, I obtained the following results (each averaged over 20 runs):

* Test data 1: Without RAMdisk - 72.7s, With RAMdisk - 78.6s
* Test data 2: Without RAMdisk - 223.0s, With RAMdisk - 235.1s

It would appear that the RAMdisk is slower than my HDD. What could be causing this? Are there any other alternatives to using a RAMdisk in order to get faster file IO?
2010/10/14
[ "https://Stackoverflow.com/questions/3929096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/227567/" ]
Your operating system is almost certainly buffering/caching disk writes already. It's not surprising the RAM disk is so close in performance. Without knowing exactly what you're writing or how, we can only offer general suggestions. Some ideas:

* If you have 2 GB RAM you probably have a decent processor, so you could write this data to a filesystem that has compression. That would trade I/O operations for CPU time, assuming your data is amenable to that.
* If you're doing many small writes, combine them to write larger pieces at once. (Can we see the source code?)
* Are you removing the 100 KB file after use? If you don't need it, then delete it. Otherwise the OS may be forced to flush it to disk.
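Another alternative, if the second program can be made to read from standard input instead of a file, is to skip the temporary file entirely and pipe the ~100 KB straight between the processes. This is only a sketch, and `stats_tool` is a hypothetical stand-in for the asker's second program; whether it can accept data on stdin is an assumption:

```
import subprocess

data = b"x" * (100 * 1024)   # stand-in for the ~100 KB generated each iteration

# Feed the data directly to the other program's stdin; no temp file involved.
proc = subprocess.Popen(["stats_tool"], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE)
out, _ = proc.communicate(data)
print(out)
```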
Can you write the data out in batches rather than one item at a time? Are you caching resources like open file handles, or cleaning those up each time? Are your disk writes blocking, and could you use background threads to saturate IO without affecting compute performance? I would look at optimising the disk writes first, and then look at faster disks when that is complete.
3,929,096
A python program I created is IO bound. The majority of the time (over 90%) is spent in a single loop which repeats ~10,000 times. In this loop, ~100KB of data is generated and written to a temporary file; it is then read back out by another program and statistics about that data are collected. This is the only way to pass data into the second program. Since this is the main bottleneck, I thought that moving the location of the temporary file from my main HDD to a (~40MB) RAMdisk (inside of over 2GB of free RAM) would greatly increase the IO speed for this file and so reduce the run-time. However, I obtained the following results (each averaged over 20 runs):

* Test data 1: Without RAMdisk - 72.7s, With RAMdisk - 78.6s
* Test data 2: Without RAMdisk - 223.0s, With RAMdisk - 235.1s

It would appear that the RAMdisk is slower than my HDD. What could be causing this? Are there any other alternatives to using a RAMdisk in order to get faster file IO?
2010/10/14
[ "https://Stackoverflow.com/questions/3929096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/227567/" ]
In my tests I've found that not only the batch size affects overall performance, but also the nature of the data itself. I've managed to get 5 times better write times compared to the SSD in only one scenario: writing a 100MB chunk of pre-cooked random bytes to the RAM drive. Writing more "predictable" data like the letters "aaa" or the current datetime yields quite the opposite result - the SSD is always faster or equal. So my guess is that the operating system (Win 7 in my case) does lots of caching and optimization. It looks like the most hindering case for the RAM drive is when you perform lots of small writes instead of a few big ones, and the RAM drive shines at writing large amounts of hard-to-compress data.
I had the same mind-boggling experience, and after many tries I figured it out. When the ramdisk is formatted as FAT32, then even though benchmarks show high values, real-world use is actually slower than an NTFS-formatted SSD. But an NTFS-formatted ramdisk is faster in real life than the SSD.
3,929,096
A python program I created is IO bound. The majority of the time (over 90%) is spent in a single loop which repeats ~10,000 times. In this loop, ~100KB of data is generated and written to a temporary file; it is then read back out by another program and statistics about that data are collected. This is the only way to pass data into the second program. Since this is the main bottleneck, I thought that moving the location of the temporary file from my main HDD to a (~40MB) RAMdisk (inside of over 2GB of free RAM) would greatly increase the IO speed for this file and so reduce the run-time. However, I obtained the following results (each averaged over 20 runs):

* Test data 1: Without RAMdisk - 72.7s, With RAMdisk - 78.6s
* Test data 2: Without RAMdisk - 223.0s, With RAMdisk - 235.1s

It would appear that the RAMdisk is slower than my HDD. What could be causing this? Are there any other alternatives to using a RAMdisk in order to get faster file IO?
2010/10/14
[ "https://Stackoverflow.com/questions/3929096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/227567/" ]
In my tests I've found that not only the batch size affects overall performance, but also the nature of the data itself. I've managed to get 5 times better write times compared to the SSD in only one scenario: writing a 100MB chunk of pre-cooked random bytes to the RAM drive. Writing more "predictable" data like the letters "aaa" or the current datetime yields quite the opposite result - the SSD is always faster or equal. So my guess is that the operating system (Win 7 in my case) does lots of caching and optimization. It looks like the most hindering case for the RAM drive is when you perform lots of small writes instead of a few big ones, and the RAM drive shines at writing large amounts of hard-to-compress data.
I join the people having problems with RAM disk speeds (only on Windows). The SSD I have can write 30 GiB (in one big block, dumping a 30 GiB RAM array) at a speed of 550 MiB/s (around 56 seconds to write 30 GiB) ... this is if the write is issued in one source-code statement. The RAM disk (imDisk) I have can write the same 30 GiB (in one big block, dumping a 30 GiB RAM array) at a bit less than 100 MiB/s (around 5 minutes and 13 seconds to write 30 GiB) ... again with the write issued in one source-code statement. I also did another RAM test: from source code, do a sequential direct write (one byte per loop pass) to a 30 GiB RAM array (I have 64 GiB of RAM), and I get a speed of nearly 1.3 GiB/s (1298 MiB per second). Why on earth (on Windows) is the RAM disk so slow for one big sequential write? This low write speed only happens on RAM disks on Windows; I tested the same 'concept' on Linux with the native Linux RAM disk, and the Linux RAM disk can write at nearly one gigabyte per second. Please note that I also tested SoftPerfect and other RAM disks on Windows; RAM disk speeds are about the same and cannot write at more than one hundred megabytes per second. Windows versions tested: 10 & 11 (both HOME & PRO, 64-bit), RAM disk formats exFAT & NTFS; since the RAM disk speed was too slow I was trying to find a Windows version where the RAM disk speed was normal, but found none. Linux kernel tested: only 5.15.11; since the Linux native RAM disk speed was normal, I did not test any other kernel. Hope this helps other people, since knowledge is the basis for solving a problem.
3,929,096
A python program I created is IO bound. The majority of the time (over 90%) is spent in a single loop which repeats ~10,000 times. In this loop, ~100KB of data is generated and written to a temporary file; it is then read back out by another program and statistics about that data are collected. This is the only way to pass data into the second program. Since this is the main bottleneck, I thought that moving the location of the temporary file from my main HDD to a (~40MB) RAMdisk (inside of over 2GB of free RAM) would greatly increase the IO speed for this file and so reduce the run-time. However, I obtained the following results (each averaged over 20 runs):

* Test data 1: Without RAMdisk - 72.7s, With RAMdisk - 78.6s
* Test data 2: Without RAMdisk - 223.0s, With RAMdisk - 235.1s

It would appear that the RAMdisk is slower than my HDD. What could be causing this? Are there any other alternatives to using a RAMdisk in order to get faster file IO?
2010/10/14
[ "https://Stackoverflow.com/questions/3929096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/227567/" ]
I know that Windows is very aggressive about caching disk data in RAM, and 100K would fit easily. The writes are going directly to cache and then perhaps being written to disk via a non-blocking write, which allows the program to continue. The RAM disk probably wouldn't support non-blocking operations because it expects those operations to be quick and not worth the bother. By reducing the amount of memory available to programs and caching, you're going to increase the amount of disk I/O for paging even if only slightly. This is all speculation on my part, since I'm not familiar with the kernel or drivers. I also speculate that Linux would operate similarly.
I had the same mind-boggling experience, and after many tries I figured it out. When the ramdisk is formatted as FAT32, then even though benchmarks show high values, real-world use is actually slower than an NTFS-formatted SSD. But an NTFS-formatted ramdisk is faster in real life than the SSD.
3,929,096
A python program I created is IO bound. The majority of the time (over 90%) is spent in a single loop which repeats ~10,000 times. In this loop, ~100KB of data is generated and written to a temporary file; it is then read back out by another program and statistics about that data are collected. This is the only way to pass data into the second program. Since this is the main bottleneck, I thought that moving the location of the temporary file from my main HDD to a (~40MB) RAMdisk (inside of over 2GB of free RAM) would greatly increase the IO speed for this file and so reduce the run-time. However, I obtained the following results (each averaged over 20 runs):

* Test data 1: Without RAMdisk - 72.7s, With RAMdisk - 78.6s
* Test data 2: Without RAMdisk - 223.0s, With RAMdisk - 235.1s

It would appear that the RAMdisk is slower than my HDD. What could be causing this? Are there any other alternatives to using a RAMdisk in order to get faster file IO?
2010/10/14
[ "https://Stackoverflow.com/questions/3929096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/227567/" ]
Can you write the data out in batches rather than one item at a time? Are you caching resources like open file handles, or cleaning those up each time? Are your disk writes blocking, and could you use background threads to saturate IO without affecting compute performance? I would look at optimising the disk writes first, and then look at faster disks when that is complete.
In my tests I've found that not only the batch size affects overall performance, but also the nature of the data itself. I've managed to get 5 times better write times compared to the SSD in only one scenario: writing a 100MB chunk of pre-cooked random bytes to the RAM drive. Writing more "predictable" data like the letters "aaa" or the current datetime yields quite the opposite result - the SSD is always faster or equal. So my guess is that the operating system (Win 7 in my case) does lots of caching and optimization. It looks like the most hindering case for the RAM drive is when you perform lots of small writes instead of a few big ones, and the RAM drive shines at writing large amounts of hard-to-compress data.
3,929,096
A python program I created is IO bound. The majority of the time (over 90%) is spent in a single loop which repeats ~10,000 times. In this loop, ~100KB of data is generated and written to a temporary file; it is then read back out by another program and statistics about that data are collected. This is the only way to pass data into the second program. Since this is the main bottleneck, I thought that moving the location of the temporary file from my main HDD to a (~40MB) RAMdisk (inside of over 2GB of free RAM) would greatly increase the IO speed for this file and so reduce the run-time. However, I obtained the following results (each averaged over 20 runs):

* Test data 1: Without RAMdisk - 72.7s, With RAMdisk - 78.6s
* Test data 2: Without RAMdisk - 223.0s, With RAMdisk - 235.1s

It would appear that the RAMdisk is slower than my HDD. What could be causing this? Are there any other alternatives to using a RAMdisk in order to get faster file IO?
2010/10/14
[ "https://Stackoverflow.com/questions/3929096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/227567/" ]
Your operating system is almost certainly buffering/caching disk writes already. It's not surprising the RAM disk is so close in performance. Without knowing exactly what you're writing or how, we can only offer general suggestions. Some ideas:

* If you have 2 GB RAM you probably have a decent processor, so you could write this data to a filesystem that has compression. That would trade I/O operations for CPU time, assuming your data is amenable to that.
* If you're doing many small writes, combine them to write larger pieces at once. (Can we see the source code?)
* Are you removing the 100 KB file after use? If you don't need it, then delete it. Otherwise the OS may be forced to flush it to disk.
I join the people having problems with RAM disk speeds (only on Windows). The SSD I have can write 30 GiB (in one big block, dumping a 30 GiB RAM array) at a speed of 550 MiB/s (around 56 seconds to write 30 GiB) ... this is if the write is issued in one source-code statement. The RAM disk (imDisk) I have can write the same 30 GiB (in one big block, dumping a 30 GiB RAM array) at a bit less than 100 MiB/s (around 5 minutes and 13 seconds to write 30 GiB) ... again with the write issued in one source-code statement. I also did another RAM test: from source code, do a sequential direct write (one byte per loop pass) to a 30 GiB RAM array (I have 64 GiB of RAM), and I get a speed of nearly 1.3 GiB/s (1298 MiB per second). Why on earth (on Windows) is the RAM disk so slow for one big sequential write? This low write speed only happens on RAM disks on Windows; I tested the same 'concept' on Linux with the native Linux RAM disk, and the Linux RAM disk can write at nearly one gigabyte per second. Please note that I also tested SoftPerfect and other RAM disks on Windows; RAM disk speeds are about the same and cannot write at more than one hundred megabytes per second. Windows versions tested: 10 & 11 (both HOME & PRO, 64-bit), RAM disk formats exFAT & NTFS; since the RAM disk speed was too slow I was trying to find a Windows version where the RAM disk speed was normal, but found none. Linux kernel tested: only 5.15.11; since the Linux native RAM disk speed was normal, I did not test any other kernel. Hope this helps other people, since knowledge is the basis for solving a problem.
4,149,274
Okay, I'm having one of those moments that makes me question my ability to use a computer. This is not the sort of question I imagined asking as my first SO post, but here goes. Started on Zed's new "Learn Python the Hard Way" since I've been looking to get back into programming after a 10 year hiatus and python was always what I wanted. This book has really spoken to me. That being said, I'm having a serious issue with pydoc from the command line. I've got all the directories in c:/python26 in my system path and I can execute pydoc from the command line just fine regardless of pwd - but it accepts no arguments. Doesn't matter what I type, I just get the standard pydoc output telling me the acceptable arguments. Any ideas? For what it's worth, I installed ActivePython as per Zed's suggestion.

```
C:\Users\Chevee>pydoc file
pydoc - the Python documentation tool

pydoc.py <name> ...
    Show text documentation on something. <name> may be the name of a
    Python keyword, topic, function, module, or package, or a dotted
    reference to a class or function within a module or module in a
    package. If <name> contains a '\', it is used as the path to a
    Python source file to document. If name is 'keywords', 'topics',
    or 'modules', a listing of these things is displayed.

pydoc.py -k <keyword>
    Search for a keyword in the synopsis lines of all available modules.

pydoc.py -p <port>
    Start an HTTP server on the given port on the local machine.

pydoc.py -g
    Pop up a graphical interface for finding and serving documentation.

pydoc.py -w <name> ...
    Write out the HTML documentation for a module to a file in the current
    directory. If <name> contains a '\', it is treated as a filename; if
    it names a directory, documentation is written for all the contents.

C:\Users\Chevee>
```

EDIT: New information, pydoc works just fine in PowerShell. As a linux user, I have no idea why I'm trying to use cmd anyways--but I'd still love to figure out what's up with pydoc and cmd.

EDIT 2: More new information. In cmd...

```
c:\>python c:/python26/lib/pydoc.py file
```

...works just fine. Everything works just fine with just pydoc in PowerShell without me worrying about pwd, or extensions or paths.
2010/11/10
[ "https://Stackoverflow.com/questions/4149274", "https://Stackoverflow.com", "https://Stackoverflow.com/users/502486/" ]
When you type the name of a file at the Windows command prompt, cmd can check the Windows registry for the default file association and use that program to open it. So if the Inkscape installer associated .py files with its own version of Python, cmd might preferentially run that and ignore the PATH entirely. See [this question](https://stackoverflow.com/questions/2199739/pydoc-fails-under-windows-and-python-2-6-4).
Based on your second edit, you may have more than one copy of pydoc.py in your path, with the 'wrong' one first such that when it starts up it doesn't have the correct environment in which to execute.
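As a quick way to sidestep path and file-association issues while debugging this, pydoc can also be reached through the interpreter itself (for example `python -m pydoc file` from cmd), which guarantees the pydoc belonging to the python on your PATH is the one that runs. A small sketch for checking which interpreter and which pydoc.py are actually being used:

```
import sys
import pydoc

print(sys.executable)   # which interpreter is actually running
print(pydoc.__file__)   # which pydoc.py that interpreter picked up
pydoc.help('sys')       # same as running "pydoc sys" with that interpreter
```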