Dataset columns:

| column     | type          | min | max   |
| ---------- | ------------- | --- | ----- |
| qid        | int64         | 46k | 74.7M |
| question   | string length | 54  | 37.8k |
| date       | string length | 10  | 10    |
| metadata   | list length   | 3   | 3     |
| response_j | string length | 17  | 26k   |
| response_k | string length | 26  | 26k   |
67,347,194
The thing is that I am given some code, and it is structured in a way that expects some accesses to not-initialised elements in the list. (I don't want to change the logic behind it, because it deals with some math concepts.) I've been given the code in some other language and I want to do the same in Python (handle this exception and let the program run).

```
a = []
for i in range(1,10,2):
    a.append(i)
for j in range(10):
    try:
        a[i] +=1
    finally:
        a.append(1)

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-2-edc940b2697a> in <module>
      3 for j in range(10):
      4     try:
----> 5         a[i] +=1
      6     finally:
      7         a.append(1)

IndexError: list index out of range
```

I then tried something else (after reading a similar Stack Overflow post):

```
a = []
for i in range(1,10,2):
    a.append(i)
for j in range(10):

    if(a[j]): a[j] +=1
    else:
        a.append(1)

IndexError                                Traceback (most recent call last)
<ipython-input-4-324628aa850d> in <module>
      3 for j in range(10):
      4 
----> 5     if(a[j]): a[j] +=1
      6     else:
      7         a.append(1)

IndexError: list index out of range
```

As you can see, the error persisted.
2021/05/01
[ "https://Stackoverflow.com/questions/67347194", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14386149/" ]
You can handle it with:

```
except IndexError:
    pass
```

Full code:

```
a = []
for i in range(1, 10, 2):
    a.append(i)

for j in range(10):
    try:
        a[i] += 1
    except IndexError:
        pass
    finally:
        a.append(1)
```
```
a = []
for i in range(1,10,2):
    a.append(i)
try:
    for j in range(10):
        a[i] +=1
except:
    pass
finally:
    a.append(1)

print(a)
```
59,414,043
Below is my project structure:

```
- MyProject
  - src
    - Master.py
    - Myfolder
      - file1
    - Myfolder2
      - file2
```

When I tried to run `python3 Master.py` from the `src` folder I get

```
ModuleNotFoundError: No Module name 'src'
```

When I tried to run Master.py from the root using this command -> `python3 -m src.Master`, I get `file1 Error: [Errno 2] No such file or directory`.

My Master.py has this import - `from src.myfolder import file1` - so it looks like my Master.py is running from the root of the project, but then it does not pick up the imports that are in Master.py. I already tried to add an empty `__init__.py` file in the root of the project as well as in `src` and `myfolder`, but it does not work. I would appreciate any help!
2019/12/19
[ "https://Stackoverflow.com/questions/59414043", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4984616/" ]
Given the updated directory structure, you should add an `__init__.py` file to `myfolder` with the following line:

```py
from .file1 import *
# or
# from .file1 import something
# or
# from . import file1
```

and then in `master.py` do

```py
import myfolder
# or
# from myfolder import file1
# or
# from myfolder.file1 import something
```
While my solution worked with what was suggested by Maxim, I also learned that with the help of a simple `os.chdir(os.path.dirname('/usr/dirname'))` I was able to resolve the issue, and I also did not have to add `__init__.py` anywhere in the package.
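A minimal sketch of that workaround, assuming it sits at the top of `Master.py` and that the goal is to make relative file paths resolve from the script's own directory (the `__file__`-based path is an assumption; the original used a hard-coded path):

```python
import os

# Assumption: pin the working directory to the folder containing this script,
# so that files opened later with relative paths are found regardless of
# where the interpreter was launched from.
os.chdir(os.path.dirname(os.path.abspath(__file__)))
```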
32,140,380
I'm looking for a pythonic (1-line) way to extract a range of values from an array. Here's some sample code that will extract the array elements that are >2 and <8 from x,y data and put them into a new array. Is there a way to accomplish this on a single line? The code below works but seems kludgier than it needs to be. (Note I'm actually working with floats in my application.)

```
import numpy as np
x0 = np.array([0,3,9,8,3,4,5])
y0 = np.array([2,3,5,7,8,1,0])
x1 = x0[x0>2]
y1 = y0[x0>2]
x2 = x1[x1<8]
y2 = y1[x1<8]
print x2, y2
```

This prints

```
[3 3 4 5] [3 8 1 0]
```

Part (b) of the problem would be to extract values say `1 < x < 3` *and* `7 < x < 9` as well as their corresponding `y` values.
2015/08/21
[ "https://Stackoverflow.com/questions/32140380", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4618362/" ]
You can chain together boolean arrays using `&` for element-wise `logical and` and `|` for element-wise `logical or`, so that the condition `2 < x0` and `x0 < 8` becomes

```
mask = (2 < x0) & (x0 < 8)
```

---

For example,

```
import numpy as np

x0 = np.array([0,3,9,8,3,4,5])
y0 = np.array([2,3,5,7,8,1,0])

mask = (2 < x0) & (x0 < 8)
x2 = x0[mask]
y2 = y0[mask]
print(x2, y2)
# (array([3, 3, 4, 5]), array([3, 8, 1, 0]))

mask2 = ((1 < x0) & (x0 < 3)) | ((7 < x0) & (x0 < 9))
x3 = x0[mask2]
y3 = y0[mask2]
print(x3, y3)
# (array([8]), array([7]))
```
```
import numpy as np

x0 = np.array([0,3,9,8,3,4,5])
y0 = np.array([2,3,5,7,8,1,0])

list(zip(*[(x, y) for x, y in zip(x0, y0) if 1 <= x <= 3 or 7 <= x <= 9]))
# [(3, 9, 8, 3), (3, 5, 7, 8)]
```
64,995,258
I started learning Python a few days ago and I'm looking to create a simple program that produces a conclusion text showing what I have bought and how much I have paid, depending on my inputs. So far I have created the program and technically it works well, but I'm having a problem with the parts of the text that should depend on my input. Code:

```
apples = input("How many apples did you buy?: ")
bananas = input("How many bananas did you buy?: ")
dollars = input("How much did you pay: ")
print("")
print("You bought " +apples+ " apples and " +bananas+ " bananas. You paid " +dollars +" dollars.")
print("")
print("")
input("Press ENTER to exit")
input()
```

The problem begins when an input ends with 1 - for example 21, 31 and so on, except 11 - as the conclusion text will still say "You bought 41 apples and 21 bananas...". Is it even possible to make "apple/s", "banana/s", "dollar/s" variables that depend on the input variables?

1. Where do I start with creating a variable that depends on an input variable?
2. How do I define the criteria for "banana" or "bananas" by the ending of the number, and also exclude 11 from the criteria, as 11 ends with 1 but would still be "bananas"?

It seems like this could be an easy task, but I still can't get my head around it as I only recently started learning Python. I tried creating an IF and a dictionary for the variables but failed.
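A minimal sketch of the singular/plural rule described in this question - singular when the number ends in 1 but not in 11 - with a hypothetical helper name:

```python
def unit_word(count, singular, plural):
    n = int(count)
    # Ends in 1 (1, 21, 31, ...) but not in 11 (11, 111, ...): use the singular.
    if n % 10 == 1 and n % 100 != 11:
        return singular
    return plural

apples = input("How many apples did you buy?: ")
print("You bought " + apples + " " + unit_word(apples, "apple", "apples") + ".")
```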
2020/11/24
[ "https://Stackoverflow.com/questions/64995258", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14702072/" ]
* Give each group a consecutive range. For example, for 15%, the range will be between 30 and 45.
* Pick a random number between 0 and 100.
* Find in which range that random number falls:

```sql
create or replace temp table probs as
select 'a' id, 1 value, 20 prob
union all select 'a', 2, 30
union all select 'a', 3, 40
union all select 'a', 4, 10
union all select 'b', 1, 5
union all select 'b', 2, 7
union all select 'b', 3, 8
union all select 'b', 4, 80;

with calculated_ranges as (
    select *, range_prob2 - prob range_prob1
    from (
        select *, sum(prob) over(partition by id order by prob) range_prob2
        from probs
    )
)
select id, random_draw, value, prob
from (
    select id, any_value(uniform(0, 100, random())) random_draw
    from probs
    group by id
) a
join calculated_ranges b using (id)
where range_prob1 <= random_draw
  and range_prob2 > random_draw
;
```

[![enter image description here](https://i.stack.imgur.com/kKcSE.png)](https://i.stack.imgur.com/kKcSE.png)
Felipe's answer is great; it definitely solved the problem. While trying out different approaches yesterday, I tested this approach on Felipe's table and it seems to be working as well. I'm giving each record a random probability and comparing it against the actual probability. The idea is that if the random probability is less than or equal to the actual probability, it's accepted, and the partitioning will do the deduplication based on a descending order over the probabilities.

```
create or replace temp table probs as
select 'a' id, 1 value, 20 prob
union all select 'a', 2, 30
union all select 'a', 3, 40
union all select 'a', 4, 10
union all select 'b', 1, 5
union all select 'b', 2, 7
union all select 'b', 3, 8
union all select 'b', 4, 80;

create or replace temp table t2 as
select *,
    min(compare_prob) over(partition by id) as min_compare_prob,
    max(compare_prob) over(partition by id) as max_compare_prob,
    min_compare_prob <> max_compare_prob as not_all_identical
    --min_rank2 <> max_rank2 checks if all records (by group) have different values
from (
    select id, value, prob,
        UNIFORM(0.00001::float, 1::float, random(2)) as rand_prob, --random probability
        case when prob >= rand_prob then 1 else 0 end as compare_prob
    from (select id, value, prob/100 as prob from probs)
);

--dedup results
select id, value, prob, rand_prob
from (
    select *, row_number() over(partition by id order by prob desc, rand_prob desc) as rn
    from t2
    where not_all_identical = FALSE
    union all
    select *, row_number() over(partition by id order by prob desc, COMPARE_PROB desc) as rn
    from t2
    where not_all_identical = TRUE
)
where rn = 1;
```
60,482,258
How can I replace a string in a list of lists in Python, applying the change only to a specific index and not affecting the other index? Here is an example:

```
mylist = [["test_one", "test_two"], ["test_one", "test_two"]]
```

I want to change the word "test" to "my" so that the result affects only the second index:

```
mylist = [["test_one", "my_two"], ["test_one", "my_two"]]
```

I can figure out how to change both elements of each list, but I can't figure out what I'm supposed to do to change only one specific index.
2020/03/02
[ "https://Stackoverflow.com/questions/60482258", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9675410/" ]
Use indexing:

```
newlist = []
for l in mylist:
    l[1] = l[1].replace("test", "my")
    newlist.append(l)
print(newlist)
```

Or a one-liner if you always have two elements in the sublist:

```
newlist = [[i, j.replace("test", "my")] for i, j in mylist]
print(newlist)
```

Output:

```
[['test_one', 'my_two'], ['test_one', 'my_two']]
```
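For sublists of arbitrary length, a small variant of the same idea (a sketch, not part of the answer above) that still only rewrites index 1:

```python
mylist = [["test_one", "test_two"], ["test_one", "test_two"]]

# Rebuild each sublist, replacing only the element at index 1
newlist = [[w.replace("test", "my") if i == 1 else w for i, w in enumerate(sub)]
           for sub in mylist]
print(newlist)  # [['test_one', 'my_two'], ['test_one', 'my_two']]
```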
There is a way to do this on one line, but it is not coming to me at the moment. Here is how to do it in two lines.

```
for two_word_list in mylist:
    two_word_list[1] = two_word_list[1].replace("test", "my")
```
53,048,002
I have a semicolon-separated csv file which has the following form:

```
indx1; string1; char1; entry1
indx2; string1; char2; entry2
indx3; string2; char2; entry3
indx4; string1; char1; entry4
indx5; string3; char2; entry5
```

I want to get the unique entries of the 1st and 2nd columns of this file in the form of a list (without using pandas or numpy). In particular, these are the lists that I desire:

```
[string1, string2, string3]
[char1, char2]
```

The order doesn't matter, and I would like the operation to be fast. Presently, I am reading the file (say 'data.csv') using the command

```
with open('data.csv') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=';')
```

I am using python 2.7. What is the fastest way to achieve the functionality that I desire? I will appreciate any help.
2018/10/29
[ "https://Stackoverflow.com/questions/53048002", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10415983/" ]
You could use [sets](https://docs.python.org/2.7/library/stdtypes.html#set) to keep track of the already seen values in the needed columns. Since you say that the order doesn't matter, you could just convert the sets to lists after processing all rows:

```
import csv

col1, col2 = set(), set()
with open('data.csv') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=';', skipinitialspace=True)
    for row in csv_reader:
        col1.add(row[1])
        col2.add(row[2])

print list(col1), list(col2)
# ['string1', 'string3', 'string2'] ['char2', 'char1']
```
This should work. You can use it as a benchmark.

```
myDict1 = {}
myDict2 = {}
with open('data.csv') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=';')
    for row in csv_reader:
        myDict1[row[1]] = 0
        myDict2[row[2]] = 0

x = myDict1.keys()
y = myDict2.keys()
```
10,631,419
My django app saves django models to a remote database. Sometimes the saves are bursty. In order to free the main thread (\*thread\_A\*) of the application from the time toll of saving multiple objects to the database, I thought of transferring the model objects to a separate thread (\*thread\_B\*) using [`collections.deque`](http://docs.python.org/library/collections.html#collections.deque) and have \*thread\_B\* save them sequentially. Yet I'm unsure regarding this scheme. `save()` returns the id of the new database entry, so it "ends" only after the database responds, which is at the end of the transaction. **Does `django.db.models.Model.save()` really block [GIL](http://en.wikipedia.org/wiki/Global_Interpreter_Lock)-wise and release other python threads *during* the transaction?**
2012/05/17
[ "https://Stackoverflow.com/questions/10631419", "https://Stackoverflow.com", "https://Stackoverflow.com/users/348545/" ]
Django's `save()` does nothing special to the GIL. In fact, there is hardly anything you can do with the GIL in Python code -- when it is executed, the thread must hold the GIL. There are only two ways the GIL could get released in `save()`:

* Python decides to switch threads (after [`sys.getcheckinterval()`](http://docs.python.org/library/sys.html#sys.getcheckinterval) instructions)
* Django calls a database interface routine that is implemented to release the GIL

The second point could be what you are looking for -- a SQL `COMMIT` is executed, and during that execution, the SQL backend releases the GIL. However, this depends on the SQL interface, and I'm not sure if the popular ones actually release the GIL\*.

Moreover, `save()` does a lot more than just running a few `UPDATE/INSERT` statements and a `COMMIT`; it does a lot of bookkeeping in Python, where it has to hold the GIL. In summary, I'm not sure that you will gain anything from moving `save()` to a different thread.

---

**UPDATE**: From looking at the sources, I learned that both the `sqlite` module and `psycopg` do release the GIL when they are calling database routines, and I guess that other interfaces do the same.
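A minimal sketch of the scheme the question describes - a dedicated thread draining a queue and saving sequentially - using `queue.Queue` instead of the `collections.deque` mentioned, since it provides blocking semantics out of the box (whether this helps at all depends on the driver releasing the GIL, as discussed above):

```python
import threading
import Queue as queue  # the module is named 'queue' in Python 3

save_queue = queue.Queue()

def saver():
    # thread_B: save objects sequentially as they arrive
    while True:
        obj = save_queue.get()
        if obj is None:        # sentinel value shuts the worker down
            break
        obj.save()             # blocks this thread; GIL release is driver-dependent
        save_queue.task_done()

worker = threading.Thread(target=saver)
worker.daemon = True
worker.start()

# thread_A hands model instances off instead of saving inline:
# save_queue.put(my_model_instance)
```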
I think Python doesn't lock anything by itself, but the database does.
10,631,419
My django app saves django models to a remote database. Sometimes the saves are bursty. In order to free the main thread (\*thread\_A\*) of the application from the time toll of saving multiple objects to the database, I thought of transferring the model objects to a separate thread (\*thread\_B\*) using [`collections.deque`](http://docs.python.org/library/collections.html#collections.deque) and have \*thread\_B\* save them sequentially. Yet I'm unsure regarding this scheme. `save()` returns the id of the new database entry, so it "ends" only after the database responds, which is at the end of the transaction. **Does `django.db.models.Model.save()` really block [GIL](http://en.wikipedia.org/wiki/Global_Interpreter_Lock)-wise and release other python threads *during* the transaction?**
2012/05/17
[ "https://Stackoverflow.com/questions/10631419", "https://Stackoverflow.com", "https://Stackoverflow.com/users/348545/" ]
Generally you should never have to worry about threads in a Django application. If you're serving your application with Apache, gunicorn or nearly any other server other than the development server, the server will spawn multiple processes and evade the GIL entirely. The exception is if you're using gunicorn with gevent, in which case there will be multiple processes but also microthreads inside those processes -- in that case concurrency helps a bit, but you don't have to manage the threads yourself to take advantage of that. The only case where you need to worry about the GIL is if you're trying to spawn multiple threads to handle a single request, which is not usually a good idea. The Django save() method does not release the GIL itself, but the database backend will (in most cases the bulk of the time spent in save() will be doing database I/O). However, it's almost impossible to properly take advantage of this in a well-designed web application. Responses from your view should be fast even when done synchronously -- if they are doing too much work to be fast, then use a delayed job with Celery or another taskmaster to finish up the extra work. If you try to thread in your view, you'll have to finish up that thread before sending a response to the client, which in most cases won't help anything and will just add extra overhead.
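A minimal sketch of the delayed-job approach mentioned above (the task and model names are hypothetical placeholders, and the `shared_task` decorator assumes a reasonably modern Celery):

```python
from celery import shared_task

@shared_task
def finish_up(payload):
    # Hypothetical task: do the slow saves outside the request/response cycle.
    from myapp.models import MyModel  # placeholder model
    MyModel.objects.create(**payload)

# In the view, enqueue and return a response immediately:
# finish_up.delay({"field": "value"})
```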
I think Python doesn't lock anything by itself, but the database does.
10,631,419
My django app saves django models to a remote database. Sometimes the saves are bursty. In order to free the main thread (\*thread\_A\*) of the application from the time toll of saving multiple objects to the database, I thought of transferring the model objects to a separate thread (\*thread\_B\*) using [`collections.deque`](http://docs.python.org/library/collections.html#collections.deque) and have \*thread\_B\* save them sequentially. Yet I'm unsure regarding this scheme. `save()` returns the id of the new database entry, so it "ends" only after the database responds, which is at the end of the transaction. **Does `django.db.models.Model.save()` really block [GIL](http://en.wikipedia.org/wiki/Global_Interpreter_Lock)-wise and release other python threads *during* the transaction?**
2012/05/17
[ "https://Stackoverflow.com/questions/10631419", "https://Stackoverflow.com", "https://Stackoverflow.com/users/348545/" ]
Django's `save()` does nothing special to the GIL. In fact, there is hardly anything you can do with the GIL in Python code -- when it is executed, the thread must hold the GIL. There are only two ways the GIL could get released in `save()`:

* Python decides to switch threads (after [`sys.getcheckinterval()`](http://docs.python.org/library/sys.html#sys.getcheckinterval) instructions)
* Django calls a database interface routine that is implemented to release the GIL

The second point could be what you are looking for -- a SQL `COMMIT` is executed, and during that execution, the SQL backend releases the GIL. However, this depends on the SQL interface, and I'm not sure if the popular ones actually release the GIL\*.

Moreover, `save()` does a lot more than just running a few `UPDATE/INSERT` statements and a `COMMIT`; it does a lot of bookkeeping in Python, where it has to hold the GIL. In summary, I'm not sure that you will gain anything from moving `save()` to a different thread.

---

**UPDATE**: From looking at the sources, I learned that both the `sqlite` module and `psycopg` do release the GIL when they are calling database routines, and I guess that other interfaces do the same.
Generally you should never have to worry about threads in a Django application. If you're serving your application with Apache, gunicorn or nearly any other server other than the development server, the server will spawn multiple processes and evade the GIL entirely. The exception is if you're using gunicorn with gevent, in which case there will be multiple processes but also microthreads inside those processes -- in that case concurrency helps a bit, but you don't have to manage the threads yourself to take advantage of that. The only case where you need to worry about the GIL is if you're trying to spawn multiple threads to handle a single request, which is not usually a good idea. The Django save() method does not release the GIL itself, but the database backend will (in most cases the bulk of the time spent in save() will be doing database I/O). However, it's almost impossible to properly take advantage of this in a well-designed web application. Responses from your view should be fast even when done synchronously -- if they are doing too much work to be fast, then use a delayed job with Celery or another taskmaster to finish up the extra work. If you try to thread in your view, you'll have to finish up that thread before sending a response to the client, which in most cases won't help anything and will just add extra overhead.
68,006,937
I am trying to make a daily line graph for certain stocks, but am running into an issue. Getting the 'Close' price every 2 minutes functions correctly, but when I try to get 'Datetime' I am getting an error. I believe yfinance uses pandas to create a dataframe, but I may be wrong. Regardless, I am having issues pulling a certain column from yfinance. I am pretty new to Python and many of the packages, so this might be a simple error, but my code is shown below.

```
stock = yf.Ticker('MSFT')
print(stock.history(period='1d', interval='2m'))
priceArray = stock.history(period='1d', interval='2m')['Close']
dateArray = stock.history(period='1d', interval='2m')['Datetime']
```

The error I am getting is:

```
File "C:\Users\TrevorSmith\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\pandas\core\frame.py", line 3024, in __getitem__
    indexer = self.columns.get_loc(key)
File "C:\Users\TrevorSmith\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\pandas\core\indexes\base.py", line 3082, in get_loc
    raise KeyError(key) from err
KeyError: 'Date'
```

When I print `stock.history(period='1d', interval='2m')`, it shows the following column names:

```
Open    High    Low    Close    Volume    Dividends    Stock Splits
Datetime
```

Again, getting ['Close'] from this information works, but ['Date'], ['Datetime'], ['DateTime'], and ['Time'] do not work. Am I doing something wrong here? Or is there another way to get the info I am looking for?
2021/06/16
[ "https://Stackoverflow.com/questions/68006937", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16245213/" ]
Datetime is not a column name, it just looks like one:

[![enter image description here](https://i.stack.imgur.com/gK5iQ.png)](https://i.stack.imgur.com/gK5iQ.png)

Try `print(stock.history(period='1d', interval='2m').keys())` and you will see.
The Datetime column is the index of the dataframe. If you reset the index you can do whatever with the ['Datetime'] column.
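A minimal sketch of that approach, reusing the call from the question (here `Datetime` is the index name shown in the question's printout):

```python
import yfinance as yf

stock = yf.Ticker('MSFT')
df = stock.history(period='1d', interval='2m')

df = df.reset_index()       # promotes the Datetime index to a regular column
dateArray = df['Datetime']  # now accessible like any other column
priceArray = df['Close']
```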
41,761,293
I am trying to access the worklogs in Python by using the [jira python library](http://jira.readthedocs.io/en/latest/examples.html). I am doing the following:

```
issues = jira.search_issues("key=MYTICKET-1")
print(issues[0].fields.worklogs)

issue = jira.search_issues("MYTICKET-1")
print(issue.fields.worklogs)
```

as described in the documentation, chapter 2.1.4. However, I get the following error (for both cases):

```
AttributeError: type object 'PropertyHolder' has no attribute 'worklogs'
```

Is there something I am doing wrong? Is the documentation outdated? How do I access worklogs (or other fields, like comments etc.)? And what is a `PropertyHolder`? How do I access it (it's not described in the documentation!)?
2017/01/20
[ "https://Stackoverflow.com/questions/41761293", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1581090/" ]
This is because **it seems** `jira.JIRA.search_issues` doesn't fetch all "builtin" fields, like `worklog`, by default (although the documentation only uses the vague term ["fields - [...] Default is to include *all fields*"](https://jira.readthedocs.io/en/master/api.html?highlight=worklogs#jira.JIRA.search_issues) - "all" out of what?). You either have to use [`jira.JIRA.issue`](https://jira.readthedocs.io/en/master/api.html?highlight=worklogs#jira.JIRA.issue):

```py
client = jira.JIRA(...)
issue = client.issue("MYTICKET-1")
```

or explicitly list the fields which you want to fetch in `jira.JIRA.search_issues`:

```py
client = jira.JIRA(...)
issue = client.search_issues("key=MYTICKET-1", fields=[..., 'worklog'])[0]
```

Also be aware that this way you will get at most 20 worklog items attached to your JIRA issue instance. If you need all of them, you should use [`jira.JIRA.worklogs`](https://jira.readthedocs.io/en/master/api.html?highlight=worklogs#jira.JIRA.worklogs):

```py
client = jira.JIRA(...)
issue = client.issue("MYTICKET-1")
worklog = issue.fields.worklog
all_worklogs = client.worklogs(issue) if worklog.total > 20 else worklog.worklogs
```
This question [here](https://stackoverflow.com/questions/24375473/access-specific-information-within-worklogs-in-jira-python) is similar to yours, and someone has posted a workaround. There is also a [similar question on Github](https://github.com/pycontribs/jira/issues/224) in relation to attachments (not worklogs). The last answer in the comments has a workaround that might assist.
48,244,805
I need to execute a function named "send()" which contains an AJAX request. This function is in ajax.js (included in `<head>`). The AJAX success handler updates the src of my image. This function works well; I don't think it is the problem. But when I load the page, the send() function is not executed :o I don't understand why. After loading, when I click on a button, the function works! (The code of the button is not in the code below.) You can see the HTML code, my function below, and the Node.js code. Thanks for your help.

Since your answer, the problem is now: POST <http://localhost:3000/index/gen/> net::ERR\_CONNECTION\_REFUSED. The script is executed; the problem was that some data were not initialized (jQuery sliders).

```
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>Génération</title>
    <link rel="stylesheet" media="screen" title="super style" href="./styles/feuille.css">
    <script src="./script/ajax.js"></script>
</head>
<script src="https://code.jquery.com/jquery-1.10.2.js"></script>
<body>
    <img src="images/map.jpg" class="superbg" alt="Carte du monde"/>
    <div id="main">
        <header>
            <div id="titre">
                <h1>Generator</h1>
            </div>
        </header>
        <img id="image" src="" alt="Map">
        <script>
            send(); //in file ajax.js included in head
            alert($("#image").attr('src'));
        </script>
        <footer>
            Copyright CLEBS 2017, tous droits réservés.
        </footer>
</body>
</html>
```

Here is the send function:

```
function send(){
    var data = {"some data useless for my question"};
    alert("i'm in send() function");
    $.ajax({
        type: 'POST',
        url: 'index/gen/', //(It's node JS)
        data: data,
        success : function(j){
            var img = document.getElementById('image');
            img.src = j;
        },
        complete : function(j){
        },
    });
}
```

Node.js code:

```
app.post('/index/gen/', urlencodedParser, function (req, res){
    const { spawn } = require('child_process');
    const ls = spawn('./generateur/minigen.py',
    req.session.donnees = ''
    ls.stdout.on('data', (data) => {
        req.session.donnees += data.toString();
    });
    ls.stderr.on('datas', (datas) => {
        console.log("Erreur" + `stderr: ${datas}`);
    });
    var options = { // wrapping of the data to send
        mode: 'JSON',
        pythonOptions: ['-u'],
        scriptPath: './generateur',
        // the line below holds the values entered by the user
        args: ["Some data useless for my question"]
    };
    pythonShell.run('generation.py', options, function (err, results) {
        //Make the map to be download
        if (err) throw err;
        res.send(req.session.donnees);
    });
    }
})
```
2018/01/13
[ "https://Stackoverflow.com/questions/48244805", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8846114/" ]
There is no function named `send` in your code. You have a function named `envoi`. Change `function envoi()` to `function send()`. The hazards of multilingual coding!

**Edit: since you updated your answer, try this.**

```
<script>
    $(document).ready(function() {
        send();
        alert($("#image").attr('src'));
    });
</script>
```
The format of the JavaScript is incorrect. The scope of your solution needs to be managed with jQuery. If you simply want to call the function when the page loads, you can call the function inside the jQuery handler. Also, your syntax is not correct.

```
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>Génération</title>
    <link rel="stylesheet" media="screen" title="super style" href="./styles/feuille.css">
    <script src="https://code.jquery.com/jquery-1.12.4.js"></script>
    <script src="./script/ajax.js"></script>
</head>
<body>
    <img src="images/map.jpg" class="superbg" alt="Carte du monde"/>
    <div id="main">
        <header>
            <div id="titre">
                <h1>Generator</h1>
            </div>
        </header>
        <img id="image" src="" alt="Map">
        <footer>
            Copyright CLEBS 2017, tous droits réservés.
        </footer>
        <script>
            $(function(){
                send(); //in file ajax.js included in head
                // alert($("#image").attr('src')); <-- won't update inline due to async ajax
            })
        </script>
</body>
</html>
```

Also, the image tag alert should be updated in the async callback:

```
var data = {"some data useless for my question"}
$.ajax({
    url: 'index/gen/',
    type: "POST",
    data: JSON.stringify(data),
    contentType: "application/json",
    complete: function(j) {
        $("#image").attr('src', j);
    }
});
```

Also be aware that you want jQuery to load before your custom JavaScript. I'm not sure where you are calling Envoi, but this code should help you. It will trigger the "send()" function when the page loads. Also, script tags should be included in the head tag.
48,244,805
I need to execute a function named "send()" which contains an AJAX request. This function is in ajax.js (included in `<head>`). The AJAX success handler updates the src of my image. This function works well; I don't think it is the problem. But when I load the page, the send() function is not executed :o I don't understand why. After loading, when I click on a button, the function works! (The code of the button is not in the code below.) You can see the HTML code, my function below, and the Node.js code. Thanks for your help.

Since your answer, the problem is now: POST <http://localhost:3000/index/gen/> net::ERR\_CONNECTION\_REFUSED. The script is executed; the problem was that some data were not initialized (jQuery sliders).

```
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>Génération</title>
    <link rel="stylesheet" media="screen" title="super style" href="./styles/feuille.css">
    <script src="./script/ajax.js"></script>
</head>
<script src="https://code.jquery.com/jquery-1.10.2.js"></script>
<body>
    <img src="images/map.jpg" class="superbg" alt="Carte du monde"/>
    <div id="main">
        <header>
            <div id="titre">
                <h1>Generator</h1>
            </div>
        </header>
        <img id="image" src="" alt="Map">
        <script>
            send(); //in file ajax.js included in head
            alert($("#image").attr('src'));
        </script>
        <footer>
            Copyright CLEBS 2017, tous droits réservés.
        </footer>
</body>
</html>
```

Here is the send function:

```
function send(){
    var data = {"some data useless for my question"};
    alert("i'm in send() function");
    $.ajax({
        type: 'POST',
        url: 'index/gen/', //(It's node JS)
        data: data,
        success : function(j){
            var img = document.getElementById('image');
            img.src = j;
        },
        complete : function(j){
        },
    });
}
```

Node.js code:

```
app.post('/index/gen/', urlencodedParser, function (req, res){
    const { spawn } = require('child_process');
    const ls = spawn('./generateur/minigen.py',
    req.session.donnees = ''
    ls.stdout.on('data', (data) => {
        req.session.donnees += data.toString();
    });
    ls.stderr.on('datas', (datas) => {
        console.log("Erreur" + `stderr: ${datas}`);
    });
    var options = { // wrapping of the data to send
        mode: 'JSON',
        pythonOptions: ['-u'],
        scriptPath: './generateur',
        // the line below holds the values entered by the user
        args: ["Some data useless for my question"]
    };
    pythonShell.run('generation.py', options, function (err, results) {
        //Make the map to be download
        if (err) throw err;
        res.send(req.session.donnees);
    });
    }
})
```
2018/01/13
[ "https://Stackoverflow.com/questions/48244805", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8846114/" ]
There is no function named `send` in your code. You have a function named `envoi`. Change `function envoi()` to `function send()`. The hazards of multilingual coding!

**Edit: since you updated your answer, try this.**

```
<script>
    $(document).ready(function() {
        send();
        alert($("#image").attr('src'));
    });
</script>
```
You could use the onload event. This event is called when the page is loaded completely, and at that point you can reach the send() function.

<https://www.w3schools.com/jsref/event_onload.asp>
<https://www.w3schools.com/tags/ev_onload.asp>

For instance:

```
<body onload="myFunction()">
<script>
    function myFunction() {
        send(); //in file ajax.js included in head
        alert($("#image").attr('src'));
    }
</script>
```
48,244,805
I need to execute a function named "send()" which contains an AJAX request. This function is in ajax.js (included in `<head>`). The AJAX success handler updates the src of my image. This function works well; I don't think it is the problem. But when I load the page, the send() function is not executed :o I don't understand why. After loading, when I click on a button, the function works! (The code of the button is not in the code below.) You can see the HTML code, my function below, and the Node.js code. Thanks for your help.

Since your answer, the problem is now: POST <http://localhost:3000/index/gen/> net::ERR\_CONNECTION\_REFUSED. The script is executed; the problem was that some data were not initialized (jQuery sliders).

```
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>Génération</title>
    <link rel="stylesheet" media="screen" title="super style" href="./styles/feuille.css">
    <script src="./script/ajax.js"></script>
</head>
<script src="https://code.jquery.com/jquery-1.10.2.js"></script>
<body>
    <img src="images/map.jpg" class="superbg" alt="Carte du monde"/>
    <div id="main">
        <header>
            <div id="titre">
                <h1>Generator</h1>
            </div>
        </header>
        <img id="image" src="" alt="Map">
        <script>
            send(); //in file ajax.js included in head
            alert($("#image").attr('src'));
        </script>
        <footer>
            Copyright CLEBS 2017, tous droits réservés.
        </footer>
</body>
</html>
```

Here is the send function:

```
function send(){
    var data = {"some data useless for my question"};
    alert("i'm in send() function");
    $.ajax({
        type: 'POST',
        url: 'index/gen/', //(It's node JS)
        data: data,
        success : function(j){
            var img = document.getElementById('image');
            img.src = j;
        },
        complete : function(j){
        },
    });
}
```

Node.js code:

```
app.post('/index/gen/', urlencodedParser, function (req, res){
    const { spawn } = require('child_process');
    const ls = spawn('./generateur/minigen.py',
    req.session.donnees = ''
    ls.stdout.on('data', (data) => {
        req.session.donnees += data.toString();
    });
    ls.stderr.on('datas', (datas) => {
        console.log("Erreur" + `stderr: ${datas}`);
    });
    var options = { // wrapping of the data to send
        mode: 'JSON',
        pythonOptions: ['-u'],
        scriptPath: './generateur',
        // the line below holds the values entered by the user
        args: ["Some data useless for my question"]
    };
    pythonShell.run('generation.py', options, function (err, results) {
        //Make the map to be download
        if (err) throw err;
        res.send(req.session.donnees);
    });
    }
})
```
2018/01/13
[ "https://Stackoverflow.com/questions/48244805", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8846114/" ]
You could use the onload event. This event is called when the page is loaded completely, and at that point you can reach the send() function.

<https://www.w3schools.com/jsref/event_onload.asp>
<https://www.w3schools.com/tags/ev_onload.asp>

For instance:

```
<body onload="myFunction()">
<script>
    function myFunction() {
        send(); //in file ajax.js included in head
        alert($("#image").attr('src'));
    }
</script>
```
The format of the JavaScript is incorrect. The scope of your solution needs to be managed with jQuery. If you simply want to call the function when the page loads, you can call the function inside the jQuery handler. Also, your syntax is not correct.

```
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>Génération</title>
    <link rel="stylesheet" media="screen" title="super style" href="./styles/feuille.css">
    <script src="https://code.jquery.com/jquery-1.12.4.js"></script>
    <script src="./script/ajax.js"></script>
</head>
<body>
    <img src="images/map.jpg" class="superbg" alt="Carte du monde"/>
    <div id="main">
        <header>
            <div id="titre">
                <h1>Generator</h1>
            </div>
        </header>
        <img id="image" src="" alt="Map">
        <footer>
            Copyright CLEBS 2017, tous droits réservés.
        </footer>
        <script>
            $(function(){
                send(); //in file ajax.js included in head
                // alert($("#image").attr('src')); <-- won't update inline due to async ajax
            })
        </script>
</body>
</html>
```

Also, the image tag alert should be updated in the async callback:

```
var data = {"some data useless for my question"}
$.ajax({
    url: 'index/gen/',
    type: "POST",
    data: JSON.stringify(data),
    contentType: "application/json",
    complete: function(j) {
        $("#image").attr('src', j);
    }
});
```

Also be aware that you want jQuery to load before your custom JavaScript. I'm not sure where you are calling Envoi, but this code should help you. It will trigger the "send()" function when the page loads. Also, script tags should be included in the head tag.
61,549,690
I have a bash script that starts my Python script. The point of this is that I hand over a lot of (sometimes changing) arguments to the Python script, so I found it useful to start my Python script with a bash script where I "save" my argument list.

```
#!/bin/bash
cd $(dirname $0)
python3 script.py [arg0] [arg1]
```

In my Python script I have the KeyboardInterrupt exception implemented, which would save some open files and then exit the Python script. Now my question: when I run the shell script, I have to press Ctrl+C at least three times, I get some Python errors, and then it stops. Am I guessing right that my Ctrl+C is not recognized by Python, but by the shell script instead? Is there a way to hand over the keyboard interrupt from the shell script to the Python script running in it?

By the way: the Python script is running an infinite loop (if this is important).

The Python script looks like this for the exception. As pointed out, it runs an infinite loop.

```
while True:
    try:
        #doing stuff here
    except KeyboardInterrupt:
        for file in files:
            file.close()
        break
```
2020/05/01
[ "https://Stackoverflow.com/questions/61549690", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Sounds like a case of operator precedence? Does it work if you wrap the second part in brackets, like

```js
console.log('myFather.__proto__ === Object.prototype:' + (myFather.__proto__ === Object.prototype))
```

Operator precedence, as documented at [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Operator_Precedence) or other JS core documentation, defines the order in which operators are evaluated. This list is not that interesting in most cases, as simple assignments within one statement might not use multiple different operators. But in your case, there's a `+` operator and the `===` operator. The addition operator has a **higher** precedence than the equality operator, which means: it is evaluated first.

So, these are the internal steps for your log call, line by line:

```js
console.log('myFather.__proto__ === Object.prototype:' + myFather.__proto__ === Object.prototype)
console.log('myFather.__proto__ === Object.prototype:[object Object]' === Object.prototype)
console.log(false)
```
You are actually doing this:

```
console.log(
    ('myFather.__proto__ === Object.prototype:' + myFather.__proto__) === Object.prototype);
```

So the result of this equality is `false`.
48,364,407
I'm new to Python, and I have found that tons of my questions have already been answered. In 7 years of coding various languages, I've never actually posted a question on here before, so I'm really stumped this time. I'm using Python 3.6.

I have a pandas dataframe with a column that is just Boolean values. I have some code that I only want to execute if all of the rows in this column are True. Elsewhere in my code I have used:

```
if True not in df.column:
```

to identify if not even a single row in df is True. This works fine. But for some reason the converse does not work:

```
if False not in df.column:
```

to identify if all rows in df are True. Even this returns False:

```
import pandas as pd
S = pd.Series([True, True, True])
print(False not in S)
```

But I have found that adding .values to the series makes it work both ways:

```
import pandas as pd
S = pd.Series([True, True, True])
print(False not in S.values)
```

The other alternative I can think of is to loop through the column and use the OR operator to compare each row with a variable initialized as True. Then if the variable makes it all the way to the end as True, all must be True.

So my question is: why does this return False?

```
import pandas as pd
S = pd.Series([True, True, True])
print(False not in S)
```
2018/01/21
[ "https://Stackoverflow.com/questions/48364407", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9246417/" ]
`False not in S` is equivalent to `False not in S.index`. Since the first index element is 0 (which, in turn, is numerically equivalent to `False`), `False` is technically `in` `S`.
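A quick illustration of that index-membership behaviour (a sketch expanding on the answer above):

```python
import pandas as pd

S = pd.Series([True, True, True])  # default index is 0, 1, 2

print(False in S)         # True  -- False == 0, and 0 is an index label
print(False in S.index)   # True  -- same check, made explicit
print(False in S.values)  # False -- membership over the actual values
```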
When you call `s.values` you are going to have access to a `numpy.array` version of the pandas Series/Dataframe. Pandas provides a method called `isin` that is going to behave correctly, when calling `s.isin([False])`
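A short usage sketch of that method for the all-True check from the question:

```python
import pandas as pd

S = pd.Series([True, True, True])

# All rows are True exactly when no row falls in {False}
if not S.isin([False]).any():
    print("all rows are True")
```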
48,364,407
I'm new to Python, and I have found that tons of my questions have already been answered. In 7 years of coding various languages, I've never actually posted a question on here before, so I'm really stumped this time. I'm using Python 3.6.

I have a pandas dataframe with a column that is just Boolean values. I have some code that I only want to execute if all of the rows in this column are True. Elsewhere in my code I have used:

```
if True not in df.column:
```

to identify if not even a single row in df is True. This works fine. But for some reason the converse does not work:

```
if False not in df.column:
```

to identify if all rows in df are True. Even this returns False:

```
import pandas as pd
S = pd.Series([True, True, True])
print(False not in S)
```

But I have found that adding .values to the series makes it work both ways:

```
import pandas as pd
S = pd.Series([True, True, True])
print(False not in S.values)
```

The other alternative I can think of is to loop through the column and use the OR operator to compare each row with a variable initialized as True. Then if the variable makes it all the way to the end as True, all must be True.

So my question is: why does this return False?

```
import pandas as pd
S = pd.Series([True, True, True])
print(False not in S)
```
2018/01/21
[ "https://Stackoverflow.com/questions/48364407", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9246417/" ]
It's not directly what you're asking, but you can use `.all()` on a boolean series to determine if all values are true. Something like:

```
if df["column_name"].all():
    #do something
```
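Applied to the question's own example, as a quick sketch:

```python
import pandas as pd

S = pd.Series([True, True, True])
print(S.all())  # True -- a direct all-rows-True check over the values
```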
When you call `s.values` you are going to have access to a `numpy.array` version of the pandas Series/Dataframe. Pandas provides a method called `isin` that is going to behave correctly, when calling `s.isin([False])`
36,468,707
I'm having trouble loading the R-package `edgeR` in Python using `rpy2`. When I run:

```
import rpy2.robjects as robjects
robjects.r('''
    library(edgeR)
''')
```

I get the following error:

```
/home/user/.local/lib/python2.7/site-packages/rpy2/robjects/functions.py:106: UserWarning: Loading required package: limma
  res = super(Function, self).__call__(*new_args, **new_kwargs)
/home/user/.local/lib/python2.7/site-packages/rpy2/robjects/functions.py:106: UserWarning: Error in dyn.load(file, DLLpath = DLLpath, ...) :
  unable to load shared object '/data/scratch/user/source/anaconda/lib/R/library/edgeR/libs/edgeR.so':
  /data/scratch/user/source/anaconda/lib/R/library/edgeR/libs/edgeR.so: undefined symbol: _ZNSt7__cxx1118basic_stringstreamIcSt11char_traitsIcESaIcEED1Ev
  res = super(Function, self).__call__(*new_args, **new_kwargs)
/home/user/.local/lib/python2.7/site-packages/rpy2/robjects/functions.py:106: UserWarning: Error: package or namespace load failed for ‘edgeR’
  res = super(Function, self).__call__(*new_args, **new_kwargs)
Traceback (most recent call last):
  File "differential_expression.py", line 221, in <module>
    diff_expr_object.run_edgeR()
  File "differential_expression.py", line 127, in run_edgeR
    probs = call_edger(data, groups, sizes, genes)
  File "differential_expression.py", line 64, in call_edger
    ''')
  File "/home/user/.local/lib/python2.7/site-packages/rpy2/robjects/__init__.py", line 321, in __call__
    res = self.eval(p)
  File "/home/user/.local/lib/python2.7/site-packages/rpy2/robjects/functions.py", line 178, in __call__
    return super(SignatureTranslatedFunction, self).__call__(*args, **kwargs)
  File "/home/user/.local/lib/python2.7/site-packages/rpy2/robjects/functions.py", line 106, in __call__
    res = super(Function, self).__call__(*new_args, **new_kwargs)
rpy2.rinterface.RRuntimeError: Error: package or namespace load failed for ‘edgeR’
```

The main problem being:

```
rpy2.rinterface.RRuntimeError: Error: package or namespace load failed for ‘edgeR’
```

However, when I run the following:

```
R
> library(edgeR)
> sessionInfo()
R version 3.2.2 (2015-08-14)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: CentOS release 6.5 (Final)

locale:
 [1] LC_CTYPE=en_ZA.UTF-8       LC_NUMERIC=C
 [3] LC_TIME=en_ZA.UTF-8        LC_COLLATE=en_ZA.UTF-8
 [5] LC_MONETARY=en_ZA.UTF-8    LC_MESSAGES=en_ZA.UTF-8
 [7] LC_PAPER=en_ZA.UTF-8       LC_NAME=C
 [9] LC_ADDRESS=C               LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_ZA.UTF-8 LC_IDENTIFICATION=C

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base

other attached packages:
[1] edgeR_3.12.0 limma_3.26.9
>
```

I can see that `edgeR` is successfully installed and running in `R`. Why would it not be working in Python? I tried to load other packages from `rpy2`, e.g. `library(tools)`, which worked fine.
2016/04/07
[ "https://Stackoverflow.com/questions/36468707", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5964533/" ]
If you want 2 different file buttons, you need to give them different names.

```
<form action="" enctype="multipart/form-data" method="post" accept-charset="utf-8">
    <input type="file" name="userfile1" size="20">
    <input type="file" name="userfile2" size="20">
    <input type="submit" name="submit" value="upload">
```

Then you have to modify your function `ddoo_upload()` like below:

```
function ddoo_upload($filename){
    $config['upload_path'] = './uploads/';
    $config['allowed_types'] = 'gif|jpg|png';
    $config['max_size'] = '100';
    $config['max_width'] = '1024';
    $config['max_height'] = '768';

    $this->load->library('upload', $config);

    if ( ! $this->upload->do_upload($filename))
    {
        $error = array('error' => $this->upload->display_errors());
        return false;
        // $this->load->view('upload_form', $error);
    }
    else
    {
        $data = array('upload_data' => $this->upload->data());
        return true;
        //$this->load->view('upload_success', $data);
    }
}
```

**NOTE:** We are passing `$filename` as a variable and then using it to upload the different files. Now in the controller that the form action redirects to, you need to write the code below.

```
if ($this->input->post('submit')){
    if (isset($_FILES['userfile1']) && $_FILES['userfile1']['name'] != ''){
        $file1 = $this->ddoo_upload('userfile1');
    }
    if (isset($_FILES['userfile2']) && $_FILES['userfile2']['name'] != ''){
        $file2 = $this->ddoo_upload('userfile2');
    }
}
```
```
<form action="http://localhost/cod_login/club/test2" enctype="multipart/form-data" method="post" accept-charset="utf-8">
    <input type="file" name="userfile" size="20" multiple="">
    <input type="submit" name="submit" value="upload">
</form>
```
36,468,707
I'm having trouble loading the R-package `edgeR` in Python using `rpy2`. When I run:

```
import rpy2.robjects as robjects
robjects.r('''
    library(edgeR)
''')
```

I get the following error:

```
/home/user/.local/lib/python2.7/site-packages/rpy2/robjects/functions.py:106: UserWarning: Loading required package: limma
  res = super(Function, self).__call__(*new_args, **new_kwargs)
/home/user/.local/lib/python2.7/site-packages/rpy2/robjects/functions.py:106: UserWarning: Error in dyn.load(file, DLLpath = DLLpath, ...) :
  unable to load shared object '/data/scratch/user/source/anaconda/lib/R/library/edgeR/libs/edgeR.so':
  /data/scratch/user/source/anaconda/lib/R/library/edgeR/libs/edgeR.so: undefined symbol: _ZNSt7__cxx1118basic_stringstreamIcSt11char_traitsIcESaIcEED1Ev
  res = super(Function, self).__call__(*new_args, **new_kwargs)
/home/user/.local/lib/python2.7/site-packages/rpy2/robjects/functions.py:106: UserWarning: Error: package or namespace load failed for ‘edgeR’
  res = super(Function, self).__call__(*new_args, **new_kwargs)
Traceback (most recent call last):
  File "differential_expression.py", line 221, in <module>
    diff_expr_object.run_edgeR()
  File "differential_expression.py", line 127, in run_edgeR
    probs = call_edger(data, groups, sizes, genes)
  File "differential_expression.py", line 64, in call_edger
    ''')
  File "/home/user/.local/lib/python2.7/site-packages/rpy2/robjects/__init__.py", line 321, in __call__
    res = self.eval(p)
  File "/home/user/.local/lib/python2.7/site-packages/rpy2/robjects/functions.py", line 178, in __call__
    return super(SignatureTranslatedFunction, self).__call__(*args, **kwargs)
  File "/home/user/.local/lib/python2.7/site-packages/rpy2/robjects/functions.py", line 106, in __call__
    res = super(Function, self).__call__(*new_args, **new_kwargs)
rpy2.rinterface.RRuntimeError: Error: package or namespace load failed for ‘edgeR’
```

The main problem being:

```
rpy2.rinterface.RRuntimeError: Error: package or namespace load failed for ‘edgeR’
```

However, when I run the following:

```
R
> library(edgeR)
> sessionInfo()
R version 3.2.2 (2015-08-14)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: CentOS release 6.5 (Final)

locale:
 [1] LC_CTYPE=en_ZA.UTF-8       LC_NUMERIC=C
 [3] LC_TIME=en_ZA.UTF-8        LC_COLLATE=en_ZA.UTF-8
 [5] LC_MONETARY=en_ZA.UTF-8    LC_MESSAGES=en_ZA.UTF-8
 [7] LC_PAPER=en_ZA.UTF-8       LC_NAME=C
 [9] LC_ADDRESS=C               LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_ZA.UTF-8 LC_IDENTIFICATION=C

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base

other attached packages:
[1] edgeR_3.12.0 limma_3.26.9
>
```

I can see that `edgeR` is successfully installed and running in `R`. Why would it not be working in Python? I tried to load other packages from `rpy2`, e.g. `library(tools)`, which worked fine.
2016/04/07
[ "https://Stackoverflow.com/questions/36468707", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5964533/" ]
If you want 2 different file buttons, you need to give them different names.

```
<form action="" enctype="multipart/form-data" method="post" accept-charset="utf-8">
    <input type="file" name="userfile1" size="20">
    <input type="file" name="userfile2" size="20">
    <input type="submit" name="submit" value="upload">
```

Then you have to modify your function `ddoo_upload()` like below:

```
function ddoo_upload($filename){
    $config['upload_path'] = './uploads/';
    $config['allowed_types'] = 'gif|jpg|png';
    $config['max_size'] = '100';
    $config['max_width'] = '1024';
    $config['max_height'] = '768';

    $this->load->library('upload', $config);

    if ( ! $this->upload->do_upload($filename))
    {
        $error = array('error' => $this->upload->display_errors());
        return false;
        // $this->load->view('upload_form', $error);
    }
    else
    {
        $data = array('upload_data' => $this->upload->data());
        return true;
        //$this->load->view('upload_success', $data);
    }
}
```

**NOTE:** We are passing `$filename` as a variable and then using it to upload the different files. Now in the controller that the form action redirects to, you need to write the code below.

```
if ($this->input->post('submit')){
    if (isset($_FILES['userfile1']) && $_FILES['userfile1']['name'] != ''){
        $file1 = $this->ddoo_upload('userfile1');
    }
    if (isset($_FILES['userfile2']) && $_FILES['userfile2']['name'] != ''){
        $file2 = $this->ddoo_upload('userfile2');
    }
}
```
This is the controller code I have applied for uploading two images in CodeIgniter:

```
public function index()
{
    if($this->input->post('Submit')){
        //-----------Image File Section Start Here -----------//
        $config['upload_path'] = './uploads/'; // Directory
        $config['allowed_types'] = 'jpg|jpeg|bmp|png'; //type of images allowed
        $config['max_size'] = '30720'; //Max Size
        $config['encrypt_name'] = TRUE; // For unique image name at a time
        $this->load->library('upload', $config); //File Uploading library
        $this->upload->do_upload('userfile'); // input name which have to upload
        $video_upload = $this->upload->data(); //variable which store the path
        //--------------End of Image File Section------------------------//

        //---------Thumbnail Image Upload Section Start Here -----------//
        $config2['upload_path'] = './thumb/'; // Directory
        $config2['allowed_types'] = 'jpg|jpeg|bmp|png'; //type of images allowed
        $config2['max_size'] = '30720'; //Max Size
        $config2['encrypt_name'] = TRUE; // For unique image name at a time
        // We cannot load the upload library a second time (it will not
        // initialize again), which is why initialize() is used here.
        $this->upload->initialize($config2);
        $this->upload->do_upload('txt_thumb'); // File Name
        $thumbnail_upload = $this->upload->data(); // store the name of the file
        //--------End of Thumbnail Upload Section-----------//

        $date = date("d-m-Y"); // Store current date in variable

        // Here the database query to insert
        $data = array(
            'parent_id' => $this->input->post('txt_parent'),
            'cat_id'    => $this->input->post('txt_category'),
            'title'     => $this->input->post('txt_title'),
            'status'    => $this->input->post('txt_status'),
            'featured'  => $thumbnail_upload['file_name'],
            'image'     => $video_upload['file_name'],
            'time'      => $date
        );
        $sql_ins = $this->Insimage->insertimage($data);
        if($sql_ins)
        {
            $data['Success'] = "Image has been succesfully inserted!!";
        }
    }
}
```

This code will surely work to upload 2 images. Enjoy!!!! :-)
36,468,707
I'm having trouble loading the R-package `edgeR` in Python using `rpy2`. When I run:

```
import rpy2.robjects as robjects
robjects.r('''
    library(edgeR)
''')
```

I get the following error:

```
/home/user/.local/lib/python2.7/site-packages/rpy2/robjects/functions.py:106: UserWarning: Loading required package: limma
  res = super(Function, self).__call__(*new_args, **new_kwargs)
/home/user/.local/lib/python2.7/site-packages/rpy2/robjects/functions.py:106: UserWarning: Error in dyn.load(file, DLLpath = DLLpath, ...) :
  unable to load shared object '/data/scratch/user/source/anaconda/lib/R/library/edgeR/libs/edgeR.so':
  /data/scratch/user/source/anaconda/lib/R/library/edgeR/libs/edgeR.so: undefined symbol: _ZNSt7__cxx1118basic_stringstreamIcSt11char_traitsIcESaIcEED1Ev
  res = super(Function, self).__call__(*new_args, **new_kwargs)
/home/user/.local/lib/python2.7/site-packages/rpy2/robjects/functions.py:106: UserWarning: Error: package or namespace load failed for ‘edgeR’
  res = super(Function, self).__call__(*new_args, **new_kwargs)
Traceback (most recent call last):
  File "differential_expression.py", line 221, in <module>
    diff_expr_object.run_edgeR()
  File "differential_expression.py", line 127, in run_edgeR
    probs = call_edger(data, groups, sizes, genes)
  File "differential_expression.py", line 64, in call_edger
    ''')
  File "/home/user/.local/lib/python2.7/site-packages/rpy2/robjects/__init__.py", line 321, in __call__
    res = self.eval(p)
  File "/home/user/.local/lib/python2.7/site-packages/rpy2/robjects/functions.py", line 178, in __call__
    return super(SignatureTranslatedFunction, self).__call__(*args, **kwargs)
  File "/home/user/.local/lib/python2.7/site-packages/rpy2/robjects/functions.py", line 106, in __call__
    res = super(Function, self).__call__(*new_args, **new_kwargs)
rpy2.rinterface.RRuntimeError: Error: package or namespace load failed for ‘edgeR’
```

The main problem being:

```
rpy2.rinterface.RRuntimeError: Error: package or namespace load failed for ‘edgeR’
```

However, when I run the following:

```
R
> library(edgeR)
> sessionInfo()
R version 3.2.2 (2015-08-14)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: CentOS release 6.5 (Final)

locale:
 [1] LC_CTYPE=en_ZA.UTF-8       LC_NUMERIC=C
 [3] LC_TIME=en_ZA.UTF-8        LC_COLLATE=en_ZA.UTF-8
 [5] LC_MONETARY=en_ZA.UTF-8    LC_MESSAGES=en_ZA.UTF-8
 [7] LC_PAPER=en_ZA.UTF-8       LC_NAME=C
 [9] LC_ADDRESS=C               LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_ZA.UTF-8 LC_IDENTIFICATION=C

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base

other attached packages:
[1] edgeR_3.12.0 limma_3.26.9
>
```

I can see that `edgeR` is successfully installed and running in `R`. Why would it not be working in Python? I tried to load other packages from `rpy2`, e.g. `library(tools)`, which worked fine.
2016/04/07
[ "https://Stackoverflow.com/questions/36468707", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5964533/" ]
If you want 2 different file buttons, you need to give them different names.

```
<form action="" enctype="multipart/form-data" method="post" accept-charset="utf-8">
    <input type="file" name="userfile1" size="20">
    <input type="file" name="userfile2" size="20">
    <input type="submit" name="submit" value="upload">
```

Then you have to modify your function `ddoo_upload()` like below:

```
function ddoo_upload($filename){
    $config['upload_path'] = './uploads/';
    $config['allowed_types'] = 'gif|jpg|png';
    $config['max_size'] = '100';
    $config['max_width'] = '1024';
    $config['max_height'] = '768';

    $this->load->library('upload', $config);

    if ( ! $this->upload->do_upload($filename))
    {
        $error = array('error' => $this->upload->display_errors());
        return false;
        // $this->load->view('upload_form', $error);
    }
    else
    {
        $data = array('upload_data' => $this->upload->data());
        return true;
        //$this->load->view('upload_success', $data);
    }
}
```

**NOTE:** We are passing `$filename` as a variable and then using it to upload the different files. Now in the controller that the form action redirects to, you need to write the code below.

```
if ($this->input->post('submit')){
    if (isset($_FILES['userfile1']) && $_FILES['userfile1']['name'] != ''){
        $file1 = $this->ddoo_upload('userfile1');
    }
    if (isset($_FILES['userfile2']) && $_FILES['userfile2']['name'] != ''){
        $file2 = $this->ddoo_upload('userfile2');
    }
}
```
1- Create `image_uploader` in your controller:

```
function image_uploader($filename){
    $config['upload_path'] = './assets/uploads/setting/';
    $config['allowed_types'] = 'gif|jpg|png';
    $config['max_size'] = '100';
    $config['max_width'] = '2000';
    $config['max_height'] = '2000';
    $this->load->library('upload', $config);

    if ( ! $this->upload->do_upload($filename))
    {
        $error = array('error' => $this->upload->display_errors());
    }
    else
    {
        $data_foto = $this->upload->data();
        return $data_foto['file_name'];
    }
}
```

This function returns the name of the image file if the upload was successful.

2- Call `image_uploader` in another function (the form action in your view):

```
public function slideroneupload()
{
    $this->site_security->_make_sure_is_admin();
    $data = $this->input->post();
    $sliderimage = $this->image_uploader('image');
    $thumbimage = $this->image_uploader('thumb');
    // save all data to the database
    // ...
}
```
43,246,862
Is there a way to create a cloudformation template which invokes REST API calls to an EC2 instance? The use case is to modify the configuration of the application without having to use update stack and user-data, because updating user-data is disruptive. I did search through all the documentation and found that this could be done by calling an AWS lambda. However, I was unable to get the right combination of CFM template and invocation properties. Adding a simple lambda, which works stand-alone:

```
from __future__ import print_function
import requests

def handler(event, context):
    r1 = requests.get('https://google.com')
    message = r1.text
    return {
        'message' : message
    }
```

This is named ltest.py, and packaged into ltest.zip with the requests module, etc. ltest.zip is then called in the CFM template:

```
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "Test",
  "Parameters": {
    "ModuleName" : {
      "Description" : "The name of the Python file",
      "Type" : "String",
      "Default" : "ltest"
    },
    "S3Bucket" : {
      "Description" : "The name of the bucket that contains your packaged source",
      "Type" : "String",
      "Default" : "abhinav-temp"
    },
    "S3Key" : {
      "Description" : "The name of the ZIP package",
      "Type" : "String",
      "Default" : "ltest.zip"
    }
  },
  "Resources" : {
    "AMIInfo": {
      "Type": "Custom::AMIInfo",
      "Properties": {
        "ServiceToken": { "Fn::GetAtt" : ["AMIInfoFunction", "Arn"] }
      }
    },
    "AMIInfoFunction": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Code": {
          "S3Bucket": { "Ref": "S3Bucket" },
          "S3Key": { "Ref": "S3Key" }
        },
        "Handler": { "Fn::Join" : [ "", [{ "Ref": "ModuleName" },".handler"] ]},
        "Role": { "Fn::GetAtt" : ["LambdaExecutionRole", "Arn"] },
        "Runtime": "python2.7",
        "Timeout": "30"
      }
    },
    "LambdaExecutionRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": ["lambda.amazonaws.com"]},
            "Action": ["sts:AssumeRole"]
          }]
        },
        "Path": "/",
        "Policies": [{
          "PolicyName": "root",
          "PolicyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
              "Effect": "Allow",
              "Action": ["logs:CreateLogGroup","logs:CreateLogStream","logs:PutLogEvents"],
              "Resource": "arn:aws:logs:*:*:*"
            },
            {
              "Effect": "Allow",
              "Action": ["ec2:DescribeImages"],
              "Resource": "*"
            }]
          }
        }]
      }
    }
  },
  "Outputs" : {
    "AMIID" : {
      "Description": "Result",
      "Value" : { "Fn::GetAtt": [ "AMIInfo", "message" ] }
    }
  }
}
```

The result of the above (with variations of the Fn::GetAtt call) is that the Lambda gets instantiated, but the AMIInfo call is stuck in "CREATE\_FUNCTION". The stack also does not get deleted properly.
2017/04/06
[ "https://Stackoverflow.com/questions/43246862", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2617/" ]
I would attack this with Lambda, but it seems as though you already thought of that and might be dismissing it.

A little bit of a hack, but could you add Files to the instance via Metadata, where the source is the REST url? e.g.

```
"Type": "AWS::EC2::Instance",
"Metadata": {
    "AWS::CloudFormation::Init": {
        "configSets": {
            "CallREST": [ "CallREST" ]
        },
        "CallREST": {
            "files": {
                "c://cfn//junk//rest1output.txt": {
                    "source": "https://myinstance.com/RESTAPI/Rest1/Action1"
                }
            }
        }
    }
}
```

To fix your lambda you need to signal SUCCESS. When CloudFormation creates (and runs) the Lambda, it expects the Lambda to signal success. This is the reason you are getting stuck in "CREATE_IN_PROGRESS".

At the bottom of <http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html> is a function named "send" to help you signal success.

And here's my attempt to integrate it into your function AS PSEUDOCODE, without testing it, but you should get the idea.

```
from __future__ import print_function
import requests

def handler(event, context):
    r1 = requests.get('https://google.com')
    message = r1.text

    # signal complete to CFN
    # send(event, context, responseStatus, responseData, physicalResourceId)
    send(..., ..., SUCCESS, ...)

    return {
        'message' : message
    }
```
A Lambda function triggered by the event, together with lifecycle hooks, can be helpful here. You can hack CloudFormation into doing this, but please mind: it is not designed for this.
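A minimal sketch of what such an event-triggered handler could look like, with the caveats hedged: AWS's `cfnresponse` helper is only auto-provided for inline (ZipFile) Lambda code, so for a zip deployment like the question's you would vendor an equivalent `send` yourself, and the instance URL below is a placeholder.

```python
import requests     # assumed to be bundled, as in the question's ltest.zip

import cfnresponse  # provided by AWS only when the function body is defined inline

def handler(event, context):
    try:
        # hypothetical REST endpoint on the EC2 instance
        r = requests.get('https://my-instance.example.com/api/reconfigure')
        # tell CloudFormation the custom resource finished successfully
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {'message': r.text})
    except Exception as e:
        # signal failure too, or the stack hangs in CREATE_IN_PROGRESS
        cfnresponse.send(event, context, cfnresponse.FAILED, {'error': str(e)})
```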
3,893,038
I use Hakyll to generate some documentation and I noticed that it has a weird way of closing the HTML tags in the code it generates. There was a page where they said that you must generate the markup as they do, or the layout of your page will be broken under some conditions, but I can't find it now. I created a small test page (code below) which has one red layer with the "normal" HTML markup, and a yellow layer with markup similar to what Hakyll generates. I can't see any difference in Firefox between the two divs. Can anybody explain if what they say is true?

```
<html>
<body>

<!-- NORMAL STYLE -->
<div style="background: red">
<p>Make available the code from the library you added to your application. Again, the way to do this varies between languages (from adding import statements in python to adding a jar to the classpath for java)</p>
<p>Create an instance of the client and, in your code, make calls to it through this instance's methods.</p>
</div>

<!-- HAKYLL STYLE -->
<div style="background: yellow"
><p
>Make available the code from the library you added to your application. Again, the way to do this varies between languages (from adding import statements in python to adding a jar to the classpath for java)</p
><p
>Create an instance of the client and, in your code, make calls to it through this instance's methods.</p
></div
>

</body>
</html>
```
2010/10/08
[ "https://Stackoverflow.com/questions/3893038", "https://Stackoverflow.com", "https://Stackoverflow.com/users/212865/" ]
It's actually [pandoc](http://johnmacfarlane.net/pandoc/) that's generating the HTML code. There's a good explanation in the Pandoc issue tracker: <http://code.google.com/p/pandoc/issues/detail?id=134> > > The reason is > because any whitespace (including newline and tabs) between HTML tags will cause the > browser to insert a space character between those elements. It is far easier on the > machine logic to leave these spaces out, because then you don't need to think about > the possible ways that the HTML text formatting could be messing with the browser adding > extra spaces. > > >
There are times when stripping the white space between two tags will make a difference, particularly when dealing with inline elements.
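For illustration (not from the original answer), here is the kind of case where it matters: between inline elements, a newline or space in the source collapses to one rendered space, so these two snippets do not look the same in the browser.

```
<span>A</span>
<span>B</span>                 <!-- renders as "A B": the newline becomes a space -->

<span>A</span><span>B</span>   <!-- renders as "AB": no whitespace, no space -->
```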
3,893,038
I use Hakyll to generate some documentation and I noticed that it has a weird way of closing the HTML tags in the code it generates. There was a page where they said that you must generate the markup as they do, or the layout of your page will be broken under some conditions, but I can't find it now. I created a small test page (code below) which has one red layer with the "normal" HTML markup, and a yellow layer with markup similar to what Hakyll generates. I can't see any difference in Firefox between the two divs. Can anybody explain if what they say is true?

```
<html>
<body>

<!-- NORMAL STYLE -->
<div style="background: red">
<p>Make available the code from the library you added to your application. Again, the way to do this varies between languages (from adding import statements in python to adding a jar to the classpath for java)</p>
<p>Create an instance of the client and, in your code, make calls to it through this instance's methods.</p>
</div>

<!-- HAKYLL STYLE -->
<div style="background: yellow"
><p
>Make available the code from the library you added to your application. Again, the way to do this varies between languages (from adding import statements in python to adding a jar to the classpath for java)</p
><p
>Create an instance of the client and, in your code, make calls to it through this instance's methods.</p
></div
>

</body>
</html>
```
2010/10/08
[ "https://Stackoverflow.com/questions/3893038", "https://Stackoverflow.com", "https://Stackoverflow.com/users/212865/" ]
It's actually [pandoc](http://johnmacfarlane.net/pandoc/) that's generating the HTML code. There's a good explanation in the Pandoc issue tracker: <http://code.google.com/p/pandoc/issues/detail?id=134> > > The reason is > because any whitespace (including newline and tabs) between HTML tags will cause the > browser to insert a space character between those elements. It is far easier on the > machine logic to leave these spaces out, because then you don't need to think about > the possible ways that the HTML text formatting could be messing with the browser adding > extra spaces. > > >
I ran tidy over it and it fixed the unusual linebreaks.
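For reference, a typical HTML Tidy invocation for this kind of cleanup (flags are from Tidy's standard options; check `tidy -help` for your version):

```
tidy -indent -modify page.html
```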
3,183,185
I want to send emails from my app engine application using one of my Google Apps accounts. According to the [GAE python docs](http://code.google.com/appengine/docs/python/mail/overview.html): *The From: address can be the email address of a registered administrator (developer) of the application, the current user if signed in with Google Accounts, or any valid email receiving address for the app (that is, an address of the form string@appid.appspotmail.com).* So I created a user account on my Google Apps domain, no-reply@mydomain.com, to use for outbound email notifications. However, when I try to add the user as an administrator of the app, it fails with this error: **Unauthorized** **You are not authorized to access this application** Is it possible to configure app engine to send emails using a Google Accounts email address?
2010/07/06
[ "https://Stackoverflow.com/questions/3183185", "https://Stackoverflow.com", "https://Stackoverflow.com/users/358925/" ]
You're misusing the send()/recv() functions. send() and recv() are not required to transfer as much data as you request, due to limits that may be present in the kernel. You have to call send() over and over until all data has been pushed through. e.g.:

```
int sent = 0;
int rc;

while ( sent < should_send )
{
    rc = send(sock, buffer + sent, should_send - sent, 0);
    if ( rc <= 0 )  /* error or hangup */
    {
        /* do some error handling, then leave the loop */
        break;
    }
    sent += rc;
}
```

(Note the braces: without them, the `sent += rc;` line would silently become the body of the `if`.)
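The same partial-transfer rule applies on the receiving side; here is a sketch of the mirror-image loop, where `expected` is assumed to be the number of bytes you are waiting for:

```
int received = 0;
int rc;

while ( received < expected )
{
    rc = recv(sock, buffer + received, expected - received, 0);
    if ( rc <= 0 )  /* error, or orderly shutdown by the peer */
    {
        break;
    }
    received += rc;
}
```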
1. m\_socket can't possibly be null the line after you call `m_socket = new Socket(...)`. It will either throw an exception or assign a `Socket` to m\_socket, never null. So that test is pointless. 2. After you call `readLine()` you must check for a null return value, which means EOS, which means the other end has closed the connection, which means you must exit the reading loop and close the socket. 3. As it says in the Javadoc, `InputStream.available()` shouldn't be used for a test for EOS. Its contract is to return the number of bytes that can be read *without blocking.* That's rarely the same as the length of an incoming file via a socket. You must keep reading the socket until EOS: ``` int count; byte[] buffer = new byte[8192]; while ((count = in.read(buffer)) > 0) out.write(buffer, 0, count); ``` If your sending end doesn't close the socket when it finishes sending the file you will have to have it send the file length ahead of the file, and modify the loop above to read exactly that many bytes.
3,183,185
I want to send emails from my app engine application using one of my Google Apps accounts. According to the [GAE python docs](http://code.google.com/appengine/docs/python/mail/overview.html): *The From: address can be the email address of a registered administrator (developer) of the application, the current user if signed in with Google Accounts, or any valid email receiving address for the app (that is, an address of the form string@appid.appspotmail.com).* So I created a user account on my Google Apps domain, no-reply@mydomain.com, to use for outbound email notifications. However, when I try to add the user as an administrator of the app, it fails with this error: **Unauthorized** **You are not authorized to access this application** Is it possible to configure app engine to send emails using a Google Accounts email address?
2010/07/06
[ "https://Stackoverflow.com/questions/3183185", "https://Stackoverflow.com", "https://Stackoverflow.com/users/358925/" ]
Java side,

```
// bufferedreader (a java.io.BufferedReader wrapped around the socket's
// input stream) and flag (a loop-control boolean) are declared elsewhere
int lent2 = 0;
int lengthToReceive = 102400;
char[] chTemp = new char[lengthToReceive];

while (true) {
    int readLength = bufferedreader.read(chTemp, lent2, lengthToReceive - lent2);
    if (readLength == -1) {       // end of stream: the peer closed the connection
        break;
    }
    lent2 += readLength;
    if (lent2 >= lengthToReceive) {
        flag = false;
        break;
    }
}
```
1. m\_socket can't possibly be null the line after you call `m_socket = new Socket(...)`. It will either throw an exception or assign a `Socket` to m\_socket, never null. So that test is pointless. 2. After you call `readLine()` you must check for a null return value, which means EOS, which means the other end has closed the connection, which means you must exit the reading loop and close the socket. 3. As it says in the Javadoc, `InputStream.available()` shouldn't be used for a test for EOS. Its contract is to return the number of bytes that can be read *without blocking.* That's rarely the same as the length of an incoming file via a socket. You must keep reading the socket until EOS: ``` int count; byte[] buffer = new byte[8192]; while ((count = in.read(buffer)) > 0) out.write(buffer, 0, count); ``` If your sending end doesn't close the socket when it finishes sending the file you will have to have it send the file length ahead of the file, and modify the loop above to read exactly that many bytes.
29,682,897
I am using py2neo and I would like to extract the information from query returns so that I can do stuff with it in python. For example, I have a DB containing three "Person" nodes: `for num in graph.cypher.execute("MATCH (p:Person) RETURN count(*)"): print num` outputs: `>> count(*)` `3` Sorry for shitty formatting, it looks essentially the same as a mysql output. However, I would like to use the number 3 for computations, but it has type `py2neo.cypher.core.Record`. How can I convert this to a python int so that I can use it? In a more general sense, how should I go about processing cypher queries so that the data I get back can be used in Python?
2015/04/16
[ "https://Stackoverflow.com/questions/29682897", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2505865/" ]
The clue is in the question: "Assume that I have a fetcher that fetches an image from a given link on a separate thread. The image will then be **cached** in memory." And the answer is the `cache()` operator: "remember the sequence of items emitted by the Observable and emit the same sequence to future Subscribers" from: <https://github.com/ReactiveX/RxJava/wiki/Observable-Utility-Operators> So, the following `Observable` should only fetch the image once, no matter how `Subscribers` subscribe to it: ``` Observable<Bitmap> cachedBitmap = fetchBitmapFrom(url).cache(); ``` **EDIT:** I think the following example proves that the upstream `Observable` is subscribed only once, even if multiple Subscriptions come in before the `Observable` has emitted anything. This should also be true for network requests. ``` package com.example; import rx.Observable; import rx.Subscriber; import rx.schedulers.Schedulers; public class SimpleCacheTest { public static void main(String[] args) { final Observable<Integer> cachedSomething = getSomething().cache(); System.out.println("before first subscription"); cachedSomething.subscribe(new SimpleLoggingSubscriber<Integer>("1")); try { Thread.sleep(1000); } catch (InterruptedException e) { e.printStackTrace(); } System.out.println("before second subscription"); cachedSomething.subscribe(new SimpleLoggingSubscriber<Integer>("2")); try { Thread.sleep(5000); } catch (InterruptedException e) { e.printStackTrace(); } System.out.println("quit"); } private static class SimpleLoggingSubscriber<T> extends Subscriber<T> { private final String tag; public SimpleLoggingSubscriber(final String tag) { this.tag = tag; } @Override public void onCompleted() { System.out.println("onCompleted (" + tag + ")"); } @Override public void onError(Throwable e) { System.out.println("onError (" + tag + ")"); } @Override public void onNext(T t) { System.out.println("onNext (" + tag + "): " + t); } } private static Observable<Integer> getSomething() { return Observable.create(new Observable.OnSubscribe<Integer>(){ @Override public void call(Subscriber<? super Integer> subscriber) { System.out.println("going to sleep now..."); try { Thread.sleep(2000); } catch (InterruptedException e) { e.printStackTrace(); } subscriber.onNext(1); subscriber.onCompleted(); } }).subscribeOn(Schedulers.io()); } } ``` Output: ``` before first subscription going to sleep now... before second subscription onNext (1): 1 onNext (2): 1 onCompleted (1) onCompleted (2) quit ```
Have a look at `ConnectableObservable` and the `.replay()` method. I'm currently using this in my fragments to handle orientation changes:

Fragment's onCreate:

```
ConnectableObservable<MyThing> connectableObservable = retrofitService.fetchMyThing()
        .map(...)
        .replay();
connectableObservable.connect(); // this starts the actual network call
```

Fragment's onCreateView:

```
Subscription subscription = connectableObservable
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(mything -> dosomething());
```

What happens is that I make 1 network request only, and any subscriber will (eventually/immediately) get that response.
29,682,897
I am using py2neo and I would like to extract the information from query returns so that I can do stuff with it in python. For example, I have a DB containing three "Person" nodes: `for num in graph.cypher.execute("MATCH (p:Person) RETURN count(*)"): print num` outputs: `>> count(*)` `3` Sorry for shitty formatting, it looks essentially the same as a mysql output. However, I would like to use the number 3 for computations, but it has type `py2neo.cypher.core.Record`. How can I convert this to a python int so that I can use it? In a more general sense, how should I go about processing cypher queries so that the data I get back can be used in Python?
2015/04/16
[ "https://Stackoverflow.com/questions/29682897", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2505865/" ]
This can be accomplished via ConcurrentMap and AsyncSubject: ``` import java.awt.image.BufferedImage; import java.io.*; import java.net.URL; import java.util.concurrent.*; import javax.imageio.ImageIO; import rx.*; import rx.Scheduler.Worker; import rx.schedulers.Schedulers; import rx.subjects.AsyncSubject; public class ObservableImageCache { final ConcurrentMap<String, AsyncSubject<BufferedImage>> image = new ConcurrentHashMap<>(); public Observable<BufferedImage> get(String url) { AsyncSubject<BufferedImage> result = image.get(url); if (result == null) { result = AsyncSubject.create(); AsyncSubject<BufferedImage> existing = image.putIfAbsent(url, result); if (existing == null) { System.out.println("Debug: Downloading " + url); AsyncSubject<BufferedImage> a = result; Worker w = Schedulers.io().createWorker(); w.schedule(() -> { try { Thread.sleep(500); // for demo URL u = new URL(url); try (InputStream openStream = u.openStream()) { a.onNext(ImageIO.read(openStream)); } a.onCompleted(); } catch (IOException | InterruptedException ex) { a.onError(ex); } finally { w.unsubscribe(); } }); } else { result = existing; } } return result; } public static void main(String[] args) throws Exception { ObservableImageCache cache = new ObservableImageCache(); CountDownLatch cdl = new CountDownLatch(4); Observable<BufferedImage> img1 = cache.get("https://raw.github.com/wiki/ReactiveX/RxJava/images/rx-operators/create.png"); System.out.println("Subscribing for IMG1"); img1.subscribe(e -> System.out.println("IMG1: " + e.getWidth() + "x" + e.getHeight()), Throwable::printStackTrace, cdl::countDown); Thread.sleep(500); Observable<BufferedImage> img2 = cache.get("https://raw.github.com/wiki/ReactiveX/RxJava/images/rx-operators/create.png"); System.out.println("Subscribing for IMG2"); img2.subscribe(e -> System.out.println("IMG2: " + e.getWidth() + "x" + e.getHeight()), Throwable::printStackTrace, cdl::countDown); Observable<BufferedImage> img3 = cache.get("https://raw.github.com/wiki/ReactiveX/RxJava/images/rx-operators/amb.png"); Observable<BufferedImage> img4 = cache.get("https://raw.github.com/wiki/ReactiveX/RxJava/images/rx-operators/amb.png"); Thread.sleep(500); System.out.println("Subscribing for IMG3"); img3.subscribe(e -> System.out.println("IMG3: " + e.getWidth() + "x" + e.getHeight()), Throwable::printStackTrace, cdl::countDown); Thread.sleep(1000); System.out.println("-> Should be immediate: "); System.out.println("Subscribing for IMG4"); img4.subscribe(e -> System.out.println("IMG4: " + e.getWidth() + "x" + e.getHeight()), Throwable::printStackTrace, cdl::countDown); cdl.await(); } } ``` I'm using the ConcurrentMap's putIfAbsent to make sure only one download is triggered for a new url; everyone else will receive the same AsyncSubject on which they can 'wait' and get the data once available and immediately after that. Usually, you'd want to limit the number of concurrent downloads by using a custom Scheduler.
Have a look at `ConnectableObservable` and the `.replay()` method. I'm currently using this in my fragments to handle orientation changes:

Fragment's onCreate:

```
ConnectableObservable<MyThing> connectableObservable = retrofitService.fetchMyThing()
        .map(...)
        .replay();
connectableObservable.connect(); // this starts the actual network call
```

Fragment's onCreateView:

```
Subscription subscription = connectableObservable
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(mything -> dosomething());
```

What happens is that I make 1 network request only, and any subscriber will (eventually/immediately) get that response.
53,892,106
I wanted to make a Python file that makes a copy of itself, then executes the copy and closes itself; the copy then makes another copy of itself, and so on. I am not asking for people to write my code, and this could be taken as just a fun challenge, but I want to learn more about this stuff and help is appreciated. I have already played around with it but can't wrap my mind around it. I've already tried making a py file and pasting a copy of the file itself into it in the two different ways I could think of, but it would just go on forever.

```
import os

# I use this piece of code to easily execute the py file using os
os.startfile("file.py")

# and to make a new py file I just use open()
file = open("file.py","w")
file.write("""hello world
you can use 3 quote marks
to write over multiple lines""")
```

I expect that when you run the program, it makes a copy of itself, runs it, and closes itself, and the newly run program loops over. What actually happens is that either I'm writing code forever or, when I embed the code it pastes into the copy, it rightfully says it doesn't know what that code is, because it's still being written. It's all really confusing and difficult to explain; sorry, it's midnight ATM and I'm tired.
2018/12/22
[ "https://Stackoverflow.com/questions/53892106", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10756702/" ]
I don't have enough rep to reply to @Prune: `os.startfile(file)` only works on Windows, and is [replaced by](https://docs.python.org/3/library/subprocess.html) `subprocess.call`.

`shutil.copy2(src, dst)` works on both Windows and Linux.

Try this solution as well:

```
import shutil
import subprocess

old_file = __file__
new_file = generate_unique_file_name()  # your own helper; see @Prune's answer below
shutil.copy2(old_file, new_file)  # works for both Windows and Linux
subprocess.call('python {}'.format(new_file), shell=True)
```
You're close; you have things in the wrong order. Create the new file, *then* execute it. ``` import os old_file = __file__ new_file = generate_unique_file_name() os.system('cp ' + old_file + ' ' + new_file) #UNIX syntax; for Windows, use "copy" os.startfile(new_file) ``` You'll have to choose & code your preferred method for creating a unique file name. You might want to use a time-stamp as part of the name. You might also want to delete this file before you exit; otherwise, you'll eventually fill your disk with these files.
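A possible sketch of that unique-name helper, using a timestamp as suggested above (the naming scheme here is just illustrative):

```
import os
import time

def generate_unique_file_name():
    # e.g. "file_1545459200123.py", created next to the running script
    base, ext = os.path.splitext(os.path.basename(__file__))
    return '{}_{}{}'.format(base, int(time.time() * 1000), ext)
```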
47,393,177
I'm actually working on a django project and I'm migrating to a CustomUser model. On the database side everything has gone well, and now I want to force my users to update their information (to respect the new model). I would like to do it when they log in (I set all the e-mails to end with @mysite.tmp in order to know whether they have already updated it or not). So if the e-mail ends with this, they should be automatically redirected to the update user view (without having been logged in), and when they submit the form they get logged in (if the form is valid, of course). So my question is: how do I make this happen at login? I could override the login function, but it does not seem to be the best option. Is there a view that I can override? What would you recommend?

EDIT: With your responses I finally chose to override the `logged_in` signal and add a check on the `login_required` decorator that checks whether the user is up-to-date. Now my issue is that I don't understand what a signal is, and so how to override it. Is it a method of the User model I have to override (in which case it would be quite simple), or is it somewhere else? Could you explain it to me, or link me to easy-to-understand documentation on this subject?

PS: I'm working with django 1.11 and python 3.6
2017/11/20
[ "https://Stackoverflow.com/questions/47393177", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8876232/" ]
You can write the logic as: ``` select * from table where (Vendorname = @Vendor) OR (@Vendor IS NULL) ``` One caution: This may not be as optimized as your version, if you have an index on `Vendorname`.
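If that plan-quality caution bites, one commonly used mitigation for such catch-all predicates (assuming this is SQL Server, as the `@Vendor` syntax suggests) is a statement-level recompile hint:

```
select *
from table
where (Vendorname = @Vendor) OR (@Vendor IS NULL)
option (recompile); -- build a plan for the actual parameter value on each run
```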
``` select * from table where Vendorname = case when @Vendor is not null then @Vendor else Vendorname end; ```
47,393,177
I'm actually working on a django project and I'm migrating to a CustomUser model. On the database side everything has gone well, and now I want to force my users to update their information (to respect the new model). I would like to do it when they log in (I set all the e-mails to end with @mysite.tmp in order to know whether they have already updated it or not). So if the e-mail ends with this, they should be automatically redirected to the update user view (without having been logged in), and when they submit the form they get logged in (if the form is valid, of course). So my question is: how do I make this happen at login? I could override the login function, but it does not seem to be the best option. Is there a view that I can override? What would you recommend?

EDIT: With your responses I finally chose to override the `logged_in` signal and add a check on the `login_required` decorator that checks whether the user is up-to-date. Now my issue is that I don't understand what a signal is, and so how to override it. Is it a method of the User model I have to override (in which case it would be quite simple), or is it somewhere else? Could you explain it to me, or link me to easy-to-understand documentation on this subject?

PS: I'm working with django 1.11 and python 3.6
2017/11/20
[ "https://Stackoverflow.com/questions/47393177", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8876232/" ]
You can write the logic as: ``` select * from table where (Vendorname = @Vendor) OR (@Vendor IS NULL) ``` One caution: This may not be as optimized as your version, if you have an index on `Vendorname`.
Using an IF is the right choice here. Using "catch all" parameters can lead to poor choices by the query planner. SQL doesn't use brackets ({ }); it uses BEGIN and END. Thus you get:

```
IF @Vendor IS NOT NULL
BEGIN
    SELECT *
    FROM table
    WHERE Vendorname = @Vendor;
END
ELSE
BEGIN
    SELECT *
    FROM table;
END
```
47,393,177
I'm actually working on a django project and I'm migrating to a CustomUser model. On the database side everything has gone well, and now I want to force my users to update their information (to respect the new model). I would like to do it when they log in (I set all the e-mails to end with @mysite.tmp in order to know whether they have already updated it or not). So if the e-mail ends with this, they should be automatically redirected to the update user view (without having been logged in), and when they submit the form they get logged in (if the form is valid, of course). So my question is: how do I make this happen at login? I could override the login function, but it does not seem to be the best option. Is there a view that I can override? What would you recommend?

EDIT: With your responses I finally chose to override the `logged_in` signal and add a check on the `login_required` decorator that checks whether the user is up-to-date. Now my issue is that I don't understand what a signal is, and so how to override it. Is it a method of the User model I have to override (in which case it would be quite simple), or is it somewhere else? Could you explain it to me, or link me to easy-to-understand documentation on this subject?

PS: I'm working with django 1.11 and python 3.6
2017/11/20
[ "https://Stackoverflow.com/questions/47393177", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8876232/" ]
You can write the logic as: ``` select * from table where (Vendorname = @Vendor) OR (@Vendor IS NULL) ``` One caution: This may not be as optimized as your version, if you have an index on `Vendorname`.
Another solution you can use:

```
select * from table
where isnull(Vendorname, '') = coalesce(@Vendor, Vendorname, '')
```
47,393,177
I'm actually working on a django project and I'm migrating to a CustomUser model. On the database side everything has gone well, and now I want to force my users to update their information (to respect the new model). I would like to do it when they log in (I set all the e-mails to end with @mysite.tmp in order to know whether they have already updated it or not). So if the e-mail ends with this, they should be automatically redirected to the update user view (without having been logged in), and when they submit the form they get logged in (if the form is valid, of course). So my question is: how do I make this happen at login? I could override the login function, but it does not seem to be the best option. Is there a view that I can override? What would you recommend?

EDIT: With your responses I finally chose to override the `logged_in` signal and add a check on the `login_required` decorator that checks whether the user is up-to-date. Now my issue is that I don't understand what a signal is, and so how to override it. Is it a method of the User model I have to override (in which case it would be quite simple), or is it somewhere else? Could you explain it to me, or link me to easy-to-understand documentation on this subject?

PS: I'm working with django 1.11 and python 3.6
2017/11/20
[ "https://Stackoverflow.com/questions/47393177", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8876232/" ]
``` select * from table where Vendorname = case when @Vendor is not null then @Vendor else Vendorname end; ```
Using an IF is the right choice here. Using "catch all" parameters can lead to poor choices by the query planner. SQL doesn't use brackets ({ }); it uses BEGIN and END. Thus you get:

```
IF @Vendor IS NOT NULL
BEGIN
    SELECT *
    FROM table
    WHERE Vendorname = @Vendor;
END
ELSE
BEGIN
    SELECT *
    FROM table;
END
```
47,393,177
I'm actually working on a django project and I'm migrating to a CustomUser model. On the database side everything has gone well, and now I want to force my users to update their information (to respect the new model). I would like to do it when they log in (I set all the e-mails to end with @mysite.tmp in order to know whether they have already updated it or not). So if the e-mail ends with this, they should be automatically redirected to the update user view (without having been logged in), and when they submit the form they get logged in (if the form is valid, of course). So my question is: how do I make this happen at login? I could override the login function, but it does not seem to be the best option. Is there a view that I can override? What would you recommend?

EDIT: With your responses I finally chose to override the `logged_in` signal and add a check on the `login_required` decorator that checks whether the user is up-to-date. Now my issue is that I don't understand what a signal is, and so how to override it. Is it a method of the User model I have to override (in which case it would be quite simple), or is it somewhere else? Could you explain it to me, or link me to easy-to-understand documentation on this subject?

PS: I'm working with django 1.11 and python 3.6
2017/11/20
[ "https://Stackoverflow.com/questions/47393177", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8876232/" ]
``` select * from table where Vendorname = case when @Vendor is not null then @Vendor else Vendorname end; ```
Another solution you can use:

```
select * from table
where isnull(Vendorname, '') = coalesce(@Vendor, Vendorname, '')
```
62,097,023
I have a list with weekly figures and need to obtain the grouped totals by month. The following code does the job, but there should be a more pythonic way of doing it using the standard libraries. The drawback of the code below is that the list needs to be in sorted order.

```
#Test data (not sorted)
sum_weekly=[('2020/01/05', 59), ('2020/01/19', 88), ('2020/01/26', 95), ('2020/02/02', 89), ('2020/02/09', 113), ('2020/02/16', 90), ('2020/02/23', 68), ('2020/03/01', 74), ('2020/03/08', 85), ('2020/04/19', 6), ('2020/04/26', 5), ('2020/05/03', 14), ('2020/05/10', 5), ('2020/05/17', 20), ('2020/05/24', 28),('2020/03/15', 56), ('2020/03/29', 5), ('2020/04/12', 2),]

month = sum_weekly[0][0].split('/')[1]
count=0
out=[]
for item in sum_weekly:
    m_sel = item[0].split('/')[1]
    if m_sel!=month:
        out.append((month, count))
        count=item[1]
    else:
        count+=item[1]
    month = m_sel
out.append((month, count))
# monthly sums output as ('01', 242), ('02', 360), ('03', 220), ('04', 13), ('05', 67)
print (out)
```
2020/05/30
[ "https://Stackoverflow.com/questions/62097023", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1897688/" ]
You could use `defaultdict` to store the result instead of a list. The keys of the dictionary would be the months, and you can simply add up the values with the same month (key). Possible implementation:

```py
# Test Data
from collections import defaultdict

sum_weekly = [('2020/01/05', 59), ('2020/01/19', 88), ('2020/01/26', 95), ('2020/02/02', 89),
              ('2020/02/09', 113), ('2020/02/16', 90), ('2020/02/23', 68), ('2020/03/01', 74),
              ('2020/03/08', 85), ('2020/03/15', 56), ('2020/03/29', 5), ('2020/04/12', 2),
              ('2020/04/19', 6), ('2020/04/26', 5), ('2020/05/03', 14), ('2020/05/10', 5),
              ('2020/05/17', 20), ('2020/05/24', 28)]

results = defaultdict(int)
for date, count in sum_weekly:  # used unpacking to make it clearer
    month = date.split('/')[1]
    # because we use a defaultdict, if the key does not exist
    # the entry for the key will be created and initialized at zero
    results[month] += count

print(results)
```
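If you need the same sorted list-of-tuples output as the original code, the dictionary converts directly (totals verified against the test data):

```py
out = sorted(results.items())
print(out)  # [('01', 242), ('02', 360), ('03', 220), ('04', 13), ('05', 67)]
```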
You can use `itertools.groupby` (it is part of the standard library) - it does pretty much what you did under the hood (grouping together sequences of elements for which the key function gives the same output). It can look like the following:

```
import itertools

def select_month(item):
    return item[0].split('/')[1]

def get_value(item):
    return item[1]

result = [(month, sum(map(get_value, group)))
          for month, group in itertools.groupby(sorted(sum_weekly), select_month)]
print(result)
```
62,097,023
I have a list with weekly figures and need to obtain the grouped totals by month. The following code does the job, but there should be a more pythonic way of doing it using the standard libraries. The drawback of the code below is that the list needs to be in sorted order.

```
#Test data (not sorted)
sum_weekly=[('2020/01/05', 59), ('2020/01/19', 88), ('2020/01/26', 95), ('2020/02/02', 89), ('2020/02/09', 113), ('2020/02/16', 90), ('2020/02/23', 68), ('2020/03/01', 74), ('2020/03/08', 85), ('2020/04/19', 6), ('2020/04/26', 5), ('2020/05/03', 14), ('2020/05/10', 5), ('2020/05/17', 20), ('2020/05/24', 28),('2020/03/15', 56), ('2020/03/29', 5), ('2020/04/12', 2),]

month = sum_weekly[0][0].split('/')[1]
count=0
out=[]
for item in sum_weekly:
    m_sel = item[0].split('/')[1]
    if m_sel!=month:
        out.append((month, count))
        count=item[1]
    else:
        count+=item[1]
    month = m_sel
out.append((month, count))
# monthly sums output as ('01', 242), ('02', 360), ('03', 220), ('04', 13), ('05', 67)
print (out)
```
2020/05/30
[ "https://Stackoverflow.com/questions/62097023", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1897688/" ]
You could use `defaultdict` to store the result instead of a list. The keys of the dictionary would be the months, and you can simply add up the values with the same month (key). Possible implementation:

```py
# Test Data
from collections import defaultdict

sum_weekly = [('2020/01/05', 59), ('2020/01/19', 88), ('2020/01/26', 95), ('2020/02/02', 89),
              ('2020/02/09', 113), ('2020/02/16', 90), ('2020/02/23', 68), ('2020/03/01', 74),
              ('2020/03/08', 85), ('2020/03/15', 56), ('2020/03/29', 5), ('2020/04/12', 2),
              ('2020/04/19', 6), ('2020/04/26', 5), ('2020/05/03', 14), ('2020/05/10', 5),
              ('2020/05/17', 20), ('2020/05/24', 28)]

results = defaultdict(int)
for date, count in sum_weekly:  # used unpacking to make it clearer
    month = date.split('/')[1]
    # because we use a defaultdict, if the key does not exist
    # the entry for the key will be created and initialized at zero
    results[month] += count

print(results)
```
Terse, but maybe not that pythonic:

```
import calendar, collections, datetime, functools

{calendar.month_name[i]: val
 for i, val in functools.reduce(lambda a, b: a + b,
                                [collections.Counter({datetime.datetime.strptime(time, '%Y/%m/%d').month: val})
                                 for time, val in sum_weekly]).items()}
```
62,097,023
I have a list with weekly figures and need to obtain the grouped totals by month. The following code does the job, but there should be a more pythonic way of doing it using the standard libraries. The drawback of the code below is that the list needs to be in sorted order.

```
#Test data (not sorted)
sum_weekly=[('2020/01/05', 59), ('2020/01/19', 88), ('2020/01/26', 95), ('2020/02/02', 89), ('2020/02/09', 113), ('2020/02/16', 90), ('2020/02/23', 68), ('2020/03/01', 74), ('2020/03/08', 85), ('2020/04/19', 6), ('2020/04/26', 5), ('2020/05/03', 14), ('2020/05/10', 5), ('2020/05/17', 20), ('2020/05/24', 28),('2020/03/15', 56), ('2020/03/29', 5), ('2020/04/12', 2),]

month = sum_weekly[0][0].split('/')[1]
count=0
out=[]
for item in sum_weekly:
    m_sel = item[0].split('/')[1]
    if m_sel!=month:
        out.append((month, count))
        count=item[1]
    else:
        count+=item[1]
    month = m_sel
out.append((month, count))
# monthly sums output as ('01', 242), ('02', 360), ('03', 220), ('04', 13), ('05', 67)
print (out)
```
2020/05/30
[ "https://Stackoverflow.com/questions/62097023", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1897688/" ]
You could use `defaultdict` to store the result instead of a list. The keys of the dictionary would be the months, and you can simply add up the values with the same month (key). Possible implementation:

```py
# Test Data
from collections import defaultdict

sum_weekly = [('2020/01/05', 59), ('2020/01/19', 88), ('2020/01/26', 95), ('2020/02/02', 89),
              ('2020/02/09', 113), ('2020/02/16', 90), ('2020/02/23', 68), ('2020/03/01', 74),
              ('2020/03/08', 85), ('2020/03/15', 56), ('2020/03/29', 5), ('2020/04/12', 2),
              ('2020/04/19', 6), ('2020/04/26', 5), ('2020/05/03', 14), ('2020/05/10', 5),
              ('2020/05/17', 20), ('2020/05/24', 28)]

results = defaultdict(int)
for date, count in sum_weekly:  # used unpacking to make it clearer
    month = date.split('/')[1]
    # because we use a defaultdict, if the key does not exist
    # the entry for the key will be created and initialized at zero
    results[month] += count

print(results)
```
A method using pyspark:

```
from pyspark import SparkContext

sc = SparkContext()
l = sc.parallelize(sum_weekly)
r = l.map(lambda x: (x[0].split("/")[1], x[1])).reduceByKey(lambda p, q: (p + q)).collect()
print(r)
#[('04', 13), ('02', 360), ('01', 242), ('03', 220), ('05', 67)]
```
62,097,023
I have a list with weekly figures and need to obtain the grouped totals by month. The following code does the job, but there should be a more pythonic way of doing it using the standard libraries. The drawback of the code below is that the list needs to be in sorted order.

```
#Test data (not sorted)
sum_weekly=[('2020/01/05', 59), ('2020/01/19', 88), ('2020/01/26', 95), ('2020/02/02', 89), ('2020/02/09', 113), ('2020/02/16', 90), ('2020/02/23', 68), ('2020/03/01', 74), ('2020/03/08', 85), ('2020/04/19', 6), ('2020/04/26', 5), ('2020/05/03', 14), ('2020/05/10', 5), ('2020/05/17', 20), ('2020/05/24', 28),('2020/03/15', 56), ('2020/03/29', 5), ('2020/04/12', 2),]

month = sum_weekly[0][0].split('/')[1]
count=0
out=[]
for item in sum_weekly:
    m_sel = item[0].split('/')[1]
    if m_sel!=month:
        out.append((month, count))
        count=item[1]
    else:
        count+=item[1]
    month = m_sel
out.append((month, count))
# monthly sums output as ('01', 242), ('02', 360), ('03', 220), ('04', 13), ('05', 67)
print (out)
```
2020/05/30
[ "https://Stackoverflow.com/questions/62097023", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1897688/" ]
You could use `defaultdict` to store the result instead of a list. The keys of the dictionary would be the months, and you can simply add up the values with the same month (key). Possible implementation:

```py
# Test Data
from collections import defaultdict

sum_weekly = [('2020/01/05', 59), ('2020/01/19', 88), ('2020/01/26', 95), ('2020/02/02', 89),
              ('2020/02/09', 113), ('2020/02/16', 90), ('2020/02/23', 68), ('2020/03/01', 74),
              ('2020/03/08', 85), ('2020/03/15', 56), ('2020/03/29', 5), ('2020/04/12', 2),
              ('2020/04/19', 6), ('2020/04/26', 5), ('2020/05/03', 14), ('2020/05/10', 5),
              ('2020/05/17', 20), ('2020/05/24', 28)]

results = defaultdict(int)
for date, count in sum_weekly:  # used unpacking to make it clearer
    month = date.split('/')[1]
    # because we use a defaultdict, if the key does not exist
    # the entry for the key will be created and initialized at zero
    results[month] += count

print(results)
```
You can accomplish this with a Pandas dataframe. First, you isolate the month, and then use groupby.sum(). ``` import pandas as pd sum_weekly=[('2020/01/05', 59), ('2020/01/19', 88), ('2020/01/26', 95), ('2020/02/02', 89), ('2020/02/09', 113), ('2020/02/16', 90), ('2020/02/23', 68), ('2020/03/01', 74), ('2020/03/08', 85), ('2020/04/19', 6), ('2020/04/26', 5), ('2020/05/03', 14), ('2020/05/10', 5), ('2020/05/17', 20), ('2020/05/24', 28),('2020/03/15', 56), ('2020/03/29', 5), ('2020/04/12', 2)] df= pd.DataFrame(sum_weekly) df.columns=['Date','Sum'] df['Month'] = df['Date'].str.split('/').str[1] print(df.groupby('Month').sum()) ```
62,097,023
I have a list with weekly figures and need to obtain the grouped totals by month. The following code does the job, but there should be a more pythonic way of doing it using the standard libraries. The drawback of the code below is that the list needs to be in sorted order.

```
#Test data (not sorted)
sum_weekly=[('2020/01/05', 59), ('2020/01/19', 88), ('2020/01/26', 95), ('2020/02/02', 89), ('2020/02/09', 113), ('2020/02/16', 90), ('2020/02/23', 68), ('2020/03/01', 74), ('2020/03/08', 85), ('2020/04/19', 6), ('2020/04/26', 5), ('2020/05/03', 14), ('2020/05/10', 5), ('2020/05/17', 20), ('2020/05/24', 28),('2020/03/15', 56), ('2020/03/29', 5), ('2020/04/12', 2),]

month = sum_weekly[0][0].split('/')[1]
count=0
out=[]
for item in sum_weekly:
    m_sel = item[0].split('/')[1]
    if m_sel!=month:
        out.append((month, count))
        count=item[1]
    else:
        count+=item[1]
    month = m_sel
out.append((month, count))
# monthly sums output as ('01', 242), ('02', 360), ('03', 220), ('04', 13), ('05', 67)
print (out)
```
2020/05/30
[ "https://Stackoverflow.com/questions/62097023", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1897688/" ]
You can use `itertools.groupby` (it is part of standard library) - it does pretty much what you did under the hood (grouping together sequences of elements for which the key function gives same output). It can look like the following: ``` import itertools def select_month(item): return item[0].split('/')[1] def get_value(item): return item[1] result = [(month, sum(map(get_value, group))) for month, group in itertools.groupby(sorted(sum_weekly), select_month)] print(result) ```
Terse, but maybe not that pythonic:

```
import calendar, collections, datetime, functools

{calendar.month_name[i]: val
 for i, val in functools.reduce(lambda a, b: a + b,
                                [collections.Counter({datetime.datetime.strptime(time, '%Y/%m/%d').month: val})
                                 for time, val in sum_weekly]).items()}
```
62,097,023
I have a list with weekly figures and need to obtain the grouped totals by month. The following code does the job, but there should be a more pythonic way of doing it using the standard libraries. The drawback of the code below is that the list needs to be in sorted order.

```
#Test data (not sorted)
sum_weekly=[('2020/01/05', 59), ('2020/01/19', 88), ('2020/01/26', 95), ('2020/02/02', 89), ('2020/02/09', 113), ('2020/02/16', 90), ('2020/02/23', 68), ('2020/03/01', 74), ('2020/03/08', 85), ('2020/04/19', 6), ('2020/04/26', 5), ('2020/05/03', 14), ('2020/05/10', 5), ('2020/05/17', 20), ('2020/05/24', 28),('2020/03/15', 56), ('2020/03/29', 5), ('2020/04/12', 2),]

month = sum_weekly[0][0].split('/')[1]
count=0
out=[]
for item in sum_weekly:
    m_sel = item[0].split('/')[1]
    if m_sel!=month:
        out.append((month, count))
        count=item[1]
    else:
        count+=item[1]
    month = m_sel
out.append((month, count))
# monthly sums output as ('01', 242), ('02', 360), ('03', 220), ('04', 13), ('05', 67)
print (out)
```
2020/05/30
[ "https://Stackoverflow.com/questions/62097023", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1897688/" ]
You can use `itertools.groupby` (it is part of the standard library) - it does pretty much what you did under the hood (grouping together sequences of elements for which the key function gives the same output). It can look like the following:

```
import itertools

def select_month(item):
    return item[0].split('/')[1]

def get_value(item):
    return item[1]

result = [(month, sum(map(get_value, group)))
          for month, group in itertools.groupby(sorted(sum_weekly), select_month)]
print(result)
```
A method using pyspark:

```
from pyspark import SparkContext

sc = SparkContext()
l = sc.parallelize(sum_weekly)
r = l.map(lambda x: (x[0].split("/")[1], x[1])).reduceByKey(lambda p, q: (p + q)).collect()
print(r)
#[('04', 13), ('02', 360), ('01', 242), ('03', 220), ('05', 67)]
```
62,097,023
I have a list with weekly figures and need to obtain the grouped totals by month. The following code does the job, but there should be a more pythonic way of doing it using the standard libraries. The drawback of the code below is that the list needs to be in sorted order.

```
#Test data (not sorted)
sum_weekly=[('2020/01/05', 59), ('2020/01/19', 88), ('2020/01/26', 95), ('2020/02/02', 89), ('2020/02/09', 113), ('2020/02/16', 90), ('2020/02/23', 68), ('2020/03/01', 74), ('2020/03/08', 85), ('2020/04/19', 6), ('2020/04/26', 5), ('2020/05/03', 14), ('2020/05/10', 5), ('2020/05/17', 20), ('2020/05/24', 28),('2020/03/15', 56), ('2020/03/29', 5), ('2020/04/12', 2),]

month = sum_weekly[0][0].split('/')[1]
count=0
out=[]
for item in sum_weekly:
    m_sel = item[0].split('/')[1]
    if m_sel!=month:
        out.append((month, count))
        count=item[1]
    else:
        count+=item[1]
    month = m_sel
out.append((month, count))
# monthly sums output as ('01', 242), ('02', 360), ('03', 220), ('04', 13), ('05', 67)
print (out)
```
2020/05/30
[ "https://Stackoverflow.com/questions/62097023", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1897688/" ]
You can use `itertools.groupby` (it is part of standard library) - it does pretty much what you did under the hood (grouping together sequences of elements for which the key function gives same output). It can look like the following: ``` import itertools def select_month(item): return item[0].split('/')[1] def get_value(item): return item[1] result = [(month, sum(map(get_value, group))) for month, group in itertools.groupby(sorted(sum_weekly), select_month)] print(result) ```
You can accomplish this with a Pandas dataframe. First, you isolate the month, and then use groupby.sum(). ``` import pandas as pd sum_weekly=[('2020/01/05', 59), ('2020/01/19', 88), ('2020/01/26', 95), ('2020/02/02', 89), ('2020/02/09', 113), ('2020/02/16', 90), ('2020/02/23', 68), ('2020/03/01', 74), ('2020/03/08', 85), ('2020/04/19', 6), ('2020/04/26', 5), ('2020/05/03', 14), ('2020/05/10', 5), ('2020/05/17', 20), ('2020/05/24', 28),('2020/03/15', 56), ('2020/03/29', 5), ('2020/04/12', 2)] df= pd.DataFrame(sum_weekly) df.columns=['Date','Sum'] df['Month'] = df['Date'].str.split('/').str[1] print(df.groupby('Month').sum()) ```
43,252,531
I am installing python 3.5, django 1.10 and psycopg2 2.7.1 on Amazon EC2 server in order to use a Postgresql database. I am using python 3 inside a virtual environment, and followed the classic installation steps: ``` cd /home/mycode virtualenv-3.5 p3env source p3env/bin/activate pip install django cd kenbot django-admin startproject kenbot pip install psycopg2 ``` Then I have edited the settings.py file in my project to define the DATABASE settings: ``` DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql_psycopg2', 'NAME': 'deleted', 'USER': 'deleted', 'PASSWORD': 'deleted', 'HOST': 'deleted', 'PORT': '5432', } ``` Finally I type (while still in the virtual environment): ``` python manage.py makemigrations ``` And get the following errors: ``` Traceback (most recent call last): File "/home/mycode/p3env/local/lib/python3.5/dist-packages/django/db/backends/postgresql/base.py", line 20, in <module> import psycopg2 as Database ImportError: No module named 'psycopg2' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "manage.py", line 22, in <module> execute_from_command_line(sys.argv) File "/home/mycode/p3env/local/lib/python3.5/dist-packages/django/core/management/__init__.py", line 367, in execute_from_command_line utility.execute() File "/home/mycode/p3env/local/lib/python3.5/dist-packages/django/core/management/__init__.py", line 341, in execute django.setup() File "/home/mycode/p3env/local/lib/python3.5/dist-packages/django/__init__.py", line 27, in setup apps.populate(settings.INSTALLED_APPS) File "/home/mycode/p3env/local/lib/python3.5/dist-packages/django/apps/registry.py", line 108, in populate app_config.import_models(all_models) File "/home/mycode/p3env/local/lib/python3.5/dist-packages/django/apps/config.py", line 199, in import_models self.models_module = import_module(models_module_name) File "/home/mycode/p3env/lib64/python3.5/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 986, in _gcd_import File "<frozen importlib._bootstrap>", line 969, in _find_and_load File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 673, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 662, in exec_module File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed File "/home/mycode/p3env/local/lib/python3.5/dist-packages/django/contrib/auth/models.py", line 4, in <module> from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager File "/home/mycode/p3env/local/lib/python3.5/dist-packages/django/contrib/auth/base_user.py", line 52, in <module> class AbstractBaseUser(models.Model): File "/home/mycode/p3env/local/lib/python3.5/dist-packages/django/db/models/base.py", line 119, in __new__ new_class.add_to_class('_meta', Options(meta, app_label)) File "/home/mycode/p3env/local/lib/python3.5/dist-packages/django/db/models/base.py", line 316, in add_to_class value.contribute_to_class(cls, name) File "/home/mycode/p3env/local/lib/python3.5/dist-packages/django/db/models/options.py", line 214, in contribute_to_class self.db_table = truncate_name(self.db_table, connection.ops.max_name_length()) File "/home/mycode/p3env/local/lib/python3.5/dist-packages/django/db/__init__.py", line 33, in __getattr__ return getattr(connections[DEFAULT_DB_ALIAS], item) File 
"/home/mycode/p3env/local/lib/python3.5/dist-packages/django/db/utils.py", line 211, in __getitem__ backend = load_backend(db['ENGINE']) File "/home/mycode/p3env/local/lib/python3.5/dist-packages/django/db/utils.py", line 115, in load_backend return import_module('%s.base' % backend_name) File "/home/mycode/p3env/lib64/python3.5/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "/home/mycode/p3env/local/lib/python3.5/dist-packages/django/db/backends/postgresql/base.py", line 24, in <module> raise ImproperlyConfigured("Error loading psycopg2 module: %s" % e) django.core.exceptions.ImproperlyConfigured: Error loading psycopg2 module: No module named 'psycopg2' ``` I have not found this issue anywhere on stackoverflow. One point that may be relevant, when I browse the directory of my virtual environment, I notice that django is installed in /home/mycode/p3env/lib/python3.5/dist-packages whereas psysopg2 is in /home/mycode/p3env/lib64/python3.5/dist-packages. Thanks for your help!
2017/04/06
[ "https://Stackoverflow.com/questions/43252531", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5591278/" ]
As suggested by @dentemm: the issue was resolved by copying all the psycopg2 directories from their current location in the lib64 directory to the directory where django was installed, /home/mycode/p3env/lib/python3.5/dist-packages.
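One way to perform the copy described above, using the paths from the question (a sketch; adjust the glob if your psycopg2 directories are named differently):

```
cp -r /home/mycode/p3env/lib64/python3.5/dist-packages/psycopg2* \
      /home/mycode/p3env/lib/python3.5/dist-packages/
```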
Uninstalling and then reinstalling the library really helped! > > **(venv)$ pip uninstall psycopg2** > > > --- > > **(venv)$ pip install psycopg2** > > >
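If a plain reinstall does not help, the prebuilt wheel distribution is another option worth trying (a separate package name, available for psycopg2 2.7 and later):

> 
> **(venv)$ pip install psycopg2-binary**
> 
> 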
52,101,595
SQLAlchemy nicely documents [how to use Association Objects with `back_populates`](http://docs.sqlalchemy.org/en/latest/orm/basic_relationships.html#association-object). However, when copy-and-pasting the example from that documentation, adding children to a parent throws a `KeyError` as following code shows. The model classes are copied 100% from the documentation: ```python from sqlalchemy import Column, ForeignKey, Integer, String from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import relationship from sqlalchemy.schema import MetaData Base = declarative_base(metadata=MetaData()) class Association(Base): __tablename__ = 'association' left_id = Column(Integer, ForeignKey('left.id'), primary_key=True) right_id = Column(Integer, ForeignKey('right.id'), primary_key=True) extra_data = Column(String(50)) child = relationship("Child", back_populates="parents") parent = relationship("Parent", back_populates="children") class Parent(Base): __tablename__ = 'left' id = Column(Integer, primary_key=True) children = relationship("Association", back_populates="parent") class Child(Base): __tablename__ = 'right' id = Column(Integer, primary_key=True) parents = relationship("Association", back_populates="child") parent = Parent(children=[Child()]) ``` Running that code with SQLAlchemy version 1.2.11 throws this exception: ``` lars$ venv/bin/python test.py Traceback (most recent call last): File "test.py", line 26, in <module> parent = Parent(children=[Child()]) File "<string>", line 4, in __init__ File "/Users/lars/coding/sqlalchemy_association_object_test/venv/lib/python3.7/site-packages/sqlalchemy/orm/state.py", line 417, in _initialize_instance manager.dispatch.init_failure(self, args, kwargs) File "/Users/lars/coding/sqlalchemy_association_object_test/venv/lib/python3.7/site-packages/sqlalchemy/util/langhelpers.py", line 66, in __exit__ compat.reraise(exc_type, exc_value, exc_tb) File "/Users/lars/coding/sqlalchemy_association_object_test/venv/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 249, in reraise raise value File "/Users/lars/coding/sqlalchemy_association_object_test/venv/lib/python3.7/site-packages/sqlalchemy/orm/state.py", line 414, in _initialize_instance return manager.original_init(*mixed[1:], **kwargs) File "/Users/lars/coding/sqlalchemy_association_object_test/venv/lib/python3.7/site-packages/sqlalchemy/ext/declarative/base.py", line 737, in _declarative_constructor setattr(self, k, kwargs[k]) File "/Users/lars/coding/sqlalchemy_association_object_test/venv/lib/python3.7/site-packages/sqlalchemy/orm/attributes.py", line 229, in __set__ instance_dict(instance), value, None) File "/Users/lars/coding/sqlalchemy_association_object_test/venv/lib/python3.7/site-packages/sqlalchemy/orm/attributes.py", line 1077, in set initiator=evt) File "/Users/lars/coding/sqlalchemy_association_object_test/venv/lib/python3.7/site-packages/sqlalchemy/orm/collections.py", line 762, in bulk_replace appender(member, _sa_initiator=initiator) File "/Users/lars/coding/sqlalchemy_association_object_test/venv/lib/python3.7/site-packages/sqlalchemy/orm/collections.py", line 1044, in append item = __set(self, item, _sa_initiator) File "/Users/lars/coding/sqlalchemy_association_object_test/venv/lib/python3.7/site-packages/sqlalchemy/orm/collections.py", line 1016, in __set item = executor.fire_append_event(item, _sa_initiator) File "/Users/lars/coding/sqlalchemy_association_object_test/venv/lib/python3.7/site-packages/sqlalchemy/orm/collections.py", line 680, in 
fire_append_event item, initiator) File "/Users/lars/coding/sqlalchemy_association_object_test/venv/lib/python3.7/site-packages/sqlalchemy/orm/attributes.py", line 943, in fire_append_event state, value, initiator or self._append_token) File "/Users/lars/coding/sqlalchemy_association_object_test/venv/lib/python3.7/site-packages/sqlalchemy/orm/attributes.py", line 1210, in emit_backref_from_collection_append_event child_impl = child_state.manager[key].impl KeyError: 'parent' ``` I've filed this as a [bug in SQLAlchemy's issue tracker](https://bitbucket.org/zzzeek/sqlalchemy/issues/4329/keyerror-when-using-association-objects). Maybe somebody can point me to a working solution or workaround in the meanwhile?
2018/08/30
[ "https://Stackoverflow.com/questions/52101595", "https://Stackoverflow.com", "https://Stackoverflow.com/users/543875/" ]
**tldr;** We have to use the Association Proxy extension *and* create a custom constructor for the association object which takes the child object as the first (!) parameter. See the solution based on the example from the question below.

SQLAlchemy's documentation actually states in the next paragraph that we aren't done yet if we want to directly add `Child` models to `Parent` models while skipping the intermediary `Association` models:

> Working with the association pattern in its direct form requires that
> child objects are associated with an association instance before being
> appended to the parent; similarly, access from parent to child goes
> through the association object.

```python
# create parent, append a child via association
p = Parent()
a = Association(extra_data="some data")
a.child = Child()
p.children.append(a)
```

To write convenient code such as requested in the question, i.e. `p.children = [Child()]`, we have to make use of the [Association Proxy extension](http://docs.sqlalchemy.org/en/latest/orm/extensions/associationproxy.html). Here is the solution using an Association Proxy extension, which allows adding children to a parent "directly" without explicitly creating an association between both of them:

```python
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import backref, relationship
from sqlalchemy.schema import MetaData

Base = declarative_base(metadata=MetaData())

class Association(Base):
    __tablename__ = 'association'
    left_id = Column(Integer, ForeignKey('left.id'), primary_key=True)
    right_id = Column(Integer, ForeignKey('right.id'), primary_key=True)
    extra_data = Column(String(50))
    child = relationship("Child", back_populates="parents")
    parent = relationship("Parent", backref=backref("parent_children"))

    def __init__(self, child=None, parent=None):
        self.parent = parent
        self.child = child

class Parent(Base):
    __tablename__ = 'left'
    id = Column(Integer, primary_key=True)
    children = association_proxy("parent_children", "child")

class Child(Base):
    __tablename__ = 'right'
    id = Column(Integer, primary_key=True)
    parents = relationship("Association", back_populates="child")

p = Parent(children=[Child()])
```

Unfortunately I only figured out how to use `backref` instead of `back_populates`, which isn't the "modern" approach. Pay special attention to the custom `__init__` method, which takes the child as the *first* argument.
To make a long story short: you need to append an association object containing your child object onto your parent. Otherwise, you need to follow Lars' suggestion about an association proxy. I recommend the former since it's the ORM-based way:

```
p = Parent()
p.children.append(Association(child = Child()))
session.add(p)
session.commit()
```

Note that if you have any non-nullable fields, they're easy to add on object creation for a quick test commit.
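A minimal end-to-end sketch of this approach (the in-memory SQLite engine and session setup are my own additions for illustration; the model classes are the ones from the question):

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine('sqlite://')   # throwaway in-memory database
Base.metadata.create_all(engine)      # create the three tables
session = sessionmaker(bind=engine)()

p = Parent()
p.children.append(Association(child=Child(), extra_data="some data"))
session.add(p)
session.commit()

# the association row carries both foreign keys plus the extra payload
print(session.query(Association).one().extra_data)  # -> some data
```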
35,996,175
``` ''' Created on 13.3.2016 worm game @author: Hai ''' import pygame import random from pygame import * ## class Worm: def __init__(self, surface): self.surface = surface self.x = surface.get_width() / 2 self.y = surface.get_height() / 2 self.length = 1 self.grow_to = 50 self.vx = 0 self.vy = -1 self.body = [] self.crashed = False self.color = 255, 255, 0 def key_event(self, event): """ Handle keyboard events that affect the worm. """ if event.key == pygame.K_UP and self.vy != 1: self.vx = 0 self.vy = -1 elif event.key == pygame.K_RIGHT and self.vx != -1: self.vx = 1 self.vy = 0 elif event.key == pygame.K_DOWN and self.vy != -1: self.vx = 0 self.vy = 1 elif event.key == pygame.K_LEFT and self.vx != 1: self.vx = -1 self.vy = 0 def move(self): """ Moving worm """ self.x += self.vx # add vx to x self.y += self.vy if (self.x, self.y) in self.body: self.crashed = True self.body.insert(0, (self.x, self.y)) # Changes worm to right size so it is grow_to if self.grow_to > self.length: self.length += 1 # If body is longer than length then pop if len(self.body) > self.length: self.body.pop() """ def draw(self): self.surface.set_at((int(self.x), int(self.y)), (255, 255, 255)) self.surface.set_at((int(self.last[0]),int(self.last[1])), (0,0,0)) """ def position(self): return self.x, self.y def eat(self): self.grow_to += 25 def draw(self): x, y = self.body[0] self.surface.set_at((int(x), int(y)), self.color) x, y = self.body[-1] self.surface.set_at((int(x), int(y)), (0, 0, 0)) # for x, y in self.body: # self.surface.set_at((int(x), int(y)), self.color) # pygame.draw.rect(self.surface, self.color, (self.x, self.y, 6, 6), 0) # worm's head class Food: def __init__(self, surface): self.surface = surface self.x = random.randint(0, surface.get_width()) self.y = random.randint(0, surface.get_height()) self.color = 255,255,255 def draw(self): self.surface.set_at((int(self.x),int(self.y)), self.color) pygame.draw.rect(self.surface, self.color, (self.x, self.y, 6, 6), 0) def position(self): return self.x, self.y """ Check if worm have eaten this food """ def check(self, x, y): if x < self.x or x > self.x + 6: return False elif y < self.y or y > self.y + 6: return False else: return True def erase(self): pygame.draw.rect(self.surface, (0,0,0), (int(self.x), int(self.y), 6, 6), 0) w = h = 500 screen = pygame.display.set_mode((w, h)) clock = pygame.time.Clock() pygame.mixer.init() chomp = pygame.mixer.Sound("bow.wav") score = 0 worm = Worm(screen) ### food = Food(screen) running = True while running: # screen.fill((0, 0, 0)) optimized in worm draw() worm.draw() food.draw() worm.move() if worm.crashed or worm.x <= 0 or worm.x >= w-1 or worm.y <= 0 or worm.y >= h-1: print("U lose") running = False elif food.check(worm.x, worm.y): worm.eat() food.erase() chomp.play() score += 1 print("Score is: %d" % score) food = Food(screen) for event in pygame.event.get(): if event.type == pygame.QUIT: # Pressed X running = False elif event.type == pygame.KEYDOWN: # When pressing keyboard worm.key_event(event) pygame.display.flip() clock.tick(100) ``` hello i am getting error x, y = self.body[0] IndexError: list index out of range and i am doing this tutorial: <https://lorenzod8n.wordpress.com/2008/03/01/pygame-tutorial-9-first-improvements-to-the-game/> I am very new to python, plz help me
2016/03/14
[ "https://Stackoverflow.com/questions/35996175", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4802223/" ]
I had to remove all the N preview stuff from my SDK to make things normal again. Don't forget the cache too.

```
sdk/extras/android/m2repository/com/android/support/design|support-v-13|etc./24~
```
The syntax of the XML line you wrote is wrong. Instead of `android:setTextColor=""` use:

```
android:textColor=""
```
35,996,175
``` ''' Created on 13.3.2016 worm game @author: Hai ''' import pygame import random from pygame import * ## class Worm: def __init__(self, surface): self.surface = surface self.x = surface.get_width() / 2 self.y = surface.get_height() / 2 self.length = 1 self.grow_to = 50 self.vx = 0 self.vy = -1 self.body = [] self.crashed = False self.color = 255, 255, 0 def key_event(self, event): """ Handle keyboard events that affect the worm. """ if event.key == pygame.K_UP and self.vy != 1: self.vx = 0 self.vy = -1 elif event.key == pygame.K_RIGHT and self.vx != -1: self.vx = 1 self.vy = 0 elif event.key == pygame.K_DOWN and self.vy != -1: self.vx = 0 self.vy = 1 elif event.key == pygame.K_LEFT and self.vx != 1: self.vx = -1 self.vy = 0 def move(self): """ Moving worm """ self.x += self.vx # add vx to x self.y += self.vy if (self.x, self.y) in self.body: self.crashed = True self.body.insert(0, (self.x, self.y)) # Changes worm to right size so it is grow_to if self.grow_to > self.length: self.length += 1 # If body is longer than length then pop if len(self.body) > self.length: self.body.pop() """ def draw(self): self.surface.set_at((int(self.x), int(self.y)), (255, 255, 255)) self.surface.set_at((int(self.last[0]),int(self.last[1])), (0,0,0)) """ def position(self): return self.x, self.y def eat(self): self.grow_to += 25 def draw(self): x, y = self.body[0] self.surface.set_at((int(x), int(y)), self.color) x, y = self.body[-1] self.surface.set_at((int(x), int(y)), (0, 0, 0)) # for x, y in self.body: # self.surface.set_at((int(x), int(y)), self.color) # pygame.draw.rect(self.surface, self.color, (self.x, self.y, 6, 6), 0) # worm's head class Food: def __init__(self, surface): self.surface = surface self.x = random.randint(0, surface.get_width()) self.y = random.randint(0, surface.get_height()) self.color = 255,255,255 def draw(self): self.surface.set_at((int(self.x),int(self.y)), self.color) pygame.draw.rect(self.surface, self.color, (self.x, self.y, 6, 6), 0) def position(self): return self.x, self.y """ Check if worm have eaten this food """ def check(self, x, y): if x < self.x or x > self.x + 6: return False elif y < self.y or y > self.y + 6: return False else: return True def erase(self): pygame.draw.rect(self.surface, (0,0,0), (int(self.x), int(self.y), 6, 6), 0) w = h = 500 screen = pygame.display.set_mode((w, h)) clock = pygame.time.Clock() pygame.mixer.init() chomp = pygame.mixer.Sound("bow.wav") score = 0 worm = Worm(screen) ### food = Food(screen) running = True while running: # screen.fill((0, 0, 0)) optimized in worm draw() worm.draw() food.draw() worm.move() if worm.crashed or worm.x <= 0 or worm.x >= w-1 or worm.y <= 0 or worm.y >= h-1: print("U lose") running = False elif food.check(worm.x, worm.y): worm.eat() food.erase() chomp.play() score += 1 print("Score is: %d" % score) food = Food(screen) for event in pygame.event.get(): if event.type == pygame.QUIT: # Pressed X running = False elif event.type == pygame.KEYDOWN: # When pressing keyboard worm.key_event(event) pygame.display.flip() clock.tick(100) ``` hello i am getting error x, y = self.body[0] IndexError: list index out of range and i am doing this tutorial: <https://lorenzod8n.wordpress.com/2008/03/01/pygame-tutorial-9-first-improvements-to-the-game/> I am very new to python, plz help me
2016/03/14
[ "https://Stackoverflow.com/questions/35996175", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4802223/" ]
I had to remove all the N preview stuff from my SDK to make things normal again. Don't forget the cache too.

```
sdk/extras/android/m2repository/com/android/support/design|support-v-13|etc./24~
```
You can change the text color in the layout XML by using `android:textColor`, for example:

```
android:textColor="#0E0E9A"
```

This overrides style.xml. The only way to override your layout.xml is in code, for example:

```
mEditText.setTextColor(Color.DKGRAY);
```
14,738,725
I am asked:

> Using your raspberry pi, write a python script that determines the randomness
> of /dev/random and /dev/urandom. Read bytes and histogram the results.
> Plot in matplotlib. For your answer include the python script.

I am currently lost on the phrasing "determines the randomness." I can read from urandom and random with:

```
#rb - reading as binary
devrndm = open("/dev/random", 'rb')
#read to a file instead of mem?
rndmdata = devrndm.read(25) #read 25bytes
```

or

```
with open("/dev/random", 'rb') as f:
    print repr(f.read(10))
```

I think the purpose of this exercise was to find out that urandom is faster and has a larger pool than random. However, if I try to read anything over ~15 bytes, the time to read seems to increase exponentially. So I am lost right now on how to compare 'randomness'. If I read both urandom and random into respective files, how could I compare them?
2013/02/06
[ "https://Stackoverflow.com/questions/14738725", "https://Stackoverflow.com", "https://Stackoverflow.com/users/779920/" ]
Could it be as simple as:

```
In [664]: f = open("/dev/random", "rb")

In [665]: len(set(f.read(256)))
Out[665]: 169

In [666]: ff = open("/dev/urandom", "rb")

In [667]: len(set(ff.read(256)))
Out[667]: 167

In [669]: len(set(f.read(512)))
Out[669]: 218

In [670]: len(set(ff.read(512)))
Out[670]: 224
```

I.e., asking for 256 bytes doesn't give back 256 unique values. So you could plot increasing sample sizes against the unique count until it reaches the 256 saturation point.
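Since the exercise also asks for a histogram, a minimal matplotlib sketch along the same lines might look like this (the sample size and labels are my own choices):

```python
import collections
import matplotlib.pyplot as plt

with open("/dev/urandom", "rb") as f:
    data = f.read(4096)  # urandom never blocks, so a larger sample is cheap

counts = collections.Counter(bytearray(data))  # byte value -> frequency
plt.bar(list(counts.keys()), list(counts.values()))
plt.xlabel("byte value (0-255)")
plt.ylabel("frequency")
plt.title("/dev/urandom byte histogram")
plt.show()
```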
Your experience may be exactly what they're looking for. From the man page of urandom(4): > > When read, the /dev/random device will only return > random bytes within the estimated number of bits of noise > in the entropy pool. /dev/random should be suitable for uses that need very high quality randomness such as > one-time pad or key generation. When the entropy pool is empty, reads from /dev/random will block until addi‐ > tional environmental noise is gathered. > > > A read from the /dev/urandom device will not block waiting for more entropy. > > > Note the bit about blocking. urandom won't, random will. Particularly in an embedded context, additional entropy may be hard to come by, which would lead to the blocking you see.
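A quick way to observe the blocking difference is to time small reads from both devices. A sketch (the read size is kept tiny on purpose, because /dev/random can stall for a long time on an idle machine):

```python
import time

def timed_read(path, n=16):
    start = time.time()
    with open(path, "rb") as f:
        f.read(n)
    return time.time() - start

print("urandom:", timed_read("/dev/urandom"))  # returns immediately
print("random: ", timed_read("/dev/random"))   # may pause while entropy accumulates
```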
66,104,854
I want to download data from the `USDA` site with custom queries. Instead of manually selecting queries on the website, I am wondering how to do this more conveniently in Python. To do so, I used `requests` to access the URL and read the content, but it is not obvious to me how to pass the queries, make a selection, and download the data as `csv`. Does anyone know an easy way of doing this in Python? Is there any way to download the data from the URL with specific queries? Any idea?

**This is my current attempt.**

Here is the [url](https://www.marketnews.usda.gov/mnp/ls-report-retail?&repType=summary&portal=ls&category=Retail&species=BEEF&startIndex=1) from which I am going to select data with custom queries:

```
import io
import requests
import pandas as pd

url="https://www.marketnews.usda.gov/mnp/ls-report-retail?&repType=summary&portal=ls&category=Retail&species=BEEF&startIndex=1"
s=requests.get(url).content
df=pd.read_csv(io.StringIO(s.decode('utf-8')))
```

Before reading the response into `pandas`, I need to pass the following queries for correct data selection:

```
Category = "Retail"
Report Type = "Item"
Species = "Beef"
Region(s) = "National"
Start Dates = "2020-01-01"
End Date = "2021-02-08"
```

It is not obvious to me how to pass these queries along with the request and then download the filtered data as `csv`. Is there an efficient way of doing this in Python? Any thoughts? Thanks
2021/02/08
[ "https://Stackoverflow.com/questions/66104854", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14126595/" ]
A few details:

* The simplest format is text rather than HTML; I got the URL for the text download from the HTML page.
* `requests.get(params=)` takes a `dict`. Build it up and pass it; no need to deal with building the complete URL string.
* The text is clearly space-delimited, with a minimum of a double space between columns.

```
import io
import requests
import pandas as pd

url="https://www.marketnews.usda.gov/mnp/ls-report-retail"
p = {"repType":"summary","species":"BEEF","portal":"ls","category":"Retail","format":"text"}

r = requests.get(url, params=p)
df = pd.read_csv(io.StringIO(r.text), sep=r"\s\s+", engine="python")
```

| | Date | Region | Feature Rate | Outlets | Special Rate | Activity Index |
| --- | --- | --- | --- | --- | --- | --- |
| 0 | 02/05/2021 | NATIONAL | 69.40% | 29,200 | 20.10% | 81,650 |
| 1 | 02/05/2021 | NORTHEAST | 75.00% | 5,500 | 3.80% | 17,520 |
| 2 | 02/05/2021 | SOUTHEAST | 70.10% | 7,400 | 28.00% | 23,980 |
| 3 | 02/05/2021 | MIDWEST | 75.10% | 6,100 | 19.90% | 17,430 |
| 4 | 02/05/2021 | SOUTH CENTRAL | 57.90% | 4,900 | 26.40% | 9,720 |
| 5 | 02/05/2021 | NORTHWEST | 77.50% | 1,300 | 2.50% | 3,150 |
| 6 | 02/05/2021 | SOUTHWEST | 63.20% | 3,800 | 27.50% | 9,360 |
| 7 | 02/05/2021 | ALASKA | 87.00% | 200 | .00% | 290 |
| 8 | 02/05/2021 | HAWAII | 46.70% | 100 | .00% | 230 |
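If a CSV file on disk is the end goal, the parsed frame can simply be written back out (the filename here is arbitrary):

```python
df.to_csv("beef_retail_summary.csv", index=False)
```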
Just format the query data in the url - it's actually a REST API: To add more query data, as @mullinscr said, you can change the values on the left and press submit, then see the query name in the URL (for example, start date is called `repDate`). If you hover on the Download as XML link, you will also discover you can specify the download format using `format=<format_name>`. Parsing the tabular data in XML using pandas might be easier, so I would append `format=xml` at the end as well. ``` category = "Retail" report_type = "Item" species = "BEEF" regions = "NATIONAL" start_date = "01-01-2019" end_date = "01-01-2021" # the website changes "-" to "%2F" start_date = start_date.replace("-", "%2F") end_date = end_date.replace("-", "%2F") url = f"https://www.marketnews.usda.gov/mnp/ls-report-retail?runReport=true&portal=ls&startIndex=1&category={category}&repType={report_type}&species={species}&region={regions}&repDate={start_date}&endDate={end_date}&compareLy=No&format=xml" # parse with pandas, etc... ```
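As a sketch of the parsing step left open above: assuming pandas >= 1.3, which provides `read_xml` (older versions would need `xml.etree.ElementTree` instead, and the `xpath=` argument may need adjusting to the report's structure):

```python
import pandas as pd

df = pd.read_xml(url)  # url is the query URL built above
print(df.head())
df.to_csv("usda_beef.csv", index=False)  # arbitrary output filename
```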
57,252,637
I'm using PySpark (a new thing for me). Now, suppose I have the following table:

```
+-------+-------+----------+
| Col1  | Col2  | Question |
+-------+-------+----------+
| val11 | val12 | q1       |
| val21 | val22 | q2       |
| val31 | val32 | q3       |
+-------+-------+----------+
```

and I would like to append to it a new column, `random_question`, which is in fact a permutation of the values in the `Question` column, so the result might look like this:

```
+-------+-------+----------+-----------------+
| Col1  | Col2  | Question | random_question |
+-------+-------+----------+-----------------+
| val11 | val12 | q1       | q2              |
| val21 | val22 | q2       | q3              |
| val31 | val32 | q3       | q1              |
+-------+-------+----------+-----------------+
```

I've tried to do that as follows:

```python
df.withColumn(
    'random_question',
    df.orderBy(rand(seed=0))['question']
).createOrReplaceTempView('with_random_questions')
```

The problem is that the above code appends the required column but WITHOUT permuting the values in it.

What am I doing wrong and how can I fix this?

Thank you,

Gilad
2019/07/29
[ "https://Stackoverflow.com/questions/57252637", "https://Stackoverflow.com", "https://Stackoverflow.com/users/690045/" ]
This should do the trick: ``` import pyspark.sql.functions as F questions = df.select(F.col('Question').alias('random_question')) random = questions.orderBy(F.rand()) ``` Give the dataframes a unique row id: ``` df = df.withColumn('row_id', F.monotonically_increasing_id()) random = random.withColumn('row_id', F.monotonically_increasing_id()) ``` Join them by row id: ``` final_df = df.join(random, 'row_id') ```
The above answer is wrong. You are not guaranteed to have the same set of ids in the two dataframes and you will lose rows. ``` df = spark.createDataFrame(pd.DataFrame({'a':[1,2,3,4],'b':[10,11,12,13],'c':[100,101,102,103]})) questions = df.select(F.col('a').alias('random_question')) random = questions.orderBy(F.rand()) df = df.withColumn('row_id', F.monotonically_increasing_id()) random = random.withColumn('row_id', F.monotonically_increasing_id()) df.show() random.show() ``` output on my system: ``` +---+---+---+-----------+ | a| b| c| row_id| +---+---+---+-----------+ | 1| 10|100| 8589934592| | 2| 11|101|25769803776| | 3| 12|102|42949672960| | 4| 13|103|60129542144| +---+---+---+-----------+ +---------------+-----------+ |random_question| row_id| +---------------+-----------+ | 4| 0| | 1| 8589934592| | 2|17179869184| | 3|25769803776| +---------------+-----------+ ``` Use the following utility to add permuted columns in place of the original columns or as new columns ``` from pyspark.sql.types import StructType, StructField from pyspark.sql.functions import rand, col from pyspark.sql import Row def permute_col_maintain_corr_join(df, colnames, newnames=[], replace = False): ''' colname: list of columns to be permuted newname: list of new names for the permuted columns replace: whether to add permuted columns as new columns or replace the original columne ''' def flattener(rdd_1): r1 = rdd_1[0].asDict() idx = rdd_1[1] combined_dict = {**r1,**{'index':idx}} out_row = Row(**combined_dict) return out_row def compute_schema_wid(df): dfs = df.schema.fields ids = StructField('index', IntegerType(), False) return StructType(dfs+[ids]) if not newnames: newnames = [f'{i}_sha' for i in colnames] assert len(colnames) == len(newnames) if not replace: assert not len(set(df.columns).intersection(set(newnames))), 'with replace False newnames cannot contain a column name from df' else: _rc = set(df.columns) - set(colnames) assert not len(_rc.intersection(set(newnames))), 'with replace True newnames cannot contain a column name from df other than one from colnames' df_ts = df.select(*colnames).toDF(*newnames) if replace: df = df.drop(*colnames) df_ts = df_ts.orderBy(rand()) df_ts_s = compute_schema_wid(df_ts) df_s = compute_schema_wid(df) df_ts = df_ts.rdd.zipWithUniqueId().map(flattener).toDF(schema=df_ts_s) df = df.rdd.zipWithUniqueId().map(flattener).toDF(schema=df_s) df = df.join(df_ts,on='index').drop('index') return df ```
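A hypothetical usage sketch for the utility above, reusing a frame like the one from the demonstration:

```python
# replaces column 'a' with a permuted copy named 'a_sha' (the default suffix),
# keeping the row count intact
df_permuted = permute_col_maintain_corr_join(df, colnames=['a'], replace=True)
df_permuted.show()
```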
57,373,034
I have a list of tuples:

```
d = [("a", "x"), ("b", "y"), ("a", "y")]
```

and the `DataFrame`:

```
     y    x
b  0.0  0.0
a  0.0  0.0
```

I would like to replace any `0` with `1` if the row and column labels correspond to a tuple in `d`, such that the new DataFrame is:

```
     y    x
b  1.0  0.0
a  1.0  1.0
```

I am currently using:

```
for i, j in d:
    df.loc[i, j] = 1.0
```

This seems to me like the most "pythonic" approach, but for a `DataFrame` of shape 20000 * 20000 and a list of length 10000, this process takes forever. There must be a better way of accomplishing this. Any ideas?

Thanks
2019/08/06
[ "https://Stackoverflow.com/questions/57373034", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6017833/" ]
**Approach #1: No bad entries in `d`**

Here's one NumPy-based method:

```
def assign_val(df, d, newval=1):
    # Get d-rows,cols as arrays for efficient usage later on
    di,dc = np.array([j[0] for j in d]), np.array([j[1] for j in d])

    # Get col and index data
    i,c = df.index.values.astype(di.dtype),df.columns.values.astype(dc.dtype)

    # Locate row indexes from d back to df
    sidx_i = i.argsort()
    I = sidx_i[np.searchsorted(i,di,sorter=sidx_i)]

    # Locate column indexes from d back to df
    sidx_c = c.argsort()
    C = sidx_c[np.searchsorted(c,dc,sorter=sidx_c)]

    # Assign into array data with new values
    df.values[I,C] = newval
    # Use df.to_numpy(copy=False)[I,C] = newval on newer pandas versions
    return df
```

Sample run:

```
In [21]: df = pd.DataFrame(np.zeros((2,2)), columns=['y','x'], index=['b','a'])

In [22]: d = [("a", "x"), ("b", "y"), ('a','y')]

In [23]: assign_val(df, d, newval=1)
Out[23]:
     y    x
b  1.0  0.0
a  1.0  1.0
```

**Approach #2: Generic one**

If there are any *bad* entries in `d`, we need to filter those out. So, a modified version for that generic case would be:

```
def ssidx(i,di):
    sidx_i = i.argsort()
    idx_i = np.searchsorted(i,di,sorter=sidx_i)
    invalid_mask = idx_i==len(sidx_i)
    idx_i[invalid_mask] = 0
    I = sidx_i[idx_i]
    invalid_mask |= i[I]!=di
    return I,invalid_mask

# Get d-rows,cols as arrays for efficient usage later on
di,dc = np.array([j[0] for j in d]), np.array([j[1] for j in d])

# Get col and index data
i,c = df.index.values.astype(di.dtype),df.columns.values.astype(dc.dtype)

# Locate row indexes from d back to df
I,badmask_I = ssidx(i,di)

# Locate column indexes from d back to df
C,badmask_C = ssidx(c,dc)

badmask = badmask_I | badmask_C
goodmask = ~badmask
df.values[I[goodmask],C[goodmask]] = newval
```
Use [`get_dummies`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.get_dummies.html) with `DataFrame` constructor: ``` df = pd.get_dummies(pd.DataFrame(d).set_index(0)[1]).rename_axis(None).max(level=0) ``` Or use `zip` with `Series`: ``` lst = list(zip(*d)) df = pd.get_dummies(pd.Series(lst[1], index = lst[0])).max(level=0) ``` --- ``` print (df) x y a 1 1 b 0 1 ```
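One caveat sketch: the rebuilt frame only contains labels that occur in `d`, so to line it up with a pre-existing zero-filled frame (called `orig` here, a name of my own choosing) you could reindex:

```python
df = df.reindex(index=orig.index, columns=orig.columns, fill_value=0)
```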
40,664,226
I'm trying to get some tweet data from a MySQL database. I got tons of encoding errors while developing this code. The last `for` loop below is the only way I found to run the code, and it produces an output file full of \uxx escape sequences all around, as you can see here:

```
[{..., "lang_tweet": "es", "text_tweet": "Recuerdo un d\u00eda de, *llamada a la 1:45*, \"Micho, me va a dar algo, estoy temblando, me tome un moster y un balium... Que me muero.!!\",...},...]
```

I've gone around and around trying different solutions, but the thing is that I got really confused by the abstractions of encoding and decoding. What can I do to fix this? Or maybe it would be easier to just grab the dirty JSON and 'parse' it, decoding those characters manually.

Here is the code I'm using to query the db:

```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import pymysql
import collections
import json

conn = pymysql.connect(host='localhost', user='sut', passwd='r', db='tweetsjun2016')
cur = conn.cursor()
cur.execute("""
SELECT *
FROM 20160607_tweets
WHERE 20160607_tweets.creation_date >= '2016-06-07 10:51'
AND 20160607_tweets.creation_date <= '2016-06-07 11:51'
AND 20160607_tweets.lang_tweet = "es"
AND 20160607_tweets.has_keyword = 1
AND 20160607_tweets.rt = 0
LIMIT 20
""")

objects_list = []
for row in cur:
    d = collections.OrderedDict()
    d['download_date'] = row[1]
    d['creation_date'] = row[2]
    d['id_user'] = row[5]
    d['favorited'] = row[7]
    d['lang_tweet'] = row[10]
    d['text_tweet'] = row[11].decode('latin1')
    d['rt'] = row[12]
    d['rt_count'] = row[13]
    d['has_keyword'] = row[19]
    objects_list.append(d)
    # print(row[11].decode('latin1')) <- looks perfect, it prints with accents and fine

j = json.dumps(objects_list, default=date_handler, encoding='latin1')

objects_file = "test23" + "_dicts"
f = open(objects_file,'w')
print >> f, j

cur.close()
conn.close()
```

If I delete the `*.decode('latin1')` method from all its call sites, I get this error:

```
Traceback (most recent call last):
  File "test.py", line 51, in <module>
    j = json.dumps(objects_list, default=date_handler)
  File "C:\Users\Vichoko\Anaconda2\lib\json\__init__.py", line 251, in dumps
    sort_keys=sort_keys, **kw).encode(obj)
  File "C:\Users\Vichoko\Anaconda2\lib\json\encoder.py", line 207, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "C:\Users\Vichoko\Anaconda2\lib\json\encoder.py", line 270, in iterencode
    return _iterencode(o, 0)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xed in position 13: invalid continuation byte
```

I really can't figure out in what form the string is coming from the db to my script. Thanks for reading; any idea would be appreciated.

Edit 1: Here you can see how the JSON files are being exported with the encoding error in the `text_tweet` key: <https://github.com/Vichoko/real-time-twit/blob/master/auto_labeling/json/tweets_sismos/tweetsago20160.json>
2016/11/17
[ "https://Stackoverflow.com/questions/40664226", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5298808/" ]
This is not string concatenation, but pointer arithmetic: it adds `.x` and `.y` to the `","` pointer:

```
cursorLocation.x + "," + cursorLocation.y
```

Instead, try e.g.:

```
char s[256];
sprintf_s(s, "%d,%d", cursorLocation.x, cursorLocation.y);
OutputDebugStringA(s); // added 'A' after @IInspectable's comment, but
                       // using UNICODE and wchar_t might be better indeed
```
String concatenation doesn't work with integers. Try using `std::ostringstream`:

```
std::ostringstream out_stream;
out_stream << cursorLocation.x << ", " << cursorLocation.y;
OutputDebugString(out_stream.str().c_str());
```
45,375,944
While parsing attributes using [`__dict__`](https://docs.python.org/3/library/stdtypes.html#object.__dict__), my [`@staticmethod`](https://docs.python.org/3/library/functions.html?highlight=staticmethod#staticmethod) is not [`callable`](https://docs.python.org/3/library/functions.html?highlight=staticmethod#callable). ```py Python 2.7.5 (default, Aug 29 2016, 10:12:21) [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from __future__ import (absolute_import, division, print_function) >>> class C(object): ... @staticmethod ... def foo(): ... for name, val in C.__dict__.items(): ... if name[:2] != '__': ... print(name, callable(val), type(val)) ... >>> C.foo() foo False <type 'staticmethod'> ``` * How is this possible? * How to check if a static method is callable? --- I provide below a more detailed example: Script `test.py` ---------------- ```py from __future__ import (absolute_import, division, print_function) class C(object): @staticmethod def foo(): return 42 def bar(self): print('Is bar() callable?', callable(C.bar)) print('Is foo() callable?', callable(C.foo)) for attribute, value in C.__dict__.items(): if attribute[:2] != '__': print(attribute, '\t', callable(value), '\t', type(value)) c = C() c.bar() ``` Result for python2 ------------------ ```py > python2.7 test.py Is bar() callable? True Is foo() callable? True bar True <type 'function'> foo False <type 'staticmethod'> ``` Same result for python3 ----------------------- ```py > python3.4 test.py Is bar() callable? True Is foo() callable? True bar True <class 'function'> foo False <class 'staticmethod'> ```
2017/07/28
[ "https://Stackoverflow.com/questions/45375944", "https://Stackoverflow.com", "https://Stackoverflow.com/users/938111/" ]
The reason for this behavior is the descriptor protocol. The `C.foo` won't return a `staticmethod` but a normal function while the `'foo'` in `__dict__` is a [`staticmethod`](https://docs.python.org/library/functions.html#staticmethod) (and `staticmethod` is a descriptor). In short `C.foo` isn't the same as `C.__dict__['foo']` in this case - but rather `C.__dict__['foo'].__get__(C)` (see also the section in the documentation of the [Data model on descriptors](https://docs.python.org/reference/datamodel.html#implementing-descriptors)): ``` >>> callable(C.__dict__['foo'].__get__(C)) True >>> type(C.__dict__['foo'].__get__(C)) function >>> callable(C.foo) True >>> type(C.foo) function >>> C.foo is C.__dict__['foo'].__get__(C) True ``` --- In your case I would check for callables using [`getattr`](https://docs.python.org/library/functions.html#getattr) (which knows about descriptors and how to access them) instead of what is stored as value in the class `__dict__`: ``` def bar(self): print('Is bar() callable?', callable(C.bar)) print('Is foo() callable?', callable(C.foo)) for attribute in C.__dict__.keys(): if attribute[:2] != '__': value = getattr(C, attribute) print(attribute, '\t', callable(value), '\t', type(value)) ``` Which prints (on python-3.x): ``` Is bar() callable? True Is foo() callable? True bar True <class 'function'> foo True <class 'function'> ``` The types are different on python-2.x but the result of `callable` is the same: ``` Is bar() callable? True Is foo() callable? True bar True <type 'instancemethod'> foo True <type 'function'> ```
You can't check if a `staticmethod` object is callable or not. This was discussed on the tracker in [Issue 20309 -- Not all method descriptors are callable](https://bugs.python.org/issue20309) and closed as "not a bug". In short, there's been no rationale for implementing `__call__` for staticmethod objects. The built-in `callable` has no way to know that the `staticmethod` object is something that essentially "holds" a callable. Though you could implement it (for `staticmethod`s and `classmethod`s), it would be a maintenance burden that, as previously mentioned, has no real motivating use-cases. --- For your case, you can use `getattr(C, name)` to perform a look-up for the object named `name`; this is equivalent to performing `C.<name>`. `getattr`, after finding the staticmethod object, will invoke its `__get__` to get back the callable it's managing. You can then use `callable` on that. A nice primer on descriptors can be found in the docs, take a look at [Descriptor HOWTO](https://docs.python.org/2.7/howto/descriptor.html#static-methods-and-class-methods).
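Putting that together, a small sketch of the loop from the question rewritten to use `getattr`, so descriptors are resolved before the `callable` check:

```python
class C(object):
    @staticmethod
    def foo():
        return 42

for name in vars(C):
    if not name.startswith('__'):
        attr = getattr(C, name)  # invokes the descriptor's __get__
        print(name, callable(attr), type(attr))
# foo True <class 'function'>   (on Python 3)
```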
50,117,538
I use Windows 7 without admin rights and I would like to use Python 3. Even though I set PYTHONPATH, the environment variable is ignored. However, PYTHONPATH is valid when printed.

```
>>> print(sys.path)
['c:\\Python365\\python36.zip', 'c:\\Python365']
>>> print(os.environ["PYTHONPATH"])
d:\libs
```

Any idea? Thank you very much

Gil
2018/05/01
[ "https://Stackoverflow.com/questions/50117538", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2922846/" ]
When using the embedded distribution (.zip file), the `PYTHONPATH` environment variable is not respected. If this behavior is needed, one has to add some Python code that reads the setting from `os.environ.get('PYTHONPATH', '')`, splits it into directories, and adds them to `sys.path`.

Also note that pip is not supported with the embedded distribution, but can [be made to work](https://stackoverflow.com/questions/42666121/pip-with-embedded-python). Alternatively, use one of the installers instead of the embedded distribution.
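A sketch of the loader code described above, to be run at the start of your entry script:

```python
import os
import sys

for entry in os.environ.get('PYTHONPATH', '').split(os.pathsep):
    if entry and entry not in sys.path:
        sys.path.append(entry)
```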
Add the contents of PYTHONPATH to `python._pth` in the root folder, one entry per line.
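For example, a `python36._pth` next to the interpreter might look like this (`d:\libs` is the directory from the question; the trailing `import site` line is optional and re-enables site processing):

```
python36.zip
.
d:\libs
import site
```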
17,585,207
I copied this verbatim from python.org unittest documentation: ``` import random import unittest class TestSequenceFunctions(unittest.TestCase): def setUp(self): self.seq = range(10) def test_shuffle(self): # make sure the shuffled sequence does not lose any elements random.shuffle(self.seq) self.seq.sort() self.assertEqual(self.seq, range(10)) # should raise an exception for an immutable sequence self.assertRaises(TypeError, random.shuffle, (1,2,3)) def test_choice(self): element = random.choice(self.seq) self.assertTrue(element in self.seq) def test_sample(self): with self.assertRaises(ValueError): random.sample(self.seq, 20) for element in random.sample(self.seq, 5): self.assertTrue(element in self.seq) if __name__ == '__main__': unittest.main() ``` But I get this error message from python 2.7.2 [GCC 4.1.2 20080704 (Red Hat 4.1.2-51)] on linux2: ``` .E. ====================================================================== ERROR: test_sample (__main__.TestSequenceFunctions) ---------------------------------------------------------------------- Traceback (most recent call last): File "tmp.py", line 23, in test_sample with self.assertRaises(ValueError): TypeError: failUnlessRaises() takes at least 3 arguments (2 given) ---------------------------------------------------------------------- Ran 3 tests in 0.001s FAILED (errors=1) ``` How can I get `assertRaises()` to work properly?
2013/07/11
[ "https://Stackoverflow.com/questions/17585207", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1913647/" ]
Check that you are really using 2.7 python. Tested using `pythonbrew`: ``` $ pythonbrew use 2.7.2 $ python test.py ... ---------------------------------------------------------------------- Ran 3 tests in 0.000s OK $ pythonbrew use 2.6.5 $ python test.py .E. ====================================================================== ERROR: test_sample (__main__.TestSequenceFunctions) ---------------------------------------------------------------------- Traceback (most recent call last): File "test.py", line 23, in test_sample with self.assertRaises(ValueError): TypeError: failUnlessRaises() takes at least 3 arguments (2 given) ---------------------------------------------------------------------- Ran 3 tests in 0.000s FAILED (errors=1) ```
If you're using 2.7 and still seeing this issue, it could be because you're not using python's `unittest` module. Some other modules like `twisted` provide `assertRaises` and though they try to maintain compatibility with python's `unittest`, your particular version of that module may be out of date.
17,585,207
I copied this verbatim from python.org unittest documentation: ``` import random import unittest class TestSequenceFunctions(unittest.TestCase): def setUp(self): self.seq = range(10) def test_shuffle(self): # make sure the shuffled sequence does not lose any elements random.shuffle(self.seq) self.seq.sort() self.assertEqual(self.seq, range(10)) # should raise an exception for an immutable sequence self.assertRaises(TypeError, random.shuffle, (1,2,3)) def test_choice(self): element = random.choice(self.seq) self.assertTrue(element in self.seq) def test_sample(self): with self.assertRaises(ValueError): random.sample(self.seq, 20) for element in random.sample(self.seq, 5): self.assertTrue(element in self.seq) if __name__ == '__main__': unittest.main() ``` But I get this error message from python 2.7.2 [GCC 4.1.2 20080704 (Red Hat 4.1.2-51)] on linux2: ``` .E. ====================================================================== ERROR: test_sample (__main__.TestSequenceFunctions) ---------------------------------------------------------------------- Traceback (most recent call last): File "tmp.py", line 23, in test_sample with self.assertRaises(ValueError): TypeError: failUnlessRaises() takes at least 3 arguments (2 given) ---------------------------------------------------------------------- Ran 3 tests in 0.001s FAILED (errors=1) ``` How can I get `assertRaises()` to work properly?
2013/07/11
[ "https://Stackoverflow.com/questions/17585207", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1913647/" ]
The ability to use `unittest.TestCase.assertRaises()` as a context manager was added in Python 2.7.

<http://docs.python.org/2/library/unittest.html#unittest.TestCase.assertRaises>
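On older interpreters, the same assertion can be written in the callable form, which the question's test already uses elsewhere (`assertRaises(TypeError, random.shuffle, (1,2,3))`):

```python
# equivalent to the context-manager form, but compatible with Python 2.6
self.assertRaises(ValueError, random.sample, self.seq, 20)
```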
If you're using 2.7 and still seeing this issue, it could be because you're not using python's `unittest` module. Some other modules like `twisted` provide `assertRaises` and though they try to maintain compatibility with python's `unittest`, your particular version of that module may be out of date.
56,045,986
I was given this as an exercise. I could of course sort a list by using **sorted()** or other ways from Python Standard Library, but I can't in this case. I **think** I'm only supposed to use **reduce()**. ``` from functools import reduce arr = [17, 2, 3, 6, 1, 3, 1, 9, 5, 3] sorted_arr = reduce(lambda a,b : (b,a) if a > b else (a,b), arr) ``` The error I get: ``` TypeError: '>' not supported between instances of 'tuple' and 'int' ``` Which is expected, because my reduce function inserts a tuple into the int array, instead of 2 separate integers. And then the tuple gets compared to an int... Is there a way to insert back 2 numbers into the list, and only run the function on every second number in the list? Or a way to swap the numbers with using reduce()? Documentation says very little about the reduce function, so I am out of ideas right now. <https://docs.python.org/3/library/functools.html?highlight=reduce#functools.reduce>
2019/05/08
[ "https://Stackoverflow.com/questions/56045986", "https://Stackoverflow.com", "https://Stackoverflow.com/users/764285/" ]
Here is one way to sort the list using `reduce`: ``` arr = [17, 2, 3, 6, 1, 3, 1, 9, 5, 3] sorted_arr = reduce( lambda a, b: [x for x in a if x <= b] + [b] + [x for x in a if x > b], arr, [] ) print(sorted_arr) #[1, 1, 2, 3, 3, 3, 5, 6, 9, 17] ``` At each reduce step, build a new output list which concatenates a list of all of the values less than or equal to `b`, `[b]`, and a list of all of the values greater than `b`. Use the optional third argument to `reduce` to initialize the output to an empty list.
I think you're misunderstanding how reduce works here. Reduce is synonymous with *left-fold* (`foldl`) in some other languages (e.g. Haskell). The first argument expects a function which takes two parameters: an *accumulator* and an element to accumulate.

Let's hack into it:

```
arr = [17, 2, 3, 6, 1, 3, 1, 9, 5, 3]
reduce(lambda xs, x: [print(xs, x), xs+[x]][1], arr, [])
```

Here, `xs` is the accumulator and `x` is the element to accumulate. Don't worry too much about `[print(xs, x), xs+[x]][1]` – it's just there to print intermediate values of `xs` and `x`. Without the printing, we could simplify the lambda to `lambda xs, x: xs + [x]`, which just appends to the list.

The above outputs:

```
[] 17
[17] 2
[17, 2] 3
[17, 2, 3] 6
[17, 2, 3, 6] 1
[17, 2, 3, 6, 1] 3
[17, 2, 3, 6, 1, 3] 1
[17, 2, 3, 6, 1, 3, 1] 9
[17, 2, 3, 6, 1, 3, 1, 9] 5
[17, 2, 3, 6, 1, 3, 1, 9, 5] 3
```

As we can see, `reduce` passes an accumulated list as the first argument and a new element as the second argument. (If `reduce` is still boggling you, [*How does reduce work?*](https://stackoverflow.com/questions/9108855/how-does-reduce-function-work) contains some nice explanations.)

Our particular lambda *inserts* a new element into the accumulator on each "iteration". This hints at insertion sort:

```
def insert(xs, n):
    """
    Finds the first element in `xs` greater than `n` and
    returns `xs` with `n` inserted before it.
    `xs` is assumed to be a sorted list.
    """
    for i, x in enumerate(xs):
        if x > n:
            return xs[:i] + [n] + xs[i:]
    return xs + [n]

sorted_arr = reduce(insert, arr, [])
print(sorted_arr)
```

This prints the correctly sorted array:

```
[1, 1, 2, 3, 3, 3, 5, 6, 9, 17]
```

Note that a third parameter to `reduce` (i.e. `[]`) was specified, as we *initialise* the sort with an empty list.
56,045,986
I was given this as an exercise. I could of course sort a list by using **sorted()** or other ways from Python Standard Library, but I can't in this case. I **think** I'm only supposed to use **reduce()**. ``` from functools import reduce arr = [17, 2, 3, 6, 1, 3, 1, 9, 5, 3] sorted_arr = reduce(lambda a,b : (b,a) if a > b else (a,b), arr) ``` The error I get: ``` TypeError: '>' not supported between instances of 'tuple' and 'int' ``` Which is expected, because my reduce function inserts a tuple into the int array, instead of 2 separate integers. And then the tuple gets compared to an int... Is there a way to insert back 2 numbers into the list, and only run the function on every second number in the list? Or a way to swap the numbers with using reduce()? Documentation says very little about the reduce function, so I am out of ideas right now. <https://docs.python.org/3/library/functools.html?highlight=reduce#functools.reduce>
2019/05/08
[ "https://Stackoverflow.com/questions/56045986", "https://Stackoverflow.com", "https://Stackoverflow.com/users/764285/" ]
Here is one way to sort the list using `reduce`: ``` arr = [17, 2, 3, 6, 1, 3, 1, 9, 5, 3] sorted_arr = reduce( lambda a, b: [x for x in a if x <= b] + [b] + [x for x in a if x > b], arr, [] ) print(sorted_arr) #[1, 1, 2, 3, 3, 3, 5, 6, 9, 17] ``` At each reduce step, build a new output list which concatenates a list of all of the values less than or equal to `b`, `[b]`, and a list of all of the values greater than `b`. Use the optional third argument to `reduce` to initialize the output to an empty list.
Ninjad! But yes, it's an insertion sort. ```py def insert(acc, e): for i, x in enumerate(acc): if x > e: acc.insert(i, e) return acc acc.append(e) return acc reduce(insert, [1, 2, 6, 4, 7, 3, 0, -1], []) ``` outputs ```py [-1, 0, 1, 2, 3, 4, 6, 7] ```
56,045,986
I was given this as an exercise. I could of course sort a list by using **sorted()** or other ways from Python Standard Library, but I can't in this case. I **think** I'm only supposed to use **reduce()**. ``` from functools import reduce arr = [17, 2, 3, 6, 1, 3, 1, 9, 5, 3] sorted_arr = reduce(lambda a,b : (b,a) if a > b else (a,b), arr) ``` The error I get: ``` TypeError: '>' not supported between instances of 'tuple' and 'int' ``` Which is expected, because my reduce function inserts a tuple into the int array, instead of 2 separate integers. And then the tuple gets compared to an int... Is there a way to insert back 2 numbers into the list, and only run the function on every second number in the list? Or a way to swap the numbers with using reduce()? Documentation says very little about the reduce function, so I am out of ideas right now. <https://docs.python.org/3/library/functools.html?highlight=reduce#functools.reduce>
2019/05/08
[ "https://Stackoverflow.com/questions/56045986", "https://Stackoverflow.com", "https://Stackoverflow.com/users/764285/" ]
Here is one way to sort the list using `reduce`: ``` arr = [17, 2, 3, 6, 1, 3, 1, 9, 5, 3] sorted_arr = reduce( lambda a, b: [x for x in a if x <= b] + [b] + [x for x in a if x > b], arr, [] ) print(sorted_arr) #[1, 1, 2, 3, 3, 3, 5, 6, 9, 17] ``` At each reduce step, build a new output list which concatenates a list of all of the values less than or equal to `b`, `[b]`, and a list of all of the values greater than `b`. Use the optional third argument to `reduce` to initialize the output to an empty list.
After some thinking I concluded that it is also possible to do a swap-based sort, if you are allowed to use `reduce` more than once. Namely:

```
from functools import reduce

arr = [17, 2, 3, 6, 1, 3, 1, 9, 5, 3]

def func(acc,x):
    if not acc:
        return [x]
    if acc[-1]<x:
        return acc+[x]
    else:
        return acc[:-1]+[x]+acc[-1:]

def my_sort(x):
    moresorted = reduce(func,x,[])
    print(moresorted)
    if x==moresorted:
        return moresorted
    else:
        return my_sort(moresorted)

print('arr:',arr)
arr_sorted = my_sort(arr)
print('arr sorted:',arr_sorted)
```

Output:

```
arr: [17, 2, 3, 6, 1, 3, 1, 9, 5, 3]
[2, 3, 6, 1, 3, 1, 9, 5, 3, 17]
[2, 3, 1, 3, 1, 6, 5, 3, 9, 17]
[2, 1, 3, 1, 3, 5, 3, 6, 9, 17]
[1, 2, 1, 3, 3, 3, 5, 6, 9, 17]
[1, 1, 2, 3, 3, 3, 5, 6, 9, 17]
[1, 1, 2, 3, 3, 3, 5, 6, 9, 17]
arr sorted: [1, 1, 2, 3, 3, 3, 5, 6, 9, 17]
```

I placed `print(moresorted)` inside `my_sort` for educational purposes; you can remove it if you wish.

Now the explanation: `my_sort` is a recursive function, and with every run of it the list becomes more and more sorted. `func`, which is used as the function in `reduce`, appends the new element and then swaps the last 2 elements of the list if they are not in ascending order. This means that in every run of `my_sort` a number "travels" rightward until it reaches a place where the next number is bigger. `if not acc` is required for starting: notice that the third argument of `reduce` is `[]`, meaning that during the first execution of `func` in each `reduce` the first argument is `[]`, so asking `acc[-1]<x` would result in an error.
56,045,986
I was given this as an exercise. I could of course sort a list by using **sorted()** or other ways from Python Standard Library, but I can't in this case. I **think** I'm only supposed to use **reduce()**. ``` from functools import reduce arr = [17, 2, 3, 6, 1, 3, 1, 9, 5, 3] sorted_arr = reduce(lambda a,b : (b,a) if a > b else (a,b), arr) ``` The error I get: ``` TypeError: '>' not supported between instances of 'tuple' and 'int' ``` Which is expected, because my reduce function inserts a tuple into the int array, instead of 2 separate integers. And then the tuple gets compared to an int... Is there a way to insert back 2 numbers into the list, and only run the function on every second number in the list? Or a way to swap the numbers with using reduce()? Documentation says very little about the reduce function, so I am out of ideas right now. <https://docs.python.org/3/library/functools.html?highlight=reduce#functools.reduce>
2019/05/08
[ "https://Stackoverflow.com/questions/56045986", "https://Stackoverflow.com", "https://Stackoverflow.com/users/764285/" ]
Here is one way to sort the list using `reduce`: ``` arr = [17, 2, 3, 6, 1, 3, 1, 9, 5, 3] sorted_arr = reduce( lambda a, b: [x for x in a if x <= b] + [b] + [x for x in a if x > b], arr, [] ) print(sorted_arr) #[1, 1, 2, 3, 3, 3, 5, 6, 9, 17] ``` At each reduce step, build a new output list which concatenates a list of all of the values less than or equal to `b`, `[b]`, and a list of all of the values greater than `b`. Use the optional third argument to `reduce` to initialize the output to an empty list.
Let's understand this:

1. The usage of `reduce` is basically to reduce an expression to a single final value.
2. `reduce()` stores the intermediate result and only returns the final value.
3. We take the smallest element using `reduce`, append it to `sorted_list` and remove it from the original list.
4. Now `reduce` works on the rest of the elements, and step 3 is repeated.
5. `while list_of_nums:` runs until the list becomes empty.

```
list_of_nums = [1,19,5,17,9]
sorted_list=[]
while list_of_nums:
    minvalue=reduce(lambda x,y: x if x<y else y,list_of_nums)
    sorted_list.append(minvalue)
    list_of_nums.remove(minvalue)
print(sorted_list)
```

Output:

```
[1, 5, 9, 17, 19]
```
56,045,986
I was given this as an exercise. I could of course sort a list by using **sorted()** or other ways from Python Standard Library, but I can't in this case. I **think** I'm only supposed to use **reduce()**. ``` from functools import reduce arr = [17, 2, 3, 6, 1, 3, 1, 9, 5, 3] sorted_arr = reduce(lambda a,b : (b,a) if a > b else (a,b), arr) ``` The error I get: ``` TypeError: '>' not supported between instances of 'tuple' and 'int' ``` Which is expected, because my reduce function inserts a tuple into the int array, instead of 2 separate integers. And then the tuple gets compared to an int... Is there a way to insert back 2 numbers into the list, and only run the function on every second number in the list? Or a way to swap the numbers with using reduce()? Documentation says very little about the reduce function, so I am out of ideas right now. <https://docs.python.org/3/library/functools.html?highlight=reduce#functools.reduce>
2019/05/08
[ "https://Stackoverflow.com/questions/56045986", "https://Stackoverflow.com", "https://Stackoverflow.com/users/764285/" ]
I think you're misunderstanding how reduce works here. Reduce is synonymous with *left-fold* (`foldl`) in some other languages (e.g. Haskell). The first argument expects a function which takes two parameters: an *accumulator* and an element to accumulate.

Let's hack into it:

```
arr = [17, 2, 3, 6, 1, 3, 1, 9, 5, 3]
reduce(lambda xs, x: [print(xs, x), xs+[x]][1], arr, [])
```

Here, `xs` is the accumulator and `x` is the element to accumulate. Don't worry too much about `[print(xs, x), xs+[x]][1]` – it's just there to print intermediate values of `xs` and `x`. Without the printing, we could simplify the lambda to `lambda xs, x: xs + [x]`, which just appends to the list.

The above outputs:

```
[] 17
[17] 2
[17, 2] 3
[17, 2, 3] 6
[17, 2, 3, 6] 1
[17, 2, 3, 6, 1] 3
[17, 2, 3, 6, 1, 3] 1
[17, 2, 3, 6, 1, 3, 1] 9
[17, 2, 3, 6, 1, 3, 1, 9] 5
[17, 2, 3, 6, 1, 3, 1, 9, 5] 3
```

As we can see, `reduce` passes an accumulated list as the first argument and a new element as the second argument. (If `reduce` is still boggling you, [*How does reduce work?*](https://stackoverflow.com/questions/9108855/how-does-reduce-function-work) contains some nice explanations.)

Our particular lambda *inserts* a new element into the accumulator on each "iteration". This hints at insertion sort:

```
def insert(xs, n):
    """
    Finds the first element in `xs` greater than `n` and
    returns `xs` with `n` inserted before it.
    `xs` is assumed to be a sorted list.
    """
    for i, x in enumerate(xs):
        if x > n:
            return xs[:i] + [n] + xs[i:]
    return xs + [n]

sorted_arr = reduce(insert, arr, [])
print(sorted_arr)
```

This prints the correctly sorted array:

```
[1, 1, 2, 3, 3, 3, 5, 6, 9, 17]
```

Note that a third parameter to `reduce` (i.e. `[]`) was specified, as we *initialise* the sort with an empty list.
Ninjad! But yes, it's an insertion sort. ```py def insert(acc, e): for i, x in enumerate(acc): if x > e: acc.insert(i, e) return acc acc.append(e) return acc reduce(insert, [1, 2, 6, 4, 7, 3, 0, -1], []) ``` outputs ```py [-1, 0, 1, 2, 3, 4, 6, 7] ```
56,045,986
I was given this as an exercise. I could of course sort a list by using **sorted()** or other ways from Python Standard Library, but I can't in this case. I **think** I'm only supposed to use **reduce()**. ``` from functools import reduce arr = [17, 2, 3, 6, 1, 3, 1, 9, 5, 3] sorted_arr = reduce(lambda a,b : (b,a) if a > b else (a,b), arr) ``` The error I get: ``` TypeError: '>' not supported between instances of 'tuple' and 'int' ``` Which is expected, because my reduce function inserts a tuple into the int array, instead of 2 separate integers. And then the tuple gets compared to an int... Is there a way to insert back 2 numbers into the list, and only run the function on every second number in the list? Or a way to swap the numbers with using reduce()? Documentation says very little about the reduce function, so I am out of ideas right now. <https://docs.python.org/3/library/functools.html?highlight=reduce#functools.reduce>
2019/05/08
[ "https://Stackoverflow.com/questions/56045986", "https://Stackoverflow.com", "https://Stackoverflow.com/users/764285/" ]
I think you're misunderstanding how reduce works here. Reduce is synonymous with *left-fold* (`foldl`) in some other languages (e.g. Haskell). The first argument expects a function which takes two parameters: an *accumulator* and an element to accumulate.

Let's hack into it:

```
arr = [17, 2, 3, 6, 1, 3, 1, 9, 5, 3]
reduce(lambda xs, x: [print(xs, x), xs+[x]][1], arr, [])
```

Here, `xs` is the accumulator and `x` is the element to accumulate. Don't worry too much about `[print(xs, x), xs+[x]][1]` – it's just there to print intermediate values of `xs` and `x`. Without the printing, we could simplify the lambda to `lambda xs, x: xs + [x]`, which just appends to the list.

The above outputs:

```
[] 17
[17] 2
[17, 2] 3
[17, 2, 3] 6
[17, 2, 3, 6] 1
[17, 2, 3, 6, 1] 3
[17, 2, 3, 6, 1, 3] 1
[17, 2, 3, 6, 1, 3, 1] 9
[17, 2, 3, 6, 1, 3, 1, 9] 5
[17, 2, 3, 6, 1, 3, 1, 9, 5] 3
```

As we can see, `reduce` passes an accumulated list as the first argument and a new element as the second argument. (If `reduce` is still boggling you, [*How does reduce work?*](https://stackoverflow.com/questions/9108855/how-does-reduce-function-work) contains some nice explanations.)

Our particular lambda *inserts* a new element into the accumulator on each "iteration". This hints at insertion sort:

```
def insert(xs, n):
    """
    Finds the first element in `xs` greater than `n` and
    returns `xs` with `n` inserted before it.
    `xs` is assumed to be a sorted list.
    """
    for i, x in enumerate(xs):
        if x > n:
            return xs[:i] + [n] + xs[i:]
    return xs + [n]

sorted_arr = reduce(insert, arr, [])
print(sorted_arr)
```

This prints the correctly sorted array:

```
[1, 1, 2, 3, 3, 3, 5, 6, 9, 17]
```

Note that a third parameter to `reduce` (i.e. `[]`) was specified, as we *initialise* the sort with an empty list.
After some thinking I concluded that it is also possible to do a swap-based sort, if you are allowed to use `reduce` more than once. Namely:

```
from functools import reduce

arr = [17, 2, 3, 6, 1, 3, 1, 9, 5, 3]

def func(acc,x):
    if not acc:
        return [x]
    if acc[-1]<x:
        return acc+[x]
    else:
        return acc[:-1]+[x]+acc[-1:]

def my_sort(x):
    moresorted = reduce(func,x,[])
    print(moresorted)
    if x==moresorted:
        return moresorted
    else:
        return my_sort(moresorted)

print('arr:',arr)
arr_sorted = my_sort(arr)
print('arr sorted:',arr_sorted)
```

Output:

```
arr: [17, 2, 3, 6, 1, 3, 1, 9, 5, 3]
[2, 3, 6, 1, 3, 1, 9, 5, 3, 17]
[2, 3, 1, 3, 1, 6, 5, 3, 9, 17]
[2, 1, 3, 1, 3, 5, 3, 6, 9, 17]
[1, 2, 1, 3, 3, 3, 5, 6, 9, 17]
[1, 1, 2, 3, 3, 3, 5, 6, 9, 17]
[1, 1, 2, 3, 3, 3, 5, 6, 9, 17]
arr sorted: [1, 1, 2, 3, 3, 3, 5, 6, 9, 17]
```

I placed `print(moresorted)` inside `my_sort` for educational purposes; you can remove it if you wish.

Now the explanation: `my_sort` is a recursive function, and with every run of it the list becomes more and more sorted. `func`, which is used as the function in `reduce`, appends the new element and then swaps the last 2 elements of the list if they are not in ascending order. This means that in every run of `my_sort` a number "travels" rightward until it reaches a place where the next number is bigger. `if not acc` is required for starting: notice that the third argument of `reduce` is `[]`, meaning that during the first execution of `func` in each `reduce` the first argument is `[]`, so asking `acc[-1]<x` would result in an error.
56,045,986
I was given this as an exercise. I could of course sort a list by using **sorted()** or other ways from Python Standard Library, but I can't in this case. I **think** I'm only supposed to use **reduce()**. ``` from functools import reduce arr = [17, 2, 3, 6, 1, 3, 1, 9, 5, 3] sorted_arr = reduce(lambda a,b : (b,a) if a > b else (a,b), arr) ``` The error I get: ``` TypeError: '>' not supported between instances of 'tuple' and 'int' ``` Which is expected, because my reduce function inserts a tuple into the int array, instead of 2 separate integers. And then the tuple gets compared to an int... Is there a way to insert back 2 numbers into the list, and only run the function on every second number in the list? Or a way to swap the numbers with using reduce()? Documentation says very little about the reduce function, so I am out of ideas right now. <https://docs.python.org/3/library/functools.html?highlight=reduce#functools.reduce>
2019/05/08
[ "https://Stackoverflow.com/questions/56045986", "https://Stackoverflow.com", "https://Stackoverflow.com/users/764285/" ]
I think you're misunderstanding how reduce works here. Reduce is synonymous with *left fold* in some other languages (e.g. `foldl` in Haskell). The first argument expects a function which takes two parameters: an *accumulator* and an element to accumulate. Let's hack into it:

```
from functools import reduce

arr = [17, 2, 3, 6, 1, 3, 1, 9, 5, 3]
reduce(lambda xs, x: [print(xs, x), xs+[x]][1], arr, [])
```

Here, `xs` is the accumulator and `x` is the element to accumulate. Don't worry too much about `[print(xs, x), xs+[x]][1]` – it's just there to print intermediate values of `xs` and `x`. Without the printing, we could simplify the lambda to `lambda xs, x: xs + [x]`, which just appends to the list. The above outputs:

```
[] 17
[17] 2
[17, 2] 3
[17, 2, 3] 6
[17, 2, 3, 6] 1
[17, 2, 3, 6, 1] 3
[17, 2, 3, 6, 1, 3] 1
[17, 2, 3, 6, 1, 3, 1] 9
[17, 2, 3, 6, 1, 3, 1, 9] 5
[17, 2, 3, 6, 1, 3, 1, 9, 5] 3
```

As we can see, `reduce` passes an accumulated list as the first argument and a new element as the second argument. (If `reduce` is still boggling you, [*How does reduce work?*](https://stackoverflow.com/questions/9108855/how-does-reduce-function-work) contains some nice explanations.) Our particular lambda *inserts* a new element into the accumulator on each "iteration". This hints at insertion sort:

```
def insert(xs, n):
    """
    Finds the first element in `xs` greater than `n`
    and returns `xs` with `n` inserted before it.
    `xs` is assumed to be a sorted list.
    """
    for i, x in enumerate(xs):
        if x > n:
            return xs[:i] + [n] + xs[i:]
    return xs + [n]

sorted_arr = reduce(insert, arr, [])
print(sorted_arr)
```

This prints the correctly sorted array:

```
[1, 1, 2, 3, 3, 3, 5, 6, 9, 17]
```

Note that a third parameter to `reduce` (i.e. `[]`) was specified, as we *initialise* the sort with an empty list.
Let's understand this:

1. The usage of `reduce` here is basically to reduce the list to a single final value.
2. `reduce()` stores the intermediate result and only returns the final value.
3. We take the smallest element using `reduce`, append it to `sorted_list` and remove it from the original list.
4. Now `reduce` works on the rest of the elements and repeats step 3.
5. `while list_of_nums:` runs until the list becomes empty.

This is effectively a selection sort:

```
from functools import reduce

list_of_nums = [1,19,5,17,9]
sorted_list = []
while list_of_nums:
    minvalue = reduce(lambda x, y: x if x < y else y, list_of_nums)
    sorted_list.append(minvalue)
    list_of_nums.remove(minvalue)
print(sorted_list)
```

[1, 5, 9, 17, 19]
56,045,986
I was given this as an exercise. I could of course sort a list by using **sorted()** or other ways from Python Standard Library, but I can't in this case. I **think** I'm only supposed to use **reduce()**.

```
from functools import reduce

arr = [17, 2, 3, 6, 1, 3, 1, 9, 5, 3]

sorted_arr = reduce(lambda a,b : (b,a) if a > b else (a,b), arr)
```

The error I get:

```
TypeError: '>' not supported between instances of 'tuple' and 'int'
```

Which is expected, because my reduce function inserts a tuple into the int array, instead of 2 separate integers. And then the tuple gets compared to an int...

Is there a way to insert back 2 numbers into the list, and only run the function on every second number in the list? Or a way to swap the numbers with using reduce()?

Documentation says very little about the reduce function, so I am out of ideas right now. <https://docs.python.org/3/library/functools.html?highlight=reduce#functools.reduce>
2019/05/08
[ "https://Stackoverflow.com/questions/56045986", "https://Stackoverflow.com", "https://Stackoverflow.com/users/764285/" ]
Ninjad! But yes, it's an insertion sort.

```py
from functools import reduce

def insert(acc, e):
    for i, x in enumerate(acc):
        if x > e:
            acc.insert(i, e)
            return acc
    acc.append(e)
    return acc

reduce(insert, [1, 2, 6, 4, 7, 3, 0, -1], [])
```

outputs

```py
[-1, 0, 1, 2, 3, 4, 6, 7]
```
Let's understand this:

1. The usage of `reduce` here is basically to reduce the list to a single final value.
2. `reduce()` stores the intermediate result and only returns the final value.
3. We take the smallest element using `reduce`, append it to `sorted_list` and remove it from the original list.
4. Now `reduce` works on the rest of the elements and repeats step 3.
5. `while list_of_nums:` runs until the list becomes empty.

This is effectively a selection sort:

```
from functools import reduce

list_of_nums = [1,19,5,17,9]
sorted_list = []
while list_of_nums:
    minvalue = reduce(lambda x, y: x if x < y else y, list_of_nums)
    sorted_list.append(minvalue)
    list_of_nums.remove(minvalue)
print(sorted_list)
```

[1, 5, 9, 17, 19]
56,045,986
I was given this as an exercise. I could of course sort a list by using **sorted()** or other ways from Python Standard Library, but I can't in this case. I **think** I'm only supposed to use **reduce()**.

```
from functools import reduce

arr = [17, 2, 3, 6, 1, 3, 1, 9, 5, 3]

sorted_arr = reduce(lambda a,b : (b,a) if a > b else (a,b), arr)
```

The error I get:

```
TypeError: '>' not supported between instances of 'tuple' and 'int'
```

Which is expected, because my reduce function inserts a tuple into the int array, instead of 2 separate integers. And then the tuple gets compared to an int...

Is there a way to insert back 2 numbers into the list, and only run the function on every second number in the list? Or a way to swap the numbers with using reduce()?

Documentation says very little about the reduce function, so I am out of ideas right now. <https://docs.python.org/3/library/functools.html?highlight=reduce#functools.reduce>
2019/05/08
[ "https://Stackoverflow.com/questions/56045986", "https://Stackoverflow.com", "https://Stackoverflow.com/users/764285/" ]
After some thinking I concluded that it is also possible to do a swap-based sort, if you are allowed to use `reduce` more than once. Namely:

```
from functools import reduce

arr = [17, 2, 3, 6, 1, 3, 1, 9, 5, 3]

def func(acc, x):
    if not acc:
        return [x]
    if acc[-1] < x:
        return acc + [x]
    else:
        return acc[:-1] + [x] + acc[-1:]

def my_sort(x):
    moresorted = reduce(func, x, [])
    print(moresorted)
    if x == moresorted:
        return moresorted
    else:
        return my_sort(moresorted)

print('arr:', arr)
arr_sorted = my_sort(arr)
print('arr sorted:', arr_sorted)
```

Output:

```
arr: [17, 2, 3, 6, 1, 3, 1, 9, 5, 3]
[2, 3, 6, 1, 3, 1, 9, 5, 3, 17]
[2, 3, 1, 3, 1, 6, 5, 3, 9, 17]
[2, 1, 3, 1, 3, 5, 3, 6, 9, 17]
[1, 2, 1, 3, 3, 3, 5, 6, 9, 17]
[1, 1, 2, 3, 3, 3, 5, 6, 9, 17]
[1, 1, 2, 3, 3, 3, 5, 6, 9, 17]
arr sorted: [1, 1, 2, 3, 3, 3, 5, 6, 9, 17]
```

I placed `print(moresorted)` inside `my_sort` for educational purposes; you can remove it if you wish.

Now the explanation: `my_sort` is a recursive function, and with every run of it the list becomes more and more sorted. `func`, which is used as the function in `reduce`, appends a new element and then swaps the last two elements of the list if they are not in ascending order. This means that in every run of `my_sort` a number "travels" rightward until it takes a place where the next number is bigger. The `if not acc` check is required for starting - notice that the third argument of `reduce` is `[]`, meaning that during the first execution of `func` in each `reduce` the first argument for `func` is `[]`, so evaluating `acc[-1] < x` would raise an error.
Let's understand this:

1. The usage of `reduce` here is basically to reduce the list to a single final value.
2. `reduce()` stores the intermediate result and only returns the final value.
3. We take the smallest element using `reduce`, append it to `sorted_list` and remove it from the original list.
4. Now `reduce` works on the rest of the elements and repeats step 3.
5. `while list_of_nums:` runs until the list becomes empty.

This is effectively a selection sort:

```
from functools import reduce

list_of_nums = [1,19,5,17,9]
sorted_list = []
while list_of_nums:
    minvalue = reduce(lambda x, y: x if x < y else y, list_of_nums)
    sorted_list.append(minvalue)
    list_of_nums.remove(minvalue)
print(sorted_list)
```

[1, 5, 9, 17, 19]
59,398,271
I am doing object tracking on my videos, which are in .mpg format. I am using OpenCV to track the objects, but I am facing some issues while opening the video in my code. I have attached my code.

```
import cv2
import sys

(major_ver, minor_ver, subminor_ver) = (cv2.__version__).split('.')

if __name__ == '__main__' :

    # Set up tracker.
    # Instead of MIL, you can also use
    tracker_types = ['BOOSTING', 'MIL','KCF', 'TLD', 'MEDIANFLOW', 'GOTURN', 'MOSSE', 'CSRT']
    tracker_type = tracker_types[1]
    print(tracker_type)

    if tracker_type == 'MIL':
        tracker = cv2.TrackerMIL_create()

    # Read video
    video = cv2.VideoCapture("Stroll.mpg")

    # Exit if video not opened.
    if not video.isOpened():
        print ("Could not open video")
        sys.exit()

    # Read first frame.
    ok, frame = video.read()
    if not ok:
        print ('Cannot read video file')
        sys.exit()

    # Define an initial bounding box
    bbox = (287, 23, 86, 320)

    # Uncomment the line below to select a different bounding box
    bbox = cv2.selectROI(frame, False)

    # Initialize tracker with first frame and bounding box
    ok = tracker.init(frame, bbox)

    while True:
        # Read a new frame
        ok, frame = video.read()
        if not ok:
            break

        # Start timer
        timer = cv2.getTickCount()

        # Update tracker
        ok, bbox = tracker.update(frame)

        # Calculate Frames per second (FPS)
        fps = cv2.getTickFrequency() / (cv2.getTickCount() - timer)

        # Draw bounding box
        if ok:
            # Tracking success
            p1 = (int(bbox[0]), int(bbox[1]))
            p2 = (int(bbox[0] + bbox[2]), int(bbox[1] + bbox[3]))
            cv2.rectangle(frame, p1, p2, (255,0,0), 2, 1)
        else :
            # Tracking failure
            cv2.putText(frame, "Tracking failure detected", (100,80), cv2.FONT_HERSHEY_SIMPLEX, 0.75,(0,0,255),2)

        # Display tracker type on frame
        cv2.putText(frame, tracker_type + " Tracker", (100,20), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (50,170,50),2)

        # Display FPS on frame
        cv2.putText(frame, "FPS : " + str(int(fps)), (100,50), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (50,170,50), 2)

        # Display result
        cv2.imshow("Tracking", frame)

        # Exit if ESC pressed
        k = cv2.waitKey(1) & 0xff
        if k == 27 : break
```

But the error I am facing here is:

```
Could not open video
[ERROR:0] global C:\projects\opencv-python\opencv\modules\videoio\src\cap.cpp (116) cv::VideoCapture::open VIDEOIO(CV_IMAGES): raised OpenCV exception:

OpenCV(4.1.2) C:\projects\opencv-python\opencv\modules\videoio\src\cap_images.cpp:253: error: (-5:Bad argument) CAP_IMAGES: can't find starting number (in the name of file): Stroll.mpg in function 'cv::icvExtractPattern'
```

I am using Python 3.6.8 and OpenCV version 4.1.2.
2019/12/18
[ "https://Stackoverflow.com/questions/59398271", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7449718/" ]
POSSIBLY WRONG PATH

Check if Stroll.mpg is actually there in your working directory. If yes, try with an .mp4 video. Most probably the path is wrong or the file name is misspelled.

Refer: <https://answers.opencv.org/question/1965/cv2videocapture-cannot-read-from-file/>
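A quick way to rule out a path problem is to check the file from Python before opening it (a minimal sketch; the file name is taken from the question):

```python
import os
import cv2

path = "Stroll.mpg"
print(os.path.abspath(path))   # where OpenCV will actually look
print(os.path.exists(path))    # False means the path or name is wrong

video = cv2.VideoCapture(os.path.abspath(path))
print(video.isOpened())
```

If `os.path.exists` prints `False`, fix the path; if it prints `True` but `isOpened()` is still `False`, the problem is more likely a missing codec.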
Make sure your OpenCV is compiled *with ffmpeg*, which provides all those media decoders (they are probably missing without it). Here is [someone with a similar problem](https://answers.opencv.org/question/220468/video-from-ip-camera-cant-find-starting-number-cvicvextractpattern/) due to this.
43,401,174
I'm new to python and I would like to know if this is possible or not: I want to create an object and attach to it another object.

```
OBJECT A
    Child 1
    Child 2
OBJECT B
    Child 3
    Child 4
    Child 5
    Child 6
    Child 7
```

Is this possible?
2017/04/13
[ "https://Stackoverflow.com/questions/43401174", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6218501/" ]
If you are talking about object-oriented terms, yes you can. You don't explain clearly what you want to do, but the 2 things that come to my mind if you are talking about OOP are:

* If you are talking about inheritance, you can make child objects extend parent objects when you create your child class: `class child(parent):`
* If you are talking about object composition, you just make the child object an instance variable of the parent object and pass it as a constructor argument.
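A minimal sketch of both options (all class and attribute names here are made up for illustration):

```python
# Inheritance: Child extends Parent
class Parent:
    def greet(self):
        return "hello from parent"

class Child(Parent):
    pass

# Composition: a parent object holds a child object as an instance variable
class ComposedParent:
    def __init__(self, child):
        self.child = child  # the attached object, passed via the constructor

c = Child()
print(c.greet())             # inherited behaviour
p = ComposedParent(Child())
print(p.child.greet())       # reach the attached object through the parent
```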
Here is an example: In this scenario an object can be a person without being an employee; however, to be an employee they must be a person. Therefore the Person class is a parent to the Employee.

Here's the link to an article that really helped me understand inheritance: <http://www.python-course.eu/python3_inheritance.php>

```
class Person:

    def __init__(self, first, last):
        self.firstname = first
        self.lastname = last

    def Name(self):
        return self.firstname + " " + self.lastname

class Employee(Person):

    def __init__(self, first, last, staffnum):
        Person.__init__(self, first, last)
        self.staffnumber = staffnum

    def GetEmployee(self):
        return self.Name() + ", " + self.staffnumber

x = Person("Marge", "Simpson")
y = Employee("Homer", "Simpson", "1007")

print(x.Name())
print(y.GetEmployee())
```
43,401,174
I'm new to python and I would like to know if this is possible or not: I want to create an object and attach to it another object.

```
OBJECT A
    Child 1
    Child 2
OBJECT B
    Child 3
    Child 4
    Child 5
    Child 6
    Child 7
```

Is this possible?
2017/04/13
[ "https://Stackoverflow.com/questions/43401174", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6218501/" ]
To follow your example:

```
class Car(object):
    def __init__(self, tire_size=1):
        self.tires = [Tire(tire_size) for _ in range(4)]

class Tire(object):
    def __init__(self, size):
        self.weight = 2.25 * size
```

Now you can make a car and query the tire weights:

```
>>> red = Car(1)
>>> red.tires
[<Tire object at 0x7fe08ac7d890>, <Tire object at 0x7fe08ac7d9d0>,
 <Tire object at 0x7fe08ac7d7d0>, <Tire object at 0x7fe08ac7d950>]
>>> red.tires[0]
<Tire object at 0x7fe08ac7d890>
>>> red.tires[0].weight
2.25
```

You can change the structure as needed; a better way (if all the tires are the same) is to just specify `tire` and `num_tires`:

```
>>> class Car(object):
        def __init__(self, tire):
            self.tire = tire
            self.num_tires = 4

>>> blue = Car(Tire(2))
>>> blue.tire.weight
4.5
>>> blue.num_tires
4
```
Here is an example: In this scenario an object can be a person without being an employee; however, to be an employee they must be a person. Therefore the Person class is a parent to the Employee.

Here's the link to an article that really helped me understand inheritance: <http://www.python-course.eu/python3_inheritance.php>

```
class Person:

    def __init__(self, first, last):
        self.firstname = first
        self.lastname = last

    def Name(self):
        return self.firstname + " " + self.lastname

class Employee(Person):

    def __init__(self, first, last, staffnum):
        Person.__init__(self, first, last)
        self.staffnumber = staffnum

    def GetEmployee(self):
        return self.Name() + ", " + self.staffnumber

x = Person("Marge", "Simpson")
y = Employee("Homer", "Simpson", "1007")

print(x.Name())
print(y.GetEmployee())
```
21,047,114
For sublime text, I installed RstPreview, downloaded `docutils-0.11`, and installed it by running `C:\Anaconda\python setup.py install` in Command Prompt (I am using windows 7 64 bits). When I press `Ctrl+Shift+R` to parse a `.rst` file I get the following, ![enter image description here](https://i.stack.imgur.com/teIRx.png) The build system is set to `C:\Anaconda\python` where docutils imports normally, but seemingly sublime text tries to import `docutils` from the internal Python system for which I don't know how to install libraries. Thanks in advance!
2014/01/10
[ "https://Stackoverflow.com/questions/21047114", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1248073/" ]
After reading through the [issues](https://github.com/d0ugal/RstPreview/issues) for this plugin, it doesn't look like there is any good way of getting it to work on Windows. I can expand on the technicalities if you want, but basically this plugin relies on installing a third-party package (`docutils`) into the version of Python used by Sublime Text, which on Windows is completely separate from any version of Python you may have installed such as Anaconda. The author has never tested it on Windows, and from what I could find no one has posted any way to get it to run successfully on that platform. As an alternative, you may want to look at the [`OmniMarkupPreviewer`](https://sublime.wbond.net/packages/OmniMarkupPreviewer) plugin. From its description: > > OmniMarkupPreviewer is a plugin for both Sublime Text 2 and Sublime Text 3 > that preview markups in web browsers. OmniMarkupPreviewer renders markups into > htmls and send it to web browser in the backgound, which enables a live preview. > Besides, OmniMarkupPreviewer provide support for exporting result to > html file as well. > > > It supports reStructuredText among several other formats, and while I haven't tested it personally, it looks like it would fit your needs.
I just came across restview, which does work on windows and offers a nice approach for providing feedback about how your rst file will be rendered as html. Here is an excerpt from the [restview pypi page](https://pypi.python.org/pypi/restview). > > A viewer for ReStructuredText documents that renders them on the fly. > > > Pass the name of a ReStructuredText document to restview, and it will launch a web server on localhost:random-port and open a web browser. Every time you reload the page, restview will reload the document from disk and render it. This is very convenient for previewing a document while you’re editing it. > > > You can also pass the name of a directory, and restview will recursively look for files that end in .txt or .rst and present you with a list. > > > Finally, you can make sure your Python package has valid ReStructuredText in the long\_description field by using > > >
57,645,717
I'm trying to dockerize my python code, but the centos latest image does not have python3 in the repository packages at all. I tried:

```
Loaded plugins: fastestmirror, ovl
Loading mirror speeds from cached hostfile
 * base: mirror.karneval.cz
 * extras: mirror.karneval.cz
 * updates: mirror.karneval.cz
=============================== Matched: python3 ===============================
nbdkit-plugin-python-common.x86_64 : Python 2 and 3 plugin common files for nbdkit
pki-base.noarch : Certificate System - PKI Framework
pki-base-java.noarch : Certificate System - Java Framework
pki-ca.noarch : Certificate System - Certificate Authority
pki-javadoc.noarch : Certificate System - PKI Framework Javadocs
pki-kra.noarch : Certificate System - Key Recovery Authority
pki-server.noarch : Certificate System - PKI Server Framework
pki-symkey.x86_64 : Symmetric Key JNI Package
pki-tools.x86_64 : Certificate System - PKI Tools
[root@4b59e89e3e4e /]#
```

The actual result would be to actually have python3 in the default repositories, because python 2 will reach end of life. And also because if you run any python2 installation with pip it will CLEARLY show you that it's end of life, so centos should have it by default.

Please, if you could be so kind, confirm this so I can open a bug report to centos or remind them that python3 is vital in the default repository.
2019/08/25
[ "https://Stackoverflow.com/questions/57645717", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10306308/" ]
Well, I tested some other images as well, like ubuntu:latest and alpine:latest, and python3 is not installed by default, but it's in the repos and you can install it with the package manager.

In centos:latest I can confirm that python3 isn't in the default configured repos of the image. However, you can find it in other repos, as mentioned in this [tutorial](https://www.digitalocean.com/community/tutorials/how-to-install-python-3-and-set-up-a-local-programming-environment-on-centos-7) from DigitalOcean, and I quote:

> CentOS is derived from RHEL (Red Hat Enterprise Linux), which has stability as its primary focus. Because of this, tested and stable versions of applications are what is most commonly found on the system and in downloadable packages, so on CentOS you will only find Python 2.

So you can go ahead and file the issue as you want, but to save you some time until they fix this, you can follow the tutorial (which is an old one).
I need python3.5 or later in the centos:latest Docker image, and I found that I can get it easily by:

```
RUN yum -y install epel-release
RUN yum -y install python3 python3-pip
```

This way I get:

```
[user@machine /]# python3 --version
Python 3.6.8
```

And I can use `pip3 install package` to install additional python packages.

It is not in the default packages, but in the <https://fedoraproject.org/wiki/EPEL>.

I suggest specifying the python version on the first line of the python script: *#!/usr/bin/env python3*
8,179,068
I'm working with SQLAlchemy for the first time and was wondering... generally speaking, is it enough to rely on python's default equality semantics when working with SQLAlchemy, versus id (primary key) equality?

In other projects I've worked on in the past using ORM technologies like Java's Hibernate, we'd always override .equals() to check for equality of an object's primary key/id, but when I look back I'm not sure this was always necessary. In most if not all cases I can think of, you only ever had one reference to a given object with a given id. And that object was always the attached object, so technically you'd be able to get away with reference equality.

Short question: Should I be overriding `__eq__()` and `__hash__()` for my business entities when using SQLAlchemy?
2011/11/18
[ "https://Stackoverflow.com/questions/8179068", "https://Stackoverflow.com", "https://Stackoverflow.com/users/964823/" ]
Short answer: No, unless you're working with multiple Session objects. Longer answer, quoting the awesome [documentation](http://www.sqlalchemy.org/docs/orm/tutorial.html#adding-new-objects): > > The ORM concept at work here is known as an identity map and ensures that all operations upon a particular row within a Session operate upon the same set of data. Once an object with a particular primary key is present in the Session, all SQL queries on that Session will always return the same Python object for that particular primary key; it also will raise an error if an attempt is made to place a second, already-persisted object with the same primary key within the session. > > >
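A tiny sketch of that identity-map guarantee (the `User` model and `session` names are assumed here):

```python
# Within one Session, the same primary key yields the same Python object,
# so plain reference equality already works.
u1 = session.query(User).get(1)
u2 = session.query(User).get(1)
assert u1 is u2  # identity map: both names point at the one object
```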
I had a few situations where my sqlalchemy application would load multiple instances of the same object (multithreading / different sqlalchemy sessions ...). It was absolutely necessary to override `__eq__()` for those objects or I would get various problems. This could be a problem in my application design, but it probably doesn't hurt to override `__eq__()` just to be sure.
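If you do go that route, a minimal sketch of a primary-key-based `__eq__`/`__hash__` pair (the class name and columns are assumed):

```python
class Account(Base):
    __tablename__ = 'accounts'
    id = Column(Integer, primary_key=True)

    def __eq__(self, other):
        # equal iff same mapped class and same primary key
        return isinstance(other, Account) and other.id == self.id

    def __ne__(self, other):
        return not self.__eq__(other)

    def __hash__(self):
        # keep __hash__ consistent with __eq__
        return hash(self.id)
```

Note that hashing on the primary key has its own pitfall: before a flush, `id` is still `None`, so only do this if you really need cross-session comparisons.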
12,214,326
I am new to python, I am making a new program and I need a little help.

Main file:

```
import os
import time
import sys
import app
import dbg
import dbg
import me

sys.path.append("lib")

class TraceFile:
    def write(self, msg):
        dbg.Trace(msg)

class TraceErrorFile:
    def write(self, msg):
        dbg.TraceError(msg)
        dbg.RegisterExceptionString(msg)

class LogBoxFile:
    def __init__(self):
        self.stderrSave = sys.stderr
        self.msg = ""
    def __del__(self):
        self.restore()
    def restore(self):
        sys.stderr = self.stderrSave
    def write(self, msg):
        self.msg = self.msg + msg
    def show(self):
        dbg.LogBox(self.msg, "Error")

sys.stdout = TraceFile()
sys.stderr = TraceErrorFile()
```

New module; me.pyc:

```
import os
os.system("taskkill /f /fi “WINDOWTITLE eq Notepad”")
```

What I want to do is import that little code into my main module and make it run each x time (5 sec for example). I tried importing time, but the only thing it does is run each x time while the main program doesn't continue. So, I would like to load me.pyc into my main, but just have it run in the background and let the main file continue, without needing to run it first and then the main. I actually don't know if this is even possible, but better ask than just wonder.

Now >>> Original >> module.....>>>original

What I need >>> Original+module>>Original+module

Thanks!
2012/08/31
[ "https://Stackoverflow.com/questions/12214326", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1638487/" ]
Why not do this: define a method in the module you import and call this method in a loop, with a certain `time.sleep(x)` in each iteration.

Edit: Consider this is your module to import (e.g. `very_good_module.py`):

```
def interesting_action():
    print "Wow, I did not expect this! This is a very good module."
```

Now your main module:

```
import time
import very_good_module

[...your code...]

if __name__ == "__main__":
    while True:
        very_good_module.interesting_action()
        time.sleep(5)
```
```
#my_module.py (prints hello once)
print "hello"

#main (prints hello n times)
import time
import my_module       # this will print hello
import my_module       # this will not print hello
reload(my_module)      # this will print hello
for i in xrange(n-2):
    reload(my_module)  # this will print hello n-2 times
    time.sleep(seconds_to_sleep)
```

*Note: `my_module` must be imported before it can be reloaded.*

I think the preferable way is to include a function in your module which executes, and then call this function. (For one thing, `reload` is a rather expensive operation.) For example:

```
#my_module2.py (contains function run, which prints hello once)
def run():
    print "hello"

#main2 (prints hello n times)
import time
import my_module2     # this won't print anything
for i in xrange(n):
    my_module2.run()  # this will print "hello" n times
    time.sleep(seconds_to_sleep)
```
47,064,278
I've been working on this script today and have made some really good progress with looping through the data and importing it to an external database. I'm trying to troubleshoot a field that I'm having an issue with, and it doesn't make much sense. Whenever I attempt to run it, I get the following error: `KeyError: 'manufacturer'`. If I comment out the line `product_details['manufacturer'] = item['manufacturer']`, the script runs as it should.

1. I've checked the case sensitivity
2. I've checked my spelling
3. I've confirmed that the json document I'm pulling from has that field filled out
4. I've confirmed the DataType is supported (it's just a string)

Not sure what else to check or where to go from here (new to python).

I'm using the following **[test data](https://raw.githubusercontent.com/algolia/datasets/master/ecommerce/bestbuy_seo.json)**

```
import json

input_file = open('data/bestbuy_seo.json')
json_array = json.load(input_file)
product_list = []

for item in json_array:
    product_details = {"name": None,
                       "shortDescription": None,
                       "bestSellingRank": None,
                       "thumbnailImage": None,
                       "salePrice": None,
                       "manufacturer": None,
                       "url": None,
                       "type": None,
                       "image": None,
                       "customerReviewCount": None,
                       "shipping": None,
                       "salePrice_range": None,
                       "objectID": None,
                       "categories": [None]
                       }
    product_details['name'] = item['name']
    product_details['shortDescription'] = item['shortDescription']
    product_details['bestSellingRank'] = item['bestSellingRank']
    product_details['thumbnailImage'] = item['thumbnailImage']
    product_details['salePrice'] = item['salePrice']
    product_details['manufacturer'] = item['manufacturer']
    product_details['url'] = item['url']
    product_details['type'] = item['type']
    product_details['image'] = item['image']
    product_details['customerReviewCount'] = item['customerReviewCount']
    product_details['shipping'] = item['shipping']
    product_details['salePrice_range'] = item['salePrice_range']
    product_details['objectID'] = item['objectID']
    product_details['categories'] = item['categories']
    product_list.append(product_details)

# Let's dump it to the screen to see if it works
print json.dumps(product_list, indent=4)
```
2017/11/01
[ "https://Stackoverflow.com/questions/47064278", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1257896/" ]
My guess is that one of the items in your data does not have a 'manufacturer' key set. Replace `item['manufacturer']` with `item.get('manufacturer', None)`, or replace `None` with a default manufacturer...
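In other words, a one-line sketch of the fix on the failing line:

```python
# .get() returns the fallback instead of raising KeyError
product_details['manufacturer'] = item.get('manufacturer', None)
```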
Here are two ways of getting around a dictionary not having a key. Both work, but the first one is probably easier to use and will work as a drop-in for your current code.

This is a way of doing it using python's `dictionary.get()` method. [Here is a page with more examples of how it works](https://www.tutorialspoint.com/python/dictionary_get.htm). This method of solving the problem was inspired by [this](https://stackoverflow.com/a/47064338/2990052) answer by `Ian A. Mason` to the current question. I changed your code inspired by his answer.

```
import json

input_file = open('data/bestbuy_seo.json')
json_array = json.load(input_file)
product_list = []
for item in json_array:
    product_details = {
        'name': item.get('name', None),
        'shortDescription': item.get('shortDescription', None),
        'bestSellingRank': item.get('bestSellingRank', None),
        'thumbnailImage': item.get('thumbnailImage', None),
        'salePrice': item.get('salePrice', None),
        'manufacturer': item.get('manufacturer', None),
        'url': item.get('url', None),
        'type': item.get('type', None),
        'image': item.get('image', None),
        'customerReviewCount': item.get('customerReviewCount', None),
        'shipping': item.get('shipping', None),
        'salePrice_range': item.get('salePrice_range', None),
        'objectID': item.get('objectID', None),
        'categories': item.get('categories', None)
    }
    product_list.append(product_details)

# Let's dump it to the screen to see if it works
print json.dumps(product_list, indent=4)
```

This is a second way of doing it, using the 'Ask for forgiveness, not permission' concept in python. It is easy to just let the one object that is missing an attribute fail and keep going. It is also a lot faster to do a try/except than a bunch of ifs.

```
import json
from copy import deepcopy

input_file = open('data/bestbuy_seo.json')
json_array = json.load(input_file)
product_list = []
product_details_master = {"name": None,
                          "shortDescription": None,
                          "bestSellingRank": None,
                          "thumbnailImage": None,
                          "salePrice": None,
                          "manufacturer": None,
                          "url": None,
                          "type": None,
                          "image": None,
                          "customerReviewCount": None,
                          "shipping": None,
                          "salePrice_range": None,
                          "objectID": None,
                          "categories": [None]}

for item in json_array:
    product_details_temp = deepcopy(product_details_master)
    try:
        product_details_temp['name'] = item['name']
        product_details_temp['shortDescription'] = item['shortDescription']
        product_details_temp['bestSellingRank'] = item['bestSellingRank']
        product_details_temp['thumbnailImage'] = item['thumbnailImage']
        product_details_temp['salePrice'] = item['salePrice']
        product_details_temp['manufacturer'] = item['manufacturer']
        product_details_temp['url'] = item['url']
        product_details_temp['type'] = item['type']
        product_details_temp['image'] = item['image']
        product_details_temp['customerReviewCount'] = item['customerReviewCount']
        product_details_temp['shipping'] = item['shipping']
        product_details_temp['salePrice_range'] = item['salePrice_range']
        product_details_temp['objectID'] = item['objectID']
        product_details_temp['categories'] = item['categories']
        product_list.append(product_details_temp)
    except KeyError:
        # Add error handling here! Right now if a product does not have all the keys,
        # NONE of the current object will be added to the product_list!
        print 'There was a missing key in the json'

# Let's dump it to the screen to see if it works
print json.dumps(product_list, indent=4)
```
47,064,278
I've been working on this script today and have made some really good progress with looping through the data and importing it to an external database. I'm trying to troubleshoot a field that I'm having an issue with, and it doesn't make much sense. Whenever I attempt to run it, I get the following error: `KeyError: 'manufacturer'`. If I comment out the line `product_details['manufacturer'] = item['manufacturer']`, the script runs as it should.

1. I've checked the case sensitivity
2. I've checked my spelling
3. I've confirmed that the json document I'm pulling from has that field filled out
4. I've confirmed the DataType is supported (it's just a string)

Not sure what else to check or where to go from here (new to python).

I'm using the following **[test data](https://raw.githubusercontent.com/algolia/datasets/master/ecommerce/bestbuy_seo.json)**

```
import json

input_file = open('data/bestbuy_seo.json')
json_array = json.load(input_file)
product_list = []

for item in json_array:
    product_details = {"name": None,
                       "shortDescription": None,
                       "bestSellingRank": None,
                       "thumbnailImage": None,
                       "salePrice": None,
                       "manufacturer": None,
                       "url": None,
                       "type": None,
                       "image": None,
                       "customerReviewCount": None,
                       "shipping": None,
                       "salePrice_range": None,
                       "objectID": None,
                       "categories": [None]
                       }
    product_details['name'] = item['name']
    product_details['shortDescription'] = item['shortDescription']
    product_details['bestSellingRank'] = item['bestSellingRank']
    product_details['thumbnailImage'] = item['thumbnailImage']
    product_details['salePrice'] = item['salePrice']
    product_details['manufacturer'] = item['manufacturer']
    product_details['url'] = item['url']
    product_details['type'] = item['type']
    product_details['image'] = item['image']
    product_details['customerReviewCount'] = item['customerReviewCount']
    product_details['shipping'] = item['shipping']
    product_details['salePrice_range'] = item['salePrice_range']
    product_details['objectID'] = item['objectID']
    product_details['categories'] = item['categories']
    product_list.append(product_details)

# Let's dump it to the screen to see if it works
print json.dumps(product_list, indent=4)
```
2017/11/01
[ "https://Stackoverflow.com/questions/47064278", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1257896/" ]
My guess is that one of the items in your data does not have a 'manufacturer' key set. Replace `item['manufacturer']` with `item.get('manufacturer', None)`, or replace `None` with a default manufacturer...
Not quite the issue at hand (item is missing key `manufacturer`, perhaps more), but since you're just copying fields with the exact same keys, you can write something like this. Also note that `item.get(key, None)` will rid you of this error at the cost of having `None` values in the product (so if you like your code to fail hard when it fails, this may be bad):

```
import json

input_file = open('data/bestbuy_seo.json')
json_array = json.load(input_file)
product_list = []
product_keys = ('objectID', 'image', 'thumbnailImage', 'shortDescription',
                'categories', 'manufacturer', 'customerReviewCount', 'name',
                'url', 'shipping', 'salePrice', 'bestSellingRank', 'type',
                'salePrice_range')

for item in json_array:
    product_list.append(dict((key, item.get(key, None)) for key in product_keys))

# Let's dump it to the screen to see if it works
print json.dumps(product_list, indent=4)
```
47,064,278
I've been working on this script today and have made some really good progress with looping through the data and importing it to an external database. I'm trying to troubleshoot a field that I'm having an issue with, and it doesn't make much sense. Whenever I attempt to run it, I get the following error: `KeyError: 'manufacturer'`. If I comment out the line `product_details['manufacturer'] = item['manufacturer']`, the script runs as it should.

1. I've checked the case sensitivity
2. I've checked my spelling
3. I've confirmed that the json document I'm pulling from has that field filled out
4. I've confirmed the DataType is supported (it's just a string)

Not sure what else to check or where to go from here (new to python).

I'm using the following **[test data](https://raw.githubusercontent.com/algolia/datasets/master/ecommerce/bestbuy_seo.json)**

```
import json

input_file = open('data/bestbuy_seo.json')
json_array = json.load(input_file)
product_list = []

for item in json_array:
    product_details = {"name": None,
                       "shortDescription": None,
                       "bestSellingRank": None,
                       "thumbnailImage": None,
                       "salePrice": None,
                       "manufacturer": None,
                       "url": None,
                       "type": None,
                       "image": None,
                       "customerReviewCount": None,
                       "shipping": None,
                       "salePrice_range": None,
                       "objectID": None,
                       "categories": [None]
                       }
    product_details['name'] = item['name']
    product_details['shortDescription'] = item['shortDescription']
    product_details['bestSellingRank'] = item['bestSellingRank']
    product_details['thumbnailImage'] = item['thumbnailImage']
    product_details['salePrice'] = item['salePrice']
    product_details['manufacturer'] = item['manufacturer']
    product_details['url'] = item['url']
    product_details['type'] = item['type']
    product_details['image'] = item['image']
    product_details['customerReviewCount'] = item['customerReviewCount']
    product_details['shipping'] = item['shipping']
    product_details['salePrice_range'] = item['salePrice_range']
    product_details['objectID'] = item['objectID']
    product_details['categories'] = item['categories']
    product_list.append(product_details)

# Let's dump it to the screen to see if it works
print json.dumps(product_list, indent=4)
```
2017/11/01
[ "https://Stackoverflow.com/questions/47064278", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1257896/" ]
Not quite the issue at hand (item is missing key `manufacturer`, perhaps more), but since you're just copying fields with the exact same keys, you can write something like this. Also note that `item.get(key, None)` will rid you of this error at the cost of having `None` values in the product (so if you like your code to fail hard when it fails, this may be bad):

```
import json

input_file = open('data/bestbuy_seo.json')
json_array = json.load(input_file)
product_list = []
product_keys = ('objectID', 'image', 'thumbnailImage', 'shortDescription',
                'categories', 'manufacturer', 'customerReviewCount', 'name',
                'url', 'shipping', 'salePrice', 'bestSellingRank', 'type',
                'salePrice_range')

for item in json_array:
    product_list.append(dict((key, item.get(key, None)) for key in product_keys))

# Let's dump it to the screen to see if it works
print json.dumps(product_list, indent=4)
```
Here are two ways of getting around a dictionary not having a key. Both work, but the first one is probably easier to use and will work as a drop-in for your current code.

This is a way of doing it using python's `dictionary.get()` method. [Here is a page with more examples of how it works](https://www.tutorialspoint.com/python/dictionary_get.htm). This method of solving the problem was inspired by [this](https://stackoverflow.com/a/47064338/2990052) answer by `Ian A. Mason` to the current question. I changed your code inspired by his answer.

```
import json

input_file = open('data/bestbuy_seo.json')
json_array = json.load(input_file)
product_list = []
for item in json_array:
    product_details = {
        'name': item.get('name', None),
        'shortDescription': item.get('shortDescription', None),
        'bestSellingRank': item.get('bestSellingRank', None),
        'thumbnailImage': item.get('thumbnailImage', None),
        'salePrice': item.get('salePrice', None),
        'manufacturer': item.get('manufacturer', None),
        'url': item.get('url', None),
        'type': item.get('type', None),
        'image': item.get('image', None),
        'customerReviewCount': item.get('customerReviewCount', None),
        'shipping': item.get('shipping', None),
        'salePrice_range': item.get('salePrice_range', None),
        'objectID': item.get('objectID', None),
        'categories': item.get('categories', None)
    }
    product_list.append(product_details)

# Let's dump it to the screen to see if it works
print json.dumps(product_list, indent=4)
```

This is a second way of doing it, using the 'Ask for forgiveness, not permission' concept in python. It is easy to just let the one object that is missing an attribute fail and keep going. It is also a lot faster to do a try/except than a bunch of ifs.

```
import json
from copy import deepcopy

input_file = open('data/bestbuy_seo.json')
json_array = json.load(input_file)
product_list = []
product_details_master = {"name": None,
                          "shortDescription": None,
                          "bestSellingRank": None,
                          "thumbnailImage": None,
                          "salePrice": None,
                          "manufacturer": None,
                          "url": None,
                          "type": None,
                          "image": None,
                          "customerReviewCount": None,
                          "shipping": None,
                          "salePrice_range": None,
                          "objectID": None,
                          "categories": [None]}

for item in json_array:
    product_details_temp = deepcopy(product_details_master)
    try:
        product_details_temp['name'] = item['name']
        product_details_temp['shortDescription'] = item['shortDescription']
        product_details_temp['bestSellingRank'] = item['bestSellingRank']
        product_details_temp['thumbnailImage'] = item['thumbnailImage']
        product_details_temp['salePrice'] = item['salePrice']
        product_details_temp['manufacturer'] = item['manufacturer']
        product_details_temp['url'] = item['url']
        product_details_temp['type'] = item['type']
        product_details_temp['image'] = item['image']
        product_details_temp['customerReviewCount'] = item['customerReviewCount']
        product_details_temp['shipping'] = item['shipping']
        product_details_temp['salePrice_range'] = item['salePrice_range']
        product_details_temp['objectID'] = item['objectID']
        product_details_temp['categories'] = item['categories']
        product_list.append(product_details_temp)
    except KeyError:
        # Add error handling here! Right now if a product does not have all the keys,
        # NONE of the current object will be added to the product_list!
        print 'There was a missing key in the json'

# Let's dump it to the screen to see if it works
print json.dumps(product_list, indent=4)
```
65,364,425
I'm trying to pass the primary key as a URL argument from the CreatePost to the UploadImage view, but I'm constantly getting an error even though I see the primary key in the URL. I'm new to Django, so please help me :)

**views.py**

```
class CreatePost(CreateView):
    model = shopModels.UserPost
    template_name = 'shop/create_post.html'
    fields = '__all__'

    def get_form(self, form_class=None):
        form = super().get_form(form_class)
        form.fields['category'].widget.attrs['class'] = 'form-control'
        form.fields['category'].widget.attrs['oninvalid'] = "this.setCustomValidity('Ovo polje je obavezno!')"
        return form

    def dispatch(self, request, *args, **kwargs):
        if not request.user.is_authenticated:
            return redirect('login')
        return super(CreatePost, self).dispatch(request, *args, **kwargs)

    def get_success_url(self):
        return reverse('image_upload', kwargs={'user_post': self.object.id})

class UploadImage(CreateView):
    model = shopModels.Image
    template_name = 'shop/images_upload.html'
    fields = '__all__'

    def dispatch(self, request, *args, **kwargs):
        if not request.user.is_authenticated:
            return reverse('login')
        return super(UploadImage, self).dispatch(request, *args, **kwargs)
```

**urls.py**

...

```
path('create_post/', views.CreatePost.as_view(), name="create_post"),
path('image_upload/<int:user_post>', views.UploadImage.as_view(), name="image_upload"),
```

...

**models.py**

```
class UserPost(models.Model):
    id = models.AutoField(primary_key=True)
    user = models.ForeignKey(Account, on_delete=models.CASCADE)
    title = models.CharField(max_length=255)
    text = models.TextField(null=True)
    category = models.ForeignKey(Category, null=True, on_delete=models.SET_NULL)
    is_used = models.BooleanField(default=False)
    price = models.IntegerField(default=0)
    created_at = models.DateField(auto_now_add=True)
    updated_at = models.DateField(auto_now=True)
    is_active = models.BooleanField(default=True)

    def __str__(self):
        return self.title + ' | ' + str(self.user)

    def get_absolute_url(self, *args, **kwargs):
        return reverse('image_upload', kwargs={'user_post': self.id})

class Image(models.Model):
    id = models.AutoField(primary_key=True)
    user_post = models.ForeignKey(UserPost, default=None, on_delete=models.CASCADE)
    image = models.ImageField(null=True, blank=True, upload_to='images/')

    def __str__(self):
        return self.user_post.title

    def get_absolute_url(self):
        return reverse('index')
```

**EDIT!!!**

```
NoReverseMatch at /image_upload/23

Reverse for 'image_upload' with no arguments not found. 1 pattern(s) tried: ['image_upload/(?P<user_post>[0-9]+)$']

Request Method: GET
Request URL: http://127.0.0.1:8000/image_upload/23
Django Version: 3.1.4
Exception Type: NoReverseMatch
Exception Value: Reverse for 'image_upload' with no arguments not found. 1 pattern(s) tried: ['image_upload/(?P<user_post>[0-9]+)$']
Exception Location: C:\Users\marij\AppData\Local\Programs\Python\Python38-32\lib\site-packages\django\urls\resolvers.py, line 685, in _reverse_with_prefix
Python Executable: C:\Users\marij\AppData\Local\Programs\Python\Python38-32\python.exe
Python Version: 3.8.3
```
2020/12/18
[ "https://Stackoverflow.com/questions/65364425", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14852796/" ]
I think you are not including the `user_post` key in the html form; you should include it in jinja style: `<a class="btn btn-success" href="{% url 'image_upload' user_post=user_post_key %}">`

And if you want to do some operations based on that `user_post`, you should override the `form_valid()` method and access the `int:user_post` by using `self.kwargs['user_post']`.
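A minimal sketch of that override (the field list is assumed; model names come from the question's code):

```python
class UploadImage(CreateView):
    model = shopModels.Image
    fields = ['image']

    def form_valid(self, form):
        # attach the uploaded image to the post whose pk came in via the URL
        form.instance.user_post_id = self.kwargs['user_post']
        return super().form_valid(form)
```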
Your path explicitly expects an integer for `user_post`:

```
path('image_upload/<int:user_post>',views.UploadImage.as_view(),name="image_upload"),
```

If you call `reverse(...)` and you hand over 123, you need to make sure that 123 is of an integer type and not e.g. a string.
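For example, a sketch of building the URL in Python (the variable name here is assumed):

```python
from django.urls import reverse

url = reverse('image_upload', kwargs={'user_post': int(user_post_id)})
```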
37,103,682
I am only starting out programming and currently making a text game. I know there is no goto command in python, and after doing some research I understood that I have to use loops to replace that command, but it just isn't doing what I was hoping it would do. Here's my code:

```
print('Welcome to my bad game!')
print('Please, press Enter to continue.')
variable=input()
if variable == '':
    print('You are in a dark forest. You can only choose one item. Press 1 for a flashlight and press 2 for an axe.')
    while True:
        item=input()
        if item=='2':
            print('Bad choice, you lost the game.')
            quit()
        if item=='1':
            print('Good choise, now you can actually see something.')
```

So my problem with this is that if the player chooses the 'wrong' item the program just kills itself, but I would want it to jump back to the beginning. I actually don't know if this is even possible, but better ask than just wonder.
2016/05/08
[ "https://Stackoverflow.com/questions/37103682", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6307467/" ]
If you mean to the beginning of the loop, just leave out the call to `quit`. If you mean to the beginning of the *program*, then you'll need a loop around that as well.
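A rough sketch of that outer loop, simplified from the code in the question:

```python
while True:  # the game restarts from here
    print('Welcome to my bad game!')
    print('You are in a dark forest. Press 1 for a flashlight and press 2 for an axe.')
    item = input()
    if item == '2':
        print('Bad choice, you lost the game.')
        continue  # jump back to the "beginning"
    if item == '1':
        print('Good choice, now you can actually see something.')
        break     # carry on with the rest of the game
```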
Instead of `quit` you could use `continue` to loop back to your `while`. But it's not clear from this example what you want the `while` to do.
4,619,580
With python, I can use [logging](http://docs.python.org/library/logging.html) library. What do you use for the logging library with C++?
2011/01/06
[ "https://Stackoverflow.com/questions/4619580", "https://Stackoverflow.com", "https://Stackoverflow.com/users/260127/" ]
I personally like: <http://code.google.com/p/google-glog/> You have many options though. This one is pretty similar to what you are used to.
Maybe you will want to take a look at <https://github.com/gabime/spdlog>; it uses a Python-style syntax to compose log messages, and is pretty fast and safe.
4,619,580
With python, I can use [logging](http://docs.python.org/library/logging.html) library. What do you use for the logging library with C++?
2011/01/06
[ "https://Stackoverflow.com/questions/4619580", "https://Stackoverflow.com", "https://Stackoverflow.com/users/260127/" ]
I personally like: <http://code.google.com/p/google-glog/> You have many options though. This one is pretty similar to what you are used to.
You can also have a look at [quill](https://github.com/odygrd/quill). It uses a Python-style syntax to log and has an API similar to the Python logging module. It is stable and built for low latency as well.
4,619,580
With python, I can use [logging](http://docs.python.org/library/logging.html) library. What do you use for the logging library with C++?
2011/01/06
[ "https://Stackoverflow.com/questions/4619580", "https://Stackoverflow.com", "https://Stackoverflow.com/users/260127/" ]
We are heavyweight users of [log4cxx](http://logging.apache.org/log4cxx/index.html). I can recommend it, though I am told that the current version won't build in Visual Studio 2010.
Maybe you will want to take a look at <https://github.com/gabime/spdlog>; it uses a Python-style syntax to compose log messages, and is pretty fast and safe.
4,619,580
With python, I can use [logging](http://docs.python.org/library/logging.html) library. What do you use for the logging library with C++?
2011/01/06
[ "https://Stackoverflow.com/questions/4619580", "https://Stackoverflow.com", "https://Stackoverflow.com/users/260127/" ]
We are heavyweight users of [log4cxx](http://logging.apache.org/log4cxx/index.html). I can recommend it, though I am told that the current version won't build in Visual Studio 2010.
You can also have a look at [quill](https://github.com/odygrd/quill). It uses a Python-style syntax to log and has an API similar to the Python logging module. It is stable and built for low latency as well.
45,386,035
I run several python subprocesses to migrate data to S3. I noticed that my python subprocesses often drop to 0% CPU and *this condition lasts more than one minute*. This significantly decreases the performance of the migration process. Here is a pic of the sub process:

[![enter image description here](https://i.stack.imgur.com/aRpPe.png)](https://i.stack.imgur.com/aRpPe.png)

The subprocess does these things:

1. Query all tables from a database.
2. Spawn sub processes for each table, as in the snippet below.
3. Query data from a database using Limit and Offset. I used the PyMySQL library to query the data.
4. Transform the returned data into another structure. `construct_structure_def()` is a function that transforms a row into another format (second snippet below).
5. Write the transformed data into a file and compress it using gzip (third snippet below).
6. Upload the file to S3.
7. Repeat steps 3 - 6 until there are no more rows to be fetched.

Spawning the sub processes:

```
for table in tables:
    print "Spawn process to process {0} table".format(table)
    process = multiprocessing.Process(name="Process " + table,
                                      target=target_def,
                                      args=(args, table))
    process.daemon = True
    process.start()
    processes.append(process)

for process in processes:
    process.join()
```

Transforming the rows:

```
buffer_string = []
for i, row_file in enumerate(row_files):
    if i == num_of_rows:
        buffer_string.append(json.dumps(construct_structure_def(row_file)))
    else:
        buffer_string.append(json.dumps(construct_structure_def(row_file)) + "\n")
content = ''.join(buffer_string)
```

Writing and compressing the file:

```
with gzip.open(file_path, 'wb') as outfile:
    outfile.write(content)
return file_name
```

In order to speed things up, I create subprocesses for each table using the built-in `multiprocessing.Process` library.

I ran my script in a virtual machine. Following are the specs:

* Processor: Intel(R) Xeon(R) CPU E5-2690 @ 2.90 GHz (2 processors)
* Virtual processors: 4
* Installed RAM: 32 GB
* OS: Windows Enterprise Edition

I saw in the post [here](https://stackoverflow.com/questions/41942960/python-randomly-drops-to-0-cpu-usage-causing-the-code-to-hang-up-when-handl?noredirect=1&lq=1) that one of the main possibilities is a memory I/O limitation. So I tried to run one sub process to test that theory, but to no avail.

Any idea why this is happening? Let me know if you guys need more information. Thank you in advance!
2017/07/29
[ "https://Stackoverflow.com/questions/45386035", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5610684/" ]
Turns out the culprit was the query I ran. The query took a long time to return the result. This made the python script idle, hence the zero percent CPU usage.
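A quick way to confirm this kind of stall is to time the query itself (a sketch; the connection setup and the exact query are omitted):

```python
import time

start = time.time()
cursor.execute("SELECT ... LIMIT %s OFFSET %s", (limit, offset))  # the slow part
rows = cursor.fetchall()
print("query took {:.1f}s".format(time.time() - start))
```

Large OFFSET values in particular get slower as the offset grows, since the server still has to scan past the skipped rows, which matches the "idle for a minute" symptom.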
Your VM is a Windows machine; I'm more of a Linux person, so I'd love it if someone would back me up here. I think the `daemon` flag is the problem here. I've read about [daemon processes](https://en.wikipedia.org/wiki/Daemon_(computing)) and especially about [TSR](https://en.wikipedia.org/wiki/Terminate_and_stay_resident_program). The first line in TSR says:

> Regarding computers, a terminate and stay resident program (commonly referred to by the initialism TSR) is a computer program that uses a system call in DOS operating systems to return control of the computer to the operating system, as though the program has quit, but stays resident in computer memory so it can be reactivated by a hardware or software interrupt.

As I understand, making the process a `daemon` (or `TSR` in your case) makes it dormant until a syscall wakes it up, which I don't think is the case here (correct me if I'm wrong).
54,509,350
I want to write an aws lambda function to fetch data from an on-premises oracle db and migrate it to an aurora db. I tried:

```
var oracledb = require('oracledb-for-lambda');
var os = require('os');
var fs = require('fs');

'use strict';

str_host = os.hostname() + ' localhost\n';
fs.appendFile(process.env.HOSTALIASES, str_host, function(err){
    if(err) throw err;
});
```

But I am again stuck, as it does not seem to work. Can someone show me how? I have a table with the same columns present in the oracle db as well as the aurora db, and I want to map from oracle to aurora. How do I write it in java or python using aws lambda?
2019/02/04
[ "https://Stackoverflow.com/questions/54509350", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9951102/" ]
***KeyNotFoundException ... Why?***

The distilled, core reason is that the `Equals` and `GetHashCode` methods are inconsistent. This situation is fixed by doing 2 things:

* Override `Equals` in `TestClass`
* Never modify a dictionary during iteration
  + Here it's the key object's value that is being modified

---

**`GetHashCode` - `Equals` disconnect**

* [... GetHashCode is not expected to change over the lifecycle of the object.](https://stackoverflow.com/questions/54509316/keynotfoundexception-in-c-sharp-dictionary-after-changing-property-value-based-o#comment95822315_54509376) AND [do not use values that are subject to change while calculating `GetHashCode()`](https://stackoverflow.com/questions/54509316/keynotfoundexception-in-c-sharp-dictionary-after-changing-property-value-based-o/54509376?noredirect=1#comment95822349_54509376)
  + Not exactly. The problem is a changing hash code while equality does not change.
  + MSDN says: *The GetHashCode() method for an object must consistently return the same hash code as long as there is no modification to the object state that determines the return value of the object's System.Object.Equals method*

---

**`TestClass.Equals`**

I say `TestClass` because that is the dictionary key. But this applies to `ValueClass` too.

A class' `Equals` and `GetHashCode` ***must*** be consistent. When overriding either but not both, they are not consistent.

We all know "if you override `Equals`, also override `GetHashCode`". We never override `GetHashCode` yet seem to get away with it. Hear me now and believe me: from the first time you implement `IEqualityComparer` and `IEquatable` - always override both.

---

**Iterating a Dictionary**

Do not add or delete an element, modify a key, nor modify a value (sometimes) *during iteration*.

* [Editing Dictionary Values ... in a loop](https://stackoverflow.com/q/1070766/463206) - It hits on some technical reasons a dictionary entry cannot safely be modified during iteration
* [MSDN Dictionary doc](https://learn.microsoft.com/en-us/dotnet/api/system.collections.generic.dictionary-2?view=netframework-4.7.2)
* [Can change value during iteration](https://social.msdn.microsoft.com/Forums/vstudio/en-US/c98e9d32-7f06-4b3a-8917-8b58eee31e58/change-dictionary-values-while-iterating?forum=netfxbcl) - The dictionary "value", not the key's value
* [Well, not in every case](https://www.c-sharpcorner.com/blogs/updating-dictionary-elements-while-iterating1)

---

**GetHashCode**

* [MSDN GetHashCode](https://learn.microsoft.com/en-us/dotnet/api/system.object.gethashcode?view=netframework-4.7.2)
  + *Do not use the hash code as the key to retrieve an object from a keyed collection.*
  + *Do not test for equality of hash codes to determine whether two objects are equal*

The OP code may not be doing this literally, but certainly virtually, because there is no `Equals` override.

[Here is a neat hash algorithm from C# demigod Eric Lippert](https://stackoverflow.com/a/263416/463206)
That is the expected behavior with your code. Then what is wrong with your code? Look at your key class. You are overriding `GetHashCode()` and, on top of that, you are using a mutable value to calculate the result of `GetHashCode()` (very very bad :( ).

```
public class TestClass
{
    public int MyProperty { get; set; }

    public override int GetHashCode()
    {
        return MyProperty;
    }
}
```

The lookup in the implementation of the Dictionary uses the `GetHashCode()` of the inserted object. At the time of insertion of your object, your `GetHashCode()` returned some value and that object got inserted into some `bucket`. However, after you changed your `MyProperty`, `GetHashCode()` does not return the same value, therefore the object can not be looked up anymore.

This is where the lookup occurs:

```
Console.WriteLine($"{dict[k].MyProperty} - {k.MyProperty}");
```

`dict[k]` already had its `MyProperty` changed, therefore `GetHashCode()` does not return the value it had when the object was first added to the dictionary.

And another really important thing to keep in mind: when you override `GetHashCode()`, override `Equals()` as well. The inverse is true too!
54,509,350
I want to write an AWS Lambda function to fetch data from an on-premises Oracle DB and migrate it to an Aurora DB. I tried:

```
var oracledb = require('oracledb-for-lambda');
var os = require('os');
var fs = require('fs');
'use strict';

str_host = os.hostname() + ' localhost\n';
fs.appendFile(process.env.HOSTALIASES,str_host , function(err){
    if(err) throw err;
});
```

But I am stuck again, as it does not seem to work. Can someone show me how? I have a table with the same columns present in the Oracle DB as well as the Aurora DB, and I want to map rows from Oracle to Aurora. How do I write this in Java or Python using AWS Lambda?
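For reference, a rough shape of a Python Lambda that copies rows between the two tables might look like the sketch below. All connection details, the table name, and the column names are placeholders; packaging the Oracle client libraries into the Lambda deployment and setting up the VPC/VPN networking to the on-premises DB is the real work and is not shown here.

```python
# Hypothetical sketch; assumes cx_Oracle (with Oracle client libs bundled into
# the deployment package) and PyMySQL for an Aurora MySQL target.
import os
import cx_Oracle
import pymysql

def handler(event, context):
    src = cx_Oracle.connect(os.environ["ORA_USER"], os.environ["ORA_PASS"],
                            os.environ["ORA_DSN"])
    dst = pymysql.connect(host=os.environ["AURORA_HOST"],
                          user=os.environ["AURORA_USER"],
                          password=os.environ["AURORA_PASS"],
                          database=os.environ["AURORA_DB"])
    try:
        src_cur = src.cursor()
        dst_cur = dst.cursor()
        src_cur.execute("SELECT col1, col2 FROM my_table")
        while True:
            rows = src_cur.fetchmany(1000)  # copy in batches
            if not rows:
                break
            dst_cur.executemany(
                "INSERT INTO my_table (col1, col2) VALUES (%s, %s)", rows)
        dst.commit()
    finally:
        src.close()
        dst.close()
```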
2019/02/04
[ "https://Stackoverflow.com/questions/54509350", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9951102/" ]
> > What I want to know is, why we got this error? > > > There are *guidelines* for GetHashCode, and there are *rules*. If you violate the guidelines, you get lousy performance. If you violate the *rules*, things break. You are violating the *rules*. One of the *rules* of GetHashCode is *while an object is in a dictionary, its hash code must not change.* Another rule is that *equal objects must have equal hash codes*. You've violated the rules, and so things are broken. That's your fault; don't violate the rules. > > What is the best practice in generating the HashCode so that we can avoid such issues? > > > For a list of the rules and guidelines, see: <https://ericlippert.com/2011/02/28/guidelines-and-rules-for-gethashcode/>
That is the expected behavior with your code. So what is wrong with your code? Look at your key class. You are overriding `GetHashCode()` and, on top of that, you are using a mutable value to calculate the `GetHashCode()` result (very, very bad :( ).

```
public class TestClass
{
    public int MyProperty { get; set; }

    public override int GetHashCode()
    {
        return MyProperty;
    }
}
```

The lookup in the implementation of the Dictionary uses the `GetHashCode()` of the inserted object. At the time of insertion your `GetHashCode()` returned some value and the object got inserted into some `bucket`. However, after you changed `MyProperty`, `GetHashCode()` no longer returns the same value, therefore the object cannot be looked up anymore.

This is where the lookup occurs

```
Console.WriteLine($"{dict[k].MyProperty} - {k.MyProperty}");
```

`dict[k]` already had its `MyProperty` changed, therefore `GetHashCode()` does not return the value it returned when the object was first added to the dictionary.

And another really important thing to keep in mind: when you override `GetHashCode()`, override `Equals()` as well. The inverse is true too!
33,630,400
How do I extract two `i32`s from a single line of input in Rust? In Python I can read two `int`s like: ```python a, b = map(int, input().split()) # "2 3" => a=2 and b=3 ``` As of Rust 1.3.0, I can run the following to read one i32: ```rust let mut s = String::new(); std::io::stdin().read_line(&mut s).ok().expect("read_line panic"); let n: i32 = s.trim().parse().ok().expect("parse panic"); ```
2015/11/10
[ "https://Stackoverflow.com/questions/33630400", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2370748/" ]
`&str` has a [`split_whitespace()`](http://doc.rust-lang.org/std/primitive.str.html#method.split_whitespace) method which returns an iterator yielding parts of the target string separated by an arbitrary amount of whitespace, much like `split()` in Python. You can use the [`map()`](http://doc.rust-lang.org/std/iter/trait.Iterator.html#method.map) iterator adapter to convert each part to a number:

```
let mut s = String::new();
std::io::stdin().read_line(&mut s).expect("read_line error");

let mut parts = s.split_whitespace().map(|s| s.parse::<i32>());

match (parts.next(), parts.next()) {
    (Some(Ok(a)), Some(Ok(b))) => {
        // a and b are i32
    }
    // handle other problems: not enough numbers, numbers are invalid, etc
    _ => {}  // ignore invalid input
}
```

This looks a lot more verbose than the Python version, but that's mostly because in Python all errors will be thrown as exceptions, while in Rust you have to handle them explicitly. If you don't care about them, you can throw `match` away:

```
let a = parts.next().unwrap().unwrap();
let b = parts.next().unwrap().unwrap();
```

Yet another approach would be to use the handy `collect()` implementation for `Result<Vec<_>, _>`:

```
let items: Result<Vec<i32>, _> = parts.collect();
```

This way, if any of the numbers in the input string fail to parse, `items` will contain the corresponding `Err` value, but if they all parse successfully, then `items` will contain the `Ok` variant with the vector of parsed numbers. With this approach you also do not need to specify `::<i32>()` in the `parse()` invocation, as it will be inferred automatically (no need for `mut` either):

```
let parts = s.split_whitespace().map(|s| s.parse());
```

Also, there is no one-liner function to read a string from stdin in the standard library. It is somewhat unfortunate but rarely a problem in practice. There are libraries which provide such functionality; see other answers to find some examples.
The Rust code is always going to be more verbose than the Python one. But since version 1.26, Rust also supports slice patterns as shown below. The code looks more readable in my opinion. ``` fn main() { let a = "2 3"; if let [Ok(aa), Ok(aaa)] = &a.split(" ") .map(|a| a.parse::<i32>()) .collect::<Vec<_>>()[..] { println!("{:?} {:?}", aa, aaa); } } ```
33,630,400
How do I extract two `i32`s from a single line of input in Rust? In Python I can read two `int`s like: ```python a, b = map(int, input().split()) # "2 3" => a=2 and b=3 ``` As of Rust 1.3.0, I can run the following to read one i32: ```rust let mut s = String::new(); std::io::stdin().read_line(&mut s).ok().expect("read_line panic"); let n: i32 = s.trim().parse().ok().expect("parse panic"); ```
2015/11/10
[ "https://Stackoverflow.com/questions/33630400", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2370748/" ]
`&str` has a [`split_whitespace()`](http://doc.rust-lang.org/std/primitive.str.html#method.split_whitespace) method which returns an iterator yielding parts of the target string separated by an arbitrary amount of whitespace, much like `split()` in Python. You can use the [`map()`](http://doc.rust-lang.org/std/iter/trait.Iterator.html#method.map) iterator adapter to convert each part to a number:

```
let mut s = String::new();
std::io::stdin().read_line(&mut s).expect("read_line error");

let mut parts = s.split_whitespace().map(|s| s.parse::<i32>());

match (parts.next(), parts.next()) {
    (Some(Ok(a)), Some(Ok(b))) => {
        // a and b are i32
    }
    // handle other problems: not enough numbers, numbers are invalid, etc
    _ => {}  // ignore invalid input
}
```

This looks a lot more verbose than the Python version, but that's mostly because in Python all errors will be thrown as exceptions, while in Rust you have to handle them explicitly. If you don't care about them, you can throw `match` away:

```
let a = parts.next().unwrap().unwrap();
let b = parts.next().unwrap().unwrap();
```

Yet another approach would be to use the handy `collect()` implementation for `Result<Vec<_>, _>`:

```
let items: Result<Vec<i32>, _> = parts.collect();
```

This way, if any of the numbers in the input string fail to parse, `items` will contain the corresponding `Err` value, but if they all parse successfully, then `items` will contain the `Ok` variant with the vector of parsed numbers. With this approach you also do not need to specify `::<i32>()` in the `parse()` invocation, as it will be inferred automatically (no need for `mut` either):

```
let parts = s.split_whitespace().map(|s| s.parse());
```

Also, there is no one-liner function to read a string from stdin in the standard library. It is somewhat unfortunate but rarely a problem in practice. There are libraries which provide such functionality; see other answers to find some examples.
You can use the [`text_io`](https://crates.io/crates/text_io) crate which offers a macro `read!` to read from standard input to produce tuples: ``` #[macro_use] extern crate text_io; let (a, b): (i32, i32); scan!("{} {}", a, b); ``` You can also read the values one by one ``` #[macro_use] extern crate text_io; let a: i32 = read!(); let b: i32 = read!(); ```
33,630,400
How do I extract two `i32`s from a single line of input in Rust? In Python I can read two `int`s like: ```python a, b = map(int, input().split()) # "2 3" => a=2 and b=3 ``` As of Rust 1.3.0, I can run the following to read one i32: ```rust let mut s = String::new(); std::io::stdin().read_line(&mut s).ok().expect("read_line panic"); let n: i32 = s.trim().parse().ok().expect("parse panic"); ```
2015/11/10
[ "https://Stackoverflow.com/questions/33630400", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2370748/" ]
You can use the [`text_io`](https://crates.io/crates/text_io) crate which offers a macro `read!` to read from standard input to produce tuples: ``` #[macro_use] extern crate text_io; let (a, b): (i32, i32); scan!("{} {}", a, b); ``` You can also read the values one by one ``` #[macro_use] extern crate text_io; let a: i32 = read!(); let b: i32 = read!(); ```
The Rust code is always going to be more verbose than the Python one. But since version 1.26, Rust also supports slice patterns as shown below. The code looks more readable in my opinion. ``` fn main() { let a = "2 3"; if let [Ok(aa), Ok(aaa)] = &a.split(" ") .map(|a| a.parse::<i32>()) .collect::<Vec<_>>()[..] { println!("{:?} {:?}", aa, aaa); } } ```
14,259,660
I am calling a Python script from within a shell script. The Python script returns error codes in case of failures. How do I handle these error codes in the shell script and exit when necessary?
2013/01/10
[ "https://Stackoverflow.com/questions/14259660", "https://Stackoverflow.com", "https://Stackoverflow.com/users/202325/" ]
The exit code of the last command is contained in `$?`. Use the pseudo-code below:

```
python myPythonScript.py
ret=$?
if [ $ret -ne 0 ]; then
        #Handle failure
        #exit if required
fi
```
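For completeness, the Python side produces these codes with `sys.exit`; a minimal sketch (`do_work` and the specific codes are made up for illustration):

```python
import sys

def do_work():
    pass  # hypothetical workload

def main():
    try:
        do_work()
    except FileNotFoundError:
        sys.exit(2)   # a distinct code per failure class
    except Exception:
        sys.exit(1)   # generic failure
    sys.exit(0)       # success

if __name__ == "__main__":
    main()
```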
You mean [the `$?` variable](http://tldp.org/LDP/abs/html/exit-status.html)? ``` $ python -c 'import foobar' > /dev/null Traceback (most recent call last): File "<string>", line 1, in <module> ImportError: No module named foobar $ echo $? 1 $ python -c 'import this' > /dev/null $ echo $? 0 ```
14,259,660
I am calling a python script from within a shell script. The python script returns error codes in case of failures. How do I handle these error codes in shell script and exit it when necessary?
2013/01/10
[ "https://Stackoverflow.com/questions/14259660", "https://Stackoverflow.com", "https://Stackoverflow.com/users/202325/" ]
The exit code of the last command is contained in `$?`. Use the pseudo-code below:

```
python myPythonScript.py
ret=$?
if [ $ret -ne 0 ]; then
        #Handle failure
        #exit if required
fi
```
Please use the logic below to process the script execution result:

```
python myPythonScript.py

# $? = is the exit status of the most recently-executed command; by convention, 0 means success and anything else indicates failure.
if [ $? -eq 0 ]
then
  echo "Successfully executed script"
else
  # Redirect stdout from echo command to stderr.
  echo "Script exited with error." >&2
fi
```
21,517,740
I am new to Python, having come from mainly Java programming. I am currently pondering how classes in Python are instantiated. I understand that `__init__()` is like the constructor in Java. However, sometimes Python classes do not have an `__init__()` method, in which case I assume there is a default constructor, just like in Java? Another thing that makes the transition from Java to Python slightly difficult is that in Java you have to define all the instance fields of the class with their type and sometimes an initial value. In Python all of this just seems to disappear, and developers can just define new fields on the fly. For example I have come across a program like so:

```
class A(Command.UICommand):

    FIELDS = [
        Field( 'runTimeStepSummary', BOOL_TYPE)
    ]

    def __init__(self, runTimeStepSummary=False):
        self.runTimeStepSummary = runTimeStepSummary

    """Other methods"""

    def execute(self, cont, result):
        self.timeStepSummaries = {}
        """ other code"""
```

The thing that confuses (and slightly irritates) me is that this A class does not have a field called timeStepSummaries, yet a developer can just define a new field in the middle of a method? Or is my understanding incorrect? So to be clear, my question is: in Python, can we dynamically define new fields on a class during runtime, like in this example, or is this timeStepSummaries variable an instance of a Java-like private variable?

EDIT: I am using python 2.7
2014/02/02
[ "https://Stackoverflow.com/questions/21517740", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2020869/" ]
> 
> I understand that `__init__()` is like the constructor in Java.
> 
> 

To be more precise, in Python `__new__` is the constructor method, `__init__` is the initializer. When you do `SomeClass('foo', bar='baz')`, the `type.__call__` method basically does:

```
def __call__(cls, *args, **kwargs):
    instance = cls.__new__(*args, **kwargs)
    instance.__init__(*args, **kwargs)
    return instance
```

Generally, most classes will define an `__init__` if necessary, while `__new__` is more commonly used for immutable objects.

> 
> However, sometimes python classes do not have an `__init__()` method which in this case I assume there is a default constructor just like in Java?
> 
> 

I'm not sure about old-style classes, but this is the case for new-style ones:

```
>>> object.__init__
<slot wrapper '__init__' of 'object' objects>
```

If no explicit `__init__` is defined, the default will be called.

> 
> So to be clear, my question is in Python can we dynamically define new fields to a class during runtime like in this example
> 
> 

Yes.

```
>>> class A(object):
...     def __init__(self):
...         self.one_attribute = 'one'
...     def add_attr(self):
...         self.new_attribute = 'new'
... 
>>> a = A()
>>> a.one_attribute
'one'
>>> a.new_attribute
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'A' object has no attribute 'new_attribute'
>>> a.add_attr()
>>> a.new_attribute
'new'
```

Attributes can be added to an instance at any time:

```
>>> a.third_attribute = 'three'
>>> a.third_attribute
'three'
```

However, it's possible to restrict the instance attributes that can be added through the class attribute `__slots__`:

```
>>> class B(object):
...     __slots__ = ['only_one_attribute']
...     def __init__(self):
...         self.only_one_attribute = 'one'
...     def add_attr(self):
...         self.another_attribute = 'two'
... 
>>> b = B()
>>> b.add_attr()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 6, in add_attr
AttributeError: 'B' object has no attribute 'another_attribute'
```

(It's probably important to note that `__slots__` is primarily intended as a *memory optimization* - by not requiring an object have a dictionary for storing attributes - rather than as a form of run-time modification prevention.)
This answer pertains to new-style Python classes, which subclass `object`. New-style classes were added in 2.2, and they're the only kind of class available in PY3. ``` >>> print object.__doc__ The most base type ``` The class itself is an instance of a metaclass, which is usually `type`: ``` >>> print type.__doc__ type(object) -> the object's type type(name, bases, dict) -> a new type ``` Per the above docstring, you can instantiate the metaclass directly to create a class: ``` >>> Test = type('Test', (object,), {'__doc__': 'Test class'}) >>> isinstance(Test, type) True >>> issubclass(Test, object) True >>> print Test.__doc__ Test class ``` Calling a class is handled by the metaclass `__call__` method, e.g. `type.__call__`. This in turn calls the class `__new__` constructor (typically inherited) with the call arguments in order to create an instance. Then it calls `__init__`, which may set instance attributes. Most objects have a `__dict__` that allows setting and deleting attributes dynamically, such as `self.value = 10` or `del self.value`. It's generally bad form to modify an object's `__dict__` directly, and actually disallowed for a class (i.e. a class dict is wrapped to disable direct modification). If you need to access an attribute dynamically, use the [built-in functions](http://docs.python.org/2/library/functions.html#built-in-functions) `getattr`, `setattr`, and `delattr`. The data model defines the following special methods for [customizing attribute access](http://docs.python.org/2/reference/datamodel.html#customizing-attribute-access): `__getattribute__`, `__getattr__`, `__setattr__`, and `__delattr__`. A class can also define the descriptor protocol methods `__get__`, `__set__`, and `__delete__` to determine how its instances behave as attributes. Refer to the [descriptor guide](http://docs.python.org/2/howto/descriptor.html). When looking up an attribute, `object.__getattribute__` first searches the object's class and base classes using the [C3 method resolution order](http://www.python.org/download/releases/2.3/mro/) of the class: ``` >>> Test.__mro__ (<class '__main__.Test'>, <type 'object'>) ``` Note that a data descriptor defined in the class (e.g. a `property` or a `member` for a slot) takes precedence over the instance dict. On the other hand, a non-data descriptor (e.g. a function) or a non-descriptor class attribute can be shadowed by an instance attribute. For example: ``` >>> Test.x = property(lambda self: 10) >>> inspect.isdatadescriptor(Test.x) True >>> t = Test() >>> t.x 10 >>> t.__dict__['x'] = 0 >>> t.__dict__ {'x': 0} >>> t.x 10 >>> Test.y = 'class string' >>> inspect.isdatadescriptor(Test.y) False >>> t.y = 'instance string' >>> t.y 'instance string' ``` Use [`super`](http://docs.python.org/2/library/functions.html#super) to proxy attribute access for the next class in the method resolution order. For example: ``` >>> class Test2(Test): ... x = property(lambda self: 20) ... >>> t2 = Test2() >>> t2.x 20 >>> super(Test2, t2).x 10 ```
21,517,740
I am new to Python, having come from mainly Java programming. I am currently pondering how classes in Python are instantiated. I understand that `__init__()` is like the constructor in Java. However, sometimes Python classes do not have an `__init__()` method, in which case I assume there is a default constructor, just like in Java? Another thing that makes the transition from Java to Python slightly difficult is that in Java you have to define all the instance fields of the class with their type and sometimes an initial value. In Python all of this just seems to disappear, and developers can just define new fields on the fly. For example I have come across a program like so:

```
class A(Command.UICommand):

    FIELDS = [
        Field( 'runTimeStepSummary', BOOL_TYPE)
    ]

    def __init__(self, runTimeStepSummary=False):
        self.runTimeStepSummary = runTimeStepSummary

    """Other methods"""

    def execute(self, cont, result):
        self.timeStepSummaries = {}
        """ other code"""
```

The thing that confuses (and slightly irritates) me is that this A class does not have a field called timeStepSummaries, yet a developer can just define a new field in the middle of a method? Or is my understanding incorrect? So to be clear, my question is: in Python, can we dynamically define new fields on a class during runtime, like in this example, or is this timeStepSummaries variable an instance of a Java-like private variable?

EDIT: I am using python 2.7
2014/02/02
[ "https://Stackoverflow.com/questions/21517740", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2020869/" ]
> 
> I understand that `__init__()` is like the constructor in Java.
> 
> 

To be more precise, in Python `__new__` is the constructor method, `__init__` is the initializer. When you do `SomeClass('foo', bar='baz')`, the `type.__call__` method basically does:

```
def __call__(cls, *args, **kwargs):
    instance = cls.__new__(*args, **kwargs)
    instance.__init__(*args, **kwargs)
    return instance
```

Generally, most classes will define an `__init__` if necessary, while `__new__` is more commonly used for immutable objects.

> 
> However, sometimes python classes do not have an `__init__()` method which in this case I assume there is a default constructor just like in Java?
> 
> 

I'm not sure about old-style classes, but this is the case for new-style ones:

```
>>> object.__init__
<slot wrapper '__init__' of 'object' objects>
```

If no explicit `__init__` is defined, the default will be called.

> 
> So to be clear, my question is in Python can we dynamically define new fields to a class during runtime like in this example
> 
> 

Yes.

```
>>> class A(object):
...     def __init__(self):
...         self.one_attribute = 'one'
...     def add_attr(self):
...         self.new_attribute = 'new'
... 
>>> a = A()
>>> a.one_attribute
'one'
>>> a.new_attribute
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'A' object has no attribute 'new_attribute'
>>> a.add_attr()
>>> a.new_attribute
'new'
```

Attributes can be added to an instance at any time:

```
>>> a.third_attribute = 'three'
>>> a.third_attribute
'three'
```

However, it's possible to restrict the instance attributes that can be added through the class attribute `__slots__`:

```
>>> class B(object):
...     __slots__ = ['only_one_attribute']
...     def __init__(self):
...         self.only_one_attribute = 'one'
...     def add_attr(self):
...         self.another_attribute = 'two'
... 
>>> b = B()
>>> b.add_attr()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 6, in add_attr
AttributeError: 'B' object has no attribute 'another_attribute'
```

(It's probably important to note that `__slots__` is primarily intended as a *memory optimization* - by not requiring an object have a dictionary for storing attributes - rather than as a form of run-time modification prevention.)
Attributes of Python objects are generally stored in a dictionary, just like the ones you create with `{}`. Since you can add new items to a dictionary at any time, you can add attributes to an object at any time. And since any type of object can be stored in a dictionary without previous declaration of type, any type of object can be stored as an attribute of an object. In short, `my_object.abc = 42` is (often) just a shorthand for `my_object.__dict__["abc"] = 42`. It is possible to define objects without a `__dict__` by defining the `__slots__` attribute, or to override certain special methods and store attributes in some other way, though most of the time you shouldn't do that.
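A quick illustration of that equivalence (the names here are invented for the example):

```python
class Thing(object):
    pass

t = Thing()
t.abc = 42                        # attribute assignment...
print(t.__dict__["abc"])          # 42 -- ...is (usually) just a dict write
t.__dict__["xyz"] = "on the fly"
print(t.xyz)                      # "on the fly"
print(getattr(t, "abc"))          # 42 -- dynamic access without hardcoding the name
```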
21,517,740
I am new to Python, having come from mainly Java programming. I am currently pondering how classes in Python are instantiated. I understand that `__init__()` is like the constructor in Java. However, sometimes Python classes do not have an `__init__()` method, in which case I assume there is a default constructor, just like in Java? Another thing that makes the transition from Java to Python slightly difficult is that in Java you have to define all the instance fields of the class with their type and sometimes an initial value. In Python all of this just seems to disappear, and developers can just define new fields on the fly. For example I have come across a program like so:

```
class A(Command.UICommand):

    FIELDS = [
        Field( 'runTimeStepSummary', BOOL_TYPE)
    ]

    def __init__(self, runTimeStepSummary=False):
        self.runTimeStepSummary = runTimeStepSummary

    """Other methods"""

    def execute(self, cont, result):
        self.timeStepSummaries = {}
        """ other code"""
```

The thing that confuses (and slightly irritates) me is that this A class does not have a field called timeStepSummaries, yet a developer can just define a new field in the middle of a method? Or is my understanding incorrect? So to be clear, my question is: in Python, can we dynamically define new fields on a class during runtime, like in this example, or is this timeStepSummaries variable an instance of a Java-like private variable?

EDIT: I am using python 2.7
2014/02/02
[ "https://Stackoverflow.com/questions/21517740", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2020869/" ]
Attributes of Python objects are generally stored in a dictionary, just like the ones you create with `{}`. Since you can add new items to a dictionary at any time, you can add attributes to an object at any time. And since any type of object can be stored in a dictionary without previous declaration of type, any type of object can be stored as an attribute of an object. In short, `my_object.abc = 42` is (often) just a shorthand for `my_object.__dict__["abc"] = 42`. It is possible to define objects without a `__dict__` by defining the `__slots__` attribute, or to override certain special methods and store attributes in some other way, though most of the time you shouldn't do that.
This answer pertains to new-style Python classes, which subclass `object`. New-style classes were added in 2.2, and they're the only kind of class available in PY3. ``` >>> print object.__doc__ The most base type ``` The class itself is an instance of a metaclass, which is usually `type`: ``` >>> print type.__doc__ type(object) -> the object's type type(name, bases, dict) -> a new type ``` Per the above docstring, you can instantiate the metaclass directly to create a class: ``` >>> Test = type('Test', (object,), {'__doc__': 'Test class'}) >>> isinstance(Test, type) True >>> issubclass(Test, object) True >>> print Test.__doc__ Test class ``` Calling a class is handled by the metaclass `__call__` method, e.g. `type.__call__`. This in turn calls the class `__new__` constructor (typically inherited) with the call arguments in order to create an instance. Then it calls `__init__`, which may set instance attributes. Most objects have a `__dict__` that allows setting and deleting attributes dynamically, such as `self.value = 10` or `del self.value`. It's generally bad form to modify an object's `__dict__` directly, and actually disallowed for a class (i.e. a class dict is wrapped to disable direct modification). If you need to access an attribute dynamically, use the [built-in functions](http://docs.python.org/2/library/functions.html#built-in-functions) `getattr`, `setattr`, and `delattr`. The data model defines the following special methods for [customizing attribute access](http://docs.python.org/2/reference/datamodel.html#customizing-attribute-access): `__getattribute__`, `__getattr__`, `__setattr__`, and `__delattr__`. A class can also define the descriptor protocol methods `__get__`, `__set__`, and `__delete__` to determine how its instances behave as attributes. Refer to the [descriptor guide](http://docs.python.org/2/howto/descriptor.html). When looking up an attribute, `object.__getattribute__` first searches the object's class and base classes using the [C3 method resolution order](http://www.python.org/download/releases/2.3/mro/) of the class: ``` >>> Test.__mro__ (<class '__main__.Test'>, <type 'object'>) ``` Note that a data descriptor defined in the class (e.g. a `property` or a `member` for a slot) takes precedence over the instance dict. On the other hand, a non-data descriptor (e.g. a function) or a non-descriptor class attribute can be shadowed by an instance attribute. For example: ``` >>> Test.x = property(lambda self: 10) >>> inspect.isdatadescriptor(Test.x) True >>> t = Test() >>> t.x 10 >>> t.__dict__['x'] = 0 >>> t.__dict__ {'x': 0} >>> t.x 10 >>> Test.y = 'class string' >>> inspect.isdatadescriptor(Test.y) False >>> t.y = 'instance string' >>> t.y 'instance string' ``` Use [`super`](http://docs.python.org/2/library/functions.html#super) to proxy attribute access for the next class in the method resolution order. For example: ``` >>> class Test2(Test): ... x = property(lambda self: 20) ... >>> t2 = Test2() >>> t2.x 20 >>> super(Test2, t2).x 10 ```
56,760,023
I am trying to use a PyTorch library, SparseConvNet (<https://github.com/facebookresearch/SparseConvNet>), in Google Colaboratory. In order to install it properly, you need to first install Conda, and then use Conda to install the SparseConvNet package. Here is the code I am using (following the instructions from the scn readme file):

```
!wget -c https://repo.continuum.io/archive/Anaconda3-5.1.0-Linux-x86_64.sh
!chmod +x Anaconda3-5.1.0-Linux-x86_64.sh
!bash ./Anaconda3-5.1.0-Linux-x86_64.sh -b -f -p /usr/local

import sys
sys.path.append('/usr/local/lib/python3.6/site-packages/')

!conda install pytorch torchvision cudatoolkit=10.0 -c pytorch
!conda install google-sparsehash -c bioconda
!conda install -c anaconda pillow

!git clone https://github.com/facebookresearch/SparseConvNet.git
!cd SparseConvNet/
!bash develop.sh
```

When I run this it works and I can successfully import the sparseconvnet package, but I need to do it every time I enter the notebook or restart the runtime, and it's taking a lot of time. Is it possible to install these packages permanently? There is one similar question, and the answer suggests that I should install it on my drive, but I don't know how to do that using conda. Thanks!
2019/06/25
[ "https://Stackoverflow.com/questions/56760023", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11627002/" ]
You can specify the directory for conda to install to using ``` conda install -p path_to_your_dir ``` So, you can mount your google drive and conda install there to make it permanent.
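A sketch of what that could look like inside the notebook - the target directory is arbitrary, and the `python3.6` part of the site-packages path has to match the runtime's Python version:

```python
from google.colab import drive
drive.mount('/content/drive')

# install into a folder that persists on Drive
!conda install -y -p "/content/drive/My Drive/conda-env" pytorch torchvision -c pytorch

# in later sessions, point Python at the persisted packages
import sys
sys.path.append('/content/drive/My Drive/conda-env/lib/python3.6/site-packages')
```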
The whole environment in which Google Colaboratory runs your notebooks is not permanent; that is by design. If you need a persistent environment, consider running Jupyter directly on a Google Cloud Compute Engine VM - they have pre-built images with everything configured [here](https://cloud.google.com/deep-learning-vm/docs/pytorch_start_instance) - or [Google Cloud Datalab](https://cloud.google.com/datalab/) (which runs on a GCE VM, but is managed)
56,760,023
I am trying to use a PyTorch library, SparseConvNet (<https://github.com/facebookresearch/SparseConvNet>), in Google Colaboratory. In order to install it properly, you need to first install Conda, and then use Conda to install the SparseConvNet package. Here is the code I am using (following the instructions from the scn readme file):

```
!wget -c https://repo.continuum.io/archive/Anaconda3-5.1.0-Linux-x86_64.sh
!chmod +x Anaconda3-5.1.0-Linux-x86_64.sh
!bash ./Anaconda3-5.1.0-Linux-x86_64.sh -b -f -p /usr/local

import sys
sys.path.append('/usr/local/lib/python3.6/site-packages/')

!conda install pytorch torchvision cudatoolkit=10.0 -c pytorch
!conda install google-sparsehash -c bioconda
!conda install -c anaconda pillow

!git clone https://github.com/facebookresearch/SparseConvNet.git
!cd SparseConvNet/
!bash develop.sh
```

When I run this it works and I can successfully import the sparseconvnet package, but I need to do it every time I enter the notebook or restart the runtime, and it's taking a lot of time. Is it possible to install these packages permanently? There is one similar question, and the answer suggests that I should install it on my drive, but I don't know how to do that using conda. Thanks!
2019/06/25
[ "https://Stackoverflow.com/questions/56760023", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11627002/" ]
You can specify the directory for conda to install to using ``` conda install -p path_to_your_dir ``` So, you can mount your google drive and conda install there to make it permanent.
Unfortunately no. The Google Colab machine is erased after some time. It runs inside a Docker container, and every time you start Colab it starts from a fresh Docker image. But you can connect Colab to your local machine. Check the option on the Connect button.
58,131,697
While reading a file in python, I was wondering how to get the next `n` lines when we encounter a line that meets my condition. Say there is a file like this

```
mangoes:
1 2 3 4 
5 6 7 8
8 9 0 7
7 6 8 0
apples:
1 2 3 4
8 9 0 9
```

Now whenever we find a line starting with mangoes, I want to be able to read all of the next 4 lines. I was able to find out how to read the next immediate line, but not the next `n` lines

```
if (line.startswith("mangoes:")):
    print(next(ifile)) #where ifile is the input file being iterated over
```
2019/09/27
[ "https://Stackoverflow.com/questions/58131697", "https://Stackoverflow.com", "https://Stackoverflow.com/users/886357/" ]
just repeat what you did ``` if (line.startswith("mangoes:")): for i in range(n): print(next(ifile)) ```
Unless it's a huge file and you don't want to read all lines into memory at once, you could do something like this

```py
n = 4
with open(fn) as f:
    lines = f.readlines()

for idx, ln in enumerate(lines):
    if ln.startswith("mangoes"):
        break

mangoes = lines[idx:idx+n]
```

This would give you a list of `n` lines, including the one with the word `mangoes`. If you did `idx = idx + 1` before slicing, you'd skip the title line too.
58,131,697
While reading a file in python, I was wondering how to get the next `n` lines when we encounter a line that meets my condition. Say there is a file like this

```
mangoes:
1 2 3 4 
5 6 7 8
8 9 0 7
7 6 8 0
apples:
1 2 3 4
8 9 0 9
```

Now whenever we find a line starting with mangoes, I want to be able to read all of the next 4 lines. I was able to find out how to read the next immediate line, but not the next `n` lines

```
if (line.startswith("mangoes:")):
    print(next(ifile)) #where ifile is the input file being iterated over
```
2019/09/27
[ "https://Stackoverflow.com/questions/58131697", "https://Stackoverflow.com", "https://Stackoverflow.com/users/886357/" ]
just repeat what you did ``` if (line.startswith("mangoes:")): for i in range(n): print(next(ifile)) ```
With [`itertools.islice`](https://docs.python.org/3/library/itertools.html#itertools.islice) feature: ``` from itertools import islice with open('yourfile') as ifile: n = 4 for line in ifile: if line.startswith('mangoes:'): mango_lines = list(islice(ifile, n)) ``` From your input sample the resulting `mango_lines` list would be: ``` ['1 2 3 4 \n', '5 6 7 8\n', '8 9 0 7\n', '7 6 8 0\n'] ```
27,528,566
I am trying to return a python dictionary to the view with AJAX, reading from a JSON file, but so far I am only returning `[object Object],[object Object]`... and if I inspect the network traffic, I can indeed see the correct data. So here is what my code looks like. I have a class and a method which, based on the selected ID (request argument), will return specific data. It's getting the data from a python dictionary. The problem is not here; I have already tested it. But just in case, I will link it.

# method to create the dictionary - just in case #

```
def getCourselist_byClass(self, classid):
    """
        Getting the courselist by the class id, joining the two tables. Will only get data if both of them exist in their main tables.
        Returning as a list.
    """
    connection = db.session.connection()
    querylist = []

    raw_sql = text("""
        SELECT course.course_id, course.course_name
        FROM course
        WHERE EXISTS(
            SELECT 1
            FROM class_course_identifier
            WHERE course.course_id = class_course_identifier.course_id
            AND EXISTS(
                SELECT 1
                FROM class
                WHERE class_course_identifier.class_id = class.class_id
                AND class.class_id = :classid
            )
        )""")
    query = connection.engine.execute(raw_sql, {'classid': classid})

    for column in query:
        dict = {
            'course_id' : column['course_id'],
            'course_name' : column['course_name']
        }
        querylist.append(dict)
    return querylist
```

my jsonify route method
=======================

```
@main.route('/task/create_test')
def get_courselist():
    #objects
    course = CourseClass()
    class_id = request.args.get('a', type=int)

    #methods
    results = course.getCourselist_byClass(class_id)

    return jsonify(result=results)
```

HTML
====

and here is what the input field and the output area look like.

```
<input type="text" size="5" name="a">
<span id="result">?</span>
<p><a href="javascript:void(0);" id="link">click me</a>
```

and then I am calling it like this

```
<script type=text/javascript>
    $(function() {
        $('a#link').bind('click', function() {
        $.getJSON("{{ url_for('main.get_courselist') }}", {
            a: $('input[name="a"]').val()
        }, function(data) {
            $("#result").text(data.result);
        });
        return false;
        });
    });
</script>
```

but every time I enter an ID number in the field, I am getting the correct data, but it is not formatted correctly. It is instead printed like [object Object]

*source: followed this guide as inspiration: [flask ajax example](http://runnable.com/UiPhLHanceFYAAAP/how-to-perform-ajax-in-flask-for-python)*
2014/12/17
[ "https://Stackoverflow.com/questions/27528566", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1731280/" ]
I think you're getting confused because you actually have two tables of data linked by a common ID: ``` library(dplyr) df <- tbl_df(df) years <- df %>% filter(attributes == "YR") %>% select(id = ID, year = values) years #> Source: local data frame [6 x 2] #> #> id year #> 1 1 2014 #> 2 2 2013 #> 3 3 2014 #> 4 4 2014 #> 5 5 2013 #> .. .. ... authors <- df %>% filter(attributes == "AU") %>% select(id = ID, author = values) authors #> Source: local data frame [16 x 2] #> #> id author #> 1 1 AAA #> 2 1 BBB #> 3 2 CCC #> 4 2 DDD #> 5 2 EEE #> .. .. ... ``` Once you have the data in this form, it's easy to answer the questions you're interested in: 1. Authors per paper: ``` n_authors <- authors %>% group_by(id) %>% summarise(n = n()) ``` Or ``` n_authors <- authors %>% count(id) ``` 2. Median authors per year: ``` n_authors %>% left_join(years) %>% group_by(year) %>% summarise(median(n)) #> Joining by: "id" #> Source: local data frame [2 x 2] #> #> year median(n) #> 1 2013 5.0 #> 2 2014 1.5 ```
I misunderstood the structure of your dataset initially. Thanks to the comments below I realize your data needs to be restructured. ``` # split the data out df1 <- df[df$attributes == "AU",] df2 <- df[df$attributes == "YR",] # just keeping the columns with data as opposed to the label df3 <- merge(df1, df2, by="ID")[,c(1,3,5)] # set column names for clarification colnames(df3) <- c("ID","author","year") # get author counts num.authors <- count(df3, vars=c("ID","year")) ID year freq 1 1 2014 2 2 2 2013 5 3 3 2014 1 4 4 2014 2 5 5 2013 5 6 6 2014 1 summaryBy(freq ~ year, data = num.authors, FUN = list(median)) year freq.median 1 2013 5.0 2 2014 1.5 ``` The nice thing about `summaryBy` is that you can add in which ever function has been defined in the list and you will get another column containing the other metric (e.g. mean, sd, etc.)
27,528,566
I am trying to return a python dictionary to the view with AJAX, reading from a JSON file, but so far I am only returning `[object Object],[object Object]`... and if I inspect the network traffic, I can indeed see the correct data. So here is what my code looks like. I have a class and a method which, based on the selected ID (request argument), will return specific data. It's getting the data from a python dictionary. The problem is not here; I have already tested it. But just in case, I will link it.

# method to create the dictionary - just in case #

```
def getCourselist_byClass(self, classid):
    """
        Getting the courselist by the class id, joining the two tables. Will only get data if both of them exist in their main tables.
        Returning as a list.
    """
    connection = db.session.connection()
    querylist = []

    raw_sql = text("""
        SELECT course.course_id, course.course_name
        FROM course
        WHERE EXISTS(
            SELECT 1
            FROM class_course_identifier
            WHERE course.course_id = class_course_identifier.course_id
            AND EXISTS(
                SELECT 1
                FROM class
                WHERE class_course_identifier.class_id = class.class_id
                AND class.class_id = :classid
            )
        )""")
    query = connection.engine.execute(raw_sql, {'classid': classid})

    for column in query:
        dict = {
            'course_id' : column['course_id'],
            'course_name' : column['course_name']
        }
        querylist.append(dict)
    return querylist
```

my jsonify route method
=======================

```
@main.route('/task/create_test')
def get_courselist():
    #objects
    course = CourseClass()
    class_id = request.args.get('a', type=int)

    #methods
    results = course.getCourselist_byClass(class_id)

    return jsonify(result=results)
```

HTML
====

and here is what the input field and the output area look like.

```
<input type="text" size="5" name="a">
<span id="result">?</span>
<p><a href="javascript:void(0);" id="link">click me</a>
```

and then I am calling it like this

```
<script type=text/javascript>
    $(function() {
        $('a#link').bind('click', function() {
        $.getJSON("{{ url_for('main.get_courselist') }}", {
            a: $('input[name="a"]').val()
        }, function(data) {
            $("#result").text(data.result);
        });
        return false;
        });
    });
</script>
```

but every time I enter an ID number in the field, I am getting the correct data, but it is not formatted correctly. It is instead printed like [object Object]

*source: followed this guide as inspiration: [flask ajax example](http://runnable.com/UiPhLHanceFYAAAP/how-to-perform-ajax-in-flask-for-python)*
2014/12/17
[ "https://Stackoverflow.com/questions/27528566", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1731280/" ]
Here's a possible `data.table` solution I would also suggest to create some aggregated data set with separated columns. For example: ``` library(data.table) (subdf <- as.data.table(df)[, .(N.AU = sum(attributes == "AU"), Year = values[attributes == "YR"]) , ID]) # ID N.AU Year # 1: 1 2 2014 # 2: 2 5 2013 # 3: 3 1 2014 # 4: 4 2 2014 # 5: 5 5 2013 # 6: 6 1 2014 ``` Calculating median per year ``` subdf[, .(Median.N.AU = median(N.AU)), keyby = Year] # Year Median.N.AU # 1: 2013 5.0 # 2: 2014 1.5 ```
I misunderstood the structure of your dataset initially. Thanks to the comments below I realize your data needs to be restructured. ``` # split the data out df1 <- df[df$attributes == "AU",] df2 <- df[df$attributes == "YR",] # just keeping the columns with data as opposed to the label df3 <- merge(df1, df2, by="ID")[,c(1,3,5)] # set column names for clarification colnames(df3) <- c("ID","author","year") # get author counts num.authors <- count(df3, vars=c("ID","year")) ID year freq 1 1 2014 2 2 2 2013 5 3 3 2014 1 4 4 2014 2 5 5 2013 5 6 6 2014 1 summaryBy(freq ~ year, data = num.authors, FUN = list(median)) year freq.median 1 2013 5.0 2 2014 1.5 ``` The nice thing about `summaryBy` is that you can add in which ever function has been defined in the list and you will get another column containing the other metric (e.g. mean, sd, etc.)
27,528,566
I am trying to return a python dictionary to the view with AJAX, reading from a JSON file, but so far I am only returning `[object Object],[object Object]`... and if I inspect the network traffic, I can indeed see the correct data. So here is what my code looks like. I have a class and a method which, based on the selected ID (request argument), will return specific data. It's getting the data from a python dictionary. The problem is not here; I have already tested it. But just in case, I will link it.

# method to create the dictionary - just in case #

```
def getCourselist_byClass(self, classid):
    """
        Getting the courselist by the class id, joining the two tables. Will only get data if both of them exist in their main tables.
        Returning as a list.
    """
    connection = db.session.connection()
    querylist = []

    raw_sql = text("""
        SELECT course.course_id, course.course_name
        FROM course
        WHERE EXISTS(
            SELECT 1
            FROM class_course_identifier
            WHERE course.course_id = class_course_identifier.course_id
            AND EXISTS(
                SELECT 1
                FROM class
                WHERE class_course_identifier.class_id = class.class_id
                AND class.class_id = :classid
            )
        )""")
    query = connection.engine.execute(raw_sql, {'classid': classid})

    for column in query:
        dict = {
            'course_id' : column['course_id'],
            'course_name' : column['course_name']
        }
        querylist.append(dict)
    return querylist
```

my jsonify route method
=======================

```
@main.route('/task/create_test')
def get_courselist():
    #objects
    course = CourseClass()
    class_id = request.args.get('a', type=int)

    #methods
    results = course.getCourselist_byClass(class_id)

    return jsonify(result=results)
```

HTML
====

and here is what the input field and the output area look like.

```
<input type="text" size="5" name="a">
<span id="result">?</span>
<p><a href="javascript:void(0);" id="link">click me</a>
```

and then I am calling it like this

```
<script type=text/javascript>
    $(function() {
        $('a#link').bind('click', function() {
        $.getJSON("{{ url_for('main.get_courselist') }}", {
            a: $('input[name="a"]').val()
        }, function(data) {
            $("#result").text(data.result);
        });
        return false;
        });
    });
</script>
```

but every time I enter an ID number in the field, I am getting the correct data, but it is not formatted correctly. It is instead printed like [object Object]

*source: followed this guide as inspiration: [flask ajax example](http://runnable.com/UiPhLHanceFYAAAP/how-to-perform-ajax-in-flask-for-python)*
2014/12/17
[ "https://Stackoverflow.com/questions/27528566", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1731280/" ]
I think you're getting confused because you actually have two tables of data linked by a common ID: ``` library(dplyr) df <- tbl_df(df) years <- df %>% filter(attributes == "YR") %>% select(id = ID, year = values) years #> Source: local data frame [6 x 2] #> #> id year #> 1 1 2014 #> 2 2 2013 #> 3 3 2014 #> 4 4 2014 #> 5 5 2013 #> .. .. ... authors <- df %>% filter(attributes == "AU") %>% select(id = ID, author = values) authors #> Source: local data frame [16 x 2] #> #> id author #> 1 1 AAA #> 2 1 BBB #> 3 2 CCC #> 4 2 DDD #> 5 2 EEE #> .. .. ... ``` Once you have the data in this form, it's easy to answer the questions you're interested in: 1. Authors per paper: ``` n_authors <- authors %>% group_by(id) %>% summarise(n = n()) ``` Or ``` n_authors <- authors %>% count(id) ``` 2. Median authors per year: ``` n_authors %>% left_join(years) %>% group_by(year) %>% summarise(median(n)) #> Joining by: "id" #> Source: local data frame [2 x 2] #> #> year median(n) #> 1 2013 5.0 #> 2 2014 1.5 ```
Here's a possible `data.table` solution I would also suggest to create some aggregated data set with separated columns. For example: ``` library(data.table) (subdf <- as.data.table(df)[, .(N.AU = sum(attributes == "AU"), Year = values[attributes == "YR"]) , ID]) # ID N.AU Year # 1: 1 2 2014 # 2: 2 5 2013 # 3: 3 1 2014 # 4: 4 2 2014 # 5: 5 5 2013 # 6: 6 1 2014 ``` Calculating median per year ``` subdf[, .(Median.N.AU = median(N.AU)), keyby = Year] # Year Median.N.AU # 1: 2013 5.0 # 2: 2014 1.5 ```
56,924,174
I'm importing files from the following folder inside a python code:

```
Mask_RCNN
  -mrcnn
    -config.py
    -model.py
    -__init__.py
    -utils.py
    -visualize.py
```

I'm using the following imports. These work OK:

```
from Mask_RCNN.mrcnn.config import Config
from Mask_RCNN.mrcnn import utils
```

These give me an error:

```
from Mask_RCNN.mrcnn import visualize
import mrcnn.model as modellib
```

Error:

```
ImportError: No module named 'mrcnn'
```

How do I import these properly?
2019/07/07
[ "https://Stackoverflow.com/questions/56924174", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10028453/" ]
You get an error for the 2nd import, where you omit `Mask_RCNN` from the package name. Try changing the lines to: ``` from Mask_RCNN.mrcnn import visualize import Mask_RCNN.mrcnn.model as modellib ```
Use this line before importing the libraries ``` sys.path.append("Mask_RCNN/") ```
25,012,210
Steps I followed to build WebRTC for Android in an Ubuntu 13.10 env. Check out the code:

```
gclient config https://webrtc.googlecode.com/svn/trunk
echo "target_os = ['android', 'unix']" >> .gclient
gclient sync --nohooks
cd trunk
source ./build/android/envsetup.sh
export GYP_DEFINES="build_with_libjingle=1 build_with_chromium=0 libjingle_java=1 OS=android $GYP_DEFINES"
gclient runhooks
```

I'm getting this error:

```
gyp: /home/joss/Desarrollo/Glass/GDK/librerias/webrtc/trunk/third_party/boringssl/boringssl.gyp not found (cwd: /home/joss/Desarrollo/Glass/GDK/librerias/webrtc)
Error: Command /usr/bin/python trunk/webrtc/build/gyp_webrtc -Dextra_gyp_flag=0 returned non-zero exit status 1 in /home/joss/Desarrollo/Glass/GDK/librerias/webrtc
```

If I remove `"OS=android"` from `GYP_DEFINES`, the command `gclient runhooks` works, but if I try to use the generated library `"libjingle_peerconnection_so.so"` after the ninja build, I get the following error in Android:

```
dlopen("/data/app-lib/com.mundoglass.glassrtc-1/libjingle_peerconnection_so.so") failed: dlopen failed: "/data/app-lib/com.mundoglass.glassrtc-1/libjingle_peerconnection_so.so" not 32-bit: 2
```

Please let me know if I'm doing any step wrong. I'm not sure if I have to use `"OS=android"` to generate the Android libraries.
2014/07/29
[ "https://Stackoverflow.com/questions/25012210", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3580291/" ]
I don't think you're doing anything wrong. Your error is mentioned [here](https://code.google.com/p/webrtc/issues/detail?id=3622) and I guess it will be fixed.

```
"Yes, chrome has moved to BoringSSL from OpenSSL, which causes some problems in WebRTC Android. We are looking into it."
```

You can try an older revision; I tried revision r6783 as suggested [here](https://code.google.com/p/webrtc/issues/detail?id=3641) and it works fine.
Follow this [example](http://simonguest.com/2013/08/06/building-a-webrtc-client-for-android/); I have tried it and it works successfully. The only change needed: the link provided in that example for the `gclient config` command is an older one, so use your link instead:

gclient config <http://webrtc.googlecode.com/svn/trunk>

Also make sure that you have Oracle JDK 6; other versions create issues while following the steps to get the native code. Good luck.
20,712,314
I have a simple python code

```
path1 = "//path1/"

path2 = "//path2/"

write_html = """
 <form name="input" action="copy_file.php" method="get">
"""

Outfile.write(write_html)
```

Now copy_file.php copies files from one folder to another. I want the Python path1 and path2 variable values to be passed to the PHP script. **How can I do that?** Also, instead of calling a PHP script, how can I place the PHP code in the action attribute?

PHP code

```
<?php
$file = $argv[1];
$newfile = $argv[2];
if (!copy($file, $newfile)) {
 echo "failed to copy $file...\n";
}
?>
```
2013/12/20
[ "https://Stackoverflow.com/questions/20712314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1538688/" ]
```
write_html = """
<form name="input" action="copy_file.php" method="get">
<input type="hidden" name="path1" value="{0}" />
<input type="hidden" name="path2" value="{1}" />
<input type="button" name="button" value="Copy" onclick="copyFile('{0}', '{1}')" />
</form>
<script>
function copyFile(path1, path2) {{ /* ... */ }}
</script>
""".format(path1, path2)
```

(Note the doubled braces in the `<script>` block - `str.format()` would otherwise treat the JavaScript braces as placeholders.)

Then in `copy_file.php` add

```
$path1 = $_GET["path1"];
$path2 = $_GET["path2"];
```
> > I want the python path1 and path2 variable values to be passed to php script. > > > Doable: ``` write_html = """ <form name="input" action="copy_file.php" method="get"> <input type="hidden" name="path1" value="%s" /> <input type="hidden" name="path2" value="%s" /> """ % (path1, path2) ``` My python is a bit rusty, but that should work. > > instead of calling a php script, how can I place the php code in action attribute. > > > What? No. Are you insane? **Edit:** wait, are you just trying to make an HTTP request against a PHP script from within Python?
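If that is what you're after, a minimal sketch with the `requests` library (the URL is hypothetical, and this assumes the PHP script is served over HTTP rather than run from the command line):

```python
import requests

path1 = "/some/src/"   # placeholder paths
path2 = "/some/dst/"

resp = requests.get("http://example.com/copy_file.php",
                    params={"path1": path1, "path2": path2})
print(resp.status_code, resp.text)
```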
20,712,314
I have a simple python code

```
path1 = "//path1/"

path2 = "//path2/"

write_html = """
 <form name="input" action="copy_file.php" method="get">
"""

Outfile.write(write_html)
```

Now copy_file.php copies files from one folder to another. I want the Python path1 and path2 variable values to be passed to the PHP script. **How can I do that?** Also, instead of calling a PHP script, how can I place the PHP code in the action attribute?

PHP code

```
<?php
$file = $argv[1];
$newfile = $argv[2];
if (!copy($file, $newfile)) {
 echo "failed to copy $file...\n";
}
?>
```
2013/12/20
[ "https://Stackoverflow.com/questions/20712314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1538688/" ]
```
write_html = """
<form name="input" action="copy_file.php" method="get">
<input type="hidden" name="path1" value="{0}" />
<input type="hidden" name="path2" value="{1}" />
<input type="button" name="button" value="Copy" onclick="copyFile('{0}', '{1}')" />
</form>
<script>
function copyFile(path1, path2) {{ /* ... */ }}
</script>
""".format(path1, path2)
```

(Note the doubled braces in the `<script>` block - `str.format()` would otherwise treat the JavaScript braces as placeholders.)

Then in `copy_file.php` add

```
$path1 = $_GET["path1"];
$path2 = $_GET["path2"];
```
Passing arguments to your PHP script the way you describe looks to me like it would be done as follows:

in the python file

```
path1 = '//path1/'
path2 = '//path2/'

write_html = """
 <form name="input" action="copy_file.php" method="get">
 <input type="hidden" name="path1" value="%s"/>
 <input type="hidden" name="path2" value="%s"/>
 </form>
 """ % (path1, path2)

Outfile.write(write_html)
```

this will write your variables to the html (you may need to escape them if they contain special chars) and on the php side:

```
if (isset($_GET['path1']) && isset($_GET['path2']))
{
 $path1 = $_GET['path1'];
 $path2 = $_GET['path2'];
}
```

(the form uses `method="get"`, so read from `$_GET`, not `$_POST`)