| column | type | min | max |
| --- | --- | --- | --- |
| qid | int64 | 46k | 74.7M |
| question | stringlengths | 54 | 37.8k |
| date | stringlengths | 10 | 10 |
| metadata | listlengths | 3 | 3 |
| response_j | stringlengths | 17 | 26k |
| response_k | stringlengths | 26 | 26k |
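As a rough illustration of how rows with this schema are consumed, here is a minimal sketch using the `datasets` library; the dataset identifier below is a placeholder (an assumption), since the actual repository name is not given above.

```python
# Minimal sketch: load a dataset with the columns listed above and inspect one row.
from datasets import load_dataset

ds = load_dataset("your-org/stack-exchange-paired-sample", split="train")  # hypothetical dataset id
row = ds[0]
print(row["qid"], row["date"])
print(row["question"][:200])    # question body (markdown)
print(row["metadata"])          # [question URL, site URL, author URL]
print(row["response_j"][:200])  # one answer ("j")
print(row["response_k"][:200])  # the paired answer ("k")
```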
309,135
I'm using Notepad++ for Python development, and a few days ago I found out about the free Komodo Edit. I need pros and cons for Python development between these two editors...
2008/11/21
[ "https://Stackoverflow.com/questions/309135", "https://Stackoverflow.com", "https://Stackoverflow.com/users/35425/" ]
A downside I found with Notepad++ for Python is that it tends (for me) to silently mix tabs and spaces. I know this is configurable, but it caught me out, especially when trying to work with other people using different editors/IDEs, so take care.
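As an aside (not from the original answer), the tab/space mix-up can be made concrete: Python 3 refuses to compile a block whose indentation mixes a tab with spaces, which is exactly the kind of file a tab-happy editor can produce.

```python
# Sketch: the first body line is indented with four spaces, the second with a tab.
# CPython 3 raises TabError ("inconsistent use of tabs and spaces in indentation"),
# which is a subclass of IndentationError.
source = "if True:\n    x = 1\n\ty = 2\n"

try:
    compile(source, "bad_indent.py", "exec")
except IndentationError as exc:
    print("refused:", exc)
```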
If I had to choose between Notepad++ and Komodo I would choose PyScripter ;.) Seriously, I consider PyScripter a great alternative...
5,495,143
``` var data; $(document).ready(function(){ var rows = document.getElementById("orderlist_1").rows; var cell = rows[rows.length - 1].cells[3]; data = "id="+cell.innerHTML checkAndNotify(); }) function checkAndNotify() { alert("oo"); $("#shownoti").load("/mostrecenttransaction","id=2008010661301520679"); t = setTimeout("checkAndNotify()",3000) return true; } //$(document).ready(checkAndNotify()) ``` In the above code, the alert window is shown every 3 seconds when I open the webpage, but the next line never seems to execute. If I manually open the URL <http://127.0.0.1:8000/mostrecenttransaction/?id=2008010661301520679>, it returns an HttpResponse, so why is the AJAX call never sent? I have checked using Inspect Element in Google Chrome (Network tab) and observed that even though alert("oo") executes every 3 seconds, the AJAX request is never sent. Can anyone help me out? I have spent so much time on this but I am unable to figure it out. You can check the code simply by git clone & "git checkout counternoti". **The code can be run simply with "python2 manage.py runserver", no configuration needed (enter the username as "123456789" and the password as "abcd").** It would be nice if you could see it for yourself. At least, it is damn interesting to me as a beginner. > > My repo -> > <https://github.com/shadyabhi/pycourt_login/tree/counternoti> > > > Related code in urls.py: (./urls.py) ``` (r'^mostrecenttransaction/',mostRecentTransaction), ``` views.py: (./views.py:line 339) ``` def mostRecentTransaction(request): transactid_atdocument = request.GET['id'][7:] latestid = Ordersss.objects.latest('transaction_id').transaction_id[7:] print latestid, transactid_atdocument if latestid > transactid_atdocument: return HttpResponse("Food List outdated, please refresh") else: return HttpResponse("") ``` templates: (./templates/home.html:line 282) ``` <script type="text/javascript" src="/pycourt/newordernotification.js"></script> .. .. <div id=shownoti"></div> ``` I am new to jQuery & AJAX and this thing has driven me crazy after trying for hours to solve it. **The main point is: if the "alert" is shown, then why not the AJAX request?** A reply would be highly appreciated.
2011/03/31
[ "https://Stackoverflow.com/questions/5495143", "https://Stackoverflow.com", "https://Stackoverflow.com/users/167814/" ]
``` jQuery.ajax({ url: "mostRecentTransaction", type: "GET", data: { id : 2008010661301520679 }, success: function(data) { alert(data); jQuery('#shownoti').html(data).hide().fadeIn(1500); } }); ```
Try using setInterval instead of setTimeout
48,819,547
I want to take the logarithm multiple times. We know this: ``` import numpy as np np.log(x) ``` Now the second logarithm would be ``` np.log(np.log(x)) ``` What if one wants to take n logs? Surely it would not be Pythonic to repeat it n times as above.
2018/02/16
[ "https://Stackoverflow.com/questions/48819547", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
As per @eugenhu's suggestion, one way is to use a generic function which loops iteratively: ``` import numpy as np def repeater(f, n): def fn(i): result = i for _ in range(n): result = f(result) return result return fn repeater(np.log, 5)(x) ```
You could use the following little trick: ``` >>> from functools import reduce >>> >>> k = 4 >>> x = 1e12 >>> >>> y = np.array(x) >>> reduce(np.log, (k+1) * (y,))[()] 0.1820258315495139 ``` and back: ``` >>> reduce(np.exp, (k+1) * (y,))[()] 999999999999.9813 ``` On my machine this is slightly faster than @jp\_data\_analysis' approach: ``` >>> def f_pp(ufunc, x, k): ... y = np.array(x) ... return reduce(ufunc, (k+1) * (y,))[()] ... >>> x = 1e12 >>> k = 5 >>> >>> from timeit import repeat >>> kwds = dict(globals=globals(), number=100000) >>> >>> repeat('repeater(np.log, 5)(x)', **kwds) [0.5353733809897676, 0.5327484680456109, 0.5363518510130234] >>> repeat('f_pp(np.log, x, 5)', **kwds) [0.4512511100037955, 0.4380568229826167, 0.45331112697022036] ``` To be fair, their approach is more flexible. Mine uses quite specific properties of unary `ufunc`s and numpy `array`s. Larger `k` is also possible. For that we need to make sure that `x` is complex because `np.log` will not switch automatically. ``` >>> x = 1e12+0j >>> k = 50 >>> >>> f_pp(np.log, x, 50) (0.3181323483680859+1.3372351153002153j) >>> f_pp(np.exp, _, 50) (1000000007040.9696+6522.577629950761j) # not that bad, all things considered ... >>> >>> repeat('f_pp(np.log, x, 50)', **kwds) [4.272890724008903, 4.266964592039585, 4.270542044949252] >>> repeat('repeater(np.log, 50)(x)', **kwds) [5.799160094989929, 5.796761817007791, 5.80835147597827] ```
48,819,547
I want to take the logarithm multiple times. We know this: ``` import numpy as np np.log(x) ``` Now the second logarithm would be ``` np.log(np.log(x)) ``` What if one wants to take n logs? Surely it would not be Pythonic to repeat it n times as above.
2018/02/16
[ "https://Stackoverflow.com/questions/48819547", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
As per @eugenhu's suggestion, one way is to use a generic function which loops iteratively: ``` import numpy as np def repeater(f, n): def fn(i): result = i for _ in range(n): result = f(result) return result return fn repeater(np.log, 5)(x) ```
From this [post](https://stackoverflow.com/questions/35398472/sequential-function-mapping-in-python), you can compose functions: **Code** ``` import itertools as it import functools as ft import numpy as np def compose(f, g): return lambda x: f(g(x)) identity = lambda x: x ``` **Demo** ``` ft.reduce(compose, it.repeat(np.log, times=2), identity)(10) # 0.83403244524795594 ft.reduce(compose, it.repeat(np.log, times=3), identity)(10) # -0.18148297420509205 ```
48,819,547
I want to take the logarithm multiple times. We know this: ``` import numpy as np np.log(x) ``` Now the second logarithm would be ``` np.log(np.log(x)) ``` What if one wants to take n logs? Surely it would not be Pythonic to repeat it n times as above.
2018/02/16
[ "https://Stackoverflow.com/questions/48819547", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You could use the following little trick: ``` >>> from functools import reduce >>> >>> k = 4 >>> x = 1e12 >>> >>> y = np.array(x) >>> reduce(np.log, (k+1) * (y,))[()] 0.1820258315495139 ``` and back: ``` >>> reduce(np.exp, (k+1) * (y,))[()] 999999999999.9813 ``` On my machine this is slightly faster than @jp\_data\_analysis' approach: ``` >>> def f_pp(ufunc, x, k): ... y = np.array(x) ... return reduce(ufunc, (k+1) * (y,))[()] ... >>> x = 1e12 >>> k = 5 >>> >>> from timeit import repeat >>> kwds = dict(globals=globals(), number=100000) >>> >>> repeat('repeater(np.log, 5)(x)', **kwds) [0.5353733809897676, 0.5327484680456109, 0.5363518510130234] >>> repeat('f_pp(np.log, x, 5)', **kwds) [0.4512511100037955, 0.4380568229826167, 0.45331112697022036] ``` To be fair, their approach is more flexible. Mine uses quite specific properties of unary `ufunc`s and numpy `array`s. Larger `k` is also possible. For that we need to make sure that `x` is complex because `np.log` will not switch automatically. ``` >>> x = 1e12+0j >>> k = 50 >>> >>> f_pp(np.log, x, 50) (0.3181323483680859+1.3372351153002153j) >>> f_pp(np.exp, _, 50) (1000000007040.9696+6522.577629950761j) # not that bad, all things considered ... >>> >>> repeat('f_pp(np.log, x, 50)', **kwds) [4.272890724008903, 4.266964592039585, 4.270542044949252] >>> repeat('repeater(np.log, 50)(x)', **kwds) [5.799160094989929, 5.796761817007791, 5.80835147597827] ```
From this [post](https://stackoverflow.com/questions/35398472/sequential-function-mapping-in-python), you can compose functions: **Code** ``` import itertools as it import functools as ft import numpy as np def compose(f, g): return lambda x: f(g(x)) identity = lambda x: x ``` **Demo** ``` ft.reduce(compose, it.repeat(np.log, times=2), identity)(10) # 0.83403244524795594 ft.reduce(compose, it.repeat(np.log, times=3), identity)(10) # -0.18148297420509205 ```
994,460
I have a Pylons app and am using FormEncode and HtmlFill to handle my forms. I have an array of text fields in my template (Mako) ``` <tr> <td>Yardage</td> <td>${h.text('yardage[]', maxlength=3, size=3)}</td> <td>${h.text('yardage[]', maxlength=3, size=3)}</td> <td>${h.text('yardage[]', maxlength=3, size=3)}</td> <td>${h.text('yardage[]', maxlength=3, size=3)}</td> <td>${h.text('yardage[]', maxlength=3, size=3)}</td> <td>${h.text('yardage[]', maxlength=3, size=3)}</td> <td>${h.text('yardage[]', maxlength=3, size=3)}</td> <td>${h.text('yardage[]', maxlength=3, size=3)}</td> <td>${h.text('yardage[]', maxlength=3, size=3)}</td> </tr> ``` However, I can't seem to figure out how to validate these fields. Here is the relevant entry from my Schema `yardage = formencode.ForEach(formencode.validators.Int())` I'm trying to validate that each of these fields is an Int. However, no validation occurs for these fields. **UPDATE** As requested here is the code for the action of this controller. I know it was working as I can validate other form fields. ``` def submit(self): schema = CourseForm() try: c.form_result = schema.to_python(dict(request.params)) except formencode.Invalid, error: c.form_result = error.value c.form_errors = error.error_dict or {} c.heading = 'Add a course' html = render('/derived/course/add.html') return htmlfill.render( html, defaults = c.form_result, errors = c.form_errors ) else: h.redirect_to(controler='course', action='view') ``` **UPDATE** It was suggested on IRC that I change the name of the elements from `yardage[]` to `yardage` No result. They should all be ints but putting in f into one of the elements doesn't cause it to be invalid. As I said before, I am able to validate other form fields. Below is my entire schema. ``` import formencode class CourseForm(formencode.Schema): allow_extra_fields = True filter_extra_fields = True name = formencode.validators.NotEmpty(messages={'empty': 'Name must not be empty'}) par = formencode.ForEach(formencode.validators.Int()) yardage = formencode.ForEach(formencode.validators.Int()) ```
2009/06/15
[ "https://Stackoverflow.com/questions/994460", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10738/" ]
Turns out what I wanted to do wasn't quite right. **Template**: ``` <tr> <td>Yardage</td> % for hole in range(9): <td>${h.text('hole-%s.yardage'%(hole), maxlength=3, size=3)}</td> % endfor </tr> ``` (Should have made it in a loop to begin with.) You'll notice that the name of the first element will become `hole-1.yardage`. I will then use `[FormEncode.variabledecode](http://www.formencode.org/en/latest/modules/variabledecode.html)` to turn this into a dictionary. This is done in the **Schema**: ``` import formencode class HoleSchema(formencode.Schema): allow_extra_fields = False yardage = formencode.validators.Int(not_empty=True) par = formencode.validators.Int(not_empty=True) class CourseForm(formencode.Schema): allow_extra_fields = True filter_extra_fields = True name = formencode.validators.NotEmpty(messages={'empty': 'Name must not be empty'}) hole = formencode.ForEach(HoleSchema()) ``` The HoleSchema will validate that `hole-#.par` and `hole-#.yardage` are both ints and are not empty. `formencode.ForEach` allows me to apply `HoleSchema` to the dictionary that I get from passing `variable_decode=True` to the `@validate` decorator. Here is the `submit` action from my **Controller**: ``` @validate(schema=CourseForm(), form='add', post_only=False, on_get=True, auto_error_formatter=custom_formatter, variable_decode=True) def submit(self): # Do whatever here. return 'Submitted!' ``` Using the `@validate` decorator allows for a much cleaner way to validate and fill in the forms. The `variable_decode=True` is very important or the dictionary will not be properly created.
``` c.form_result = schema.to_python(request.params) # (without dict) ``` It seems to work fine.
38,608,781
Python has a filter method which filters the desired output on some criteria, as in the following example. ``` >>> s = "some\x00string. with\x15 funny characters" >>> import string >>> printable = set(string.printable) >>> filter(lambda x: x in printable, s) 'somestring. with funny characters' ``` The example is taken from [link](https://stackoverflow.com/questions/8689795/how-can-i-remove-non-ascii-characters-but-leave-periods-and-spaces-using-python). Now, when I try to do a similar thing in a Python IDE it does not return the result `'somestring. with funny characters'`; in fact it returns this: `<filter object at 0x0000020E1792EC50>`. In the IDE I cannot just press Enter, so I am running this whole code with a print when filtering. Where am I doing it wrong? Thanks.
2016/07/27
[ "https://Stackoverflow.com/questions/38608781", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5453723/" ]
As others said, it is undefined behaviour. Why does it work, though? It is probably because the function call is resolved statically, at compile time (it's not a virtual function). The function `B::hi()` exists, so it is called. Try adding a member variable to `class B` and using it in `hi()`. Then you will see the problem (a garbage value) on the screen: ``` class B { public: void hi() { cout << "hello, my value is " << x << endl; } private: int x = 5; }; ``` Otherwise you could make `hi()` virtual. Then the function is dispatched dynamically, at runtime, and the program crashes immediately: ``` class B { public: virtual void hi() { cout << "hello" << endl; } }; ```
> > Now, why is this happening ? > > > Because it can happen. Anything can happen. The behaviour is *undefined*. The fact that something unexpected happened demonstrates well why UB is so dangerous. If it always caused a crash, then it would be far easier to deal with. > > What object was used to call such a method > > > Most likely, the compiler blindly trusts you, and assumes that `b` does point to an object of type `B` (which it doesn't). It probably would use the pointed memory as if the assumption was true. The member function didn't access any of the memory that belongs to the object, and the behaviour happened to be the same as if there had been an object of correct type. Being undefined, the behaviour could be completely different. If you try to run the program again, the standard doesn't guarantee that demons won't fly out of your nose.
38,608,781
Python has a filter method which filters the desired output on some criteria, as in the following example. ``` >>> s = "some\x00string. with\x15 funny characters" >>> import string >>> printable = set(string.printable) >>> filter(lambda x: x in printable, s) 'somestring. with funny characters' ``` The example is taken from [link](https://stackoverflow.com/questions/8689795/how-can-i-remove-non-ascii-characters-but-leave-periods-and-spaces-using-python). Now, when I try to do a similar thing in a Python IDE it does not return the result `'somestring. with funny characters'`; in fact it returns this: `<filter object at 0x0000020E1792EC50>`. In the IDE I cannot just press Enter, so I am running this whole code with a print when filtering. Where am I doing it wrong? Thanks.
2016/07/27
[ "https://Stackoverflow.com/questions/38608781", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5453723/" ]
As others said, it is undefined behaviour. Why does it work, though? It is probably because the function call is resolved statically, at compile time (it's not a virtual function). The function `B::hi()` exists, so it is called. Try adding a member variable to `class B` and using it in `hi()`. Then you will see the problem (a garbage value) on the screen: ``` class B { public: void hi() { cout << "hello, my value is " << x << endl; } private: int x = 5; }; ``` Otherwise you could make `hi()` virtual. Then the function is dispatched dynamically, at runtime, and the program crashes immediately: ``` class B { public: virtual void hi() { cout << "hello" << endl; } }; ```
This only works because of the implementation of the `hi()` method itself, and the peculiar part of the C++ spec called *undefined behaviour*. Casting using a C-style cast to an incompatible pointer type is undefined behaviour - literally anything at all could happen. In this case, the compiler has obviously decided to just trust you, and has decided to believe that `b` is indeed a valid pointer to an instance of `B` - this is in fact all a C-style cast will ever do, as they involve no runtime behaviour. When you call `hi()` on it, the method works because: * It doesn't access any instance variables belonging to `B` but not `A` (indeed, it doesn't access any instance variables at all) * It's not virtual, so it doesn't need to be looked up in `b`'s vtable to be called Therefore it works, but in almost all non-trivial cases such a malformed cast followed by a method call would result in a crash or memory corruption. And you can't rely on this kind of behaviour - *undefined* doesn't mean it has to be the same every time you run it, either. The compiler is perfectly within its rights with this code to insert a random number generator and, upon generating a 1, start up a complete copy of the original *Doom*. Keep that firmly in mind whenever anything involving undefined behaviour appears to work, because it might not work tomorrow and you need to treat it like that.
42,875,890
I installed Odoo version 8 on Ubuntu 16, following this link: <https://www.getopenerp.com/easy-odoo8-installation/>. At step 3 it goes wrong and says "E: Package 'python-pybabel' has no installation candidate". What is happening?
2017/03/18
[ "https://Stackoverflow.com/questions/42875890", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7626653/" ]
python-pybabel was replaced by the python-babel package, from what I can see from a web search. After I also faced this problem, I used python-babel and everything worked correctly. Best regards, CiprianR
Hello, this is a command for the dependencies; you need to run it: ``` sudo apt-get install python-psutil python-pybabel ```
42,875,890
I installed Odoo version 8 on Ubuntu 16, following this link: <https://www.getopenerp.com/easy-odoo8-installation/>. At step 3 it goes wrong and says "E: Package 'python-pybabel' has no installation candidate". What is happening?
2017/03/18
[ "https://Stackoverflow.com/questions/42875890", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7626653/" ]
This python-pybabel has been replaced by another package called python-babel. So try: ``` $ sudo apt-get install python-babel ``` This should work
Hello, this is a command for the dependencies; you need to run it: ``` sudo apt-get install python-psutil python-pybabel ```
42,875,890
I installed Odoo version 8 on Ubuntu 16, following this link: <https://www.getopenerp.com/easy-odoo8-installation/>. At step 3 it goes wrong and says "E: Package 'python-pybabel' has no installation candidate". What is happening?
2017/03/18
[ "https://Stackoverflow.com/questions/42875890", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7626653/" ]
python-pybabel was replaced by the python-babel package, from what I can see from a web search. After I also faced this problem, I used python-babel and everything worked correctly. Best regards, CiprianR
This python-pybabel has been replaced by another package called python-babel. So try: ``` $ sudo apt-get install python-babel ``` This should work
62,045,094
I'm trying to create an s3 bucket in every region in AWS with boto3 in python but I'm failing to create a bucket in 4 regions (af-south-1, eu-south-1, ap-east-1 & me-south-1) My python code: ``` def create_bucket(name, region): s3 = boto3.client('s3') s3.create_bucket(Bucket=name, CreateBucketConfiguration={'LocationConstraint': region}) ``` and the exception I get: ``` botocore.exceptions.ClientError: An error occurred (InvalidLocationConstraint) when calling the CreateBucket operation: The specified location-constraint is not valid ``` I can create buckets in these regions from the aws website but it is not good for me, so I tried to do create it directly from the rest API without boto3. url: **bucket-name**.s3.amazonaws.com body: ``` <?xml version="1.0" encoding="UTF-8"?> <CreateBucketConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"> <LocationConstraint>eu-south-1</LocationConstraint> </CreateBucketConfiguration> ``` but the response was similar to the exception: ``` <?xml version="1.0" encoding="UTF-8"?> <Error> <Code>InvalidLocationConstraint</Code> <Message>The specified location-constraint is not valid</Message> <LocationConstraint>eu-south-1</LocationConstraint> <RequestId>********</RequestId> <HostId>**************</HostId> </Error> ``` Does anyone have an idea why I can do it manually from the site but not from python?
2020/05/27
[ "https://Stackoverflow.com/questions/62045094", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4213730/" ]
The regions your code fails in are relatively new regions, which you need to opt in to before you can use them; see [Managing AWS Regions](https://docs.aws.amazon.com/general/latest/gr/rande-manage.html).
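As a rough illustration (not part of the original answer), one way to see which regions the account has opted in to is EC2's `describe_regions` call in boto3:

```python
# Sketch: list every region and its opt-in status for the current account.
# Opt-in regions such as af-south-1 or eu-south-1 report "not-opted-in" until
# they are enabled in the account settings, which is why CreateBucket rejects them.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
for region in ec2.describe_regions(AllRegions=True)["Regions"]:
    print(region["RegionName"], region["OptInStatus"])
```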
Newer AWS regions only support regional endpoints. Thus, when creating buckets in one of those regions, a regional endpoint needs to be used. Since I was creating buckets in multiple regions, I set the endpoint by creating a new instance of the client for each region. (My case was Node.js, but the same applies to boto3.) ``` client = boto3.client('s3', region_name='region') ``` See the same problem in Node.js [here](https://stackoverflow.com/questions/63262119/invalidlocationconstraint-creating-a-bucket-in-af-south-1-cape-town-region-usi/63262329#63262329)
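Putting the two answers together, here is a minimal sketch (assuming the target region is already opted in and credentials are configured) of the questioner's `create_bucket` helper using a region-specific client; the bucket name is a placeholder:

```python
# Sketch: bind the S3 client to the target region so the request goes to that
# region's endpoint instead of the default one.
import boto3

def create_bucket(name, region):
    s3 = boto3.client("s3", region_name=region)
    s3.create_bucket(
        Bucket=name,
        CreateBucketConfiguration={"LocationConstraint": region},
    )

# Example call; bucket names must be globally unique.
create_bucket("my-example-bucket-af-south-1", "af-south-1")
```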
30,518,714
After running a python program, I obtain a list of numeric data. How can I format the data in CSV? Goal: I hope to format it so that I can reuse the CSV-formatted data in Mathematica.
2015/05/28
[ "https://Stackoverflow.com/questions/30518714", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4143312/" ]
Let's assume your list of numeric data is stored in a variable - `numericList` ``` import csv csvFile = 'output.csv' # path of the output file myFile = open(csvFile, 'wb') writer = csv.writer(myFile, quoting = csv.QUOTE_ALL) writer.writerow(numericList) myFile.close() ``` *wb* indicates that the file is opened for **w**riting in **b**inary mode. The file at `csvFile` will then contain your list in CSV format. **Note:** If each element of `numericList` ***is itself a list***, `writer.writerows(numericList)` might be a better option than `writer.writerow(numericList)`. This is because `writerows` writes each inner list as its own row, splitting its elements into columns.
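The snippet above is Python 2 style; as a small aside (not in the original answer), the Python 3 equivalent opens the file in text mode with `newline=''` so the csv module controls line endings itself:

```python
# Sketch: writing the same list with Python 3's csv module.
import csv

numericList = [1.5, 2.25, 3.0]  # placeholder data

with open("output.csv", "w", newline="") as myFile:
    writer = csv.writer(myFile, quoting=csv.QUOTE_ALL)
    writer.writerow(numericList)
```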
For a simple case you can just as easily write it directly: ``` f=open('test.csv','w') for p in data: f.write('%g,%g\n'%tuple(p)) f.close() ``` where `data` here is an `nx2` array.
52,161,349
I have successfully installed pattern3 for Python 3.6 on my Linux system. But after writing this code I got an error. ``` from pattern3.en import referenced print(referenced('university')) print(referenced('hour')) ``` > > IndentationError: expected an indented block > > >
2018/09/04
[ "https://Stackoverflow.com/questions/52161349", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6051513/" ]
I solved this by opening the problematic file, which should be `(C:\Python27\Lib\site-packages\pattern3\text\tree.py)`, and fixing the problem myself: ```py from itertools import chain # 34 try: # 35 None # ===> THIS IS THE LINE I ADDED! <=== except: # 37 izip = zip # Python 3 ``` Obviously, this is not a suitable fix for production environments. Alternatively, clone the current pattern repository on your computer and install it: ```sh git clone -b development https://github.com/clips/pattern cd pattern python setup.py install ``` Then you must import `pattern` instead of `pattern3`. Good luck.
Python will give you an `expected an indented block` error if you skip the indentation. Example of this (with the `print` line not indented): ``` if 5 > 2: print("Five is greater than two!") ``` Run it, and it will give an error. With the `print` line indented: ``` if 5 > 2: print("Five is greater than two!") ``` Run it, and it will not give any error.
25,538,584
I have 2 date columns (begin and end) in a data frame where the dates are in the following string format '%Y-%m-%d %H:%M:%S.%f'. How can I change these into date format in python? I also want to create a new column that shows the difference in days between the end and begin dates. Thanks in advance!
2014/08/27
[ "https://Stackoverflow.com/questions/25538584", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2313307/" ]
If you're using a recent version of pandas you can pass a format argument to `to_datetime`: ``` In [11]: dates = ["2014-08-27 19:53:06.000", "2014-08-27 19:53:15.002"] In [12]: pd.to_datetime(dates, format='%Y-%m-%d %H:%M:%S.%f') Out[12]: <class 'pandas.tseries.index.DatetimeIndex'> [2014-08-27 19:53:06, 2014-08-27 19:53:15.002000] Length: 2, Freq: None, Timezone: None ``` Note: it isn't necessary in this case to pass format but it may be faster/tighter: ``` In [13]: pd.to_datetime(dates,) Out[13]: <class 'pandas.tseries.index.DatetimeIndex'> [2014-08-27 19:53:06, 2014-08-27 19:53:15.002000] Length: 2, Freq: None, Timezone: None ```
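The question also asks for a new column with the difference in days between the two dates; as a hedged addition (the column names `begin` and `end` are assumed), once both columns are parsed the subtraction yields timedeltas whose `.dt.days` accessor gives whole days:

```python
# Sketch: parse both columns and add a day-difference column.
import pandas as pd

df = pd.DataFrame({
    "begin": ["2014-08-27 19:53:06.000"],
    "end":   ["2014-08-30 10:00:00.000"],
})
fmt = "%Y-%m-%d %H:%M:%S.%f"
df["begin"] = pd.to_datetime(df["begin"], format=fmt)
df["end"] = pd.to_datetime(df["end"], format=fmt)
df["days"] = (df["end"] - df["begin"]).dt.days  # whole days as an integer
print(df)
```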
The `datetime` module has everything you need to play around with dates. Note that in the format you describe `%Y-%m-%d %H:%M:%S.%f` the `%f` does not appear in the [known directives](https://docs.python.org/3/library/time.html#time.strftime) and is not included in my answer ``` from datetime import datetime dates = ["2014-08-27 19:53:06", "2014-08-27 19:53:15"] # That's where the conversion happens from string to datetime objects datetimes = [datetime.strptime(date, "%Y-%m-%d %H:%M:%S") for date in dates] print datetimes >> [datetime.datetime(2014, 8, 27, 19, 53, 6), datetime.datetime(2014, 8, 27, 19, 53, 15) # Here a simple subtraction will give you the result you are looking for return a timedelta object delta = datetimes[1] - datetimes[0] print type(delta), delta >> <type 'datetime.timedelta'>, 0:00:09 ```
19,845,259
I am getting below errors while configuring Grinder on JIRA instances, followed all instruction as per <https://confluence.atlassian.com/display/ATLAS/JIRA+Performance+Testing+with+Grinder#JIRAPerformanceTestingwithGrinder-Prerequisites> Errors : $ cat project\_manager\_8/error\_xxxx004.fm.XXXXX.com-0.log ``` 11/7/13 7:44:35 PM (process xxxx004.fm.XXXXX.com-0): Error running worker process (Java exception initialising test script File "/opt/atlassian-jira-performance-tests/target/classes/test_scripts/./env.py", line 35, in request File "/opt/atlassian-jira-performance-tests/target/classes/test_scripts/./dashboard.py", line 9, in __init__ File "/opt/atlassian-jira-performance-tests/target/classes/test_scripts/./interactions.py", line 35, in ? File "./agent_project_manager.py", line 4, in ?) Java exception initialising test script File "/opt/atlassian-jira-performance-tests/target/classes/test_scripts/./env.py", line 35, in request File "/opt/atlassian-jira-performance-tests/target/classes/test_scripts/./dashboard.py", line 9, in __init__ File "/opt/atlassian-jira-performance-tests/target/classes/test_scripts/./interactions.py", line 35, in ? File "./agent_project_manager.py", line 4, in ? Caused by: net.grinder.script.NotWrappableTypeException: Failed to wrap http://jira-fm-dev.devtools.XXXXX.com:8080/ at net.grinder.engine.process.instrumenter.MasterInstrumenter.createInstrumentedProxy(MasterInstrumenter.java:99) at net.grinder.engine.process.TestData.createProxy(TestData.java:93) at net.grinder.script.Test.wrap(Test.java:100) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:48) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:600) at org.python.core.PyReflectedFunction.__call__(Unknown Source) at org.python.core.PyMethod.__call__(Unknown Source) at org.python.core.PyObject.__call__(Unknown Source) at org.python.core.PyInstance.invoke(Unknown Source) at env$py.request$6(/opt/atlassian-jira-performance-tests/target/classes/test_scripts/./env.py:35) at env$py.call_function(/opt/atlassian-jira-performance-tests/target/classes/test_scripts/./env.py) at org.python.core.PyTableCode.call(Unknown Source) at org.python.core.PyTableCode.call(Unknown Source) at org.python.core.PyFunction.__call__(Unknown Source) at dashboard$py.__init__$2(/opt/atlassian-jira-performance-tests/target/classes/test_scripts/./dashboard.py:9) at dashboard$py.call_function(/opt/atlassian-jira-performance-tests/target/classes/test_scripts/./dashboard.py) at org.python.core.PyTableCode.call(Unknown Source) at org.python.core.PyTableCode.call(Unknown Source) at org.python.core.PyTableCode.call(Unknown Source) at org.python.core.PyFunction.__call__(Unknown Source) at org.python.core.PyInstance.__init__(Unknown Source) at org.python.core.PyClass.__call__(Unknown Source) at org.python.core.PyObject.__call__(Unknown Source) at interactions$py.f$0(/opt/atlassian-jira-performance-tests/target/classes/test_scripts/./interactions.py:35) at interactions$py.call_function(/opt/atlassian-jira-performance-tests/target/classes/test_scripts/./interactions.py) at org.python.core.PyTableCode.call(Unknown Source) at org.python.core.PyCode.call(Unknown Source) at org.python.core.imp.createFromCode(Unknown Source) at org.python.core.imp.createFromPyClass(Unknown Source) at org.python.core.imp.loadFromSource(Unknown Source) at org.python.core.imp.find_module(Unknown 
Source) at org.python.core.imp.import_next(Unknown Source) at org.python.core.imp.import_name(Unknown Source) at org.python.core.imp.importName(Unknown Source) at org.python.core.ImportFunction.load(Unknown Source) at org.python.core.ImportFunction.__call__(Unknown Source) at org.python.core.PyObject.__call__(Unknown Source) at org.python.core.__builtin__.__import__(Unknown Source) at org.python.core.imp.importAll(Unknown Source) at org.python.pycode._pyx0.f$0(./agent_project_manager.py:4) at org.python.pycode._pyx0.call_function(./agent_project_manager.py) at org.python.core.PyTableCode.call(Unknown Source) at org.python.core.PyCode.call(Unknown Source) at org.python.core.Py.runCode(Unknown Source) at org.python.core.__builtin__.execfile_flags(Unknown Source) at org.python.util.PythonInterpreter.execfile(Unknown Source) at net.grinder.engine.process.jython.JythonScriptEngine.initialise(JythonScriptEngine.java:83) at net.grinder.engine.process.GrinderProcess.run(GrinderProcess.java:259) at net.grinder.engine.process.WorkerProcessEntryPoint.run(WorkerProcessEntryPoint.java:87) at net.grinder.engine.process.WorkerProcessEntryPoint.main(WorkerProcessEntryPoint.java:59) ```
2013/11/07
[ "https://Stackoverflow.com/questions/19845259", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2608551/" ]
It's taken me 2 weeks to find out how to REALLY fix this issue. I had put a new class file under the **App\_Code** folder (I haven't used that folder for ages). For some reason I had set the "Build Action" to "*Compile*". Well, I guess anything under the **App\_Code** folder is already compiled by default, so when the project would build, it would give me this "ambiguous" error. By simply setting the "Build Action" back to "*None*", the ambiguous error went away! There's a tip for all you putting your helper methods under the "App\_Code" folder!
It appears that something was wrong with one of the web.config files. I simply took the web.config files from a blank MVC4 project and replaced mine. Incidentally, having the namespace in both the config and the layout does not throw an error.
48,189,688
I'm trying to find out if a .xlsx file contains a @. I have used pandas, which works great, unless the Excel sheet has the first column empty; then it fails. Any ideas how to rewrite the code to handle/skip empty columns? The code: ``` df = pandas.read_excel(open(path,'rb'), sheetname=0) out = 'False' for col in df.columns: if df[col].str.contains('@').any(): out = 'True' break ``` This is the error I'm getting: ``` df = pandas.read_excel(open(path,'rb'), sheetname=0) File "/anaconda3/lib/python3.6/site-packages/pandas/io/excel.py", line 203, in read_excel io = ExcelFile(io, engine=engine) File "/anaconda3/lib/python3.6/site-packages/pandas/io/excel.py", line 258, in __init__ self.book = xlrd.open_workbook(file_contents=data) File "/anaconda3/lib/python3.6/site-packages/xlrd/__init__.py", line 162, in open_workbook ragged_rows=ragged_rows, File "/anaconda3/lib/python3.6/site-packages/xlrd/book.py", line 91, in open_workbook_xls biff_version = bk.getbof(XL_WORKBOOK_GLOBALS) File "/anaconda3/lib/python3.6/site-packages/xlrd/book.py", line 1271, in getbof bof_error('Expected BOF record; found %r' % self.mem[savpos:savpos+8]) File "/anaconda3/lib/python3.6/site-packages/xlrd/book.py", line 1265, in bof_error raise XLRDError('Unsupported format, or corrupt file: ' + msg) xlrd.biffh.XLRDError: Unsupported format, or corrupt file: Expected BOF record; found b'\x17Microso' ```
2018/01/10
[ "https://Stackoverflow.com/questions/48189688", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1495850/" ]
Take a look at the [json module](https://docs.python.org/3/library/json.html), more specifically the 'Decoding JSON:' section. ``` import json import requests response = requests.get("https://example.com/api/users") # your API call (placeholder URL) users = json.loads(response.text) for user in users: print(user['id']) ```
It seems what you are looking for is the [json](https://docs.python.org/2/library/json.html) module. With it you can use this to parse a JSON string into a Python object: ``` import json output = json.loads(myJsonString) ```
48,189,688
I'm trying to find out if a .xlsx file contains a @. I have used pandas, which works great, unless the Excel sheet has the first column empty; then it fails. Any ideas how to rewrite the code to handle/skip empty columns? The code: ``` df = pandas.read_excel(open(path,'rb'), sheetname=0) out = 'False' for col in df.columns: if df[col].str.contains('@').any(): out = 'True' break ``` This is the error I'm getting: ``` df = pandas.read_excel(open(path,'rb'), sheetname=0) File "/anaconda3/lib/python3.6/site-packages/pandas/io/excel.py", line 203, in read_excel io = ExcelFile(io, engine=engine) File "/anaconda3/lib/python3.6/site-packages/pandas/io/excel.py", line 258, in __init__ self.book = xlrd.open_workbook(file_contents=data) File "/anaconda3/lib/python3.6/site-packages/xlrd/__init__.py", line 162, in open_workbook ragged_rows=ragged_rows, File "/anaconda3/lib/python3.6/site-packages/xlrd/book.py", line 91, in open_workbook_xls biff_version = bk.getbof(XL_WORKBOOK_GLOBALS) File "/anaconda3/lib/python3.6/site-packages/xlrd/book.py", line 1271, in getbof bof_error('Expected BOF record; found %r' % self.mem[savpos:savpos+8]) File "/anaconda3/lib/python3.6/site-packages/xlrd/book.py", line 1265, in bof_error raise XLRDError('Unsupported format, or corrupt file: ' + msg) xlrd.biffh.XLRDError: Unsupported format, or corrupt file: Expected BOF record; found b'\x17Microso' ```
2018/01/10
[ "https://Stackoverflow.com/questions/48189688", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1495850/" ]
Take a look at the [json module](https://docs.python.org/3/library/json.html), more specifically the 'Decoding JSON:' section. ``` import json import requests response = requests.get("https://example.com/api/users") # your API call (placeholder URL) users = json.loads(response.text) for user in users: print(user['id']) ```
Use Python 3 and import urllib: ``` import urllib.request import json url = "https://example.com/api" # link of the server (placeholder) #Taking response and request from url r = urllib.request.urlopen(url) #reading and decoding the data data = json.loads(r.read().decode(r.info().get_param('charset') or 'utf-8')) for json_inner_array in data: for json_data in json_inner_array: print("id: "+json_data["id"]) ```
48,189,688
I'm trying to find out if a .xlsx file contains a @. I have used pandas, which works great, unless the Excel sheet has the first column empty; then it fails. Any ideas how to rewrite the code to handle/skip empty columns? The code: ``` df = pandas.read_excel(open(path,'rb'), sheetname=0) out = 'False' for col in df.columns: if df[col].str.contains('@').any(): out = 'True' break ``` This is the error I'm getting: ``` df = pandas.read_excel(open(path,'rb'), sheetname=0) File "/anaconda3/lib/python3.6/site-packages/pandas/io/excel.py", line 203, in read_excel io = ExcelFile(io, engine=engine) File "/anaconda3/lib/python3.6/site-packages/pandas/io/excel.py", line 258, in __init__ self.book = xlrd.open_workbook(file_contents=data) File "/anaconda3/lib/python3.6/site-packages/xlrd/__init__.py", line 162, in open_workbook ragged_rows=ragged_rows, File "/anaconda3/lib/python3.6/site-packages/xlrd/book.py", line 91, in open_workbook_xls biff_version = bk.getbof(XL_WORKBOOK_GLOBALS) File "/anaconda3/lib/python3.6/site-packages/xlrd/book.py", line 1271, in getbof bof_error('Expected BOF record; found %r' % self.mem[savpos:savpos+8]) File "/anaconda3/lib/python3.6/site-packages/xlrd/book.py", line 1265, in bof_error raise XLRDError('Unsupported format, or corrupt file: ' + msg) xlrd.biffh.XLRDError: Unsupported format, or corrupt file: Expected BOF record; found b'\x17Microso' ```
2018/01/10
[ "https://Stackoverflow.com/questions/48189688", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1495850/" ]
Take a look at the [json module](https://docs.python.org/3/library/json.html), more specifically the 'Decoding JSON:' section. ``` import json import requests response = requests.get("https://example.com/api/users") # your API call (placeholder URL) users = json.loads(response.text) for user in users: print(user['id']) ```
You can try like below to get the values from json response: ``` import json content=[{ "username": "admin", "first_name": "", "last_name": "", "roles": "system_admin system_user", "locale": "en", "delete_at": 0, "update_at": 1511335509393, "create_at": 1511335500662, "auth_service": "", "email": "adminuser@cognizant.com", "auth_data": "", "position": "", "nickname": "", "id": "pbjds5wmsp8cxr993nmc6ozodh" }, { "username": "chatops", "first_name": "", "last_name": "", "roles": "system_user", "locale": "en", "delete_at": 0, "update_at": 1511335743479, "create_at": 1511335743393, "auth_service": "", "email": "chatops@cognizant.com", "auth_data": "", "position": "", "nickname": "", "id": "akxdddp5p7fjirxq7whhntq1nr" }] for item in content: print("Name: {}\nEmail: {}\nID: {}\n".format(item['username'],item['email'],item['id'])) ``` Output: ``` Name: admin Email: adminuser@cognizant.com ID: pbjds5wmsp8cxr993nmc6ozodh Name: chatops Email: chatops@cognizant.com ID: akxdddp5p7fjirxq7whhntq1nr ```
48,189,688
I'm trying to find out if a .xlsx file contains a @. I have used pandas, which works great, unless the Excel sheet has the first column empty; then it fails. Any ideas how to rewrite the code to handle/skip empty columns? The code: ``` df = pandas.read_excel(open(path,'rb'), sheetname=0) out = 'False' for col in df.columns: if df[col].str.contains('@').any(): out = 'True' break ``` This is the error I'm getting: ``` df = pandas.read_excel(open(path,'rb'), sheetname=0) File "/anaconda3/lib/python3.6/site-packages/pandas/io/excel.py", line 203, in read_excel io = ExcelFile(io, engine=engine) File "/anaconda3/lib/python3.6/site-packages/pandas/io/excel.py", line 258, in __init__ self.book = xlrd.open_workbook(file_contents=data) File "/anaconda3/lib/python3.6/site-packages/xlrd/__init__.py", line 162, in open_workbook ragged_rows=ragged_rows, File "/anaconda3/lib/python3.6/site-packages/xlrd/book.py", line 91, in open_workbook_xls biff_version = bk.getbof(XL_WORKBOOK_GLOBALS) File "/anaconda3/lib/python3.6/site-packages/xlrd/book.py", line 1271, in getbof bof_error('Expected BOF record; found %r' % self.mem[savpos:savpos+8]) File "/anaconda3/lib/python3.6/site-packages/xlrd/book.py", line 1265, in bof_error raise XLRDError('Unsupported format, or corrupt file: ' + msg) xlrd.biffh.XLRDError: Unsupported format, or corrupt file: Expected BOF record; found b'\x17Microso' ```
2018/01/10
[ "https://Stackoverflow.com/questions/48189688", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1495850/" ]
You can try like below to get the values from json response: ``` import json content=[{ "username": "admin", "first_name": "", "last_name": "", "roles": "system_admin system_user", "locale": "en", "delete_at": 0, "update_at": 1511335509393, "create_at": 1511335500662, "auth_service": "", "email": "adminuser@cognizant.com", "auth_data": "", "position": "", "nickname": "", "id": "pbjds5wmsp8cxr993nmc6ozodh" }, { "username": "chatops", "first_name": "", "last_name": "", "roles": "system_user", "locale": "en", "delete_at": 0, "update_at": 1511335743479, "create_at": 1511335743393, "auth_service": "", "email": "chatops@cognizant.com", "auth_data": "", "position": "", "nickname": "", "id": "akxdddp5p7fjirxq7whhntq1nr" }] for item in content: print("Name: {}\nEmail: {}\nID: {}\n".format(item['username'],item['email'],item['id'])) ``` Output: ``` Name: admin Email: adminuser@cognizant.com ID: pbjds5wmsp8cxr993nmc6ozodh Name: chatops Email: chatops@cognizant.com ID: akxdddp5p7fjirxq7whhntq1nr ```
It seems what you are looking for is the [json](https://docs.python.org/2/library/json.html) module. With it you can use this to parse a JSON string into a Python object: ``` import json output = json.loads(myJsonString) ```
48,189,688
I'm trying to find out if a .xlsx file contains a @. I have used pandas, which works great, unless the Excel sheet has the first column empty; then it fails. Any ideas how to rewrite the code to handle/skip empty columns? The code: ``` df = pandas.read_excel(open(path,'rb'), sheetname=0) out = 'False' for col in df.columns: if df[col].str.contains('@').any(): out = 'True' break ``` This is the error I'm getting: ``` df = pandas.read_excel(open(path,'rb'), sheetname=0) File "/anaconda3/lib/python3.6/site-packages/pandas/io/excel.py", line 203, in read_excel io = ExcelFile(io, engine=engine) File "/anaconda3/lib/python3.6/site-packages/pandas/io/excel.py", line 258, in __init__ self.book = xlrd.open_workbook(file_contents=data) File "/anaconda3/lib/python3.6/site-packages/xlrd/__init__.py", line 162, in open_workbook ragged_rows=ragged_rows, File "/anaconda3/lib/python3.6/site-packages/xlrd/book.py", line 91, in open_workbook_xls biff_version = bk.getbof(XL_WORKBOOK_GLOBALS) File "/anaconda3/lib/python3.6/site-packages/xlrd/book.py", line 1271, in getbof bof_error('Expected BOF record; found %r' % self.mem[savpos:savpos+8]) File "/anaconda3/lib/python3.6/site-packages/xlrd/book.py", line 1265, in bof_error raise XLRDError('Unsupported format, or corrupt file: ' + msg) xlrd.biffh.XLRDError: Unsupported format, or corrupt file: Expected BOF record; found b'\x17Microso' ```
2018/01/10
[ "https://Stackoverflow.com/questions/48189688", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1495850/" ]
You can try like below to get the values from json response: ``` import json content=[{ "username": "admin", "first_name": "", "last_name": "", "roles": "system_admin system_user", "locale": "en", "delete_at": 0, "update_at": 1511335509393, "create_at": 1511335500662, "auth_service": "", "email": "adminuser@cognizant.com", "auth_data": "", "position": "", "nickname": "", "id": "pbjds5wmsp8cxr993nmc6ozodh" }, { "username": "chatops", "first_name": "", "last_name": "", "roles": "system_user", "locale": "en", "delete_at": 0, "update_at": 1511335743479, "create_at": 1511335743393, "auth_service": "", "email": "chatops@cognizant.com", "auth_data": "", "position": "", "nickname": "", "id": "akxdddp5p7fjirxq7whhntq1nr" }] for item in content: print("Name: {}\nEmail: {}\nID: {}\n".format(item['username'],item['email'],item['id'])) ``` Output: ``` Name: admin Email: adminuser@cognizant.com ID: pbjds5wmsp8cxr993nmc6ozodh Name: chatops Email: chatops@cognizant.com ID: akxdddp5p7fjirxq7whhntq1nr ```
Use Python 3 and import urllib: ``` import urllib.request import json url = "https://example.com/api" # link of the server (placeholder) #Taking response and request from url r = urllib.request.urlopen(url) #reading and decoding the data data = json.loads(r.read().decode(r.info().get_param('charset') or 'utf-8')) for json_inner_array in data: for json_data in json_inner_array: print("id: "+json_data["id"]) ```
57,919,803
Similar question asked here ([Start index for iterating Python list](https://stackoverflow.com/questions/6148619/start-index-for-iterating-python-list)), but I need one more thing. Assume I have a list [Sunday, Monday, ...Saturday], and I want to iterate the list starting from different position, wrap around and complete the loop. For example ```py a = [Sunday, Monday, ...Saturday] for i in range(7): print("----") for j in (SOMETHING): print(j) OUTPUT: ---- Sunday Monday Tuesday Wednesday Thursday Friday Saturday ---- Monday Tuesday Wednesday Thursday Friday Saturday Sunday ---- Tuesday . . . Friday ``` How could I approach this?
2019/09/13
[ "https://Stackoverflow.com/questions/57919803", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12062120/" ]
You can use `collections.deque`, which has a `rotate` method. However, if you want to do it on your own you can do it like this: ``` >>> a = ['a','b','c','d'] >>> counter = 0 >>> start_index=2 >>> while counter < len(a): ... print(a[start_index]) ... start_index+=1 ... counter += 1 ... if start_index==len(a): ... start_index=0 ... c d a b ``` This is quite optimal, because you do not need to make any copy or create a new list, just iterate.
Use the following function: ``` def cycle_list(l, i): for element in l[i:]: yield element for element in l[:i]: yield element ```
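For completeness, a small sketch (not in the original answer) of the generator above plugged into the questioner's loop:

```python
# Sketch: drive cycle_list with each starting offset in turn.
a = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"]

def cycle_list(l, i):
    for element in l[i:]:
        yield element
    for element in l[:i]:
        yield element

for i in range(7):
    print("----")
    for day in cycle_list(a, i):
        print(day)
```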
57,919,803
Similar question asked here ([Start index for iterating Python list](https://stackoverflow.com/questions/6148619/start-index-for-iterating-python-list)), but I need one more thing. Assume I have a list [Sunday, Monday, ...Saturday], and I want to iterate the list starting from different position, wrap around and complete the loop. For example ```py a = [Sunday, Monday, ...Saturday] for i in range(7): print("----") for j in (SOMETHING): print(j) OUTPUT: ---- Sunday Monday Tuesday Wednesday Thursday Friday Saturday ---- Monday Tuesday Wednesday Thursday Friday Saturday Sunday ---- Tuesday . . . Friday ``` How could I approach this?
2019/09/13
[ "https://Stackoverflow.com/questions/57919803", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12062120/" ]
You could pop the start item off and add it to the end. ``` days = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'] for _ in range(7): print("----") print("\n".join(days)) days.append(days.pop(0)) ```
Use the following function: ``` def cycle_list(l, i): for element in l[i:]: yield element for element in l[:i]: yield element ```
57,919,803
Similar question asked here ([Start index for iterating Python list](https://stackoverflow.com/questions/6148619/start-index-for-iterating-python-list)), but I need one more thing. Assume I have a list [Sunday, Monday, ...Saturday], and I want to iterate the list starting from different position, wrap around and complete the loop. For example ```py a = [Sunday, Monday, ...Saturday] for i in range(7): print("----") for j in (SOMETHING): print(j) OUTPUT: ---- Sunday Monday Tuesday Wednesday Thursday Friday Saturday ---- Monday Tuesday Wednesday Thursday Friday Saturday Sunday ---- Tuesday . . . Friday ``` How could I approach this?
2019/09/13
[ "https://Stackoverflow.com/questions/57919803", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12062120/" ]
You can use `collections.deque`, which has a `rotate` method. However, if you want to do it on your own you can do it like this: ``` >>> a = ['a','b','c','d'] >>> counter = 0 >>> start_index=2 >>> while counter < len(a): ... print(a[start_index]) ... start_index+=1 ... counter += 1 ... if start_index==len(a): ... start_index=0 ... c d a b ``` This is quite optimal, because you do not need to make any copy or create a new list, just iterate.
If you don't want to import any libraries. ``` DAYS = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'] for i in range(7): print("----") for j in range(len(DAYS)): print(DAYS[(j+i) % len(DAYS)]) ```
57,919,803
Similar question asked here ([Start index for iterating Python list](https://stackoverflow.com/questions/6148619/start-index-for-iterating-python-list)), but I need one more thing. Assume I have a list [Sunday, Monday, ...Saturday], and I want to iterate the list starting from different position, wrap around and complete the loop. For example ```py a = [Sunday, Monday, ...Saturday] for i in range(7): print("----") for j in (SOMETHING): print(j) OUTPUT: ---- Sunday Monday Tuesday Wednesday Thursday Friday Saturday ---- Monday Tuesday Wednesday Thursday Friday Saturday Sunday ---- Tuesday . . . Friday ``` How could I approach this?
2019/09/13
[ "https://Stackoverflow.com/questions/57919803", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12062120/" ]
One approach would be using [`collections.deque`](https://docs.python.org/2/library/collections.html#collections.deque): ``` from collections import deque from itertools import repeat d = deque(['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday']) n = 7 for i in repeat(d, n): print(*i, sep='\n') print('-----') i.rotate(-1) ``` --- ``` Sunday Monday Tuesday Wednesday Thursday Friday Saturday ----- Monday Tuesday Wednesday Thursday Friday Saturday Sunday ----- Tuesday . . . ``` --- Though you might find more interesting to create a nested list: ``` n = 7 l = [] for i in repeat(d, n): sl = [] for j in i: sl.append(j) l.append(sl) i.rotate(-1) ``` --- ``` print(l) # [['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'], # ['Monday', 'Tuesday', 'Wednesday'... ```
If you don't want to import any libraries. ``` DAYS = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'] for i in range(7): print("----") for j in range(len(DAYS)): print(DAYS[(j+i) % len(DAYS)]) ```
57,919,803
Similar question asked here ([Start index for iterating Python list](https://stackoverflow.com/questions/6148619/start-index-for-iterating-python-list)), but I need one more thing. Assume I have a list [Sunday, Monday, ...Saturday], and I want to iterate the list starting from different position, wrap around and complete the loop. For example ```py a = [Sunday, Monday, ...Saturday] for i in range(7): print("----") for j in (SOMETHING): print(j) OUTPUT: ---- Sunday Monday Tuesday Wednesday Thursday Friday Saturday ---- Monday Tuesday Wednesday Thursday Friday Saturday Sunday ---- Tuesday . . . Friday ``` How could I approach this?
2019/09/13
[ "https://Stackoverflow.com/questions/57919803", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12062120/" ]
You could pop the start item off and add it to the end. ``` days = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'] for _ in range(7): print("----") print("\n".join(days)) days.append(days.pop(0)) ```
If you don't want to import any libraries. ``` DAYS = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'] for i in range(7): print("----") for j in range(len(DAYS)): print(DAYS[(j+i) % len(DAYS)]) ```
57,919,803
Similar question asked here ([Start index for iterating Python list](https://stackoverflow.com/questions/6148619/start-index-for-iterating-python-list)), but I need one more thing. Assume I have a list [Sunday, Monday, ...Saturday], and I want to iterate the list starting from different position, wrap around and complete the loop. For example ```py a = [Sunday, Monday, ...Saturday] for i in range(7): print("----") for j in (SOMETHING): print(j) OUTPUT: ---- Sunday Monday Tuesday Wednesday Thursday Friday Saturday ---- Monday Tuesday Wednesday Thursday Friday Saturday Sunday ---- Tuesday . . . Friday ``` How could I approach this?
2019/09/13
[ "https://Stackoverflow.com/questions/57919803", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12062120/" ]
You could pop the start item off and add it to the end. ``` days = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'] for _ in range(7): print("----") print("\n".join(days)) days.append(days.pop(0)) ```
It can be done by: ```py a[i:]+a[:i] ```
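Spelling that one-liner out (a small sketch reusing the question's loop), the slice-and-concatenate expression fills the `SOMETHING` placeholder directly:

```python
# Sketch: a[i:] + a[:i] is the list starting at index i, wrapped around to the start.
a = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"]

for i in range(7):
    print("----")
    for j in a[i:] + a[:i]:
        print(j)
```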
57,919,803
Similar question asked here ([Start index for iterating Python list](https://stackoverflow.com/questions/6148619/start-index-for-iterating-python-list)), but I need one more thing. Assume I have a list [Sunday, Monday, ...Saturday], and I want to iterate the list starting from different position, wrap around and complete the loop. For example ```py a = [Sunday, Monday, ...Saturday] for i in range(7): print("----") for j in (SOMETHING): print(j) OUTPUT: ---- Sunday Monday Tuesday Wednesday Thursday Friday Saturday ---- Monday Tuesday Wednesday Thursday Friday Saturday Sunday ---- Tuesday . . . Friday ``` How could I approach this?
2019/09/13
[ "https://Stackoverflow.com/questions/57919803", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12062120/" ]
One approach would be using [`collections.deque`](https://docs.python.org/2/library/collections.html#collections.deque): ``` from collections import deque from itertools import repeat d = deque(['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday']) n = 7 for i in repeat(d, n): print(*i, sep='\n') print('-----') i.rotate(-1) ``` --- ``` Sunday Monday Tuesday Wednesday Thursday Friday Saturday ----- Monday Tuesday Wednesday Thursday Friday Saturday Sunday ----- Tuesday . . . ``` --- Though you might find more interesting to create a nested list: ``` n = 7 l = [] for i in repeat(d, n): sl = [] for j in i: sl.append(j) l.append(sl) i.rotate(-1) ``` --- ``` print(l) # [['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'], # ['Monday', 'Tuesday', 'Wednesday'... ```
You can use `collections.deque`, which has a `rotate` method. However, if you want to do it on your own you can do it like this: ``` >>> a = ['a','b','c','d'] >>> counter = 0 >>> start_index=2 >>> while counter < len(a): ... print(a[start_index]) ... start_index+=1 ... counter += 1 ... if start_index==len(a): ... start_index=0 ... c d a b ``` This is quite optimal, because you do not need to make any copy or create a new list, just iterate.
57,919,803
Similar question asked here ([Start index for iterating Python list](https://stackoverflow.com/questions/6148619/start-index-for-iterating-python-list)), but I need one more thing. Assume I have a list [Sunday, Monday, ...Saturday], and I want to iterate the list starting from different position, wrap around and complete the loop. For example ```py a = [Sunday, Monday, ...Saturday] for i in range(7): print("----") for j in (SOMETHING): print(j) OUTPUT: ---- Sunday Monday Tuesday Wednesday Thursday Friday Saturday ---- Monday Tuesday Wednesday Thursday Friday Saturday Sunday ---- Tuesday . . . Friday ``` How could I approach this?
2019/09/13
[ "https://Stackoverflow.com/questions/57919803", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12062120/" ]
You can use `collections.deque`, which has a `rotate` method. However, if you want to do it on your own you can do it like this: ``` >>> a = ['a','b','c','d'] >>> counter = 0 >>> start_index=2 >>> while counter < len(a): ... print(a[start_index]) ... start_index+=1 ... counter += 1 ... if start_index==len(a): ... start_index=0 ... c d a b ``` This is quite optimal, because you do not need to make any copy or create a new list, just iterate.
It can be done by: ```py a[i:]+a[:i] ```
57,919,803
Similar question asked here ([Start index for iterating Python list](https://stackoverflow.com/questions/6148619/start-index-for-iterating-python-list)), but I need one more thing. Assume I have a list [Sunday, Monday, ...Saturday], and I want to iterate the list starting from different position, wrap around and complete the loop. For example ```py a = [Sunday, Monday, ...Saturday] for i in range(7): print("----") for j in (SOMETHING): print(j) OUTPUT: ---- Sunday Monday Tuesday Wednesday Thursday Friday Saturday ---- Monday Tuesday Wednesday Thursday Friday Saturday Sunday ---- Tuesday . . . Friday ``` How could I approach this?
2019/09/13
[ "https://Stackoverflow.com/questions/57919803", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12062120/" ]
You can use `collections.deque`, which has a `rotate` method. However, if you want to do it on your own, you can do it like this: ``` >>> a = ['a','b','c','d'] >>> counter = 0 >>> start_index=2 >>> while counter < len(a): ... print(a[start_index]) ... start_index+=1 ... counter += 1 ... if start_index==len(a): ... start_index=0 ... c d a b ``` This is quite efficient, because you do not need to make any copies or create a new list; you just iterate.
you can [chain](https://docs.python.org/3.7/library/itertools.html#itertools.chain) the elements starting from your current index (in your case the current index is `i`) with the elements before current index using generators, in this way you will not create new lists, will be memory-efficient: ``` a = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'] from itertools import chain for i in range(7): print("----") for j in chain((e for e in a[i:]), (e for e in a[:i])): print(j) ``` output: ``` ---- Sunday Monday Tuesday Wednesday Thursday Friday Saturday ---- Monday Tuesday Wednesday Thursday Friday Saturday Sunday ---- Tuesday Wednesday Thursday Friday Saturday Sunday Monday ---- Wednesday Thursday Friday Saturday Sunday Monday Tuesday ---- Thursday Friday Saturday Sunday Monday Tuesday Wednesday ---- Friday Saturday Sunday Monday Tuesday Wednesday Thursday ---- Saturday Sunday Monday Tuesday Wednesday Thursday Friday ```
57,919,803
Similar question asked here ([Start index for iterating Python list](https://stackoverflow.com/questions/6148619/start-index-for-iterating-python-list)), but I need one more thing. Assume I have a list [Sunday, Monday, ...Saturday], and I want to iterate the list starting from different position, wrap around and complete the loop. For example ```py a = [Sunday, Monday, ...Saturday] for i in range(7): print("----") for j in (SOMETHING): print(j) OUTPUT: ---- Sunday Monday Tuesday Wednesday Thursday Friday Saturday ---- Monday Tuesday Wednesday Thursday Friday Saturday Sunday ---- Tuesday . . . Friday ``` How could I approach this?
2019/09/13
[ "https://Stackoverflow.com/questions/57919803", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12062120/" ]
One approach would be using [`collections.deque`](https://docs.python.org/2/library/collections.html#collections.deque): ``` from collections import deque from itertools import repeat d = deque(['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday']) n = 7 for i in repeat(d, n): print(*i, sep='\n') print('-----') i.rotate(-1) ``` --- ``` Sunday Monday Tuesday Wednesday Thursday Friday Saturday ----- Monday Tuesday Wednesday Thursday Friday Saturday Sunday ----- Tuesday . . . ``` --- Though you might find it more interesting to create a nested list: ``` n = 7 l = [] for i in repeat(d, n): sl = [] for j in i: sl.append(j) l.append(sl) i.rotate(-1) ``` --- ``` print(l) # [['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'], # ['Monday', 'Tuesday', 'Wednesday'... ```
you can [chain](https://docs.python.org/3.7/library/itertools.html#itertools.chain) the elements starting from your current index (in your case the current index is `i`) with the elements before current index using generators, in this way you will not create new lists, will be memory-efficient: ``` a = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'] from itertools import chain for i in range(7): print("----") for j in chain((e for e in a[i:]), (e for e in a[:i])): print(j) ``` output: ``` ---- Sunday Monday Tuesday Wednesday Thursday Friday Saturday ---- Monday Tuesday Wednesday Thursday Friday Saturday Sunday ---- Tuesday Wednesday Thursday Friday Saturday Sunday Monday ---- Wednesday Thursday Friday Saturday Sunday Monday Tuesday ---- Thursday Friday Saturday Sunday Monday Tuesday Wednesday ---- Friday Saturday Sunday Monday Tuesday Wednesday Thursday ---- Saturday Sunday Monday Tuesday Wednesday Thursday Friday ```
13,610,654
I understand [from this question](https://stackoverflow.com/questions/2227169/are-python-built-in-containers-thread-safe) that if I want to have a `set` which is thread-safe I have to implement the thread-safety part on my own. Therefore I could come up with: ``` from threading import Lock class LockedSet(set): """A set where add() and remove() are thread-safe""" def __init__(self, *args, **kwargs): # Create a lock self._lock = Lock() # Call the original __init__ super(LockedSet, self).__init__(*args, **kwargs) def add(self, elem): self._lock.acquire() try: super(LockedSet, self).add(elem) finally: self._lock.release() def remove(self, elem): self._lock.acquire() try: super(LockedSet, self).remove(elem) finally: self._lock.release() ``` So, of course only add() and remove() are thread-safe in this implementation. The other methods are not because they were not overwritten in the subclass. Now, the pattern is pretty simple: acquire lock, call original method, release lock. If I follow the logic above, I would have to overwrite all methods exposed by `set` in essentially the same way, e.g.: (pseudo-code) ``` def <method>(<args>): 1. acquire lock 2. try: 3. call original method passing <args> 4. finally: 5. release lock ``` (/pseudo-code) This is not only tedious but also prone to errors. So, any ideas/suggestions on how to approach this in a better way?
2012/11/28
[ "https://Stackoverflow.com/questions/13610654", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15931/" ]
[Indeed, see the comments, it is not true] If you are running CPython, you can see from the set source code that it doesn't release the GIL (http://hg.python.org/cpython/file/db20367b20de/Objects/setobject.c), so all its operations should be atomic. If that is all you need and you are sure your code will run on CPython, you can just use it directly.
You can implement your own context manager: ``` class LockableSet: def __enter__(self): self.lock() return self def __exit__(self, exc_type, exc_value, traceback): #Do what you want with the error self.unlock() with LockableSet() as s: s.whatever() raise Exception() ``` No matter what, the object's `__exit__` method will be called at the end. More detailed information is available [here](http://docs.python.org/2/reference/compound_stmts.html#the-with-statement) (official Python docs). Another use for this could be a `lock` decorator for methods, like this: ``` def lock(func): def safe_func(self, *args, **kwargs): with self: func(self, *args, **kwargs) return safe_func ```
13,610,654
I understand [from this question](https://stackoverflow.com/questions/2227169/are-python-built-in-containers-thread-safe) that if I want to have a `set` which is thread-safe I have to implement the thread-safety part on my own. Therefore I could come up with: ``` from threading import Lock class LockedSet(set): """A set where add() and remove() are thread-safe""" def __init__(self, *args, **kwargs): # Create a lock self._lock = Lock() # Call the original __init__ super(LockedSet, self).__init__(*args, **kwargs) def add(self, elem): self._lock.acquire() try: super(LockedSet, self).add(elem) finally: self._lock.release() def remove(self, elem): self._lock.acquire() try: super(LockedSet, self).remove(elem) finally: self._lock.release() ``` So, of course only add() and remove() are thread-safe in this implementation. The other methods are not because they were not overwritten in the subclass. Now, the pattern is pretty simple: acquire lock, call original method, release lock. If I follow the logic above, I would have to overwrite all methods exposed by `set` in essentially the same way, e.g.: (pseudo-code) ``` def <method>(<args>): 1. acquire lock 2. try: 3. call original method passing <args> 4. finally: 5. release lock ``` (/pseudo-code) This is not only tedious but also prone to errors. So, any ideas/suggestions on how to approach this in a better way?
2012/11/28
[ "https://Stackoverflow.com/questions/13610654", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15931/" ]
This is my first attempt to play with decorators (although my code doesn't actually use the @decorate syntax), and I don't have much experience with multi-threading/multiprocessing. With that disclaimer, though, here's an attempt I made: ``` from multiprocessing import Lock def decorate_all(obj): lock = Lock() #you'll want to make this more robust: fnc_names = [fnctn for fnctn in dir(obj) if '__' not in fnctn] for name in fnc_names: print 'decorating ' + name fnc = getattr(obj, name) setattr(obj, name, decorate(fnc, lock)) return obj def decorate(fnctn, lock): def decorated(*args): print 'acquiring lock' lock.acquire() try: print 'calling decorated function' return fnctn(*args) finally: print 'releasing lock' lock.release() return decorated def thread_safe(superclass): lock = Lock() class Thread_Safe(superclass): def __init__(self, *args, **kwargs): super(Thread_Safe, self).__init__(*args, **kwargs) return decorate_all(Thread_Safe) >>> thread_safe_set = thread_safe(set) decorating add decorating clear decorating copy decorating difference decorating difference_update decorating discard decorating intersection decorating intersection_update decorating isdisjoint decorating issubset decorating issuperset decorating pop decorating remove decorating symmetric_difference decorating symmetric_difference_update decorating union decorating update >>> s = thread_safe_set() >>> s.add(1) acquiring lock calling decorated function releasing lock >>> s.add(4) acquiring lock calling decorated function releasing lock >>> s.pop() acquiring lock calling decorated function releasing lock 1 >>> s.pop() acquiring lock calling decorated function releasing lock 4 >>> ```
You can implement your own context manager: ``` class LockableSet: def __enter__(self): self.lock() return self def __exit__(self, exc_type, exc_value, traceback): #Do what you want with the error self.unlock() with LockableSet() as s: s.whatever() raise Exception() ``` No matter what, the object's `__exit__` method will be called at the end. More detailed information is available [here](http://docs.python.org/2/reference/compound_stmts.html#the-with-statement) (official Python docs). Another use for this could be a `lock` decorator for methods, like this: ``` def lock(func): def safe_func(self, *args, **kwargs): with self: func(self, *args, **kwargs) return safe_func ```
13,610,654
I understand [from this question](https://stackoverflow.com/questions/2227169/are-python-built-in-containers-thread-safe) that if I want to have a `set` which is thread-safe I have to implement the thread-safety part on my own. Therefore I could come up with: ``` from threading import Lock class LockedSet(set): """A set where add() and remove() are thread-safe""" def __init__(self, *args, **kwargs): # Create a lock self._lock = Lock() # Call the original __init__ super(LockedSet, self).__init__(*args, **kwargs) def add(self, elem): self._lock.acquire() try: super(LockedSet, self).add(elem) finally: self._lock.release() def remove(self, elem): self._lock.acquire() try: super(LockedSet, self).remove(elem) finally: self._lock.release() ``` So, of course only add() and remove() are thread-safe in this implementation. The other methods are not because they were not overwritten in the subclass. Now, the pattern is pretty simple: acquire lock, call original method, release lock. If I follow the logic above, I would have to overwrite all methods exposed by `set` in essentially the same way, e.g.: (pseudo-code) ``` def <method>(<args>): 1. acquire lock 2. try: 3. call original method passing <args> 4. finally: 5. release lock ``` (/pseudo-code) This is not only tedious but also prone to errors. So, any ideas/suggestions on how to approach this in a better way?
2012/11/28
[ "https://Stackoverflow.com/questions/13610654", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15931/" ]
You can use Python's metaprogramming facilities to accomplish this. (Note: written quickly and not thoroughly tested.) I prefer to use a class decorator. I also think you *may* need to lock more than `add` and `remove` to make a set thread-safe, but I'm not sure. I'll ignore that problem and just concentrate on your question. Also consider whether delegation (proxying) is a better fit than subclassing. Wrapping objects is the usual approach in Python. Finally, there is no "magic wand" of metaprogramming that will magically add fine-grained locking to any mutable Python collection. The safest thing to do is to lock *any* method or attribute access using `RLock`, but this is very coarse-grained and slow and probably still not a guarantee that your object will be thread-safe in all cases. (For example, you may have a collection that manipulates another non-threadsafe object accessible to other threads.) You really do need to examine each and every data structure and think about what operations are atomic or require locks and which methods might call other methods using the same lock (i.e., deadlock itself). That said, here are some techniques at your disposal in increasing order of abstraction: Delegation ---------- ``` class LockProxy(object): def __init__(self, obj): self.__obj = obj self.__lock = RLock() # RLock because object methods may call own methods def __getattr__(self, name): def wrapped(*a, **k): with self.__lock: getattr(self.__obj, name)(*a, **k) return wrapped lockedset = LockProxy(set([1,2,3])) ``` Context manager --------------- ``` class LockedSet(set): """A set where add(), remove(), and 'in' operator are thread-safe""" def __init__(self, *args, **kwargs): self._lock = Lock() super(LockedSet, self).__init__(*args, **kwargs) def add(self, elem): with self._lock: super(LockedSet, self).add(elem) def remove(self, elem): with self._lock: super(LockedSet, self).remove(elem) def __contains__(self, elem): with self._lock: super(LockedSet, self).__contains__(elem) ``` Decorator --------- ``` def locked_method(method): """Method decorator. Requires a lock object at self._lock""" def newmethod(self, *args, **kwargs): with self._lock: return method(self, *args, **kwargs) return newmethod class DecoratorLockedSet(set): def __init__(self, *args, **kwargs): self._lock = Lock() super(DecoratorLockedSet, self).__init__(*args, **kwargs) @locked_method def add(self, *args, **kwargs): return super(DecoratorLockedSet, self).add(elem) @locked_method def remove(self, *args, **kwargs): return super(DecoratorLockedSet, self).remove(elem) ``` Class Decorator --------------- I think this is the cleanest and easiest-to-understand of the abstract methods, so I've expanded it to allow one to specify the methods to lock and a lock object factory. ``` def lock_class(methodnames, lockfactory): return lambda cls: make_threadsafe(cls, methodnames, lockfactory) def lock_method(method): if getattr(method, '__is_locked', False): raise TypeError("Method %r is already locked!" 
% method) def locked_method(self, *arg, **kwarg): with self._lock: return method(self, *arg, **kwarg) locked_method.__name__ = '%s(%s)' % ('lock_method', method.__name__) locked_method.__is_locked = True return locked_method def make_threadsafe(cls, methodnames, lockfactory): init = cls.__init__ def newinit(self, *arg, **kwarg): init(self, *arg, **kwarg) self._lock = lockfactory() cls.__init__ = newinit for methodname in methodnames: oldmethod = getattr(cls, methodname) newmethod = lock_method(oldmethod) setattr(cls, methodname, newmethod) return cls @lock_class(['add','remove'], Lock) class ClassDecoratorLockedSet(set): @lock_method # if you double-lock a method, a TypeError is raised def frobnify(self): pass ``` Override Attribute access with `__getattribute__` ------------------------------------------------- ``` class AttrLockedSet(set): def __init__(self, *args, **kwargs): self._lock = Lock() super(AttrLockedSet, self).__init__(*args, **kwargs) def __getattribute__(self, name): if name in ['add','remove']: # note: makes a new callable object "lockedmethod" on every call # best to add a layer of memoization lock = self._lock def lockedmethod(*args, **kwargs): with lock: return super(AttrLockedSet, self).__getattribute__(name)(*args, **kwargs) return lockedmethod else: return super(AttrLockedSet, self).__getattribute__(name) ``` Dynamically-added wrapper methods with `__new__` ------------------------------------------------ ``` class NewLockedSet(set): def __new__(cls, *args, **kwargs): # modify the class by adding new unbound methods # you could also attach a single __getattribute__ like above for membername in ['add', 'remove']: def scoper(membername=membername): # You can also return the function or use a class def lockedmethod(self, *args, **kwargs): with self._lock: m = getattr(super(NewLockedSet, self), membername) return m(*args, **kwargs) lockedmethod.__name__ = membername setattr(cls, membername, lockedmethod) self = super(NewLockedSet, cls).__new__(cls, *args, **kwargs) self._lock = Lock() return self ``` Dynamically-added wrapper methods with `__metaclass__` ------------------------------------------------------ ``` def _lockname(classname): return '_%s__%s' % (classname, 'lock') class LockedClass(type): def __new__(mcls, name, bases, dict_): # we'll bind these after we add the methods cls = None def lockmethodfactory(methodname, lockattr): def lockedmethod(self, *args, **kwargs): with getattr(self, lockattr): m = getattr(super(cls, self), methodname) return m(*args,**kwargs) lockedmethod.__name__ = methodname return lockedmethod lockattr = _lockname(name) for methodname in ['add','remove']: dict_[methodname] = lockmethodfactory(methodname, lockattr) cls = type.__new__(mcls, name, bases, dict_) return cls def __call__(self, *args, **kwargs): #self is a class--i.e. 
an "instance" of the LockedClass type instance = super(LockedClass, self).__call__(*args, **kwargs) setattr(instance, _lockname(self.__name__), Lock()) return instance class MetaLockedSet(set): __metaclass__ = LockedClass ``` Dynamically-created Metaclasses ------------------------------- ``` def LockedClassMetaFactory(wrapmethods): class LockedClass(type): def __new__(mcls, name, bases, dict_): # we'll bind these after we add the methods cls = None def lockmethodfactory(methodname, lockattr): def lockedmethod(self, *args, **kwargs): with getattr(self, lockattr): m = getattr(super(cls, self), methodname) return m(*args,**kwargs) lockedmethod.__name__ = methodname return lockedmethod lockattr = _lockname(name) for methodname in wrapmethods: dict_[methodname] = lockmethodfactory(methodname, lockattr) cls = type.__new__(mcls, name, bases, dict_) return cls def __call__(self, *args, **kwargs): #self is a class--i.e. an "instance" of the LockedClass type instance = super(LockedClass, self).__call__(*args, **kwargs) setattr(instance, _lockname(self.__name__), Lock()) return instance return LockedClass class MetaFactoryLockedSet(set): __metaclass__ = LockedClassMetaFactory(['add','remove']) ``` I'll bet using a simple, explicit `try...finally` doesn't look so bad now, right? Exercise for the reader: let the caller pass in their own `Lock()` object (dependency injection) using any of these methods.
You can implement your own context manager: ``` class LockableSet: def __enter__(self): self.lock() return self def __exit__(self, exc_type, exc_value, traceback): #Do what you want with the error self.unlock() with LockableSet() as s: s.whatever() raise Exception() ``` No matter what, the object's `__exit__` method will be called at the end. More detailed information is available [here](http://docs.python.org/2/reference/compound_stmts.html#the-with-statement) (official Python docs). Another use for this could be a `lock` decorator for methods, like this: ``` def lock(func): def safe_func(self, *args, **kwargs): with self: func(self, *args, **kwargs) return safe_func ```
13,610,654
I understand [from this question](https://stackoverflow.com/questions/2227169/are-python-built-in-containers-thread-safe) that if I want to have a `set` which is thread-safe I have to implement the thread-safety part on my own. Therefore I could come up with: ``` from threading import Lock class LockedSet(set): """A set where add() and remove() are thread-safe""" def __init__(self, *args, **kwargs): # Create a lock self._lock = Lock() # Call the original __init__ super(LockedSet, self).__init__(*args, **kwargs) def add(self, elem): self._lock.acquire() try: super(LockedSet, self).add(elem) finally: self._lock.release() def remove(self, elem): self._lock.acquire() try: super(LockedSet, self).remove(elem) finally: self._lock.release() ``` So, of course only add() and remove() are thread-safe in this implementation. The other methods are not because they were not overwritten in the subclass. Now, the pattern is pretty simple: acquire lock, call original method, release lock. If I follow the logic above, I would have to overwrite all methods exposed by `set` in essentially the same way, e.g.: (pseudo-code) ``` def <method>(<args>): 1. acquire lock 2. try: 3. call original method passing <args> 4. finally: 5. release lock ``` (/pseudo-code) This is not only tedious but also prone to errors. So, any ideas/suggestions on how to approach this in a better way?
2012/11/28
[ "https://Stackoverflow.com/questions/13610654", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15931/" ]
You can use Python's metaprogramming facilities to accomplish this. (Note: written quickly and not thoroughly tested.) I prefer to use a class decorator. I also think you *may* need to lock more than `add` and `remove` to make a set thread-safe, but I'm not sure. I'll ignore that problem and just concentrate on your question. Also consider whether delegation (proxying) is a better fit than subclassing. Wrapping objects is the usual approach in Python. Finally, there is no "magic wand" of metaprogramming that will magically add fine-grained locking to any mutable Python collection. The safest thing to do is to lock *any* method or attribute access using `RLock`, but this is very coarse-grained and slow and probably still not a guarantee that your object will be thread-safe in all cases. (For example, you may have a collection that manipulates another non-threadsafe object accessible to other threads.) You really do need to examine each and every data structure and think about what operations are atomic or require locks and which methods might call other methods using the same lock (i.e., deadlock itself). That said, here are some techniques at your disposal in increasing order of abstraction: Delegation ---------- ``` class LockProxy(object): def __init__(self, obj): self.__obj = obj self.__lock = RLock() # RLock because object methods may call own methods def __getattr__(self, name): def wrapped(*a, **k): with self.__lock: getattr(self.__obj, name)(*a, **k) return wrapped lockedset = LockProxy(set([1,2,3])) ``` Context manager --------------- ``` class LockedSet(set): """A set where add(), remove(), and 'in' operator are thread-safe""" def __init__(self, *args, **kwargs): self._lock = Lock() super(LockedSet, self).__init__(*args, **kwargs) def add(self, elem): with self._lock: super(LockedSet, self).add(elem) def remove(self, elem): with self._lock: super(LockedSet, self).remove(elem) def __contains__(self, elem): with self._lock: super(LockedSet, self).__contains__(elem) ``` Decorator --------- ``` def locked_method(method): """Method decorator. Requires a lock object at self._lock""" def newmethod(self, *args, **kwargs): with self._lock: return method(self, *args, **kwargs) return newmethod class DecoratorLockedSet(set): def __init__(self, *args, **kwargs): self._lock = Lock() super(DecoratorLockedSet, self).__init__(*args, **kwargs) @locked_method def add(self, *args, **kwargs): return super(DecoratorLockedSet, self).add(elem) @locked_method def remove(self, *args, **kwargs): return super(DecoratorLockedSet, self).remove(elem) ``` Class Decorator --------------- I think this is the cleanest and easiest-to-understand of the abstract methods, so I've expanded it to allow one to specify the methods to lock and a lock object factory. ``` def lock_class(methodnames, lockfactory): return lambda cls: make_threadsafe(cls, methodnames, lockfactory) def lock_method(method): if getattr(method, '__is_locked', False): raise TypeError("Method %r is already locked!" 
% method) def locked_method(self, *arg, **kwarg): with self._lock: return method(self, *arg, **kwarg) locked_method.__name__ = '%s(%s)' % ('lock_method', method.__name__) locked_method.__is_locked = True return locked_method def make_threadsafe(cls, methodnames, lockfactory): init = cls.__init__ def newinit(self, *arg, **kwarg): init(self, *arg, **kwarg) self._lock = lockfactory() cls.__init__ = newinit for methodname in methodnames: oldmethod = getattr(cls, methodname) newmethod = lock_method(oldmethod) setattr(cls, methodname, newmethod) return cls @lock_class(['add','remove'], Lock) class ClassDecoratorLockedSet(set): @lock_method # if you double-lock a method, a TypeError is raised def frobnify(self): pass ``` Override Attribute access with `__getattribute__` ------------------------------------------------- ``` class AttrLockedSet(set): def __init__(self, *args, **kwargs): self._lock = Lock() super(AttrLockedSet, self).__init__(*args, **kwargs) def __getattribute__(self, name): if name in ['add','remove']: # note: makes a new callable object "lockedmethod" on every call # best to add a layer of memoization lock = self._lock def lockedmethod(*args, **kwargs): with lock: return super(AttrLockedSet, self).__getattribute__(name)(*args, **kwargs) return lockedmethod else: return super(AttrLockedSet, self).__getattribute__(name) ``` Dynamically-added wrapper methods with `__new__` ------------------------------------------------ ``` class NewLockedSet(set): def __new__(cls, *args, **kwargs): # modify the class by adding new unbound methods # you could also attach a single __getattribute__ like above for membername in ['add', 'remove']: def scoper(membername=membername): # You can also return the function or use a class def lockedmethod(self, *args, **kwargs): with self._lock: m = getattr(super(NewLockedSet, self), membername) return m(*args, **kwargs) lockedmethod.__name__ = membername setattr(cls, membername, lockedmethod) self = super(NewLockedSet, cls).__new__(cls, *args, **kwargs) self._lock = Lock() return self ``` Dynamically-added wrapper methods with `__metaclass__` ------------------------------------------------------ ``` def _lockname(classname): return '_%s__%s' % (classname, 'lock') class LockedClass(type): def __new__(mcls, name, bases, dict_): # we'll bind these after we add the methods cls = None def lockmethodfactory(methodname, lockattr): def lockedmethod(self, *args, **kwargs): with getattr(self, lockattr): m = getattr(super(cls, self), methodname) return m(*args,**kwargs) lockedmethod.__name__ = methodname return lockedmethod lockattr = _lockname(name) for methodname in ['add','remove']: dict_[methodname] = lockmethodfactory(methodname, lockattr) cls = type.__new__(mcls, name, bases, dict_) return cls def __call__(self, *args, **kwargs): #self is a class--i.e. 
an "instance" of the LockedClass type instance = super(LockedClass, self).__call__(*args, **kwargs) setattr(instance, _lockname(self.__name__), Lock()) return instance class MetaLockedSet(set): __metaclass__ = LockedClass ``` Dynamically-created Metaclasses ------------------------------- ``` def LockedClassMetaFactory(wrapmethods): class LockedClass(type): def __new__(mcls, name, bases, dict_): # we'll bind these after we add the methods cls = None def lockmethodfactory(methodname, lockattr): def lockedmethod(self, *args, **kwargs): with getattr(self, lockattr): m = getattr(super(cls, self), methodname) return m(*args,**kwargs) lockedmethod.__name__ = methodname return lockedmethod lockattr = _lockname(name) for methodname in wrapmethods: dict_[methodname] = lockmethodfactory(methodname, lockattr) cls = type.__new__(mcls, name, bases, dict_) return cls def __call__(self, *args, **kwargs): #self is a class--i.e. an "instance" of the LockedClass type instance = super(LockedClass, self).__call__(*args, **kwargs) setattr(instance, _lockname(self.__name__), Lock()) return instance return LockedClass class MetaFactoryLockedSet(set): __metaclass__ = LockedClassMetaFactory(['add','remove']) ``` I'll bet using a simple, explicit `try...finally` doesn't look so bad now, right? Exercise for the reader: let the caller pass in their own `Lock()` object (dependency injection) using any of these methods.
[Indeed, see the comments, it is not true] If you are running CPython, you can see from the set source code that it doesn't release the GIL (http://hg.python.org/cpython/file/db20367b20de/Objects/setobject.c), so all its operations should be atomic. If that is all you need and you are sure your code will run on CPython, you can just use it directly.
13,610,654
I understand [from this question](https://stackoverflow.com/questions/2227169/are-python-built-in-containers-thread-safe) that if I want to have a `set` which is thread-safe I have to implement the thread-safety part on my own. Therefore I could come up with: ``` from threading import Lock class LockedSet(set): """A set where add() and remove() are thread-safe""" def __init__(self, *args, **kwargs): # Create a lock self._lock = Lock() # Call the original __init__ super(LockedSet, self).__init__(*args, **kwargs) def add(self, elem): self._lock.acquire() try: super(LockedSet, self).add(elem) finally: self._lock.release() def remove(self, elem): self._lock.acquire() try: super(LockedSet, self).remove(elem) finally: self._lock.release() ``` So, of course only add() and remove() are thread-safe in this implementation. The other methods are not because they were not overwritten in the subclass. Now, the pattern is pretty simple: acquire lock, call original method, release lock. If I follow the logic above, I would have to overwrite all methods exposed by `set` in essentially the same way, e.g.: (pseudo-code) ``` def <method>(<args>): 1. acquire lock 2. try: 3. call original method passing <args> 4. finally: 5. release lock ``` (/pseudo-code) This is not only tedious but also prone to errors. So, any ideas/suggestions on how to approach this in a better way?
2012/11/28
[ "https://Stackoverflow.com/questions/13610654", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15931/" ]
You can use Python's metaprogramming facilities to accomplish this. (Note: written quickly and not thoroughly tested.) I prefer to use a class decorator. I also think you *may* need to lock more than `add` and `remove` to make a set thread-safe, but I'm not sure. I'll ignore that problem and just concentrate on your question. Also consider whether delegation (proxying) is a better fit than subclassing. Wrapping objects is the usual approach in Python. Finally, there is no "magic wand" of metaprogramming that will magically add fine-grained locking to any mutable Python collection. The safest thing to do is to lock *any* method or attribute access using `RLock`, but this is very coarse-grained and slow and probably still not a guarantee that your object will be thread-safe in all cases. (For example, you may have a collection that manipulates another non-threadsafe object accessible to other threads.) You really do need to examine each and every data structure and think about what operations are atomic or require locks and which methods might call other methods using the same lock (i.e., deadlock itself). That said, here are some techniques at your disposal in increasing order of abstraction: Delegation ---------- ``` class LockProxy(object): def __init__(self, obj): self.__obj = obj self.__lock = RLock() # RLock because object methods may call own methods def __getattr__(self, name): def wrapped(*a, **k): with self.__lock: getattr(self.__obj, name)(*a, **k) return wrapped lockedset = LockProxy(set([1,2,3])) ``` Context manager --------------- ``` class LockedSet(set): """A set where add(), remove(), and 'in' operator are thread-safe""" def __init__(self, *args, **kwargs): self._lock = Lock() super(LockedSet, self).__init__(*args, **kwargs) def add(self, elem): with self._lock: super(LockedSet, self).add(elem) def remove(self, elem): with self._lock: super(LockedSet, self).remove(elem) def __contains__(self, elem): with self._lock: super(LockedSet, self).__contains__(elem) ``` Decorator --------- ``` def locked_method(method): """Method decorator. Requires a lock object at self._lock""" def newmethod(self, *args, **kwargs): with self._lock: return method(self, *args, **kwargs) return newmethod class DecoratorLockedSet(set): def __init__(self, *args, **kwargs): self._lock = Lock() super(DecoratorLockedSet, self).__init__(*args, **kwargs) @locked_method def add(self, *args, **kwargs): return super(DecoratorLockedSet, self).add(elem) @locked_method def remove(self, *args, **kwargs): return super(DecoratorLockedSet, self).remove(elem) ``` Class Decorator --------------- I think this is the cleanest and easiest-to-understand of the abstract methods, so I've expanded it to allow one to specify the methods to lock and a lock object factory. ``` def lock_class(methodnames, lockfactory): return lambda cls: make_threadsafe(cls, methodnames, lockfactory) def lock_method(method): if getattr(method, '__is_locked', False): raise TypeError("Method %r is already locked!" 
% method) def locked_method(self, *arg, **kwarg): with self._lock: return method(self, *arg, **kwarg) locked_method.__name__ = '%s(%s)' % ('lock_method', method.__name__) locked_method.__is_locked = True return locked_method def make_threadsafe(cls, methodnames, lockfactory): init = cls.__init__ def newinit(self, *arg, **kwarg): init(self, *arg, **kwarg) self._lock = lockfactory() cls.__init__ = newinit for methodname in methodnames: oldmethod = getattr(cls, methodname) newmethod = lock_method(oldmethod) setattr(cls, methodname, newmethod) return cls @lock_class(['add','remove'], Lock) class ClassDecoratorLockedSet(set): @lock_method # if you double-lock a method, a TypeError is raised def frobnify(self): pass ``` Override Attribute access with `__getattribute__` ------------------------------------------------- ``` class AttrLockedSet(set): def __init__(self, *args, **kwargs): self._lock = Lock() super(AttrLockedSet, self).__init__(*args, **kwargs) def __getattribute__(self, name): if name in ['add','remove']: # note: makes a new callable object "lockedmethod" on every call # best to add a layer of memoization lock = self._lock def lockedmethod(*args, **kwargs): with lock: return super(AttrLockedSet, self).__getattribute__(name)(*args, **kwargs) return lockedmethod else: return super(AttrLockedSet, self).__getattribute__(name) ``` Dynamically-added wrapper methods with `__new__` ------------------------------------------------ ``` class NewLockedSet(set): def __new__(cls, *args, **kwargs): # modify the class by adding new unbound methods # you could also attach a single __getattribute__ like above for membername in ['add', 'remove']: def scoper(membername=membername): # You can also return the function or use a class def lockedmethod(self, *args, **kwargs): with self._lock: m = getattr(super(NewLockedSet, self), membername) return m(*args, **kwargs) lockedmethod.__name__ = membername setattr(cls, membername, lockedmethod) self = super(NewLockedSet, cls).__new__(cls, *args, **kwargs) self._lock = Lock() return self ``` Dynamically-added wrapper methods with `__metaclass__` ------------------------------------------------------ ``` def _lockname(classname): return '_%s__%s' % (classname, 'lock') class LockedClass(type): def __new__(mcls, name, bases, dict_): # we'll bind these after we add the methods cls = None def lockmethodfactory(methodname, lockattr): def lockedmethod(self, *args, **kwargs): with getattr(self, lockattr): m = getattr(super(cls, self), methodname) return m(*args,**kwargs) lockedmethod.__name__ = methodname return lockedmethod lockattr = _lockname(name) for methodname in ['add','remove']: dict_[methodname] = lockmethodfactory(methodname, lockattr) cls = type.__new__(mcls, name, bases, dict_) return cls def __call__(self, *args, **kwargs): #self is a class--i.e. 
an "instance" of the LockedClass type instance = super(LockedClass, self).__call__(*args, **kwargs) setattr(instance, _lockname(self.__name__), Lock()) return instance class MetaLockedSet(set): __metaclass__ = LockedClass ``` Dynamically-created Metaclasses ------------------------------- ``` def LockedClassMetaFactory(wrapmethods): class LockedClass(type): def __new__(mcls, name, bases, dict_): # we'll bind these after we add the methods cls = None def lockmethodfactory(methodname, lockattr): def lockedmethod(self, *args, **kwargs): with getattr(self, lockattr): m = getattr(super(cls, self), methodname) return m(*args,**kwargs) lockedmethod.__name__ = methodname return lockedmethod lockattr = _lockname(name) for methodname in wrapmethods: dict_[methodname] = lockmethodfactory(methodname, lockattr) cls = type.__new__(mcls, name, bases, dict_) return cls def __call__(self, *args, **kwargs): #self is a class--i.e. an "instance" of the LockedClass type instance = super(LockedClass, self).__call__(*args, **kwargs) setattr(instance, _lockname(self.__name__), Lock()) return instance return LockedClass class MetaFactoryLockedSet(set): __metaclass__ = LockedClassMetaFactory(['add','remove']) ``` I'll bet using a simple, explicit `try...finally` doesn't look so bad now, right? Exercise for the reader: let the caller pass in their own `Lock()` object (dependency injection) using any of these methods.
This is my first attempt to play with decorators (although my code doesn't actually use the @decorate syntax), and I don't have much experience with multi-threading/multiprocessing. With that disclaimer, though, here's an attempt I made: ``` from multiprocessing import Lock def decorate_all(obj): lock = Lock() #you'll want to make this more robust: fnc_names = [fnctn for fnctn in dir(obj) if '__' not in fnctn] for name in fnc_names: print 'decorating ' + name fnc = getattr(obj, name) setattr(obj, name, decorate(fnc, lock)) return obj def decorate(fnctn, lock): def decorated(*args): print 'acquiring lock' lock.acquire() try: print 'calling decorated function' return fnctn(*args) finally: print 'releasing lock' lock.release() return decorated def thread_safe(superclass): lock = Lock() class Thread_Safe(superclass): def __init__(self, *args, **kwargs): super(Thread_Safe, self).__init__(*args, **kwargs) return decorate_all(Thread_Safe) >>> thread_safe_set = thread_safe(set) decorating add decorating clear decorating copy decorating difference decorating difference_update decorating discard decorating intersection decorating intersection_update decorating isdisjoint decorating issubset decorating issuperset decorating pop decorating remove decorating symmetric_difference decorating symmetric_difference_update decorating union decorating update >>> s = thread_safe_set() >>> s.add(1) acquiring lock calling decorated function releasing lock >>> s.add(4) acquiring lock calling decorated function releasing lock >>> s.pop() acquiring lock calling decorated function releasing lock 1 >>> s.pop() acquiring lock calling decorated function releasing lock 4 >>> ```
11,965,655
I have created a utility program for handling the file copy process in Python. Everything is working nicely, but when I start copying any file larger than 2 GB, the whole system hangs. It seems to me that it might be a memory leak issue. I have tried: * Copying it using the shutil module * Using a lazy operation by copying chunks of bytes * Copying the file's data LINE by LINE * Using the fileinput module * Adjusting the buffer a million times * Writing the copy part in C and then extending it with Python ...but none of this has worked. Here are links to my [File Script](http://dl.dropbox.com/u/16634607/schizocopy.py) and my [GUI Script](http://sourceforge.net/projects/schizocopy/files/latest/download): I'm using Windows 7 with 2 GB of RAM. Can anyone help please?
2012/08/15
[ "https://Stackoverflow.com/questions/11965655", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1599825/" ]
Since you only have 2 GB of memory, copying a file that's larger than your memory causes issues if you load the whole thing at once. Don't load the entire file into memory. Instead, I would do something like: ``` with open(myLargeFile) as f: with open(myOtherLargeFile, "w") as fo: for line in f: fo.write(line) ``` Since this can potentially take a long time, you should put this into a separate thread from your GUI or the GUI will appear to hang. Here are a couple of links on that topic for wxPython: * <http://www.blog.pythonlibrary.org/2010/05/22/wxpython-and-threads/> * <http://wiki.wxpython.org/LongRunningTasks>
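As a rough sketch of the threading advice above (the file names and the `copy_file` helper are placeholders, not taken from the asker's scripts; a real GUI app would also need a way to report progress back to the main thread):

```py
# Run the copy in a background thread so the GUI event loop stays responsive.
import threading

def copy_file(src, dst):
    with open(src, 'rb') as f, open(dst, 'wb') as fo:
        # read in fixed-size chunks instead of loading the whole file
        for chunk in iter(lambda: f.read(1024 * 1024), b''):
            fo.write(chunk)

worker = threading.Thread(target=copy_file, args=('myLargeFile', 'myOtherLargeFile'))
worker.start()
# poll worker.is_alive() from a GUI timer to know when the copy has finished
```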
A good approach for this problem is: * use multiprocessing or multithreading * split the file into chunks * use Python's dbm module to store which chunk belongs to which filename, file path and chunk offset (for the file.seek function) * create a queue for read and write chunks
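To illustrate the chunk-and-queue part of this outline, here is a rough sketch (illustrative file names only; the dbm bookkeeping of chunk offsets mentioned above is left out). A bounded queue keeps only a few chunks in memory at a time while one thread reads and another writes:

```py
import threading
from queue import Queue  # on Python 2 this would be: from Queue import Queue

CHUNK_SIZE = 1024 * 1024      # 1 MB per chunk
chunks = Queue(maxsize=8)     # bounded, so at most ~8 MB is held in RAM

def reader(src):
    with open(src, 'rb') as f:
        while True:
            data = f.read(CHUNK_SIZE)
            chunks.put(data)  # an empty bytes object signals end of file
            if not data:
                break

def writer(dst):
    with open(dst, 'wb') as f:
        while True:
            data = chunks.get()
            if not data:
                break
            f.write(data)

t_read = threading.Thread(target=reader, args=('source.bin',))
t_write = threading.Thread(target=writer, args=('dest.bin',))
t_read.start(); t_write.start()
t_read.join(); t_write.join()
```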
48,441,737
I have a Raspberry Pi and I have installed Docker on it. I have made a Python script to read the GPIO status. So when I run the below command ``` sudo docker run -it --device /dev/gpiomem app-image ``` it runs perfectly and shows the GPIO status. Now I have created a `docker-compose.yml` file as I want to deploy this `app.py` to the swarm cluster which I have created. Below is the content of `docker-compose.yml` ``` version: "3" services: app: image: app-image deploy: mode: global restart: always privileged: true ``` When I start the deployment using the `sudo docker stack deploy` command, the image is deployed but it gives an error: ``` No access to /dev/mem. Try running as root ``` So it says that it does not have access to `/dev/mem`, but this is very strange since I am using `device`, so why does the service not have access? It also says to try running as root, but I think all the containers run as root already. I also tried giving full permissions to the file by including the command `chmod 777 /dev/gpiomem` in the code, but it still shows this error. My main question is: when it runs normally using the `docker run` command, why does it show an error when deploying the `docker-compose` file with `sudo docker stack deploy`? How can I resolve this issue? Thanks
2018/01/25
[ "https://Stackoverflow.com/questions/48441737", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9267000/" ]
Adding devices, capabilities, and using privileged mode are not supported in swarm mode. Those options in the yml file exist for using `docker-compose` instead of `docker stack deploy`. You can track the progress on getting these features added to swarm mode in [github issue #24862](https://github.com/moby/moby/issues/24862). Since all you need to do is access a device, you may have luck adding the file for the device as a volume, but that's a shot in the dark: ``` volumes: - /dev/gpiomem:/dev/gpiomem ```
As stated in [docker-compose devices](https://docs.docker.com/compose/compose-file/#devices) > > Note: This option is ignored when deploying a stack in swarm mode with > a (version 3) Compose file. > > > The devices option is ignored in swarm. You can use `privileged: true` which will give access to all devices.
19,331,093
I am trying to create a legend for a plot with variable sets of data. There are at least 2, and at most 5. The first two will always be there, but the other three are optional, so how can I create a legend for only the existing number of data sets? I've tried if-statements to tell Python what to do if that variable doesn't exist, but to no avail. Perhaps this is not the proper way to determine variable existence. ``` line1 = os.path.basename(str(os.path.splitext(selectedFiles[0])[0])) line2 = os.path.basename(str(os.path.splitext(selectedFiles[1])[0])) if selectedFiles[2] in locals: line3 = os.path.basename(str(os.path.splitext(selectedFiles[2])[0])) else: line3 = None if selectedFiles[3] in locals: line4 = os.path.basename(str(os.path.splitext(selectedFiles[3])[0])) else: line4 = None if selectedFiles[4] in locals: line5 = os.path.basename(str(os.path.splitext(selectedFiles[4])[0])) else:line5 = None legend((line1, line2, line3, line4, line5), loc='upper left') ``` Here is the error I am getting: ``` if selectedFiles[2] in locals: IndexError: tuple index out of range ``` It is possible that there are multiple issues with this code (not sure if the "None" is the right way to handle the non-existent data). Please bear in mind that I'm new to Python with little programming experience otherwise, so bear with me and try not to condescend, as some more experienced users tend to do.
2013/10/12
[ "https://Stackoverflow.com/questions/19331093", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2873277/" ]
Because selectedFiles is a tuple and the logic for processing each item inside it is the same, you can iterate over it with a for loop. ``` lines = [os.path.basename(str(os.path.splitext(filename)[0])) for filename in selectedFiles] #extend lines' length to 5 and fill the space with None lines = lines + [None] * (5-len(lines)) legend(lines,loc='upper left') ```
I have no idea what your data structures look like, but it looks like you just want ``` lines = (os.path.basename(str(os.path.splitext(x)[0])) for x in selectedFiles) legend(lines, loc='upper left') ```
19,331,093
I am trying to create a legend for a plot with variable sets of data. There are at least 2, and at most 5. The first two will always be there, but the other three are optional, so how can I create a legend for only the existing number of data sets? I've tried if-statements to tell Python what to do if that variable doesn't exist, but to no avail. Perhaps this is not the proper way to determine variable existence. ``` line1 = os.path.basename(str(os.path.splitext(selectedFiles[0])[0])) line2 = os.path.basename(str(os.path.splitext(selectedFiles[1])[0])) if selectedFiles[2] in locals: line3 = os.path.basename(str(os.path.splitext(selectedFiles[2])[0])) else: line3 = None if selectedFiles[3] in locals: line4 = os.path.basename(str(os.path.splitext(selectedFiles[3])[0])) else: line4 = None if selectedFiles[4] in locals: line5 = os.path.basename(str(os.path.splitext(selectedFiles[4])[0])) else:line5 = None legend((line1, line2, line3, line4, line5), loc='upper left') ``` Here is the error I am getting: ``` if selectedFiles[2] in locals: IndexError: tuple index out of range ``` It is possible that there are multiple issues with this code (not sure if the "None" is the right way to handle the non-existent data). Please bear in mind that I'm new to Python with little programming experience otherwise, so bear with me and try not to condescend, as some more experienced users tend to do.
2013/10/12
[ "https://Stackoverflow.com/questions/19331093", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2873277/" ]
I have no idea what your data structures look like, but it looks like you just want ``` lines = (os.path.basename(str(os.path.splitext(x)[0])) for x in selectedFiles) legend(lines, loc='upper left') ```
I would prefer this way. ``` line = [] for i in range(5): if i < len(selectedFiles): line.append(os.path.basename(str(os.path.splitext(selectedFiles[i])[0]))) else: line.append(None) legend(tuple(line), loc='upper left') ``` or you can always use `except IndexError:`.
19,331,093
I am trying to create a legend for a plot with variable sets of data. There are at least 2, and at most 5. The first two will always be there, but the other three are optional, so how can I create a legend for only the existing number of data sets? I've tried if-statements to tell Python what to do if that variable doesn't exist, but to no avail. Perhaps this is not the proper way to determine variable existence. ``` line1 = os.path.basename(str(os.path.splitext(selectedFiles[0])[0])) line2 = os.path.basename(str(os.path.splitext(selectedFiles[1])[0])) if selectedFiles[2] in locals: line3 = os.path.basename(str(os.path.splitext(selectedFiles[2])[0])) else: line3 = None if selectedFiles[3] in locals: line4 = os.path.basename(str(os.path.splitext(selectedFiles[3])[0])) else: line4 = None if selectedFiles[4] in locals: line5 = os.path.basename(str(os.path.splitext(selectedFiles[4])[0])) else:line5 = None legend((line1, line2, line3, line4, line5), loc='upper left') ``` Here is the error I am getting: ``` if selectedFiles[2] in locals: IndexError: tuple index out of range ``` It is possible that there are multiple issues with this code (not sure if the "None" is the right way to handle the non-existent data). Please bear in mind that I'm new to Python with little programming experience otherwise, so bear with me and try not to condescend, as some more experienced users tend to do.
2013/10/12
[ "https://Stackoverflow.com/questions/19331093", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2873277/" ]
Because selectedFiles is a tuple and the logic for processing each item inside it is the same, you can iterate over it with a for loop. ``` lines = [os.path.basename(str(os.path.splitext(filename)[0])) for filename in selectedFiles] #extend lines' length to 5 and fill the space with None lines = lines + [None] * (5-len(lines)) legend(lines,loc='upper left') ```
I would prefer this way. ``` line = [] for i in range(5): if i < len(selectedFiles): line.append(os.path.basename(str(os.path.splitext(selectedFiles[i])[0]))) else: line.append(None) legend(tuple(line), loc='upper left') ``` or you can always use `except IndexError:`.
56,106,783
I am building a dockerfile with the `docker build .` command. While building, I am experiencing the following error: ``` Downloading/unpacking requests Cannot fetch index base URL http://pypi.python.org/simple/ Could not find any downloads that satisfy the requirement requests No distributions at all found for requests ``` Here is the dockerfile: ``` FROM jonasbonno/rpi-grovepi RUN pip install requests RUN git clone https://github.com/keyban/fogservice.git #update ENTRYPOINT ["python"] CMD ["fogservice/service.py"] ``` What might be the problem?
2019/05/13
[ "https://Stackoverflow.com/questions/56106783", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11439964/" ]
You have a pip problem, not a Docker problem; you need to add `pip install --index-url https://pypi.python.org/simple/ --upgrade pip` to your Dockerfile: ``` FROM jonasbonno/rpi-grovepi RUN pip install --index-url https://pypi.python.org/simple/ --upgrade pip RUN hash -r RUN pip install requests RUN git clone https://github.com/keyban/fogservice.git #update ENTRYPOINT ["python"] CMD ["fogservice/service.py"] ``` You can find the solution here: [pip connection failure: cannot fetch index base URL http://pypi.python.org/simple/](https://stackoverflow.com/questions/21294997/pip-connection-failure-cannot-fetch-index-base-url-http-pypi-python-org-simpl)
#### Legacy problem In Python 2.7, a pip installer of *Pylons* threw the same error. I then read somewhere that upgrading *pip* could help, and doing so in the bash of the container, you get the same error again, now for *pip* itself: ```bash Cannot fetch index base URL https://pypi.python.org/simple/ Could not find any downloads that satisfy the requirement pip in /usr/lib/python2.7/dist-packages Downloading/unpacking pip Cleaning up... No distributions at all found for pip in /usr/lib/python2.7/dist-packages Storing debug log for failure in /root/.pip/pip.log ``` And when you try upgrading pip in the Dockerfile, you get the same output, but at the end: ```bash The command '/bin/sh -c pip install --index-url https://pypi.python.org/simple/ --upgrade pip' returned a non-zero code: 1 ``` See also the two comments at the end of [pip connection failure: cannot fetch index base URL http://pypi.python.org/simple/](https://stackoverflow.com/a/46963820/11154841) The error is thrown only from within your container / when building the image, while in your workspace, you can install the upgrade or the packages. The container's pip does not have the rights to download from the internet since the [TLS security changed](https://stackoverflow.com/a/49901622/11154841). #### Answer 1: load from your own *pip* server I saw a fix in a legacy docker setup, likely for the same reason as in the question. This is not for sure, though, I have not done it myself: Download the *pip* installers of the packages you need in your workspace, then upload these *pip* installers to a local server with your own certificate, and then add that server website with your own setup to a python `setup.py` file by means of `from setuptools import setup, find_packages`, then calling the built-in `setup(...)` function, to *pip install* the needed packages. In short: Do not upgrade the *pip* installer for this, instead, just have the needed tarballs on your own server to get them without TLS problems, and take the `setuptools` module in a python file for this. #### Answer 2: take a younger ubuntu base image in the Dockerfile (recommended) Neither upgrading pip (which should never make sense in any Dockerfile) nor uploading outdated pip installers into your own server and fetching them with a `setuptools` python module will be good. At such a legacy project, try changing the base image, and perhaps even the Python version. I changed the line from `FROM ubuntu:14.04` to 15.04, 16.04, and then 18.04 worked to upgrade pip (which was just a test, I do not need to upgrade it anyway). Then I tried `pip install Pylons` and all of the other needed packages in the bash of the container, worked. When all of the versions were clear and the app worked, I added the pip installations to a requirements file that gets loaded in the Dockerfile.
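For what it's worth, here is a rough sketch of the `setup.py` idea described in Answer 1 (the package name, version, and internal index URL are placeholders; `dependency_links` is a legacy setuptools feature that modern pip no longer honours, which fits the Python 2.7 setting above):

```py
# Minimal setup.py that pulls dependencies from your own package server.
from setuptools import setup, find_packages

setup(
    name='myservice',
    version='0.1',
    packages=find_packages(),
    install_requires=['Pylons'],
    # URL(s) on your own server where the pip tarballs were uploaded
    dependency_links=['https://pypi.internal.example/simple/pylons/'],
)
```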
19,312,270
Is there any way to break a string based on a punctuation-word? ``` #!/usr/bin/python #Asking user to Enter a line in specified format myString=raw_input('Enter your String:\nFor Example:I am doctor break I stays in CA break you can contact me on +000000\n') # 'break' is punctuation word <my code which breaks the user input based on break word and returns output in different lists> ``` I am expecting output like `String1:I am doctor` `String2:I stays in CA` `String3:you can contact me on +000000`
2013/10/11
[ "https://Stackoverflow.com/questions/19312270", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1699472/" ]
If you have written your `java-script` code then make sure that you `return false` from the js code if it is not valid, and in the aspx file you need to use return as follows ``` <asp:Button runat="server" id="btnLogin" Text="Login" OnClientClick="return Validate()"/> ``` Edit -1 ------- There is a chance that your button is not the default one, so the page gets refreshed when you press the enter button. For this, use the code below ``` <asp:Panel ID="p" runat="server" DefaultButton="myButton"> <%-- Text boxes here --%> <asp:Button ID="myButton" runat="server" /> </asp:Panel> ``` Some useful links: 1. <http://www.hanselman.com/blog/ASPNETHowToCreateADefaultEnterButtonForFormsPostBacks.aspx> 2. [how to set a default 'enter' on a certain button](https://stackoverflow.com/questions/7638119/how-to-set-a-default-enter-on-a-certain-button)
You need to add an attribute to the button (btnLogin), `OnClientClick="Validate()"`. Like: ``` <asp:Button runat="server" id="btnLogin" Text="Login" OnClientClick="Validate()"/> ``` Define a javascript function `Validate()` and return false if your form value is not valid.
19,312,270
Is there any way to break a string based on a punctuation-word? ``` #!/usr/bin/python #Asking user to Enter a line in specified format myString=raw_input('Enter your String:\nFor Example:I am doctor break I stays in CA break you can contact me on +000000\n') # 'break' is punctuation word <my code which breaks the user input based on break word and returns output in different lists> ``` I am expecting output like `String1:I am doctor` `String2:I stays in CA` `String3:you can contact me on +000000`
2013/10/11
[ "https://Stackoverflow.com/questions/19312270", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1699472/" ]
you want to validate the value of textbox when user clicks enter and focus is in textbox then use the below code: ``` <table> <tr> <td> Username:> </td> <td> <asp:TextBox CssClass="save" runat="server" ID="txtUsername" /> </td> <td> Password:> </td> <td> <asp:TextBox CssClass="save" runat="server" ID="txtPassword" /> </td> <td> Username:> </td> <td> <asp:Button CssClass="save" runat="server" ID="btnLogin" Text="Login" /> </td> </tr> </table> <asp:Button ID="submit" runat="server" Text="submit" /> ``` and jquery code: ``` <script type="text/javascript"> $('.save').live('keydown', function (event) { if (event.keyCode == '13') { alert('heee'); return false; event.preventDefault(); } }); </script> ``` Or if you want to validate the controls on enter button click only then use the below code: ``` <asp:Panel ID="pnl" runat="server" DefaultButton="submit"> <table> <tr> <td> Username:> </td> <td> <asp:TextBox CssClass="save" runat="server" ID="txtUsername" /> </td> <td> Password:> </td> <td> <asp:TextBox CssClass="save" runat="server" ID="txtPassword" /> </td> <td> Username:> </td> <td> <asp:Button CssClass="save" runat="server" ID="btnLogin" Text="Login" /> </td> </tr> </table> <asp:Button ID="submit" runat="server" Text="submit" /> </asp:Panel> ``` and jquery code is: ``` $('[id$=submit]').click(function (event) { if (event.keyCode == '13') { alert('heee'); return false; event.preventDefault(); } }); ``` that's all
You need to add an attribute `OnClientClick="Validate()"` to the button (btnLogin). Like:

```
<asp:Button runat="server" id="btnLogin" Text="Login" OnClientClick="Validate()"/>
```

Define a javascript function `Validate()` and return false if your form value is not valid.
19,312,270
Is there any way to break string based on punctuation-word ``` #!/usr/bin/python #Asking user to Enter a line in specified format myString=raw_input('Enter your String:\nFor Example:I am doctor break I stays in CA break you can contact me on +000000\n') # 'break' is punctuation word <my code which breaks the user input based on break word and returns output in different lists> ``` Expecting output like `String1:I am doctor` `String2:I stays in CA` `String2:you can contact me on +000000`
2013/10/11
[ "https://Stackoverflow.com/questions/19312270", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1699472/" ]
If you have written your `java-script` code, then make sure that you `return false` from the js code if it is not valid, and in the aspx file you need to use return as follows:

```
<asp:Button runat="server" id="btnLogin" Text="Login" OnClientClick="return Validate()"/>
```

Edit -1
-------

There is a chance that your button is not the default one, so the page gets refreshed when you press the enter button. For this, use the code below:

```
<asp:Panel ID="p" runat="server" DefaultButton="myButton">
     <%-- Text boxes here --%>
     <asp:Button ID="myButton" runat="server" />
</asp:Panel>
```

Some useful links:

1. <http://www.hanselman.com/blog/ASPNETHowToCreateADefaultEnterButtonForFormsPostBacks.aspx>
2. [how to set a default 'enter' on a certain button](https://stackoverflow.com/questions/7638119/how-to-set-a-default-enter-on-a-certain-button)
I would do that in jQuery.

```
//set this so that the jQuery selector can get your button
<asp:Button runat="server" id="btnLogin" Text="Login" ClientIDMode="Static" />
```

jQuery:

```
$(function () {
    $('#btnLogin').click(function (e) {
        // your javascript validation code here
        // put these if you don't want your page reloading
        e.preventDefault();
        return false;
    });
});
```
19,312,270
Is there any way to break string based on punctuation-word ``` #!/usr/bin/python #Asking user to Enter a line in specified format myString=raw_input('Enter your String:\nFor Example:I am doctor break I stays in CA break you can contact me on +000000\n') # 'break' is punctuation word <my code which breaks the user input based on break word and returns output in different lists> ``` Expecting output like `String1:I am doctor` `String2:I stays in CA` `String2:you can contact me on +000000`
2013/10/11
[ "https://Stackoverflow.com/questions/19312270", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1699472/" ]
you want to validate the value of textbox when user clicks enter and focus is in textbox then use the below code: ``` <table> <tr> <td> Username:> </td> <td> <asp:TextBox CssClass="save" runat="server" ID="txtUsername" /> </td> <td> Password:> </td> <td> <asp:TextBox CssClass="save" runat="server" ID="txtPassword" /> </td> <td> Username:> </td> <td> <asp:Button CssClass="save" runat="server" ID="btnLogin" Text="Login" /> </td> </tr> </table> <asp:Button ID="submit" runat="server" Text="submit" /> ``` and jquery code: ``` <script type="text/javascript"> $('.save').live('keydown', function (event) { if (event.keyCode == '13') { alert('heee'); return false; event.preventDefault(); } }); </script> ``` Or if you want to validate the controls on enter button click only then use the below code: ``` <asp:Panel ID="pnl" runat="server" DefaultButton="submit"> <table> <tr> <td> Username:> </td> <td> <asp:TextBox CssClass="save" runat="server" ID="txtUsername" /> </td> <td> Password:> </td> <td> <asp:TextBox CssClass="save" runat="server" ID="txtPassword" /> </td> <td> Username:> </td> <td> <asp:Button CssClass="save" runat="server" ID="btnLogin" Text="Login" /> </td> </tr> </table> <asp:Button ID="submit" runat="server" Text="submit" /> </asp:Panel> ``` and jquery code is: ``` $('[id$=submit]').click(function (event) { if (event.keyCode == '13') { alert('heee'); return false; event.preventDefault(); } }); ``` that's all
I would do that in jQuery.

```
//set this so that the jQuery selector can get your button
<asp:Button runat="server" id="btnLogin" Text="Login" ClientIDMode="Static" />
```

jQuery:

```
$(function () {
    $('#btnLogin').click(function (e) {
        // your javascript validation code here
        // put these if you don't want your page reloading
        e.preventDefault();
        return false;
    });
});
```
52,175,927
I am coming from a C# background and Python's Asyncio library is confusing me. I have read the following [1](https://stackoverflow.com/questions/37278647/fire-and-forget-python-async-await/37345564#37345564) [2](https://stackoverflow.com/questions/33357233/when-to-use-and-when-not-to-use-python-3-5-await/33399896#33399896), yet the use of asyncio remains unclear to me. I am trying to make a website **scraper** in python that is asynchronous. ``` async def requestPage(url): request = requests.get(url, headers=headers) soup = BeautifulSoup(request.content, 'html.parser') return soup async def main(): #****** How do I run an async task and store its result to use in another task? index_soup = asyncio.ensure_future(requestPage(index_url)) res = asyncio.gather(index_soup) currency_urls = res.select('a[href^="/currencies"]') print(currency_urls) loop = asyncio.get_event_loop() try: loop.run_until_complete(main()) finally: loop.close() ```
2018/09/05
[ "https://Stackoverflow.com/questions/52175927", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8714371/" ]
As the **requests** library is not asynchronous, you can use the [run\_in\_executor](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.run_in_executor) method, so it won't block the running thread. As a result, you can define `requestPage` as a regular function and call it in the `main` function like this:

`res = await asyncio.gather(loop.run_in_executor(None, requestPage, url))`

The blocking function will run in a separate executor, while control is returned to the event loop. Or you can try an async HTTP client library, like [aiohttp](https://aiohttp.readthedocs.io/en/stable/).
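For illustration, here is a minimal, untested sketch of how that could fit together with the code from the question (it assumes `headers` and `index_url` exist as in the original scraper; the values below are placeholders):

```
import asyncio

import requests
from bs4 import BeautifulSoup

headers = {"User-Agent": "example"}    # assumption: whatever headers you already use
index_url = "https://example.com"      # assumption: placeholder URL

def requestPage(url):
    # plain synchronous function; it will be pushed to the default thread pool executor
    response = requests.get(url, headers=headers)
    return BeautifulSoup(response.content, "html.parser")

async def main():
    loop = asyncio.get_event_loop()
    # run the blocking call in an executor so the event loop is not blocked
    index_soup, = await asyncio.gather(
        loop.run_in_executor(None, requestPage, index_url)
    )
    currency_urls = index_soup.select('a[href^="/currencies"]')
    print(currency_urls)

loop = asyncio.get_event_loop()
try:
    loop.run_until_complete(main())
finally:
    loop.close()
```

With a single URL, `await loop.run_in_executor(...)` alone would be enough; `asyncio.gather` only starts to pay off once several pages are fetched concurrently.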
Ok, I think I found a basic solution.

```
async def requestPage(url):
    request = requests.get(url, headers=headers)
    soup = BeautifulSoup(request.content, 'html.parser')
    return soup

async def getValueAsync(func, param):
    # Create new task
    task = asyncio.ensure_future(func(param))
    # Execute the task; gather returns a list of results
    await asyncio.gather(task)
    # Get result from task
    return task.result()

async def main():
    soup = await getValueAsync(requestPage, index_url)
    print(soup.encode("utf-8"))

loop = asyncio.get_event_loop()
try:
    loop.run_until_complete(main())
finally:
    loop.close()
```

I wrote a wrapper that allows me to call the function asynchronously and store the result.
57,417,108
I have to parse the following file in python: ``` 20100322;232400;1.355800;1.355900;1.355800;1.355900;0 20100322;232500;1.355800;1.355900;1.355800;1.355900;0 20100322;232600;1.355800;1.355800;1.355800;1.355800;0 ``` I need to end upwith the following variables (first line is parsed as example): ``` year = 2010 month = 03 day = 22 hour = 23 minute = 24 p1 = Decimal('1.355800') p2 = Decimal('1.355900') p3 = Decimal('1.355800') p4 = Decimal('1.355900') ``` I have tried: ``` line = '20100322;232400;1.355800;1.355900;1.355800;1.355900;0' year = line[:4] month = line[4:6] day = line[6:8] hour = line[9:11] minute = line[11:13] p1 = Decimal(line[16:24]) p2 = Decimal(line[25:33]) p3 = Decimal(line[34:42]) p4 = Decimal(line[43:51]) print(year) print(month) print(day) print(hour) print(minute) print(p1) print(p2) print(p3) print(p4) ``` Which works fine, but I am wondering if there is an easier way to parse this (maybe using struct) to avoid having to count each position manually.
2019/08/08
[ "https://Stackoverflow.com/questions/57417108", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5328289/" ]
``` from decimal import Decimal from datetime import datetime line = "20100322;232400;1.355800;1.355900;1.355800;1.355900;0" tokens = line.split(";") dt = datetime.strptime(tokens[0] + tokens[1], "%Y%m%d%H%M%S") decimals = [Decimal(string) for string in tokens[2:6]] # datetime objects also have some useful attributes: dt.year, dt.month, etc. print(dt, *decimals, sep="\n") ``` Output: ``` 2010-03-22 23:24:00 1.355800 1.355900 1.355800 1.355900 ```
You could use regex:

```
import re

to_parse = """
20100322;232400;1.355800;1.355900;1.355800;1.355900;0
20100322;232500;1.355800;1.355900;1.355800;1.355900;0
20100322;232600;1.355800;1.355800;1.355800;1.355800;0
"""

stx = re.compile(
    r'(?P<date>(?P<year>\d{4})(?P<month>\d{2})(?P<day>\d{2}));'
    r'(?P<time>(?P<hour>\d{2})(?P<minute>\d{2})(?P<second>\d{2}));'
    r'(?P<p1>[\.\-\d]*);(?P<p2>[\.\-\d]*);(?P<p3>[\.\-\d]*);(?P<p4>[\.\-\d]*)'
)

f = [{k: float(v) if 'p' in k else int(v) for k, v in a.groupdict().items()}
     for a in stx.finditer(to_parse)]

print(f)
```

Output:

```
[{'date': 20100322, 'day': 22, 'hour': 23, 'minute': 24, 'month': 3, 'p1': 1.3558, 'p2': 1.3559, 'p3': 1.3558, 'p4': 1.3559, 'second': 0, 'time': 232400, 'year': 2010},
 {'date': 20100322, 'day': 22, 'hour': 23, 'minute': 25, 'month': 3, 'p1': 1.3558, 'p2': 1.3559, 'p3': 1.3558, 'p4': 1.3559, 'second': 0, 'time': 232500, 'year': 2010},
 {'date': 20100322, 'day': 22, 'hour': 23, 'minute': 26, 'month': 3, 'p1': 1.3558, 'p2': 1.3558, 'p3': 1.3558, 'p4': 1.3558, 'second': 0, 'time': 232600, 'year': 2010}]
```

Here I stored everything in a list, but you could actually go through the results of `finditer` line by line if you don't want to store everything in memory. You can also replace float and/or int with Decimal if needed.
10,904,629
noob programmer here, I'm trying to get the SQLite3 on my Python installation up-to-date (I currently have version 3.6.11, whereas I need at least version 3.6.19, as that is the first version that supports foreign keys). Here's my problem, though: I have no idea how to do this. I know next to nothing about the command line, and I don't really know what files to replace (if at all) in my python install. And before anyone asks, I'm already using the latest Pysql version – it's what's not up to date. Can anyone give me some pointers, or maybe a guide on how to update this? I'm on Mac OSX 10.5.8, working with python 2.6.
2012/06/05
[ "https://Stackoverflow.com/questions/10904629", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1430987/" ]
I suggest using the 'pip' command on the command line. ``` pip search sqlite pip install pysqlite ```
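If the install succeeds, a quick sanity check (a small sketch; it assumes the newly built `pysqlite2` was compiled against a newer SQLite than the stdlib module) would be:

```
# Compare the SQLite version behind the stdlib module and behind pysqlite2
import sqlite3
print(sqlite3.sqlite_version_info)       # likely still the old 3.6.11

from pysqlite2 import dbapi2
print(dbapi2.sqlite_version_info)        # should report the newer SQLite
```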
See <https://pip.pypa.io/en/latest/installing.html>:

```
python get-pip.py
```

or with the complete path to the script:

```
python [complete path]
python c:\folder\get-pip.py
```
10,904,629
noob programmer here, I'm trying to get the SQLite3 on my Python installation up-to-date (I currently have version 3.6.11, whereas I need at least version 3.6.19, as that is the first version that supports foreign keys). Here's my problem, though: I have no idea how to do this. I know next to nothing about the command line, and I don't really know what files to replace (if at all) in my python install. And before anyone asks, I'm already using the latest Pysql version – it's what's not up to date. Can anyone give me some pointers, or maybe a guide on how to update this? I'm on Mac OSX 10.5.8, working with python 2.6.
2012/06/05
[ "https://Stackoverflow.com/questions/10904629", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1430987/" ]
I suggest using the 'pip' command on the command line. ``` pip search sqlite pip install pysqlite ```
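If the install succeeds, a quick sanity check (a small sketch; it assumes the newly built `pysqlite2` was compiled against a newer SQLite than the stdlib module) would be:

```
# Compare the SQLite version behind the stdlib module and behind pysqlite2
import sqlite3
print(sqlite3.sqlite_version_info)       # likely still the old 3.6.11

from pysqlite2 import dbapi2
print(dbapi2.sqlite_version_info)        # should report the newer SQLite
```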
> 
> Disclaimer: I'm not a Mac user, but by common knowledge I give you
> this info.
> 
> 

You could follow the next instructions:

Use Homebrew
------------

As this [page](http://mislav.uniqpath.com/rails/install-sqlite3/) mentions:

If you need to upgrade sqlite, you could use [Homebrew](http://brew.sh/).

Homebrew implies an additional "software manager", so you should know how to use it before.

Install it from the source
--------------------------

As this [page](http://www.tutorialspoint.com/sqlite/sqlite_installation.htm) mentions:

1. Download the [sqlite-autoconf package](http://www.sqlite.org/2015/sqlite-autoconf-3080803.tar.gz)
2. Compile it and install it:

```
$ tar xvfz sqlite-autoconf-3071502.tar.gz
$ cd sqlite-autoconf-3071502
$ ./configure --prefix=/usr/local
$ make
$ make install
```

Either Homebrew or Source, verify it
------------------------------------

```
>>> import sqlite3
>>> sqlite3.version_info
(2, 4, 1)
>>> sqlite3.sqlite_version_info
(3, 6, 11)

>>> from pysqlite2 import dbapi2 as sqlite3
>>> sqlite3.version_info
(2, 5, 5)
>>> sqlite3.sqlite_version_info
(3, 6, 18)
```

Maybe you need to uninstall a previous version of pysqlite.

In any case, you should read [this answer](https://stackoverflow.com/a/1546162/344501) to better understand the **sqlite/python** relationship.
10,904,629
noob programmer here, I'm trying to get the SQLite3 on my Python installation up-to-date (I currently have version 3.6.11, whereas I need at least version 3.6.19, as that is the first version that supports foreign keys). Here's my problem, though: I have no idea how to do this. I know next to nothing about the command line, and I don't really know what files to replace (if at all) in my python install. And before anyone asks, I'm already using the latest Pysql version – it's what's not up to date. Can anyone give me some pointers, or maybe a guide on how to update this? I'm on Mac OSX 10.5.8, working with python 2.6.
2012/06/05
[ "https://Stackoverflow.com/questions/10904629", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1430987/" ]
> 
> Disclaimer: I'm not a Mac user, but by common knowledge I give you
> this info.
> 
> 

You could follow the next instructions:

Use Homebrew
------------

As this [page](http://mislav.uniqpath.com/rails/install-sqlite3/) mentions:

If you need to upgrade sqlite, you could use [Homebrew](http://brew.sh/).

Homebrew implies an additional "software manager", so you should know how to use it before.

Install it from the source
--------------------------

As this [page](http://www.tutorialspoint.com/sqlite/sqlite_installation.htm) mentions:

1. Download the [sqlite-autoconf package](http://www.sqlite.org/2015/sqlite-autoconf-3080803.tar.gz)
2. Compile it and install it:

```
$ tar xvfz sqlite-autoconf-3071502.tar.gz
$ cd sqlite-autoconf-3071502
$ ./configure --prefix=/usr/local
$ make
$ make install
```

Either Homebrew or Source, verify it
------------------------------------

```
>>> import sqlite3
>>> sqlite3.version_info
(2, 4, 1)
>>> sqlite3.sqlite_version_info
(3, 6, 11)

>>> from pysqlite2 import dbapi2 as sqlite3
>>> sqlite3.version_info
(2, 5, 5)
>>> sqlite3.sqlite_version_info
(3, 6, 18)
```

Maybe you need to uninstall a previous version of pysqlite.

In any case, you should read [this answer](https://stackoverflow.com/a/1546162/344501) to better understand the **sqlite/python** relationship.
See <https://pip.pypa.io/en/latest/installing.html>:

```
python get-pip.py
```

or with the complete path to the script:

```
python [complete path]
python c:\folder\get-pip.py
```
10,904,629
noob programmer here, I'm trying to get the SQLite3 on my Python installation up-to-date (I currently have version 3.6.11, whereas I need at least version 3.6.19, as that is the first version that supports foreign keys). Here's my problem, though: I have no idea how to do this. I know next to nothing about the command line, and I don't really know what files to replace (if at all) in my python install. And before anyone asks, I'm already using the latest Pysql version – it's what's not up to date. Can anyone give me some pointers, or maybe a guide on how to update this? I'm on Mac OSX 10.5.8, working with python 2.6.
2012/06/05
[ "https://Stackoverflow.com/questions/10904629", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1430987/" ]
I just came fresh from installing this both in Mavericks and Mountain Lion. This [SO article](https://stackoverflow.com/questions/1545479/force-python-to-forego-native-sqlite3-and-use-the-installed-latest-sqlite3-ver/1546162#1546162) mentions using the build\_static method, which they say retrieves the latest version of the sqlite amalgamation. For some reason it didn't work (i.e. it didn't seem to download it or use it).

What I ended up doing was:

1. Downloaded pysqlite as a tarball
2. Downloaded the latest sqlite source amalgamation
3. Unzipped pysqlite into its folder
4. Unzipped sqlite and *copied* that to the pysqlite folder
5. Opened setup.cfg and commented out all of the directives
6. In Terminal, went to the pysqlite folder, and then: `$ python setup.py build_static install`

This compiled pysqlite using the libraries from the latest sqlite sources. And then in Python, I used it as:

`import pysqlite2.dbapi2 as sqlite3`
See <https://pip.pypa.io/en/latest/installing.html>:

```
python get-pip.py
```

or with the complete path to the script:

```
python [complete path]
python c:\folder\get-pip.py
```
10,904,629
noob programmer here, I'm trying to get the SQLite3 on my Python installation up-to-date (I currently have version 3.6.11, whereas I need at least version 3.6.19, as that is the first version that supports foreign keys). Here's my problem, though: I have no idea how to do this. I know next to nothing about the command line, and I don't really know what files to replace (if at all) in my python install. And before anyone asks, I'm already using the latest Pysql version – it's what's not up to date. Can anyone give me some pointers, or maybe a guide on how to update this? I'm on Mac OSX 10.5.8, working with python 2.6.
2012/06/05
[ "https://Stackoverflow.com/questions/10904629", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1430987/" ]
I recently installed python from source and used the following commands to install both SQLite and Python 2.7.13 from source.

For SQLite3, assuming the install location is $SQLITE\_INSTALL\_LOCATION, you can use the following commands:

```
$ curl -O http://www.sqlite.org/sqlite-autoconf-3070603.tar.gz
$ tar xvfz sqlite-autoconf-3070603.tar.gz
$ cd sqlite-autoconf-3070603
$ ./configure --prefix=$SQLITE_INSTALL_LOCATION --disable-static CFLAGS="-g"
$ make && make install
```

Then, when I compiled my python, I edited the setup.py in the root of the Python source:

```
$ curl -O https://www.python.org/ftp/python/2.7.13/Python-2.7.13.tgz
$ tar xvfz Python-2.7.13.tgz
```

Python-2.7.13/setup.py -- add the path to your SQLite install here:

```
...
        # We hunt for #define SQLITE_VERSION "n.n.n"
        # We need to find >= sqlite version 3.0.8
        sqlite_incdir = sqlite_libdir = None
        sqlite_inc_paths = [ '/usr/include',
                             '/usr/include/sqlite',
                             '/usr/include/sqlite3',
                             '/usr/local/include',
                             '/usr/local/include/sqlite',
                             '/usr/local/include/sqlite3',
                             $SQLITE_INSTALL_LOCATION/include,
                             ]
...
```

Once you've changed the setup.py in your Python source, finish up compiling and installing Python, assuming the install location is $PYTHON\_INSTALL\_LOCATION:

```
$ cd Python-2.7.13
$ LD_RUN_PATH=$SQLITE_INSTALL_LOCATION/lib ./configure --prefix=$PYTHON_INSTALL_LOCATION --enable-shared --enable-unicode=ucs4
$ LD_RUN_PATH=$SQLITE_INSTALL_LOCATION/lib make
$ LD_RUN_PATH=$SQLITE_INSTALL_LOCATION/lib make install
```

Once you do that you should have a Python version with SQLite3 support installed at $PYTHON\_INSTALL\_LOCATION/bin/python

Hope this helps!
See <https://pip.pypa.io/en/latest/installing.html>:

```
python get-pip.py
```

or with the complete path to the script:

```
python [complete path]
python c:\folder\get-pip.py
```
10,904,629
noob programmer here, I'm trying to get the SQLite3 on my Python installation up-to-date (I currently have version 3.6.11, whereas I need at least version 3.6.19, as that is the first version that supports foreign keys). Here's my problem, though: I have no idea how to do this. I know next to nothing about the command line, and I don't really know what files to replace (if at all) in my python install. And before anyone asks, I'm already using the latest Pysql version – it's what's not up to date. Can anyone give me some pointers, or maybe a guide on how to update this? I'm on Mac OSX 10.5.8, working with python 2.6.
2012/06/05
[ "https://Stackoverflow.com/questions/10904629", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1430987/" ]
I just came fresh from installing this both in Mavericks and Mountain Lion. This [SO article](https://stackoverflow.com/questions/1545479/force-python-to-forego-native-sqlite3-and-use-the-installed-latest-sqlite3-ver/1546162#1546162) mentions using the build\_static method, which they say retrieves the latest version of the sqlite amalgamation. For some reason it didn't work (i.e. it didn't seem to download it or use it).

What I ended up doing was:

1. Downloaded pysqlite as a tarball
2. Downloaded the latest sqlite source amalgamation
3. Unzipped pysqlite into its folder
4. Unzipped sqlite and *copied* that to the pysqlite folder
5. Opened setup.cfg and commented out all of the directives
6. In Terminal, went to the pysqlite folder, and then: `$ python setup.py build_static install`

This compiled pysqlite using the libraries from the latest sqlite sources. And then in Python, I used it as:

`import pysqlite2.dbapi2 as sqlite3`
> 
> Disclaimer: I'm not a Mac user, but by common knowledge I give you
> this info.
> 
> 

You could follow the next instructions:

Use Homebrew
------------

As this [page](http://mislav.uniqpath.com/rails/install-sqlite3/) mentions:

If you need to upgrade sqlite, you could use [Homebrew](http://brew.sh/).

Homebrew implies an additional "software manager", so you should know how to use it before.

Install it from the source
--------------------------

As this [page](http://www.tutorialspoint.com/sqlite/sqlite_installation.htm) mentions:

1. Download the [sqlite-autoconf package](http://www.sqlite.org/2015/sqlite-autoconf-3080803.tar.gz)
2. Compile it and install it:

```
$ tar xvfz sqlite-autoconf-3071502.tar.gz
$ cd sqlite-autoconf-3071502
$ ./configure --prefix=/usr/local
$ make
$ make install
```

Either Homebrew or Source, verify it
------------------------------------

```
>>> import sqlite3
>>> sqlite3.version_info
(2, 4, 1)
>>> sqlite3.sqlite_version_info
(3, 6, 11)

>>> from pysqlite2 import dbapi2 as sqlite3
>>> sqlite3.version_info
(2, 5, 5)
>>> sqlite3.sqlite_version_info
(3, 6, 18)
```

Maybe you need to uninstall a previous version of pysqlite.

In any case, you should read [this answer](https://stackoverflow.com/a/1546162/344501) to better understand the **sqlite/python** relationship.
10,904,629
noob programmer here, I'm trying to get the SQLite3 on my Python installation up-to-date (I currently have version 3.6.11, whereas I need at least version 3.6.19, as that is the first version that supports foreign keys). Here's my problem, though: I have no idea how to do this. I know next to nothing about the command line, and I don't really know what files to replace (if at all) in my python install. And before anyone asks, I'm already using the latest Pysql version – it's what's not up to date. Can anyone give me some pointers, or maybe a guide on how to update this? I'm on Mac OSX 10.5.8, working with python 2.6.
2012/06/05
[ "https://Stackoverflow.com/questions/10904629", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1430987/" ]
I recently installed python from source and used the following commands to install both SQLite and Python 2.7.13 from source.

For SQLite3, assuming the install location is $SQLITE\_INSTALL\_LOCATION, you can use the following commands:

```
$ curl -O http://www.sqlite.org/sqlite-autoconf-3070603.tar.gz
$ tar xvfz sqlite-autoconf-3070603.tar.gz
$ cd sqlite-autoconf-3070603
$ ./configure --prefix=$SQLITE_INSTALL_LOCATION --disable-static CFLAGS="-g"
$ make && make install
```

Then, when I compiled my python, I edited the setup.py in the root of the Python source:

```
$ curl -O https://www.python.org/ftp/python/2.7.13/Python-2.7.13.tgz
$ tar xvfz Python-2.7.13.tgz
```

Python-2.7.13/setup.py -- add the path to your SQLite install here:

```
...
        # We hunt for #define SQLITE_VERSION "n.n.n"
        # We need to find >= sqlite version 3.0.8
        sqlite_incdir = sqlite_libdir = None
        sqlite_inc_paths = [ '/usr/include',
                             '/usr/include/sqlite',
                             '/usr/include/sqlite3',
                             '/usr/local/include',
                             '/usr/local/include/sqlite',
                             '/usr/local/include/sqlite3',
                             $SQLITE_INSTALL_LOCATION/include,
                             ]
...
```

Once you've changed the setup.py in your Python source, finish up compiling and installing Python, assuming the install location is $PYTHON\_INSTALL\_LOCATION:

```
$ cd Python-2.7.13
$ LD_RUN_PATH=$SQLITE_INSTALL_LOCATION/lib ./configure --prefix=$PYTHON_INSTALL_LOCATION --enable-shared --enable-unicode=ucs4
$ LD_RUN_PATH=$SQLITE_INSTALL_LOCATION/lib make
$ LD_RUN_PATH=$SQLITE_INSTALL_LOCATION/lib make install
```

Once you do that you should have a Python version with SQLite3 support installed at $PYTHON\_INSTALL\_LOCATION/bin/python

Hope this helps!
> 
> Disclaimer: I'm not a Mac user, but by common knowledge I give you
> this info.
> 
> 

You could follow the next instructions:

Use Homebrew
------------

As this [page](http://mislav.uniqpath.com/rails/install-sqlite3/) mentions:

If you need to upgrade sqlite, you could use [Homebrew](http://brew.sh/).

Homebrew implies an additional "software manager", so you should know how to use it before.

Install it from the source
--------------------------

As this [page](http://www.tutorialspoint.com/sqlite/sqlite_installation.htm) mentions:

1. Download the [sqlite-autoconf package](http://www.sqlite.org/2015/sqlite-autoconf-3080803.tar.gz)
2. Compile it and install it:

```
$ tar xvfz sqlite-autoconf-3071502.tar.gz
$ cd sqlite-autoconf-3071502
$ ./configure --prefix=/usr/local
$ make
$ make install
```

Either Homebrew or Source, verify it
------------------------------------

```
>>> import sqlite3
>>> sqlite3.version_info
(2, 4, 1)
>>> sqlite3.sqlite_version_info
(3, 6, 11)

>>> from pysqlite2 import dbapi2 as sqlite3
>>> sqlite3.version_info
(2, 5, 5)
>>> sqlite3.sqlite_version_info
(3, 6, 18)
```

Maybe you need to uninstall a previous version of pysqlite.

In any case, you should read [this answer](https://stackoverflow.com/a/1546162/344501) to better understand the **sqlite/python** relationship.
33,978,739
First post to the forum here. I searched for an answer, but wasn't exactly sure how to phrase the search. I am currently working through "Learn Python the Hard Way", and in one of the drills he uses this code:

```
target.write(line1)
target.write("\n")
target.write(line2)
target.write("\n")
target.write(line3)
target.write("\n")
```

and my objective is to fit everything into calling target.write() one time rather than six, which I tried to do like this:

```
target.write(line1, "\n", line2, "\n", line3) 
```

but I get an error for giving 5 arguments rather than 1. Can anyone tell me the correct syntax?
2015/11/29
[ "https://Stackoverflow.com/questions/33978739", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5616687/" ]
Using a comma sends each object as a separate argument. Concatenate them with `+` instead, or `join()` them: ``` target.write(line1 + "\n" + line2 + "\n" + line3) ``` Or: ``` target.write('\n'.join((line1, line2, line3))) ```
You can use Python's `str` `format`: ``` target.write('{}\n{}\n{}'.format(line1, line2, line3)) ```
33,580,308
Hello, I am new to python. I was trying to find the distance between different points. Example: The distance between each door is about 2.5 feet. So the distance between door 1 and door 2 is 2.5 feet. How would I go about looking up two different distances in the door dictionary, or should I use something else?

```
d = {"door 1" : 2.5,"door 2" :2.5 , "door 3" : 2.5, "door 4": 2.5}
x = raw_input()
y = raw_input()
tol = 0 
if x not in list and y not in list:
    print 'not a door'
else:
    if x in list and y in list:
        tol = (list[x]) + (list[y])
        print tol
```
2015/11/07
[ "https://Stackoverflow.com/questions/33580308", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5533124/" ]
try this ``` final ListPopupWindow listPopupWindow = new ListPopupWindow( context); listPopupWindow.setAdapter(new ArrayAdapter( context, R.layout.list_row_layout, arrayOfValues)); listPopupWindow.setAnchorView(your_view); listPopupWindow.setWidth(300); listPopupWindow.show(); ```
Example:

```
findViewById(R.id.btn).setOnTouchListener(new OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        // use MotionEvent event: getX() and getY() will return your pressing location in the button.
        return false;
    }
});
```
72,363,146
I am working on a project where I have to send arguments by a command line to a python file (using system exec) and then visualize the results saved in a folder after the python file finishes executing. I need to have this by only clicking on one button, so my question is whether there is any way to realize this scenario, or maybe whether I can order the events.

[![enter image description here](https://i.stack.imgur.com/RmwtK.png)](https://i.stack.imgur.com/RmwtK.png)

Now I have included the flat sequence structure in the block diagram so I can order the events, but I had an issue with making the program (the python file) run every time I press the Test button (it only runs the first time I click on the Test button). I tried to use a while loop, but I couldn't execute it again unless I restarted the program.
2022/05/24
[ "https://Stackoverflow.com/questions/72363146", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18449615/" ]
The way you phrased your question makes me think that you want to wait until the command you call via system exec finished and then run some code. You could simply use a sequence structure for this. However, if you need to do this *asynchronously*, i.e. launch the command and get an event when the command finished so you can draw the results, you will need to resort to asynchronous techniques like "Start Asynchronous Call" and "Wait On Asynchronous Call", or for example queues and a separate code area for the background-work. [![Example](https://i.stack.imgur.com/tjRZw.png)](https://i.stack.imgur.com/tjRZw.png)
Use the "wait until completion?" input of the System Exec function to make sure the script has finished execution, then proceed with the results visualization part.
69,856,536
**Goal:** Using python, I want to create a service account in a project on the Google Cloud Platform and grant that service account one role. **Problem:** The docs explain [here](https://cloud.google.com/iam/docs/granting-changing-revoking-access#grant-single-role) how to grant a single role to the service account. However, it seems to be only possible by using the Console or the gcloud tool, not with python. The alternative for python is to update the whole IAM policy of the project to grant the role for the single service account and overwrite it (described [here](https://cloud.google.com/iam/docs/granting-changing-revoking-access#multiple-roles)). *However*, overwriting the whole policy seems quite risky because in case of an error the policy of the whole project could be lost. *Therefore I want to avoid that.* **Question:** I'm creating a service account using the python code provided [here in the docs](https://cloud.google.com/iam/docs/creating-managing-service-accounts#creating). Is it possible to grant the role already while creating the service account with this code or in any other way?
2021/11/05
[ "https://Stackoverflow.com/questions/69856536", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13439686/" ]
Creating a service account, creating a service account key, downloading a service account JSON key file, and granting a role are separate steps. There is no single API to create a service account and grant a role at the same time.

Any update to a project's IAM bindings carries risk. Google prevents multiple applications from updating IAM at the same time. It is possible to lock everyone (users and services) out of a project by overwriting the policy with no members.

I recommend that you create a test project and develop and debug your code against that project. Use credentials that have no permissions to your other projects. Otherwise use the CLI or Terraform to minimize your risks. The API is very easy to use provided that you understand the API, IAM bindings, and JSON data structures.
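To make the separate steps concrete, here is a rough, untested sketch using the `google-api-python-client` library (the same API family the linked docs use). The project ID, account ID and role are placeholders, error handling is omitted, and step 2 is exactly the read-modify-write of the project policy described above, so treat it with care:

```
from googleapiclient import discovery
import google.auth

PROJECT_ID = "my-project"          # assumption: placeholder project
ACCOUNT_ID = "my-service-account"  # assumption: placeholder account id
ROLE = "roles/viewer"              # assumption: placeholder role

credentials, _ = google.auth.default()

# Step 1: create the service account via the IAM API.
iam = discovery.build("iam", "v1", credentials=credentials)
account = iam.projects().serviceAccounts().create(
    name="projects/" + PROJECT_ID,
    body={
        "accountId": ACCOUNT_ID,
        "serviceAccount": {"displayName": "demo account"},
    },
).execute()

# Step 2: grant the role by reading, modifying and writing back the
# project's IAM policy (the risky overwrite step mentioned above).
crm = discovery.build("cloudresourcemanager", "v1", credentials=credentials)
policy = crm.projects().getIamPolicy(resource=PROJECT_ID, body={}).execute()
policy.setdefault("bindings", []).append(
    {"role": ROLE, "members": ["serviceAccount:" + account["email"]]}
)
crm.projects().setIamPolicy(
    resource=PROJECT_ID, body={"policy": policy}
).execute()
```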
As mentioned in John's answer, you should be very careful when manipulating the IAM module; if something goes wrong, it could leave services completely inoperable.

Here is a Google document on how to [manipulate IAM resources using the REST API](https://cloud.google.com/resource-manager/reference/rest/v1/projects/setIamPolicy).

The owner role can be granted to a user, serviceAccount, or a group that is part of an organization. For example, group@myownpersonaldomain.com could be added as an owner to a project in the myownpersonaldomain.com organization, but not the examplepetstore.com organization.
63,695,246
Is there a fast possibility to reverse a binary number in python? Example: I have the number 11 in binary 0000000000001011 with 16 Bits. Now I'm searching for a **fast** function f, which returns 1101000000000000 (decimal 53248). Lookup tables are no solutions since i want it to scale to 32Bit numbers. Thank you for your effort. Edit: **Performances**. I tested the code for all 2^16 pattern several times. * winner are the partially look up tables: 30ms * 2nd `int(format(num, '016b')[::-1], 2)` from the comments: 56ms * 3rd `x = ((x & 0x00FF) << 8) | (x >> 8)`: 65ms * I did not expect my approach to be so horribly slow but it is. approx. 320ms. Small improvement by using + instead of | 300ms * `bytes(str(num).encode('utf-8'))` fought for the 2nd place but somehow the code did not provide valid answers. Most likely because I made a mistake by transforming them into an integer again. thank you very much for your input. I was quite surprised.
2020/09/01
[ "https://Stackoverflow.com/questions/63695246", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8080648/" ]
This might be faster using a small 8-bit lookup table:

```
num = 11

# One-time creation of the 8-bit lookup table
rev = [int(format(b, '08b')[::-1], base=2) for b in range(256)]

# Run for each number to be flipped.
lower_rev = rev[num & 0xFF] << 8
upper_rev = rev[(num & 0xFF00) >> 8]

flipped = lower_rev + upper_rev
```
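If you later need the 32-bit variant the question mentions, the same 8-bit table can be reused by processing the number one byte at a time (a small sketch building on the code above):

```
rev = [int(format(b, '08b')[::-1], base=2) for b in range(256)]

def reverse32(x):
    # reverse a 32-bit value by reversing each byte and swapping their order
    return (rev[x & 0xFF] << 24 |
            rev[(x >> 8) & 0xFF] << 16 |
            rev[(x >> 16) & 0xFF] << 8 |
            rev[(x >> 24) & 0xFF])

print(format(reverse32(11), '032b'))  # 11010000000000000000000000000000
```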
My current approach is to access the bits via bit shifting and masking, and to shift them into the mirrored number until they reach their destination. Still, I have the feeling that there is room for improvement.

```
num = 11
print(format(num, '016b'))

right = num
left = 0

for i in range(16):
    tmp = right & 1
    left = (left << 1 ) | tmp
    right = right >> 1

print(format(left, '016b'))
```
63,695,246
Is there a fast possibility to reverse a binary number in python? Example: I have the number 11 in binary 0000000000001011 with 16 Bits. Now I'm searching for a **fast** function f, which returns 1101000000000000 (decimal 53248). Lookup tables are no solutions since i want it to scale to 32Bit numbers. Thank you for your effort. Edit: **Performances**. I tested the code for all 2^16 pattern several times. * winner are the partially look up tables: 30ms * 2nd `int(format(num, '016b')[::-1], 2)` from the comments: 56ms * 3rd `x = ((x & 0x00FF) << 8) | (x >> 8)`: 65ms * I did not expect my approach to be so horribly slow but it is. approx. 320ms. Small improvement by using + instead of | 300ms * `bytes(str(num).encode('utf-8'))` fought for the 2nd place but somehow the code did not provide valid answers. Most likely because I made a mistake by transforming them into an integer again. thank you very much for your input. I was quite surprised.
2020/09/01
[ "https://Stackoverflow.com/questions/63695246", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8080648/" ]
This might be faster using a small 8-bit lookup table:

```
num = 11

# One-time creation of the 8-bit lookup table
rev = [int(format(b, '08b')[::-1], base=2) for b in range(256)]

# Run for each number to be flipped.
lower_rev = rev[num & 0xFF] << 8
upper_rev = rev[(num & 0xFF00) >> 8]

flipped = lower_rev + upper_rev
```
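If you later need the 32-bit variant the question mentions, the same 8-bit table can be reused by processing the number one byte at a time (a small sketch building on the code above):

```
rev = [int(format(b, '08b')[::-1], base=2) for b in range(256)]

def reverse32(x):
    # reverse a 32-bit value by reversing each byte and swapping their order
    return (rev[x & 0xFF] << 24 |
            rev[(x >> 8) & 0xFF] << 16 |
            rev[(x >> 16) & 0xFF] << 8 |
            rev[(x >> 24) & 0xFF])

print(format(reverse32(11), '032b'))  # 11010000000000000000000000000000
```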
I think you can just use slicing to get what you are looking for: ``` b=bytes('0000000000001011'.encode('utf-8')) >>> b b'0000000000001011' >>> b[::-1] b'1101000000000000' ```
63,695,246
Is there a fast possibility to reverse a binary number in python? Example: I have the number 11 in binary 0000000000001011 with 16 Bits. Now I'm searching for a **fast** function f, which returns 1101000000000000 (decimal 53248). Lookup tables are no solutions since i want it to scale to 32Bit numbers. Thank you for your effort. Edit: **Performances**. I tested the code for all 2^16 pattern several times. * winner are the partially look up tables: 30ms * 2nd `int(format(num, '016b')[::-1], 2)` from the comments: 56ms * 3rd `x = ((x & 0x00FF) << 8) | (x >> 8)`: 65ms * I did not expect my approach to be so horribly slow but it is. approx. 320ms. Small improvement by using + instead of | 300ms * `bytes(str(num).encode('utf-8'))` fought for the 2nd place but somehow the code did not provide valid answers. Most likely because I made a mistake by transforming them into an integer again. thank you very much for your input. I was quite surprised.
2020/09/01
[ "https://Stackoverflow.com/questions/63695246", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8080648/" ]
This might be faster using a small 8-bit lookup table:

```
num = 11

# One-time creation of the 8-bit lookup table
rev = [int(format(b, '08b')[::-1], base=2) for b in range(256)]

# Run for each number to be flipped.
lower_rev = rev[num & 0xFF] << 8
upper_rev = rev[(num & 0xFF00) >> 8]

flipped = lower_rev + upper_rev
```
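If you later need the 32-bit variant the question mentions, the same 8-bit table can be reused by processing the number one byte at a time (a small sketch building on the code above):

```
rev = [int(format(b, '08b')[::-1], base=2) for b in range(256)]

def reverse32(x):
    # reverse a 32-bit value by reversing each byte and swapping their order
    return (rev[x & 0xFF] << 24 |
            rev[(x >> 8) & 0xFF] << 16 |
            rev[(x >> 16) & 0xFF] << 8 |
            rev[(x >> 24) & 0xFF])

print(format(reverse32(11), '032b'))  # 11010000000000000000000000000000
```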
There's this, but in Python it seems slower than Matthias' proposed `int`->`str`->`int` solution. ``` x = ((x & 0x5555) << 1) | ((x & 0xAAAA) >> 1) x = ((x & 0x3333) << 2) | ((x & 0xCCCC) >> 2) x = ((x & 0x0F0F) << 4) | ((x & 0xF0F0) >> 4) x = ((x & 0x00FF) << 8) | (x >> 8) ```
52,553,757
I am a newcomer to python. I want to implement a "For" loop on the elements of a dataframe, with an embedded "if" statement. Code: ``` import numpy as np import pandas as pd #Dataframes x = pd.DataFrame([1,-2,3]) y = pd.DataFrame() for i in x.iterrows(): for j in x.iteritems(): if x>0: y = x*2 else: y = 0 ``` With the previous loop, I want to go through each item in the x dataframe and generate a new dataframe y based on the condition in the "if" statement. When I run the code, I get the following error message. ``` ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). ``` Any help would be much appreciated.
2018/09/28
[ "https://Stackoverflow.com/questions/52553757", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1978243/" ]
In pandas it is best to avoid loops if a vectorized solution exists:

```
x = pd.DataFrame([1,-2,3], columns=['a'])

y = pd.DataFrame(np.where(x['a'] > 0, x['a'] * 2, 0), columns=['b'])
print (y)
   b
0  2
1  0
2  6
```

**Explanation**:

First compare the column by value to get a boolean mask:

```
print (x['a'] > 0)
0     True
1    False
2     True
Name: a, dtype: bool
```

Then use [`numpy.where`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html) to set values by condition:

```
print (np.where(x['a'] > 0, x['a'] * 2, 0))
[2 0 6]
```

And last use the `DataFrame` constructor or create a new column:

```
x['new'] = np.where(x['a'] > 0, x['a'] * 2, 0)
print (x)
   a  new
0  1    2
1 -2    0
2  3    6
```
You can try this: ``` y = (x[(x > 0)]*2).fillna(0) ```
20,629,561
#### With many started postgresql services, psql chooses the lowest postgresql version I have installed two versions of postgresql, `12` and `13` (in an earlier version of this question, these were `9.1` and `9.2`, I change this to be in line with the added output details from the higher versions). ``` sudo service postgresql status 12/main (port 5432): down 13/main (port 5433): down ``` They are located at `/etc/postgresql/12/` and `/etc/postgresql/13/`. After installing an extension on version `13`: ``` sudo apt-get install postgresql-contrib postgresql-plpython3-13 ``` start the postgresql service: ``` sudo service postgresql start ``` which outputs: ``` * Starting PostgreSQL 12 database server * Starting PostgreSQL 13 database server ``` Now let us create the extension in the database, running: ``` sudo su - postgres ``` and then: ``` postgres=# psql psql (13.4 (Ubuntu 13.4-1.pgdg20.04+1), server 12.7 (Ubuntu 12.7-0ubuntu0.20.04.1)) Type "help" for help. postgres=# CREATE EXTENSION plpython3u; ERROR: could not open extension control file "/usr/share/postgresql/12/extension/plpython3u.control": No such file or directory ``` We see that the extension is searched in version `12` although I have installed the `postgresql-python3u` to the directory of version `13`. #### Aim I want to use version `13` only, I don't need two different versions, and psql seems to choose the lowest available postgresql version of the started services by default, not the highest which I need. How to either remove version `12` safely or make `13` the only started (or default) service, also using the standard port `5432` for version `13`?
2013/12/17
[ "https://Stackoverflow.com/questions/20629561", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2813589/" ]
This situation with two clusters in Ubuntu may happen when upgrading to a newer release providing a newer postgresql version. The automatic upgrade does not remove the old cluster, presumably for fear of erasing valuable data (which is wise, because some postgres upgrades may require human work to be complete). If you know you want to drop it, just run:

```
sudo pg_dropcluster --stop 9.1 main
```

The corresponding data directory will be removed and `service postgresql` will no longer refer to 9.1.

At this point the 9.2 cluster will still use the port 5433, which is impractical. To switch it to the default port, edit `/etc/postgresql/9.2/main/postgresql.conf` and change the line `port = 5433` to `port = 5432`.

Then restart PostgreSQL.

Finally, to get rid of the postgresql-9.1 packages, see the result of `dpkg -l 'postgresql*9.1*'`
`psql` fails because none of your postgres servers are running. First, you should understand **why** there are 2 different servers, then delete one of them (through `apt-get`, I think), and if necessary reconfigure the other (if you type `sudo service postgresql start`, both of the servers will start, and to connect to 9.2 you must use `psql --port=5433`).

Edit your question to add more information (version of Ubuntu, origin of Postgres, etc.)...
47,684,408
New to web development here, and I need some help to figure out the basics. I have a website right now, which is working fine, on a VPS with Ubuntu 16.04 and Apache. Say I would like a converter on my site or in a mobile application, with a python script on my server doing all the work. How can I send the request to the python program, and how can the page/application receive the data back (like a download link, or info like JSON)?

Simple information would suffice; I don't need a thorough explanation, mostly names like protocols, etc., and maybe some sample code for Hello World.

I should add that my website runs with Wordpress and I am not looking to change that yet. I want to create a web app within Wordpress, with the app written in Python.
2017/12/06
[ "https://Stackoverflow.com/questions/47684408", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5540416/" ]
There are a few things you need to take into consideration. **Centralising this element** To address your issue of centralising this element with a couple of methods: **Option 1.** You can make the entire `span` width 100% and center the text within it by adding this to `#header-content`: ``` width: 100%; display: block; text-align:center; ``` As shown here: <https://codepen.io/anon/pen/RjmMjQ> **Option 2.** You can remove the absolute positioning, use `display:inline-block`, and, adding `text-align:center;` on the parent element like so: <https://codepen.io/anon/pen/wPbmrQ> **Option 3.** If the element is position:absolute; you can also use: ``` left: 50%; transform: translateX(-50%); ``` **Positioning the element relative to an image:** You also said you would like this element to be positioned inside the border. Because this border is set on the image itself, you will need to position your element so it is relative to the image. To make sure everything lines up correctly with padding we need to set the following, otherwise padding & css borders are not included in any width you set.: ``` *{ box-sizing: border-box; } ``` Then you need to get the exact position right for aligning you element with the image, these seem to work right: ``` width: 170px; bottom: 18px; left:50%; transform: translateX(-50%); border-radius: 0 0 12px 12px; ``` This is done for you here: <https://codepen.io/anon/pen/YEbMYB>
First you need to make the image `max-width:100%` to avoid overflow, then simply adjust left/right/bottom values since your element is absolute position and add `text-align:center` : ```css .elixir { position: absolute; top: 5px; left: 15px; color: white; font-weight: bold; font-size: 50px; } .level { color: white; font: bold 24px/45px Helvetica, Sans-Serif; letter-spacing: -1px; background: rgb(0, 0, 0); /* fallback color */ background: rgba(0, 0, 0, 0.7); padding: 10px; text-align: center; } #header { position: relative; background: yellow; width: 200px; } #header img { max-width:100%; } #header-content { position: absolute; bottom: 5px; left:5px; right:5px; text-align:center; color: Violet ; font: bold 24px/45px Helvetica, Sans-Serif; letter-spacing: -1px; background: rgb(0, 0, 0); /* fallback color */ background: rgba(0, 0, 0, 0.7); padding: 10px; /* width: 100%; */ line-height: 40px; } ``` ```html <div id="header"> <img src="https://vignette.wikia.nocookie.net/clashroyale/images/4/46/DarkPrinceCard.png/revision/latest?cb=20160702201038" /> <span class="elixir"> 4 </span> <span id="header-content"> Level 2 </span> </div> ```
47,684,408
New to web development here, and I need some help to figure out the basics. I have a website right now, which is working fine, on a VPS with Ubuntu 16.04 and Apache. Say I would like a converter on my site or in a mobile application, with a python script on my server doing all the work. How can I send the request to the python program, and how can the page/application receive the data back (like a download link, or info like JSON)?

Simple information would suffice; I don't need a thorough explanation, mostly names like protocols, etc., and maybe some sample code for Hello World.

I should add that my website runs with Wordpress and I am not looking to change that yet. I want to create a web app within Wordpress, with the app written in Python.
2017/12/06
[ "https://Stackoverflow.com/questions/47684408", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5540416/" ]
There are a few things you need to take into consideration. **Centralising this element** To address your issue of centralising this element with a couple of methods: **Option 1.** You can make the entire `span` width 100% and center the text within it by adding this to `#header-content`: ``` width: 100%; display: block; text-align:center; ``` As shown here: <https://codepen.io/anon/pen/RjmMjQ> **Option 2.** You can remove the absolute positioning, use `display:inline-block`, and, adding `text-align:center;` on the parent element like so: <https://codepen.io/anon/pen/wPbmrQ> **Option 3.** If the element is position:absolute; you can also use: ``` left: 50%; transform: translateX(-50%); ``` **Positioning the element relative to an image:** You also said you would like this element to be positioned inside the border. Because this border is set on the image itself, you will need to position your element so it is relative to the image. To make sure everything lines up correctly with padding we need to set the following, otherwise padding & css borders are not included in any width you set.: ``` *{ box-sizing: border-box; } ``` Then you need to get the exact position right for aligning you element with the image, these seem to work right: ``` width: 170px; bottom: 18px; left:50%; transform: translateX(-50%); border-radius: 0 0 12px 12px; ``` This is done for you here: <https://codepen.io/anon/pen/YEbMYB>
By using `left:0; right:0;` and appropirate margin value, you should be able to center it. You could also set the width of the `#header-content` and set `margin:0 auto;`, it would achieve the same effect. I also fit the image inside parent and added a curve to the label, you can see everything commented in css. Working snippet: ```css .elixir { position: absolute; top: 5px; left: 15px; color: white; font-weight: bold; font-size: 50px; } .level { color: white; font: bold 24px/45px Helvetica, Sans-Serif; letter-spacing: -1px; background: rgb(0, 0, 0); /* fallback color */ background: rgba(0, 0, 0, 0.7); padding: 10px; text-align: center; } #header { position: relative; background: yellow; width: 200px; } #header img {width:100%; /* sets image to fit parent */} #header-content { position: absolute; bottom: 0; color: Violet ; font: bold 24px/45px Helvetica, Sans-Serif; letter-spacing: -1px; background: rgb(0, 0, 0); /* fallback color */ background: rgba(0, 0, 0, 0.7); padding: 10px; /* width: 100%; */ line-height: 40px; right:0; left:0; /*center*/ margin:0 18px 24px 18px; /* set margin to appropriate values top right bottom left*/ border-bottom-left-radius:13px; border-bottom-right-radius:13px; /* curved edges */ } ``` ```html <div id="header"> <img src="https://vignette.wikia.nocookie.net/clashroyale/images/4/46/DarkPrinceCard.png/revision/latest?cb=20160702201038" /> <span class="elixir"> 4 </span> <span id="header-content"> Level 2 </span> </div> ```
47,684,408
New to web development here, and I need some help to figure out the basics. I have a website right now, which is working fine, on a VPS with Ubuntu 16.04 and Apache. Say I would like a converter on my site or in a mobile application, with a python script on my server doing all the work. How can I send the request to the python program, and how can the page/application receive the data back (like a download link, or info like JSON)?

Simple information would suffice; I don't need a thorough explanation, mostly names like protocols, etc., and maybe some sample code for Hello World.

I should add that my website runs with Wordpress and I am not looking to change that yet. I want to create a web app within Wordpress, with the app written in Python.
2017/12/06
[ "https://Stackoverflow.com/questions/47684408", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5540416/" ]
There are a few things you need to take into consideration. **Centralising this element** To address your issue of centralising this element with a couple of methods: **Option 1.** You can make the entire `span` width 100% and center the text within it by adding this to `#header-content`: ``` width: 100%; display: block; text-align:center; ``` As shown here: <https://codepen.io/anon/pen/RjmMjQ> **Option 2.** You can remove the absolute positioning, use `display:inline-block`, and, adding `text-align:center;` on the parent element like so: <https://codepen.io/anon/pen/wPbmrQ> **Option 3.** If the element is position:absolute; you can also use: ``` left: 50%; transform: translateX(-50%); ``` **Positioning the element relative to an image:** You also said you would like this element to be positioned inside the border. Because this border is set on the image itself, you will need to position your element so it is relative to the image. To make sure everything lines up correctly with padding we need to set the following, otherwise padding & css borders are not included in any width you set.: ``` *{ box-sizing: border-box; } ``` Then you need to get the exact position right for aligning you element with the image, these seem to work right: ``` width: 170px; bottom: 18px; left:50%; transform: translateX(-50%); border-radius: 0 0 12px 12px; ``` This is done for you here: <https://codepen.io/anon/pen/YEbMYB>
Not too sure, but looks like you are just missing coordonates and text-align. update example for `#header-content`: ``` bottom: 30px; left:24px; right:24px; text-align:center; ``` and eventually ``` border-radius:0 0 10px 10px; text-shadow:1px 1px white; ``` --- Demo below: ```css .elixir { position: absolute; top: 5px; left: 15px; color: white; font-weight: bold; font-size: 50px; } .level { color: white; font: bold 24px/45px Helvetica, Sans-Serif; letter-spacing: -1px; background: rgb(0, 0, 0); /* fallback color */ background: rgba(218, 21, 225); padding: 10px; text-align: center; } #header { position: relative; background: yellow; width: 200px; } #header-content { position: absolute; bottom: 30px; left:24px; right:24px; text-align:center; border-radius:0 0 10px 10px; text-shadow:2px 2px black, -1px -1px black;; color: Violet ; font: bold 24px/45px Helvetica, Sans-Serif; letter-spacing: -1px; background: rgb(0, 0, 0); /* fallback color */ background: rgba(218, 21, 225,0.5); /* padding:10px; width: 100%; */ line-height: 40px; } ``` ```html <div id="header"> <img src="https://vignette.wikia.nocookie.net/clashroyale/images/4/46/DarkPrinceCard.png/revision/latest?cb=20160702201038" /> <span class="elixir"> 4 </span> <span id="header-content"> Level 2 </span> </div> ```
54,298,939
Is there a way to split a python string without using a for loop that basically splits a string in the middle to the closest delimiter. Like: ``` The cat jumped over the moon very quickly. ``` The delimiter would be the space and the resulting strings would be: ``` The cat jumped over the moon very quickly. ``` I see there is a `count` where I can see how many spaces are in there (Don't see how to return their indexes though). I could then find the middle one by dividing by two, but then how to say split on this delimiter at this index. Find is close but it returns the first index (or right first index using rfind) not all the indexes where " " is found. I might be over thinking this.
2019/01/21
[ "https://Stackoverflow.com/questions/54298939", "https://Stackoverflow.com", "https://Stackoverflow.com/users/327258/" ]
I think the solutions using split are good. I tried to solve it without `split` and here's what I came up with.

```
sOdd = "The cat jumped over the moon very quickly."
sEven = "The cat jumped over the moon very quickly now."

def split_on_delim_mid(s, delim=" "):
    delim_indexes = [ x[0] for x in enumerate(s) if x[1]==delim ]
    # [3, 7, 14, 19, 23, 28, 33]

    # Select the correct number from delim_indexes
    middle = len(delim_indexes)/2
    if middle % 2 == 0:
        middle_index = middle
    else:
        middle_index = (middle-.5)

    # Return the separated sentences
    sep = delim_indexes[int(middle_index)]
    return s[:sep], s[sep:]

split_on_delim_mid(sOdd)
# ('The cat jumped over', ' the moon very quickly.')

split_on_delim_mid(sEven)
# ('The cat jumped over the', ' moon very quickly now.')
```

The idea here is to:

* Find the indexes of the delimiter.
* Find the median of that list of indexes.
* Split on that.
Solutions with `split()` and `join()` are fine if you want to get half the words, not half the string (counting the characters and not the words). I think the latter is impossible without a `for` loop or a list comprehension (or an expensive workaround such as recursion to find the indexes of the spaces, maybe). But if you are fine with a list comprehension, you could do:

```
phrase = "The cat jumped over the moon very quickly."

#indexes of separator, here the ' '
sep_idxs = [i for i, j in enumerate(phrase) if j == ' ']
#getting the separator index closer to half the length of the string
sep = min(sep_idxs, key=lambda x:abs(x-(len(phrase) // 2)))
first_half = phrase[:sep]
last_half = phrase[sep+1:]

print([first_half, last_half])
```

Here first I look for the indexes of the separator with the list comprehension. Then I find the index of the separator closest to half of the string, using a custom key for the [min()](https://docs.python.org/3.6/library/functions.html#min) built-in function. Then split.

The `print` statement prints `['The cat jumped over', 'the moon very quickly.']`
54,298,939
Is there a way to split a python string without using a for loop that basically splits a string in the middle to the closest delimiter. Like: ``` The cat jumped over the moon very quickly. ``` The delimiter would be the space and the resulting strings would be: ``` The cat jumped over the moon very quickly. ``` I see there is a `count` where I can see how many spaces are in there (Don't see how to return their indexes though). I could then find the middle one by dividing by two, but then how to say split on this delimiter at this index. Find is close but it returns the first index (or right first index using rfind) not all the indexes where " " is found. I might be over thinking this.
2019/01/21
[ "https://Stackoverflow.com/questions/54298939", "https://Stackoverflow.com", "https://Stackoverflow.com/users/327258/" ]
I think the solutions using split are good. I tried to solve it without `split` and here's what I came up with. ``` sOdd = "The cat jumped over the moon very quickly." sEven = "The cat jumped over the moon very quickly now." def split_on_delim_mid(s, delim=" "): delim_indexes = [ x[0] for x in enumerate(s) if x[1]==delim ] # [3, 7, 14, 19, 23, 28, 33] # Select the correct number from delim_indexes middle = len(delim_indexes)/2 if middle % 2 == 0: middle_index = middle else: middle_index = (middle-.5) # Return the separated sentences sep = delim_indexes[int(middle_index)] return s[:sep], s[sep:] split_on_delim_mid(sOdd) # ('The cat jumped over', ' the moon very quickly.') split_on_delim_mid(sEven) # ('The cat jumped over the', ' moon very quickly now.') ``` The idea here is to: * Find the indexes of the delimiter. * Find the median of that list of indexes. * Split on that.
As Valentino says, the answer depends on whether you want to split the number of characters as evenly as possible or the number of words as evenly as possible: `split()`-based methods will do the latter. Here's a way to do the former without looping or list comprehension. `delim` can be any single character. This method just wouldn't work if you want a longer delimiter, since in that case it needn't be wholly in the first half or wholly in the second half. ``` def middlesplit(s,delim=" "): if delim not in s: return (s,) midpoint=(len(s)+1)//2 left=s[:midpoint].rfind(delim) right=s[:midpoint-1:-1].rfind(delim) if right>left: return (s[:-right-1],s[-right:]) else: return (s[:left],s[left+1:]) ``` The reason for using `rfind()` rather than `find()` is so that you can choose the larger result, making sure you avoid the `-1` if only one side of your string contains `delim`.
54,298,939
Is there a way to split a python string without using a for loop that basically splits a string in the middle to the closest delimiter. Like: ``` The cat jumped over the moon very quickly. ``` The delimiter would be the space and the resulting strings would be: ``` The cat jumped over the moon very quickly. ``` I see there is a `count` where I can see how many spaces are in there (Don't see how to return their indexes though). I could then find the middle one by dividing by two, but then how to say split on this delimiter at this index. Find is close but it returns the first index (or right first index using rfind) not all the indexes where " " is found. I might be over thinking this.
2019/01/21
[ "https://Stackoverflow.com/questions/54298939", "https://Stackoverflow.com", "https://Stackoverflow.com/users/327258/" ]
This should work: ``` def split_text(text): middle = len(text)//2 under = text.rfind(" ", 0, middle) over = text.find(" ", middle) if under == -1 and over == -1: raise ValueError("No separator found in text '{}'".format(text)) if over == -1 or (under != -1 and middle - under <= over - middle): return (text[:under], text[under+1:]) return (text[:over], text[over+1:]) ``` It splits at whichever space is closer to the middle and does not use a for loop, although a plain for loop would probably perform just as well. I handle the case where the separator is not found in the whole string by raising an error, but you can change the `raise ValueError()` to whatever way you want to handle that case.
You can use `min` to find the closest space to the middle and then slice the string. ``` s = "The cat jumped over the moon very quickly." mid = min((i for i, c in enumerate(s) if c == ' '), key=lambda i: abs(i - len(s) // 2)) fst, snd = s[:mid], s[mid+1:] print(fst) print(snd) ``` ### Output ``` The cat jumped over the moon very quickly. ```
54,298,939
Is there a way to split a python string without using a for loop that basically splits a string in the middle to the closest delimiter. Like: ``` The cat jumped over the moon very quickly. ``` The delimiter would be the space and the resulting strings would be: ``` The cat jumped over the moon very quickly. ``` I see there is a `count` where I can see how many spaces are in there (Don't see how to return their indexes though). I could then find the middle one by dividing by two, but then how to say split on this delimiter at this index. Find is close but it returns the first index (or right first index using rfind) not all the indexes where " " is found. I might be over thinking this.
2019/01/21
[ "https://Stackoverflow.com/questions/54298939", "https://Stackoverflow.com", "https://Stackoverflow.com/users/327258/" ]
How about something like this: ``` s = "The cat jumped over the moon very quickly" l = s.split() s1 = ' '.join(l[:len(l)//2]) s2 = ' '.join(l[len(l)//2 :]) print(s1) print(s2) ```
As Valentino says, the answer depends on whether you want to split the number of characters as evenly as possible or the number of words as evenly as possible: `split()`-based methods will do the latter. Here's a way to do the former without looping or list comprehension. `delim` can be any single character. This method just wouldn't work if you want a longer delimiter, since in that case it needn't be wholly in the first half or wholly in the second half. ``` def middlesplit(s,delim=" "): if delim not in s: return (s,) midpoint=(len(s)+1)//2 left=s[:midpoint].rfind(delim) right=s[:midpoint-1:-1].rfind(delim) if right>left: return (s[:-right-1],s[-right:]) else: return (s[:left],s[left+1:]) ``` The reason for using `rfind()` rather than `find()` is so that you can choose the larger result, making sure you avoid the `-1` if only one side of your string contains `delim`.
54,298,939
Is there a way to split a python string without using a for loop that basically splits a string in the middle to the closest delimiter. Like: ``` The cat jumped over the moon very quickly. ``` The delimiter would be the space and the resulting strings would be: ``` The cat jumped over the moon very quickly. ``` I see there is a `count` where I can see how many spaces are in there (Don't see how to return their indexes though). I could then find the middle one by dividing by two, but then how to say split on this delimiter at this index. Find is close but it returns the first index (or right first index using rfind) not all the indexes where " " is found. I might be over thinking this.
2019/01/21
[ "https://Stackoverflow.com/questions/54298939", "https://Stackoverflow.com", "https://Stackoverflow.com/users/327258/" ]
This should work: ``` def split_text(text): middle = len(text)//2 under = text.rfind(" ", 0, middle) over = text.find(" ", middle) if under == -1 and over == -1: raise ValueError("No separator found in text '{}'".format(text)) if over == -1 or (under != -1 and middle - under <= over - middle): return (text[:under], text[under+1:]) return (text[:over], text[over+1:]) ``` It splits at whichever space is closer to the middle and does not use a for loop, although a plain for loop would probably perform just as well. I handle the case where the separator is not found in the whole string by raising an error, but you can change the `raise ValueError()` to whatever way you want to handle that case.
How about something like this: ``` s = "The cat jumped over the moon very quickly" l = s.split() s1 = ' '.join(l[:len(l)//2]) s2 = ' '.join(l[len(l)//2 :]) print(s1) print(s2) ```
54,298,939
Is there a way to split a python string without using a for loop that basically splits a string in the middle to the closest delimiter. Like: ``` The cat jumped over the moon very quickly. ``` The delimiter would be the space and the resulting strings would be: ``` The cat jumped over the moon very quickly. ``` I see there is a `count` where I can see how many spaces are in there (Don't see how to return their indexes though). I could then find the middle one by dividing by two, but then how to say split on this delimiter at this index. Find is close but it returns the first index (or right first index using rfind) not all the indexes where " " is found. I might be over thinking this.
2019/01/21
[ "https://Stackoverflow.com/questions/54298939", "https://Stackoverflow.com", "https://Stackoverflow.com/users/327258/" ]
This should work: ``` def split_text(text): middle = len(text)//2 under = text.rfind(" ", 0, middle) over = text.find(" ", middle) if under == -1 and over == -1: raise ValueError("No separator found in text '{}'".format(text)) if over == -1 or (under != -1 and middle - under <= over - middle): return (text[:under], text[under+1:]) return (text[:over], text[over+1:]) ``` It splits at whichever space is closer to the middle and does not use a for loop, although a plain for loop would probably perform just as well. I handle the case where the separator is not found in the whole string by raising an error, but you can change the `raise ValueError()` to whatever way you want to handle that case.
As Valentino says, the answer depends on whether you want to split the number of characters as evenly as possible or the number of words as evenly as possible: `split()`-based methods will do the latter. Here's a way to do the former without looping or list comprehension. `delim` can be any single character. This method just wouldn't work if you want a longer delimiter, since in that case it needn't be wholly in the first half or wholly in the second half. ``` def middlesplit(s,delim=" "): if delim not in s: return (s,) midpoint=(len(s)+1)//2 left=s[:midpoint].rfind(delim) right=s[:midpoint-1:-1].rfind(delim) if right>left: return (s[:-right-1],s[-right:]) else: return (s[:left],s[left+1:]) ``` The reason for using `rfind()` rather than `find()` is so that you can choose the larger result, making sure you avoid the `-1` if only one side of your string contains `delim`.
54,298,939
Is there a way to split a python string without using a for loop that basically splits a string in the middle to the closest delimiter. Like: ``` The cat jumped over the moon very quickly. ``` The delimiter would be the space and the resulting strings would be: ``` The cat jumped over the moon very quickly. ``` I see there is a `count` where I can see how many spaces are in there (Don't see how to return their indexes though). I could then find the middle one by dividing by two, but then how to say split on this delimiter at this index. Find is close but it returns the first index (or right first index using rfind) not all the indexes where " " is found. I might be over thinking this.
2019/01/21
[ "https://Stackoverflow.com/questions/54298939", "https://Stackoverflow.com", "https://Stackoverflow.com/users/327258/" ]
This should work: ``` def split_text(text): middle = len(text)//2 under = text.rfind(" ", 0, middle) over = text.find(" ", middle) if under == -1 and over == -1: raise ValueError("No separator found in text '{}'".format(text)) if over == -1 or (under != -1 and middle - under <= over - middle): return (text[:under], text[under+1:]) return (text[:over], text[over+1:]) ``` It splits at whichever space is closer to the middle and does not use a for loop, although a plain for loop would probably perform just as well. I handle the case where the separator is not found in the whole string by raising an error, but you can change the `raise ValueError()` to whatever way you want to handle that case.
Solutions with `split()` and `join()` are fine if you want to get half the words, not half the string (counting the characters and not the words). I think the latter is impossible without a `for` loop or a list comprehension (or an expensive workaround such as a recursion to find the indexes of the spaces, maybe). But if you are fine with a list comprehension, you could do: ``` phrase = "The cat jumped over the moon very quickly." #indexes of separator, here the ' ' sep_idxs = [i for i, j in enumerate(phrase) if j == ' '] #getting the separator index closest to half the length of the string sep = min(sep_idxs, key=lambda x:abs(x-(len(phrase) // 2))) first_half = phrase[:sep] last_half = phrase[sep+1:] print([first_half, last_half]) ``` Here, first I look for the indexes of the separator with the list comprehension. Then I find the index of the separator closest to half the length of the string using a custom key for the [min()](https://docs.python.org/3.6/library/functions.html#min) built-in function. Then split. The `print` statement prints `['The cat jumped over', 'the moon very quickly.']`
54,298,939
Is there a way to split a python string without using a for loop that basically splits a string in the middle to the closest delimiter. Like: ``` The cat jumped over the moon very quickly. ``` The delimiter would be the space and the resulting strings would be: ``` The cat jumped over the moon very quickly. ``` I see there is a `count` where I can see how many spaces are in there (Don't see how to return their indexes though). I could then find the middle one by dividing by two, but then how to say split on this delimiter at this index. Find is close but it returns the first index (or right first index using rfind) not all the indexes where " " is found. I might be over thinking this.
2019/01/21
[ "https://Stackoverflow.com/questions/54298939", "https://Stackoverflow.com", "https://Stackoverflow.com/users/327258/" ]
I'd just split then rejoin: ``` text = "The cat jumped over the moon very quickly" words = text.split() first_half = " ".join(words[:len(words)//2]) ```
As Valentino says, the answer depends on whether you want to split the number of characters as evenly as possible or the number of words as evenly as possible: `split()`-based methods will do the latter. Here's a way to do the former without looping or list comprehension. `delim` can be any single character. This method just wouldn't work if you want a longer delimiter, since in that case it needn't be wholly in the first half or wholly in the second half. ``` def middlesplit(s,delim=" "): if delim not in s: return (s,) midpoint=(len(s)+1)//2 left=s[:midpoint].rfind(delim) right=s[:midpoint-1:-1].rfind(delim) if right>left: return (s[:-right-1],s[-right:]) else: return (s[:left],s[left+1:]) ``` The reason for using `rfind()` rather than `find()` is so that you can choose the larger result, making sure you avoid the `-1` if only one side of your string contains `delim`.
54,298,939
Is there a way to split a python string without using a for loop that basically splits a string in the middle to the closest delimiter. Like: ``` The cat jumped over the moon very quickly. ``` The delimiter would be the space and the resulting strings would be: ``` The cat jumped over the moon very quickly. ``` I see there is a `count` where I can see how many spaces are in there (Don't see how to return their indexes though). I could then find the middle one by dividing by two, but then how to say split on this delimiter at this index. Find is close but it returns the first index (or right first index using rfind) not all the indexes where " " is found. I might be over thinking this.
2019/01/21
[ "https://Stackoverflow.com/questions/54298939", "https://Stackoverflow.com", "https://Stackoverflow.com/users/327258/" ]
This should work: ``` def split_text(text): middle = len(text)//2 under = text.rfind(" ", 0, middle) over = text.find(" ", middle) if under == -1 and over == -1: raise ValueError("No separator found in text '{}'".format(text)) if over == -1 or (under != -1 and middle - under <= over - middle): return (text[:under], text[under+1:]) return (text[:over], text[over+1:]) ``` It splits at whichever space is closer to the middle and does not use a for loop, although a plain for loop would probably perform just as well. I handle the case where the separator is not found in the whole string by raising an error, but you can change the `raise ValueError()` to whatever way you want to handle that case.
I think the solutions using split are good. I tried to solve it without `split` and here's what I came up with. ``` sOdd = "The cat jumped over the moon very quickly." sEven = "The cat jumped over the moon very quickly now." def split_on_delim_mid(s, delim=" "): delim_indexes = [ x[0] for x in enumerate(s) if x[1]==delim ] # [3, 7, 14, 19, 23, 28, 33] # Select the correct number from delim_indexes middle = len(delim_indexes)/2 if middle % 2 == 0: middle_index = middle else: middle_index = (middle-.5) # Return the separated sentences sep = delim_indexes[int(middle_index)] return s[:sep], s[sep:] split_on_delim_mid(sOdd) # ('The cat jumped over', ' the moon very quickly.') split_on_delim_mid(sEven) # ('The cat jumped over the', ' moon very quickly now.') ``` The idea here is to: * Find the indexes of the delimiter. * Find the median of that list of indexes. * Split on that.
54,298,939
Is there a way to split a python string without using a for loop that basically splits a string in the middle to the closest delimiter. Like: ``` The cat jumped over the moon very quickly. ``` The delimiter would be the space and the resulting strings would be: ``` The cat jumped over the moon very quickly. ``` I see there is a `count` where I can see how many spaces are in there (Don't see how to return their indexes though). I could then find the middle one by dividing by two, but then how to say split on this delimiter at this index. Find is close but it returns the first index (or right first index using rfind) not all the indexes where " " is found. I might be over thinking this.
2019/01/21
[ "https://Stackoverflow.com/questions/54298939", "https://Stackoverflow.com", "https://Stackoverflow.com/users/327258/" ]
Solutions with `split()` and `join()` are fine if you want to get half the words, not half the string (counting the characters and not the words). I think the latter is impossible without a `for` loop or a list comprehension (or an expensive workaround such as a recursion to find the indexes of the spaces, maybe). But if you are fine with a list comprehension, you could do: ``` phrase = "The cat jumped over the moon very quickly." #indexes of separator, here the ' ' sep_idxs = [i for i, j in enumerate(phrase) if j == ' '] #getting the separator index closest to half the length of the string sep = min(sep_idxs, key=lambda x:abs(x-(len(phrase) // 2))) first_half = phrase[:sep] last_half = phrase[sep+1:] print([first_half, last_half]) ``` Here, first I look for the indexes of the separator with the list comprehension. Then I find the index of the separator closest to half the length of the string using a custom key for the [min()](https://docs.python.org/3.6/library/functions.html#min) built-in function. Then split. The `print` statement prints `['The cat jumped over', 'the moon very quickly.']`
As Valentino says, the answer depends on whether you want to split the number of characters as evenly as possible or the number of words as evenly as possible: `split()`-based methods will do the latter. Here's a way to do the former without looping or list comprehension. `delim` can be any single character. This method just wouldn't work if you want a longer delimiter, since in that case it needn't be wholly in the first half or wholly in the second half. ``` def middlesplit(s,delim=" "): if delim not in s: return (s,) midpoint=(len(s)+1)//2 left=s[:midpoint].rfind(delim) right=s[:midpoint-1:-1].rfind(delim) if right>left: return (s[:-right-1],s[-right:]) else: return (s[:left],s[left+1:]) ``` The reason for using `rfind()` rather than `find()` is so that you can choose the larger result, making sure you avoid the `-1` if only one side of your string contains `delim`.
4,184,841
I need to color the white part surrounded by black edges! ``` from PIL import Image import sys image=Image.open("G:/ghgh.bmp") data=image.load() image_width,image_height=image.size sys.setrecursionlimit(10115) def f(x,y): if(x<image_width and y<image_height and x>0 and y>0): if (data[x,y]==255): image.putpixel((x,y),150) f(x+1,y) f(x-1,y) f(x,y+1) f(x,y-1) f(x+1,y+1) f(x-1,y-1) f(x+1,y-1) f(x-1,y+1) f(100,100) image.show() ``` 255 is to detect white color, 150 is used to re-color greyish, and (100,100) is the starting pixel. It's giving "Max recursion depth" at n=10114 and python crashes on n=10115 (the `setrecursionlimit(n)`).
2010/11/15
[ "https://Stackoverflow.com/questions/4184841", "https://Stackoverflow.com", "https://Stackoverflow.com/users/508292/" ]
Some warnings provide a description, others may not. Maybe the documentation is just incomplete or the authors figure some warnings don't need additional clarification?
If the StyleCop check is being run during a build (MSBuild/VS build), the show help option won't appear; however, if you right-click on the solution and run the StyleCop check from there, it will have the show help option. Hope that helps.
29,775,006
I've been writing automated tests with Selenium Webdriver 2.45 in Python. To get through some of the things I need to test, I must retrieve the various `JSESSION` cookies that are generated by the site. When I use webdriver's `get_cookies()` function with Firefox or Chrome, all of the needed cookies are returned to me. When I do the same thing with IE11, I do not see the cookies that I need. Does anyone know how I can retrieve session cookies from IE?
2015/04/21
[ "https://Stackoverflow.com/questions/29775006", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3249517/" ]
What you describe sounds like an issue I ran into a few months ago. My tests ran fine with Chrome and Firefox but not in IE, and the problem was cookies. Upon investigation what I found is that my web site had set its session cookies to be [HTTP-only](https://en.wikipedia.org/wiki/HTTP_cookie#Secure_and_HttpOnly). When a cookie has this flag turned on, the browser will send the cookie over the HTTP(S) protocol and allow it to be set by the server in responses but it will make the cookie inaccessible to JavaScript. (Which is consistent with your comment that you cannot see the cookies you want in `document.cookie`.) It so happens that when you use Selenium with Chrome or Firefox, Selenium is able to ignore this flag and obtain the cookies from the browser anyway. However, it cannot do the same with IE. I worked around this issue by turning off the HTTP-only flag when running my site in testing mode. I use Django for my server so I had to create a special `test_settings.py` file with `SESSION_COOKIE_HTTPONLY = False` in it.
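For illustration, here is a minimal sketch of what such a test-only Django settings module could look like. The module name and the base settings import are assumptions for this example, not details from the original answer:

```
# test_settings.py - hypothetical test-only override of the normal settings
from myproject.settings import *  # reuse the regular project settings

# Let Selenium (and JavaScript) read the session cookie while tests run.
# Keep HTTP-only enabled in production, since it is a defence against XSS.
SESSION_COOKIE_HTTPONLY = False
```

Running the site under test with this module (for example `python manage.py runserver --settings=test_settings`) should then let `get_cookies()` see the session cookie in IE as well.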
There is an open issue with IE and Safari. Those drivers will not return correct cookie information, at least not the domain. See [this](https://code.google.com/p/selenium/issues/detail?id=8509)
60,992,133
I am trying to get my PTZ camera to stream using Python 3 and OpenCV. The URL I use in the code works with a VLC stream but not with the code. ``` import cv2 import numpy as np cap = cv2.VideoCapture(src="rtsp://USER:PASS@XX.XXX.XXX.XXX:XXX/Streaming/Channels/101/") FRAME_WIDTH = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) FRAME_HIGTH = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) print('Frame Size: ', FRAME_WIDTH, 'x', FRAME_HIGTH) if cap.isOpened(): ret, frame = cap.read() else: ret = False while ret: ret, frame = cap.read() cv2.imshow('Camera', frame) if cv2.waitKey(1) & 0xFF == ord('q'): break cap.release() cv2.destroyAllWindows() ``` When I run it I get the following error: ``` Traceback (most recent call last): File "C:/Users/.../CameraTest/TEST.py", line 4, in <module> cap = cv2.VideoCapture(src="rtsp://.../Streaming/Channels/101/") TypeError: Required argument 'index' (pos 1) not found ``` This is a HIKVISION PTZ camera. Can I please get any tips on how to get it to stream? Thanks in advance.
2020/04/02
[ "https://Stackoverflow.com/questions/60992133", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13198103/" ]
One small change: remove **src=** from the cv2.VideoCapture() call. It should look like this: ``` cap = cv2.VideoCapture("rtsp://USER:PASS@XX.XXX.XXX.XXX:XXX/Streaming/Channels/101/") ```
This is working for me with a Hikvision camera. Avoid using special characters in the password. ``` import cv2 cap = cv2.VideoCapture('rtsp://username:password@10.199.27.123:554') while True: ret, img = cap.read() cv2.imshow('video output', img) k = cv2.waitKey(10)& 0xff if k == 27: break cap.release() cv2.destroyAllWindows() ```
67,740,665
The Script, running on a Linux host, should call some Windows hosts holding Oracle Databases. Each Oracle Database is in DNS with its name "db-[ORACLE\_SID]". Lets say you have a database with ORACLE SID `TEST02`, it can be resolved as `db-TEST02`. The complete script is doing some more stuff, but this example is sufficient to explain the problem. The `db-[SID]` hostnames must be added as dynamic hosts to be able to parallelize the processing. The problem is that oracle\_databases is not passed to the new playbook. It works if I change the hosts from `windows` to `localhost`, but I need to analyze something first and get some data from the windows hosts, so this is not an option. Here is the script: ```yaml --- # ansible-playbook parallel.yml -e "databases=TEST01,TEST02,TEST03" - hosts: windows gather_facts: false vars: ansible_connection: winrm ansible_port: 5985 ansible_winrm_transport: kerberos ansible_winrm_kerberos_delegation: true tasks: - set_fact: database: "{{ databases.split(',') }}" - name: Add databases as hosts, to parallelize the shutdown process add_host: name: "db-{{ item }}" groups: oracle_databases loop: "{{ database | list}}" ##### just to check, what is in oracle_databases - name: show the content of oracle_databases debug: msg: "{{ item }}" with_inventory_hostnames: - oracle_databases - hosts: oracle_databases gather_facts: true tasks: - debug: msg: - "Hosts, on which the playbook is running: {{ ansible_play_hosts }}" verbosity: 1 ``` My inventory file is just small, but there will be more windows hosts in future: ``` [adminsw1@obelix oracle_change_home]$ cat inventory [local] localhost [windows] windows68 ``` And the output ``` [adminsw1@obelix oracle_change_home]$ ansible-playbook para.yml -l windows68 -e "databases=TEST01,TEST02" /usr/lib/python2.7/site-packages/ansible/parsing/vault/__init__.py:44: CryptographyDeprecationWarning: Python 2 is no longer supported by the Python core team. Support for it is now deprecated in cryptography, and will be removed in a future release. from cryptography.exceptions import InvalidSignature /usr/lib/python2.7/site-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.23) or chardet (2.2.1) doesn't match a supported version! RequestsDependencyWarning) PLAY [windows] ***************************************************************************************************************************** TASK [set_fact] **************************************************************************************************************************** ok: [windows68] TASK [Add databases as hosts, to parallelize the shutdown process] ************************************************************************* changed: [windows68] => (item=TEST01) changed: [windows68] => (item=TEST02) TASK [show the content of oracle_databases] ************************************************************************************************ ok: [windows68] => (item=db-TEST01) => { "msg": "db-TEST01" } ok: [windows68] => (item=db-TEST02) => { "msg": "db-TEST02" } PLAY [oracle_databases] ******************************************************************************************************************** skipping: no hosts matched PLAY RECAP ********************************************************************************************************************************* windows68 : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ```
2021/05/28
[ "https://Stackoverflow.com/questions/67740665", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15623827/" ]
The first thing that comes to mind for me would be to use the append feature on Python file handlers. You could do something like this for each line of text: ```py def writecond(text, cond): fname = cond + '.txt' with open(fname, 'a') as file: file.write(text) ``` Another thing you could do is have a `dict` which maps your condition text to a list of open file handlers (although I think there might be a hard limit to the number of handlers you can have on some systems), but just be sure to close all of them before your function exits! EDIT: If you want the dictionary case, here's the code for that: ```py fh_assign = {} def writeline(text, condition): if condition not in fh_assign.keys(): fh = open(f'{condition}.txt', 'w') fh.write(text) fh_assign[condition] = fh else: fh_assign[condition].write(text) ``` Once you're done with the calls to `writeline`, just iterate through the dictionary as follows and close all the connections. ```py for fh in fh_assign.values(): fh.close() ```
I figured out one option of handling the file objects dynamically and keeping track of them. ``` file_handler = {} with open(file) as f: for line in f: key = line.split()[1] if key not in file_handler: file_handler[key] = open(key, "w") file_handler[key].write(line.split()[0]) for key in file_handler.keys(): file_handler[key].close() ```
74,195,370
Hello, I'm new to Python and I want to know how to take multiple string inputs into a list. I already tried to append the input to the list, but it doesn't give me the expected output. Here is the source code: ``` test=[] input1=input("Enter multiple strings: ") splitinput1=input1.split() for x in range(len(splitinput1)): test.append(splitinput1) print(test) print(len(test)) ``` And the output is not what I expected: ``` Enter multiple strings: A B C [['A', 'B', 'C'], ['A', 'B', 'C'], ['A', 'B', 'C']] 3 ``` However, when I change it to `print(splitinput1)`, it gives me the expected output: ``` Enter multiple strings: A B C ['A', 'B', 'C'] 3 ``` So how do I make the output look like `print(splitinput1)` while using `print(test)`, and what's missing in my code? Thank you.
2022/10/25
[ "https://Stackoverflow.com/questions/74195370", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19363291/" ]
> > reducing vertex data down to 32bits per vertex is as far as the GPU will allow > > > You seem to think that vertex buffer sizes are what's holding you back. Make no mistake here, they are not. You have many gigs of VRAM to work with, use them if it will make your code faster. Specifically, anything you're unpacking in your shaders that could otherwise be stored explicitly in your vertex buffer should probably be stored in your vertex buffer. > > I am wondering if anyone has experience with using geometry shaders to auto-generate quads > > > I'll stop you right there, geometry shaders are very inefficient in most driver implementations, even today. They just aren't used that much so nobody bothered to optimize them. One quick thing that jumps at me is that you're allocating and freeing your system-side vertex array every frame. Building it is fine, but cache the array, C memory allocation is about as slow as anything is going to get. A quick profiling should have shown you that. Your next biggest problem is that you have *a lot* of branching in your pixel shader. Use standard functions (like `clamp` or `mix`) or blending to let the math cancel out instead of checking for ranges or fully transparent values. Branching will absolutely kill performance. And lastly, make sure you have the correct hints and usage on your buffers. You don't show them, but they should be set to whatever the equivalent of `GL_STREAM_DRAW` is, and you need to ensure you don't corrupt the in-flight parts of your vertex buffer. Future frames will render at the same time as the current one as long as you don't invalidate their data by overwriting their vertex buffer, so instead use a round-robin scheme to allow as many vertices as possible to survive (again, use memory for performance). Personally I allocate a very large vertex buffer (5x the data a frame needs) and write it sequentially until I reach the end, at which point I orphan the whole thing and re-allocate it and start from the beginning again.
I think your code is CPU bound. While your approach has very small vertices, you have non-trivial API overhead. A better approach is rendering all quads with a single draw call. I would probably use instancing for that. Assuming you want arbitrary per-quad size, position, and orientation in 3D space, here’s one possible approach. Untested. Vertex buffer elements: ``` struct sInstanceData { // Center of the quad in 3D space XMFLOAT3 center; // XY coordinates of the sprite in the atlas uint16_t spriteX, spriteY; // Local XY vectors of the quad in 3D space // length of the vectors = half width/height of the quad XMFLOAT3 plusX, plusY; }; ``` Input layout: ``` D3D11_INPUT_ELEMENT_DESC desc[ 4 ]; desc[ 0 ] = D3D11_INPUT_ELEMENT_DESC{ "QuadCenter", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_INSTANCE_DATA, 0 }; desc[ 1 ] = D3D11_INPUT_ELEMENT_DESC{ "SpriteIndex", 0, DXGI_FORMAT_R16G16_UINT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_INSTANCE_DATA, 0 }; desc[ 2 ] = D3D11_INPUT_ELEMENT_DESC{ "QuadPlusX", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_INSTANCE_DATA, 0 }; desc[ 3 ] = D3D11_INPUT_ELEMENT_DESC{ "QuadPlusY", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_INSTANCE_DATA, 0 }; ``` Vertex shader: ``` cbuffer Constants { matrix viewProj; // Pass [ 1.0 / xSegs, 1.0 / ySegs ] in that field float2 texcoordMul; }; struct VOut { float3 position : POSITION; float3 n : NORMAL; float2 texcoord : TEXCOORD; float4 pos : SV_Position; }; VOut main( uint index: SV_VertexID, float3 center : QuadCenter, uint2 texcoords : SpriteIndex, float3 plusX : QuadPlusX, float3 plusY : QuadPlusY ) { VOut result; float3 pos = center; int2 uv = ( int2 )texcoords; // No branches are generated in release builds; // only conditional moves are there if( index & 1 ) { pos += plusX; uv.x++; } else pos -= plusX; if( index & 2 ) { pos += plusY; uv.y++; } else pos -= plusY; result.position = pos; result.n = normalize( cross( plusX, plusY ) ); result.texcoord = ( ( float2 )uv ) * texcoordMul; result.pos = mul( float4( pos, 1.0f ), viewProj ); return result; } ``` Rendering: ``` UINT stride = sizeof( sInstanceData ); UINT off = 0; context->IASetVertexBuffers( 0, 1, &vb, &stride, &off ); context->IASetPrimitiveTopology( D3D_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP ); context->DrawInstanced( 4, countQuads, 0, 0 ); ```
48,702,330
Code: ``` a={'day': [{'average_price': 9.3, 'buy_m2m': 9.3, 'buy_price': 9.3, 'buy_quantity': 1, 'buy_value': 9.3, 'close_price': 0, 'exchange': 'NSE', 'instrument_token': 2867969, 'last_price': 9.3, 'm2m': 0.0, 'multiplier': 1, 'net_buy_amount_m2m': 9.3, 'net_sell_amount_m2m': 0, 'overnight_quantity': 0, 'pnl': 0.0, 'product': 'MIS', 'quantity': 1, 'realised': 0, 'sell_m2m': 0, 'sell_price': 0, 'sell_quantity': 0, 'sell_value': 0, 'tradingsymbol': 'SUBEX', 'unrealised': 0.0, 'value': -9.3}], 'net': [{'average_price': 9.3, 'buy_m2m': 9.3, 'buy_price': 9.3, 'buy_quantity': 1, 'buy_value': 9.3, 'close_price': 0, 'exchange': 'NSE', 'instrument_token': 2867969, 'last_price': 9.3, 'm2m': 0.0, 'multiplier': 1, 'net_buy_amount_m2m': 9.3, 'net_sell_amount_m2m': 0, 'overnight_quantity': 0, 'pnl': 0.0, 'product': 'MIS', 'quantity': 1, 'realised': 0, 'sell_m2m': 0, 'sell_price': 0, 'sell_quantity': 0, 'sell_value': 0, 'tradingsymbol': 'SUBEX', 'unrealised': 0.0, 'value': -9.3}]} b= a['day'] ``` `a` shows dict type variable in python. I want to assign value of `buy_price` which is `9.3` to variable ``x and value of `instrument_token` which is `2867969` to variable `y`. Now problem is after using `b=a['day']`, `b` variable becomes list in python so I can not use `x=b['buy_price']` to get `x=9.3`.
2018/02/09
[ "https://Stackoverflow.com/questions/48702330", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9337404/" ]
`Footer` is not inside a section element, so your selector won't work. ```css .mypage :not(footer) p{ color:red } ``` ```html <body class="mypage"> <section> <p>Hello world</p> </section> <footer> <p>footer content</p> </footer> </body> ```
Why not target the footer again? ``` .mypage section p{color: red} .mypage footer p{color: blue} <body class="mypage"> <section> <p>Hello world</p> </section> <footer> <p>footer content</p> </footer> </body> ```
37,297,276
I have a data frame ``` df=data.frame(f=c('a','ab','abc'),v=1:3) ``` and make a new column with: ``` df$c=paste(df$v,df$f,sep='') ``` the result is ``` > df f v c 1 a 1 1a 2 ab 2 2ab 3 abc 3 3abc ``` I would like column c to be in this format: ``` > df f v c 1 a 1 1 a 2 ab 2 2 ab 3 abc 3 3abc ``` such that the total length of the concatenated values is a fixed number (in this case 4 characters) and to fill it with a chosen character, such as | (in this case \w). Is there a function like this in R? I think it is similar to the `zfill` function in Python, but I am not a Python programmer, and would prefer to stay in R as opposed to switching between languages for processing. Ultimately, I am creating a supervariable of 10 columns, and think this would help in downstream processing. I guess it would be in the paste function, but I am not sure how to 'fill a factor' so that it is of a fixed width.
2016/05/18
[ "https://Stackoverflow.com/questions/37297276", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2123706/" ]
You can use the `format()` function to pretty print the values of your column. For example: ``` > format(df$f, width = 3, justify = "right") [1] " a" " ab" "abc" ``` So your code should be: ``` df <- within(df, { c <- paste0(v, format(f, width = 3, justify = "right")) }) df ``` The result: ``` > df f v c 1 a 1 1 a 2 ab 2 2 ab 3 abc 3 3abc ```
You can use the `formatC` function as follows: ``` df$c <- paste(df$v, formatC(as.character(df$f), width = 3, flag = " "), sep = "") df f v c 1 a 1 1 a 2 ab 2 2 ab 3 abc 3 3abc ``` **DATA** ``` df <- data.frame(f = c('a','ab','abc'), v=1:3) ```
40,397,657
So, Im a complete newb when it comes to programming. I have been watching tutorials and I am reading a book on how to program python. So, I want to create a number generator guesser on my own and I have watched some tutorials on it but I do not want to recreate the code. basically, I want to make my own guesser with the information I've gotten. ``` import random # Random Numbergenerator Guesser print("Hello and welcome to the random number guesser.") print("I am guessing a number of 1 - 20. Can you guess which one?") x = random.randint(1,20) # Here you guess the number value of 'x' for randomNumber in range (1,7): randomGuess = input() if randomGuess > x: print("Too high. Guess again!") elif randomGuess < x: print("Too low. Guess again!") else: break # Checks to see if the number you were guessing is correct or takes you to a fail screen. if randomGuess == x: print("Correct number!") else: print("Too many tries. You have failed. The number I was thinking of was " + (x))`` ``` I keep getting this error. ``` C:\Python\Python35\python.exe "C:/Users/Morde/Desktop/Python Projects/LoginDataBase/LoginUserDatabse1File.py" Hello and welcome to the random number guesser. I am guessing a number of 1 - 20. Can you guess which one? 1 Traceback (most recent call last): File "C:/Users/Morde/Desktop/Python Projects/LoginDataBase/LoginUserDatabse1File.py", line 12, in <module> if randomGuess > x: TypeError: unorderable types: str() > int() ```
2016/11/03
[ "https://Stackoverflow.com/questions/40397657", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7096598/" ]
**Updated Answer** For ASP Core 1.1.0 generic model binding is now done using `Get`: ``` var config = Configuration.GetSection("configuredClients").Get<ClientConfiguration>(); ``` --- **Original Answer** How about this: ``` var config = Configuration.GetSection("configuredClients").Bind<ClientConfiguration>(); ```
Generally you don't read the configuration manually yourself in ASP.NET Core; instead, you create an object that matches your definition. You can read more on that in the official documentation [here](https://docs.asp.net/en/latest/fundamentals/configuration.html). E.g. ``` public class MyOptions { public string Option1 { get; set; } public int Option2 { get; set; } } public void ConfigureServices(IServiceCollection services) { // Setup options with DI services.AddOptions(); services.Configure<MyOptions>(Configuration); } ``` Then you just inject the options as `IOptions<MyOptions>` wherever you need them.
40,397,657
So, Im a complete newb when it comes to programming. I have been watching tutorials and I am reading a book on how to program python. So, I want to create a number generator guesser on my own and I have watched some tutorials on it but I do not want to recreate the code. basically, I want to make my own guesser with the information I've gotten. ``` import random # Random Numbergenerator Guesser print("Hello and welcome to the random number guesser.") print("I am guessing a number of 1 - 20. Can you guess which one?") x = random.randint(1,20) # Here you guess the number value of 'x' for randomNumber in range (1,7): randomGuess = input() if randomGuess > x: print("Too high. Guess again!") elif randomGuess < x: print("Too low. Guess again!") else: break # Checks to see if the number you were guessing is correct or takes you to a fail screen. if randomGuess == x: print("Correct number!") else: print("Too many tries. You have failed. The number I was thinking of was " + (x))`` ``` I keep getting this error. ``` C:\Python\Python35\python.exe "C:/Users/Morde/Desktop/Python Projects/LoginDataBase/LoginUserDatabse1File.py" Hello and welcome to the random number guesser. I am guessing a number of 1 - 20. Can you guess which one? 1 Traceback (most recent call last): File "C:/Users/Morde/Desktop/Python Projects/LoginDataBase/LoginUserDatabse1File.py", line 12, in <module> if randomGuess > x: TypeError: unorderable types: str() > int() ```
2016/11/03
[ "https://Stackoverflow.com/questions/40397657", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7096598/" ]
**Updated Answer** For ASP Core 1.1.0 generic model binding is now done using `Get`: ``` var config = Configuration.GetSection("configuredClients").Get<ClientConfiguration>(); ``` --- **Original Answer** How about this: ``` var config = Configuration.GetSection("configuredClients").Bind<ClientConfiguration>(); ```
If you want to get the first "clientName" (expected "Client1"), just write: ``` Configuration.GetSection("configuredClients")["clients:0:clientName"]; ``` **Update for comment** Install .NET Core 1.0.1 and go with @TomMakin's way.
40,397,657
So, Im a complete newb when it comes to programming. I have been watching tutorials and I am reading a book on how to program python. So, I want to create a number generator guesser on my own and I have watched some tutorials on it but I do not want to recreate the code. basically, I want to make my own guesser with the information I've gotten. ``` import random # Random Numbergenerator Guesser print("Hello and welcome to the random number guesser.") print("I am guessing a number of 1 - 20. Can you guess which one?") x = random.randint(1,20) # Here you guess the number value of 'x' for randomNumber in range (1,7): randomGuess = input() if randomGuess > x: print("Too high. Guess again!") elif randomGuess < x: print("Too low. Guess again!") else: break # Checks to see if the number you were guessing is correct or takes you to a fail screen. if randomGuess == x: print("Correct number!") else: print("Too many tries. You have failed. The number I was thinking of was " + (x))`` ``` I keep getting this error. ``` C:\Python\Python35\python.exe "C:/Users/Morde/Desktop/Python Projects/LoginDataBase/LoginUserDatabse1File.py" Hello and welcome to the random number guesser. I am guessing a number of 1 - 20. Can you guess which one? 1 Traceback (most recent call last): File "C:/Users/Morde/Desktop/Python Projects/LoginDataBase/LoginUserDatabse1File.py", line 12, in <module> if randomGuess > x: TypeError: unorderable types: str() > int() ```
2016/11/03
[ "https://Stackoverflow.com/questions/40397657", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7096598/" ]
**Updated Answer** For ASP Core 1.1.0 generic model binding is now done using `Get`: ``` var config = Configuration.GetSection("configuredClients").Get<ClientConfiguration>(); ``` --- **Original Answer** How about this: ``` var config = Configuration.GetSection("configuredClients").Bind<ClientConfiguration>(); ```
With **ASP.NET Core 2.0** (basically Core 1.1+), the `IConfiguration` is injected into `Startup`, and that can be used within `ConfigureServices()` and `Configure()` methods. As shown in the accepted answer, the configuration can be bound to an object. But if just one value is required, the key-based approach works well. The `IConfiguration` still works with colon `:` separated string keys. And for `array`, use 0-based index. Or use the generic `GetValue<T>()` method with the same keys. See example below: ``` var clientId2 = Configuration["configuredClients:clients:1:clientId"]?.ToString(); var clientName1 = Configuration.GetValue<string>("configuredClients:clients:0:clientName"); ``` --- To use the same configuration values in other classes (e.g. Controllers) 1. Either inject the `IConfiguration` and use the same key-based approach as above. Or 2. Register an instance of the strongly-typed configuration object with the DI container, and inject that object directly into client classes. Sample code below: ``` //In Startup.ConfigureServices() var clientConfig = Configuration.GetSection("configuredClients") .Get<ClientConfiguration>(); services.AddSingleton(clientConfig); //Controller public class TestController : Controller { IConfiguration _configStore; ClientConfiguration _clientConfiguration; public TestController(IConfiguration configuration, ClientConfiguration clientConfiguration) { _configStore = configuration; _clientConfiguration = clientConfiguration; } public IActionResult Get() { //with IConfiguration var clientId1 = _configStore .GetValue<string>("configuredClients:clients:0:clientId"); //with strongly typed ClientConfiguration var clientName1 = _clientConfiguration.Clients[0]?.ClientName; return new OkObjectResult("Configuration test"); } } ``` More examples [here](http://arghya.xyz/articles/practical-configuration-and-di-in-aspnet-core/).
40,397,657
So, Im a complete newb when it comes to programming. I have been watching tutorials and I am reading a book on how to program python. So, I want to create a number generator guesser on my own and I have watched some tutorials on it but I do not want to recreate the code. basically, I want to make my own guesser with the information I've gotten. ``` import random # Random Numbergenerator Guesser print("Hello and welcome to the random number guesser.") print("I am guessing a number of 1 - 20. Can you guess which one?") x = random.randint(1,20) # Here you guess the number value of 'x' for randomNumber in range (1,7): randomGuess = input() if randomGuess > x: print("Too high. Guess again!") elif randomGuess < x: print("Too low. Guess again!") else: break # Checks to see if the number you were guessing is correct or takes you to a fail screen. if randomGuess == x: print("Correct number!") else: print("Too many tries. You have failed. The number I was thinking of was " + (x))`` ``` I keep getting this error. ``` C:\Python\Python35\python.exe "C:/Users/Morde/Desktop/Python Projects/LoginDataBase/LoginUserDatabse1File.py" Hello and welcome to the random number guesser. I am guessing a number of 1 - 20. Can you guess which one? 1 Traceback (most recent call last): File "C:/Users/Morde/Desktop/Python Projects/LoginDataBase/LoginUserDatabse1File.py", line 12, in <module> if randomGuess > x: TypeError: unorderable types: str() > int() ```
2016/11/03
[ "https://Stackoverflow.com/questions/40397657", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7096598/" ]
Generally you don't read the configuration manually yourself in ASP.NET Core; instead, you create an object that matches your definition. You can read more on that in the official documentation [here](https://docs.asp.net/en/latest/fundamentals/configuration.html). E.g. ``` public class MyOptions { public string Option1 { get; set; } public int Option2 { get; set; } } public void ConfigureServices(IServiceCollection services) { // Setup options with DI services.AddOptions(); services.Configure<MyOptions>(Configuration); } ``` Then you just inject the options as `IOptions<MyOptions>` wherever you need them.
If you want to get the first "clientName" (expected "Client1"), just write: ``` Configuration.GetSection("configuredClients")["clients:0:clientName"]; ``` **Update for comment** Install .NET Core 1.0.1 and go with @TomMakin's way.
40,397,657
So, Im a complete newb when it comes to programming. I have been watching tutorials and I am reading a book on how to program python. So, I want to create a number generator guesser on my own and I have watched some tutorials on it but I do not want to recreate the code. basically, I want to make my own guesser with the information I've gotten. ``` import random # Random Numbergenerator Guesser print("Hello and welcome to the random number guesser.") print("I am guessing a number of 1 - 20. Can you guess which one?") x = random.randint(1,20) # Here you guess the number value of 'x' for randomNumber in range (1,7): randomGuess = input() if randomGuess > x: print("Too high. Guess again!") elif randomGuess < x: print("Too low. Guess again!") else: break # Checks to see if the number you were guessing is correct or takes you to a fail screen. if randomGuess == x: print("Correct number!") else: print("Too many tries. You have failed. The number I was thinking of was " + (x))`` ``` I keep getting this error. ``` C:\Python\Python35\python.exe "C:/Users/Morde/Desktop/Python Projects/LoginDataBase/LoginUserDatabse1File.py" Hello and welcome to the random number guesser. I am guessing a number of 1 - 20. Can you guess which one? 1 Traceback (most recent call last): File "C:/Users/Morde/Desktop/Python Projects/LoginDataBase/LoginUserDatabse1File.py", line 12, in <module> if randomGuess > x: TypeError: unorderable types: str() > int() ```
2016/11/03
[ "https://Stackoverflow.com/questions/40397657", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7096598/" ]
With **ASP.NET Core 2.0** (basically Core 1.1+), the `IConfiguration` is injected into `Startup`, and that can be used within `ConfigureServices()` and `Configure()` methods. As shown in the accepted answer, the configuration can be bound to an object. But if just one value is required, the key-based approach works well. The `IConfiguration` still works with colon `:` separated string keys. And for `array`, use 0-based index. Or use the generic `GetValue<T>()` method with the same keys. See example below: ``` var clientId2 = Configuration["configuredClients:clients:1:clientId"]?.ToString(); var clientName1 = Configuration.GetValue<string>("configuredClients:clients:0:clientName"); ``` --- To use the same configuration values in other classes (e.g. Controllers) 1. Either inject the `IConfiguration` and use the same key-based approach as above. Or 2. Register an instance of the strongly-typed configuration object with the DI container, and inject that object directly into client classes. Sample code below: ``` //In Startup.ConfigureServices() var clientConfig = Configuration.GetSection("configuredClients") .Get<ClientConfiguration>(); services.AddSingleton(clientConfig); //Controller public class TestController : Controller { IConfiguration _configStore; ClientConfiguration _clientConfiguration; public TestController(IConfiguration configuration, ClientConfiguration clientConfiguration) { _configStore = configuration; _clientConfiguration = clientConfiguration; } public IActionResult Get() { //with IConfiguration var clientId1 = _configStore .GetValue<string>("configuredClients:clients:0:clientId"); //with strongly typed ClientConfiguration var clientName1 = _clientConfiguration.Clients[0]?.ClientName; return new OkObjectResult("Configuration test"); } } ``` More examples [here](http://arghya.xyz/articles/practical-configuration-and-di-in-aspnet-core/).
Generally you don't read the configuration manually yourself in ASP.NET Core; instead, you create an object that matches your definition. You can read more on that in the official documentation [here](https://docs.asp.net/en/latest/fundamentals/configuration.html). E.g. ``` public class MyOptions { public string Option1 { get; set; } public int Option2 { get; set; } } public void ConfigureServices(IServiceCollection services) { // Setup options with DI services.AddOptions(); services.Configure<MyOptions>(Configuration); } ``` Then you just inject the options as `IOptions<MyOptions>` wherever you need them.
40,397,657
So, Im a complete newb when it comes to programming. I have been watching tutorials and I am reading a book on how to program python. So, I want to create a number generator guesser on my own and I have watched some tutorials on it but I do not want to recreate the code. basically, I want to make my own guesser with the information I've gotten. ``` import random # Random Numbergenerator Guesser print("Hello and welcome to the random number guesser.") print("I am guessing a number of 1 - 20. Can you guess which one?") x = random.randint(1,20) # Here you guess the number value of 'x' for randomNumber in range (1,7): randomGuess = input() if randomGuess > x: print("Too high. Guess again!") elif randomGuess < x: print("Too low. Guess again!") else: break # Checks to see if the number you were guessing is correct or takes you to a fail screen. if randomGuess == x: print("Correct number!") else: print("Too many tries. You have failed. The number I was thinking of was " + (x))`` ``` I keep getting this error. ``` C:\Python\Python35\python.exe "C:/Users/Morde/Desktop/Python Projects/LoginDataBase/LoginUserDatabse1File.py" Hello and welcome to the random number guesser. I am guessing a number of 1 - 20. Can you guess which one? 1 Traceback (most recent call last): File "C:/Users/Morde/Desktop/Python Projects/LoginDataBase/LoginUserDatabse1File.py", line 12, in <module> if randomGuess > x: TypeError: unorderable types: str() > int() ```
2016/11/03
[ "https://Stackoverflow.com/questions/40397657", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7096598/" ]
With **ASP.NET Core 2.0** (basically Core 1.1+), the `IConfiguration` is injected into `Startup`, and that can be used within `ConfigureServices()` and `Configure()` methods. As shown in the accepted answer, the configuration can be bound to an object. But if just one value is required, the key-based approach works well. The `IConfiguration` still works with colon `:` separated string keys. And for `array`, use 0-based index. Or use the generic `GetValue<T>()` method with the same keys. See example below: ``` var clientId2 = Configuration["configuredClients:clients:1:clientId"]?.ToString(); var clientName1 = Configuration.GetValue<string>("configuredClients:clients:0:clientName"); ``` --- To use the same configuration values in other classes (e.g. Controllers) 1. Either inject the `IConfiguration` and use the same key-based approach as above. Or 2. Register an instance of the strongly-typed configuration object with the DI container, and inject that object directly into client classes. Sample code below: ``` //In Startup.ConfigureServices() var clientConfig = Configuration.GetSection("configuredClients") .Get<ClientConfiguration>(); services.AddSingleton(clientConfig); //Controller public class TestController : Controller { IConfiguration _configStore; ClientConfiguration _clientConfiguration; public TestController(IConfiguration configuration, ClientConfiguration clientConfiguration) { _configStore = configuration; _clientConfiguration = clientConfiguration; } public IActionResult Get() { //with IConfiguration var clientId1 = _configStore .GetValue<string>("configuredClients:clients:0:clientId"); //with strongly typed ClientConfiguration var clientName1 = _clientConfiguration.Clients[0]?.ClientName; return new OkObjectResult("Configuration test"); } } ``` More examples [here](http://arghya.xyz/articles/practical-configuration-and-di-in-aspnet-core/).
If you want to get the first "clientName" (expected "Client1"), just write: ``` Configuration.GetSection("configuredClients")["clients:0:clientName"]; ``` **Update for comment** Install .NET Core 1.0.1 and go with @TomMakin's way.
70,736,110
I have a pandas dataframe like this: ``` df = pd.DataFrame([ {'A': 'aaa', 'B': 0.01, 'C': 0.00001, 'D': 0.00999999999476131, 'E': 0.00023191546403037534}, {'A': 'bbb', 'B': 0.01, 'C': 0.0001, 'D': 0.010000000000218279, 'E': 0.002981781316158273}, {'A': 'ccc', 'B': 0.1, 'C': 0.001, 'D': 0.0999999999999659, 'E': 0.020048115477145148}, {'A': 'ddd', 'B': 0.01, 'C': 0.01, 'D': 0.019999999999999574, 'E': 0.397456279809221}, {'A': 'eee', 'B': 0.00001, 'C': 0.000001, 'D': 0.09500000009999432, 'E': 0.06821282401091405}, ]) ``` ``` A B C D E 0 aaa 0.01 0.00001 0.00999999999476131 0.00023191546403037534 1 bbb 0.01 0.0001 0.010000000000218279 0.002981781316158273 2 ccc 0.1 0.001 0.0999999999999659 0.020048115477145148 3 ddd 0.01 0.01 0.019999999999999574 0.397456279809221 4 eee 0.00001 0.000001 0.09500000009999432 0.06821282401091405 ``` I have tried to round columns D and E to the same number of decimal places as the values in columns B and C without success. I try this: ``` df['b_decimals'] = df['B'].astype(str).str.split('.').str[1].str.len() df['c_decimals'] = df['C'].astype(str).str.split('.').str[1].str.len() df['D'] = [np.around(x, y) for x, y in zip(df['D'], df['b_decimals'])] df['E'] = [np.around(x, y) for x, y in zip(df['E'], df['c_decimals'])] ``` but i get this error: ``` Traceback (most recent call last): File "C:\Program Files\Python37\lib\site-packages\numpy\core\fromnumeric.py", line 56, in _wrapfunc return getattr(obj, method)(*args, **kwds) AttributeError: 'float' object has no attribute 'round' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "D:/_GITHUB/python/test.py", line 30, in <module> main() File "D:/_GITHUB/python/test.py", line 24, in main df['E'] = [np.around(x, y) for x, y in zip(df['E'], df['c_decimals'])] File "D:/_GITHUB/python/test.py", line 24, in <listcomp> df['E'] = [np.around(x, y) for x, y in zip(df['E'], df['c_decimals'])] File "C:\Program Files\Python37\lib\site-packages\numpy\core\fromnumeric.py", line 3007, in around return _wrapfunc(a, 'round', decimals=decimals, out=out) File "C:\Program Files\Python37\lib\site-packages\numpy\core\fromnumeric.py", line 66, in _wrapfunc return _wrapit(obj, method, *args, **kwds) File "C:\Program Files\Python37\lib\site-packages\numpy\core\fromnumeric.py", line 46, in _wrapit result = getattr(asarray(obj), method)(*args, **kwds) TypeError: integer argument expected, got float ``` And the problems is this, when creating columns b\_decimals and c\_decimals, they store NaN values: ``` A B C D E b_decimals c_decimals 0 aaa 0.01 0.00001 0.00999999999476131 0.00023191546403037534 2 NaN 1 bbb 0.01 0.0001 0.010000000000218279 0.002981781316158273 2 4 2 ccc 0.1 0.001 0.0999999999999659 0.020048115477145148 1 3 3 ddd 0.01 0.01 0.019999999999999574 0.397456279809221 2 2 4 eee 0.00001 0.000001 0.09500000009999432 0.06821282401091405 NaN NaN ``` What is the reason for this to happen when creating the columns? Is there another way to get the desired transformation like below? ``` A B C D E 0 aaa 0.01 0.00001 0.01 0.00023 1 bbb 0.01 0.0001 0.01 0.0030 2 ccc 0.1 0.001 0.1 0.020 3 ddd 0.01 0.01 0.02 0.40 4 eee 0.00001 0.000001 0.09600 0.068212 ``` I read them!... thanks!
2022/01/17
[ "https://Stackoverflow.com/questions/70736110", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14382768/" ]
You can use a `-log10` operation to obtain the number of decimal places up to the first significant digit (credit goes to @Willem van Onsem's answer [here](https://stackoverflow.com/questions/57663565/pandas-column-with-count-of-decimal-places)). Then you can incorporate this into a lambda function that you `apply` row-wise:

```
import numpy as np

df['D'] = df.apply(lambda row: round(row['D'], int(-np.floor(np.log10(row['B'])))), axis=1)
df['E'] = df.apply(lambda row: round(row['E'], int(-np.floor(np.log10(row['C'])))), axis=1)
```

Result:

```
>>> df
     A       B         C      D         E
0  aaa  0.0100  0.000010  0.010  0.000230
1  bbb  0.0100  0.000100  0.010  0.003000
2  ccc  0.1000  0.001000  0.100  0.020000
3  ddd  0.0100  0.010000  0.020  0.400000
4  eee  0.0001  0.000001  0.095  0.068213

>>> df.values
array([['aaa', 0.01, 1e-05, 0.01, 0.00023],
       ['bbb', 0.01, 0.0001, 0.01, 0.003],
       ['ccc', 0.1, 0.001, 0.1, 0.02],
       ['ddd', 0.01, 0.01, 0.02, 0.4],
       ['eee', 0.0001, 1e-06, 0.095, 0.068213]], dtype=object)
```
I used part of Derek's solution above to build my own:

```
df['b_decimals'] = -np.floor(np.log10(df['B']))
df['c_decimals'] = -np.floor(np.log10(df['C']))

df['D'] = [np.around(x, y) for x, y in zip(df['D'], df['b_decimals'].astype(int))]
df['E'] = [np.around(x, y) for x, y in zip(df['E'], df['c_decimals'].astype(int))]
```

which gives the following:

```
     A        B         C     D         E  b_decimals  c_decimals
0  aaa     0.01   0.00001  0.01   0.00023           2           5
1  bbb     0.01    0.0001  0.01     0.003           2           4
2  ccc      0.1     0.001   0.1      0.02           1           3
3  ddd     0.01      0.01  0.02       0.4           2           2
4  eee  0.00001  0.000001   0.1  0.068213           5           6
```
62,446,911
I have this dataset

```
age
24
32
29
23
23
31
25
26
34
```

I want to categorize it using Python and save the result to a new column "agegroup", such that ages between 23 and 26 return 1 in the agegroup column, 27-30 return 2, and 31-34 return 3.
2020/06/18
[ "https://Stackoverflow.com/questions/62446911", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12360445/" ]
You can use [`pandas.cut`](https://pandas.pydata.org/docs/reference/api/pandas.cut.html). Given: ``` >>> df age 0 24 1 32 2 29 3 23 4 23 5 31 6 25 7 26 8 34 ``` Solution: ``` >>> df.assign(agegroup=pd.cut(df['age'], bins=[23, 27, 31, 35], right=False, labels=[1, 2, 3])) age agegroup 0 24 1 1 32 3 2 29 2 3 23 1 4 23 1 5 31 3 6 25 1 7 26 1 8 34 3 ```
You can use dictionaries to do this as well, with key-value pairs. The keys would be the different age ranges and the value for a particular key would be the count for that particular age group.

```
groupDict = {'23-26': 0, '27-30': 0, '31-34': 0}

for i in ages:
    if i >= 23 and i <= 26:
        groupDict['23-26'] += 1
    elif i >= 27 and i <= 30:
        groupDict['27-30'] += 1
    elif i >= 31 and i <= 34:
        groupDict['31-34'] += 1
```
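If the goal is the labelled `agegroup` column from the question rather than counts, the same range checks can be reused in a small helper. This is only a sketch, assuming a pandas DataFrame `df` with the `age` column from the question; the helper name is made up:

```
def age_to_group(age):
    # Map an age to the group label requested in the question (1, 2 or 3).
    if 23 <= age <= 26:
        return 1
    elif 27 <= age <= 30:
        return 2
    elif 31 <= age <= 34:
        return 3
    return None  # outside the defined ranges

df['agegroup'] = df['age'].apply(age_to_group)
```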
40,744,392
I use Anonymous Python Functions in BitBake recipes to set variables during parsing. Now I wonder if I can check whether a specific variable is set or not. If it is not, I want to generate a BitBake error, which stops the build process.

Pseudo code of what I want to create:

```
python __anonymous () {
    if d.getVar('MY_VARIABLE', True) == "":
        <BITBAKE ERROR with custom message "MY_VARIABLE not found">
}
```
2016/11/22
[ "https://Stackoverflow.com/questions/40744392", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5316879/" ]
You can call `bb.fatal("MY_VARIABLE not set")` which will print that error and abort the build by throwing an exception. Beware that d.getVar() returns `None` when the variable is unset. You only get the empty string if that's your default value.
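Put together, a minimal sketch of the anonymous function from the question could look like the following (it checks for `None` rather than `""`, per the note above):

```
python __anonymous () {
    if d.getVar('MY_VARIABLE', True) is None:
        bb.fatal("MY_VARIABLE not set")
}
```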
Outputs are possible on different log levels, with Python as well as shell script code.

For usage in Python there are:

* **bb.fatal**
* bb.error
* bb.warn
* bb.note
* bb.plain
* bb.debug

For usage in shell script there are:

* **bbfatal**
* bberror
* bbwarn
* bbnote
* bbplain
* bbdebug

For example, if you want to throw an error in your recipe's do\_install\_append function:

```
bbfatal "something went terribly wrong!"
```
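As a hedged sketch of the Python-side functions listed above, the same calls can be made from a Python task in a recipe (the task name `do_report_config` is made up for illustration, and it would still need an `addtask` line to be scheduled):

```
python do_report_config() {
    bb.note("MACHINE is %s" % d.getVar('MACHINE', True))
    bb.warn("this is only a warning, the build continues")
}
```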
37,228,607
In C, as well as in C++, one can in a for-loop change the index (for example `i`). This can be useful to, for example, compare a current element and based on that comparison compare the next element: ``` for(int i = 0; i < end; i++) if(array[i] == ':') if(array[++i] == ')') smileyDetected = true; ``` Now I know this cannot be done in Python (for [various](https://stackoverflow.com/questions/1485841/behaviour-of-increment-and-decrement-operators-in-python) [reasons](https://stackoverflow.com/questions/15363138/scope-of-python-variable-in-for-loop)). However, I cannot help wondering if there are short alternatives for Python? I can come up with: ``` while i < end: if array[i] == ':': i += 1 if array[i] == ')': smileyDetected = True; ``` However, this costs me an extra line, which doesn't sound so bad until you do the same multiple times ('less readable' did not mean having a long file). So to have it in one line, I would think of something like `array[i += 1]`, but this is invalid syntax as it seems. Is there Python equivalent which does the incrementation of the index in the same line as reading out that incremented index? EDIT: As most answers mention using `in` to find a substring, as an alternative for the particular example, let me add another example which wouldn't be solvable in such a way: ``` j = 0; for(int i = 0; i < end; i++) if(array[i] == ':') if(array[++i] == ')') anotherArray[j++] = array[++i]; ``` With that I mean it's about the incrementing of the index, not the finding of a particular string.
2016/05/14
[ "https://Stackoverflow.com/questions/37228607", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1762311/" ]
Perhaps: ``` smileyDetected = ':)' in "".join(array) ``` or per @jonrsharpe: ``` from itertools import tee # pairwise() from "Itertools Recipes" def pairwise(iterable): "s -> (s0,s1), (s1,s2), (s2, s3), ..." a, b = tee(iterable) next(b, None) return zip(a, b) for a, b in pairwise(array): if a == ':' and b == ')': smileyDetected = True ```
In this special case, you could do: ``` for i, char in enumerate(array[:-1]): if char == ":" and array[i+1] == ")": smiley_detected = True ``` However, in the more general case, if you need to skip elements, you could modify the raw iterator: ``` iterator = iter(array) for char in iterator: if char == ":" and next(iterator) == ")": smiley_detected = True ``` Here you need to take more care about the array bounds; if the last element is `:`, you will get a `StopIteration` exception from calling `next` on the depleted iterator. You would need to catch that.
37,228,607
In C, as well as in C++, one can in a for-loop change the index (for example `i`). This can be useful to, for example, compare a current element and based on that comparison compare the next element: ``` for(int i = 0; i < end; i++) if(array[i] == ':') if(array[++i] == ')') smileyDetected = true; ``` Now I know this cannot be done in Python (for [various](https://stackoverflow.com/questions/1485841/behaviour-of-increment-and-decrement-operators-in-python) [reasons](https://stackoverflow.com/questions/15363138/scope-of-python-variable-in-for-loop)). However, I cannot help wondering if there are short alternatives for Python? I can come up with: ``` while i < end: if array[i] == ':': i += 1 if array[i] == ')': smileyDetected = True; ``` However, this costs me an extra line, which doesn't sound so bad until you do the same multiple times ('less readable' did not mean having a long file). So to have it in one line, I would think of something like `array[i += 1]`, but this is invalid syntax as it seems. Is there Python equivalent which does the incrementation of the index in the same line as reading out that incremented index? EDIT: As most answers mention using `in` to find a substring, as an alternative for the particular example, let me add another example which wouldn't be solvable in such a way: ``` j = 0; for(int i = 0; i < end; i++) if(array[i] == ':') if(array[++i] == ')') anotherArray[j++] = array[++i]; ``` With that I mean it's about the incrementing of the index, not the finding of a particular string.
2016/05/14
[ "https://Stackoverflow.com/questions/37228607", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1762311/" ]
Perhaps: ``` smileyDetected = ':)' in "".join(array) ``` or per @jonrsharpe: ``` from itertools import tee # pairwise() from "Itertools Recipes" def pairwise(iterable): "s -> (s0,s1), (s1,s2), (s2, s3), ..." a, b = tee(iterable) next(b, None) return zip(a, b) for a, b in pairwise(array): if a == ':' and b == ')': smileyDetected = True ```
If `array` is a string, the shortest pythonic version would obviously be: ```py smileyDetected = ':)' in array ``` A more generic / non-string-specific way of doing this would be using iterators: ```py smileyDetected = (':', ')') in zip(array, array[1:]) ```
37,228,607
In C, as well as in C++, one can in a for-loop change the index (for example `i`). This can be useful to, for example, compare a current element and based on that comparison compare the next element: ``` for(int i = 0; i < end; i++) if(array[i] == ':') if(array[++i] == ')') smileyDetected = true; ``` Now I know this cannot be done in Python (for [various](https://stackoverflow.com/questions/1485841/behaviour-of-increment-and-decrement-operators-in-python) [reasons](https://stackoverflow.com/questions/15363138/scope-of-python-variable-in-for-loop)). However, I cannot help wondering if there are short alternatives for Python? I can come up with: ``` while i < end: if array[i] == ':': i += 1 if array[i] == ')': smileyDetected = True; ``` However, this costs me an extra line, which doesn't sound so bad until you do the same multiple times ('less readable' did not mean having a long file). So to have it in one line, I would think of something like `array[i += 1]`, but this is invalid syntax as it seems. Is there Python equivalent which does the incrementation of the index in the same line as reading out that incremented index? EDIT: As most answers mention using `in` to find a substring, as an alternative for the particular example, let me add another example which wouldn't be solvable in such a way: ``` j = 0; for(int i = 0; i < end; i++) if(array[i] == ':') if(array[++i] == ')') anotherArray[j++] = array[++i]; ``` With that I mean it's about the incrementing of the index, not the finding of a particular string.
2016/05/14
[ "https://Stackoverflow.com/questions/37228607", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1762311/" ]
If `array` is a string, the shortest pythonic version would obviously be: ```py smileyDetected = ':)' in array ``` A more generic / non-string-specific way of doing this would be using iterators: ```py smileyDetected = (':', ')') in zip(array, array[1:]) ```
In this special case, you could do: ``` for i, char in enumerate(array[:-1]): if char == ":" and array[i+1] == ")": smiley_detected = True ``` However, in the more general case, if you need to skip elements, you could modify the raw iterator: ``` iterator = iter(array) for char in iterator: if char == ":" and next(iterator) == ")": smiley_detected = True ``` Here you need to take more care about the array bounds; if the last element is `:`, you will get a `StopIteration` exception from calling `next` on the depleted iterator. You would need to catch that.
68,026,549
I'm using TFX to build an AI Pipeline on Vertex AI. I've followed [this tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/gcp/vertex_pipelines_simple) to get started, then I adapted the pipeline to my own data which has over 100M rows of time series data. A couple of my components get killed midway because of memory issues, so I'd like to set the memory requirements for these components only. I use `KubeflowV2DagRunner` to orchestrated and launch the pipeline in Vertex AI with the following code: ```py runner = tfx.orchestration.experimental.KubeflowV2DagRunner( config=tfx.orchestration.experimental.KubeflowV2DagRunnerConfig( default_image = 'gcr.io/watch-hop/hop-tfx-covid:0.6.2' ), output_filename=PIPELINE_DEFINITION_FILE) _ = runner.run( create_pipeline( pipeline_name=PIPELINE_NAME, pipeline_root=PIPELINE_ROOT, data_path=DATA_ROOT, metadata_path=METADATA_PATH)) ``` A [similar question](https://stackoverflow.com/questions/67816573/how-to-specify-machine-type-in-vertex-ai-pipeline) has been answered on Stack Overflow, which has led me to a way to [set memory requirements in AI Platform](https://www.tensorflow.org/tfx/tutorials/tfx/cloud-ai-platform-pipelines#kubeflow_pipelines_resource_considerations), but these configs don't exist anymore in `KubeflowV2DagRunnerConfig`, so I'm at a dead end. Any help would be much appreciated. \*\* EDIT \*\* We define our components as python functions with the `@component` decorator, so most of them are custom components. For Training components, I know you can specify the machine type using the `tfx.Trainer` class as explained in [this tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/gcp/vertex_pipelines_vertex_training#write_a_pipeline_definition), though my question is for custom components that are not doing any training.
2021/06/17
[ "https://Stackoverflow.com/questions/68026549", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2005440/" ]
Turns out you can't at the moment, but according to this [issue](https://github.com/tensorflow/tfx/issues/3194#issuecomment-802598448), this feature is coming.

An alternative solution is to convert your TFX pipeline to a Kubeflow pipeline. Vertex AI Pipelines supports Kubeflow, and with these you can set memory and CPU constraints at the component level.

```py
@component  # imported from kfp.dsl
def MyComponent(input_data: Input[Dataset]):
    ...

@pipeline  # imported from kfp.dsl
def MyPipeline(...):
    component = MyComponent(...)
    component.set_memory_limit('64G')  # alternative to set_memory_request(...)
```
An alternate option to this solution would be using the Dataflow Beam runner, which allows components to be run on a Dataflow cluster via Vertex. I have yet to find a way to specify machine types for custom components.

Sample Beam pipeline arguments:

```
BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS = [
    '--project=' + GOOGLE_CLOUD_PROJECT,
    '--temp_location=' + GCS_LOCATION,
    '--runner=DataflowRunner',
]
```

By now you would be migrating to Vertex AI.
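For context, a minimal sketch of where such arguments could be plugged in, assuming the pipeline is defined with the TFX DSL as in the question's tutorial (the `create_pipeline` signature mirrors the question; the component list is elided):

```
# Hypothetical sketch: wiring the Beam/Dataflow args into the TFX pipeline definition.
# `beam_pipeline_args` is a parameter of tfx.dsl.Pipeline.
from tfx import v1 as tfx

def create_pipeline(pipeline_name, pipeline_root, data_path, metadata_path):
    components = []  # ExampleGen, custom components, etc. go here
    return tfx.dsl.Pipeline(
        pipeline_name=pipeline_name,
        pipeline_root=pipeline_root,
        components=components,
        beam_pipeline_args=BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS,
    )
```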
24,862,912
**Solution:**

My fault: the file where the icon is added to the button is used via the "placeholder" function from Qt Designer. The main program, located in a different folder, searches in its own folder for the icon, not in the folder of the "imported" file. So you just have to add the path to the icon:

```
dirpath = os.path.dirname(os.path.abspath(__file__))
icon1_path = os.path.join(dirpath, "arrow_down.ico")
icon = QtGui.QPixmap(icon1_path)
```

---

I want to create a QPushButton with an icon instead of text:

```
icon = QtGui.QIcon()
icon.addPixmap(QtGui.QPixmap("arrow_down.png"))
self.ui.pb_down.setIcon(icon)
```

But this doesn't work. Neither does this:

```
self.ui.pb_down.setIcon(QtGui.QIcon("arrow_down.png"))
```

There is no error message, the icon just doesn't appear. If I add the icon via Qt Designer, the icon is shown in Qt Designer itself, but when running the program, the icon disappears again.

Does anybody know what's going on?

I'm using Python 2.7 and Windows 7.

**Edit:** Using @Chris Aung's code, I get a button with an icon.

```
button = QtGui.QPushButton()
self.setCentralWidget(button)
icon = QtGui.QIcon()
icon.addPixmap(QtGui.QPixmap("arrow_down.ico"))

print button.icon().isNull()
#Returns true if the icon is empty; otherwise returns false.
#output = False
```

But if I use exactly this code in my GUI, it just doesn't add the icon.

```
icon = QtGui.QIcon()
icon.addPixmap(QtGui.QPixmap("arrow_down.ico"))
self.ui.pb_down.setIcon(icon)

print self.ui.pb_down.icon().isNull()
# output = True
```

I have no idea where the problem is.
2014/07/21
[ "https://Stackoverflow.com/questions/24862912", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3027322/" ]
This works for me and is auto generated by pyqt when you convert a .ui file to .py with the [pyuic4](http://pyqt.sourceforge.net/Docs/PyQt4/designer.html#the-uic-module) tool. ``` Icon = QtGui.QIcon() Icon.addPixmap(QtGui.QPixmap(_fromUtf8("SOME FILE")), QtGui.QIcon.Normal, QtGui.QIcon.Off) button.setIcon(Icon) button.setIconSize(QtCore.QSize(width, height)) ``` If you use this, you will also have to define "\_fromUtf8" at the top of your module as: ``` try: _fromUtf8 = QtCore.QString.fromUtf8 except AttributeError: _fromUtf8 = lambda s: s ```
Do you have text rendered on that button? Try playing with the icon size via setIconSize(); to begin with, you can try setting it to the rect of the pixmap.
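As a minimal sketch (reusing the `arrow_down.png` pixmap and the `pb_down` button from the question), setting the icon size from the pixmap could look like this:

```
pixmap = QtGui.QPixmap("arrow_down.png")
self.ui.pb_down.setIcon(QtGui.QIcon(pixmap))
# Size the icon to match the pixmap's rect, as suggested above.
self.ui.pb_down.setIconSize(pixmap.rect().size())
```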
24,862,912
**Solution:**

My fault: the file where the icon is added to the button is used via the "placeholder" function from Qt Designer. The main program, located in a different folder, searches in its own folder for the icon, not in the folder of the "imported" file. So you just have to add the path to the icon:

```
dirpath = os.path.dirname(os.path.abspath(__file__))
icon1_path = os.path.join(dirpath, "arrow_down.ico")
icon = QtGui.QPixmap(icon1_path)
```

---

I want to create a QPushButton with an icon instead of text:

```
icon = QtGui.QIcon()
icon.addPixmap(QtGui.QPixmap("arrow_down.png"))
self.ui.pb_down.setIcon(icon)
```

But this doesn't work. Neither does this:

```
self.ui.pb_down.setIcon(QtGui.QIcon("arrow_down.png"))
```

There is no error message, the icon just doesn't appear. If I add the icon via Qt Designer, the icon is shown in Qt Designer itself, but when running the program, the icon disappears again.

Does anybody know what's going on?

I'm using Python 2.7 and Windows 7.

**Edit:** Using @Chris Aung's code, I get a button with an icon.

```
button = QtGui.QPushButton()
self.setCentralWidget(button)
icon = QtGui.QIcon()
icon.addPixmap(QtGui.QPixmap("arrow_down.ico"))

print button.icon().isNull()
#Returns true if the icon is empty; otherwise returns false.
#output = False
```

But if I use exactly this code in my GUI, it just doesn't add the icon.

```
icon = QtGui.QIcon()
icon.addPixmap(QtGui.QPixmap("arrow_down.ico"))
self.ui.pb_down.setIcon(icon)

print self.ui.pb_down.icon().isNull()
# output = True
```

I have no idea where the problem is.
2014/07/21
[ "https://Stackoverflow.com/questions/24862912", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3027322/" ]
I was able to create the `QPushButton` with an icon without any problem using the code you have provided. The following is the code I used.

```
from PyQt4 import QtGui,QtCore
import sys

class MainWindow(QtGui.QMainWindow):
    def __init__(self, parent=None):
        QtGui.QMainWindow.__init__(self, parent)
        button = QtGui.QPushButton("TEST")
        self.setCentralWidget(button)
        icon = QtGui.QIcon()
        icon.addPixmap(QtGui.QPixmap("add.png"))
        button.setIcon(icon)

if __name__ == "__main__":
    app = QtGui.QApplication(sys.argv)
    main = MainWindow()
    main.show()
    app.exec_()
```

I suggest you check your `png` image again (or try with some other `png` image). I am not totally confident that it will fix the problem, but I had a similar problem before and using a different `.png` image seemed to fix it.
Do you have text rendered on that button? Try playing with the icon size via setIconSize(); to begin with, you can try setting it to the rect of the pixmap.
24,862,912
**Solution:**

My fault: the file where the icon is added to the button is used via the "placeholder" function from Qt Designer. The main program, located in a different folder, searches in its own folder for the icon, not in the folder of the "imported" file. So you just have to add the path to the icon:

```
dirpath = os.path.dirname(os.path.abspath(__file__))
icon1_path = os.path.join(dirpath, "arrow_down.ico")
icon = QtGui.QPixmap(icon1_path)
```

---

I want to create a QPushButton with an icon instead of text:

```
icon = QtGui.QIcon()
icon.addPixmap(QtGui.QPixmap("arrow_down.png"))
self.ui.pb_down.setIcon(icon)
```

But this doesn't work. Neither does this:

```
self.ui.pb_down.setIcon(QtGui.QIcon("arrow_down.png"))
```

There is no error message, the icon just doesn't appear. If I add the icon via Qt Designer, the icon is shown in Qt Designer itself, but when running the program, the icon disappears again.

Does anybody know what's going on?

I'm using Python 2.7 and Windows 7.

**Edit:** Using @Chris Aung's code, I get a button with an icon.

```
button = QtGui.QPushButton()
self.setCentralWidget(button)
icon = QtGui.QIcon()
icon.addPixmap(QtGui.QPixmap("arrow_down.ico"))

print button.icon().isNull()
#Returns true if the icon is empty; otherwise returns false.
#output = False
```

But if I use exactly this code in my GUI, it just doesn't add the icon.

```
icon = QtGui.QIcon()
icon.addPixmap(QtGui.QPixmap("arrow_down.ico"))
self.ui.pb_down.setIcon(icon)

print self.ui.pb_down.icon().isNull()
# output = True
```

I have no idea where the problem is.
2014/07/21
[ "https://Stackoverflow.com/questions/24862912", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3027322/" ]
I was able to create the `QPushButton` with an icon without any problem using the code you have provided. The following is the code I used.

```
from PyQt4 import QtGui,QtCore
import sys

class MainWindow(QtGui.QMainWindow):
    def __init__(self, parent=None):
        QtGui.QMainWindow.__init__(self, parent)
        button = QtGui.QPushButton("TEST")
        self.setCentralWidget(button)
        icon = QtGui.QIcon()
        icon.addPixmap(QtGui.QPixmap("add.png"))
        button.setIcon(icon)

if __name__ == "__main__":
    app = QtGui.QApplication(sys.argv)
    main = MainWindow()
    main.show()
    app.exec_()
```

I suggest you check your `png` image again (or try with some other `png` image). I am not totally confident that it will fix the problem, but I had a similar problem before and using a different `.png` image seemed to fix it.
This works for me and is auto generated by pyqt when you convert a .ui file to .py with the [pyuic4](http://pyqt.sourceforge.net/Docs/PyQt4/designer.html#the-uic-module) tool. ``` Icon = QtGui.QIcon() Icon.addPixmap(QtGui.QPixmap(_fromUtf8("SOME FILE")), QtGui.QIcon.Normal, QtGui.QIcon.Off) button.setIcon(Icon) button.setIconSize(QtCore.QSize(width, height)) ``` If you use this, you will also have to define "\_fromUtf8" at the top of your module as: ``` try: _fromUtf8 = QtCore.QString.fromUtf8 except AttributeError: _fromUtf8 = lambda s: s ```