| column | type | min | max |
| --- | --- | --- | --- |
| qid | int64 | 46k | 74.7M |
| question | string (length) | 54 | 37.8k |
| date | string (length) | 10 | 10 |
| metadata | list (length) | 3 | 3 |
| response_j | string (length) | 29 | 22k |
| response_k | string (length) | 26 | 13.4k |
| `__index_level_0__` | int64 | 0 | 17.8k |
62,463,565
I have three CSV files of tweets, each with ~5M tweets. The following code for concatenating them fails with a low-memory error. My machine has 32GB of memory. How can I assign more memory to this task in pandas? ``` df1 = pd.read_csv('tweets.csv') df2 = pd.read_csv('tweets2.csv') df3 = pd.read_csv('tweets3.csv') frames = [df1, df2, df3] result = pd.concat(frames) result.to_csv('tweets_combined.csv') ``` The error is: ``` $ python concantenate_dataframes.py sys:1: DtypeWarning: Columns (0,1,2,3,4,5,6,8,9,10,11,12,13,14,19,22,23,24) have mixed types.Specify dtype option on import or set low_memory=False. Traceback (most recent call last): File "concantenate_dataframes.py", line 19, in <module> df2 = pd.read_csv('tweets2.csv') File "/home/mona/anaconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 676, in parser_f return _read(filepath_or_buffer, kwds) File "/home/mona/anaconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 454, in _read data = parser.read(nrows) File "/home/mona/anaconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 1133, in read ret = self._engine.read(nrows) File "/home/mona/anaconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 2037, in read data = self._reader.read(nrows) File "pandas/_libs/parsers.pyx", line 859, in pandas._libs.parsers.TextReader.read ``` UPDATE: I tried the suggestions in the answer and still get an error: ``` $ python concantenate_dataframes.py Traceback (most recent call last): File "concantenate_dataframes.py", line 18, in <module> df1 = pd.read_csv('tweets.csv', low_memory=False, error_bad_lines=False) File "/home/mona/anaconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 676, in parser_f return _read(filepath_or_buffer, kwds) File "/home/mona/anaconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 454, in _read data = parser.read(nrows) File "/home/mona/anaconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 1133, in read ret = self._engine.read(nrows) File "/home/mona/anaconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 2037, in read data = self._reader.read(nrows) File "pandas/_libs/parsers.pyx", line 862, in pandas._libs.parsers.TextReader.read File "pandas/_libs/parsers.pyx", line 943, in pandas._libs.parsers.TextReader._read_rows File "pandas/_libs/parsers.pyx", line 2070, in pandas._libs.parsers.raise_parser_error pandas.errors.ParserError: Error tokenizing data. C error: Buffer overflow caught - possible malformed input file. File "pandas/_libs/parsers.pyx", line 874, in pandas._libs.parsers.TextReader._read_low_memory File "pandas/_libs/parsers.pyx", line 928, in pandas._libs.parsers.TextReader._read_rows File "pandas/_libs/parsers.pyx", line 915, in pandas._libs.parsers.TextReader._tokenize_rows File "pandas/_libs/parsers.pyx", line 2070, in pandas._libs.parsers.raise_parser_error pandas.errors.ParserError: Error tokenizing data. C error: Buffer overflow caught - possible malformed input file. ``` I am running the code on Ubuntu 20.04.
2020/06/19
[ "https://Stackoverflow.com/questions/62463565", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2414957/" ]
I think this is a problem with malformed data (some rows are not structured properly in `tweets2.csv`). For that you can use `error_bad_lines=False` and try to change the engine from c to python with `engine='python'` ex : `df2 = pd.read_csv('tweets2.csv', error_bad_lines=False)` or ex : `df2 = pd.read_csv('tweets2.csv', engine='python')` or maybe ex : `df2 = pd.read_csv('tweets2.csv', engine='python', error_bad_lines=False)` but I recommend identifying those records and repairing them. Also, if you want a hacky way to do this, use 1) <https://askubuntu.com/questions/941480/how-to-merge-multiple-files-of-the-same-format-into-a-single-file> 2) <https://askubuntu.com/questions/656039/concatenate-multiple-files-without-header>
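To locate the malformed records before repairing them, here is a minimal sketch using only the standard `csv` module (assuming a comma-delimited file whose rows should all match the header's field count):

```python
import csv

with open('tweets2.csv', newline='') as f:
    reader = csv.reader(f)
    header = next(reader)
    for lineno, row in enumerate(reader, start=2):
        # rows whose field count differs from the header are likely malformed
        if len(row) != len(header):
            print(f"line {lineno}: {len(row)} fields (expected {len(header)})")
```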
Specify `dtype` option on import or set `low_memory=False`
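For instance, a minimal sketch of both suggestions from the warning text:

```python
import pandas as pd

# either let pandas read the whole file before inferring dtypes...
df1 = pd.read_csv('tweets.csv', low_memory=False)

# ...or pin every column to a single dtype up front
df1 = pd.read_csv('tweets.csv', dtype=str)
```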
5,646
61,390,586
I am currently working on a school project, and I'm trying to import data from a CSV file to MySQL using Python. This is my code so far: ``` import mysql.connector import csv mydb = mysql.connector.connect(host='127.0.0.1', user='root', password='abc123!', db='jd_university') cursor = mydb.cursor() with open('C:/Users/xxxxxx/Downloads/Students.csv') as csvfile: reader = csv.DictReader(csvfile, delimiter=',') for row in reader: cursor.execute('INSERT INTO Student (First_Name, Last_Name, DOB, Username, Password, Phone_nr,' 'Email, StreetName_nr, ZIP) ' 'VALUES("%s", "%s", "%s", "%s", "%s", "%s", "%s", "%s", "%s")', row) mydb.commit() cursor.close() ``` When I run this, I get this error: "mysql.connector.errors.DataError: 1292 (22007): Incorrect date value: '%s' for column 'DOB' at row 1" The date format used in the CSV file is yyyy-mm-dd. Any tips on this would help greatly!
2020/04/23
[ "https://Stackoverflow.com/questions/61390586", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13391817/" ]
* You don't need to quote the `%s` placeholders. * Since you're using `DictReader`, you will need to name the columns in your `row` expression (or not use DictReader and hope for the correct order, which I'd not do). Try this: ```py import mysql.connector import csv mydb = mysql.connector.connect( host="127.0.0.1", user="root", password="abc123!", db="jd_university" ) cursor = mydb.cursor() with open("C:/Users/xxxxxx/Downloads/Students.csv") as csvfile: reader = csv.DictReader(csvfile, delimiter=",") for row in reader: values = [ row["First_Name"], row["Last_Name"], row["DOB"], row["Username"], row["Password"], row["Phone_nr"], row["Email"], row["StreetName_nr"], row["ZIP"], ] cursor.execute( "INSERT INTO Student (First_Name, Last_Name, DOB, Username, Password, Phone_nr," "Email, StreetName_nr, ZIP) " "VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s)", values, ) mydb.commit() cursor.close() ```
Validate the datatype for DOB field in your data file and database column. Could be a data issue or table definition issue.
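A quick way to check is to print the raw DOB values the reader actually yields; a minimal sketch, reusing the file path from the question:

```python
import csv

with open('C:/Users/xxxxxx/Downloads/Students.csv') as csvfile:
    for row in csv.DictReader(csvfile, delimiter=','):
        # for a DATE column this should look like '2001-05-17' (yyyy-mm-dd)
        print(repr(row['DOB']))
        break
```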
5,647
25,608,078
I am trying to create an SVG font, so I need to create some paths. One of the letters is defined by the following path: ![Path](https://i.imgur.com/Uj2Wbdd.png) Which I created with [svgwrite](https://pypi.python.org/pypi/svgwrite), by creating two `circles` and a `rect`, and then using inkscape to take the difference of the two circles and the intersection with the straight line, like so: ![Combination](https://i.imgur.com/9f5AchO.png) My question is if I can do this directly with SVG or svgwrite? Either doing the boolean operations, or creating a path that behaves as the one above. I've tried to create a black and white circle with a path like: ``` d="M0,128 A128,128,1,1,0 0 127.9 Z\ M 32 128 A 96 96 1 1 0 32 127.9 Z" ``` with `fill="#000000", stroke = "none", fill-rule="evenodd"` However this ring is not recognized by the SVG font editor (it just creates a black disc). I also tried to create the combination of paths (outer circle, inner circle, horizontal line) ``` d="M0,128 A128,128,1,1,0 0 127.9 Z\ M 32 128 A 96 96 1 1 0 32 127.9 Z \ M 38 128 l 0 15 l 180 0 l 0 -30 l -180 0 z" ``` but although I can see the right-looking result when I open the SVG, the font editor will not recognize the path created which looks like this: ![path generated](https://i.imgur.com/bAK95Ws.png) Is there some way to generate programmatically the path of the first picture above?
2014/09/01
[ "https://Stackoverflow.com/questions/25608078", "https://Stackoverflow.com", "https://Stackoverflow.com/users/218558/" ]
The first arc has a negative (0) draw angle; the second must have a positive (1) draw angle and be drawn from the opposite side to achieve the desired effect. ``` #--------------------------N-----------↓↓↓-↓↓↓-------------P-↓↓↓-↓↓↓↓↓---------------------------------------------- d="M 0 128 A 128 128 1 1 0 0 127.9 Z M 224 128 A 96 96 1 1 1 224 127.9 Z M 38 128 L 0 15 L 180 0 L 0 -30 L -180 0 Z" ```
Following @martineau's suggestion and [this](https://stackoverflow.com/questions/5737975/circle-drawing-with-svgs-arc-path) SO question, I came to this solution: * Create a circle made of two halves * Create two smaller half circles (not quite circular) * then use [`fill-rule: evenodd`](http://www.w3.org/TR/SVG/painting.html#FillRuleProperty) to combine all of them. ``` d=" M 128, 128 m -128, 0 a 128,128 0 1,0 256,0\ a 128,128 0 1,0 -256,0\ M 32,112 a 1.15 1 0 1 1 194, 0z\ M 32,142 a 1.15 1 0 1 0 194, 0z\ " ``` which returns something like this: ![e letter path from script](https://i.imgur.com/qKpBrvP.png). Unfortunately, the Inkscape SVG font editor only renders this: ![Imgur](https://i.imgur.com/WZ8zBa7.png) So I'll have to continue investigating where the problem may come from. Further suggestions are welcome.
5,648
44,430,246
I have a list of dictionaries and in each one of them the key `site` exists. So in other words, this code returns `True`: ``` all('site' in site for site in summary) ``` The question is: what would be the pythonic way to iterate over the list of dictionaries and return `True` if a key different from `site` exists in any of the dictionaries? **Example**: in the following list I would like to return `True` because of the existence of `cost` in the last dictionary, BUT I can't tell in advance what the other key will be; it can be `cost` as in the example or any other string; random keys for that matter. ``` [ {"site": "site_A"}, {"site": "site_B"}, {"site": "site_C", "cost": 1000} ] ```
2017/06/08
[ "https://Stackoverflow.com/questions/44430246", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5517847/" ]
If all dictionaries have the key `site`, the dictionaries have a length of at least 1. The presence of *any other key* would increase the dictionary size to be greater than 1, test for that: ``` any(len(d) > 1 for d in summary) ```
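For instance, against the sample list from the question (assuming it is bound to `summary`):

```python
summary = [
    {"site": "site_A"},
    {"site": "site_B"},
    {"site": "site_C", "cost": 1000},
]
print(any(len(d) > 1 for d in summary))  # True, because of the extra "cost" key
```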
You could just check, for each dictionary `dct`: ``` any(key != "site" for key in dct) ``` If you want to check this for a list of dictionaries `dcts`, shove another `any` around that: `any(any(key != "site" for key in dct) for dct in dcts)` This also makes it easily extensible to allowing multiple different keys. (E.g. `any(key not in ("site", "otherkey") for key in dct)`) Because what's a dictionary good for if you can only use one key?
5,650
71,184,380
I have two lists. I want to create a `Literal` using both these lists ```python category1 = ["image/jpeg", "image/png"] category2 = ["application/pdf"] SUPPORTED_TYPES = typing.Literal[category1 + category2] ``` Is there any way to do this? I have seen the question [typing: Dynamically Create Literal Alias from List of Valid Values](https://stackoverflow.com/questions/64522040/typing-dynamically-create-literal-alias-from-list-of-valid-values) but this doesn't work for my use case because I don't want `mimetype` to be of type `typing.Tuple`. I will be using the `Literal` in a function - ```python def process_file(filename: str, mimetype: SUPPORTED_TYPES) ``` What I have tried - ```python supported_types_list = category1 + category2 SUPPORTED_TYPES = Literal[supported_types_list] SUPPORTED_TYPES = Literal[*supported_types_list] # this gives 2 different literals, rather I want only 1 literal SUPPORTED_TYPES = Union[Literal["image/jpeg", "image/png"], Literal["application/pdf"]] ```
2022/02/19
[ "https://Stackoverflow.com/questions/71184380", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14595305/" ]
Use the same technique as in the question you linked: build the lists from the literal types, instead of the other way around: ``` SUPPORTED_IMAGE_TYPES = typing.Literal["image/jpeg", "image/png"] SUPPORTED_OTHER_TYPES = typing.Literal["application/pdf"] SUPPORTED_TYPES = typing.Literal[SUPPORTED_IMAGE_TYPES, SUPPORTED_OTHER_TYPES] category1 = list(typing.get_args(SUPPORTED_IMAGE_TYPES)) category2 = list(typing.get_args(SUPPORTED_OTHER_TYPES)) ``` The only part of this that wasn't already covered in the other answer is `SUPPORTED_TYPES = typing.Literal[SUPPORTED_IMAGE_TYPES, SUPPORTED_OTHER_TYPES]`, which, [yeah, you can do that](https://www.python.org/dev/peps/pep-0586/#legal-parameters-for-literal-at-type-check-time). It's equivalent to your original definition of `SUPPORTED_TYPES`.
I got an answer to this - create a `Literal` for both the lists, and then create a combined literal ```python category1 = Literal["image/jpeg", "image/png"] category2 = Literal["application/pdf"] SUPPORTED_TYPES = Literal[category1, category2] ``` Sorry: hadn't seen that monica answered the question
5,653
74,214,700
I wrote this code: ``` admitted_List = [1, 5, 10, 50, 100, 500, 1000] tempString = "" finalList = [] for i in range(len(xkcd)-1): if int(xkcd[i] + xkcd[i+1]) in admitted_List: tempString += xkcd[i] continue else: tempString += xkcd[i] finalList.append(int(tempString)) tempString = "" return (finalList) ``` It basically takes in (`xkcd`) a string of weights of Roman numerals like '10010010010100511' and it should return the list of weights like [100, 100, 100, 10, 100, 5, 1, 1] so that C C C XC V I I makes sense. Of course the first 4 chars of the string make the number 1001, which means nothing in Roman numerals, so my number will be 100 and then the check should stop and begin a new number. I tried the above algorithm. Please excuse me if the code or question body is bad; I'm pretty new to Python and Stack Overflow.
2022/10/26
[ "https://Stackoverflow.com/questions/74214700", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18943770/" ]
> > > ``` > nodejs test.js > > ``` > > > > > ``` > nodejs -v > v10.19.0 > > ``` > > You are running this with Node 10, which is beyond end of life and does not support ECMAScript modules (which provide `import`) except as an experimental feature locked behind a flag. Use the other version of Node.js you have installed instead.
What worked for me: 1. Install **curl**: `sudo apt install curl` 2. Install **NVM**: `sudo curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.38.0/install.sh | bash` ``` export NVM_DIR="$([ -z "${XDG_CONFIG_HOME-}" ] && printf %s "${HOME}/.nvm" || printf %s "${XDG_CONFIG_HOME}/nvm")" [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm. ``` 3. List all node versions: `nvm list` 4. Select a node **version** to use `nvm use v19.0.0` 5. Run the file using: `node test.js`
5,654
66,166,103
How do I turn these numbers into a list using python? 16 3 2 13 -> ["16","3","2","13"]
2021/02/12
[ "https://Stackoverflow.com/questions/66166103", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15195153/" ]
You can split it using [split](https://docs.python.org/3/library/stdtypes.html?highlight=split#str.split): ``` "16 3 2 13".split() ``` Output: ``` ["16","3","2","13"] ```
``` a = '16 3 2 13' b = [''] print(type(b)) print(len(b)) j = 0 for i in range(len(a)): if a[i] != ' ': b[j] = b[j] + a[i] else: j = j+1 b.append('') print(b) ```
5,657
72,726,621
I have two lists. ``` L1 = ['worry not', 'be happy', 'very good', 'not worry', 'good very', 'full stop'] # bigrams list L2 = ['take into account', 'always be happy', 'stay safe friend', 'happy be always'] #trigrams list ``` If I look closely, L1 has `'not worry'` and `'good very'`, which are exact reversed repetitions of `'worry not'` and `'very good'`. I need to remove such reversed elements from the list. Similarly in L2, `'happy be always'` is a reverse of `'always be happy'`, which is to be removed as well. The final output I'm looking for is: ``` L1 = ['worry not', 'be happy', 'very good', 'full stop'] L2 = ['take into account', 'always be happy', 'stay safe friend'] ``` I tried one solution `[[max(zip(map(set, map(str.split, group)), group))[1]] for group in L1]` but it is not giving the correct output. Should I be writing different functions for bigram and trigram reverse-repetition removal, or is there a pythonic way of doing this faster, because I'll have to run this on about 10K+ strings?
2022/06/23
[ "https://Stackoverflow.com/questions/72726621", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6803114/" ]
``` L1 = ['worry not', 'be happy', 'very good', 'not worry', 'good very', 'full stop'] # bigrams list L2 = ['take into account', 'always be happy', 'stay safe friend', 'happy be always'] #trigrams list def solution(lst): res = [] for item in lst: if " ".join(item.split()[::-1]) not in res: res.append(item) return res print(solution(L2)) ```
This is a possible solution (the complexity is linear with respect to the number of strings): ``` from collections import defaultdict from operator import itemgetter def dedupe(lst): # group each string with its word-reversed form under a single canonical key d = defaultdict(list) for s in lst: d[max(s, " ".join(s.split()[::-1]))].append(s) # keep the first string seen in each group return list(map(itemgetter(0), d.values())) result1 = dedupe(L1) result2 = dedupe(L2) ``` Here are the results: ``` ['worry not', 'be happy', 'very good', 'full stop'] ['take into account', 'always be happy', 'stay safe friend'] ```
5,659
63,145,924
Let's say I have something like this: ``` --module1 def called(): if caller.class.attrX == 1 : ... --module2 class ABC: attrX = 1 def method(): called() ``` I want to access the caller's class attribute. I know I have to use inspect somehow but can't figure out how exactly. Python 3.
2020/07/29
[ "https://Stackoverflow.com/questions/63145924", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1019129/" ]
The code works perfectly fine for me in Python 3.7.3. `The number is: 2825302` `['28', '82', '253', '2530', '5302']` This is the output I received.
I don't get the error. Better practice is to pass mutable objects as parameters to functions, so I changed `find_ten_substring()` to take an additional parameter: ``` def find_sum(num_str): sum1 = 0 for i in num_str: sum1 += int(i) return sum1 def find_ten_substring(num_str, dict1): list1 = [] for i in range(2, len(num_str) + 1): for j in range(0, i): if (i != j): x = num_str[j:i] if (x in dict1): if (dict1[x] == 10): list1.append(x) elif (x not in dict1): y = find_sum(x) if (y == 10): dict1[x] = y list1.append(x) return list1 dict1 = {} num_str = "2825302" print("The number is:", num_str) result_list = find_ten_substring(num_str, dict1) print(result_list, dict1) ```
5,669
1,223,927
I have django running through WSGI like this : ``` <VirtualHost *:80> WSGIScriptAlias / /home/ptarjan/django/django.wsgi WSGIDaemonProcess ptarjan processes=2 threads=15 display-name=%{GROUP} WSGIProcessGroup ptarjan Alias /media /home/ptarjan/django/mysite/media/ </VirtualHost> ``` But if in python I do : ``` def handler(request) : data = urllib2.urlopen("http://example.com/really/unresponsive/url").read() ``` the whole apache server hangs and is unresponsive with this backtrace ``` #0 0x00007ffe3602a570 in __read_nocancel () from /lib/libpthread.so.0 #1 0x00007ffe36251d1c in apr_file_read () from /usr/lib/libapr-1.so.0 #2 0x00007ffe364778b5 in ?? () from /usr/lib/libaprutil-1.so.0 #3 0x0000000000440ec2 in ?? () #4 0x00000000004412ae in ap_scan_script_header_err_core () #5 0x00007ffe2a2fe512 in ?? () from /usr/lib/apache2/modules/mod_wsgi.so #6 0x00007ffe2a2f9bdd in ?? () from /usr/lib/apache2/modules/mod_wsgi.so #7 0x000000000043b623 in ap_run_handler () #8 0x000000000043eb4f in ap_invoke_handler () #9 0x000000000044bbd8 in ap_process_request () #10 0x0000000000448cd8 in ?? () #11 0x0000000000442a13 in ap_run_process_connection () #12 0x000000000045017d in ?? () #13 0x00000000004504d4 in ?? () #14 0x00000000004510f6 in ap_mpm_run () #15 0x0000000000428425 in main () ``` on Debian Apache 2.2.11-7. Similarly, can we be protected against : ``` def handler(request) : while (1) : pass ``` In PHP, I would set time and memory limits.
2009/08/03
[ "https://Stackoverflow.com/questions/1223927", "https://Stackoverflow.com", "https://Stackoverflow.com/users/90025/" ]
It is not 'deadlock-timeout' you want, as suggested by another answer; that is for a very special purpose which will not help in this case. As far as mod\_wsgi features go, you instead want the 'inactivity-timeout' option of the WSGIDaemonProcess directive. Even then, this is not a complete solution. This is because the 'inactivity-timeout' option specifically detects whether all request processing by a daemon process has ceased; it is not a per-request timeout. It only equates to a per-request timeout if daemon processes are single threaded. As well as helping to unstick a process, the option will also have the side effect of restarting the daemon process if no requests arrive at all in that time. In short, there is no way at the mod\_wsgi level to have per-request timeouts, because there is no real way of interrupting a request, or thread, in Python. What you really need to implement is a timeout on the HTTP request in your application code. I am not sure where that is up to and whether it is available already, but do a Google search for 'urllib2 socket timeout'.
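In application code that boils down to something like this minimal sketch; the `timeout` argument to `urllib2.urlopen()` is available from Python 2.6 on:

```python
import urllib2

def handler(request):
    # give up after 10 seconds instead of blocking the daemon process forever
    data = urllib2.urlopen("http://example.com/really/unresponsive/url",
                           timeout=10).read()
```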
If I understand the question well, you want to protect Apache from locking up when running some random scripts from people. Well, if you're running untrusted code, I think you have other things to worry about that are worse than Apache. That said, you can use some configuration directives to set up a *safer* environment. These two below are very useful: * **WSGIApplicationGroup** - Sets which application group a WSGI application belongs to. It allows separate settings for each user - all WSGI applications within the same application group will execute within the context of the same Python sub-interpreter of the process handling the request. * **WSGIDaemonProcess** - Configures a distinct daemon process for running applications. The daemon processes can be run as a user different to that which the Apache child processes would normally be run as. This directive accepts a lot of useful options; I'll list some of them: + `user=name | user=#uid`, `group=name | group=#gid`: Defines the UNIX user name or numeric uid, and group name or numeric gid, that the daemon processes should be run as. + `stack-size=nnn`: The amount of virtual memory in bytes to be allocated for the stack corresponding to each thread created by mod\_wsgi in a daemon process. + `deadlock-timeout=sss`: Defines the maximum number of seconds allowed to pass before the daemon process is shut down and restarted after a potential deadlock on the Python GIL has been detected. The default is 300 seconds. You can read more about the configuration directives [here](http://code.google.com/p/modwsgi/wiki/ConfigurationDirectives).
5,670
44,746,078
I have some exception handling code in `python` where two exceptions can be raised, the first one being a "superset" of the second one. I.e. the following code summarizes what I need to do (and works fine) ``` try: normal_execution_path() except FirstError: handle_first_error() handle_second_error() except SecondError: handle_second_error() ``` But it requires me to abstract everything into independent functions for the code to remain clean and readable. I was hoping for some simpler syntax like: ``` try: normal_execution_path() except FirstError: handle_first_error() raise SecondError except SecondError: handle_second_error() ``` But this does not seem to work (`SecondError` does not get re-caught if it is raised inside this block). Is there anything doable in that direction, though?
2017/06/25
[ "https://Stackoverflow.com/questions/44746078", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3246191/" ]
If you wish to manually throw the second error to be handled, you can use nested try-catch blocks like these: ``` try: normal_execution_path() except FirstError: try: handle_first_error() raise SecondError except SecondError: handle_second_error() except SecondError: handle_second_error() ```
Perhaps it is worth reviewing the code architecture. But for your particular case: create a generic exception class that handles this type of error, inherit from it for the first and second error cases, and create a handler for this type of error. In the handler, check for the first or second special case and process it as a waterfall. ``` class SupersetException(Exception): pass class FirstError(SupersetException): pass class SecondError(SupersetException): pass def normal_execution_path(): raise SecondError def handle_superset_ex(state): # Our waterfall # We determine from which case to start processing the exception. if type(state) is FirstError: handle_first_error() # If not the first, the handler above will be skipped handle_second_error() try: normal_execution_path() except SupersetException as state: handle_superset_ex(state) ``` Then just develop the idea.
5,671
39,902,759
I have a cube of size `N * N * N`, say `N=8`. Each dimension of the cube is discretised to 1, so that I have labelled points `(0,0,0), (0,0,1)..(N,N,N)`. At each labelled point, I would like to assign a random value, and thus produce an array which stores a value at each vertex. For example `val[0,0,0]=1, val[0,0,1]=1.2 val[0,1,0]=1.3`, ... How do I write Python code to achieve this?
2016/10/06
[ "https://Stackoverflow.com/questions/39902759", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6700176/" ]
You could simply generate lists of lists. While not in any way efficient, it would allow you to access your cube like `val[0][0][0]`. ``` arr = [[[] for _ in range(8)] for _ in range(8)] arr[0][0].append(1) ```
For large matrices, look into using `numpy`. This is the problem that it's designed to solve
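A minimal sketch of what that looks like for the cube in the question (note the `N + 1` points per axis, since the labels run from 0 to N inclusive):

```python
import numpy as np

N = 8
val = np.random.rand(N + 1, N + 1, N + 1)  # one random value per labelled vertex
print(val[0, 0, 0], val[0, 0, 1], val[N, N, N])
```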
5,672
53,480,515
Note: I am quite new to Python so the problem could be anything. * Python: 3.6 * MySQL: 8 I have a MySQL database setup and can successfully query from it through Python, so I am sure my connection is OK. I can insert records inside MySQL Workbench, so I am fairly sure the DB is OK. However, when I run the following code, I get no error ("Done" does print, "error" does not print). However, the record is not being inserted. ``` import mysql.connector cnx = mysql.connector.connect(user='furby', password='something',host='127.0.0.1', database='mydb') cursor = cnx.cursor() document_root = ET.fromstring(semester.read('data')) semester_name = document_root.get("Name") print(semester_name) query = ("SELECT semester_id FROM StudentData.semesters WHERE name = '%s'") cursor.execute(query % semester_name) cursor.fetchall() print(cursor.rowcount) if (cursor.rowcount == 0): print("hi") start_date = document_root.get("StartDate") end_date = document_root.get("EndDate") notes = document_root.get("Notes") try: query = "INSERT INTO StudentData.semesters (name, start_date, end_date, notes) VALUES ('" + semester_name + "', '" + start_date + "', '" + end_date + "', '" + notes + "')" print(query) cursor.execute(query) except: print("error") print("done") ``` I had gotten lots of errors building up to this but suddenly, no errors. However, there must be some error, right? What am I doing wrong here that would stop the record from being inserted without generating any kind of error? **Edit** After Douglas's answer, I changed to print the SQL insert statement and then copy and pasted it into SQL Workbench. Again, it does nothing running it through python but running in SQL Workbench does insert the record as expected.
2018/11/26
[ "https://Stackoverflow.com/questions/53480515", "https://Stackoverflow.com", "https://Stackoverflow.com/users/546813/" ]
I think you should close your cursor. Also, is your connection autocommitted? Please check it; you should commit it!
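A minimal sketch of what that looks like, assuming `mysql.connector`'s default of `autocommit=False`:

```python
cursor.execute(query)
cnx.commit()    # without a commit, the INSERT is rolled back when the connection closes
cursor.close()
cnx.close()
```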
I don't see anything obvious, though I have a suggestion. The same way you created the query above before executing it, do the same below and print it before execution, so you can be sure what you are executing. As a general rule I don't construct query strings within the execute function. I also don't see a close, which might have helped commit the data. I hope this helps.
5,674
36,490,093
Can anyone tell me how to use an if statement in Python to check whether the difference between two numbers is 1? I have written it like below and I am getting an error: `if num1 = num2 + 1:` What should the if condition be?
2016/04/08
[ "https://Stackoverflow.com/questions/36490093", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6174977/" ]
Your guess was mostly correct. It's not that the passing of references was a problem, is that after all the references got passed around, here, and there, and everywhere, the object that was referenced went out of scope and got destroyed, with the reference left hanging around. Using the reference at that point becomes undefined behavior. When all is said and done, `_cycles` ends up being a reference to a function-scoped object from a function that has returned, thus destroying the object that was referenced. A simplified example of what you did: ``` int &foo() { int bar=4; return bar; } void foobar() { int &blah=foo(); } ``` `foo()` returned a reference to a function-scoped object that was already destroyed by the time `foo()` returned, so the returned reference is referring to an object that went out of scope and got destroyed. Using the reference is now undefined behavior.
I changed the line `const vector >& _cycles;` to `const vector > _cycles;` and everything worked fine!
5,675
44,933,326
I am having problems connecting to my database through PostgreSQL version 9.5. However, after running the code below: ``` import psycopg2 as p con = p.connect("dbname ='dvdrental' user = 'myusername' host ='localhost' password ='somepassword'") cur = con.cursor() cur.execute("select * from title") rows = cur.fetchall() ``` I get this error message: ``` psycopg2.OperationalError: could not connect to server: Connection refused (0x0000274D/10061) Is the server running on host "localhost" (::1) and accepting TCP/IP connections on port 5432? could not connect to server: Connection refused (0x0000274D/10061) Is the server running on host "localhost" (127.0.0.1) and accepting TCP/IP connections on port 5432? ``` For background information, I accidentally downloaded the newest version of PostgreSQL, and it connected to port 5432. I need it to connect to PostgreSQL on port 5433, and I do not know how to do that. How can I solve this DB problem? Is this a PostgreSQL problem or a Python problem?
2017/07/05
[ "https://Stackoverflow.com/questions/44933326", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7620397/" ]
Check the listen addresses of postgres, using `netstat` (from the shell): --- ``` plasser@pisbak$ netstat -nl |grep 5432 tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN tcp6 0 0 :::5432 :::* LISTEN unix 2 [ ACC ] STREAM LISTENING 9002 /tmp/.s.PGSQL.5432 plasser@pisbak$ ``` --- If nothing shows up, Postgres is *not* listening on port 5432.
what if you add `port = '5433'` to your `p.connect` line? ``` import psycopg2 as p con = p.connect("dbname ='dvdrental' user = 'myusername' host ='localhost' password ='somepassword' port='5433'") cur = con.cursor() cur.execute("select * from title") rows = cur.fetchall() ```
5,678
34,936,039
Env: Windows 10 Pro I installed python 2.7.9 and using `pip` installed `robotframework` and `robotframework-selenium2library` and it all worked fine with no errors. Then I was doing some research and found that unless there is a reason for me to use 2.x versions of Python, I should stick with 3.x versions. Since 3.4 support already exists for *selenium2library* (read somewhere), I decided to switch to it. I uninstalled `python 2.7.9` and installed `python 3.4`. When I installed `robotframework`, I got the following: > > **C:\Users\username>**`pip install robotframework` > > Downloading/unpacking RobotFramework > Running setup.py (path:C:\Users\username\AppData\Local\Temp\pip\_build\_username\RobotFramework\setup.py) egg\_info for package RobotFramework > no previously-included directories found matching 'src\robot\htmldata\testdata' > Installing collected packages: RobotFramework > Running setup.py install for RobotFramework > File "C:\Python34\Lib\site-packages\robot\running\timeouts\ironpython.py", line 57 > raise self.\_error[0], self.\_error[1], self.\_error[2] > ^ > SyntaxError: invalid syntax > File "C:\Python34\Lib\site-packages\robot\running\timeouts\jython.py", line 56 > raise self.\_error[0], self.\_error[1], self.\_error[2] > ^ > SyntaxError: invalid syntax > no previously-included directories found matching 'src\robot\htmldata\testdata' > replacing interpreter in robot.bat and rebot.bat. > Successfully installed RobotFramework > Cleaning up... > > > When I did `pip list` I do see robotframework is installed. ``` C:\Users\username>pip list pip (1.5.4) robotframework (3.0) setuptools (2.1) ``` Should I be concerned and stick to `Python 2.7.9`?
2016/01/21
[ "https://Stackoverflow.com/questions/34936039", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4496252/" ]
You need to assign `heroclass.toLowerCase();` to the original value of `heroclass`: ``` heroclass = heroclass.toLowerCase(); ``` If you do not do this, the lowercase version of heroclass is not saved.
Put your loop in a labeled block: ``` myblock: { while (true) { //code heroclass = heroclass.toLowerCase(); switch(heroclass) { case "slayer": A = "text"; break myblock; //repeat with other cases } } } //goes to here when you say "break myblock;" ``` What you're doing is basically assigning the label `myblock` to the entire loop. When you say `break myblock` it breaks out of the entire section inside of the brackets. NOTE: I would recommend this solution over the others because it doesn't depend on the magic value assigned by the `switch`; it works no matter what it is. Also, I've added the part to make it case insensitive. Sorry about the confusion!
5,679
73,616,000
I want to hide this warning `UserWarning: pandas only support SQLAlchemy connectable(engine/connection) or database string URI or sqlite3 DBAPI2 connection. Other DBAPI2 objects are not tested, please consider using SQLAlchemy` and I've tried ``` import warnings warnings.simplefilter(action='ignore', category=UserWarning) import pandas ``` but the warning still shows. My Python script reads data from databases. I'm using `pandas.read_sql` for SQL queries and `psycopg2` for DB connections. Also I'd like to know which line triggers the warning.
2022/09/06
[ "https://Stackoverflow.com/questions/73616000", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8089312/" ]
It seems I cannot disable the pandas warning, so I used SQLAlchemy (as the warning message wants me to do so) to wrap the psycopg2 connection. I followed the instruction here: [SQLAlchemy for psycopg2 documentation](https://docs.sqlalchemy.org/en/14/dialects/postgresql.html#module-sqlalchemy.dialects.postgresql.psycopg2) A simple example: ``` import psycopg2 import sqlalchemy import pandas as pd conn = sqlalchemy.create_engine(f"postgresql+psycopg2://{user}:{pw}@{host}:{port}/{db}") query = "select count(*) from my_table" pd.read_sql(query, conn) ``` The warning doesn't get triggered anymore.
The warnings that you're filtering right now are warnings of type `FutureWarning`. The warning that you're getting is of type `UserWarning`, so you should change the warning category to `UserWarning`. I hope [this](https://stackoverflow.com/a/71083448/8293793) answers your question regarding why pandas is giving that warning.
5,684
6,595,673
I'm trying to read a column-oriented CSV file into R as a data frame. The first line of the file is like so: `sDATE, sTIME,iGPS_ALT, ...` and then each additional line is a measurement: `4/10/2011,2:15,78, ...` When I try to read this into R via `d = read.csv('filename')`, I get a duplicate row.names error, since R thinks that the first column of the data is the row names, and since all of the measurements were taken on the same day, the values in the first column do not change. If I put `row.names = NULL` into the `read.csv` call, I get an extraneous column `d$row.names` which corresponds to the sDATE column, and everything is "shifted" one column down, so `d$sDATE` would have `2:15` in it, not `4/10/2011` as needed. If I open my CSV in Excel, do nothing and then save it, everything's cool. I have to process hundreds of these, so manually saving in Excel is not something I want. If there's something I can do programmatically to preprocess these CSVs, in Python or otherwise, that would be great.
2011/07/06
[ "https://Stackoverflow.com/questions/6595673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3926/" ]
`read.csv` only assumes there are row names if there are fewer values in the header than in the other rows. So somehow you are either missing a column name or have an extra column you don't want.
You probably DO have an extra column. But it probably arises from a stray formatted cell (or column of cells) that is actually empty, to the right of your data in your original spreadsheet. Here is the key: Excel will save empty fields in the CSV file for any empty cells that are formatted in your sheet. Here is why you probably have this problem: Because when you open the CSV file with Excel and re-save it the problem with R goes away. What is happening: when you pull a CSV file back into Excel, it will subsequently ignore empty cells to the right or below your data (since CSV files have no formatting). **Conclusion**: be careful saving formatted spreadsheets as CSV files for use with statistical packages. Stray formatting means stray fields in the CSV.
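If you do want to clean the files programmatically, as the question suggests, here is a minimal Python 3 sketch that drops those stray trailing empty fields (assuming the legitimate columns are never empty at the end of a row):

```python
import csv

def strip_trailing_empty(in_path, out_path):
    with open(in_path, newline='') as src, open(out_path, 'w', newline='') as dst:
        writer = csv.writer(dst)
        for row in csv.reader(src):
            # drop the empty cells Excel appended for formatted-but-blank columns
            while row and row[-1] == '':
                row.pop()
            writer.writerow(row)
```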
5,685
43,882,498
The following code is my pipeline for reading images and labels from files: ``` import tensorflow as tf import numpy as np import tflearn.data_utils from tensorflow.python.framework import ops from tensorflow.python.framework import dtypes import sys #process labels in the input file def process_label(label): info=np.zeros(6) ... return info def read_label_file(file): f = open(file, "r") filepaths = [] labels = [] lines = [] for line in f: tokens = line.split(",") filepaths.append([tokens[0],tokens[1],tokens[2]]) labels.append(process_label(tokens[3:])) lines.append(line) return filepaths, np.vstack(labels), lines def get_data_batches(params): # reading labels and file path train_filepaths, train_labels, train_line = read_label_file(params.train_info) test_filepaths, test_labels, test_line = read_label_file(params.test_info) # convert string into tensors train_images = ops.convert_to_tensor(train_filepaths) train_labels = ops.convert_to_tensor(train_labels) train_line = ops.convert_to_tensor(train_line) test_images = ops.convert_to_tensor(test_filepaths) test_labels = ops.convert_to_tensor(test_labels) test_line = ops.convert_to_tensor(test_line) # create input queues train_input_queue = tf.train.slice_input_producer([train_images, train_labels, train_line], shuffle=params.shuffle) test_input_queue = tf.train.slice_input_producer([test_images, test_labels, test_line],shuffle=False) # process path and string tensor into an image and a label train_image=None for i in range(train_input_queue[0].get_shape()[0]): file_content = tf.read_file(params.path_prefix+train_input_queue[0][i]) train_imageT = (tf.to_float(tf.image.decode_jpeg(file_content, channels=params.num_channels)))*(1.0/255) train_imageT = tf.image.resize_images(train_imageT,[params.load_size[0],params.load_size[1]]) train_imageT = tf.random_crop(train_imageT,size=[params.crop_size[0],params.crop_size[1],params.num_channels]) train_imageT = tf.image.random_flip_up_down(train_imageT) train_imageT = tf.image.per_image_standardization(train_imageT) if(i==0): train_image = train_imageT else: train_image = tf.concat([train_image, train_imageT], 2) train_label = train_input_queue[1] train_lineInfo = train_input_queue[2] test_image=None for i in range(test_input_queue[0].get_shape()[0]): file_content = tf.read_file(params.path_prefix+test_input_queue[0][i]) test_imageT = tf.to_float(tf.image.decode_jpeg(file_content, channels=params.num_channels))*(1.0/255) test_imageT = tf.image.resize_images(test_imageT,[params.load_size[0],params.load_size[1]]) test_imageT = tf.image.central_crop(test_imageT, (params.crop_size[0]+0.0)/params.load_size[0]) test_imageT = tf.image.per_image_standardization(test_imageT) if(i==0): test_image = test_imageT else: test_image = tf.concat([test_image, test_imageT],2) test_label = test_input_queue[1] test_lineInfo = test_input_queue[2] # define tensor shape train_image.set_shape([params.crop_size[0], params.crop_size[1], params.num_channels*3]) train_label.set_shape([66]) test_image.set_shape( [params.crop_size[0], params.crop_size[1], params.num_channels*3]) test_label.set_shape([66]) # collect batches of images before processing train_image_batch, train_label_batch, train_lineno = tf.train.batch([train_image, train_label, train_lineInfo],batch_size=params.batch_size,num_threads=params.num_threads,allow_smaller_final_batch=True) test_image_batch, test_label_batch, test_lineno = tf.train.batch([test_image, test_label, test_lineInfo],batch_size=params.test_size,num_threads=params.num_threads,allow_smaller_final_batch=True) if(params.loadSlice=='all'): return train_image_batch, train_label_batch, train_lineno, test_image_batch, test_label_batch, test_lineno elif params.loadSlice=='train': return train_image_batch, train_label_batch elif params.loadSlice=='test': return test_image_batch, test_label_batch elif params.loadSlice=='train_info': return train_image_batch, train_label_batch, train_lineno elif params.loadSlice=='test_info': return test_image_batch, test_label_batch, test_lineno else: return train_image_batch, train_label_batch, test_image_batch, test_label_batch ``` I want to use the same pipeline for loading the test data. The size of my test data is huge and I cannot load it all at once. I have 20453 test examples, which is not an integer multiple of the batch size (here 512). **How can I read all of my test examples via this pipeline once and only once, and then measure the performance on them?** Currently, I am using this code for batching my test data and it does not work. It always reads a full batch from the queue, even when I set **allow\_smaller\_final\_batch** to True ``` with tf.Session() as sess: sess.run(tf.global_variables_initializer()) saver.restore(sess,"checkpoints2/snapshot-16") coord = tf.train.Coordinator() threads = tf.train.start_queue_runners(coord=coord) more = True num_examples=0 while(more): img_test, lbl_test, lbl_line=sess.run([test_image_batch,test_label_batch,test_lineno]) print(lbl_test.shape) size=lbl_test.shape[0] num_examples += size if size<args.batch_size: more = False sess.close() ``` This is the code of my model: ``` from tflearn.layers.core import input_data, dropout, fully_connected from tflearn.layers.conv import conv_2d, max_pool_2d from tflearn.layers.normalization import local_response_normalization from tflearn.layers.normalization import batch_normalization from tflearn.layers.estimator import regression from tflearn.activations import relu def get_alexnet(x,num_output): network = conv_2d(x, 64, 11, strides=4) network = batch_normalization(network,epsilon=0.001) network = relu (network) network = max_pool_2d(network, 3, strides=2) network = conv_2d(network, 192, 5) network = batch_normalization(network,epsilon=0.001) network = relu(network) network = max_pool_2d(network, 3, strides=2) network = conv_2d(network, 384, 3) network = batch_normalization(network,epsilon=0.0001) network = relu(network) network = conv_2d(network, 256, 3) network = batch_normalization(network,epsilon=0.001) network = relu(network) network = conv_2d(network, 256, 3) network = batch_normalization(network,epsilon=0.001) network = relu(network) network = max_pool_2d(network, 3, strides=2) network = fully_connected(network, 4096) network = batch_normalization(network,epsilon=0.001) network = relu(network) network = dropout(network, 0.5) network = fully_connected(network, 4096) network = batch_normalization(network,epsilon=0.001) network = relu(network) network = dropout(network, 0.5) network1 = fully_connected(network, num_output) network2 = fully_connected(network, 12) network3 = fully_connected(network,6) return network1,network2,network3 ```
2017/05/10
[ "https://Stackoverflow.com/questions/43882498", "https://Stackoverflow.com", "https://Stackoverflow.com/users/332289/" ]
`tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath)` is called by the table view each time it needs a new cell. If only 12 cells are visible at a time, then the table view initially needs only 12 cells, so it will ask for only 12 cells. You'd have to scroll before it would need to ask for more. It won't request cells until it needs them.
So interestingly, even though the table is displaying correctly, the printout only reaches 12, and that happens regardless of how many cells you scroll to. Is that what you are finding? This is because you have 12 rows in a view, and the cells are reused, so you are not creating more cells but you are just reusing the 12 that you already have.
5,690
23,271,575
I'm trying to get text to display as bold, or in colors, or possibly in italics, in ipython's qtconsole. I found this link: [How do I print bold text in Python?](https://stackoverflow.com/questions/8924173/python-print-bold-text), and used the first and second answers, but in qtconsole, only the underlining option works. I try: `print '\033[1m' + 'Hello World!' + '\033[0m'` And get: `Hello World!` (No boldface). The colors don't work either. But: `print '\033[4m' + 'Hello World!' + '\033[0m'` And get: `Hello World!` With underlining. This is only in the qtconsole. Running ipython just in the terminal, it works to do boldface and color in this way. There were other options suggested in that link and another, [Print in terminal with colors using Python?](https://stackoverflow.com/questions/287871/print-in-terminal-with-colors-using-python), linked from it, but they all seem more complex, and to use more elaborate packages, than seems necessary for what I want to do, which is simply to get qtconsole to display like the ordinary terminal does. Does anyone know what's going on? Is this simply a limitation of the qtconsole?
2014/04/24
[ "https://Stackoverflow.com/questions/23271575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3566002/" ]
Those are ANSI escapes, special sequences of characters which terminals process to switch font styles. The Qt console interprets some of them, but not all of the ones that serious terminals do. This sequence works to print in red, for instance: ``` print('\x1b[1;31m'+'Hello world'+'\x1b[0m') ``` However, if you're trying to write a cross platform application, be aware that the Windows command prompt doesn't handle these codes. Some of the more complex packages can process them to produce similar effects on Windows. The Qt console can also display simple HTML, like this: ``` from IPython.display import HTML HTML("<i>Italic text</i>") ``` But of course, HTML doesn't work in regular terminals.
If you mean the body text of the IPython notebook (Markdown), you can put 2 underscore characters directly before and after your text to make it **BOLD**: `__BOLD TEXT__` => **BOLD TEXT** If you put a backslash before that, it will be counteracted: `\__BOLD TEXT__` => \_\_BOLD TEXT\_\_
5,691
17,128,878
I was trying to install `autoclose.vim` to Vim. I noticed I didn't have a `~/.vim/plugin` folder, so I accidentally made a `~/.vim/plugins` folder (notice the extra 's' in plugins). I then added `au FileType python set rtp += ~/.vim/plugins` to my .vimrc, because from what I've read, that will allow me to automatically source the scripts in that folder. The plugin didn't load for me until I realized my mistake and took out the extra 's' from 'plugins'. I'm confused because this new path isn't even defined in my runtime path. I'm basically wondering why the plugin loaded when I had it in `~/.vim/plugin` but not in `~/.vim/plugins`?
2013/06/15
[ "https://Stackoverflow.com/questions/17128878", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2467761/" ]
[:help load-plugins](http://vimdoc.sourceforge.net/htmldoc/starting.html#load-plugins) outlines how plugins are loaded. Adding a folder to your `rtp` alone does not suffice; it must have a `plugin` subdirectory. For example, given `:set rtp+=/tmp/foo`, a file `/tmp/foo/plugin/bar.vim` would be detected and loaded, but neither `/tmp/foo/plugins/bar.vim` nor `/tmp/foo/bar.vim` would be.
You are on the right track with `set rtp+=...` but there's a bit more to it (`rtp` is non-recursive, help indexing, many corner cases) than what meets the eye so it is not a very good idea to do it by yourself. Unless you are ready for a months-long drop in productivity. If you want to store all your plugins in a special directory you should use a proper `runtimepath`/plugin-management solution. I suggest [Pathogen](http://www.vim.org/scripts/script.php?script_id=2332) (`rtp`-manager) or [Vundle](http://www.vim.org/scripts/script.php?script_id=3458) (plugin-manager) but there are many others.
5,699
53,494,097
I am trying to get hands-on with Selenium and WebDriver with Python. ``` from selenium import webdriver PROXY = "119.82.253.95:61853" url = 'http://google.co.in/search?q=book+flights' chrome_options = webdriver.ChromeOptions() chrome_options.add_argument('--proxy-server=%s' % PROXY) driver = webdriver.Chrome(options=chrome_options, executable_path="/usr/local/bin/chromedriver") driver.get(url) driver.implicitly_wait(20) ``` When I access the site normally, without a proxy, everything works fine. But when I try to access it using a proxy, it shows a captcha with the message "Our systems have detected unusual traffic from your computer". How do I avoid it?
2018/11/27
[ "https://Stackoverflow.com/questions/53494097", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2954789/" ]
`fscanf` is a non-starter. The only way to read empty fields would be to use `"%c"` to read delimiters (and that would require you to know which fields were empty beforehand -- not very useful) Otherwise, depending on the *format specifier* used, `fscanf` would simply consume the `tabs` as leading whitespace or experience a *matching failure* or *input failure*. Continuing from the comment, in order to tokenize based on delimiters that may separate empty fields, you will need to use `strsep` as `strtok` will consider consecutive delimiters as one. While your string is a bit unclear where the `tabs` are located, a short example of tokenizing with `strsep` could be as follows. Note that `strsep` takes a pointer-to-pointer as its first argument, e.g. ``` #include <stdio.h> #include <stdlib.h> #include <string.h> int main (void) { int n = 0; const char *delim = "\t\n"; char *s = strdup ("usrid\tUser Id 0\t15\tstring\td\tk\ty\ty\t\t\t0\t0"), *toks = s, /* tokenize with separate pointer to preserve s */ *p; while ((p = strsep (&toks, delim))) printf ("token[%2d]: '%s'\n", n++ + 1, p); free (s); } ``` (**note:** since `strsep` will modify the address held by the string pointer, you need to preserve a pointer to the beginning of `s` so it can be freed when no longer needed -- thanks JL) **Example Use/Output** ``` $ ./bin/strtok_tab token[ 1]: 'usrid' token[ 2]: 'User Id 0' token[ 3]: '15' token[ 4]: 'string' token[ 5]: 'd' token[ 6]: 'k' token[ 7]: 'y' token[ 8]: 'y' token[ 9]: '' token[10]: '' token[11]: '0' token[12]: '0' ``` Look things over and let me know if you have further questions.
> > I wanna use fscanf to read consecutive tabs as empty fields and store them in a structure. > > > Ideally, code should read a *line*, as with `fgets()` and then parse the *string*. Yet staying with `fscanf()`, this can be done in a loop. --- The main idea is to use `"%[^/t/n]"` to read one token. If the next character is a `'\t'`, then the return value will not be 1. Test for that. A width limit is wise. Then read the separator and look for tab, end-of-line or if end-of-file/error occurred. ``` #define TABS_PER_LINE 12 #define TOKENS_PER_LINE (TABS_PER_LINE + 1) #define TOKEN_SIZE 100 #define TOKEN_FMT_N "99" int fread_tab_delimited_line(FILE *istream, int n, char token[n][TOKEN_SIZE]) { for (int i = 0; i < n; i++) { int token_count = fscanf(istream, "%" TOKEN_FMT_N "[^\t\n]", token[i]); if (token_count != 1) { token[i][0] = '\0'; // Empty token } char separator; int term_count = fscanf(istream, "%c", &separator); // fgetc() makes more sense here // if end-of-file or end-of-line if (term_count != 1 || separator == '\n') { if (i == 0 && token_count != 1 && term_count != 1) { return 0; } return i + 1; } if (separator != '\t') { return -1; // Token too long } } return -1; // Token too many tokens found } ``` Sample driving code ``` void test_tab_delimited_line(FILE *istream) { char token[TOKENS_PER_LINE][TOKEN_SIZE]; long long line_count = 0; int token_count; while ((token_count = fread_tab_delimited_line(istream, TOKENS_PER_LINE, token)) > 0) { printf("Line %lld\n", ++line_count); for (int i = 0; i < token_count; i++) { printf("%d: <%s>\n", i, token[i]); } } while (token_count > 0); if (token_count < 0) { puts("Trouble reading any tokens."); } } ```
5,704
63,322,884
I have a Python script that is responsible for verifying the existence of a process by its name. I am using the pip module `pgrep`. The problem is that it does not let me kill the processes, either with a pip kill module or with `os.kill`, because there are several processes that I want to kill and they are saved in a list, for example `pid = [2222, 4444, 6666]`. How could I kill those processes at once, since the above modules don't give me results?
2020/08/09
[ "https://Stackoverflow.com/questions/63322884", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14063362/" ]
You would loop over processes using a `for` loop. Ideally you should send a `SIGTERM` before resorting to `SIGKILL`, because it can allow processes to exit more gracefully. ``` import time import os import signal # send all the processes a SIGTERM for p in pid: os.kill(p, signal.SIGTERM) # give them a short time to do any cleanup time.sleep(2) # in case still exist - send them a SIGKILL to definitively remove them # if they are already exited, just ignore the error and carry on for p in pid: try: os.kill(p, signal.SIGKILL) except ProcessLookupError: pass ```
Try this; it may work ``` import psutil processes = {'pro1', 'pro2', 'pro3'} for proc in psutil.process_iter(): if proc.name() in processes: proc.kill() ``` For more information you can refer [here](https://psutil.readthedocs.io/en/latest/)
5,705
53,546,396
How do I truncate digits after the decimal point in Python without rounding? Example: I have x = 2.97656 and I want it to be 2.9, not 3.0. Thank you
2018/11/29
[ "https://Stackoverflow.com/questions/53546396", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9705031/" ]
If you don't want to use `math.round()` you can use `math.floor()`: ``` import math x = 2.97656 print(math.floor(x * 10) / 10) #Output = 2.9 ```
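If you need other precisions, the same idea generalizes; a small sketch (the helper name is just for illustration, and it truncates toward negative infinity, which matches plain truncation for positive numbers):

```python
import math

def truncate(x, ndigits=1):
    factor = 10 ** ndigits
    return math.floor(x * factor) / factor

print(truncate(2.97656))     # 2.9
print(truncate(2.97656, 3))  # 2.976
```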
You can use `round(var, precision)`; please see this link for more info: **<https://www.geeksforgeeks.org/precision-handling-python/>**
5,706
64,575,636
I'm trying to convert JSON data into a dict by using `load()`, but I'm unable to do so if I have more than one object. For example, the code below works perfectly: I can dump 'dog' into a JSON file and then I can load 'dog' and print it out as a dict. ``` import json dog = { "name":"Sally", "color": "yellow", "breed": "lab", "age": 2, }, with open("Pets.json","w") as output_file: json.dump(dog,output_file) with open("Pets.json","r") as infile: dog_dict = json.load(infile) print(dog_dict) ``` Output: [{'name': 'Sally', 'color': 'yellow', 'breed': 'lab', 'age': 2}] However, let's say I add an object 'cat' to the existing code: ``` dog = { "name":"Sally", "color": "yellow", "breed": "lab", "age": 2, }, cat = { "name":"Daniel", "color": "black", "breed": "unknown", "age": 8, } with open("Pets.json","w") as output_file: json.dump(dog,output_file) json.dump(cat,output_file) with open("Pets.json","r") as infile: dog_dict = json.load(infile) cat_dict = json.load(infile) print(dog_dict) print(cat_dict) ``` I can successfully dump 'dog' and 'cat' into the JSON file, but when I try to load both 'dog' and 'cat' as dicts, I get an error message: ``` dog_dict = json.load(infile) File "/usr/lib/python3.8/json/__init__.py", line 293, in load return loads(fp.read(), File "/usr/lib/python3.8/json/__init__.py", line 357, in loads return _default_decoder.decode(s) File "/usr/lib/python3.8/json/decoder.py", line 340, in decode raise JSONDecodeError("Extra data", s, end) json.decoder.JSONDecodeError: Extra data: line 1 column 65 (char 64) ```
2020/10/28
[ "https://Stackoverflow.com/questions/64575636", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14209856/" ]
You should only dump one JSON value to a file. `json.load` will try to load the whole file; it doesn't stop at the first valid JSON object. You could combine them into an array ``` j_obj = [dog, cat] ``` Or create a new dict ``` j_obj = {'dog': dog, 'cat': cat} ``` Then `j_obj` can be dumped to a file and read back, and you'll still be able to get `dog` and `cat` back individually if you need them that way. A quick note: in your first example, the trailing `,` on the dog object actually makes what you're dumping a JSON array, which is what you are printing out ``` [{'name': 'Sally', 'color': 'yellow', 'breed': 'lab', 'age': 2}] ``` It's not just a dog dictionary
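A minimal sketch of the full round trip under that approach (the `pets` name and the dict keys are just an illustration, mirroring the question's data):

```python
import json

dog = {"name": "Sally", "color": "yellow", "breed": "lab", "age": 2}
cat = {"name": "Daniel", "color": "black", "breed": "unknown", "age": 8}

# dump one combined object instead of two separate ones
with open("Pets.json", "w") as output_file:
    json.dump({"dog": dog, "cat": cat}, output_file)

with open("Pets.json", "r") as infile:
    pets = json.load(infile)

print(pets["dog"])
print(pets["cat"])
```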
The JSON module doesn't combine them automatically. If you want your JSON to contain a number of objects, use an array and insert your dictionaries into it, then dump the array.
5,707
63,074,629
I am a newbie to Python dictionaries. Excuse me for my mistakes. I want to create a list of **all** the keys which hold the maximum and minimum values in a Python dictionary. I searched about it on Google but didn't get any answer. I have written the following code: ``` a = {1:1, 2:3, 4:3, 3:2, 5:1, 6:3} maxi = [keys for keys, values in a.items() if keys == max(a, key=a.get)] mini = [keys for keys, values in a.items() if keys == min(a, key=a.get)] print(maxi) print(mini) ``` My output: ``` [2] [1] ``` Expected output: ``` [2,4,6] [1,5] ``` What did I do wrong? Is there any better (or other) way to do this? I would be more than happy for your help. Thanks in advance!
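For reference, a minimal sketch of one way to get the expected output; the key point is to compare the *values* rather than the keys, and to compute the max/min once instead of inside each comprehension:

```python
a = {1: 1, 2: 3, 4: 3, 3: 2, 5: 1, 6: 3}

max_value = max(a.values())
min_value = min(a.values())

maxi = [k for k, v in a.items() if v == max_value]
mini = [k for k, v in a.items() if v == min_value]

print(maxi)  # [2, 4, 6]
print(mini)  # [1, 5]
```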
2020/07/24
[ "https://Stackoverflow.com/questions/63074629", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13285566/" ]
The `ngModel` binding might have precedence here. You could ignore the `value` attribute and set `updatedStockValue` in its definition. Try the following ```js @Component({ selector: 'app-stock-status', template:` <input type="number" min="0" required [(ngModel)]="updatedStockValue"/> <button class="btn btn-primary" [style.background]="color" (click)="stockValueChanged()">Change Stock Value</button> `, styleUrls: ['./stock-status.component.css'] }) export class AppComponent { updatedStockValue: number = 0; ... } ```
You can initialize a variable in the template with ng-init if you don't want to do it in the controller. ``` <input type="number" min='0' required [(ngModel)]='updatedStockValue' ng-init="updatedStockValue=0"/> ```
5,709
68,500,403
I am using Pandas to analyze a dataset which includes a column named "Age on Intake" (floating numbers). I had been trying to further categorize the data into a few small age buckets using the function I wrote. However, I keep getting the error **"*'<=' not supported between instances of 'str' and 'int'*"**. How could I fix this please? **My function:** ``` def convert_age(num): if num <=7: return "0-7 days" elif num <= 21: return "1-3 weeks" elif num <= 42: return "3-6 weeks" elif num <= 84: return "7-12 weeks" elif num <= 168: return "12 weeks - 6 months" elif num <= 365: return "6-12 months" elif num <= 730: return "1-2 years" elif num <= 1095: return "2-3 years" else: return "3+ years" df['Age on Intake'] = df['Age on Intake'].apply(convert_age) ``` **The df['Age on Intake'] column includes floating numbers:** ``` 0 95.0 1 1096.0 2 111.0 3 111.0 4 397.0 ... 21474 NaN 21475 NaN 21476 365.0 21477 699.0 21478 61.0 Name: Age on Intake, Length: 21479, dtype: float64 ``` **Error Message I get:** ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-31-ca12621d6b19> in <module> 22 return "3+ years" 23 ---> 24 df['Age on Intake'] = df['Age on Intake'].apply(convert_age) 25 26 /opt/anaconda3/lib/python3.8/site-packages/pandas/core/series.py in apply(self, func, convert_dtype, args, **kwds) 4198 else: 4199 values = self.astype(object)._values -> 4200 mapped = lib.map_infer(values, f, convert=convert_dtype) 4201 4202 if len(mapped) and isinstance(mapped[0], Series): pandas/_libs/lib.pyx in pandas._libs.lib.map_infer() <ipython-input-31-ca12621d6b19> in convert_age(num) 3 def convert_age(num): 4 ----> 5 if num <=7: 6 return "0-7 days" 7 elif num <= 21: TypeError: '<=' not supported between instances of 'str' and 'int' ```
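For reference, a hedged sketch of one way around this error: the traceback suggests some entries in the column are strings, so coercing the column to numeric first (and letting `pd.cut` do the bucketing) avoids comparing `str` with `int`. This assumes the `df` from the question:

```python
import pandas as pd

# coerce non-numeric entries (e.g. stray strings) to NaN
ages = pd.to_numeric(df['Age on Intake'], errors='coerce')

bins = [0, 7, 21, 42, 84, 168, 365, 730, 1095, float('inf')]
labels = ["0-7 days", "1-3 weeks", "3-6 weeks", "7-12 weeks",
          "12 weeks - 6 months", "6-12 months", "1-2 years",
          "2-3 years", "3+ years"]

df['Age on Intake'] = pd.cut(ages, bins=bins, labels=labels,
                             include_lowest=True)
```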
2021/07/23
[ "https://Stackoverflow.com/questions/68500403", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16494766/" ]
`for` loops in Rust act on iterators, so if you want succinct semantics, change your code to use them. There's not really that much other choice - what's ergonomic in C isn't necessarily ergonomic in Rust, and vice versa. If your `next` functions follow a common pattern, you can create a structure that implements `Iterator` that takes the `next` function as a `FnMut` closure. In my opinion, your "useless variable" is only there because you've special cased getting the first element by doing it without the `next` function. If you changed your code so that `next(None)` returns the first item, you wouldn't need that.
It would be most idiomatic to convert the code to use an `Iterator`, but that is "non-trivial" in this case due to how next works. The simplest version I could create that was similar to the C code, yet IMO reasonably idiomatic, was an `on_each`-style function that accepts a closure. ``` #[derive(Default)] pub struct SomeData {} fn next(data: &mut SomeData) -> bool {todo!() } fn process(data: &mut SomeData) { todo!() } fn condition1(data: &SomeData) -> bool { todo!() } fn condition2(data: &SomeData) -> bool { todo!() } fn condition3(data: &SomeData) -> bool { todo!() } pub fn on_each_data(f: impl for<'a> Fn(&'a mut SomeData)) { let mut data = SomeData::default(); f(&mut data); while next(&mut data) { f(&mut data); } } pub fn iterate_data_2() { on_each_data(|data| { if condition1(data) { return; } if condition2(data) { return; } if condition3(data) { return; } process(data) }); } ```
5,710
55,639,746
I am new to python and Jupyter Notebook The objective of the code I am writing is to request the user to introduce 10 different integers. The program is supposed to return the highest odd number introduced previously by the user. My code is as follows: ``` i=1 c=1 y=1 while i<=10: c=int(input('Enter an integer number: ')) if c%2==0: print('The number is even') elif c> y y=c print('y') i=i+1 ``` My loop is running over and over again, and I don't get a solution. I guess the code is well written. It must be a slight detail I am not seeing. Any help would be much appreciated!
2019/04/11
[ "https://Stackoverflow.com/questions/55639746", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10023598/" ]
You have `elif c > y`, you should just need to add a colon there so it's `elif c > y:`
Yup. ``` i=1 c=1 y=1 while i<=10: c=int(input('Enter an integer number: ')) # This line was off if c%2==0: print('The number is even') elif c> y: # Need also ':' y=c print('y') i=i+1 ```
5,711
32,893,568
I'm trying to parse a json string with an escape character (of some sort, I guess) ``` { "publisher": "\"O'Reilly Media, Inc.\"" } ``` The parser works if I remove the `\"` characters from the string; the exceptions raised by different parsers are: **json** ``` File "/usr/lib/python2.7/json/__init__.py", line 338, in loads return _default_decoder.decode(s) File "/usr/lib/python2.7/json/decoder.py", line 366, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/usr/lib/python2.7/json/decoder.py", line 382, in raw_decode obj, end = self.scan_once(s, idx) ValueError: Expecting , delimiter: line 17 column 20 (char 392) ``` **ujson** ``` ValueError: Unexpected character in found when decoding object value ``` How do I make the parser handle these characters? update: [![enter image description here](https://i.stack.imgur.com/cY8l2.png)](https://i.stack.imgur.com/cY8l2.png) *ps. json is imported as ujson in this example* [![enter image description here](https://i.stack.imgur.com/2d195.png)](https://i.stack.imgur.com/2d195.png) This is what my IDE shows; the comma was just added accidentally. There is no trailing comma at the end of the JSON, and the JSON is valid. [![enter image description here](https://i.stack.imgur.com/uuFTB.png)](https://i.stack.imgur.com/uuFTB.png) the string definition.
2015/10/01
[ "https://Stackoverflow.com/questions/32893568", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4597501/" ]
You almost certainly did not define properly escaped backslashes. If you define the string properly the JSON parses *just fine*: ``` >>> import json >>> json_str = r''' ... { ... "publisher": "\"O'Reilly Media, Inc.\"" ... } ... ''' # raw string to prevent the \" from being interpreted by Python >>> json.loads(json_str) {u'publisher': u'"O\'Reilly Media, Inc."'} ``` Note that I used a *raw string literal* to define the string in Python; if I did not, the `\"` would be interpreted by Python and a regular `"` would be inserted. You'd have to *double* the backslash otherwise: ``` >>> print '\"' " >>> print '\\"' \" >>> print r'\"' \" ``` Reencoding the parsed Python structure back to JSON shows the backslashes re-appearing, with the `repr()` output for the string using the same double backslash: ``` >>> json.dumps(json.loads(json_str)) '{"publisher": "\\"O\'Reilly Media, Inc.\\""}' >>> print json.dumps(json.loads(json_str)) {"publisher": "\"O'Reilly Media, Inc.\""} ``` If you did not escape the `\` escape you'll end up with unescaped quotes: ``` >>> json_str_improper = ''' ... { ... "publisher": "\"O'Reilly Media, Inc.\"" ... } ... ''' >>> print json_str_improper { "publisher": ""O'Reilly Media, Inc."" } >>> json.loads(json_str_improper) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/mj/Development/Library/buildout.python/parts/opt/lib/python2.7/json/__init__.py", line 338, in loads return _default_decoder.decode(s) File "/Users/mj/Development/Library/buildout.python/parts/opt/lib/python2.7/json/decoder.py", line 366, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/Users/mj/Development/Library/buildout.python/parts/opt/lib/python2.7/json/decoder.py", line 382, in raw_decode obj, end = self.scan_once(s, idx) ValueError: Expecting , delimiter: line 3 column 20 (char 22) ``` Note that the `\"` sequences now are printed as `"`, the backslash is gone!
Your JSON is invalid. If you have questions about your JSON objects, you can always validate them with [JSONlint](http://jsonlint.com). In your case you have an object ``` { "publisher": "\"O'Reilly Media, Inc.\"", } ``` and you have an extra comma indicating that something else should be coming. So JSONlint yields > > Parse error on line 2: > ...edia, Inc.\"", } > ---------------------^ > Expecting 'STRING' > > > which would begin to help you find where the error was. Removing the comma for ``` { "publisher": "\"O'Reilly Media, Inc.\"" } ``` yields > > Valid JSON > > > Update: I'm keeping the stuff in about JSONlint as it may be helpful to others in the future. As for your well formed JSON object, I have ``` import json d = { "publisher": "\"O'Reilly Media, Inc.\"" } print "Here is your string parsed." print(json.dumps(d)) ``` yielding > > Here is your string parsed. > {"publisher": "\"O'Reilly Media, Inc.\""} > > > Process finished with exit code 0 > > >
5,713
35,901,517
I get the following error when I run my code which has been annotated with @profile: ``` Wrote profile results to monthly_spi_gamma.py.prof Traceback (most recent call last): File "/home/james.adams/anaconda2/lib/python2.7/site-packages/kernprof.py", line 233, in <module> sys.exit(main(sys.argv)) File "/home/james.adams/anaconda2/lib/python2.7/site-packages/kernprof.py", line 223, in main prof.runctx('execfile_(%r, globals())' % (script_file,), ns, ns) File "/home/james.adams/anaconda2/lib/python2.7/cProfile.py", line 140, in runctx exec cmd in globals, locals File "<string>", line 1, in <module> File "monthly_spi_gamma.py", line 1, in <module> import indices File "indices.py", line 14, in <module> @profile NameError: name 'profile' is not defined ``` Can anyone comment as to what may solve the problem? I am using Python 2.7 (Anaconda) on Windows 7.
2016/03/09
[ "https://Stackoverflow.com/questions/35901517", "https://Stackoverflow.com", "https://Stackoverflow.com/users/85248/" ]
I worked this out by using the -l option, i.e. ``` $ kernprof.py -l my_code.py ```
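For context, a minimal sketch of how the `-l` flow fits together (`script.py` is a hypothetical target; `kernprof -l` injects `profile` into the builtins at runtime, which is why the bare `@profile` decorator works there but raises `NameError` under plain `python`):

```python
# script.py
@profile  # provided by kernprof -l at runtime
def work():
    total = 0
    for i in range(1000):
        total += i
    return total

if __name__ == "__main__":
    work()
```

Run it with `kernprof -l script.py`, then view the timings with `python -m line_profiler script.py.lprof`.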
``` kernprof -l -b web_app.py ``` This worked for me. If we run ``` kernprof --help ``` we see an option to put `profile` in the builtin namespace: ``` usage: kernprof [-h] [-V] [-l] [-b] [-o OUTFILE] [-s SETUP] [-v] [-u UNIT] [-z] script ... Run and profile a python script. positional arguments: script The python script file to run args Optional script arguments optional arguments: -h, --help show this help message and exit -V, --version show program's version number and exit -l, --line-by-line Use the line-by-line profiler instead of cProfile. Implies --builtin. -b, --builtin Put 'profile' in the builtins. Use 'profile.enable()'/'.disable()', '@profile' to decorate functions, or 'with profile:' to profile a section of code. -o OUTFILE, --outfile OUTFILE Save stats to <outfile> (default: 'scriptname.lprof' with --line-by-line, 'scriptname.prof' without) -s SETUP, --setup SETUP Code to execute before the code to profile -v, --view View the results of the profile in addition to saving it -u UNIT, --unit UNIT Output unit (in seconds) in which the timing info is displayed (default: 1e-6) -z, --skip-zero Hide functions which have not been called ```
5,714
32,838,802
Say that I have a color image, and naturally this will be represented by a 3-dimensional array in python, say of shape (n x m x 3) and call it img. I want a new 2-d array, call it "narray" to have a shape (3,nxm), such that each row of this array contains the "flattened" version of R,G,and B channel respectively. Moreover, it should have the property that I can easily reconstruct back any of the original channel by something like ``` narray[0,].reshape(img.shape[0:2]) #so this should reconstruct back the R channel. ``` The question is how can I construct the "narray" from "img"? The simple img.reshape(3,-1) does not work as the order of the elements are not desirable for me. Thanks
2015/09/29
[ "https://Stackoverflow.com/questions/32838802", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4929035/" ]
You need to use [`np.transpose`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.transpose.html) to rearrange dimensions. Now, `n x m x 3` is to be converted to `3 x (n*m)`, so send the last axis to the front and shift right the order of the remaining axes `(0,1)`. Finally , reshape to have `3` rows. Thus, the implementation would be - ``` img.transpose(2,0,1).reshape(3,-1) ``` Sample run - ``` In [16]: img Out[16]: array([[[155, 33, 129], [161, 218, 6]], [[215, 142, 235], [143, 249, 164]], [[221, 71, 229], [ 56, 91, 120]], [[236, 4, 177], [171, 105, 40]]]) In [17]: img.transpose(2,0,1).reshape(3,-1) Out[17]: array([[155, 161, 215, 143, 221, 56, 236, 171], [ 33, 218, 142, 249, 71, 91, 4, 105], [129, 6, 235, 164, 229, 120, 177, 40]]) ```
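And a short sketch of the round trip, to confirm the reconstruction property the question asks for (the shapes here are just an illustration):

```python
import numpy as np

img = np.random.randint(0, 256, (4, 2, 3))      # n x m x 3
narray = img.transpose(2, 0, 1).reshape(3, -1)  # 3 x (n*m)

# recover a single channel, e.g. R
r = narray[0].reshape(img.shape[:2])
assert (r == img[..., 0]).all()

# or rebuild the whole image
img_back = narray.reshape(3, *img.shape[:2]).transpose(1, 2, 0)
assert (img_back == img).all()
```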
[ORIGINAL ANSWER] Let's say we have an array `img` of size `m x n x 3` to transform into an array `new_img` of size `3 x (m*n)` Initial Solution: ``` new_img = img.reshape((img.shape[0]*img.shape[1]), img.shape[2]) new_img = new_img.transpose() ``` [EDITED ANSWER] **Flaw**: The reshape starts from the first dimension and reshapes the remainder; this solution has the potential to mix the values from the third dimension, which in the case of images could be semantically incorrect. Adapted Solution: ``` # Dimensions: [m, n, 3] new_img = img.transpose() # Dimensions: [3, n, m] new_img = new_img.reshape(new_img.shape[0], (new_img.shape[1]*new_img.shape[2])) ``` Strict Solution: ``` # Dimensions: [m, n, 3] new_img = img.transpose((2, 0, 1)) # Dimensions: [3, m, n] new_img = new_img.reshape(new_img.shape[0], (new_img.shape[1]*new_img.shape[2])) ``` The strict version is a better way forward since it accounts for the order of dimensions; the `Adapted` and `Strict` results contain the same values (`set(new_img[0,...])`), but with the order shuffled.
5,715
71,140,438
I am a beginner in Python and would really appreciate if someone could help me with the following: I would like to run this script 10 times and for that change for every run the sub-batch (from 0-9): E.g. the first run would be: ``` python $GWAS_TOOLS/gwas_summary_imputation.py \ -by_region_file $DATA/eur_ld.bed.gz \ -gwas_file $OUTPUT/harmonized_gwas/CARDIoGRAM_C4D_CAD_ADDITIVE.txt.gz \ -parquet_genotype $DATA/reference_panel_1000G/chr1.variants.parquet \ -parquet_genotype_metadata $DATA/reference_panel_1000G/variant_metadata.parquet \ -window 100000 \ -parsimony 7 \ -chromosome 1 \ -regularization 0.1 \ -frequency_filter 0.01 \ -sub_batches 10 \ -sub_batch 0 \ --standardise_dosages \ -output $OUTPUT/summary_imputation_1000G/CARDIoGRAM_C4D_CAD_ADDITIVE_chr1_sb0_reg0.1_ff0.01_by_region.txt.gz ``` The second run would be ``` python $GWAS_TOOLS/gwas_summary_imputation.py \ -by_region_file $DATA/eur_ld.bed.gz \ -gwas_file $OUTPUT/harmonized_gwas/CARDIoGRAM_C4D_CAD_ADDITIVE.txt.gz \ -parquet_genotype $DATA/reference_panel_1000G/chr1.variants.parquet \ -parquet_genotype_metadata $DATA/reference_panel_1000G/variant_metadata.parquet \ -window 100000 \ -parsimony 7 \ -chromosome 1 \ -regularization 0.1 \ -frequency_filter 0.01 \ -sub_batches 10 \ -sub_batch 1 \ --standardise_dosages \ -output $OUTPUT/summary_imputation_1000G/CARDIoGRAM_C4D_CAD_ADDITIVE_chr1_sb0_reg0.1_ff0.01_by_region.txt.gz ``` I am sure this can be done with a loop but not quite sure how to do it in python? Thank you so much for any advice, Sally
2022/02/16
[ "https://Stackoverflow.com/questions/71140438", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18222525/" ]
While we can't show you how to retrofit a loop to the python code without actually seeing the python code, you could just use a shell loop to accomplish what you want without touching the python code. For bash shell, it would look like this: ``` for sub_batch in {0..9}; do \ python $GWAS_TOOLS/gwas_summary_imputation.py \ -by_region_file $DATA/eur_ld.bed.gz \ -gwas_file $OUTPUT/harmonized_gwas/CARDIoGRAM_C4D_CAD_ADDITIVE.txt.gz \ -parquet_genotype $DATA/reference_panel_1000G/chr1.variants.parquet \ -parquet_genotype_metadata $DATA/reference_panel_1000G/variant_metadata.parquet \ -window 100000 \ -parsimony 7 \ -chromosome 1 \ -regularization 0.1 \ -frequency_filter 0.01 \ -sub_batches 10 \ -sub_batch $sub_batch \ --standardise_dosages \ -output $OUTPUT/summary_imputation_1000G/CARDIoGRAM_C4D_CAD_ADDITIVE_chr1_sb0_reg0.1_ff0.01_by_region.txt.gz done ```
A loop in Python over the values 0 to 9 is very easy: ```py for i in range(0, 10): print(i) # replace with the work for sub-batch i ```
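If you would rather drive it from Python itself rather than the shell, a hedged sketch using `subprocess`; only the varying flag is shown (the remaining fixed arguments from the question would be appended the same way), and the `$GWAS_TOOLS` expansion assumes that variable is set in the environment:

```python
import os
import subprocess

script = os.path.expandvars("$GWAS_TOOLS/gwas_summary_imputation.py")

for sub_batch in range(10):
    cmd = [
        "python", script,
        "-sub_batches", "10",
        "-sub_batch", str(sub_batch),
        # ... the other fixed "-flag value" pairs from the question ...
    ]
    subprocess.run(cmd, check=True)
```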
5,718
36,680,407
I am on RHEL6 with Python 2.6 and need to install rrdtool with Python. I have to upload and install packages manually as the network admin blocks yum and pip outgoing traffic for security reasons. During installation I encounter an error about a missing rrdtoolmodule.c; where can I locate the file? Or am I missing something? ``` [user@host ~]$ sudo pip install py-rrdtool-1.0b1.tar.gz [sudo] password for user: /usr/lib/python2.6/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning. InsecurePlatformWarning Processing ./py-rrdtool-1.0b1.tar.gz Installing collected packages: py-rrdtool Running setup.py install for py-rrdtool Complete output from command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip-a5tFI5-build/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-krfsUz-record/install-record.txt --single-version-externally-managed --compile: running install running build running build_py creating build creating build/lib.linux-x86_64-2.6 copying rrdtool.py -> build/lib.linux-x86_64-2.6 running build_ext building '_rrdtool' extension creating build/temp.linux-x86_64-2.6 creating build/temp.linux-x86_64-2.6/src gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -I/usr/local/include -I/usr/include/python2.6 -c src/_rrdtoolmodule.c -o build/temp.linux-x86_64-2.6/src/_rrdtoolmodule.o src/_rrdtoolmodule.c:34:17: error: rrd.h: No such file or directory In file included from src/rrd_extra.h:37, from src/_rrdtoolmodule.c:35: src/rrd_format.h:59: error: expected specifier-qualifier-list before 'rrd_value_t' src/rrd_format.h:295: error: expected specifier-qualifier-list before 'rrd_value_t' src/_rrdtoolmodule.c: In function 'PyRRD_create': src/_rrdtoolmodule.c:93: warning: implicit declaration of function 'rrd_create' src/_rrdtoolmodule.c:94: warning: implicit declaration of function 'rrd_get_error' src/_rrdtoolmodule.c:94: warning: passing argument 2 of 'PyErr_SetString' makes pointer from integer without a cast /usr/include/python2.6/pyerrors.h:78: note: expected 'const char *' but argument is of type 'int' src/_rrdtoolmodule.c:95: warning: implicit declaration of function 'rrd_clear_error' src/_rrdtoolmodule.c: In function 'PyRRD_update': src/_rrdtoolmodule.c:122: warning: implicit declaration of function 'rrd_update' src/_rrdtoolmodule.c:123: warning: passing argument 2 of 'PyErr_SetString' makes pointer from integer without a cast /usr/include/python2.6/pyerrors.h:78: note: expected 'const char *' but argument is of type 'int' src/_rrdtoolmodule.c: In function 'PyRRD_fetch': src/_rrdtoolmodule.c:145: error: 'rrd_value_t' undeclared (first use in this function) src/_rrdtoolmodule.c:145: error: (Each undeclared identifier is reported only once src/_rrdtoolmodule.c:145: error: for each function it appears in.)
src/_rrdtoolmodule.c:145: error: 'data' undeclared (first use in this function) src/_rrdtoolmodule.c:145: error: 'datai' undeclared (first use in this function) src/_rrdtoolmodule.c:145: warning: left-hand operand of comma expression has no effect src/_rrdtoolmodule.c:154: warning: implicit declaration of function 'rrd_fetch' src/_rrdtoolmodule.c:156: warning: passing argument 2 of 'PyErr_SetString' makes pointer from integer without a cast /usr/include/python2.6/pyerrors.h:78: note: expected 'const char *' but argument is of type 'int' src/_rrdtoolmodule.c:165: error: expected ';' before 'dv' src/_rrdtoolmodule.c:191: error: 'dv' undeclared (first use in this function) src/_rrdtoolmodule.c: In function 'PyRRD_graph': src/_rrdtoolmodule.c:245: warning: implicit declaration of function 'rrd_graph' src/_rrdtoolmodule.c:247: warning: passing argument 2 of 'PyErr_SetString' makes pointer from integer without a cast /usr/include/python2.6/pyerrors.h:78: note: expected 'const char *' but argument is of type 'int' src/_rrdtoolmodule.c: In function 'PyRRD_tune': src/_rrdtoolmodule.c:297: warning: implicit declaration of function 'rrd_tune' src/_rrdtoolmodule.c:298: warning: passing argument 2 of 'PyErr_SetString' makes pointer from integer without a cast /usr/include/python2.6/pyerrors.h:78: note: expected 'const char *' but argument is of type 'int' src/_rrdtoolmodule.c: In function 'PyRRD_last': src/_rrdtoolmodule.c:324: warning: implicit declaration of function 'rrd_last' src/_rrdtoolmodule.c:325: warning: passing argument 2 of 'PyErr_SetString' makes pointer from integer without a cast /usr/include/python2.6/pyerrors.h:78: note: expected 'const char *' but argument is of type 'int' src/_rrdtoolmodule.c: In function 'PyRRD_resize': src/_rrdtoolmodule.c:350: warning: implicit declaration of function 'rrd_resize' src/_rrdtoolmodule.c:351: warning: passing argument 2 of 'PyErr_SetString' makes pointer from integer without a cast /usr/include/python2.6/pyerrors.h:78: note: expected 'const char *' but argument is of type 'int' src/_rrdtoolmodule.c: In function 'PyRRD_info': src/_rrdtoolmodule.c:380: warning: passing argument 2 of 'PyErr_SetString' makes pointer from integer without a cast /usr/include/python2.6/pyerrors.h:78: note: expected 'const char *' but argument is of type 'int' src/_rrdtoolmodule.c:423: error: 'unival' has no member named 'u_val' src/_rrdtoolmodule.c:423: error: 'unival' has no member named 'u_val' src/_rrdtoolmodule.c:423: error: 'unival' has no member named 'u_val' src/_rrdtoolmodule.c:423: error: 'unival' has no member named 'u_val' src/_rrdtoolmodule.c:423: error: 'unival' has no member named 'u_val' src/_rrdtoolmodule.c:423: error: 'unival' has no member named 'u_val' src/_rrdtoolmodule.c:424: error: 'unival' has no member named 'u_val' src/_rrdtoolmodule.c:424: error: 'unival' has no member named 'u_val' src/_rrdtoolmodule.c:424: error: 'unival' has no member named 'u_val' src/_rrdtoolmodule.c:424: error: 'unival' has no member named 'u_val' src/_rrdtoolmodule.c:424: error: 'unival' has no member named 'u_val' src/_rrdtoolmodule.c:424: error: 'unival' has no member named 'u_val' src/_rrdtoolmodule.c:426: error: 'unival' has no member named 'u_val' src/_rrdtoolmodule.c:426: error: 'unival' has no member named 'u_val' src/_rrdtoolmodule.c:426: error: 'unival' has no member named 'u_val' src/_rrdtoolmodule.c:426: error: 'unival' has no member
named 'u_val' src/_rrdtoolmodule.c:426: error: 'unival' has no member named 'u_val' src/_rrdtoolmodule.c:426: error: 'unival' has no member named 'u_val' src/_rrdtoolmodule.c:443: error: 'unival' has no member named 'u_val' src/_rrdtoolmodule.c:443: error: 'unival' has no member named 'u_val' src/_rrdtoolmodule.c:443: error: 'unival' has no member named 'u_val' src/_rrdtoolmodule.c:443: error: 'unival' has no member named 'u_val' src/_rrdtoolmodule.c:443: error: 'unival' has no member named 'u_val' src/_rrdtoolmodule.c:443: error: 'unival' has no member named 'u_val' src/_rrdtoolmodule.c:455: error: 'unival' has no member named 'u_val' src/_rrdtoolmodule.c:455: error: 'unival' has no member named 'u_val' src/_rrdtoolmodule.c:455: error: 'unival' has no member named 'u_val' src/_rrdtoolmodule.c:455: error: 'unival' has no member named 'u_val' src/_rrdtoolmodule.c:455: error: 'unival' has no member named 'u_val' src/_rrdtoolmodule.c:455: error: 'unival' has no member named 'u_val' error: command 'gcc' failed with exit status 1 ---------------------------------------- Command "/usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip-a5tFI5-build/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-krfsUz-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-a5tFI5-build ```
2016/04/17
[ "https://Stackoverflow.com/questions/36680407", "https://Stackoverflow.com", "https://Stackoverflow.com/users/79311/" ]
The one-hour difference is due to Daylight Saving Time, which by definition is not reflected in Unix timestamps. You may want to consider [moment-timezone.js](http://momentjs.com/timezone/docs/) to cope with DST in time conversions.
You can use [Date.parse()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/parse) in javascript. ```js const isoDate = new Date(); const convertToUnix = Date.parse(isoDate.toISOString()); ```
5,720
8,198,162
I have a script for deleting images older than a date. Can I pass this date as an argument when I call to run the script? Example: This script `delete_images.py` deletes images older than a date (YYYY-MM-DD) ``` python delete_images.py 2010-12-31 ``` Script (works with a fixed date (xDate variable)) ``` import os, glob, time root = '/home/master/files/' # one specific folder #root = 'D:\\Vacation\\*' # or all the subfolders too # expiration date in the format YYYY-MM-DD ### I have to pass the date from the script ### xDate = '2010-12-31' print '-'*50 for folder in glob.glob(root): print folder # here .jpg image files, but could be .txt files or whatever for image in glob.glob(folder + '/*.jpg'): # retrieves the stats for the current jpeg image file # the tuple element at index 8 is the last-modified-date stats = os.stat(image) # put the two dates into matching format lastmodDate = time.localtime(stats[8]) expDate = time.strptime(xDate, '%Y-%m-%d') print image, time.strftime("%m/%d/%y", lastmodDate) # check if image-last-modified-date is outdated if expDate > lastmodDate: try: print 'Removing', image, time.strftime("(older than %m/%d/%y)", expDate) os.remove(image) # commented out for testing except OSError: print 'Could not remove', image ```
2011/11/19
[ "https://Stackoverflow.com/questions/8198162", "https://Stackoverflow.com", "https://Stackoverflow.com/users/871976/" ]
The quick but crude way is to use `sys.argv`. ``` import sys xDate = sys.argv[1] ``` A more robust, extendable way is to use the [argparse](http://docs.python.org/library/argparse.html#module-argparse) module: ``` import argparse parser=argparse.ArgumentParser() parser.add_argument('xDate') args=parser.parse_args() ``` Then to access the user-supplied value you'd use `args.xDate` instead of `xDate`. Using the `argparse` module you automatically get a help message for free when a user types ``` delete_images.py -h ``` It also gives a helpful error message if the user fails to supply the proper inputs. You can also easily set up a default value for `xDate`, convert `xDate` into a `datetime.date` object, and, as they say on TV, "much, much more!". --- I see later in you script you use ``` expDate = time.strptime(xDate, '%Y-%m-%d') ``` to convert the `xDate` string into a time tuple. You could do this with `argparse` so `args.xDate` is automatically a time tuple. For example, ``` import argparse import time def mkdate(datestr): return time.strptime(datestr, '%Y-%m-%d') parser=argparse.ArgumentParser() parser.add_argument('xDate',type=mkdate) args=parser.parse_args() print(args.xDate) ``` when run like this: ``` % test.py 2000-1-1 ``` yields ``` time.struct_time(tm_year=2000, tm_mon=1, tm_mday=1, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=5, tm_yday=1, tm_isdst=-1) ``` --- PS. Whatever method you choose to use (sys.argv or argparse), it would be a good idea to pull ``` expDate = time.strptime(xDate, '%Y-%m-%d') ``` outside of the `for-loop`. Since the value of `xDate` never changes, you only need to compute `expDate` once.
The command line options can be accessed via the list `sys.argv`. So you can simply use ``` xDate = sys.argv[1] ``` (`sys.argv[0]` is the name of the current script.)
5,721
47,555,613
It appears, based on a [urwid example](http://urwid.org/tutorial/#horizontal-menu) that `u'\N{HYPHEN BULLET}'` will create a unicode character that is a hyphen intended for a bullet. The names for unicode characters seem to be defined at [fileformat.info](http://www.fileformat.info/info/unicode/char/b.htm) and some element of using Unicode in Python appears in the [howto documentation](https://docs.python.org/2/howto/unicode.html). Though there is no mention of the `\N{}` syntax. If you pull all these docs together you get the idea that the constant `u"\N{HYPHEN BULLET}"` creates a ⁃ However, this is all a theory based on pulling all this data together. I can find no documentation for `\N{}` in the Python docs. My question is whether my theory of operation is correct and whether it is documented anywhere?
2017/11/29
[ "https://Stackoverflow.com/questions/47555613", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4360746/" ]
Not every gory detail can be found in a how-to. The [table of escape sequences](https://docs.python.org/2/reference/lexical_analysis.html#string-literals) in the reference manual includes: Escape Sequence: `\N{name}` Meaning: Character named `name` in the Unicode database (Unicode only)
The `\N{}` syntax is documented in the [Unicode HOWTO](https://docs.python.org/3/howto/unicode.html?highlight=unicode%20howto#the-string-type), at least. The names are documented in the Unicode standard, such as: ``` http://www.unicode.org/Public/UCD/latest/ucd/NamesList.txt ``` The `unicodedata` module can look up a name for a character: ``` >>> import unicodedata as ud >>> ud.name('A') 'LATIN CAPITAL LETTER A' >>> print('\N{LATIN CAPITAL LETTER A}') A ```
5,726
55,837,477
How can I take all the '|'-delimited txt files from a directory path, convert them to csv, and save them in a location using Python? I have tried this code, which is hardcoded. ``` import csv txt_file = r"SentiWS_v1.8c_Positive.txt" csv_file = r"NewProcessedDoc.csv" with open(txt_file, "r") as in_text: in_reader = csv.reader(in_text, delimiter = '|') with open(csv_file, "w") as out_csv: out_writer = csv.writer(out_csv, newline='') for row in in_reader: out_writer.writerow(row) ``` I expect csv files with the same file names as the txt files in the directory path.
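For reference, a minimal sketch of the non-hardcoded version; `src_dir` and `dst_dir` are hypothetical placeholders for the input and output locations:

```python
import csv
import glob
import os

src_dir = "path-to-input-dir"
dst_dir = "path-to-output-dir"

for txt_file in glob.glob(os.path.join(src_dir, "*.txt")):
    name = os.path.splitext(os.path.basename(txt_file))[0]
    csv_file = os.path.join(dst_dir, name + ".csv")
    with open(txt_file, "r") as in_text, \
         open(csv_file, "w", newline="") as out_csv:
        csv.writer(out_csv).writerows(csv.reader(in_text, delimiter="|"))
```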
2019/04/24
[ "https://Stackoverflow.com/questions/55837477", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11186737/" ]
You are trying to instantiate a typealias and are getting the `interface doesn't have a constructor` error. To my understanding, a typealias with a function type works in three steps: 1. Define the typealias itself ``` typealias MyHandler = (Int, String) -> Unit ``` 2. Declare an action of that type ``` val myHandler: MyHandler = {intValue, stringValue -> // do something } ``` 3. Use that action, e.g. ``` class Foo(val action: MyHandler) { val stateOne: Boolean = false // ... fun bar() { if (stateOne) { action.invoke(1, "One") } else { action.invoke(0, "notOne") } } } ```
`typealias` is just an alias for the type :) in other words, it's just another name for the type. Imagine having to write `(Int, String) -> Unit` all the time. With `typealias` you can define something like you did to help out and write less, i.e. instead of: ``` fun Foo(handler: (Int, String) -> Unit) ``` You can write: ``` fun Foo(handler: MyHandler) ``` They also help by giving hints, meaning they can give you a way to describe types in a more contextualized way. Imagine implementing an app where in its entire domain time is represented as an `Int`. One approach we could follow is defining: ``` typealias Time = Int ``` From there on, every time you want to code something specifically with time, instead of using `Int` you can provide more context to others by using `Time`. This is not a new type, it's just another name for an `Int`, so therefore everything that works with integers works with it too. There's more if you want to have a [look](https://kotlinlang.org/docs/reference/type-aliases.html)
5,728
50,279,728
I have code like this: ``` x = [] for fitur in self.fiturs: x.append(fitur[0]) a = [x , rpxy_list] join = zip(*a) print join ``` and in self.fiturs is: ``` F1,1,1,1,1,0,1,1,0,0,1 F2,1,0,0,0,0,0,1,0,1,1 F3,1,0,0,0,0,0,1,1,1,1 F4,1,0,0,0,0,0,1,1,1,0 F5,14,24,22,22,22,16,18,19,26,22 F6,8.0625,6.2,6.2609,6.6818,6.2174,6.3333,7.85,6.0833,6.9655,6.9167 F7,0,0,0,0,0,0,1,0,1,0 F8,1,0,2,0,0,0,2,0,0,0 F9,1,0,0,0,1,1,0,0,0,0 F10,8,4,3,3,3,6,8,5,8,4 F11,0,0,1,0,0,1,0,0,0,0 F12,1,0,0,0,1,0,1,1,1,1 ``` The **rpxy\_list** contains floats, and the output of the program is: ``` C:\Users\USER\PycharmProjects\Skripsi\venv\Scripts\python.exe C:/Users/USER/PycharmProjects/Skripsi/coba.py [('F1', 0.2182178902359924), ('F1', 0.2182178902359924), ('F2', 0.408248290463863), ('F3', 0.2), ('F4', 0.408248290463863), ('F5', 0.37142857142857144), ('F6', 0.5053765608632352), ('F7', 0.5), ('F8', 0.6201736729460423), ('F9', 0.2182178902359924), ('F10', 0.6864064729836441), ('F11', 0.5), ('F12', 0.0), ('F13', 0), ('F14', 0), ('F15', 0), ('F16', 0), ('F17', 0), ('F18', 0), ('F19', 0), ('F20', 0), ('F21', 0), ('F22', 0), ('F23', 0.2672612419124244), ('F24', 0.4364357804719848), ('F25', 0), ('F26', 0), ('F27', 0), ('F28', 0), ('F29', 0), ('F30', 0), ('F31', 0), ('F32', 0), ('F33', 0), ('F34', 0), ('F35', 0), ('F36', 0), ('F37', 0.7808688094430304)] Process finished with exit code 0 ``` And I just want the output like this: ``` ['F1', 0.2182178902359924] ['F2', 0.408248290463863] etc ``` What should I do with my code?
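For reference, a minimal sketch of printing each pair on its own line, assuming `x` and `rpxy_list` from the code above:

```python
for name, value in zip(x, rpxy_list):
    print([name, value])
```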
2018/05/10
[ "https://Stackoverflow.com/questions/50279728", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9665999/" ]
It looks okay for the most part. With Spark 2 you can try something like this, eliminating the extra values there: ``` case class Rating(name:Int, product:Int, rating:Int) val spark:SparkSession = ??? val df = spark.read.csv("/path/to/file") .map({ case Row(u: Int, p: Int, r:Int) => Rating(u, p, r) }) ``` Hope this helps. Cheers.
My problem was related to NaN values down the road. I fixed it using this: predictions.select([to\_null(c).alias(c) for c in predictions.columns]).na.drop() Also, I had to import "from pyspark.sql.functions import col, isnan, when, trim"
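For context, a hedged sketch of what a `to_null` helper of that shape typically looks like (this assumes numeric-like columns, since `isnan` is only defined for them; the exact helper used in the original thread may differ):

```python
from pyspark.sql.functions import col, isnan, when, trim

def to_null(c):
    # keep the value only when it is not null, not NaN and not blank
    return when(~(col(c).isNull() | isnan(col(c)) | (trim(col(c)) == "")),
                col(c))
```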
5,731
45,690,043
I have a str like `rjg[]u[ur"fur[ufrng[]"gree`, and I want to replace "[" and "]" between "" with #, so the result is `rjg[]u[ur"fur[ufrng[]"gree` => `rjg[]u[ur"fur#ufrng##"gree`. How can I get this in Python?
2017/08/15
[ "https://Stackoverflow.com/questions/45690043", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6298732/" ]
One-liner solution: ``` import re text = 'rjg[]u[ur"fur[ufrng[]"gree' text = re.sub(r'(")([^"]+)(")', lambda pat: pat.group(1)+pat.group(2).replace(']', '#').replace('[', '#')+pat.group(3), text) print text ``` Output: ``` rjg[]u[ur"fur#ufrng##"gree ```
I would try ``` import re data = 'rjg[]u[ur"fur[ufrng[]"gree' L = data.split('"') for i in range(1, len(L), 2): L[i] = re.sub(r'[\[\]]', '#', L[i]) result = '"'.join(L) ```
5,732
49,638,674
I have a string `s`, and I want to remove `'.mainlog'` from it. I tried: ``` >>> s = 'ntm_MonMar26_16_59_41_2018.mainlog' >>> s.strip('.mainlog') 'tm_MonMar26_16_59_41_2018' ``` Why did the `n` get removed from `'ntm...'`? Similarly, I had another issue: ``` >>> s = 'MonMar26_16_59_41_2018_rerun.mainlog' >>> s.strip('.mainlog') 'MonMar26_16_59_41_2018_reru' ``` Why does python insist on removing `n`'s from my strings? How I can properly remove `.mainlog` from my strings?
2018/04/03
[ "https://Stackoverflow.com/questions/49638674", "https://Stackoverflow.com", "https://Stackoverflow.com/users/868546/" ]
From the Python documentation: <https://docs.python.org/2/library/string.html#string.strip> Currently, it tries to strip all the characters which you mentioned ('.', 'm', 'a', 'i', ...). You can use `str.replace` instead. ``` s.replace('.mainlog', '') ```
You are using the wrong function. `strip` removes characters from the beginning and end of the string. By default spaces, but you can give a list of characters to remove. You should use instead: ``` s.replace('.mainlog', '') ``` Or: ``` import os.path os.path.splitext(s)[0] ```
5,737
44,218,387
This is what I encountered when trying to import the thread package: ``` >>> import thread Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/thread.py", line 3 print('This is ultran00b's package - thread') ``` I tried uninstalling and installing again but it won't work.
2017/05/27
[ "https://Stackoverflow.com/questions/44218387", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7074612/" ]
The `thread` module was removed in Python 3 (it was renamed to `_thread`). Try `threading` instead: ``` import threading ```
Are you trying to import the Thread class? Use: ``` from threading import Thread ```
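For reference, a minimal usage sketch of the `threading` module:

```python
import threading

def worker(n):
    # runs in its own thread
    print("hello from thread", n)

t = threading.Thread(target=worker, args=(1,))
t.start()
t.join()
```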
5,742
2,990,819
I'm looking for a template engine for Java with syntax like Django templates or Twig (PHP). Does it exist? Update: The target is to have the same template files for different languages. ``` <html> {{head}} {{ var|escape }} {{body}} </html> ``` can be rendered from Python (Django) code as well as from PHP, using Twig. I'm looking for a Java solution. Any other template system available in Java, PHP and Python is suitable.
2010/06/07
[ "https://Stackoverflow.com/questions/2990819", "https://Stackoverflow.com", "https://Stackoverflow.com/users/108826/" ]
* <http://www.jangod.org/> (There is now also <https://github.com/HubSpot/jinjava>) * run django via jython on jvm * use <http://mustache.github.com/>
Sure, there are all sorts of template engines for Java. I've used FreeMarker, Velocity and StringTemplate. I'm not sure what you mean by Django-like syntax; each engine has its own variations on a templating approach. For a comparison of some different engines check out [here](http://java-source.net/open-source/template-engines).
5,743
55,656,522
I installed Python 3.7.3 on Windows 10, but I can't install Python packages via pip in Git Bash (Git SCM), due to my company's internet proxy. I tried to create environment variables for the proxy via the following, but it didn't work: * export http\_proxy='proxy.com:8080' * export https\_proxy='proxy.com:8080' I found a temporary solution that works for me: inserting the following aliases into the .bashrc file: * alias python='winpty python.exe' * alias pip='pip --proxy=proxy.com:8080' The above works, but I am looking for a nicer solution so that I don't need to set aliases for every command I use. I was thinking about something like an environment variable but haven't found out how to set it up in a Windows Git Bash environment yet. Do you have an idea of how to do it?
2019/04/12
[ "https://Stackoverflow.com/questions/55656522", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8691122/" ]
One approach here would be to use lookarounds to ensure that you match *only* islands of exactly two sixes: ``` String regex = "(?<!6)66(?!6)"; String text = "6678793346666786784966"; Pattern pattern = Pattern.compile(regex); Matcher matcher = pattern.matcher(text); ``` This finds a count of two, for the input string you provided (the two matches being the `66` at the very start and end of the string). The regex pattern uses two lookarounds to assert that what comes before the first 6 and after the second 6 are *not* other sixes: ``` (?<!6) assert that what precedes is NOT 6 66 match and consume two 6's (?!6) assert that what follows is NOT 6 ```
You need to use ``` String regex = "(?<!6)66(?!6)"; ``` See the [regex demo](https://regex101.com/r/3QHER6/2). [![enter image description here](https://i.stack.imgur.com/6b4St.png)](https://i.stack.imgur.com/6b4St.png) **Details** * `(?<!6)` - no `6` right before the current location * `66` - `66` substring * `(?!6)` - no `6` right after the current location. See the [Java demo](https://ideone.com/UrxExY): ``` String regex = "(?<!6)66(?!6)"; String text = "6678793346666786784966"; Pattern pattern = Pattern.compile(regex); Matcher matcher = pattern.matcher(text); int match=0; while (matcher.find()) { match++; } System.out.println("count is "+match); // => count is 2 ```
5,751
19,838,976
What's the most pythonic way of joining a list so that there are commas between each item, except for the last which uses "and"? ``` ["foo"] --> "foo" ["foo","bar"] --> "foo and bar" ["foo","bar","baz"] --> "foo, bar and baz" ["foo","bar","baz","bah"] --> "foo, bar, baz and bah" ```
2013/11/07
[ "https://Stackoverflow.com/questions/19838976", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1277170/" ]
The fix based on the comment led to this fun way. It assumes no commas occur in the string entries of the list to be joined (which would be problematic anyway, so is a reasonable assumption.) ``` def special_join(my_list): return ", ".join(my_list)[::-1].replace(",", "dna ", 1)[::-1] In [50]: def special_join(my_list): return ", ".join(my_list)[::-1].replace(",", "dna ", 1)[::-1] ....: In [51]: special_join(["foo", "bar", "baz", "bah"]) Out[51]: 'foo, bar, baz and bah' In [52]: special_join(["foo"]) Out[52]: 'foo' In [53]: special_join(["foo", "bar"]) Out[53]: 'foo and bar' ```
In case you need a solution where negative indexing isn't supported (i.e. Django QuerySet) ``` def oxford_join(string_list): if len(string_list) < 1: text = '' elif len(string_list) == 1: text = string_list[0] elif len(string_list) == 2: text = ' and '.join(string_list) else: text = ', '.join(string_list) text = '{parts[0]}, and {parts[2]}'.format(parts=text.rpartition(', ')) # oxford comma return text oxford_join(['Apples', 'Oranges', 'Mangoes']) ```
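For comparison, a minimal variant of the same idea for plain lists, without an Oxford comma:

```python
def join_and(items):
    if not items:
        return ""
    if len(items) == 1:
        return items[0]
    return ", ".join(items[:-1]) + " and " + items[-1]

print(join_and(["foo", "bar", "baz", "bah"]))  # foo, bar, baz and bah
```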
5,753
13,555,386
I try to start a Celery worker server from a command line: ``` celery -A tasks worker --loglevel=info ``` The code in tasks.py: ``` import os os.environ[ 'DJANGO_SETTINGS_MODULE' ] = "proj.settings" from celery import task @task() def add_photos_task( lad_id ): ... ``` I get the next error: ``` Traceback (most recent call last): File "/usr/local/bin/celery", line 8, in <module> load_entry_point('celery==3.0.12', 'console_scripts', 'celery')() File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/__main__.py", line 14, in main main() File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/celery.py", line 946, in main cmd.execute_from_commandline(argv) File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/celery.py", line 890, in execute_from_commandline super(CeleryCommand, self).execute_from_commandline(argv))) File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 177, in execute_from_commandline argv = self.setup_app_from_commandline(argv) File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 295, in setup_app_from_commandline self.app = self.find_app(app) File "/usr/local/lib/python2.7/site-packages/celery-3.0.12-py2.7.egg/celery/bin/base.py", line 313, in find_app return sym.celery AttributeError: 'module' object has no attribute 'celery' ``` Does anybody know why the 'celery' attribute cannot be found? Thank you for help. The operating system is Linux Debian 5. **Edit**. May be the clue. Could anyone explain me the next comment to a function (why we must be sure that it finds modules in the current directory)? ``` # from celery/utils/imports.py def import_from_cwd(module, imp=None, package=None): """Import module, but make sure it finds modules located in the current directory. Modules located in the current directory has precedence over modules located in `sys.path`. """ if imp is None: imp = importlib.import_module with cwd_in_path(): return imp(module, package=package) ```
2012/11/25
[ "https://Stackoverflow.com/questions/13555386", "https://Stackoverflow.com", "https://Stackoverflow.com/users/749288/" ]
I forgot to create a celery object in tasks.py: ``` from celery import Celery from celery import task celery = Celery('tasks', broker='amqp://guest@localhost//') #! import os os.environ[ 'DJANGO_SETTINGS_MODULE' ] = "proj.settings" @task() def add_photos_task( lad_id ): ... ``` After that we can start the worker normally: ``` celery -A tasks worker --loglevel=info ```
When you run `celery -A tasks worker --loglevel=info`, your celery app should be exposed in the module `tasks`. It shouldn't be wrapped in a function or an `if` statement. If you create the app with `make_celery` in another file, you should import the celery app into the file you are passing to celery.
5,763
31,800,998
Issue: Remove the hyperlinks, numbers and signs like `^&*$ etc` from twitter text. The tweet file is in CSV tabulated format as shown below: ``` s.No. username tweetText 1. @abc This is a test #abc example.com 2. @bcd This is another test #bcd example.com ``` Being a novice at python, I searched and strung together the following code, thanks to the code given [here](https://stackoverflow.com/questions/8376691/how-to-remove-hashtag-user-link-of-a-tweet-using-regular-expression): ``` import re fileName="path-to-file//tweetfile.csv" fileout=open("Output.txt","w") with open(fileName,'r') as myfile: data=myfile.read().lower() # read the file and convert all text to lowercase clean_data=' '.join(re.sub("(@[A-Za-z0-9]+)|([^0-9A-Za-z \t])|(\w+:\/\/\S+)"," ",data).split()) # regular expression to strip the html out of the text fileout.write(clean_data+'\n') # write the cleaned data to a file fileout.close() myfile.close() print "All done" ``` It does the data stripping, but the output file format is not as I desire. The output text file is in a single line like `s.no username tweetText 1 abc` This is a cleaned tweet `2 bcd` This is another cleaned tweet `3 efg` This is yet another cleaned tweet How can I fix this code to give me an output like the one given below? ``` s.No. username tweetText 1 abc This is a test 2 bcd This is another test 3 efg This is yet another test ``` I think something needs to be added to the regular expression code but I don't know what it could be. Any pointers or suggestions will be helpful.
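For reference, a hedged sketch of keeping one cleaned tweet per line: process the file line by line instead of reading it all at once, with the same regular expression (`fileName` as defined in the question):

```python
import re

pattern = r"(@[A-Za-z0-9]+)|([^0-9A-Za-z \t])|(\w+://\S+)"

with open(fileName, "r") as myfile, open("Output.txt", "w") as fileout:
    for line in myfile:
        # clean each tweet separately so line breaks are preserved
        cleaned = " ".join(re.sub(pattern, " ", line.lower()).split())
        fileout.write(cleaned + "\n")
```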
2015/08/04
[ "https://Stackoverflow.com/questions/31800998", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4195053/" ]
To remove `&amp` from string you can use [html\_entity\_decode](http://php.net/manual/en/function.html-entity-decode.php) ``` while ($row = mysql_fetch_array($result)) { $row['value'] = html_entity_decode($row['value']); $row['id'] = (int) $row['client_id']; $row_set[] = $row; } ```
Change `htmlentities` to `html_entity_decode()`. So the **final code will** be ``` $term = trim(strip_tags($_GET['term'])); $term = str_replace(' ', '%', $term); $qstring = "SELECT name as value, client_id FROM goa WHERE name LIKE '" . $term . "%' limit 0,5000"; $result = mysql_query($qstring); $qcount = 0; if ($result) { while ($row = mysql_fetch_array($result)) { $row['value'] = html_entity_decode(stripslashes($row['value'])); // changed line $row['id'] = (int) $row['client_id']; $row_set[] = $row; // build an array $qcount = $qcount + 1; } } echo json_encode($row_set); // format the array into json data ``` [`html_entity_decode()` example in W3Schools](http://www.w3schools.com/php/func_string_html_entity_decode.asp)
5,773
58,926,146
I trained a model with RBF kernel-based support vector machine regression. I want to know the features that are very important or major contributing features for the RBF kernel-based support vector machine. I know there is a method to know the most contributing features for linear support vector regression based on weight vectors which are the size of the vectors. However, for the RBF kernel-based support vector machine, since the features are transformed into a new space, I have no clue how to extract the most contributing features. I am using scikit-learn in python. Is there a way to extract the most contributing features in RBF kernel-based support vector regression or non-linear support vector regression? ``` from sklearn import svm svm = svm.SVC(gamma=0.001, C=100., kernel = 'linear') ``` In this case: [Determining the most contributing features for SVM classifier in sklearn](https://stackoverflow.com/questions/41592661/determining-the-most-contributing-features-for-svm-classifier-in-sklearn) does work very well. However, if the kernel is changed in to ``` from sklearn import svm svm = svm.SVC(gamma=0.001, C=100., kernel = 'rbf') ``` The above answer doesn't work.
2019/11/19
[ "https://Stackoverflow.com/questions/58926146", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8739662/" ]
Let me summarize the comments as an answer. As you can read [here](https://stackoverflow.com/questions/52640386/how-do-i-solve-the-future-warning-min-groups-self-n-splits-warning-in): > Weights assigned to the features (coefficients in the primal > problem). This is only available in the case of a linear kernel. But it also wouldn't make sense otherwise. In a linear SVM the resulting separating plane is in the same space as your input features, so its coefficients can be viewed as weights of the input's "dimensions". With other kernels, the separating plane exists in another space, a result of the kernel transformation of the original space; its coefficients are not directly related to the input space. In fact, for the rbf kernel the transformed space is infinite-dimensional. As mentioned in the comments, things you can do: Play with the features (leave some out) and see how the accuracy changes; this will give you an idea of which features are important. If you use another classifier such as a random forest, you will get feature importances, but for that other algorithm, which is not necessarily what matters for your SVM. So this does not necessarily answer your question.
In relation to the inspection of non-linear SVM models (e.g. using the RBF kernel), here I share an answer posted in another thread which might be useful for this purpose. The method is based on "[sklearn.inspection.permutation\_importance](https://stackoverflow.com/a/67910281/13670156)". And here, a comprehensive discussion about the significance of ["permutation\_importance" applied on SVM models](http://rasbt.github.io/mlxtend/user_guide/evaluate/feature_importance_permutation/).
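For completeness, a minimal sketch of that approach with an RBF SVM; this assumes scikit-learn >= 0.22 and a feature matrix `X` / labels `y` already defined:

```python
from sklearn.svm import SVC
from sklearn.inspection import permutation_importance

model = SVC(kernel='rbf', gamma=0.001, C=100.).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print(result.importances_mean)  # one score per original input feature
```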
5,774
42,149,079
I've managed to install pymol on windows following the instructions [here](https://stackoverflow.com/questions/27885397/how-do-i-install-a-python-package-with-a-whl-file) and using the file Pmw‑2.0.1‑py2‑none‑any.whl from [here](http://www.lfd.uci.edu/~gohlke/pythonlibs/#pymol) Various folders have appeared in `C:\Users\Python27\Lib\site-packages` (`Pmw` and `Pmw-2.0.1.dist-info`). However, I can't actually work out how to run pymol. It used to be provided as a .exe format which could just be run in the usual way for windows applications. The folders that have installed just contain lots of python scripts, but I can't find anything which actually launches the programme.
2017/02/09
[ "https://Stackoverflow.com/questions/42149079", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2923519/" ]
Try changing ``` lastRow44 = Cells(Rows.Count, "A").End(xlUp).Row LastRow3 = Worksheets("Temp").Cells(Rows.Count, "A").End(xlUp).Offset(1, 0).Row ``` to ``` lastRow44 = Sheets("Temp").Cells(Rows.Count, 1).End(xlUp).Row LastRow3 = Worksheets("Temp").Cells(Rows.Count, 1).End(xlUp).Offset(1, 0).Row ``` Also, I am not sure what you are trying to accomplish with ``` Range("A" & LastRow3).End(xlDown).Offset(0, 11).Formula = _ "=Sum(("M" & LastRow3).End(xlDown).Offset(0, 11) & lastRow44 & ")" ``` What your formula is doing is first setting to the lastrow that you defined, and then searching downward (as if you hit CTRL + down-arrow). If this is not what you intend, try removing the ".END(xlDown" portion of both. Lastly, if you know you are using an offset of 11, why not set it to use "M" instead of A, and simply not offset?
How about something like this: ``` lastRow44 = Cells(Rows.Count, "A").End(xlUp).Row For x = 50 To LastRow3 Range("A" & x).Formula = "=SUM(M" & x & ":M" & lastRow44 & ")" Next x ```
5,775
40,687,397
I am trying to update my chromedriver.exe file as outlined here. [Python selenium webdriver "Session not created" exception when opening Chrome](https://stackoverflow.com/questions/40373801/python-selenium-webdriver-session-not-created-exception-when-opening-chrome) The problem is, I do not know the location of the old chromedriver on my Windows machine, and therefore can't update. Any help is appreciated!
2016/11/18
[ "https://Stackoverflow.com/questions/40687397", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5960274/" ]
If you don't want to expose your constructors for some reason, you can easily hide them behind a factory method based on templates and perfect forwarding: ``` class Foo { // defined somewhere Foo( Param1, Param2 ); Foo( Param1, Param3, Param4 ); Foo( Param1, Param4 ); Foo( Param1, Param2, Param4 ); public: template<typename... Args> static auto factory(Args&&... args) { Foo foo{std::forward<Args>(args)...}; // do whatever you want here return foo; } }; ``` No need to throw anything at runtime. If a constructor that accepts those parameters doesn't exist, you'll receive a compile-time error. --- Otherwise, another idiomatic way of doing that is by using [named constructors](https://en.m.wikibooks.org/wiki/More_C%2B%2B_Idioms/Named_Constructor). I copy-and-paste directly the example from the link above: ``` class Game { public: static Game createSinglePlayerGame() { return Game(0); } static Game createMultiPlayerGame() { return Game(1); } protected: Game (int game_type); }; ``` Not sure this fits your requirements anyway. --- That said, think about what the benefit is of doing this: ``` CreateFoo({ Param1V, Param3V }); ``` Or even worse, this: ``` FooParams params{ Param1V, Param3V }; CreateFoo(params); ``` Instead of this: ``` new Foo{Param1V, Param3V}; ``` By introducing an intermediate class you are not actually helping the users of your class. They still have to remember what the required params are for each specific case.
As a user, I prefer

```
Foo* CreateFoo(Param1* P1, Param2* P2, Param3* P3, Param4* P4);
```

Why should I construct a `struct` just to pass some (maybe NULL) parameters?
5,776
72,173,142
I need to create a formula that, when dragged down, jumps a certain predefined number of cells. For example, I have this column: [![enter image description here](https://i.stack.imgur.com/y7c78.png)](https://i.stack.imgur.com/y7c78.png) However, I want a formula that, when I drag it down, jumps 6 rows, something like =A(1+6) in the second row and so on, so it ends up looking like this: [![enter image description here](https://i.stack.imgur.com/NOfCg.png)](https://i.stack.imgur.com/NOfCg.png) Is there a "pythonic" way to do that, or do I need to create some regexextract in a new column + query formula getting only non blank cells? Example sheet in this link: <https://docs.google.com/spreadsheets/d/1RYzX31i8sBFROwFrQGql_eZ6tPu69KDesqzQ3hSj028/edit#gid=0>
2022/05/09
[ "https://Stackoverflow.com/questions/72173142", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5606352/" ]
Try this in B2 and drag it down:

```
=offset($A$1;5*row(A2)-10;)
```
try instead: ``` =QUERY(A1:A; "skipping 5"; 0) ``` [![enter image description here](https://i.stack.imgur.com/09MiK.png)](https://i.stack.imgur.com/09MiK.png)
5,777
41,131,038
Given an interactive python script

```
#!/usr/bin/python
import sys
name = raw_input("Please enter your name: ")
age = raw_input("Please enter your age: ")
print("Happy %s.th birthday %s!" % (age, name))
while 1:
    r = raw_input("q for quit: ")
    if r == "q":
        sys.exit()
```

I want to interact with it from an expect script

```
#!/usr/bin/expect -f
set timeout 3
puts "example to interact"
spawn python app.py
expect {
    "name: " { send "jani\r"; }
    "age: " { send "12\r"; }
    "quit: " { send "q\r"; }
}
puts "bye"
```

The expect script does not seem to interact with the python application; it just runs over it. Is the problem with the python code or with the expect code?
2016/12/13
[ "https://Stackoverflow.com/questions/41131038", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1922202/" ]
Storing an unsigned integer straight *in* a pointer portably isn't allowed, but you can:

* do the reverse: you can store your pointer in an unsigned integer; specifically, `uintptr_t` is explicitly guaranteed by the standard to be big enough to let pointers survive the roundtrip;
* use a `union`:

```
union NodePtr {
    Octree *child;
    uint32_t value;
};
```

here `child` and `value` share the same memory location, and you are allowed to read only from the one where you last wrote; when you are in a terminal node you use `value`, otherwise use `child`.
Well, you can store an int as a pointer with casts:

```
uintptr_t i = 123;
Octree* ptr = reinterpret_cast<Octree*>(i);
uintptr_t ii = reinterpret_cast<uintptr_t>(ptr); // uintptr_t is wide enough to hold a pointer; uint32_t is not on 64-bit platforms
std::cout << ii << std::endl; //Prints 123
```

But if you do it this way, I can't see how you can detect whether a given Octree\* actually stores data and is not a pointer to another Octree.
5,778
12,569,356
I'm a real beginner with Python classes and JSON and I'm not sure I'm going in the right direction. Basically, I have a web service that accepts a JSON request in a POST body like this:

```
{
    "item" : {
        "thing" : "foo",
        "flag" : true,
        "language" : "en_us"
    },
    "numresults" : 3
}
```

I started going down the route of creating a class for "item" like this:

```
class Item(object):
    def __init__(self):
        self.name = "item"

    @property
    def thing(self):
        return self._thing

    @thing.setter
    def thing(self, value):
        self._thing = value
    ...
```

So, my questions are:

1. Am I going in the right direction?
2. How do I turn the Python object into a JSON string?

I've found a lot of information about JSON in python, and I've looked at jsonpickle, but I can't seem to create a class that ends up outputting the nested dictionaries needed. EDIT: Thanks to Joran's suggestion, I stuck with a class using properties and added a method like this:

```
def jsonify(self):
    return json.dumps({ "item" : self.__dict__ }, indent=4)
```

and that worked perfectly. Thanks everyone for your help.
2012/09/24
[ "https://Stackoverflow.com/questions/12569356", "https://Stackoverflow.com", "https://Stackoverflow.com/users/476638/" ]
just add one method to your class that returns a dictionary

```
def jsonify(self):
    return {
        'Class Whatever': {
            'data1': self.data1,
            'data2': self.data2,
            ...
        }
    }
```

and call your to-JSON function (e.g. `json.dumps`) on the result, or call it before your `return` so the method itself hands back a JSON string.
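A minimal end-to-end sketch of that idea (the class and attribute names below are illustrative, not taken from the original post):

```python
import json

class Item(object):
    def __init__(self, thing, flag, language):
        self.thing = thing
        self.flag = flag
        self.language = language

    def jsonify(self):
        # nest the instance attributes under the "item" key
        return {"item": self.__dict__}

item = Item("foo", True, "en_us")
print(json.dumps(item.jsonify(), indent=4))
```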
Take a look at the [`colander` project](http://docs.pylonsproject.org/projects/colander/en/latest/); it lets you define an object-oriented 'schema' that is easily serializable to and from JSON.

```
import colander

class Item(colander.MappingSchema):
    thing = colander.SchemaNode(colander.String(),
                                validator=colander.OneOf(['foo', 'bar']))
    flag = colander.SchemaNode(colander.Boolean())
    language = colander.SchemaNode(colander.String(),
                                   validator=colander.OneOf(supported_languages))

class Items(colander.SequenceSchema):
    item = Item()
```

Then load these from JSON:

```
items = Items().deserialize(json.loads(jsondata))
```

and `colander` validates the data for you, returning a set of python objects that can then be acted upon. Alternatively, you'd have to create specific per-object handling to be able to turn Python objects into JSON structures and vice versa.
5,779
6,800,280
This is a follow-up to this previous question: [Complicated COUNT query in MySQL](https://stackoverflow.com/questions/6580684/complicated-count-query-in-mysql). None of the answers worked under all conditions, and I have had trouble figuring out a solution as well. I will be awarding a 75 point bounty to the first person that provides a fully correct answer (I will award the bounty as soon as it is available, and as reference I've done this before: [Improving Python/django view code](https://stackoverflow.com/questions/6245755/improving-python-django-view-code)). I want to get the count of video credits a user has and not allow duplicates (i.e., for every video a user can be credited in it 0 or 1 times. I want to find three counts: the number of videos a user has uploaded (easy) -- `Uploads`; the number of videos credited in from videos not uploaded by the user -- `Credited_by_others`; and the total number of videos a user has been credited in -- `Total_credits`. I have three tables: ``` CREATE TABLE `userprofile_userprofile` ( `id` int(11) NOT NULL AUTO_INCREMENT, `full_name` varchar(100) NOT NULL, ... ) CREATE TABLE `videos_video` ( `id` int(11) NOT NULL AUTO_INCREMENT, `title` int(11) NOT NULL, `uploaded_by_id` int(11) NOT NULL, ... KEY `userprofile_video_e43a31e7` (`uploaded_by_id`), CONSTRAINT `uploaded_by_id_refs_id_492ba9396be0968c` FOREIGN KEY (`uploaded_by_id`) REFERENCES `userprofile_userprofile` (`id`) ) ``` **Note that the `uploaded_by_id` is the same as the `userprofile.id`** ``` CREATE TABLE `videos_videocredit` ( `id` int(11) NOT NULL AUTO_INCREMENT, `video_id` int(11) NOT NULL, `profile_id` int(11) DEFAULT NULL, `position` int(11) NOT NULL ... KEY `videos_videocredit_fa26288c` (`video_id`), KEY `videos_videocredit_141c6eec` (`profile_id`), CONSTRAINT `profile_id_refs_id_31fc4a6405dffd9f` FOREIGN KEY (`profile_id`) REFERENCES `userprofile_userprofile` (`id`), CONSTRAINT `video_id_refs_id_4dcff2eeed362a80` FOREIGN KEY (`video_id`) REFERENCES `videos_video` (`id`) ) ``` Here is a step-by-step to illustrate: 1) create 2 users: ``` insert into userprofile_userprofile (id, full_name) values (1, 'John Smith'); insert into userprofile_userprofile (id, full_name) values (2, 'Jane Doe'); ``` 2) a user uploads a video. He does not yet credit anyone -- including himself -- in it. ``` insert into videos_video (id, title, uploaded_by_id) values (1, 'Hamlet', 1); ``` The result should be as follows: ``` **User** **Uploads** **Credited_by_others** **Total_credits** John Smith 1 0 1 Jane Doe 0 0 0 ``` 3) the user who uploaded the video now credits himself in the video. Note this should not change anything, since the user has already received a credit for uploading the film and I am not allowing duplicate credits: ``` insert into videos_videocredit (id, video_id, profile_id, position) values (1, 1, 1, 'director') ``` The result should now be as follows: ``` **User** **Uploads** **Credited_by_others** **Total_credits** John Smith 1 0 1 Jane Doe 0 0 0 ``` 4) The user now credits himself two more times in the same video (i.e., he has had multiple 'positions' in the video). 
In addition, he credits Jane Doe three times for that video: ``` insert into videos_videocredit (id, video_id, profile_id, position) values (2, 1, 1, 'writer') insert into videos_videocredit (id, video_id, profile_id, position) values (3, 1, 1, 'producer') insert into videos_videocredit (id, video_id, profile_id, position) values (4, 1, 2, 'director') insert into videos_videocredit (id, video_id, profile_id, position) values (5, 1, 2, 'editor') insert into videos_videocredit (id, video_id, profile_id, position) values (6, 1, 2, 'decorator') ``` The result should now be as follows: ``` **User** **Uploads** **Credited_by_others** **Total_credits** John Smith 1 0 1 Jane Doe 0 1 1 ``` 5) Jane Doe now uploads a video. She does not credit herself, but credits John Smith twice in the video: ``` insert into videos_video (id, title, uploaded_by_id) values (2, 'Othello', 2) insert into videos_videocredit (id, video_id, profile_id, position) values (7, 2, 1, 'writer') insert into videos_videocredit (id, video_id, profile_id, position) values (8, 2, 1, 'producer') ``` The result should now be as follows: ``` **User** **Uploads** **Credited_by_others** **Total_credits** John Smith 1 1 2 Jane Doe 1 1 2 ``` So, I would like to find those three fields for each user -- `Uploads`, `Credited_by_others`, and `Total_credits`. Data should never be Null, but instead be 0 when the field has no count. Thank you.
2011/07/23
[ "https://Stackoverflow.com/questions/6800280", "https://Stackoverflow.com", "https://Stackoverflow.com/users/651174/" ]
Couldn't there be a problem with `<Files .*>`? I think this is a wildcard pattern, so you should use just `<Files *>`.
The order of your htaccess rules should be...

```
RewriteEngine On

<Files .*>
    Order allow,deny
    Allow from all
</Files>

Options FollowSymLinks

RewriteRule ^photos.+$ thumbs.php [L,QSA]
RewriteRule ^[a-zA-Z0-9\-_]*$ index.php [L,QSA]
RewriteRule ^[a-zA-Z0-9\-_]+\.html$ index.php [L,QSA]
```
5,782
44,732,839
I am trying to process txt file using pandas. However, I get following error at read\_csv > > CParserError Traceback (most recent call > last) in () > 22 Col.append(elm) > 23 > ---> 24 revised=pd.read\_csv(Path+file,skiprows=Header+1,header=None,delim\_whitespace=True) > 25 > 26 TimeSeries.append(revised) > > > C:\Users\obakatsu\Anaconda3\lib\site-packages\pandas\io\parsers.py in > parser\_f(filepath\_or\_buffer, sep, delimiter, header, names, index\_col, > usecols, squeeze, prefix, mangle\_dupe\_cols, dtype, engine, converters, > true\_values, false\_values, skipinitialspace, skiprows, skipfooter, > nrows, na\_values, keep\_default\_na, na\_filter, verbose, > skip\_blank\_lines, parse\_dates, infer\_datetime\_format, keep\_date\_col, > date\_parser, dayfirst, iterator, chunksize, compression, thousands, > decimal, lineterminator, quotechar, quoting, escapechar, comment, > encoding, dialect, tupleize\_cols, error\_bad\_lines, warn\_bad\_lines, > skip\_footer, doublequote, delim\_whitespace, as\_recarray, compact\_ints, > use\_unsigned, low\_memory, buffer\_lines, memory\_map, float\_precision) > 560 skip\_blank\_lines=skip\_blank\_lines) > 561 > --> 562 return \_read(filepath\_or\_buffer, kwds) > 563 > 564 parser\_f.**name** = name > > > C:\Users\obakatsu\Anaconda3\lib\site-packages\pandas\io\parsers.py in > \_read(filepath\_or\_buffer, kwds) > 323 return parser > 324 > --> 325 return parser.read() > 326 > 327 \_parser\_defaults = { > > > C:\Users\obakatsu\Anaconda3\lib\site-packages\pandas\io\parsers.py in > read(self, nrows) > 813 raise ValueError('skip\_footer not supported for iteration') > 814 > --> 815 ret = self.\_engine.read(nrows) > 816 > 817 if self.options.get('as\_recarray'): > > > C:\Users\obakatsu\Anaconda3\lib\site-packages\pandas\io\parsers.py in > read(self, nrows) 1312 def read(self, nrows=None): 1313 > > try: > -> 1314 data = self.\_reader.read(nrows) 1315 except StopIteration: 1316 if self.\_first\_chunk: > > > pandas\parser.pyx in pandas.parser.TextReader.read > (pandas\parser.c:8748)() > > > pandas\parser.pyx in pandas.parser.TextReader.\_read\_low\_memory > (pandas\parser.c:9003)() > > > pandas\parser.pyx in pandas.parser.TextReader.\_read\_rows > (pandas\parser.c:9731)() > > > pandas\parser.pyx in pandas.parser.TextReader.\_tokenize\_rows > (pandas\parser.c:9602)() > > > pandas\parser.pyx in pandas.parser.raise\_parser\_error > (pandas\parser.c:23325)() > > > CParserError: Error tokenizing data. C error: Expected 4 fields in > line 6, saw 8 > > > Does anyone know how I can fix this problem? My python script and example txt file I want to process is shown below. ``` Path='data/NanFung/OCTA_Tower/test/' files=os.listdir(Path) TimeSeries=[] Cols=[] for file in files: new=open(Path+file) Supplement=[] Col=[] data=[] Header=0 #calculate how many rows should be skipped for line in new: if line.startswith('Timestamp'): new1=line.split(" ") new1[-1]=str(file)[:-4] break else: Header += 1 #clean col name for elm in new1: if len(elm)>0: Col.append(elm) revised=pd.read_csv(Path+file,skiprows=Header+1,header=None,delim_whitespace=True) TimeSeries.append(revised) Cols.append(Col) ``` txt file ``` history:/NIKL6215_ENC_1/CH$2d19$2d1$20$20CHW$20OUTLET$20TEMP 20-Oct-12 8:00 PM CT to ? Timestamp Trend Flags Status Value (ΒΊC) ------------------------- ----------- ------ ---------- 20-Oct-12 8:00:00 PM HKT {start} {ok} 15.310 ΒΊC 21-Oct-12 12:00:00 AM HKT { } {ok} 15.130 ΒΊC ```
2017/06/24
[ "https://Stackoverflow.com/questions/44732839", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7124344/" ]
It fails because the part of the file you're reading looks like this: ``` Timestamp Trend Flags Status Value (ΒΊC) ------------------------- ----------- ------ ---------- 20-Oct-12 8:00:00 PM HKT {start} {ok} 15.310 ΒΊC 21-Oct-12 12:00:00 AM HKT { } {ok} 15.130 ΒΊC ``` But there are no consistent delimiters here. `read_csv` does not understand how to read fixed-width formats like yours. You might consider using a delimited file, such as with tab characters between the columns.
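As a side note, pandas also ships `read_fwf` for exactly this kind of fixed-width layout; a hedged sketch (the path and the number of header rows to skip are placeholders, not taken from the question):

```python
import pandas as pd

# read_fwf infers the fixed-width column boundaries from the data by default
revised = pd.read_fwf('data/example.txt',  # placeholder path
                      skiprows=4,          # placeholder: skip the header block
                      header=None)
```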
Include this line before the `read_csv` call:

```
file_name = Path+file
```

then change

> revised=pd.read\_csv(Path+file,skiprows=Header+1,header=None,delim\_whitespace=True)

to

> revised=pd.read\_csv(file\_name,skiprows=Header+1,header=None,sep=" ")
5,784
27,647,922
I'm working on my python script as I'm created a list to stored the elements in the arrays. I have got a problem with the if statement. I'm trying to find the elements if I have the values `375` but it won't let me to get pass on the if statement. Here is the code: ``` program_X = list() #create the rows to count for 69 program buttons for elem in programs_button: program_width.append(elem.getWidth()) program_X.append(elem.getX()) program_X = map(str, program_X) #get the list of position_X for all buttons for pos_X in programs_X: #find the position with 375 if pos_X == 375: print pos_X ``` Here is the list of elements that I use to print from the arrays: ``` 14:08:55 T:1260 NOTICE: 375 14:08:55 T:1260 NOTICE: 724.06 14:08:55 T:1260 NOTICE: 1610.21 14:08:55 T:1260 NOTICE: 2496.39 14:08:55 T:1260 NOTICE: 2845.45 14:08:55 T:1260 NOTICE: 3194.51 14:08:55 T:1260 NOTICE: 3543.57 14:08:55 T:1260 NOTICE: 3892.63 14:08:55 T:1260 NOTICE: 4241.69 14:08:55 T:1260 NOTICE: 4590.75 14:08:55 T:1260 NOTICE: 4939.81 14:08:55 T:1260 NOTICE: 5288.87 14:08:55 T:1260 NOTICE: 5637.93 14:08:55 T:1260 NOTICE: 5986.99 14:08:55 T:1260 NOTICE: 6336.05 14:08:55 T:1260 NOTICE: 6685.11 14:08:55 T:1260 NOTICE: 7034.17 14:08:55 T:1260 NOTICE: 7383.23 14:08:55 T:1260 NOTICE: 7732.29 14:08:55 T:1260 NOTICE: 8081.35 14:08:55 T:1260 NOTICE: 8430.41 14:08:55 T:1260 NOTICE: 8779.47 14:08:55 T:1260 NOTICE: 9665.59 14:08:55 T:1260 NOTICE: 10014.65 14:08:55 T:1260 NOTICE: 10363.71 14:08:55 T:1260 NOTICE: 10712.77 14:08:55 T:1260 NOTICE: 11061.83 14:08:55 T:1260 NOTICE: 11410.89 14:08:55 T:1260 NOTICE: 11759.95 14:08:55 T:1260 NOTICE: 12109.01 14:08:55 T:1260 NOTICE: 12458.07 14:08:55 T:1260 NOTICE: 12807.13 14:08:55 T:1260 NOTICE: 13156.19 14:08:55 T:1260 NOTICE: 13505.25 14:08:55 T:1260 NOTICE: 13854.31 14:08:55 T:1260 NOTICE: 14203.37 14:08:55 T:1260 NOTICE: 14552.43 14:08:55 T:1260 NOTICE: 14901.49 14:08:55 T:1260 NOTICE: 15250.55 14:08:55 T:1260 NOTICE: 15599.61 14:08:55 T:1260 NOTICE: 15948.67 14:08:55 T:1260 NOTICE: 16297.73 14:08:55 T:1260 NOTICE: 17183.85 14:08:55 T:1260 NOTICE: 17532.91 14:08:55 T:1260 NOTICE: 17881.97 14:08:55 T:1260 NOTICE: 18231.03 14:08:55 T:1260 NOTICE: 18580.09 14:08:55 T:1260 NOTICE: 18929.15 14:08:55 T:1260 NOTICE: 19278.21 14:08:55 T:1260 NOTICE: 19627.27 14:08:55 T:1260 NOTICE: 19976.33 14:08:55 T:1260 NOTICE: 20325.39 14:08:55 T:1260 NOTICE: 20674.45 14:08:55 T:1260 NOTICE: 21023.51 14:08:55 T:1260 NOTICE: 21372.57 14:08:55 T:1260 NOTICE: 21721.63 14:08:55 T:1260 NOTICE: 22070.69 14:08:55 T:1260 NOTICE: 22419.75 14:08:55 T:1260 NOTICE: 22768.81 14:08:55 T:1260 NOTICE: 23117.87 14:08:55 T:1260 NOTICE: 23466.93 14:08:55 T:1260 NOTICE: 24353.05 14:08:55 T:1260 NOTICE: 24702.11 14:08:55 T:1260 NOTICE: 25051.17 14:08:55 T:1260 NOTICE: 25400.23 14:08:55 T:1260 NOTICE: 25749.29 14:08:55 T:1260 NOTICE: 26098.35 14:08:55 T:1260 NOTICE: 26447.41 14:08:55 T:1260 NOTICE: 26796.47 14:08:55 T:1260 NOTICE: 375 14:08:55 T:1260 NOTICE: 724.06 14:08:55 T:1260 NOTICE: 1610.21 14:08:55 T:1260 NOTICE: 1959.27 14:08:55 T:1260 NOTICE: 2308.33 14:08:55 T:1260 NOTICE: 3194.45 14:08:55 T:1260 NOTICE: 3543.51 14:08:55 T:1260 NOTICE: 4241.6 14:08:55 T:1260 NOTICE: 4590.66 14:08:55 T:1260 NOTICE: 4939.72 14:08:55 T:1260 NOTICE: 5825.9 14:08:55 T:1260 NOTICE: 6174.96 ``` Can you please help me how I can get pass on the if statement when I'm trying to find the elements of `375`?
2014/12/25
[ "https://Stackoverflow.com/questions/27647922", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4275381/" ]
As `program_X` contains string elements:

```
program_X = map(str, program_X)
            ^
```

you need to change the following:

```
if pos_X == 375
```

to

```
if pos_X == '375'
```
If you are storing strings in the list that way,

```
program_X = ['14:08:55 T:1260 NOTICE:   8081.35', ...]
```

then use the `in` keyword to check for the substring:

```
for pos_X in program_X:
    #find the position with 375
    if '375' in pos_X:
        print pos_X
```
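Alternatively, if the goal is really a numeric check, it may be simpler to skip the `map(str, ...)` step and compare the original values as numbers; a sketch (assuming `program_X` still holds floats):

```python
for pos_X in program_X:
    # compare floats with a tolerance rather than exact equality
    if abs(pos_X - 375) < 1e-6:
        print(pos_X)
```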
5,785
45,692,894
Problem
-------

Some recurring events that don't really end at some point (like club meetings?) depend on other conditions (like holiday season). However, manually adding these exceptions would be necessary every year, as the dates might differ.

**Research**

* I have found out about `exdate` (see the image of ["iCalendar components and their properties"](https://en.wikipedia.org/wiki/ICalendar) on Wikipedia [(2)](http://www.kanzaki.com/docs/ical/exdate.html))
* Also found a possible workaround: 'just writing a [script](https://stackoverflow.com/questions/3408097/parsing-files-ics-icalendar-using-python) to process such events'. This would still mean I need to process a `.ics` manually and import it into my calendar, which implies some limitations:
	+ it cannot be determined for all time spans (e.g. holidays not fixed for more than three years)
	+ these events would probably be separate and not recurring/'grouped', which makes further edits harder

Question
--------

> Is there a way to specify recurring exceptions in iCal?

* To clarify, I have a recurring event *and* recurring exceptions.
* So for instance I have an *infinitely recurring weekly* event that depends on the month, where it might only take place *if it's not* e.g. January, August, or December.

> Is there a way to use another event (/calendar) to filter events by boolean logic?

If one could use a second event (or several) to plug into `exdate`, this would solve the first problem and add some more possibilities.

---

**note** if this question is too specific and the original problem could be solved by other means (other calendar formats), feel free to comment/edit/answer
2017/08/15
[ "https://Stackoverflow.com/questions/45692894", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4550784/" ]
[RFC2445 defines an `EXRULE`](https://www.rfc-editor.org/rfc/rfc2445#section-4.8.5.2) (exception rule) property. You can use that in addition to the `RRULE` to define recurring exceptions. However, RFC2445 was superseded by [RFC5545, which unfortunately deprecates the `EXRULE`](https://www.rfc-editor.org/rfc/rfc5545#appendix-A.3) property. So, client support is questionable. As you already proposed, automatically adding `EXDATE` properties is a possible solution.
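Since client support for `EXRULE` is shaky, a common fallback is exactly what you proposed: computing the exceptions with a script and emitting them as `EXDATE` values. A minimal sketch with python-dateutil (the weekly-Monday rule, the date range, and the excluded months are illustrative assumptions, not from the question):

```python
from datetime import datetime
from dateutil.rrule import rrule, rruleset, WEEKLY, MO

rules = rruleset()
# the weekly meeting itself (bounded here so the example terminates)
rules.rrule(rrule(WEEKLY, byweekday=MO,
                  dtstart=datetime(2017, 1, 2), until=datetime(2018, 1, 1)))
# the recurring exception: no meetings in January, August, December
rules.exrule(rrule(WEEKLY, byweekday=MO, bymonth=(1, 8, 12),
                   dtstart=datetime(2017, 1, 2), until=datetime(2018, 1, 1)))

# surviving occurrences; each excluded date could be emitted as an EXDATE line
for occurrence in list(rules)[:5]:
    print(occurrence)
```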
`BYMONTH` would be another possibility, e.g. here's a rule for a club meeting that occurs the first Wednesday of every month except December (which is their Christmas party, so no business meeting) ``` RRULE:FREQ=MONTHLY;BYDAY=1WE;BYMONTH=1,2,3,4,5,6,7,8,9,10,11 ```
5,786
42,519,094
I am trying to start a Python 3.6 project by creating a virtualenv to keep the dependencies. I currently have both Python 2.7 and 3.6 installed on my machine, as I have been coding in 2.7 up until now and I wish to try out 3.6. I am running into a problem with the different versions of Python not detecting modules I am installing inside the virtualenv. For example, I create a virtualenv with the command: `virtualenv venv` I then activate the virtualenv and install Django with the command: `pip install django` My problems arise when I activate either Python 2.7 or 3.6 with the commands `py -2` or `py -3`: neither of the interactive shells detects Django as being installed. Django is only detected when I run the `python` command, which defaults to 2.7 when I want to use 3.6. Does anyone know a possible fix for this so I can get my virtualenv working correctly? Thanks! If it matters at all, I am on a machine running Windows 7.
2017/02/28
[ "https://Stackoverflow.com/questions/42519094", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4588188/" ]
You have to select the interpreter when you create the virtualenv.

```
virtualenv --python=PYTHON36_EXE my_venv
```

Substitute the path to your Python 3.6 installation in place of `PYTHON36_EXE`. Then, after you've activated it, the `python` executable will be bound to 3.6 and you can just `pip install Django` as usual.
The key is that `pip` installs things for a specific version of Python, and to a very specific location. Basically, the `pip` command in your virtual environment is set up specifically for the interpreter that your virtual environment is using. So even if you explicitly call another interpreter with that environment activated, it will not pick up the packages `pip` installed for the default interpreter.
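To see this concretely, you can ask the interpreter itself which executable and packages it is using; a small diagnostic sketch (nothing here is specific to Django except the import):

```python
import sys
print(sys.executable)   # path of the interpreter that is actually running

import django           # raises ImportError outside the env where pip installed it
print(django.__file__)  # shows which site-packages the module came from
```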
5,787
21,426,329
I followed mbrochh's instructions <https://github.com/mbrochh/vim-as-a-python-ide> to build my vim as a python IDE. But things go wrong when opening vim after I put `jedi-vim` into `~/.vim/bundle`. The following are the warnings:

```
Error detected while processing CursorMovedI Auto commands for "buffer=1":
Traceback (most recent call last)
Error detected while processing CursorMovedI Auto commands for "buffer=1":
File "string", line 1, in module
Error detected while processing CursorMovedI Auto commands for "buffer=1":
NameError: name 'jedi_vim' is not defined
```

I hope someone can figure out the problem, and thanks for your help.
2014/01/29
[ "https://Stackoverflow.com/questions/21426329", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1263069/" ]
If you’re trying to use Vundle to install the jedi-vim plugin, I don’t think you should have to place it under `~/.vim/bundle`. Instead, make sure you have Vundle set up correctly, as [described in its β€œQuick start”](https://github.com/gmarik/vundle#quick-start), and then try adding this line to your `~/.vimrc` after the lines where Vundle is set up: ``` Plugin 'davidhalter/jedi-vim' ``` Then run `:PluginInstall` and the plugin should be installed.
Make sure that you have installed jedi; I solved my problem with the commands below:

```
cd ~/.vim/bundle/jedi-vim
git submodule update --init
```
5,789
47,972,811
I am on CentOS 7. I installed tk, tk-devel, and tkinter through yum. I can import tkinter in Python 3, but not in Python 2.7. Any ideas? Success in Python 3 (Anaconda):

```
Python 3.6.3 |Anaconda custom (64-bit)| (default, Oct 13 2017, 12:02:49)
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tkinter
>>>
```

But it fails on Python 2.7 (the CentOS default):

```
Python 2.7.5 (default, Aug 4 2017, 00:39:18)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-16)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import Tkinter
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib64/python2.7/lib-tk/Tkinter.py", line 39, in <module>
    import _tkinter # If this fails your Python may not be configured for Tk
ImportError: libTix.so: cannot open shared object file: No such file or directory
```

I read some answers that said

> If it fails with "No module named \_tkinter", your Python configuration needs to be modified to include this module (which is an extension module implemented in C). Do not edit Modules/Setup (it is out of date). You may have to install Tcl and Tk (when using RPM, install the -devel RPMs as well) and/or edit the setup.py script to point to the right locations where Tcl/Tk is installed. If you install Tcl/Tk in the default locations, simply rerunning "make" should build the \_tkinter extension.

I have reinstalled tk, tk-devel and tkinter through yum, but the problem is the same. How can I configure it to work on Python 2.7?
2017/12/25
[ "https://Stackoverflow.com/questions/47972811", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9139945/" ]
For python 3 use:

```
import tkinter
```

For python 2 use:

```
import Tkinter
```

If these do not work, install tkinter first; for python 3:

```
sudo apt-get install python3-tk
```

or, for python 2:

```
sudo apt-get install python-tk
```

You can find more details [here](https://www.techinfected.net/2015/09/how-to-install-and-use-tkinter-in-ubuntu-debian-linux-mint.html)
For python 2.7, try

```
import Tkinter
```

with a capital T. It should already be pre-installed in the default CentOS 7 Python setup; if not, do `yum install tkinter`.
5,794
17,273,393
In my python code I have a global `requests.session` instance:

```
import requests
session = requests.session()
```

How can I mock it with `Mock`? Is there any decorator for this kind of operation? I tried the following:

```
session.get = mock.Mock(side_effect=self.side_effects)
```

but (as expected) this code doesn't return `session.get` to its original state after each test, like the `@mock.patch` decorator does.
2013/06/24
[ "https://Stackoverflow.com/questions/17273393", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1325846/" ]
Use `mock.patch` to patch session in your module. Here you go, a complete working example <https://gist.github.com/k-bx/5861641>
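In case the gist link rots, here is a minimal self-contained sketch of the same idea; `mymodule` is a hypothetical stand-in for the module that defines the global session:

```python
import mock  # or: from unittest import mock

import mymodule  # hypothetical module containing: session = requests.session()

def test_get_is_mocked():
    with mock.patch('mymodule.session') as fake_session:
        fake_session.get.return_value.status_code = 200
        response = mymodule.session.get('http://example.com')
        assert response.status_code == 200
    # the patch is undone here, so other tests see the real session again
```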
With some inspiration from the previous answer and from [mock-attributes-in-python-mock](https://stackoverflow.com/questions/16867509/mock-attributes-in-python-mock), I was able to mock a session defined like this:

```
class MyClient(object):
    """
    """
    def __init__(self):
        self.session = requests.session()
```

with this (the call to *get* returns a response with a *status\_code* attribute set to 200):

```
def test_login_session():
    with mock.patch('path.to.requests.session') as patched_session:
        # instantiate service: Arrange
        test_client = MyClient()
        type(patched_session().get.return_value).status_code = mock.PropertyMock(return_value=200)

        # Act (+assert)
        resp = test_client.login_cookie()

        # Assert
        assert resp is None
```
5,795
68,660,419
I don't have a lot of experience with Selenium but I am trying to run a code which search for an element in HTML with chromedriver. I keep getting an error as below. The first thing I would like to confirm is that this error cannot be due to the connection with Chromedriver to the web but is because of the way the python script search in the HTML code. Any help would be appreciated. The error: ``` ('no such element: Unable to locate element: {"method":"xpath","selector":"//*[contains(text(),\'Find exited companies announced\')]/../.."}\n (Session info: headless chrome=91.0.4472.101)', None, None) ``` The code source: ``` <div id="logon-brownContent" style="width:100%;display:true;;padding-bottom: 0px;" class="hideforprinting"> <table width="" cellpadding="0" cellspacing="0" class=""> </table> </div> </div> </td></tr></table> </div> </td> <td class="homepage_mainbody-headlines"> <table class="framework_standard"> <tr> <td colspan="2" valign="top"> <form action="exitbroker.asp?" method="post" name="oz" id="oz" sumbit="javascript:return validate();"> <input type="hidden" name="verb" value="8" /> <input type="hidden" name="dateformat" value="dd/mm/yyyy" /> <input type="hidden" name="contextid" value="1032390856" /> <input type="hidden" name="statecodelength" value="0" /> <table cellspacing="0" cellpadding="0" border="0"> <tr> <td> <table> <tr> <td class="framework_page-title"> <span class="framework_page-title">PE Exit Companies: Search</span><br/> </td> </tr> </table> </td> </tr> <tr> <td height="1"><img src="/images/spacer.gif" height="13" width="1"></td> </tr> </table> <table class="criteriaborder" cellspacing="0" cellpadding="2" width="100%" border="0"> <tbody> <tr> <td> <table cellspacing="0" cellpadding="0" border="0" style="width:100%;"> <tbody> <tr> <td valign="top"> <table cellspacing="0" cellpadding="0" width="100%" border="0"> <tr> <td align="center" valign="middle" width="100%" height="18" class="criteriaheader2">Exits</td> </tr> </table> </td> </tr> <tr> <td class="criteriasectionheader"><br />Exit Types</td> </tr> <tr> <td> <table border="0" cellpadding="0" cellspacing="0"> <tr valign="top"><td width="200"><input type="checkbox" name="exitdealtype" value="ipo"/>Initial Public Offering</td><td width="200"><input type="checkbox" name="exitdealtype" value="sbo"/>Secondary Buyout</td><td width="200"><input type="checkbox" name="exitdealtype" value="tradesale"/>Trade Sale</td></tr> </table> </td> </tr> <tr> <td class="criteriasectionheader"><br />Date Range</td> </tr> <tr> <td> Find exited companies announced<br><br> </td> </tr> <tr> <td> <table cellpadding="2" cellspacing="0" border="0"> <tr> <td>From &nbsp;&nbsp;&nbsp;</td> <td><input type="text" name="datefrom" style="width:100" value=""></td> <td>&nbsp;&nbsp;&nbsp; To &nbsp;&nbsp;&nbsp;</td> <td><input type="text" name="dateto" style="width:100" value=""></td> <td>&nbsp;<a href="javascript:removeMe(document.oz.datefrom);removeMe(document.oz.dateto);">Clear Date</a></td> </tr> <tr> <td>&nbsp;</td> <td><span class="hint">(dd/mm/yyyy)</span></td> <td>&nbsp;</td> <td><span class="hint">(dd/mm/yyyy)</span></td> <td>&nbsp;</td> </tr> </table> </td> </tr> <tr> <td> <br /> Please Note: The default start date for our searches has been changed to 01/01/2005. You can still access all <br /> of our historical data by inserting the desired start date above. For help or further information please contact <br /> your Customer Relationship Consultant. 
<br /> </td> </tr> <tr> <td class="criteriasectionheader"><br />Industry</td> </tr> <tr> <td> Find exited companies in these sectors. <br />The industries defined here are affiliated with both the core business and divisions of the portfolio/exited companies. <br />Multiple select using ctrl and click. The default is set to all.<br><br> </td> </tr> <tr> <td> <table border="0" cellspacing="0" cellpadding="0"> <tr> <td><span class="criterialabel">Sectors<a href="javascript:displaySectorGlossary('../includes/glossary');"><img src="/includes/images/mm-info-icon.gif"></a></span></td> <td><span class="criterialabel">Sub-Sectors</span></td> </tr> <tr> <td><select multiple="multiple" size="6" name="sectorcode" style="width:250px" onChange="javascript:emptyListBox(document.oz.subsectorcode);fillSelect(document.oz.subsectorcode,null,buildSelectedItems(document.oz.sectorcode));"></select> </td> <td><select multiple="multiple" size="6" name="subsectorcode" style="width:250px"></select> </td> </tr> <tr> <td><a name="selectAllSubsectorLink" href="javascript:fillSelect(document.oz.subsectorcode,null,buildSelectedItems(document.oz.sectorcode));selectAll(document.oz.sectorcode);fillSelect(document.oz.subsectorcode,null,buildSelectedItems(document.oz.sectorcode));">Select All Sectors</a> </td> <td><a href="javascript:if(!document.oz.domsectoronly.checked){selectAll(document.oz.subsectorcode)};">Select All Sub-Sectors</a> </td> </tr> <tr> <td><a href="javascript:emptyListBox(document.oz.subsectorcode);deselectAll(document.oz.sectorcode);">Clear All</a><br><br></td> </tr> <tr> <td colspan="4"> <input type="hidden" name="normalsectorsearch" value="" /> <input type="hidden" name="normalsubsectorsearch" value="" /> <input type="checkbox" name="domsectoronly" value="true" onclick="javascript:deselectAll(document.oz.subsectorcode);setItemDisableStatus(document.oz.subsectorcode);setItemDisableStatus(document.oz.selectAllSubsectorLink);">Search by dominant sector only<a href="javascript:displayPEPortfolioDominantSectorCountryGlossary('../includes/glossary');"><img src="/includes/images/mm-info-icon.gif" title="More information" /> </td> </tr> </table> </td> <!-- <td><select size="6" multiple="multiple" name="sectorcode" style="width:250px" ></select> </td> </tr> <tr> <td> <a href="javascript:selectAll(document.oz.sectorcode);">Select All</a> <a href="javascript:deselectAll(document.oz.sectorcode);">Clear All</a> </td> </tr> --> </tr> <tr> <td style="TEXT-ALIGN: right;" class="search_buttons_right"> <input type="button" value="Save Search" class="framework_flatbutton" onclick="javascript:if (validatePage(document.oz)) {document.oz.verb.value=1;document.oz.target='_self';document.oz.submit();};"/> <!-- a onmouseover="style.cursor = 'hand'" onclick="javascript:if (validatePage(document.oz)) {document.oz.verb.value=28;defaultDatesWithLocale( document.oz.datefrom, document.oz.dateto, 'dd/mm/yyyy' );if (verifyDateSubSectors(document.oz.datefrom.value)) {countWindow();document.oz.target='_self';document.oz.submit();}}"><img src="/images/button_countresults.gif" border="0" /></a --> <input type="button" value="Count Results" class="framework_flatbutton" onclick="javascript:submitCount();" /> <!-- a onmouseover="style.cursor = 'hand'" onclick="javascript:if (validatePage(document.oz)) {document.oz.verb.value=8;defaultDatesWithLocale( document.oz.datefrom,document.oz.dateto, 'dd/mm/yyyy' );document.oz.target='_self';if (verifyDateSubSectors(document.oz.datefrom.value)) 
{document.oz.target='_self';document.oz.submit();}};"><img src="/images/button_search.gif" border="0" /></a --> <input type="button" value="Search" class="framework_flatbutton" onclick="javascript:if (validatePage(document.oz)) { document.oz.verb.value=8 ;document.oz.target='_self' defaultDatesWithLocale( document.oz.datefrom,document.oz.dateto, 'dd/mm/yyyy' ); ; document.oz.target='_self'; document.oz.submit(); }" /> </td> </tr> </tbody> </table> </tr> </td> </tbody> </table> <table> <tr> <td> <br> </td> </tr> </table> <table class="criteriaborder" cellspacing="0" cellpadding="2" width="100%" border="0"> <tbody> <tr> <td> <table cellspacing="0" cellpadding="0" border="0" style="width:100%;"> <tbody> <tr> <td valign="top"> <table cellspacing="0" cellpadding="0" width="100%" border="0"> <tr> <td align="center" valign="middle" width="100%" height="18" class="criteriaheader2">Further Search Criteria</td> </tr> </table> </td> </tr> <tr> <td class="criteriasectionheader"><br/>Geography</td> </tr> <tr> <td>Find exited companies in these locations. <br />Multiple select using ctrl and click. The default is set to all. </td> </tr> <tr> <td> <table border="0" cellspacing="0" cellpadding="0"> <tr> <td>&nbsp;</td> <td>&nbsp;</td> <td>&nbsp;</td> <td><img src="/images/spacer.gif" width="10" height="1" alt="" /></td><td>&nbsp;</td> </tr> <tr> <td><select multiple="multiple" size="6" name="areacode" style="width:200px" onChange="javascript:emptyListBox(document.oz.regioncode);emptyListBox(document.oz.countrycode);fillSelect(document.oz.regioncode,null,buildSelectedItems(document.oz.areacode));emptyListBox(document.oz.statecode);"></select></td> <td><select multiple="multiple" size="6" name="regioncode" style="width:200px" onChange="javascript:emptyListBox(document.oz.countrycode);fillSelect(document.oz.countrycode,null,buildSelectedItems(document.oz.regioncode));emptyListBox(document.oz.statecode);"></select></td> <td><select multiple="multiple" size="6" name="countrycode" style="width:200px" onChange="javascript:emptyListBox(document.oz.statecode);fillSelect(document.oz.statecode,null,buildSelectedItems(document.oz.countrycode));"></select></td> <td>&nbsp;</td><td><select multiple="multiple" size="6" name="statecode" style="width:200px"></select></td> </tr> <tr> <td><a href="javascript:emptyListBox(document.oz.regioncode);emptyListBox(document.oz.countrycode);selectAll(document.oz.areacode);fillSelect(document.oz.regioncode,null,buildSelectedItems(document.oz.areacode));">Select All</a></td> <td><a href="javascript:emptyListBox(document.oz.countrycode);selectAll(document.oz.regioncode);fillSelect(document.oz.countrycode,null,buildSelectedItems(document.oz.regioncode));">Select All</a></td> <td><a href="javascript:selectAll(document.oz.countrycode);emptyListBox(document.oz.statecode);fillSelect(document.oz.statecode,null,buildSelectedItems(document.oz.countrycode));">Select All</a></td> <td>&nbsp;</td><td><a href="javascript:selectAll(document.oz.statecode);">Select All</a></td> </tr> <tr> <td><a href="javascript:emptyListBox(document.oz.regioncode);emptyListBox(document.oz.countrycode);emptyListBox(document.oz.statecode);deselectAll(document.oz.areacode);">Clear All</a></td> </tr> </table> </td> </tr> <tr> <td class="criteriasectionheader"><br/>PE House</td> </tr> <tr> <td>Find exit companies who are currently held by specific PE Houses. 
<br />Maximum of 50 selections allowed.</td > </tr> <tr> <td> <table border="0" cellspacing="0" cellpadding="0"> <tr> <td> <a class="search_lookup" href="javascript:openWin('qpehousenotapproved','hyperlink','pehousesysid','select-multiple','pehousesysiddescription','');">Lookup</a> </td> </tr> <tr> <td> <select size="4" multiple="multiple" name="pehousesysid" style="width:350px"></select> <input type="hidden" name="pehousesysiddescription" /> </td> </tr> <tr> <td> <a href="javascript:removeLookupOption(document.oz.pehousesysid);removeMe(document.oz.pehousesysid);">Remove</a> </td> </tr> </table> </td> </tr> <tr> <td class="criteriasectionheader"><br/>Advisors</td> </tr> <tr> <td> Find exited companies who have been advised by these companies. <br />Maximum of 50 selections allowed. </td> </tr> <tr> <td> <table border="0" cellspacing="0" cellpadding="0"> <tr> <td> <a class="search_lookup" href="javascript:openWin('ecadvisor','hyperlink','advisorcompanysysid','select-multiple','advisorcompanysysiddescription','');">Lookup</a> </td> </tr> <tr> <td> <select size="4" multiple="multiple" name="advisorcompanysysid" style="width:350px"></select> <input type="hidden" name="advisorcompanysysiddescription" /> </td> </tr> <tr> <td> <a href="javascript:removeLookupOption(document.oz.advisorcompanysysid);removeMe(document.oz.advisorcompanysysid);">Remove</a> </td> </tr> </table> </td> </tr> <tr> <td><br /><span class="criteriasectionheader">Deal Value</span></td> </tr> <tr> <td>Find exited companies with the following deal value. </td> </tr> <tr> <td> <table> <tr> <td><p><span class="criterialabel">Currency</span></p></td> <td>&nbsp;</td> <td><select id="currencycode" name="currencycode"><option value="AUD">AUD</option> <option value="CHF">CHF</option> <option value="CNY">CNY</option> <option value="EUR">EUR</option> <option value="GBP">GBP</option> <option value="HKD">HKD</option> <option value="INR">INR</option> <option value="JPY">JPY</option> <option value="USD" selected="">USD</option></select></td> </tr> </table> </td> </tr> <tr> <td> <table> <tr> <td width="180"><p><span class="criterialabel">Minimum value in millions</span></p></td> <td>&nbsp;</td> <td><p><input type="text" name="mindealvalue" size="12" value="" onkeypress="checkMinimumValue();" onkeyup="checkMinimumValue();" /></td> </tr> </table> </td> </tr> <tr> <td> <table> <tr> <td width="180"><span class="criterialabel">Maximum value in millions</span></td> <td>&nbsp;</td> <td><input type="text" name="maxdealvalue" size="12" value=""></td> </tr> </table> </td> </tr> <tr><td><br>Include deals with undisclosed value <input type="checkbox" name="undiscloseddealvalues" value="true" Checked></td></tr> <tr> <td class="criteriasectionheader"><br/>Exited Companies</td> </tr> <tr> <td>Maximum of 50 selections allowed.</td > </tr> <tr> <td> <table border="0" cellspacing="0" cellpadding="0"> <tr> <td> <a class="search_lookup" href="javascript:openWin('eccompany','hyperlink','eccompanysysid','select-multiple','eccompanysysiddescription','');">Lookup</a> </td> </tr> <tr> <td> <select size="4" multiple="multiple" name="eccompanysysid" style="width:350px"></select> <input type="hidden" name="eccompanysysiddescription" /> </td> </tr> <tr> <td> <a href="javascript:removeLookupOption(document.oz.eccompanysysid);removeMe(document.oz.eccompanysysid);">Remove</a> </td> </tr> </table> </td> </tr> <tr> <td class="criteriasectionheader"><br/>Free Text Search</td> </tr> <tr> <td>Please use the Free Text Search by typing in a keyword or phrase to identify the 
required portfolio. <br /> <span class="hint">Searches on companies' information, deal description, and condition, type, nature, consideration structure.<br><br></span> </td> </tr> <tr> <td> <table border="0" cellspacing="0" cellpadding="0"> <tr> <td width="150" class="criterialabel">Search</td> <td><input type="text" name="textsearch" style="width:250px" value="" /></td> <td><table border="0" cellpadding="0" cellspacing="0"> <tr valign="top"><td width="350"><input checked type="radio" name="andorfreetext" value="and"/>Match all words<br><input type="radio" name="andorfreetext" value="or"/>Match any word<br><input type="radio" name="andorfreetext" value="phrase"/>Match exact phrase</td></tr> </table> </td> </tr> </table> </td> </tr> <tr> <td style="TEXT-ALIGN: right;" class="search_buttons_right"> <input type="button" value="Save Search" class="framework_flatbutton" onclick="javascript:if (validatePage(document.oz)) {document.oz.verb.value=1;document.oz.target='_self';document.oz.submit();};"/> <!-- a onmouseover="style.cursor = 'hand'" onclick="javascript:if (validatePage(document.oz)) {document.oz.verb.value=28;defaultDatesWithLocale( document.oz.datefrom, document.oz.dateto, 'dd/mm/yyyy' );if (verifyDateSubSectors(document.oz.datefrom.value)) {countWindow();document.oz.target='_self';document.oz.submit();}}"><img src="/images/button_countresults.gif" border="0" /></a --> <input type="button" value="Count Results" class="framework_flatbutton" onclick="javascript:submitCount();" /> <!-- a onmouseover="style.cursor = 'hand'" onclick="javascript:if (validatePage(document.oz)) {document.oz.verb.value=8;defaultDatesWithLocale( document.oz.datefrom,document.oz.dateto, 'dd/mm/yyyy' );document.oz.target='_self';if (verifyDateSubSectors(document.oz.datefrom.value)) {document.oz.target='_self';document.oz.submit();}};"><img src="/images/button_search.gif" border="0" /></a --> <input type="button" value="Search" class="framework_flatbutton" onclick="javascript:if (validatePage(document.oz)) { document.oz.verb.value=8 ;document.oz.target='_self'; defaultDatesWithLocale( document.oz.datefrom,document.oz.dateto, 'dd/mm/yyyy' ); document.oz.target='_self'; document.oz.submit(); }" /> </td> </tr> </tbody> </table> </tr> </td> </tbody> </table> </form> <script LANGUAGE="JavaScript"> <!-- function validatePage(objitem) { selectAll(objitem.pehousesysid); selectAll(objitem.eccompanysysid); objitem.eccompanysysid.required=false; objitem.eccompanysysid.description='Portfolio Company Name'; objitem.eccompanysysid.datatype='alphanumeric'; selectAll(objitem.advisorcompanysysid); objitem.advisorcompanysysid.required=false; objitem.advisorcompanysysid.description='Advisor Name'; objitem.advisorcompanysysid.datatype='alphanumeric'; // locale info. 
objitem.localedateformat='dd/mm/yyyy'; objitem.localecurrencycode='USD'; objitem.localelanguagecode='en_eu'; objitem.localetimezone='235'; objitem.mindealvalue.required=false; objitem.mindealvalue.description='Currency minimum value in millions'; objitem.mindealvalue.datatype='decimal'; objitem.mindealvalue.min =0; objitem.mindealvalue.max=1000000000000000000; objitem.maxdealvalue.required=false; objitem.maxdealvalue.description='Currency maximum value in millions'; objitem.maxdealvalue.datatype='decimal'; objitem.maxdealvalue.min=0; objitem.maxdealvalue.max=1000000000000000000; objitem.datefrom.required=false; objitem.datefrom.description='Date from'; objitem.datefrom.datatype='date'; objitem.dateto.required=false; objitem.dateto.description='Date to'; objitem.dateto.datatype='date'; if (objitem.statecode) { objitem.statecodelength.value = objitem.statecode.length; } // DanielC: 7/11/08: Case 107136: set the hidden field so that it will end up in the token XML and can be used in criteria.xml if (document.oz.domsectoronly.checked == false) { document.oz.normalsectorsearch.value = "true"; document.oz.normalsubsectorsearch.value = "true"; } return verify(objitem,false); } function submitCount() { if (validatePage(document.oz)) { var dOz = document.oz; //need to change pPopup variable to pPopup=1 to ensure no chrome on popup in event of failure var vAction = dOz.action; dOz.action = (dOz.action.search(/pPopup/) == -1) ? dOz.action+= "&pPopup=1" : dOz.action.replace(/pPopup=./,"pPopup=1"); defaultDatesWithLocale( document.oz.datefrom,document.oz.dateto, 'dd/mm/yyyy' ); dOz.verb.value=28; countWindow(); document.oz.submit(); dOz.action = vAction; } } //--> </script> </td> </tr> </table> </td> <td class="homepage_mainbody-leaguetbl"></td> </tr> </table> </td> </tr> <tr> <td width="100%"><img src="/images/spacer.gif" width="1" height="1"></td> </tr> </table> </td> </tr> </table> </div><footer class="acuris-footer" xmlns:msxsl="urn:schemas ``` A piece of code with xpath not sending error: ``` def openSearchPageCommon(self,url,clear_xpath) : self.drv.get(url) for x in self.drv.find_elements_by_xpath(clear_xpath) : x.click() def openSearchPage(self) : xpath = "//form[@action='portfoliobroker.asp?']//table//*[contains(text(),'Clear Date')]" self.openSearchPageCommon(self.tgt,xpath) ``` Full error: ``` Traceback (most recent call last): File "mmmm_lib.py", line 73, in __init__ self.drv.find_element_by_xpath("//*[contains(text(),'Find exited companies announced')]/../..") File "/home/airflow/.local/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 394, in find_element_by_xpath return self.find_element(by=By.XPATH, value=xpath) File "/home/airflow/.local/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 978, in find_element 'value': value})['value'] File "/home/airflow/.local/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute self.error_handler.check_response(response) File "/home/airflow/.local/lib/python3.6/site-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//*[contains(text(),'Find exited companies announced')]/../.."} ```
2021/08/05
[ "https://Stackoverflow.com/questions/68660419", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8867871/" ]
Using `repr` or a raw string on a target string is a bad idea! By doing that, newline characters are treated as the literal '`\n`'. This is likely to cause unexpected behavior in other test cases. The real problem is that `.` matches any character **EXCEPT** newline. If you want to match everything, replace `.` with `[\s\S]`. This means "whitespace or not whitespace" = "anything". Using other character groups like `[\w\W]` also works, [and it is more efficient for adding an exception just for newline.](https://stackoverflow.com/a/33312193/11556864) One more thing: it is good practice to use a raw string in the **pattern** string (not the match target). This eliminates the need to escape every character that has a special meaning in normal python strings.
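A quick demonstration of the difference (a sketch; the sample text is made up). Note that passing `re.DOTALL` is the other standard way to make `.` match newlines:

```python
import re

text = "test match\nsomething in between\ntest fail"

print(re.search(r".*match.*fail", text))             # None: '.' stops at '\n'
print(re.search(r"match[\s\S]*fail", text))          # matches across newlines
print(re.search(r".*match.*fail", text, re.DOTALL))  # DOTALL lets '.' match '\n'
```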
You could add it as an or, but make sure you escape the `\` in the regex string (write `\\n`) so regex actually gets the `\n` and not an actual newline. Something like this:

```
regex = '.*match(.|\\n)*fail.*'
```

This would match anything from the last `\n` to `match`, then any mix or number of `\n` until `testfail`. You can change this how you want, but the idea is the same: put what you want into a grouping, and then use `|` as an `or`. [![enter image description here](https://i.stack.imgur.com/O97Gn.png)](https://i.stack.imgur.com/O97Gn.png) On the left is what this regex pattern matched from your example.
5,800
4,561,113
Hi, how do I convert a = ['1', '2', '3', '4'] into a = [1, 2, 3, 4] in one line in python?
2010/12/30
[ "https://Stackoverflow.com/questions/4561113", "https://Stackoverflow.com", "https://Stackoverflow.com/users/557854/" ]
With a list comprehension. ``` a[:] = [int(x) for x in a] ```
With a **generator**: ``` a[:] = (int(x) for x in a) ``` ... list comprehensions are so ummmmm, 2.1, don't you know? but please be wary of replacing the contents in situ; compare this: ``` >>> a = b = ['1', '2', '3', '4'] >>> a[:] = [int(x) for x in a] >>> a [1, 2, 3, 4] >>> b [1, 2, 3, 4] ``` with this: ``` >>> a = b = ['1', '2', '3', '4'] >>> a = [int(x) for x in a] >>> a [1, 2, 3, 4] >>> b ['1', '2', '3', '4'] ```
5,801
14,007,784
I'm trying to create a scheduled task using the Unix `at` command. I wanted to run a python script, but quickly realized that `at` is configured to run whatever file I give it with `sh`. In an attempt to circumvent this, I created a file that contained the command `python mypythonscript.py` and passed that to `at` instead. I have set the permissions on the python file to executable by everyone (`chmod a+x`), but when the `at` job runs, I am told `python: can't open file 'mypythonscript.py': [Errno 13] Permission denied`. If I run `source myshwrapperscript.sh`, the shell script invokes the python script fine. Is there some obvious reason why I'm having permissions problems with `at`? **Edit:** I got frustrated with the python script, so I went ahead and made an `sh` script version of the thing I wanted to run. I am now finding that the `sh` script returns to me saying `rm: cannot remove <filename>: Permission denied` (this was a temporary file I was creating to store intermediate data). Is there any way I can authorize these operations with my own credentials, despite not having sudo access? All of this works perfectly when I run it myself, but everything seems to go to shit when I have `at` do it.
2012/12/23
[ "https://Stackoverflow.com/questions/14007784", "https://Stackoverflow.com", "https://Stackoverflow.com/users/599391/" ]
You can use a `LEFT JOIN`, but it would be so much easier to do that if you started off by using the cleaner and more modern `JOIN` syntax: ``` SELECT c.*, d.username, d.email, e.country_name FROM user_profiles c JOIN users d ON d.id = c.id JOIN country e ON e.country_id = c.country_id WHERE c.user_id = 42 ``` Now to solve your problem you can just add `LEFT`: ``` LEFT JOIN country e ON e.country_id = c.country_id ``` Full query: ``` SELECT c.*, d.username, d.email, e.country_name FROM user_profiles c JOIN users d ON d.id = c.id LEFT JOIN country e ON e.country_id = c.country_id WHERE c.user_id = 42 ``` **Related** * [Why isn't SQL ANSI-92 standard better adopted over ANSI-89?](https://stackoverflow.com/questions/334201/why-isnt-sql-ansi-92-standard-better-adopted-over-ansi-89)
Before I even start thinking about your current problem, can I just point out that your current query is a mess. Really bad. It might work, it might even work efficiently - but it's still a mess:

```
SELECT c.*, d.username, d.email, e.country_name
FROM user_profiles c, users d, country e
WHERE d.id = ".$id."
AND d.id = c.user_id
AND e.id = c.country_id;
```

> I have tried to rewrite this with CASE or LEFT JOIN

But you're not going to show us your code? One solution would be to use a subselect against each row in user\_profiles/users:

```
SELECT c.*, d.username, d.email,
  (SELECT e.country_name
   FROM country e
   WHERE e.id = c.country_id
   LIMIT 0,1) AS country_name
FROM user_profiles c, users d
WHERE d.id = ".$id."
AND d.id = c.user_id;
```

Alternatively, use a LEFT JOIN:

```
SELECT c.*, d.username, d.email, e.country_name
FROM user_profiles c
INNER JOIN users d ON d.id = c.user_id
LEFT JOIN country e ON e.id = c.country_id
WHERE d.id = ".$id.";
```
5,806
47,891,644
I am doing a python project, with the SikuliX feature. I want to make an Automatic Mail sending system, but I import the TO, CC/BCC, and so on.. trough a BAT file, which sends then its data to a txt, python imports the txt and then it uses to do the job. But my problem is that when I leave a variable in Batch empty, it Automatically fills it as 'ECHO is off.' How could I prevent this ? Here's the code: ``` @echo Off SETLOCAL EnableDelayedExpansion for /F "tokens=1,2,3 delims=#" %%a in ('"prompt #$H#$E# & echo on & for %%b in (1) do rem"') do ( set "DEL=%%a" ) call :colorEcho f0 "-----[]AutoMail System[]-----" echo. echo. call :colorEcho 0f "Fill out the next part:" echo. pause del userdata.txt echo. echo. set /p to="TO: " echo To: >> userdata.txt echo %to% >> userdata.txt set /p ccbcc="CC/BCC: " CC/ BCC: >> userdata.txt %ccbcc% >> userdata.txt set /p targy="TΓ‘rgy: " Targy: >> userdata.txt %targy% >> userdata.txt set /p szoveg=">->-> " szoveg: >> userdata.txt %szoveg% >> userdata.txt echo. PAUSE echo. echo Starting AutoFill echo. PAUSE start C:\\Users\\gutiw\\Desktop\\Sikuli\\runsikulix.cmd -r C:\\Users\\gutiw\\Desktop\\Sikuli\\AUTOMATION\\AutoMail.sikuli exit :colorEcho echo off <nul set /p ".=%DEL%" > "%~2" findstr /v /a:%1 /R "^$" "%~2" nul del "%~2" > nul 2>&1i ``` Thanks for helping!
2017/12/19
[ "https://Stackoverflow.com/questions/47891644", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8961515/" ]
Use the command line: ``` >>userdata.txt echo/%to% ``` Now environment variable `to` can be not defined and **ECHO** does nevertheless not output current state of **ECHO** mode because of forward slash `/` in command line. The output redirection is specified on this command line for safety at beginning to make it possible to output with referencing `to` also the string values `1` to `9` without getting a trailing space written into the file `userdata.txt`. The command line `echo.%to% >> userdata.txt` has two small disadvantages: 1. Although extremely unlikely, `echo.` can fail and do something completely different as expected, see DosTips forum topic [ECHO. FAILS to give text or blank line - Instead use ECHO/](https://www.dostips.com/forum/viewtopic.php?f=3&t=774) for details. 2. The space between `%to%` and the redirection operator `>>` is also written as trailing space into the output file `userdata.txt`. Example to demonstrate the difference: ``` del userdata.txt 2>nul set to=1 >>userdata.txt echo/%to% ``` Execution of a batch file with the three lines above results in really executing by Windows command interpreter: ``` del userdata.txt 2>nul set to=1 echo/11>>userdata.txt ``` The file `userdata.txt` contains `1`, carriage return and line-feed, i.e. **three** bytes with the hexadecimal values `31 0D 0A`. A batch file with the three lines ``` del userdata.txt 2>nul set to=1 echo.%to% >>userdata.txt ``` results in really executing by Windows command interpreter ``` del userdata.txt 2>nul set to=1 echo.1 1>>userdata.txt ``` and the file `userdata.txt` contains `1`, a space character, carriage return and line-feed, i.e. **four** bytes with the hexadecimal values `31 20 0D 0A`. In this case it works also to specify the redirection on right side as usual without a space between environment variable reference `%to%` and redirection operator `>>`, i.e. use the command line: ``` echo/%to%>>userdata.txt ``` This works because of `/` after command **ECHO**. `.` could be also used if there would not be issue 1 with `echo.`. See also [Why does ECHO command print some extra trailing space into the file?](https://stackoverflow.com/a/46972524/3074564)
So it looks like it's working now. The problem was that I hadn't used the **.** between `echo` and the variables. This is what it looks like after editing:

```
@echo Off
SETLOCAL EnableDelayedExpansion
for /F "tokens=1,2,3 delims=#" %%a in ('"prompt #$H#$E# & echo on & for %%b in (1) do rem"') do (
  set "DEL=%%a"
)

call :colorEcho f0 "-----[]AutoMail System[]-----"
echo.
echo.
call :colorEcho 0f "Fill out the next part"
echo.
pause
del userdata.txt
echo.
echo.
set /p to="TO: "
echo To: >> userdata.txt
echo.%to% >> userdata.txt

set /p ccbcc="CC/BCC: "
echo.CC/ BCC: >> userdata.txt
echo.%ccbcc% >> userdata.txt

set /p targy="Tárgy: "
echo.Targy: >> userdata.txt
echo.%targy% >> userdata.txt

set /p szoveg=">->-> "
echo.szoveg: >> userdata.txt
echo.%szoveg% >> userdata.txt

echo.
PAUSE
echo.
echo Starting AutoFill
echo.
PAUSE
start C:\\Users\\gutiw\\Desktop\\Sikuli\\runsikulix.cmd -r C:\\Users\\gutiw\\Desktop\\Sikuli\\AUTOMATION\\AutoMail.sikuli
exit

:colorEcho
echo off
<nul set /p ".=%DEL%" > "%~2"
findstr /v /a:%1 /R "^$" "%~2" nul
del "%~2" > nul 2>&1
```
5,807
57,408,736
I can’t figure out how to give my R package’s shared library’s debug symbols source line information. What am I missing?

1. I create the following `src/Makevars` file:

```
PKG_CXXFLAGS=-O0 -ggdb
PKG_LIBS=-O0 -ggdb
```

2. I compile the package using `R CMD INSTALL --no-multiarch --with-keep.source`:

```
* installing to library ‘~/.local/lib/R/3.6’
* installing *source* package ‘reticulate’ ...
** using staged installation
** libs
g++ -std=gnu++11 -I"/usr/include/R/" -DNDEBUG -I"$HOME/.local/lib/R/3.6/Rcpp/include" -D_FORTIFY_SOURCE=2 -O0 -ggdb -fpic -march=x86-64 -mtune=generic -O2 -pipe -fno-plt -c RcppExports.cpp -o RcppExports.o
g++ -std=gnu++11 -shared -L/usr/lib64/R/lib -Wl,-O1,--sort-common,--as-needed,-z,relro,-z,now -o reticulate.so RcppExports.o event_loop.o libpython.o output.o python.o readline.o -O0 -ggdb -L/usr/lib64/R/lib -lR
installing to ~/.local/lib/R/3.6/00LOCK-reticulate/00new/reticulate/libs
```

3. I debug like this:

```
R -d gdb --slave -e 'reticulate::py_eval("print")()'
GNU gdb (GDB) 8.3
[...]
(No debugging symbols found in /usr/lib64/R/bin/exec/R)
(gdb) break py_get_formals
Function "py_get_formals" not defined.
Make breakpoint pending on future shared library load? (y or [n]) y
Breakpoint 1 (py_get_formals) pending.
(gdb) run
Starting program: /usr/lib/R/bin/exec/R --slave -e reticulate::py_eval\(\"print\"\)\(\)
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/libthread_db.so.1".
[...]
Thread 1 "R" hit Breakpoint 1, 0x00007fffeb6b79a0 in py_get_formals(PyObjectRef, bool) () from /home/angerer/.local/lib/R/3.6/reticulate/libs/reticulate.so
(gdb) step
Single stepping until exit from function _Z14py_get_formals11PyObjectRefb,
which has no line number information.
[...]
```

Why does my function not have line numbers even though I specified `-ggdb` for both compiling and linking? I see that only `RcppExports.cpp` is mentioned in the compile command line; is that the problem? If so, how can I change this?
2019/08/08
[ "https://Stackoverflow.com/questions/57408736", "https://Stackoverflow.com", "https://Stackoverflow.com/users/247482/" ]
Changing the Makevars doesn’t trigger recompilation. I needed to run `rm -f src/*.o src/*.so` first so that the object files would actually get recompiled.
This is specifically for Windows. The simplest way to do it is to set the `R_MAKEVARS_USER` environment variable to point to the Makevars.win file. That seems to work. However, debug breakpoints have stopped working!
5,808
20,201,562
I have a list where each element is a letter, like this:

```
myList = ['L', 'H', 'V', 'M']
```

However, I want to reverse these letters and store them as a string, like this:

```
myString = 'MVHL'
```

Is there an easy way to do this in Python? Is there a `.reverse` I could call on my list, so I could then just loop through and add the items to my string?
2013/11/25
[ "https://Stackoverflow.com/questions/20201562", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1110590/" ]
There is a [`reversed()` function](http://docs.python.org/2/library/functions.html#reversed), as well as the `[::-1]` negative-stride slice: ``` >>> myList = ['L', 'H', 'V', 'M'] >>> ''.join(reversed(myList)) 'MVHL' >>> ''.join(myList[::-1]) 'MVHL' ``` Both get the job done admirably when combined with the [`str.join()` method](http://docs.python.org/2/library/stdtypes.html#str.join), but of the two, the negative stride slice is the faster method: ``` >>> import timeit >>> timeit.timeit("''.join(reversed(myList))", 'from __main__ import myList') 1.4639930725097656 >>> timeit.timeit("''.join(myList[::-1])", 'from __main__ import myList') 0.4923250675201416 ``` This is because `str.join()` really wants a list, to pass over the strings in the input list twice (once for allocating space, the second time for copying the character data), and a negative slice returns a list directly, while `reversed()` returns an iterator instead.
You can use `reversed` (or `[::-1]`) and `str.join`: ``` >>> myList = ['L', 'H', 'V', 'M'] >>> "".join(reversed(myList)) 'MVHL' ```
5,809
34,493,535
I am using the **pymongo** driver to work with MongoDB from Python. Every time I run a query in the Python shell, it returns output that is very difficult to understand. I have used the `.pretty()` option with the mongo shell, which gives the output in a structured way. I want to know whether there is any method like `pretty()` in **pymongo** which can return output in a structured way?
2015/12/28
[ "https://Stackoverflow.com/questions/34493535", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4138764/" ]
There is no direct method to print the output of pymongo in a structured way, but since the output of a pymongo query is a `dict`, the standard `json` module can format it (`default=str` takes care of non-JSON types such as `ObjectId` and `datetime`):

```
import json
print(json.dumps(result_of_pymongo_query, indent=4, default=str))
```

This should serve your purpose, I think.
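As a minimal additional sketch (the local MongoDB instance and the `testdb`/`users` names are hypothetical, chosen only for illustration), the standard-library `pprint` module is another common way to get readable output from pymongo results:

```python
from pprint import pprint

from pymongo import MongoClient

client = MongoClient("localhost", 27017)  # assumed local MongoDB instance
collection = client["testdb"]["users"]    # hypothetical database/collection names

# pprint renders each document (a plain dict) with indentation and sorted keys
for document in collection.find().limit(5):
    pprint(document)
```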
It probably depends on your IDE, not on pymongo itself; pymongo is responsible for manipulating data and communicating with MongoDB. I am using Visual Studio with PTVS, and such options are provided by Visual Studio. PyCharm is also a good IDE choice that will let you watch your code's variables and the JSON in a formatted structure.
5,810
42,162,985
**Use Case**

I am making a factory-type script in Python that consumes XML and, based on that XML, returns information from a specific factory. I have created a file that I call FactoryMap.json that stores the mapping between the location where an item can be found in the XML and the appropriate factory.

**Issue**

The JSON in my mapping file looks like:

```
{
"path": "['project']['builders']['hudson.tasks.Shell']",
"class": "bin.classes.factories.step.ShellStep"
}
```

*path* is where the element can be found in the XML once it's converted to a dict. *class* is the corresponding path to the factory that can consume that element's information. In order to do anything with this, I need to descend into the dictionary's structure, which would look like this if I didn't have to draw this information from a file (note the key reference = 'path' from my JSON):

```
configDict={my xml config dict}
for k,v in configDict['project']['builders']['hudson.tasks.Shell'].iteritems():
    #call the appropriate factory
```

The issue is that if I look up the path value as a string or a list, I cannot use it with `iteritems()`:

```
path="['project']['builders']['hudson.tasks.Shell']" #this is taken from the JSON
for k,v in configDict[path].iteritems():
    #call the appropriate factory
```

This returns a key error stating that I can't use that string as the key. How can I use a variable as the key for that Python dictionary?
2017/02/10
[ "https://Stackoverflow.com/questions/42162985", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7127136/" ]
You could use `eval`: ``` eval( "configDict"+path ) ```
You can use the `eval()` function to evaluate your path into an actual dict object vs a string. Something like this is what I'm referring to: ``` path="['project']['builders']['hudson.tasks.Shell']" #this is taken from the JSON d = eval("configDict%s" % path) for k,v in d.iteritems(): #call the appropriate factory ```
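Both answers use `eval()`, which executes arbitrary text and is risky if the JSON mapping file can be edited by others. As a hedged alternative sketch of my own (not from either answer; the sample `configDict` below is illustrative), you can parse the bracketed keys out of the path string and walk the dict with `functools.reduce`:

```python
import re
from functools import reduce

# illustrative stand-in for the XML-derived config dict from the question
configDict = {'project': {'builders': {'hudson.tasks.Shell': {'command': 'make'}}}}

path = "['project']['builders']['hudson.tasks.Shell']"

# extract the quoted key names: ['project', 'builders', 'hudson.tasks.Shell']
keys = re.findall(r"\['([^']+)'\]", path)

# descend into the nested dict one key at a time, without eval()
target = reduce(lambda d, k: d[k], keys, configDict)

for k, v in target.iteritems():  # Python 2, matching the question's code
    print k, v                   # call the appropriate factory here instead
```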
5,819
25,109,445
I am developing a piece of client-server software in which the server side is developed in Python. I want to call a group of Java methods from Python. All the Java methods exist in one jar file, which means I do not need to load multiple jars. For this purpose I used jpype. For each request from a client, I invoke a Python function that looks like this:

```
def test(self, userName, password):
    Classpath = "/home/DataSource/DMP.jar"
    jpype.startJVM(
        "/usr/local/java/jdk1.7.0_60/jre/lib/amd64/server/libjvm.so",
        "-ea", "-Xmx512m",
        "-Djava.class.path=%s" % Classpath)
    NCh = jpype.JClass("Common.NChainInterface")
    n = NCh(self._DB_ipAddress, self._DB_Port, self._XML_SCHEMA_PATH, self._DSTDir)
    jpype.shutdownJVM()
```

The first call works, but on the second call it cannot start the JVM. I have seen a lot of complaints about this but could not find any solution. I would appreciate it if anybody could help.

If jpype has a problem with starting the JVM multiple times, is there a way to start and stop the JVM only once? The server is deployed on an Ubuntu virtual machine, but I do not have enough knowledge to write, for example, a script for this purpose. Could you please provide a link or an example?
2014/08/03
[ "https://Stackoverflow.com/questions/25109445", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Check `isJVMStarted()` before `startJVM()`. If the JVM is running, it will return `True`, otherwise `False`.

```
def init_jvm(jvmpath=None):
    if jpype.isJVMStarted():
        return
    jpype.startJVM(jvmpath or jpype.getDefaultJVMPath())
```

For a real example, see [here](https://github.com/e9t/konlpy/blob/master/konlpy/jvm.py#L21).
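Applying this to the server in the question, a common pattern (a sketch of my own, under the assumption that all requests share the one classpath) is to start the JVM once per process and never call `shutdownJVM()` between requests:

```python
import jpype

CLASSPATH = "/home/DataSource/DMP.jar"  # path taken from the question

def init_jvm():
    # start the JVM only once per process; jpype cannot restart it after shutdown
    if not jpype.isJVMStarted():
        jpype.startJVM(jpype.getDefaultJVMPath(),
                       "-ea", "-Xmx512m",
                       "-Djava.class.path=%s" % CLASSPATH)

def test(db_ip, db_port, schema_path, dst_dir):
    init_jvm()  # safe to call on every request
    NCh = jpype.JClass("Common.NChainInterface")
    return NCh(db_ip, db_port, schema_path, dst_dir)
    # note: no shutdownJVM() here; shutting down would break later calls
```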
This issue is not resolved by et9's answer above. The problem is explained [here](https://sourceforge.net/p/jpype/discussion/379372/thread/8dab696c/). Effectively you need to start/stop the JVM at the server/module level. I have had success with multiple calls using this method in unit tests.
5,820
41,795,116
While `frozendict` [was rejected](https://www.python.org/dev/peps/pep-0416/#rejection-notice), a related class `types.MappingProxyType` was added to public API in python 3.3. I understand `MappingProxyType` is just a wrapper around the underlying `dict`, but despite that isn't it functionally equivalent to `frozendict`? In other words, what's the substantive difference between the original PEP 416 `frozendict` and this: ``` from types import MappingProxyType def frozendict(*args, **kwargs): return MappingProxyType(dict(*args, **kwargs)) ``` Of course `MappingProxyType` is not hashable as is, but just as [the PEP suggested for `frozendict`](https://www.python.org/dev/peps/pep-0416/#recipe-hashable-dict), it can be made hashable after ensuring that all its values are hashable (MappingProxyType cannot be subclassed, so it would be require composition and forwarding of methods).
2017/01/22
[ "https://Stackoverflow.com/questions/41795116", "https://Stackoverflow.com", "https://Stackoverflow.com/users/336527/" ]
TL;DR
-----

`MappingProxyType` is a read-only proxy for mapping (e.g. dict) objects.

`frozendict` is an immutable dict.

Answer
------

The proxy pattern is (quoting [wikipedia](https://en.wikipedia.org/wiki/Proxy_pattern)):

> A proxy, in its most general form, is a class functioning as an interface to something else.

`MappingProxyType` is just a simple proxy (i.e. interface) to access the real object (the real map, which in our example is a dict). The suggested `frozendict` object is to dict as frozenset is to set: a read-only (immutable) object that can only be populated upon creation.

So why do we need `MappingProxyType`? An example use case is where you want to pass a dictionary to another function without allowing it to change your dictionary; it acts as a read-only proxy (quoting the [python docs](https://docs.python.org/3.5/library/types.html#types.MappingProxyType)):

> Read-only proxy of a mapping. It provides a dynamic view on the mapping’s entries, which means that when the mapping changes, the view reflects these changes.

Let's see some example usage of `MappingProxyType`:

```
In [1]: from types import MappingProxyType

In [2]: d = {'a': 1, 'b': 2}

In [3]: m = MappingProxyType(d)

In [4]: m['a']
Out[4]: 1

In [5]: m['a'] = 5
TypeError: 'mappingproxy' object does not support item assignment

In [6]: d['a'] = 42

In [7]: m['a']
Out[7]: 42

In [8]: for i in m.items():
   ...:     print(i)
('a', 42)
('b', 2)
```

Update:
-------

Because the PEP did not make it into Python, we cannot know for sure what the implementation would have been. But by looking at the PEP we see that:

```
frozendict({'a': {'b': 1}})
```

would raise an exception, as `{'b': 1}` is not a hashable value, whereas your implementation would happily create the object. Of course, you can add validation for the values, as noted in the PEP.

I assume part of the PEP was memory optimization, and an implementation of this kind of frozendict could have benefited from faster dict comparison using a `__hash__` implementation.
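Regarding the hashability point raised in the question, a minimal sketch of the composition-and-forwarding approach (my own illustration, not from PEP 416) could look like this:

```python
from collections.abc import Mapping
from types import MappingProxyType

class FrozenDict(Mapping):
    """Hashable, read-only mapping built on MappingProxyType (illustrative only)."""

    def __init__(self, *args, **kwargs):
        self._proxy = MappingProxyType(dict(*args, **kwargs))

    def __getitem__(self, key):
        return self._proxy[key]

    def __iter__(self):
        return iter(self._proxy)

    def __len__(self):
        return len(self._proxy)

    def __hash__(self):
        # raises TypeError if any value is unhashable, mirroring the PEP's intent
        return hash(frozenset(self._proxy.items()))
```

Inheriting from `collections.abc.Mapping` supplies `keys()`, `items()`, `values()`, `__contains__`, and equality for free, given the three abstract methods above.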
One thing I've noticed is that `frozendict.copy` supports add/replace (limited to string keys), whereas `MappingProxyType.copy` does not. For instance:

```py
d = {'a': 1, 'b': 2}

from frozendict import frozendict
fd = frozendict(d)
fd2 = fd.copy(b=3, c=5)

from types import MappingProxyType
mp = MappingProxyType(d)
# mp2 = mp.copy(b=3, c=5) => TypeError: copy() takes no keyword arguments
# to do that with MappingProxyType we need more boilerplate
temp = dict(mp)
temp.update(b=3, c=5)
mp2 = MappingProxyType(temp)
```

Note: neither of these two immutable maps supports a "remove and return new immutable copy" operation.
5,823
49,217,962
I tend to write a lot of command line utility programs and was wondering if there is a standard way of messaging the user in Python. Specifically, I would like to print error and warning messages, as well as other more conversational output, in a manner that is consistent with Unix conventions. I could produce these myself using the built-in print function, but the messages have a uniform structure, so it seems like it would be useful to have a package to handle this for me.

For example, for commands that you run directly in the command line you might get messages like this:

```
This is normal output.
error: no files given.
error: parse.c: no such file or directory.
error: parse.c:7:16: syntax error.
warning: /usr/lib64/python2.7/site-packages/simplejson: not found, skipping.
```

If the commands might be run in a script or pipeline, they should include their name:

```
grep: /usr/dict/words: no such file or directory.
```

It would be nice if it could also handle levels of verbosity.

These things are all relatively simple in concept, but can result in a lot of extra conditionals and complexity for each print statement. I have looked at the logging facility in Python, but it seems overly complicated and more suited for daemons than command line utilities.
2018/03/11
[ "https://Stackoverflow.com/questions/49217962", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8323360/" ]
I can recommend [Inform](https://inform.readthedocs.io). It is the only package I have seen that seems to address this need. It provides a variety of print functions that print in different circumstances or with different headers. For example:

```
log()     -- prints to log file, no header
comment() -- prints if verbose, no header
display() -- prints if not quiet, no header
output()  -- always prints, no header
warning() -- always prints with warning header
error()   -- always prints with error header
fatal()   -- always prints with error header, terminates program.
```

Inform refers to these functions as 'informants'. Informants are very similar to the Python print function in that they take any number of arguments and build the message by joining them together. It also allows you to specify a *culprit*, which is added to the front of the message.

For example, here is a simple search-and-replace program written using Inform.

```
#!/usr/bin/env python3
"""
Replace a string in one or more files.

Usage:
    replace [options] <target> <replacement> <file>...

Options:
    -v, --verbose    indicate whether file is changed
"""

from docopt import docopt
from inform import Inform, comment, error, os_error
from pathlib import Path

# read command line
cmdline = docopt(__doc__)
target = cmdline['<target>']
replacement = cmdline['<replacement>']
filenames = cmdline['<file>']
Inform(verbose=cmdline['--verbose'], prog_name=True)

for filename in filenames:
    try:
        filepath = Path(filename)
        orig = filepath.read_text()
        new = orig.replace(target, replacement)
        comment('updated' if orig != new else 'unchanged', culprit=filename)
        filepath.write_text(new)
    except OSError as e:
        error(os_error(e))
```

Inform() is used to specify your preferences; comment() and error() are the informants, and they actually print the messages; and os_error() is a useful utility that converts OSError exceptions into a string that can be used as an error message.

If you were to run this, you might get the following output:

```
> replace -v tiger toe eeny meeny miny moe
eeny: updated
meeny: unchanged
replace error: miny: no such file or directory.
replace error: moe: no such file or directory.
```

Hopefully this gives you an idea of what Inform does. There is a lot more power there. For example, it provides a collection of utilities that are useful when printing messages. An example is os_error(), but there are others. You can also define your own informants, which is a way of handling multiple levels of verbosity.
```
import logging
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s %(levelname)s %(message)s')
```

The `level` specified above controls the verbosity of the output.

You can attach handlers (this is where the complexity outweighs the benefit in my case) to the logging to send output to different places (<https://docs.python.org/2/howto/logging-cookbook.html#multiple-handlers-and-formatters>), but I haven't needed more than command line output to date.

To produce output you must specify its *verbosity* as you log it:

`logging.debug("This debug message will rarely appeal to end users")`

I hadn't read your very last line; the answer seemed obvious by then, and I wouldn't have imagined that single `basicConfig` line could be described as "overly complicated". It's all I use 60% of the time, when print is not enough.
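To tie the logging level to a command-line verbosity flag (a sketch of my own, assuming an argparse-based CLI), counting `-v` occurrences works well:

```python
import argparse
import logging

parser = argparse.ArgumentParser()
parser.add_argument("-v", "--verbose", action="count", default=0,
                    help="increase output verbosity (-v, -vv)")
args = parser.parse_args()

# map 0 -> WARNING, 1 -> INFO, 2 or more -> DEBUG
level = [logging.WARNING, logging.INFO, logging.DEBUG][min(args.verbose, 2)]
logging.basicConfig(level=level, format="%(levelname)s: %(message)s")

logging.warning("always shown")
logging.info("shown with -v")
logging.debug("shown with -vv")
```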
5,829
32,604,558
I looked but I didn't find the answer (and I'm pretty new to Python). The question is pretty simple. I have a list made of sublists:

```
ll = [[1,2,3], [4,5,6], [7,8,9]]
```

What I'm trying to do is create a dictionary whose keys are the first element of each sublist and whose values are the remaining values of the corresponding sublist, like:

```
d = {1:[2,3], 4:[5,6], 7:[8,9]}
```

How can I do that?
2015/09/16
[ "https://Stackoverflow.com/questions/32604558", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2509085/" ]
Using dictionary comprehension (For Python 2.7 +) and slicing - ``` d = {e[0] : e[1:] for e in ll} ``` Demo - ``` >>> ll = [[1,2,3], [4,5,6], [7,8,9]] >>> d = {e[0] : e[1:] for e in ll} >>> d {1: [2, 3], 4: [5, 6], 7: [8, 9]} ```
You could do it this way:

```
ll = [[1,2,3], [4,5,6], [7,8,9]]

dct = dict( (item[0], item[1:]) for item in ll)
# or even:
dct = { item[0]: item[1:] for item in ll }

print(dct) # {1: [2, 3], 4: [5, 6], 7: [8, 9]}
```
5,830
24,151,563
I've got a presentation running with reveal.js and everything is working. I am writing some sample code, and highlight.js is working well within my presentation. But I want to incrementally display code. E.g., imagine that I'm explaining a function to you: I show you the first step, and then want to show the subsequent steps. Normally, I would use fragments to incrementally display items, but it's not working in a code block. So I have something like this:

```
<pre><code>
def python_function()
   <span class="fragment">display this first</span>
   <span class="fragment">now display this</span>
</code></pre>
```

But the `<span>` elements are getting syntax-highlighted instead of being read as HTML fragments. It looks something like this: <http://imgur.com/nK3yNIS>

FYI, without the `<span>` elements highlight.js reads this correctly as Python, but with the `<span>` elements, the language it detects is CoffeeScript.

Any ideas on how to have fragments inside a code block (or another way to simulate this) would be greatly appreciated.
2014/06/10
[ "https://Stackoverflow.com/questions/24151563", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2423506/" ]
I got this to work. I had to change the init for the highlight.js dependency:

```
{ src: 'plugin/highlight/highlight.js', async: true, callback: function() {
    [].forEach.call( document.querySelectorAll( '.highlight' ), function( v, i) {
        hljs.highlightBlock(v);
    });
} },
```

Then I authored the section this way:

```
<section>
    <h2>Demo</h2>
    <pre class="stretch highlight cpp">
#pragma once

void step_one_setup(ofApp* app)
{
    auto orbit_points = app-><span class="fragment zoom-in highlight-current-green">orbitPointsFromTimeInPeriod</span>(
        app-><span class="fragment zoom-in highlight-current-green">timeInPeriodFromMilliseconds</span>(
            app->updates.<span class="fragment zoom-in highlight-current-green">milliseconds</span>()));
}
    </pre>
</section>
```

Results:

![slide before fragments](https://i.stack.imgur.com/mDBXO.png)

![slide at fragment 1](https://i.stack.imgur.com/Z9Ssh.png)

![slide at fragment 2](https://i.stack.imgur.com/x6egW.png)
I would try to use multiple `<pre class="fragment">` elements and manually change `.reveal pre` to `margin: 0 auto;` and `box-shadow: none;` so they will look like one block of code.

OR

Have you tried `<code class="fragment">`? If you use a negative vertical margin to remove the space between individual fragments and give `<pre>` the same background as `<code>`, then you get what you want.

Result:

![enter image description here](https://i.stack.imgur.com/E8CtQ.png)

![enter image description here](https://i.stack.imgur.com/KeV3L.png)
5,839
37,020,181
I am trying to pull a `change` of a `gerrit` project into my local repository using `gitpython`. This can be done with the following command:

```
git pull origin refs/changes/25/225/1
```

Here, `refs/changes/25/225/1` is a change that has not been submitted in `gerrit`.

I have cloned the `gerrit` project into a directory. Now I want to `pull` the changes that have not been submitted into this directory. The code below is the usual way to `git pull` in a directory containing a `.git` folder:

```
#gitPull.py
import git
repo = git.Repo('/home/user/gitRepo')
o = repo.remotes.origin
o.pull()
```

Here, `gitRepo` has the `.git` folder (it is the cloned gerrit project). I did a lot of searching but did not find a way to execute the above-mentioned command `git pull origin refs/changes/25/225/1` using `gitpython`.
2016/05/04
[ "https://Stackoverflow.com/questions/37020181", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6164440/" ]
Well, it's as simple as giving the change ref as the [`refspec` parameter to the pull method](http://gitpython.readthedocs.io/en/stable/reference.html#git.remote.Remote.pull):

```
import git
repo = git.Repo('/home/user/gitRepo')
o = repo.remotes.origin
o.pull('refs/changes/25/225/1')
```
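If you'd rather inspect the change before merging it into your current branch, a hedged alternative sketch (my own addition, not from the answer above) is to fetch the ref and check out `FETCH_HEAD`, which mirrors gerrit's usual "checkout" download workflow:

```python
import git

repo = git.Repo('/home/user/gitRepo')
origin = repo.remotes.origin

# fetch the change without merging it into the current branch
origin.fetch('refs/changes/25/225/1')

# check out the fetched commit (detached HEAD), like gerrit's checkout download
repo.git.checkout('FETCH_HEAD')
```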
```
import git
import os

g = git.Git(os.path.expanduser("/home/user/gitRepo"))
result = g.execute(["git", "pull", "origin", "refs/changes/25/225/1"])
```

You could do the same using `execute()`.
5,841
17,209,397
This is the code, its quite simple, it just looks like a lot of code: ``` from collections import namedtuple # make a basic Link class Link = namedtuple('Link', ['id', 'submitter_id', 'submitted_time', 'votes', 'title', 'url']) # list of Links to work with links = [ Link(0, 60398, 1334014208.0, 109, "C overtakes Java as the No. 1 programming language in the TIOBE index.", "http://pixelstech.net/article/index.php?id=1333969280"), Link(1, 60254, 1333962645.0, 891, "This explains why technical books are all ridiculously thick and overpriced", "http://prog21.dadgum.com/65.html"), Link(23, 62945, 1333894106.0, 351, "Learn Haskell Fast and Hard", "http://yannesposito.com/Scratch/en/blog/Haskell-the-Hard-Way/"), Link(2, 6084, 1333996166.0, 81, "Announcing Yesod 1.0- a robust, developer friendly, high performance web framework for Haskell", "http://www.yesodweb.com/blog/2012/04/announcing-yesod-1-0"), Link(3, 30305, 1333968061.0, 270, "TIL about the Lisp Curse", "http://www.winestockwebdesign.com/Essays/Lisp_Curse.html"), Link(4, 59008, 1334016506.0, 19, "The Downfall of Imperative Programming. Functional Programming and the Multicore Revolution", "http://fpcomplete.com/the-downfall-of-imperative-programming/"), Link(5, 8712, 1333993676.0, 26, "Open Source - Twitter Stock Market Game - ", "http://www.twitstreet.com/"), Link(6, 48626, 1333975127.0, 63, "First look: Qt 5 makes JavaScript a first-class citizen for app development", "http://arstechnica.com/business/news/2012/04/an-in-depth-look-at-qt-5-making-javascript-a-first-class-citizen-for-native-cross-platform-developme.ars"), Link(7, 30172, 1334017294.0, 5, "Benchmark of Dictionary Structures", "http://lh3lh3.users.sourceforge.net/udb.shtml"), Link(8, 678, 1334014446.0, 7, "If It's Not on Prod, It Doesn't Count: The Value of Frequent Releases", "http://bits.shutterstock.com/?p=165"), Link(9, 29168, 1334006443.0, 18, "Language proposal: dave", "http://davelang.github.com/"), Link(17, 48626, 1334020271.0, 1, "LispNYC and EmacsNYC meetup Tuesday Night: Large Scale Development with Elisp ", "http://www.meetup.com/LispNYC/events/47373722/"), Link(101, 62443, 1334018620.0, 4, "research!rsc: Zip Files All The Way Down", "http://research.swtch.com/zip"), Link(12, 10262, 1334018169.0, 5, "The Tyranny of the Diff", "http://michaelfeathers.typepad.com/michael_feathers_blog/2012/04/the-tyranny-of-the-diff.html"), Link(13, 20831, 1333996529.0, 14, "Understanding NIO.2 File Channels in Java 7", "http://java.dzone.com/articles/understanding-nio2-file"), Link(15, 62443, 1333900877.0, 1244, "Why vector icons don't work", "http://www.pushing-pixels.org/2011/11/04/about-those-vector-icons.html"), Link(14, 30650, 1334013659.0, 3, "Python - Getting Data Into Graphite - Code Examples", "http://coreygoldberg.blogspot.com/2012/04/python-getting-data-into-graphite-code.html"), Link(16, 15330, 1333985877.0, 9, "Mozilla: The Web as the Platform and The Kilimanjaro Event", "https://groups.google.com/forum/?fromgroups#!topic/mozilla.dev.planning/Y9v46wFeejA"), Link(18, 62443, 1333939389.0, 104, "github is making me feel stupid(er)", "http://www.serpentine.com/blog/2012/04/08/github-is-making-me-feel-stupider/"), Link(19, 6937, 1333949857.0, 39, "BitC Retrospective: The Issues with Type Classes", "http://www.bitc-lang.org/pipermail/bitc-dev/2012-April/003315.html"), Link(20, 51067, 1333974585.0, 14, "Object Oriented C: Class-like Structures", "http://cecilsunkure.blogspot.com/2012/04/object-oriented-c-class-like-structures.html"), Link(10, 23944, 1333943632.0, 188, "The 
LOVE game framework version 0.8.0 has been released - with GLSL shader support!", "https://love2d.org/forums/viewtopic.php?f=3&t=8750"), Link(22, 39191, 1334005674.0, 11, "An open letter to language designers: Please kill your sacred cows. (megarant)", "http://joshondesign.com/2012/03/09/open-letter-language-designers"), Link(21, 3777, 1333996565.0, 2, "Developers guide to Garage48 hackatron", "http://martingryner.com/developers-guide-to-garage48-hackatron/"), Link(24, 48626, 1333934004.0, 17, "An R programmer looks at Julia", "http://www.r-bloggers.com/an-r-programmer-looks-at-julia/")]

def query():
    return_list = [link for link in links if link.submitter_id == 62443]
    return_list = sorted(return_list, key=lambda var: var.submitted_time)
    return return_list

query()
```

So here is the problem: the above code works fine, but when I write it like this instead, the `query()` function gives me a problem:

```
return_list = [link for link in links if link.submitter_id == 62443].sort(key=lambda var: var.submitted_time)
```

Now, I do not know why, because to me both of these look identical. When I try to do it using `.sort()`, I get `None` as my list (I tried to iterate over it), which is rather odd. How do you get the list that you want using the `.sort()` method in Python?

I am on Windows 8 using Python 2.7.5.
2013/06/20
[ "https://Stackoverflow.com/questions/17209397", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1624921/" ]
`.sort` is an in-place method that sorts an existing list and returns `None`. `sorted` is its counterpart that returns a new sorted list:

```
return_list = sorted((link for link in links if link.submitter_id == 62443),
                     key=lambda var: var.submitted_time)
```

On a side note, I would use `operator.attrgetter` to eliminate the `lambda`:

```
from operator import attrgetter
return_list = sorted((link for link in links if link.submitter_id == 62443),
                     key=attrgetter('submitted_time'))
```
If you are bent on using `.sort`, you need a temporary reference to the unsorted list:

```
return_list = [link for link in links if link.submitter_id == 62443]
return_list.sort(key=lambda var: var.submitted_time)
```

Again, it's nice to use `attrgetter` instead of the lambda.
5,842
61,546,785
I'm fairly experienced with Python as a tool for data science, with no CS background (but eager to learn). I've inherited a 3K-line Python script (it simulates thermal effects on a part of a machine). It was built *organically* by physics people used to MATLAB. I've cleaned it up and modularized it (put it into a class and functions).

Now I want an easy way to be certain it's working correctly after someone updates it. There have been some frustrating debugging sessions lately, and I figure testing of some form can help there. My question is: how do I even get started in this case of a large existing script? I see pytest and unittest, but is that where I should start?

The code is roughly structured like this:

```
class Simulator:
    parameters = input_file

    def __init__(self):
        self.fn1
        self.fn2
        self.fn3

    def fn1(): # with nested functions
    def fn2
    def fn3
    ...
    def fn(n)
```

Each function either generates or acts on some data. Would a way to test be to have some standardized input/output runs and check against them? Is there a way to do this within the standard conventions of testing?

Appreciate any advice or tips, cheers!
2020/05/01
[ "https://Stackoverflow.com/questions/61546785", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8559356/" ]
Hope everything is alright with you!

`pytest` is good for simple cases like yours (a one-file script). It's really simple to get started with. Just install it using pip:

```
pip install -U pytest
```

Then create a test file (`pytest` will run all files of the form `test_*.py` or `*_test.py` in the current directory and its subdirectories):

```py
# content of test_fn1.py
from your_script import example_function

def test_1():
    assert example_function(1, 2, 3) == 'expected output'
```

You can add as many tests in this file as you want, and as many test files as you desire. To run them, go to the folder in a terminal and just execute `pytest`.

For organization's sake, create a folder named test with all test files inside. If you do this, pay attention to how you import your script, since the files won't be in the same folder anymore.

Check the [pytest docs](https://docs.pytest.org/en/latest/getting-started.html) for more information.

Hope this helps! Stay safe!
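To address the "standardized input/output" idea from the question, a common pattern is a golden-file regression test: run the simulator once on a fixed input, save the output as the reference, and have the test compare against it. A minimal sketch (the `Simulator.run()` entry point and the file name are hypothetical, since the question's class only shows `fn1`..`fn(n)`):

```python
import numpy as np

from your_script import Simulator  # module/class names as in the question

REFERENCE = "tests/golden_output.npy"  # generated once from a known-good run

def test_simulation_matches_golden_output():
    result = Simulator().run()        # hypothetical entry point returning an array
    expected = np.load(REFERENCE)
    # allclose tolerates tiny floating-point drift between runs and machines
    assert np.allclose(result, expected, rtol=1e-8)
```

Whenever someone updates the physics code, a change in the numbers makes this test fail, which is exactly the "did my edit break anything" signal you are after.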
No matter how hard you test a program, it is fairly reasonable to assume there will always be bugs left unfound; in other words, it is impossible to check for everything. To start, I recommend that you thoroughly understand how the program works; that way, you will know which important values should be returned and which exceptions should be thrown when an error occurs. You will have to write the tests yourself, which may be a hassle, and it sounds as if you don't want to do it, but rigorous testing involves perseverance and determination. As you may know, debugging and fixing code can take a lot longer than the coding portion itself.

Here is the [pytest](https://docs.pytest.org/en/latest/contents.html) documentation. I suggest you map out what you want to test first, before reading the documentation. You don't need to know how pytest works before you understand how that script of yours works. Take a pen and paper if necessary and plan out which functions do what and which exceptions should be thrown. Good luck!
5,843
38,228,593
I have the following dict in Python:

```
d = {'ABC': ["DEF", "ASD"], 'DEF': ["AFS", "UAP"]}
```

Now I want to delete the value "DEF" but leave it as a key, so it will be:

```
d = {'ABC': [ "ASD"], 'DEF': ["AFS", "UAP"]}
```
2016/07/06
[ "https://Stackoverflow.com/questions/38228593", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6171823/" ]
The solution was simple. I installed the Microsoft.NETCore.UniversalWindowsPlatform package via the Package Manager Console:

`PM> Install-Package Microsoft.NETCore.UniversalWindowsPlatform`
If you have the newest version of NETCore.UniversalWindowsPlatform installed and it still isn't working, make sure that you're using the newest NuGet. We had it working in Visual Studio but failing on the command line. The reason was that we were using an old version of nuget.exe, and this caused our F# FAKE script to use MSBuild 14 instead of 15.
5,851
47,301,581
I'm building a genetic algorithm for feature selection in Python. I have extracted features from my data, then divided them into two dataframes, 'train' and 'test'. How can I multiply the values of each row in the 'population' dataframe (each individual) by the 'train' dataframe?

'train' dataframe:

```
   feature0   feature1   feature2   feature3   feature4   feature5
0  18.279579  -3.921346  13.611829  -7.250185 -11.773605 -18.265003
1  17.899545 -15.503942  -0.741729  -0.053619  -6.734652   4.398419
4  16.432750 -22.490190  -4.611659 -15.247781 -13.941488  -2.433374
5  15.905368  -4.812785  18.291712   3.742221   3.631887  -1.074326
6  16.991823 -15.946251   8.299577   8.057511   8.057510  -1.482333
```

'population' dataframe:

```
   0  1  2  3  4  5
0  1  1  0  0  0  1
1  0  1  0  1  0  0
2  0  0  0  0  0  1
3  0  0  1  0  1  1
```

Multiplying each row in 'population' by all rows in 'train' should give the following results:

1) From population row 1:

```
   feature0   feature1   feature2  feature3  feature4   feature5
0  18.279579  -3.921346         0         0         0 -18.265003
1  17.899545 -15.503942         0         0         0   4.398419
4  16.432750 -22.490190         0         0         0  -2.433374
5  15.905368  -4.812785         0         0         0  -1.074326
6  16.991823 -15.946251         0         0         0  -1.482333
```

2) From population row 2:

```
   feature0   feature1  feature2   feature3  feature4  feature5
0         0  -3.921346         0  -7.250185         0         0
1         0 -15.503942         0  -0.053619         0         0
4         0 -22.490190         0 -15.247781         0         0
5         0  -4.812785         0   3.742221         0         0
6         0 -15.946251         0   8.057511         0         0
```

And so on...
2017/11/15
[ "https://Stackoverflow.com/questions/47301581", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4093535/" ]
If need loop (slow if large data): ``` for i, x in population.iterrows(): print (train * x.values) feature0 feature1 feature2 feature3 feature4 feature5 0 18.279579 -3.921346 0.0 -0.0 -0.0 -18.265003 1 17.899545 -15.503942 -0.0 -0.0 -0.0 4.398419 4 16.432750 -22.490190 -0.0 -0.0 -0.0 -2.433374 5 15.905368 -4.812785 0.0 0.0 0.0 -1.074326 6 16.991823 -15.946251 0.0 0.0 0.0 -1.482333 feature0 feature1 feature2 feature3 feature4 feature5 0 0.0 -3.921346 0.0 -7.250185 -0.0 -0.0 1 0.0 -15.503942 -0.0 -0.053619 -0.0 0.0 4 0.0 -22.490190 -0.0 -15.247781 -0.0 -0.0 5 0.0 -4.812785 0.0 3.742221 0.0 -0.0 6 0.0 -15.946251 0.0 8.057511 0.0 -0.0 feature0 feature1 feature2 feature3 feature4 feature5 0 0.0 -0.0 0.0 -0.0 -0.0 -18.265003 1 0.0 -0.0 -0.0 -0.0 -0.0 4.398419 4 0.0 -0.0 -0.0 -0.0 -0.0 -2.433374 5 0.0 -0.0 0.0 0.0 0.0 -1.074326 6 0.0 -0.0 0.0 0.0 0.0 -1.482333 feature0 feature1 feature2 feature3 feature4 feature5 0 0.0 -0.0 13.611829 -0.0 -11.773605 -18.265003 1 0.0 -0.0 -0.741729 -0.0 -6.734652 4.398419 4 0.0 -0.0 -4.611659 -0.0 -13.941488 -2.433374 5 0.0 -0.0 18.291712 0.0 3.631887 -1.074326 6 0.0 -0.0 8.299577 0.0 8.057510 -1.482333 ``` --- Or each row separately: ``` print (train * population.values[0]) feature0 feature1 feature2 feature3 feature4 feature5 0 18.279579 -3.921346 0.0 -0.0 -0.0 -18.265003 1 17.899545 -15.503942 -0.0 -0.0 -0.0 4.398419 4 16.432750 -22.490190 -0.0 -0.0 -0.0 -2.433374 5 15.905368 -4.812785 0.0 0.0 0.0 -1.074326 6 16.991823 -15.946251 0.0 0.0 0.0 -1.482333 ``` --- Or for MultiIndex DataFrame: ``` d = pd.concat([train * population.values[i] for i in range(population.shape[0])], keys=population.index.tolist()) print (d) feature0 feature1 feature2 feature3 feature4 feature5 0 0 18.279579 -3.921346 0.000000 -0.000000 -0.000000 -18.265003 1 17.899545 -15.503942 -0.000000 -0.000000 -0.000000 4.398419 4 16.432750 -22.490190 -0.000000 -0.000000 -0.000000 -2.433374 5 15.905368 -4.812785 0.000000 0.000000 0.000000 -1.074326 6 16.991823 -15.946251 0.000000 0.000000 0.000000 -1.482333 1 0 0.000000 -3.921346 0.000000 -7.250185 -0.000000 -0.000000 1 0.000000 -15.503942 -0.000000 -0.053619 -0.000000 0.000000 4 0.000000 -22.490190 -0.000000 -15.247781 -0.000000 -0.000000 5 0.000000 -4.812785 0.000000 3.742221 0.000000 -0.000000 6 0.000000 -15.946251 0.000000 8.057511 0.000000 -0.000000 2 0 0.000000 -0.000000 0.000000 -0.000000 -0.000000 -18.265003 1 0.000000 -0.000000 -0.000000 -0.000000 -0.000000 4.398419 4 0.000000 -0.000000 -0.000000 -0.000000 -0.000000 -2.433374 5 0.000000 -0.000000 0.000000 0.000000 0.000000 -1.074326 6 0.000000 -0.000000 0.000000 0.000000 0.000000 -1.482333 3 0 0.000000 -0.000000 13.611829 -0.000000 -11.773605 -18.265003 1 0.000000 -0.000000 -0.741729 -0.000000 -6.734652 4.398419 4 0.000000 -0.000000 -4.611659 -0.000000 -13.941488 -2.433374 5 0.000000 -0.000000 18.291712 0.000000 3.631887 -1.074326 6 0.000000 -0.000000 8.299577 0.000000 8.057510 -1.482333 ``` And select by [`xs`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.xs.html): ``` print (d.xs(0)) feature0 feature1 feature2 feature3 feature4 feature5 0 18.279579 -3.921346 0.0 -0.0 -0.0 -18.265003 1 17.899545 -15.503942 -0.0 -0.0 -0.0 4.398419 4 16.432750 -22.490190 -0.0 -0.0 -0.0 -2.433374 5 15.905368 -4.812785 0.0 0.0 0.0 -1.074326 6 16.991823 -15.946251 0.0 0.0 0.0 -1.482333 ```
Once you set the columns of `population` to match `train` you can use `*`: ``` In [11]: population.columns = train.columns In [12]: train * population.iloc[0] Out[12]: feature0 feature1 feature2 feature3 feature4 feature5 0 18.279579 -3.921346 0.0 -0.0 -0.0 -18.265003 1 17.899545 -15.503942 -0.0 -0.0 -0.0 4.398419 4 16.432750 -22.490190 -0.0 -0.0 -0.0 -2.433374 5 15.905368 -4.812785 0.0 0.0 0.0 -1.074326 6 16.991823 -15.946251 0.0 0.0 0.0 -1.482333 ``` --- You can make a MultiIndex (as recommended by @jezrael) very efficiently using `np.tile` and `np.repeat`: ``` In [11]: res = population.iloc[np.repeat(np.arange(len(population)), len(train))] In [12]: res = res.set_index(np.tile(train.index, len(population)), append=True) In [13]: res Out[13]: feature0 feature1 feature2 feature3 feature4 feature5 0 0 1 1 0 0 0 1 1 1 1 0 0 0 1 4 1 1 0 0 0 1 5 1 1 0 0 0 1 6 1 1 0 0 0 1 1 0 0 1 0 1 0 0 1 0 1 0 1 0 0 4 0 1 0 1 0 0 5 0 1 0 1 0 0 6 0 1 0 1 0 0 2 0 0 0 0 0 0 1 1 0 0 0 0 0 1 4 0 0 0 0 0 1 5 0 0 0 0 0 1 6 0 0 0 0 0 1 3 0 0 0 1 0 1 1 1 0 0 1 0 1 1 4 0 0 1 0 1 1 5 0 0 1 0 1 1 6 0 0 1 0 1 1 In [14]: res.mul(train, level=1) Out[14]: feature0 feature1 feature2 feature3 feature4 feature5 0 0 18.279579 -3.921346 0.000000 -0.000000 -0.000000 -18.265003 1 17.899545 -15.503942 -0.000000 -0.000000 -0.000000 4.398419 4 16.432750 -22.490190 -0.000000 -0.000000 -0.000000 -2.433374 5 15.905368 -4.812785 0.000000 0.000000 0.000000 -1.074326 6 16.991823 -15.946251 0.000000 0.000000 0.000000 -1.482333 1 0 0.000000 -3.921346 0.000000 -7.250185 -0.000000 -0.000000 1 0.000000 -15.503942 -0.000000 -0.053619 -0.000000 0.000000 4 0.000000 -22.490190 -0.000000 -15.247781 -0.000000 -0.000000 5 0.000000 -4.812785 0.000000 3.742221 0.000000 -0.000000 6 0.000000 -15.946251 0.000000 8.057511 0.000000 -0.000000 2 0 0.000000 -0.000000 0.000000 -0.000000 -0.000000 -18.265003 1 0.000000 -0.000000 -0.000000 -0.000000 -0.000000 4.398419 4 0.000000 -0.000000 -0.000000 -0.000000 -0.000000 -2.433374 5 0.000000 -0.000000 0.000000 0.000000 0.000000 -1.074326 6 0.000000 -0.000000 0.000000 0.000000 0.000000 -1.482333 3 0 0.000000 -0.000000 13.611829 -0.000000 -11.773605 -18.265003 1 0.000000 -0.000000 -0.741729 -0.000000 -6.734652 4.398419 4 0.000000 -0.000000 -4.611659 -0.000000 -13.941488 -2.433374 5 0.000000 -0.000000 18.291712 0.000000 3.631887 -1.074326 6 0.000000 -0.000000 8.299577 0.000000 8.057510 -1.482333 ```
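Since a genetic algorithm re-evaluates the whole population every generation, the per-row loop can become the bottleneck. As a vectorized sketch of my own (illustrative random data in place of the question's frames), NumPy broadcasting computes every population×train product in one step:

```python
import numpy as np
import pandas as pd

train = pd.DataFrame(np.random.randn(5, 6),
                     columns=['feature%d' % i for i in range(6)])
population = pd.DataFrame(np.random.randint(0, 2, size=(4, 6)))

# .values sidesteps column-label alignment (population uses 0..5, train feature0..5);
# shapes (P, 1, F) * (1, N, F) broadcast to (P, N, F): one masked train per individual
masked = population.values[:, None, :] * train.values[None, :, :]

# masked[i] holds the same numbers as train * population.iloc[i] from the loop
print(masked[0])
```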
5,852
45,657,365
I just started Python like a week ago, and now I am stuck on a question about rolling dice. This is a question my friend sent me yesterday, and I have just no idea how to solve it myself.

> Imagine you are playing a board game. You roll a 6-faced dice and move forward the same number of spaces that you rolled. If the finishing point is “n” spaces away from the starting point, please implement a program that calculates how many possible ways there are to arrive exactly at the finishing point.

So it seems I should make a function with a parameter "n"; when n is a certain value, let's say 10, it should tell us how many possibilities there are to land exactly 10 spaces away from the starting point. I suppose this has something to do with "compositions", but I am not sure how it should be coded in Python. Please, Python masters!
2017/08/13
[ "https://Stackoverflow.com/questions/45657365", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6891099/" ]
This is one way to compute the result that is exact, and uses neither iteration nor recursion: ``` def ways(n): A = 3**(n+6) M = A**6 - A**5 - A**4 - A**3 - A**2 - A - 1 return pow(A, n+6, M) % A for i in xrange(20): print i, '->', ways(i) ``` The output is in agreement with <https://oeis.org/A001592> ``` 0 -> 1 1 -> 1 2 -> 2 3 -> 4 4 -> 8 5 -> 16 6 -> 32 7 -> 63 8 -> 125 9 -> 248 10 -> 492 11 -> 976 12 -> 1936 13 -> 3840 14 -> 7617 15 -> 15109 16 -> 29970 17 -> 59448 18 -> 117920 19 -> 233904 ```
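For a first-pass solution that is easier to follow (a sketch of my own, not part of the answer above), the same numbers come out of a simple dynamic programme over the target distance, since the ways to reach space `i` are the ways to reach any of the six previous spaces:

```python
def ways_dp(n):
    # table[i] = number of dice sequences summing exactly to i
    table = [1] + [0] * n  # one way to cover distance 0: roll nothing
    for i in range(1, n + 1):
        table[i] = sum(table[i - face] for face in range(1, 7) if face <= i)
    return table[n]

# agrees with the exact-arithmetic answer above and with OEIS A001592
assert [ways_dp(i) for i in range(8)] == [1, 1, 2, 4, 8, 16, 32, 63]
```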
Sorry, I'm not an expert in Python, but Java can solve this; you can easily translate it to the language you want.

**First idea, using recursion:**

The idea is to create all possible combinations in a GameTree; whenever a path reaches the requested sum, we increment our counter (a static `count` field, assumed to be declared elsewhere).

```
public class GameTree {
    public int value;
    public GameTree[] childs;

    public GameTree(int value) {
        this.value = value;
    }

    public GameTree(int value, GameTree[] childs) {
        this.value = value;
        this.childs = childs;
    }
}
```

To limit memory use, I prune any subtree whose sum already exceeds our target ([like the Alpha–beta pruning algorithm](https://en.wikipedia.org/wiki/Alpha%E2%80%93beta_pruning)):

```
static void generateGameTreeRecursive(String path, GameTree node, int winnerScore, int currentScore) throws InterruptedException {
    // Build the path
    if(node.value != 0)// We exclude the root node
        path += " " + String.valueOf(node.value);

    if (winnerScore <= currentScore) {
        // release the current node (prevents Java heap space errors)
        node = null;
        // Add the winning route
        count++;
        // Finished with this node
        return;
    }
    else{
        // create the children
        node.childs = new GameTree[6];
        for (int i = 0; i < 6; i++) {
            // Generate the possible values for the children
            node.childs[i] = new GameTree(i+1);
            // Recursion for each child
            generateGameTreeRecursive(path, node.childs[i], winnerScore, currentScore + i + 1);
        }
    }
}
```

**Second idea, using iteration:**

This solution is more elegant; just don't ask me how I found it :)

```
// Returns the number of ways to reach score n
static int getCombinaison(int n) {
    int[] table = new int[n+1]; // table[i] will store the count of ways to reach score i

    // Base case (if the given value is 0)
    table[0] = 1;

    // One by one consider each of the 6 possible die faces and update the
    // table[] values for every index greater than or equal to the face value
    for (int j=1; j<7; j++) {
        for (int i=j; i<=n; i++)
            table[i] += table[i-j];
    }

    return table[n];
}
```
5,854
64,507,361
I am trying to write a SQL query that helps me find the unique set of "Numbers" that show up in a specific column. For example, in a `select *` query, the column I want can look like this:

```
Num_Option
9000
9001
9000,9001,9002
8080
8080,8000,8553
```

I also have another field, `date_available`, which is a date/time. Basically, what I want is something where I can group by `date_available` while combining all the Num_Options on that date, like this:

```
Num_Option                  date_available
9000,9001,9002,8080         10/22/2020
9000,9002,8080,8000,8553    10/23/2020
```

I am struggling to figure this out. I have gotten to the point of possibly using a Python script and matplotlib instead, but I am hoping there is a SQL way of handling this as well.
2020/10/23
[ "https://Stackoverflow.com/questions/64507361", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3973837/" ]
In Postgres, you can use `regexp_split_to_table()` in a lateral join to turn the csv elements into rows, then `string_agg()` (with `distinct`, so each number appears once per date) to aggregate by date:

```
select string_agg(distinct x.num, ',') num_option, t.date_available
from mytable t
cross join lateral regexp_split_to_table(t.num_option, ',') x(num)
group by date_available
```

Of course, this assumes that you want to avoid duplicate nums on the same date (otherwise, there is no need to split; you can aggregate directly).
You may just be able to use `string_agg()`: ``` select date_available, string_agg(num_option, ',') from t group by date_available; ```
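Since the question mentions falling back to a Python script, here is a hedged pandas sketch of my own (assuming pandas 0.25+ for `explode`, and illustrative data standing in for the real table) that splits the csv column, dedupes, and regroups by date:

```python
import pandas as pd

df = pd.DataFrame({
    "Num_Option": ["9000", "9001", "9000,9001,9002", "8080", "8080,8000,8553"],
    "date_available": ["10/22/2020", "10/22/2020", "10/22/2020",
                       "10/23/2020", "10/23/2020"],
})

# one row per (number, date) pair
exploded = df.assign(Num_Option=df["Num_Option"].str.split(",")).explode("Num_Option")

result = (exploded.drop_duplicates()                 # unique number per date
                  .groupby("date_available")["Num_Option"]
                  .agg(",".join)
                  .reset_index())
print(result)
```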
5,859
8,560,320
```
>>> False in [0]
True
>>> type(False) == type(0)
False
```

The reason I stumbled upon this: for my unit testing I created lists of valid and invalid example values for each of my types (by 'my types' I mean they are not 100% equal to the Python types). So I want to iterate over the list of all values and expect a value to pass if it is among my valid values and, on the other hand, to fail if it is not.

That does not work so well now:

```
>>> valid_values = [-1, 0, 1, 2, 3]
>>> invalid_values = [True, False, "foo"]
>>> for value in valid_values + invalid_values:
...     if value in valid_values:
...         print 'valid value:', value
...
valid value: -1
valid value: 0
valid value: 1
valid value: 2
valid value: 3
valid value: True
valid value: False
```

Of course I disagree with the last two 'valid' values. Does this mean I really have to iterate through my valid_values and compare the type?
2011/12/19
[ "https://Stackoverflow.com/questions/8560320", "https://Stackoverflow.com", "https://Stackoverflow.com/users/532373/" ]
The problem is not missing type checking; it is that in Python `bool` is a subclass of `int`. Try this:

```
>>> False == 0
True
>>> isinstance(False, int)
True
```
According to the [documentation](http://docs.python.org/reference/datamodel.html#object.__contains__), `__contains__` is implemented by iterating over the collection and testing elements with `==`. Hence the actual problem is caused by the fact that `False == 0` is `True`.
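To answer the question's closing point directly: if `bool` values must not match ints, the membership test does have to compare types as well. A small sketch of my own (not part of either answer):

```python
def strict_in(value, values):
    # membership that treats True/False as distinct from 1/0
    return any(type(value) is type(v) and value == v for v in values)

valid_values = [-1, 0, 1, 2, 3]
print(strict_in(False, valid_values))  # False
print(strict_in(0, valid_values))      # True
```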
5,861
15,371,643
I am receiving this error in the auth.log file when authenticating users for vsftpd with pam_python on Ubuntu (13.04 development branch):

```
vsftpd[1]: PAM audit_log_acct_message() failed: Operation not permitted
```

vsftpd then says the password is wrong when attempting to connect. Here is the full section from the auth.log file:

```
vsftpd[1]: pam_auth.py(9): pam_sm_authenticate()
vsftpd[1]: pam_auth.py(9): get_user_base_dir()
vsftpd[1]: pam_auth.py(9): auth_user()
vsftpd[1]: pam_auth.py(9): get_user_base_dir()
vsftpd[1]: pam_auth.py(9): verify_password()
vsftpd[1]: pam_auth.py(5): LOGIN: dev
vsftpd[1]: PAM audit_log_acct_message() failed: Operation not permitted
```

Now, this is not normal at all: `LOGIN: dev` is output when the account `dev` is properly authenticated, so it should authenticate me (or the Python script should give out an error). Here is a healthy output from another server with the exact same configuration:

```
vsftpd[11037]: pam_auth.py(9): pam_sm_authenticate()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(9): auth_user()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(9): verify_password()
vsftpd[11037]: pam_auth.py(5): LOGIN: dev
vsftpd[11037]: pam_auth.py(9): pam_sm_acct_mgmt()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(9): pam_sm_setcred()
vsftpd[11037]: pam_auth.py(9): get_user_base_dir()
vsftpd[11037]: pam_auth.py(5): /home/dev/downloads/
```

The only thing different about this server is that it is running a different kernel (it is from a different datacenter than usual). The kernel is normally:

```
Linux sb16 3.2.13-grsec-xxxx-grs-ipv6-64 #1 SMP Thu Mar 29 09:48:59 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
```

whereas the kernel on the server where I can't get PAM to work is:

```
Linux sb17 3.8.0-12-generic #21-Ubuntu SMP Thu Mar 7 19:08:49 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
```

There is definitely something going wrong, but the only error that I can see anywhere is the `audit_log_acct_message() failed` message. When I try the Python script directly, it reports success too:

```
$ pam_auth.py dev test
success
```

What could be causing this? And how can I fix it or get around it?
2013/03/12
[ "https://Stackoverflow.com/questions/15371643", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2112652/" ]
Protected fields can be inherited, but cannot be accessed from outside the class, so something like `echo $q->protectedQ;` will fail. Private fields can be neither accessed from outside nor inherited.
Protected functions make your class more flexible. Think of a class that somewhere has to load some data. It has a default implementation, which reads the data from a file. If you want to use the same class but change the way it gets its data, you can create a subclass and override the getData() function.
5,866
67,728,723
There is a really old thread on Stack Overflow here, [Getting 'DatabaseOperations' object has no attribute 'geo_db_type' error when doing a syncdb](https://stackoverflow.com/questions/12538510/getting-databaseoperations-object-has-no-attribute-geo-db-type-error-when-do), but the difference between my issue and theirs is that my containers have PostGIS and Postgres installed. Specifically, I used QGIS, and the image is like so:

```
db:
  image: kartoza/postgis:13.0
  volumes:
    - postgis-data:/var/lib/postgresql
```

So locally I have two docker images: one is web and the other is kartoza/postgis.

I also have this in the settings.py file:

```
import dj_database_url
db_from_env = dj_database_url.config(conn_max_age=500)
DATABASES['default'].update(db_from_env)
```

which should support the GIS data. I see all my GIS and geolocation packages installed with no issues. But I am getting the above error when I run `heroku run python manage.py migrate`.

The website runs with very limited functionality, as the geo variables are needed to get you past the landing page.

The steps I have taken to deploy are:

```
heroku create appname
heroku stack:set container -a appname
heroku addons:create heroku-postgresql:hobby-dev -a appname
heroku git:remote -a appname
git push heroku main
```

**EDIT**

The db url on heroku is `postgres://foobar:3242q34rq2rq32rf3q2rfq2q2r3vq23rvq23vr@er3-13-234-91-69.compute-`

I have also run the command below, which shows that the db now accepts GIS, but I still get the error:

```
$ heroku pg:psql
create extension postgis;
```
2021/05/27
[ "https://Stackoverflow.com/questions/67728723", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6046858/" ]
> > Process exited ... with return value 3221225620 > > > 3221225620 is the numeric code for a `STATUS_INTEGER_DIVIDE_BY_ZERO` (0xC0000094) exception, which means your code is crashing before it has a chance to print out its messages. On this line: ```cpp avg=sum/n; ``` `n` is 0 at this point, because the `while` loop above it had decremented `n` on each iteration until `n` reached 0. To avoid that, change the `while` loop to not modify `n` anymore: ```cpp while (i <= n) { ... i++; } ``` Or, use a `for` loop instead: ```cpp for (int i = 1; i <= n; ++i) { ... } ```
You're dividing by 0. Since you're modifying `n` until it reaches 0 and then using it as the divisor, it's better to iterate with another variable instead of reducing the value of `n`. So the problem isn't that it's not printing; it's that the program dies before reaching that point.
5,873
3,788,208
I'm running this simple code:

```
import threading, time

class reqthread(threading.Thread):
    def run(self):
        for i in range(0, 10):
            time.sleep(1)
            print('.')

try:
    thread = reqthread()
    thread.start()
except (KeyboardInterrupt, SystemExit):
    print('\n! Received keyboard interrupt, quitting threads.\n')
```

But when I run it, it prints

```none
$ python prova.py
.
.
^C.
.
.
.
.
.
.
.
Exception KeyboardInterrupt in <module 'threading' from '/usr/lib/python2.6/threading.pyc'> ignored
```

In fact, the Python thread ignores my `Ctrl`+`C` keyboard interrupt and doesn't print `Received Keyboard Interrupt`. Why? What is wrong with this code?
2010/09/24
[ "https://Stackoverflow.com/questions/3788208", "https://Stackoverflow.com", "https://Stackoverflow.com/users/39796/" ]
Try ``` try: thread=reqthread() thread.daemon=True thread.start() while True: time.sleep(100) except (KeyboardInterrupt, SystemExit): print '\n! Received keyboard interrupt, quitting threads.\n' ``` Without the call to `time.sleep`, the main process is jumping out of the `try...except` block too early, so the `KeyboardInterrupt` is not caught. My first thought was to use `thread.join`, but that seems to block the main process (ignoring KeyboardInterrupt) until the `thread` is finished. `thread.daemon=True` causes the thread to terminate when the main process ends.
To summarize the changes recommended in [the](https://stackoverflow.com/questions/3788208/python-threading-ignores-keyboardinterrupt-exception#comment16625542_3788243) [comments](https://stackoverflow.com/questions/3788208/python-threading-ignores-keyboardinterrupt-exception#comment28203538_3788243), the following works well for me: ``` try: thread = reqthread() thread.start() while thread.isAlive(): thread.join(1) # not sure if there is an appreciable cost to this. except (KeyboardInterrupt, SystemExit): print '\n! Received keyboard interrupt, quitting threads.\n' sys.exit() ```
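Putting both answers together into one self-contained script (a sketch; the daemon flag is strictly optional when the join loop is used, but it guarantees the thread dies with the main process):

```python
import sys
import threading
import time

class ReqThread(threading.Thread):
    def run(self):
        for i in range(10):
            time.sleep(1)
            print '.'

try:
    thread = ReqThread()
    thread.daemon = True   # let the process exit even if the thread is still running
    thread.start()
    while thread.isAlive():
        thread.join(1)     # the timeout keeps the main thread interruptible
except (KeyboardInterrupt, SystemExit):
    print '\n! Received keyboard interrupt, quitting threads.\n'
    sys.exit()
```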
5,874
25,888,396
I am trying to retrieve the longitude and latitude of a physical address through the script below, but I am getting the error shown. I have already installed googlemaps. Kindly reply; thanks in advance.

```
#!/usr/bin/env python
import urllib,urllib2
"""This Programs Fetch The Address"""
from googlemaps import GoogleMaps
address='Mahatma Gandhi Rd, Shivaji Nagar, Bangalore, KA 560001'
add=GoogleMaps().address_to_latlng(address)
print add
```

**Output:**

```
Traceback (most recent call last):
  File "Fetching.py", line 12, in <module>
    add=GoogleMaps().address_to_latlng(address)
  File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 310, in address_to_latlng
    return tuple(self.geocode(address)['Placemark'][0]['Point']['coordinates'][1::-1])
  File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 259, in geocode
    url, response = fetch_json(self._GEOCODE_QUERY_URL, params=params)
  File "/usr/local/lib/python2.7/dist-packages/googlemaps.py", line 50, in fetch_json
    response = urllib2.urlopen(request)
  File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
    return _opener.open(url, data, timeout)
  File "/usr/lib/python2.7/urllib2.py", line 407, in open
    response = meth(req, response)
  File "/usr/lib/python2.7/urllib2.py", line 520, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib/python2.7/urllib2.py", line 445, in error
    return self._call_chain(*args)
  File "/usr/lib/python2.7/urllib2.py", line 379, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 528, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden
```
2014/09/17
[ "https://Stackoverflow.com/questions/25888396", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3008712/" ]
The googlemaps package you are using is not an official one and does not use Google Maps API v3, which is the latest one from Google. You can use Google's [geocode REST API](https://developers.google.com/maps/documentation/geocoding/) to fetch coordinates from an address. Here's an example:

```
import requests
response = requests.get('https://maps.googleapis.com/maps/api/geocode/json?address=1600+Amphitheatre+Parkway,+Mountain+View,+CA')
resp_json_payload = response.json()
print(resp_json_payload['results'][0]['geometry']['location'])
```
For a Python script that does not require an API key or any geocoding-specific library, you can query the Nominatim service, which in turn queries the OpenStreetMap database. For more information on how to use it see <https://nominatim.org/release-docs/develop/api/Search/>. A simple example is below:

```
import requests
import urllib.parse

address = 'Shivaji Nagar, Bangalore, KA 560001'
url = 'https://nominatim.openstreetmap.org/search/' + urllib.parse.quote(address) + '?format=json'
response = requests.get(url).json()
print(response[0]["lat"])
print(response[0]["lon"])
```
5,883
40,616,527
I'm trying to build caffe with python but it keeps saying this:

```
CXX/LD -o python/caffe/_caffe.so python/caffe/_caffe.cpp
/usr/bin/ld: cannot find -lboost_python3
collect2: error: ld returned 1 exit status
make: *** [python/caffe/_caffe.so] Error 1
```

This is what I get when I try to locate `boost_python`:

```
$ sudo locate boost_python
/usr/lib/x86_64-linux-gnu/libboost_python-py27.a
/usr/lib/x86_64-linux-gnu/libboost_python-py27.so
/usr/lib/x86_64-linux-gnu/libboost_python-py27.so.1.55.0
/usr/lib/x86_64-linux-gnu/libboost_python-py33.a
/usr/lib/x86_64-linux-gnu/libboost_python-py33.so
/usr/lib/x86_64-linux-gnu/libboost_python-py33.so.1.55.0
/usr/lib/x86_64-linux-gnu/libboost_python-py34.a
/usr/lib/x86_64-linux-gnu/libboost_python-py34.so
/usr/lib/x86_64-linux-gnu/libboost_python-py34.so.1.55.0
/usr/lib/x86_64-linux-gnu/libboost_python.a
/usr/lib/x86_64-linux-gnu/libboost_python.so
```

I've added this path as well:

```
## .bashrc
export LD_LIBRARY_PATH="/usr/lib/x86_64-linux-gnu":$LD_LIBRARY_PATH
```

Any idea why that is happening?
2016/11/15
[ "https://Stackoverflow.com/questions/40616527", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4258576/" ]
I've found the problem. It turned out that the linker looks for a library literally named `libboost_python3.so`. After changing the name in Makefile.config from `boost_python3` to `boost_python-py34`, it worked just fine!
I know this thread is quite old, but:

```
dnf install boost-python3-devel
```

may help!
5,893
47,966,556
I'm confused by the groups in a regex from the book *Automate the Boring Stuff with Python: Practical Programming for Total Beginners*. The regex is as follows:

```
#! python3
# phoneAndEmail.py - Finds phone numbers and email addresses on the clipboard
# The data of paste from: https://www.nostarch.com/contactus.html

import pyperclip, re

phoneRegex = re.compile(r'''(
    (\d{3}|\(\d{3}\))?              # area code
    (\s|-|\.)?                      # separator
    (\d{3})                         # first 3 digits
    (\s|-|\.)                       # separator
    (\d{4})                         # last 4 digits
    (\s*(ext|x|ext.)\s*(\d{2,5}))?  # extension
    )''', re.VERBOSE
)

# TODO: Create email regex.
emailRegex = re.compile(r'''(
    [a-zA-Z0-9._%+-]+     # username
    @                     # @ symbol
    [a-zA-Z0-9.-]+        # domain name
    (\.[a-zA-Z]{2,4})     # dot-something
    )''', re.VERBOSE)

# TODO: Find matches in clipboard text.
text = str(pyperclip.paste())
matches = []
for groups in phoneRegex.findall(text):
    phoneNum = '-'.join([groups[1], groups[3], groups[5]])
    if groups[8] != '':
        phoneNum += ' x' + groups[8]
    matches.append(phoneNum)
    print(groups[0])
for groups in emailRegex.findall(text):
    matches.append(groups[0])

# TODO: Copy results to the clipboard.
if len(matches) > 0:
    pyperclip.copy('\n'.join(matches))
    print('Copied to clipboard:')
    print('\n'.join(matches))
else:
    print('No phone number or email addresses found.')
```

I am confused about `groups[1]`/`groups[2]`/…/`groups[8]`, how many groups there are in phoneRegex, and what the difference is between `groups()` and `groups[]`. The data pasted is from <https://www.nostarch.com/contactus.html>.
2017/12/25
[ "https://Stackoverflow.com/questions/47966556", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9135962/" ]
Regexes can have *groups*. They are denoted by `()`. Groups can be used to extract a part of the match which might be useful. In the phone number regex, for example, there are 9 groups:

```
Group   Subpattern
 1      ((\d{3}|\(\d{3}\))?(\s|-|\.)?(\d{3})(\s|-|\.)(\d{4})(\s*(ext|x|ext.)\s*(\d{2,5}))?)
 2      (\d{3}|\(\d{3}\))?
 3      (\s|-|\.)?
 4      (\d{3})
 5      (\s|-|\.)
 6      (\d{4})
 7      (\s*(ext|x|ext.)\s*(\d{2,5}))?
 8      (ext|x|ext.)
 9      (\d{2,5})
```

Note how each group is enclosed in `()`s. The `groups[x]` is just referring to the string matched by a particular group: `groups[0]` means the string matched by group 1, `groups[1]` means the string matched by group 2, etc.
In a regex, parentheses `()` create what is called a **capturing group**. Each group is assigned a number, starting with 1. For example:

```
In [1]: import re

In [2]: m = re.match('([0-9]+)([a-z]+)', '123xyz')

In [3]: m.group(1)
Out[3]: '123'

In [4]: m.group(2)
Out[4]: 'xyz'
```

Here, `([0-9]+)` is the first capturing group, and `([a-z]+)` is the second capturing group. When you apply the regex, the first capturing group ends up "capturing" the string `123` (since that's the part it matches), and the second part captures `xyz`. With `findall`, it searches the string for all places where the regex matches, and for each match, it returns the captured groups as a tuple. I'd encourage you to play with it a bit in `ipython` to understand how it works. Also check the docs: <https://docs.python.org/3.6/library/re.html#re.findall>
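To make the `findall` behaviour concrete, here is a minimal sketch (my own toy pattern, not the book's phone regex):

```
import re

# Two capturing groups: findall returns one tuple per match,
# containing one string per group.
pairs = re.findall(r'([a-z]+)=(\d+)', 'x=1 y=22 z=333')
print(pairs)  # [('x', '1'), ('y', '22'), ('z', '333')]
```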
5,895
59,591,632
I am currently trying to use the [dynamodb](https://www.npmjs.com/package/dynamodb) package to get the position/rank in a descending query on an index; the following is what I have come up with:

```
router.get('/' + USER_ROUTE + '/top', (req, res) => {
    POST.query()
        .usingIndex('likecount')
        .attributes(['id', 'likecount'])
        .descending()
        .loadAll()
        .exec((error, result) => {
            if (error) {
                res.status(400).json({ error: 'Error retrieving most liked post' });
            }
            // convert res to dict and sort dict by value (likecount)
            res.json({ position: Object.keys(result).indexOf(req.body.id) });
        });
});
```

As you can see, I convert the result into a dict of ID and likecount, sort the dict by value (likecount), and then get the index of the key I am looking for. Obviously this fails in multiple respects: it is slow/inefficient (it iterates over every item in the database per call) and requires multiple steps. Is there a more succinct method to achieve this? Thanks.
2020/01/04
[ "https://Stackoverflow.com/questions/59591632", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4134377/" ]
It is not possible in Delta Lake up to and including 0.5.0. There's an issue to track this at <https://github.com/delta-io/delta/issues/294>. Feel free to upvote that to help get it prioritized.

---

Just a day after, Google posted [Getting started with new table formats on Dataproc](https://cloud.google.com/blog/products/data-analytics/getting-started-with-new-table-formats-on-dataproc):

> We’re announcing that table format projects Delta Lake and Apache Iceberg (Incubating) are now available in the latest version of Cloud Dataproc (version 1.5 Preview). You can start using them today with either Spark or Presto. Apache Hudi is also available on Dataproc 1.3.
It's possible. Here's a sample and the libraries that you need. Make sure to set your credential first; you can do it either as part of the code or as an environment variable:

```
export GOOGLE_APPLICATION_CREDENTIALS={gcs-key-path.json}
```

```
import org.apache.spark.sql.{SparkSession, DataFrame}
import com.google.cloud.spark.bigquery.repackaged.com.google.cloud.bigquery.BigQueryException
import com.google.cloud.spark.bigquery.repackaged.com.google.cloud.bigquery.BigQueryOptions
import com.google.cloud.spark.bigquery.repackaged.com.google.cloud.bigquery.DatasetInfo

spark.conf.set("parentProject", {Proj})
spark.conf.set("spark.hadoop.fs.gs.auth.service.account.enable", "true")
spark.conf.set("spark.hadoop.fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem")
spark.conf.set("fs.AbstractFileSystem.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS")
spark.conf.set("spark.delta.logStore.gs.impl", "io.delta.storage.GCSLogStore")
spark.conf.set("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")

val targetTablePath = "gs://{bucket}/{dataset}/{tablename}"

spark.range(5, 10).write.format("delta")
  .mode("overwrite")
  .save(targetTablePath)
```

Libraries that you need:

```
"io.delta" % "delta-core_2.12" % "1.0.0",
"io.delta" % "delta-contribs_2.12" % "1.0.0",
"com.google.cloud.spark" % "spark-bigquery-with-dependencies_2.12" % "0.21.1",
"com.google.cloud.bigdataoss" % "gcs-connector" % "1.9.4-hadoop3"
```

Checking my delta files in GCS:

```
$ gsutil ls gs://r-dps-datapipeline-dev/testoliver/oliver_sample_delta3
gs://r-dps-datapipeline-dev/testoliver/oliver_sample_delta3/
gs://r-dps-datapipeline-dev/testoliver/oliver_sample_delta3/part-00000-ce79bfc7-e28f-4929-955c-56a7a08caf9f-c000.snappy.parquet
gs://r-dps-datapipeline-dev/testoliver/oliver_sample_delta3/part-00001-dda0bd2d-a081-4444-8983-ac8f3a2ffe9d-c000.snappy.parquet
gs://r-dps-datapipeline-dev/testoliver/oliver_sample_delta3/part-00002-93f7429b-777a-42f4-b2dd-adc9a482a6e8-c000.snappy.parquet
gs://r-dps-datapipeline-dev/testoliver/oliver_sample_delta3/part-00003-e9874baf-6c0b-46de-891e-032ac8b67287-c000.snappy.parquet
gs://r-dps-datapipeline-dev/testoliver/oliver_sample_delta3/part-00004-ede54816-2da1-412f-a9e3-5233e77258fb-c000.snappy.parquet
gs://r-dps-datapipeline-dev/testoliver/oliver_sample_delta3/_delta_log/
gs://r-dps-datapipeline-dev/testoliver/oliver_sample_delta3/_symlink_format_manifest/
```
5,896
48,464,693
So, I created a Python program, converted it to an exe using Py2Exe, and tried PyInstaller and cx\_Freeze as well. All of these cause the program to be detected as a virus by Avast, AVG, and others on VirusTotal and on my local machine. I tried switching to a Hello World script to see if the problem lies there, but the results are exactly the same. My question is: what is triggering this detection? The way in which the .exe is created? If so, are there any other alternatives to Py2Exe, PyInstaller, cx\_Freeze?
2018/01/26
[ "https://Stackoverflow.com/questions/48464693", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9273160/" ]
You can try nuitka.

```
pip install -U nuitka
```

Example:

```
nuitka --recurse-all --icon=app.ico --portable helloworld.py
```

Website: <http://nuitka.net/>

Maybe you need to install Visual C++ 2015 Build Tools to compile: <http://landinghub.visualstudio.com/visual-cpp-build-tools>
If you download the Nuitka package, you will find Trojan files in the folder. If you use this library, you will create an exe file with a Trojan embedded in it. It converts files much faster than other similar libraries, with no errors.
5,897
60,421,328
I am trying to import a Python dictionary from models and manipulate/print its properties in JavaScript. However, nothing seems to print out and I don't receive any error warnings.

**Views.py**

```
from chesssite.models import Chess_board
import json

def chess(request):
    board = Chess_board()
    data = json.dumps(board.rep)
    return render(request, 'home.html', {'board': data})
```

Here `board.rep` is a Python dictionary, {"0a":0, "0b":0, "0c":"K0"} - basically a chess board.

**home.html**

```
<html>
<body>
{% block content %}
<script>
    for (x in {{board}}) {
        document.write(x)
    }
</script>
{% endblock %}
</body>
</html>
```

I would also very much appreciate some debugging tips!

Thanks in advance, Alex
2020/02/26
[ "https://Stackoverflow.com/questions/60421328", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12315848/" ]
You can utilize the [z algorithm](https://codeforces.com/blog/entry/3107), a linear-time (*O*(n)) algorithm that:

> Given a string *S* of length n, the Z Algorithm produces an array *Z* where *Z[i]* is the length of the longest substring starting from *S[i]* which is also a prefix of *S*

You need to concatenate your arrays (*b*+*a*) and run the algorithm on the resulting constructed array until the first *i* such that *Z[i]* + *i* == *m* + *n*. For example, for *a* = [1, 2, 3, 6, 2, 3] & *b* = [2, 3, 6, 2, 1, 0], the concatenation would be [2, 3, 6, 2, 1, 0, 1, 2, 3, 6, 2, 3], which would yield *Z[10]* = 2, fulfilling *Z[i]* + *i* = 12 = *m* + *n*.
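For illustration, a minimal Python sketch of this approach (my own code, assuming the task is finding the longest suffix of `a` that equals a prefix of `b`):

```
def z_array(s):
    """Z[i] = length of the longest substring starting at s[i]
    that is also a prefix of s (classic linear-time Z algorithm)."""
    n = len(s)
    z = [0] * n
    z[0] = n
    l = r = 0
    for i in range(1, n):
        if i < r:
            z[i] = min(r - i, z[i - l])
        while i + z[i] < n and s[z[i]] == s[i + z[i]]:
            z[i] += 1
        if i + z[i] > r:
            l, r = i, i + z[i]
    return z

def overlap(a, b):
    """Longest suffix of a that equals a prefix of b."""
    s = b + a                      # concatenate as described above
    z = z_array(s)
    n = len(s)
    for i in range(len(b), n):     # positions that start inside the 'a' part
        if i + z[i] == n:
            return z[i]            # first hit is the longest such overlap
    return 0

print(overlap([1, 2, 3, 6, 2, 3], [2, 3, 6, 2, 1, 0]))  # 2
```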
For O(n) time/space complexity, the trick is to evaluate hashes for each subsequence. Consider the array `b`:

```
[b1 b2 b3 ... bn]
```

Using [Horner's method](https://en.wikipedia.org/wiki/Horner%27s_method), you can evaluate all the possible hashes for each subsequence. Pick a base value `B` (bigger than any value in both of your arrays):

```
from b1 to b1 = b1 * B^1
from b1 to b2 = b1 * B^1 + b2 * B^2
from b1 to b3 = b1 * B^1 + b2 * B^2 + b3 * B^3
...
from b1 to bn = b1 * B^1 + b2 * B^2 + b3 * B^3 + ... + bn * B^n
```

Note that you can evaluate each sequence in O(1) time, using the result of the previous sequence, hence all the job costs O(n).

Now you have an array `Hb = [h(b1), h(b2), ... , h(bn)]`, where `Hb[i]` is the hash from `b1` until `bi`.

Do the same thing for the array `a`, but with a little trick:

```
from an to an   = (an * B^1)
from an-1 to an = (an-1 * B^1) + (an * B^2)
from an-2 to an = (an-2 * B^1) + (an-1 * B^2) + (an * B^3)
...
from a1 to an   = (a1 * B^1) + (a2 * B^2) + (a3 * B^3) + ... + (an * B^n)
```

You must note that, when you step from one sequence to another, you multiply the whole previous sequence by B and add the new value multiplied by B. For example:

```
from an to an = (an * B^1)

for the next sequence, multiply the previous by B:

(an * B^1) * B = (an * B^2)

now sum with the new value multiplied by B:

(an-1 * B^1) + (an * B^2)

hence:

from an-1 to an = (an-1 * B^1) + (an * B^2)
```

Now you have an array `Ha = [h(an), h(an-1), ... , h(a1)]`, where `Ha[i]` is the hash from `ai` until `an`.

Now, you can compare `Ha[d] == Hb[d]` for all `d` values from n to 1; if they match, you have your answer.

---

> **ATTENTION**: this is a hash method, the values can be large and you may have to use a [fast exponentiation method and modular arithmetics](https://www.khanacademy.org/computing/computer-science/cryptography/modarithmetic/a/fast-modular-exponentiation), which may (rarely) give you **collisions**, making this method not totally safe. A good practice is to pick a base `B` that is a really big prime number (at least bigger than the biggest value in your arrays). You should also be careful, as the limits of the numbers may overflow at each step, so you'll have to use (modulo `K`) in each operation (where `K` can be a prime bigger than `B`).

This means that two different sequences **might** have the same hash, but two equal sequences will **always** have the same hash.
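A hedged Python sketch of the same idea (my own illustration; `B` and `MOD` are arbitrarily chosen primes):

```
B = 1_000_003                  # base, assumed larger than any array value
MOD = (1 << 61) - 1            # large prime modulus to keep numbers bounded

def prefix_hashes(b):
    """Hb[i] = b[0]*B^1 + ... + b[i]*B^(i+1), all modulo MOD."""
    h, p, out = 0, B, []
    for x in b:
        h = (h + x * p) % MOD
        p = (p * B) % MOD
        out.append(h)
    return out

def suffix_hashes(a):
    """Ha[i] = a[i]*B^1 + a[i+1]*B^2 + ... + a[n-1]*B^(n-i), modulo MOD."""
    h, out = 0, []
    for x in reversed(a):
        h = (h * B + x * B) % MOD   # shift the old suffix up, add new head at B^1
        out.append(h)
    return out[::-1]

def longest_overlap(a, b):
    """Longest d such that a[-d:] == b[:d] (up to hash collisions)."""
    Ha, Hb = suffix_hashes(a), prefix_hashes(b)
    for d in range(min(len(a), len(b)), 0, -1):   # try the longest first
        if Ha[len(a) - d] == Hb[d - 1]:
            return d
    return 0

print(longest_overlap([1, 2, 3, 6, 2, 3], [2, 3, 6, 2, 1, 0]))  # 2
```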
5,898
8,997,431
A common pattern in Python is to catch an error in an upstream module and re-raise that error as something more useful.

```
try:
    config_file = open('config.ini', 'r')
except IOError:
    raise ConfigError('Give me my config, user!')
```

This will generate a stack trace of the form

```
Traceback (most recent call last):
  File "<stdin>", line 4, in <module>
__main__.ConfigError: Give me my config, user!
```

Is there any way to access the wrapped exception in order to generate a stack trace more like this?

```
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
__builtin__.IOError: File Does not exist.
Exception wrapped by:
  File "<stdin>", line 4, in <module>
__main__.ConfigError: Give me my config, user!
```

EDIT:
=====

The problem I'm trying to defeat is that some 3rd party code can wrap exceptions up to 3 times and I want to be able to determine the root cause, i.e. a generic way to inspect the exception stack and determine the root cause of an exception without having to add any extra code to 3rd party modules.
2012/01/25
[ "https://Stackoverflow.com/questions/8997431", "https://Stackoverflow.com", "https://Stackoverflow.com/users/234254/" ]
This is known as *Exception Chaining* and is supported in Python 3. PEP 3134: <http://www.python.org/dev/peps/pep-3134/>

In Python 2, the old exception is lost when you raise a new one, unless you save it in the `except` block.
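For illustration, a minimal Python 3 sketch (my own example, reusing the question's `ConfigError` and assuming that class is defined):

```
class ConfigError(Exception):
    pass

try:
    try:
        config_file = open('config.ini', 'r')
    except IOError as exc:
        # 'raise ... from ...' records exc as __cause__;
        # a bare 'raise' inside an except block records it as __context__.
        raise ConfigError('Give me my config, user!') from exc
except ConfigError as err:
    # Walk the chain back to the root cause:
    root = err
    while root.__cause__ is not None or root.__context__ is not None:
        root = root.__cause__ or root.__context__
    print(type(root), root)
```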
Use the `traceback` [module](http://docs.python.org/library/traceback.html). It will allow you to access the most recent traceback and store it in a string. For example,

```
import traceback

try:
    config_file = open('config.ini', 'r')
except IOError:
    tb = traceback.format_exc()
    raise ConfigError('Give me my config, user!', tb)
```

The "nested" traceback will be stored in `tb` and passed to `ConfigError`, where you can work with it however you want.
5,900
51,247,566
It looks like I can only change the value of mutable variables using a function, but is it possible to change immutable ones too?

### Code

```
def f(a, b):
    a += 1
    b.append('hi')

x = 1
y = ['hello']
f(x, y)
print(x, y)  # x didn't change, but y did
```

### Result

```
1 ['hello', 'hi']
```

So, my question is: is it possible to modify immutable variables using functions? If not, why? What's the reason that Python bans people from doing that?
2018/07/09
[ "https://Stackoverflow.com/questions/51247566", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9273687/" ]
In Python, the list is passed by **object reference**. Actually, everything in Python is an object, but when you pass a single variable to a function and reassign it, only a local name inside the function is rebound; in the case of a list, the local name still refers to the same list object, so in-place changes remain visible outside.

You can refer to the [link](https://robertheaton.com/2014/02/09/pythons-pass-by-object-reference-as-explained-by-philip-k-dick/).

You can check the following example for clarification:

```
def fun1(b):
    for i in range(0, len(b)):
        b[i] += 4

arr = [1, 2, 3, 4]
print("Before Passing", arr)
fun1(arr)
print("After Passing", arr)

# output
# Before Passing [1, 2, 3, 4]
# After Passing [5, 6, 7, 8]
```

If you do not want any function to change values accidentally, you can use an immutable object such as a tuple.

**Edit:** (Copy example)

We can check it by printing the id of both objects:

```
def fun(a):
    a = 5
    print(hex(id(a)))

a = 3
print(hex(id(a)))
fun(a)

# Output:
# 0x555eb8890cc0
# 0x555eb8890d00
```

But if we do it with a **List** object:

```
def fun(a):
    a.append(5)
    print(hex(id(a)))

a = [1, 2, 3]
print(hex(id(a)))
fun(a)

# Output:
# 0x7f97e1589308
# 0x7f97e1589308
```
`y` is not a value, it's just a binding to memory. When you pass it to a function, its memory address is passed to the function (call by reference). On the other hand, `x` is a value, and when you pass it to a function a new local variable is created with the same value. (At the assembly level, all parameters of a function are passed via the stack pointer: the value of `x` and the address of `y` are pushed onto the stack.)
5,903
8,933,380
I'm developing a web application in Python in which each user request makes an API call to an external service and takes about **20 seconds** to receive a response. As a result, when several concurrent requests are made, the CPU load goes crazy (>95%) with several idle processes. The server consists of a 1.6 GHz dual-core Atom 330 with 2 GB RAM. The web app is developed in Python and served through Apache with mod\_wsgi.

> My question is the following. Will a non-blocking webserver such as Tornado improve CPU load and thus handle more concurrent users (I'm also interested in why)? Can you suggest any other scalable solution?
2012/01/19
[ "https://Stackoverflow.com/questions/8933380", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1159455/" ]
You can't create an object like that - the "key" in an object literal must be a constant, not a variable or expression. If the key is a variable you need the array-like syntax instead:

```
myArray[key] = value;
```

Hence you need:

```
var data = {};  // empty object
data[$(this).attr('id')] = $(this).val();
```

However, as all of your fields are actually plain `HTMLInputElement` or `HTMLTextAreaElement` objects, you should really use this and avoid those expensive jQuery calls:

```
var data = {};  // empty object
data[this.id] = this.value;
```

I'd also question why you're creating an *array of objects* - as the keys should all be unique, I would normally expect to return a single object:

```
function formObjectBuild($group) {
    var obj = {};
    $group.find('input[type=text],textarea').each(function () {
        obj[this.id] = this.value;
    });
    return obj;
}
```
You can't build property names dynamically like that.

```
function ArrayPush($group) {
    var arr = new Array();
    $group.find('input[type=text],textarea').each(function () {
        var data = {};
        data[$(this).attr('id')] = $(this).val();
        arr.push(data);
    });
    return arr;
}
```
5,904
61,388,487
Was trying to learn itertools from the python docs. I was going through `count` function and replicated the example given. However, I did not get any output to view. ``` def count(start=0, step=1): n = start while True: yield n n += step ``` output was : ``` >>> print(count(2.5,0.5)) <generator object count at 0x00000254BF3FEA48> ```
2020/04/23
[ "https://Stackoverflow.com/questions/61388487", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13146477/" ]
If you want to make the icon size smaller, you can do it like this:

```
Tab(
  text: "Category List",
  icon: Icon(Icons.home, size: 15),
),
Tab(
  text: "Product List",
  icon: Icon(Icons.view_list, size: 15),
),
Tab(
  text: "Contact Us",
  icon: Icon(Icons.contacts, size: 15),
),
Tab(
  text: "Darshan Timing",
  icon: Icon(Icons.access_time, size: 15),
)
```

Here `size: 15` will make the icon whatever size you want.
You have to make a custom TabBar for customization. Something like this:

**CustomTabbar**

```
import 'package:flutter/material.dart';

class ChangeTextSizeTabbar extends StatefulWidget {
  @override
  ChangeTextSizeTabbarState createState() {
    return new ChangeTextSizeTabbarState();
  }
}

class ChangeTextSizeTabbarState extends State<ChangeTextSizeTabbar> {
  @override
  Widget build(BuildContext context) {
    return DefaultTabController(
      length: 3,
      child: Scaffold(
        appBar: AppBar(
          title: Text("Change Text Size Tabbar Example"),
          bottom: TabBar(
            tabs: <Tab>[
              Tab(
                child: Image.asset(
                  'assets/icons/project/proj_001.png',
                  height: 100,
                  width: 100,
                ),
              ),
              Tab(
                icon: Image.asset(
                  'assets/icons/project/all.png',
                  height: 100,
                  width: 100,
                ),
              ),
              Tab(
                icon: Image.asset(
                  'assets/icons/project/proj_009.png',
                  height: 100,
                  width: 100,
                ),
              ),
            ],
          ),
        ),
        body: TabBarView(
          children: <Widget>[
            Container(),
            Container(),
            Container(),
          ],
        ),
      ),
    );
  }
}
```

**main.dart**

```
import 'package:flutter/material.dart';
import 'change_text_size_tabbar_task-3.dart';

void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Demo',
      debugShowCheckedModeBanner: false,
      theme: ThemeData(
        // This is the theme of your application.
        //
        // Try running your application with "flutter run". You'll see the
        // application has a blue toolbar. Then, without quitting the app, try
        // changing the primarySwatch below to Colors.green and then invoke
        // "hot reload" (press "r" in the console where you ran "flutter run",
        // or simply save your changes to "hot reload" in a Flutter IDE).
        // Notice that the counter didn't reset back to zero; the application
        // is not restarted.
        primarySwatch: Colors.blue,
      ),
      //home: MyHomePage(title: 'Flutter Demo Home Page'),
      home: ChangeTextSizeTabbar(),
    );
  }
}
```
5,911
31,715,184
I want to calculate the date 6 months before now in Python. Will any problem occur at dates like 31 August? Can we solve this using the `timedelta()` function, for example by passing months instead of days, as in `date = now - timedelta(days=days)`?
2015/07/30
[ "https://Stackoverflow.com/questions/31715184", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4947801/" ]
`timedelta` does not support months, but you can try using [`dateutil.relativedelta`](http://dateutil.readthedocs.org/en/latest/relativedelta.html#dateutil.relativedelta.relativedelta) for your calculations, which does support months. Example:

```
>>> from dateutil import relativedelta
>>> from datetime import datetime
>>> n = datetime.now()
>>> n - relativedelta.relativedelta(months=6)
datetime.datetime(2015, 1, 30, 10, 5, 32, 491815)
>>> n - relativedelta.relativedelta(months=8)
datetime.datetime(2014, 11, 30, 10, 5, 32, 491815)
```
If you are only interested in what the month was 6 months ago, then try this:

```
import datetime

month = datetime.datetime.now().month - 6
if month < 1:
    month = 12 + month  # At this point month is 0 or a negative number, so we add 12
```
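If the day-of-month edge case matters (the 31 August scenario from the question), here is a hedged pure-stdlib sketch (my own code) that clamps the day to the end of the target month, matching what `relativedelta` does:

```
import calendar
import datetime

def months_ago(d, months):
    # Step back whole months, then clamp the day to the last valid
    # day of the target month (e.g. Aug 31 -> Feb 28/29).
    month_index = d.year * 12 + (d.month - 1) - months
    year, month0 = divmod(month_index, 12)
    day = min(d.day, calendar.monthrange(year, month0 + 1)[1])
    return d.replace(year=year, month=month0 + 1, day=day)

print(months_ago(datetime.date(2015, 8, 31), 6))  # 2015-02-28
```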
5,912
52,805,041
Say I have scraper\_1.py, scraper\_2.py, scraper\_3.py. The way I run them now is from PyCharm, executing each separately; this way I can see the 3 python.exe processes in Task Manager. Now I am trying to write a master script, say scraper\_runner.py, that imports these scrapers as modules and runs them all in parallel, not sequentially. I tried examples with subprocess, multiprocessing, even os.system from various SO posts... but without any luck... from the logs they all run in sequence, and in Task Manager I only see one python.exe executing. Is this the right pattern for this kind of process?

**EDIT:1** (trying with concurrent.futures ProcessPoolExecutor): it runs sequentially.

```
from concurrent.futures import ProcessPoolExecutor
import scrapers.scraper_1 as scraper_1
import scrapers.scraper_2 as scraper_2
import scrapers.scraper_3 as scraper_3

## Calling method runner on each scraper_x to kick off processes
runners_list = [scraper_1.runner(), scraper_2.runner(), scraper_3.runner()]

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=10) as executor:
        for runner in runners_list:
            future = executor.submit(runner)
            print(future.result())
```
2018/10/14
[ "https://Stackoverflow.com/questions/52805041", "https://Stackoverflow.com", "https://Stackoverflow.com/users/662409/" ]
A subprocess in python may or may not show up as a separate process, depending on your OS and your task manager. `htop` in linux, for example, will display subprocesses under the parent process in tree-view.

I recommend taking a look at this in-depth tutorial on the `multiprocessing` module in python: <https://pymotw.com/2/multiprocessing/basics.html>

However, if python's built-in methods of multiprocessing/threading don't work or make sense to you, you can achieve your desired result by using bash to call your python scripts. The following bash script results in the attached screenshot.

```
#!/bin/sh
./py1.py &
./py2.py &
./py3.py &
```

[![parallel python scripts](https://i.stack.imgur.com/VKxPW.png)](https://i.stack.imgur.com/VKxPW.png)

Explanation: The `&` at the end of each call tells bash to run each call as a background process.
Your problem is in how you set up the processes. You are not running the processes in parallel, even though you think you are. You actually run them when you add them to the `runners_list`, and then you run the *result* of each runner in the multiprocessing pool.

What you want to do is add the functions to the `runners_list` without executing them, and then have them executed in your multiprocessing `pool`. The way to achieve this is to add the function references, i.e. the names of the functions. To do this, you should not include the parentheses, since that is the syntax for calling functions, not for naming them.

In addition, to have the futures execute asynchronously, it is not possible to call `future.result()` directly, as that would force the code to execute sequentially, to ensure that the results are available in the same sequence as the functions are called.

This means that the solution to your problem is:

```
from concurrent.futures import ProcessPoolExecutor
import scrapers.scraper_1 as scraper_1
import scrapers.scraper_2 as scraper_2
import scrapers.scraper_3 as scraper_3

## NOT calling method runner on each scraper_x to kick off processes
## Instead add them to the list of functions to be run in the pool
runners_list = [scraper_1.runner, scraper_2.runner, scraper_3.runner]

# Adding a callback function to call when each future is done.
# If the result is not printed in the callback, a direct future.result call
# would serialize the call sequence to ensure results come in order
def print_result(future):
    print(future.result())

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=10) as executor:
        for runner in runners_list:
            future = executor.submit(runner)
            future.add_done_callback(print_result)
```

As you can see, here the invocation of the runners does not happen when the list is created, but later, when each `runner` is submitted to the executor. And, when the results are ready, the callback is called to print the result to screen.
5,915
52,902,158
I have the following list:

```
o_dict_list = [(OrderedDict([('StreetNamePreType', 'ROAD'), ('StreetName', 'Coffee')]), 'Ambiguous'),
               (OrderedDict([('StreetNamePreType', 'AVENUE'), ('StreetName', 'Washington')]), 'Ambiguous'),
               (OrderedDict([('StreetNamePreType', 'ROAD'), ('StreetName', 'Quartz')]), 'Ambiguous')]
```

And like the title says, I am trying to take this list and create a pandas dataframe where the columns are `'StreetNamePreType'` and `'StreetName'` and the rows contain the corresponding values for each key in the OrderedDict.

I have done some searching on StackOverflow to get some guidance on how to create a dataframe, see [here](https://stackoverflow.com/questions/44365209/generate-a-pandas-dataframe-from-ordereddict), but I am getting an error when I run this code (I am trying to replicate what is going on in that response).

```
from collections import Counter, OrderedDict
import pandas as pd

col = Counter()
for k in o_dict_list:
    col.update(k)
df = pd.DataFrame([k.values() for k in o_dict_list], columns=col.keys())
```

When I run this code, the error I get is: `TypeError: unhashable type: 'OrderedDict'`.

I looked up this error, [here](https://stackoverflow.com/questions/15880765/python-unhashable-type-ordereddict); I get that there is a problem with the datatypes, but unfortunately I don't know enough about the inner workings of Python/Pandas to resolve this problem on my own.

I suspect that my list of OrderedDicts is not exactly the same as in [here](https://stackoverflow.com/questions/44365209/generate-a-pandas-dataframe-from-ordereddict), which is why I am not getting my code to work. More specifically, I believe I have a list of tuples, where each element contains an OrderedDict. The example that I have linked to [here](https://stackoverflow.com/questions/44365209/generate-a-pandas-dataframe-from-ordereddict) seems to be a true list of OrderedDicts. Again, I don't know enough about the inner workings of Python/Pandas to resolve this problem on my own and am looking for help.
2018/10/20
[ "https://Stackoverflow.com/questions/52902158", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6930132/" ]
I would use a list comprehension to do this as follows.

```
pd.DataFrame([d for d, _ in o_dict_list])
```

See the output below:

```
  StreetNamePreType   StreetName
0              ROAD       Coffee
1            AVENUE   Washington
2              ROAD       Quartz
```
Extracting the `OrderedDict` objects from your list and then using `pd.DataFrame` should work:

```
values = []
for i in range(len(o_dict_list)):
    values.append(o_dict_list[i][0])

pd.DataFrame(values)
```

```
  StreetNamePreType   StreetName
0              ROAD       Coffee
1            AVENUE   Washington
2              ROAD       Quartz
```
5,916
5,980,863
In my Django app I call `location.reload();` in an Ajax routine in some rare circumstances. That works well with Chrome, but with Firefox 4 I get an `error: [Errno 32] Broken pipe` twice on my development server (Django 1.2.5, Python 2.7), which takes about 10 sec. And the error seems to eat the message I'm trying to display using the Django messages framework.

Now I replaced this line with

```
var uri = location.href;
location.href = uri;
```

Now the reload still takes some 10 sec, but Firefox displays the message. So far, it works. But to me this looks like a dirty hack. So my questions are:

1. Can anybody explain (or guess) what the error is in the first place?
2. Do you see any problems where this 'solution' could bite me in the future?

(Note: I'm not the first [to](https://stackoverflow.com/questions/2868059/django-webkit-broken-pipe) [experience](https://stackoverflow.com/questions/5847580/django-is-sooo-slow-errno-32-broken-pipe-dcramer-django-sentry-static-folder) [that](https://stackoverflow.com/questions/4029297/error-errno-32-broken-pipe-when-paypal-calls-back-to-python-django-app) [problem](http://code.djangoproject.com/ticket/4444)).
2011/05/12
[ "https://Stackoverflow.com/questions/5980863", "https://Stackoverflow.com", "https://Stackoverflow.com/users/526169/" ]
First of all, that's an issue with some specific browsers (and, probably, long processing on the server side), not a problem in django.

From the [bug report](http://code.djangoproject.com/ticket/4444#comment:14) on django:

> This is common error which happens whenever your browser closes the connection while the dev server is still busy sending data. The best we could is to have a more explicit error message.

It actually can happen on other systems, eg from [cherrypy](http://www.cherrypy.org/wiki/CherryPyBrokenPipe):

> There is nothing to worry about as this just means that the client closed the connection before the server. After this traceback, your CherryPy server will still keep running normally.

So that's an introduction to your first question:

1. Can anybody explain (or guess) what the error is in the first place?

Well, it's simply the browser closing the connection - kind of a client-side timeout. This [Django + WebKit = Broken pipe](https://stackoverflow.com/questions/2868059/django-webkit-broken-pipe/2868082#2868082) answer does answer that question.

Why does it work by changing `location.href` and not using `location.reload()`? Well, I would guess, but that's ONLY a guess, that Firefox behaves slightly differently and a reload will time out differently. I think the message is consumed because the request is already being sent when the browser pulls the trigger and shuts the connection. The dev server is single threaded, and that might also be a factor in the issue.

I usually do my development on a real (local) server (nginx+apache+mod\_wsgi, nothing fancy) - that avoids running into silly issues that would never happen in production.

2. Do you see any problems where this 'solution' could bite me in the future?

Well, it might not work on a browser that checks whether `href` has changed before reloading. Or it might hit the cache instead of doing a real request (you can force avoiding the cache with reload()). And behaviour might not be consistent on all browsers. But again, you are already hitting a browser quirk, so I wouldn't worry about it too much by itself.

By the way, you could simply do:

```
location.href = location.href
```

I would rather worry that the processing takes 10s! That really should not happen.

**edit**

So it looks like it's the browser itself provoking the long processing time AND the broken pipe error. Sounds like (bad) parallel requests on the single-threaded django server to me.

**endedit**

Test on a real webserver, optimize your code; if that's not enough, launch the long tasks in a background process with celery+rabbitmq. In any case, don't lose time on an issue which is not really an issue! You will probably be able to live with `location.reload()` and a little tweaking OR maybe just a real test environment!
The broken pipe error can also be down to lack of support for certain functionality in the Django debug server - one notable issue is Django's lack of support for [`Range` HTTP requests](https://www.rfc-editor.org/rfc/rfc7233) (see here for details: [Byte Ranges in Django](https://stackoverflow.com/questions/14324250/byte-ranges-in-django)) which are commonly used when delivering [streaming] media content. It's probably worth investigating the actual HTTP interchange using a packet capture program such as Wireshark so you can see where and when the problem is occurring.
5,918
50,014,265
I'm trying to write a python script that can echo whatever a user types when running the script. Right now, the code I have is (version\_msg and usage\_msg don't matter right now):

```
from optparse import OptionParser

version_msg = ""
usage_msg = ""

parser = OptionParser(version=version_msg, usage=usage_msg)
parser.add_option("-e", "--echo", action="append", dest="input_lines", default=[])
```

But if I try to run the script (python options.py -e hello world), it echoes just ['hello']. How would I go about fixing this so it outputs ['hello', 'world']?
2018/04/25
[ "https://Stackoverflow.com/questions/50014265", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6642021/" ]
A slightly hacky way of doing it:

```
from optparse import OptionParser

version_msg = ""
usage_msg = ""

parser = OptionParser(version=version_msg, usage=usage_msg)
parser.add_option("-e", "--echo", action="append", dest="input_lines", default=[])

options, arguments = parser.parse_args()
print(options.input_lines + arguments)
```

I then run

```
python myscript.py -e hello world how are you
```

Output:

```
['hello', 'world', 'how', 'are', 'you']
```
I think this is best accomplished by quoting the argument, i.e. hello world becomes 'hello world'; this ensures that the -e option consumes the entire string. If you really need the string to be broken up into pieces, i.e. ['hello', 'world'] instead of ['hello world'], you could easily call split() on the first appended value:

```
strings = options.input_lines[0].split()
```

For a more complex method, you can use a callback; the link below points to a relevant example for you: <https://docs.python.org/3/library/optparse.html#callback-example-6-variable-arguments>
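For reference, a hedged sketch of that callback approach (my own adaptation, loosely following the linked docs example):

```
from optparse import OptionParser

def vararg_callback(option, opt_str, value, parser):
    # Consume every following argument up to the next option flag.
    values = []
    for arg in parser.rargs:
        if arg.startswith('-') and len(arg) > 1:
            break
        values.append(arg)
    del parser.rargs[:len(values)]
    setattr(parser.values, option.dest, values)

parser = OptionParser()
parser.add_option("-e", "--echo", dest="input_lines",
                  action="callback", callback=vararg_callback)
options, args = parser.parse_args()
print(options.input_lines)  # e.g. ['hello', 'world'] for: -e hello world
```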
5,919
14,404,744
I'm trying to read a CSV file using `numpy.recfromcsv(...)` where some of the fields have commas in them. The fields that have commas in them are surrounded by quotes, i.e., `"value1, value2"`. Numpy sees the quoted field as two different fields and it doesn't work very well. The command I'm using right now is

```
data = numpy.recfromcsv(dataFilename, delimiter=',', autostrip=True)
```

I found this question:

> [Read CSV file with comma within fields in Python](https://stackoverflow.com/questions/8311900/python-read-csv-file-with-comma-within-fields)

But it doesn't use `numpy`, which I'd really love to use. So I'm hoping at least one of a few options works here:

1. What are some options to `numpy.recfromcsv(...)` that will allow me to read a quoted field as one field instead of multiple comma-separated fields?
2. Should I format my CSV file differently?
3. (alternatively, but not ideally) Read the CSV as in the quoted question, with extra steps to create a `numpy` array.

Please advise.
2013/01/18
[ "https://Stackoverflow.com/questions/14404744", "https://Stackoverflow.com", "https://Stackoverflow.com/users/633318/" ]
It is possible to do this with [pandas](http://pandas.pydata.org/):

```
np_array = pandas.io.parsers.read_csv("file_with_comma_fields_quoted.csv").as_matrix()
```
If you consider using the native Python csv reader, the Python doc is [here](http://docs.python.org/2/library/csv.html#csv-fmt-params):

The Python csv reader defines an optional `Dialect.quotechar` parameter, which defaults to `'"'`. In the csv format standard, quotechar is another kind of field delimiter, and the delimiter (a comma in your case) may be included in a quoted field. The rules for the quoting character in the csv format are clear in the first section of [this page](http://golang.org/pkg/encoding/csv/).

So, with the default quote character `"`, the native Python csv reader handles your problem in its default mode. If you want to stick to Python, why not clean your csv file first, using a regexp to identify quoted fields and change the delimiter from comma to `\t`, for instance? But then you are actually parsing the csv format by yourself.
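As a sketch of option 3 (my own illustration, assuming Python 3; the filename and the all-object dtype are assumptions): parse with the stdlib csv module, then build the numpy array from the parsed rows:

```
import csv
import numpy as np

# The stdlib reader honours quotechar='"' by default, so a field like
# "value1, value2" comes back as one string.
with open('data.csv', newline='') as f:
    reader = csv.reader(f)
    header = next(reader)
    rows = [tuple(row) for row in reader]

rec = np.array(rows, dtype=[(name, object) for name in header])
print(rec[header[0]])
```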
5,921
25,426,447
I was trying to understand how non-blocking sockets work, so I wrote this simple server in Python:

```
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('127.0.0.1', 1000))
s.listen(5)
s.setblocking(0)

while True:
    try:
        conn, addr = s.accept()
        print('connection from', addr)
        data = conn.recv(100)
        print('received: ', data, len(data))
    except:
        pass
```

Then I tried to connect to this server from multiple instances of this client:

```
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('127.0.0.1', 1000))

while True:
    continue
```

But for some reason setting blocking to 0 or 1 does not seem to have an effect, and the server's recv method always blocks execution. So, does creating a non-blocking socket in Python require more than just setting the blocking flag to 0?
2014/08/21
[ "https://Stackoverflow.com/questions/25426447", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2789669/" ]
`setblocking` only affects the socket you use it on. So you have to add `conn.setblocking(0)` to see an effect: the `recv` will then return immediately, raising `socket.error` if there is no data available.
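Applied to the server loop from the question, a hedged sketch (my own rearrangement of that code):

```
while True:
    try:
        conn, addr = s.accept()
    except socket.error:
        continue                  # no pending connection yet
    conn.setblocking(0)           # make the accepted socket non-blocking too
    print('connection from', addr)
    try:
        data = conn.recv(100)
        print('received: ', data, len(data))
    except socket.error:
        pass                      # no data available yet
```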
You just need to call `setblocking(0)` on the *connected* socket, i.e. `conn`.

```
import socket

s = socket.socket()
s.bind(('127.0.0.1', 12345))
s.listen(5)
s.setblocking(0)

>>> conn, addr = s.accept()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib64/python2.7/socket.py", line 202, in accept
    sock, addr = self._sock.accept()
socket.error: [Errno 11] Resource temporarily unavailable

# start your client...

>>> conn, addr = s.accept()
>>> conn.recv(1024)  # this will hang until the client sends some data....
'hi there\n'
>>> conn.setblocking(0)  # set non-blocking on the connected socket "conn"
>>> conn.recv(1024)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
socket.error: [Errno 11] Resource temporarily unavailable
```
5,923