68,564,322
I have a 143k lowercase word dictionary and I want to count the frequency of the first two letters (i.e. `aa* = 14, ab* = 534, ac* = 714` ... `za* = 65,` ... `zz* = 0`) and put it in a two-dimensional array. However, I have no idea how to even go about iterating them without switches or a bunch of if/elses. I tried looking on Google for a solution to this, but I could only find counting the number of letters in the whole word, and mostly only things in Python. I've sat here for a while thinking about how I could do this and my brain keeps blocking; this is what I came up with, but I really don't know where to head. ``` int main (void) { char *line = NULL; size_t len = 0; ssize_t read; char *arr[143091]; FILE *fp = fopen("large", "r"); if (*fp == NULL) { return 1; } int i = 0; while ((read = getline(&line, &len, fp)) != -1) { arr[i] = line; i++; } char c1 = 'a'; char c2 = 'a'; i = 0; int j = 0; while (c1 <= 'z') { while (arr[k][0] == c1) { while (arr[k][1] == c2) { } c2++; } c1++; } fclose(fp); if (line) free(line); return 0; } ``` Am I being an idiot or am I just missing something really basic? How can I go about this problem? Edit: I forgot to mention that the dictionary is only lowercase and has some edge cases like just an `a` or an `e`, and some words have `'` (like `e'er` and `e's`); there are no accentuated latin characters and they are all accii lowercase
2021/07/28
[ "https://Stackoverflow.com/questions/68564322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16498000/" ]
The code assumes that the input has one word per line without leading spaces and counts all words that start with two ASCII letters from `'a'`..`'z'`. As the statement in the question is not fully clear, I further assume that the character encoding is ASCII or at least ASCII compatible. (The question states: "there are no accentuated latin characters and they are all accii lowercase") If you want to include words that consist of only one letter or words that contain `'`, the calculation of the index values from the characters would be a bit more complicated. In this case I would add a function to calculate the index from the character value. Also for non-ASCII letters the simple calculation of the array index would not work. The program reads the input line by line without storing all lines, checks the input as defined above, and converts the first two characters from range `'a'`..`'z'` to index values in range `0`..`'z'-'a'` to count the occurrences in a two-dimensional array. ``` #include <stdio.h> #include <stdlib.h> int main (void) { char *line = NULL; size_t len = 0; ssize_t read; /* Counter array, initialized with 0. The highest possible index will be 'z'-'a', so the size in each dimension is 1 more */ unsigned long count['z'-'a'+1]['z'-'a'+1] = {0}; FILE *fp = fopen("large", "r"); if (fp == NULL) { return 1; } while ((read = getline(&line, &len, fp)) != -1) { /* ignore short input */ if(read >= 2) { /* ignore other characters */ if((line[0] >= 'a') && (line[0] <= 'z') && (line[1] >= 'a') && (line[1] <= 'z')) { /* convert first 2 characters to array index range and count */ count[line[0]-'a'][line[1]-'a']++; } } } fclose(fp); if (line) free(line); /* example output */ for(int i = 'a'-'a'; i <= 'z'-'a'; i++) { for(int j = 'a'-'a'; j <= 'z'-'a'; j++) { /* only print combinations that actually occurred */ if(count[i][j] > 0) { printf("%c%c %lu\n", i+'a', j+'a', count[i][j]); } } } return 0; } ``` The example input ```none foo a foobar bar baz fish ford ``` results in ``` ba 2 fi 1 fo 3 ```
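For a quick cross-check of the same counting logic outside of C, here is a minimal Python sketch (an illustration, not part of the original answer; the word list is the example input from above):

```python
# Count first-two-letter (digraph) frequencies of a word list, mirroring
# the C program above: only words whose first two characters are ASCII
# 'a'..'z' are counted; shorter words and other characters are skipped.
def count_digraphs(words):
    counts = [[0] * 26 for _ in range(26)]
    for w in words:
        if len(w) >= 2 and 'a' <= w[0] <= 'z' and 'a' <= w[1] <= 'z':
            counts[ord(w[0]) - ord('a')][ord(w[1]) - ord('a')] += 1
    return counts

counts = count_digraphs(["foo", "a", "foobar", "bar", "baz", "fish", "ford"])
# ba -> 2, fi -> 1, fo -> 3, matching the example output of the C program
```

The lexicographic comparison `'a' <= w[0] <= 'z'` plays the same role as the range check on `line[0]` in the C code.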
There is no need to read the entire dictionary into memory, or even to buffer lines. The dictionary consists of words, one per line. This means it has this structure: ``` "aardvark\nabacus\n" ``` The first two characters of the file are the first digraph. The other interesting digraphs are all characters which immediately follow a newline. This can be read by a state machine, which we can code into a loop like this. Suppose `f` is the `FILE *` handle to the stream reading from the dictionary file: ``` for (;;) { /* Read two characters from the dictionary file. */ int ch0 = getc(f); int ch1 = getc(f); /* Is the ch0 newline? That means we read an empty line, and one character after that. So, let us move that character into ch0, and read another ch1. Keep doing this until ch0 is not a newline, and bail at EOF. */ while (ch0 == '\n' && ch1 != EOF) { ch0 = ch1; ch1 = getc(f); } /* After the above, if we have EOF, we are done: bail the loop */ if (ch0 == EOF || ch1 == EOF) break; /* We know that ch0 isn't newline. But ch1 could be newline; i.e. we found a one-letter-long dictionary entry. We don't process those, only two or more letters. */ if (ch1 != '\n') { /* Here we put the code which looks up the ch0-ch1 pair in our frequency table and increments the count. */ } /* Now drop characters until the end of the line. If ch1 is newline, we are already there. If not, let's just use ch1 for reading more characters until we get a newline. */ while (ch1 != '\n' && ch1 != EOF) ch1 = getc(f); /* Watch out for EOF in the middle of a line that isn't newline-terminated. */ if (ch1 == EOF) break; } ``` I would do this with a state machine: ``` enum { begin, have_ch0, scan_eol } state = begin; int ch0, ch1; for (;;) { int c = getc(f); if (c == EOF) break; switch (state) { case begin: /* stay in begin state if newline seen */ if (c != '\n') { /* otherwise accumulate ch0, and switch to have_ch0 state */ ch0 = c; state = have_ch0; } break; case have_ch0: if (c == '\n') { /* newline in ch0 state: back to begin */ state = begin; } else { /* we got a second character! */ ch1 = c; /* code for processing ch0 and ch1 goes here! */ state = scan_eol; /* switch to scanning for EOL. */ } break; case scan_eol: if (c == '\n') { /* We got the newline we are looking for; go to begin state. */ state = begin; } break; } } ``` Now we have a tidy loop around a single call to `getc`. `EOF` is checked in one place where we bail out of the loop. The state machine recognizes the situation when we have the first two characters of a line which is at least two characters long; there is a single place in the code where to put the logic for dealing with the two characters. We are not allocating any buffers; we are not `malloc`-ing lines, so there is nothing to free. There is no limit on the dictionary size we can scan (we just have to watch for overflowing frequency counters).
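For comparison, the same three-state machine can be sketched in Python (a hypothetical translation for illustration, iterating over characters of a string instead of calling `getc`):

```python
def digraphs(stream):
    """Yield the first two characters of every line that is at least
    two characters long, using the begin/have_ch0/scan_eol states."""
    BEGIN, HAVE_CH0, SCAN_EOL = range(3)
    state, ch0 = BEGIN, None
    for c in stream:
        if state == BEGIN:
            if c != '\n':          # accumulate ch0, switch states
                ch0, state = c, HAVE_CH0
        elif state == HAVE_CH0:
            if c == '\n':          # one-character line: back to begin
                state = BEGIN
            else:                  # got a second character: emit the pair
                yield ch0, c
                state = SCAN_EOL
        elif state == SCAN_EOL:
            if c == '\n':          # found end of line: back to begin
                state = BEGIN

pairs = list(digraphs("foo\na\nbar\n"))
# [('f', 'o'), ('b', 'a')] -- the one-letter line "a" is skipped
```

As in the C version, there is exactly one place where the pair is processed, and no buffering of lines.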
68,564,322
I have a 143k lowercase word dictionary and I want to count the frequency of the first two letters (i.e. `aa* = 14, ab* = 534, ac* = 714` ... `za* = 65,` ... `zz* = 0`) and put it in a two-dimensional array. However, I have no idea how to even go about iterating them without switches or a bunch of if/elses. I tried looking on Google for a solution to this, but I could only find counting the number of letters in the whole word, and mostly only things in Python. I've sat here for a while thinking about how I could do this and my brain keeps blocking; this is what I came up with, but I really don't know where to head. ``` int main (void) { char *line = NULL; size_t len = 0; ssize_t read; char *arr[143091]; FILE *fp = fopen("large", "r"); if (*fp == NULL) { return 1; } int i = 0; while ((read = getline(&line, &len, fp)) != -1) { arr[i] = line; i++; } char c1 = 'a'; char c2 = 'a'; i = 0; int j = 0; while (c1 <= 'z') { while (arr[k][0] == c1) { while (arr[k][1] == c2) { } c2++; } c1++; } fclose(fp); if (line) free(line); return 0; } ``` Am I being an idiot or am I just missing something really basic? How can I go about this problem? Edit: I forgot to mention that the dictionary is only lowercase and has some edge cases like just an `a` or an `e`, and some words have `'` (like `e'er` and `e's`); there are no accentuated latin characters and they are all accii lowercase
2021/07/28
[ "https://Stackoverflow.com/questions/68564322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16498000/" ]
The code assumes that the input has one word per line without leading spaces and counts all words that start with two ASCII letters from `'a'`..`'z'`. As the statement in the question is not fully clear, I further assume that the character encoding is ASCII or at least ASCII compatible. (The question states: "there are no accentuated latin characters and they are all accii lowercase") If you want to include words that consist of only one letter or words that contain `'`, the calculation of the index values from the characters would be a bit more complicated. In this case I would add a function to calculate the index from the character value. Also for non-ASCII letters the simple calculation of the array index would not work. The program reads the input line by line without storing all lines, checks the input as defined above, and converts the first two characters from range `'a'`..`'z'` to index values in range `0`..`'z'-'a'` to count the occurrences in a two-dimensional array. ``` #include <stdio.h> #include <stdlib.h> int main (void) { char *line = NULL; size_t len = 0; ssize_t read; /* Counter array, initialized with 0. The highest possible index will be 'z'-'a', so the size in each dimension is 1 more */ unsigned long count['z'-'a'+1]['z'-'a'+1] = {0}; FILE *fp = fopen("large", "r"); if (fp == NULL) { return 1; } while ((read = getline(&line, &len, fp)) != -1) { /* ignore short input */ if(read >= 2) { /* ignore other characters */ if((line[0] >= 'a') && (line[0] <= 'z') && (line[1] >= 'a') && (line[1] <= 'z')) { /* convert first 2 characters to array index range and count */ count[line[0]-'a'][line[1]-'a']++; } } } fclose(fp); if (line) free(line); /* example output */ for(int i = 'a'-'a'; i <= 'z'-'a'; i++) { for(int j = 'a'-'a'; j <= 'z'-'a'; j++) { /* only print combinations that actually occurred */ if(count[i][j] > 0) { printf("%c%c %lu\n", i+'a', j+'a', count[i][j]); } } } return 0; } ``` The example input ```none foo a foobar bar baz fish ford ``` results in ``` ba 2 fi 1 fo 3 ```
You have started in the right direction. You do need a 2D array 27 x 27 for a single case (e.g. lowercase or uppercase), not including digits. To handle digits, just add another 11 x 11 array and map 2-digit frequencies there. The reason you can't use a flat 1D array and map to it without serious indexing gymnastics is that the ASCII sum of `"ab"` and `"ba"` would be the same. The 2D array solves that problem, allowing you to map the 1st character's ASCII value to the first index, and the ASCII value of the 2nd character to the 2nd index or after in that row. An easy way to think of it is to just take a lowercase example. Let's look at the word `"accent"`. You have your 2D array: ```none +---+---+---+---+---+---+ | a | a | b | c | d | e | ... +---+---+---+---+---+---+ | b | a | b | c | d | e | ... +---+---+---+---+---+---+ | c | a | b | c | d | e | ... +---+---+---+---+---+---+ ... ``` The first column tracks the first letter and then the remaining columns (the next `'a' - 'z'` characters) track the 2nd character that follows the first character. (you can do this with an array of struct holding the 1st char and a 26 char array as well -- up to you) This way, you remove the ambiguity between the combinations `"ab"` and `"ba"`. Now note -- you do not actually need a 27 x 27 array with the 1st column repeated. Recall, by mapping the ASCII value to the first index, it designates the first character associated with the row on its own, e.g. `row[0][..]` indicates the first character was `'a'`. So a 26 x 26 array is fine (and the same for digits). So you simply need: ```none +---+---+---+---+---+ | a | b | c | d | e | ... +---+---+---+---+---+ | a | b | c | d | e | ... +---+---+---+---+---+ | a | b | c | d | e | ... +---+---+---+---+---+ ... ``` So the remainder of the approach is simple. Open the file, read the word into a buffer, validate there is a 1st character (e.g. not the nul-character), then validate the 2nd character (`continue` to get the next word if either validation fails). Convert both to lowercase (or add the additional arrays if tracking both cases -- that gets ugly). Now just map the ASCII value for each character to an index in the array, e.g. ```c int ltrfreq[ALPHABET][ALPHABET] = {{0}}; ... while (fgets (buf, SZBUF, fp)) { /* read each line into buf */ int ch1 = *buf, ch2; /* initialize ch1 with 1st char */ if (!ch1 || !isalpha(ch1)) /* validate 1st char or get next word */ continue; ch2 = buf[1]; /* assign 2nd char */ if (!ch2 || !isalpha(ch2)) /* validate 2nd char or get next word */ continue; ch1 = tolower (ch1); /* convert to lower to eliminate case */ ch2 = tolower (ch2); ltrfreq[ch1-'a'][ch2-'a']++; /* map ASCII to index, increment */ } ``` With our example word `"accent"`, that would increment the array element `[0][2]`, which corresponds to row `0` and column `2` for `"ac"` in: ```none +---+---+---+---+---+ | a | b | c | d | e | ... +---+---+---+---+---+ ... ^ [0][2] ``` Where you increment the value at that index. So `ltrfreq[0][2]++` now holds the value `1` for the combination `"ac"` having been seen once. When encountered again, the element would be incremented to `2` and so on... Since the value is *incremented*, it is imperative the array be initialized all zero when declared. When you output the results, you just have to remember to add `'a'` back to each index when mapping from the index back to ASCII, e.g. ```c for (int i = 0; i < ALPHABET; i++) /* loop over all 1st char index */ for (int j = 0; j < ALPHABET; j++) /* loop over all 2nd char index */ if (ltrfreq[i][j]) /* map i, j back to ASCII, output freq */ printf ("%c%c = %d\n", i + 'a', j + 'a', ltrfreq[i][j]); ``` That's it. 
Putting it all together in an example that takes the filename to read as the first argument to the program (or reads from `stdin` if no argument is given), you would have: ```c #include <stdio.h> #include <ctype.h> #define ALPHABET 26 #define SZBUF 1024 int main (int argc, char **argv) { char buf[SZBUF] = ""; int ltrfreq[ALPHABET][ALPHABET] = {{0}}; /* use filename provided as 1st argument (stdin by default) */ FILE *fp = argc > 1 ? fopen (argv[1], "r") : stdin; if (!fp) { /* validate file open for reading */ perror ("file open failed"); return 1; } while (fgets (buf, SZBUF, fp)) { /* read each line into buf */ int ch1 = *buf, ch2; /* initialize ch1 with 1st char */ if (!ch1 || !isalpha(ch1)) /* validate 1st char or get next word */ continue; ch2 = buf[1]; /* assign 2nd char */ if (!ch2 || !isalpha(ch2)) /* validate 2nd char or get next word */ continue; ch1 = tolower (ch1); /* convert to lower to eliminate case */ ch2 = tolower (ch2); ltrfreq[ch1-'a'][ch2-'a']++; /* map ASCII to index, increment */ } if (fp != stdin) /* close file if not stdin */ fclose (fp); for (int i = 0; i < ALPHABET; i++) /* loop over all 1st char index */ for (int j = 0; j < ALPHABET; j++) /* loop over all 2nd char index */ if (ltrfreq[i][j]) /* map i, j back to ASCII, output freq */ printf ("%c%c = %d\n", i + 'a', j + 'a', ltrfreq[i][j]); } ``` **Example Input Dictionary** In the file `dat/ltrfreq2.txt`: ```none $ cat dat/ltrfreq2.txt My dog has fleas and my cat has none lucky cat! ``` **Example Use/Output** ```none $ ./bin/ltrfreq2 dat/ltrfreq2.txt an = 1 ca = 2 do = 1 fl = 1 ha = 2 lu = 1 my = 2 no = 1 ``` Where both `"cat"` words accurately account for `ca = 2`, both `"has"` for `ha = 2` and `"My"` and `"my"` for `my = 2`. The rest are just the 2 character prefixes for words that appear once in the dictionary. 
Or with the entire `307993` words dictionary that comes with SuSE, timed to show the efficiency of the approach (all within 15 ms): ```none $ time ./bin/ltrfreq2 /var/lib/dict/words aa = 40 ab = 990 ac = 1391 ad = 1032 ae = 338 af = 411 ag = 608 ah = 68 ai = 369 aj = 18 ak = 70 al = 2029 ... zn = 2 zo = 434 zr = 2 zs = 2 zu = 57 zw = 25 zy = 135 zz = 1 real 0m0.015s user 0m0.015s sys 0m0.001s ``` A bit about the array type. Since you have 143K words, that rules out using a `short` or `unsigned short` type -- just in case you have a bad dictionary with all 143K words being `"aardvark"`.... The `int` type is more than capable of handling all words -- even if you have a bad dictionary containing only `"aardvark"`. Look things over and let me know if this is what you need, if not let me know where I misunderstood. Also, let me know if you have further questions.
68,932,000
``` from typing import List def dailyTemperatures(temperatures: List[int]) -> List[int]: temp_count = len(temperatures) ans = [0]*temp_count stack = [] idx_stack = [] for idx in range(temp_count-1,-1,-1): # first point temperature = temperatures[idx] last_temp_idx = 0 while stack: # second point last_temp = stack[-1] last_temp_idx = idx_stack[-1] if last_temp <= temperature: stack.pop() idx_stack.pop() else: break if len(stack) == 0: stack.append(temperature) idx_stack.append(idx) ans[idx] = 0 continue stack.append(temperature) idx_stack.append(idx) ans[idx] = last_temp_idx-idx return ans ``` I have two questions. I've just started learning Python. I googled but couldn't find an answer. First point: > (temp\_count-1,-1,-1) I'm not sure what this expression means. Does it mean decrement by one? Why are there two -1s? Second point: > while stack: Does this line mean that it operates when the stack is empty?
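On the two points: in `range(temp_count-1, -1, -1)` the three arguments are start, stop (exclusive), and step, so it walks the indices backwards from the last element down to 0; and `while stack:` loops while the stack is *non-empty*, since an empty list is falsy. A small demonstration:

```python
temps = [73, 74, 75]
# start at the last index, stop before -1 (so index 0 is included),
# step by -1 -- hence the two -1s
idxs = list(range(len(temps) - 1, -1, -1))
# idxs == [2, 1, 0]

stack = []
# an empty list is falsy, so "while stack:" skips the body when empty
assert not stack
stack.append(75)
assert stack  # non-empty list is truthy, so the loop keeps running
```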
2021/08/26
[ "https://Stackoverflow.com/questions/68932000", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16607199/" ]
Since the address of `cs` is sent to a function, and that function may spawn goroutines that may hold a reference to `cs` after the function returns, it is moved to the heap. In the second case, `cs` is a pointer. There is no need to move the pointer itself to the heap, because what the function `Unmarshal` can refer to is the object pointed to by `cs`, not `cs` itself. You didn't show how `cs` is initialized, so this piece of code will fail; however, if you initialize `cs` to point to a variable declared in that function, that object will likely end up on the heap.
`proto.Unmarshal` ``` func Unmarshal(buf []byte, pb Message) ``` ``` type Message interface { Reset() String() string ProtoMessage() } ``` An `interface{}` can hold any type, so it is difficult to determine the concrete type of such a parameter at compile time, and the argument escapes to the heap. But if the value stored in the interface is a pointer, only the pointer itself escapes.
66,675,001
I'm trying to use `sklearn_porter` to train a Random Forest model in Python which should then be exported to C code. This is my code: ``` from sklearn_porter import Porter from sklearn.ensemble import RandomForestClassifier from sklearn.datasets import load_iris import sys sys.path.append('../../../../..') iris_data = load_iris() X = iris_data.data y = iris_data.target print(X.shape, y.shape) clf = RandomForestClassifier(n_estimators=15, max_depth=None, min_samples_split=2, random_state=0) clf.fit(X, y) porter = Porter(clf, language='c') output = porter.export(embed_data=True) print(output) ``` but I get the following error: ``` ...AppData\Local\Programs\Python\Python38\lib\site-packages\sklearn_porter\Porter.py", line 10, in <module> from sklearn.tree.tree import DecisionTreeClassifier ModuleNotFoundError: No module named 'sklearn.tree.tree ``` So I checked Porter.py and saw that the imports are not done right, because the from statements have mistakes in them like `sklearn.tree.tree` (one `.tree` is enough): ``` from sklearn.metrics import accuracy_score from sklearn.tree.tree import DecisionTreeClassifier from sklearn.ensemble.weight_boosting import AdaBoostClassifier from sklearn.ensemble.forest import RandomForestClassifier from sklearn.ensemble.forest import ExtraTreesClassifier from sklearn.svm.classes import LinearSVC from sklearn.svm.classes import SVC from sklearn.svm.classes import NuSVC from sklearn.neighbors.classification import KNeighborsClassifier from sklearn.naive_bayes import GaussianNB from sklearn.naive_bayes import BernoulliNB ``` So I corrected that, but another error occurred: ``` ...AppData\Local\Programs\Python\Python38\lib\site-packages\sklearn_porter\Porter.py", line 117, in __init__ error = "Currently the given model '{algorithm_name}' " \ KeyError: 'algorithm_name' ``` and this one I couldn't correct... What am I doing wrong? I'm using Python 3.8.8 and scikit-learn 0.24.1. Edit: I also tried it with Python 3.7.10, because the [GitHub page](https://github.com/nok/sklearn-porter) states that it is only available for Python 2.7, 3.4, 3.5, 3.6 and 3.7. On 3.7 the same error occurs.
2021/03/17
[ "https://Stackoverflow.com/questions/66675001", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12959163/" ]
You should use `connects_to database: { writing: :wn }`. When you specify only the `reading:` keyword, you get this error: `No connection pool for 'Wn' found`. You can only use `reading:` together with `writing:` when you have a read replica. See the docs for more info: <https://edgeguides.rubyonrails.org/active_record_multiple_databases.html#horizontal-sharding>
Create an abstract class in `models/wn.rb` ... ``` class Wn < ActiveRecord::Base self.abstract_class = true connects_to database: { reading: :wn } end ``` then in `models/ci_harvest_record.rb` ``` class CiHarvestRecord < Wn self.table_name = "ciHarvest" end ```
23,530,703
We're currently working with Cassandra on a single-node cluster to test application development on it. Right now, we have a really huge data set consisting of approximately 70M lines of text that we would like to dump into Cassandra. We have tried all of the following: * Line-by-line insertion using the Python Cassandra driver * The COPY command of Cassandra * Setting compression of the sstable to none We have explored the option of the sstable bulk loader, but we don't have an appropriate .db format for this. Our text file to be loaded has 70M lines that look like: ``` 2f8e4787-eb9c-49e0-9a2d-23fa40c177a4 the magnet programs succeeded in attracting applicants and by the mid-1990s only #about a #third of students who #applied were accepted. ``` The column family that we're intending to insert into has this creation syntax: ``` CREATE TABLE post ( postid uuid, posttext text, PRIMARY KEY (postid) ) WITH bloom_filter_fp_chance=0.010000 AND caching='KEYS_ONLY' AND comment='' AND dclocal_read_repair_chance=0.000000 AND gc_grace_seconds=864000 AND index_interval=128 AND read_repair_chance=0.100000 AND replicate_on_write='true' AND populate_io_cache_on_flush='false' AND default_time_to_live=0 AND speculative_retry='99.0PERCENTILE' AND memtable_flush_period_in_ms=0 AND compaction={'class': 'SizeTieredCompactionStrategy'} AND compression={}; ``` Problem: The loading of the data into even a simple column family is taking forever -- 5 hrs for 30M lines that were inserted. We were wondering if there is any way to expedite this, as loading 70M lines of the same data into MySQL takes approximately 6 minutes on our server. Have we missed something? Or could someone point us in the right direction? Many thanks in advance!
2014/05/08
[ "https://Stackoverflow.com/questions/23530703", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3613688/" ]
The sstableloader is the fastest way to import data into Cassandra. You have to write the code to generate the sstables, but if you really care about speed this will give you the most bang for your buck. This article is a bit old, but the basics still apply to how you [generate the SSTables](http://www.datastax.com/dev/blog/bulk-loading) . If you really don't want to use the sstableloader, you should be able to go faster by doing the inserts in parallel. A single node can handle multiple connections at once, and you can scale out your Cassandra cluster for increased throughput.
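To sketch the "inserts in parallel" suggestion: the idea is to keep many statements in flight at once instead of issuing one blocking insert at a time. Below is a generic thread-pool sketch with a stub standing in for the driver call (`insert_fn` would be e.g. a prepared-statement execute in a real loader; the names here are illustrative, not from the original answer):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_load(rows, insert_fn, workers=16):
    # Fan the rows out over a pool of worker threads so that many
    # inserts are in flight at once instead of one blocking call each.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # consume the iterator so exceptions from workers propagate
        list(pool.map(insert_fn, rows))

# Stub insert to show the shape; a real loader would call the driver
# here (e.g. session.execute on a prepared statement).
loaded = []
parallel_load(range(1000), loaded.append, workers=8)
```

The DataStax Python driver also offers asynchronous execution, which avoids tying up one thread per in-flight request; the thread-pool version above is just the simplest way to get concurrency out of the synchronous API.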
I have a two-node Cassandra 2.? cluster (each node is an i7 4200 MQ laptop, 1 TB HDD, 16 GB RAM). I have imported almost 5 billion rows using the COPY command. Each CSV file is about 63 GB with approx 275 million rows. It takes about 8-10 hours to complete the import per file, approx 6500 rows per sec. The YAML file is set to use 10 GB of RAM, in case that helps.
17,155,724
So I am working on this project where I take input from the user (a file name) and then open and check for stuff. The file name is "cur". Now suppose the name of my script is `kb.py` (it's in Python). If I run it on my terminal, first I will do `python kb.py`, and then there will be a prompt and the user will give the input. I'd do it this way: ``` A = raw_input("Enter File Name: ") b = open(A, 'r+') ``` I don't want to do that. Instead I want to pass it on the command line, for example: **python kb.py cur**, and it will take it as input and save it to a variable which will then be used to open the file. I am confused about how to get input from the same command line.
2013/06/17
[ "https://Stackoverflow.com/questions/17155724", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2438430/" ]
Just use `sys.argv`, like this: ``` import sys # this part executes when the script is run from the command line if __name__ == '__main__': if len(sys.argv) != 2: # check for the correct number of arguments print 'usage: python kb.py cur' else: call_your_code(sys.argv[1]) # first command line argument ``` Note: `sys.argv[0]` is the script's name, and `sys.argv[1]` is the first command line argument. And so on, if there were more arguments.
For simple stuff `sys.argv[]` is the way to go; for more complicated stuff, have a look at the [argparse module](http://docs.python.org/2/howto/argparse.html) ``` import argparse parser = argparse.ArgumentParser() parser.add_argument("--verbose", help="increase output verbosity", action="store_true") args = parser.parse_args() if args.verbose: print "verbosity turned on" ``` output: ``` $ python prog.py --verbose verbosity turned on $ python prog.py --verbose 1 usage: prog.py [-h] [--verbose] prog.py: error: unrecognized arguments: 1 $ python prog.py --help usage: prog.py [-h] [--verbose] optional arguments: -h, --help show this help message and exit --verbose increase output verbosity ```
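For the exact use case in the question (`python kb.py cur`, a single required filename), a positional argument fits; here is a small sketch (the list passed to `parse_args` stands in for the real command line so the example is self-contained):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("filename", help="name of the file to open, e.g. cur")
# parse_args() with no arguments reads sys.argv[1:]; a list is passed
# here only to make the example runnable without a command line
args = parser.parse_args(["cur"])
# the file could then be opened with open(args.filename, 'r+')
```

argparse also generates the usage message and errors out automatically if the filename is missing, which the bare `sys.argv` version has to do by hand.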
72,215,886
How do I write Python code that checks whether a list is a valid sequence of consecutive numbers, where the position doesn't matter? It should return true if it is, otherwise false. Below are some examples; I really don't know how to start. ``` b=[1,2,3,4,5] # return true b=[1,2,2,1,3] # return false b=[2,3,1,5,4] # return true b=[2,4,6,4,3] # return false ```
2022/05/12
[ "https://Stackoverflow.com/questions/72215886", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
The sort function is O(n log n); we can use a for loop instead, which is O(n): ``` def check_seq(in_list): now_ele = set() min_ele = max_ele = in_list[0] for i in in_list: if i in now_ele: return False min_ele = min(i, min_ele) max_ele = max(i, max_ele) now_ele.add(i) if max_ele-min_ele+1 == len(in_list): return True return False ```
This question is quite simple and can be solved in a few ways. 1. The conditional approach - if there is a number that is bigger than the length of the list, it automatically cannot be a sequence, because there can only be numbers from 1-n where n is the size of the list. Also, you have to check if there are any duplicates in the list, as duplicates cannot occur in a valid sequence either. If neither of these conditions occurs, it should return true. 2. Using a dictionary - go through the entire list and add each element as a key to a dictionary. Afterwards, simply loop through the numbers 1-n where n is the length of the list and check if they are keys in the dictionary; if one of them isn't, return false. If all of them are, return true. Both of these are quite simple approaches and you should be able to implement them yourself. Here is one implementation of each. 1. ``` def solve(list1): seen = {} for i in list1: if i > len(list1): return False if i in seen: return False seen[i] = True return True ``` 2. ``` def solve(list1): seen = {} for i in list1: seen[i] = True for i in range (1, len(list1)+1): if i not in seen: return False return True ```
72,215,886
How do I write Python code that checks whether a list is a valid sequence of consecutive numbers, where the position doesn't matter? It should return true if it is, otherwise false. Below are some examples; I really don't know how to start. ``` b=[1,2,3,4,5] # return true b=[1,2,2,1,3] # return false b=[2,3,1,5,4] # return true b=[2,4,6,4,3] # return false ```
2022/05/12
[ "https://Stackoverflow.com/questions/72215886", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
The sort function is O(n log n); we can use a for loop instead, which is O(n): ``` def check_seq(in_list): now_ele = set() min_ele = max_ele = in_list[0] for i in in_list: if i in now_ele: return False min_ele = min(i, min_ele) max_ele = max(i, max_ele) now_ele.add(i) if max_ele-min_ele+1 == len(in_list): return True return False ```
This solution needs O(n) runtime and O(n) space ```py def is_consecutive(l: list[int]): if not l: return False low = min(l) high = max(l) # Bounds Check if high - low != len(l) - 1: return False # Test all indices exist test_vec = [False] * len(l) # O(n) for i in range(len(l)): test_vec[l[i] - low] = True return all(test_vec) assert is_consecutive(range(10)) assert is_consecutive([-1, 1,0]) assert not is_consecutive([1,1]) assert not is_consecutive([1,2,4,6,5]) ```
72,215,886
How do I write Python code that checks whether a list is a valid sequence of consecutive numbers, where the position doesn't matter? It should return true if it is, otherwise false. Below are some examples; I really don't know how to start. ``` b=[1,2,3,4,5] # return true b=[1,2,2,1,3] # return false b=[2,3,1,5,4] # return true b=[2,4,6,4,3] # return false ```
2022/05/12
[ "https://Stackoverflow.com/questions/72215886", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Create a set of the expected range and compare it with the set of the list -- based on minimum and maximum, with a length check so duplicates are caught: ``` isRightSequence = len(set(b)) == len(b) and set(range(min(b), max(b)+1)) == set(b) ```
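Wrapped in a function and checked against the four examples from the question (note the length comparison: duplicates collapse in a set, so `[1,2,2,1,3]` would otherwise look like the valid `{1,2,3}`):

```python
def is_right_sequence(b):
    # duplicates collapse in a set, so compare lengths first
    return len(b) == len(set(b)) and set(range(min(b), max(b) + 1)) == set(b)

assert is_right_sequence([1, 2, 3, 4, 5])
assert not is_right_sequence([1, 2, 2, 1, 3])
assert is_right_sequence([2, 3, 1, 5, 4])
assert not is_right_sequence([2, 4, 6, 4, 3])
```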
This question is quite simple and can be solved in a few ways. 1. The conditional approach - if there is a number that is bigger than the length of the list, it automatically cannot be a sequence, because there can only be numbers from 1-n where n is the size of the list. Also, you have to check if there are any duplicates in the list, as duplicates cannot occur in a valid sequence either. If neither of these conditions occurs, it should return true. 2. Using a dictionary - go through the entire list and add each element as a key to a dictionary. Afterwards, simply loop through the numbers 1-n where n is the length of the list and check if they are keys in the dictionary; if one of them isn't, return false. If all of them are, return true. Both of these are quite simple approaches and you should be able to implement them yourself. Here is one implementation of each. 1. ``` def solve(list1): seen = {} for i in list1: if i > len(list1): return False if i in seen: return False seen[i] = True return True ``` 2. ``` def solve(list1): seen = {} for i in list1: seen[i] = True for i in range (1, len(list1)+1): if i not in seen: return False return True ```
72,215,886
How do I write Python code that checks whether a list is a valid sequence of consecutive numbers, where the position doesn't matter? It should return true if it is, otherwise false. Below are some examples; I really don't know how to start. ``` b=[1,2,3,4,5] # return true b=[1,2,2,1,3] # return false b=[2,3,1,5,4] # return true b=[2,4,6,4,3] # return false ```
2022/05/12
[ "https://Stackoverflow.com/questions/72215886", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Create a set of the expected range and compare it with the set of the list -- based on minimum and maximum, with a length check so duplicates are caught: ``` isRightSequence = len(set(b)) == len(b) and set(range(min(b), max(b)+1)) == set(b) ```
This solution needs O(n) runtime and O(n) space ```py def is_consecutive(l: list[int]): if not l: return False low = min(l) high = max(l) # Bounds Check if high - low != len(l) - 1: return False # Test all indices exist test_vec = [False] * len(l) # O(n) for i in range(len(l)): test_vec[l[i] - low] = True return all(test_vec) assert is_consecutive(range(10)) assert is_consecutive([-1, 1,0]) assert not is_consecutive([1,1]) assert not is_consecutive([1,2,4,6,5]) ```
46,725,942
I'm writing some calculation tasks which would be efficient in Python or Java, but Sidekiq does not seem to support external consumers. I'm aware there's a workaround to spawn a task using a system call:
```
class MyWorker
  include Sidekiq::Worker

  def perform(*args)
    `python script.py -c args` # and watch out using `ps`
  end
end
```
Is there a better way to do this with rewriting a Sidekiq consumer?
2017/10/13
[ "https://Stackoverflow.com/questions/46725942", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3340588/" ]
You can use `apply` and `which`:
```
df <- data.frame(
  x1 = c(0, 0, 1),
  x2 = c(1, 0, 0),
  x3 = c(0, 1, 0)
)

idx <- apply( df, 1, function(row) which( row == 1 ) )
cbind( df, Number = colnames( df[ , idx] ) )

  x1 x2 x3 Number
1  0  1  0     x2
2  0  0  1     x3
3  1  0  0     x1
```
We can use `max.col` to find the column index of the logical matrix (`df1[-1]=="1+"`). Add 1 to it because we used only from the 2nd column. Then, with `names(df1)`, get the corresponding names:
```
df1$Number <- names(df1)[max.col(df1[-1]=="1+")+1]
df1$Number
#[1] "X3000" "X1234" "X7500"
```
46,725,942
I'm writing some calculation tasks which would be efficient in Python or Java, but Sidekiq does not seem to support external consumers. I'm aware there's a workaround to spawn a task using a system call:
```
class MyWorker
  include Sidekiq::Worker

  def perform(*args)
    `python script.py -c args` # and watch out using `ps`
  end
end
```
Is there a better way to do this with rewriting a Sidekiq consumer?
2017/10/13
[ "https://Stackoverflow.com/questions/46725942", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3340588/" ]
An approach with `which`:
```
dat$Number <- names(dat)[which(dat == "1+", arr.ind = TRUE)[ , 2]]
# [1] "X1234" "X3000" "X7500"
```
We can use `max.col` to find the column index of the logical matrix (`df1[-1]=="1+"`). Add 1 to it because we used only from the 2nd column. Then, with `names(df1)`, get the corresponding names:
```
df1$Number <- names(df1)[max.col(df1[-1]=="1+")+1]
df1$Number
#[1] "X3000" "X1234" "X7500"
```
46,725,942
I'm writing some calculation tasks which would be efficient in Python or Java, but Sidekiq does not seem to support external consumers. I'm aware there's a workaround to spawn a task using a system call:
```
class MyWorker
  include Sidekiq::Worker

  def perform(*args)
    `python script.py -c args` # and watch out using `ps`
  end
end
```
Is there a better way to do this with rewriting a Sidekiq consumer?
2017/10/13
[ "https://Stackoverflow.com/questions/46725942", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3340588/" ]
We can use `max.col` to find the column index of the logical matrix (`df1[-1]=="1+"`). Add 1 to it because we used only from the 2nd column. Then, with `names(df1)`, get the corresponding names:
```
df1$Number <- names(df1)[max.col(df1[-1]=="1+")+1]
df1$Number
#[1] "X3000" "X1234" "X7500"
```
You can also use the `col` function to return the proper variable name index like this:
```
names(mat)[col(mat)[which(mat == "1+")]]
[1] "X1234" "X3000" "X7500"
```
46,725,942
I'm writing some calculation tasks which would be efficient in Python or Java, but Sidekiq does not seem to support external consumers. I'm aware there's a workaround to spawn a task using a system call:
```
class MyWorker
  include Sidekiq::Worker

  def perform(*args)
    `python script.py -c args` # and watch out using `ps`
  end
end
```
Is there a better way to do this with rewriting a Sidekiq consumer?
2017/10/13
[ "https://Stackoverflow.com/questions/46725942", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3340588/" ]
You can use `apply` and `which`:
```
df <- data.frame(
  x1 = c(0, 0, 1),
  x2 = c(1, 0, 0),
  x3 = c(0, 1, 0)
)

idx <- apply( df, 1, function(row) which( row == 1 ) )
cbind( df, Number = colnames( df[ , idx] ) )

  x1 x2 x3 Number
1  0  1  0     x2
2  0  0  1     x3
3  1  0  0     x1
```
You can also use the `col` function to return the proper variable name index like this:
```
names(mat)[col(mat)[which(mat == "1+")]]
[1] "X1234" "X3000" "X7500"
```
46,725,942
I'm writing some calculation tasks which would be efficient in Python or Java, but Sidekiq does not seem to support external consumers. I'm aware there's a workaround to spawn a task using a system call:
```
class MyWorker
  include Sidekiq::Worker

  def perform(*args)
    `python script.py -c args` # and watch out using `ps`
  end
end
```
Is there a better way to do this with rewriting a Sidekiq consumer?
2017/10/13
[ "https://Stackoverflow.com/questions/46725942", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3340588/" ]
An approach with `which`:
```
dat$Number <- names(dat)[which(dat == "1+", arr.ind = TRUE)[ , 2]]
# [1] "X1234" "X3000" "X7500"
```
You can also use the `col` function to return the proper variable name index like this:
```
names(mat)[col(mat)[which(mat == "1+")]]
[1] "X1234" "X3000" "X7500"
```
8,722,182
I'm getting a "DoesNotExist" error with the following set up - I've been trying to debug for a while and just can't figure it out.
```
class Video(models.Model):
    name = models.CharField(max_length=100)
    type = models.CharField(max_length=100)
    owner = models.ForeignKey(User, related_name='videos')
    ... #Related m2m fields ....

class VideoForm(modelForm):
    class Meta:
        model = Video
        fields = ('name', 'type')

class VideoCreate(CreateView):
    template_name = 'video_form.html'
    form_class = VideoForm
    model = Video
```
When I do this and post data for 'name' and 'type' - I get a "DoesNotExist" error. It seems to work fine with an UpdateView - or when an 'instance' is passed to init the form. This is the exact location where the error is raised: /usr/lib/pymodules/python2.7/django/db/models/fields/related.py in **get**, line 301. Does anyone know what might be going on? Thanks
2012/01/04
[ "https://Stackoverflow.com/questions/8722182", "https://Stackoverflow.com", "https://Stackoverflow.com/users/801820/" ]
Since you have not posted your full traceback, my guess is that your owner FK is not optional, and you are not specifying one in your model form. You need to post a full traceback.
I think it has to be class `VideoForm(ModelForm)` instead of `VideoForm(modelForm)`. If you aren't going to use the foreign key in the form, use `exclude = ('owner',)`.
21,356,122
I have a small project at home, where I need to scrape a website for links every once in a while and save the links in a txt file. The script needs to run on my Synology NAS, therefore the script needs to be written in bash script or python without using any plugins or external libraries, as I can't install them on the NAS (to my knowledge anyhow). A link looks like this:
```
<a href="http://www.example.com">Example text</a>
```
I want to save the following to my text file:
```
Example text - http://www.example.com
```
I was thinking I could isolate the text with curl and some grep (or perhaps regex). First I looked into using Scrapy or BeautifulSoup, but couldn't find a way to install them on the NAS. Could one of you help me put a script together?
2014/01/25
[ "https://Stackoverflow.com/questions/21356122", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3235644/" ]
You can use `urllib2`, which ships free with Python. Using it you can easily get the html of any url:
```
import urllib2
response = urllib2.urlopen('http://python.org/')
html = response.read()
```
Now, about parsing the html. You can still use `BeautifulSoup` without installing it. From [their site](http://www.crummy.com/software/BeautifulSoup/), it says "*You can also download the tarball and use BeautifulSoup.py in your project directly*". So search on the internet for that `BeautifulSoup.py` file. If you can't find it, then download [this one](https://svn.apache.org/repos/infra/infrastructure/trunk/projects/stats/BeautifulSoup.py) and save it into a local file inside your project. Then use it like below:
```
soup = BeautifulSoup(html)
for link in soup("a"):
    print link["href"]
    print link.renderContents()
```
I recommend using Python's htmlparser library. It will parse the page into a hierarchy of objects for you. You can then find the a href tags. <http://docs.python.org/2/library/htmlparser.html> There are lots of examples of using this library to find links, so I won't list all of the code, but here is a working example: [Extract absolute links from a page using HTMLParser](https://stackoverflow.com/questions/6816138/extract-absolute-links-from-a-page-uisng-htmlparser) **EDIT:** As Oday pointed out, the htmlparser is an external library, and you may not be able to load it. In that case, here are two recommendations for built-in modules that can do what you need: * `htmllib` is included in Python 2.X. * `xml` is includes in Python 2.X and 3.X. There is also a good explanation elsewhere on this site for how to use wget & grep to do the same thing: [Spider a Website and Return URLs Only](https://stackoverflow.com/questions/2804467/spider-a-website-and-return-urls-only)
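To make the stdlib-parser route concrete, here is a minimal sketch using the built-in HTML parser (shown with the Python 3 module name `html.parser`; on Python 2 the module is `HTMLParser`). The class name and the `text - href` output format are illustrative, matching the format the question asks for:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect '<text> - <href>' strings for every <a href=...> tag."""
    def __init__(self):
        HTMLParser.__init__(self)
        self.links = []
        self._href = None   # href of the <a> we are currently inside
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append("".join(self._text).strip() + " - " + self._href)
            self._href = None

parser = LinkCollector()
parser.feed('<p><a href="http://www.example.com">Example text</a></p>')
print(parser.links)  # ['Example text - http://www.example.com']
```

The collected strings could then be written out with a plain `open(...).writelines(...)`, which needs nothing beyond the standard library.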
21,356,122
I have a small project at home, where I need to scrape a website for links every once in a while and save the links in a txt file. The script needs to run on my Synology NAS, therefore the script needs to be written in bash script or python without using any plugins or external libraries, as I can't install them on the NAS (to my knowledge anyhow). A link looks like this:
```
<a href="http://www.example.com">Example text</a>
```
I want to save the following to my text file:
```
Example text - http://www.example.com
```
I was thinking I could isolate the text with curl and some grep (or perhaps regex). First I looked into using Scrapy or BeautifulSoup, but couldn't find a way to install them on the NAS. Could one of you help me put a script together?
2014/01/25
[ "https://Stackoverflow.com/questions/21356122", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3235644/" ]
You can use `urllib2`, which ships free with Python. Using it you can easily get the html of any url:
```
import urllib2
response = urllib2.urlopen('http://python.org/')
html = response.read()
```
Now, about parsing the html. You can still use `BeautifulSoup` without installing it. From [their site](http://www.crummy.com/software/BeautifulSoup/), it says "*You can also download the tarball and use BeautifulSoup.py in your project directly*". So search on the internet for that `BeautifulSoup.py` file. If you can't find it, then download [this one](https://svn.apache.org/repos/infra/infrastructure/trunk/projects/stats/BeautifulSoup.py) and save it into a local file inside your project. Then use it like below:
```
soup = BeautifulSoup(html)
for link in soup("a"):
    print link["href"]
    print link.renderContents()
```
Based on your example, you need something like this:
```
wget -q -O- https://dl.dropboxusercontent.com/s/wm6mt2ew0nnqdu6/links.html?dl=1 | sed -r 's#<a href="([^"]+)">([^<]+)</a>.*$#\2 - \1#' > links.txt
```
`cat links.txt` **outputs:**
```
1Visit W3Schools - http://www.w3schools.com/
2Visit W3Schools - http://www.w3schools.com/
3Visit W3Schools - http://www.w3schools.com/
4Visit W3Schools - http://www.w3schools.com/
5Visit W3Schools - http://www.w3schools.com/
6Visit W3Schools - http://www.w3schools.com/
7Visit W3Schools - http://www.w3schools.com/
```
22,180,285
My python function is given a (long) list of path arguments, each of which can possibly be a glob. I make a pass over this list using `glob.glob` to extract all the matching filenames, like this:
```
files = [filename for pattern in patterns for filename in glob.glob(pattern)]
```
That works, but the filesystem I'm on has very poor performance for directory listing operations, and currently this operation adds about a minute(!) to the start-up time of my program. So I would like to only perform glob expansion for non-trivial glob patterns (i.e. those that aren't just normal pathnames) to speed this up. I.e.
```
def cheapglob(pattern):
    return [pattern] if istrivial(pattern) else glob.glob(pattern)

files = [filename for pattern in patterns for filename in cheapglob(pattern)]
```
Since `glob.glob` basically does a set of directory listings coupled with `fnmatch.fnmatch`, I thought it should be possible to somehow ask `fnmatch` whether a given string is a non-trivial pattern or not, but I can't see how to do that. As a fallback, I guess I could attempt to identify these patterns in the string myself, though that feels a lot like reinventing the wheel, and would be error prone. But this feels like the sort of thing there should be an elegant solution for.
2014/03/04
[ "https://Stackoverflow.com/questions/22180285", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1061433/" ]
According to [the fnmatch source code](http://code.google.com/p/unladen-swallow/source/browse/trunk/Lib/fnmatch.py), the only special characters it recognizes are `*`, `?`, `[` and `]`. Hence any pattern that does not contain any of these will only match itself. We can therefore implement the `cheapglob` mentioned in the question as
```
def cheapglob(s):
    return glob.glob(s) if re.search("[][*?]", s) else [s]
```
This will only hit the file system for patterns which include special characters. This differs subtly from a plain `glob.glob`: For a pattern with no special characters like "foo.txt", this function will return `["foo.txt"]` regardless of whether that file exists, while `glob.glob` will return `[]` if the file isn't there. So the calling function will need to handle the possibility that some of the returned files might not exist.
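For illustration, the pattern test on its own can be split out into a tiny predicate (the `istrivial` name mirrors the question; the sample paths are made up) and checked without touching the filesystem at all:

```python
import re

def istrivial(pattern):
    # a pattern with none of fnmatch's special characters matches only itself
    return re.search(r"[][*?]", pattern) is None

print(istrivial("data/report.txt"))  # True
print(istrivial("data/*.txt"))       # False
print(istrivial("file[0-9].log"))    # False
```

The character class `[][*?]` is the usual regex idiom for matching any of `]`, `[`, `*`, `?`: a literal `]` is allowed without escaping when it is the first character inside the class.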
I don't think you'll find much, as your idea of a trivial pattern might not be mine. Also, from a comp-sci point of view, it might be impossible to tell from inspection whether a pushdown automata is going to run in a set amount of time given the inputs you're running it against, without actually running it against those inputs. I strongly suspect you'd be better off here loading the directory listing once and then applying `fnmatch` against that list manually.
60,538,828
Here I am trying to scrape the teacher jobs from <https://www.indeed.co.in/?r=us>. I want to get them uploaded to an excel sheet like jobtitle, institute/school, salary, howmanydaysagoposted. I wrote the code for scraping like this but I am getting all the text from the xpath which I defined:
```
import selenium.webdriver
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions

url = 'https://www.indeed.co.in/?r=us'
driver = webdriver.Chrome(r"mypython/bin/chromedriver_linux64/chromedriver")
driver.get(url)
driver.find_element_by_xpath('//*[@id="text-input-what"]').send_keys("teacher")
driver.find_element_by_xpath('//*[@id="whatWhereFormId"]/div[3]/button').click()
items = driver.find_elements_by_xpath('//*[@id="resultsCol"]')
for item in items:
    print(item.text)
```
And I am only able to scrape one page; I want all the pages that are available after I search for teacher. Please help me. Thanks in advance.
2020/03/05
[ "https://Stackoverflow.com/questions/60538828", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12798693/" ]
I'd encourage you to check out beautiful soup <https://pypi.org/project/beautifulsoup4/>. I've used this for scraping tables:
```
def read_table(table):
    """Read an IP Address table.

    Args:
      table: the Soup <table> element

    Returns:
      None if the table isn't an IP Address table, otherwise a list of
      the IP Address:port values.
    """
    header = None
    rows = []
    for tr in table.find_all('tr'):
        if header is None:
            header = read_header(tr)
            if not header or header[0] != 'IP Address':
                return None
        else:
            row = read_row(tr)
            if row:
                rows.append('{}:{}'.format(row[0], row[1]))
    return rows
```
Here is just a snippet from one of my python projects <https://github.com/backslash/WebScrapers/blob/master/us-proxy-scraper/us-proxy.py>. You can use beautiful soup to scrape tables incredibly easily; if you're worried about it getting blocked, you just need to send the right headers. Another advantage of using beautiful soup is that you don't have to wait as long for a lot of stuff.
```
HEADERS = requests.utils.default_headers()
HEADERS.update({
    'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0',
})
```
Best of luck
You'll have to navigate to every page and **scrape** them one by one, i.e. you'll have to automate clicking the next page button in Selenium (use the XPath of the Next Page button element). Then extract using the page source function. Hope I could help.
60,538,828
Here I am trying to scrape the teacher jobs from <https://www.indeed.co.in/?r=us>. I want to get them uploaded to an excel sheet like jobtitle, institute/school, salary, howmanydaysagoposted. I wrote the code for scraping like this but I am getting all the text from the xpath which I defined:
```
import selenium.webdriver
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions

url = 'https://www.indeed.co.in/?r=us'
driver = webdriver.Chrome(r"mypython/bin/chromedriver_linux64/chromedriver")
driver.get(url)
driver.find_element_by_xpath('//*[@id="text-input-what"]').send_keys("teacher")
driver.find_element_by_xpath('//*[@id="whatWhereFormId"]/div[3]/button').click()
items = driver.find_elements_by_xpath('//*[@id="resultsCol"]')
for item in items:
    print(item.text)
```
And I am only able to scrape one page; I want all the pages that are available after I search for teacher. Please help me. Thanks in advance.
2020/03/05
[ "https://Stackoverflow.com/questions/60538828", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12798693/" ]
Try this; don't forget to import the selenium modules. (The scraping block was duplicated inside its own loop in the original; here it is restructured as one page loop that scrapes the results and then clicks the next page button.)
```
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

url = 'https://www.indeed.co.in/?r=us'
driver.get(url)
driver.find_element_by_xpath('//*[@id="text-input-what"]').send_keys("teacher")
driver.find_element_by_xpath('//*[@id="whatWhereFormId"]/div[3]/button').click()

while True:
    # scrape data on the current page
    data = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, "resultsCol")))
    result_set = WebDriverWait(data, 10).until(
        EC.presence_of_all_elements_located((By.CLASS_NAME, "jobsearch-SerpJobCard")))
    for result in result_set:
        title = result.find_element_by_class_name("title").text
        print(title)
        school = result.find_element_by_class_name("company").text
        print(school)
        try:
            salary = result.find_element_by_class_name("salary").text
            print(salary)
        except:
            # some results have no salary
            pass
        print("--------")

    # move to next page
    next_page = driver.find_elements_by_xpath("//span[@class='pn']")[-1]
    driver.execute_script("arguments[0].click();", next_page)
```
You'll have to navigate to every page and **scrape** them one by one, i.e. you'll have to automate clicking the next page button in Selenium (use the XPath of the Next Page button element). Then extract using the page source function. Hope I could help.
22,258,738
I am trying to list items in a S3 container with the following code.
```
import boto.s3
from boto.s3.connection import OrdinaryCallingFormat

conn = boto.connect_s3(calling_format=OrdinaryCallingFormat())
mybucket = conn.get_bucket('Container001')
for key in mybucket.list():
    print key.name.encode('utf-8')
```
Then I get the following error.
```
Traceback (most recent call last):
  File "test.py", line 5, in <module>
    mybucket = conn.get_bucket('Container001')
  File "/usr/lib/python2.7/dist-packages/boto/s3/connection.py", line 370, in get_bucket
    bucket.get_all_keys(headers, maxkeys=0)
  File "/usr/lib/python2.7/dist-packages/boto/s3/bucket.py", line 358, in get_all_keys
    '', headers, **params)
  File "/usr/lib/python2.7/dist-packages/boto/s3/bucket.py", line 325, in _get_all
    response.status, response.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 301 Moved Permanently
<?xml version="1.0" encoding="UTF-8"?>
PermanentRedirectThe bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.99EBDB9DE3B6E3AF Container001 <HostId>5El9MLfgHZmZ1UNw8tjUDAl+XltYelHu6d/JUNQsG3OaM70LFlpRchEJi9oepeMy</HostId><Endpoint>Container001.s3.amazonaws.com</Endpoint></Error>
```
I tried to search for how to send requests to the specified end point, but couldn't find useful information. How do I avoid this error?
2014/03/07
[ "https://Stackoverflow.com/questions/22258738", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3394040/" ]
As @garnaat mentioned and @Rico [answered in another question](https://stackoverflow.com/a/22462419/3162882), `connect_to_region` works with `OrdinaryCallingFormat`:
```
conn = boto.s3.connect_to_region(
   region_name = '<your region>',
   aws_access_key_id = '<access key>',
   aws_secret_access_key = '<secret key>',
   calling_format = boto.s3.connection.OrdinaryCallingFormat()
)

bucket = conn.get_bucket('<bucket name>')
```
In a terminal run

> nano ~/.boto

If there are some configs there, try to comment them out or rename the file and connect again (that helped me). <http://boto.cloudhackers.com/en/latest/boto_config_tut.html> lists the boto config file directories. Take a look at them one by one and clean them all; it will work with the default configs. Configs may also be in .bash_profile, .bash_source... I guess you must allow only KEY-SECRET. Also try to use

> calling_format = boto.s3.connection.OrdinaryCallingFormat()
56,832,149
I have an awkward csv file and I need to skip the first row to read it. I'm doing this easily with python/pandas
```
df = pd.read_csv(filename, skiprows=1)
```
but I don't know how to do it in Go.
```
package main

import (
    "encoding/csv"
    "fmt"
    "log"
    "os"
)

type mwericsson struct {
    id     string
    name   string
    region string
}

func main() {
    rows := readSample()
    fmt.Println(rows)
    //appendSum(rows)
    //writeChanges(rows)
}

func readSample() [][]string {
    f, err := os.Open("D:/in/20190629/PM_IG30014_15_201906290015_01.csv")
    if err != nil {
        log.Fatal(err)
    }
    rows, err := csv.NewReader(f).ReadAll()
    f.Close()
    if err != nil {
        log.Fatal(err)
    }
    return rows
}
```
Error:
```
2019/07/01 12:38:40 record on line 2: wrong number of fields
```
`PM_IG30014_15_201906290015_01.csv`:
```
PTN Ethernet-Port RMON Performance,PORT_BW_UTILIZATION,2019-06-29 20:00:00,33366
DeviceID,DeviceName,ResourceName,CollectionTime,GranularityPeriod,PORT_RX_BW_UTILIZATION,PORT_TX_BW_UTILIZATION,RXGOODFULLFRAMESPEED,TXGOODFULLFRAMESPEED,PORT_RX_BW_UTILIZATION_MAX,PORT_TX_BW_UTILIZATION_MAX
3174659,H1095,H1095-11-ISM6-1(to ZJBSC-V1),2019-06-29 20:00:00,15,22.08,4.59,,,30.13,6.98
3174659,H1095,H1095-14-ISM6-1(to T6147-V),2019-06-29 20:00:00,15,2.11,10.92,,,4.43,22.45
```
2019/07/01
[ "https://Stackoverflow.com/questions/56832149", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8660255/" ]
> skip the first row when reading a csv file

---

For example,
```
package main

import (
    "bufio"
    "encoding/csv"
    "fmt"
    "io"
    "os"
)

func readSample(rs io.ReadSeeker) ([][]string, error) {
    // Skip first row (line)
    row1, err := bufio.NewReader(rs).ReadSlice('\n')
    if err != nil {
        return nil, err
    }
    _, err = rs.Seek(int64(len(row1)), io.SeekStart)
    if err != nil {
        return nil, err
    }

    // Read remaining rows
    r := csv.NewReader(rs)
    rows, err := r.ReadAll()
    if err != nil {
        return nil, err
    }
    return rows, nil
}

func main() {
    f, err := os.Open("sample.csv")
    if err != nil {
        panic(err)
    }
    defer f.Close()

    rows, err := readSample(f)
    if err != nil {
        panic(err)
    }
    fmt.Println(rows)
}
```
Output:
```
$ cat sample.csv
one,two,three,four
1,2,3
4,5,6
$ go run sample.go
[[1 2 3] [4 5 6]]
$
$ cat sample.csv
PTN Ethernet-Port RMON Performance,PORT_BW_UTILIZATION,2019-06-29 20:00:00,33366
DeviceID,DeviceName,ResourceName,CollectionTime,GranularityPeriod,PORT_RX_BW_UTILIZATION,PORT_TX_BW_UTILIZATION,RXGOODFULLFRAMESPEED,TXGOODFULLFRAMESPEED,PORT_RX_BW_UTILIZATION_MAX,PORT_TX_BW_UTILIZATION_MAX
3174659,H1095,H1095-11-ISM6-1(to ZJBSC-V1),2019-06-29 20:00:00,15,22.08,4.59,,,30.13,6.98
3174659,H1095,H1095-14-ISM6-1(to T6147-V),2019-06-29 20:00:00,15,2.11,10.92,,,4.43,22.45
$ go run sample.go
[[DeviceID DeviceName ResourceName CollectionTime GranularityPeriod PORT_RX_BW_UTILIZATION PORT_TX_BW_UTILIZATION RXGOODFULLFRAMESPEED TXGOODFULLFRAMESPEED PORT_RX_BW_UTILIZATION_MAX PORT_TX_BW_UTILIZATION_MAX] [3174659 H1095 H1095-11-ISM6-1(to ZJBSC-V1) 2019-06-29 20:00:00 15 22.08 4.59   30.13 6.98] [3174659 H1095 H1095-14-ISM6-1(to T6147-V) 2019-06-29 20:00:00 15 2.11 10.92   4.43 22.45]]
$
```
Simply call [`Reader.Read()`](https://golang.org/pkg/encoding/csv/#Reader.Read) to read a line, then proceed to read the rest with [`Reader.ReadAll()`](https://golang.org/pkg/encoding/csv/#Reader.ReadAll). See this example:
```
src := "one,two,three\n1,2,3\n4,5,6"
r := csv.NewReader(strings.NewReader(src))

if _, err := r.Read(); err != nil {
    panic(err)
}

records, err := r.ReadAll()
if err != nil {
    panic(err)
}
fmt.Println(records)
```
Output (try it on the [Go Playground](https://play.golang.org/p/UgAA3VyEKP-)):
```
[[1 2 3] [4 5 6]]
```
56,832,149
I have an awkward csv file and I need to skip the first row to read it. I'm doing this easily with python/pandas
```
df = pd.read_csv(filename, skiprows=1)
```
but I don't know how to do it in Go.
```
package main

import (
    "encoding/csv"
    "fmt"
    "log"
    "os"
)

type mwericsson struct {
    id     string
    name   string
    region string
}

func main() {
    rows := readSample()
    fmt.Println(rows)
    //appendSum(rows)
    //writeChanges(rows)
}

func readSample() [][]string {
    f, err := os.Open("D:/in/20190629/PM_IG30014_15_201906290015_01.csv")
    if err != nil {
        log.Fatal(err)
    }
    rows, err := csv.NewReader(f).ReadAll()
    f.Close()
    if err != nil {
        log.Fatal(err)
    }
    return rows
}
```
Error:
```
2019/07/01 12:38:40 record on line 2: wrong number of fields
```
`PM_IG30014_15_201906290015_01.csv`:
```
PTN Ethernet-Port RMON Performance,PORT_BW_UTILIZATION,2019-06-29 20:00:00,33366
DeviceID,DeviceName,ResourceName,CollectionTime,GranularityPeriod,PORT_RX_BW_UTILIZATION,PORT_TX_BW_UTILIZATION,RXGOODFULLFRAMESPEED,TXGOODFULLFRAMESPEED,PORT_RX_BW_UTILIZATION_MAX,PORT_TX_BW_UTILIZATION_MAX
3174659,H1095,H1095-11-ISM6-1(to ZJBSC-V1),2019-06-29 20:00:00,15,22.08,4.59,,,30.13,6.98
3174659,H1095,H1095-14-ISM6-1(to T6147-V),2019-06-29 20:00:00,15,2.11,10.92,,,4.43,22.45
```
2019/07/01
[ "https://Stackoverflow.com/questions/56832149", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8660255/" ]
Simply call [`Reader.Read()`](https://golang.org/pkg/encoding/csv/#Reader.Read) to read a line, then proceed to read the rest with [`Reader.ReadAll()`](https://golang.org/pkg/encoding/csv/#Reader.ReadAll). See this example:
```
src := "one,two,three\n1,2,3\n4,5,6"
r := csv.NewReader(strings.NewReader(src))

if _, err := r.Read(); err != nil {
    panic(err)
}

records, err := r.ReadAll()
if err != nil {
    panic(err)
}
fmt.Println(records)
```
Output (try it on the [Go Playground](https://play.golang.org/p/UgAA3VyEKP-)):
```
[[1 2 3] [4 5 6]]
```
While it was informative to learn about io.ReadSeeker, I think a simpler way to skip the first line/row (often the header) of a csv is to use the slice functionality as follows:
```
func readCsv(filename string) [][]string {
    f, err := os.Open(filename)
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    records := [][]string{}
    r := csv.NewReader(f)
    for {
        record, err := r.Read()
        if err == io.EOF {
            break
        }
        if err != nil {
            log.Fatal(err)
        }
        records = append(records, record)
    }
    return records[1:] // skip the header
}
```
56,832,149
I have an awkward csv file and I need to skip the first row to read it. I'm doing this easily with python/pandas
```
df = pd.read_csv(filename, skiprows=1)
```
but I don't know how to do it in Go.
```
package main

import (
    "encoding/csv"
    "fmt"
    "log"
    "os"
)

type mwericsson struct {
    id     string
    name   string
    region string
}

func main() {
    rows := readSample()
    fmt.Println(rows)
    //appendSum(rows)
    //writeChanges(rows)
}

func readSample() [][]string {
    f, err := os.Open("D:/in/20190629/PM_IG30014_15_201906290015_01.csv")
    if err != nil {
        log.Fatal(err)
    }
    rows, err := csv.NewReader(f).ReadAll()
    f.Close()
    if err != nil {
        log.Fatal(err)
    }
    return rows
}
```
Error:
```
2019/07/01 12:38:40 record on line 2: wrong number of fields
```
`PM_IG30014_15_201906290015_01.csv`:
```
PTN Ethernet-Port RMON Performance,PORT_BW_UTILIZATION,2019-06-29 20:00:00,33366
DeviceID,DeviceName,ResourceName,CollectionTime,GranularityPeriod,PORT_RX_BW_UTILIZATION,PORT_TX_BW_UTILIZATION,RXGOODFULLFRAMESPEED,TXGOODFULLFRAMESPEED,PORT_RX_BW_UTILIZATION_MAX,PORT_TX_BW_UTILIZATION_MAX
3174659,H1095,H1095-11-ISM6-1(to ZJBSC-V1),2019-06-29 20:00:00,15,22.08,4.59,,,30.13,6.98
3174659,H1095,H1095-14-ISM6-1(to T6147-V),2019-06-29 20:00:00,15,2.11,10.92,,,4.43,22.45
```
2019/07/01
[ "https://Stackoverflow.com/questions/56832149", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8660255/" ]
> skip the first row when reading a csv file

---

For example,
```
package main

import (
    "bufio"
    "encoding/csv"
    "fmt"
    "io"
    "os"
)

func readSample(rs io.ReadSeeker) ([][]string, error) {
    // Skip first row (line)
    row1, err := bufio.NewReader(rs).ReadSlice('\n')
    if err != nil {
        return nil, err
    }
    _, err = rs.Seek(int64(len(row1)), io.SeekStart)
    if err != nil {
        return nil, err
    }

    // Read remaining rows
    r := csv.NewReader(rs)
    rows, err := r.ReadAll()
    if err != nil {
        return nil, err
    }
    return rows, nil
}

func main() {
    f, err := os.Open("sample.csv")
    if err != nil {
        panic(err)
    }
    defer f.Close()

    rows, err := readSample(f)
    if err != nil {
        panic(err)
    }
    fmt.Println(rows)
}
```
Output:
```
$ cat sample.csv
one,two,three,four
1,2,3
4,5,6
$ go run sample.go
[[1 2 3] [4 5 6]]
$
$ cat sample.csv
PTN Ethernet-Port RMON Performance,PORT_BW_UTILIZATION,2019-06-29 20:00:00,33366
DeviceID,DeviceName,ResourceName,CollectionTime,GranularityPeriod,PORT_RX_BW_UTILIZATION,PORT_TX_BW_UTILIZATION,RXGOODFULLFRAMESPEED,TXGOODFULLFRAMESPEED,PORT_RX_BW_UTILIZATION_MAX,PORT_TX_BW_UTILIZATION_MAX
3174659,H1095,H1095-11-ISM6-1(to ZJBSC-V1),2019-06-29 20:00:00,15,22.08,4.59,,,30.13,6.98
3174659,H1095,H1095-14-ISM6-1(to T6147-V),2019-06-29 20:00:00,15,2.11,10.92,,,4.43,22.45
$ go run sample.go
[[DeviceID DeviceName ResourceName CollectionTime GranularityPeriod PORT_RX_BW_UTILIZATION PORT_TX_BW_UTILIZATION RXGOODFULLFRAMESPEED TXGOODFULLFRAMESPEED PORT_RX_BW_UTILIZATION_MAX PORT_TX_BW_UTILIZATION_MAX] [3174659 H1095 H1095-11-ISM6-1(to ZJBSC-V1) 2019-06-29 20:00:00 15 22.08 4.59   30.13 6.98] [3174659 H1095 H1095-14-ISM6-1(to T6147-V) 2019-06-29 20:00:00 15 2.11 10.92   4.43 22.45]]
$
```
While it was informative to learn about io.ReadSeeker, I think a simpler way to skip the first line/row (often the header) of a csv is to use the slice functionality as follows:
```
func readCsv(filename string) [][]string {
    f, err := os.Open(filename)
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    records := [][]string{}
    r := csv.NewReader(f)
    for {
        record, err := r.Read()
        if err == io.EOF {
            break
        }
        if err != nil {
            log.Fatal(err)
        }
        records = append(records, record)
    }
    return records[1:] // skip the header
}
```
11,774,163
this is the idea. I'll have 'main' python script that will start (using subprocess) app1 and app2. 'main' script will send input to app1 and output result to app2 and vice versa (and main script will need to remember what was sent so I can't send pipe from app1 to app2). This is main script.
```
import subprocess
import time

def main():
    prvi = subprocess.Popen(['python', 'random1.py'], stdin = subprocess.PIPE , stdout = subprocess.PIPE, stderr = subprocess.STDOUT)
    while 1:
        prvi.stdin.write('131231\n')
        time.sleep(1) # maybe it needs to wait
        print "procitano", prvi.stdout.read()

if __name__ == '__main__':
    main()
```
And this is 'random1.py' file.
```
import random

def main():
    while 1:
        inp = raw_input()
        print inp, random.random()

if __name__ == '__main__':
    main()
```
First I've tried with only one subprocess just to see if it's working. And it's not. It only outputs 'procitano' and waits there. How can I read output from 'prvi' (without communicate(). When I use it, it exits my app and that's something that I don't want)?
2012/08/02
[ "https://Stackoverflow.com/questions/11774163", "https://Stackoverflow.com", "https://Stackoverflow.com/users/554778/" ]
Add `prvi.stdin.flush()` after `prvi.stdin.write(...)`. Explanation: To optimize communication between processes, the OS will buffer 4KB of data before it sends that whole buffer to the other process. If you send less data, you need to tell the OS "That's it. Send it *now*" -> `flush()` **[EDIT]** The next problem is that `prvi.stdout.read()` will never return since the child doesn't exit. You will need to develop a protocol between the processes, so each knows how many bytes of data to read when it gets something. A simple solution is to use a line based protocol (each "message" is terminated by a new line). To do that, replace `read()` with `readline()` and don't forget to append `\n` to everything you send + `flush()`
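A minimal sketch of such a line-based exchange (Python 3 syntax for brevity; the child is inlined via `-c` so the snippet is self-contained, but in the original setup you would spawn `random1.py` the same way):

```python
import subprocess
import sys

# Child: read one message per line, answer with one line per message.
child_src = (
    "import sys\n"
    "for line in sys.stdin:\n"
    "    print('echo:', line.strip())\n"
    "    sys.stdout.flush()\n"
)

child = subprocess.Popen(
    [sys.executable, "-c", child_src],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

replies = []
for msg in ("131231", "462"):
    child.stdin.write(msg + "\n")   # '\n' terminates one message
    child.stdin.flush()             # don't wait for the OS pipe buffer to fill
    replies.append(child.stdout.readline().strip())  # read exactly one reply

child.stdin.close()
child.wait()
print(replies)
```

Because each side writes one newline-terminated message and flushes, neither side ever blocks waiting for an EOF that never comes.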
**main.py** ``` import subprocess import time def main(): prvi = subprocess.Popen(['python', 'random1.py'], stdin = subprocess.PIPE , stdout = subprocess.PIPE, stderr = subprocess.STDOUT) prvi.stdin.write('131231\n') time.sleep(1) # maybe it needs to wait print "procitano", prvi.stdout.read() if __name__ == '__main__': main() ``` **random1.py** ``` import random def main(): inp = raw_input() print inp, random.random() inp = raw_input() if __name__ == '__main__': main() ``` I've tested with the code above and got the same problem as you. I think the problem is timing. Here is my guess: when main.py tries the line below ``` prvi.stdout.read() # i think this code may use the random1.py process ``` the line below grabs the random1.py process ``` inp = raw_input() ``` To solve this problem, I think, as Aaron Digulla says, you need to develop the protocol to make it work.
11,774,163
this is the idea. I'll have a 'main' python script that will start (using subprocess) app1 and app2. The 'main' script will send input to app1 and the resulting output to app2 and vice versa (and the main script will need to remember what was sent, so I can't pipe app1 directly to app2). This is the main script. ``` import subprocess import time def main(): prvi = subprocess.Popen(['python', 'random1.py'], stdin = subprocess.PIPE , stdout = subprocess.PIPE, stderr = subprocess.STDOUT) while 1: prvi.stdin.write('131231\n') time.sleep(1) # maybe it needs to wait print "procitano", prvi.stdout.read() if __name__ == '__main__': main() ``` And this is the 'random1.py' file. ``` import random def main(): while 1: inp = raw_input() print inp, random.random() if __name__ == '__main__': main() ``` First I've tried with only one subprocess just to see if it's working. And it's not. It only outputs 'procitano' and waits there. How can I read output from 'prvi' without communicate()? (When I use communicate(), it exits my app, and that's something I don't want.)
2012/08/02
[ "https://Stackoverflow.com/questions/11774163", "https://Stackoverflow.com", "https://Stackoverflow.com/users/554778/" ]
Add `prvi.stdin.flush()` after `prvi.stdin.write(...)`. Explanation: To optimize communication between processes, the OS will buffer 4KB of data before it sends that whole buffer to the other process. If you send less data, you need to tell the OS "That's it. Send it *now*" -> `flush()` **[EDIT]** The next problem is that `prvi.stdout.read()` will never return since the child doesn't exit. You will need to develop a protocol between the processes, so each knows how many bytes of data to read when it gets something. A simple solution is to use a line based protocol (each "message" is terminated by a new line). To do that, replace `read()` with `readline()` and don't forget to append `\n` to everything you send + `flush()`
* use the `-u` flag to make random1.py's output unbuffered * use `p.stdout.readline()` instead of `.read()` The `time.sleep` is unnecessary, because the read call blocks until data arrives.
4,181,573
Is there such a program that can open a little input box and send the input to stdout? If there isn't, any suggestions for how to do this (maybe python with TkInter)?
2010/11/15
[ "https://Stackoverflow.com/questions/4181573", "https://Stackoverflow.com", "https://Stackoverflow.com/users/507857/" ]
If you're looking for something that works in text mode, then [`dialog`](http://linux.die.net/man/1/dialog) or [`whiptail`](http://linux.die.net/man/1/whiptail) are two options.
The oldest would probably be [dialog](http://www.linuxjournal.com/article/2807). Another example of such a program is [Zenity](http://freshmeat.net/projects/zenity) and another would be [Xdialog](http://xdialog.free.fr/) (all pretty much replacements for dialog). They tend to do more than just accepting user input, and you can create some fairly complex GUI dialogs with them, but are simple enough to do what you want very easily.
10,592,891
Dear Stack Overflow community, I'm writing in hopes that you might be able to help me connect to an 802.15.4 wireless transceiver using C# or C++. Let me explain a little bit about my project. This semester, I spent some time developing a wireless sensor board that would transmit light, temperature, humidity, and motion detection levels every 8 seconds to a USB wireless transceiver. Now, I didn't develop the USB transceiver. One of the TA's for the course did, and he helped me throughout the development process for my sensor board (it was my first real PCB). Now, I've got the sensor board programmed and I know it's sending the data to the transceiver. The reason I know this is that this TA wrote a simple python module that would pull the latest packet of information from the transceiver (whenever it was received), unpack the hex message, and convert some of the sensor data into working units (like degrees Celsius, % relative humidity, etc.) The problem is that the python module works on his computer (Mac) but not on mine (Windows 7). Basically, he's using a library called zigboard to unpack the sensor message, as well as pyusb and pyserial libraries in the sketch. The 802.15.4 wireless transceiver automatically enumerates itself on a Mac, but runs into larger issues when running on a PC. Basically, I believe the issue lies in the lack of having a signed driver. I'm using libusb to generate the .inf file for this particular device... and I know it's working on my machine because there is an LED on my sensor board and on the transceiver which blink when a message is sent/received. However, when I run the same python module that this TA runs on his machine, I get an error message about missing some Windows Backend Binaries and thus, it never really gets to the stage where it returns the data. But, the larger issue isn't with this python module. The bigger issue is that I don't want to have to use Python. 
This sensor board is going to be part of a larger project in which I'll be designing a software interface in C# or C++ to do many different things (some of which involve dealing with this sensor data). So, ultimately I want to be able to work in .NET in order to access the data from this transceiver. However, all I have to go on is this python sketch (which won't even run on my machine). I know the easiest thing to do would be to ask this TA more questions about how to get this to work on my machine... but I've already monopolized a ton of his time this semester regarding this project and additionally he's currently out of town. Also, his preference is python, whereas I'm most comfortable in C# or C++ and would like to use that environment for this project. Now, I would say I'm competent in electronics and programming (but certainly not an expert... my background is actually in architecture). But, if anyone could help me develop some code so I could unpack the sensor message being sent from the board, it would be greatly appreciated. I've attached the Python sketch below which is what the TA uses to unpack his sensor messages on his machine (but like I said... I had issues on my windows machine). Does anyone have any suggestions? Thanks again. ``` from zigboard import ZigBoard from struct import unpack from time import sleep, time zb = ZigBoard() lasttime = time() while True: pkt = zb.receive() if pkt is None: sleep(0.01) continue if len(pkt.data) < 10: print "short packet" sleep(0.01) continue data = pkt.data[:10] cmd, bat, light, SOt, SOrh, pir = unpack("<BBHHHH", data) lasttime = time() d1 = -39.6 d2 = 0.01 c1 = -2.0468 c2 = 0.0367 c3 = -1.5955E-6 t1 = 0.01 t2 = 0.00008 sht15_tmp = d1 + d2 * float(SOt); RHL = c1 + c2 * SOrh + c3 * float(SOrh)**2 sht15_rh = (sht15_tmp - 25.0) * (t1 + t2 * float(SOrh)) + RHL print "address: 0x%04x" % pkt.src_addr print "temperature:", sht15_tmp print "humidity:", sht15_rh print "light:", light print "motion:", pir print ```
2012/05/15
[ "https://Stackoverflow.com/questions/10592891", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1394922/" ]
Thanks everyone for the help. The key to everything was using [LibUSBDotNet](http://sourceforge.net/projects/libusbdotnet/). Once I had installed and referenced that into my project... I was able to create a console window that could handle the incoming sensor data. I did need to port some of the functions from the original Zigboard library... but on
I'm not 100% sure exactly how to do this, but after having a quick look around I can see that the core of the problem is that you need to implement something like the ZigBoard lib in C#. The ZigBoard lib uses a python USB lib to communicate with the USB device using an API; you should be able to use [LibUsbDotNet](http://sourceforge.net/projects/libusbdotnet/) to replicate this in C#, and if you read through the ZigBoard lib's code you should be able to work out the API.
56,081,778
I have a shell script which runs a prediction on a Raspberry Pi that has both Python version 2.7 and 3.5. To support audio features I have made Python 3.5 the default. When I run the shell script it picks up version 2.7 and throws an error. [The error is shown in this link](https://i.stack.imgur.com/QhlMc.png)
2019/05/10
[ "https://Stackoverflow.com/questions/56081778", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9350170/" ]
It's not exactly clear based on the wording of your question how your scripts are set up. However, if you are calling python in the shell script, you can always specify python3 or python2 instead of just calling python (which points to your system's default). This would look something like this: ``` $ python3 python_script.py ``` If you are calling a script that contains python code directly via the command line (e.g. something along the lines of `./python_script.py` in shell), you can also format the header of the script like so to force a specific version of python: ``` #!/usr/bin/env python3 python source code here ```
I would suggest [adding an alias to the root bashrc file](https://askubuntu.com/a/492787) as you seem to be calling this thing as the root user. Something to the effect of `alias python=python3.5` at the bottom of the file `~root/.bashrc` may have the effect you're looking for, although I'm sure there's a more permanent solution out there.
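For completeness, the suggested edit is just a config fragment in the root user's shell profile (the version number is the one from the question). One caveat worth noting: plain aliases are only expanded in interactive shells, so a non-interactive shell script will not see the alias unless `expand_aliases` is enabled; calling `python3.5` explicitly is more robust.

```shell
# at the bottom of ~root/.bashrc
alias python=python3.5

# only needed if a non-interactive bash script should honor the alias
shopt -s expand_aliases
```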
56,081,778
I have a shell script which runs a prediction on a Raspberry Pi that has both Python version 2.7 and 3.5. To support audio features I have made Python 3.5 the default. When I run the shell script it picks up version 2.7 and throws an error. [The error is shown in this link](https://i.stack.imgur.com/QhlMc.png)
2019/05/10
[ "https://Stackoverflow.com/questions/56081778", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9350170/" ]
It's not exactly clear based on the wording of your question how your scripts are set up. However, if you are calling python in the shell script, you can always specify python3 or python2 instead of just calling python (which points to your system's default). This would look something like this: ``` $ python3 python_script.py ``` If you are calling a script that contains python code directly via the command line (e.g. something along the lines of `./python_script.py` in shell), you can also format the header of the script like so to force a specific version of python: ``` #!/usr/bin/env python3 python source code here ```
You can select an existing python version explicitly at the top of the script. For example, I have a script which runs sub-scripts in python 2.7.12 and in python 3.5.2. Let it be named "different py versioned.sh". Its code: ``` #! /bin/bash ./'py in default.py' ./'py in 3.5.py' ``` Code of 'py in 3.5.py': ``` #!/usr/bin/python3.5 import sys print("Python version") print (sys.version) ``` Code of 'py in default.py': ``` #!/usr/bin/python import sys print("Python version") print (sys.version) ``` Then the output of executing ./'different py versioned.sh' ``` Python version 2.7.12 (default, Oct 8 2019, 14:14:10) [GCC 5.4.0 20160609] Python version 3.5.2 (default, Oct 8 2019, 13:06:37) [GCC 5.4.0 20160609] ``` shows that every piece of code was executed with its own python version.
35,074,895
I have a module with constants (data types and other things). Let's call the module constants.py Let's pretend it contains the following: ``` # noinspection PyClassHasNoInit class SimpleTypes: INT = 'INT' BOOL = 'BOOL' DOUBLE = 'DOUBLE' # noinspection PyClassHasNoInit class ComplexTypes: LIST = 'LIST' MAP = 'MAP' SET = 'SET' ``` This is pretty and works great; IDEs with inspection will be able to help out by providing suggestions when you code, which is really helpful. PyCharm example below: [![IDE Help](https://i.stack.imgur.com/PxgJN.png)](https://i.stack.imgur.com/PxgJN.png) But what if I now have a value of some kind and want to check if it is among the ComplexTypes, without defining them in one more place. Would this be possible? To clarify even more, I want to be able to do something like: ``` if myVar in constants.ComplexTypeList: # do something ``` Which would of course be possible if I made a list "ComplexTypeList" with all the types in the constants module, but then I would have to add potentially new types to two locations each time (both the class and the list). Is it possible to do something dynamically? I want it to work in python 2.7, though suggestions on how to do this in later versions of python are useful knowledge as well. **SOLUTION COMMENTS:** I used inspect, as Prune suggested in the marked solution below. I made the list I mentioned above within the constants.py module as: ``` ComplexTypeList = [m[1] for m in inspect.getmembers(ComplexTypes) if not m[0].startswith('_')] ``` With this it is possible to do the example above.
2016/01/29
[ "https://Stackoverflow.com/questions/35074895", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3240126/" ]
I think I have it. You can use **inspect.getmembers** to return the items in the module. Each item is a tuple of (*name*, *value*). I tried it with the following code. **dir** gives only the names of the module members; **getmembers** also returns the values. You can look for the desired value in the second element of each tuple. constants.py ``` class ComplexTypes: LIST_N = 'LIST' MAP_N = 'MAP' SET_N = 'SET' ``` test code ``` from constants import ComplexTypes from inspect import getmembers, isfunction print dir(ComplexTypes) for o in getmembers(ComplexTypes): print o ``` output: ``` ['LIST_N', 'MAP_N', 'SET_N', '__doc__', '__module__'] ('LIST_N', 'LIST') ('MAP_N', 'MAP') ('SET_N', 'SET') ('__doc__', None) ('__module__', 'constants') ``` You can request particular types of objects with the various **is** functions, such as ``` getmembers(ComplexTypes, inspect.isfunction) ``` to get the functions. Yes, you can remove things with such a simple package. I don't see a method to get your constants in a positive fashion. See the [documentation](https://docs.python.org/2/library/inspect.html).
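The questioner's own follow-up (the `ComplexTypeList` from the question) falls out of `getmembers` directly: drop the underscore-prefixed names and keep the values. A runnable sketch:

```python
from inspect import getmembers

class ComplexTypes:
    LIST = 'LIST'
    MAP = 'MAP'
    SET = 'SET'

# Keep the values of every non-underscore attribute; getmembers also
# returns inherited dunder members, which the filter discards.
ComplexTypeList = [value for name, value in getmembers(ComplexTypes)
                   if not name.startswith('_')]
print(ComplexTypeList)   # ['LIST', 'MAP', 'SET'] (getmembers sorts by name)

myVar = 'MAP'
print(myVar in ComplexTypeList)   # True
```

New constants added to the class then show up in the membership test with no second definition site.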
The builtin function `dir(class)` will return all the attribute names of the given class. Your if statement can therefore be `if myVar in dir(constants.ComplexTypes):` (note that `dir` returns the attribute *names*, which in this example happen to be equal to the values).
35,074,895
I have a module with constants (data types and other things). Let's call the module constants.py Let's pretend it contains the following: ``` # noinspection PyClassHasNoInit class SimpleTypes: INT = 'INT' BOOL = 'BOOL' DOUBLE = 'DOUBLE' # noinspection PyClassHasNoInit class ComplexTypes: LIST = 'LIST' MAP = 'MAP' SET = 'SET' ``` This is pretty and works great; IDEs with inspection will be able to help out by providing suggestions when you code, which is really helpful. PyCharm example below: [![IDE Help](https://i.stack.imgur.com/PxgJN.png)](https://i.stack.imgur.com/PxgJN.png) But what if I now have a value of some kind and want to check if it is among the ComplexTypes, without defining them in one more place. Would this be possible? To clarify even more, I want to be able to do something like: ``` if myVar in constants.ComplexTypeList: # do something ``` Which would of course be possible if I made a list "ComplexTypeList" with all the types in the constants module, but then I would have to add potentially new types to two locations each time (both the class and the list). Is it possible to do something dynamically? I want it to work in python 2.7, though suggestions on how to do this in later versions of python are useful knowledge as well. **SOLUTION COMMENTS:** I used inspect, as Prune suggested in the marked solution below. I made the list I mentioned above within the constants.py module as: ``` ComplexTypeList = [m[1] for m in inspect.getmembers(ComplexTypes) if not m[0].startswith('_')] ``` With this it is possible to do the example above.
2016/01/29
[ "https://Stackoverflow.com/questions/35074895", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3240126/" ]
The builtin function `dir(class)` will return all the attribute names of the given class. Your if statement can therefore be `if myVar in dir(constants.ComplexTypes):` (note that `dir` returns the attribute *names*, which in this example happen to be equal to the values).
If I have understood the question correctly, you want to define a custom `type`. You don't really need to import any modules. There are a number of ways you can do this (e.g. metaclasses); however, a simple method is as follows: ``` my_type = type('ComplexTypes', (object,), {'LIST': 'LIST'}) var = my_type() ``` Now you can test the type: ``` type(var) ``` which returns: ``` __main__.ComplexTypes ``` In other words: ``` type(var) is my_type ``` which returns `True`
35,074,895
I have a module with constants (data types and other things). Let's call the module constants.py Let's pretend it contains the following: ``` # noinspection PyClassHasNoInit class SimpleTypes: INT = 'INT' BOOL = 'BOOL' DOUBLE = 'DOUBLE' # noinspection PyClassHasNoInit class ComplexTypes: LIST = 'LIST' MAP = 'MAP' SET = 'SET' ``` This is pretty and works great; IDEs with inspection will be able to help out by providing suggestions when you code, which is really helpful. PyCharm example below: [![IDE Help](https://i.stack.imgur.com/PxgJN.png)](https://i.stack.imgur.com/PxgJN.png) But what if I now have a value of some kind and want to check if it is among the ComplexTypes, without defining them in one more place. Would this be possible? To clarify even more, I want to be able to do something like: ``` if myVar in constants.ComplexTypeList: # do something ``` Which would of course be possible if I made a list "ComplexTypeList" with all the types in the constants module, but then I would have to add potentially new types to two locations each time (both the class and the list). Is it possible to do something dynamically? I want it to work in python 2.7, though suggestions on how to do this in later versions of python are useful knowledge as well. **SOLUTION COMMENTS:** I used inspect, as Prune suggested in the marked solution below. I made the list I mentioned above within the constants.py module as: ``` ComplexTypeList = [m[1] for m in inspect.getmembers(ComplexTypes) if not m[0].startswith('_')] ``` With this it is possible to do the example above.
2016/01/29
[ "https://Stackoverflow.com/questions/35074895", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3240126/" ]
I think I have it. You can use **inspect.getmembers** to return the items in the module. Each item is a tuple of (*name*, *value*). I tried it with the following code. **dir** gives only the names of the module members; **getmembers** also returns the values. You can look for the desired value in the second element of each tuple. constants.py ``` class ComplexTypes: LIST_N = 'LIST' MAP_N = 'MAP' SET_N = 'SET' ``` test code ``` from constants import ComplexTypes from inspect import getmembers, isfunction print dir(ComplexTypes) for o in getmembers(ComplexTypes): print o ``` output: ``` ['LIST_N', 'MAP_N', 'SET_N', '__doc__', '__module__'] ('LIST_N', 'LIST') ('MAP_N', 'MAP') ('SET_N', 'SET') ('__doc__', None) ('__module__', 'constants') ``` You can request particular types of objects with the various **is** functions, such as ``` getmembers(ComplexTypes, inspect.isfunction) ``` to get the functions. Yes, you can remove things with such a simple package. I don't see a method to get your constants in a positive fashion. See the [documentation](https://docs.python.org/2/library/inspect.html).
If I have understood the question correctly, you want to define a custom `type`. You don't really need to import any modules. There are a number of ways you can do this (e.g. metaclasses); however, a simple method is as follows: ``` my_type = type('ComplexTypes', (object,), {'LIST': 'LIST'}) var = my_type() ``` Now you can test the type: ``` type(var) ``` which returns: ``` __main__.ComplexTypes ``` In other words: ``` type(var) is my_type ``` which returns `True`
12,645,195
To install the python ImageMagick binding (the Wand API) on Windows 64 bit (python 2.6), this is what I did: 1. downloaded and installed [ImageMagick-6.5.8-7-Q16-windows-dll.exe](http://www.imagemagick.org/download/binaries/ImageMagick-6.5.8-7-Q16-windows-dll.exe) 2. downloaded the `wand` module from <http://pypi.python.org/pypi/Wand> 3. after that I ran `python setup.py install` from the `wand` directory, 4. then I executed step 6, but I got the import error: MagickWand library not found 5. downloaded the `magickwand` module and executed `python setup.py install` from the magickwand directory. 6. then again I tried this code ``` from wand.image import Image from wand.display import display with Image(filename='mona-lisa.png') as img: print img.size for r in 1, 2, 3: with img.clone() as i: i.resize(int(i.width * r * 0.25), int(i.height * r * 0.25)) i.rotate(90 * r) i.save(filename='mona-lisa-{0}.png'.format(r)) display(i) ``` 7. but thereafter I am again getting the same import error: MagickWand library not found. I am fed up with this because I have done all the installation steps but am still not able to execute the code; every time I get the MagickWand library import error.
2012/09/28
[ "https://Stackoverflow.com/questions/12645195", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1705984/" ]
You have to set `MAGICK_HOME` environment variable first. See the last part of [this section](http://docs.wand-py.org/en/0.2-maintenance/guide/install.html#install-imagemagick-on-windows). > > [![](https://i.stack.imgur.com/KKEG5.png)](https://i.stack.imgur.com/KKEG5.png) > > (source: [wand-py.org](http://docs.wand-py.org/en/0.2-maintenance/_images/windows-envvar.png)) > > > > > > Lastly you have to set `MAGICK_HOME` environment variable to the path of ImageMagick (e.g. `C:\Program Files\ImageMagick-6.7.7-Q16`). You can set it in *Computer ‣ Properties ‣ Advanced system settings ‣ Advanced ‣ Environment Variables...*. > > >
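The same setting can also be made from inside the script, provided it happens before `wand` is imported (wand resolves the MagickWand DLL at import time). A hedged sketch; the install path below is only an example, substitute your actual ImageMagick directory:

```python
import os

# Hypothetical install path; adjust to where ImageMagick actually lives.
# Must be in the environment BEFORE wand is imported.
os.environ.setdefault("MAGICK_HOME",
                      r"C:\Program Files\ImageMagick-6.7.7-Q16")

# Only import wand after MAGICK_HOME is set:
# from wand.image import Image

print("MAGICK_HOME" in os.environ)   # True
```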
First I had to install ImageMagick and set the environment variable `MAGICK_HOME`; just after that I was able to install `Wand` from `pip`.
62,062,054
I'm new to docker and I created a docker image, and this is how my Dockerfile looks. ``` FROM python:3.8.3 RUN apt-get update \ && apt-get install -y --no-install-recommends \ postgresql-client \ && rm -rf /var/lib/apt/lists/* \ && apt-get install -y gcc libtool-ltdl-devel xmlsec1-1.2.20 xmlsec1-devel-1.2.20 xmlsec1 openssl-1.2.20 xmlsec1-openssl-devel-1.2.20 \ && apt-get -y install curl gnupg \ && curl -sL https://deb.nodesource.com/setup_14.x | bash - \ && apt-get -y install nodejs WORKDIR /app/ COPY . /app RUN pip install -r production_requirements.txt \ && front_end/noa-frontend/npm install ``` This image is used in docker-compose.yml's app service. So when I run the docker-compose build, I'm getting the below error saying it couldn't find the package. Those are a few dependencies which I want to install in order to install a python package. [![enter image description here](https://i.stack.imgur.com/5aeiS.png)](https://i.stack.imgur.com/5aeiS.png) In the beginning, I've run the apt-get update to update the package lists. Can anyone please help me with this issue? **Updated Dockerfile** ``` FROM python:3.8.3 RUN apt-get update RUN apt-get install -y postgresql-client\ && apt-get install -y gcc libtool-ltdl-devel xmlsec1-1.2.20 xmlsec1-devel-1.2.20 xmlsec1 openssl-1.2.20 xmlsec1-openssl-devel-1.2.20 \ && apt-get -y install curl gnupg \ && curl -sL https://deb.nodesource.com/setup_14.x | bash - \ && apt-get -y install nodejs WORKDIR /app/ COPY . /app RUN pip install -r production_requirements.txt \ && front_end/noa-frontend/npm install ```
2020/05/28
[ "https://Stackoverflow.com/questions/62062054", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10437046/" ]
When you give `CMD` (or `RUN` or `ENTRYPOINT`) in the JSON-array form, you're responsible for manually breaking up the command into "words". That is, you're running the equivalent of the quoted shell command ```sh 'tail -f /dev/null' ``` and the whole thing gets interpreted as one "word" -- the spaces and options are taken as part of the command name to look up in `$PATH`. The most straightforward workaround to this is to remove the quoting and just use a bare string as `CMD`. Note that the container you're building doesn't actually do anything: it doesn't include any application source code and the command you're providing intentionally does nothing forever. Aside from one running container with an idle process, you get the same effect by just not running the container at all. You typically want to copy your application code in and set `CMD` to actually run it: ```sh FROM node:12.17.0-alpine WORKDIR /src/webui COPY package.json yarn.lock ./ RUN yarn install COPY . ./ CMD ["yarn", "start"] # Also works: CMD yarn start # Won't work: CMD ["yarn start"] ```
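The word-splitting rule is easy to demonstrate outside Docker, since `subprocess` follows the same exec-style argv convention: each array element is one word, and a single space-containing string is looked up as one program name. A small sketch (assumes a Unix-like system with an `echo` executable on `PATH`):

```python
import subprocess

# Properly split: "echo" is the program, the rest are its arguments.
out = subprocess.run(["echo", "hello", "world"],
                     capture_output=True, text=True).stdout
print(out.strip())   # hello world

# One "word": there is no program called "echo hello world".
lookup_failed = False
try:
    subprocess.run(["echo hello world"])
except FileNotFoundError:
    lookup_failed = True
print(lookup_failed)   # True
```

Docker's JSON-array `CMD` fails the same way, except the error only appears when the container starts.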
`CMD` is appended after `ENTRYPOINT`. Since node:12.17.0-alpine has a default `ENTRYPOINT` of `node`, your Dockerfile effectively runs ``` node tail -f /dev/null ``` ### Option 1: override ENTRYPOINT at build time ``` ENTRYPOINT tail -f /dev/null ``` ### Option 2: override ENTRYPOINT at run time ``` docker run --entrypoint sh my-image ```
62,062,054
I'm new to docker and I created a docker image, and this is how my Dockerfile looks. ``` FROM python:3.8.3 RUN apt-get update \ && apt-get install -y --no-install-recommends \ postgresql-client \ && rm -rf /var/lib/apt/lists/* \ && apt-get install -y gcc libtool-ltdl-devel xmlsec1-1.2.20 xmlsec1-devel-1.2.20 xmlsec1 openssl-1.2.20 xmlsec1-openssl-devel-1.2.20 \ && apt-get -y install curl gnupg \ && curl -sL https://deb.nodesource.com/setup_14.x | bash - \ && apt-get -y install nodejs WORKDIR /app/ COPY . /app RUN pip install -r production_requirements.txt \ && front_end/noa-frontend/npm install ``` This image is used in docker-compose.yml's app service. So when I run the docker-compose build, I'm getting the below error saying it couldn't find the package. Those are a few dependencies which I want to install in order to install a python package. [![enter image description here](https://i.stack.imgur.com/5aeiS.png)](https://i.stack.imgur.com/5aeiS.png) In the beginning, I've run the apt-get update to update the package lists. Can anyone please help me with this issue? **Updated Dockerfile** ``` FROM python:3.8.3 RUN apt-get update RUN apt-get install -y postgresql-client\ && apt-get install -y gcc libtool-ltdl-devel xmlsec1-1.2.20 xmlsec1-devel-1.2.20 xmlsec1 openssl-1.2.20 xmlsec1-openssl-devel-1.2.20 \ && apt-get -y install curl gnupg \ && curl -sL https://deb.nodesource.com/setup_14.x | bash - \ && apt-get -y install nodejs WORKDIR /app/ COPY . /app RUN pip install -r production_requirements.txt \ && front_end/noa-frontend/npm install ```
2020/05/28
[ "https://Stackoverflow.com/questions/62062054", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10437046/" ]
The correct Dockerfile: ``` FROM node:12.17.0-alpine WORKDIR /src/webui RUN apk update && apk add bash CMD ["tail", "-f", "/dev/null"] ``` So the difference is that this: ``` CMD ["tail -f /dev/null"] ``` needs to be: ``` CMD ["tail", "-f", "/dev/null"] ``` You can read more about CMD in the official Docker [docs](https://docs.docker.com/engine/reference/builder/#cmd).
`CMD` is appended after `ENTRYPOINT`. Since node:12.17.0-alpine has a default `ENTRYPOINT` of `node`, your Dockerfile effectively runs ``` node tail -f /dev/null ``` ### Option 1: override ENTRYPOINT at build time ``` ENTRYPOINT tail -f /dev/null ``` ### Option 2: override ENTRYPOINT at run time ``` docker run --entrypoint sh my-image ```
62,062,054
I'm new to docker and I created a docker image, and this is how my Dockerfile looks. ``` FROM python:3.8.3 RUN apt-get update \ && apt-get install -y --no-install-recommends \ postgresql-client \ && rm -rf /var/lib/apt/lists/* \ && apt-get install -y gcc libtool-ltdl-devel xmlsec1-1.2.20 xmlsec1-devel-1.2.20 xmlsec1 openssl-1.2.20 xmlsec1-openssl-devel-1.2.20 \ && apt-get -y install curl gnupg \ && curl -sL https://deb.nodesource.com/setup_14.x | bash - \ && apt-get -y install nodejs WORKDIR /app/ COPY . /app RUN pip install -r production_requirements.txt \ && front_end/noa-frontend/npm install ``` This image is used in docker-compose.yml's app service. So when I run the docker-compose build, I'm getting the below error saying it couldn't find the package. Those are a few dependencies which I want to install in order to install a python package. [![enter image description here](https://i.stack.imgur.com/5aeiS.png)](https://i.stack.imgur.com/5aeiS.png) In the beginning, I've run the apt-get update to update the package lists. Can anyone please help me with this issue? **Updated Dockerfile** ``` FROM python:3.8.3 RUN apt-get update RUN apt-get install -y postgresql-client\ && apt-get install -y gcc libtool-ltdl-devel xmlsec1-1.2.20 xmlsec1-devel-1.2.20 xmlsec1 openssl-1.2.20 xmlsec1-openssl-devel-1.2.20 \ && apt-get -y install curl gnupg \ && curl -sL https://deb.nodesource.com/setup_14.x | bash - \ && apt-get -y install nodejs WORKDIR /app/ COPY . /app RUN pip install -r production_requirements.txt \ && front_end/noa-frontend/npm install ```
2020/05/28
[ "https://Stackoverflow.com/questions/62062054", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10437046/" ]
When you give `CMD` (or `RUN` or `ENTRYPOINT`) in the JSON-array form, you're responsible for manually breaking up the command into "words". That is, you're running the equivalent of the quoted shell command ```sh 'tail -f /dev/null' ``` and the whole thing gets interpreted as one "word" -- the spaces and options are taken as part of the command name to look up in `$PATH`. The most straightforward workaround to this is to remove the quoting and just use a bare string as `CMD`. Note that the container you're building doesn't actually do anything: it doesn't include any application source code and the command you're providing intentionally does nothing forever. Aside from one running container with an idle process, you get the same effect by just not running the container at all. You typically want to copy your application code in and set `CMD` to actually run it: ```sh FROM node:12.17.0-alpine WORKDIR /src/webui COPY package.json yarn.lock ./ RUN yarn install COPY . ./ CMD ["yarn", "start"] # Also works: CMD yarn start # Won't work: CMD ["yarn start"] ```
The correct Dockerfile: ``` FROM node:12.17.0-alpine WORKDIR /src/webui RUN apk update && apk add bash CMD ["tail", "-f", "/dev/null"] ``` So the difference is that this: ``` CMD ["tail -f /dev/null"] ``` needs to be: ``` CMD ["tail", "-f", "/dev/null"] ``` You can read more about CMD in the official Docker [docs](https://docs.docker.com/engine/reference/builder/#cmd).
9,629,477
For example, given a python numpy.ndarray `a = array([[1, 2], [3, 4], [5, 6]])`, I want to select the 0th and 2nd row of array `a` into a new array `b`, such that `b` becomes `array([[1,2],[5,6]]`. I need to solution to work on more general problems, where the original 2d array can have more rows and I should be able to select the rows based on some disjoint ranges. In general, I was looking for something like `a[i:j] + a[k:p]` that works for 1-d list, but it seems 2d-arrays won't add up this way. **update** It seems that I can use `vstack((a[i:j], a[k:p]))` to get this working, but is there any elegant way to do this?
2012/03/09
[ "https://Stackoverflow.com/questions/9629477", "https://Stackoverflow.com", "https://Stackoverflow.com/users/228173/" ]
You can use list indexing: ``` a[ [0,2], ] ``` More generally, to select rows `i:j` and `k:p` (I'm assuming in the python sense, meaning rows i to j but not including j): ``` a[ range(i,j) + range(k,p) , ] ``` Note that the `range(i,j) + range(k,p)` creates a *flat* list of `[ i, i+1, ..., j-1, k, k+1, ..., p-1 ]`, which is then used to index the rows of `a`.
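One caveat worth hedging on the `range(i,j) + range(k,p)` trick: it only works in Python 2, where `range` returns a list. In Python 3 the ranges must be converted to lists first. A runnable sketch of both selections:

```python
import numpy as np

a = np.array([[1, 2], [3, 4], [5, 6]])

# Select the 0th and 2nd rows with a plain index list
b = a[[0, 2]]
assert b.tolist() == [[1, 2], [5, 6]]

# Disjoint ranges i:j and k:p; in Python 3, build the flat list explicitly
i, j, k, p = 0, 1, 2, 3
c = a[list(range(i, j)) + list(range(k, p))]
assert c.tolist() == [[1, 2], [5, 6]]
```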
`numpy` is kind of clever when it comes to indexing. You can give it a list of indexes and it will return the sliced part. ``` In : a = numpy.array([[i]*10 for i in range(10)]) In : a Out: array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2, 2, 2, 2, 2], [3, 3, 3, 3, 3, 3, 3, 3, 3, 3], [4, 4, 4, 4, 4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6, 6, 6, 6, 6], [7, 7, 7, 7, 7, 7, 7, 7, 7, 7], [8, 8, 8, 8, 8, 8, 8, 8, 8, 8], [9, 9, 9, 9, 9, 9, 9, 9, 9, 9]]) In : a[[0,5,9]] Out: array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [5, 5, 5, 5, 5, 5, 5, 5, 5, 5], [9, 9, 9, 9, 9, 9, 9, 9, 9, 9]]) In : a[range(0,2)+range(5,8)] Out: array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [5, 5, 5, 5, 5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6, 6, 6, 6, 6], [7, 7, 7, 7, 7, 7, 7, 7, 7, 7]]) ```
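As a compact alternative to concatenating `range` calls (which behave differently between Python 2 and 3), `np.r_` concatenates slice ranges into a single flat index array; any modern `numpy` ships it:

```python
import numpy as np

a = np.array([[i] * 10 for i in range(10)])

# np.r_[0:2, 5:8] builds the flat index array [0, 1, 5, 6, 7]
sel = a[np.r_[0:2, 5:8]]

assert sel.shape == (5, 10)
assert sel[:, 0].tolist() == [0, 1, 5, 6, 7]
```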
54,995,041
I'm coding with python 3.6 and am working on a Genetic Algorithm. When generating a new population, when I append the new values to the array all the values in the array are changed to the new value. Is there something wrong with my functions? Code: ``` from fuzzywuzzy import fuzz import numpy as np import random import time def mutate(parent): x = random.randint(0,len(parent)-1) parent[x] = random.randint(0,9) print(parent) return parent def gen(cur_gen, pop_size, fittest): if cur_gen == 1: population = [] for _ in range(pop_size): add_to = [] for _ in range(6): add_to.append(random.randint(0,9)) population.append(add_to) return population else: population = [] for _ in range(pop_size): print('\n') population.append(mutate(fittest)) print(population) return population def get_fittest(population): fitness = [] for x in population: fitness.append(fuzz.ratio(x, [9,9,9,9,9,9])) fittest = fitness.index(max(fitness)) fittest_fitness = fitness[fittest] fittest = population[fittest] return fittest, fittest_fitness done = False generation = 1 population = gen(generation, 10, [0,0,0,0,0,0]) print(population) while not done: generation += 1 time.sleep(0.5) print('Current Generation: ',generation) print('Fittest: ',get_fittest(population)) if get_fittest(population)[1] == 100: done = True population = gen(generation, 10, get_fittest(population)[0]) print('Population: ',population) ``` Output: ``` Fittest: ([7, 4, 2, 7, 8, 9], 72) [3, 4, 2, 7, 8, 9] [[3, 4, 2, 7, 8, 9]] [3, 4, 2, 7, 5, 9] [[3, 4, 2, 7, 5, 9], [3, 4, 2, 7, 5, 9]] [3, 4, 2, 7, 4, 9] [[3, 4, 2, 7, 4, 9], [3, 4, 2, 7, 4, 9], [3, 4, 2, 7, 4, 9]] [3, 1, 2, 7, 4, 9] [[3, 1, 2, 7, 4, 9], [3, 1, 2, 7, 4, 9], [3, 1, 2, 7, 4, 9], [3, 1, 2, 7, 4, 9]] [3, 1, 2, 7, 4, 2] [[3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2]] [3, 1, 2, 5, 4, 2] [[3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2]] [3, 1, 2, 5, 4, 
2] [[3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2]] [3, 1, 2, 5, 4, 5] [[3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5]] [3, 1, 2, 5, 4, 3] [[3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3]] ```
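For what it's worth, the symptom the question describes is classic Python list aliasing: `mutate` modifies `fittest` in place and returns the same object, so every `population.append(mutate(fittest))` appends one shared list, and each later mutation shows up in all entries. A minimal sketch of the fix, copying the parent before mutating (`copy.deepcopy` used here for generality; `parent[:]` would also do for a flat list of ints):

```python
import copy
import random

random.seed(0)

def mutate(parent):
    child = copy.deepcopy(parent)  # work on a copy, not the caller's list
    child[random.randrange(len(child))] = random.randint(0, 9)
    return child

fittest = [0, 0, 0, 0, 0, 0]

# Without the copy, every slot holds the identical object:
aliased = [fittest] * 10
assert all(x is fittest for x in aliased)

# With the copy, each child is an independent list and the parent survives:
population = [mutate(fittest) for _ in range(10)]
assert all(child is not fittest for child in population)
assert fittest == [0, 0, 0, 0, 0, 0]
```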
2019/03/05
[ "https://Stackoverflow.com/questions/54995041", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10227474/" ]
The Appium method hideKeyboard() is **known to be unstable** when used on iPhone devices, as listed in Appium’s currently known open issues. Using this method for an iOS device may cause the Appium script to hang. Appium identifies that the problem is because - "There is no automation hook for hiding the keyboard,...rather than using this method, to think about how a user would hide the keyboard in your app, and tell Appium to do that instead (swipe, tap on a certain coordinate, etc…)" Workaround: Following the advice of the Appium documentation - use Appium to automate the action that a user would use to hide the keyboard. For example, use the swipe method to hide the keyboard if the application defines this action, or if the application defines a "hide-KB" button, automate clicking on this button. The other workaround is to use **sendkey()** without clicking on the text input field.
The Appium method hideKeyboard() is known to be unstable when used on iPhone devices, as listed in Appium’s currently known open issues. Using this method for an iOS device may cause the Appium script to hang. Appium identifies that the problem is because "There is no automation hook for hiding the keyboard, ...rather than using this method, ...think about how a user would hide the keyboard in your app, and tell Appium to do that instead (swipe, tap on a certain coordinate, etc.)". If you want to hide the keyboard, you can write a function like the one below, which sends the text and the Enter key together: ``` public void typeAndEnter(MobileElement mobileElement, String keyword) { LOGGER.info(String.format("Typing %s ...", keyword)); mobileElement.sendKeys(keyword, Keys.ENTER); } ``` Hope this helps
54,995,041
I'm coding with python 3.6 and am working on a Genetic Algorithm. When generating a new population, when I append the new values to the array all the values in the array are changed to the new value. Is there something wrong with my functions? Code: ``` from fuzzywuzzy import fuzz import numpy as np import random import time def mutate(parent): x = random.randint(0,len(parent)-1) parent[x] = random.randint(0,9) print(parent) return parent def gen(cur_gen, pop_size, fittest): if cur_gen == 1: population = [] for _ in range(pop_size): add_to = [] for _ in range(6): add_to.append(random.randint(0,9)) population.append(add_to) return population else: population = [] for _ in range(pop_size): print('\n') population.append(mutate(fittest)) print(population) return population def get_fittest(population): fitness = [] for x in population: fitness.append(fuzz.ratio(x, [9,9,9,9,9,9])) fittest = fitness.index(max(fitness)) fittest_fitness = fitness[fittest] fittest = population[fittest] return fittest, fittest_fitness done = False generation = 1 population = gen(generation, 10, [0,0,0,0,0,0]) print(population) while not done: generation += 1 time.sleep(0.5) print('Current Generation: ',generation) print('Fittest: ',get_fittest(population)) if get_fittest(population)[1] == 100: done = True population = gen(generation, 10, get_fittest(population)[0]) print('Population: ',population) ``` Output: ``` Fittest: ([7, 4, 2, 7, 8, 9], 72) [3, 4, 2, 7, 8, 9] [[3, 4, 2, 7, 8, 9]] [3, 4, 2, 7, 5, 9] [[3, 4, 2, 7, 5, 9], [3, 4, 2, 7, 5, 9]] [3, 4, 2, 7, 4, 9] [[3, 4, 2, 7, 4, 9], [3, 4, 2, 7, 4, 9], [3, 4, 2, 7, 4, 9]] [3, 1, 2, 7, 4, 9] [[3, 1, 2, 7, 4, 9], [3, 1, 2, 7, 4, 9], [3, 1, 2, 7, 4, 9], [3, 1, 2, 7, 4, 9]] [3, 1, 2, 7, 4, 2] [[3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2]] [3, 1, 2, 5, 4, 2] [[3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2]] [3, 1, 2, 5, 4, 
2] [[3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2]] [3, 1, 2, 5, 4, 5] [[3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5]] [3, 1, 2, 5, 4, 3] [[3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3]] ```
2019/03/05
[ "https://Stackoverflow.com/questions/54995041", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10227474/" ]
The Appium method hideKeyboard() is **known to be unstable** when used on iPhone devices, as listed in Appium’s currently known open issues. Using this method for an iOS device may cause the Appium script to hang. Appium identifies that the problem is because - "There is no automation hook for hiding the keyboard,...rather than using this method, to think about how a user would hide the keyboard in your app, and tell Appium to do that instead (swipe, tap on a certain coordinate, etc…)" Workaround: Following the advice of the Appium documentation - use Appium to automate the action that a user would use to hide the keyboard. For example, use the swipe method to hide the keyboard if the application defines this action, or if the application defines a "hide-KB" button, automate clicking on this button. The other workaround is to use **sendkey()** without clicking on the text input field.
You can also simply use ***driver.navigate().back();*** (for older versions of Appium)
54,995,041
I'm coding with python 3.6 and am working on a Genetic Algorithm. When generating a new population, when I append the new values to the array all the values in the array are changed to the new value. Is there something wrong with my functions? Code: ``` from fuzzywuzzy import fuzz import numpy as np import random import time def mutate(parent): x = random.randint(0,len(parent)-1) parent[x] = random.randint(0,9) print(parent) return parent def gen(cur_gen, pop_size, fittest): if cur_gen == 1: population = [] for _ in range(pop_size): add_to = [] for _ in range(6): add_to.append(random.randint(0,9)) population.append(add_to) return population else: population = [] for _ in range(pop_size): print('\n') population.append(mutate(fittest)) print(population) return population def get_fittest(population): fitness = [] for x in population: fitness.append(fuzz.ratio(x, [9,9,9,9,9,9])) fittest = fitness.index(max(fitness)) fittest_fitness = fitness[fittest] fittest = population[fittest] return fittest, fittest_fitness done = False generation = 1 population = gen(generation, 10, [0,0,0,0,0,0]) print(population) while not done: generation += 1 time.sleep(0.5) print('Current Generation: ',generation) print('Fittest: ',get_fittest(population)) if get_fittest(population)[1] == 100: done = True population = gen(generation, 10, get_fittest(population)[0]) print('Population: ',population) ``` Output: ``` Fittest: ([7, 4, 2, 7, 8, 9], 72) [3, 4, 2, 7, 8, 9] [[3, 4, 2, 7, 8, 9]] [3, 4, 2, 7, 5, 9] [[3, 4, 2, 7, 5, 9], [3, 4, 2, 7, 5, 9]] [3, 4, 2, 7, 4, 9] [[3, 4, 2, 7, 4, 9], [3, 4, 2, 7, 4, 9], [3, 4, 2, 7, 4, 9]] [3, 1, 2, 7, 4, 9] [[3, 1, 2, 7, 4, 9], [3, 1, 2, 7, 4, 9], [3, 1, 2, 7, 4, 9], [3, 1, 2, 7, 4, 9]] [3, 1, 2, 7, 4, 2] [[3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2]] [3, 1, 2, 5, 4, 2] [[3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2]] [3, 1, 2, 5, 4, 
2] [[3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2]] [3, 1, 2, 5, 4, 5] [[3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5]] [3, 1, 2, 5, 4, 3] [[3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3]] ```
2019/03/05
[ "https://Stackoverflow.com/questions/54995041", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10227474/" ]
The Appium method hideKeyboard() is **known to be unstable** when used on iPhone devices, as listed in Appium’s currently known open issues. Using this method for an iOS device may cause the Appium script to hang. Appium identifies that the problem is because - "There is no automation hook for hiding the keyboard,...rather than using this method, to think about how a user would hide the keyboard in your app, and tell Appium to do that instead (swipe, tap on a certain coordinate, etc…)" Workaround: Following the advice of the Appium documentation - use Appium to automate the action that a user would use to hide the keyboard. For example, use the swipe method to hide the keyboard if the application defines this action, or if the application defines a "hide-KB" button, automate clicking on this button. The other workaround is to use **sendkey()** without clicking on the text input field.
The problem is trying to hide the keyboard in the first place. Set DesiredCapabilities as ``` cap.setCapability("connectHardwareKeyboard", false); ``` This will keep the keyboard hidden by default. Do your operation of entering data with sendKeys() ``` appDriver.findElementByXPath("//XCUIElementTypeOther[@name=\"Confirm password\"]/XCUIElementTypeSecureTextField").sendKeys(confirmPassword); ``` Once done, call ``` appDriver.hideKeyboard(); ``` and the keyboard goes away. Hope this helps
54,995,041
I'm coding with python 3.6 and am working on a Genetic Algorithm. When generating a new population, when I append the new values to the array all the values in the array are changed to the new value. Is there something wrong with my functions? Code: ``` from fuzzywuzzy import fuzz import numpy as np import random import time def mutate(parent): x = random.randint(0,len(parent)-1) parent[x] = random.randint(0,9) print(parent) return parent def gen(cur_gen, pop_size, fittest): if cur_gen == 1: population = [] for _ in range(pop_size): add_to = [] for _ in range(6): add_to.append(random.randint(0,9)) population.append(add_to) return population else: population = [] for _ in range(pop_size): print('\n') population.append(mutate(fittest)) print(population) return population def get_fittest(population): fitness = [] for x in population: fitness.append(fuzz.ratio(x, [9,9,9,9,9,9])) fittest = fitness.index(max(fitness)) fittest_fitness = fitness[fittest] fittest = population[fittest] return fittest, fittest_fitness done = False generation = 1 population = gen(generation, 10, [0,0,0,0,0,0]) print(population) while not done: generation += 1 time.sleep(0.5) print('Current Generation: ',generation) print('Fittest: ',get_fittest(population)) if get_fittest(population)[1] == 100: done = True population = gen(generation, 10, get_fittest(population)[0]) print('Population: ',population) ``` Output: ``` Fittest: ([7, 4, 2, 7, 8, 9], 72) [3, 4, 2, 7, 8, 9] [[3, 4, 2, 7, 8, 9]] [3, 4, 2, 7, 5, 9] [[3, 4, 2, 7, 5, 9], [3, 4, 2, 7, 5, 9]] [3, 4, 2, 7, 4, 9] [[3, 4, 2, 7, 4, 9], [3, 4, 2, 7, 4, 9], [3, 4, 2, 7, 4, 9]] [3, 1, 2, 7, 4, 9] [[3, 1, 2, 7, 4, 9], [3, 1, 2, 7, 4, 9], [3, 1, 2, 7, 4, 9], [3, 1, 2, 7, 4, 9]] [3, 1, 2, 7, 4, 2] [[3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2]] [3, 1, 2, 5, 4, 2] [[3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2]] [3, 1, 2, 5, 4, 
2] [[3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2]] [3, 1, 2, 5, 4, 5] [[3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5]] [3, 1, 2, 5, 4, 3] [[3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3]] ```
2019/03/05
[ "https://Stackoverflow.com/questions/54995041", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10227474/" ]
The Appium method hideKeyboard() is **known to be unstable** when used on iPhone devices, as listed in Appium’s currently known open issues. Using this method for an iOS device may cause the Appium script to hang. Appium identifies that the problem is because - "There is no automation hook for hiding the keyboard,...rather than using this method, to think about how a user would hide the keyboard in your app, and tell Appium to do that instead (swipe, tap on a certain coordinate, etc…)" Workaround: Following the advice of the Appium documentation - use Appium to automate the action that a user would use to hide the keyboard. For example, use the swipe method to hide the keyboard if the application defines this action, or if the application defines a "hide-KB" button, automate clicking on this button. The other workaround is to use **sendkey()** without clicking on the text input field.
You define the capabilities like this. ``` desiredCapabilities.setCapability("unicodeKeyboard", true); desiredCapabilities.setCapability("resetKeyboard", true); ```
54,995,041
I'm coding with python 3.6 and am working on a Genetic Algorithm. When generating a new population, when I append the new values to the array all the values in the array are changed to the new value. Is there something wrong with my functions? Code: ``` from fuzzywuzzy import fuzz import numpy as np import random import time def mutate(parent): x = random.randint(0,len(parent)-1) parent[x] = random.randint(0,9) print(parent) return parent def gen(cur_gen, pop_size, fittest): if cur_gen == 1: population = [] for _ in range(pop_size): add_to = [] for _ in range(6): add_to.append(random.randint(0,9)) population.append(add_to) return population else: population = [] for _ in range(pop_size): print('\n') population.append(mutate(fittest)) print(population) return population def get_fittest(population): fitness = [] for x in population: fitness.append(fuzz.ratio(x, [9,9,9,9,9,9])) fittest = fitness.index(max(fitness)) fittest_fitness = fitness[fittest] fittest = population[fittest] return fittest, fittest_fitness done = False generation = 1 population = gen(generation, 10, [0,0,0,0,0,0]) print(population) while not done: generation += 1 time.sleep(0.5) print('Current Generation: ',generation) print('Fittest: ',get_fittest(population)) if get_fittest(population)[1] == 100: done = True population = gen(generation, 10, get_fittest(population)[0]) print('Population: ',population) ``` Output: ``` Fittest: ([7, 4, 2, 7, 8, 9], 72) [3, 4, 2, 7, 8, 9] [[3, 4, 2, 7, 8, 9]] [3, 4, 2, 7, 5, 9] [[3, 4, 2, 7, 5, 9], [3, 4, 2, 7, 5, 9]] [3, 4, 2, 7, 4, 9] [[3, 4, 2, 7, 4, 9], [3, 4, 2, 7, 4, 9], [3, 4, 2, 7, 4, 9]] [3, 1, 2, 7, 4, 9] [[3, 1, 2, 7, 4, 9], [3, 1, 2, 7, 4, 9], [3, 1, 2, 7, 4, 9], [3, 1, 2, 7, 4, 9]] [3, 1, 2, 7, 4, 2] [[3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2]] [3, 1, 2, 5, 4, 2] [[3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2]] [3, 1, 2, 5, 4, 
2] [[3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2]] [3, 1, 2, 5, 4, 5] [[3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5]] [3, 1, 2, 5, 4, 3] [[3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3]] ```
2019/03/05
[ "https://Stackoverflow.com/questions/54995041", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10227474/" ]
The Appium method hideKeyboard() is **known to be unstable** when used on iPhone devices, as listed in Appium’s currently known open issues. Using this method for an iOS device may cause the Appium script to hang. Appium identifies that the problem is because - "There is no automation hook for hiding the keyboard,...rather than using this method, to think about how a user would hide the keyboard in your app, and tell Appium to do that instead (swipe, tap on a certain coordinate, etc…)" Workaround: Following the advice of the Appium documentation - use Appium to automate the action that a user would use to hide the keyboard. For example, use the swipe method to hide the keyboard if the application defines this action, or if the application defines a "hide-KB" button, automate clicking on this button. The other workaround is to use **sendkey()** without clicking on the text input field.
If you're using Android, you can use adb to hide the keyboard by sending an adb command from your code: ``` adb shell input keyevent 111 ```
54,995,041
I'm coding with python 3.6 and am working on a Genetic Algorithm. When generating a new population, when I append the new values to the array all the values in the array are changed to the new value. Is there something wrong with my functions? Code: ``` from fuzzywuzzy import fuzz import numpy as np import random import time def mutate(parent): x = random.randint(0,len(parent)-1) parent[x] = random.randint(0,9) print(parent) return parent def gen(cur_gen, pop_size, fittest): if cur_gen == 1: population = [] for _ in range(pop_size): add_to = [] for _ in range(6): add_to.append(random.randint(0,9)) population.append(add_to) return population else: population = [] for _ in range(pop_size): print('\n') population.append(mutate(fittest)) print(population) return population def get_fittest(population): fitness = [] for x in population: fitness.append(fuzz.ratio(x, [9,9,9,9,9,9])) fittest = fitness.index(max(fitness)) fittest_fitness = fitness[fittest] fittest = population[fittest] return fittest, fittest_fitness done = False generation = 1 population = gen(generation, 10, [0,0,0,0,0,0]) print(population) while not done: generation += 1 time.sleep(0.5) print('Current Generation: ',generation) print('Fittest: ',get_fittest(population)) if get_fittest(population)[1] == 100: done = True population = gen(generation, 10, get_fittest(population)[0]) print('Population: ',population) ``` Output: ``` Fittest: ([7, 4, 2, 7, 8, 9], 72) [3, 4, 2, 7, 8, 9] [[3, 4, 2, 7, 8, 9]] [3, 4, 2, 7, 5, 9] [[3, 4, 2, 7, 5, 9], [3, 4, 2, 7, 5, 9]] [3, 4, 2, 7, 4, 9] [[3, 4, 2, 7, 4, 9], [3, 4, 2, 7, 4, 9], [3, 4, 2, 7, 4, 9]] [3, 1, 2, 7, 4, 9] [[3, 1, 2, 7, 4, 9], [3, 1, 2, 7, 4, 9], [3, 1, 2, 7, 4, 9], [3, 1, 2, 7, 4, 9]] [3, 1, 2, 7, 4, 2] [[3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2]] [3, 1, 2, 5, 4, 2] [[3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2]] [3, 1, 2, 5, 4, 
2] [[3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2]] [3, 1, 2, 5, 4, 5] [[3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5]] [3, 1, 2, 5, 4, 3] [[3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3]] ```
2019/03/05
[ "https://Stackoverflow.com/questions/54995041", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10227474/" ]
The Appium method hideKeyboard() is **known to be unstable** when used on iPhone devices, as listed in Appium’s currently known open issues. Using this method for an iOS device may cause the Appium script to hang. Appium identifies that the problem is because - "There is no automation hook for hiding the keyboard,...rather than using this method, to think about how a user would hide the keyboard in your app, and tell Appium to do that instead (swipe, tap on a certain coordinate, etc…)" Workaround: Following the advice of the Appium documentation - use Appium to automate the action that a user would use to hide the keyboard. For example, use the swipe method to hide the keyboard if the application defines this action, or if the application defines a "hide-KB" button, automate clicking on this button. The other workaround is to use **sendkey()** without clicking on the text input field.
If you are using Android, use the method below. If the keyboard is visible: ``` driver.pressKeyCode(4); // Android back button ``` otherwise, log that the keyboard is not active. The above method dismisses the keyboard by invoking the system back button.
54,995,041
I'm coding with python 3.6 and am working on a Genetic Algorithm. When generating a new population, when I append the new values to the array all the values in the array are changed to the new value. Is there something wrong with my functions? Code: ``` from fuzzywuzzy import fuzz import numpy as np import random import time def mutate(parent): x = random.randint(0,len(parent)-1) parent[x] = random.randint(0,9) print(parent) return parent def gen(cur_gen, pop_size, fittest): if cur_gen == 1: population = [] for _ in range(pop_size): add_to = [] for _ in range(6): add_to.append(random.randint(0,9)) population.append(add_to) return population else: population = [] for _ in range(pop_size): print('\n') population.append(mutate(fittest)) print(population) return population def get_fittest(population): fitness = [] for x in population: fitness.append(fuzz.ratio(x, [9,9,9,9,9,9])) fittest = fitness.index(max(fitness)) fittest_fitness = fitness[fittest] fittest = population[fittest] return fittest, fittest_fitness done = False generation = 1 population = gen(generation, 10, [0,0,0,0,0,0]) print(population) while not done: generation += 1 time.sleep(0.5) print('Current Generation: ',generation) print('Fittest: ',get_fittest(population)) if get_fittest(population)[1] == 100: done = True population = gen(generation, 10, get_fittest(population)[0]) print('Population: ',population) ``` Output: ``` Fittest: ([7, 4, 2, 7, 8, 9], 72) [3, 4, 2, 7, 8, 9] [[3, 4, 2, 7, 8, 9]] [3, 4, 2, 7, 5, 9] [[3, 4, 2, 7, 5, 9], [3, 4, 2, 7, 5, 9]] [3, 4, 2, 7, 4, 9] [[3, 4, 2, 7, 4, 9], [3, 4, 2, 7, 4, 9], [3, 4, 2, 7, 4, 9]] [3, 1, 2, 7, 4, 9] [[3, 1, 2, 7, 4, 9], [3, 1, 2, 7, 4, 9], [3, 1, 2, 7, 4, 9], [3, 1, 2, 7, 4, 9]] [3, 1, 2, 7, 4, 2] [[3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2]] [3, 1, 2, 5, 4, 2] [[3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2]] [3, 1, 2, 5, 4, 
2] [[3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2]] [3, 1, 2, 5, 4, 5] [[3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5]] [3, 1, 2, 5, 4, 3] [[3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3]] ```
2019/03/05
[ "https://Stackoverflow.com/questions/54995041", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10227474/" ]
The Appium method hideKeyboard() is **known to be unstable** when used on iPhone devices, as listed in Appium’s currently known open issues. Using this method for an iOS device may cause the Appium script to hang. Appium identifies that the problem is because - "There is no automation hook for hiding the keyboard,...rather than using this method, to think about how a user would hide the keyboard in your app, and tell Appium to do that instead (swipe, tap on a certain coordinate, etc…)" Workaround: Following the advice of the Appium documentation - use Appium to automate the action that a user would use to hide the keyboard. For example, use the swipe method to hide the keyboard if the application defines this action, or if the application defines a "hide-KB" button, automate clicking on this button. The other workaround is to use **sendkey()** without clicking on the text input field.
The best solution for this problem is to add this capability in your program: ``` capabilities.setCapability(MobileCapabilityType.AUTOMATION_NAME, "uiautomator2"); ```
54,995,041
I'm coding with python 3.6 and am working on a Genetic Algorithm. When generating a new population, when I append the new values to the array all the values in the array are changed to the new value. Is there something wrong with my functions? Code: ``` from fuzzywuzzy import fuzz import numpy as np import random import time def mutate(parent): x = random.randint(0,len(parent)-1) parent[x] = random.randint(0,9) print(parent) return parent def gen(cur_gen, pop_size, fittest): if cur_gen == 1: population = [] for _ in range(pop_size): add_to = [] for _ in range(6): add_to.append(random.randint(0,9)) population.append(add_to) return population else: population = [] for _ in range(pop_size): print('\n') population.append(mutate(fittest)) print(population) return population def get_fittest(population): fitness = [] for x in population: fitness.append(fuzz.ratio(x, [9,9,9,9,9,9])) fittest = fitness.index(max(fitness)) fittest_fitness = fitness[fittest] fittest = population[fittest] return fittest, fittest_fitness done = False generation = 1 population = gen(generation, 10, [0,0,0,0,0,0]) print(population) while not done: generation += 1 time.sleep(0.5) print('Current Generation: ',generation) print('Fittest: ',get_fittest(population)) if get_fittest(population)[1] == 100: done = True population = gen(generation, 10, get_fittest(population)[0]) print('Population: ',population) ``` Output: ``` Fittest: ([7, 4, 2, 7, 8, 9], 72) [3, 4, 2, 7, 8, 9] [[3, 4, 2, 7, 8, 9]] [3, 4, 2, 7, 5, 9] [[3, 4, 2, 7, 5, 9], [3, 4, 2, 7, 5, 9]] [3, 4, 2, 7, 4, 9] [[3, 4, 2, 7, 4, 9], [3, 4, 2, 7, 4, 9], [3, 4, 2, 7, 4, 9]] [3, 1, 2, 7, 4, 9] [[3, 1, 2, 7, 4, 9], [3, 1, 2, 7, 4, 9], [3, 1, 2, 7, 4, 9], [3, 1, 2, 7, 4, 9]] [3, 1, 2, 7, 4, 2] [[3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2]] [3, 1, 2, 5, 4, 2] [[3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2]] [3, 1, 2, 5, 4, 
2] [[3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2]] [3, 1, 2, 5, 4, 5] [[3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5]] [3, 1, 2, 5, 4, 3] [[3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3]] ```
2019/03/05
[ "https://Stackoverflow.com/questions/54995041", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10227474/" ]
The Appium method hideKeyboard() is **known to be unstable** when used on iPhone devices, as listed in Appium’s currently known open issues. Using this method for an iOS device may cause the Appium script to hang. Appium identifies that the problem is because - "There is no automation hook for hiding the keyboard,...rather than using this method, to think about how a user would hide the keyboard in your app, and tell Appium to do that instead (swipe, tap on a certain coordinate, etc…)" Workaround: Following the advice of the Appium documentation - use Appium to automate the action that a user would use to hide the keyboard. For example, use the swipe method to hide the keyboard if the application defines this action, or if the application defines a "hide-KB" button, automate clicking on this button. The other workaround is to use **sendkey()** without clicking on the text input field.
``` public void clickAfterFindingElement(By by) { try { getDriver().waitForCondition(ExpectedConditions.elementToBeClickable(by)); getDriver().findElement(by).click(); } catch (NoSuchElementException | TimeoutException e) { swipeUp(); getDriver().findElement(by).click(); } } public void hideKeyBoard() { if (isKeyboardShown()) { if (isConfigurationIOS()) { try { clickAfterFindingElement(By.id("keyboard_done_btn")); } catch (Exception e) { try { getDriver().click(By.id("Done")); } catch (Exception e1) { //noop } } } else { ((AndroidDriver) getDriver().getAppiumDriver()).pressKey(new KeyEvent(AndroidKey.BACK)); } } } ``` This solution has been incredibly stable for us for hiding the keyboard on simulators and emulators.
54,995,041
I'm coding with python 3.6 and am working on a Genetic Algorithm. When generating a new population, when I append the new values to the array all the values in the array are changed to the new value. Is there something wrong with my functions? Code: ``` from fuzzywuzzy import fuzz import numpy as np import random import time def mutate(parent): x = random.randint(0,len(parent)-1) parent[x] = random.randint(0,9) print(parent) return parent def gen(cur_gen, pop_size, fittest): if cur_gen == 1: population = [] for _ in range(pop_size): add_to = [] for _ in range(6): add_to.append(random.randint(0,9)) population.append(add_to) return population else: population = [] for _ in range(pop_size): print('\n') population.append(mutate(fittest)) print(population) return population def get_fittest(population): fitness = [] for x in population: fitness.append(fuzz.ratio(x, [9,9,9,9,9,9])) fittest = fitness.index(max(fitness)) fittest_fitness = fitness[fittest] fittest = population[fittest] return fittest, fittest_fitness done = False generation = 1 population = gen(generation, 10, [0,0,0,0,0,0]) print(population) while not done: generation += 1 time.sleep(0.5) print('Current Generation: ',generation) print('Fittest: ',get_fittest(population)) if get_fittest(population)[1] == 100: done = True population = gen(generation, 10, get_fittest(population)[0]) print('Population: ',population) ``` Output: ``` Fittest: ([7, 4, 2, 7, 8, 9], 72) [3, 4, 2, 7, 8, 9] [[3, 4, 2, 7, 8, 9]] [3, 4, 2, 7, 5, 9] [[3, 4, 2, 7, 5, 9], [3, 4, 2, 7, 5, 9]] [3, 4, 2, 7, 4, 9] [[3, 4, 2, 7, 4, 9], [3, 4, 2, 7, 4, 9], [3, 4, 2, 7, 4, 9]] [3, 1, 2, 7, 4, 9] [[3, 1, 2, 7, 4, 9], [3, 1, 2, 7, 4, 9], [3, 1, 2, 7, 4, 9], [3, 1, 2, 7, 4, 9]] [3, 1, 2, 7, 4, 2] [[3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2], [3, 1, 2, 7, 4, 2]] [3, 1, 2, 5, 4, 2] [[3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2]] [3, 1, 2, 5, 4, 
2] [[3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2], [3, 1, 2, 5, 4, 2]] [3, 1, 2, 5, 4, 5] [[3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5], [3, 1, 2, 5, 4, 5]] [3, 1, 2, 5, 4, 3] [[3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3], [3, 1, 2, 5, 4, 3]] ```
2019/03/05
[ "https://Stackoverflow.com/questions/54995041", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10227474/" ]
The Appium method hideKeyboard() is **known to be unstable** when used on iPhone devices, as listed in Appium's currently known open issues. Using this method on an iOS device may cause the Appium script to hang. Appium explains the cause: "There is no automation hook for hiding the keyboard, ... rather than using this method, to think about how a user would hide the keyboard in your app, and tell Appium to do that instead (swipe, tap on a certain coordinate, etc…)" Workaround: Following the advice of the Appium documentation, use Appium to automate the action that a user would use to hide the keyboard. For example, use the swipe method to hide the keyboard if the application defines this action, or if the application defines a "hide-KB" button, automate clicking on this button. The other workaround is to use **sendKeys()** without clicking on the text input field.
I'm testing a React Native app in an iPad simulator via Appium and WebdriverIO. ``` import { remote } from 'webdriverio'; client = await remote(opts); ``` I eventually found that doing **two actions in a row** worked reliably, while a single one doesn't, e.g. ``` await client.hideKeyboard('tapOut') await client.hideKeyboard('tapOut') ``` I also agree with Mat Lavin's comment on the accepted answer that sending '\n' is reliable too. ``` await client.sendKeys(["\n"]) ``` I also found that using no param or "default" caused the keyboard to become kind of minimised to a bar, which then couldn't be used until reboot.
58,350,001
I have two Python dictionaries. Sample: ``` { 'hello' : 10, 'phone' : 12, 'sky' : 13 } { 'hello' : 8, 'phone' : 15, 'red' : 4 } ``` These are the dictionaries of word counts in books 'book1' and 'book2' respectively. How can I generate a pd dataframe which looks like this: ``` hello phone sky red book1 10 12 13 NaN book2 8 15 NaN 4 ``` I have tried: ``` pd.DataFrame([words,counts]) ``` It generated: ``` hello phone sky red 0 10 12 13 NaN 1 8 15 NaN 4 ``` How can I generate the required output?
2019/10/12
[ "https://Stackoverflow.com/questions/58350001", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12133862/" ]
You need this: ``` pd.DataFrame([words, counts], index=['books1', 'books2']) ``` Output: ``` hello phone red sky books1 10 12 NaN 13.0 books2 8 15 4.0 NaN ```
Use `df.set_index(['book1', 'book2'])`. See the docs here: <https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.set_index.html>
58,350,001
I have two Python dictionaries. Sample: ``` { 'hello' : 10, 'phone' : 12, 'sky' : 13 } { 'hello' : 8, 'phone' : 15, 'red' : 4 } ``` These are the dictionaries of word counts in books 'book1' and 'book2' respectively. How can I generate a pd dataframe which looks like this: ``` hello phone sky red book1 10 12 13 NaN book2 8 15 NaN 4 ``` I have tried: ``` pd.DataFrame([words,counts]) ``` It generated: ``` hello phone sky red 0 10 12 13 NaN 1 8 15 NaN 4 ``` How can I generate the required output?
2019/10/12
[ "https://Stackoverflow.com/questions/58350001", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12133862/" ]
Try the code below; hope this helps: ``` import pandas as pd dict1 = { 'hello' : 10, 'phone' : 12, 'sky' : 13 } dict2 = { 'hello' : 8, 'phone' : 15, 'red' : 4 } df = pd.DataFrame([dict1,dict2], index=['book1','book2']) print(df) ``` Output will be: ``` hello phone sky red book1 10 12 13.0 NaN book2 8 15 NaN 4.0 ```
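As an alternative sketch (my own addition, not from the answer above): `DataFrame.from_dict` with `orient='index'` builds the same shape directly from a mapping of row label to word-count dict, so the book names never have to be listed separately from their data:

```python
import pandas as pd

dict1 = {'hello': 10, 'phone': 12, 'sky': 13}
dict2 = {'hello': 8, 'phone': 15, 'red': 4}

# Keys of the outer dict become the row index; words missing from
# one book show up as NaN in that row.
df = pd.DataFrame.from_dict({'book1': dict1, 'book2': dict2}, orient='index')
print(df)
```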
Use `df.set_index(['book1', 'book2'])`. See the docs here: <https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.set_index.html>
58,350,001
I have two Python dictionaries. Sample: ``` { 'hello' : 10, 'phone' : 12, 'sky' : 13 } { 'hello' : 8, 'phone' : 15, 'red' : 4 } ``` These are the dictionaries of word counts in books 'book1' and 'book2' respectively. How can I generate a pd dataframe which looks like this: ``` hello phone sky red book1 10 12 13 NaN book2 8 15 NaN 4 ``` I have tried: ``` pd.DataFrame([words,counts]) ``` It generated: ``` hello phone sky red 0 10 12 13 NaN 1 8 15 NaN 4 ``` How can I generate the required output?
2019/10/12
[ "https://Stackoverflow.com/questions/58350001", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12133862/" ]
You need this: ``` pd.DataFrame([words, counts], index=['books1', 'books2']) ``` Output: ``` hello phone red sky books1 10 12 NaN 13.0 books2 8 15 4.0 NaN ```
Assuming you have a list of dictionaries, you could do something like this: ``` import pandas as pd from itertools import chain data = [{ 'hello': 10, 'phone': 12, 'sky': 13, }, { 'hello': 8, 'phone': 15, 'red': 4 }] df = pd.DataFrame(data=data, columns=set(chain.from_iterable(d.keys() for d in data))) print(df) ``` **Output** ``` sky phone hello red 0 13.0 12 10 NaN 1 NaN 15 8 4.0 ```
58,350,001
I have two Python dictionaries. Sample: ``` { 'hello' : 10, 'phone' : 12, 'sky' : 13 } { 'hello' : 8, 'phone' : 15, 'red' : 4 } ``` These are the dictionaries of word counts in books 'book1' and 'book2' respectively. How can I generate a pd dataframe which looks like this: ``` hello phone sky red book1 10 12 13 NaN book2 8 15 NaN 4 ``` I have tried: ``` pd.DataFrame([words,counts]) ``` It generated: ``` hello phone sky red 0 10 12 13 NaN 1 8 15 NaN 4 ``` How can I generate the required output?
2019/10/12
[ "https://Stackoverflow.com/questions/58350001", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12133862/" ]
Try the code below; hope this helps: ``` import pandas as pd dict1 = { 'hello' : 10, 'phone' : 12, 'sky' : 13 } dict2 = { 'hello' : 8, 'phone' : 15, 'red' : 4 } df = pd.DataFrame([dict1,dict2], index=['book1','book2']) print(df) ``` Output will be: ``` hello phone sky red book1 10 12 13.0 NaN book2 8 15 NaN 4.0 ```
Assuming you have a list of dictionaries, you could do something like this: ``` import pandas as pd from itertools import chain data = [{ 'hello': 10, 'phone': 12, 'sky': 13, }, { 'hello': 8, 'phone': 15, 'red': 4 }] df = pd.DataFrame(data=data, columns=set(chain.from_iterable(d.keys() for d in data))) print(df) ``` **Output** ``` sky phone hello red 0 13.0 12 10 NaN 1 NaN 15 8 4.0 ```
10,062,646
Superficially, an easy question: how do I get a great-looking PDF from my XML document? Actually, my input is a subset of XHTML with a few custom attributes added (to save some information on citation sources, etc.). I've been exploring some routes and would like to get some feedback if anyone has tried some of this before. Note: I've considered XSL-FO to generate PDFs but heard the typographic quality of open source tools is still lagging behind TeX a lot. I guess the most advanced one is [Apache FOP](http://xmlgraphics.apache.org/fop/). But I'm really interested in great-looking PDFs (otherwise I could use the print dialog of my browser). Any thoughts or updates on this? So I've been thinking of using XSLT to convert my customized XML/XHTML dialect to DocBook and go from there ([DocBook via XSLT](http://sourceforge.net/projects/docbook/) to proper HTML seems to work quite well, so I might use it for that as well). But how do I go from DocBook to TeX? I've come across a number of solutions. * [dblatex](http://dblatex.sourceforge.net/) A set of XSLT stylesheets that output LaTeX. * [db2latex](http://db2latex.sourceforge.net/) Started as a clone of dblatex but now provides tighter integration with LaTeX packages and provides a single script to output PDF, which is quite nice. * [passiveTex](http://projects.oucs.ox.ac.uk/passivetex/) Instead of XSLT it uses an XML parser written in TeX. * [TeXML](http://getfo.org/texml) is essentially an XML serialization of the LaTeX language which can be used as an intermediate format, plus an accompanying Python tool that transforms from that XML format to LaTeX/ConTeXt. They [claimed](http://getfo.org/texml/thesis.html) that this avoids the existing solutions' problems with special symbols, losing some braces or spaces, and support for only latin-1 encoding. (Is this still the case?) As my input XML might contain quite a few special characters represented in Unicode, the last point is especially important to me.
I've also been thinking of using XeTeX instead of pdfTeX to get around this problem. (I might lose some typographic quality though, but maybe still better than current open source XSL-FO processors?) So db2latex and TeXML seem to be the favorites. So can anybody comment on the robustness of those? Alternatively, I might have more luck using ConTeXt directly, as there seems to be quite some [interest in the ConTeXt community in XML](http://wiki.contextgarden.net/XML). Especially, I might take a deeper look at ["My Way: Getting Web Content and pdf-Output from One Source"](http://dl.contextgarden.net/myway/tas/xhtml.pdf) and ["Dealing with XML in ConTeXt MkIV"](http://pragma-ade.com/general/manuals/xml-mkiv.pdf). Both documents describe an approach using ConTeXt combined with LuaTeX. ([DocBook In ConTeXt](http://www.leverkruid.eu/context/) seems to do about the same but the latest version is from 2003.) The second document notes: > > You may wonder why we do these manipulations in TEX and not use xslt instead. The > advantage of an integrated approach is that it simplifies usage. Think of not only processing a > document, but also using xml for managing resources in the same run. An xslt > approach is just as verbose (after all, you still need to produce TEX code) and probably > less readable. In the case of MkIV the integrated approach is also faster and gives us > the option to manipulate content at runtime using Lua. > > > What do you think about this? Please keep in mind that I have some experience with both XSLT and TeX but have never gone terribly deep into either of them. I've never tried many different LaTeX packages or alternatives such as ConTeXt (or XeTeX/LuaTeX instead of pdfTeX), but I am willing to learn some new stuff to get my beautiful PDFs in the end ;) Also, I stumbled over [Pandoc](https://pandoc.org/MANUAL.html#creating-a-pdf) but couldn't find any info on how it compares to the other mentioned approaches.
And lastly, a link to some quite extensive documentation on [how to use TeXML with ConTeXt](http://getfo.org/context_xml/contents.html).
2012/04/08
[ "https://Stackoverflow.com/questions/10062646", "https://Stackoverflow.com", "https://Stackoverflow.com/users/214446/" ]
I've done something like this in the past (that is, maintaining master versions of documents in XML, and wanting to produce LaTeX output from them). I've used PassiveTeX in the past, but I found creating stylesheets to be hard work -- the usual result of writing two languages at once. I got it to work, and the result looked very good, but it was probably more effort than it was worth. That said, if the amount of styling you need to add is *small*, then this might be a good route, because it's a single step. The most successful route (read: flexible and attractive) was to use XSLT to transform the document into structural LaTeX, which matches the intended structure of the result document, but which doesn't attempt to do more than minimal formatting. Depending on your document, that might be normal-looking LaTeX, or it might have bespoke structures. Then write or adapt a LaTeX stylesheet or class file which formats that output into something attractive. That way, you're using XSLT to its strengths (and not going beyond them, which rapidly becomes very frustrating), using LaTeX to *its* strengths, and not confusing yourself. That is, this more-or-less matches the approach of your first two alternatives, and whether you go with them, or write/customise a LaTeX stylesheet with bespoke output, is a function of how comfortable you feel with LaTeX stylesheets, and how much complicated or specialised formatting you need to do. Since you say you need to handle Unicode characters in the input, then yes, XeLaTeX would be a good choice for the LaTeX part of the pipeline.
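The "structural LaTeX" idea can be illustrated without XSLT at all. The following toy sketch (my own illustration, not code from the answer) maps a tiny XHTML fragment onto minimal LaTeX commands, deferring all visual styling to a separate, hypothetical class file:

```python
import xml.etree.ElementTree as ET

# XHTML tag -> (opening LaTeX command, closing text). Only structure,
# no styling; a real transform would also escape TeX specials (&, %, #...).
RULES = {
    'h1': ('\\section{', '}'),
    'em': ('\\emph{', '}'),
    'p':  ('', '\n\n'),
}

def to_latex(elem):
    open_, close = RULES.get(elem.tag, ('', ''))
    inner = (elem.text or '') + ''.join(
        to_latex(child) + (child.tail or '') for child in elem
    )
    return open_ + inner + close

root = ET.fromstring(
    '<body><h1>Intro</h1><p>Hello <em>world</em>.</p></body>'
)
print(to_latex(root))
```

The same dispatch-table shape is roughly what the XSLT templates would express, one template per element.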
You might want to check [questions tagged with XML on TeX.sx](https://tex.stackexchange.com/questions/tagged/xml), especially [this](https://tex.stackexchange.com/questions/11260/is-there-some-typesetting-system-that-uses-xml-notation) one. I suggest you use ConTeXt; the current version has no problems with Unicode and can handle OpenType perfectly - and it's programmable in Lua. The most often used alternative with LaTeX is [XMLTeX](http://www.ctan.org/pkg/xmltex), but that needs a lot of TeX foo. If your documents can be handled by pandoc, use that: You'll have multiple output options, more than from any TeX-based system.
10,062,646
Superficially, an easy question: how do I get a great-looking PDF from my XML document? Actually, my input is a subset of XHTML with a few custom attributes added (to save some information on citation sources, etc.). I've been exploring some routes and would like to get some feedback if anyone has tried some of this before. Note: I've considered XSL-FO to generate PDFs but heard the typographic quality of open source tools is still lagging behind TeX a lot. I guess the most advanced one is [Apache FOP](http://xmlgraphics.apache.org/fop/). But I'm really interested in great-looking PDFs (otherwise I could use the print dialog of my browser). Any thoughts or updates on this? So I've been thinking of using XSLT to convert my customized XML/XHTML dialect to DocBook and go from there ([DocBook via XSLT](http://sourceforge.net/projects/docbook/) to proper HTML seems to work quite well, so I might use it for that as well). But how do I go from DocBook to TeX? I've come across a number of solutions. * [dblatex](http://dblatex.sourceforge.net/) A set of XSLT stylesheets that output LaTeX. * [db2latex](http://db2latex.sourceforge.net/) Started as a clone of dblatex but now provides tighter integration with LaTeX packages and provides a single script to output PDF, which is quite nice. * [passiveTex](http://projects.oucs.ox.ac.uk/passivetex/) Instead of XSLT it uses an XML parser written in TeX. * [TeXML](http://getfo.org/texml) is essentially an XML serialization of the LaTeX language which can be used as an intermediate format, plus an accompanying Python tool that transforms from that XML format to LaTeX/ConTeXt. They [claimed](http://getfo.org/texml/thesis.html) that this avoids the existing solutions' problems with special symbols, losing some braces or spaces, and support for only latin-1 encoding. (Is this still the case?) As my input XML might contain quite a few special characters represented in Unicode, the last point is especially important to me.
I've also been thinking of using XeTeX instead of pdfTeX to get around this problem. (I might lose some typographic quality though, but maybe still better than current open source XSL-FO processors?) So db2latex and TeXML seem to be the favorites. So can anybody comment on the robustness of those? Alternatively, I might have more luck using ConTeXt directly, as there seems to be quite some [interest in the ConTeXt community in XML](http://wiki.contextgarden.net/XML). Especially, I might take a deeper look at ["My Way: Getting Web Content and pdf-Output from One Source"](http://dl.contextgarden.net/myway/tas/xhtml.pdf) and ["Dealing with XML in ConTeXt MkIV"](http://pragma-ade.com/general/manuals/xml-mkiv.pdf). Both documents describe an approach using ConTeXt combined with LuaTeX. ([DocBook In ConTeXt](http://www.leverkruid.eu/context/) seems to do about the same but the latest version is from 2003.) The second document notes: > > You may wonder why we do these manipulations in TEX and not use xslt instead. The > advantage of an integrated approach is that it simplifies usage. Think of not only processing a > document, but also using xml for managing resources in the same run. An xslt > approach is just as verbose (after all, you still need to produce TEX code) and probably > less readable. In the case of MkIV the integrated approach is also faster and gives us > the option to manipulate content at runtime using Lua. > > > What do you think about this? Please keep in mind that I have some experience with both XSLT and TeX but have never gone terribly deep into either of them. I've never tried many different LaTeX packages or alternatives such as ConTeXt (or XeTeX/LuaTeX instead of pdfTeX), but I am willing to learn some new stuff to get my beautiful PDFs in the end ;) Also, I stumbled over [Pandoc](https://pandoc.org/MANUAL.html#creating-a-pdf) but couldn't find any info on how it compares to the other mentioned approaches.
And lastly, a link to some quite extensive documentation on [how to use TeXML with ConTeXt](http://getfo.org/context_xml/contents.html).
2012/04/08
[ "https://Stackoverflow.com/questions/10062646", "https://Stackoverflow.com", "https://Stackoverflow.com/users/214446/" ]
In the end, I've decided to go with [Pandoc](https://pandoc.org/MANUAL.html#creating-a-pdf); it seems to be a very polished and solid code base. One potential drawback is that you have to limit yourself to the set of markup features available in Pandoc's internal representation, which maps basically one-to-one to its [extended markdown](http://johnmacfarlane.net/pandoc/README.html#pandocs-markdown). Because I didn't think generating markdown from my XHTML-like source was a good idea, I succeeded in contributing a Pandoc [component that reads DocBook](https://github.com/jgm/pandoc/blob/master/src/Text/Pandoc/Readers/DocBook.hs), which is currently in the master branch of Pandoc's development repo. So now I have a simple XSLT stylesheet that converts from my XHTML dialect to DocBook (which is also XML) and then use Pandoc to export to a host of other formats, including PDF via ConTeXt.
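The XHTML-to-DocBook step described above can be approximated in a few lines. This is a hedged toy version (the real stylesheet is XSLT, and the tag coverage here is invented purely for illustration):

```python
import xml.etree.ElementTree as ET

# Illustrative XHTML -> DocBook tag mapping; a real stylesheet would
# cover far more elements and preserve attributes such as citation data.
TAG_MAP = {'body': 'article', 'h1': 'title', 'p': 'para', 'em': 'emphasis'}

def rename(elem):
    # Rewrite the tag in place, then recurse into children.
    elem.tag = TAG_MAP.get(elem.tag, elem.tag)
    for child in elem:
        rename(child)
    return elem

root = ET.fromstring('<body><h1>Intro</h1><p>Hello <em>world</em>.</p></body>')
docbook = ET.tostring(rename(root), encoding='unicode')
print(docbook)
```

The resulting DocBook string could then be handed to Pandoc's DocBook reader for conversion to other formats.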
You might want to check [questions tagged with XML on TeX.sx](https://tex.stackexchange.com/questions/tagged/xml), especially [this](https://tex.stackexchange.com/questions/11260/is-there-some-typesetting-system-that-uses-xml-notation) one. I suggest you use ConTeXt; the current version has no problems with Unicode and can handle OpenType perfectly - and it's programmable in Lua. The most often used alternative with LaTeX is [XMLTeX](http://www.ctan.org/pkg/xmltex), but that needs a lot of TeX foo. If your documents can be handled by pandoc, use that: You'll have multiple output options, more than from any TeX-based system.
10,062,646
Superficially, an easy question: how do I get a great-looking PDF from my XML document? Actually, my input is a subset of XHTML with a few custom attributes added (to save some information on citation sources, etc.). I've been exploring some routes and would like to get some feedback if anyone has tried some of this before. Note: I've considered XSL-FO to generate PDFs but heard the typographic quality of open source tools is still lagging behind TeX a lot. I guess the most advanced one is [Apache FOP](http://xmlgraphics.apache.org/fop/). But I'm really interested in great-looking PDFs (otherwise I could use the print dialog of my browser). Any thoughts or updates on this? So I've been thinking of using XSLT to convert my customized XML/XHTML dialect to DocBook and go from there ([DocBook via XSLT](http://sourceforge.net/projects/docbook/) to proper HTML seems to work quite well, so I might use it for that as well). But how do I go from DocBook to TeX? I've come across a number of solutions. * [dblatex](http://dblatex.sourceforge.net/) A set of XSLT stylesheets that output LaTeX. * [db2latex](http://db2latex.sourceforge.net/) Started as a clone of dblatex but now provides tighter integration with LaTeX packages and provides a single script to output PDF, which is quite nice. * [passiveTex](http://projects.oucs.ox.ac.uk/passivetex/) Instead of XSLT it uses an XML parser written in TeX. * [TeXML](http://getfo.org/texml) is essentially an XML serialization of the LaTeX language which can be used as an intermediate format, plus an accompanying Python tool that transforms from that XML format to LaTeX/ConTeXt. They [claimed](http://getfo.org/texml/thesis.html) that this avoids the existing solutions' problems with special symbols, losing some braces or spaces, and support for only latin-1 encoding. (Is this still the case?) As my input XML might contain quite a few special characters represented in Unicode, the last point is especially important to me.
I've also been thinking of using XeTeX instead of pdfTeX to get around this problem. (I might lose some typographic quality though, but maybe still better than current open source XSL-FO processors?) So db2latex and TeXML seem to be the favorites. So can anybody comment on the robustness of those? Alternatively, I might have more luck using ConTeXt directly, as there seems to be quite some [interest in the ConTeXt community in XML](http://wiki.contextgarden.net/XML). Especially, I might take a deeper look at ["My Way: Getting Web Content and pdf-Output from One Source"](http://dl.contextgarden.net/myway/tas/xhtml.pdf) and ["Dealing with XML in ConTeXt MkIV"](http://pragma-ade.com/general/manuals/xml-mkiv.pdf). Both documents describe an approach using ConTeXt combined with LuaTeX. ([DocBook In ConTeXt](http://www.leverkruid.eu/context/) seems to do about the same but the latest version is from 2003.) The second document notes: > > You may wonder why we do these manipulations in TEX and not use xslt instead. The > advantage of an integrated approach is that it simplifies usage. Think of not only processing a > document, but also using xml for managing resources in the same run. An xslt > approach is just as verbose (after all, you still need to produce TEX code) and probably > less readable. In the case of MkIV the integrated approach is also faster and gives us > the option to manipulate content at runtime using Lua. > > > What do you think about this? Please keep in mind that I have some experience with both XSLT and TeX but have never gone terribly deep into either of them. I've never tried many different LaTeX packages or alternatives such as ConTeXt (or XeTeX/LuaTeX instead of pdfTeX), but I am willing to learn some new stuff to get my beautiful PDFs in the end ;) Also, I stumbled over [Pandoc](https://pandoc.org/MANUAL.html#creating-a-pdf) but couldn't find any info on how it compares to the other mentioned approaches.
And lastly, a link to some quite extensive documentation on [how to use TeXML with ConTeXt](http://getfo.org/context_xml/contents.html).
2012/04/08
[ "https://Stackoverflow.com/questions/10062646", "https://Stackoverflow.com", "https://Stackoverflow.com/users/214446/" ]
You might want to check [questions tagged with XML on TeX.sx](https://tex.stackexchange.com/questions/tagged/xml), especially [this](https://tex.stackexchange.com/questions/11260/is-there-some-typesetting-system-that-uses-xml-notation) one. I suggest you use ConTeXt; the current version has no problems with Unicode and can handle OpenType perfectly - and it's programmable in Lua. The most often used alternative with LaTeX is [XMLTeX](http://www.ctan.org/pkg/xmltex), but that needs a lot of TeX foo. If your documents can be handled by pandoc, use that: You'll have multiple output options, more than from any TeX-based system.
If you want more options on how to customize your TeX output, I would suggest [xml2tex](https://github.com/transpect/xml2tex). It's based on a declarative configuration where you can specify your mapping from XML to TeX. MathML and XML tables (HTML and CALS) are automatically converted to TeX. It's also open source and provides ready-to-use configurations for DocBook and DITA.
10,062,646
Superficially, an easy question: how do I get a great-looking PDF from my XML document? Actually, my input is a subset of XHTML with a few custom attributes added (to save some information on citation sources, etc.). I've been exploring some routes and would like to get some feedback if anyone has tried some of this before. Note: I've considered XSL-FO to generate PDFs but heard the typographic quality of open source tools is still lagging behind TeX a lot. I guess the most advanced one is [Apache FOP](http://xmlgraphics.apache.org/fop/). But I'm really interested in great-looking PDFs (otherwise I could use the print dialog of my browser). Any thoughts or updates on this? So I've been thinking of using XSLT to convert my customized XML/XHTML dialect to DocBook and go from there ([DocBook via XSLT](http://sourceforge.net/projects/docbook/) to proper HTML seems to work quite well, so I might use it for that as well). But how do I go from DocBook to TeX? I've come across a number of solutions. * [dblatex](http://dblatex.sourceforge.net/) A set of XSLT stylesheets that output LaTeX. * [db2latex](http://db2latex.sourceforge.net/) Started as a clone of dblatex but now provides tighter integration with LaTeX packages and provides a single script to output PDF, which is quite nice. * [passiveTex](http://projects.oucs.ox.ac.uk/passivetex/) Instead of XSLT it uses an XML parser written in TeX. * [TeXML](http://getfo.org/texml) is essentially an XML serialization of the LaTeX language which can be used as an intermediate format, plus an accompanying Python tool that transforms from that XML format to LaTeX/ConTeXt. They [claimed](http://getfo.org/texml/thesis.html) that this avoids the existing solutions' problems with special symbols, losing some braces or spaces, and support for only latin-1 encoding. (Is this still the case?) As my input XML might contain quite a few special characters represented in Unicode, the last point is especially important to me.
I've also been thinking of using XeTeX instead of pdfTeX to get around this problem. (I might lose some typographic quality though, but maybe still better than current open source XSL-FO processors?) So db2latex and TeXML seem to be the favorites. So can anybody comment on the robustness of those? Alternatively, I might have more luck using ConTeXt directly, as there seems to be quite some [interest in the ConTeXt community in XML](http://wiki.contextgarden.net/XML). Especially, I might take a deeper look at ["My Way: Getting Web Content and pdf-Output from One Source"](http://dl.contextgarden.net/myway/tas/xhtml.pdf) and ["Dealing with XML in ConTeXt MkIV"](http://pragma-ade.com/general/manuals/xml-mkiv.pdf). Both documents describe an approach using ConTeXt combined with LuaTeX. ([DocBook In ConTeXt](http://www.leverkruid.eu/context/) seems to do about the same but the latest version is from 2003.) The second document notes: > > You may wonder why we do these manipulations in TEX and not use xslt instead. The > advantage of an integrated approach is that it simplifies usage. Think of not only processing a > document, but also using xml for managing resources in the same run. An xslt > approach is just as verbose (after all, you still need to produce TEX code) and probably > less readable. In the case of MkIV the integrated approach is also faster and gives us > the option to manipulate content at runtime using Lua. > > > What do you think about this? Please keep in mind that I have some experience with both XSLT and TeX but have never gone terribly deep into either of them. I've never tried many different LaTeX packages or alternatives such as ConTeXt (or XeTeX/LuaTeX instead of pdfTeX), but I am willing to learn some new stuff to get my beautiful PDFs in the end ;) Also, I stumbled over [Pandoc](https://pandoc.org/MANUAL.html#creating-a-pdf) but couldn't find any info on how it compares to the other mentioned approaches.
And lastly, a link to some quite extensive documentation on [how to use TeXML with ConTeXt](http://getfo.org/context_xml/contents.html).
2012/04/08
[ "https://Stackoverflow.com/questions/10062646", "https://Stackoverflow.com", "https://Stackoverflow.com/users/214446/" ]
I've done something like this in the past (that is, maintaining master versions of documents in XML, and wanting to produce LaTeX output from them). I've used PassiveTeX in the past, but I found creating stylesheets to be hard work -- the usual result of writing two languages at once. I got it to work, and the result looked very good, but it was probably more effort than it was worth. That said, if the amount of styling you need to add is *small*, then this might be a good route, because it's a single step. The most successful route (read: flexible and attractive) was to use XSLT to transform the document into structural LaTeX, which matches the intended structure of the result document, but which doesn't attempt to do more than minimal formatting. Depending on your document, that might be normal-looking LaTeX, or it might have bespoke structures. Then write or adapt a LaTeX stylesheet or class file which formats that output into something attractive. That way, you're using XSLT to its strengths (and not going beyond them, which rapidly becomes very frustrating), using LaTeX to *its* strengths, and not confusing yourself. That is, this more-or-less matches the approach of your first two alternatives, and whether you go with them, or write/customise a LaTeX stylesheet with bespoke output, is a function of how comfortable you feel with LaTeX stylesheets, and how much complicated or specialised formatting you need to do. Since you say you need to handle Unicode characters in the input, then yes, XeLaTeX would be a good choice for the LaTeX part of the pipeline.
If you want more options on how to customize your TeX output, I would suggest using this: [xml2tex](https://github.com/transpect/xml2tex). It's based on a declarative configuration where you can specify your mapping from XML to TeX. MathML and XML tables (HTML and CALS) are automatically converted to TeX. Plus, it's Open Source and provides ready-to-use configurations for DocBook and DITA.
10,062,646
Superficially, an easy question: how do I get a great-looking PDF from my XML document? Actually, my input is a subset of XHTML with a few custom attributes added (to save some information on citation sources, etc). I've been exploring some routes and would like to get some feedback if anyone has tried some of this before. Note: I've considered XSL-FO to generate PDFs but heard the typographic quality of open source tools is still lagging behind TeX a lot. Guess the most advanced one is [Apache FOP](http://xmlgraphics.apache.org/fop/). But I'm really interested in great-looking PDFs (otherwise I could use the print dialog of my browser). Any thoughts, updates on this? So I've been thinking of using XSLT to convert my customized XML/XHTML dialect to DocBook and go from there ([DocBook via XSLT](http://sourceforge.net/projects/docbook/) to proper HTML seems to work quite well, so I might use it for that as well). But how do I go from DocBook to TeX? I've come across a number of solutions. * [dblatex](http://dblatex.sourceforge.net/) A set of XSLT stylesheets that output LaTeX. * [db2latex](http://db2latex.sourceforge.net/) Started as a clone of dblatex but now provides tighter integration with LaTeX packages and provides a single script to output PDF, which is quite nice. * [passiveTex](http://projects.oucs.ox.ac.uk/passivetex/) Instead of XSLT it uses an XML parser written in TeX. * [TeXML](http://getfo.org/texml) is essentially an XML serialization of the LaTeX language which can be used as an intermediate format, plus an accompanying Python tool that transforms from that XML format to LaTeX/ConTeXt. They [claimed](http://getfo.org/texml/thesis.html) that this avoids the existing solutions' problems with special symbols, losing some braces or spaces, and support for only latin-1 encoding. (Is this still the case?) As my input XML might contain quite a few special characters represented in Unicode, the last point is especially important to me.
I've also been thinking of using XeTeX instead of pdfTeX to get around this problem. (I might lose some typographic quality though, but maybe still better than current open source XSL-FO processors?) So db2latex and TeXML seem to be the favorites. Can anybody comment on the robustness of those? Alternatively, I might have more luck using ConTeXt directly, as there seems to be quite some [interest in the ConTeXt community in XML](http://wiki.contextgarden.net/XML). In particular, I might take a deeper look at ["My Way: Getting Web Content and pdf-Output from One Source"](http://dl.contextgarden.net/myway/tas/xhtml.pdf) and ["Dealing with XML in ConTeXt MkIV"](http://pragma-ade.com/general/manuals/xml-mkiv.pdf). Both documents describe an approach using ConTeXt combined with LuaTeX. ([DocBook In ConTeXt](http://www.leverkruid.eu/context/) seems to do about the same, but the latest version is from 2003.) The second document notes: > > You may wonder why we do these manipulations in TEX and not use xslt instead. The > advantage of an integrated approach is that it simplifies usage. Think of not only processing a > document, but also using xml for managing resources in the same run. An xslt > approach is just as verbose (after all, you still need to produce TEX code) and probably > less readable. In the case of MkIV the integrated approach is also faster and gives us > the option to manipulate content at runtime using Lua. > > > What do you think about this? Please keep in mind that I have some experience with both XSLT and TeX but have never gone terribly deep into either of them. I have never tried many different LaTeX packages or alternatives such as ConTeXt (or XeTeX/LuaTeX instead of pdfTeX), but I am willing to learn some new stuff to get my beautiful PDFs in the end ;) Also, I stumbled over [Pandoc](https://pandoc.org/MANUAL.html#creating-a-pdf) but couldn't find any info on how it compares to the other mentioned approaches.
And lastly, a link to some quite extensive documentation on [how to use TeXML with ConTeXt](http://getfo.org/context_xml/contents.html).
2012/04/08
[ "https://Stackoverflow.com/questions/10062646", "https://Stackoverflow.com", "https://Stackoverflow.com/users/214446/" ]
In the end, I've decided to go with [Pandoc](https://pandoc.org/MANUAL.html#creating-a-pdf), which seems to be a very polished and solid code base. One potential drawback is that you have to limit yourself to the markup features available in Pandoc's internal representation, which maps basically one-to-one to its [extended markdown](http://johnmacfarlane.net/pandoc/README.html#pandocs-markdown). Because I didn't think generating markdown from my XHTML-like source was a good idea, I succeeded in initiating a Pandoc [component that reads DocBook](https://github.com/jgm/pandoc/blob/master/src/Text/Pandoc/Readers/DocBook.hs), which is currently in the master branch of Pandoc's development repo. So now I have a simple XSLT stylesheet that converts from my XHTML dialect to DocBook (which is also XML), and then I use Pandoc to export to a host of other formats, including PDF via ConTeXt.
If you want more options on how to customize your TeX output, I would suggest using this: [xml2tex](https://github.com/transpect/xml2tex). It's based on a declarative configuration where you can specify your mapping from XML to TeX. MathML and XML tables (HTML and CALS) are automatically converted to TeX. Plus, it's Open Source and provides ready-to-use configurations for DocBook and DITA.
2,558,107
In the [App Engine docs](http://code.google.com/appengine/docs/python/xmpp/overview.html#XMPP_Addresses), a JID is defined like this: > > An application can send and receive > messages using several kinds of > addresses, or "JIDs." > > > On Wikipedia, however, a JID is defined like this: > > Every user on the (XMPP) network has a unique > Jabber ID (usually abbreviated as > JID). > > > So, a JID is both a user identifier and an application address?
2010/04/01
[ "https://Stackoverflow.com/questions/2558107", "https://Stackoverflow.com", "https://Stackoverflow.com/users/306533/" ]
A JID is globally unique in that anyone sending an XMPP message as you@domain.com can be assumed to be you. However, an App Engine app can send XMPP messages as any number of JIDs. Your app can send XMPP messages as `your-app-id@appspot.com` or as `foo@your-app-id.appspotchat.com` or as `bar@your-app-id.appspotchat.com` or as `anything@your-app-id.appspotchat.com`. These IDs are still globally unique and identifying -- anyone sending an XMPP message as `foo@your-app-id.appspotchat.com` can be assumed to be your app.
Since I happened to have this up in my browser, the current best canonical definition of JIDs is here: [draft-saintandre-xmpp-address](http://xmpp.org/internet-drafts/draft-saintandre-xmpp-address-00.html), which just got pulled out of [RFC3920bis](http://xmpp.org/internet-drafts/draft-ietf-xmpp-3920bis-06.html).
71,668,239
I am working on some plotly image processing for my work. I have been using matplotlib but need something more interactive, so I switched to dash and plotly. My goal is for people on my team to be able to draw a shape around certain parts of an image and return pixel values. I am using this documentation and want a similar result: <https://dash.plotly.com/annotations> ; specifically "Draw a path to show the histogram of the ROI" **Code:** ``` import numpy as np import plotly.express as px from dash import Dash from dash.dependencies import Input, Output import dash_core_components as dcc import dash_html_components as html from skimage import data, draw #yes necessary from scipy import ndimage import os, glob from skimage import io imagePath = os.path.join('.','examples') filename = glob.glob(os.path.join(imagePath,'IMG_0650_6.tif'))[0] moon = io.imread(filename) #imagePath = os.path.join('.','examples') #imageName = glob.glob(os.path.join(imagePath,'IMG_0650_6.tif'))[0] def path_to_indices(path): """From SVG path to numpy array of coordinates, each row being a (row, col) point """ indices_str = [ el.replace("M", "").replace("Z", "").split(",") for el in path.split("L") ] return np.rint(np.array(indices_str, dtype=float)).astype(np.int) def path_to_mask(path, shape): """From SVG path to a boolean array where all pixels enclosed by the path are True, and the other pixels are False. 
""" cols, rows = path_to_indices(path).T rr, cc = draw.polygon(rows, cols) mask = np.zeros(shape, dtype=np.bool) mask[rr, cc] = True mask = ndimage.binary_fill_holes(mask) return mask img = data.moon() #print(img) #print(type(img)) fig = px.imshow(img, binary_string=True) fig.update_layout(dragmode="drawclosedpath") fig_hist = px.histogram(img.ravel()) app = Dash(__name__) app.layout = html.Div( [ html.H3("Draw a path to show the histogram of the ROI"), html.Div( [dcc.Graph(id="graph-camera", figure=fig),], style={"width": "60%", "display": "inline-block", "padding": "0 0"}, ), html.Div( [dcc.Graph(id="graph-histogram", figure=fig_hist),], style={"width": "40%", "display": "inline-block", "padding": "0 0"}, ), ] ) @app.callback( Output("graph-histogram", "figure"), Input("graph-camera", "relayoutData"), prevent_initial_call=True, ) def on_new_annotation(relayout_data): if "shapes" in relayout_data: last_shape = relayout_data["shapes"][-1] mask = path_to_mask(last_shape["path"], img.shape) return px.histogram(img[mask]) else: return dash.no_update if __name__ == "__main__": app.run_server(debug=True) ``` **This returns the following error:** Dash is running on <http://127.0.0.1:8050/> * Serving Flask app "**main**" (lazy loading) * Environment: production WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. 
* Debug mode: on **Traceback:** ``` Traceback (most recent call last): File "/Users/anthea/opt/anaconda3/lib/python3.9/site-packages/ipykernel_launcher.py", line 16, in <module> app.launch_new_instance() File "/Users/anthea/opt/anaconda3/lib/python3.9/site-packages/traitlets/config/application.py", line 845, in launch_instance app.initialize(argv) File "/Users/anthea/opt/anaconda3/lib/python3.9/site-packages/traitlets/config/application.py", line 88, in inner return method(app, *args, **kwargs) File "/Users/anthea/opt/anaconda3/lib/python3.9/site-packages/ipykernel/kernelapp.py", line 632, in initialize self.init_sockets() File "/Users/anthea/opt/anaconda3/lib/python3.9/site-packages/ipykernel/kernelapp.py", line 282, in init_sockets self.shell_port = self._bind_socket(self.shell_socket, self.shell_port) File "/Users/anthea/opt/anaconda3/lib/python3.9/site-packages/ipykernel/kernelapp.py", line 229, in _bind_socket return self._try_bind_socket(s, port) File "/Users/anthea/opt/anaconda3/lib/python3.9/site-packages/ipykernel/kernelapp.py", line 205, in _try_bind_socket s.bind("tcp://%s:%i" % (self.ip, port)) File "/Users/anthea/opt/anaconda3/lib/python3.9/site-packages/zmq/sugar/socket.py", line 208, in bind super().bind(addr) File "zmq/backend/cython/socket.pyx", line 540, in zmq.backend.cython.socket.Socket.bind File "zmq/backend/cython/checkrc.pxd", line 28, in zmq.backend.cython.checkrc._check_rc zmq.error.ZMQError: Address already in use ``` **Solution Attempts:** I've consulted similar issues on here and tried reassigning 8050 to 1337, which was ineffective.
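As an aside, the parsing step performed by `path_to_indices` in the question can be sketched in pure Python (illustration only; the question's version uses NumPy's `np.rint` and the long-deprecated `np.int`, while this sketch uses the built-in `round`, which matches `np.rint`'s round-half-to-even behaviour):

```python
def path_to_indices(path):
    # Pure-Python version of the helper in the question:
    # an SVG path like "M5.2,2.7L8.1,4.4Z" becomes a list of
    # (col, row) integer pairs, one per "L"-separated point.
    points = [
        el.replace("M", "").replace("Z", "").split(",")
        for el in path.split("L")
    ]
    return [[round(float(x)), round(float(y))] for x, y in points]

print(path_to_indices("M5.2,2.7L8.1,4.4Z"))  # [[5, 3], [8, 4]]
```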
2022/03/29
[ "https://Stackoverflow.com/questions/71668239", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18599127/" ]
Use the following as a time generator with a 15-minute interval, then use other date/time functions as needed to extract the date part or time part into separate columns. ``` with CTE as (select timestampadd(min,seq4()*15 ,date_trunc(hour, current_timestamp())) as time_count from table(generator(rowcount=>4*24))) select time_count from cte; +-------------------------------+ | TIME_COUNT | |-------------------------------| | 2022-03-29 14:00:00.000 -0700 | | 2022-03-29 14:15:00.000 -0700 | | 2022-03-29 14:30:00.000 -0700 | | 2022-03-29 14:45:00.000 -0700 | | 2022-03-29 15:00:00.000 -0700 | | 2022-03-29 15:15:00.000 -0700 | . . . ....truncated output | 2022-03-30 13:15:00.000 -0700 | | 2022-03-30 13:30:00.000 -0700 | | 2022-03-30 13:45:00.000 -0700 | +-------------------------------+ ```
There are many answers to this question [h](https://stackoverflow.com/questions/71666252/generate-series-equivalent-in-snowflake/71666318#71666318) [e](https://stackoverflow.com/questions/71473750/duplicating-a-row-a-certain-number-of-times-and-then-adding-30-mins-to-a-timesta/71475320#71475320) [r](https://stackoverflow.com/questions/71387244/fill-in-missing-dates-across-multiple-partitions-snowflake/71390294#71390294) [e](https://stackoverflow.com/questions/71354827/creating-an-amortization-schedule-in-snowflake/71366357#71366357) already (those 4 are all this month). But major point to note is you MUST NOT use `SEQx()` as the number generator (you can use it in the ORDER BY, but that is not needed). As noted in the [doc's](https://docs.snowflake.com/en/sql-reference/functions/seq1.html) > > Important > > > This function uses sequences to produce a unique set of increasing integers, but does not necessarily produce a gap-free sequence. When operating on a large quantity of data, gaps can appear in a sequence. If a fully ordered, gap-free sequence is required, consider using the ROW\_NUMBER window function. > > > ``` CREATE TABLE table_of_2_years_date_times AS SELECT date_time::date as date, date_time::time as time FROM ( SELECT row_number() over (order by null)-1 as rn ,dateadd('minute', 15 * rn, '2022-03-01'::date) as date_time from table(generator(rowcount=>4*24*365*2)) ) ORDER BY rn; ``` then selecting the top/bottom: ``` (SELECT * FROM table_of_2_years_date_times ORDER BY date,time LIMIT 5) UNION ALL (SELECT * FROM table_of_2_years_date_times ORDER BY date desc,time desc LIMIT 5) ORDER BY 1,2; ``` | DATE | TIME | | --- | --- | | 2022-03-01 | 00:00:00 | | 2022-03-01 | 00:15:00 | | 2022-03-01 | 00:30:00 | | 2022-03-01 | 00:45:00 | | 2022-03-01 | 01:00:00 | | 2024-02-28 | 22:45:00 | | 2024-02-28 | 23:00:00 | | 2024-02-28 | 23:15:00 | | 2024-02-28 | 23:30:00 | | 2024-02-28 | 23:45:00 |
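As a quick cross-check (plain Python, not Snowflake), the same gap-free 15-minute series can be generated and inspected like this:

```python
from datetime import datetime, timedelta

# One day of 15-minute slots: 4 per hour * 24 hours = 96 rows,
# mirroring table(generator(rowcount=>4*24)) in the answers above.
start = datetime(2022, 3, 1)
times = [start + timedelta(minutes=15 * n) for n in range(4 * 24)]

print(times[0])   # 2022-03-01 00:00:00
print(times[1])   # 2022-03-01 00:15:00
print(times[-1])  # 2022-03-01 23:45:00
```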
16,373,317
In my website, each user has their own login id and password, so if a user is logged in, he can add, edit and update his record only. models.py is ``` class Report(models.Model): user = models.ForeignKey(User, null=False) name = models.CharField(max_length=20, null=True, blank=True) ``` views.py ``` def profile(request): if request.method == 'POST': reportform = ReportForm(request.POST) if reportform.is_valid(): report = reportform.save(commit=False) report.user = request.user report.save() return redirect('/index/') else: report = Report.objects.get() reportform = ReportForm(instance=report) return render_to_response('report/index.html', { 'form': reportform, }, context_instance=RequestContext(request)) ``` A user should have one age and it should be in one row of data in the database. 1. If there is no data in the database, i.e. if it's the first time for the user, the user is allowed to insert data into the database. 2. If the user reopens a page, the data that was inserted should be shown in editable mode. It is just like the scenario *"if a new user creates a gmail account, he can create the account at the first time and again he can only edit and update their details"*. I want to implement the same procedure in my website. I tried it with the above code, but I am not able to insert the data into the database. I tried with direct insertion in the mysql db and checked, and I can see the inserted data in editable mode, but if I change it and save, it creates another row of data in the db. Then when I try to insert it for the first time, I get the following traceback.
``` Environment: Request Method: GET Request URL: http://192.168.100.10/report/index/ Django Version: 1.3.7 Python Version: 2.7.0 Installed Applications: ['django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'django.contrib.messages', 'django.contrib.staticfiles', 'django.contrib.admin', 'django.contrib.admindocs', 'django.contrib.humanize', 'django.contrib.staticfiles', 'south', 'collect', 'incident', 'report_settings'] Installed Middleware: ('django.middleware.common.CommonMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.transaction.TransactionMiddleware', 'django.middleware.cache.FetchFromCacheMiddleware') Traceback: File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response 111. response = callback(request, *callback_args, **callback_kwargs) File "/root/Projects/ir/incident/views.py" in when 559. report = Report.objects.get() File "/usr/lib/python2.7/site-packages/django/db/models/manager.py" in get 132. return self.get_query_set().get(*args, **kwargs) File "/usr/lib/python2.7/site-packages/django/db/models/query.py" in get 349. % self.model._meta.object_name) Exception Type: DoesNotExist at report/index/ Exception Value: Report matching query does not exist. ```
2013/05/04
[ "https://Stackoverflow.com/questions/16373317", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2215612/" ]
You may use a [`BackgroundWorker`](http://msdn.microsoft.com/en-us/library/system.componentmodel.backgroundworker.aspx) to do the operation that you need in a different thread, like the following: ``` BackgroundWorker bgw; public Form1() { InitializeComponent(); bgw = new BackgroundWorker(); bgw.DoWork += bgw_DoWork; } private void Form1_DragDrop(object sender, DragEventArgs e) { if (e.Data.GetDataPresent(DataFormats.FileDrop)) { string[] s = (string[])e.Data.GetData(DataFormats.FileDrop, false); bgw.RunWorkerAsync(s); } } ``` Also, for your "cross thread operation" issue, try to use the [`Invoke`](http://msdn.microsoft.com/en-us/library/system.windows.forms.control.invoke%28v=vs.80%29.aspx) method like this: ``` void bgw_DoWork(object sender, DoWorkEventArgs e) { Invoke(new Action<object>((args) => { string[] files = (string[])args; }), e.Argument); } ``` It's better to check if the dropped items are files using [`GetDataPresent`](http://msdn.microsoft.com/en-us/library/system.windows.forms.idataobject.getdatapresent.aspx), like above.
You can use a background thread for this long-running operation, if it is not ui-intensive. ``` ThreadPool.QueueUserWorkItem((o) => /* long running operation*/) ```
44,355,493
The following python code gives me the different combinations from the given values. ``` import itertools iterables = [ [1,2,3,4], [88,99], ['a','b'] ] for t in itertools.product(*iterables): print t ``` Output:- ``` (1, 88, 'a') (1, 88, 'b') (1, 99, 'a') (1, 99, 'b') (2, 88, 'a') ``` and so on. Can some one please tell me how to modify this code so the output looks like a list; ``` 188a 188b 199a 199b 288a ```
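A minimal sketch of one way to get that joined output: convert each element of the tuple to a string and concatenate with `str.join`:

```python
import itertools

iterables = [[1, 2, 3, 4], [88, 99], ['a', 'b']]

for t in itertools.product(*iterables):
    # e.g. (1, 88, 'a') -> '188a'
    print(''.join(map(str, t)))
```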
2017/06/04
[ "https://Stackoverflow.com/questions/44355493", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8110677/" ]
**Yes, you can.** Possible ways: **1)** Use the Gradle plugin [gradle-console-reporter](https://github.com/ksoichiro/gradle-console-reporter) to report various kinds of summaries to the console. JUnit, JaCoCo and Cobertura reports are supported. In your case, the following output will be printed to the console: ``` ... BUILD SUCCESSFUL Total time: 4.912 secs Coverage summary: project1: 72.2% project2-with-long-name: 44.4% ``` Then you can use [coverage](https://docs.gitlab.com/ee/ci/yaml/#coverage) with a regular expression in Gitlab's `.gitlab-ci.yml` to parse the code coverage. **2)** The second option is a little bit tricky. You can print the full JaCoCo HTML report (e.g. using `cat target/site/jacoco/index.html`) and then use a regular expression (see [this post](https://medium.com/@kaiwinter/javafx-and-code-coverage-on-gitlab-ci-29c690e03fd6)) or `grep` (see [this post](https://blog.exxeta.com/java/2018/08/12/display-the-test-coverage-with-gitlab/)) to parse the coverage.
AFAIK Gradle does not support this; each project is treated separately. To support your use case, an aggregation task can be created to parse each report, update some value at the root project, and finally print that value to stdout. **Update with approximate code for a solution:** ``` subprojects { task aggregateCoverage { // parse report from each module into ext.currentCoverage rootProject.ext.coverage += currentCoverage } } ```
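For illustration, here is the kind of regular expression GitLab's `coverage:` keyword could use to pull the percentage out of a summary line like `project1: 72.2%` — a sketch only, since the exact line format depends on the plugin version:

```python
import re

# Hypothetical summary line, as printed by a console coverage reporter.
line = "project1: 72.2%"

# Capture a number (with optional decimals) immediately before a '%'.
match = re.search(r"(\d+(?:\.\d+)?)%", line)
print(match.group(1))  # 72.2
```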
58,285,474
``` package com.company; import java.util.Scanner; public class Main { public static void main(String[] args) { Scanner sc = new Scanner(System.in); while(sc != 0) { System.out.println("first number: "); int firstNum = sc.nextInt(); System.out.println("second number: "); int secondNum = sc.nextInt(); System.out.println("The sum of your numbers: " + (firstNum + secondNum)); } } } ``` So my intended goal is to have a script that will allow me to add two integers (chosen by user input with a scanner), and once those two are added I can then start a new sum. I'd also like to break from my while loop when the user inputs 0. I think my error is that I can't use the != operator on the Scanner type. Could someone explain the flaw in my code? (I'm used to python which is probably why I'm making this mistake)
2019/10/08
[ "https://Stackoverflow.com/questions/58285474", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9783604/" ]
You need to declare the variables outside the `while` scope and update them until the condition is no longer met. Try this: ``` package com.company; import java.util.Scanner; public class Main { public static void main(String[] args) { Scanner sc = new Scanner(System.in); int firstNum = 1; int secondNum = 1; while(firstNum != 0 && secondNum != 0) { System.out.println("first number: "); firstNum = sc.nextInt(); System.out.println("second number: "); secondNum = sc.nextInt(); System.out.println("The sum of your numbers: " + (firstNum + secondNum)); } } } ```
Hi, just remove the `while` block, since it makes no sense to use it here. Here is the corrected code: ``` package com.company; import java.util.Scanner; public class Main { public static void main(String[] args) { Scanner sc = new Scanner(System.in); System.out.println("first number: "); int firstNum = sc.nextInt(); System.out.println("second number: "); int secondNum = sc.nextInt(); System.out.println("The sum of your numbers: " + (firstNum + secondNum)); } } ```
58,285,474
``` package com.company; import java.util.Scanner; public class Main { public static void main(String[] args) { Scanner sc = new Scanner(System.in); while(sc != 0) { System.out.println("first number: "); int firstNum = sc.nextInt(); System.out.println("second number: "); int secondNum = sc.nextInt(); System.out.println("The sum of your numbers: " + (firstNum + secondNum)); } } } ``` So my intended goal is to have a script that will allow me to add two integers (chosen by user input with a scanner), and once those two are added I can then start a new sum. I'd also like to break from my while loop when the user inputs 0. I think my error is that I can't use the != operator on the Scanner type. Could someone explain the flaw in my code? (I'm used to python which is probably why I'm making this mistake)
2019/10/08
[ "https://Stackoverflow.com/questions/58285474", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9783604/" ]
You should have some kind of an "infinite" loop like so: ``` public class Main { public static void main(String[] args) { Scanner sc = new Scanner(System.in); while(true) { System.out.println("first number: "); int firstNum = sc.nextInt(); if (firstNum == 0) { break; } System.out.println("second number: "); int secondNum = sc.nextInt(); if (secondNum == 0) { break; } System.out.println("The sum of your numbers: " + (firstNum + secondNum)); } } } ```
Hi, just remove the `while` block, since it makes no sense to use it here. Here is the corrected code: ``` package com.company; import java.util.Scanner; public class Main { public static void main(String[] args) { Scanner sc = new Scanner(System.in); System.out.println("first number: "); int firstNum = sc.nextInt(); System.out.println("second number: "); int secondNum = sc.nextInt(); System.out.println("The sum of your numbers: " + (firstNum + secondNum)); } } ```
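To complement the answers for this question: the core fix is to test the value returned by `nextInt()`, never the `Scanner` object itself. A small self-contained sketch (the `run` helper and class name are mine, added for testability; `hasNextInt` guards against running out of input):

```java
import java.util.Scanner;

public class SumLoop {

    // Reads pairs of numbers until a first number of 0 is entered,
    // printing each sum, and returns how many pairs were summed.
    // The loop condition tests values read by nextInt(), not the Scanner.
    static int run(Scanner sc) {
        int pairs = 0;
        while (sc.hasNextInt()) {
            int firstNum = sc.nextInt();
            if (firstNum == 0) {
                break; // sentinel value ends the loop
            }
            int secondNum = sc.nextInt();
            System.out.println("The sum of your numbers: " + (firstNum + secondNum));
            pairs++;
        }
        return pairs;
    }

    public static void main(String[] args) {
        run(new Scanner(System.in));
    }
}
```

A `Scanner` can also wrap a plain `String`, which makes the loop easy to exercise without typing input by hand.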
58,285,474
``` package com.company; import java.util.Scanner; public class Main { public static void main(String[] args) { Scanner sc = new Scanner(System.in); while(sc != 0) { System.out.println("first number: "); int firstNum = sc.nextInt(); System.out.println("second number: "); int secondNum = sc.nextInt(); System.out.println("The sum of your numbers: " + (firstNum + secondNum)); } } } ``` So my intended goal is to have a script that will allow me to add two integers (chosen by user input with a scanner), and once those two are added I can then start a new sum. I'd also like to break from my while loop when the user inputs 0. I think my error is that I can't use the != operator on the Scanner type. Could someone explain the flaw in my code? (I'm used to python which is probably why I'm making this mistake)
2019/10/08
[ "https://Stackoverflow.com/questions/58285474", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9783604/" ]
You need to declare the variables outside the `while` scope and update them until the condition is no longer met. Try this: ``` package com.company; import java.util.Scanner; public class Main { public static void main(String[] args) { Scanner sc = new Scanner(System.in); int firstNum = 1; int secondNum = 1; while(firstNum != 0 && secondNum != 0) { System.out.println("first number: "); firstNum = sc.nextInt(); System.out.println("second number: "); secondNum = sc.nextInt(); System.out.println("The sum of your numbers: " + (firstNum + secondNum)); } } } ```
You cannot compare the `Scanner` object `sc` with the integer value 0. You can do it as in the code below. ``` public static void main(String[] args) { try (Scanner sc = new Scanner(System.in)) { System.out.println("first number: "); int firstNum = sc.nextInt(); while(firstNum != 0) { System.out.println("second number: "); int secondNum = sc.nextInt(); System.out.println("The sum of your numbers: " + (firstNum + secondNum)); System.out.println("first number: "); firstNum = sc.nextInt(); } } } ``` or ``` public static void main(String[] args) { try (Scanner sc = new Scanner(System.in)) { while(true) { System.out.println("first number: "); int firstNum = sc.nextInt(); if(firstNum == 0) { break; } System.out.println("second number: "); int secondNum = sc.nextInt(); System.out.println("The sum of your numbers: " + (firstNum + secondNum)); } } } ```
58,285,474
``` package com.company; import java.util.Scanner; public class Main { public static void main(String[] args) { Scanner sc = new Scanner(System.in); while(sc != 0) { System.out.println("first number: "); int firstNum = sc.nextInt(); System.out.println("second number: "); int secondNum = sc.nextInt(); System.out.println("The sum of your numbers: " + (firstNum + secondNum)); } } } ``` So my intended goal is to have a script that will allow me to add two integers (chosen by user input with a scanner), and once those two are added I can then start a new sum. I'd also like to break from my while loop when the user inputs 0. I think my error is that I can't use the != operator on the Scanner type. Could someone explain the flaw in my code? (I'm used to python which is probably why I'm making this mistake)
2019/10/08
[ "https://Stackoverflow.com/questions/58285474", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9783604/" ]
You should have some kind of an "infinite" loop like so: ``` public class Main { public static void main(String[] args) { Scanner sc = new Scanner(System.in); while(true) { System.out.println("first number: "); int firstNum = sc.nextInt(); if (firstNum == 0) { break; } System.out.println("second number: "); int secondNum = sc.nextInt(); if (secondNum == 0) { break; } System.out.println("The sum of your numbers: " + (firstNum + secondNum)); } } } ```
You cannot compare the `Scanner` object `sc` with the integer value 0. You can do it as in the code below. ``` public static void main(String[] args) { try (Scanner sc = new Scanner(System.in)) { System.out.println("first number: "); int firstNum = sc.nextInt(); while(firstNum != 0) { System.out.println("second number: "); int secondNum = sc.nextInt(); System.out.println("The sum of your numbers: " + (firstNum + secondNum)); System.out.println("first number: "); firstNum = sc.nextInt(); } } } ``` or ``` public static void main(String[] args) { try (Scanner sc = new Scanner(System.in)) { while(true) { System.out.println("first number: "); int firstNum = sc.nextInt(); if(firstNum == 0) { break; } System.out.println("second number: "); int secondNum = sc.nextInt(); System.out.println("The sum of your numbers: " + (firstNum + secondNum)); } } } ```
58,285,474
``` package com.company; import java.util.Scanner; public class Main { public static void main(String[] args) { Scanner sc = new Scanner(System.in); while(sc != 0) { System.out.println("first number: "); int firstNum = sc.nextInt(); System.out.println("second number: "); int secondNum = sc.nextInt(); System.out.println("The sum of your numbers: " + (firstNum + secondNum)); } } } ``` So my intended goal is to have a script that will allow me to add two integers (chosen by user input with a scanner), and once those two are added I can then start a new sum. I'd also like to break from my while loop when the user inputs 0. I think my error is that I can't use the != operator on the Scanner type. Could someone explain the flaw in my code? (I'm used to python which is probably why I'm making this mistake)
2019/10/08
[ "https://Stackoverflow.com/questions/58285474", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9783604/" ]
You need to declare the variables outside the `while` scope and update them until the condition is no longer met. Try this: ``` package com.company; import java.util.Scanner; public class Main { public static void main(String[] args) { Scanner sc = new Scanner(System.in); int firstNum = 1; int secondNum = 1; while(firstNum != 0 && secondNum != 0) { System.out.println("first number: "); firstNum = sc.nextInt(); System.out.println("second number: "); secondNum = sc.nextInt(); System.out.println("The sum of your numbers: " + (firstNum + secondNum)); } } } ```
You should have some kind of an "infinite" loop like so: ``` public class Main { public static void main(String[] args) { Scanner sc = new Scanner(System.in); while(true) { System.out.println("first number: "); int firstNum = sc.nextInt(); if (firstNum == 0) { break; } System.out.println("second number: "); int secondNum = sc.nextInt(); if (secondNum == 0) { break; } System.out.println("The sum of your numbers: " + (firstNum + secondNum)); } } } ```
58,285,474
``` package com.company; import java.util.Scanner; public class Main { public static void main(String[] args) { Scanner sc = new Scanner(System.in); while(sc != 0) { System.out.println("first number: "); int firstNum = sc.nextInt(); System.out.println("second number: "); int secondNum = sc.nextInt(); System.out.println("The sum of your numbers: " + (firstNum + secondNum)); } } } ``` So my intended goal is to have a script that will allow me to add two integers (chosen by user input with a scanner), and once those two are added I can then start a new sum. I'd also like to break from my while loop when the user inputs 0. I think my error is that I can't use the != operator on the Scanner type. Could someone explain the flaw in my code? (I'm used to python which is probably why I'm making this mistake)
2019/10/08
[ "https://Stackoverflow.com/questions/58285474", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9783604/" ]
You need to declare the variables outside the while loop's scope and update them until the condition is no longer met. Try this: ``` package com.company; import java.util.Scanner; public class Main { public static void main(String[] args) { Scanner sc = new Scanner(System.in); int firstNum = 1; int secondNum = 1; while(firstNum != 0 && secondNum != 0) { System.out.println("first number: "); firstNum = sc.nextInt(); System.out.println("second number: "); secondNum = sc.nextInt(); System.out.println("The sum of your numbers: " + (firstNum + secondNum)); } } } ```
It's ugly af but it will let you understand the process ``` package com.company; import java.util.Scanner; public class Main { public static void main(String[] args) { Scanner sc = new Scanner(System.in); boolean continueRunning = true; while (continueRunning) { System.out.println("first number: "); int firstNum = sc.nextInt(); System.out.println("second number: "); int secondNum = sc.nextInt(); System.out.println("The sum of your numbers: " + (firstNum + secondNum)); continueRunning = firstNum != 0 && secondNum != 0; } } } ```
58,285,474
``` package com.company; import java.util.Scanner; public class Main { public static void main(String[] args) { Scanner sc = new Scanner(System.in); while(sc != 0) { System.out.println("first number: "); int firstNum = sc.nextInt(); System.out.println("second number: "); int secondNum = sc.nextInt(); System.out.println("The sum of your numbers: " + (firstNum + secondNum)); } } } ``` So my intended goal is to have a script that will allow me to add two integers (chosen by user input with a scanner) and once those two are added i can then start a new sum. I'd also like to break from my while loop when the user inputs 0. I think my error is that i can't use the != operator on the Scanner type Could someone explain the flaw in my code? (I'm used to python which is probably why I'm making this mistake)
2019/10/08
[ "https://Stackoverflow.com/questions/58285474", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9783604/" ]
You need to declare the variables outside the while loop's scope and update them until the condition is no longer met. Try this: ``` package com.company; import java.util.Scanner; public class Main { public static void main(String[] args) { Scanner sc = new Scanner(System.in); int firstNum = 1; int secondNum = 1; while(firstNum != 0 && secondNum != 0) { System.out.println("first number: "); firstNum = sc.nextInt(); System.out.println("second number: "); secondNum = sc.nextInt(); System.out.println("The sum of your numbers: " + (firstNum + secondNum)); } } } ```
also ugly but i will sleep better tonight. ``` public static void main(String[] args) { Scanner scanner = new Scanner(System.in); requestNumbersAndSum(scanner); } private static void requestNumbersAndSum(Scanner scanner) { int firstNum = requestANum(scanner, "first number: "); int secondNum = requestANum(scanner, "second number: "); System.out.println("The sum of your numbers: " + (firstNum + secondNum)); requestNumbersAndSum(scanner); } private static int requestANum(Scanner scanner, String messageToUser) { System.out.println(messageToUser); int requestedNumber = scanner.nextInt(); if(requestedNumber == 0){ System.exit(0); } return requestedNumber; } } ```
58,285,474
``` package com.company; import java.util.Scanner; public class Main { public static void main(String[] args) { Scanner sc = new Scanner(System.in); while(sc != 0) { System.out.println("first number: "); int firstNum = sc.nextInt(); System.out.println("second number: "); int secondNum = sc.nextInt(); System.out.println("The sum of your numbers: " + (firstNum + secondNum)); } } } ``` So my intended goal is to have a script that will allow me to add two integers (chosen by user input with a scanner) and once those two are added i can then start a new sum. I'd also like to break from my while loop when the user inputs 0. I think my error is that i can't use the != operator on the Scanner type Could someone explain the flaw in my code? (I'm used to python which is probably why I'm making this mistake)
2019/10/08
[ "https://Stackoverflow.com/questions/58285474", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9783604/" ]
You should have some kind of an "infinite" loop like so: ``` public class Main { public static void main(String[] args) { Scanner sc = new Scanner(System.in); while(true) { System.out.println("first number: "); int firstNum = sc.nextInt(); if (firstNum == 0) { break; } System.out.println("second number: "); int secondNum = sc.nextInt(); if (secondNum == 0) { break; } System.out.println("The sum of your numbers: " + (firstNum + secondNum)); } } } ```
It's ugly af but it will let you understand the process ``` package com.company; import java.util.Scanner; public class Main { public static void main(String[] args) { Scanner sc = new Scanner(System.in); boolean continueRunning = true; while (continueRunning) { System.out.println("first number: "); int firstNum = sc.nextInt(); System.out.println("second number: "); int secondNum = sc.nextInt(); System.out.println("The sum of your numbers: " + (firstNum + secondNum)); continueRunning = firstNum != 0 && secondNum != 0; } } } ```
58,285,474
``` package com.company; import java.util.Scanner; public class Main { public static void main(String[] args) { Scanner sc = new Scanner(System.in); while(sc != 0) { System.out.println("first number: "); int firstNum = sc.nextInt(); System.out.println("second number: "); int secondNum = sc.nextInt(); System.out.println("The sum of your numbers: " + (firstNum + secondNum)); } } } ``` So my intended goal is to have a script that will allow me to add two integers (chosen by user input with a scanner) and once those two are added i can then start a new sum. I'd also like to break from my while loop when the user inputs 0. I think my error is that i can't use the != operator on the Scanner type Could someone explain the flaw in my code? (I'm used to python which is probably why I'm making this mistake)
2019/10/08
[ "https://Stackoverflow.com/questions/58285474", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9783604/" ]
You should have some kind of an "infinite" loop like so: ``` public class Main { public static void main(String[] args) { Scanner sc = new Scanner(System.in); while(true) { System.out.println("first number: "); int firstNum = sc.nextInt(); if (firstNum == 0) { break; } System.out.println("second number: "); int secondNum = sc.nextInt(); if (secondNum == 0) { break; } System.out.println("The sum of your numbers: " + (firstNum + secondNum)); } } } ```
also ugly but i will sleep better tonight. ``` public static void main(String[] args) { Scanner scanner = new Scanner(System.in); requestNumbersAndSum(scanner); } private static void requestNumbersAndSum(Scanner scanner) { int firstNum = requestANum(scanner, "first number: "); int secondNum = requestANum(scanner, "second number: "); System.out.println("The sum of your numbers: " + (firstNum + secondNum)); requestNumbersAndSum(scanner); } private static int requestANum(Scanner scanner, String messageToUser) { System.out.println(messageToUser); int requestedNumber = scanner.nextInt(); if(requestedNumber == 0){ System.exit(0); } return requestedNumber; } } ```
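All of the Java variants above implement the same sentinel-loop idea. Since the asker mentions coming from Python, here is a hedged sketch of the equivalent pattern there (the function name and its iterator argument are illustrative, not from the original post); it consumes numbers pairwise until a `0` sentinel appears:

```python
def sum_pairs(numbers):
    """Sum consecutive pairs from `numbers`, stopping at a 0 sentinel."""
    it = iter(numbers)
    sums = []
    for first in it:
        if first == 0:       # sentinel instead of a first number
            break
        second = next(it, 0)
        if second == 0:      # sentinel instead of a second number
            break
        sums.append(first + second)
    return sums

print(sum_pairs([1, 2, 3, 4, 0]))  # [3, 7]
```

In an interactive version, `int(input(...))` calls would replace the iterator, with the same two `break` checks guarding each read.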
3,867,131
I'm stuck for a full afternoon now trying to get python to build in 32bit mode. I run a 64bit Linux machine with openSUSE 11.3, I have the necessary -devel and -32bit packages installed to build applications in 32bit mode. The problem with the python build seems to be not in the make run itself, but in the afterwards run of setup.py, invoked by make. I found the following instructions for Ubuntu Linux: h\*\*p://indefinitestudies.org/2010/02/08/how-to-build-32-bit-python-on-ubuntu-9-10-x86\_64/ When I do as described, I get the following output: <http://pastebin.com/eP8WJ8V4> But I have the -32bit packages of libreadline, libopenssl, etc.pp. installed, but of course, they reside under /lib and /usr/lib and not /lib64 and /usr/lib64. When I start the python binary that results from this build, i get: ``` ./python Python 2.6.6 (r266:84292, Oct 5 2010, 21:22:06) [GCC 4.5.0 20100604 [gcc-4_5-branch revision 160292]] on linux2 Type "help", "copyright", "credits" or "license" for more information. Traceback (most recent call last): File "/etc/pythonstart", line 7, in <module> import readline ImportError: No module named readline ``` So how to get setup.py to observe the LDFLAGS=-L/lib command?? Any help is greatly appreciated. Regards, Philipp
2010/10/05
[ "https://Stackoverflow.com/questions/3867131", "https://Stackoverflow.com", "https://Stackoverflow.com/users/467244/" ]
You'll need to pass the appropriate flags to gcc and ld to tell the compiler to compile and produce 32bit binaries. Use `--build` and `--host`. ``` ./configure --help System types: --build=BUILD configure for building on BUILD [guessed] --host=HOST cross-compile to build programs to run on HOST [BUILD] ``` You need to use `./configure --build=x86_64-pc-linux-gnu --host=i686-pc-linux-gnu` to compile for 32-bit Linux in a 64-bit Linux system. **Note:** You still need to add the other `./configure` options.
Regarding why, since Kirk (and probably others) wonder, here is an example: I have a Python app with large dicts of dicts containing light-weight objects. This consumes almost twice as much RAM on 64bit as on 32bit simply due to the pointers. I need to run a few instances of 2GB (32bit) each and the extra RAM quickly adds up. For FreeBSD, a detailed recipe for 32bit-on-64bit jail is here <http://www.gundersen.net/32bit-jail-on-64bit-freebsd/>
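Once a build like this finishes, a quick sanity check from inside the interpreter tells you whether you actually got a 32-bit binary: the size of a C pointer is 4 bytes on a 32-bit build and 8 on a 64-bit one (a generic check, not specific to the recipe above):

```python
import struct
import sys

bits = struct.calcsize("P") * 8  # size of a C pointer, in bits
print(bits)                      # 32 on a 32-bit build, 64 on a 64-bit one
print(sys.maxsize > 2**32)       # False on a 32-bit build
```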
44,153,457
Alright matplotlib afficionados, we know how to plot a [donut chart](https://stackoverflow.com/questions/36296101/donut-chart-python), but what is better than a donut chart? A double-donut chart. Specifically: We have a set of elements that fall into disjoint categories and sub-categories of the first categorization. The donut chart should have slices for the categories in the outer ring and slices for the sub-categories in the inner ring, obviously aligned with the outer slices. Is there any library that provides this or do we need to work this out here? [![enter image description here](https://i.stack.imgur.com/BCM0N.png)](https://i.stack.imgur.com/BCM0N.png)
2017/05/24
[ "https://Stackoverflow.com/questions/44153457", "https://Stackoverflow.com", "https://Stackoverflow.com/users/626537/" ]
To obtain a double donut chart, you can plot as many pie charts in the same plot as you want. So the outer pie would have a `width` set to its wedges and the inner pie would have a radius that is less or equal `1-width`. ``` import matplotlib.pyplot as plt import numpy as np fig, ax = plt.subplots() ax.axis('equal') width = 0.3 cm = plt.get_cmap("tab20c") cout = cm(np.arange(3)*4) pie, _ = ax.pie([120,77,39], radius=1, labels=list("ABC"), colors=cout) plt.setp( pie, width=width, edgecolor='white') cin = cm(np.array([1,2,5,6,9,10])) labels = list(map("".join, zip(list("aabbcc"),map(str, [1,2]*3)))) pie2, _ = ax.pie([60,60,37,40,29,10], radius=1-width, labels=labels, labeldistance=0.7, colors=cin) plt.setp( pie2, width=width, edgecolor='white') plt.show() ``` [![enter image description here](https://i.stack.imgur.com/C128P.png)](https://i.stack.imgur.com/C128P.png) *Note: I made this code also available in the matplotlib gallery as [nested pie example](https://matplotlib.org/gallery/pie_and_polar_charts/nested_pie.html).*
I adapted the example you provided; you can tackle your problem by plotting two donuts on the same figure, with a smaller outer radius for one of them. ``` import matplotlib.pyplot as plt import numpy as np def make_pie(sizes, text,colors,labels, radius=1): col = [[i/255 for i in c] for c in colors] plt.axis('equal') width = 0.35 kwargs = dict(colors=col, startangle=180) outside, _ = plt.pie(sizes, radius=radius, pctdistance=1-width/2,labels=labels,**kwargs) plt.setp( outside, width=width, edgecolor='white') kwargs = dict(size=20, fontweight='bold', va='center') plt.text(0, 0, text, ha='center', **kwargs) # Group colors c1 = (226, 33, 7) c2 = (60, 121, 189) # Subgroup colors d1 = (226, 33, 7) d2 = (60, 121, 189) d3 = (25, 25, 25) make_pie([100, 80, 90], "", [d1, d3, d2], ['M', 'N', 'F'], radius=1.2) make_pie([180, 90], "", [c1, c2], ['M', 'F'], radius=1) plt.show() ``` [![enter image description here](https://i.stack.imgur.com/cRc9L.png)](https://i.stack.imgur.com/cRc9L.png)
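Whichever of the two approaches above you use, the inner and outer data have to agree: each outer slice should equal the sum of its sub-category slices, or the two rings will visibly drift out of alignment. A quick consistency check with the numbers from the first answer (two sub-categories per category):

```python
outer = [120, 77, 39]             # categories A, B, C
inner = [60, 60, 37, 40, 29, 10]  # sub-categories a1, a2, b1, b2, c1, c2
grouped = [sum(inner[i:i + 2]) for i in range(0, len(inner), 2)]
print(grouped)           # [120, 77, 39]
print(grouped == outer)  # True
```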
47,302,085
It is not yet clear for me what `metrics` are (as given in the code below). What exactly are they evaluating? Why do we need to define them in the `model`? Why we can have multiple metrics in one model? And more importantly what is the mechanics behind all this? Any scientific reference is also appreciated. ```python model.compile(loss='mean_squared_error', optimizer='sgd', metrics=['mae', 'acc']) ```
2017/11/15
[ "https://Stackoverflow.com/questions/47302085", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3705055/" ]
As described on the [keras metrics](https://keras.io/metrics/) page: > > A metric is a function that is used to judge the performance of your > model > > > Metrics are frequently used with the early stopping callback to terminate training and avoid overfitting.
Reference: [Keras Metrics Documentation](https://keras.io/metrics/) As given in the documentation page of `keras metrics`, a `metric` judges the performance of your model. The `metrics` argument in the `compile` method holds the list of metrics that need to be evaluated by the model during its training and testing phases. Metrics like: * `binary_accuracy` * `categorical_accuracy` * `sparse_categorical_accuracy` * `top_k_categorical_accuracy` and * `sparse_top_k_categorical_accuracy` are the available metric functions that are supplied in the **`metrics`** parameter when the model is compiled. Metric functions are customizable as well. When multiple metrics need to be evaluated, they are passed in the form of a `dictionary` or a `list`. One important resource you should refer to for diving deep into metrics can be found [here](https://machinelearningmastery.com/custom-metrics-deep-learning-keras-python/)
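Stripped of the framework, every entry in that `metrics` list is just a function from ground-truth and predicted values to a single score. A dependency-free sketch of two of them, using toy data and plain Python rather than the Keras backend:

```python
def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def accuracy(y_true, y_pred):
    """Fraction of predictions that round to the right binary label."""
    return sum(t == round(p) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [1, 0, 1, 1]
y_pred = [0.9, 0.2, 0.4, 0.8]
print(round(mae(y_true, y_pred), 3))  # 0.275
print(accuracy(y_true, y_pred))       # 0.75
```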
47,302,085
It is not yet clear for me what `metrics` are (as given in the code below). What exactly are they evaluating? Why do we need to define them in the `model`? Why we can have multiple metrics in one model? And more importantly what is the mechanics behind all this? Any scientific reference is also appreciated. ```python model.compile(loss='mean_squared_error', optimizer='sgd', metrics=['mae', 'acc']) ```
2017/11/15
[ "https://Stackoverflow.com/questions/47302085", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3705055/" ]
So in order to understand what `metrics` are, it's good to start by understanding what a `loss` function is. Neural networks are mostly trained using gradient methods by an iterative process of decreasing a `loss` function. A `loss` is designed to have two crucial properties - first, the smaller its value is, the better your model fits your data, and second, it should be differentiable. So, knowing this, we could fully define what a `metric` is: it's a function that, given predicted values and ground truth values from examples, provides you with a scalar measure of the "fitness" of your model to the data you have. So, as you may see, a `loss` function is a metric, but the opposite doesn't always hold. To understand these differences, let's look at the most common examples of `metrics` usage: 1. **Measure the performance of your network using non-differentiable functions:** e.g. accuracy is not differentiable (not even continuous) so you cannot directly optimize your network w.r.t. it. However, you could use it in order to choose the model with the best accuracy. 2. **Obtain values of different loss functions when your final loss is a combination of a few of them:** Let's assume that your loss has a regularization term which measures how your weights differ from `0`, and a term which measures the fitness of your model. In this case, you could use `metrics` in order to have a separate track of how the fitness of your model changes across epochs. 3. **Track a measure with respect to which you don't want to directly optimize your model:** so - let's assume that you are solving a multidimensional regression problem where you are mostly concerned about `mse`, but at the same time you are interested in how the `cosine-distance` of your solution is changing over time. Then, it's best to use `metrics`. I hope that the explanation presented above made it obvious what metrics are used for, and why you could use multiple metrics in one model.
So now, let's say a few words about mechanics of their usage in `keras`. There are two ways of computing them while training: 1. **Using `metrics` defined while compilation**: this is what you directly asked. In this case, `keras` is defining a separate tensor for each metric you defined, to have it computed while training. This usually makes computation faster, but this comes at a cost of additional compilations, and the fact that metrics should be defined in terms of `keras.backend` functions. 2. **Using `keras.callback`**: It is nice that you can use [`Callbacks`](https://keras.io/callbacks/) in order to compute your metrics. As each callback has a default attribute of `model`, you could compute a variety of metrics using `model.predict` or model parameters while training. Moreover, it makes it possible to compute it, not only epoch-wise, but also batch-wise, or training-wise. This comes at a cost of slower computations, and more complicated logic - as you need to define metrics on your own. [Here](https://keras.io/metrics/) you can find a list of available metrics, as well as an example on how you could define your own.
As described on the [keras metrics](https://keras.io/metrics/) page: > > A metric is a function that is used to judge the performance of your > model > > > Metrics are frequently used with the early stopping callback to terminate training and avoid overfitting.
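The callback route described in the longer answer above can be sketched without any framework at all: the training loop hands each epoch's predictions to a hook object, which computes whatever metric it likes and keeps a history. Class and method names here are illustrative, not the real Keras API:

```python
class MetricHistory:
    """Records a metric value after every epoch, callback-style."""
    def __init__(self, metric_fn, y_true):
        self.metric_fn = metric_fn
        self.y_true = y_true
        self.history = []

    def on_epoch_end(self, y_pred):
        self.history.append(self.metric_fn(self.y_true, y_pred))

def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# toy "training": predictions drift toward the targets each epoch
y_true = [1.0, 0.0, 1.0]
cb = MetricHistory(mse, y_true)
for preds in ([0.5, 0.5, 0.5], [0.8, 0.2, 0.8], [0.9, 0.1, 0.9]):
    cb.on_epoch_end(preds)
print([round(m, 2) for m in cb.history])  # [0.25, 0.04, 0.01]
```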
47,302,085
It is not yet clear for me what `metrics` are (as given in the code below). What exactly are they evaluating? Why do we need to define them in the `model`? Why we can have multiple metrics in one model? And more importantly what is the mechanics behind all this? Any scientific reference is also appreciated. ```python model.compile(loss='mean_squared_error', optimizer='sgd', metrics=['mae', 'acc']) ```
2017/11/15
[ "https://Stackoverflow.com/questions/47302085", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3705055/" ]
As described on the [keras metrics](https://keras.io/metrics/) page: > > A metric is a function that is used to judge the performance of your > model > > > Metrics are frequently used with the early stopping callback to terminate training and avoid overfitting.
From an implementation point of view, losses and metrics are actually identical functions in Keras: ``` Python 3.7.2 (tags/v3.7.2:9a3ffc0492, Dec 23 2018, 23:09:28) [MSC v.1916 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import tensorflow.keras as Keras >>> print(Keras.losses.mean_squared_error == Keras.metrics.mean_squared_error) True >>> print(Keras.losses.poisson == Keras.metrics.poisson) True ```
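The identity shown above is possible because nothing about such a function is loss-specific: the same callable can drive the gradient updates and be reported as a metric. A toy single-parameter gradient-descent sketch in plain Python (not Keras) makes the point:

```python
def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# fit y = w * x by gradient descent on mse, reporting the same mse as a metric
xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
w = 0.0
for _ in range(200):
    preds = [w * x for x in xs]
    grad = sum(2 * (p - t) * x for p, t, x in zip(preds, ys, xs)) / len(xs)
    w -= 0.05 * grad
print(round(w, 4))                             # 2.0
print(round(mse(ys, [w * x for x in xs]), 6))  # 0.0
```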
47,302,085
It is not yet clear for me what `metrics` are (as given in the code below). What exactly are they evaluating? Why do we need to define them in the `model`? Why we can have multiple metrics in one model? And more importantly what is the mechanics behind all this? Any scientific reference is also appreciated. ```python model.compile(loss='mean_squared_error', optimizer='sgd', metrics=['mae', 'acc']) ```
2017/11/15
[ "https://Stackoverflow.com/questions/47302085", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3705055/" ]
As described on the [keras metrics](https://keras.io/metrics/) page: > > A metric is a function that is used to judge the performance of your > model > > > Metrics are frequently used with the early stopping callback to terminate training and avoid overfitting.
Loss helps find the best solution your model can produce. A metric actually tells us how good it is. Imagine we found the regression line (the one with the minimum squared error). Is that a good enough solution? This is what the metric will answer (considering the shape and spread of the data, ideally!).
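That regression-line example can be made concrete: the closed-form least-squares slope is the loss-optimal answer, but judging it with a different metric (here, worst-case absolute error) reveals that, with an outlier present, the "best" line is still not a good fit. The numbers below are toy data invented for illustration:

```python
# least-squares slope for y = w * x (closed form), judged by a different metric
xs = [1.0, 2.0, 3.0, 4.0]
ys = [1.1, 1.9, 3.2, 10.0]   # last point is an outlier
w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
preds = [w * x for x in xs]
max_err = max(abs(t - p) for t, p in zip(ys, preds))
print(round(w, 3))        # 1.817  (the loss-optimal slope)
print(round(max_err, 3))  # 2.733  (the metric says the fit is still poor)
```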
47,302,085
It is not yet clear for me what `metrics` are (as given in the code below). What exactly are they evaluating? Why do we need to define them in the `model`? Why we can have multiple metrics in one model? And more importantly what is the mechanics behind all this? Any scientific reference is also appreciated. ```python model.compile(loss='mean_squared_error', optimizer='sgd', metrics=['mae', 'acc']) ```
2017/11/15
[ "https://Stackoverflow.com/questions/47302085", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3705055/" ]
So in order to understand what `metrics` are, it's good to start by understanding what a `loss` function is. Neural networks are mostly trained using gradient methods by an iterative process of decreasing a `loss` function. A `loss` is designed to have two crucial properties - first, the smaller its value is, the better your model fits your data, and second, it should be differentiable. So, knowing this, we could fully define what a `metric` is: it's a function that, given predicted values and ground truth values from examples, provides you with a scalar measure of the "fitness" of your model to the data you have. So, as you may see, a `loss` function is a metric, but the opposite doesn't always hold. To understand these differences, let's look at the most common examples of `metrics` usage: 1. **Measure the performance of your network using non-differentiable functions:** e.g. accuracy is not differentiable (not even continuous) so you cannot directly optimize your network w.r.t. it. However, you could use it in order to choose the model with the best accuracy. 2. **Obtain values of different loss functions when your final loss is a combination of a few of them:** Let's assume that your loss has a regularization term which measures how your weights differ from `0`, and a term which measures the fitness of your model. In this case, you could use `metrics` in order to have a separate track of how the fitness of your model changes across epochs. 3. **Track a measure with respect to which you don't want to directly optimize your model:** so - let's assume that you are solving a multidimensional regression problem where you are mostly concerned about `mse`, but at the same time you are interested in how the `cosine-distance` of your solution is changing over time. Then, it's best to use `metrics`. I hope that the explanation presented above made it obvious what metrics are used for, and why you could use multiple metrics in one model.
So now, let's say a few words about mechanics of their usage in `keras`. There are two ways of computing them while training: 1. **Using `metrics` defined while compilation**: this is what you directly asked. In this case, `keras` is defining a separate tensor for each metric you defined, to have it computed while training. This usually makes computation faster, but this comes at a cost of additional compilations, and the fact that metrics should be defined in terms of `keras.backend` functions. 2. **Using `keras.callback`**: It is nice that you can use [`Callbacks`](https://keras.io/callbacks/) in order to compute your metrics. As each callback has a default attribute of `model`, you could compute a variety of metrics using `model.predict` or model parameters while training. Moreover, it makes it possible to compute it, not only epoch-wise, but also batch-wise, or training-wise. This comes at a cost of slower computations, and more complicated logic - as you need to define metrics on your own. [Here](https://keras.io/metrics/) you can find a list of available metrics, as well as an example on how you could define your own.
Reference: [Keras Metrics Documentation](https://keras.io/metrics/) As given in the documentation page of `keras metrics`, a `metric` judges the performance of your model. The `metrics` argument in the `compile` method holds the list of metrics that need to be evaluated by the model during its training and testing phases. Metrics like: * `binary_accuracy` * `categorical_accuracy` * `sparse_categorical_accuracy` * `top_k_categorical_accuracy` and * `sparse_top_k_categorical_accuracy` are the available metric functions that are supplied in the **`metrics`** parameter when the model is compiled. Metric functions are customizable as well. When multiple metrics need to be evaluated, they are passed in the form of a `dictionary` or a `list`. One important resource you should refer to for diving deep into metrics can be found [here](https://machinelearningmastery.com/custom-metrics-deep-learning-keras-python/)
47,302,085
It is not yet clear for me what `metrics` are (as given in the code below). What exactly are they evaluating? Why do we need to define them in the `model`? Why we can have multiple metrics in one model? And more importantly what is the mechanics behind all this? Any scientific reference is also appreciated. ```python model.compile(loss='mean_squared_error', optimizer='sgd', metrics=['mae', 'acc']) ```
2017/11/15
[ "https://Stackoverflow.com/questions/47302085", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3705055/" ]
Reference: [Keras Metrics Documentation](https://keras.io/metrics/) As given in the documentation page of `keras metrics`, a `metric` judges the performance of your model. The `metrics` argument in the `compile` method holds the list of metrics that need to be evaluated by the model during its training and testing phases. Metrics like: * `binary_accuracy` * `categorical_accuracy` * `sparse_categorical_accuracy` * `top_k_categorical_accuracy` and * `sparse_top_k_categorical_accuracy` are the available metric functions that are supplied in the **`metrics`** parameter when the model is compiled. Metric functions are customizable as well. When multiple metrics need to be evaluated, they are passed in the form of a `dictionary` or a `list`. One important resource you should refer to for diving deep into metrics can be found [here](https://machinelearningmastery.com/custom-metrics-deep-learning-keras-python/)
Loss helps find the best solution your model can produce. A metric actually tells us how good it is. Imagine we found the regression line (the one with the minimum squared error). Is that a good enough solution? This is what the metric will answer (considering the shape and spread of the data, ideally!).
47,302,085
It is not yet clear for me what `metrics` are (as given in the code below). What exactly are they evaluating? Why do we need to define them in the `model`? Why we can have multiple metrics in one model? And more importantly what is the mechanics behind all this? Any scientific reference is also appreciated. ```python model.compile(loss='mean_squared_error', optimizer='sgd', metrics=['mae', 'acc']) ```
2017/11/15
[ "https://Stackoverflow.com/questions/47302085", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3705055/" ]
So in order to understand what `metrics` are, it's good to start by understanding what a `loss` function is. Neural networks are mostly trained using gradient methods by an iterative process of decreasing a `loss` function. A `loss` is designed to have two crucial properties - first, the smaller its value is, the better your model fits your data, and second, it should be differentiable. So, knowing this, we could fully define what a `metric` is: it's a function that, given predicted values and ground truth values from examples, provides you with a scalar measure of the "fitness" of your model to the data you have. So, as you may see, a `loss` function is a metric, but the opposite doesn't always hold. To understand these differences, let's look at the most common examples of `metrics` usage: 1. **Measure the performance of your network using non-differentiable functions:** e.g. accuracy is not differentiable (not even continuous) so you cannot directly optimize your network w.r.t. it. However, you could use it in order to choose the model with the best accuracy. 2. **Obtain values of different loss functions when your final loss is a combination of a few of them:** Let's assume that your loss has a regularization term which measures how your weights differ from `0`, and a term which measures the fitness of your model. In this case, you could use `metrics` in order to have a separate track of how the fitness of your model changes across epochs. 3. **Track a measure with respect to which you don't want to directly optimize your model:** so - let's assume that you are solving a multidimensional regression problem where you are mostly concerned about `mse`, but at the same time you are interested in how the `cosine-distance` of your solution is changing over time. Then, it's best to use `metrics`. I hope that the explanation presented above made it obvious what metrics are used for, and why you could use multiple metrics in one model.
So now, let's say a few words about mechanics of their usage in `keras`. There are two ways of computing them while training: 1. **Using `metrics` defined while compilation**: this is what you directly asked. In this case, `keras` is defining a separate tensor for each metric you defined, to have it computed while training. This usually makes computation faster, but this comes at a cost of additional compilations, and the fact that metrics should be defined in terms of `keras.backend` functions. 2. **Using `keras.callback`**: It is nice that you can use [`Callbacks`](https://keras.io/callbacks/) in order to compute your metrics. As each callback has a default attribute of `model`, you could compute a variety of metrics using `model.predict` or model parameters while training. Moreover, it makes it possible to compute it, not only epoch-wise, but also batch-wise, or training-wise. This comes at a cost of slower computations, and more complicated logic - as you need to define metrics on your own. [Here](https://keras.io/metrics/) you can find a list of available metrics, as well as an example on how you could define your own.
From an implementation point of view, losses and metrics are actually identical functions in Keras: ``` Python 3.7.2 (tags/v3.7.2:9a3ffc0492, Dec 23 2018, 23:09:28) [MSC v.1916 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import tensorflow.keras as Keras >>> print(Keras.losses.mean_squared_error == Keras.metrics.mean_squared_error) True >>> print(Keras.losses.poisson == Keras.metrics.poisson) True ```
47,302,085
It is not yet clear for me what `metrics` are (as given in the code below). What exactly are they evaluating? Why do we need to define them in the `model`? Why we can have multiple metrics in one model? And more importantly what is the mechanics behind all this? Any scientific reference is also appreciated. ```python model.compile(loss='mean_squared_error', optimizer='sgd', metrics=['mae', 'acc']) ```
2017/11/15
[ "https://Stackoverflow.com/questions/47302085", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3705055/" ]
So in order to understand what `metrics` are, it's good to start by understanding what a `loss` function is. Neural networks are mostly trained using gradient methods by an iterative process of decreasing a `loss` function. A `loss` is designed to have two crucial properties - first, the smaller its value is, the better your model fits your data, and second, it should be differentiable. So, knowing this, we could fully define what a `metric` is: it's a function that, given predicted values and ground truth values from examples, provides you with a scalar measure of the "fitness" of your model to the data you have. So, as you may see, a `loss` function is a metric, but the opposite doesn't always hold. To understand these differences, let's look at the most common examples of `metrics` usage: 1. **Measure the performance of your network using non-differentiable functions:** e.g. accuracy is not differentiable (not even continuous) so you cannot directly optimize your network w.r.t. it. However, you could use it in order to choose the model with the best accuracy. 2. **Obtain values of different loss functions when your final loss is a combination of a few of them:** Let's assume that your loss has a regularization term which measures how your weights differ from `0`, and a term which measures the fitness of your model. In this case, you could use `metrics` in order to have a separate track of how the fitness of your model changes across epochs. 3. **Track a measure with respect to which you don't want to directly optimize your model:** so - let's assume that you are solving a multidimensional regression problem where you are mostly concerned about `mse`, but at the same time you are interested in how the `cosine-distance` of your solution is changing over time. Then, it's best to use `metrics`. I hope that the explanation presented above made it obvious what metrics are used for, and why you could use multiple metrics in one model.
So now, let's say a few words about mechanics of their usage in `keras`. There are two ways of computing them while training: 1. **Using `metrics` defined while compilation**: this is what you directly asked. In this case, `keras` is defining a separate tensor for each metric you defined, to have it computed while training. This usually makes computation faster, but this comes at a cost of additional compilations, and the fact that metrics should be defined in terms of `keras.backend` functions. 2. **Using `keras.callback`**: It is nice that you can use [`Callbacks`](https://keras.io/callbacks/) in order to compute your metrics. As each callback has a default attribute of `model`, you could compute a variety of metrics using `model.predict` or model parameters while training. Moreover, it makes it possible to compute it, not only epoch-wise, but also batch-wise, or training-wise. This comes at a cost of slower computations, and more complicated logic - as you need to define metrics on your own. [Here](https://keras.io/metrics/) you can find a list of available metrics, as well as an example on how you could define your own.
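To illustrate the second mechanism without pulling in Keras itself, here is a toy stand-in for the callback hooks (the class and method names mimic `keras.callbacks.Callback` conventions but everything below is a sketch, not the real API): the training loop calls the hooks batch-wise and epoch-wise, and the epoch hook is where you would typically run `model.predict` and score a custom metric.

```python
class MetricCallback:
    """Toy stand-in for a Keras callback; 'model' is just a placeholder here."""
    def __init__(self, model):
        self.model = model
        self.log = []

    def on_batch_end(self, batch_idx):
        # batch-wise hook: fires after every batch, at some runtime cost
        self.log.append(('batch', batch_idx))

    def on_epoch_end(self, epoch_idx):
        # epoch-wise hook: the usual place to compute a metric over the full data
        self.log.append(('epoch', epoch_idx))

def train(model, n_epochs, n_batches, callback):
    for e in range(n_epochs):
        for b in range(n_batches):
            callback.on_batch_end(b)
        callback.on_epoch_end(e)

cb = MetricCallback(model={})
train({}, n_epochs=2, n_batches=3, callback=cb)
print(len(cb.log))  # 8 events: 2 epochs * (3 batch hooks + 1 epoch hook)
```

The real Keras callback receives a `logs` dict and holds a reference to the actual model; the point of the sketch is only the granularity trade-off between the two hooks.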
Loss helps find the best solution your model can produce. Metric actually tells us how good it is. Imagine we found the regression line (the one with the minimum squared error). Is that a good enough solution? This is what the metric will answer (considering the shape and spread of the data, ideally!).
47,302,085
It is not yet clear to me what `metrics` are (as given in the code below). What exactly do they evaluate? Why do we need to define them in the `model`? Why can we have multiple metrics in one model? And more importantly, what are the mechanics behind all this? Any scientific reference is also appreciated. ```python model.compile(loss='mean_squared_error', optimizer='sgd', metrics=['mae', 'acc']) ```
2017/11/15
[ "https://Stackoverflow.com/questions/47302085", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3705055/" ]
From an implementation point of view, losses and metrics are actually identical functions in Keras: ``` Python 3.7.2 (tags/v3.7.2:9a3ffc0492, Dec 23 2018, 23:09:28) [MSC v.1916 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import tensorflow.keras as Keras >>> print(Keras.losses.mean_squared_error == Keras.metrics.mean_squared_error) True >>> print(Keras.losses.poisson == Keras.metrics.poisson) True ```
Loss helps find the best solution your model can produce. Metric actually tells us how good it is. Imagine we found the regression line (the one with the minimum squared error). Is that a good enough solution? This is what the metric will answer (considering the shape and spread of the data, ideally!).
38,782,191
Dlib has a really handy, fast and efficient object detection routine, and I wanted to make a cool face tracking example similar to the example [here](https://realpython.com/blog/python/face-detection-in-python-using-a-webcam/). OpenCV, which is widely supported, has VideoCapture module that is fairly quick (a fifth of a second to snapshot compared with 1 second or more for calling up some program that wakes up the webcam and fetches a picture). I added this to the face detector Python example in Dlib. If you directly show and process the OpenCV VideoCapture output it looks odd because apparently OpenCV stores BGR instead of RGB order. After adjusting this, it works, but slowly: ``` from __future__ import division import sys import dlib from skimage import io detector = dlib.get_frontal_face_detector() win = dlib.image_window() if len( sys.argv[1:] ) == 0: from cv2 import VideoCapture from time import time cam = VideoCapture(0) #set the port of the camera as before while True: start = time() retval, image = cam.read() #return a True bolean and and the image if all go right for row in image: for px in row: #rgb expected... but the array is bgr? 
r = px[2] px[2] = px[0] px[0] = r #import matplotlib.pyplot as plt #plt.imshow(image) #plt.show() print( "readimage: " + str( time() - start ) ) start = time() dets = detector(image, 1) print "your faces: %f" % len(dets) for i, d in enumerate( dets ): print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format( i, d.left(), d.top(), d.right(), d.bottom())) print("from left: {}".format( ( (d.left() + d.right()) / 2 ) / len(image[0]) )) print("from top: {}".format( ( (d.top() + d.bottom()) / 2 ) /len(image)) ) print( "process: " + str( time() - start ) ) start = time() win.clear_overlay() win.set_image(image) win.add_overlay(dets) print( "show: " + str( time() - start ) ) #dlib.hit_enter_to_continue() for f in sys.argv[1:]: print("Processing file: {}".format(f)) img = io.imread(f) # The 1 in the second argument indicates that we should upsample the image # 1 time. This will make everything bigger and allow us to detect more # faces. dets = detector(img, 1) print("Number of faces detected: {}".format(len(dets))) for i, d in enumerate(dets): print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format( i, d.left(), d.top(), d.right(), d.bottom())) win.clear_overlay() win.set_image(img) win.add_overlay(dets) dlib.hit_enter_to_continue() # Finally, if you really want to you can ask the detector to tell you the score # for each detection. The score is bigger for more confident detections. # Also, the idx tells you which of the face sub-detectors matched. This can be # used to broadly identify faces in different orientations. 
if (len(sys.argv[1:]) > 0): img = io.imread(sys.argv[1]) dets, scores, idx = detector.run(img, 1) for i, d in enumerate(dets): print("Detection {}, score: {}, face_type:{}".format( d, scores[i], idx[i])) ``` From the output of the timings in this program, it seems processing and grabbing the picture are each taking a fifth of a second, so you would think it should show one or 2 updates per second - however, if you raise your hand it shows in the webcam view after 5 seconds or so! Is there some sort of internal cache keeping it from grabbing the latest webcam image? Could I adjust or multi-thread the webcam input process to fix the lag? This is on an Intel i5 with 16gb RAM. ***Update*** According to here, it suggests the read grabs a video *frame by frame*. This would explain it grabbing the next frame and the next frame, until it finally caught up to all the frames that had been grabbed while it was processing. I wonder if there is an option to set the framerate or set it to drop frames and just click a picture of the face in the webcam *now* on read? <http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_video_display/py_video_display.html#capture-video-from-camera>
2016/08/05
[ "https://Stackoverflow.com/questions/38782191", "https://Stackoverflow.com", "https://Stackoverflow.com/users/778234/" ]
I tried multithreading, and it was just as slow, then I multithreaded with just the `.read()` in the thread, no processing, no thread locking, and it worked quite fast - maybe 1 second or so of delay, not 3 or 5. See <http://www.pyimagesearch.com/2015/12/21/increasing-webcam-fps-with-python-and-opencv/> ``` from __future__ import division import sys from time import time, sleep import threading import dlib from skimage import io detector = dlib.get_frontal_face_detector() win = dlib.image_window() class webCamGrabber( threading.Thread ): def __init__( self ): threading.Thread.__init__( self ) #Lock for when you can read/write self.image: #self.imageLock = threading.Lock() self.image = False from cv2 import VideoCapture, cv from time import time self.cam = VideoCapture(0) #set the port of the camera as before #self.cam.set(cv.CV_CAP_PROP_FPS, 1) def run( self ): while True: start = time() #self.imageLock.acquire() retval, self.image = self.cam.read() #return a True bolean and and the image if all go right print( type( self.image) ) #import matplotlib.pyplot as plt #plt.imshow(image) #plt.show() #print( "readimage: " + str( time() - start ) ) #sleep(0.1) if len( sys.argv[1:] ) == 0: #Start webcam reader thread: camThread = webCamGrabber() camThread.start() #Setup window for results detector = dlib.get_frontal_face_detector() win = dlib.image_window() while True: #camThread.imageLock.acquire() if camThread.image is not False: print( "enter") start = time() myimage = camThread.image for row in myimage: for px in row: #rgb expected... but the array is bgr? 
r = px[2] px[2] = px[0] px[0] = r dets = detector( myimage, 0) #camThread.imageLock.release() print "your faces: %f" % len(dets) for i, d in enumerate( dets ): print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format( i, d.left(), d.top(), d.right(), d.bottom())) print("from left: {}".format( ( (d.left() + d.right()) / 2 ) / len(camThread.image[0]) )) print("from top: {}".format( ( (d.top() + d.bottom()) / 2 ) /len(camThread.image)) ) print( "process: " + str( time() - start ) ) start = time() win.clear_overlay() win.set_image(myimage) win.add_overlay(dets) print( "show: " + str( time() - start ) ) #dlib.hit_enter_to_continue() for f in sys.argv[1:]: print("Processing file: {}".format(f)) img = io.imread(f) # The 1 in the second argument indicates that we should upsample the image # 1 time. This will make everything bigger and allow us to detect more # faces. dets = detector(img, 1) print("Number of faces detected: {}".format(len(dets))) for i, d in enumerate(dets): print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format( i, d.left(), d.top(), d.right(), d.bottom())) win.clear_overlay() win.set_image(img) win.add_overlay(dets) dlib.hit_enter_to_continue() # Finally, if you really want to you can ask the detector to tell you the score # for each detection. The score is bigger for more confident detections. # Also, the idx tells you which of the face sub-detectors matched. This can be # used to broadly identify faces in different orientations. if (len(sys.argv[1:]) > 0): img = io.imread(sys.argv[1]) dets, scores, idx = detector.run(img, 1) for i, d in enumerate(dets): print("Detection {}, score: {}, face_type:{}".format( d, scores[i], idx[i])) ```
If you want to show a frame read in OpenCV, you can do it with the `cv2.imshow()` function without any need to change the color order. On the other hand, if you still want to show the picture in matplotlib, then you can't avoid reordering the channels, like this: ``` b,g,r = cv2.split(img) img = cv2.merge((r,g,b)) ``` That's the only thing I can help you with for now=)
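As an aside on the channel order: the per-pixel swap the question performs with nested loops can be written as a single pass. A plain-Python sketch of the same BGR-to-RGB swap is below (with NumPy arrays, the slice `img[:, :, ::-1]` performs the same reversal in one step):

```python
def bgr_to_rgb(image):
    # image is rows of pixels, each pixel [b, g, r]; returns new rows of [r, g, b]
    return [[[px[2], px[1], px[0]] for px in row] for row in image]

bgr = [[[255, 0, 0], [0, 0, 255]]]  # one row: pure blue, then pure red, in BGR order
print(bgr_to_rgb(bgr))  # [[[0, 0, 255], [255, 0, 0]]]
```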
38,782,191
Dlib has a really handy, fast and efficient object detection routine, and I wanted to make a cool face tracking example similar to the example [here](https://realpython.com/blog/python/face-detection-in-python-using-a-webcam/). OpenCV, which is widely supported, has VideoCapture module that is fairly quick (a fifth of a second to snapshot compared with 1 second or more for calling up some program that wakes up the webcam and fetches a picture). I added this to the face detector Python example in Dlib. If you directly show and process the OpenCV VideoCapture output it looks odd because apparently OpenCV stores BGR instead of RGB order. After adjusting this, it works, but slowly: ``` from __future__ import division import sys import dlib from skimage import io detector = dlib.get_frontal_face_detector() win = dlib.image_window() if len( sys.argv[1:] ) == 0: from cv2 import VideoCapture from time import time cam = VideoCapture(0) #set the port of the camera as before while True: start = time() retval, image = cam.read() #return a True bolean and and the image if all go right for row in image: for px in row: #rgb expected... but the array is bgr? 
r = px[2] px[2] = px[0] px[0] = r #import matplotlib.pyplot as plt #plt.imshow(image) #plt.show() print( "readimage: " + str( time() - start ) ) start = time() dets = detector(image, 1) print "your faces: %f" % len(dets) for i, d in enumerate( dets ): print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format( i, d.left(), d.top(), d.right(), d.bottom())) print("from left: {}".format( ( (d.left() + d.right()) / 2 ) / len(image[0]) )) print("from top: {}".format( ( (d.top() + d.bottom()) / 2 ) /len(image)) ) print( "process: " + str( time() - start ) ) start = time() win.clear_overlay() win.set_image(image) win.add_overlay(dets) print( "show: " + str( time() - start ) ) #dlib.hit_enter_to_continue() for f in sys.argv[1:]: print("Processing file: {}".format(f)) img = io.imread(f) # The 1 in the second argument indicates that we should upsample the image # 1 time. This will make everything bigger and allow us to detect more # faces. dets = detector(img, 1) print("Number of faces detected: {}".format(len(dets))) for i, d in enumerate(dets): print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format( i, d.left(), d.top(), d.right(), d.bottom())) win.clear_overlay() win.set_image(img) win.add_overlay(dets) dlib.hit_enter_to_continue() # Finally, if you really want to you can ask the detector to tell you the score # for each detection. The score is bigger for more confident detections. # Also, the idx tells you which of the face sub-detectors matched. This can be # used to broadly identify faces in different orientations. 
if (len(sys.argv[1:]) > 0): img = io.imread(sys.argv[1]) dets, scores, idx = detector.run(img, 1) for i, d in enumerate(dets): print("Detection {}, score: {}, face_type:{}".format( d, scores[i], idx[i])) ``` From the output of the timings in this program, it seems processing and grabbing the picture are each taking a fifth of a second, so you would think it should show one or 2 updates per second - however, if you raise your hand it shows in the webcam view after 5 seconds or so! Is there some sort of internal cache keeping it from grabbing the latest webcam image? Could I adjust or multi-thread the webcam input process to fix the lag? This is on an Intel i5 with 16gb RAM. ***Update*** According to here, it suggests the read grabs a video *frame by frame*. This would explain it grabbing the next frame and the next frame, until it finally caught up to all the frames that had been grabbed while it was processing. I wonder if there is an option to set the framerate or set it to drop frames and just click a picture of the face in the webcam *now* on read? <http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_video_display/py_video_display.html#capture-video-from-camera>
2016/08/05
[ "https://Stackoverflow.com/questions/38782191", "https://Stackoverflow.com", "https://Stackoverflow.com/users/778234/" ]
I feel your pain. I actually recently worked with that webcam script (multiple iterations; substantially edited). I got it to work really well, I think. So that you can see what I did, I created a GitHub Gist with the details (code; HTML readme file; sample output): <https://gist.github.com/victoriastuart/8092a3dd7e97ab57ede7614251bf5cbd>
If you want to show a frame read in OpenCV, you can do it with the `cv2.imshow()` function without any need to change the color order. On the other hand, if you still want to show the picture in matplotlib, then you can't avoid reordering the channels, like this: ``` b,g,r = cv2.split(img) img = cv2.merge((r,g,b)) ``` That's the only thing I can help you with for now=)
38,782,191
Dlib has a really handy, fast and efficient object detection routine, and I wanted to make a cool face tracking example similar to the example [here](https://realpython.com/blog/python/face-detection-in-python-using-a-webcam/). OpenCV, which is widely supported, has VideoCapture module that is fairly quick (a fifth of a second to snapshot compared with 1 second or more for calling up some program that wakes up the webcam and fetches a picture). I added this to the face detector Python example in Dlib. If you directly show and process the OpenCV VideoCapture output it looks odd because apparently OpenCV stores BGR instead of RGB order. After adjusting this, it works, but slowly: ``` from __future__ import division import sys import dlib from skimage import io detector = dlib.get_frontal_face_detector() win = dlib.image_window() if len( sys.argv[1:] ) == 0: from cv2 import VideoCapture from time import time cam = VideoCapture(0) #set the port of the camera as before while True: start = time() retval, image = cam.read() #return a True bolean and and the image if all go right for row in image: for px in row: #rgb expected... but the array is bgr? 
r = px[2] px[2] = px[0] px[0] = r #import matplotlib.pyplot as plt #plt.imshow(image) #plt.show() print( "readimage: " + str( time() - start ) ) start = time() dets = detector(image, 1) print "your faces: %f" % len(dets) for i, d in enumerate( dets ): print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format( i, d.left(), d.top(), d.right(), d.bottom())) print("from left: {}".format( ( (d.left() + d.right()) / 2 ) / len(image[0]) )) print("from top: {}".format( ( (d.top() + d.bottom()) / 2 ) /len(image)) ) print( "process: " + str( time() - start ) ) start = time() win.clear_overlay() win.set_image(image) win.add_overlay(dets) print( "show: " + str( time() - start ) ) #dlib.hit_enter_to_continue() for f in sys.argv[1:]: print("Processing file: {}".format(f)) img = io.imread(f) # The 1 in the second argument indicates that we should upsample the image # 1 time. This will make everything bigger and allow us to detect more # faces. dets = detector(img, 1) print("Number of faces detected: {}".format(len(dets))) for i, d in enumerate(dets): print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format( i, d.left(), d.top(), d.right(), d.bottom())) win.clear_overlay() win.set_image(img) win.add_overlay(dets) dlib.hit_enter_to_continue() # Finally, if you really want to you can ask the detector to tell you the score # for each detection. The score is bigger for more confident detections. # Also, the idx tells you which of the face sub-detectors matched. This can be # used to broadly identify faces in different orientations. 
if (len(sys.argv[1:]) > 0): img = io.imread(sys.argv[1]) dets, scores, idx = detector.run(img, 1) for i, d in enumerate(dets): print("Detection {}, score: {}, face_type:{}".format( d, scores[i], idx[i])) ``` From the output of the timings in this program, it seems processing and grabbing the picture are each taking a fifth of a second, so you would think it should show one or 2 updates per second - however, if you raise your hand it shows in the webcam view after 5 seconds or so! Is there some sort of internal cache keeping it from grabbing the latest webcam image? Could I adjust or multi-thread the webcam input process to fix the lag? This is on an Intel i5 with 16gb RAM. ***Update*** According to here, it suggests the read grabs a video *frame by frame*. This would explain it grabbing the next frame and the next frame, until it finally caught up to all the frames that had been grabbed while it was processing. I wonder if there is an option to set the framerate or set it to drop frames and just click a picture of the face in the webcam *now* on read? <http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_video_display/py_video_display.html#capture-video-from-camera>
2016/08/05
[ "https://Stackoverflow.com/questions/38782191", "https://Stackoverflow.com", "https://Stackoverflow.com/users/778234/" ]
Maybe the problem is that a threshold is set. As described [here](https://github.com/davisking/dlib/issues/547) ``` dots = detector(frame, 1) ``` should be changed to ``` dots = detector(frame) ``` to avoid the threshold. This works for me, but at the same time there is a problem that frames are processed too fast.
If you want to show a frame read in OpenCV, you can do it with the `cv2.imshow()` function without any need to change the color order. On the other hand, if you still want to show the picture in matplotlib, then you can't avoid reordering the channels, like this: ``` b,g,r = cv2.split(img) img = cv2.merge((r,g,b)) ``` That's the only thing I can help you with for now=)
38,782,191
Dlib has a really handy, fast and efficient object detection routine, and I wanted to make a cool face tracking example similar to the example [here](https://realpython.com/blog/python/face-detection-in-python-using-a-webcam/). OpenCV, which is widely supported, has VideoCapture module that is fairly quick (a fifth of a second to snapshot compared with 1 second or more for calling up some program that wakes up the webcam and fetches a picture). I added this to the face detector Python example in Dlib. If you directly show and process the OpenCV VideoCapture output it looks odd because apparently OpenCV stores BGR instead of RGB order. After adjusting this, it works, but slowly: ``` from __future__ import division import sys import dlib from skimage import io detector = dlib.get_frontal_face_detector() win = dlib.image_window() if len( sys.argv[1:] ) == 0: from cv2 import VideoCapture from time import time cam = VideoCapture(0) #set the port of the camera as before while True: start = time() retval, image = cam.read() #return a True bolean and and the image if all go right for row in image: for px in row: #rgb expected... but the array is bgr? 
r = px[2] px[2] = px[0] px[0] = r #import matplotlib.pyplot as plt #plt.imshow(image) #plt.show() print( "readimage: " + str( time() - start ) ) start = time() dets = detector(image, 1) print "your faces: %f" % len(dets) for i, d in enumerate( dets ): print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format( i, d.left(), d.top(), d.right(), d.bottom())) print("from left: {}".format( ( (d.left() + d.right()) / 2 ) / len(image[0]) )) print("from top: {}".format( ( (d.top() + d.bottom()) / 2 ) /len(image)) ) print( "process: " + str( time() - start ) ) start = time() win.clear_overlay() win.set_image(image) win.add_overlay(dets) print( "show: " + str( time() - start ) ) #dlib.hit_enter_to_continue() for f in sys.argv[1:]: print("Processing file: {}".format(f)) img = io.imread(f) # The 1 in the second argument indicates that we should upsample the image # 1 time. This will make everything bigger and allow us to detect more # faces. dets = detector(img, 1) print("Number of faces detected: {}".format(len(dets))) for i, d in enumerate(dets): print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format( i, d.left(), d.top(), d.right(), d.bottom())) win.clear_overlay() win.set_image(img) win.add_overlay(dets) dlib.hit_enter_to_continue() # Finally, if you really want to you can ask the detector to tell you the score # for each detection. The score is bigger for more confident detections. # Also, the idx tells you which of the face sub-detectors matched. This can be # used to broadly identify faces in different orientations. 
if (len(sys.argv[1:]) > 0): img = io.imread(sys.argv[1]) dets, scores, idx = detector.run(img, 1) for i, d in enumerate(dets): print("Detection {}, score: {}, face_type:{}".format( d, scores[i], idx[i])) ``` From the output of the timings in this program, it seems processing and grabbing the picture are each taking a fifth of a second, so you would think it should show one or 2 updates per second - however, if you raise your hand it shows in the webcam view after 5 seconds or so! Is there some sort of internal cache keeping it from grabbing the latest webcam image? Could I adjust or multi-thread the webcam input process to fix the lag? This is on an Intel i5 with 16gb RAM. ***Update*** According to here, it suggests the read grabs a video *frame by frame*. This would explain it grabbing the next frame and the next frame, until it finally caught up to all the frames that had been grabbed while it was processing. I wonder if there is an option to set the framerate or set it to drop frames and just click a picture of the face in the webcam *now* on read? <http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_video_display/py_video_display.html#capture-video-from-camera>
2016/08/05
[ "https://Stackoverflow.com/questions/38782191", "https://Stackoverflow.com", "https://Stackoverflow.com/users/778234/" ]
I feel your pain. I actually recently worked with that webcam script (multiple iterations; substantially edited). I got it to work really well, I think. So that you can see what I did, I created a GitHub Gist with the details (code; HTML readme file; sample output): <https://gist.github.com/victoriastuart/8092a3dd7e97ab57ede7614251bf5cbd>
I tried multithreading, and it was just as slow, then I multithreaded with just the `.read()` in the thread, no processing, no thread locking, and it worked quite fast - maybe 1 second or so of delay, not 3 or 5. See <http://www.pyimagesearch.com/2015/12/21/increasing-webcam-fps-with-python-and-opencv/> ``` from __future__ import division import sys from time import time, sleep import threading import dlib from skimage import io detector = dlib.get_frontal_face_detector() win = dlib.image_window() class webCamGrabber( threading.Thread ): def __init__( self ): threading.Thread.__init__( self ) #Lock for when you can read/write self.image: #self.imageLock = threading.Lock() self.image = False from cv2 import VideoCapture, cv from time import time self.cam = VideoCapture(0) #set the port of the camera as before #self.cam.set(cv.CV_CAP_PROP_FPS, 1) def run( self ): while True: start = time() #self.imageLock.acquire() retval, self.image = self.cam.read() #return a True bolean and and the image if all go right print( type( self.image) ) #import matplotlib.pyplot as plt #plt.imshow(image) #plt.show() #print( "readimage: " + str( time() - start ) ) #sleep(0.1) if len( sys.argv[1:] ) == 0: #Start webcam reader thread: camThread = webCamGrabber() camThread.start() #Setup window for results detector = dlib.get_frontal_face_detector() win = dlib.image_window() while True: #camThread.imageLock.acquire() if camThread.image is not False: print( "enter") start = time() myimage = camThread.image for row in myimage: for px in row: #rgb expected... but the array is bgr? 
r = px[2] px[2] = px[0] px[0] = r dets = detector( myimage, 0) #camThread.imageLock.release() print "your faces: %f" % len(dets) for i, d in enumerate( dets ): print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format( i, d.left(), d.top(), d.right(), d.bottom())) print("from left: {}".format( ( (d.left() + d.right()) / 2 ) / len(camThread.image[0]) )) print("from top: {}".format( ( (d.top() + d.bottom()) / 2 ) /len(camThread.image)) ) print( "process: " + str( time() - start ) ) start = time() win.clear_overlay() win.set_image(myimage) win.add_overlay(dets) print( "show: " + str( time() - start ) ) #dlib.hit_enter_to_continue() for f in sys.argv[1:]: print("Processing file: {}".format(f)) img = io.imread(f) # The 1 in the second argument indicates that we should upsample the image # 1 time. This will make everything bigger and allow us to detect more # faces. dets = detector(img, 1) print("Number of faces detected: {}".format(len(dets))) for i, d in enumerate(dets): print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format( i, d.left(), d.top(), d.right(), d.bottom())) win.clear_overlay() win.set_image(img) win.add_overlay(dets) dlib.hit_enter_to_continue() # Finally, if you really want to you can ask the detector to tell you the score # for each detection. The score is bigger for more confident detections. # Also, the idx tells you which of the face sub-detectors matched. This can be # used to broadly identify faces in different orientations. if (len(sys.argv[1:]) > 0): img = io.imread(sys.argv[1]) dets, scores, idx = detector.run(img, 1) for i, d in enumerate(dets): print("Detection {}, score: {}, face_type:{}".format( d, scores[i], idx[i])) ```
38,782,191
Dlib has a really handy, fast and efficient object detection routine, and I wanted to make a cool face tracking example similar to the example [here](https://realpython.com/blog/python/face-detection-in-python-using-a-webcam/). OpenCV, which is widely supported, has VideoCapture module that is fairly quick (a fifth of a second to snapshot compared with 1 second or more for calling up some program that wakes up the webcam and fetches a picture). I added this to the face detector Python example in Dlib. If you directly show and process the OpenCV VideoCapture output it looks odd because apparently OpenCV stores BGR instead of RGB order. After adjusting this, it works, but slowly: ``` from __future__ import division import sys import dlib from skimage import io detector = dlib.get_frontal_face_detector() win = dlib.image_window() if len( sys.argv[1:] ) == 0: from cv2 import VideoCapture from time import time cam = VideoCapture(0) #set the port of the camera as before while True: start = time() retval, image = cam.read() #return a True bolean and and the image if all go right for row in image: for px in row: #rgb expected... but the array is bgr? 
r = px[2] px[2] = px[0] px[0] = r #import matplotlib.pyplot as plt #plt.imshow(image) #plt.show() print( "readimage: " + str( time() - start ) ) start = time() dets = detector(image, 1) print "your faces: %f" % len(dets) for i, d in enumerate( dets ): print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format( i, d.left(), d.top(), d.right(), d.bottom())) print("from left: {}".format( ( (d.left() + d.right()) / 2 ) / len(image[0]) )) print("from top: {}".format( ( (d.top() + d.bottom()) / 2 ) /len(image)) ) print( "process: " + str( time() - start ) ) start = time() win.clear_overlay() win.set_image(image) win.add_overlay(dets) print( "show: " + str( time() - start ) ) #dlib.hit_enter_to_continue() for f in sys.argv[1:]: print("Processing file: {}".format(f)) img = io.imread(f) # The 1 in the second argument indicates that we should upsample the image # 1 time. This will make everything bigger and allow us to detect more # faces. dets = detector(img, 1) print("Number of faces detected: {}".format(len(dets))) for i, d in enumerate(dets): print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format( i, d.left(), d.top(), d.right(), d.bottom())) win.clear_overlay() win.set_image(img) win.add_overlay(dets) dlib.hit_enter_to_continue() # Finally, if you really want to you can ask the detector to tell you the score # for each detection. The score is bigger for more confident detections. # Also, the idx tells you which of the face sub-detectors matched. This can be # used to broadly identify faces in different orientations. 
if (len(sys.argv[1:]) > 0): img = io.imread(sys.argv[1]) dets, scores, idx = detector.run(img, 1) for i, d in enumerate(dets): print("Detection {}, score: {}, face_type:{}".format( d, scores[i], idx[i])) ``` From the output of the timings in this program, it seems processing and grabbing the picture are each taking a fifth of a second, so you would think it should show one or 2 updates per second - however, if you raise your hand it shows in the webcam view after 5 seconds or so! Is there some sort of internal cache keeping it from grabbing the latest webcam image? Could I adjust or multi-thread the webcam input process to fix the lag? This is on an Intel i5 with 16gb RAM. ***Update*** According to here, it suggests the read grabs a video *frame by frame*. This would explain it grabbing the next frame and the next frame, until it finally caught up to all the frames that had been grabbed while it was processing. I wonder if there is an option to set the framerate or set it to drop frames and just click a picture of the face in the webcam *now* on read? <http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_video_display/py_video_display.html#capture-video-from-camera>
2016/08/05
[ "https://Stackoverflow.com/questions/38782191", "https://Stackoverflow.com", "https://Stackoverflow.com/users/778234/" ]
I feel your pain. I actually recently worked with that webcam script (multiple iterations; substantially edited). I got it to work really well, I think. So that you can see what I did, I created a GitHub Gist with the details (code; HTML readme file; sample output): <https://gist.github.com/victoriastuart/8092a3dd7e97ab57ede7614251bf5cbd>
Maybe the problem is that a threshold is set. As described [here](https://github.com/davisking/dlib/issues/547) ``` dots = detector(frame, 1) ``` should be changed to ``` dots = detector(frame) ``` to avoid a threshold. This works for me, but at the same time, there is a problem that frames are processed too fast.
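One common fix for the lag described in the question is to keep grabbing frames on a background thread and have the detector always consume only the most recent frame, so stale frames never queue up. A minimal sketch of that pattern is below; it uses a hypothetical `read_fn` stand-in so it runs anywhere, and with OpenCV you would pass a small wrapper around `vc.read()` (e.g. `lambda: vc.read()[1]`) instead:

```python
import threading
import time

class LatestFrameReader:
    """Continuously reads frames on a background thread, keeping only the newest."""
    def __init__(self, read_fn):
        self._read_fn = read_fn           # e.g. lambda: vc.read()[1] with cv2
        self._lock = threading.Lock()
        self._frame = None
        self._running = True
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def _loop(self):
        while self._running:
            frame = self._read_fn()       # grab and immediately overwrite
            with self._lock:
                self._frame = frame

    def latest(self):
        """Return the most recently grabbed frame (no backlog of old frames)."""
        with self._lock:
            return self._frame

    def stop(self):
        self._running = False
        self._thread.join()

# Demo with a fake frame source that returns an increasing counter.
counter = {'n': 0}
def fake_read():
    counter['n'] += 1
    time.sleep(0.001)
    return counter['n']

reader = LatestFrameReader(fake_read)
time.sleep(0.05)                          # simulate a slow detector pass
frame = reader.latest()                   # always close to the newest frame
reader.stop()
print(frame)
```

The slow `detector(image, 1)` call can then run at its own pace while `latest()` always hands it a fresh frame instead of one buffered seconds ago.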
48,738,061
``` M = eval(input("Input the first number ")) N = eval(input("Input the second number(greater than M) ")) sum = 0 while M <= N: if M % 2 == 1: sum = sum + M M = M + 1 print(sum) ``` This is my python code, every time I run the program, it prints the number twice. (1 1 4 4 9 9 etc.) Just confused on why this happening - in intro to computer programming so any help is appreciated (dumbed down help)
2018/02/12
[ "https://Stackoverflow.com/questions/48738061", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You may use ``` import re text = 'MIKE an entry for mike WILL and here is wills text DAVID and this belongs to david' subs = ['MIKE','WILL','TOM','DAVID'] res = re.findall(r'({0})\s*(.*?)(?=\s*(?:{0}|$))'.format("|".join(subs)), text) print(res) # => [('MIKE', 'an entry for mike'), ('WILL', 'and here is wills text'), ('DAVID', 'and this belongs to david')] ``` See the [Python demo](https://ideone.com/T4JO2i). The pattern that is built dynamically will look like [`(MIKE|WILL|TOM|DAVID)\s*(.*?)(?=\s*(?:MIKE|WILL|TOM|DAVID|$))`](https://regex101.com/r/0cNztQ/1) in this case. **Details** * `(MIKE|WILL|TOM|DAVID)` - Group 1 matching one of the alternatives substrings * `\s*` - 0+ whitespaces * `(.*?)` - Group 2 capturing any 0+ chars other than line break chars (use `re.S` flag to match any chars), as few as possible, up to the first... * `(?=\s*(?:MIKE|WILL|TOM|DAVID|$))` - 0+ whitespaces followed with one of the substrings or end of string (`$`). These texts are not consumed, so, the regex engine still can get consequent matches.
You can also use the following regex to achieve your goal: ``` (MIKE.*)(?= WILL)|(WILL.*)(?= DAVID)|(DAVID.*) ``` It uses Positive lookahead to get the intermediate strings. (<http://www.rexegg.com/regex-quickstart.html>) **TESTED:** <https://regex101.com/r/ZSJJVG/1>
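Returning to the loop question above: the doubled output (1 1 4 4 9 9 ...) happens because `print(sum)` is indented inside the `while` body, so it runs once per iteration, including iterations where the sum did not change. A minimal sketch with the print moved after the loop (fixed inputs replace `input()` so it runs standalone):

```python
def sum_of_odds(m, n):
    """Sum the odd numbers from m through n inclusive."""
    total = 0                 # avoid shadowing the built-in sum()
    while m <= n:
        if m % 2 == 1:
            total += m
        m += 1
    return total              # report once, after the loop finishes

print(sum_of_odds(1, 9))      # -> 25 (1 + 3 + 5 + 7 + 9)
```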
24,687,665
I would eventually like to pass data from python data structures to Javascript elements that will render it in a Dygraphs graph within an iPython Notebook. I am new to using notebooks, especially the javascript/nobebook interaction. I have the latest Dygraphs library saved locally on my machine. At the very least, I would like to be able to render a sample Dygraphs plot in the notebook using that library. See the notebook below. I am trying to execute the simple Dygraphs example code, using the library provided here: <http://dygraphs.com/1.0.1/dygraph-combined.js> However, I cannot seem to get anything to render. Is this the proper way to embed/call libraries and then run javascript from within a notebook? ![notebook](https://i.stack.imgur.com/qttUE.png) Eventually I would like to generate JSON from Pandas DataFrames and use that data as Dygraphs input.
2014/07/10
[ "https://Stackoverflow.com/questions/24687665", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1176806/" ]
The trick is to pass the DataFrame into JavaScript and convert it into a [format](http://dygraphs.com/data.html) that dygraphs can handle. Here's the code I used ([notebook here](https://gist.github.com/danvk/e81557c88d61e34dbd75)) ``` html = """ <script src="http://dygraphs.com/dygraph-combined.js"></script> <div id="dygraph" style="width: 600px; height: 400px;"></div> <script type="text/javascript"> function convertToDataTable(d) { var columns = _.keys(d); columns.splice(columns.indexOf("x"), 1); var out = []; for (var k in d['x']) { var row = [d['x'][k]]; columns.forEach(function(col) { row.push(d[col][k]); }); out.push(row); } return {data:out, labels:['x'].concat(columns)}; } function handle_output(out) { var json = out.content.data['text/plain']; var data = JSON.parse(eval(json)); var tabular = convertToDataTable(data); g = new Dygraph(document.getElementById("dygraph"), tabular.data, { legend: 'always', labels: tabular.labels }) } var kernel = IPython.notebook.kernel; var callbacks = { 'iopub' : {'output' : handle_output}}; kernel.execute("dfj", callbacks, {silent:false}); </script> """ HTML(html) ``` This is what it looks like: ![chart rendered using dygraphs in an IPython notebook](https://i.stack.imgur.com/sDkDF.png) The chart is fully interactive: you can hover, pan, zoom and otherwise interact in the same ways that you would with a typical dygraph.
danvk's solution is cleaner and faster than this, but I also was able to get this to work by building a Dygraph String from a DataFrame. It seems limited to about 15K points, but the benefit is that once created, the page can be saved as a static html page and the Dygraphs plot stays in place. Makes for a nice portable sharing mechanism for those without notebooks set up. CELL: ``` %%html <script type="text/javascript" src="http://dygraphs.com/1.0.1/dygraph-combined.js"></script> ``` CELL: ``` import string import numpy as np import pandas as pd from IPython.display import display,Javascript,HTML,display_html,display_javascript,JSON df = pd.DataFrame(columns=['x','y','z']) x=np.arange(1200) df.y=np.sin(x*2*np.pi/10) + np.random.rand(len(x))*5 df.z=np.cos(x) df.x=df.y+df.z/3 + (df.y-df.z)**1.25 s=df.to_csv(index_label="Index",float_format="%5f",index=True) ss=string.join(['"' + x +'\\n"' for x in s.split('\n')],'+\n') dygraph_str=""" <html> <head> <script type="text/javascript" src="http://dygraphs.com/1.0.1/dygraph-combined.js"> </script> </head> <body> <div id="graphdiv5" style="margin: 0 auto; width:auto;"></div> <script type="text/javascript"> g = new Dygraph( document.getElementById("graphdiv5"), // containing div """ + ss[:-6] + """, { legend: 'always', title: 'FOO vs. BAR', rollPeriod: 1, showRoller: true, errorBars: false, ylabel: 'Temperature (Stuff)' } ); </script> """ display(HTML(dygraph_str)) ``` The notebook looks like this:![enter image description here](https://i.stack.imgur.com/EqWXL.png)
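The `convertToDataTable` JavaScript helper in the accepted answer has a straightforward pure-Python counterpart, which can be handy for preparing the row-major data dygraphs expects before it ever reaches the browser. A sketch (the column names here are illustrative; any dict of equal-length columns with an `'x'` key works):

```python
def to_dygraph_rows(columns):
    """Convert a dict of column-name -> list into dygraphs' row-major form.

    Returns (data, labels) where each row starts with the x value,
    mirroring the convertToDataTable() helper from the answer above.
    """
    labels = [k for k in columns if k != 'x']
    data = []
    for i, x in enumerate(columns['x']):
        data.append([x] + [columns[k][i] for k in labels])
    return data, ['x'] + labels

data, labels = to_dygraph_rows({'x': [0, 1, 2], 'y': [10, 11, 12], 'z': [5, 6, 7]})
print(labels)   # -> ['x', 'y', 'z']
print(data[0])  # -> [0, 10, 5]
```

With pandas, `df.to_dict('list')` produces exactly this dict-of-columns shape, so the same function applies.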
65,804,384
So im trying to make a decimal to binary convertor in python without using the bin and this is incomplete, but for now im trying to get 'a' as a list with all the factors that led to the conversion for example if the decimal inputed = 75, then 'a' should be = [64, 8, 2, 1] Can someone tell me how to correct my code but i seem to be running into a error which ive given below the code ``` q3 = float(input("Enter a number: ")) def raise_to_power(base, power): result = 1 for index in range(power): result = result * base return result def decimal_to_binary(decimal): num1 = 2 count = 1 x = 0 a = list([]) while num1 <= decimal: if num1 < decimal: num1 *= 2 count += 1 x += 1 decimal = decimal - num1 a[x] = raise_to_power(2, count) num1 = 2 count = 0 return a decimal_to_binary(q3) ``` The error: ``` Traceback (most recent call last): File "/Users/veresh/Library/Mobile Documents/com~apple~CloudDocs/Binary_Num Convertor.gyp", line 34, in <module> decimal_to_binary(q3) File "/Users/veresh/Library/Mobile Documents/com~apple~CloudDocs/Binary_Num Convertor.gyp", line 28, in decimal_to_binary a[x] = raise_to_power(2, count) IndexError: list assignment index out of range ```
2021/01/20
[ "https://Stackoverflow.com/questions/65804384", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15031531/" ]
You can't assign to positions in a list that don't exist. Instead of ``` a[x] = raise_to_power(2, count) ``` add the result to the list using [list.append](https://docs.python.org/3/library/stdtypes.html#mutable-sequence-types) ``` a.append(raise_to_power(2, count)) ```
I think you can do this with fewer steps. ``` num = int(input("Enter a number: ")) b = [] while num > 0: b.append(num%2) num = num//2 f = [(2*a)**i for i,a in enumerate(b) if a != 0] f = f[::-1] print (f) ``` This will give you the following result: ``` Enter a number: 75 [64, 8, 2, 1] Enter a number: 36 [32, 4] Enter a number: 29 [16, 8, 4, 1] ```
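The factor list the question asks for ([64, 8, 2, 1] for 75) can also be produced directly with bit operations, without first building the remainder list; a sketch:

```python
def power_of_two_terms(n):
    """Return the powers of two that sum to n, largest first."""
    terms = []
    bit = 1 << n.bit_length()     # start just above the highest set bit
    while bit:
        if n & bit:               # this power of two is part of n
            terms.append(bit)
        bit >>= 1
    return terms

print(power_of_two_terms(75))     # -> [64, 8, 2, 1]
```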
65,804,384
So im trying to make a decimal to binary convertor in python without using the bin and this is incomplete, but for now im trying to get 'a' as a list with all the factors that led to the conversion for example if the decimal inputed = 75, then 'a' should be = [64, 8, 2, 1] Can someone tell me how to correct my code but i seem to be running into a error which ive given below the code ``` q3 = float(input("Enter a number: ")) def raise_to_power(base, power): result = 1 for index in range(power): result = result * base return result def decimal_to_binary(decimal): num1 = 2 count = 1 x = 0 a = list([]) while num1 <= decimal: if num1 < decimal: num1 *= 2 count += 1 x += 1 decimal = decimal - num1 a[x] = raise_to_power(2, count) num1 = 2 count = 0 return a decimal_to_binary(q3) ``` The error: ``` Traceback (most recent call last): File "/Users/veresh/Library/Mobile Documents/com~apple~CloudDocs/Binary_Num Convertor.gyp", line 34, in <module> decimal_to_binary(q3) File "/Users/veresh/Library/Mobile Documents/com~apple~CloudDocs/Binary_Num Convertor.gyp", line 28, in decimal_to_binary a[x] = raise_to_power(2, count) IndexError: list assignment index out of range ```
2021/01/20
[ "https://Stackoverflow.com/questions/65804384", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15031531/" ]
You can't assign to positions in a list that don't exist. Instead of ``` a[x] = raise_to_power(2, count) ``` add the result to the list using [list.append](https://docs.python.org/3/library/stdtypes.html#mutable-sequence-types) ``` a.append(raise_to_power(2, count)) ```
You could do it recursively: ``` def decimal_to_binary(inp): power = 1 tmp = 1 while 2**power <= inp: tmp = 2**power power += 1 if inp-tmp != 0: return [tmp] + decimal_to_binary(inp-tmp) else: return [tmp] print(decimal_to_binary(75)) [64, 8, 2, 1] ```
34,321,618
First things first I am not a professional with Regular Expressions and have been depending on [this cookbook](https://www.safaribooksonline.com/library/view/regular-expressions-cookbook/9781449327453/ch06s11.html), [this tool](http://pythex.org/) and [this other tool](http://pythex.org/) Now when I try run it it python 2.7.7 64bit win 8 it simply does nothing for this sample text > > Two weeks ago I went shooing at target and spent USD1,010.53 and earned 300 points. When I checked my balance after I only had USD 1912.04. > > > Note that the USD is joined to the amount (USD1,010.53) and there is a comma for every thousand in the first case but second case it is not joined to the amount and there is no comma for the thousandth place (USD 1912.04) and in some case they are some values which are integers but not currencies and would still need to be parsed.(300 points). Now I managed to get my hands on this > > [0-9]{1,3}(,[0-9]{3})\*(.[0-9]+)?\b|.[0-9]+\b > > > Now i have two problems: 1. Python doesn't return any value for the above regex and sample string and yet the tools do. 2. the regex will only return if every 1000th place has a comma i.e. the USD 1912.04 ends up returning 912.04 on the online tools not too sure how to have it take both cases of comma and non comma. `regex = re.compile('[0-9]{1,3}(,[0-9]{3})*(\.[0-9]+)?\b|\.[0-9]+\b') mynumerics = re.findall(regex,'The final bill is USD1,010.53 and you will earn 300 points. Thank you for shopping at Target')` What I would expect is three items: ``` =>['1,010.53', '300', '1912.04'] ``` or better yet ``` =>[1010.53, 300, 1912.04] ``` Instead all i get is an empty list. I could probably try download a different version of python but i know most productions we deploy on use 2.7.X. So i hope its not a version problem.
2015/12/16
[ "https://Stackoverflow.com/questions/34321618", "https://Stackoverflow.com", "https://Stackoverflow.com/users/944900/" ]
Two main problems: * `re.findall` will return a list of tuples if your pattern has any capturing groups in it. Since your pattern is using groups in a very odd way, you will end up seeing some weird results from this. Make use of non-capturing groups by using `(?:` instead of just plain `(` parentheses. * Because of the use of `\b`, you should specify your pattern string as a raw string with an `r'string'`. In reality, all of your regexes should use a raw string to ensure that nothing is being parsed weirdly. With those in mind, this works perfectly fine: ``` >>> regex = re.compile(r'[0-9]{1,3}(?:,[0-9]{3})*(?:\.[0-9]+)?\b|\.[0-9]+\b') >>> mynumerics = re.findall(regex,'The final bill is USD1,010.53 and you will earn 300 points. What about .25 and 123,456.12?') >>> mynumerics ['1,010.53', '300', '.25', '123,456.12'] ``` Note some of the particular differences between your pattern and mine. ``` r'[0-9]{1,3}(?:,[0-9]{3})*(?:\.[0-9]+)?\b|\.[0-9]+\b' 1 2 2 '[0-9]{1,3}(,[0-9]{3})*(\.[0-9]+)?\b|\.[0-9]+\b' 1 - raw string 2 - non-capturing groups instead of capturing groups ``` I understand that some of this may go way over your head so please comment if you need clarification and I can edit as needed. I would suggest looking into some other regex references and tips, I personally love [this site](http://www.rexegg.com/regex-quickstart.html) and use it almost religiously for any regex needs. EDIT - matching decimals: ========================= As Mark Dickinson cleverly pointed out, the `|\.[0-9]+` in the original regex is for matching things like `.24` (simple decimals). I added that part back in as well as added to the matching string to show the functionality. Important Comment from ShadowRanger =================================== **Side-note**: This pattern, as written, will see 4400 and return 400, or a123 and return 123. 
This is a problem (not @RNar's fault; the original pattern had the same issue) for two reasons: if 4400 should be ignored, then you shouldn't get pieces of it (just adding \b to the front causes other issues, so it's harder than that), and [English digit grouping rules allow the omission of the comma when the value is four digits to the left of the decimal, between 1000 and 9999](https://en.wikipedia.org/wiki/Decimal_mark#Exceptions_to_digit_grouping), so you won't match those values as written.
Can you try this regex? ``` ((?:\d+,?)+\.?\d+) ``` <https://regex101.com/r/qN0gV9/1>
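To get the numeric values the question ultimately wants (e.g. 1010.53 rather than the string '1,010.53'), the matches from the corrected pattern above just need their commas stripped before conversion to float. A sketch reusing the accepted answer's own test string (note the side-note caveat still applies: four-digit values written without a comma, like 1912.04, need extra handling):

```python
import re

# Non-capturing-group pattern from the accepted answer above.
pattern = re.compile(r'[0-9]{1,3}(?:,[0-9]{3})*(?:\.[0-9]+)?\b|\.[0-9]+\b')

def extract_amounts(text):
    """Find number-like substrings and convert them to floats."""
    return [float(m.replace(',', '')) for m in pattern.findall(text)]

text = ('The final bill is USD1,010.53 and you will earn 300 points. '
        'What about .25 and 123,456.12?')
print(extract_amounts(text))   # -> [1010.53, 300.0, 0.25, 123456.12]
```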
44,992,717
I recently began self-learning python, and have been using this language for an online course in algorithms. For some reason, many of my codes I created for this course are very slow (relatively to C/C++ Matlab codes I have created in the past), and I'm starting to worry that I am not using python properly. Here is a simple python and matlab code to compare their speed. MATLAB ``` for i = 1:100000000 a = 1 + 1 end ``` Python ``` for i in list(range(0, 100000000)): a=1 + 1 ``` The matlab code takes about 0.3 second, and the python code takes about 7 seconds. Is this normal? My python codes for much complex problems are very slow. For example, as a HW assignment, I'm running depth first search on a graph with about 900000 nodes, and this is taking forever. Thank you.
2017/07/09
[ "https://Stackoverflow.com/questions/44992717", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8277919/" ]
Try using `xrange` instead of `range`. The difference between them is that `xrange` generates **the values as you use them** instead of `range`, which tries to generate a static list at runtime.
Unfortunately, Python's amazing flexibility and ease come at the cost of being slow. For iteration counts this large, I also suggest the itertools module, since its iterators are fast, C-implemented objects. xrange is a good solution; however, if you want to iterate over dictionaries and such, it's better to use itertools, since it lets you iterate over any type of sequence object.
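The practical difference behind the xrange advice is easy to demonstrate: a materialized list allocates every element up front, while the lazy form (`xrange` in Python 2, plain `range` in Python 3) stays constant-size no matter how many values it will yield. A Python 3 sketch:

```python
import sys

lazy = range(10**7)             # values generated on demand
eager = list(range(10**4))      # every element allocated up front

print(sys.getsizeof(lazy))      # a few dozen bytes, independent of length
print(sys.getsizeof(eager))     # tens of kilobytes for just 10,000 ints

# Iterating the lazy version still visits every value:
total = sum(1 for _ in range(1000))
print(total)                    # -> 1000
```

This is why `for i in list(range(0, 100000000))` in the question is doubly expensive: it builds the full 100-million-element list before the loop even starts.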
44,992,717
I recently began self-learning python, and have been using this language for an online course in algorithms. For some reason, many of my codes I created for this course are very slow (relatively to C/C++ Matlab codes I have created in the past), and I'm starting to worry that I am not using python properly. Here is a simple python and matlab code to compare their speed. MATLAB ``` for i = 1:100000000 a = 1 + 1 end ``` Python ``` for i in list(range(0, 100000000)): a=1 + 1 ``` The matlab code takes about 0.3 second, and the python code takes about 7 seconds. Is this normal? My python codes for much complex problems are very slow. For example, as a HW assignment, I'm running depth first search on a graph with about 900000 nodes, and this is taking forever. Thank you.
2017/07/09
[ "https://Stackoverflow.com/questions/44992717", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8277919/" ]
Performance is [not an explicit design goal of Python](http://python-history.blogspot.nl/2009/01/pythons-design-philosophy.html): > > Don’t fret too much about performance--plan to optimize later when > needed. > > > That's one of the reasons why Python integrated with a lot of high performance calculating backend engines, such as [numpy](http://www.numpy.org/), [OpenBLAS](http://www.openblas.net/) and even [CUDA](https://developer.nvidia.com/pycuda), just to name a few. The best way to go forward if you want to increase performance is to let high-performance libraries do the heavy lifting for you. Optimizing loops within Python (by using xrange instead of range in Python 2.7) won't get you very dramatic results. Here is a bit of code that compares different approaches: * Your original `list(range())` * The suggested use of `xrange()` * Leaving the `i` out * Using numpy to do the addition using numpy arrays (vector addition) * Using CUDA to do vector addition on the GPU Code: ``` import timeit import matplotlib.pyplot as mplplt iter = 100 testcode = [ "for i in list(range(1000000)): a = 1+1", "for i in xrange(1000000): a = 1+1", "for _ in xrange(1000000): a = 1+1", "import numpy; one = numpy.ones(1000000); a = one+one", "import pycuda.gpuarray as gpuarray; import pycuda.driver as cuda; import pycuda.autoinit; import numpy;" \ "one_gpu = gpuarray.GPUArray((1000000),numpy.int16); one_gpu.fill(1); a = (one_gpu+one_gpu).get()" ] labels = ["list(range())", "i in xrange()", "_ in xrange()", "numpy", "numpy and CUDA"] timings = [timeit.timeit(t, number=iter) for t in testcode] print labels, timings label_idx = range(len(labels)) mplplt.bar(label_idx, timings) mplplt.xticks(label_idx, labels) mplplt.ylabel('Execution time (sec)') mplplt.title('Timing of integer addition in python 2.7\n(smaller value is better performance)') mplplt.show() ``` Results (graph) ran on Python 2.7.13 on OSX: [![Performance of integer addition in 
Python](https://i.stack.imgur.com/HEMx1.png)](https://i.stack.imgur.com/HEMx1.png) The reason that Numpy performs faster than the CUDA solution is that the overhead of using CUDA does not beat the efficiency of Python+Numpy. For larger, floating point calculations, CUDA does even better than Numpy. Note that the Numpy solution performs more than 80 times faster than your original solution. If your timings are correct, this would even be faster than Matlab... A final note on DFS (Depth-First Search): [here](http://eddmann.com/posts/depth-first-search-and-breadth-first-search-in-python/) is an interesting article on DFS in Python.
Try using `xrange` instead of `range`. The difference between them is that `xrange` generates **the values as you use them** instead of `range`, which tries to generate a static list at runtime.
44,992,717
I recently began self-learning python, and have been using this language for an online course in algorithms. For some reason, many of my codes I created for this course are very slow (relatively to C/C++ Matlab codes I have created in the past), and I'm starting to worry that I am not using python properly. Here is a simple python and matlab code to compare their speed. MATLAB ``` for i = 1:100000000 a = 1 + 1 end ``` Python ``` for i in list(range(0, 100000000)): a=1 + 1 ``` The matlab code takes about 0.3 second, and the python code takes about 7 seconds. Is this normal? My python codes for much complex problems are very slow. For example, as a HW assignment, I'm running depth first search on a graph with about 900000 nodes, and this is taking forever. Thank you.
2017/07/09
[ "https://Stackoverflow.com/questions/44992717", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8277919/" ]
Performance is [not an explicit design goal of Python](http://python-history.blogspot.nl/2009/01/pythons-design-philosophy.html): > > Don’t fret too much about performance--plan to optimize later when > needed. > > > That's one of the reasons why Python integrated with a lot of high performance calculating backend engines, such as [numpy](http://www.numpy.org/), [OpenBLAS](http://www.openblas.net/) and even [CUDA](https://developer.nvidia.com/pycuda), just to name a few. The best way to go forward if you want to increase performance is to let high-performance libraries do the heavy lifting for you. Optimizing loops within Python (by using xrange instead of range in Python 2.7) won't get you very dramatic results. Here is a bit of code that compares different approaches: * Your original `list(range())` * The suggested use of `xrange()` * Leaving the `i` out * Using numpy to do the addition using numpy arrays (vector addition) * Using CUDA to do vector addition on the GPU Code: ``` import timeit import matplotlib.pyplot as mplplt iter = 100 testcode = [ "for i in list(range(1000000)): a = 1+1", "for i in xrange(1000000): a = 1+1", "for _ in xrange(1000000): a = 1+1", "import numpy; one = numpy.ones(1000000); a = one+one", "import pycuda.gpuarray as gpuarray; import pycuda.driver as cuda; import pycuda.autoinit; import numpy;" \ "one_gpu = gpuarray.GPUArray((1000000),numpy.int16); one_gpu.fill(1); a = (one_gpu+one_gpu).get()" ] labels = ["list(range())", "i in xrange()", "_ in xrange()", "numpy", "numpy and CUDA"] timings = [timeit.timeit(t, number=iter) for t in testcode] print labels, timings label_idx = range(len(labels)) mplplt.bar(label_idx, timings) mplplt.xticks(label_idx, labels) mplplt.ylabel('Execution time (sec)') mplplt.title('Timing of integer addition in python 2.7\n(smaller value is better performance)') mplplt.show() ``` Results (graph) ran on Python 2.7.13 on OSX: [![Performance of integer addition in 
Python](https://i.stack.imgur.com/HEMx1.png)](https://i.stack.imgur.com/HEMx1.png) The reason that Numpy performs faster than the CUDA solution is that the overhead of using CUDA does not beat the efficiency of Python+Numpy. For larger, floating point calculations, CUDA does even better than Numpy. Note that the Numpy solution performs more than 80 times faster than your original solution. If your timings are correct, this would even be faster than Matlab... A final note on DFS (Depth-First Search): [here](http://eddmann.com/posts/depth-first-search-and-breadth-first-search-in-python/) is an interesting article on DFS in Python.
Unfortunately, Python's amazing flexibility and ease come at the cost of being slow. For iteration counts this large, I also suggest the itertools module, since its iterators are fast, C-implemented objects. xrange is a good solution; however, if you want to iterate over dictionaries and such, it's better to use itertools, since it lets you iterate over any type of sequence object.
3,679,974
What I'd like to achieve is the launch of the following shell command: ``` mysql -h hostAddress -u userName -p userPassword databaseName < fileName ``` From within a python 2.4 script with something not unlike: ``` cmd = ["mysql", "-h", ip, "-u", mysqlUser, dbName, "<", file] subprocess.call(cmd) ``` This pukes due to the use of the redirect symbol (I believe) - mysql doesn't receive the input file. I've also tried: ``` subprocess.call(cmd, stdin=subprocess.PIPE) ``` no go there ether Can someone specify the syntax to make a shell call such that I can feed in a file redirection ? Thanks in advance.
2010/09/09
[ "https://Stackoverflow.com/questions/3679974", "https://Stackoverflow.com", "https://Stackoverflow.com/users/443779/" ]
You have to feed the file into mysql stdin by yourself. This should do it. ``` import subprocess ... filename = ... cmd = ["mysql", "-h", ip, "-u", mysqlUser, dbName] f = open(filename) subprocess.call(cmd, stdin=f) ```
The symbol `<` has this meaning (i. e. reading a file to `stdin`) only in shell. In Python you should use either of the following: 1) Read file contents in your process and push it to `stdin` of the child process: ``` fd = open(filename, 'rb') try: subprocess.call(cmd, stdin=fd) finally: fd.close() ``` 2) Read file contents via shell (as you mentioned), but redirect `stdin` of your process accordingly: ``` # In file myprocess.py subprocess.call(cmd, stdin=subprocess.PIPE) # In shell command line $ python myprocess.py < filename ```
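Since mysql itself may not be available to experiment with, the same stdin-from-a-file pattern can be tried with any command. The sketch below uses the Python interpreter as a hypothetical stand-in for the mysql command line (and uses `subprocess.run` from modern Python; on the question's Python 2.4 you would keep `subprocess.call` exactly as shown in the answers above):

```python
import os
import subprocess
import sys
import tempfile

# Write a small stand-in "dump file" that will be fed to the child's stdin.
with tempfile.NamedTemporaryFile('w', suffix='.sql', delete=False) as tmp:
    tmp.write('select 1;\n')
    filename = tmp.name

# Hypothetical stand-in for mysql: a child process that reads its stdin
# and echoes it back upper-cased.
cmd = [sys.executable, '-c', 'import sys; print(sys.stdin.read().strip().upper())']

with open(filename) as f:                     # the `< fileName` part, done in Python
    result = subprocess.run(cmd, stdin=f, capture_output=True, text=True)

os.remove(filename)
print(result.stdout.strip())                  # -> SELECT 1;
```

The key point is the same as in both answers: the redirection happens by handing the open file object to `stdin=`, not by putting `<` in the argument list.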
3,679,974
What I'd like to achieve is the launch of the following shell command: ``` mysql -h hostAddress -u userName -p userPassword databaseName < fileName ``` From within a python 2.4 script with something not unlike: ``` cmd = ["mysql", "-h", ip, "-u", mysqlUser, dbName, "<", file] subprocess.call(cmd) ``` This pukes due to the use of the redirect symbol (I believe) - mysql doesn't receive the input file. I've also tried: ``` subprocess.call(cmd, stdin=subprocess.PIPE) ``` no go there ether Can someone specify the syntax to make a shell call such that I can feed in a file redirection ? Thanks in advance.
2010/09/09
[ "https://Stackoverflow.com/questions/3679974", "https://Stackoverflow.com", "https://Stackoverflow.com/users/443779/" ]
You have to feed the file into mysql stdin by yourself. This should do it. ``` import subprocess ... filename = ... cmd = ["mysql", "-h", ip, "-u", mysqlUser, dbName] f = open(filename) subprocess.call(cmd, stdin=f) ```
As Andrey correctly noticed, the `<` redirection operator is interpreted by shell. Hence another possible solution: ``` import os os.system("mysql -h " + ip + " -u " + mysqlUser + " " + dbName) ``` It works because `os.system` passes its argument to the shell. Note that I assumed that all used variables come from a trusted source, otherwise you need to validate them in order to prevent arbitrary code execution. Also those variables should not contain whitespace (default `IFS` value) or shell special characters.
3,679,974
What I'd like to achieve is the launch of the following shell command: ``` mysql -h hostAddress -u userName -p userPassword databaseName < fileName ``` From within a python 2.4 script with something not unlike: ``` cmd = ["mysql", "-h", ip, "-u", mysqlUser, dbName, "<", file] subprocess.call(cmd) ``` This pukes due to the use of the redirect symbol (I believe) - mysql doesn't receive the input file. I've also tried: ``` subprocess.call(cmd, stdin=subprocess.PIPE) ``` no go there ether Can someone specify the syntax to make a shell call such that I can feed in a file redirection ? Thanks in advance.
2010/09/09
[ "https://Stackoverflow.com/questions/3679974", "https://Stackoverflow.com", "https://Stackoverflow.com/users/443779/" ]
The symbol `<` has this meaning (i. e. reading a file to `stdin`) only in shell. In Python you should use either of the following: 1) Read file contents in your process and push it to `stdin` of the child process: ``` fd = open(filename, 'rb') try: subprocess.call(cmd, stdin=fd) finally: fd.close() ``` 2) Read file contents via shell (as you mentioned), but redirect `stdin` of your process accordingly: ``` # In file myprocess.py subprocess.call(cmd, stdin=subprocess.PIPE) # In shell command line $ python myprocess.py < filename ```
As Andrey correctly noticed, the `<` redirection operator is interpreted by shell. Hence another possible solution: ``` import os os.system("mysql -h " + ip + " -u " + mysqlUser + " " + dbName) ``` It works because `os.system` passes its argument to the shell. Note that I assumed that all used variables come from a trusted source, otherwise you need to validate them in order to prevent arbitrary code execution. Also those variables should not contain whitespace (default `IFS` value) or shell special characters.
68,563,978
This question is basically on how to use regular expressions but I couldn't find any answer to it in a lot of very closely related questions. I create coverage reports in a gitlab pipeline using [coverage.py](https://coverage.readthedocs.io/en/coverage-5.5/) and [py.test](https://docs.pytest.org/en/6.2.x/) which look like the following piped into a file like `coverage37.log`: ``` -------------- generated xml file: /builds/utils/foo/report.xml -------------- ---------- coverage: platform linux, python 3.7.11-final-0 ----------- Name Stmts Miss Cover ------------------------------------------------- foo/tests/bar1.py 52 0 100% ... foo/tests/bar2.py 0 0 100% ------------------------------------------------- TOTAL 431 5 99% ======================= 102 passed, 9 warnings in 4.35s ======================== ``` Now I want to create a badge for the total coverage i.e. here the 99% value and only get number (99) in order to assign it to a variable. This variable can then be used to create a flexible coverage badge using the [anybadge](https://github.com/jongracecox/anybadge) package. My naive approach would be something like: ``` COVERAGE_SCORE=$(sed -n 'what to put here' coverage37.log) echo "Coverage is $COVERAGE_SCORE" ``` Note that I know that gitlab, github, etc. offer specific functionalities to create badges automatically. But I want to create it manually in order to have more control and create the badge per branch. Any hints are welcome. Thanks in advance!
2021/07/28
[ "https://Stackoverflow.com/questions/68563978", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3734059/" ]
It is easier to use `awk` here: ```sh cov_score=$(awk '$1 == "TOTAL" {print $NF+0}' coverage37.log) ``` Here `$1 == "TOTAL"` matches a line with first word as `TOTAL` and `print $NF+0` prints number part of last field.
rather than get approximate values from non-machine-readable outputs you'd be best to use coverage's programmatic apis, either `coverage xml` or `coverage json` here's an example using the json output (note I send it to `/dev/stdout`, by default it goes to `coverage.json`) ``` $ coverage json -o /dev/stdout | jq .totals.percent_covered 52.908756889161054 ``` there's even more information there if you need it: ``` $ coverage json -o /dev/stdout | jq .totals { "covered_lines": 839, "num_statements": 1401, "percent_covered": 52.908756889161054, "missing_lines": 562, "excluded_lines": 12, "num_branches": 232, "num_partial_branches": 7, "covered_branches": 25, "missing_branches": 207 } ```
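If neither awk nor jq is convenient, the TOTAL line can also be pulled out with a few lines of Python; a sketch using the report text shown in the question:

```python
report = """
Name                       Stmts   Miss  Cover
-------------------------------------------------
foo/tests/bar1.py             52      0   100%
foo/tests/bar2.py              0      0   100%
-------------------------------------------------
TOTAL                        431      5    99%
"""

def total_coverage(text):
    """Return the TOTAL coverage percentage as an int, or None if absent."""
    for line in text.splitlines():
        fields = line.split()
        if fields and fields[0] == 'TOTAL':
            return int(fields[-1].rstrip('%'))
    return None

print(total_coverage(report))   # -> 99
```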
68,563,978
This question is basically on how to use regular expressions but I couldn't find any answer to it in a lot of very closely related questions. I create coverage reports in a gitlab pipeline using [coverage.py](https://coverage.readthedocs.io/en/coverage-5.5/) and [py.test](https://docs.pytest.org/en/6.2.x/) which look like the following piped into a file like `coverage37.log`: ``` -------------- generated xml file: /builds/utils/foo/report.xml -------------- ---------- coverage: platform linux, python 3.7.11-final-0 ----------- Name Stmts Miss Cover ------------------------------------------------- foo/tests/bar1.py 52 0 100% ... foo/tests/bar2.py 0 0 100% ------------------------------------------------- TOTAL 431 5 99% ======================= 102 passed, 9 warnings in 4.35s ======================== ``` Now I want to create a badge for the total coverage i.e. here the 99% value and only get number (99) in order to assign it to a variable. This variable can then be used to create a flexible coverage badge using the [anybadge](https://github.com/jongracecox/anybadge) package. My naive approach would be something like: ``` COVERAGE_SCORE=$(sed -n 'what to put here' coverage37.log) echo "Coverage is $COVERAGE_SCORE" ``` Note that I know that gitlab, github, etc. offer specific functionalities to create badges automatically. But I want to create it manually in order to have more control and create the badge per branch. Any hints are welcome. Thanks in advance!
2021/07/28
[ "https://Stackoverflow.com/questions/68563978", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3734059/" ]
It is easier to use `awk` here: ```sh cov_score=$(awk '$1 == "TOTAL" {print $NF+0}' coverage37.log) ``` Here `$1 == "TOTAL"` matches the line whose first word is `TOTAL`, and `print $NF+0` prints the numeric part of the last field (adding `0` makes `awk` convert the string `99%` to the number `99`, dropping the trailing `%`).
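The extraction can be checked without a full pipeline run; here the sample report from the question is inlined via a here-doc (in the real job the input would be `coverage37.log`):

```shell
# Demo of the awk extraction on the sample report from the question;
# the here-doc stands in for coverage37.log.
cov_score=$(awk '$1 == "TOTAL" {print $NF+0}' <<'EOF'
Name                 Stmts   Miss  Cover
----------------------------------------
foo/tests/bar1.py       52      0   100%
foo/tests/bar2.py        0      0   100%
----------------------------------------
TOTAL                  431      5    99%
EOF
)
echo "Coverage is $cov_score"   # → Coverage is 99
```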
Also, if you don't want to store the report in a file for some reason, you can pipe `coverage report` directly: `cov_score=$(coverage report | awk '$1 == "TOTAL" {print $NF+0}')`
68,563,978
This question is basically about how to use regular expressions, but I couldn't find any answer to it in a lot of very closely related questions. I create coverage reports in a gitlab pipeline using [coverage.py](https://coverage.readthedocs.io/en/coverage-5.5/) and [py.test](https://docs.pytest.org/en/6.2.x/), which look like the following when piped into a file like `coverage37.log`: ``` -------------- generated xml file: /builds/utils/foo/report.xml -------------- ---------- coverage: platform linux, python 3.7.11-final-0 ----------- Name Stmts Miss Cover ------------------------------------------------- foo/tests/bar1.py 52 0 100% ... foo/tests/bar2.py 0 0 100% ------------------------------------------------- TOTAL 431 5 99% ======================= 102 passed, 9 warnings in 4.35s ======================== ``` Now I want to create a badge for the total coverage, i.e. here the 99% value, and only get the number (99) in order to assign it to a variable. This variable can then be used to create a flexible coverage badge using the [anybadge](https://github.com/jongracecox/anybadge) package. My naive approach would be something like: ``` COVERAGE_SCORE=$(sed -n 'what to put here' coverage37.log) echo "Coverage is $COVERAGE_SCORE" ``` Note that I know that gitlab, github, etc. offer specific functionality to create badges automatically, but I want to create the badge manually in order to have more control and create it per branch. Any hints are welcome. Thanks in advance!
2021/07/28
[ "https://Stackoverflow.com/questions/68563978", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3734059/" ]
Rather than scrape approximate values from non-machine-readable output, you'd be best off using coverage's programmatic APIs: either `coverage xml` or `coverage json`. Here's an example using the JSON output (note I send it to `/dev/stdout`; by default it goes to `coverage.json`): ``` $ coverage json -o /dev/stdout | jq .totals.percent_covered 52.908756889161054 ``` There's even more information there if you need it: ``` $ coverage json -o /dev/stdout | jq .totals { "covered_lines": 839, "num_statements": 1401, "percent_covered": 52.908756889161054, "missing_lines": 562, "excluded_lines": 12, "num_branches": 232, "num_partial_branches": 7, "covered_branches": 25, "missing_branches": 207 } ```
Also, if you don't want to store the report in a file for some reason, you can pipe `coverage report` directly: `cov_score=$(coverage report | awk '$1 == "TOTAL" {print $NF+0}')`
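Since the question explicitly asked for a `sed` variant, one possible pattern (a sketch, tested only against the sample report layout) captures the digits before the `%` on the `TOTAL` line:

```shell
# sed variant of the extraction: print only the number before the %
# on the TOTAL line. The here-doc stands in for coverage37.log.
cov_score=$(sed -n 's/^TOTAL.*[[:space:]]\([0-9]\{1,3\}\)%$/\1/p' <<'EOF'
TOTAL                  431      5    99%
EOF
)
echo "Coverage is $cov_score"   # → Coverage is 99
```

The `-n` flag suppresses default output, so only the substituted `TOTAL` line (just the captured number) is printed.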