Dataset schema (column: type, min to max):

* qid: int64 (46k to 74.7M)
* question: string lengths (54 to 37.8k)
* date: string lengths (10 to 10)
* metadata: list lengths (3 to 3)
* response_j: string lengths (29 to 22k)
* response_k: string lengths (26 to 13.4k)
* __index_level_0__: int64 (0 to 17.8k)
5,918,353
I'm quite new to python and trying to port a simple exploit I've written for a stack overflow (just a nop sled, shell code and return address). This isn't for nefarious purposes but rather for a security lecture at a university. Given a hex string (deadbeef), what are the best ways to: * represent it as a series of bytes * add or subtract a value * reverse the order (for x86 memory layout, i.e. efbeadde) Any tips and tricks regarding common tasks in exploit writing in python are also greatly appreciated.
2011/05/07
[ "https://Stackoverflow.com/questions/5918353", "https://Stackoverflow.com", "https://Stackoverflow.com/users/742620/" ]
In Python 2.6 and above, you can use the built-in [`bytearray`](http://docs.python.org/library/stdtypes.html#sequence-types-str-unicode-list-tuple-bytearray-buffer-xrange) class. To create your `bytearray` object: ``` b = bytearray.fromhex('deadbeef') ``` To alter a byte, you can reference it using array notation: ``` b[2] += 7 ``` To reverse the `bytearray` in place, use `b.reverse()`. To create an iterator that iterates over it in reverse order, you can use the `reversed` function: `reversed(b)`. You may also be interested in the new `bytes` class in Python 3, which is like `bytearray` but immutable.
Not sure if this is the best way...

```
hex_str = "deadbeef"
bytes = "".join(chr(int(hex_str[i:i+2],16)) for i in xrange(0,len(hex_str),2))
rev_bytes = bytes[::-1]
```

Or might be simpler:

```
bytes = "\xde\xad\xbe\xef"
rev_bytes = bytes[::-1]
```
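A further option for the byte-order task, assuming Python 3 and a fixed 32-bit address: parse the hex string as an integer, do the add/subtract arithmetic on the integer, and let the `struct` module emit the little-endian layout.

```python
import struct

# Parse "deadbeef" as a 32-bit unsigned value, adjust it, then pack it
# little-endian, which gives the reversed x86 memory layout.
addr = int("deadbeef", 16)
packed = struct.pack("<I", addr)        # b'\xef\xbe\xad\xde'
print(packed.hex())                     # efbeadde
bumped = struct.pack("<I", addr + 4)    # arithmetic first, then repack
print(bumped.hex())                     # f3beadde
```

`"<I"` means "little-endian unsigned 32-bit"; for a 64-bit target the format would be `"<Q"`.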
4,712
28,771,226
I have a Python list question. Input:

```
l=[2, 5, 6, 7, 10, 11, 12, 19, 20, 26, 28, 33, 34, 45, 46, 47, 50, 57, 59, 64, 67, 77, 79, 87, 93, 97, 106, 110, 111, 113, 115, 120, 125, 126, 133, 135, 142, 148, 160, 166, 169, 176, 202, 228, 234, 253, 274, 365, 433, 435, 436, 468, 476, 529, 570, 575, 577, 581, 614, 766, 813, 944, 1058, 1079, 1245, 1363, 1389, 1428, 1758, 2129, 2336, 2402, 2405, 2576, 3013, 3993, 7687, 8142, 8455, 8456]
```

Now I want to mark the numbers in a `[0]*10000` list, such that the beginning looks like this. Output:

```
lp=[0,1,0,0,1,...]
```

The second and fifth positions are marked since 2 and 5 appear in the input.
2015/02/27
[ "https://Stackoverflow.com/questions/28771226", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4556250/" ]
As you loop through the original array, check whether the current element is the next one in the sequence. If not, use another loop to generate the missing elements:

```
var dataParsed = [];
var lastTime = data[0][0];
var timeStep = 60000;
for (var i = 0; i < data.length; i++) {
    var curTime = data[i][0];
    if (curTime > lastTime + timeStep) {
        for (var time = lastTime + timeStep; time < curTime; time += timeStep) {
            dataParsed.push([time, 0]);
        }
    }
    dataParsed.push(data[i]);
    lastTime = curTime;
}
```
You can have a loop which counts from the first timestamp to the last, incremented by 60 seconds. Then populate a new array with current values + missing values like below.

```
var dataParsed = [];
for (var i = data[0][0], j = 0; i <= data[data.length - 1][0]; i += 60000) {
    if (i == data[j][0]) {
        dataParsed.push(data[j]);
        j++;
    } else {
        dataParsed.push([i, 0]);
    }
}
```
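Both replies above answer a different (JavaScript/timestamp) question; the list-marking task as actually asked has a short Python solution. A minimal sketch (the shortened input list is just for illustration):

```python
# Mark which numbers appear in the input, using 1-based positions to
# match the example output lp=[0,1,0,0,1,...].
l = [2, 5, 6, 7, 10]      # shortened input for illustration
lp = [0] * 10000
for n in l:
    lp[n - 1] = 1         # position n gets marked
print(lp[:10])            # [0, 1, 0, 0, 1, 1, 1, 0, 0, 1]
```

With the full input list from the question, the same loop marks all 80 positions.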
4,714
63,922,241
I am trying to use Python ctypes to interface to a C++ class I have been provided. I've gotten most everything working in terms of reading/writing member data and calling methods. But the class I'm trying to exercise (call it ClassA) relies on an external class (call it ClassB). See:

```
//main.cc This is existing caller code. Everything is C++ so using it is easy.
#include "ClassA.hh"
#include "ClassB.hh"

int main() {
    ClassB objB(x,y,z);
    ClassA objA(a,b, &objB);
    objA.DoStuff();
}
```

But I'm not ready to do the work to bind and expose ClassB in Python via ctypes. For those who haven't used ctypes: you basically write some C bindings and call them in Python. The ClassA binding might look like:

```
//at the bottom of ClassA.hh
extern "C" {
    ClassA* ObjaNew(int x, int y) { return new ClassA(x,y); }
    void ObjaDoStuff(ClassA* objPtr) { objPtr->DoStuff(); }
}
```

And then the calling code in Python might look like:

```
mylib = ctypes.CDLL('mylib.so')
myPyObj = mylib.ObjaNew(5,6)   # executes ObjaNew
mylib.ObjaDoStuff(myPyObj)     # executes ObjaDoStuff
```

An important point being that Python ctypes supports native-ish C types only. Creating, passing or getting a class or struct or std::vector through the ctypes interface is work. For example, this link is code that one would need to write to be able to allocate a C++ vector: <https://stackoverflow.com/a/16887455/2312509> So, what I think I want to do is this:

```
//classA.hh
class ClassA{
public:
    ClassA(int a, int b, ClassB* p_objb) { //This is the existing constructor
        ; //whatever
        m_objb = p_objb;
    }
    ClassA(int a, int b) { // This is my new constructor
        ClassB *objB = new ClassB(x,y,z);
        ClassA(a,b,objB);
    }
```

I've done this and it compiles, but I can't actually run it yet. My concern is that objB is deallocated, because I can't see it as a member, despite it being allocated in the body of a constructor.
I feel like if the call to new were being assigned to a member data pointer it would be right, because that's how it works, but the assignment to a local pointer, and then passing the local pointer, might fail. I think I could maybe create a child class that inherits from both, like:

```
class ClassAB : public ClassB, public ClassA {
    //But the ClassA constructor still would need to NOT rely on the existence of ObjB
    m_objb = this;
}
```

I've not written a whole bunch of C++, so I don't know what the right answer is, but I feel like this isn't a new or novel concept.
2020/09/16
[ "https://Stackoverflow.com/questions/63922241", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2312509/" ]
Given that you didn't show any code it's hard to tell what you're really trying to do. However, assuming you mean nodes like in a tree, the simple example below shows how you could recursively work back up the tree to show the relationships. This is of course not a complete example, but it should give you an idea.

```py
class node():
    def __init__(self, name, parent=None):
        self.name = name
        self.parent: node = parent

    def get_bottom_up_ancestors(self):
        if self.parent:
            return [self.name] + self.parent.get_bottom_up_ancestors()
        return [self.name]

    def get_top_down_ancestors(self):
        return self.get_bottom_up_ancestors()[::-1]


root = node("Top")
child1 = node("first", parent=root)
child2 = node("second", parent=root)
grandchild1 = node("grandchild", parent=child1)

print(grandchild1.get_bottom_up_ancestors())
print(grandchild1.get_top_down_ancestors())
```

**OUTPUT**

```none
['grandchild', 'first', 'Top']
['Top', 'first', 'grandchild']
```
I don't quite understand what you are talking about, but from what I DO understand, there is no such thing as a parent object, only parent classes, so a node parent would not be possible.
4,715
28,915,587
How can I split a **single** key-value pair into a dictionary in Python?

```
s = "x=y"
sp = s.split('=', 1)
for key,value in sp:
    print(key, "==", value)
```

I did not find anything helpful, except using nested `for` and `dict()`, which is really unclear.
2015/03/07
[ "https://Stackoverflow.com/questions/28915587", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3367446/" ]
```
>>> s = "x=y"
>>> dict([s.split('=', 1)])
{'x': 'y'}
```
```
In [1]: s = "x=y"

In [2]: sp = s.split('=', 1)

In [3]: i=iter(sp)

In [4]: new_dict=dict(zip(i,i))

In [5]: new_dict
Out[5]: {'x': 'y'}
```

**dict-** It is used for creating a new dictionary. **zip-** It is used for iterating over two lists in parallel. **iter-** Returns an iterator.
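The same `dict(...)` construction extends naturally to several pairs; the `&` separator used here is only an assumption for illustration:

```python
# Split a query-string-like input into pairs, then each pair into
# key and value; dict() accepts the resulting 2-item sequences.
s = "x=1&y=2&z=3"
d = dict(pair.split("=", 1) for pair in s.split("&"))
print(d)   # {'x': '1', 'y': '2', 'z': '3'}
```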
4,716
21,201,661
I am trying to get the output of serverStatus command via pymongo and then insert it into a mongodb collection. Here is the dictionary `{u'metrics': {u'getLastError': {u'wtime': {u'num': 0, u'totalMillis': 0}, u'wtimeouts': 0L}, u'queryExecutor': {u'scanned': 0L}, u'record': {u'moves': 0L}, u'repl': {u'buffer': {u'count': 0L, u'sizeBytes': 0L, u'maxSizeBytes': 268435456}, u'apply': {u'batches': {u'num': 0, u'totalMillis': 0}, u'ops': 0L}, u'oplog': {u'insert': {u'num': 0, u'totalMillis': 0}, u'insertBytes': 0L}, u'network': {u'bytes': 0L, u'readersCreated': 0L, u'getmores': {u'num': 0, u'totalMillis': 0}, u'ops': 0L}, u'preload': {u'docs': {u'num': 0, u'totalMillis': 0}, u'indexes': {u'num': 0, u'totalMillis': 0}}}, u'ttl': {u'passes': 108L, u'deletedDocuments': 0L}, u'operation': {u'fastmod': 0L, u'scanAndOrder': 0L, u'idhack': 0L}, u'document': {u'deleted': 0L, u'updated': 0L, u'inserted': 1L, u'returned': 0L}}, u'process': u'mongod', u'pid': 7073, u'connections': {u'current': 1, u'available': 818, u'totalCreated': 58L}, u'locks': {u'admin': {u'timeAcquiringMicros': {}, u'timeLockedMicros': {}}, u'local': {u'timeAcquiringMicros': {u'r': 1942L, u'w': 0L}, u'timeLockedMicros': {u'r': 72369L, u'w': 0L}}, u'.': {u'timeAcquiringMicros': {u'R': 218733L, u'W': 30803L}, u'timeLockedMicros': {u'R': 311478L, u'W': 145679L}}}, u'cursors': {u'clientCursors_size': 0, u'timedOut': 0, u'totalOpen': 0}, u'globalLock': {u'totalTime': 6517358000L, u'lockTime': 145679L, u'currentQueue': {u'total': 0, u'writers': 0, u'readers': 0}, u'activeClients': {u'total': 0, u'writers': 0, u'readers': 0}}, u'extra_info': {u'note': u'fields vary by platform', u'page_faults': 21, u'heap_usage_bytes': 62271152}, u'uptime': 6518.0, u'network': {u'numRequests': 103, u'bytesOut': 106329, u'bytesIn': 6531}, u'uptimeMillis': 6517358L, u'recordStats': {u'local': {u'pageFaultExceptionsThrown': 0, u'accessesNotInMemory': 0}, u'pageFaultExceptionsThrown': 0, u'accessesNotInMemory': 0}, u'version': 
u'2.4.8', u'dur': {u'compression': 0.0, u'journaledMB': 0.0, u'commits': 30, u'writeToDataFilesMB': 0.0, u'commitsInWriteLock': 0, u'earlyCommits': 0, u'timeMs': {u'writeToJournal': 0, u'dt': 3077, u'remapPrivateView': 0, u'prepLogBuffer': 0, u'writeToDataFiles': 0}}, u'mem': {u'resident': 36, u'supported': True, u'virtual': 376, u'mappedWithJournal': 160, u'mapped': 80, u'bits': 64}, u'opcountersRepl': {u'getmore': 0, u'insert': 0, u'update': 0, u'command': 0, u'query': 0, u'delete': 0}, u'indexCounters': {u'missRatio': 0.0, u'resets': 0, u'hits': 0, u'misses': 0, u'accesses': 0}, u'uptimeEstimate': 6352.0, u'host': u'kal-el', u'writeBacksQueued': False, u'localTime': datetime.datetime(2014, 1, 18, 8, 1, 30, 22000), u'backgroundFlushing': {u'last_finished': datetime.datetime(2014, 1, 18, 8, 0, 52, 713000), u'last_ms': 0, u'flushes': 108, u'average_ms': 1.1111111111111112, u'total_ms': 120}, u'opcounters': {u'getmore': 0, u'insert': 1, u'update': 0, u'command': 105, u'query': 108, u'delete': 0}, u'ok': 1.0, u'asserts': {u'msg': 0, u'rollovers': 0, u'regular': 0, u'warning': 0, u'user': 0}}` I am getting "Key '.' must not contain ." error. What could be the issue here? I am not seeing any . in the key name. Here is the traceback: ``` Traceback (most recent call last): File "src/mongodb_status.py", line 37, in <module> get_mongodb_status() File "src/mongodb_status.py", line 23, in get_mongodb_status md_status.insert(status_data) File "/home/guruprasad/dev/py/src/dnacraft_monitor_servers/venv/local/lib/python2.7/site-packages/pymongo/collection.py", line 362, in insert self.database.connection) bson.errors.InvalidDocument: key '.' must not contain '.' ```
2014/01/18
[ "https://Stackoverflow.com/questions/21201661", "https://Stackoverflow.com", "https://Stackoverflow.com/users/649746/" ]
In the 14th line: ``` u'.': {u'timeAcquiringMicros': {u'R': 218733L, u'W': 30803L}, u'timeLockedMicros': {u'R': 311478L, u'W': 145679L}}} ``` For future safety, iterate through the keys, replace '.'s with '\_' or something, and then perform a write.
Here is a function which will remove '.' from your keys:

```
def fix_dict(data, ignore_duplicate_key=True):
    """
    Removes dots "." from keys, as mongo doesn't like that.
    If the key already exists without the dot, the dot-key's value gets lost.
    This modifies the existing dict!

    :param ignore_duplicate_key: True: if the replacement key is already in
        the dict, the dot-key value will be ignored (with a warning).
        False: raise ValueError in that case.
    """
    if isinstance(data, (list, tuple)):
        return [fix_dict(e, ignore_duplicate_key) for e in data]
    if isinstance(data, dict):
        # Iterate over a snapshot, since we mutate the dict as we go.
        for old_key, value in list(data.items()):
            value = fix_dict(value, ignore_duplicate_key)
            key = old_key
            if "." in old_key:
                key = old_key.replace(".", "")
                if key in data:
                    error_msg = (
                        "Dict key {key} containing a \".\" was ignored, "
                        "as {replacement} already exists".format(
                            key=old_key, replacement=key))
                    if ignore_duplicate_key:
                        import warnings
                        warnings.warn(error_msg, category=RuntimeWarning)
                        del data[old_key]
                        continue
                    raise ValueError(error_msg)
                del data[old_key]
            data[key] = value
        return data
    return data
```
4,719
53,755,983
There is a related question [here](https://stackoverflow.com/questions/7001917/pause-python-generator). I am attempting to do [this](https://www.hackerrank.com/contests/projecteuler/challenges/euler024/copy-from/1311767864) project Euler challenge on HackerRank. What it requires is that you are able to derive the *n*th permutation of the string "abcdefghijklm". There are 13! permutations. I tried a simple solution where I used `for num, stry in zip(range(1, math.factorial(13)), itertools.permutations("abcdefghijklm")):`. That works, but it times out. What would be really nice is to store each value in a `dict` as I go along, and do something like this:

```
import itertools
import math

strt = "abcdefghijklm"
dic = {}
perms_gen = itertools.permutations(strt)
idxs_gen = range(1, math.factorial(13))
curr_idx = 0
test_list = [1, 2, 5, 10]

def get_elems(n):
    for num, stry in zip(idxs_gen, perms_gen):
        print(num)  # debug
        str_stry = "".join(stry)
        dic[num] = str_stry
        if num == n:
            return str_stry

for x in test_list:
    if curr_idx < x:
        print(get_elems(x))
    else:
        print(dic[x])
```

This doesn't work. I get this output instead:

```
1
abcdefghijklm
1
2
abcdefghijlkm
1
2
3
4
5
abcdefghikjml
1
2
3
4
5
6
7
8
9
10
abcdefghilmkj
```

As I was writing this question, I apparently found the answer... to be continued.
2018/12/13
[ "https://Stackoverflow.com/questions/53755983", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4476908/" ]
Pausing is built-in functionality for generators. It's half the point of generators. However, `range` is **not a generator**. It's a lazy sequence type. If you want an object where iterating over it again will resume where you last stopped, you want an iterator over the range object: ``` idsx_iter = iter(range(1, math.factorial(13))) ``` However, it would be simpler to save the `zip` iterator instead of two underlying iterators. Better yet, use `enumerate`: ``` indexed_permutations = enumerate(itertools.permutations(strt)) ``` You've got a lot more things that don't make sense in your code, though, like `curr_idx`, which just stays at 0 forever, or your `range` bounds, which produce 13!-1 indices instead of 13! indices, and really, you should be using a more efficient algorithm. For example, one based on figuring out how many permutations you skip ahead by setting the next element to a specific character, and using that to directly compute each element of the permutation.
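The "skip ahead by whole blocks of permutations" idea in the last paragraph can be sketched with the factorial number system. This is a sketch with 0-indexed `n`, not a drop-in HackerRank solution:

```python
from math import factorial

def nth_permutation(s, n):
    """Return the n-th (0-indexed) lexicographic permutation of s."""
    pool = sorted(s)
    out = []
    for i in range(len(pool) - 1, -1, -1):
        # Fixing the next character skips factorial(i) permutations,
        # so divmod tells us which character to pick.
        idx, n = divmod(n, factorial(i))
        out.append(pool.pop(idx))
    return "".join(out)

print(nth_permutation("abcdefghijklm", 0))   # abcdefghijklm
print(nth_permutation("abc", 3))             # bca
```

This is O(k^2) in the string length rather than O(n) in the permutation index, which is what makes it fast enough for 13! positions.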
The answer to the question in the title is "yes", you can pause and restart. How? Unexpectedly (to me), apparently `zip()` restarts the zipped generators despite them being previously defined (maybe someone can tell me why that happens?). So, I added `main_gen = zip(idxs_gen, perms_gen)` and changed `for num, stry in zip(idxs_gen, perms_gen):` to `for num, stry in main_gen:`. I then get this output, which, assuming the strings are correct, is exactly what I wanted:

```
1
abcdefghijklm
2
abcdefghijkml
3
4
5
abcdefghijmkl
6
7
8
9
10
abcdefghiklmj
```

After that change, the code looks like this:

```
import itertools
import math

strt = "abcdefghijklm"
dic = {}
perms_gen = itertools.permutations(strt)
idxs_gen = range(1, math.factorial(13))
main_gen = zip(idxs_gen, perms_gen)
curr_idx = 0
test_list = [1, 2, 5, 10]

def get_elems(n):
    for num, stry in main_gen:
        print(num)
        str_stry = "".join(stry)
        dic[num] = str_stry
        if num == n:
            return str_stry

for x in test_list:
    if curr_idx < x:
        print(get_elems(x))
    else:
        print(dic[x])
```
4,722
7,949,024
I am trying to install psycopg2 in virtualenv enviroment and am having a heck of a time. I think I may have screwed something up because I installed virtualenv and then upgraded to Xcode 4. ``` (my_enviroment)my_users-macbook-2:my_enviroment my_user$ pip install psycopg2 ``` Produces this message: ``` Downloading/unpacking psycopg2==2.4.2 Running setup.py egg_info for package psycopg2 no previously-included directories found matching 'doc/src/_build' Installing collected packages: psycopg2 Running setup.py install for psycopg2 building 'psycopg2._psycopg' extension gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -isysroot /Developer/SDKs/MacOSX10.6.sdk -arch i386 -arch x86_64 -g -O2 -DNDEBUG -g -O3 -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.2 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090004 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -I. -I/usr/include -I/usr/include/postgresql/server -c psycopg/psycopgmodule.c -o build/temp.macosx-10.6-intel-2.7/psycopg/psycopgmodule.o unable to execute gcc-4.2: No such file or directory error: command 'gcc-4.2' failed with exit status 1 Complete output from command /Users/my_user/my_enviroment/bin/python -c "import setuptools;__file__='/Users/my_user/my_enviroment/build/psycopg2/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /var/folders/b8/jflj9btd4rzb80xfmcy_rk140000gn/T/pip-lojVKc-record/install-record.txt --install-headers /Users/my_user/my_enviroment/bin/../include/site/python2.7: running install running build running build_py running build_ext building 'psycopg2._psycopg' extension gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -isysroot /Developer/SDKs/MacOSX10.6.sdk -arch i386 -arch x86_64 -g -O2 -DNDEBUG -g -O3 -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.4.2 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090004 
-DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -I. -I/usr/include -I/usr/include/postgresql/server -c psycopg/psycopgmodule.c -o build/temp.macosx-10.6-intel-2.7/psycopg/psycopgmodule.o unable to execute gcc-4.2: No such file or directory error: command 'gcc-4.2' failed with exit status 1 ---------------------------------------- Command /Users/my_user/my_enviroment/bin/python -c "import setuptools;__file__='/Users/my_user/my_enviroment/build/psycopg2/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /var/folders/b8/jflj9btd4rzb80xfmcy_rk140000gn/T/pip-lojVKc-record/install-record.txt --install-headers /Users/my_user/my_enviroment/bin/../include/site/python2.7 failed with error code 1 Storing complete log in /Users/my_user/.pip/pip.log ``` I am running OSX 10.7, Python 2.7.2, pip 1.0.2, Xcode 4. I have tried the following solutions, with no success: [Cannot install psycopg2 on OSX 10.6.7 with XCode4](https://stackoverflow.com/questions/5427157/cannot-install-psycopg2-on-osx-10-6-7-with-xcode4) [GCC error: command 'gcc-4.0' failed with exit status 1](https://stackoverflow.com/questions/7883372/gcc-error-command-gcc-4-0-failed-with-exit-status-1) Any thoughts? What other information would you need to know?
2011/10/31
[ "https://Stackoverflow.com/questions/7949024", "https://Stackoverflow.com", "https://Stackoverflow.com/users/913018/" ]
Your error is this: ``` unable to execute gcc-4.2: No such file or directory ``` Which means that `gcc-4.2` is not installed. Either downgrade (or upgrade) your GCC version, or modify the package to build with just the `gcc` command. A bit more hacky would be to `ln` `gcc-4.2` to the `gcc` command.
I've found the easiest way to install PIL on 10.7 is to create a symlink from gcc-4.2 to gcc.

```
sudo ln -s /usr/bin/gcc /usr/bin/gcc-4.2
easy_install pil
```
4,725
64,882,718
I'm using rpy2 within Python to call R, but for some reason I am not able to load a specific package, 'rmgarch'. I have installed it separately in R and it works when I import it in RStudio, but for whatever reason it just won't work in rpy2, even though rpy2 is perfectly happy importing other packages such as 'rugarch', 'Matrix', 'zoo', etc. They are all installed in the same library, which is even more confusing for me. My question is, do you know an alternative way of calling/importing the package while coding in R? Note that I can import any other package. I tried using devtools because that's the only similar thing I can think of, but it doesn't exist in that universe. I am using R 4.0.3, Python version 3.7.6. An example of the use in Jupyter is:

```
import rpy2
import rpy2.robjects as robjects
from rpy2.robjects import pandas2ri
from rpy2.robjects.conversion import localconverter

utils.install_packages('rmgarch')  #;utils.install_pa...rugarch,...
robjects.r('''
        library('rugarch')
        library('quantmod')
        library('forecast')
        library('rmgarch')
        f <- function(u) {
            l<-u
        }
        ''')
```

The error output is:

```
RRuntimeError: Error in library("rmgarch") : there is no package called ‘rmgarch’
```
2020/11/17
[ "https://Stackoverflow.com/questions/64882718", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14290433/" ]
You can integrate with something like this:

```js
class BoxCast extends React.Component {
  componentDidMount() {
    const {broadcastChannelId, broadcastId} = this.props;
    this.$el = $(this.el);
    this.context = boxcast(this.$el);
    this.context.loadChannel(broadcastChannelId, {
      autoplay: true,
      showTitle: true,
      showDescription: true,
      showHighlights: true,
      showRelated: false,
      selectedBroadcastId: broadcastId
    });
  }

  componentWillUnmount() {
    this.context.unload();
  }

  render() {
    return <div ref={el => this.el = el} />;
  }
}
```

And then using it from some other React component:

```js
return (
  <BoxCast
    broadcastChannelId={/* Something */}
    broadcastId={/* Something */}
  />
)
```
This does not look like it's React compatible. If you insist on using it from React, you'll need to [integrate it](https://reactjs.org/docs/integrating-with-other-libraries.html). It might be a tedious job, and I discourage you from doing that. Look for another provider that claims to be React compatible.
4,734
54,896,846
What should I do if I want to get the sum of every 3 elements?

```
test_arr = [1,2,3,4,5,6,7,8]
```

It sounds like a map function

```
map_fn(arr, parallel_iterations = True, lambda a,b,c : a+b+c)
```

and the result of `map_fn(test_arr)` should be

```
[6,9,12,15,18,21]
```

which equals

```
[(1+2+3),(2+3+4),(3+4+5),(4+5+6),(5+6+7),(6+7+8)]
```

---

I have worked out a solution after reviewing the official docs: <https://www.tensorflow.org/api_docs/python/tf/map_fn>

```
import tensorflow as tf

def tf_map_elements_every(n, tf_op, input, dtype):
    if n >= input.shape[0]:
        return tf_op(input)
    else:
        return tf.map_fn(
            lambda params: tf_op(params),
            [input[i:i-n+1] if i != n-1 else input[i:] for i in range(n)],
            dtype=dtype
        )
```

Test

```
t = tf.constant([1, 2, 3, 4, 5, 6, 7, 8])
op = tf_map_elements_every(3, tf.reduce_sum, t, tf.int32)
sess = tf.Session()
sess.run(op)
```

`[Out]: array([ 6, 9, 12, 15, 18, 21])`
2019/02/27
[ "https://Stackoverflow.com/questions/54896846", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5958473/" ]
It's even easier: use a list comprehension. Slice the list into 3-element segments and take the sum of each. Wrap those in a list. ``` [sum(test_arr[i-2:i+1]) for i in range(2, len(test_arr))] ```
Simply loop through your array until you are 3 from the end.

```
# Takes a collection as argument
def map_function(array):
    # Initialise results and i
    results = []
    i = 0
    # While i is less than 3 positions away from the end of the array
    while i <= (len(array) - 3):
        # Add the sum of the next 3 elements to results
        results.append(array[i] + array[i + 1] + array[i + 2])
        # Increment i
        i += 1
    # Return the results
    return results
```
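If TensorFlow isn't actually a requirement, the same windowed sum can be written in plain Python by zipping shifted slices:

```python
# Each zip tuple is one 3-element window over the list.
test_arr = [1, 2, 3, 4, 5, 6, 7, 8]
sums = [a + b + c for a, b, c in zip(test_arr, test_arr[1:], test_arr[2:])]
print(sums)   # [6, 9, 12, 15, 18, 21]
```

`zip` stops at the shortest slice, so the windows never run off the end of the list.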
4,735
50,490,556
wx [event handlers](https://wxpython.org/Phoenix/docs/html/events_overview.html) attached with `.Bind(...)` receive as a parameter an event object with a `.Skip()` method. Calling `.Skip()` allows the event's default behaviour to happen; attaching a handler that does not call `.Skip()` suppresses the default behaviour. For instance, an `EVT_CHAR` handler that doesn't call `.Skip()` suppresses the default behaviour of entering the character into the field, thus blocking user input. How about `EVT_TEXT`? Does it have any default behaviour? It doesn't *seem* to - typing into a field with an `EVT_TEXT` handler *appears* to behave the same regardless of whether I call `.Skip()` or not. But is there some edge case or non-obvious effect that's affected by whether I call `.Skip()`, and which should therefore dictate my choice of whether to call it?
2018/05/23
[ "https://Stackoverflow.com/questions/50490556", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1709587/" ]
Not calling `wxEvent::Skip()` does 2 related but different things: it prevents the default event handler inside the underlying UI toolkit from processing the event, and it also prevents any other handlers of the same event in your own code from processing it. The first aspect is indeed not important for command events, which typically don't have any default event handlers at the toolkit level. The second one still is, however, especially because command events bubble up the window hierarchy by default. So if you bound some handler to your text control and also happen to have a catch-all `wxEVT_TEXT` handler at the frame level, you can easily see the difference between calling `Skip()` or not: the frame handler will only get the event if you do call it. From the practical point of view, you typically do *not* want to skip command event handlers, as they should be processed once and once only. But it's not an absolute interdiction, and there are situations when you may still do it, e.g. if your event handler didn't do anything with this particular event.
Generally speaking, calling Skip() inside an event handler only makes sense for non-wxCommandEvent handlers. You can just let the handler finish and your program will do the next event iteration. Since wxEVT\_TEXT is a wxCommandEvent event, calling Skip() does not make much sense. However, not calling Skip() on a non-wxCommandEvent is bad, and you really should know what you are doing if you omit such a call.
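The propagation rule described in these answers can be illustrated with a toy model; this is plain Python mimicking the `Skip()` contract, not wx itself:

```python
class Event:
    """Toy stand-in for wx.Event: Skip() lets propagation continue."""
    def __init__(self):
        self.skipped = False
    def Skip(self):
        self.skipped = True

def process(event, handlers):
    """Run handlers from most specific (control) outward (frame)."""
    fired = []
    for name, handler in handlers:
        fired.append(name)
        event.skipped = False
        handler(event)
        if not event.skipped:
            break          # handler consumed the event; stop bubbling
    return fired

# Control-level handler calls Skip(): the frame-level handler also runs.
print(process(Event(), [("control", lambda e: e.Skip()),
                        ("frame", lambda e: None)]))   # ['control', 'frame']
# Without Skip(), propagation stops at the control.
print(process(Event(), [("control", lambda e: None),
                        ("frame", lambda e: None)]))   # ['control']
```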
4,736
13,173,029
My friend has an application written in `C` that comes with a GUI made using `GTK` under Linux. Now we want to rewrite the GUI in python (`wxpython` or `PyQT`). I don't have experience with Python and don't know how to make Python communicate with C. I'd like to know if this is possible and if yes, how should I go about implementing it?
2012/11/01
[ "https://Stackoverflow.com/questions/13173029", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1703502/" ]
Yes, it's possible to call C functions from Python. Have a look at SWIG; Python also provides its own extension API, which you might want to look into. Also look into ctypes. LINKS: [Python Extension](http://docs.python.org/2/extending/extending.html) A simple example: I used Cygwin on Windows for this. My Python version on this machine is 2.6.8. I tested it with test.py loading the module called "myext.dll" and it works fine. You might want to modify the `Makefile` to make it work on your machine.

original.h
----------

```
#ifndef _ORIGINAL_H_
#define _ORIGINAL_H_

int _original_print(const char *data);

#endif /*_ORIGINAL_H_*/
```

original.c
----------

```
#include <stdio.h>
#include "original.h"

int _original_print(const char *data)
{
    return printf("o: %s", data);
}
```

stub.c
------

```
#include <Python.h>
#include "original.h"

static PyObject *myext_print(PyObject *, PyObject *);

static PyMethodDef Methods[] = {
    {"printx", myext_print, METH_VARARGS, "Print"},
    {NULL, NULL, 0, NULL}
};

PyMODINIT_FUNC initmyext(void)
{
    PyObject *m;
    m = Py_InitModule("myext", Methods);
}

static PyObject *myext_print(PyObject *self, PyObject *args)
{
    const char *data;
    int no_chars_printed;
    if (!PyArg_ParseTuple(args, "s", &data)) {
        return NULL;
    }
    no_chars_printed = _original_print(data);
    return Py_BuildValue("i", no_chars_printed);
}
```

Makefile
--------

```
PYTHON_INCLUDE = -I/usr/include/python2.6
PYTHON_LIB = -lpython2.6
USER_LIBRARY = -L/usr/lib
GCC = gcc -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC -DMAJOR_VERSION=1 -DMINOR_VERSION=0 -I/usr/include -I/usr/include/python2.6

win32 : myext.o
	gcc -shared myext.o $(USER_LIBRARY) $(PYTHON_LIB) -o myext.dll

linux : myext.o
	gcc -shared myext.o $(USER_LIBRARY) $(PYTHON_LIB) -o myext.so

myext.o: stub.o original.o
	ld -r stub.o original.o -o myext.o

stub.o: stub.c
	$(GCC) -c stub.c -o stub.o

original.o: original.c
	$(GCC) -c original.c -o original.o

clean:
	rm stub.o original.o stub.c~ original.c~ Makefile~
```
test.py ------- ``` import myext myext.printx('hello world') ``` OUTPUT ------ > > o: hello world > > >
> > Sorry but i don't have python experience so don't know how to make Python communicate with C program. > > > Yes, that's exactly how you do it. Turn your C code into a Python module, and then you can write the entire GUI in Python. See [Extending and Embedding the Python Interpreter](http://docs.python.org/2/extending/).
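As a contrast to the extension-module route above, `ctypes` needs no C stub at all. A minimal sketch, assuming a Linux/macOS process where libc symbols are visible via `CDLL(None)`:

```python
import ctypes

# Load the symbols of the current process (libc included on Linux/macOS)
# and call a plain C function directly, without writing any stub code.
libc = ctypes.CDLL(None)
libc.abs.argtypes = [ctypes.c_int]   # declare the C signature
libc.abs.restype = ctypes.c_int
print(libc.abs(-7))   # 7
```

For your own shared library you would pass its path to `ctypes.CDLL(...)` instead of `None`.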
4,737
8,746,586
I'm using [python-mock](http://www.voidspace.org.uk/python/mock/) to mock out a file open call. I would like to be able to pass in fake data this way, so I can verify that `read()` is being called as well as using test data without hitting the filesystem on tests. Here's what I've got so far:

```
file_mock = MagicMock(spec=file)
file_mock.read.return_value = 'test'
with patch('__builtin__.open', create=True) as mock_open:
    mock_open.return_value = file_mock
    with open('x') as f:
        print f.read()
```

The output of this is `<mock.Mock object at 0x8f4aaec>` instead of `'test'` as I would assume. What am I doing wrong in constructing this mock? Edit: Looks like this:

```
with open('x') as f:
    f.read()
```

and this:

```
f = open('x')
f.read()
```

are different objects. Using the mock as a context manager makes it return a new `Mock`, whereas calling it directly returns whatever I've defined in `mock_open.return_value`. Any ideas?
2012/01/05
[ "https://Stackoverflow.com/questions/8746586", "https://Stackoverflow.com", "https://Stackoverflow.com/users/112785/" ]
This sounds like a good use-case for a `StringIO` object that already implements the file interface. Maybe you can make a `file_mock = MagicMock(spec=file, wraps=StringIO('test'))`. Or you could just have your function accept a file-like object and pass it a `StringIO` instead of a real file, avoiding the need for ugly monkey-patching. Have you looked at the mock documentation? <http://www.voidspace.org.uk/python/mock/compare.html#mocking-the-builtin-open-used-as-a-context-manager>
Building on @tbc0's answer, to support Python 2 and 3 (multi-version tests are helpful when porting 2 to 3):

```
import sys

module_ = "builtins"
module_ = module_ if module_ in sys.modules else '__builtin__'

try:
    import unittest.mock as mock
except (ImportError,) as e:
    import mock

with mock.patch('%s.open' % module_, mock.mock_open(read_data='test')):
    with open('/dev/null') as f:
        print(f.read())
```
4,740
63,307,054
I am learning how to code a game in Python (version 3.6), and I have come across an error that has me lost. I tried to run my code and got an error message that traced back to sprite.py (a file that I imported from Python's library). This is the error message that popped up:

> File "C:\Users\aveil\AppData\Roaming\Python\Python36\site-packages\pygame\sprite.py", line 142, in add self.add(\*group) TypeError: add() argument after \* must be an iterable, not int

This is the code that the traceback led to:

```
has = self.__g.__contains__
for group in groups:
    if hasattr(group, '_spritegroup'):
        if not has(group):
            group.add_internal(self)
            self.add_internal(group)
    else:
        self.add(*group)
```

I did not paste the whole sprite.py file because it has 1.6k lines, but I hope this is enough context. I did not write sprite.py, and am still relatively new to coding, so this error has me stumped. I am not sure where the "int" is or how to change it from an integer to an "iterable". I would appreciate any suggestions!
2020/08/07
[ "https://Stackoverflow.com/questions/63307054", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You may find the `File.Replace` function useful; it also lets you keep a backup of the file (if you want): ``` string backup = destination + ".bak"; File.Delete(backup); File.Replace(source, destination, backup, true); ``` You can play a little with that. More info: <https://learn.microsoft.com/en-us/dotnet/api/system.io.file.replace?view=netcore-3.1>
If the new image has the same name, you can check the existence of the file and delete it before creating the new file. Note: Make sure the file path is correct (filename along with extension should also be included). ``` if (File.Exists("smb://serverUsername:ServerPassword@serverIP/sharefile/fileToDelete.jpg")) { File.Delete("smb://serverUsername:ServerPassword@serverIP/sharefile/fileToDelete.jpg"); } ```
4,742
29,925,783
I am pretty new to programming with python. So apologies in advance: I have two python scripts which should share variables. Furthermore the first script (first.py) should call second script (second.py) first.py: ``` import commands x=5 print x cmd = "path-to-second/second.py" ou = commands.getoutput(cmd) print x ``` second.py looks like this ``` print x x=10 print x ``` I would expect the output: 5 5 10 10 In principle I need a way to communicate between the two scripts. Any solution which does this job is perfectly fine. Thank you for your help! Tim
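For what it's worth, one common way to pass values between two Python processes is through environment variables plus the child's stdout; a sketch (the child code is inlined with `-c` here so the example stays self-contained, standing in for `second.py`):

```python
import os
import subprocess
import sys

# first.py (sketch): hand x to the child via the environment,
# then collect what the child prints.
x = 5
child_env = dict(os.environ, X=str(x))
child_code = "import os; x = int(os.environ['X']); print(x); print(x + 5)"
out = subprocess.run(
    [sys.executable, "-c", child_code],  # stands in for second.py
    env=child_env, capture_output=True, text=True,
).stdout.split()
print(out)  # ['5', '10']
```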
2015/04/28
[ "https://Stackoverflow.com/questions/29925783", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4723963/" ]
C++ does not provide some magical handling for your abstract logic, it cannot just work out that `File1=File2+File3` means you want to merge two files together. Firstly, those variables would have to be some form of 'type', and to have the logic you want, a type of your own devising. It would be constructed from a `std::string` which would be the file name. You would then need to define an `operator+`, this operator would have to some how combine the file names to produce a new one, then make a new file in the operating system, and then add the content of the other two files, finally return a new instance of this type which has this new file name. As I said in a comment though, you really shouldn't do this. Generally speaking, you should not overload operators in C++, unless doing so has VERY clear and obvious results. For instance, a (maths) vector class, fairly clear what `vector_a + vector_b` would (should) do. However, these 'files', it's not so clear, just look at the questions people had to ask. Just one of those should raise a big red flag that it is not a good idea. You should just use a 'normal' function to do what you want to do, something with a name that makes it clear what is going on.
Unfortunately you cannot add two files together like that. Instead you have to use `ifstream` and `ofstream` from the `fstream` library. Here is an example that you can use: ``` std::ifstream file1( "Data1.txt" ) ; std::ifstream file2( "Data2.txt" ) ; std::ofstream combined_file( "dataOut.txt" ) ; combined_file << file1.rdbuf() << file2.rdbuf() ; ```
4,743
52,781,297
I setup Ubuntu server 18.04 LTS, LAMP, and mod\_mono (which appears to be working fine alongside PHP now by the way.) Got python working too; at first it gave an HTTP "Internal Server Error" message. `sudo chmod +x myfile.py` fixed this error and the code python generates is displayed fine. But any time the execute permission is removed from the file (such as by uploading a new version of the file), the execute bit is stripped and it breaks again. A work-around was implemented with incrontab, where the cgi-bin folder was monitored for changes and any new writes caused `chmod +x %f` to be ran on them. This worked for awhile, then stopped, and seems a hokey solution at best. Perl, PHP, even ASPX do not need to be marked executable - only python. **Is there any way Apache can "run" python *without* the file marked as executable?**
2018/10/12
[ "https://Stackoverflow.com/questions/52781297", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4626146/" ]
I really struggled with this and finally managed to clear it with the following approach: ``` require './aws/aws-autoloader.php'; use Aws\Credentials\Credentials; use GuzzleHttp\Client; use GuzzleHttp\Psr7\Request; use Aws\Signature\SignatureV4; use Aws\Credentials\CredentialProvider; $url = '<your URL>'; $region = '<your region>'; $json = json_encode(["Yourpayload"=>"Please"]); $provider = CredentialProvider::defaultProvider(); $credentials = $provider()->wait(); # $credentials = new Credentials($access_key, $secret_key); # if you do not run from ec2 $client = new Client(); $request = new Request('POST', $url, [], $json); $s4 = new SignatureV4("execute-api", $region); $signedrequest = $s4->signRequest($request, $credentials); $response = $client->send($signedrequest); echo($response->getBody()); ``` This example assumes you are running from an EC2 or something that has an instance profile that is allowed to access this API gateway component and the AWS PHP SDK in the ./aws directory.
You can install the AWS PHP SDK via Composer (`composer require aws/aws-sdk-php`); here is the GitHub repo: <https://github.com/aws/aws-sdk-php>. In case you want to do something simple, or the SDK doesn't have what you are looking for, you can use `curl` in PHP to post data. ``` $ch = curl_init(); $data = http_build_query([ "entity" => "Business", "action" => "read", "limit" => 100 ]); curl_setopt_array($ch, [ CURLOPT_URL => "https://myendpoint.com/api", CURLOPT_FOLLOWLOCATION => true, CURLOPT_RETURNTRANSFER => true, CURLOPT_POST => true, CURLOPT_POSTFIELDS => $data ]); $response = curl_exec($ch); $error = curl_error($ch); ```
4,744
12,147,394
I have a C file that has a bunch of #defines for bits that I'd like to reference from python. There's enough of them that I'd rather not copy them into my python code, instead is there an accepted method to reference them directly from python? Note: I know I can just open the header file and parse it, that would be simple, but if there's a more pythonic way, I'd like to use it. Edit: These are very simple #defines that define the meanings of bits in a mask, for example: ``` #define FOO_A 0x3 #define FOO_B 0x5 ```
2012/08/27
[ "https://Stackoverflow.com/questions/12147394", "https://Stackoverflow.com", "https://Stackoverflow.com/users/354209/" ]
Running under the assumption that the C .h file contains only #defines (and therefore has nothing external to link against), then the following would work with swig 2.0 (http://www.swig.org/) and python 2.7 (tested). Suppose the file containing just defines is named just\_defines.h as above: ``` #define FOO_A 0x3 #define FOO_B 0x5 ``` Then: ``` swig -python -module just just_defines.h ## generates just_defines.py and just_defines_wrap.c gcc -c -fpic just_defines_wrap.c -I/usr/include/python2.7 -I. ## creates just_defines_wrap.o gcc -shared just_defines_wrap.o -o _just.so ## create _just.so, goes with just_defines.py ``` Usage: ``` $ python Python 2.7.3 (default, Aug 1 2012, 05:16:07) [GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import just >>> dir(just) ['FOO_A', 'FOO_B', '__builtins__', '__doc__', '__file__', '__name__', '__package__', '_just', '_newclass', '_object', '_swig_getattr', '_swig_property', '_swig_repr', '_swig_setattr', '_swig_setattr_nondynamic'] >>> just.FOO_A 3 >>> just.FOO_B 5 >>> ``` If the .h file also contains entry points, then you need to link against some library (or more) to resolve those entry points. That makes the solution a little more complicated since you may have to hunt down the correct libs. But for a "just defines case" you don't have to worry about this.
`#define`s are macros that have no meaning whatsoever outside of your C compiler's preprocessor. As such, they are the bane of multi-language programmers everywhere. (For example, see this Ada question: [Setting the license for modules in the linux kernel](https://stackoverflow.com/questions/11927590/setting-the-license-for-modules-in-the-linux-kernel) from two weeks ago). Short of running your source code through the C-preprocessor, there really is no good way to deal with them. I typically just figure out what they evaluate to (in complex cases, often there's no better way to do this than to actually compile and run the damn code!), and hard-code that value into my program. The (well one of the) annoying parts is that the C preprocessor is considered by C coders to be a very simple little thing that they often use without even giving a second thought to. As a result, they tend to be shocked that it causes big problems for interoperability, while we can deal with most other problems C throws at us fairly easily. In the simple case shown above, by far the easiest way to handle it would be to encode the same two values in constants in your python program somewhere. If keeping up with changes is a big deal, it probably wouldn't be too much trouble to write a Python program to parse those values out of the file. However, you'd have to realise that your C code would only re-evaluate those values on a compile, while your python program would do it whenever it runs (and thus should probably only be run when the C code is also compiled).
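To make that last suggestion concrete, here is a small sketch of such a parser; it only understands the simple `#define NAME VALUE` form shown in the question, nothing fancier:

```python
import re

DEFINE_RE = re.compile(r"^\s*#define\s+(\w+)\s+(\S+)")

def parse_defines(text):
    """Collect simple '#define NAME VALUE' pairs, evaluating int literals."""
    defines = {}
    for line in text.splitlines():
        m = DEFINE_RE.match(line)
        if m:
            name, value = m.groups()
            defines[name] = int(value, 0)  # base 0 handles 0x..., 0o..., decimal
    return defines

header = """
#define FOO_A 0x3
#define FOO_B 0x5
"""
print(parse_defines(header))  # {'FOO_A': 3, 'FOO_B': 5}
```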
4,745
32,367,279
I start Django server with `python manage.py runserver` and then quit with CONTROL-C, but I can still access urls in `ROOT_URLCONF`, why?
2015/09/03
[ "https://Stackoverflow.com/questions/32367279", "https://Stackoverflow.com", "https://Stackoverflow.com/users/456105/" ]
Probably you left another process running somewhere else. Here is how you can list all processes whose command contains `manage.py`: ``` ps ax | grep manage.py ``` Here is how you can kill them: ``` pkill -f manage.py ```
Without seeing your script, I would have to say that you have blocking calls, such as socket.recv() or os.system(executable) running at the time of the CTRL+C. Your script is stuck after the CTRL+C because python executes the `KeyboardInterrupt` AFTER the current command is completed, but before the next one. If there is a blocking function waiting for a response, such as an exit code, packet, or URL, until it times out, you're stuck unless you abort it with task manager or by closing the console. In the case of threading, it kills all threads after it completes its current command. Again, if you have a blocking call, the thread will not exit until it receives its response.
4,754
71,484,131
I am trying to extract a sub-array using logical indexes as, ``` a = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]) a Out[45]: array([[ 1, 2, 3, 4], [ 5, 6, 7, 8], [ 9, 10, 11, 12], [13, 14, 15, 16]]) b = np.array([False, True, False, True]) a[b, b] Out[49]: array([ 6, 16]) ``` python evaluates the logical indexes in b per element of a. However in matlab you can do something like ``` >> a = [1 2 3 4; 5 6 7 8; 9 10 11 12; 13 14 15 16] a = 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 >> b = [2 4] b = 2 4 >> a(b, b) ans = 6 8 14 16 ``` how can I achieve the same result in python without doing, ``` c = a[:, b] c[b,:] Out[51]: array([[ 6, 8], [14, 16]]) ```
2022/03/15
[ "https://Stackoverflow.com/questions/71484131", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10976198/" ]
Numpy supports logical indexing, though it is a little different than what you are familiar with in MATLAB. To get the results you want you can do the following: ``` a[b][:,b] # the first brackets isolate the rows, the second isolate the columns Out[27]: array([[ 6, 8], [14, 16]]) ``` The more "numpy" method will make sense once you understand what happened in your case. `b = np.array([False, True, False, True])` is similar to `b=np.array([1,3])` and will be easier for me to explain. When writing `a[[1,3],[1,3]]` what happens is that numpy creates a shape (2,) array, and places `a[1,1]` in the first location and `a[3,3]` in the second location. To create an output of shape (2,2), the indexing must have the same dimensionality. Therefore, the following will get your result: ``` a[[[1,1],[3,3]],[[1,3],[1,3]]] Out[28]: array([[ 6, 8], [14, 16]]) ``` **Explanation**: The indexing arrays are: ``` temp_rows = np.array([[1,1], [3,3]]) temp_cols = np.array([[1,3], [1,3]]) ``` both arrays have dimensions of (2,2) and therefore, numpy will create an output of shape (2,2). Then, it places `a[1,1]` in location [0,0], `a[1,3]` in [0,1], `a[3,1]` in location [1,0] and `a[3,3]` in location [1,1]. This can be expanded to any shape, but for your purposes, you wanted a shape of (2,2). After figuring this out, you can make things even simpler by utilizing the fact that if you insert a (2,1) array in the 1st dimension and a (1,2) array in the 2nd dimension, numpy will perform the broadcasting, similar to the MATLAB operation. This means that by using: ``` temp_rows = np.array([[1],[3]]) temp_cols = np.array([1,3]) ``` you can do: ``` a[[[1],[3]], [1,3]] Out[29]: array([[ 6, 8], [14, 16]]) ```
You could use [`np.ix_`](https://numpy.org/doc/stable/reference/generated/numpy.ix_.html) here. ``` a[np.ix_(b, b)] # array([[ 6, 8], # [14, 16]]) ``` --- Output returned by `np.ix_` ``` >>> np.ix_(b, b) (array([[1], [3]]), array([[1, 3]])) ```
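Both spellings select the same submatrix; a quick sanity check (assuming `numpy` is installed):

```python
import numpy as np

a = np.arange(1, 17).reshape(4, 4)
b = np.array([False, True, False, True])

sub_ix = a[np.ix_(b, b)]   # np.ix_ builds broadcastable index arrays
sub_chain = a[b][:, b]     # row mask first, then column mask
print(sub_ix)
# [[ 6  8]
#  [14 16]]
print(np.array_equal(sub_ix, sub_chain))  # True
```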
4,757
60,393,214
In django, I have attempted to switch from using an sqlite3 database to postgresql. `settings.py` has been switched to connect to postgres. Both `python manage.py makemigrations` and `python manage.py migrate` run without errors. `makemigrations` says that it creates the models for the database, however when running `migrate`, it says there is no changes to be made. The django server will run, however when clicking on a specfic table in the database in the `/admin` webpage, it throws the error: ``` ProgrammingError at /admin/app/tablename/ relation "app_tablename" does not exist LINE 1: SELECT COUNT(*) AS "__count" FROM "app_tablename" ``` With the same code (other than `settings.py` database connection) this worked when using sqlite3.
2020/02/25
[ "https://Stackoverflow.com/questions/60393214", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12959523/" ]
Define the datapoints on the old domain and build the interpolant there, then evaluate it on the new grid, i.e. ``` import numpy as np from scipy import interpolate from scipy import misc import matplotlib.pyplot as plt arr = misc.face(gray=True) x = np.linspace(0, 1, arr.shape[0]) y = np.linspace(0, 1, arr.shape[1]) f = interpolate.interp2d(y, x, arr, kind='cubic') x2 = np.linspace(0, 1, 1000) y2 = np.linspace(0, 1, 1600) arr2 = f(y2, x2) arr.shape # (768, 1024) arr2.shape # (1000, 1600) plt.figure() plt.imshow(arr) plt.figure() plt.imshow(arr2) ```
[skimage.transform.resize](https://scikit-image.org/docs/stable/api/skimage.transform.html#skimage.transform.resize) is a very convenient way to do this: ``` import numpy as np from skimage.transform import resize import matplotlib.pyplot as plt from scipy import misc arr = misc.face(gray=True) dim1, dim2 = 1000, 1600 arr2= resize(arr,(dim1,dim2),order=3) #order = 3 for cubic spline print(arr2.shape) plt.figure() plt.imshow(arr) plt.figure() plt.imshow(arr2) ```
4,760
26,093,807
I have installed Thrift 0.8.0 on Ubuntu 12.04. I followed all the commands correctly without any error, and after installation it works. **Now I want to use PHP and Java with Thrift, but in the build summary below it only shows YES for C++ and Python; the two languages I need, Java and PHP, show NO. How can I use PHP and Java with Thrift? Is there a library for Java and PHP?** ``` thrift 0.8.0 Building code generators ..... : Building C++ Library ......... : yes Building C (GLib) Library .... : no Building Java Library ........ : no Building C# Library .......... : no Building Python Library ...... : yes Building Ruby Library ........ : no Building Haskell Library ..... : no Building Perl Library ........ : no Building PHP Library ......... : no Building Erlang Library ...... : no Building Go Library .......... : no Building TZlibTransport ...... : yes Building TNonblockingServer .. : yes Using Python ................. : /usr/bin/python ```
2014/09/29
[ "https://Stackoverflow.com/questions/26093807", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3967915/" ]
First, download the source version of Thrift. I would strongly recommend using a newer version if possible. There are several ways to include the Thrift Java library (may have to change slightly for your Thrift version): If you are using maven, you can add the maven coordinates to your pom.xml: ``` <dependency> <groupId>org.apache.thrift</groupId> <artifactId>libthrift</artifactId> <version>0.9.1</version> </dependency> ``` Alternatively you can just download the JAR and add it your project: <http://central.maven.org/maven2/org/apache/thrift/libthrift/0.9.1/libthrift-0.9.1.jar> If you are using a version that has not been published to the central maven repositories, you can download the source tarball and navigate to the lib/java directory and build it with Apache Ant by typing: ``` ant ``` The library JAR will be in the lib/java/build directory. Optionally you can add the freshly built JAR to your local Maven repository: ``` mvn install:install-file -DartifactId=libthrift -DgroupId=org.apache.thrift -Dvers ``` For the PHP library, navigate to the `lib/php/src` directory and copy the PHP files into your project. You can then use the Thrift\ClassLoader\ThriftClassLoader class or the autoload.php script to include the Thrift PHP library. No build necessary unless you are trying to use the native PHP extension that implements the thrift protocol.
* for Java: you can download the .jar library and javadoc here: <http://repo1.maven.org/maven2/org/apache/thrift/libthrift/0.9.1/>
* for PHP: copy [thrift-source]/lib/php/lib into your project and use it. Here is an example: <https://thrift.apache.org/tutorial/php>

P/s: I want to use a .dll PHP extension rather than the PHP source files. If anyone cares about this, we can discuss it here: [How can write or find a PHP extension for Apache Thrift](https://stackoverflow.com/questions/26623610/how-can-write-or-find-a-php-extension-for-apache-thrift)
4,761
43,383,686
In one of my shell scripts I am using the eval command like below to evaluate the environment path - ``` CONFIGFILE='config.txt' ###Read File Contents to Variables while IFS=\| read TEMP_DIR_NAME EXT do eval DIR_NAME=$TEMP_DIR_NAME echo $DIR_NAME done < "$CONFIGFILE" ``` Output: ``` /path/to/certain/location/folder1 /path/to/certain/location/folder2/another ``` In `config.txt` - ``` $MY_PATH/folder1|.txt $MY_PATH/folder2/another|.jpg ``` What is MY\_PATH? ``` export | grep MY_PATH declare -x MY_PATH="/path/to/certain/location" ``` So, is there any way I can get the path from Python code like I can in shell with `eval`?
2017/04/13
[ "https://Stackoverflow.com/questions/43383686", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2350145/" ]
You can do it a couple of ways depending on where you want to set MY\_PATH. `os.path.expandvars()` expands shell-like templates using the current environment. So if MY\_PATH is set before calling, you do ``` td@mintyfresh ~/tmp $ export MY_PATH=/path/to/certain/location td@mintyfresh ~/tmp $ python3 Python 3.5.2 (default, Nov 17 2016, 17:05:23) [GCC 5.4.0 20160609] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import os >>> with open('config.txt') as fp: ... for line in fp: ... cfg_path = os.path.expandvars(line.split('|')[0]) ... print(cfg_path) ... /path/to/certain/location/folder1 /path/to/certain/location/folder2/another ``` If MY\_PATH is defined in the python program, you can use `string.Template` to expand shell-like variables using a local `dict` or even keyword arguments. ``` >>> import string >>> with open('config.txt') as fp: ... for line in fp: ... cfg_path = string.Template(line.split('|')[0]).substitute( ... MY_PATH="/path/to/certain/location") ... print(cfg_path) ... /path/to/certain/location/folder1 /path/to/certain/location/folder2/another ```
You could use os.path.expandvars() (from [Expanding Environment variable in string using python](https://stackoverflow.com/questions/5258647/expanding-environment-variable-in-string-using-python)): ``` import os config_file = 'config.txt' with open(config_file) as f: for line in f: temp_dir_name, ext = line.split('|') dir_name = os.path.expandvars(temp_dir_name) print(dir_name) ```
4,762
69,442,782
At first I thought it was an error connecting to GitHub, but that doesn't seem to be it, since the first part of the script fires up normally. Full output for context:

```
┌──(kali㉿kali)-[~/bhptrojan]
└─$ python3 git_trojan.py                                        130 ⨯
[*] Attempting to retrieve dirlister
[*] Attempting to retrieve environment
Traceback (most recent call last):
  File "<frozen importlib._bootstrap>", line 919, in _find_spec
AttributeError: 'GitImporter' object has no attribute 'find_spec'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/kali/bhptrojan/git_trojan.py", line 93, in <module>
    trojan.run()
  File "/home/kali/bhptrojan/git_trojan.py", line 59, in run
    config = self.get_config()
  File "/home/kali/bhptrojan/git_trojan.py", line 41, in get_config
    exec("import %s" % task['module'])
  File "<string>", line 1, in <module>
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 982, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 921, in _find_spec
  File "<frozen importlib._bootstrap>", line 895, in _find_spec_legacy
  File "/home/kali/bhptrojan/git_trojan.py", line 74, in find_module
    new_library = get_file_contents('modules', f'{name}.py', self.repo)
  File "/home/kali/bhptrojan/git_trojan.py", line 23, in get_file_contents
    return repo.file_contents(f'{dirname}/{module_name}').content
  File "/home/kali/.local/lib/python3.9/site-packages/github3/repos/repo.py", line 1672, in file_contents
    json = self._json(self._get(url, params={"ref": ref}), 200)
  File "/home/kali/.local/lib/python3.9/site-packages/github3/models.py", line 155, in _json
    raise exceptions.error_for(response)
github3.exceptions.NotFoundError: 404 Not Found
```

I also get a few other errors as seen above, but I think they are all coming from the one I mentioned; I am not sure. Full code here.
```
import base64
import github3
import importlib
import json
import random
import sys
import threading
import time

from datetime import datetime

def github_connect():
    with open ('mytoken.txt') as f:
        token = f.read()
    user = 'Sebthelad'
    sess = github3.login(token=token)
    return sess.repository(user, 'bhptrojan')

def get_file_contents(dirname, module_name, repo):
    return repo.file_contents(f'{dirname}/{module_name}').content

class Trojan:
    def __init__(self,id):
        self.id = id
        self.config_file = f'{id}.json'
        self.data_path = f'data/{id}/'
        self.repo = github_connect()

    def get_config(self):
        config_json = get_file_contents('config', self.config_file, self.repo)
        config = json.loads(base64.b64decode(config_json))

        for task in config:
            if task['module'] not in sys.modules:
                exec("import %s" % task['module'])
        return config

    def module_runner(self, module):
        result = sys.modules[module].run()
        self.store_module_result(result)

    def store_module_result(self, data):
        message = datetime.now().isoformat()
        remote_path = f'data/{self.id}/{message}.data'
        bindata = bytes('%r' % data, 'utf-8')
        self.repo.create_file(remote_path,message,base64.b64decode(bindata))

    def run(self):
        while True:
            config = self.get_config()
            for task in config:
                thread = threading.Thread(target=self.module_runner,args=(task['module'],))
                thread.start()
                time.sleep(random.randint(1,10))

            time.sleep(random.randint(30*60, 3*60*60))

class GitImporter:
    def __init__(self):
        self.current_module_code = ""

    def find_module(self, name, path=None):
        print("[*] Attempting to retrieve %s" % name)
        self.repo = github_connect()

        new_library = get_file_contents('modules', f'{name}.py', self.repo)
        if new_library is not None:
            self.current_module_code = base64.b64decode(new_library)
            return self

    def load_module(self,name):
        spec = importlib.util.spec_from_loader(name, loader=None, origin=self.repo.git_url)
        new_module = importlib.util.module_from_spec(spec)
        exec(self.current_module_code, new_module.__dict__)
        sys.modules[spec.name] = new_module
        return new_module

if __name__ == '__main__':
    sys.meta_path.append(GitImporter())
    trojan = Trojan('abc')
    trojan.run()
```

Thanks in advance. P.S: If you find any other issues in my code please let me know.
2021/10/04
[ "https://Stackoverflow.com/questions/69442782", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16464360/" ]
Here is a possible method of doing this, which should be slightly faster than excessive use of loops. Method: 1. Split each review into its individual words. 2. Create a dictionary with the key being the review word and the value being the frequency. 3. Loop through every topic, then loop through every keyword in that topic. 4. If the keyword is in `reviewsDict`, get the number of occurrences and add it on to `count`'s occurrences. 5. Return dictionary result containing topics and their frequencies. Solution: ``` func countOccurance(topics: [String: [String]], reviews: [String]) -> [String : Int] { var reviewsDict: [String: Int] = [:] for review in reviews { let reviewWords = review.components(separatedBy: CharacterSet.letters.inverted) for word in reviewWords { guard !word.isEmpty else { continue } reviewsDict[word.lowercased(), default: 0] += 1 } } var count: [String: Int] = [:] for (topic, topicKeywords) in topics { for topicKeyword in topicKeywords { guard let occurrences = reviewsDict[topicKeyword] else { continue } count[topic, default: 0] += occurrences } } return count } ``` Result: > > > ``` > 0 : (key: "price", value: 2) > 1 : (key: "business", value: 1) > > ``` > >
I think your `countOccurance(topics:reviews:)` function is violating the single responsibility principle (it's not really counting occurrences, it's also filtering words). As a result, it's very specialized to your one use-case, and you won't find any built-in facilities to help you. On the other hand, if you break down the problem into smaller, simpler, generic steps, you can leverage existing APIs. Here's how I would do this: I don't know how familiar you are with the Sequence APIs, so I added some comments. Of course, you should remove these from your real code. I've also added some intermediate variables. I think their names act as useful documentation (certainly better than using comments), but that's a matter of taste. ``` extension Sequence where Element: Hashable { typealias Histogram = [Element: Int] func histogram() -> Histogram { // I really wish this was built-in :( reduce(into: [:]) { acc, word in acc[word, default: 0] += 1 } } } let topics = [ "price" : ["cheap", "expensive", "price"], "business" : ["small", "medium", "large"] ] // Invert the "topics" dictionary, to obtain a dictionary that can tell you what topic a keyword belongs to. let topicsByKeyword = Dictionary(uniqueKeysWithValues: topics.lazy.flatMap { topic, keywords in keywords.map { keyword in (key: keyword, value: topic) } } ) let reviews = ["large company with expensive items. Some are very cheap"] let reviewWords = reviews .flatMap { $0.components(separatedBy: CharacterSet.letters.inverted) } // Get a flat array of all words in all reviews .filter { !$0.isEmpty } // Filter out the empty words .map { $0.lowercased() } // Lowercase them all let reviewTopicKeywords = reviewWords .compactMap { word in topicsByKeyword[word] } // Map words to the topics they represent let reviewTopicKeywordCounts = reviewTopicKeywords.histogram() // Count the occurrences of the keywords, which is our final result. 
``` Using a type might help organize some of these related behaviours: ``` import Foundation extension Sequence where Element: Hashable { typealias Histogram = [Element: Int] func histogram() -> Histogram { reduce(into: [:]) { acc, word in acc[word, default: 0] += 1 } } } struct TopicKeywordCounter { let topicsByKeyword: [String: String] init(keywordsByTopic: [String: [String]]) { // Invert the "topics" dictionary, to obtain a dictionary that can tell you what topic a keyword belongs to. self.topicsByKeyword = Dictionary(uniqueKeysWithValues: keywordsByTopic.lazy.flatMap { topic, keywords in keywords.map { keyword in (key: keyword, value: topic) } } ) } public func countOccurances(in reivews: [String]) -> [String: Int] { let allReviewTopicKeywords = reivews.flatMap { review -> [String] in let reviewWords = allSanitzedWords(in: review) let reviewKeywords = mapWordsToTopics(from: reviewWords) return reviewKeywords } return allReviewTopicKeywords.histogram() } private func allSanitzedWords(in review: String) -> [String] { review .components(separatedBy: CharacterSet.letters.inverted) .filter { !$0.isEmpty } .map { $0.lowercased() } } private func mapWordsToTopics(from words: [String]) -> [String] { words.compactMap { topicsByKeyword[$0] } } } // Make your TopicKeywordCounter let topicKeywordCounter = TopicKeywordCounter(keywordsByTopic: [ "price" : ["cheap", "expensive", "price"], "business" : ["small", "medium", "large"] ]) let reviews = ["large company with expensive items. Some are very cheap"] // ...then use it for any arrays of you reviews you want let reviewTopicKeywordCounts = topicKeywordCounter.countOccurances(in: reviews) print(reviewTopicKeywordCounts) ``` Let me know if you have any questions!
4,763
21,559,433
I have the following code in my client: ``` data = {"method": 2,"read": 3} s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect(server_address) req = json.dumps(data) s.send(req) ``` and I am trying the following in my server: ``` # Treat the socket as a file stream. worker = self.conn.makefile(mode="rw") # Read the request in a serialized form (JSON). request = worker.readline() result = json.loads(request) print(result) ``` and I am getting the `No JSON object could be decoded` error. I am using python 3.3. I cannot understand where my mistake is; it seems that the send method does not send a JSON object. Any idea? `Edit`: I fixed the JSON format, the problem now is `<class 'TypeError'>: unsupported operand type(s) for +: 'NoneType' and 'str'` on the server and `s.send(req) TypeError: 'str' does not support the buffer interface` on the client
2014/02/04
[ "https://Stackoverflow.com/questions/21559433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2277094/" ]
I'm assuming you're using python 3, judging by the errors. You'll need to encode your data into bytes. Sockets cannot directly send python3 strings.
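Concretely, the usual fix is an encode/decode pair around the JSON; a sketch with the actual socket calls omitted:

```python
import json

data = {"method": 2, "read": 3}

# Client side: serialize, then encode to bytes before s.send(...).
# A trailing newline lets a readline()-based server find the message end.
payload = (json.dumps(data) + "\n").encode("utf-8")
print(type(payload))  # <class 'bytes'>

# Server side: decode the received bytes back to str before json.loads().
result = json.loads(payload.decode("utf-8"))
print(result == data)  # True
```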
Your JSON isn't valid. Put it through JSON lint to find the error. <http://jsonlint.com/>
4,764
27,849,412
This is the error when I try to get anything with pip3. I'm not sure what to do.

```
Exception:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/pip/basecommand.py", line 122, in main
    status = self.run(options, args)
  File "/usr/lib/python3/dist-packages/pip/commands/install.py", line 283, in run
    requirement_set.install(install_options, global_options, root=options.root_path)
  File "/usr/lib/python3/dist-packages/pip/req.py", line 1435, in install
    requirement.install(install_options, global_options, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/pip/req.py", line 671, in install
    self.move_wheel_files(self.source_dir, root=root)
  File "/usr/lib/python3/dist-packages/pip/req.py", line 901, in move_wheel_files
    pycompile=self.pycompile,
  File "/usr/lib/python3/dist-packages/pip/wheel.py", line 206, in move_wheel_files
    clobber(source, lib_dir, True)
  File "/usr/lib/python3/dist-packages/pip/wheel.py", line 193, in clobber
    os.makedirs(destsubdir)
  File "/usr/lib/python3.4/os.py", line 237, in makedirs
    mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/usr/local/lib/python3.4/dist-packages/Django-1.7.2.dist-info'

Storing debug log for failure in /home/omega/.pip/pip.log
```
2015/01/08
[ "https://Stackoverflow.com/questions/27849412", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4358680/" ]
just install them using --user option which install the package only for the current user and not for all ``` pip install xxxxxx --user ```
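The two install locations can be inspected with the standard `site` module; a small sketch of why `--user` avoids the permission error:

```python
import site
import sys

# "pip3 install --user <pkg>" writes here -- inside your home directory,
# so no root rights are needed:
print(site.getusersitepackages())

# A plain "pip3 install <pkg>" writes into the interpreter's prefix,
# which on system Pythons usually requires sudo:
print(sys.prefix)
```

If the traceback mentions `/usr/local/lib/...`, it is the second location that pip could not write to.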
You need to use `sudo` to install globally or have permissions to write to the folder. Or as @Alasdair commented using a [virtualenv](http://docs.python-guide.org/en/latest/dev/virtualenvs/) is a better option.
4,766
50,963,625
I'm using Python 3.6 through Spyder in Anaconda3. I have both the Anaconda installation and a "clean" python installation. Before I installed the "clean" python, when I ran the `Python -V` command in cmd I got the following version description: `Python 3.6.5 :: Anaconda, Inc.` Now when I run the command it just says `Python 3.6.5`, and the `pip list` is a whole lot shorter. Whenever I open Spyder and find some package that I don't have... how would I go about installing said package? If I just open cmd and write `pip install ...` it will install in the "clean" python directory. How do I tell it to connect to Spyder?
2018/06/21
[ "https://Stackoverflow.com/questions/50963625", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3114229/" ]
I know it's a very late answer, but it may help other people. When you are working with Anaconda you can use the base environment or create a new one (which may be what you call a "clean" Python installation). To do that:

* Open your Anaconda Navigator
* Go to "Environments"
* Click the "Create" button. Here, by the way, you can choose your Python version

Then, to install your library, you can use the Anaconda GUI:

* Double-click on your environment
* On the right side you have all your installed libraries. In the list box select "Not installed"
* Look for your library, check it and click "Apply" at the bottom right

You can also do it in your Windows console (cmd); I prefer this way (more transparent, and you can see what's going on):

* Open your console
* `conda activate yourEnvName`
* `conda install -n yourEnvName yourLib`
* *Only if* your conda install did not find your library, do `pip install yourLib`
* At the end, `conda deactivate`

/!\ If you use this way, close your Anaconda GUI while you are doing it.

If you want, you can find your environment(s) (on Windows) in C:\Users\XxUserNamexX\AppData\Local\Continuum\anaconda3\envs. Each folder contains the libraries for the named environment. Hope it helps.

PS: Note that it is important to launch Spyder through the Anaconda GUI if you want Spyder to find your library.
There is a pip.exe included in the Anaconda/Spyder package which can cleanly add modules to Spyder. It's not installed in the Windows path by default, probably so it won't interfere with the "normal" pip in my "normal" python package. Check "/c/Users/myname/Anaconda3/Scripts/pip.exe". It seems to depend on local DLLs - it did not work (just hung) until I cd'd into its directory. Once there I used it to install pymongo in the usual way, and the pymongo package was picked up by Spyder. Hope that helps...
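When two Python installations coexist, the practical question is always which interpreter a given `pip` feeds. A small sketch (nothing Anaconda-specific) to run inside Spyder's console and inside cmd — if the two outputs differ, the installs do not share packages:

```python
import sys
import sysconfig

# The interpreter this session runs on -- the one that owns its packages:
print(sys.executable)

# Where this interpreter looks for installed packages:
print(sysconfig.get_paths()["purelib"])

# To install into a *specific* interpreter from any shell, call its pip module:
#   C:\path\to\that\python.exe -m pip install somepackage
```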
4,769
30,856,274
I am new to python development using virtualenv. I have installed Python 2.7, pip, virtualenv and virtualenvwrapper on Windows, and I am using Windows PowerShell. I have gone through lots of tutorials for setting this up. Most of them contained the same steps, and almost all of them stopped short of explaining what to do *after* the virtualenv was created.

1. How do I actually work in a virtualenv? Suppose I want to create a new Flask application after installing that package in my new virtualenv (e.g. testenv).
2. If I already have an existing project and I want to put it inside a newly created virtualenv, how do I do that? What should the folder structure look like?
3. My understanding of a virtualenv is that it provides a sandbox for your application by isolating it and keeping all its dependencies to itself in that particular env (not sharing them with others). Have I understood it wrong?

Please help me clear this up.
2015/06/15
[ "https://Stackoverflow.com/questions/30856274", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4051896/" ]
> How do I actually work in a virtualenv? Suppose I want to create a new flask application after installing that package in my new virtualenv (e.g. testenv).

You open up Command Prompt and activate the virtualenv:

```
> \path\to\env\Scripts\activate
```

When you run `python` and `pip`, they run in the virtualenv. You have to do this for every Command Prompt window, since working in a virtualenv is really just running `C:\path\to\env\bin\python` instead of just `python` and `C:\path\to\env\bin\pip` instead of `pip`.

> If I already have an existing project and I want to put it inside a newly created virtualenv, how do I do that? What should the folder structure look like?

It doesn't matter. When you install Python packages, they get installed globally into `C:\Python27\site-packages`. With virtualenv, you can create isolated Python environments that have their own packages, so if you're working on two projects that require different versions of a package, they can coexist without any issues.

Some people make a folder for their virtualenvs (like `C:\Users\you\Virtualenvs\my_website`). You can also store it with your project (like `C:\Users\you\Projects\my_website\venv`). Once you activate it, the location doesn't matter. I use the latter.

> My understanding of a virtualenv is that it provides a sandbox for your application by isolating it and keeping all its dependencies to itself in that particular env (not sharing them with others). Have I understood it wrong?

Nope. The only point that I would clarify is that the "sandbox" is only for Python's packages; it doesn't affect your application in any way.
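A quick way to check point 3 from inside a script is to ask whether the interpreter currently running is an isolated environment or the global one. This is a sketch that covers both the classic virtualenv marker and the newer `base_prefix` convention:

```python
import sys

def in_virtualenv():
    # Classic virtualenv sets sys.real_prefix; venv (and newer virtualenv)
    # instead make sys.prefix differ from sys.base_prefix.
    if hasattr(sys, "real_prefix"):
        return True
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix)

print(in_virtualenv())
```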
1. testenv/bin/pip and testenv/bin/python
2. I'd check it in a local repository and check it out in the virtualenv.
3. No, you have not.
4,774
34,093,247
On Windows 8, I've created a sample project in Django (1.6.5) and I'm getting errors when I run a custom command I wrote (runtcpserver). This is how my project structure looks like: c:/django/entitetracker: ``` manage.py tcpserver/ forms.py views.py models.py urls.py management __init__.py command __init__.py runtcpserver.py settings/ __init__.py base.py local.py ``` My manage.py file is as follows: ``` #!/usr/bin/env python import os import sys if __name__ == "__main__": os.environ.setdefault("DJANGO_SETTINGS_MODULE", "entitetracker.settings") from django.core.management import execute_from_command_line execute_from_command_line(sys.argv) ``` My path (python 2.7): ``` >>> import sys >>> for path in sys.path: print path C:\django\entitetracker (...other paths) ``` When I run the python manage.py runtcpserver settings=settings.local command, I am getting the following error: ``` Traceback (most recent call last): File "manage.py", line 9, in <module> execute_from_command_line(sys.argv) File "C:\Python27\lib\site-packages\django\core\management\__init__.py", line 399, in execute_from_command_line utility.execute() File "C:\Python27\lib\site-packages\django\core\management\__init__.py", line 392, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "C:\Python27\lib\site-packages\django\core\management\__init__.py", line 261, in fetch_command commands = get_commands() File "C:\Python27\lib\site-packages\django\core\management\__init__.py", line 107, in get_commands apps = settings.INSTALLED_APPS File "C:\Python27\lib\site-packages\django\conf\__init__.py", line 54, in __getattr__ self._setup(name) File "C:\Python27\lib\site-packages\django\conf\__init__.py", line 49, in _setup self._wrapped = Settings(settings_module) File "C:\Python27\lib\site-packages\django\conf\__init__.py", line 132, in __init__ % (self.SETTINGS_MODULE, e) ImportError: Could not import settings 'entitetracker.settings' (Is it on sys.path? 
Is there an import error in the settings file?): No module named settings
```

In the Python shell, I tried to import the settings module and I get no error:

```
>>> from settings import local
>>>
```

Could someone suggest what I am missing?
2015/12/04
[ "https://Stackoverflow.com/questions/34093247", "https://Stackoverflow.com", "https://Stackoverflow.com/users/563130/" ]
Your `PYTHONPATH` is `C:\django\entitetracker`, so you cannot load `entitetracker.settings`: in the end, Python tries to find a `C:\django\entitetracker\entitetracker\settings` package, which does not exist. Use

```
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "settings")
```
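The lookup rule behind this error can be reproduced without Django: the dotted settings string is resolved against `sys.path` like any other import. A toy illustration (temporary files and hypothetical module contents, not the asker's project):

```python
import importlib
import os
import sys
import tempfile

# Build a fake project: <tmp>/settings/__init__.py and <tmp>/settings/local.py
tmp = tempfile.mkdtemp()
os.makedirs(os.path.join(tmp, "settings"))
open(os.path.join(tmp, "settings", "__init__.py"), "w").close()
with open(os.path.join(tmp, "settings", "local.py"), "w") as f:
    f.write("DEBUG = True\n")

sys.path.insert(0, tmp)  # what running manage.py from the project dir gives you

print(importlib.import_module("settings.local").DEBUG)  # resolves: <tmp>/settings/local.py

try:
    importlib.import_module("entitetracker.settings")   # fails: no <tmp>/entitetracker/ dir
except ImportError as exc:
    print("ImportError:", exc)
```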
My God, I'm so stupid. The error was that I was missing the two dashes before `settings=settings.local`! Thanks for your help, Tomasz.
4,775
48,506,093
I have a REST API backend with python/flask and want to stream the response in an event stream. Everything is running inside a docker container with nginx/uwsgi (<https://hub.docker.com/r/tiangolo/uwsgi-nginx-flask/>). The API works fine until it comes to the event stream. It seems like something (probably nginx) is buffering the "yields", because nothing is received by any kind of client until the server has finished the calculation and everything is sent together. I tried to adapt the nginx settings (according to the docker image instructions) with an additional config file (nginx_streaming.conf) saying:

```
server {
    location / {
        include uwsgi_params;
        uwsgi_request_buffering off;
    }
}
```

Dockerfile:

```
FROM tiangolo/uwsgi-nginx-flask:python3.6
COPY ./app /app
COPY ./nginx_streaming.conf /etc/nginx/conf.d/nginx_streaming.conf
```

But I am not really familiar with nginx settings and not sure what I am doing here^^ This at least does not work.. any suggestions? My server side implementation:

```
from flask import Flask
from flask import stream_with_context, request, Response
from werkzeug.contrib.cache import SimpleCache

cache = SimpleCache()
app = Flask(__name__)

from multiprocessing import Pool, Process

@app.route("/my-app")
def myFunc():
    global cache
    arg = request.args.get(<my-arg>)
    cachekey = str(arg)
    print(cachekey)
    result = cache.get(cachekey)
    if result is not None:
        print('Result from cache')
        return result
    else:
        print('object not in Cache...calculate...')

        def calcResult():
            yield 'worker thread started\n'
            with Pool(processes=cores) as parallel_pool:
                [...]
                yield 'Somewhere in the processing'
                temp_result = doSomethingWith(...)
                savetocache = cache.set(cachekey, temp_result, timeout=60*60*24)  # timeout in seconds
                yield 'saved to cache with key:' + cachekey + '\n'
                print(savetocache, flush=True)
                yield temp_result

        return Response(calcResult(), content_type="text/event-stream")

if __name__ == "__main__":
    # Only for debugging while developing
    app.run(host='0.0.0.0', debug=True, port=80)
```
2018/01/29
[ "https://Stackoverflow.com/questions/48506093", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5623899/" ]
I ran into the same problem. Try changing ``` return Response(calcResult(), content_type="text/event-stream") ``` to ``` return Response(calcResult(), content_type="text/event-stream", headers={'X-Accel-Buffering': 'no'}) ```
Following [the answer from @u-rizwan here](https://stackoverflow.com/a/48746083/236195), I added this to the `/etc/nginx/conf.d/mysite.conf` and it resolved the problem: ``` add_header X-Accel-Buffering no; ``` I have added it under `location /`, but it is probably a good idea to put it under the specific location of the event stream (I have a low traffic intranet use case here). Note: Looks like nginx could be stripping this header by default if it comes from the application: <https://serverfault.com/questions/937665/does-nginx-show-x-accel-headers-in-response>
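Independent of nginx, it is worth checking that the generator itself really produces incremental chunks; buffering anywhere between it and the client is what destroys the effect. A self-contained sketch (header values as suggested in the answers, no running server):

```python
import time

def calc_result():
    # Each yield is one chunk a client could receive immediately --
    # unless a proxy in between buffers the whole response.
    yield "worker thread started\n"
    for step in range(3):
        time.sleep(0.01)  # stand-in for real work
        yield "somewhere in the processing: step %d\n" % step
    yield "done\n"

# Headers that stop common buffers for this response:
no_buffer_headers = {
    "Content-Type": "text/event-stream",
    "X-Accel-Buffering": "no",  # tells nginx not to buffer this response
}

chunks = list(calc_result())
print(len(chunks))
```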
4,776
4,982,138
I am on exercise 43 doing some self-directed work in [Learn Python The Hard Way](http://learnpythonthehardway.org/). And I have designed the framework of a game spread out over two python files. The point of the exercise is that each "room" in the game has a different class. I have tried a number of things, but I cannot figure out how to use the returned value from their initial choice to advance the user to the proper "room", which is contained within a class. Any hints or help would be greatly appreciated. Apologies for the poor code, I'm just starting out in python, but at my wit's end on this. Here is the ex43\_engine.py code which I run to start the game. --- ``` from ex43_map import * import ex43_map import inspect #Not sure if this part is neccessary, generated list of all the classes (rooms) I imported from ex43_map.py, as I thought they might be needed to form a "map" class_list = [] for name, obj in inspect.getmembers(ex43_map): if inspect.isclass(obj): class_list.append(name) class Engine(object): def __init__(self, room): self.room = room def play(self): # starts the process, this might need to go inside the loop below next = self.room start.transportation_choice() while True: print "\n-------------" # I have tried numerous things here to make it work...nothing has start = StartRoom() car = CarRoom() bus = BusRoom() train = TrainRoom() airplane = AirplaneRoom() terminal = TerminalRoom() a_game = Engine("transportation_choice") a_game.play() ``` --- And here is the ex43\_map.py code --- ``` from sys import exit from random import randint class StartRoom(object): def __init__(self): pass def transportation_choice(self): print "\nIt's 6 pm and you have just found out that you need to get to Chicago by tomorrow morning for a meeting" print "How will you choose to get there?\n" print "Choices: car, bus, train, airplane" choice = raw_input("> ") if choice == "car": return 'CarRoom' elif choice == "bus": return 'BusRoom' elif choice == "train": return 
'TrainRoom' elif choice == "airplane": return 'AirplaneRoom' else: print "Sorry but '%s' wasn't a choice." % choice return 'StartRoom' class CarRoom(object): def __init__(self): print "Welcome to the CarRoom" class BusRoom(object): def __init__(self): print "Welcome to the BusRoom" class TrainRoom(object): def __init__(self): print "Welcome to the TrainRoom" class AirplaneRoom(object): def __init__(self): print "Welcome to the AirplaneRoom" class TerminalRoom(object): def __init__(self): self.quips = [ "Oh so sorry you died, you are pretty bad at this.", "Too bad, you're dead buddy.", "The end is here.", "No more playing for you, you're dead." ] def death(self): print self.quips[randint(0, len(self.quips)-1)] # randomly selects one of the quips from 0 to # of items in the list and prints it exit(1) ``` ---
2011/02/13
[ "https://Stackoverflow.com/questions/4982138", "https://Stackoverflow.com", "https://Stackoverflow.com/users/614742/" ]
Instead of returning a string try returning an object, ie ``` if choice == "car": return CarRoom() ```
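The idea in this answer — return room *instances* and let the engine loop on them — can be sketched in a few lines (toy classes, not the book's code):

```python
class Room(object):
    def run(self):
        raise NotImplementedError

class CarRoom(Room):
    def run(self):
        return "arrived by car"  # a non-Room value ends the game

class StartRoom(Room):
    def __init__(self, choice):
        self.choice = choice

    def run(self):
        # Returning an instance (not a string) gives the engine
        # something it can keep calling .run() on.
        if self.choice == "car":
            return CarRoom()
        return None  # unknown choice: quit

def play(room):
    result = room.run()
    while isinstance(result, Room):  # keep walking rooms until a plain value
        result = result.run()
    return result

print(play(StartRoom("car")))
```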
1. It might be a good idea to make a Room class, and derive your other rooms from it. 2. The Room base class can then have a class variable which automatically keeps track of all instantiated rooms. I haven't thoroughly tested the following, but hopefully it will give you some ideas: ``` # getters.py try: getStr = raw_input # Python 2.x except NameError: getStr = input # Python 3.x getStr.type = str def typeGetter(dataType): def getter(msg): while True: try: return dataType(getStr(msg)) except ValueError: pass getter.type = dataType return getter getInt = typeGetter(int) getFloat = typeGetter(float) getBool = typeGetter(bool) def getOneOf(*args, **kwargs): """Get input until it matches an item in args, then return the item @param *args: items to match against @param getter: function, input-getter of desired type (defaults to getStr) @param prompt: string, input prompt (defaults to '> ') Type of items should match type of getter """ argSet = set(args) getter = kwargs.get('getter', getStr) prompt = kwargs.get('prompt', '> ') print('[{0}]'.format(', '.join(args))) while True: res = getter(prompt) if res in argset: return res ``` . ``` # ex43_rooms.py import textwrap import random import getters class Room(object): # list of instantiated rooms by name ROOMS = {} @classmethod def getroom(cls, name): """Return room instance If named room does not exist, throws KeyError """ return cls.ROOMS[name] def __init__(self, name): super(Room,self).__init__() self.name = name Room.ROOMS[name] = self def run(self): """Enter the room - what happens? 
Abstract base method (subclasses must override) @retval Room instance to continue or None to quit """ raise NotImplementedError() def __str__(self): return self.name def __repr__(self): return '{0}({1})'.format(self.__class__.__name__, self.name) class StartRoom(Room): def __init__(self, name): super(StartRoom,self).__init__(name) def run(self): print textwrap.dedent(""" It's 6 pm and you have just found out that you need to get to Chicago by tomorrow morning for a meeting! How will you get there? """) inp = getters.getOneOf('car','bus','train','airplane') return Room.getroom(inp) class CarRoom(Room): def __init__(self,name): super(CarRoom,self).__init__(name) class BusRoom(Room): def __init__(self,name): super(BusRoom,self).__init__(name) class TrainRoom(Room): def __init__(self,name): super(TrainRoom,self).__init__(name) class PlaneRoom(Room): def __init__(self,name): super(PlaneRoom,self).__init__(name) class TerminalRoom(Room): def __init__(self,name): super(TerminalRoom,self).__init__(name) def run(self): print(random.choice(( "Oh so sorry you died, you are pretty bad at this.", "Too bad, you're dead buddy.", "The end is here.", "No more playing for you, you're dead." ))) return None # create rooms (which registers them with Room) StartRoom('start') CarRoom('car') BusRoom('bus') TrainRoom('train') PlaneRoom('airplane') TerminalRoom('terminal') ``` . ``` # ex43.py from ex43_rooms import Room def main(): here = Room.getroom('start') while here: here = here.run() if __name__=="__main__": main() ```
4,777
13,822,823
Currently, it is possible to **mark** tests and then run them (or not run them) using the `-m` argument. However, all tests are still collected first and only then **deselected**. In the example below all 8 are still collected, and then 4 are run and 4 are deselected.

```
============================= test session starts ==============================
platform win32 -- Python 2.7.3 -- pytest-2.3.2 -- C:\Python27\python.exe
collecting ... collected 8 items

test_0001_login_logout.py:24: TestLoginLogout.test_login_page_ui PASSED
test_0001_login_logout.py:36: TestLoginLogout.test_login PASSED
test_0001_login_logout.py:45: TestLoginLogout.test_default_admin_has_users_folder_page_loaded_by_default PASSED
test_0001_login_logout.py:49: TestLoginLogout.test_logout PASSED

==================== 4 tests deselected by "-m 'undertest'" ====================
================== 4 passed, 4 deselected in 1199.28 seconds ===================
```

QUESTION: Is it possible to **not collect** marked/unmarked tests at all?

The problems are:

1) I'm using some tests that need the database to already have some items in it (like my *device*), and the code is:

```
@pytest.mark.device
class Test1_Device_UI_UnSelected(SetupUser):
    # get device from the database
    device = Devices.get_device('t400-alex-win7')

    @classmethod
    @pytest.fixture(scope = "class", autouse = True)
    def setup(self):
        ...
```

I run the tests explicitly excluding the *device* tests: `py.test -m "not device"`. However, during collection I get errors, because `device = Devices.get_device('t400-alex-win7')` is still being executed.

2) Some of the tests are marked `time_demanding` because there are around 400 generated tests. Generating those tests also takes time. I exclude those tests from the *general* runs; however, they are still generated and collected and then deselected <- just a waste of time.
I know there is a solution for problem (1) - to use pytest fixtures and pass them to the tests - however I really like the *autocompletion* that PyDev provides. The `timedemanding` class is:

```
import pytest
#... other imports

def admin_rights_combinations(admin, containing = ["right"]):
    '''
    Generate all possible combinations of admin rights settings
    depending on "containing" restriction
    '''
    rights = [right for right in admin.__dict__.iterkeys()
              if any(psbl_match in right for psbl_match in containing)]
    total_list = []
    l = []
    for right in rights: #@UnusedVariable
        l.append([True, False])
    for st_of_values in itertools.product(*l):
        total_list.append(dict(zip(rights, st_of_values)))
    return total_list

@pytest.mark.timedemanding
class Test1_Admin_Rights_Access(SetupUser):
    user = UserFactory.get_user("Admin Rights Test")
    user.password = "RightsTest"
    folder = GroupFolderFactory.get_folder("Folders->Admin Rights Test Folder")
    group = GroupFolderFactory.get_group("Folders->Admin Rights Test Group")
    admin = UserFactory.get_admin("Admin Rights Test")

    @classmethod
    @pytest.fixture(scope = "class", autouse = True)
    def setup(self):
        ...

    @pytest.mark.parametrize("settings", admin_rights_combinations(admin,
        containing=['right_read', 'right_manage_folders', 'right_manage_groups']))
    def test_admin_rights_menus(self, base_url, settings):
        '''
        test combination of admin rights and pages that are displayed with these rights.
        Also verify that the menus that are available can be opened
        '''
```

As you can see, by the time pytest hits `@pytest.mark.parametrize` it should already be aware that it's in a class marked `@pytest.mark.timedemanding`. However, collection still occurs.
2012/12/11
[ "https://Stackoverflow.com/questions/13822823", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1167879/" ]
The problem is not `py.test`, but the fact that the class code is executed when the file is imported, so you get the error **before** the decorator is even called. The only way (without modifying the logic of the code) to avoid this is to completely ignore the whole file. Anyway, I do not understand why you set the `device` class attribute there. Use the class-level `setup`! If you put that code in the setup, your problem should be solved: since the test is not run, the setup is not called either, and you do not get the error. The same goes for the `time_demanding` tests. Set them up in the class-level setup, so that the class creation does not take so much time (even though, without sample code, I can't say much about this). If you want to keep things as they are and still have PyDev autocompletion then, as I said, just ignore the whole file with some regex (and eventually you'll have to split up the tests).
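The import-time vs. run-time point can be shown without pytest: anything in the class *body* executes as soon as the module is imported (i.e. during collection), while anything inside a setup method only runs when the class is actually used. A sketch with stand-in names (the `setup_class` hook is the classic xunit-style convention):

```python
class Devices(object):
    @staticmethod
    def get_device(name):
        # stand-in for a database lookup that would fail at collection time
        return {"name": name}

class DeviceTests(object):
    # BAD: this line would run at import/collection time, even when the
    # test class is deselected:
    #     device = Devices.get_device('t400-alex-win7')
    device = None

    @classmethod
    def setup_class(cls):
        # GOOD: only runs when the tests in this class actually execute
        cls.device = Devices.get_device('t400-alex-win7')

DeviceTests.setup_class()
print(DeviceTests.device["name"])
```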
Deselecting tests only after collection is complete happens for two reasons:

* to always get a correct number of overall tests
* collection hooks might dynamically add or remove marks

Regarding auto-completion, I believe putting heavy setup into fixtures is more important. I am not sure if PyDev could learn to still auto-complete. You must have this issue also for regular Python functions which take an argument whose type PyDev cannot really know. FWIW, I am living well with the dumb vim completion, which simply looks through all strings/names in all the buffers and completes on that. It gives more false positives but no false negatives.
4,778
60,932,166
I'm setting up an image data pipeline on Tensorflow 2.1. I'm using a dataset with RGB images of variable shapes (h, w, 3) and I can't find a way to make it work. I get the following error when I call `tf.data.Dataset.batch()`: `tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot batch tensors with different shapes in component 0. First element had shape [256,384,3] and element 3 had shape [160,240,3]` I found the `padded_batch` method, but I don't want my images to be padded to the same shape. **EDIT:** I think I found a little workaround using the function `tf.data.experimental.dense_to_ragged_batch` (which converts the dense tensor representation to a ragged one).

> Unlike `tf.data.Dataset.batch`, the input elements to be batched may have different shapes, and each batch will be encoded as a `tf.RaggedTensor`

But then I have another problem. My dataset contains images and their corresponding labels. When I use the function like this:

```
ds = ds.map(
    lambda x: tf.data.experimental.dense_to_ragged_batch(batch_size)
)
```

I get the following error, because it tries to map the function to the entire dataset (thus to images and labels together), which is not possible because it can only be applied to a single tensor (not 2): `TypeError: <lambda>() takes 1 positional argument but 2 were given` Is there a way to specify which of the two elements I want the transformation to be applied to?
2020/03/30
[ "https://Stackoverflow.com/questions/60932166", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9727793/" ]
I just hit the same problem. The solution turned out to be loading the data as 2 datasets and then using `Dataset.zip()` to merge them.

```
dataset_images = dataset.map(parse_images,
                             num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset_images = dataset_images.apply(
    tf.data.experimental.dense_to_ragged_batch(batch_size=batch_size,
                                               drop_remainder=True))

dataset_total_cost = dataset.map(get_total_cost)
dataset_total_cost = dataset_total_cost.batch(batch_size, drop_remainder=True)

dataset = tf.data.Dataset.zip((dataset_images, dataset_total_cost))
```
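The batching idea itself is easy to see without TensorFlow: group variable-shaped elements without padding, and batch the labels separately so the two streams can be zipped back together. A pure-Python sketch of what `dense_to_ragged_batch` does conceptually (not the TF implementation):

```python
def ragged_batches(items, batch_size, drop_remainder=True):
    # Group items of *different* shapes into lists -- no padding involved.
    batch = []
    for item in items:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch and not drop_remainder:
        yield batch

# Lists of different lengths stand in for tensors of different shapes.
images = [[1, 2, 3], [4], [5, 6], [7, 8, 9, 10], [11]]
labels = [0, 1, 0, 1, 0]

# Batch each stream separately, then zip them -- mirroring the tf.data answer.
batched = list(zip(ragged_batches(images, 2), ragged_batches(labels, 2)))
print(batched[0])  # ([[1, 2, 3], [4]], [0, 1])
```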
If you do not want to resize your images, you can only use a batch size of `1` and nothing bigger than that. Thus you can train your model one image at a time. The error you reported clearly says that you are using a batch size bigger than 1 and trying to put two images of different shape/size in a batch. You could either resize your images to a fixed shape (or pad your images), or use a batch size of 1 as follows:

```
my_data = tf.data.Dataset(....)  # with whatever arguments you use here
my_data = my_data.batch(1)
```
4,779
23,526,592
I am trying to extract the stems of the words `taller` and `shorter` from a string in python. I did the following:

```
>>> from nltk.stem.porter import *
>>> print(stemmer.stem('shorter'))
shorter
>>> print(stemmer.stem('taller'))
taller
```

And for some reason, I don't get the words `tall` and `short`. Does anyone know how to fix this, or can anyone suggest an alternative solution?
2014/05/07
[ "https://Stackoverflow.com/questions/23526592", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2816349/" ]
There are a few stemmers. Here's one:

```
>>> from nltk.stem.lancaster import LancasterStemmer
>>> stemmer = LancasterStemmer()
>>> stemmer.stem('shorter')
'short'
```
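The reason the results differ between stemmers is simply that each ships a different suffix rule table; Porter's rules don't strip the comparative "-er" here, while Lancaster's do. A toy one-rule stemmer (hypothetical function, nothing from NLTK) makes the mechanism visible:

```python
def toy_stem(word):
    # One rule: strip a comparative "-er" from sufficiently long words.
    # Real stemmers chain dozens of such rules plus exception lists,
    # which is why Porter and Lancaster can disagree on the same input.
    if word.endswith("er") and len(word) > 4:
        return word[:-2]
    return word

print(toy_stem("taller"), toy_stem("shorter"))
```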
```
>>> from nltk import stem
>>> s = 'short'; t = 'tall'
>>> porter = stem.porter.PorterStemmer()
>>> lancaster = stem.lancaster.LancasterStemmer()
>>> snowball = stem.snowball.EnglishStemmer()
>>> porter.stem(s)
u'short'
>>> porter.stem(t)
u'tall'
>>> lancaster.stem(s)
'short'
>>> lancaster.stem(t)
'tal'
>>> snowball.stem(s)
u'short'
>>> snowball.stem(t)
u'tall'
```
4,780
69,103,209
I have a [dataset](https://drive.google.com/file/d/1EariymtHoBJEflDPkJTg-jKxMfVhll59/view?usp=sharing) containing a nested json object. I wish to extract information from this nested json and put it in a DataFrame in Python. I have used the json_normalize method but I am unable to parse beyond a certain level. Kindly help. Thank you.
2021/09/08
[ "https://Stackoverflow.com/questions/69103209", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7054640/" ]
Have been working on a function that will expand all embedded lists and dictionaries. ``` from pathlib import Path with open(Path.home().joinpath("Downloads").joinpath("Sample Json.txt")) as f: js = f.read() def normalize(js, expand_all=False): df = pd.json_normalize(json.loads(js) if type(js) == str else js) # get first column that contains lists col = df.applymap(type).astype(str).eq("<class 'list'>").all().idxmax() # explode list and expand embedded dictionaries df = df.explode(col).reset_index(drop=True) df = df.drop(columns=[col]).join(df[col].apply(pd.Series), rsuffix=f".{col}") # any dictionary to expand? if df.applymap(type).astype(str).eq("<class 'dict'>").any().any(): col = df.applymap(type).astype(str).eq("<class 'dict'>").all().idxmax() df = df.drop(columns=[col]).join(df[col].apply(pd.Series), rsuffix=f".{col}") # any lists left? while expand_all and df.applymap(type).astype(str).eq("<class 'list'>").any().any(): df = normalize(df.to_dict("records")) return df df = normalize(js, expand_all=True) ``` | | cfs | ctin | fldtr1 | cfs3b | flprdr1 | dtcancel | val | inv\_typ | pos | idt | rchrg | inum | chksum | num | csamt | samt | rt | txval | camt | iamt | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | 0 | Y | 03AZX | 10-Aug-20 | Y | Jul-20 | nan | 2390 | R | 03 | 27-07-2020 | N | TI/20-21/111 | 24ea1a46933dd7c6f130cc7ddce3ad89f42194d84e358746f66716d0f1b8aef0 | 101 | 0 | 182.25 | 18 | 2025 | 182.25 | 0 | | 1 | Y | 03AZY | 02-Sep-20 | Y | Jul-20 | nan | 10756 | R | 03 | 20-07-2020 | N | 70 | 164777293c8ce80595cd4803c3d0287bc544772fb9e5331602ed3d7d0534e82f | 1801 | 0 | 820.35 | 18 | 9115 | 820.35 | nan | | 2 | Y | 03A00P1Z7 | 10-Aug-20 | Y | Jul-20 | nan | 411.82 | R | 03 | 01-07-2020 | N | 18IPB06013580804 | 0560d2b220de53f458ac65594f50bfa5ba736f95061c88201d91371fbeccabf8 | 1 | 0 | 31.41 | 18 | 349 | 31.41 | nan | | 3 | Y | 03A00P1Z7 | 10-Aug-20 | Y | Jul-20 | nan | 411.82 | 
R | 03 | 01-07-2020 | N | 18IPB06013580805 | 08ae71bcb591723318796e797da586ef9b8e5b6b920e9877be6afc9223486760 | 1 | 0 | 31.41 | 18 | 349 | 31.41 | nan | | 4 | Y | 03A00P1Z7 | 10-Aug-20 | Y | Jul-20 | nan | 383.5 | R | 03 | 01-07-2020 | N | 18IPB06013580806 | 4d22ddd1d05d22cc4707a89dd80e76a271b99a7ba2610e3b111489fd4f7950fc | 1 | 0 | 29.25 | 18 | 325 | 29.25 | nan | | 5 | Y | 03A00P1Z7 | 10-Aug-20 | Y | Jul-20 | nan | 496.78 | R | 03 | 01-07-2020 | N | 18IPB06013580807 | 73e6e787493276151783d5ab1107bd0bac53780a5840964f7953bf3ba8a4efb0 | 1 | 0 | 37.89 | 18 | 421 | 37.89 | nan | | 6 | Y | 03A00P1Z7 | 10-Aug-20 | Y | Jul-20 | nan | 411.82 | R | 03 | 21-07-2020 | N | 18IPB07013893564 | 52ef0e7269de052c0353580cad5092ff1cc7a3c454318b2df1041a62a32f033f | 1 | 0 | 31.41 | 18 | 349 | 31.41 | nan | | 7 | Y | 03A00P1Z7 | 10-Aug-20 | Y | Jul-20 | nan | 411.82 | R | 03 | 21-07-2020 | N | 18IPB07013893565 | ab44c119f3db614dccfd3bc63c036eaca22a41c99e3e5090904e38aee056f4ac | 1 | 0 | 31.41 | 18 | 349 | 31.41 | nan | | 8 | Y | 03CAZD | 10-Aug-20 | Y | Jul-20 | nan | 162840 | R | 03 | 13-07-2020 | N | T/20-21/56 | 92e52e48e812bb0bb2e34d9e400248730fdc40363459d05c4e9d6ebb7fe6165d | 101 | 0 | 12420 | 18 | 138000 | 12420 | 0 | | 9 | Y | 03AAE | 22-Aug-20 | Y | Jul-20 | nan | 46556 | R | 03 | 30-07-2020 | N | S20/21-359 | 8138e35895114ae412e8256f3ce8382cdd8ae771f2780781085134618bb033c9 | 1801 | 0 | 3550.87 | 18 | 39454.2 | 3550.87 | 0 | | 10 | Y | 03AAD1ZA | 11-Aug-20 | Y | Jul-20 | nan | 8417.98 | R | 03 | 02-07-2020 | N | 0000030301011976 | 70d17e281b22541b3d41eb3269d057b73140c203771365a892dd496ffc756adb | 1 | 0 | 0 | 0 | 1024.84 | 0 | nan | | 11 | Y | 03AAD1ZA | 11-Aug-20 | Y | Jul-20 | nan | 8417.98 | R | 03 | 02-07-2020 | N | 0000030301011976 | 70d17e281b22541b3d41eb3269d057b73140c203771365a892dd496ffc756adb | 2 | 0 | 233.58 | 18 | 2595.37 | 233.58 | nan | | 12 | Y | 03AAD1ZA | 11-Aug-20 | Y | Jul-20 | nan | 8417.98 | R | 03 | 02-07-2020 | N | 0000030301011976 | 
70d17e281b22541b3d41eb3269d057b73140c203771365a892dd496ffc756adb | 3 | 0 | 89.34 | 5 | 3573.99 | 89.34 | nan | | 13 | Y | 03AAD1ZA | 11-Aug-20 | Y | Jul-20 | nan | 8417.98 | R | 03 | 02-07-2020 | N | 0000030301011976 | 70d17e281b22541b3d41eb3269d057b73140c203771365a892dd496ffc756adb | 4 | 0 | 30.96 | 12 | 516.02 | 30.96 | nan | | 14 | Y | 03AAD1ZA | 11-Aug-20 | Y | Jul-20 | nan | 2824.88 | R | 03 | 06-07-2020 | N | 0000030301012348 | 2e7978264e42a74a70aa35d39ca6856f4dfb333e76935667a8de2733f888a1f1 | 1 | 0 | 116.46 | 18 | 1293.94 | 116.46 | nan | | 15 | Y | 03AAD1ZA | 11-Aug-20 | Y | Jul-20 | nan | 2824.88 | R | 03 | 06-07-2020 | N | 0000030301012348 | 2e7978264e42a74a70aa35d39ca6856f4dfb333e76935667a8de2733f888a1f1 | 2 | 0 | 37.27 | 12 | 621.18 | 37.27 | nan | | 16 | Y | 03AAD1ZA | 11-Aug-20 | Y | Jul-20 | nan | 2824.88 | R | 03 | 06-07-2020 | N | 0000030301012348 | 2e7978264e42a74a70aa35d39ca6856f4dfb333e76935667a8de2733f888a1f1 | 3 | 0 | 0 | 0 | 85.26 | 0 | nan | | 17 | Y | 03AAD1ZA | 11-Aug-20 | Y | Jul-20 | nan | 2824.88 | R | 03 | 06-07-2020 | N | 0000030301012348 | 2e7978264e42a74a70aa35d39ca6856f4dfb333e76935667a8de2733f888a1f1 | 4 | 0 | 12.31 | 5 | 492.42 | 12.31 | nan | | 18 | Y | 03AA1ZQ | 17-Aug-20 | Y | Jul-20 | nan | 39294 | R | 03 | 02-07-2020 | N | TI/20-21/43 | 69f7931986ad9274d9595ca5221e3ce82aa389d659e83376ff1ec34571057670 | 101 | 0 | 2997 | 18 | 33300 | 2997 | 0 | | 19 | Y | 03AGG3Z5 | 18-Aug-20 | Y | Jul-20 | 22-Jan-20 | 593583 | R | 03 | 31-07-2020 | N | 25 | 623dcb5b65e34be4d0453c1783915bb8e66684a2e33a3c8a547e38754c4f1af9 | 1 | 0 | 45273.3 | 18 | 503036 | 45273.3 | nan | | 20 | Y | 03AGG3Z5 | 18-Aug-20 | Y | Jul-20 | 22-Jan-20 | 601409 | R | 03 | 31-07-2020 | N | 26 | ef8b99f99fe090f0a2374d8d6c0b15c265740e6c6487ff68d510382ec21d8ce4 | 1 | 0 | 45870.2 | 18 | 509668 | 45870.2 | nan | | 21 | Y | 03AGG3Z5 | 18-Aug-20 | Y | Jul-20 | 22-Jan-20 | 767358 | R | 03 | 31-07-2020 | N | 27 | 9c1257eddeb8cdc7e6a832a3646969b71e49eeeb7d6742b26cfc6e0e3630438a | 
1 | 0 | 58527.3 | 18 | 650303 | 58527.3 | nan | | 22 | Y | 03AGG3Z5 | 18-Aug-20 | Y | Jul-20 | 22-Jan-20 | 597886 | R | 03 | 31-07-2020 | N | 28 | 29fc1b28aedd1545e7ea0fd8b67b8332a83f1ac3f62af9398af2dfa26c9f1d90 | 1 | 0 | 45601.4 | 18 | 506683 | 45601.4 | nan | | 23 | Y | 03AA9 | 18-Aug-20 | Y | Jul-20 | nan | 41914 | R | 03 | 29-07-2020 | N | 2020-21/K-916 | d112ad384eb291d49509bdf4a005d509424fefee4caf3443bc9726cf41665295 | 1801 | 0 | 3196.8 | 18 | 35520 | 3196.8 | nan | | 24 | Y | 03A1Z8 | 12-Aug-20 | Y | Jul-20 | nan | 274893 | R | 03 | 20-07-2020 | N | T/20-21/10 | e5851fcc6b370714d7523080582a678a212f5dde90f5c2618880376018221f38 | 101 | 0 | 20966.4 | 18 | 232960 | 20966.4 | 0 | | 25 | Y | 03AD1ZL | 11-Aug-20 | Y | Jul-20 | nan | 125375 | R | 03 | 03-07-2020 | N | T/20-21/155 | 2bb398c7a0fedf11f1f1c1d196c43ad79910be52e6892f88915671025528eb2b | 101 | 0 | 9562.5 | 18 | 106250 | 9562.5 | 0 | | 26 | Y | 03AA3Z9 | 14-Aug-20 | Y | Jul-20 | nan | 529.99 | R | 03 | 31-07-2020 | N | 0301072000000650 | ad1e1d1572c9058fabd6d23fb5dc4b68f1a2a10d3dd3d7e73d73d3c502d92151 | 1 | nan | 40.42 | 18 | 449.15 | 40.42 | nan | | 27 | Y | 03AA3Z9 | 14-Aug-20 | Y | Jul-20 | nan | 1201 | R | 03 | 31-07-2020 | N | 0303072000000025 | 5a69229d907957c1d95eb464684891c202102b8589f5603b8ae14b07607f1655 | 1 | nan | 91.5 | 18 | 1018 | 91.5 | nan | | 28 | Y | 03AB1ZV | 11-Aug-20 | Y | Jul-20 | nan | 30976 | R | 03 | 10-07-2020 | N | 70 | 69bbeb088634a88b30c6e6046b63b1977f5534b2f676b984ef78f2c3bad8ca35 | 1800 | nan | 2362.5 | 18 | 26250 | 2362.5 | nan | | 29 | Y | 03AD1Z1 | 13-Aug-20 | Y | Jul-20 | nan | 8968 | R | 03 | 01-07-2020 | N | B25 | 5b98b819ca14a377c9304e7eab21957152c4819e82e37f2619fb2c547fb84ba6 | 1801 | 0 | 684 | 18 | 7600 | 684 | nan | | 30 | Y | 03AAO | 10-Aug-20 | Y | Jul-20 | nan | 38940 | R | 03 | 13-07-2020 | N | TI/20-21/30 | bae339e580c2ab9ffee90533650e4e2acdc47310230ed54aabbb96f89d3fc7c4 | 101 | 0 | 2970 | 18 | 33000 | 2970 | 0 | | 31 | Y | 07AH1ZU | 11-Aug-20 | Y | Jul-20 | nan 
| 13836.5 | R | 03 | 31-07-2020 | N | DELR/EXP/12176 | cb34f329adcd88c9e8794db9892fe47bd0a7afc0373a20860de046934f7923fa | 1 | 0 | nan | 18 | 11725.9 | nan | 2110.65 | | 32 | Y | 03A1ZT | 18-Aug-20 | Y | Jul-20 | nan | 41820 | R | 03 | 07-07-2020 | N | TI/20-21/68 | ad61c4dd8227b214dbe4bba24b57a2c976ce8438e53cf15b3530480116ca64da | 101 | 0 | 3189.69 | 18 | 35441 | 3189.69 | 0 | | 33 | Y | 03A1ZT | 18-Aug-20 | Y | Jul-20 | nan | 69773 | R | 03 | 10-07-2020 | N | TI/20-21/71 | 1deca4741b91716bfabc8b2ab826be76342b0fd3e698b128c927f4b426c064d0 | 101 | 0 | 5321.7 | 18 | 59130 | 5321.7 | 0 |
To flatten a nested JSON file, you can use the following function:

```
def flatten_json(nested_json):
    out = {}

    def flatten(x, name=''):
        if type(x) is dict:
            for a in x:
                flatten(x[a], name + a + '_')
        elif type(x) is list:
            i = 0
            for a in x:
                flatten(a, name + str(i) + '_')
                i += 1
        else:
            out[name[:-1]] = x

    flatten(nested_json)
    return out
```

Assuming your json is called `myjson`:

```
df = pd.Series(flatten_json(myjson)).to_frame()
```
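As a quick self-contained check of the helper above (the nested record here is made up purely for illustration):

```python
# Re-stating the helper from the answer so this snippet runs on its own
def flatten_json(nested_json):
    out = {}

    def flatten(x, name=''):
        if type(x) is dict:
            for a in x:
                flatten(x[a], name + a + '_')
        elif type(x) is list:
            i = 0
            for a in x:
                flatten(a, name + str(i) + '_')
                i += 1
        else:
            out[name[:-1]] = x

    flatten(nested_json)
    return out

# Hypothetical nested record, just for demonstration
record = {"id": 7, "meta": {"src": "a"}, "items": [{"qty": 2}, {"qty": 5}]}
flat = flatten_json(record)
# flat == {'id': 7, 'meta_src': 'a', 'items_0_qty': 2, 'items_1_qty': 5}
```

Note how dict keys are joined with `_` and list elements get their index in the flattened key.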
4,781
17,369,212
I would like to know if there's a function similar to `map` but working for methods of an instance. To make it clear, I know that `map` works as:

```
map( function_name , elements )
# is the same thing as:
[ function_name( element ) for element in elements ]
```

and now I'm looking for some kind of `map2` that does:

```
map2( elements , method_name )
# doing the same as:
[ element.method_name() for element in elements ]
```

which I tried to create by myself:

```
def map2( elements , method_name ):
    return [ element.method_name() for element in elements ]
```

but it doesn't work, python says:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'method_name' is not defined
```

even though I'm sure the classes of the elements I'm working with have this method defined. Does anyone know how I could proceed?
2013/06/28
[ "https://Stackoverflow.com/questions/17369212", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2014276/" ]
[`operator.methodcaller()`](http://docs.python.org/2/library/operator.html#operator.methodcaller) will give you a function that you can use for this.

```
map(operator.methodcaller('method_name'), sequence)
```
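A small self-contained sketch (Python 3, so `map` is wrapped in `list`); note that `methodcaller` also forwards extra arguments to the method:

```python
import operator

words = ['Foo', 'BAR', 'baz']

# Equivalent of [w.lower() for w in words]
lowered = list(map(operator.methodcaller('lower'), words))
# ['foo', 'bar', 'baz']

# Extra positional arguments are passed through to the method,
# i.e. each call is w.split('a')
split_up = list(map(operator.methodcaller('split', 'a'), words))
# 'baz'.split('a') -> ['b', 'z']
```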
You can use a `lambda` expression. For example:

```
a = ["Abc", "ddEEf", "gHI"]
print map(lambda x: x.lower(), a)
```

You will find that all elements of `a` have been turned into lower case.
4,782
9,007,174
What is the best way to take a data file that contains a header row and read this row into a named tuple so that the data rows can be accessed by header name? I was attempting something like this:

```
import csv
from collections import namedtuple

with open('data_file.txt', mode="r") as infile:
    reader = csv.reader(infile)
    Data = namedtuple("Data", ", ".join(i for i in reader[0]))
    next(reader)
    for row in reader:
        data = Data(*row)
```

The reader object is not subscriptable, so the above code throws a `TypeError`. What is the pythonic way to read a file header into a namedtuple?
2012/01/25
[ "https://Stackoverflow.com/questions/9007174", "https://Stackoverflow.com", "https://Stackoverflow.com/users/636493/" ]
Use:

```
Data = namedtuple("Data", next(reader))
```

and omit the line:

```
next(reader)
```

Combining this with an iterative version based on martineau's comment below, the example becomes for Python 2

```
import csv
from collections import namedtuple
from itertools import imap

with open("data_file.txt", mode="rb") as infile:
    reader = csv.reader(infile)
    Data = namedtuple("Data", next(reader))  # get names from column headers
    for data in imap(Data._make, reader):
        print data.foo
        # ...further processing of a line...
```

and for Python 3

```
import csv
from collections import namedtuple

with open("data_file.txt", newline="") as infile:
    reader = csv.reader(infile)
    Data = namedtuple("Data", next(reader))  # get names from column headers
    for data in map(Data._make, reader):
        print(data.foo)
        # ...further processing of a line...
```
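To illustrate the same pattern without a file on disk, here is a sketch using an in-memory `io.StringIO` (the column names and rows are made up):

```python
import csv
import io
from collections import namedtuple

# Stand-in for data_file.txt: a header row followed by two data rows
infile = io.StringIO("foo,bar\n1,2\n3,4\n")

reader = csv.reader(infile)
Data = namedtuple("Data", next(reader))  # field names taken from the header row
rows = [Data._make(r) for r in reader]

# rows[0].foo == '1', rows[1].bar == '4' (csv yields strings, not ints)
```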
Please have a look at [`csv.DictReader`](http://docs.python.org/library/csv.html#csv.DictReader). Basically, it provides the ability to get the column names from the first row as you're looking for and, after that, lets you access each column in a row by name using a dictionary. If for some reason you still need to access the rows as a `collections.namedtuple`, it should be easy to transform the dictionaries to named tuples as follows:

```
with open('data_file.txt') as infile:
    reader = csv.DictReader(infile)
    Data = collections.namedtuple('Data', reader.fieldnames)
    tuples = [Data(**row) for row in reader]
```
4,783
31,813,457
I am trying to run a blat search from within my python code. Right now it's written as...

```
os.system('blat database.fa fastafile pslfile')
```

When I run the code, I specify file names for "fastafile" and "pslfile"...

```
python my_code.py -f new.fasta -p test.psl
```

This doesn't work as "fastafile" and "pslfile" are variables for files created and named when I run the code, but if I were to use the actual file names I would have to go back and change my code each time I run it. I'd like to use the command line arguments above to specify the files. How would I change this so that "fastafile" and "pslfile" will be replaced with my arguments (new.fasta and test.psl) each time?
2015/08/04
[ "https://Stackoverflow.com/questions/31813457", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5025033/" ]
As it applies to agent-based releases:

*Tools* are intended to provide a custom resource (executable, PowerShell script, batch file, and so on) with a command line to execute said custom resource, and a default set of command line parameters. Using an example from the built-in resources: IIS Manager. The IIS Manager is a tool that can perform a variety of different IIS actions, depending on how it is called.

*Actions* are granular, release-specific actions. They may be built on top of a tool to provide a specific action that uses the tool. *Create Web Site* is an action built on top of the IIS Manager tool. Actions appear in the release template toolbox.

*Components* are deployable chunks of software. You specify the relative source of the binaries from your build drop, and choose a tool to execute to install the software. Most common is the "XCopy Deployer" tool, which just copies the binaries from the build drop to a location on the target machine. Components can be added to the release template by right-clicking on "Components" and choosing the "Add" option.

You can use actions or components directly within a release template, but not tools. So the relationship is this:

```
        /-> Action    -> Target server
Tool  -|
        \-> Component -> Build drop and target server
```

vNext releases do not have the concepts of actions or tools, only components. Components are reduced to serve only as pointers to the path relative to your build drop root where the binaries come from. There are some other distinctions, but those are the main ones.
I'm not sure that there is a universal definition that doesn't have exceptions, but I see it as: **Actions** - functionality that doesn't interact with a build eg starting or stopping a service (except for the Deploy using Chef or PS/DSC actions). Only used in Agent-based templates. **Tools** - functionality that interacts with a build and / or has a complex command line, eg deploying a website. Only used by **Components - Agent-based**. **Components - Agent-based** - users of tools and also the place where the location of the build is specified and where any token replacement is defined. When the component is used in a template the tool typically does something to the build eg the XCopy Deployer will copy the contents of 'Path to package' to the specified installation path. **Components - vNext** - only allow for specifying the location of the build and any token replacement since any work is done via a script. The component is 'consumed' by the **Deploy using Chef or PS/DSC** actions and is the way to tell these actions where to get the build. Now I've tried to explain this I can see what a muddle it is. At some point you will be able to bypass all this confusion since an all-new web-based version of Release Management will be available with TFS 2015 Update 1 (and earlier with Visual Studio Online). It might pay to hold off for this version if you can but it may be late this year or early next since TFS 2015 RTM isn't out yet. If you can't wait and need to get going now then go down the vNext PowerShell route to make for an easier transition to the web version.
4,786
28,314,742
Using the regex in python, I want to compile a string that gets the pattern "\1" up to "\9". I've tried

```
regex = re.compile("\\(\d)")   # sre_constants.error: unbalanced parenthesis
regex = re.compile("\\\(\d)")  # gets \\4 but not \4
```

but to no avail. Any thoughts?
2015/02/04
[ "https://Stackoverflow.com/questions/28314742", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4383952/" ]
One more: `re.compile("\\\\(\\d)")`. Or, a better option, a raw string: `re.compile(r"\\(\d)")`. The reason is the fact that backslash has meaning in both a string and in a regexp. For example, in regexp, `\d` is "a digit"; so you can't just use `\` for a backslash, and backslash is thus `\\`. But in a normal string, `\"` is a quote, so a backslash needs to be `\\`. When you combine the two, the string `"\\\\(\\d)"` actually contains `\\(\d)`, which is a regexp that matches `\` and a digit. Raw strings avoid the problem up to a point by giving backslashes a different and much more restricted semantics.
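A quick sketch of the two equivalent spellings side by side:

```python
import re

# The same pattern written two ways:
# "\\\\" in the source -> "\\" in the string -> one literal backslash in the regex
pat_escaped = re.compile("\\\\(\\d)")
# Raw string: backslashes are left alone, so the pattern is easier to read
pat_raw = re.compile(r"\\(\d)")

text = "backrefs look like \\4 or \\9 here"  # contains a literal \4 and \9
hits_escaped = pat_escaped.findall(text)
hits_raw = pat_raw.findall(text)
# both give ['4', '9']
```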
You should use a [raw-string](https://docs.python.org/3/reference/lexical_analysis.html#string-and-bytes-literals) (which does not process escape sequences):

```
regex = re.compile(r"\\(\d)")
```
4,787
70,854,703
I have a bunch of .json files that I am trying to access. I need to calculate the growing season of a particular crop based on the planting and harvest dates.

Problem: With the following code, I get this error: AttributeError: Can only use .dt accessor with datetimelike values

Code:

```
import os
import copy
import json
import math

import numpy as np
import pandas as pd
import altair as alt

def get_data_single(exp_file_name):
    # Read exp
    with open(exp_file_name) as f:
        data = json.load(f)
    n = len(data['RotationComponents'])
    out_vars = ['crop', 'plant_date', 'harv_date', 'CWAD_obs']
    out = pd.DataFrame({'location': [exp_file_name.split('-')[0]] * n,
                        'crop': [np.nan] * n,
                        'plant_date': [np.nan] * n,
                        'harv_date': [np.nan] * n,
                        'ppop': [np.nan] * n,
                        'rowsp': [np.nan] * n})
    for i in range(n):
        crop_id = data['RotationComponents'][i]['Planting']['Crop']['SpeciesID']
        if crop_id == "CL":
            harvest_data = data['RotationComponents'][i]['Harvest']
            out.loc[i, 'crop'] = crop_id
            out.loc[i, 'plant_date'] = pd.to_datetime(data['RotationComponents'][i]['Planting']['Date'], errors='coerce', format='%Y-%m-%d')
            out.loc[i, 'ppop'] = data['RotationComponents'][i]['Planting']['Ppop']
            out.loc[i, 'rowsp'] = data['RotationComponents'][i]['Planting']['RowSpc']
            out.loc[i, 'harv_date'] = pd.to_datetime(harvest_data['EventDate']['Date'], errors='coerce', format='%Y-%m-%d')
    return out
```

```
data_files = ['IA_1.json', 'IA_2.json', 'IA_3.json', 'IA_4.json']
obs_df = pd.concat([get_data_single(j) for j in data_files]).dropna()
obs_df['plant_date'] = pd.to_datetime(obs_df['plant_date'])
obs_df['harv_date'] = pd.to_datetime(obs_df['harv_date'])
obs_df['plant_year'] = obs_df['plant_date'].dt.strftime('%Y')
obs_df['harv_year'] = obs_df['harv_date'].dt.strftime('%Y')
obs_df['season'] = obs_df.apply(lambda x: x['plant_year'] + '_' + x['harv_year'], axis=1)
obs_df
```

I am trying to get the obs_df to give the following based on the above code:

```
location  crop  plant_date  harv_date  ppop  rowsp  plant_year  harv_year  season
```

Traceback error:

```
AttributeError                            Traceback (most recent call last)
/var/folders/ld/b6cy4j0d6pjb0t8d97rrgvbm0000gs/T/ipykernel_80477/757702288.py in <module>
      1 data_files = ['IA_1.json', 'IA_2.json', 'IA_3.json', 'IA_4.json']
      2 obs_df = pd.concat([get_data_single(j) for j in data_files]).dropna()
----> 3 obs_df['plant_year'] = obs_df['plant_date'].dt.strftime('%Y')
      4 obs_df['harv_year'] = obs_df['harv_date'].dt.strftime('%Y')
      5 obs_df['season'] = obs_df.apply(lambda x: x['plant_year'] + '_' + x['harv_year'], axis=1)

~/miniconda3/lib/python3.9/site-packages/pandas/core/generic.py in __getattr__(self, name)
   5485         ):
   5486             return self[name]
-> 5487         return object.__getattribute__(self, name)
   5488
   5489     def __setattr__(self, name: str, value) -> None:

~/miniconda3/lib/python3.9/site-packages/pandas/core/accessor.py in __get__(self, obj, cls)
    179             # we're accessing the attribute of the class, i.e., Dataset.geo
    180             return self._accessor
--> 181         accessor_obj = self._accessor(obj)
    182         # Replace the property with the accessor object. Inspired by:
    183         # https://www.pydanny.com/cached-property.html

~/miniconda3/lib/python3.9/site-packages/pandas/core/indexes/accessors.py in __new__(cls, data)
    504             return PeriodProperties(data, orig)
    505
--> 506         raise AttributeError("Can only use .dt accessor with datetimelike values")

AttributeError: Can only use .dt accessor with datetimelike values
```

Example json file:

```
"SDate": "1999-05-26", "NYrs": 22, "SimControl": "N", "RotationComponents": [ { "Planting": { "Ppop": 250.00886969940453, "RowSpc": 25.0, "SDepth": 4.0, "Crop": { "SpeciesID": "RY", "CropParams": { "Name": "Rye", "RootC": 40.0, "RootN": 4.0, "RootP": 0.4, "RootSloC": 30.0, "RootIntC": 30.0, "RootSloN": 10.0, "VegC": 40.0, "VegSloC": 30.0, "VegIntC": 30.0, "VegSloN": 10.0, "KnDnFrac": 0.5, "VegN": 4.0, "VegP": 0.4, "Code": null }, "version": { "id": "global/crops/RY", "version": 1, "updateTime": "2020-09-29T00:00:00Z", "derivedFromId": null, "derivedFromVersion": null, "modelVersion": 1 }, "Name": null, "cultivarId": null, "CHeight": 1.0, "PhotoSyn": { "PhotSynID": "C3", "Points": [ { "CO2": 0.0, "Multiplier": 0.0 }, { "CO2": 220.0, "Multiplier": 0.71 }, { "CO2": 330.0, "Multiplier": 1.0 }, { "CO2": 440.0, "Multiplier": 1.08 }, { "CO2": 550.0, "Multiplier": 1.17 }, { "CO2": 660.0, "Multiplier": 1.25 }, { "CO2": 770.0, "Multiplier": 1.32 }, { "CO2": 880.0, "Multiplier": 1.38 }, { "CO2": 990.0, "Multiplier": 1.43 }, { "CO2": 9999.0, "Multiplier": 1.5 } ] }, "NitrogenFixing": null, "relTT_P1": 0.35, "relTT_P2": 0.62, "relTT_Fl": null, "relTT_Sn": 0.8, "relLAI_P1": 0.02, "relLAI_P2": 0.6, "PlntN_Em": 0.03616, "PlntN_Hf": 0.0288, "PlntN_Mt": 0.014, "GrnN_Mt": 0.023, "PlntP_Em": 0.004, "PlntP_Hf": 0.003, "PlntP_Mt": 0.0024, "GrnP_Mt": 0.0037, "LAImax": 3.0, "RUEmax": 1.5, "SnParLAI": 0.61, "SnParRUE": 1.0, "TbaseDev": 0.0, "ToptDev": 15.0, "TTtoGerm": 20.0, "TTtoMatr": 1200.0, "EmgInter": 15.0, "EmgSlope": 6.0, "HrvIndex": 0.03, "Kf2": null, "MaturityGroup": null, "Source": 
"ALMANAC (ABRVCROP.DAT) modified through calibration", "LT50C": -28.0 }, "Date": "1999-11-23", "VariableRateSeeding": false, "Area": null }, ```
2022/01/25
[ "https://Stackoverflow.com/questions/70854703", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10609402/" ]
```
>>> df
   plant_date
0         NaN
1  2021-11-12

>>> df.loc[1, 'plant_date'] = pd.to_datetime(df.loc[1, 'plant_date'], errors='coerce', format='%Y-%m-%d')
>>> df
            plant_date
0                  NaN
1  2021-11-12 00:00:00
```

Due to how you're creating your dataframe - it being prepopulated with `np.nan` and converting individual "rows" to datetime - the "type" of the column will be `object` (because `nan` is of type float - so the column contains "mixed types")

```
>>> df.dtypes
plant_date    object
--------------^^^^^^
dtype: object

>>> df['plant_date'].dt
AttributeError: Can only use .dt accessor with datetimelike values
```

In order to use `.dt` you need the column to be of type datetime

```
>>> df
   plant_date
0         NaN
1  2021-11-12

>>> df.dtypes
plant_date    object
dtype: object
```

The simplest way is probably just to call `to_datetime()` on the whole column at once - either on `out` in your function - or on `obs_df` after you create it.

```
>>> df['plant_date'] = pd.to_datetime(df['plant_date'])
>>> df
  plant_date
0        NaT
1 2021-11-12

>>> df.dtypes
plant_date    datetime64[ns]
dtype: object
```

Now you can use `.dt` without error

```
>>> df['plant_date'].dt
<pandas.core.indexes.accessors.DatetimeProperties object at 0x1210fe160>
```

The `harv_date` column should have the same issue.
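The same fix as a minimal self-contained sketch (the column name and values are made up to mirror the question):

```python
import numpy as np
import pandas as pd

# Mixed float-NaN and strings -> dtype object, so .dt is unavailable
df = pd.DataFrame({'plant_date': [np.nan, '2021-11-12']})

# Convert the whole column at once; NaN becomes NaT and dtype is datetime64[ns]
df['plant_date'] = pd.to_datetime(df['plant_date'])

# Now the .dt accessor works
years = df['plant_date'].dt.strftime('%Y')
# years[1] == '2021'; years[0] is NaN for the NaT entry
```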
I was able to make it work. Here is the updated code. Right after `pd.concat`, the following code helped:

```
obs_df['plant_date'] = pd.to_datetime(obs_df['plant_date'])
obs_df['harvest_date'] = pd.to_datetime(obs_df['harvest_date'])
```
4,790
33,579,522
I am a new python user and I am quite interested in understanding in depth how the NumPy module works. I am writing a function able to use both masked and unmasked arrays as data input. I have noticed that there are several [numpy masked operations](http://docs.scipy.org/doc/numpy/reference/routines.ma.html) that look similar (and even work?) to their normal (unmasked) counterparts. One such pair of functions is `numpy.zeros` and `numpy.ma.zeros`. Could someone tell me the advantage of, say, creating an array using `numpy.ma.zeros` vs. `numpy.zeros`? Does it make an actual difference when you are using masked arrays? I have noticed that when I use `numpy.zeros_like` it works fine for creating both masked and unmasked arrays.
2015/11/07
[ "https://Stackoverflow.com/questions/33579522", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5373784/" ]
`np.ma.zeros` creates a masked array rather than a normal array, which could be useful if some later operation on this array creates invalid values. An example from the manual:

> Arrays sometimes contain invalid or missing data. When doing operations on such arrays, we wish to suppress invalid values, which is the purpose masked arrays fulfill (an example of typical use is given below).
>
> For example, examine the following array:
>
> ```
> >>> x = np.array([2, 1, 3, np.nan, 5, 2, 3, np.nan])
> ```
>
> When we try to calculate the mean of the data, the result is undetermined:
>
> ```
> >>> np.mean(x)
> nan
> ```
>
> The mean is calculated using roughly `np.sum(x)/len(x)`, but since any number added to `NaN` produces `NaN`, this doesn't work. Enter masked arrays:
>
> ```
> >>> m = np.ma.masked_array(x, np.isnan(x))
> >>> m
> masked_array(data = [2.0 1.0 3.0 -- 5.0 2.0 3.0 --],
>              mask = [False False False  True False False False  True],
>        fill_value=1e+20)
> ```
>
> Here, we construct a masked array that suppresses all `NaN` values. We may now proceed to calculate the mean of the other values:
>
> ```
> >>> np.mean(m)
> 2.6666666666666665
> ```
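As a concrete sketch of the `np.ma.zeros` vs `np.zeros` difference: the masked variant returns a `MaskedArray`, so masks carry through later arithmetic:

```python
import numpy as np

z_plain = np.zeros(3)      # plain np.ndarray
z_masked = np.ma.zeros(3)  # np.ma.MaskedArray with the same contents

# Masks propagate through arithmetic on the masked version
x = np.ma.masked_array([1.0, 2.0, 3.0], mask=[False, True, False])
total = (z_masked + x).sum()  # the masked slot is simply skipped -> 4.0
```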
As a beginner, don't get too bogged down with masked arrays. It's a subclass of `np.ndarray` that is useful when dealing with data that has some bad values that you'd like to ignore when calculating things like the mean. But otherwise you should focus on creation and indexing (and calculations) with the base numpy class.

Not only is a `ma` array a subclass, it contains 2 regular arrays. One has the data, including any 'bad' values. That is a regular numpy array. The other is a boolean array, the mask. The developers of the masked class tried to make it behave in the same ways as the regular arrays, but with this added masking.

Most, if not all, of the added features of masked arrays are implemented in Python code. It is hard to understand the underlying C code for `numpy`, but it is instructive to look at the functions and methods that are implemented in Python. I often look at those in an `ipython` session, but they can also be studied on the numpy github repository.
4,791
17,497,860
I am trying to download an entire playlist of an Android development tutorial from YouTube. So I used [savefrom](http://en.savefrom.net/) for generating download links for the playlist. But the problem is that I have so many videos in that playlist. So, I decided to write a python script for making this work simpler. The problem is that the site uses JavaScript to generate the link, so I can't fetch the generated link from the raw page source. Example: <http://ssyoutube.com/watch?v=AfleuRtrJoA> It takes 5 sec to generate download links. I want to get the page source only after 5 sec **from the browser**. For this kind of work I found a good package named [selenium](https://pypi.python.org/pypi/selenium).

```
import time
from selenium import webdriver

def savefromnotnet(url):
    browser = webdriver.Firefox()  # Get local session of firefox
    browser.get(url)               # Load page
    time.sleep(5)                  # Let the page load, will be added to the API
    return browser.page_source()

source = savefromnotnet("http://ssyoutube.com/watch?v=AfleuRtrJoA")
```

The `savefromnotnet` function opens Firefox and requests the url, and up to this point everything works fine. But when I want to get the page source with `browser.page_source()` it shows the following error.

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python27\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 523, in runfile
    execfile(filename, namespace)
  File "C:\Users\BK\Desktop\Working Folder\Python Temp\temp.py", line 10, in <module>
    source = savefromnotnet("http://ssyoutube.com/watch?v=AfleuRtrJoA")
  File "C:\Users\BK\Desktop\Working Folder\Python Temp\temp.py", line 8, in savefromnotnet
    return browser.page_source()
TypeError: 'unicode' object is not callable
```
2013/07/05
[ "https://Stackoverflow.com/questions/17497860", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1464519/" ]
The error occurred on the following line:

```
return browser.page_source()
```

The brackets are not needed, since `page_source` is a property rather than a method:

```
return browser.page_source
```
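The underlying error is general Python behavior rather than anything Selenium-specific: accessing a property returns its value, and calling that value raises the same kind of `TypeError` (in Python 3 the message says `str` instead of `unicode`). A toy sketch with a made-up class:

```python
class FakeBrowser:
    # Mimics how selenium exposes page_source as a property
    @property
    def page_source(self):
        return "<html>...</html>"

browser = FakeBrowser()
text = browser.page_source      # fine: property access returns the str

try:
    browser.page_source()       # tries to call the returned str
except TypeError as e:
    msg = str(e)                # "'str' object is not callable"
```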
I think not!

```
pcode = wdriver.page_source()
```

is absolutely the right call, by auto-complete in a Python IDE. I have the same problem. Looks like we need to encode the page-source text variable in something like classic ANSI.
4,792
73,099,677
Could you help me? I'm trying to close my main window and then create a new window. I'm using withdraw() instead of destroy() since I'm planning to use that widget later. Here is my code, but I just get: `tkinter.TclError: image "pyimage10" doesn't exist`

I separated the code of the main window and the new window into two python files, "Page1" and "Page2". Sorry for my rough English. Any help to fix this would be much appreciated:)

`tkinter.TclError: image "pyimage10" doesn't exist` seems to occur at `image_2 = canvas.create_image(`

Page1

```
from tkinter import Tk, Canvas, Entry, Text, Button, PhotoImage

OUTPUT_PATH = Path(__file__).parent
ASSETS_PATH = OUTPUT_PATH / Path("./assets")

def relative_to_assets(path: str) -> Path:
    return ASSETS_PATH / Path(path)

window = Tk()
window.geometry("703x981")
window.configure(bg = "#FFFFFF")

button_image_6 = PhotoImage(
    file=relative_to_assets("button_6.png"))
button_6 = Button(
    image=button_image_6,
    borderwidth=0,
    highlightthickness=0,
    command=lambda: ("WM_DELETE_WINDOW", nextPage()),
    relief="flat"
)
button_6.place(
    x=415.0,
    y=793.0,
    width=86.0,
    height=78.0
)

def nextPage():
    window.withdraw()
    import Page2
```

Page2

```
from tkinter import Tk, Canvas, Entry, Text, Button, PhotoImage

window = Tk()

def relative_to_assets(path: str) -> Path:
    return ASSETS_PATH / Path(path)

window.geometry("545x470")
window.configure(bg = "#FFFFFF")
window.title('PythonGuides')

canvas = Canvas(
    window,
    bg = "#FFFFFF",
    height = 470,
    width = 545,
    bd = 0,
    highlightthickness = 0,
    relief = "ridge"
)
canvas.place(x = 0, y = 0)

image_image_2 = PhotoImage(
    file=relative_to_assets("image_2.png"))
image_2 = canvas.create_image(
    272.0,
    175.0,
    image=image_image_2
)
```
2022/07/24
[ "https://Stackoverflow.com/questions/73099677", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14257622/" ]
`::with('trans')` returns a completely new query and forgets everything about your `::find($id)`.
What happens is, `House::find($id)` is a method that returns the element by its primary key. Then, `::with('trans')` takes the result, sees the House class and starts creating a new query builder for the Model. Finally, the `->get()` runs this new query, and the end result is what is returned for the `$house` value. You have two options that will result in what you want to do. The first one is to find the house entry and then load its relation:

```
$house = House::find($id);
$house->load('trans');
```

The second option is this:

```
$house = House::where('id', $id)->with('trans')->first();
```

The second one is a query builder that returns the first element with id = $id, and loads its relation with translations. You can check for more details in the [laravel documentation](https://laravel.com/docs/9.x/eloquent#collections). They have a very well-written documentation and the examples help a lot.
4,793
67,353,503
I'm new to python and I'm trying to parse a list like `["+",1,3,3]`, identify its string operator (`"+"`, `"-"`, `"x"`, `"/"`), and convert it into a question and answer dictionary with just two keys, "qns" and "ans", e.g. `{"qns": "1 + 3 + 3", "ans": 7}`.

So the question is: there is a nested list as input to the function, and I'm supposed to convert each inner list to a dictionary. The lists are identified by their first string index ("+", "-", "x", "/"), and based on these strings I'm to output a dictionary with two keys. The first is "qns", which is formatted according to the operator ("+" -> "1 + 3 + 3"), and the second is "ans" (-> 7), combining both into `{"qns": "1 + 3 + 3", "ans": 7}`.

So far I'm only able to come up with this, and I'm getting an error when I parse in a smaller list `["x",3,2]` instead of `["+",1,3,3]`. Is there a better way to do this instead of an if-else statement?

```
def math_qns(input):
    new_list = []
    for x in input:
        if x[0] == "+":
            if x[3]:
                math = x[1] + x[2] + x[3]
                str_math = "{1} {0} {2} {0} {3}".format(x[0], x[1], x[2], x[3])
            else:
                math = x[1] + x[2]
                str_math = "{1} {0} {2}".format(x[0], x[1], x[2])
        if x[0] == "-":
            if x[3]:
                math = x[1] - x[2] - x[3]
                str_math = "{1} {0} {2} {0} {3}".format(x[0], x[1], x[2], x[3])
            else:
                math = x[1] - x[2]
                str_math = "{1} {0} {2}".format(x[0], x[1], x[2])
        if x[0] == "x":
            if x[3]:
                math = x[1] * x[2] * x[3]
                str_math = "{1} {0} {2} {0} {3}".format(x[0], x[1], x[2], x[3])
            else:
                math = x[1] * x[2]
                str_math = "{1} {0} {2}".format(x[0], x[1], x[2])
        if x[0] == "/":
            if x[3]:
                math = x[1] / x[2] / x[3]
                str_math = "{1} {0} {2} {0} {3}".format(x[0], x[1], x[2], x[3])
            else:
                math = x[1] / x[2]
                str_math = "{1} {0} {2}".format(x[0], x[1], x[2])
        qnsans = {
            "qns": str_math,
            "ans": math
        }
        new_list.append(qnsans)
    return print(new_list)

def main():
    input_list = [["+",1,3,3], ["-",2,5,-1], ["x",3,2], ["/",12,3,2], ["x",0,23], ["+",1,2,3,4]]
    math_qns(input_list)
```

Ideally the outcome would be:

```
[{"qns": "1 + 3 + 3", "ans": 7}, {"qns": "2 - 5 - -1", "ans": -2}, {"qns": "3 x 2", "ans": 6}, {"qns": "12 / 3 / 2", "ans": 2}, {"qns": "0 x 23", "ans": 0}, {"qns": "1 + 2 + 3 + 4", "ans": 10}]
```

So far I've been getting an error:

```
if x[3]:
IndexError: list index out of range
```

Would be thankful for any advice, as I've been stuck for quite some time trying to come up with a new way to make it more efficient while still accomplishing its objective.
2021/05/02
[ "https://Stackoverflow.com/questions/67353503", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12929490/" ]
A variable of type `dynamic` would be similar to javascript where it can change type during runtime. For example store an integer then change to a string. `var` is not the same as dynamic. `var` is an easy way to initialise variables as you don't have to explicitly state the type. Dart just infers the type to make it easier for you. If you write `int number = 5` it would be the same as `var number = 5` as dart would infer that this variable is an integer. The reason the tutorial might have said that `var` is better than `int` may be convention to make the code more readable but I believe it doesn't have any impact on your code. You can use either and it won't make a difference.
The reason I think var is better is that it is slightly more convenient, for example

```dart
var n = 1;
int m = 1;
```

both `n` and `m` are integers. If some day you change your mind and want to reinitialize them with a new decimal number instead:

```dart
var n = 9.9;
double m = 9.9;
```

in this case, var requires less typing and is still able to infer the type correctly.
4,795
57,948,794
I have made a Python application which uses GTK. I want to show the user a dialog asking for confirmation of an action; however, after creating the dialog based on [this](https://python-gtk-3-tutorial.readthedocs.io/en/latest/dialogs.html) tutorial, I noticed that there is no apparent way to center-align the 'Cancel' and 'OK' buttons. The relevant code from that tutorial is as follows:

```py
class DialogExample(Gtk.Dialog):
    def __init__(self, parent):
        Gtk.Dialog.__init__(self, "My Dialog", parent, 0,
            (Gtk.STOCK_CANCEL, Gtk.ResponseType.CANCEL,
             Gtk.STOCK_OK, Gtk.ResponseType.OK))

        self.set_default_size(150, 100)

        label = Gtk.Label("This is a dialog to display additional information")

        box = self.get_content_area()
        box.add(label)
        self.show_all()
```

In the example above, the buttons are aligned to the right. Is there any way to center-align the buttons using this method of creating dialogs?
2019/09/15
[ "https://Stackoverflow.com/questions/57948794", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11116713/" ]
> **Question**: Aligning dialog buttons to center

---

* [Gtk.Dialog.get\_action\_area](http://lazka.github.io/pgi-docs/#Gtk-3.0/classes/Dialog.html#Gtk.Dialog.get_action_area)
* [Gtk.Widget.props.parent](http://lazka.github.io/pgi-docs/#Gtk-3.0/classes/Widget.html#Gtk.Widget.props.parent)
* [Gtk.Box.set\_child\_packing](http://lazka.github.io/pgi-docs/#Gtk-3.0/classes/Box.html#Gtk.Box.set_child_packing)
* [Gtk.Box.set\_center\_widget](http://lazka.github.io/pgi-docs/#Gtk-3.0/classes/Box.html#Gtk.Box.set_center_widget)

---

1. You want to *center* the `action_area` of a `Gtk.Dialog`, which is of type `Gtk.ButtonBox`. Get the `action_area`.

   > **Note**: Deprecated since version 3.12: Direct access to the action area is discouraged

   ```
   a_area = self.get_action_area()
   ```

2. You need the `parent` of the `action_area`, which is of type `Gtk.Box`. Get the `parent` box.

   ```
   box = a_area.props.parent
   ```

3. To *center* a `Gtk.Widget` you have to reset the default `packing` from `expand=False` to `True`. All other `packing` options stay unchanged.

   ```
   box.set_child_packing(a_area, True, False, 0, Gtk.PackType.END)
   ```

4. Now, you can *center* the `action_area`.

   ```
   a_area.set_center_widget(None)
   ```

---

> **Output**:

[![enter image description here](https://i.stack.imgur.com/aEkC7.png)](https://i.stack.imgur.com/aEkC7.png)

***Tested with Python: 3.5 - gi.\_\_version\_\_: 3.22.0***
I don't know if this is a cheaty way of doing it, but it seems to work.

```
action_area = self.get_action_area()
action_area.set_halign(Gtk.Align.CENTER)  # Gtk.Align.CENTER is the enum behind the magic number 3
```
4,796
25,849,850
In python 3.4.0, using `json.dumps()` throws me a TypeError in one case but works like a charm in other case (which I think is equivalent to the first one). I have a dict where keys are strings and values are numbers and other dicts (i.e. something like `{'x': 1.234, 'y': -5.678, 'z': {'a': 4, 'b': 0, 'c': -6}}`). This fails (the stacktrace is not from this particular code snippet but from my larger script which I won't paste here but it is essentialy the same): ``` >>> x = dict(foo()) # obtain the data and make a new dict of it to really be sure >>> import json >>> json.dumps(x) Traceback (most recent call last): File "/mnt/data/gandalv/progs/pycharm-3.4/helpers/pydev/pydevd.py", line 1733, in <module> debugger.run(setup['file'], None, None) File "/mnt/data/gandalv/progs/pycharm-3.4/helpers/pydev/pydevd.py", line 1226, in run pydev_imports.execfile(file, globals, locals) # execute the script File "/mnt/data/gandalv/progs/pycharm-3.4/helpers/pydev/_pydev_execfile.py", line 38, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) #execute the script File "/mnt/data/gandalv/School/PhD/Other work/Krachy/code/recalculate.py", line 54, in <module> ls[1] = json.dumps(f) File "/usr/lib/python3.4/json/__init__.py", line 230, in dumps return _default_encoder.encode(obj) File "/usr/lib/python3.4/json/encoder.py", line 192, in encode chunks = self.iterencode(o, _one_shot=True) File "/usr/lib/python3.4/json/encoder.py", line 250, in iterencode return _iterencode(o, 0) File "/usr/lib/python3.4/json/encoder.py", line 173, in default raise TypeError(repr(o) + " is not JSON serializable") TypeError: 306 is not JSON serializable ``` The `306` is one of the values in one of ther inner dicts in `x`. It is not always the same number, sometimes it is a different number contained in the dict, apparently because of the unorderedness of a dict. 
However, this works like a charm:

```
>>> x = foo() # obtain the data and make a new dict of it to really be sure
>>> import ast
>>> import json
>>> x2 = ast.literal_eval(repr(x))
>>> x == x2
True
>>> json.dumps(x2)
"{...}" # the json representation of dict as it should be
```

Could anyone please tell me why this happens, or what the cause could be? The most confusing part is that those two dicts (the original one and the one obtained by evaluating the representation of the original one) are equal, but the `dumps()` function behaves differently for each of them.
2014/09/15
[ "https://Stackoverflow.com/questions/25849850", "https://Stackoverflow.com", "https://Stackoverflow.com/users/461202/" ]
The cause was that the numbers inside the `dict` were not ordinary Python `int`s but `numpy.int64`s, which are apparently not supported by the json encoder.
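A minimal sketch of the workaround. The `Int64` class below is only a stand-in for `numpy.int64` so the snippet has no numpy dependency; with real numpy values, passing `default=int` to `json.dumps` works the same way, because the `default=` hook is called for every object the encoder cannot serialize on its own:

```python
import json

# stand-in for numpy.int64: json.dumps cannot serialize it natively,
# just like the real numpy scalar type
class Int64:
    def __init__(self, v):
        self.v = v
    def __int__(self):
        return self.v

x = {'a': Int64(306), 'b': {'c': Int64(4)}}

# coercing the unknown object to a plain Python int fixes the TypeError
print(json.dumps(x, default=int))  # {"a": 306, "b": {"c": 4}}
```

Without `default=int`, the same call raises `TypeError: ... is not JSON serializable`, which is exactly the error in the question.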
As you have seen, numpy int64 data types are not serializable into json directly: ``` >>> import numpy as np >>> import json >>> a=np.zeros(3, dtype=np.int64) >>> a[0]=-9223372036854775808 >>> a[2]=9223372036854775807 >>> jstr=json.dumps(a) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/Cellar/python3/3.4.1/Frameworks/Python.framework/Versions/3.4/lib/python3.4/json/__init__.py", line 230, in dumps return _default_encoder.encode(obj) File "/usr/local/Cellar/python3/3.4.1/Frameworks/Python.framework/Versions/3.4/lib/python3.4/json/encoder.py", line 192, in encode chunks = self.iterencode(o, _one_shot=True) File "/usr/local/Cellar/python3/3.4.1/Frameworks/Python.framework/Versions/3.4/lib/python3.4/json/encoder.py", line 250, in iterencode return _iterencode(o, 0) File "/usr/local/Cellar/python3/3.4.1/Frameworks/Python.framework/Versions/3.4/lib/python3.4/json/encoder.py", line 173, in default raise TypeError(repr(o) + " is not JSON serializable") TypeError: array([-9223372036854775808, 0, 9223372036854775807]) is not JSON serializable ``` However, Python integers -- including longer integers -- can be serialized and deserialized: ``` >>> json.loads(json.dumps(2**123))==2**123 True ``` So with numpy, you can convert directly to Python data structures then serialize: ``` >>> jstr=json.dumps(a.tolist()) >>> b=np.array(json.loads(jstr)) >>> np.array_equal(a,b) True ```
4,797
24,690,298
Python matplotlib gives very nice figures. How do I call Python matplotlib in a Qt C++ project? I'd like to put those figures in Qt dialogs and have the data transferred via memory.
2014/07/11
[ "https://Stackoverflow.com/questions/24690298", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1899020/" ]
You can create a python script with function calls to matplotlib and add them as callback functions in your C++ code. [This tutorial](http://www.codeproject.com/Articles/11805/Embedding-Python-in-C-C-Part-I) explains how this can be done. I also recommend reading the documentation on [Python.h](https://docs.python.org/2/extending/extending.html).
I would try using [matplotlib-cpp](https://github.com/lava/matplotlib-cpp). It is built to resemble the plotting API used by Matlab and matplotlib. Basically it is a C++ wrapper around matplotlib and it's header only. Keep in mind though that it does not provide all the matplotlib features from python. Here is the initial example from GitHub: ``` #include "matplotlibcpp.h" namespace plt = matplotlibcpp; int main() { plt::plot({1,3,2,4}); plt::show(); } ``` Compile ``` g++ minimal.cpp -std=c++11 -I/usr/include/python2.7 -lpython2.7 ``` [![Plot of the minimal example](https://i.stack.imgur.com/d5NqD.png)](https://i.stack.imgur.com/d5NqD.png)
4,798
55,503,673
Let's say I have a python function whose single argument is a non-trivial type: ``` from typing import List, Dict ArgType = List[Dict[str, int]] # this could be any non-trivial type def myfun(a: ArgType) -> None: ... ``` ... and then I have a data structure that I have unpacked from a JSON source: ``` import json data = json.loads(...) ``` My question is: How can I check *at runtime* that `data` has the correct type to be used as an argument to `myfun()` before using it as an argument for `myfun()`? ``` if not isCorrectType(data, ArgType): raise TypeError("data is not correct type") else: myfun(data) ```
2019/04/03
[ "https://Stackoverflow.com/questions/55503673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/28835/" ]
It's awkward that there's no built-in function for this, but [`typeguard`](https://pypi.org/project/typeguard/) comes with a convenient `check_type()` function:

```
>>> from typeguard import check_type
>>> from typing import List
>>> check_type("foo", [1, 2, "3"], List[int])
Traceback (most recent call last):
  ...
TypeError: type of foo[2] must be int; got str instead
```

For more see: <https://typeguard.readthedocs.io/en/latest/api.html#typeguard.check_type>
The common way to handle this is by making use of the fact that if whatever object you pass to `myfun` doesn't have the required functionality a corresponding exception will be raised (usually `TypeError` or `AttributeError`). So you would do the following: ``` try: myfun(data) except (TypeError, AttributeError) as err: # Fallback for invalid types here. ``` You indicate in your question that you would raise a `TypeError` if the passed object does not have the appropriate structure but Python does this already for you. The critical question is how you would handle this case. You could also move the `try / except` block into `myfun`, if appropriate. When it comes to typing in Python you usually rely on [duck typing](https://en.wikipedia.org/wiki/Duck_typing): if the object has the required functionality then you don't care much about what type it is, as long as it serves the purpose. Consider the following example. We just pass the data into the function and then get the `AttributeError` for free (which we can then except); no need for manual type checking: ``` >>> def myfun(data): ... for x in data: ... print(x.items()) ... >>> data = json.loads('[[["a", 1], ["b", 2]], [["c", 3], ["d", 4]]]') >>> myfun(data) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 3, in myfun AttributeError: 'list' object has no attribute 'items' ``` In case you are concerned about the usefulness of the resulting error, you could still except and then re-raise a custom exception (or even change the exception's message): ``` try: myfun(data) except (TypeError, AttributeError) as err: raise TypeError('Data has incorrect structure') from err try: myfun(data) except (TypeError, AttributeError) as err: err.args = ('Data has incorrect structure',) raise ``` When using third-party code one should always check the documentation for exceptions that will be raised. 
For example [`numpy.inner`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.inner.html) reports that it will raise a `ValueError` under certain circumstances. When using that function we don't need to perform any checks ourselves but rely on the fact that it will raise the error if needed. When using third-party code for which it is not clear how it will behave in some corner-cases, i.m.o. it is easier and clearer to just hardcode a corresponding type checker (see below) instead of using a generic solution that works for any type. These cases should be rare anyway and leaving a corresponding comment makes your fellow developers aware of the situation. The `typing` library is for type-hinting and as such it won't be checking the types at runtime. Sure you could do this manually but it is rather cumbersome: ``` def type_checker(data): return ( isinstance(data, list) and all(isinstance(x, dict) for x in list) and all(isinstance(k, str) and isinstance(v, int) for x in list for k, v in x.items()) ) ``` This together with an appropriate comment is still an acceptable solution and it is reusable where a similar data structure is expected. The intent is clear and the code is easily verifiable.
4,799
72,709,963
Consider that I have 5 files in 5 different locations. Example: fileA in XYZ location, fileB in ZXC location, fileC in XBN location, and so on. I want to check whether these files are actually saved in those locations; if they are not, re-run the code above that saves the files. Ex:

```
if: fileA, fileB and so on are present in their particular locations, then proceed with the code further
else: re-run the file-saving code above
```

How do I do this in Python? I am not able to figure it out.
2022/06/22
[ "https://Stackoverflow.com/questions/72709963", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19189066/" ]
You can store all your files with their locations in a list and then iterate all locations for existence then you can decide further what to do. A Python example: ```py from os.path import exists # all files to check in different locations locations = [ '/some/location/xyz/fileA', '/other/location/fileB', '/yet/another/location/fileC', ] # iterate to check each file existance status = [exists(location) for location in locations] # check the status of all files # if any of the files doesn't exist, else will be called if(all(status)): print('All files are present.') else: print('Any or all files do not exist.') ```
I'm not a python dev, and I just wanted to try to contribute to the community. The first answer is way better than mine, but I'd like to share my solution for that question.

You could use `sys` to pass the files' names, inside a `try` block to handle when the files are not found. If you run the script from one location while the files are at another location, you would need to provide their paths.

`check.py ../test1.txt ../test2.txt ../test3.txt`

```
#!/usr/bin/python3
import os.path
import sys

try:
    # paths are passed on the command line: check.py file1 file2 file3
    paths = [sys.argv[1], sys.argv[2], sys.argv[3]]
    for fpath in paths:
        if os.path.isfile(fpath):
            print(fpath, "exists on system")
except IndexError:
    # not all paths were given: (re)create the files
    print("Saving content to files")
    for fname in ("test1.txt", "test2.txt", "test3.txt"):
        with open(fname, "w") as f:
            f.write("Saving content to files\n")
```

The `exception` part would then "save" the files, by creating new ones, writing whatever content you desire.
4,809
19,001,826
It starts with a url on the web (ex: <http://python.org>), fetches the web-page corresponding to that url, and parses all the links on that page into a repository of links. Next, it fetches the contents of any of the urls from the repository just created, parses the links from this new content into the repository, and continues this process for all links in the repository until stopped or until a given number of links has been fetched. How can I do that using Python and Scrapy? I am able to scrape all the links on a webpage, but how do I perform this recursively in depth?
2013/09/25
[ "https://Stackoverflow.com/questions/19001826", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3001533/" ]
`sfence` doesn't block StoreLoad reordering. Unless there are any NT stores in flight, it's architecturally a no-op. Stores already wait for older stores to commit before they themselves commit to L1d and become globally visible, because x86 doesn't allow StoreStore reordering. (Except for NT stores / stores to WC memory) **For seq\_cst you need a full barrier to flush the store buffer / make sure all older stores are globally visible *before* any later loads.** See <https://preshing.com/20120515/memory-reordering-caught-in-the-act/> for an example where failing to use `mfence` in practice leads to non-sequentially-consistent behaviour, i.e. memory reordering. --- As you found, it is possible to map seq\_cst to x86 asm with full barriers on every seq\_cst load instead of on every seq\_cst store / RMW. In that case you wouldn't need any barrier instructions on stores (so they'd have release semantics), but you'd need `mfence` before every `atomic::load(seq_cst)`.
You don't need an `mfence`; `sfence` does indeed suffice. In fact, you never need `lfence` in x86 unless you are dealing with a device. But Intel (and I think AMD) has (or at least had) a single implementation shared with `mfence` and `sfence` (namely, flushing the store buffer), so there was no performance advantage to using the weaker `sfence`. BTW, note that you don't have to flush after every write to a shared variable; you only have to flush between a write and a subsequent read of a different shared variable.
4,810
66,105,974
I am new to regex and was wondering how the following could be implemented. For example, I have a CSS file with `url('Inter.ttf')`, and my Python program would convert this url to `url('user/Inter.ttf')`. However, I run into a problem when I try to avoid double replacement. So how can I use regex to tell Python the difference between `url('Inter.ttf')` and `url('/hello/Inter.ttf')` when using re.sub to replace them? I have tried `re.sub(r"\boriginalurl.ttf\b", "/user/" + originalurl.ttf, file)`, but this does not seem to work. So how would I tell Python to replace the whole word `'Inter.ttf'` with `'/user/Inter.ttf'` and `'/hello/Inter.ttf'` with `'/user/hello/Inter.ttf'`?
2021/02/08
[ "https://Stackoverflow.com/questions/66105974", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15168919/" ]
You can use a `look-around` method to insert the `/user/` dynamically:

```
(?<=url\(')/?(?=(?:.*?Inter\.ttf)'\))
```

And then use `re.sub` to replace with `/user/`:

```
strings = ["url('Inter.ttf')", "url('/hello/Inter.ttf')"]

p = re.compile(r"(?<=url\(')/?(?=(?:.*?Inter\.ttf)'\))")

for s in strings:
    s = re.sub(p, "/user/", s)
    print(s)
```

```
url('/user/Inter.ttf')
url('/user/hello/Inter.ttf')
```

Pattern Explanation
-------------------

`(?<=url\(')`: Positive lookbehind; matches strings that come after a string like `url('`.

`/?`: Matches **zero** or **one** forward slash `/`. This is important for matching paths like `/hello/Inter.ttf` because it starts with the `/`. The optional slash is consumed and replaced by the trailing forward slash of the replacement string, `/user/`.

`(?=(?:.*?Inter\.ttf)'\))`: Positive lookahead; matches strings that come **before** a string that ends with `Inter.ttf')`.

I suggest playing around with it on <https://regex101.com>, selecting the `Substitution` method on the left-hand-side.

Edit
----

If you want to match multiple fonts, you can just remove the `Inter.ttf` part of the regex:

```
(?<=url\(')/?(?=(?:.*?)'\))
```

Alternatively, if you wanted it to append `/user/` to paths that had a file extension, you can replace `Inter\.ttf` with `\.\w{3}`, which effectively matches 3 of any character in `[a-zA-Z0-9_]`:

```
(?<=url\(')/?(?=(?:.*?\.\w{3})'\))
```
a simple way to do that is like this without regex: ``` fin = open("input.css", "rt") fout = open("out.css", "wt") for line in fin: if "'Inter.ttf'" in line: fout.write(line.replace("'Inter.ttf'", "'/user/Inter.ttf'")) elif "'/hello/Inter.ttf'" in line: fout.write(line.replace("'/hello/Inter.ttf'", "'/user/hello/Inter.ttf'")) else: fout.write(line) ```
4,811
25,623,841
I am using Python 2.7.5. When raising an int to the power of zero you would expect to see either -1 or 1, depending on whether the base was positive or negative. Typing directly into the python interpreter yields the following:

```
>>> -2418**0
-1
```

This is the correct answer. However, when I type this into the same interpreter:

```
>>> result = -2481
>>> result**0
1
```

the answer is 1 instead of -1. Using the `complex` builtin [as suggested here](https://stackoverflow.com/questions/17747124/valueerror-negative-number-cannot-be-raised-to-a-fractional-power) has no effect on the outcome. Why is this happening?
2014/09/02
[ "https://Stackoverflow.com/questions/25623841", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1139011/" ]
Why would you expect it to be -1? 1 is (according to the definition I was taught) the correct answer. The first gives the incorrect answer due to operator precedence. ``` (-1)**0 = 1 -1**0 = -(1**0) = -(1) = -1 ``` See Wikipedia for the definition of the 0 exponent: <http://en.wikipedia.org/wiki/Exponentiation#Zero_exponent>
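The precedence behaviour is easy to verify directly in the interpreter; here is a quick sketch (nothing version-specific, the same parse applies in Python 2 and 3):

```python
# ** binds tighter than unary minus, so the literal parses as -(2418**0)
print(-2418**0)     # -1
print(-(2418**0))   # -1, the same parse made explicit
print((-2418)**0)   # 1, parenthesized base

# with a variable, the negation already happened at assignment time,
# so the power really is applied to the negative number
result = -2481
print(result**0)    # 1
```

This is why the two snippets in the question behave differently even though they look equivalent.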
`-2418**0` is interpreted (mathematically) as `-1 * (2418**0)` so the answer is `-1 * 1 = -1`. Exponentiation happens before multiplication. In your second example you bind the variable `result` to `-1`. The next line takes the variable `result` and raises it to the power of `0` so you get `1`. In other words you're doing `(-1)**0`. `n**0` is `1` for any real number `n`... except `0`: technically `0**0` is undefined, although Python will still return `0**0 == 1`.
4,812
24,648,132
so for some reason this error([Errno 10013] An attempt was made to access a socket in a way forbidden by its access permissions), keeps occurring. when i try to use registration in Django. I am using windows 7 and pycharm IDE with django 1.65. I have already tried different ports to run server (8001 & 8008) and also adding permission in windows firewall and kasperesky firewall for python.exe and pycharm. Any suggestion. ``` Environment: Request Method: POST Request URL: http://127.0.0.1:8001/accounts/register/ Django Version: 1.6.5 Python Version: 2.7.8 Installed Applications: ('django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'profiles', 'south', 'registration', 'PIL', 'stripe') Installed Middleware: ('django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware') Traceback: File "C:\Users\jasan\Virtual_enviornments\virtual_env_matchmaker\lib\site-packages\django\core\handlers\base.py" in get_response 112. response = wrapped_callback(request, *callback_args, **callback_kwargs) File "C:\Users\jasan\Virtual_enviornments\virtual_env_matchmaker\lib\site-packages\django\views\generic\base.py" in view 69. return self.dispatch(request, *args, **kwargs) File "C:\Users\jasan\Virtual_enviornments\virtual_env_matchmaker\lib\site-packages\registration\views.py" in dispatch 79. return super(RegistrationView, self).dispatch(request, *args, **kwargs) File "C:\Users\jasan\Virtual_enviornments\virtual_env_matchmaker\lib\site-packages\django\views\generic\base.py" in dispatch 87. 
return handler(request, *args, **kwargs) File "C:\Users\jasan\Virtual_enviornments\virtual_env_matchmaker\lib\site-packages\registration\views.py" in post 35. return self.form_valid(request, form) File "C:\Users\jasan\Virtual_enviornments\virtual_env_matchmaker\lib\site-packages\registration\views.py" in form_valid 82. new_user = self.register(request, **form.cleaned_data) File "C:\Users\jasan\Virtual_enviornments\virtual_env_matchmaker\lib\site-packages\registration\backends\default\views.py" in register 80. password, site) File "C:\Users\jasan\Virtual_enviornments\virtual_env_matchmaker\lib\site-packages\django\db\transaction.py" in inner 431. return func(*args, **kwargs) File "C:\Users\jasan\Virtual_enviornments\virtual_env_matchmaker\lib\site-packages\registration\models.py" in create_inactive_user 91. registration_profile.send_activation_email(site) File "C:\Users\jasan\Virtual_enviornments\virtual_env_matchmaker\lib\site-packages\registration\models.py" in send_activation_email 270. self.user.email_user(subject, message, settings.DEFAULT_FROM_EMAIL) File "C:\Users\jasan\Virtual_enviornments\virtual_env_matchmaker\lib\site-packages\django\contrib\auth\models.py" in email_user 413. send_mail(subject, message, from_email, [self.email]) File "C:\Users\jasan\Virtual_enviornments\virtual_env_matchmaker\lib\site-packages\django\core\mail\__init__.py" in send_mail 50. connection=connection).send() File "C:\Users\jasan\Virtual_enviornments\virtual_env_matchmaker\lib\site-packages\django\core\mail\message.py" in send 274. return self.get_connection(fail_silently).send_messages([self]) File "C:\Users\jasan\Virtual_enviornments\virtual_env_matchmaker\lib\site-packages\django\core\mail\backends\smtp.py" in send_messages 87. new_conn_created = self.open() File "C:\Users\jasan\Virtual_enviornments\virtual_env_matchmaker\lib\site-packages\django\core\mail\backends\smtp.py" in open 48. local_hostname=DNS_NAME.get_fqdn()) File "C:\Python27\Lib\smtplib.py" in __init__ 251. 
(code, msg) = self.connect(host, port) File "C:\Python27\Lib\smtplib.py" in connect 311. self.sock = self._get_socket(host, port, self.timeout) File "C:\Python27\Lib\smtplib.py" in _get_socket 286. return socket.create_connection((host, port), timeout) File "C:\Python27\Lib\socket.py" in create_connection 571. raise err Exception Type: error at /accounts/register/ Exception Value: [Errno 10013] An attempt was made to access a socket in a way forbidden by its access permissions ```
2014/07/09
[ "https://Stackoverflow.com/questions/24648132", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3818829/" ]
The problem has to do with your email server setup. Instead of figuring out what to fix, just set your `EMAIL_BACKEND` in `settings.py` to the following: ``` if DEBUG: EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend' ``` This way, any email sent by django will be shown in the console instead of attempting delivery. You can then continue developing your application. Having emails printed on the console is good if you are developing, but it can be a headache if your application is sending a lot of emails across. A better solution is to install [mailcatcher](http://mailcatcher.me/). This application will create a local mail server for testing and as a bonus, provide you a web interface where you can view the emails being sent by your server: ![mailcatcher](https://i.stack.imgur.com/yRL91.png) It is a Ruby application, and as you are on Windows, I would suggest using [rubyinstaller](http://rubyinstaller.org/) to help with gem installation. The website also shows you how to configure django: ``` if DEBUG: EMAIL_HOST = '127.0.0.1' EMAIL_HOST_USER = '' EMAIL_HOST_PASSWORD = '' EMAIL_PORT = 1025 EMAIL_USE_TLS = False ```
This has nothing to do with your webserver ports, this is to do with the host and port that `smtplib` is trying to open in order to send an email. These are controlled by `settings.EMAIL_HOST` and `settings.EMAIL_PORT`. There are other settings too, see the [documentation](https://docs.djangoproject.com/en/1.7/topics/email/#smtp-backend) for details on how to set up email properly.
4,815
56,553,902
I have been trying to extract stock prices using pandas\_datareader. data, but I kept receiving an error message. I have checked other threads relating to this problem and, I have tried downloading data reader using conda install DataReader and also tried pip install DataReader. ``` import pandas as pd import datetime from pandas import Series,DataFrame import pandas_datareader.data as web pandas_datareader.__version__ '0.6.0' start=datetime.datetime(2009,1,1) end=datetime.datetime(2019,1,1) df=web.DataReader( 'AT&T Inc T',start,end) df.head() ``` My expected result should be a data frame with all the features and rows of the stock. Below is the error message I got: Please, how do I fix this problem? Thanks. ``` <ipython-input-45-d75bedd6b2dd> in <module> 1 start=datetime.datetime(2009,1,1) 2 end=datetime.datetime(2019,1,1) ----> 3 df=web.DataReader( 'AT&T Inc T',start,end) 4 df.head() ~\Anaconda3\lib\site-packages\pandas_datareader\data.py in DataReader(name, data_source, start, end, retry_count, pause, session, access_key) 456 else: 457 msg = "data_source=%r is not implemented" % data_source --> 458 raise NotImplementedError(msg) 459 460 NotImplementedError: data_source=datetime.datetime(2009, 1, 1, 0, 0) is not implemented ```
2019/06/12
[ "https://Stackoverflow.com/questions/56553902", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11290634/" ]
`for number in d:` will iterate through the keys of the dictionary, not values. You can use ``` for number in d.values(): ``` or ``` for name, number in d.items(): ``` if you also need the names.
You need to iterate over the key-value pairs in the dict with `items()` ``` def overNum(): d = {'Tom':'93', 'Hannah':'83', 'Jack':'94'} count = 0 for name, number in d.items(): if int(number) >= 90: count += 1 print(count) ``` Also there are some issues with the `if` statement that i fixed.
4,816
20,023,709
I'm working on a laser tag game project that uses pygame and Raspberry Pi. In the game, I need a background timer in order to keep track of game time. Currently I'm using the following to do this, but it doesn't seem to work correctly:

```
pygame.timer.get_ticks()
```

My second problem is resetting this timer when the game is restarted. The game should restart without having to restart the program, and that is only likely to be done by resetting the timer, I guess. What I need, in brief, is to have a background timer variable and be able to reset it at any time in a while loop. I'm a real beginner to python and pygame, but the solution of this problem will give a great boost to my knowledge and the progress of the project. Any help will be greatly appreciated.
2013/11/16
[ "https://Stackoverflow.com/questions/20023709", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1478315/" ]
You could use the `Timer` class in Android, and set a repeating timer, with a initial delay. ``` Timer timer = new Timer(); timer.schedule(TimerTask task, long delay, long period) ``` A `TimerTask` is very much like a `Runnable`. See: <http://developer.android.com/reference/java/util/Timer.html>
I've used 2 timers : ``` handler.postDelayed(runnable, 1500); // Creating a timer for 1.5 seconds ``` this created a 1.5sec timer, while inside the timer loop : ``` private Runnable runnable = new Runnable() { @Override public void run() { Foo(); handler.postDelayed(this, 1500); } }; ``` I called `handler.postDelayed(this,1500);` again, which made 2 timers -> causing the time bug.
4,822
72,393
A tutorial I have on regex in Python explains how to use the re module. I wanted to grab the URL out of an A tag, so knowing regex I wrote the correct expression, tested it in my regex testing app of choice, and ensured it worked. When placed into Python it failed:

```
result = re.match("a_regex_of_pure_awesomeness", "a string containing the awesomeness")
# result is None
```

After much head scratching I found out the issue: it automatically expects your pattern to be at the start of the string. I have found a fix, but I would like to know how to change:

```
regex = ".*(a_regex_of_pure_awesomeness)"
```

into

```
regex = "a_regex_of_pure_awesomeness"
```

Okay, it's a standard URL regex, but I wanted to avoid any potential confusion about what I wanted to get rid of and possibly pretend to be funny.
2008/09/16
[ "https://Stackoverflow.com/questions/72393", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1384652/" ]
In Python, there's a distinction between "match" and "search"; match only looks for the pattern at the start of the string, and search looks for the pattern starting at any location within the string. [Python regex docs](http://docs.python.org/lib/module-re.html) [Matching vs searching](http://docs.python.org/lib/matching-searching.html)
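A short, self-contained illustration of that distinction. The href-style string and pattern here are just example data, not taken from the original question:

```python
import re

text = 'see <a href="http://python.org">Python</a>'
pattern = r'href="([^"]+)"'

# match() only tries the pattern at the very start of the string,
# so it fails even though the pattern occurs later
print(re.match(pattern, text))   # None

# search() scans the whole string for the first occurrence
m = re.search(pattern, text)
print(m.group(1))                # http://python.org
```

So for extracting a URL from the middle of an A tag, `re.search()` (or `re.findall()`) is the function to reach for, not `re.match()`.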
Are you using the `re.match()` or `re.search()` method? My understanding is that `re.match()` assumes a "`^`" at the beginning of your expression and will only search at the beginning of the text, while `re.search()` acts more like the Perl regular expressions and will only match the beginning of the text if you include a "`^`" at the beginning of your expression. Hope that helps.
4,823
65,393,659
I built an application to suggest email address fixes, and I need to detect email addresses that are basically not real existing email addresses, like the following:

14370afcdc17429f9e418d5ffbd0334a@magic.com

ce06e817-2149-6cfd-dd24-51b31e93ea1a@stackoverflow.org.il

87c0d782-e09f-056f-f544-c6ec9d17943c@microsoft.org.il

root@ns3160176.ip-151-106-35.eu

ds4-f1g-54-h5-dfg-yk-4gd-htr5-fdg5h@outlook.com

h-rt-dfg4-sv6-fg32-dsv5-vfd5-ds312@gmail.com

test@454-fs-ns-dff4-xhh-43d-frfs.com

I could do multiple regex checks, but I don't think I would hit a good detection rate for the suspected 'not-real' email addresses, as I would be going to a specific regex pattern each time.

I looked in:

[Javascript script to find gibberish words in form inputs](https://stackoverflow.com/questions/10211028/javascript-script-to-find-gibberish-words-in-form-inputs)

[Translate this JavaScript Gibberish please?](https://stackoverflow.com/questions/10621334/translate-this-javascript-gibberish-please)

[Detect keyboard mashed email addresses](https://stackoverflow.com/questions/43468609/detect-keyboard-mashed-email-addresses)

Finally I looked over this:

[Unable to detect gibberish names using Python](https://stackoverflow.com/questions/50659889/unable-to-detect-gibberish-names-using-python)

And it seems to fit my needs, I think: a script that will give me a score for the probability that each part of the email address is gibberish (or not real).
So what I want is the output to be: ``` const strings = ["14370afcdc17429f9e418d5ffbd0334a", "gmail", "ce06e817-2149-6cfd-dd24-51b31e93ea1a", "87c0d782-e09f-056f-f544-c6ec9d17943c", "space-max", "ns3160176.ip-151-106-35", "ds4-f1g-54-h5-dfg-yk-4gd-htr5-fdg5h", "outlook", "h-rt-dfg4-sv6-fg32-dsv5-vfd5- ds312", "system-analytics", "454-fs-ns-dff4-xhh-43d-frfs"]; for (let i = 0; i < strings.length; i++) { validateGibbrish(strings[i]); } ``` And this `validateGibberish` function logic will be similar to this python code: ``` from nltk.corpus import brown from collections import Counter import numpy as np text = '\n'.join([' '.join([w for w in s]) for s in brown.sents()]) unigrams = Counter(text) bigrams = Counter(text[i:(i+2)] for i in range(len(text)-2)) trigrams = Counter(text[i:(i+3)] for i in range(len(text)-3)) weights = [0.001, 0.01, 0.989] def strangeness(text): r = 0 text = ' ' + text + '\n' for i in range(2, len(text)): char = text[i] context1 = text[(i-1):i] context2 = text[(i-2):i] num = unigrams[char] * weights[0] + bigrams[context1+char] * weights[1] + trigrams[context2+char] * weights[2] den = sum(unigrams.values()) * weights[0] + unigrams[char] + weights[1] + bigrams[context1] * weights[2] r -= np.log(num / den) return r / (len(text) - 2) ``` So in the end I will loop on all the strings and get something like this: ``` "14370afcdc17429f9e418d5ffbd0334a" -> 8.9073 "gmail" -> 1.0044 "ce06e817-2149-6cfd-dd24-51b31e93ea1a" -> 7.4261 "87c0d782-e09f-056f-f544-c6ec9d17943c" -> 8.3916 "space-max" -> 1.3553 "ns3160176.ip-151-106-35" -> 6.2584 "ds4-f1g-54-h5-dfg-yk-4gd-htr5-fdg5h" -> 7.1796 "outlook" -> 1.6694 "h-rt-dfg4-sv6-fg32-dsv5-vfd5-ds312" -> 8.5734 "system-analytics" -> 1.9489 "454-fs-ns-dff4-xhh-43d-frfs" -> 7.7058 ``` Does anybody have a hint how to do it and can help? 
Thanks a lot :) **UPDATE (12-22-2020)** I manage to write some code based on @Konstantin Pribluda answer, the Shannon entropy calculation: ``` const getFrequencies = str => { let dict = new Set(str); return [...dict].map(chr => { return str.match(new RegExp(chr, 'g')).length; }); }; // Measure the entropy of a string in bits per symbol. const entropy = str => getFrequencies(str) .reduce((sum, frequency) => { let p = frequency / str.length; return sum - (p * Math.log(p) / Math.log(2)); }, 0); const strings = ['14370afcdc17429f9e418d5ffbd0334a', 'or', 'sdf', 'test', 'dave coperfield', 'gmail', 'ce06e817-2149-6cfd-dd24-51b31e93ea1a', '87c0d782-e09f-056f-f544-c6ec9d17943c', 'space-max', 'ns3160176.ip-151-106-35', 'ds4-f1g-54-h5-dfg-yk-4gd-htr5-fdg5h', 'outlook', 'h-rt-dfg4-sv6-fg32-dsv5-vfd5-ds312', 'system-analytics', '454-fs-ns-dff4-xhh-43d-frfs']; for (let i = 0; i < strings.length; i++) { const str = strings[i]; let result = 0; try { result = entropy(str); } catch (error) { result = 0; } console.log(`Entropy of '${str}' in bits per symbol:`, result); } ``` The output is: ``` Entropy of '14370afcdc17429f9e418d5ffbd0334a' in bits per symbol: 3.7417292966721747 Entropy of 'or' in bits per symbol: 1 Entropy of 'sdf' in bits per symbol: 1.584962500721156 Entropy of 'test' in bits per symbol: 1.5 Entropy of 'dave coperfield' in bits per symbol: 3.4565647621309536 Entropy of 'gmail' in bits per symbol: 2.3219280948873626 Entropy of 'ce06e817-2149-6cfd-dd24-51b31e93ea1a' in bits per symbol: 3.882021446536749 Entropy of '87c0d782-e09f-056f-f544-c6ec9d17943c' in bits per symbol: 3.787301737252941 Entropy of 'space-max' in bits per symbol: 2.94770277922009 Entropy of 'ns3160176.ip-151-106-35' in bits per symbol: 3.1477803284561103 Entropy of 'ds4-f1g-54-h5-dfg-yk-4gd-htr5-fdg5h' in bits per symbol: 3.3502926596166693 Entropy of 'outlook' in bits per symbol: 2.1280852788913944 Entropy of 'h-rt-dfg4-sv6-fg32-dsv5-vfd5-ds312' in bits per symbol: 3.619340871812292 Entropy of 
'system-analytics' in bits per symbol: 3.327819531114783 Entropy of '454-fs-ns-dff4-xhh-43d-frfs' in bits per symbol: 3.1299133176846836 ``` It's still not working as expected, as 'dave coperfield' gets about the same points as other gibberish results. Anyone else have better logic or ideas on how to do it?
2020/12/21
[ "https://Stackoverflow.com/questions/65393659", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4442606/" ]
This is what I come up with: ```js // gibberish detector js (function (h) { function e(c, b, a) { return c < b ? (a = b - c, Math.log(b) / Math.log(a) * 100) : c > a ? (b = c - a, Math.log(100 - a) / Math.log(b) * 100) : 0 } function k(c) { for (var b = {}, a = "", d = 0; d < c.length; ++d)c[d] in b || (b[c[d]] = 1, a += c[d]); return a } h.detect = function (c) { if (0 === c.length || !c.trim()) return 0; for (var b = c, a = []; a.length < b.length / 35;)a.push(b.substring(0, 35)), b = b.substring(36); 1 <= a.length && 10 > a[a.length - 1].length && (a[a.length - 2] += a[a.length - 1], a.pop()); for (var b = [], d = 0; d < a.length; d++)b.push(k(a[d]).length); a = 100 * b; for (d = b = 0; d < a.length; d++)b += parseFloat(a[d], 10); a = b / a.length; for (var f = d = b = 0; f < c.length; f++) { var g = c.charAt(f); g.match(/^[a-zA-Z]+$/) && (g.match(/^(a|e|i|o|u)$/i) && b++, d++) } b = 0 !== d ? b / d * 100 : 0; c = c.split(/[\W_]/).length / c.length * 100; a = Math.max(1, e(a, 45, 50)); b = Math.max(1, e(b, 35, 45)); c = Math.max(1, e(c, 15, 20)); return Math.max(1, (Math.log10(a) + Math.log10(b) + Math.log10(c)) / 6 * 100) } })("undefined" === typeof exports ? this.gibberish = {} : exports) // email syntax validator function validateSyntax(email) { return /^(([^<>()[\]\\.,;:\s@"]+(\.[^<>()[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/.test(email.toLowerCase()); } // shannon entropy function entropy(str) { return Object.values(Array.from(str).reduce((freq, c) => (freq[c] = (freq[c] || 0) + 1) && freq, {})).reduce((sum, f) => sum - f / str.length * Math.log2(f / str.length), 0) } // vowel counter function countVowels(word) { var m = word.match(/[aeiou]/gi); return m === null ? 
0 : m.length; } // dummy function function isTrue(value){ return value } // validate string by multiple tests function detectGibberish(str){ var strWithoutPunct = str.replace(/[.,\/#!$%\^&\*;:{}=\-_`~()]/g,""); var entropyValue = entropy(str) < 3.5; var gibberishValue = gibberish.detect(str) < 50; var vovelValue = 30 < 100 / strWithoutPunct.length * countVowels(strWithoutPunct) && 100 / strWithoutPunct.length * countVowels(str) < 35; return [entropyValue, gibberishValue, vovelValue].filter(isTrue).length > 1 } // main function function validateEmail(email) { return validateSyntax(email) ? detectGibberish(email.split("@")[0]) : false } // tests document.write(validateEmail("dsfghjdhjs@gmail.com") + "<br/>") document.write(validateEmail("jhon.smith@gmail.com")) ``` I have combined multiple tests: gibberish-detector.js, Shannon entropy and counting vowels (between 30% and 35%). You can adjust some values for more accurate result.
A thing you may consider doing is checking each time how random each string is, then sort the results according to their score and given a threshold exclude the ones with high randomness. It is inevitable that you will miss some. There are some implementations for checking the randomness of strings, for example: * <https://en.wikipedia.org/wiki/Diehard_tests> * <http://www.cacert.at/random/> You may have to create a hash (to map chars and symbols to sequences of integers) before you apply some of these because some work only with integers, since they test properties of random numbers generators. Also a stack exchange link that can be of help is this: * <https://stats.stackexchange.com/questions/371150/check-if-a-character-string-is-not-random> PS. I am having a similar problem in a service since robots create accounts with these type of fake emails. After years of dealing with this issue (basically deleting manually from the DB the fake emails) I am now considering introducing a visual check (captcha) in the signup page to avoid the frustration.
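The update in the question already ports Shannon entropy to JavaScript; for completeness, the same measure in Python is a few lines of stdlib code. This is only a sketch — as the asker's own results show, entropy alone scores a name like 'dave coperfield' close to the gibberish strings, which is why combining it with an n-gram model (like the nltk snippet in the question) or a randomness test is the more promising route:

```python
from collections import Counter
from math import log2

def entropy(s):
    """Shannon entropy of a string, in bits per symbol."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Five distinct characters, each appearing once -> log2(5) bits per symbol
print(round(entropy("gmail"), 4))  # 2.3219
```

Used as one signal among several (as the accepted answer does), a threshold around 3.5 bits separates most of the sample strings, but it will misclassify long natural-language strings on its own.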
4,828
6,116,527
I'm trying to get the pymysql module working with python3 on a Macintosh. Note that I am a beginning python user who decided to switch from ruby and am trying to build a simple (sigh) database project to drive my learning of python. In a simple (I thought) test program, I am getting a syntax error in configparser.py (which is used by the pymysql module) ``` def __init__(self, defaults=None, dict_type=_default_dict, allow_no_value=False, *, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True, default_section=DEFAULTSECT, interpolation=_UNSET): ``` According to Komodo, the error is on the second line. I assume it is related to the asterisk, but regardless, I don't know why there would be a problem like this with a standard Python module. Anyone seen this before?
2011/05/24
[ "https://Stackoverflow.com/questions/6116527", "https://Stackoverflow.com", "https://Stackoverflow.com/users/57246/" ]
You're most certainly running the code with a 2.x interpreter. I wonder why it even tries to import 3.x libraries, perhaps the answer lies in your installation process - but that's a different question. Anyway, this (before any other `import`s) ``` import sys print(sys.version) ``` should show which Python version is actually run, as Komodo Edit may be choosing the wrong executable for whatever reason. Alternatively, leave out the parens and it simply fails if run with Python 3.
In Python 3.2 the configparser module does indeed look that way. Importing it works fine from Python 3.2, but *not* from Python 2. Am I right in guessing you get the error when you try to run your module with Komodo? Then you just have configured the wrong Python executable.
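Both answers point at the interpreter version; the concrete trigger is the bare `*` in the signature, which declares keyword-only parameters — syntax that exists only in Python 3. A minimal sketch of that feature (function and parameter names invented for illustration):

```python
# The bare "*" makes every parameter after it keyword-only.
# A Python 2 interpreter raises SyntaxError while merely *parsing* this,
# which is why importing a 3.x configparser.py from 2.x fails immediately.
def make_parser(defaults=None, *, delimiters=('=', ':'), strict=True):
    return {'defaults': defaults, 'delimiters': delimiters, 'strict': strict}

print(make_parser(strict=False)['strict'])  # False
```

Passing `delimiters` or `strict` positionally raises `TypeError` under Python 3, and the whole definition is a syntax error under Python 2 — exactly the symptom in the question.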
4,829
67,052,300
I've been trying to find the best way to convert a given GIF image to a sequence of BMP files using Python. I've found some libraries like Wand and ImageMagick but still haven't found a good example to accomplish this.
2021/04/12
[ "https://Stackoverflow.com/questions/67052300", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2163839/" ]
Reading an animated GIF file using Python Image Processing Library - Pillow --------------------------------------------------------------------------- ``` from PIL import Image from PIL import GifImagePlugin imageObject = Image.open("./xmas.gif") print(imageObject.is_animated) print(imageObject.n_frames) ``` Display individual frames from the loaded animated GIF file ----------------------------------------------------------- ``` for frame in range(0,imageObject.n_frames): imageObject.seek(frame) imageObject.show() ```
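The loop above only displays each frame; the question actually asks for BMP files on disk. A sketch extending the same Pillow pattern with `save` (the output path format is an illustrative choice):

```python
from PIL import Image

def gif_to_bmps(gif_path, out_dir):
    """Write every frame of a GIF as frame_000.bmp, frame_001.bmp, ..."""
    image = Image.open(gif_path)
    paths = []
    for frame in range(image.n_frames):
        image.seek(frame)
        out_path = "%s/frame_%03d.bmp" % (out_dir, frame)
        # convert("RGB") resolves the GIF palette so BMP gets plain pixels
        image.convert("RGB").save(out_path, "BMP")
        paths.append(out_path)
    return paths
```

The `convert("RGB")` step matters: GIF frames are usually palette ("P") mode, and converting first avoids surprises in the resulting BMPs.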
In ImageMagick, which comes with Linux and can be installed for Windows or Mac OSX, ``` convert image.gif -coalesce image.bmp ``` the results will be image-0.bmp, image-1.bmp ... Use `convert` for ImageMagick 6 or replace `convert` with `magick` for ImageMagick 7.
4,830
44,963,360
I am very new to python. Long time user of stackoverflow but first time posting a question. I am trying to extract data from a website using beautifulsoup. [Sample code of where I want to extract is (listed in and tagged in data)](https://i.stack.imgur.com/AASRq.jpg) I was able to extract into a list but I am unable to extract the actual data. The goal here is to extract **Listed in:** Nail Polish Subscription Boxes, Subscription Boxes for Beauty Products, Subscription Boxes for Women **Tagged in:** Makeup, Beauty, Nail polish Can you please tell me how to achieve it. ``` import requests from bs4 import BeautifulSoup l1=[] url='http://boxes.mysubscriptionaddiction.com/box/julep-maven' source_code=requests.get(url) plain_text=source_code.text soup= BeautifulSoup(plain_text,"lxml") for item in soup.find_all('p'): l1.append(item.contents) search='\nListed in:\n' for a in l1: if a[0] in ('\nTagged in:\n','\nListed in:\n'): print(a) ```
2017/07/07
[ "https://Stackoverflow.com/questions/44963360", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8268739/" ]
I don't know if there's a slicker way, but a straightforward for loop will do the trick: ``` $frequency = []; for ($i = 0; $i < sizeof($date); $i++) { $frequency[] = array($date[$i], $time_start[$i], $time_end[$i]); } print_r($frequency); // Output: // Array // ( // [0] => Array // ( // [0] => 2017-06-10 // [1] => 02:00 PM // [2] => 05:00 PM // ) // // [1] => Array // ( // [0] => 2017-06-11 // [1] => 03:00 PM // [2] => 06:00 PM // ) // // [2] => Array // ( // [0] => 2017-06-12 // [1] => 04:00 PM // [2] => 07:00 PM // ) // // ) ```
You can also map them: ``` $result = array_map(function ($value1,$value2,$value3) { return [$value1,$value2,$value3]; }, $date,$time_start,$time_end); ```
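As an aside for the Python-leaning asker: the row-wise grouping that both PHP answers build is a one-liner in Python with `zip` (the sample data below mirrors the `$date` / `$time_start` / `$time_end` arrays in the answers):

```python
# Sample data mirroring the PHP arrays above
dates = ["2017-06-10", "2017-06-11", "2017-06-12"]
time_start = ["02:00 PM", "03:00 PM", "04:00 PM"]
time_end = ["05:00 PM", "06:00 PM", "07:00 PM"]

# zip() pairs up the i-th elements of each list,
# just like the PHP for loop / array_map versions
frequency = [list(row) for row in zip(dates, time_start, time_end)]
print(frequency[0])  # ['2017-06-10', '02:00 PM', '05:00 PM']
```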
4,832
16,326,285
I have an old Python, so I can't use subprocess. I have two python scripts, one primary.py and another secondary.py. While running primary.py I need to run secondary.py. The format to run secondary.py is 'python secondary.py Argument'. `os.system('python secondary.py Argument')` is giving an error saying `can't open file 'Argument': [Errno 2] No such file or directory`
2013/05/01
[ "https://Stackoverflow.com/questions/16326285", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2340760/" ]
Given the code you described, this error can come up for three reasons: * `python` isn't on your `PATH`, or * `secondary.py` isn't in your current working directory. * `Argument` isn't in your current working directory. From your edited question, it sounds like it's the last of the three, meaning the problem likely has nothing to do with `system` at all… but let's see how to solve all three anyway. First, you want a path to the same `python` that's running `primary.py`, which is what [`sys.executable`](http://docs.python.org/2/library/sys.html#sys.executable) is for. And then you want a path to `secondary.py`. Unfortunately, for this one, there is no way (in Python 2.3) that's guaranteed to work… but on many POSIX systems, in many situations, [`sys.argv\[0\]`](http://docs.python.org/2/library/sys.html#sys.argv) will be an absolute path to `primary.py`, so you can just use `dirname` and `join` out of [`os.path`](http://docs.python.org/2/library/os.path.html) to convert that into an absolute path to `secondary.py`. And then, assuming `Argument` is in the script directory, do the same thing for that: ``` my_dir = os.path.dirname(sys.argv[0]) os.system('%s %s %s' % (sys.executable, os.path.join(my_dir, 'secondary.py'), os.path.join(my_dir, 'Argument'))) ```
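The fix above can be wrapped in a small helper; a sketch of just the command construction (the directory is a made-up example, `secondary.py` and `Argument` are the asker's names, and nothing is actually executed here):

```python
import os

def build_command(script_dir, interpreter="python"):
    """Build the shell command so that secondary.py and Argument are found
    relative to the running script, not the current working directory."""
    script = os.path.join(script_dir, "secondary.py")
    argument = os.path.join(script_dir, "Argument")
    return "%s %s %s" % (interpreter, script, argument)

# In primary.py, script_dir would come from os.path.dirname(sys.argv[0])
print(build_command("/home/user/project"))
```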
Which Python version do you have? Could you show the contents of your secondary.py? For a newer version it seems to work correctly: ``` ddzialak@ubuntu:$ cat f.py import os os.system("python s.py Arg") ddzialak@ubuntu:$ cat s.py print "OK!!!" ddzialak@ubuntu:$ python f.py OK!!! ddzialak@ubuntu:$ ```
4,837
63,671,929
boto3 provides default **waiters** for some services like EC2, S3, etc., but not for all services. Now, I have a case where an EFS volume is created and a lifecycle policy is added to the file system. The EFS creation takes some time, and the lifecycle policy can't be added until the file system reaches the required state, i.e. created. How do I wait for the EFS to be created in Python boto3 code, so that the policies can be added?
2020/08/31
[ "https://Stackoverflow.com/questions/63671929", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11192798/" ]
Comment out `django.middleware.csrf.CsrfViewMiddleware` in the `MIDDLEWARE` entry in `settings.py` of your django project. I tried `curl -X POST localhost:8000/` after adding a trivial post to a class-based view. It returned the famous 403 CSRF verification failed. After commenting out the above middleware the post method was invoked.
Had a similar problem; the easiest fix is to disable the firewall to get the GET and POST requests working.
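For the original EFS question: when no built-in waiter exists, a hand-rolled one is just a polling loop. A generic sketch — `describe` here is a stand-in for the real status call (for EFS that would be `describe_file_systems`, whose `LifeCycleState` moves from `creating` to `available`); the delay and attempt counts are arbitrary defaults:

```python
import time

def wait_for_state(describe, wanted="available", delay=0.01, max_attempts=40):
    """Poll describe() until it reports the wanted state, then return True."""
    for _ in range(max_attempts):
        if describe() == wanted:
            return True
        time.sleep(delay)
    raise TimeoutError("resource never reached state %r" % wanted)
```

In real code, `describe` would be a lambda that calls boto3 and extracts the state string; raising on timeout keeps a stuck resource from hanging the deploy silently.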
4,838
5,483,404
I want to build a heavy ajax web2.0 app, and I have no experience with javascript, django or ruby on rails. I have some experience with python. I am not sure which one to choose. I have a backend database and have to run a few queries for each page, no big deal. So, I am looking for the choice which is quite easy to learn and maintain in the future. Thank you
2011/03/30
[ "https://Stackoverflow.com/questions/5483404", "https://Stackoverflow.com", "https://Stackoverflow.com/users/207335/" ]
I'm not sure if this meets the guidelines for a valid question on here. If you know any Python go with Django, if you know any Ruby go with Rails. From my understanding Rails is a bit more opinionated when it comes to JavaScript. In other words it comes bundled with a bunch of helpers to make it simpler to Ajaxify your code. Django on the other hand leaves it up to you to choose your own framework. (Note: I'm no expert on Django, but have been informed as much) Rails comes bundled with [Prototype](http://www.prototypejs.org/), works equally well with [jQuery](http://jquery.com/) and in the master codebase they have already switched jQuery to be the default in preparation for the next release.
ROR has much better community activity. It's easier to learn without learning ruby first (I do not recommend that way, but yes - you can write in ROR while barely understanding ruby). About performance: ruby 1.8 was much slower than python, but maybe ruby 1.9 is faster. If you want to build a smart ajax application and you understand javascript, it does not matter which framework you will use. If not, or you are lazy - ROR has some aid for ajax requests. Also take a note of django's /admin/ :)
4,839
63,658,572
I am writing a program to produce an image of the Mandelbrot set. The set requires iterating through the formula: z = z\_{n-1}^2 + C. The (n-1) refers to the previous value of z in the loop. In my program I have written ``` z_new = (self.z)**2.0 + c_number self.z = z_new ``` within a loop. Is there a better way in python to update a value using its current value? I'm not sure the `+=` operator would work here, since the formula requires squaring the current value before adding the complex number, C.
2020/08/30
[ "https://Stackoverflow.com/questions/63658572", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14120667/" ]
I think you may have mis-interpreted @Lev\_Levitsky's comment. If you wanted it on one line then they suggested: ```py self.z = self.z**2 + c_number ``` is equivalent to what you've got written. You don't really need the temporary variable `z_new` since in the "one-liner" the previous value of `self.z` is used when setting the next value.
The simplified version should be: ```py self.z = (self.z)**2.0 + c_number ```
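Both answers collapse the update into one statement; for context, here is a self-contained sketch of the escape-time loop that update sits inside, using Python's built-in complex type (function and bound names are illustrative, not the asker's class):

```python
def escape_time(c, max_iter=50):
    """Iterate z = z**2 + c and return the step at which |z| exceeds 2,
    or max_iter if the point appears to belong to the Mandelbrot set."""
    z = 0j
    for n in range(max_iter):
        z = z**2 + c  # the one-line update from the answers above
        if abs(z) > 2:
            return n
    return max_iter

print(escape_time(0j))      # 50 -> never escapes, inside the set
print(escape_time(1 + 1j))  # escapes almost immediately
```

Note that `+=` indeed does not fit here, since the previous `z` must be squared before `c` is added; the single assignment is the idiomatic form.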
4,842
4,900,003
I've been using the @login\_required decorator in my project since day one and it's working fine, but for some reason, I'm starting to get " AttributeError: 'unicode' object has no attribute 'user' " on some specific urls (and those worked in the past). Example: I am on the website, logged in, and then I click on a link and I get this error, which is usually linked to the fact that there is no SessionMiddleware installed. But in my case, there is one, since I am logged in on the site and the page I am on also has a @login\_required. Any idea? The url is defined as: `(r'^accept/(?P<token>[a-zA-Z0-9_-]+)?$', 'accept'),` and the method as: `@login_required def accept(request,token): ...` The Traceback: ``` Traceback (most recent call last): File "/Users/macbook/virtualenv/proj/lib/python2.6/site-packages/django/core/servers/basehttp.py", line 674, in __call__ return self.application(environ, start_response) File "/Users/macbook/virtualenv/proj/lib/python2.6/site-packages/django/core/handlers/wsgi.py", line 241, in __call__ response = self.get_response(request) File "/Users/macbook/virtualenv/proj/lib/python2.6/site-packages/django/core/handlers/base.py", line 141, in get_response return self.handle_uncaught_exception(request, resolver, sys.exc_info()) File "/Users/macbook/virtualenv/proj/lib/python2.6/site-packages/django/core/handlers/base.py", line 165, in handle_uncaught_exception return debug.technical_500_response(request, *exc_info) File "/Users/macbook/virtualenv/proj/lib/python2.6/site-packages/django/core/handlers/base.py", line 100, in get_response response = callback(request, *callback_args, **callback_kwargs) File "/Users/macbook/virtualenv/proj/lib/python2.6/site-packages/django/contrib/auth/decorators.py", line 25, in _wrapped_view return view_func(request, *args, **kwargs) File "/Users/macbook/dev/pycharm-projects/proj/match/views.py", line 33, in accept return __process(token,callback) File
"/Users/macbook/virtualenv/proj/lib/python2.6/site-packages/django/contrib/auth/decorators.py", line 24, in _wrapped_view if test_func(request.user): AttributeError: 'unicode' object has no attribute 'user'` ```
2011/02/04
[ "https://Stackoverflow.com/questions/4900003", "https://Stackoverflow.com", "https://Stackoverflow.com/users/23051/" ]
The decorator was on a private method that doesn't have the request as a parameter. I removed that decorator (left there because of a refactoring and lack of test [bad me]). Problem solved.
This can also happen if you call a decorated method from another method without providing a request parameter.
4,843
46,000,595
I have created a pie chart in `matplotlib`. I want to achieve [**this**](http://jsfiddle.net/ztJkb/4/) result in python, **i.e. whenever the mouse hovers over any slice, its color is changed**. I have searched a lot and came across the `bind` method, but that was not effective, and I was not able to come up with a positive result. I will have no problem if this can be done through any other library (say `tkinter`, `plotly`, etc.), but I need the solution with `matplotlib`, so I would appreciate that more. Please have a look through my question; any suggestion is warmly welcomed... Here is my code: ``` import matplotlib.pyplot as plt labels = 'A', 'B', 'C', 'D' sizes = [10, 35, 50, 5] explode = (0, 0, 0.1, 0) # only "explode" the 3rd slice (i.e. 'C') fig1, ax1 = plt.subplots() ax1.pie(sizes, explode=explode, labels=labels, autopct='%1.1f%%', shadow=True, startangle=90) ax1.axis('equal') # Equal aspect ratio ensures that pie is drawn as a circle. plt.show() ``` Regards...
2017/09/01
[ "https://Stackoverflow.com/questions/46000595", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You would need a [matplotlib event handler](https://matplotlib.org/users/event_handling.html) for a `motion_notify_event`. This can be connected to a function which checks if the mouse is inside one of the pie chart's wedges. This is done via [`contains_point`](https://matplotlib.org/devdocs/api/_as_gen/matplotlib.patches.Patch.html#matplotlib.patches.Patch.contains_point). In that case colorize the wedge differently, else set its color to its original color. ``` import matplotlib.pyplot as plt labels = 'A', 'B', 'C', 'D' sizes = [10, 35, 50, 5] explode = (0, 0, 0.1, 0) # only "explode" the 3rd slice (i.e. 'C') fig1, ax1 = plt.subplots() wedges, _, __ = ax1.pie(sizes, explode=explode, labels=labels, autopct='%1.1f%%', shadow=True, startangle=90) ax1.axis('equal') # Equal aspect ratio ensures that pie is drawn as a circle. ocols= [w.get_facecolor() for w in wedges] ncols= ["gold", "indigo", "purple", "salmon"] def update(event): if event.inaxes == ax1: for i, w in enumerate(wedges): if w.contains_point([event.x, event.y]): w.set_facecolor(ncols[i]) else: w.set_facecolor(ocols[i]) fig1.canvas.draw_idle() fig1.canvas.mpl_connect("motion_notify_event", update) plt.show() ```
First off, what you are looking for is [the documentation on Event handling in matplotlib](https://matplotlib.org/users/event_handling.html). In particular, the `motion_notify_event` will be fired every time the mouse moves. However, I can't think of an easy way to identify which wedge the mouse is over right now. If clicking is acceptable, then the problem is much easier: ``` labels = 'A', 'B', 'C', 'D' sizes = [10, 35, 50, 5] explode = (0, 0, 0.1, 0) # only "explode" the 3rd slice (i.e. 'C') click_color = [0.2, 0.2, 0.2] fig1, ax1 = plt.subplots() patches, texts, autotexts = ax1.pie(sizes, explode=explode, labels=labels, autopct='%1.1f%%', shadow=True, startangle=90) # store original color inside patch object # THIS IS VERY HACKY. # We use the Artist's 'gid' which seems to be unused as far as I can tell # to be able to recall the original color for p in patches: p.set_gid(p.get_facecolor()) # enable picking p.set_picker(True) ax1.axis('equal') # Equal aspect ratio ensures that pie is drawn as a circle. def on_pick(event): #restore all facescolor to erase previous changes for p in patches: p.set_facecolor(p.get_gid()) a = event.artist # print('on pick:', a, a.get_gid()) a.set_facecolor(click_color) plt.draw() fig1.canvas.mpl_connect('pick_event', on_pick) plt.show() ```
4,844
4,690,366
This is my first post and I'm still a Python and Scipy newcomer, so go easy on me! I'm trying to convert an Nx1 matrix into a python list. Say I have some 3x1 matrix `x = scipy.matrix([1,2,3]).transpose()` My aim is to create a list, y, from x so that `y = [1, 2, 3]` I've tried using the `tolist()` method, but it returns `[[1], [2], [3]]`, which isn't the result that I'm after. The best i can do is this ``` y = [xi for xi in x.flat] ``` but it's a bit cumbersome, and I'm not sure if there's an easier way to achieve the same result. Like I said, I'm still coming to grips with Python and Scipy... Thanks
2011/01/14
[ "https://Stackoverflow.com/questions/4690366", "https://Stackoverflow.com", "https://Stackoverflow.com/users/568864/" ]
A question for your question ---------------------------- While Sven and Navi have answered your question on how to convert ``` x = scipy.matrix([1,2,3]).transpose() ``` into a list, I'll ask a question before answering: * Why are you using an Nx1 matrix instead of an array? Using array instead of matrix ----------------------------- If you look at the Numpy for Matlab Users wiki/documentation page, section 3 discusses [*'array' or 'matrix'? Which should I use?*](http://www.scipy.org/NumPy_for_Matlab_Users#head-e9a492daa18afcd86e84e07cd2824a9b1b651935). The short answer is that you should use array. One of the advantages of using an array is that: > > You can treat rank-1 arrays as either row or column vectors. dot(A,v) treats v as a column vector, while dot(v,A) treats v as a row vector. This can save you having to type a lot of transposes. > > > Also, as stated in the [Numpy Reference Documentation](http://docs.scipy.org/doc/numpy/reference/arrays.classes.html#matrix-objects), "Matrix objects are always two-dimensional." This is why `x.tolist()` returned a nested list of `[[1], [2], [3]]` for you. Since you want an Nx1 object, I'd recommend using array as follows: ``` >>> import scipy >>> x = scipy.array([1,2,3]) >>> x array([1, 2, 3]) >>> y = x.tolist() // That's it. A clean, succinct conversion to a list. >>> y [1, 2, 3] ``` If you really want to use matrix -------------------------------- If for some reason you truly need/want to use a matrix instead of an array, here's what I would do: ``` >>> import scipy >>> x = scipy.matrix([1,2,3]).transpose() >>> x matrix([[1], [2], [3]]) >>> y = x.T.tolist()[0] >>> y [1, 2, 3] ``` In words, the `x.T.tolist()[0]` will: 1. Transpose the x matrix using the `.T` attribute 2. Convert the transposed matrix into a nested list using `.tolist()` 3. Grab the first element of the nested listed using `[0]`
How about ``` x.ravel().tolist()[0] ``` or ``` scipy.array(x).ravel().tolist() ```
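As a plain-Python aside: the nested `[[1], [2], [3]]` shape that `tolist()` returns for an Nx1 matrix can also be flattened with a list comprehension, no NumPy/SciPy helper needed:

```python
nested = [[1], [2], [3]]  # what matrix.tolist() returns for an Nx1 matrix

# iterate over the inner one-element rows and pull each value out
flat = [value for row in nested for value in row]
print(flat)  # [1, 2, 3]
```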
4,845
6,876,553
I thought to try using [D](http://en.wikipedia.org/wiki/D_%28programming_language%29) for some system administration scripts which require high performance (for comparing performance with python/perl etc). I can't find an example in the tutorials I have looked through so far (dsource.org etc.) on how to make a system call (i.e. calling another program) and receive its output from stdout, though. If I missed it, could someone point me to the right docs/tutorial, or provide the answer right away?
2011/07/29
[ "https://Stackoverflow.com/questions/6876553", "https://Stackoverflow.com", "https://Stackoverflow.com/users/340811/" ]
Well, then I of course found it: <http://www.digitalmars.com/d/2.0/phobos/std_process.html#shell> (Version using the Tango library here: <http://www.dsource.org/projects/tango/wiki/TutExec>). The former version is the one that works with D 2.0 (thereby the current dmd compiler that comes with ubuntu). I got this tiny example to work now, compiled with dmd: ``` import std.stdio; import std.process; void main() { string output = shell("ls -l"); write(output); } ```
std.process has been updated since... the new function is spawnShell ``` import std.stdio; import std.process; void main(){ auto pid = spawnShell("ls -l"); write(pid); } ```
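Since the asker plans to benchmark against Python scripts, here is the Python-side equivalent of `shell("ls -l")` — running a child process and capturing its stdout — as an aside for comparison (it launches the current interpreter as the child to stay portable, rather than depending on `ls`):

```python
import subprocess
import sys

# Run a child process and capture its stdout, like Phobos' shell()/spawnShell.
result = subprocess.run(
    [sys.executable, "-c", "print('hello from child')"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # hello from child
```

`check=True` raises if the child exits non-zero, which is roughly the behavior the D `shell` function gives you for failed commands.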
4,847
62,998,373
I have a graph structure like this:[![enter image description here](https://i.stack.imgur.com/SMnZU.png)](https://i.stack.imgur.com/SMnZU.png) I need to select all `ContentItem` nodes they have any connections with the other nodes. I am also passing in a list of ids for each of the nodes for filtering purposes. i.e. I pass in a list of the neo4j ids for the items I wish to INCLUDE in the search. Any `ContentItem` that is related to any of the other nodes which have an id passed in should return. I've tried with a UNION as this felt like the simplest way, but I'm not sure that it's correct. ``` MATCH (n:ContentItem) WHERE id(n) IN $neoIds WITH n OPTIONAL MATCH (n:ContentItem)-[:IN]->(pt:PulseTopic) WHERE id(pt) IN $pulseTopics RETURN n UNION OPTIONAL MATCH (n:ContentItem)-[:IN]->(pst:SubPulseTopic) WHERE id(pst) IN $subPulseTopics RETURN n UNION OPTIONAL MATCH (n:ContentItem)-[:FROM]->(s:Supplier) WHERE id(s) IN $suppliers RETURN n UNION OPTIONAL MATCH (n:ContentItem)-[:USED_FOR]->(ua:UseArea) WHERE id(ua) IN $useAreas RETURN n UNION OPTIONAL MATCH (n:ContentItem)-[:IN]->(blt:BLTopic) WHERE id(blt) IN $blTopics RETURN n ``` Firstly when I reference the record in python I get an error: ``` for r in tx.run(cypherStep2, paramsStep2): d = r['n']['id'] ``` ...gives: `TypeError: 'NoneType' object is not subscriptable` I'm not sure why that would be. If I just do `MATCH (n:ContentItem) WHERE id(n) IN $neoIds RETURN n` I don't get this error, so I'm thinking this is something to do with the `UNION`. And secondly, I am wondering if this will actually filter ContentItem on `$neoIds` passed in or whether `OPTIONAL MATCH (n:ContentItem)` means ANY `ContentItem` in the `UNION`. What is the best way to do a query like this, please?
2020/07/20
[ "https://Stackoverflow.com/questions/62998373", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7412939/" ]
You can pass the \_incrementCounter method down to the other widget. File 1: ``` class _MyHomePageState extends State<MyHomePage> { int _counter = 0; void _incrementCounter() { setState(() { _counter++; }); } @override Widget build(BuildContext context) { return Scaffold( appBar: AppBar( title: Text(widget.title), ), body: Center( child: Column( mainAxisAlignment: MainAxisAlignment.center, children: <Widget>[ Text( 'You have pushed the button this many times:', ), Text( '$_counter', style: Theme.of(context).textTheme.headline4, ), ], ), ), floatingActionButton: IncrementCounterButton( incrementCounter: _incrementCounter, ), ); } } ``` File 2: ``` class IncrementCounterButton extends StatelessWidget { final void Function() incrementCounter; IncrementCounterButton({Key key, this.incrementCounter) : super(key: key); @override Widget build(BuildContext context) { return FloatingActionButton( onPressed: incrementCounter, tooltip: 'Increment', child: Icon(Icons.add), ); } } ```
You have to pass the function instead of calling it. The code: ``` onPressed: () { _incrementCounter(); }, ``` or like this of you like shortcuts: ``` onPressed: () => _incrementCounter(), ``` Hope it helps! Happy coding:)
4,848
13,421,709
I use `vim` (installed on `cygwin`) to write `c++` programs but it does not highlight some `c++` keywords such as `new`, `delete`, `public`, `friend`, `try`, but highlight others such as `namespace`, `int`, `const`, `operator`, `true`, `class`, `include`. It also not change color of operators. I never changed its syntax file. What's wrong with it? Thanks a lot. I use a customized color scheme; when I change it to `desert` color scheme, highlighting has no problem, but I need to use that color scheme and can't change it to something else. I want it show program as the following picture(I used this color scheme with notepad++ in the picture): ![example of correctly colored one](https://i.stack.imgur.com/qGIb9.png) but now it's as the following picture: ![not correctly colored one](https://i.stack.imgur.com/anaCh.png) the `colorscheme` is here: ``` "Tomorrow Night Bright - Full Colour and 256 Colour " http://chriskempson.com " " Hex colour conversion functions borrowed from the theme "Desert256"" " Default GUI Colours let s:foreground = "eaeaea" let s:background = "000000" let s:selection = "424242" let s:line = "2a2a2a" let s:comment = "969896" let s:red = "d54e53" let s:orange = "e78c45" let s:yellow = "e7c547" let s:green = "b9ca4a" let s:aqua = "70c0b1" let s:blue = "7aa6da" let s:purple = "c397d8" let s:window = "4d5057" set background=dark hi clear syntax reset let g:colors_name = "Tomorrow-Night-Bright" if has("gui_running") || &t_Co == 88 || &t_Co == 256 " Returns an approximate grey index for the given grey level fun <SID>grey_number(x) if &t_Co == 88 if a:x < 23 return 0 elseif a:x < 69 return 1 elseif a:x < 103 return 2 elseif a:x < 127 return 3 elseif a:x < 150 return 4 elseif a:x < 173 return 5 elseif a:x < 196 return 6 elseif a:x < 219 return 7 elseif a:x < 243 return 8 else return 9 endif else if a:x < 14 return 0 else let l:n = (a:x - 8) / 10 let l:m = (a:x - 8) % 10 if l:m < 5 return l:n else return l:n + 1 endif endif endif endfun " Returns the 
actual grey level represented by the grey index fun <SID>grey_level(n) if &t_Co == 88 if a:n == 0 return 0 elseif a:n == 1 return 46 elseif a:n == 2 return 92 elseif a:n == 3 return 115 elseif a:n == 4 return 139 elseif a:n == 5 return 162 elseif a:n == 6 return 185 elseif a:n == 7 return 208 elseif a:n == 8 return 231 else return 255 endif else if a:n == 0 return 0 else return 8 + (a:n * 10) endif endif endfun " Returns the palette index for the given grey index fun <SID>grey_colour(n) if &t_Co == 88 if a:n == 0 return 16 elseif a:n == 9 return 79 else return 79 + a:n endif else if a:n == 0 return 16 elseif a:n == 25 return 231 else return 231 + a:n endif endif endfun " Returns an approximate colour index for the given colour level fun <SID>rgb_number(x) if &t_Co == 88 if a:x < 69 return 0 elseif a:x < 172 return 1 elseif a:x < 230 return 2 else return 3 endif else if a:x < 75 return 0 else let l:n = (a:x - 55) / 40 let l:m = (a:x - 55) % 40 if l:m < 20 return l:n else return l:n + 1 endif endif endif endfun " Returns the actual colour level for the given colour index fun <SID>rgb_level(n) if &t_Co == 88 if a:n == 0 return 0 elseif a:n == 1 return 139 elseif a:n == 2 return 205 else return 255 endif else if a:n == 0 return 0 else return 55 + (a:n * 40) endif endif endfun " Returns the palette index for the given R/G/B colour indices fun <SID>rgb_colour(x, y, z) if &t_Co == 88 return 16 + (a:x * 16) + (a:y * 4) + a:z else return 16 + (a:x * 36) + (a:y * 6) + a:z endif endfun " Returns the palette index to approximate the given R/G/B colour levels fun <SID>colour(r, g, b) " Get the closest grey let l:gx = <SID>grey_number(a:r) let l:gy = <SID>grey_number(a:g) let l:gz = <SID>grey_number(a:b) " Get the closest colour let l:x = <SID>rgb_number(a:r) let l:y = <SID>rgb_number(a:g) let l:z = <SID>rgb_number(a:b) if l:gx == l:gy && l:gy == l:gz " There are two possibilities let l:dgr = <SID>grey_level(l:gx) - a:r let l:dgg = <SID>grey_level(l:gy) - a:g let l:dgb = 
<SID>grey_level(l:gz) - a:b let l:dgrey = (l:dgr * l:dgr) + (l:dgg * l:dgg) + (l:dgb * l:dgb) let l:dr = <SID>rgb_level(l:gx) - a:r let l:dg = <SID>rgb_level(l:gy) - a:g let l:db = <SID>rgb_level(l:gz) - a:b let l:drgb = (l:dr * l:dr) + (l:dg * l:dg) + (l:db * l:db) if l:dgrey < l:drgb " Use the grey return <SID>grey_colour(l:gx) else " Use the colour return <SID>rgb_colour(l:x, l:y, l:z) endif else " Only one possibility return <SID>rgb_colour(l:x, l:y, l:z) endif endfun " Returns the palette index to approximate the 'rrggbb' hex string fun <SID>rgb(rgb) let l:r = ("0x" . strpart(a:rgb, 0, 2)) + 0 let l:g = ("0x" . strpart(a:rgb, 2, 2)) + 0 let l:b = ("0x" . strpart(a:rgb, 4, 2)) + 0 return <SID>colour(l:r, l:g, l:b) endfun " Sets the highlighting for the given group fun <SID>X(group, fg, bg, attr) if a:fg != "" exec "hi " . a:group . " guifg=#" . a:fg . " ctermfg=" . <SID>rgb(a:fg) endif if a:bg != "" exec "hi " . a:group . " guibg=#" . a:bg . " ctermbg=" . <SID>rgb(a:bg) endif if a:attr != "" exec "hi " . a:group . " gui=" . a:attr . " cterm=" . 
a:attr endif endfun " Vim Highlighting call <SID>X("Normal", s:foreground, s:background, "") call <SID>X("LineNr", s:selection, "", "") call <SID>X("NonText", s:selection, "", "") call <SID>X("SpecialKey", s:selection, "", "") call <SID>X("Search", s:background, s:yellow, "") call <SID>X("TabLine", s:foreground, s:background, "reverse") call <SID>X("StatusLine", s:window, s:yellow, "reverse") call <SID>X("StatusLineNC", s:window, s:foreground, "reverse") call <SID>X("VertSplit", s:window, s:window, "none") call <SID>X("Visual", "", s:selection, "") call <SID>X("Directory", s:blue, "", "") call <SID>X("ModeMsg", s:green, "", "") call <SID>X("MoreMsg", s:green, "", "") call <SID>X("Question", s:green, "", "") call <SID>X("WarningMsg", s:red, "", "") call <SID>X("MatchParen", "", s:selection, "") call <SID>X("Folded", s:comment, s:background, "") call <SID>X("FoldColumn", "", s:background, "") if version >= 700 call <SID>X("CursorLine", "", s:line, "none") call <SID>X("CursorColumn", "", s:line, "none") call <SID>X("PMenu", s:foreground, s:selection, "none") call <SID>X("PMenuSel", s:foreground, s:selection, "reverse") end if version >= 703 call <SID>X("ColorColumn", "", s:line, "none") end " Standard Highlighting call <SID>X("Comment", s:comment, "", "") call <SID>X("Todo", s:comment, s:background, "") call <SID>X("Title", s:comment, "", "") call <SID>X("Identifier", s:red, "", "none") call <SID>X("Statement", s:foreground, "", "") call <SID>X("Conditional", s:foreground, "", "") call <SID>X("Repeat", s:foreground, "", "") call <SID>X("Structure", s:purple, "", "") call <SID>X("Function", s:blue, "", "") call <SID>X("Constant", s:orange, "", "") call <SID>X("String", s:green, "", "") call <SID>X("Special", s:foreground, "", "") call <SID>X("PreProc", s:purple, "", "") call <SID>X("Operator", s:aqua, "", "none") call <SID>X("Type", s:blue, "", "none") call <SID>X("Define", s:purple, "", "none") call <SID>X("Include", s:blue, "", "") "call <SID>X("Ignore", "666666", 
"", "") " Vim Highlighting call <SID>X("vimCommand", s:red, "", "none") " C Highlighting call <SID>X("cType", s:yellow, "", "") call <SID>X("cStorageClass", s:purple, "", "") call <SID>X("cConditional", s:purple, "", "") call <SID>X("cRepeat", s:purple, "", "") " PHP Highlighting call <SID>X("phpVarSelector", s:red, "", "") call <SID>X("phpKeyword", s:purple, "", "") call <SID>X("phpRepeat", s:purple, "", "") call <SID>X("phpConditional", s:purple, "", "") call <SID>X("phpStatement", s:purple, "", "") call <SID>X("phpMemberSelector", s:foreground, "", "") " Ruby Highlighting call <SID>X("rubySymbol", s:green, "", "") call <SID>X("rubyConstant", s:yellow, "", "") call <SID>X("rubyAttribute", s:blue, "", "") call <SID>X("rubyInclude", s:blue, "", "") call <SID>X("rubyLocalVariableOrMethod", s:orange, "", "") call <SID>X("rubyCurlyBlock", s:orange, "", "") call <SID>X("rubyStringDelimiter", s:green, "", "") call <SID>X("rubyInterpolationDelimiter", s:orange, "", "") call <SID>X("rubyConditional", s:purple, "", "") call <SID>X("rubyRepeat", s:purple, "", "") " Python Highlighting call <SID>X("pythonInclude", s:purple, "", "") call <SID>X("pythonStatement", s:purple, "", "") call <SID>X("pythonConditional", s:purple, "", "") call <SID>X("pythonFunction", s:blue, "", "") " JavaScript Highlighting call <SID>X("javaScriptBraces", s:foreground, "", "") call <SID>X("javaScriptFunction", s:purple, "", "") call <SID>X("javaScriptConditional", s:purple, "", "") call <SID>X("javaScriptRepeat", s:purple, "", "") call <SID>X("javaScriptNumber", s:orange, "", "") call <SID>X("javaScriptMember", s:orange, "", "") " HTML Highlighting call <SID>X("htmlTag", s:red, "", "") call <SID>X("htmlTagName", s:red, "", "") call <SID>X("htmlArg", s:red, "", "") call <SID>X("htmlScriptTag", s:red, "", "") " Diff Highlighting call <SID>X("diffAdded", s:green, "", "") call <SID>X("diffRemoved", s:red, "", "") " Delete Functions delf <SID>X delf <SID>rgb delf <SID>colour delf <SID>rgb_colour delf 
<SID>rgb_level delf <SID>rgb_number delf <SID>grey_colour delf <SID>grey_level delf <SID>grey_number endif ```
2012/11/16
[ "https://Stackoverflow.com/questions/13421709", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1363855/" ]
Add this to the C Highlighting paragraph:

```
call <SID>X("Statement", s:purple, "", "")
```
All the keywords you mention eventually link to the standard `Statement` syntax group. Maybe that one got cleared. Try

```
:verbose highlight Statement
```

If it shows `xxx cleared`, you're one step further and now need to investigate why your colorscheme does not define a coloring.
4,849
26,256,055
In my Python program, I have this data:

```
test = {"Controller_node1_external_port": {"properties": {"fixed_ips": [{"ip_address": "12.0.0.1"}],"network_id": {"get_param": ["ex_net_map_param",{"get_param": "ex_net_param"}]}},"type": "OS::Neutron::Port"}}
```

`yaml.dump(test)` is giving me the output:

```
Controller_node1_external_port:
  properties:
    fixed_ips:
    - {ip_address: 12.0.0.1}
    network_id:
      get_param:
      - ex_net_map_param
      - {get_param: ex_net_param}
  type: OS::Neutron::Port
```

But I want the ip_address line as `- ip_address: 12.0.0.1` (i.e. without the curly braces). Desired output:

```
Controller_node1_external_port:
  properties:
    fixed_ips:
    - ip_address: 12.0.0.1
    network_id:
      get_param:
      - ex_net_map_param
      - {get_param: ex_net_param}
  type: OS::Neutron::Port
```
2014/10/08
[ "https://Stackoverflow.com/questions/26256055", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3197309/" ]
I will provide a simple way to store and retrive your data inside that multidimensional array: ``` <?php //Global definition of the array $categories = array( "house" => array(), "indie" => array(), "trap" => array(), "trance" => array(), "partybangers" => array(), ); function push_to_category($category,$value) { global $categories; array_push($categories[$category], $value); } push_to_category("house","2797"); //To retrive data you call $categories[$category][index] //ex) $categories["house"][0] returns 2797 ?> ```
I like [FoxNos's answer](https://stackoverflow.com/questions/26256032/store-value-in-multidimensional-array-if-value-is-equal-to-key#answer-26256906), but not the global, since the global might not be a global in another context (`$categories`might be defined in another function or class). So this is what I would've done: ```php $categories = array(.........); function push_to_category(&$categories, $category, $value) { array_push($categories[$category], $value); } push_to_category($categories, "house","2797"); ``` Since the 1st arg is by-reference, you don't need a global, so you can use it anywhere. Minor improvement. If you're inside another function or class, and want to `push_to_category` a lot, you could even do this: ```php class UserController { function categoriesAction() { $categories = array(.........); $push_to_category = function($category, $value) use (&$categories) { array_push($categories[$category], $value); } $push_to_category("house","2797"); } } ``` which makes a local function ([closure](http://nl1.php.net/manual/en/functions.anonymous.php)) that uses/manipulates a local variable (`$categories`).
4,850
12,326,443
I am writing a Python program which validates device events. I am continuously reading data from the serial port of a device. When I write something to the device's serial port, the device writes a string back on the serial port which I have to read. The continuous reading from the serial port happens in a separate worker thread, where I read it line by line. The device writes some data continuously, and it also writes the event description to the serial port. To be more specific: when I write something to the device, it generates an event on the device, and the description of that event is written back to the serial port. I have to read this back and validate whether the event has occurred. Now what is happening is that, since I am reading the device output line by line in a thread, by the time I write something and start waiting for that event description, its output has already gone by and other output lines are being read. How do I synchronize this? Can anyone help me in designing this part?
2012/09/07
[ "https://Stackoverflow.com/questions/12326443", "https://Stackoverflow.com", "https://Stackoverflow.com/users/348686/" ]
If you are just using threads for asynchronous IO, you may be better off not using threads and using `select.select`, or perhaps even `asyncore` if you want to make it even easier on yourself. <http://docs.python.org/library/asyncore.html>
Code snippet is as follows:

```
class SerialCom:
    def __init__(self, comport):
        self.comport = comport
        self.readSerialPortThread = ReadSerialPortThread(self.comport)

    def writeStringToSerialPort(self, someString):
        self.comport.write(someString)

    def waitfordata(self, someString):
        # I have to continuously read data from the serial port
        # till we see someString.
```

In `ReadSerialPortThread`, data from the serial port is continuously read as device info values. When I write data using `writeStringToSerialPort()`, the device outputs some data to the serial port, and I have to read that data from the function `waitfordata()` to validate the response from the device. Now what is happening is that when I write some values and call `waitfordata()`, the required values have already been read by `ReadSerialPortThread`, which continued reading other values like device info. So I am losing the values there. I want to know how to synchronize that.
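One common way to get the synchronization the question asks about is to have the reader thread push every received line onto a `queue.Queue`, and have the waiter consume from that queue, so nothing is lost between the write and the wait. Below is a minimal runnable sketch of that idea; the serial port is replaced by a `FakePort` stub, and names like `wait_for_data` and `readline_iter` are illustrative, not from the original code.

```python
import queue
import threading

class SerialCom:
    """Sketch: the reader thread buffers every line in a queue, so the
    waiter can consume lines at its own pace and never misses one."""

    def __init__(self, comport):
        self.comport = comport
        self.lines = queue.Queue()  # thread-safe buffer of received lines
        self.reader = threading.Thread(target=self._read_loop, daemon=True)
        self.reader.start()

    def _read_loop(self):
        # In the real program this would loop over comport.readline().
        for line in self.comport.readline_iter():
            self.lines.put(line)

    def write(self, data):
        self.comport.write(data)

    def wait_for_data(self, expected, timeout=5.0):
        """Consume buffered lines until one contains `expected`."""
        while True:
            line = self.lines.get(timeout=timeout)  # queue.Empty on timeout
            if expected in line:
                return line

class FakePort:
    """Stand-in for the device's serial port (an assumption for the demo)."""
    def __init__(self, lines):
        self._lines = lines
    def write(self, data):
        pass
    def readline_iter(self):
        for line in self._lines:
            yield line

port = FakePort(["device info 1", "EVENT: button pressed", "device info 2"])
com = SerialCom(port)
com.write("trigger")
result = com.wait_for_data("EVENT")
print(result)
```

Since the queue buffers every line, the event description is still there even if the reader thread received it before `wait_for_data` was called.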
4,851
47,367,681
I am splitting a text based on ",". I need to ignore the commas in text between quotes (single or double). Example of text:

```
Capacitors,3,"C2,C7-C8",100nF,,
Capacitors,3,'C2,C7-C8',100nF,,
```

has to return

```
['Capacitors','3','C2,C7-C8','100nF','','']
```

How do I say this (ignore between quotes) in a regular expression in Python? For now, I am using

```
pattern = re.compile('\s*,\s*')
pattern.split(myText)
```
2017/11/18
[ "https://Stackoverflow.com/questions/47367681", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7803545/" ]
Don't use regex for this. With a little tweaking, you can use the `csv` module to parse the line perfectly (`csv` is designed to handle quoted commas). Just normalize the quotes to double quotes:

```
import csv

s = """Capacitors,3,"C2,C7-C8",100nF,,
Capacitors,3,'C2,C7-C8',100nF,,"""

print(next(csv.reader([s.replace("'",'"')])))
```

result:

```
['Capacitors', '3', 'C2,C7-C8', '100nF', '', ' Capacitors', '3', 'C2,C7-C8', '100nF', '', '']
```
I guess you changed your question. That looks like a csv-formatted file:

```
import csv
import io

s = """\
Capacitors,3,"C2,C7-C8",100nF,,
Capacitors,3,'C2,C7-C8',100nF,,"""

[i for i in csv.reader(io.StringIO(s), delimiter=',', quotechar='"')]
```

Returns:

```
[['Capacitors', '3', 'C2,C7-C8', '100nF', '', ''],
 ['Capacitors', '3', "'C2", "C7-C8'", '100nF', '', '']]
```
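To answer the literal "how do I say this in a regular expression" part: one workable pattern splits on a comma only when the remainder of the line contains balanced quotes, which means the comma itself is not inside a quoted field. This is a sketch that assumes well-formed, non-nested quoting; the `csv`-based answers above are more robust for messy input.

```python
import re

# Split on a comma only if the rest of the line is a sequence of unquoted
# characters and fully quoted strings -- i.e. this comma is not inside
# quotes.  Assumes balanced quoting.
pattern = re.compile(r""",(?=(?:[^'"]|'[^']*'|"[^"]*")*$)""")

def split_line(line):
    # strip the surrounding quotes the regex split leaves in place
    return [field.strip('\'"') for field in pattern.split(line)]

double = split_line('Capacitors,3,"C2,C7-C8",100nF,,')
single = split_line("Capacitors,3,'C2,C7-C8',100nF,,")
print(double)
print(single)
```

Both quote styles give the output the question asked for, including the two empty trailing fields.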
4,852
14,427,281
AIM: need to find out how to parse data from the api search below into a CSV file. The search returns results in the following format:

```
[(u'Bertille Maciag', 10), (u'Peter Prior', 5), (u'Chris OverPar Duguid', 4), (u'Selby Dhliwayo', 4), (u'FakeBitch!', 4), (u'Django Unchianed UK', 4), (u'Padraig Lynch ', 4), (u'Jessica Gunn', 4), (u'harvey.', 4), (u'Wowphotography', 3)]
```

I'm a newbie to python and any help would be greatly appreciated

```
import twitter, json, operator

#Construct Twitter API object
searchApi = twitter.Twitter(domain="search.twitter.com")

#Get trends
query = "#snow"
tweeters = dict()

for i in range(1,16):
    response = searchApi.search(q=query, rpp=100, page=i)
    tweets = response['results']
    for item in tweets:
        tweet = json.loads(json.dumps(item))
        user = tweet['from_user_name']
        #print user
        if user in tweeters:
            # print "list already contains", user
            tweeters[user] += 1
        else:
            tweeters[user] = 1

sorted_tweeters = sorted(tweeters.iteritems(), key=operator.itemgetter(1), reverse=True)

print len(tweeters)
print tweeters
print sorted_tweeters[0:10]
print 'Done!'
```
2013/01/20
[ "https://Stackoverflow.com/questions/14427281", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1973375/" ]
It looks like you have all the hard bits working, and are just missing the 'save to csv' part.

```
import collections
import twitter, json, operator

#Construct Twitter API object
searchApi = twitter.Twitter(domain="search.twitter.com")

#Get trends
query = "#snow"
tweeters = collections.defaultdict(lambda: 0)

for i in range(1,16):
    response = searchApi.search(q=query, rpp=100, page=i)
    tweets = response['results']
    for item in tweets:
        user = item['from_user_name']
        #print user
        tweeters[user] += 1

sorted_tweeters = sorted(tweeters.iteritems(), key=operator.itemgetter(1), reverse=True)

str_fmt = u'%s\u200E, %d \n'
with open('test_so.csv','w') as f_out:
    for twiters in sorted_tweeters:
        f_out.write((str_fmt % twiters).encode('utf8'))
```

You need the 'u' on the format string and the `encode` because you have non-ASCII characters in the user names. `u'\u200E'` is the left-to-right marker so that the csv file will look right with RTL-language user names. I also cleaned up the iteration code a bit: by using a `defaultdict` you don't need to check if a key exists — if it does not exist, the default factory is called and its value is returned (in this case 0). Also, `item` is already a `dict`, so there is no need to convert it to a JSON string and then back to a `dict`.
Have you looked at the [python CSV](http://docs.python.org/2/library/csv.html) module? Using your output:

```
import csv, os

x = [(u'Bertille Maciag', 10), (u'Peter Prior', 5), (u'Chris OverPar Duguid', 4), (u'Selby Dhliwayo', 4), (u'FakeBitch!', 4), (u'Django Unchianed UK', 4), (u'Padraig Lynch ', 4), (u'Jessica Gunn', 4), (u'harvey.', 4), (u'Wowphotography', 3)]

f = open("/tmp/yourfile.csv", "w")
writer = csv.writer(f, quoting=csv.QUOTE_ALL)
for i in x:
    writer.writerow(i)
f.close()
```
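The two answers can be condensed with `collections.Counter`, which does both the counting and the sorting (`most_common`) in one step. A runnable sketch — the user names are hard-coded here because the old `search.twitter.com` endpoint no longer exists, so the list stands in for what the API loop would collect:

```python
import collections
import csv
import io

# Stand-in for the 'from_user_name' values gathered from the API pages.
from_users = ['Bertille Maciag'] * 3 + ['Peter Prior'] * 2 + ['harvey.']

counts = collections.Counter(from_users)  # user -> number of tweets
top = counts.most_common(10)              # already sorted, highest first

buf = io.StringIO()                       # stands in for open('out.csv', 'w')
writer = csv.writer(buf)
writer.writerows(top)                     # one (user, count) row per line
print(buf.getvalue())
```

`most_common(10)` replaces the manual dict, the `sorted(...)` call, and the `[0:10]` slice from the question.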
4,854
73,988,902
I have one tensor dataset with all the images and another with their masks. How do I combine/join/add them to make a single `tf.data.Dataset`?

```
# turning them into tensor data
val_img_data = tf.data.Dataset.from_tensor_slices(np.array(all_val_img))
val_mask_data = tf.data.Dataset.from_tensor_slices(np.array(all_val_mask))
```

Then I mapped a function over the paths to turn them into images:

```
val_img_tensor = val_img_data.map(get_image)
val_mask_tensor = val_mask_data.map(get_image)
```

So now I have two datasets, one of images and the other of masks. How do I join them into one combined dataset?

I tried zipping them; it didn't work:

```
val_data = tf.data.Dataset.from_tensor_slices(zip(val_img_tensor, val_mask_tensor))
```

Error

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/data/util/structure.py in normalize_element(element, element_signature)
    101           if spec is None:
--> 102             spec = type_spec_from_value(t, use_fallback=False)
    103         except TypeError:

11 frames

TypeError: Could not build a `TypeSpec` for <zip object at 0x7f08f3862050> with type zip

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/constant_op.py in convert_to_eager_tensor(value, ctx, dtype)
    100     dtype = dtypes.as_dtype(dtype).as_datatype_enum
    101     ctx.ensure_initialized()
--> 102     return ops.EagerTensor(value, ctx.device_name, dtype)
    103
    104

ValueError: Attempt to convert a value (<zip object at 0x7f08f3862050>) with an unsupported type (<class 'zip'>) to a Tensor.
```
2022/10/07
[ "https://Stackoverflow.com/questions/73988902", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14076425/" ]
The comment by Djinn is mostly what you need to follow. Here is an end-to-end answer showing how you can build a data pipeline for segmentation model training — generally training pairs of both images and masks.

First, get the sample paths.

```
images = [
   1.jpg,
   2.jpg,
   3.jpg, ...
]

masks = [
   1.png,
   2.png,
   3.png, ...
]
```

Second, define the hyper-params (i.e. image size, batch size, etc.) and build the `tf.data` API input pipeline.

```
IMAGE_SIZE = 128
BATCH_SIZE = 86

def read_image(image_path, mask=False):
    image = tf.io.read_file(image_path)
    if mask:
        image = tf.image.decode_png(image, channels=1)
        image.set_shape([None, None, 1])
        image = tf.image.resize(images=image, size=[IMAGE_SIZE, IMAGE_SIZE])
        image = tf.cast(image, tf.int32)
    else:
        image = tf.image.decode_png(image, channels=3)
        image.set_shape([None, None, 3])
        image = tf.image.resize(images=image, size=[IMAGE_SIZE, IMAGE_SIZE])
        image = image / 255.
    return image

def load_data(image_list, mask_list):
    image = read_image(image_list)
    mask = read_image(mask_list, mask=True)
    return image, mask

def data_generator(image_list, mask_list, split='train'):
    dataset = tf.data.Dataset.from_tensor_slices((image_list, mask_list))
    dataset = dataset.shuffle(8*BATCH_SIZE) if split == 'train' else dataset
    dataset = dataset.map(load_data, num_parallel_calls=tf.data.AUTOTUNE)
    dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
    dataset = dataset.prefetch(tf.data.AUTOTUNE)
    return dataset
```

Lastly, pass the lists of sample paths (images + masks) to build the data generator.

```
train_dataset = data_generator(images, masks)
image, mask = next(iter(train_dataset.take(1)))
print(image.shape, mask.shape)
(86, 128, 128, 3) (86, 128, 128, 1)
```

Here you can see that `tf.data.Dataset.from_tensor_slices` successfully loads the training pairs and returns them as tuples (no zipping needed). I hope it resolves your problem. I've also answered your other query regarding augmentation pipelines, [HERE](https://stackoverflow.com/a/73997583/9215780).

To add more, check out the following resources, where I've shared plenty of semantic segmentation modeling approaches. They may help.

* [Carvana Image Semantic Segmentation : Starter](https://www.kaggle.com/code/ipythonx/carvana-image-semantic-segmentation-starter)
* [Stanford Background Scene Understanding : Starter](https://www.kaggle.com/code/ipythonx/stanford-background-scene-understanding-starter)
* [Retinal Vessel Segmentation : Starter](https://www.kaggle.com/code/ipythonx/retinal-vessel-segmentation-starter)
Maybe try `tf.data.Dataset.zip`:

```
val_data = tf.data.Dataset.zip((val_img_tensor, val_mask_tensor))
```
4,855
65,846,292
My list does not appear when I'm running my program. There are no errors; it just pops up with a blank screen. This is my code — please help, I'm new to Python.

```
devices = ['iphone', 'ps5', 'pc']
devicesaccessories = ['mouse', 'keyboard', 'airpods']
joinedlist = devices + devicesaccessories
```
2021/01/22
[ "https://Stackoverflow.com/questions/65846292", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15059825/" ]
You probably do not have the dependency binaries, such as opencv_*.dll, in the same folder as your binary. The Inference Engine binaries also need to be present in the folder from which your binary is expected to run. Please use Dependency Walker to identify the load-time dependencies and copy the needed binaries.
copy "\inference\_engine\lib\intel64\Release\plugins.xml" to project dir, then replace Core core to Core core(plugins.xml)
4,856
9,331,010
This post is the same as my question in [MySQL in Python: UnicodeEncodeError: 'ascii'](https://stackoverflow.com/questions/9330046/mysql-in-python-unicodeencodeerror-ascii); this is just to clear things up. I am trying to save a string to a MySQL database but I get an error:

> File ".smart.py", line 51, in (number, text, 'smart', 'u')
>
> UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 25: ordinal not in range(128)

and the string to be saved is in ***m['Text']***:

> Lala*=#&%@<>_?!:;-'"/()¥¡¿

Here is a snippet of the code:

```
risk = m['Text']
msg = risk.encode('utf8')
text = db.escape_string(msg)
sql = "INSERT INTO posts(nmbr, \
       msg, tel, sts) \
       VALUES ('%s', '%s', '%s', '%s')" % \
       (number, text, 'smart', 'u')
```

If I comment out the SQL query and put ***print text*** instead, it prints out Lala*=#&%@<>_?!:;-'"/()¥¡¿. The error is only encountered when the SQL is being processed. The MySQL encoding is set to utf8_unicode_ci (or should I change this?). Thanks.
2012/02/17
[ "https://Stackoverflow.com/questions/9331010", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1067791/" ]
Add these parameters: `MySQLdb.connect(..., use_unicode=1, charset="utf8")`. Create a cursor

```
cur = db.cursor()
```

and then execute like so:

```
risk = m['Text']
sql = """INSERT INTO posts(nmbr, msg, tel, sts)
         VALUES (%s, %s, %s, %s)"""
values = (number, risk, 'smart', 'u')
cur.execute(sql, values)  # pass values separately; this ensures they are escaped/sanitized
db.commit()               # note: commit on the connection, not the cursor
```

Now you don't need these two lines:

```
msg = risk.encode('utf8')
text = db.escape_string(msg)
```
It is not clear whether your `m['Text']` value is of type `StringType` or `UnicodeType`. My bet is that it is a byte-string (`StringType`). If that's true, then adding a line `m['Text'] = m['Text'].decode('UTF-8')` before your insert may work.
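The failure can be reproduced without a database: the traceback comes from a byte-string containing non-ASCII UTF-8 bytes being decoded with the ASCII codec. A small illustration — the byte `0xc2` from the error message is the first byte of `¥` in UTF-8:

```python
# The problem string from the question, encoded to UTF-8 bytes.
raw = 'Lala*=#&%@<>_?!:;-\'"/()¥¡¿'.encode('utf-8')

# Decoding with the right codec round-trips cleanly.
assert raw.decode('utf-8').endswith('¥¡¿')

# Decoding the same bytes as ASCII reproduces the error in the traceback.
try:
    raw.decode('ascii')
    ascii_failed = False
except UnicodeDecodeError as e:
    ascii_failed = True
    print(e)  # 'ascii' codec can't decode byte 0xc2 in position ...
```

This is why the accepted fix works: a UTF-8 connection plus parameterized `execute` means the driver never falls back to the ASCII codec.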
4,857
36,307,767
I have set the output of Azure stream analytics job to service bus queue which sends the data in JSON serialized format. When I receive the queue message in python script, along with the data in curly braces, I get @strin3http//schemas.microsoft.com/2003/10/Serialization/� appended in front. I am not able to trim it as the received message is not being recognized as either a string or a message. Because of this I cannot de-serialize the data.
2016/03/30
[ "https://Stackoverflow.com/questions/36307767", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6134419/" ]
The issue is similar to the SO thread [Interoperability Azure Service Bus Message Queue Messages](https://stackoverflow.com/questions/33542509/interoperability-azure-service-bus-message-queue-messages). In my experience, the data from Azure Stream Analytics to Service Bus is sent via the AMQP protocol, but the protocol for receiving the data in Python is HTTP. The excess content was generated by AMQP during transmission.

Assuming you receive the message via the code below (please see <https://azure.microsoft.com/en-us/documentation/articles/service-bus-python-how-to-use-queues/#receive-messages-from-a-queue>) — the function `receive_queue_message` with the `False` value of the argument `peek_lock` wraps the REST API [Receive and Delete Message (Destructive Read)](https://msdn.microsoft.com/en-us/library/azure/hh780770.aspx):

```
msg = bus_service.receive_queue_message('taskqueue', peek_lock=False)
print(msg.body)
```

According to the source code of the Azure Service Bus SDK for Python — including the functions [`receive_queue_message`](https://github.com/Azure/azure-sdk-for-python/blob/master/azure-servicebus/azure/servicebus/servicebusservice.py#L937), [`read_delete_queue_message`](https://github.com/Azure/azure-sdk-for-python/blob/master/azure-servicebus/azure/servicebus/servicebusservice.py#L884) and [`_create_message`](https://github.com/Azure/azure-sdk-for-python/blob/master/azure-servicebus/azure/servicebus/_serialization.py#L59) — I think you can directly remove the excess content from `msg.body` using the common string functions [`lstrip`](https://docs.python.org/2/library/string.html#string.lstrip) or [`strip`](https://docs.python.org/2/library/string.html#string.strip).
I ran into this issue as well. The previous answers are only workarounds and do not fix the root cause of this issue. The problem you are encountering is likely due to your Stream Analytics compatibility level. Compatibility level 1.0 uses an XML serializer producing the XML tag you are seeing. Compatibility level 1.1 "fixes" this issue. See my previous answer here: <https://stackoverflow.com/a/49307178/263139>.
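If upgrading the compatibility level is not an option, a workaround along the lines of the first answer is to cut the body down to the JSON payload by locating the outermost braces. A sketch — the header text here is a stand-in (the exact bytes vary), and it assumes the payload is a single JSON object:

```python
import json

# Stand-in for a message body with the serialization header still attached.
body = ('@strin3http//schemas.microsoft.com/2003/10/Serialization/\ufffd'
        '{"device": "sensor1", "temp": 21.5}')

start = body.index('{')        # first brace: start of the JSON payload
end = body.rindex('}') + 1     # last brace: end of the JSON payload
payload = json.loads(body[start:end])
print(payload)
```

Taking the first `{` and the last `}` also tolerates nested objects inside the payload.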
4,858
37,190,989
I am using gensim word2vec package in python. I know how to get the vocabulary from the trained model. But how to get the word count for each word in vocabulary?
2016/05/12
[ "https://Stackoverflow.com/questions/37190989", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5969670/" ]
Each word in the vocabulary has an associated vocabulary object, which contains an index and a count.

```
vocab_obj = w2v.vocab["word"]
vocab_obj.count
```

Output for the Google News w2v model: 2998437

So to get the count for each word, you would iterate over all words and vocab objects in the vocabulary.

```
for word, vocab_obj in w2v.vocab.items():
    # Do something with vocab_obj.count
```
When you want to create a dictionary of word to count for easy retrieval later, you can do so as follows:

```
w2c = dict()
for item in model.wv.vocab:
    w2c[item] = model.wv.vocab[item].count
```

If you want to sort it to see the most frequent words in the model, you can also do that:

```
w2cSorted = dict(sorted(w2c.items(), key=lambda x: x[1], reverse=True))
```
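Since the vocab attributes move around between gensim versions, here is a version-independent sketch of the same counting-and-sorting idea. The `VocabItem` class is a stand-in for gensim's per-word vocab objects (which expose a `.count`), so the snippet runs without a trained model:

```python
# Stand-ins for gensim's per-word vocab objects, each carrying a .count.
class VocabItem:
    def __init__(self, count):
        self.count = count

vocab = {'the': VocabItem(120), 'cat': VocabItem(7), 'sat': VocabItem(7)}

# Same idea as the loop above, as a dict comprehension:
w2c = {word: v.count for word, v in vocab.items()}

# sorted() with a key gives the most frequent words first:
top = sorted(w2c.items(), key=lambda kv: kv[1], reverse=True)
print(top)
```

With a real model, `vocab` would be `model.wv.vocab` (or the equivalent in your gensim version) and the rest stays the same.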
4,866
15,169,101
I'm writing a Python script that needs to write some data to a temporary file, then create a subprocess running a C++ program that will read the temporary file. I'm trying to use [`NamedTemporaryFile`](http://docs.python.org/2/library/tempfile.html#tempfile.NamedTemporaryFile) for this, but according to the docs,

> Whether the name can be used to open the file a second time, while the named temporary file is still open, varies across platforms (it can be so used on Unix; it cannot on Windows NT or later).

And indeed, on Windows if I flush the temporary file after writing, but don't close it until I want it to go away, the subprocess isn't able to open it for reading.

I'm working around this by creating the file with `delete=False`, closing it before spawning the subprocess, and then manually deleting it once I'm done:

```
fileTemp = tempfile.NamedTemporaryFile(delete = False)
try:
    fileTemp.write(someStuff)
    fileTemp.close()
    # ...run the subprocess and wait for it to complete...
finally:
    os.remove(fileTemp.name)
```

This seems inelegant. Is there a better way to do this? Perhaps a way to open up the permissions on the temporary file so the subprocess can get at it?
2013/03/02
[ "https://Stackoverflow.com/questions/15169101", "https://Stackoverflow.com", "https://Stackoverflow.com/users/938914/" ]
[According](http://bugs.python.org/issue14243#msg164504) to Richard Oudkerk

> (...) the only reason that trying to reopen a `NamedTemporaryFile` fails on Windows is because when we reopen we need to use `O_TEMPORARY`.

and he gives an example of how to do this in Python 3.3+

```
import os, tempfile

DATA = b"hello bob"

def temp_opener(name, flag, mode=0o777):
    return os.open(name, flag | os.O_TEMPORARY, mode)

with tempfile.NamedTemporaryFile() as f:
    f.write(DATA)
    f.flush()
    with open(f.name, "rb", opener=temp_opener) as f:
        assert f.read() == DATA

assert not os.path.exists(f.name)
```

Because there's no `opener` parameter in the built-in `open()` in Python 2.x, we have to combine the lower-level `os.open()` and `os.fdopen()` functions to achieve the same effect:

```
import subprocess
import tempfile

DATA = b"hello bob"

with tempfile.NamedTemporaryFile() as f:
    f.write(DATA)
    f.flush()

    subprocess_code = \
"""import os
f = os.fdopen(os.open(r'{FILENAME}', os.O_RDWR | os.O_BINARY | os.O_TEMPORARY), 'rb')
assert f.read() == b'{DATA}'
""".replace('\n', ';').format(FILENAME=f.name, DATA=DATA)

    subprocess.check_output(['python', '-c', subprocess_code]) == DATA
```
I know this is a really old post, but I think it's relevant today given that the API is changing and functions like `mktemp` and `mkstemp` are being replaced by functions like `TemporaryFile()` and `TemporaryDirectory()`. I just wanted to demonstrate in the following sample how to make sure that a temp directory is still available downstream.

Instead of coding:

```
tmpdirname = tempfile.TemporaryDirectory()
```

and using `tmpdirname` throughout your code, you should try to run your code inside a `with` statement block, to ensure the directory is available for your code's calls, like this:

```
with tempfile.TemporaryDirectory() as tmpdirname:
    # [do dependent code nested so it's part of the with statement]
```

If you reference it outside of the `with`, then it's likely that it won't be visible anymore.
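The `delete=False` workaround from the question can at least be packaged so the try/finally lives in one place. A runnable sketch using a context manager — the C++ program is replaced by a Python one-liner subprocess so the example is self-contained:

```python
import os
import subprocess
import sys
import tempfile
from contextlib import contextmanager

@contextmanager
def closed_tempfile(data):
    """Write data, close the file so another process can open it (the
    Windows restriction from the docs), and delete it on block exit."""
    f = tempfile.NamedTemporaryFile(mode='w', delete=False)
    try:
        f.write(data)
        f.close()
        yield f.name
    finally:
        os.remove(f.name)

with closed_tempfile('some stuff') as path:
    # Stand-in for the C++ program: a subprocess that reads the file back.
    out = subprocess.check_output(
        [sys.executable, '-c',
         'import sys; print(open(sys.argv[1]).read())', path])

print(out.decode().strip())
assert not os.path.exists(path)   # file is gone after the with-block
```

The calling code gets the same one-liner ergonomics as `NamedTemporaryFile`, and the cleanup cannot be forgotten.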
4,868
61,782,776
I tried this

```
x = np.array([
    [0,0],
    [1,0],
    [2.61,-1.28],
    [-0.59,2.1]
])
for i in X:
    X = np.append(X[i], X[i][0]**2, axis = 1)

print(X)
```

But I am getting this

```
IndexError                                Traceback (most recent call last)
<ipython-input-12-9bfd33261d84> in <module>()
      6 ])
      7 for i in X:
----> 8     X = np.append(X[i], X[i][0]**2, axis = 1)
      9
     10 print(X)

IndexError: arrays used as indices must be of integer (or boolean) type
```

Someone please help!
2020/05/13
[ "https://Stackoverflow.com/questions/61782776", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12894182/" ]
How about concatenate:

```
np.concatenate((x,x**2))
```

Output:

```
array([[ 0.    ,  0.    ],
       [ 1.    ,  0.    ],
       [ 2.61  , -1.28  ],
       [-0.59  ,  2.1   ],
       [ 0.    ,  0.    ],
       [ 1.    ,  0.    ],
       [ 6.8121,  1.6384],
       [ 0.3481,  4.41  ]])
```
``` In [210]: x = np.array([ ...: [0,0], ...: [1,0], ...: [2.61,-1.28], ...: [-0.59,2.1] ...: ]) ...: In [211]: x # (4,2) array Out[211]: array([[ 0. , 0. ], [ 1. , 0. ], [ 2.61, -1.28], [-0.59, 2.1 ]]) In [212]: for i in x: # iterate on rows ...: print(i) # i is a row, not an index x[i] would be wrong ...: [0. 0.] [1. 0.] [ 2.61 -1.28] [-0.59 2.1 ] ``` Look at one row: ``` In [214]: x[2] Out[214]: array([ 2.61, -1.28]) ``` You can join that row with its square with: ``` In [216]: np.concatenate((x[2], x[2]**2)) Out[216]: array([ 2.61 , -1.28 , 6.8121, 1.6384]) ``` And doing the same for the whole array. Where possible in `numpy` work with the whole array, not rows and elements. It's simpler, and faster. ``` In [217]: np.concatenate((x, x**2), axis=1) Out[217]: array([[ 0. , 0. , 0. , 0. ], [ 1. , 0. , 1. , 0. ], [ 2.61 , -1.28 , 6.8121, 1.6384], [-0.59 , 2.1 , 0.3481, 4.41 ]]) ```
4,878
37,690,440
Right now I'm writing a function that reads data from a file, with the goal being to add that data to a numpy array and return said array. I would like to return the array as a 2D array, however I'm not sure what the complete shape of the array will be (I know the amount of columns, but not rows). What I have right now is: ``` columns = _____ for line in currentFile: currentLine = line.split() data = np.zeros(shape=(columns),dtype=float) tempData = [] for i in range(columns): tempData.append(currentLine[i]) data = np.concatenate((data,tempdata),axis=0) ``` However, this makes a 1D array. Essentially what I'm asking is: Is there any way to have add a python list as a row to a numpy array with a variable amount of rows?
2016/06/07
[ "https://Stackoverflow.com/questions/37690440", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5379671/" ]
If your file `data.txt` is ``` 1 2 3 4 1 2 3 4 1 2 3 4 1 2 3 4 1 2 3 4 1 2 3 4 ``` All you need to do is ``` >>> import numpy as n >>> data_array = n.loadtxt("data.txt") >>> data_array array([[1., 2., 3., 4.], [1., 2., 3., 4.], [1., 2., 3., 4.], [1., 2., 3., 4.], [1., 2., 3., 4.], [1., 2., 3., 4.]]) ```
If you modify @Abstracted's solution as: ``` data_array = np.loadtxt("data.txt", dtype=int) ``` you will get the array in integer form if you want it that way.
4,879
32,404,818
I am using iPython in command prompt, Windows 7. I thought this would be easy to find, I searched and found directions on how to use the inspect package but it seems like the inspect package is meant to be used for functions that are created by the programmer rather than functions that are part of a package. My main goal to to be able to use the help files from within command prompt of iPython, to be able to look up a function such as csv.reader() and figure out all the possible arguments for it AND all possible values for these arguements. In R programming this would simply be args(csv.reader()) I have tried googling this but they all point me to the inspect package, perhaps I'm misunderstanding it's use? For example, If I wanted to see a list of all possible arguments and the corresponding possible values for these arguments for the csv.reader() function (from the import csv package), how would I go about doing that? I've tried doing help(csv.reader) but this doesn't provide me a list of all possible arguments and their potential values. 'Dialect' shows up but it doesn't tell me the possible values of the dialect argument of the csv.reader function. I can easily go to the site: <https://docs.python.org/3/library/csv.html#csv-fmt-params> and see that the dialect options are: delimiter, doublequote, escapechar, etc.. etc..but is there a way to see this in Python console? I've also tried dir(csv.reader) but this isn't what I was looking for either. Going bald trying to figure this out....
2015/09/04
[ "https://Stackoverflow.com/questions/32404818", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4959665/" ]
There is no way to do this generically. `help(<function>)` will at a minimum return the function signature (including the argument names), but Python is dynamically typed, so you don't get any types, and argument names by themselves don't tell you what the valid values are. This is where a good docstring comes in. However, the `csv` module does have a specific function for listing the dialects: ``` >>> csv.list_dialects() ['excel', 'excel-tab', 'unix'] >>> help(csv.excel) Help on class excel in module csv: class excel(Dialect) | Describe the usual properties of Excel-generated CSV files. ... ```
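To at least introspect argument names and defaults programmatically, Python 3's `inspect.signature` can be used on pure-Python callables. A sketch (the `reader` function below is a stand-in mimicking `csv.reader`'s documented parameters, not the real one — `csv.reader` is implemented in C, and `inspect.signature` may raise `ValueError` for such builtins):

```python
import inspect

# stand-in for a function like csv.reader; the real csv.reader is a
# C builtin, for which inspect.signature may raise ValueError
def reader(csvfile, dialect='excel', **fmtparams):
    pass

sig = inspect.signature(reader)
print(list(sig.parameters))               # ['csvfile', 'dialect', 'fmtparams']
print(sig.parameters['dialect'].default)  # excel
```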
The inspect module is extremely powerful. To get a list of classes, for example in the csv module, you could go: ``` import inspect, csv from pprint import pprint module = csv mod_string = 'csv' module_classes = inspect.getmembers(module, inspect.isclass) for i in range(len(module_classes)): myclass = module_classes[i][0] myclass = mod_string+'.'+myclass myclass = eval(myclass) # could construct whatever query you want about this class here... # you'll need to play with this line to get what you want; it will failasis #line = inspect.formatargspect(*inspect.getfullargspec(myclass)) pprint(myclass) ``` Hope this helps get you started!
4,881
74,074,355
My code uses matplotlib which requires numpy. I'm using pipenv as my environment. When I run the code through my terminal and pipenv shell, it executes without a problem. I've just installed Pycharm for Apple silicon (I have an M1) and set up my interpreter to use the same pipenv environment that I configured earlier. However, when I try to run it through Pycharm (even the terminal in pycharm), it throws me the following error: `Original error was: dlopen(/Users/s/.local/share/virtualenvs/CS_156-UWxYg3KY/lib/python3.8/site-packages/numpy/core/_multiarray_umath.cpython-38-darwin.so, 0x0002): tried: '/Users/s/.local/share/virtualenvs/CS_156-UWxYg3KY/lib/python3.8/site-packages/numpy/core/_multiarray_umath.cpython-38-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e'))` What's confusing me is the fact that my code executes when using this same environment through the my terminal... But it fails when running on Pycharm? Any insights appreciated!
2022/10/14
[ "https://Stackoverflow.com/questions/74074355", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19456156/" ]
Since you are on Windows, I am pretty sure the different results are because the UCRT detects during runtime whether FMA3 (fused-multiply-add) instructions are available for the CPU and, if so, uses them in transcendental functions such as cosine. This gives [slightly different results](https://stackoverflow.com/a/29086451/3740047). The solution is to place the call `_set_FMA3_enable(0);` at the very start of your `main()` or `WinMain()` function, as described [here](https://learn.microsoft.com/en-us/cpp/c-runtime-library/reference/get-fma3-enable-set-fma3-enable?view=msvc-170). If you want to have reproducibility also between different operating systems, things become harder or even impossible. See e.g. [this](https://randomascii.wordpress.com/2013/07/16/floating-point-determinism/) blog post. In response also to the comments stating that you should just use some tolerance, I do not agree with this as a general statement. Certainly, there are many applications where this is the way to go. But I do think that it **can** be a sensible requirement to get exactly the same floating point results **for some applications**, at least when staying on the same OS (Windows, in this case). In fact, we had the very same issue with `_set_FMA3_enable` a while ago. I am a software developer for a traffic simulation, and minor differences such as 10^-16 often build up and lead to entirely different simulation results eventually. Naturally, one is supposed to run many simulations with different seeds and average over all of them, making the different behavior irrelevant for the final result. But: Sometimes customers have a problem at a specific simulation second for a specific seed (e.g. an application crash or incorrect behavior of an entity), and not being able to reproduce it on our developer machines due to a different CPU makes it much harder to diagnose and fix the issue. 
Moreover, if the test system consists of a mixture of older and newer CPUs and test cases are not bound to specific resources, tests can sometimes deviate seemingly without reason (flaky tests). This is certainly not desired. Requiring exact reproducibility also makes writing the tests much easier because you do not require heuristic thresholds (e.g. a tolerance or some guessed value for the number of samples). Moreover, our customers expect the results to remain stable for a specific version of the program since they calibrated (more or less...) their traffic networks to real data. This is somewhat questionable, since (again) one should actually look at averages, but the naive expectation in reality usually wins.
IEEE-745 double precision binary floating point provides no more than 15 decimal significant digits of precision. You are looking at the "noise" of different library implementations and possibly different FPU implementations. > > How to make calculations fully reproducible? > > > That is an X-Y problem. The answer is you can't. But it is the wrong question. You would do better to ask how you can implement valid and robust tests that are sympathetic to this well-known and unavoidable technical issue with floating-point representation. Without providing the test code you are trying to use, it is not possible to answer that directly. Generally you should avoid comparing floating point values for exact equality, and rather subtract the result from the desired value, and test for some acceptable discrepancy within the supported precision of the FP type used. For example: ``` #define EXPECTED_RESULT 40965.8966304650 #define RESULT_PRECISION 00000.0000000001 double actual_result = test() ; bool error = fabs( actual_result- EXPECTED_RESULT ) > RESULT_PRECISION ; ```
4,882
72,479,835
I'm totally new to command line and am trying to follow the instructions [here](http://rleca.pbworks.com/w/file/fetch/124098201/tuto_obitools_install_W10OS.html) to get OBITools installed. I've gotten part way through, but I am getting an error I don't understand when trying to download the OBITools install file. The code presented in the tutorial is: `wget http://metabarcoding.org//obitools/doc/_downloads/get-obitools.py python get-obitools.py` I am getting the following error: `>>> wget http://metabarcoding.org//obitools/doc/_downloads/get-obitools.py File "<stdin>", line 1 wget http://metabarcoding.org//obitools/doc/_downloads/get-obitools.py ^ SyntaxError: invalid syntax` I'm not sure what I'm doing wrong? Any help is much appreciated!
2022/06/02
[ "https://Stackoverflow.com/questions/72479835", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15333572/" ]
You're not supposed to type those lines at the python shell (the one with >>>), you're supposed to type those into your regular bash shell (the one with $).
Type `exit()` to quit from the python shell and try again
4,884
57,077,879
I am running the function below on a very long CSV file. The function calculates the Z-score of the column MFE for every 50 lines. Some of these 50 lines contain just zeros, and therefore when calculating the Zscore the program stops because it can't divide by zero. How can I solve this problem, and instead of stopping the program running print a 0 for the z-score of these lines? ``` def doZscore(csv_file, n_random): df = pd.read_csv(csv_file) row_start = 0 row_end = n_random + 1 step = n_random + 1 zscore = [] while row_end <= len(df): selected_rows = df['MFE'].iloc[row_start:row_end] arr = [] for x in selected_rows: arr.append(float(x)) scores = stats.zscore(arr) for i in scores: zscore.append(round(i, 3)) arr.clear() row_start += step row_end += step df['Zscore'] = zscore with open(csv_file, 'w') as f: df.to_csv(f, index=False) f.close() return ``` The error I am getting is: /s/software/anaconda/python3/lib/python3.7/site-packages/scipy/stats/stats.py:2253: RuntimeWarning: invalid value encountered in true\_divide return (a - mns) / sstd
2019/07/17
[ "https://Stackoverflow.com/questions/57077879", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10647510/" ]
You can do either of the two following options:

```
if sum(arr) == 0:
    scores = [0] * len(arr)
else:
    scores = stats.zscore(arr)
```

The refactored way is:

```
scores = [0] * len(arr) if sum(arr) == 0 else stats.zscore(arr)
```

(Note the `[0] * len(arr)`: downstream you append one score per row, so the placeholder list has to match the window length.) Both would work fine.
As long as that is what you want to do, you'd just check, before `scores = stats.zscore(arr)`, whether your array is all 0s and, if so, make `scores = arr` instead.
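A hedged sketch of that guard (the `safe_zscore` helper name is invented; it returns zeros for any zero-variance window, which covers the all-zeros case and avoids the divide-by-zero warning — note that `std == 0`, unlike `sum == 0`, also catches constant non-zero windows and windows of mixed signs summing to zero):

```python
import numpy as np

def safe_zscore(arr):
    """Z-score a window, returning zeros when the variance is zero."""
    arr = np.asarray(arr, dtype=float)
    std = arr.std()  # population std (ddof=0), same default as scipy.stats.zscore
    if std == 0:     # constant window (e.g. all zeros): z-score is undefined
        return np.zeros_like(arr)
    return (arr - arr.mean()) / std

print(safe_zscore([0.0, 0.0, 0.0]))  # [0. 0. 0.]
print(safe_zscore([1.0, 2.0, 3.0]))
```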
4,885
25,193,352
How can I create a list (or a numpy array if possible) in python that takes `datetime` objects in the first column and other data types in other columns? For example, the list would be something like this: ``` list = [[<datetime object>, 0, 0.] [<datetime object>, 0, 0.] [<datetime object>, 0, 0.]] ``` What is the best way to create and initialize a list like this? So far, I have tried using `np.empty`, `np.zeros`, and a list comprehension, similar to this: ``` list = [[None for x in xrange(3)] for x in xrange(3)] ``` But if I do this, I would need a `for loop` to populate the first column and there doesn't seem to be a way to assign it in a simpler way like the following: ``` list[0][:] = another_list_same_length ```
2014/08/07
[ "https://Stackoverflow.com/questions/25193352", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2613064/" ]
You mean something like: ``` In [17]: import numpy as np In [18]: np.array([[[datetime.now(),np.zeros(2)] for x in range(10)]]) Out[18]: array([[[datetime.datetime(2014, 8, 7, 23, 45, 12, 151489), array([ 0., 0.])], [datetime.datetime(2014, 8, 7, 23, 45, 12, 151560), array([ 0., 0.])], [datetime.datetime(2014, 8, 7, 23, 45, 12, 151595), array([ 0., 0.])], [datetime.datetime(2014, 8, 7, 23, 45, 12, 151619), array([ 0., 0.])], [datetime.datetime(2014, 8, 7, 23, 45, 12, 151634), array([ 0., 0.])], [datetime.datetime(2014, 8, 7, 23, 45, 12, 151648), array([ 0., 0.])], [datetime.datetime(2014, 8, 7, 23, 45, 12, 151662), array([ 0., 0.])], [datetime.datetime(2014, 8, 7, 23, 45, 12, 151677), array([ 0., 0.])], [datetime.datetime(2014, 8, 7, 23, 45, 12, 151691), array([ 0., 0.])], [datetime.datetime(2014, 8, 7, 23, 45, 12, 151706), array([ 0., 0.])]]], dtype=object) ```
Use `zip` ``` >>> column1 = [1, 1, 1] >>> column2 = [2, 2, 2] >>> column3 = [3, 3, 3] >>> zip(column1, column2, column3) [(1, 2, 3), (1, 2, 3), (1, 2, 3)] >>> # Or, if you'd like a list of lists: ... >>> [list(tup) for tup in zip(column1, column2, column3)] [[1, 2, 3], [1, 2, 3], [1, 2, 3]] >>> ``` This would allow you to build-up the columns separately and then combine them. `column1` could be dates (or anything else.) Hope this helps.
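As a further sketch — an alternative not shown in either answer — a NumPy structured array gives named, typed columns, including a real datetime column (the field names here are invented for illustration):

```python
import numpy as np

# one record per row: a datetime column plus an int and a float column
rows = np.zeros(3, dtype=[('when', 'datetime64[s]'),
                          ('count', 'i4'),
                          ('value', 'f8')])

# assign a whole column at once, similar to the list[0][:] = ... idea
rows['when'] = np.datetime64('2014-08-07T23:45:12')
rows['count'] = [1, 2, 3]

print(rows['count'].sum())  # 6
```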
4,887
56,801,645
I tried to add python scripts to my packages reference this two tutorials. [Handling of setup.py](http://docs.ros.org/api/catkin/html/user_guide/setup_dot_py.html) [Installing Python scripts and modules](http://docs.ros.org/api/catkin/html/howto/format2/installing_python.html) So I added setup.py in root `test\src\test_pkg`, changed CMakeLists.txt in path `test\src`. (My package root path is `test\`, and my package path is `test\src\test_pkg`, my python scripts path is `test\src\test_pkg\scripts`) This is setup.py. ``` #!/usr/bin/env python # -*- coding: utf-8 -*- from distutils.core import setup from catkin_pkg.python_setup import generate_distutils_setup setup_args = generate_distutils_setup( packages=['test_pkg'], scripts=['/scripts'], package_dir={'': 'src'} ) setup(**setup_args) ``` This is CMakeLists.txt ``` cmake_minimum_required(VERSION 2.8.3) project(test_pkg) find_package(catkin REQUIRED COMPONENTS roscpp rospy std_msgs sensor_msgs message_generation ) catkin_python_setup() catkin_install_python(PROGRAMS scripts/talker DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}) add_message_files( FILES Num.msg ) add_service_files( FILES AddTwoInts.srv ) generate_messages( DEPENDENCIES std_msgs sensor_msgs ) catkin_package( CATKIN_DEPENDS roscpp rospy std_msgs message_runtime sensor_msgs include_directories( # include ${catkin_INCLUDE_DIRS} ) install(PROGRAMS DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION} ) ``` Then I run `catkin_make` in path `test`.(I have run `test\devel\setup.bat`) And got this CMake error: ``` Base path: E:\workspace\ros\test Source space: E:\workspace\ros\test\src Build space: E:\workspace\ros\test\build Devel space: E:\workspace\ros\test\devel Install space: E:\workspace\ros\test\install #### #### Running command: "nmake cmake_check_build_system" in "E:\workspace\ros\test\build" #### Microsoft (R) ?????????ù??? 14.20.27508.1 ?? ??????? (C) Microsoft Corporation?? ????????????? 
-- Using CATKIN_DEVEL_PREFIX: E:/workspace/ros/test/devel
-- Using CMAKE_PREFIX_PATH: E:/workspace/ros/test/devel;C:/opt/ros/melodic/x64;C:/opt/rosdeps/x64
-- This workspace overlays: E:/workspace/ros/test/devel;C:/opt/ros/melodic/x64
-- Using PYTHON_EXECUTABLE: C:/opt/python27amd64/python.exe
-- Using default Python package layout
-- Using empy: C:/opt/python27amd64/lib/site-packages/em.pyc
-- Using CATKIN_ENABLE_TESTING: ON
-- Call enable_testing()
-- Using CATKIN_TEST_RESULTS_DIR: E:/workspace/ros/test/build/test_results
-- Found gtest: gtests will be built
-- Using Python nosetests: C:/opt/python27amd64/Scripts/nosetests-2.7.exe
-- catkin 0.7.14
-- BUILD_SHARED_LIBS is on
-- BUILD_SHARED_LIBS is on
-- Using CATKIN_WHITELIST_PACKAGES: test_pkg
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- ~~ traversing 1 packages in topological order:
-- ~~ - test_pkg
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- +++ processing catkin package: 'test_pkg'
-- ==> add_subdirectory(test_pkg)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
CMake Error at C:/opt/ros/melodic/x64/share/catkin/cmake/catkin_install_python.cmake:20 (message):
  catkin_install_python() called without required DESTINATION argument.
Call Stack (most recent call first):
  test_pkg/CMakeLists.txt:27 (catkin_install_python)

-- Configuring incomplete, errors occurred!
See also "E:/workspace/ros/test/build/CMakeFiles/CMakeOutput.log".
See also "E:/workspace/ros/test/build/CMakeFiles/CMakeError.log".
NMAKE : fatal error U1077: "C:\opt\rosdeps\x64\bin\cmake.exe": return code "0x1"
Stop.
Invoking "nmake cmake_check_build_system" failed
```
How to fix this error? Thanks for any reply. System: Windows10 ROS: ROS1 * /rosdistro: melodic * /rosversion: 1.14.3
2019/06/28
[ "https://Stackoverflow.com/questions/56801645", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9589731/" ]
First, while `catkin` is a generic enough tool, it's typically used for the robotics framework ROS. So by asking on ROS' question community [answers.ros.org](https://answers.ros.org/questions/) you might get more response. > > > ``` > CMake Error at C:/opt/ros/melodic/x64/share/catkin/cmake/catkin_install_python.cmake:20 (message): > catkin_install_python() called without required DESTINATION argument. > > ``` > > I think you referred to the right online resources. I've also looked at them and none of them clarified this, but `catkin_package()` needs to be called prior to `catkin_install_python`.
I got the same error. I think you copied the code to the wrong position: if you place that code relatively early in the CMakeLists.txt, it can fail. Try placing the code here:

```
## Mark executable scripts (Python etc.) for installation
## in contrast to setup.py, you can choose the destination
#catkin_package()
#catkin_python_setup()
catkin_install_python(PROGRAMS
  scripts/talker.py
  scripts/listener.py
  DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)
```
4,888
17,261,801
I have a Java project where I must build an object from a JSON input, which comes in the following format: ``` { "Shell": 13401, "JavaScript": 2693931, "Ruby": 2264, "C": 111534, "C++": 940606, "Python": 39021, "R": 2216, "D": 35036, "Objective-C": 4913 } ``` Then in my code I have: ``` public void fetchProjectLanguages(Project project) throws IOException { List<Language> languages = null; String searchUrl = String.format("%s/repos/%s/%s/languages", REPO_API, project.getUser().getLogin(), project.getName()); String jsonString = requests.get(searchUrl); Language lang = gson.fromJson(jsonString, Language.class); languages.add(lang); } ``` My `Language` object is composed of two attributes: `name` and `loc`, and the JSON input itself does not represent a language but a **set** of languages, being each line of the object a language itself. In my example: shell, javascript, ruby, c, c++, python, R, D and Objective-C. How can it do that? I appreciate any help!
2013/06/23
[ "https://Stackoverflow.com/questions/17261801", "https://Stackoverflow.com", "https://Stackoverflow.com/users/413570/" ]
You can use an **[adapter](http://google-gson.googlecode.com/svn/trunk/gson/docs/javadocs/com/google/gson/TypeAdapter.html)**. Say you have: ``` class Language { public String name; public Integer loc; } class Languages { public List<Language> list = new ArrayList<Language>(); } ``` The adapter: ``` class LanguagesTypeAdapter implements JsonSerializer<Languages>, JsonDeserializer<Languages> { public JsonElement serialize(Languages languages, Type typeOfT, JsonSerializationContext context) { JsonObject json = new JsonObject(); for (Language language : languages.list) { json.addProperty(language.name, language.loc); } return json; } public Languages deserialize(JsonElement element, Type typeOfT, JsonDeserializationContext context) throws JsonParseException { JsonObject json = element.getAsJsonObject(); Languages languages = new Languages(); for (Entry<String, JsonElement> entry : json.entrySet()) { String name = entry.getKey(); Integer loc = entry.getValue().getAsInt(); Language language = new Language(); language.name = name; language.loc = loc; languages.list.add(language); } return languages; } } ``` And a sample: ``` GsonBuilder builder = new GsonBuilder(); builder.registerTypeAdapter(Languages.class, new LanguagesTypeAdapter()); Gson gson = builder.create(); Languages languages = gson.fromJson("{"+ "\"Shell\": 13401,"+ "\"JavaScript\": 2693931,"+ "\"Ruby\": 2264,"+ "\"C\": 111534,"+ "\"C++\": 940606,"+ "\"Python\": 39021,"+ "\"R\": 2216,"+ "\"D\": 35036,"+ "\"Objective-C\": 4913"+ "}", Languages.class); String json = gson.toJson(languages); ``` Results : ``` {"Shell":13401,"JavaScript":2693931,"Ruby":2264,"C":111534,"C++":940606,"Python":39021,"R":2216,"D":35036,"Objective-C":4913} ``` Hope this helps...
You can try this out `Map<String, String> map = gson.fromJson(json, new TypeToken<Map<String, String>>() {}.getType());` to get a map of language/value.
4,891
30,215,470
I found this example of code here on stackoverflow and I would like to make the first window close when a new one is opened. So what I would like is when a new window is opened, the main one should be closed automatically. ``` #!/usr/bin/env python import Tkinter as tk from Tkinter import * class windowclass(): def __init__(self,master): self.master = master self.frame = tk.Frame(master) self.lbl = Label(master , text = "Label") self.lbl.pack() self.btn = Button(master , text = "Button" , command = self.command ) self.btn.pack() self.frame.pack() def command(self): print 'Button is pressed!' self.newWindow = tk.Toplevel(self.master) self.app = windowclass1(self.newWindow) class windowclass1(): def __init__(self , master): self.master = master self.frame = tk.Frame(master) master.title("a") self.quitButton = tk.Button(self.frame, text = 'Quit', width = 25 , command = self.close_window) self.quitButton.pack() self.frame.pack() def close_window(self): self.master.destroy() root = Tk() root.title("window") root.geometry("350x50") cls = windowclass(root) root.mainloop() ```
2015/05/13
[ "https://Stackoverflow.com/questions/30215470", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3036295/" ]
You would withdraw the main window, but you have no way to close the program after the button click in the Toplevel, when the main window is still open but doesn't show Also pick one or the other of (but don't use both) ``` import Tkinter as tk from Tkinter import * ``` This opens a 2nd Toplevel which allows you to exit the program ``` import Tkinter as tk class windowclass(): def __init__(self,master): self.master = master ##self.frame = tk.Frame(master) not used self.lbl = tk.Label(master , text = "Label") self.lbl.pack() self.btn = tk.Button(master , text = "Button" , command = self.command ) self.btn.pack() ##self.frame.pack() not used def command(self): print 'Button is pressed!' self.master.withdraw() toplevel=tk.Toplevel(self.master) tk.Button(toplevel, text="Exit the program", command=self.master.quit).pack() self.newWindow = tk.Toplevel(self.master) self.app = windowclass1(self.newWindow) class windowclass1(): def __init__(self , master): """ note that "master" here refers to the TopLevel """ self.master = master self.frame = tk.Frame(master) master.title("a") self.quitButton = tk.Button(self.frame, text = 'Quit this TopLevel', width = 25 , command = self.close_window) self.quitButton.pack() self.frame.pack() def close_window(self): self.master.destroy() ## closes this TopLevel only root = tk.Tk() root.title("window") root.geometry("350x50") cls = windowclass(root) root.mainloop() ```
In your code:

```
self.newWindow = tk.Toplevel(self.master)
```

You are not creating a new window completely independent of your root (or `master`) but rather a child of the Toplevel (`master` in your case). Of course this new `child` toplevel will act independently of the `master` until the `master` gets destroyed, at which point the `child` toplevel will be destroyed as well. To make it completely separate, create a new instance of the Tk object and have it close the `windowclass` window (destroy its object):

```
self.newWindow = Tk()
```

you have two options here:

1 - Either you need to specify in the `windowclass1.close_window()`, that you want to destroy the `cls` object when you create the `windowclass1()` object, this way:

```
def close_window(self):
    cls.master.destroy()
```

2 - Which is the preferred one for generality, is to destroy the `cls` after you create the `windowclass1` object in the `windowclass.command()` method, like this:

```
def command(self):
    print 'Button is pressed!'
    self.newWindow = Tk()
    self.app = windowclass1(self.newWindow)
    self.master.destroy()
```

and make the quitButton in the `__init__()` of windowclass1 like this:

```
self.quitButton = tk.Button(self.frame, text = 'Quit', width = 25, command = self.master.quit)
```

to quit your program completely
4,892
55,036,033
I am calling C++ from Python, using ctypes to connect the two, and I get a core dump at runtime. I have a library called "libfst.so". This is my code.

NGramFST.h

```
#include <iostream>

class NGramFST{
  private:
    static NGramFST* m_Instace;
  public:
    NGramFST(){
    }

    static NGramFST* getInstance() {
      if (m_Instace == NULL){
        m_Instace = new NGramFST();
      }
      return m_Instace;
    }

    double getProbabilityOfWord(std::string word, std::string context) {
      std::cout << "reloading..." << std::endl;
      return 1;
    }
};
```

NGramFST.cpp

```
#include "NGramFST.h"

NGramFST* NGramFST::m_Instace = NULL;

extern "C" {
  double FST_getProbability(std::string word, std::string context){
    return NGramFST::getInstance()->getProbabilityOfWord(word, context);
  }
}
```

And this is my python code.

```
from ctypes import cdll
lib = cdll.LoadLibrary('./libfst.so')

#-------------------------main code------------------------
class FST(object):
    def __init__(self):
        print 'Initializing'

    def getProbabilityOfWord(self, word, context):
        lib.FST_getProbability(word, context)

fst = FST()
print fst.getProbabilityOfWord(c_wchar_p('jack london'), c_wchar_p('my name is'))
```

This is the error

```
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
Aborted (core dumped)
```

I have reviewed it again but cannot find where the problem is.
2019/03/07
[ "https://Stackoverflow.com/questions/55036033", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5798231/" ]
`ctypes` does not understand C++ types (it's not called `c++types`). It cannot handle `std::string`. It wouldn't know that your function expects `std::string` arguments anyway. In order to work with `ctypes`, your library needs a C-compatible interface. `extern "C"` is necessary but not sufficient. The functions need to be actually callable from C. Better yet, use a modern C++/Python binding library such as [pybind11](https://pybind11.readthedocs.io/en/stable/).
It works when I change the python code as below

```
string1 = "my string 1"
string2 = "my string 2"

# create byte objects from the strings
b_string1 = string1.encode('utf-8')
b_string2 = string2.encode('utf-8')

print fst.getProbabilityOfWord(b_string1, b_string2)
```

and change the parameter types in the C++ code as below

```
double FST_getProbability(const char* word, const char* context)
```
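To make the conversion explicit rather than relying on ctypes' defaults, you can also declare `argtypes`/`restype` on the foreign function. The sketch below uses `strlen` from the C runtime as a stand-in, since loading `libfst.so` here would be an assumption about your build; on POSIX systems `ctypes.CDLL(None)` exposes the already-loaded C library:

```python
import ctypes

# On POSIX, CDLL(None) gives access to symbols of the running process,
# which includes libc; this stands in for cdll.LoadLibrary('./libfst.so').
libc = ctypes.CDLL(None)

# Declare the C signature so ctypes converts arguments/results correctly.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

# c_char_p expects bytes, not str -- hence the .encode('utf-8') above.
print(libc.strlen(b"jack london"))  # 11
```

For the real library the analogous (assumed) declarations would be `lib.FST_getProbability.argtypes = [ctypes.c_char_p, ctypes.c_char_p]` and `lib.FST_getProbability.restype = ctypes.c_double`.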
4,893
54,143,731
I am running a long computation on a Jupyter notebook and one of the threads spawn by python (a `pickle.dump` call, I suspect) took all the available RAM making the system clunky. Now, I would like to terminate the single thread. Interrupting the notebook does not work and I would like not to restart the notebook in order to don't lose all the calculations made so far. If I open the Activity Monitor I can clearly see one python process which contains multiple threads. I know I can terminate the whole process, but is there a way to terminate a single thread?
2019/01/11
[ "https://Stackoverflow.com/questions/54143731", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3190076/" ]
I do not think you can kill a thread of a process outside the process itself: As reported in [this answer](https://unix.stackexchange.com/a/1071/210584) > > Threads are an integral part of the process and cannot be killed > outside it. There is the `pthread_kill` function but it only applies in > the context of the `thread itself`. From the docs at the link > > >
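Since a thread cannot be killed from the outside, the usual workaround is cooperative cancellation: the thread itself polls a flag and returns. A minimal sketch with `threading.Event` (a general pattern — it will not help retroactively for a thread that is already running and never checks such a flag, as in the question):

```python
import threading
import time

stop_event = threading.Event()

def worker():
    # the loop checks the flag regularly and returns once it is set
    while not stop_event.is_set():
        time.sleep(0.01)

t = threading.Thread(target=worker)
t.start()

stop_event.set()   # request the stop...
t.join(timeout=5)  # ...and wait for the thread to finish
print(t.is_alive())  # False
```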
Of course the answer is yes, I have demo code FYI (not robust):

```
from threading import Thread
import time


class MyThread(Thread):
    def __init__(self, stop):
        Thread.__init__(self)
        self.stop = stop

    def run(self):
        stop = False
        while not stop:
            print("I'm running")
            time.sleep(1)
            # if the stop flag has been set, the `while` loop exits
            # and the thread finishes
            stop = self.stop


m = MyThread(stop=False)
m.start()
while 1:
    i = input("input S to stop\n")
    if i == "S":
        m.stop = True
        break
    else:
        continue
```
4,894
10,755,833
I am attempting to deploy a [Flask](http://flask.pocoo.org/) app to [Heroku](http://www.heroku.com/). I'm using [Peewee](http://peewee.readthedocs.org/en/latest/) as an ORM for a Postgres database. When I follow the [standard Heroku steps to deploying Flask](https://devcenter.heroku.com/articles/python), the web process crashes after I enter `heroku ps:scale web=1`. Here's what the logs say:

```
Starting process with command `python app.py`
/app/.heroku/venv/lib/python2.7/site-packages/peewee.py:2434: UserWarning: Table for <class 'flask_peewee.auth.User'> ("user") is reserved, please override using Meta.db_table
  cls, _meta.db_table,
Traceback (most recent call last):
  File "app.py", line 167, in <module>
    auth.User.create_table(fail_silently=True)
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 2518, in create_table
    if fail_silently and cls.table_exists():
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 2514, in table_exists
    return cls._meta.db_table in cls._meta.database.get_tables()
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 507, in get_tables
    ORDER BY c.relname""")
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 313, in execute
    cursor = self.get_cursor()
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 310, in get_cursor
    return self.get_conn().cursor()
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 306, in get_conn
    self.connect()
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 296, in connect
    self.__local.conn = self.adapter.connect(self.database, **self.connect_kwargs)
  File "/app/.heroku/venv/lib/python2.7/site-packages/peewee.py", line 199, in connect
    return psycopg2.connect(database=database, **kwargs)
  File "/app/.heroku/venv/lib/python2.7/site-packages/psycopg2/__init__.py", line 179, in connect
    connection_factory=connection_factory, async=async)
psycopg2.OperationalError: could not connect to server: No such file or directory
    Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Process exited with status 1
State changed from starting to crashed
```

I've tried a bunch of different things to get Heroku to allow my app to talk to a Postgres db, but haven't had any luck. Is there an easy way to do this? What do I need to do to configure Flask/Peewee so that I can use a db on Heroku?
2012/05/25
[ "https://Stackoverflow.com/questions/10755833", "https://Stackoverflow.com", "https://Stackoverflow.com/users/135156/" ]
According to the [Peewee docs](http://docs.peewee-orm.com/en/latest/peewee/database.html?highlight=postgres#dynamically-defining-a-database), you don't want to use `Proxy()` unless your local database driver is different than your remote one (i.e. locally, you're using SQLite and remotely you're using Postgres). If, however, you are using Postgres both locally and remotely, it's a much simpler change. In this case, you'll want to only change the connection values (database name, username, password, host, port, etc.) at runtime and do not need to use `Proxy()`.

Peewee has a [built-in URL parser for database connections](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#database-url). Here's how to use it:

```
import os

from peewee import *
from playhouse.db_url import connect

db = connect(os.environ.get('DATABASE_URL'))

class BaseModel(Model):
    class Meta:
        database = db
```

In this example, peewee's `db_url` module reads the environment variable `DATABASE_URL` and parses it to extract the relevant connection variables. It then creates a `PostgresqlDatabase` [object](http://docs.peewee-orm.com/en/latest/peewee/api.html#PostgresqlDatabase) with those values.

Locally, you'll want to set `DATABASE_URL` as an environment variable. You can do this according to the instructions of whatever shell you're using. Or, if you want to use the Heroku toolchain (launch your local server using `heroku local`) you can [add it to a file called `.env` at the top level of your project](https://devcenter.heroku.com/articles/heroku-local#set-up-your-local-environment-variables).

For the remote setup, you'll want to [add your database URL as a remote Heroku environment variable](https://devcenter.heroku.com/articles/config-vars#setting-up-config-vars-for-a-deployed-application). You can do this with the following command:

```
heroku config:set DATABASE_URL=postgresql://myurl
```

You can find that URL by going into Heroku, navigating to your database, and clicking on "database credentials". It's listed under `URI`.
Are you parsing the `DATABASE_URL` environment variable? It will look something like this:

```
postgres://username:password@host:port/database_name
```

So you will want to pull that in and parse it before you open a connection to your database. Depending on how you've declared your database (in your config or next to your wsgi app) it might look like this:

```
import os
import urlparse

urlparse.uses_netloc.append('postgres')
url = urlparse.urlparse(os.environ['DATABASE_URL'])

# for your config
DATABASE = {
    'engine': 'peewee.PostgresqlDatabase',
    'name': url.path[1:],
    'user': url.username,
    'password': url.password,
    'host': url.hostname,
    'port': url.port,
}
```

See the notes here: <https://devcenter.heroku.com/articles/django>
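For reference, here is the same parsing on Python 3 with `urllib.parse` (a sketch of my own; the example URL is made up, and note it also pulls out the `user` field, which a Postgres connection generally needs):

```python
import urllib.parse

# Stand-in for os.environ['DATABASE_URL']
url = urllib.parse.urlparse('postgres://someuser:secret@host.example:5432/mydb')

DATABASE = {
    'engine': 'peewee.PostgresqlDatabase',
    'name': url.path[1:],       # strip the leading '/'
    'user': url.username,
    'password': url.password,
    'host': url.hostname,
    'port': url.port,
}

assert DATABASE['name'] == 'mydb'
assert DATABASE['user'] == 'someuser'
assert DATABASE['port'] == 5432
```

Python 3's `urlparse` splits the netloc for any scheme, so the `uses_netloc.append('postgres')` line from the Python 2 version is no longer needed.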
4,895
46,125,105
I have done the following to get json file data into redis using this python script:

```
import json
import redis

r = redis.StrictRedis(host='127.0.0.1', port=6379, db=1)

with open('products.json') as data_file:
    test_data = json.load(data_file)

r.set('test_json', test_data)
```

When I use the **get** command from redis-cli (`get test_json`) I get **nil** back. I must be using the wrong command? Please help my understanding on this.
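Two likely culprits here (my guesses, not a confirmed answer): Redis stores strings/bytes rather than Python dicts, so the data should go through `json.dumps` before `SET`; and `redis-cli` connects to db 0 by default while the script writes to `db=1`, so run `SELECT 1` in the CLI first. A sketch of the serialization half, with the actual Redis calls left as comments since they need a running server:

```python
import json

# Stand-in for the contents of products.json
test_data = {"products": [{"id": 1, "name": "widget"}]}

# Redis values are strings/bytes, so serialize before SET
payload = json.dumps(test_data)
assert isinstance(payload, str)

# ...and deserialize after GET
restored = json.loads(payload)
assert restored == test_data

# With a real client (not executed here):
# r = redis.StrictRedis(host='127.0.0.1', port=6379, db=1)
# r.set('test_json', payload)
# json.loads(r.get('test_json'))
# In redis-cli, run `SELECT 1` before `get test_json`,
# since the script above writes to db=1, not the default db 0.
```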
2017/09/08
[ "https://Stackoverflow.com/questions/46125105", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5526217/" ]
Ways of circumventing your company's policy, from best to worst:

* Load mod\_cgi anyway.
* Load mod\_fcgi, and convert your CGI script into a FastCGI daemon. It's a lot of work, but you can get faster code out of it!
* Load your own module that does exactly the same thing as mod\_cgi. mod\_cgi is open source, so it should be easy to just rename it.
* Load mod\_fcgi, and write a FastCGI daemon that executes your script.
* Install a second Apache web server with mod\_cgi enabled. Link to it directly or use mod\_proxy on the original server.
* Write your own web server. Link to it directly or use mod\_proxy on the original server.
[Last time you asked this question](https://stackoverflow.com/questions/45864198/cgi-scripts-to-mod-perl), you talked about using `mod_perl` instead. The standard way to run a CGI program unchanged (for some value of "unchanged") under `mod_perl` is by using [ModPerl::Registry](https://metacpan.org/pod/ModPerl::Registry). Did you try that? How did it go?

Another alternative would be to convert your programs to use [PSGI](http://plackperl.org/). You could try using [Plack::App::WrapCGI](https://metacpan.org/pod/Plack::App::WrapCGI) or [CGI::Emulate::PSGI](https://metacpan.org/pod/CGI::Emulate::PSGI). Using Plack would free you from any deployment restrictions. You could run the code under `mod_perl` or even as a separate service behind a proxy server.

But I can't help marvelling at how ridiculous this whole situation is. Your company has CGI programs that it (presumably) relies on to run part of its business. And they've just decided to turn off support for them. You need to find out why this decision has been made and try to buy some time in order to convert to an alternative technology.
4,898
73,416,533
I am new to python and wanted to know if there are best approaches for solving this problem. I have a string template which I want to compare with a list of strings and, if any difference is found, create a dictionary out of it.

```py
template = "Hi {name}, how are you? Are you living in {location} currently? Can you confirm if following data is correct - {list_of_data}"

list_of_strings = [
    "Hi John, how are you? Are you living in California currently? Can you confirm if following data is correct - 123, 456, 345",
    "Hi Steve, how are you? Are you living in New York currently? Can you confirm if following data is correct - 6542"
]
```

```
expected = [
    {"name": "John", "location": "California", "list_of_data": [123, 456, 345]},
    {"name": "Steve", "location": "New York", "list_of_data": [6542]},
]
```

I tried many different approaches but ended up stuck in some random logic, and the solutions did not look generic enough to support any string with the template. Any help is highly appreciated.
2022/08/19
[ "https://Stackoverflow.com/questions/73416533", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12722706/" ]
You can use a regular expression:

```
template = "Hi {name}, how are you? Are you living in {location} currently? Can you confirm if following data is correct - {list_of_data}"

list_of_strings = [
    "Hi John, how are you? Are you living in California currently? Can you confirm if following data is correct - 123, 456, 345",
    "Hi Steve, how are you? Are you living in New York currently? Can you confirm if following data is correct - 6542"
]

import re

expected = []
for s in list_of_strings:
    r_ = re.search("Hi (.+)?, how are you\? Are you living in (.+?) currently\? Can you confirm if following data is correct - (.+)", s)
    res = {}
    res["name"] = r_.group(1)
    res["location"] = r_.group(2)
    res["list_of_data"] = list(map(int, (r_.group(3).split(","))))
    expected.append(res)

print(expected)
```

It will produce the following output:

```
[{'name': 'John', 'location': 'California', 'list_of_data': [123, 456, 345]}, {'name': 'Steve', 'location': 'New York', 'list_of_data': [6542]}]
```

It should produce the expected output — please check for minor bugs, if any.
I guess using named regular expression groups is a more elegant way to solve this problem, for example:

```
import re

list_of_strings = [
    "Hi John, how are you? Are you living in California currently? Can you confirm if following data is correct - 123, 456, 345",
    "Hi Steve, how are you? Are you living in New York currently? Can you confirm if following data is correct - 6542"
]

pattern = re.compile(
    r"Hi (?P<name>(.+)?), how are you\? "
    r"Are you living in (?P<location>(.+)) currently\? "
    r"Can you confirm if following data is correct - (?P<list_of_data>(.+))"
)

result = []
for string in list_of_strings:
    if match := pattern.match(string):
        obj = match.groupdict()
        obj['list_of_data'] = list(map(int, obj['list_of_data'].split(',')))
        result.append(obj)

print(result)
```

**Output:**

```
[
    {'name': 'John', 'location': 'California', 'list_of_data': [123, 456, 345]},
    {'name': 'Steve', 'location': 'New York', 'list_of_data': [6542]}
]
```
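For a fully generic variant (a sketch of my own, not part of the original answer), the regex does not have to be hand-written at all: it can be built from the template itself by escaping the literal text and rewriting each `{field}` placeholder as a named group:

```python
import re

def template_to_regex(template):
    # re.escape protects the literal text; each escaped \{field\}
    # placeholder is then rewritten as a named capturing group
    pattern = re.sub(r"\\\{(\w+)\\\}", r"(?P<\1>.+)", re.escape(template))
    return re.compile(pattern)

template = ("Hi {name}, how are you? Are you living in {location} currently? "
            "Can you confirm if following data is correct - {list_of_data}")
rx = template_to_regex(template)

m = rx.match("Hi John, how are you? Are you living in California currently? "
             "Can you confirm if following data is correct - 123, 456, 345")
fields = m.groupdict()
fields["list_of_data"] = [int(x) for x in fields["list_of_data"].split(",")]

assert fields == {"name": "John", "location": "California",
                  "list_of_data": [123, 456, 345]}
```

This works for any template with `{word}`-style placeholders separated by literal text; converting `list_of_data` to ints stays a post-processing step, since the template alone doesn't say which fields are numeric.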
4,899
16,375,781
I am trying a simple nested for loop in python to scan a thresholded image to detect the white pixels and store their location. The problem is that although the array it is reading from is only 160\*120 (19200) it still takes about 6s to execute. My code is as follows and any help or guidance would be greatly appreciated:

```
im = Image.open('PYGAMEPIC')
r, g, b = np.array(im).T

x = np.zeros_like(b)
height = len(x[0])
width = len(x)

x[r > 120] = 255
x[g > 100] = 0
x[b > 100] = 0

row_array = np.zeros(shape = (19200,1))
col_array = np.zeros(shape = (19200,1))

z = 0
for i in range (0,width-1):
    for j in range (0,height-1):
        if x[i][j] == 255:
            z = z+1
            row_array[z] = i
            col_array[z] = j
```
2013/05/04
[ "https://Stackoverflow.com/questions/16375781", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2274632/" ]
First, it shouldn't take 6 seconds. Trying your code on a 160x120 image takes ~0.2 s for me. That said, for good `numpy` performance, you generally want to avoid loops. Sometimes it's simpler to vectorize along all except the smallest axis and loop along that, but when possible you should try to do everything at once. This usually makes things both faster (pushing the loops down to C) and easier.

Your for loop itself seems a little strange to me -- you seem to have an off-by-one error both in terms of where you're starting storing the results (your first value is placed in `z=1`, not `z=0`) and in terms of how far you're looking (`range(0, x-1)` doesn't include `x-1`, so you're missing the last row/column -- probably you want `range(x)`.)

If all you want is the indices where `r > 120` but neither `g > 100` nor `b > 100`, there are much simpler approaches. We can create boolean arrays. For example, first we can make some dummy data:

```
>>> r = np.random.randint(0, 255, size=(8,8))
>>> g = np.random.randint(0, 255, size=(8,8))
>>> b = np.random.randint(0, 255, size=(8,8))
```

Then we can find the places where our condition is met:

```
>>> (r > 120) & ~(g > 100) & ~(b > 100)
array([[False,  True, False, False, False, False, False, False],
       [False, False,  True, False, False, False, False, False],
       [False,  True, False, False, False, False, False, False],
       [False, False, False,  True, False,  True, False, False],
       [False, False, False, False, False, False, False, False],
       [False,  True, False, False, False, False, False, False],
       [False, False, False, False, False, False, False, False],
       [False, False, False, False, False, False, False, False]], dtype=bool)
```

Then we can use `np.where` to get the coordinates:

```
>>> r_idx, c_idx = np.where((r > 120) & ~(g > 100) & ~(b > 100))
>>> r_idx
array([0, 1, 2, 3, 3, 5])
>>> c_idx
array([1, 2, 1, 3, 5, 1])
```

And we can sanity-check these by indexing back into `r`, `g`, and `b`:

```
>>> r[r_idx, c_idx]
array([166, 175, 155, 150, 241, 222])
>>> g[r_idx, c_idx]
array([ 6, 29, 19, 62, 85, 31])
>>> b[r_idx, c_idx]
array([67, 97, 30,  4, 50, 71])
```
It looks like you're on python 2.x (2.6 or 2.7). In python 2, every time you call `range` you're creating a list with that many elements. (In this case, you're creating 1 list of `width - 1` length, and then `width - 1` lists of `height - 1` length.) One way to speed this up is to make one list ahead of time and reuse it each time. For example:

```
height_indices = range(0, height - 1)
for i in range(0, width - 1):
    for j in height_indices:
        # etc
```

To prevent python having to create either list, you can use `xrange` to return a generator, which will save memory and time, e.g.:

```
for i in xrange(0, width - 1):
    for j in xrange(0, height - 1):
        # etc.
```

You should also look into using the `filter` function, which takes a function and executes it. It will return a list of items returned by the function, but if all you're doing is incrementing a global counter and modifying global arrays, you don't have to return anything or concern yourself with the list returned.
4,900
11,705,114
I am reading a bunch of strings from mysql database using python, and after some processing, writing them to a CSV file. However I see some totally junk characters appearing in the csv file. For example when I open the csv using gvim, I see characters like `<92>`,`<89>`, `<94>` etc. Any thoughts? I tried doing string.encode('utf-8') before writing to csv but that gave an error that `UnicodeDecodeError: 'ascii' codec can't decode byte 0x93 in position 905: ordinal not in range(128)`
2012/07/28
[ "https://Stackoverflow.com/questions/11705114", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1546936/" ]
I would try a character class regex similar to ``` "[.!?\\-]" ``` Add whatever characters you wish to match inside the `[]`s. Be careful to escape any characters that might have a special meaning to the regex parser. You then have to iterate through the matches by using `Matcher.find()` until it returns false.
I would try

> `\W`

It matches any non-word character. This includes spaces and punctuation, but not underscores. It's equivalent to `[^A-Za-z0-9_]`.
4,901
49,557,625
For my exercise I must, with selenium and Chrome webdriver with python 2.7, click on the link:

> https://test.com/console/remote.pl

Below is the structure of the html file:

```
<div class="leftside" >
  <span class="spacer spacer-20"></span>
  <a href="https://test.com" title="Retour à l'accueil"><img class="logo" src="https://img.test.com/frontoffice.png" /></a>
  <span class="spacer spacer-20"></span>
  <a href="https://test.com/console/index.pl" class="menu selected"><img src="https://img.test.com/icons/fichiers.png" alt="" /> Mes fichiers</a>
  <a href="https://test.com/console/ftpmode.pl" class="menu"><img src="https://img.test.com/icons/publication.png" alt="" /> Gestion FTP</a>
  <a href="https://test.com/console/remote.pl" class="menu"><img src="https://img.test.com/icons/telechargement-de-liens.png" alt="" /> Remote Upload</a>
  <a href="https://test.com/console/details.pl" class="menu"><img src="https://img.test.com/icons/profil.png" alt="" /> Mon profil</a>
  <a href="https://test.com/console/params.pl" class="menu"><img src="https://img.test.com/icons/parametres.png" alt="" /> Paramètres</a>
  <a href="https://test.com/console/abo.pl" class="menu"><img src="https://img.test.com/icons/abonnement.png" alt="" /> Services Payants</a>
  <a href="https://test.com/console/aff.pl" class="menu"><img src="https://img.test.com/icons/af.png" alt="" /> Af</a>
  <a href="https://test.com/console/com.pl" class="menu"><img src="https://img.test.com/icons/v.png" alt="" /> V</a>
  <a href="https://test.com/console/logs.pl" class="menu"><img src="https://img.test.com/icons/logs.png" alt="" /> Jour</a>
  <a href="https://test.com/logout.pl" class="menu"><img src="https://img.test.com/icons/deconnexion.png" alt="" /> Déconnexion</a>
  <span class="spacer spacer-20"></span>
  <a href="#" id="msmall"><img src="https://img.test.com/btns/reverse.png"></a>
</div>
```

I use `driver.find_element_by_xpath()` as explained [here](https://stackoverflow.com/questions/41602539/click-on-element-in-dropdown-with-selenium-and-python "Click on element"):

```
driver.find_element_by_xpath('//[@id="leftside"]/a[3]').click()
```

But I have this error message:

> SyntaxError: Failed to execute 'evaluate' on 'Document': The string '//[@id="leftside"]/a[3]' is not a valid XPath expression.

Who can help me please?

Regards
2018/03/29
[ "https://Stackoverflow.com/questions/49557625", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4200256/" ]
I think you were pretty close. But as it is a `<div>` tag with the *class* attribute set to **leftside**, you have to be specific. Also, the `<a[3]>` tag won't be an immediate child of the `//div[@class='leftside']` node but a descendant, so instead of `/` you have to use `//`, as follows:

```
driver.find_element_by_xpath("//div[@class='leftside']//a[3]").click()
```
The issue is that you need to have a tag name as well. So you should either use

```
driver.find_element_by_xpath('//*[@id="leftside"]/a[3]').click()
```

when you don't care which tag it is, or use the actual tag if you do care:

```
driver.find_element_by_xpath('//div[@id="leftside"]/a[3]').click()
```

I would suggest using the second one, with the `div` tag: XPath lookups are slow in general, and using a `*` may be slightly slower still.
4,906
49,544,207
I am using python 2.7. I am looking to calculate compounding returns from daily returns, and my current code is pretty slow at calculating returns, so I was looking for areas where I could gain efficiency.

What I want to do is pass two dates and a security into a price table and calculate the compounding returns between those dates using the given security. I have a price table (`prices_df`):

```
security_id  px_last  asof
1            3.055    2015-01-05
1            3.360    2015-01-06
1            3.315    2015-01-07
1            3.245    2015-01-08
1            3.185    2015-01-09
```

I also have a table with two dates and a security (`events_df`):

```
asof         disclosed_on          security_ref_id
2015-01-05   2015-01-09 16:31:00   1
2018-03-22   2018-03-27 16:33:00   3616
2017-08-03   2018-03-27 12:13:00   2591
2018-03-22   2018-03-27 11:33:00   3615
2018-03-22   2018-03-27 10:51:00   3615
```

Using the two dates in this table, I want to use the price table to calculate the returns.

The two functions I am using:

```
import pandas as pd

# compounds returns
def cum_rtrn(df):
    df_out = df.add(1).cumprod()
    df_out['return'].iat[0] = 1
    return df_out

# calculates compound returns from prices between two dates
def calc_comp_returns(price_df, start_date=None, end_date=None, security=None):
    df = price_df[price_df.security_id == security]
    df = df.set_index(['asof'])
    df = df.loc[start_date:end_date]
    df['return'] = df.px_last.pct_change()
    df = df[['return']]
    df = cum_rtrn(df)
    return df.iloc[-1][0]
```

I then iterate over the `events_df` with `.iterrows`, passing the `calc_comp_returns` function each time. However, this is a very slow process as I have 10K+ iterations, so I am looking for improvements. The solution does not need to be based in `pandas`.

```
# example of how the function is called
start = datetime.datetime.strptime('2015-01-05', '%Y-%m-%d').date()
end = datetime.datetime.strptime('2015-01-09', '%Y-%m-%d').date()
calc_comp_returns(prices_df, start_date=start, end_date=end, security=1)
```
2018/03/28
[ "https://Stackoverflow.com/questions/49544207", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5293603/" ]
Here is a solution (~100x faster on my computer with some dummy data).

```
import numpy as np

price_df = price_df.set_index('asof')

def calc_comp_returns_fast(price_df, start_date, end_date, security):
    rows = price_df[price_df.security_id == security].loc[start_date:end_date]
    changes = rows.px_last.pct_change()
    comp_rtrn = np.prod(changes + 1)
    return comp_rtrn
```

Or, as a one-liner:

```
def calc_comp_returns_fast(price_df, start_date, end_date, security):
    return np.prod(price_df[price_df.security_id == security].loc[start_date:end_date].px_last.pct_change() + 1)
```

Note that I call the `set_index` method beforehand; it only needs to be done once on the entire `price_df` dataframe.

It is faster because it does not recreate DataFrames at each step. In your code, `df` is overwritten at almost every line by a new dataframe. Both the init process and the garbage collection (erasing unused data from memory) take a lot of time.

In my code, `rows` is a slice or "view" of the original data; it does not need to copy or re-init any object. Also, I use the numpy product function directly, which is the same as taking the last cumprod element (pandas uses `np.cumprod` internally anyway).

Suggestion: if you are using IPython, Jupyter or Spyder, you can use the magic `%prun calc_comp_returns(...)` to see which part takes the most time. I ran it on your code, and it was the garbage collector, using more than 50% of the total running time!
I'm not very familiar with pandas, but I'll give this a shot.

Problem with your solution
==========================

Your solution currently does a huge amount of unnecessary calculation. This is mostly due to the line:

```
df['return'] = df.px_last.pct_change()
```

This line is actually calculating the percent change for *every* date between start and end. Just fixing this issue should give you a huge speed up. You should just get the start price and the end price and compare the two. The prices in between those two are completely irrelevant to your calculations.

Again, my familiarity with pandas is nil, but you should do something like this instead:

```
def calc_comp_returns(price_df, start_date=None, end_date=None, security=None):
    df = price_df[price_df.security_id == security]
    df = df.set_index(['asof'])
    df = df.loc[start_date:end_date]
    # the cumulative product of (1 + daily return) telescopes to end/start
    return df['px_last'].iloc[-1] / df['px_last'].iloc[0]
```

Remember that this code relies on the fact that `price_df` is sorted by date, so be careful to make sure you only pass `calc_comp_returns` a date-sorted `price_df`.
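A quick plain-Python sanity check (my own addition) that the asker's cumulative product of `(1 + pct_change)` telescopes to `px_last[-1] / px_last[0]`, using the prices from the question:

```python
prices = [3.055, 3.360, 3.315, 3.245, 3.185]  # px_last for security 1

# daily simple returns: p[i+1] / p[i] - 1
daily = [b / a - 1 for a, b in zip(prices, prices[1:])]

compounded = 1.0
for r in daily:
    compounded *= 1 + r

# prod(p[i+1] / p[i]) telescopes: every intermediate price cancels
assert abs(compounded - prices[-1] / prices[0]) < 1e-9
```

So only the first and last prices in the window matter, which is why skipping the per-day `pct_change` gives the same answer much faster.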
4,908
51,454,694
Azure Cognitive Services OCR has a demo on the site <https://azure.microsoft.com/en-us/services/cognitive-services/computer-vision/#text> On the website, I get pretty accurate results. However, when I try to call the same using the code mentioned in their documentation, I get different and poor results. <https://learn.microsoft.com/en-us/azure/cognitive-services/Computer-vision/quickstarts/python-print-text> I'm assuming, the version available on the site is the preview one. <https://westus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/587f2c6a154055056008f200> How can I call that version in Python? Thank you for help!
2018/07/21
[ "https://Stackoverflow.com/questions/51454694", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10093162/" ]
There is now an official Microsoft package for that:

* <https://pypi.org/project/azure-cognitiveservices-vision-computervision/>

With samples:

* <https://github.com/Azure-Samples/cognitive-services-python-sdk-samples/blob/master/samples/vision/computer_vision_samples.py>

Create an issue on GitHub if you have troubles :)

* <https://github.com/Azure/azure-sdk-for-python/issues>

(I work at MS in the Azure SDK team, which releases this SDK)
There are two different APIs for recognizing text. The demo page is using the new way, but with the caveat that it only works for English as of this writing. The example code you should be looking at is [here](https://learn.microsoft.com/en-us/azure/cognitive-services/Computer-vision/quickstarts/python-hand-text). If you want to recognize printed text, you will tweak the `params`. It'll look something like this:

```
region = 'westcentralus'
request_url = 'https://{region}.api.cognitive.microsoft.com/vision/v2.0/recognizeText'.format(region=region)

headers = {'Ocp-Apim-Subscription-Key': subscription_key}
params = {'mode': 'Printed'}
data = {'url': image_url}
response = requests.post(request_url, headers=headers, params=params, json=data)
```

You will normally get an HTTP 202 response, not the recognition result. You will need to fetch the response from the operation location:

```
operation_url = response.headers["Operation-Location"]
operation_response = requests.get(operation_url, headers=headers)
```

Note that you'll need to check the status of the `operation_response` to make sure the task has completed:

```
if operation_response.json()[u'status'] == 'Succeeded':
    ...
```
4,910
45,836,036
Comparing two python lists up to n-2 elements:

```py
list1 = [1,2,3,'a','b']
list2 = [1,2,3,'c','d']

list1 == list2 => True
```

Excluding the last 2 elements, the 2 lists are the same. I am able to do it by comparing each and every element of the 2 lists, but is there a more efficient way to do this?
2017/08/23
[ "https://Stackoverflow.com/questions/45836036", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7016928/" ]
Return `False` after the first pair `(a, b)` where `a != b`:

```
def compare(list1, list2):
    for a, b in zip(list1[:-2], list2[:-2]):
        if a != b:
            return False
    return True
```
This way:

```
list1 = [1,2,3,'a','b']
list2 = [1,2,3,'c','d']

list1[:-2] == list2[:-2] => True
```
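A generalized form of the slice comparison (my own sketch) for ignoring the last `n` elements — with a guard, since `lst[:-0]` is an empty slice rather than the whole list:

```python
def equal_except_last(list1, list2, n=2):
    """Compare two lists, ignoring the last n elements of each."""
    if n == 0:
        # lst[:-0] would be [], so handle "ignore nothing" explicitly
        return list1 == list2
    return list1[:-n] == list2[:-n]

assert equal_except_last([1, 2, 3, 'a', 'b'], [1, 2, 3, 'c', 'd'])
assert not equal_except_last([1, 9, 3, 'a', 'b'], [1, 2, 3, 'c', 'd'])
```

The slice version is also efficient in practice: `==` on lists short-circuits at the first unequal element, just like the explicit loop.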
4,911
746,873
I am a C/C++ programmer with more than 10 years of experience. I also know python and perl, but I've never used these languages for web development. Now, for various reasons, I want to move into the web development realm, and as part of that transition I have to learn css, javascript, (x)html etc. So I need advice on good sources of information for such topics. For now I don't want to read lengthy tutorials; I want something quick and dirty, something to start with.
2009/04/14
[ "https://Stackoverflow.com/questions/746873", "https://Stackoverflow.com", "https://Stackoverflow.com/users/90593/" ]
For CSS, how about CSS in a Nutshell, by O'Reilly? Nice and thin.
[W3Schools](http://www.w3schools.com/) is a good place to start. However, you might also benefit by poking around the [Mozilla Developer Centre](https://developer.mozilla.org/En) (MDC), which has lots of information about HTML, CSS, and JavaScript. I now almost exclusively use the MDC for looking things up—it has lots of examples, lots of detail (if you want to go into it), and it shows you many different things that you can do with the item you're looking up. Also, for JavaScript, after you've learnt the basics ("[A re-introduction to JavaScript](https://developer.mozilla.org/En/A_re-introduction_to_JavaScript)" on the MDC is a good place to start), Douglas Crockford's [JavaScript page](http://javascript.crockford.com/) and John Resig's "[Learning Advanced JavaScript](http://ejohn.org/apps/learn/)" make for excellent reading. Steve
4,917