Columns (name · type · length range):

  rem             string   1 – 226k
  add             string   0 – 227k
  context         string   6 – 326k
  meta            string   143 – 403
  input_ids       list     256 – 256
  attention_mask  list     256 – 256
  labels          list     128 – 128

Each example below lists its fields in this order.
string = _unicode(string) mapping = htmlentitydefs.name2codepoint for key in mapping: string = string.replace("&%s;" %key, unichr(mapping[key])) return string
string = _unicode(string) mapping = htmlentitydefs.name2codepoint for key in mapping: string = string.replace("&%s;" %key, unichr(mapping[key])) return string
def _unescape_htmlentity(string): string = _unicode(string) mapping = htmlentitydefs.name2codepoint for key in mapping: string = string.replace("&%s;" %key, unichr(mapping[key])) return string
312230e30b9a32836e57d3014b7461a466a3dbed /local1/tlutelli/issta_data/temp/all_python//python/2009_temp/2009/9926/312230e30b9a32836e57d3014b7461a466a3dbed/pylast.py
[input_ids / attention_mask / labels: truncated token-id arrays omitted]
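The row above fixes `_unescape_htmlentity`, which loops over `htmlentitydefs.name2codepoint` doing repeated `str.replace` calls (a Python 2 idiom). As a point of comparison only — not part of the dataset — Python 3's standard library collapses the whole loop into one call:

```python
import html

def unescape_htmlentity(s: str) -> str:
    # html.unescape resolves all named and numeric character
    # references in one pass, replacing the manual
    # htmlentitydefs.name2codepoint loop in the snippet above.
    return html.unescape(s)

print(unescape_htmlentity("&lt;b&gt;caf&eacute;&lt;/b&gt;"))  # <b>café</b>
```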
for all points :math:`$i$` in cluster u and :math:`$j$` in cluster :math:`$v$`. This is also known by the Farthest Point
for all points :math:`i` in cluster u and :math:`j` in cluster :math:`v`. This is also known by the Farthest Point
def linkage(y, method='single', metric='euclidean'): """ Performs hierarchical/agglomerative clustering on the condensed distance matrix y. y must be a {n \choose 2} sized vector where n is the number of original observations paired in the distance matrix. The behavior of this function is very similar to the MATLAB(TM) linkage function. A 4 by :math:`$(n-1)$` matrix ``Z`` is returned. At the :math:`$i$`th iteration, clusters with indices ``Z[i, 0]`` and ``Z[i, 1]`` are combined to form cluster :math:`$n + i$`. A cluster with an index less than :math:`$n$` corresponds to one of the :math:`$n$` original observations. The distance between clusters ``Z[i, 0]`` and ``Z[i, 1]`` is given by ``Z[i, 2]``. The fourth value ``Z[i, 3]`` represents the number of original observations in the newly formed cluster. The following linkage methods are used to compute the distance :math:`$d(s, t)$` between two clusters :math:`$s$` and :math:`$t$`. The algorithm begins with a forest of clusters that have yet to be used in the hierarchy being formed. When two clusters :math:`$s$` and :math:`$t$` from this forest are combined into a single cluster :math:`$u$`, :math:`$s$` and :math:`$t$` are removed from the forest, and :math:`$u$` is added to the forest. When only one cluster remains in the forest, the algorithm stops, and this cluster becomes the root. A distance matrix is maintained at each iteration. The ``d[i,j]`` entry corresponds to the distance between cluster :math:`$i$` and :math:`$j$` in the original forest. At each iteration, the algorithm must update the distance matrix to reflect the distance of the newly formed cluster u with the remaining clusters in the forest. Suppose there are :math:`$|u|$` original observations :math:`$u[0], \ldots, u[|u|-1]$` in cluster :math:`$u$` and :math:`$|v|$` original objects :math:`$v[0], \ldots, v[|v|-1]$` in cluster :math:`$v$`. Recall :math:`$s$` and :math:`$t$` are combined to form cluster :math:`$u$`. 
Let :math:`$v$` be any remaining cluster in the forest that is not :math:`$u$`. :Parameters: Q : ndarray A condensed or redundant distance matrix. A condensed distance matrix is a flat array containing the upper triangular of the distance matrix. This is the form that ``pdist`` returns. Alternatively, a collection of :math:`$m$` observation vectors in n dimensions may be passed as a :math:`$m$` by :math:`$n$` array. method : string The linkage algorithm to use. See the ``Linkage Methods`` section below for full descriptions. metric : string The distance metric to use. See the ``distance.pdist`` function for a list of valid distance metrics. Linkage Methods --------------- The following are methods for calculating the distance between the newly formed cluster :math:`$u$` and each :math:`$v$`. * method=``single`` assigns .. math: d(u,v) = \min(dist(u[i],v[j])) for all points :math:`$i$` in cluster :math:`$u$` and :math:`$j$` in cluster :math:`$v$`. This is also known as the Nearest Point Algorithm. * method=``complete`` assigns .. math: d(u, v) = \max(dist(u[i],v[j])) for all points :math:`$i$` in cluster u and :math:`$j$` in cluster :math:`$v$`. This is also known by the Farthest Point Algorithm or Voor Hees Algorithm. * method=``average`` assigns .. math: d(u,v) = \sum_{ij} \frac{d(u[i], v[j])} {(|u|*|v|) for all points :math:`$i$` and :math:`$j$` where :math:`$|u|$` and :math:`$|v|$` are the cardinalities of clusters :math:`$u$` and :math:`$v$`, respectively. This is also called the UPGMA algorithm. This is called UPGMA. * method='weighted' assigns .. math: d(u,v) = (dist(s,v) + dist(t,v))/2 where cluster u was formed with cluster s and t and v is a remaining cluster in the forest. (also called WPGMA) * method='centroid' assigns .. math: dist(s,t) = euclid(c_s, c_t) where :math:`$c_s$` and :math:`$c_t$` are the centroids of clusters :math:`$s$` and :math:`$t$`, respectively. 
When two clusters :math:`$s$` and :math:`$t$` are combined into a new cluster :math:`$u$`, the new centroid is computed over all the original objects in clusters :math:`$s$` and :math:`$t$`. The distance then becomes the Euclidean distance between the centroid of :math:`$u$` and the centroid of a remaining cluster :math:`$v$` in the forest. This is also known as the UPGMC algorithm. * method='median' assigns math:`$d(s,t)$` like the ``centroid`` method. When two clusters s and t are combined into a new cluster :math:`$u$`, the average of centroids s and t give the new centroid :math:`$u$`. This is also known as the WPGMC algorithm. * method='ward' uses the Ward variance minimization algorithm. The new entry :math:`$d(u,v)$` is computed as follows, .. math: d(u,v) = \sqrt{\frac{|v|+|s|} {T}d(v,s)^2 + \frac{|v|+|t|} {T}d(v,t)^2 + \frac{|v|} {T}d(s,t)^2} where :math:`$u$` is the newly joined cluster consisting of clusters :math:`$s$` and :math:`$t$`, :math:`$v$` is an unused cluster in the forest, :math:`$T=|v|+|s|+|t|$`, and :math:`$|*|$` is the cardinality of its argument. This is also known as the incremental algorithm. Warning ------- When the minimum distance pair in the forest is chosen, there may be two or more pairs with the same minimum distance. This implementation may chose a different minimum than the MATLAB(TM) version. """ if not isinstance(method, str): raise TypeError("Argument 'method' must be a string.") y = _convert_to_double(np.asarray(y, order='c')) s = y.shape if len(s) == 1: distance.is_valid_y(y, throw=True, name='y') d = distance.num_obs_y(y) if method not in _cpy_non_euclid_methods.keys(): raise ValueError("Valid methods when the raw observations are omitted are 'single', 'complete', 'weighted', and 'average'.") # Since the C code does not support striding using strides. 
[y] = _copy_arrays_if_base_present([y]) Z = np.zeros((d - 1, 4)) _hierarchy_wrap.linkage_wrap(y, Z, int(d), \ int(_cpy_non_euclid_methods[method])) elif len(s) == 2: X = y n = s[0] m = s[1] if method not in _cpy_linkage_methods: raise ValueError('Invalid method: %s' % method) if method in _cpy_non_euclid_methods.keys(): dm = distance.pdist(X, metric) Z = np.zeros((n - 1, 4)) _hierarchy_wrap.linkage_wrap(dm, Z, n, \ int(_cpy_non_euclid_methods[method])) elif method in _cpy_euclid_methods.keys(): if metric != 'euclidean': raise ValueError('Method %s requires the distance metric to be euclidean' % s) dm = distance.pdist(X, metric) Z = np.zeros((n - 1, 4)) _hierarchy_wrap.linkage_euclid_wrap(dm, Z, X, m, n, int(_cpy_euclid_methods[method])) return Z
8172e9e1737f1f6d84d1b06fa46e587e764a1a0f /local1/tlutelli/issta_data/temp/all_python//python/2008_temp/2008/5882/8172e9e1737f1f6d84d1b06fa46e587e764a1a0f/hierarchy.py
[input_ids / attention_mask / labels: truncated token-id arrays omitted]
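The `linkage` docstring in the row above defines single linkage as d(u,v) = min(dist(u[i], v[j])) and complete linkage as d(u,v) = max(dist(u[i], v[j])). A minimal pure-Python sketch of just those two formulas (helper names are my own, not from the SciPy source):

```python
import math

def dist(p, q):
    # Euclidean distance between two 2-D points
    return math.hypot(p[0] - q[0], p[1] - q[1])

def single_linkage(u, v):
    # d(u, v) = min(dist(u[i], v[j])) -- the Nearest Point Algorithm
    return min(dist(p, q) for p in u for q in v)

def complete_linkage(u, v):
    # d(u, v) = max(dist(u[i], v[j])) -- the Farthest Point Algorithm
    return max(dist(p, q) for p in u for q in v)

u = [(0.0, 0.0), (1.0, 0.0)]
v = [(4.0, 0.0), (6.0, 0.0)]
print(single_linkage(u, v))    # 3.0
print(complete_linkage(u, v))  # 6.0
```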
self.input = windows.CommandInput(1, self.width, self.height-1, 0, self.default_help_message, "", self.reset_help_message, self.execute_slash_command)
self.input = windows.CommandInput("", self.reset_help_message, self.execute_slash_command) self.input.resize(1, self.width, self.height-1, 0, self.core.stdscr)
def on_slash(self): """ '/' is pressed, we enter "input mode" """ curses.curs_set(1) self.input = windows.CommandInput(1, self.width, self.height-1, 0, self.default_help_message, "", self.reset_help_message, self.execute_slash_command) self.input.do_command("/") # we add the slash
d803d5f95a04a7da772fc4bd218aef4e17b6053a /local1/tlutelli/issta_data/temp/all_python//python/2010_temp/2010/9814/d803d5f95a04a7da772fc4bd218aef4e17b6053a/tab.py
[input_ids / attention_mask / labels: truncated token-id arrays omitted]
try: h.runFinish(self.currentRun) except: pass
if hasattr(h, "runFinish"): h.runFinish(self.currentRun)
def processMessage(self,message): """message dispatcher""" if isinstance(message,InitData): for h in self.handlers: try: h.processInitData(message) except: pass elif isinstance(message,Telemetry): if not self.runInProgress: self.runInProgress = True for h in self.handlers: if hasattr(h, "runStart"): h.runStart(self.currentRun) for h in self.handlers: h.processTelemetry(message)
1c3af33cf48afed9c4725fd2dad43d8d9dc72b86 /local1/tlutelli/issta_data/temp/all_python//python/2008_temp/2008/5773/1c3af33cf48afed9c4725fd2dad43d8d9dc72b86/cerebellum.py
[input_ids / attention_mask / labels: truncated token-id arrays omitted]
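The fix in this row replaces a bare `try: ... except: pass` around `h.runFinish(...)` with a `hasattr` guard. The difference matters: the bare except silently swallows *every* error, including genuine bugs inside the handler, while the guard only skips handlers that lack the method. A small illustration with a hypothetical handler class:

```python
class Handler:
    def runFinish(self, run):
        # a genuine bug inside the handler
        raise ValueError("broken handler")

h = Handler()

# The removed pattern: "except: pass" hides the ValueError entirely.
try:
    h.runFinish("run1")
except:
    pass

# The added pattern: only skip handlers missing the method; real
# errors inside runFinish still propagate to the caller.
if hasattr(h, "runFinish"):
    try:
        h.runFinish("run1")
    except ValueError as e:
        print(e)  # broken handler
```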
uri + '/', itemId, wasEmpty)
uri + '/', itemId)
def __UpdateURLTree(self, sideBarLevel, parentUri, parentItem, wasEmpty=false): """ Synchronizes the sideBar's URLTree with the application's URLTree. The sideBar only stores a dict mapping visible items in the sideBar to their instances in the application. """ wxWindow = app.association[id(self)] uriList = app.model.URLTree.GetUriChildren(parentUri) for name in uriList: uri = parentUri + name parcel = app.model.URLTree.UriExists(uri) children = app.model.URLTree.GetUriChildren(uri) hasChildren = len(children) > 0
8ba1416afb227f716ae1f5a7e816a907c946e55f /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/9228/8ba1416afb227f716ae1f5a7e816a907c946e55f/SideBar.py
[input_ids / attention_mask / labels: truncated token-id arrays omitted]
a = self.atoms
a = self.owner.atoms
def constrainedPosition(self): a = self.atoms pos, p0, p1 = self.posn(), a[0].posn(), a[1].posn() z = p1 - p0 nz = norm(z) dotprod = dot(pos - p0, nz) if dotprod < 0.0: return pos - dotprod * nz elif dotprod > vlen(z): return pos - (dotprod - vlen(z)) * nz else: return pos
c3fc042ba5d169cb8eabb07c03a9bf20524cab21 /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/11221/c3fc042ba5d169cb8eabb07c03a9bf20524cab21/jigs_measurements.py
[input_ids / attention_mask / labels: truncated token-id arrays omitted]
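`constrainedPosition` above pulls a point back along a segment's axis so that its projection falls between the two endpoints. A 2-D sketch of that clamping logic, with plain-tuple vectors standing in for the `norm`/`dot`/`vlen` helpers the original relies on:

```python
def constrained_position(pos, p0, p1):
    # Axis from p0 to p1 and its unit vector
    z = (p1[0] - p0[0], p1[1] - p0[1])
    length = (z[0] ** 2 + z[1] ** 2) ** 0.5
    nz = (z[0] / length, z[1] / length)
    # Component of (pos - p0) along the axis
    dotprod = (pos[0] - p0[0]) * nz[0] + (pos[1] - p0[1]) * nz[1]
    if dotprod < 0.0:              # projection falls before p0
        shift = dotprod
    elif dotprod > length:         # projection falls beyond p1
        shift = dotprod - length
    else:                          # projection already on the segment
        return pos
    return (pos[0] - shift * nz[0], pos[1] - shift * nz[1])

print(constrained_position((5.0, 1.0), (0.0, 0.0), (2.0, 0.0)))  # (2.0, 1.0)
```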
for name,value in result['Value']:
for name, value in result['Value']:
def getJobOptParameters(self,jobID,paramList=[]): """ Get optimizer parameters for the given job. If the list of parameter names is empty, get all the parameters then """
99c1bc850ba087890925b3180df206f65bb1d4b3 /local1/tlutelli/issta_data/temp/all_python//python/2010_temp/2010/12864/99c1bc850ba087890925b3180df206f65bb1d4b3/JobDB.py
[input_ids / attention_mask / labels: truncated token-id arrays omitted]
{'filename':config.get('logfile',''), 'return_code': '200', 'request': request})
{'filename':config.get('logfile',''), 'return_code': '200', 'request': request})
def blosxom_handler(request): """ This is the default blosxom handler. It calls the renderer callback to get a renderer. If there is no renderer, it uses the blosxom renderer. It calls the pathinfo callback to process the path_info http variable. It calls the filelist callback to build a list of entries to display. It calls the prepare callback to do any additional preparation before rendering the entries. Then it tells the renderer to render the entries. @param request: A standard request object @type request: L{Pyblosxom.pyblosxom.Request} object """ config = request.getConfiguration() data = request.getData() # go through the renderer callback to see if anyone else # wants to render. this renderer gets stored in the data dict # for downstream processing. rend = tools.run_callback('renderer', {'request': request}, donefunc = lambda x: x != None, defaultfunc = lambda x: None) if not rend: # get the renderer we want to use rend = config.get("renderer", "blosxom") # import the renderer rend = tools.importname("Pyblosxom.renderers", rend) # get the renderer object rend = rend.Renderer(request, config.get("stdoutput", sys.stdout)) data['renderer'] = rend # generate the timezone variable data["timezone"] = time.tzname[time.localtime()[8]] # process the path info to determine what kind of blog entry(ies) # this is tools.run_callback("pathinfo", {"request": request}, donefunc=lambda x:x != None, defaultfunc=blosxom_process_path_info) # call the filelist callback to generate a list of entries data["entry_list"] = tools.run_callback("filelist", {"request": request}, donefunc=lambda x:x != None, defaultfunc=blosxom_file_list_handler) # figure out the blog-level mtime which is the mtime of the head of # the entry_list entry_list = data["entry_list"] if isinstance(entry_list, list) and len(entry_list) > 0: mtime = entry_list[0].get("mtime", time.time()) else: mtime = time.time() mtime_tuple = time.localtime(mtime) mtime_gmtuple = time.gmtime(mtime) data["latest_date"] = 
time.strftime('%a, %d %b %Y', mtime_tuple) # Make sure we get proper 'English' dates when using standards loc = locale.getlocale(locale.LC_ALL) locale.setlocale(locale.LC_ALL, 'C') data["latest_w3cdate"] = time.strftime('%Y-%m-%dT%H:%M:%SZ', mtime_gmtuple) data['latest_rfc822date'] = time.strftime('%a, %d %b %Y %H:%M GMT', mtime_gmtuple) # set the locale back locale.setlocale(locale.LC_ALL, loc) # we pass the request with the entry_list through the prepare callback # giving everyone a chance to transform the data. the request is # modified in place. tools.run_callback("prepare", {"request": request}) # now we pass the entry_list through the renderer entry_list = data["entry_list"] renderer = data['renderer'] if renderer and not renderer.rendered: if entry_list: renderer.setContent(entry_list) # Log it as success tools.run_callback("logrequest", {'filename':config.get('logfile',''), 'return_code': '200', 'request': request}) else: renderer.addHeader('Status', '404 Not Found') renderer.setContent( {'title': 'The page you are looking for is not available', 'body': 'Somehow I cannot find the page you want. ' + 'Go Back to <a href="%s">%s</a>?' % (config["base_url"], config["blog_title"])}) # Log it as failure tools.run_callback("logrequest", {'filename':config.get('logfile',''), 'return_code': '404', 'request': request}) renderer.render() elif not renderer: output = config.get('stdoutput', sys.stdout) output.write("Content-Type: text/plain\n\n" + \ "There is something wrong with your setup.\n" + \ "Check your config files and verify that your " + \ "configuration is correct.\n") cache = tools.get_cache(request) if cache: cache.close()
2d889182f07a1e20247eba2168cf4d18f81fcdab /local1/tlutelli/issta_data/temp/all_python//python/2009_temp/2009/11836/2d889182f07a1e20247eba2168cf4d18f81fcdab/pyblosxom.py
[input_ids / attention_mask / labels: truncated token-id arrays omitted]
note = (object.im_self and ' method of %s instance' + object.im_self.__class__ or ' unbound %s method' % object.im_class.__name__)
inst = object.im_self note = (inst and ' method of %s instance' % classname(inst.__class__, mod) or ' unbound %s method' % classname(imclass, mod))
def docroutine(self, object, name=None, mod=None, funcs={}, classes={}, methods={}, cl=None): """Produce HTML documentation for a function or method object.""" realname = object.__name__ name = name or realname anchor = (cl and cl.__name__ or '') + '-' + name note = '' skipdocs = 0 if inspect.ismethod(object): if cl: imclass = object.im_class if imclass is not cl: url = '%s.html#%s-%s' % ( imclass.__module__, imclass.__name__, name) note = ' from <a href="%s">%s</a>' % ( url, classname(imclass, mod)) skipdocs = 1 else: note = (object.im_self and ' method of %s instance' + object.im_self.__class__ or ' unbound %s method' % object.im_class.__name__) object = object.im_func
6ccd15487c0e9e18682f37c3720cc98bf33de766 /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/12029/6ccd15487c0e9e18682f37c3720cc98bf33de766/pydoc.py
[input_ids / attention_mask / labels: truncated token-id arrays omitted]
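The pydoc row above is a classic `+`-versus-`%` bug: the removed line concatenates the format string with a class object instead of interpolating into it, which raises `TypeError` at runtime. A stripped-down reproduction (the `Spam` class is invented for illustration):

```python
class Spam:
    pass

# The removed line used "+" where "%" was intended; concatenating a
# str with a class object raises TypeError at runtime:
try:
    note = ' method of %s instance' + Spam          # the "rem" bug
except TypeError:
    note = ' method of %s instance' % Spam.__name__  # the "add" fix

print(note)  # " method of Spam instance" (note the leading space)
```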
def concatenated_subfields (subfield_spec):
def handle_concatenated_subfields (subfield_spec):
def concatenated_subfields (subfield_spec): # concatenate all indicated subfields into one string per field instance
3e14bb7d7160914c0dda910d1cfa09a05915f488 /local1/tlutelli/issta_data/temp/all_python//python/2007_temp/2007/3913/3e14bb7d7160914c0dda910d1cfa09a05915f488/parse.py
[input_ids / attention_mask / labels: truncated token-id arrays omitted]
finoutput = addadminbox(subtitle + """&nbsp;&nbsp&nbsp;<small>[<a href="%s/admin/bibrank/guide.html
finoutput = addadminbox(subtitle + """&nbsp;&nbsp&nbsp;<small>[<a title="See guide" href="%s/admin/bibrank/guide.html
def perform_modifyrank(rnkID, rnkcode='', ln=cdslang, template='', cfgfile='', confirm=0): """form to modify a rank method rnkID - id of the rank method """ subtitle = 'Step 1 - Please modify the wanted values below' if not rnkcode: oldcode = get_rnk_code(rnkID)[0] else: oldcode = rnkcode output = """ <dl> <dd>When changing the BibRank code of a rank method, you must also change any scheduled tasks using the old value. <br>For more information, please go to the <a href="%s/admin/bibrank/guide.html">BibRank guide</a> and read the section about modifying a rank method's BibRank code.</dd> </dl> """ % weburl text = """ <span class="adminlabel">BibRank code</span> <input class="admin_wvar" type="text" name="rnkcode" value="%s" /> <br> """ % (oldcode) try: text += """<span class="adminlabel">Cfg file</span>""" textarea = "" if cfgfile: textarea +=cfgfile else: file = open("%s/bibrank/%s.cfg" % (etcdir, get_rnk_code(rnkID)[0][0])) for line in file.readlines(): textarea += line text += """<textarea class="admin_wvar" name="cfgfile" rows="15" cols="70">""" + textarea + """</textarea>""" except StandardError, e: text += """<b><span class="info">Cannot load file, either it does not exist, or not enough rights to read it: '%s/bibrank/%s.cfg'<br>Please create the file in the path given.</span></b>""" % (etcdir, get_rnk_code(rnkID)[0][0]) output += createhiddenform(action="modifyrank", text=text, rnkID=rnkID, button="Modify", confirm=0) text = "" if rnkcode and confirm in ["0", 0] and get_rnk_code(rnkID)[0][0] != rnkcode: subtitle = 'Step 2 - Confirm modification of configuration' text += "<b>Modify rank method '%s' with old code '%s' and set new code '%s'.</b>" % (get_current_name(rnkID, ln, get_rnk_nametypes()[0][0], "rnkMETHOD")[0][0], get_rnk_code(rnkID)[0][0], rnkcode) if cfgfile and confirm in ["0", 0]: subtitle = 'Step 2 - Confirm modification of configuration' text += "<br><b>Modify configuration file</b>" if (rnkcode or cfgfile) and confirm in ["0", 0] and not 
template: output += createhiddenform(action="modifyrank", text=text, rnkID=rnkID, rnkcode=rnkcode, cfgfile=cfgfile, button="Confirm", confirm=1) if rnkcode and confirm in ["1", 1] and get_rnk_code(rnkID)[0][0] != rnkcode: oldcode = get_rnk_code(rnkID)[0][0] result = modify_rnk(rnkID, rnkcode) subtitle = "Step 3 - Result" if result: text = """<b><span class="info">Rank method '%s' modified, new code is '%s'.</span></b>""" % (get_current_name(rnkID, ln, get_rnk_nametypes()[0][0], "rnkMETHOD")[0][0], rnkcode) try: file = open("%s/bibrank/%s.cfg" % (etcdir, oldcode), 'r') file2 = open("%s/bibrank/%s.cfg" % (etcdir, rnkcode), 'w') lines = file.readlines() for line in lines: file2.write(line) file.close() file2.close() os.remove("%s/bibrank/%s.cfg" % (etcdir, oldcode)) except StandardError, e: text = """<b><span class="info">Sorry, could not change name of cfg file, must be done manually: '%s/bibrank/%s.cfg'</span></b>""" % (etcdir, oldcode) else: text = """<b><span class="info">Sorry, could not modify rank method.</span></b>""" output += text if cfgfile and confirm in ["1", 1]: try: file = open("%s/bibrank/%s.cfg" % (etcdir, get_rnk_code(rnkID)[0][0]), 'w') file.write(cfgfile) file.close() text = """<b><span class="info"><br>Configuration file modified: '%s/bibrank/%s.cfg'</span></b>""" % (etcdir, get_rnk_code(rnkID)[0][0]) except StandardError, e: text = """<b><span class="info"><br>Sorry, could not modify configuration file, please check for rights to do so: '%s/bibrank/%s.cfg'<br>Please modify the file manually.</span></b>""" % (etcdir, get_rnk_code(rnkID)[0][0]) output += text finoutput = addadminbox(subtitle + """&nbsp;&nbsp&nbsp;<small>[<a href="%s/admin/bibrank/guide.html#mr">?</a>]</small>""" % weburl, [output]) output = "" text = """ <span class="adminlabel">Select</span> <select name="template" class="admin_w200"> <option value="">- select template -</option> """ templates = get_templates() for templ in templates: text += """<option value="%s" 
%s>%s</option>""" % (templ, template == templ and 'selected="selected"' or '', templ[9:len(templ)-4]) text += """</select><br>""" output += createhiddenform(action="modifyrank", text=text, rnkID=rnkID, button="Show template", confirm=0) try: if template: textarea = "" text = """<span class="adminlabel">Content:</span>""" file = open("%s/bibrank/%s" % (etcdir, template), 'r') lines = file.readlines() for line in lines: textarea += line file.close() text += """<textarea class="admin_wvar" readonly="true" rows="15" cols="70">""" + textarea + """</textarea>""" output += text except StandardError, e: output += """Cannot load file, either it does not exist, or not enough rights to read it: '%s/bibrank/%s'""" % (etcdir, template) finoutput += addadminbox("View templates", [output]) return finoutput
a6a2a3f836d914026219bf3610ec841b23ae03ce /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/12027/a6a2a3f836d914026219bf3610ec841b23ae03ce/bibrankadminlib.py
[input_ids / attention_mask / labels: truncated token-id arrays omitted]
cl = self.changelog msng_cl_lst, bases, heads = cl.nodesbetween(bases, heads)
def changegroupsubset(self, bases, heads, source, extranodes=None): """Compute a changegroup consisting of all the nodes that are descendents of any of the bases and ancestors of any of the heads. Return a chunkbuffer object whose read() method will return successive changegroup chunks.
a0fc20c94dfe5c080fb13a0a5deaf8ef50305184 /local1/tlutelli/issta_data/temp/all_python//python/2009_temp/2009/11312/a0fc20c94dfe5c080fb13a0a5deaf8ef50305184/localrepo.py
[input_ids / attention_mask / labels: truncated token-id arrays omitted]
def addMapOverlay(fe, overlay, **attr):
def addMapOverlay(fe, overlay, skipBorder = False, **attr):
def addMapOverlay(fe, overlay, **attr): qtColor2figColor = figexport.qtColor2figColor # FIXME: str(type(overlay)).contains(...) instead? if isinstance(overlay, ROISelector): color = qtColor2figColor(overlay.color, fe.f) return fe.addROIRect(overlay.roi, penColor = color, **attr) elif isinstance(overlay, (MapNodes, MapEdges)): oldScale, oldOffset, oldROI = fe.scale, fe.offset, fe.roi extraZoom = float(overlay._zoom) / overlay.viewer.zoomFactor() fe.scale *= extraZoom fe.roi = BoundingBox(fe.roi.begin() / extraZoom, fe.roi.end() / extraZoom) if isinstance(overlay, MapNodes): radius = overlay.origRadius if not overlay.relativeRadius: radius /= float(overlay._zoom) color = qtColor2figColor(overlay.color, fe.f) result = fe.addMapNodes(overlay._map(), radius, fillColor = color, lineWidth = 0, **attr) else: attr = dict(attr) if overlay.width: attr["lineWidth"] = overlay.width if overlay.colors: result = fig.Compound(fe.f) for edge in overlay._map().edgeIter(): edgeColor = overlay.colors[edge.label()] if edgeColor: fe.addClippedPoly(edge, penColor = qtColor2figColor(edgeColor, fe.f), container = result, **attr) else: result = fe.addMapEdges( overlay._map(), penColor = qtColor2figColor(overlay.color, fe.f), **attr) if overlay.protectedColor: attr["lineWidth"] = overlay.protectedWidth or overlay.width attr["penColor"] = \ qtColor2figColor(overlay.protectedColor, fe.f) for edge in overlay._map().edgeIter(): if edge.flag(flag_constants.ALL_PROTECTION): fe.addClippedPoly(edge, container = result, **attr) fe.scale, fe.offset, fe.roi = oldScale, oldOffset, oldROI return result else: return figexport.addStandardOverlay(fe, overlay, **attr)
a8d28a6999b040b1f6b3ee4e71b9b5ee0d64c567 /local1/tlutelli/issta_data/temp/all_python//python/2007_temp/2007/10394/a8d28a6999b040b1f6b3ee4e71b9b5ee0d64c567/mapdisplay.py
[input_ids / attention_mask / labels: truncated token-id arrays omitted]
def replace(self, filename, photo_id):
def replace(self, filename, photo_id, callback=None, format=None):
def replace(self, filename, photo_id): """Replace an existing photo.
955fd13655976aabd4dbc292d5608024f484b488 /local1/tlutelli/issta_data/temp/all_python//python/2008_temp/2008/501/955fd13655976aabd4dbc292d5608024f484b488/__init__.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1453, 12, 2890, 16, 1544, 16, 10701, 67, 350, 16, 1348, 33, 7036, 16, 740, 33, 7036, 4672, 3536, 5729, 392, 2062, 10701, 18, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1453, 12, 2890, 16, 1544, 16, 10701, 67, 350, 16, 1348, 33, 7036, 16, 740, 33, 7036, 4672, 3536, 5729, 392, 2062, 10701, 18, 2, -100, -100, -100, -100, -100, -100, -100, -100, -100, -1...
self.dirCtrl.SetHelpText('This is the project's default ' + \
self.dirCtrl.SetHelpText('This is the default ' + \
def __init__(self, parent, project, invalid_names): wxDialog.__init__(self, parent, -1, title = 'Project Settings')
30683460fc541f46ce29281a36a7f34c3fc8cc02 /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/5620/30683460fc541f46ce29281a36a7f34c3fc8cc02/dialogs.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1001, 2738, 972, 12, 2890, 16, 982, 16, 1984, 16, 2057, 67, 1973, 4672, 7075, 6353, 16186, 2738, 972, 12, 2890, 16, 982, 16, 300, 21, 16, 2077, 273, 296, 4109, 8709, 6134, 2, 0, 0, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1001, 2738, 972, 12, 2890, 16, 982, 16, 1984, 16, 2057, 67, 1973, 4672, 7075, 6353, 16186, 2738, 972, 12, 2890, 16, 982, 16, 300, 21, 16, 2077, 273, 296, 4109, 8709, 6134, 2, -100, -...
self.headers = {}
self.headers = {'title': 'None'}
def __init__(self, document):
3b251e0bafb7b7d4e1b7561db73779fca6622248 /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/1278/3b251e0bafb7b7d4e1b7561db73779fca6622248/hthtml.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1001, 2738, 972, 12, 2890, 16, 1668, 4672, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1001, 2738, 972, 12, 2890, 16, 1668, 4672, 2, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100...
return terms, links
keys = terms.keys() keys.sort() return [(k, terms[k], links[k]) for k in keys]
def _extract_term_index(self): """ @return: Two dictionaries: - the first maps keys to terms - the second maps keys to lists of links @rtype: C{dictionary} """ terms = {} links = {} for (uid, doc) in self._docmap.items(): if (not self._show_private) and uid.is_private(): continue if uid.is_function(): link = Link(uid.name(), uid.module()) elif uid.is_method(): link = Link(uid.name(), uid.cls()) else: link = Link(uid.name(), uid)
dad6dd09c82d3878a78d977cc13da79d4420db2b /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/3512/dad6dd09c82d3878a78d977cc13da79d4420db2b/html.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 389, 8004, 67, 6408, 67, 1615, 12, 2890, 4672, 3536, 632, 2463, 30, 16896, 16176, 30, 300, 326, 1122, 7565, 1311, 358, 6548, 300, 326, 2205, 29796, 87, 1311, 358, 6035, 434, 4716, 632, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 389, 8004, 67, 6408, 67, 1615, 12, 2890, 4672, 3536, 632, 2463, 30, 16896, 16176, 30, 300, 326, 1122, 7565, 1311, 358, 6548, 300, 326, 2205, 29796, 87, 1311, 358, 6035, 434, 4716, 632, ...
query = """SELECT id FROM %s """ % table_name query += """ WHERE tag = %s and value = %s"""
query = """SELECT id FROM %s """ % table_name query += """ WHERE tag=%s AND value=%s"""
def insert_record_bibxxx(tag, value): """Insert the record into bibxxx""" #determine into which table one should insert the record table_name = 'bib'+tag[0:2]+'x' #check if the table exists in the database query = """SHOW TABLES like %s""" params = (table_name,) try: res = run_sql(query, params) except Error, error: write_message(" Error during the insert_record_bibxxx function : %s " % error, verbose=1, stream=sys.stderr) # if the table doesn't exist, please create it if len(res): pass else: if options['verbose'] >= 1: print 'create table' # check if the tag, value combination exists in the table query = """SELECT id FROM %s """ % table_name query += """ WHERE tag = %s and value = %s""" params = (tag, value) try: res = run_sql(query, params) except Error, error: write_message(" Error during the insert_record_bibxxx function : %s " % error, verbose=1, stream=sys.stderr) if len(res): # get the id of the row, if it exists row_id = res[0][0] return (table_name, row_id) else: # necessary to insert the tag, value into bibxxx table query = """INSERT INTO %s """ % table_name query += """ (tag, value) values (%s , %s)""" params = (tag, value) try: row_id = run_sql(query, params) except Error, error: write_message(" Error during the insert_record_bibxxx function : %s " % error, verbose=1, stream=sys.stderr) return (table_name, row_id)
c01e7675eac4ac02844b96fe38be4646d43de28c /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/12027/c01e7675eac4ac02844b96fe38be4646d43de28c/bibupload.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 2243, 67, 3366, 67, 70, 495, 18310, 12, 2692, 16, 460, 4672, 3536, 4600, 326, 1409, 1368, 25581, 18310, 8395, 468, 24661, 1368, 1492, 1014, 1245, 1410, 2243, 326, 1409, 1014, 67, 529, 27...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 2243, 67, 3366, 67, 70, 495, 18310, 12, 2692, 16, 460, 4672, 3536, 4600, 326, 1409, 1368, 25581, 18310, 8395, 468, 24661, 1368, 1492, 1014, 1245, 1410, 2243, 326, 1409, 1014, 67, 529, 27...
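The `bibupload.py` diff above keeps the dynamic table name as plain `%` interpolation (a table name cannot be a driver-bound parameter) while leaving `tag` and `value` as `%s` placeholders for the database layer to bind. A minimal sketch of that split, using a hypothetical `build_bibxxx_query` helper in place of the surrounding `run_sql` machinery:

```python
def build_bibxxx_query(tag):
    # The table name is derived from the tag and interpolated directly;
    # tag/value stay as %s placeholders for the DB driver to bind safely.
    table_name = 'bib' + tag[0:2] + 'x'
    query = """SELECT id FROM %s """ % table_name
    query += """ WHERE tag=%s AND value=%s"""
    return table_name, query

table, query = build_bibxxx_query('100')
```

Only the value placeholders reach the driver; the table name must be validated separately since it bypasses parameter binding.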
if not dircase in _dirs_in_sys_path and os.path.exists(dir):
if not dircase in _dirs_in_sys_path:
def makepath(*paths): dir = os.path.abspath(os.path.join(*paths)) return dir, os.path.normcase(dir)
46cf4fc2497a8268c2d0b6eb43a6a3146bd519c3 /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/8546/46cf4fc2497a8268c2d0b6eb43a6a3146bd519c3/site.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1221, 803, 30857, 4481, 4672, 1577, 273, 1140, 18, 803, 18, 5113, 803, 12, 538, 18, 803, 18, 5701, 30857, 4481, 3719, 327, 1577, 16, 1140, 18, 803, 18, 7959, 3593, 12, 1214, 13, 225, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1221, 803, 30857, 4481, 4672, 1577, 273, 1140, 18, 803, 18, 5113, 803, 12, 538, 18, 803, 18, 5701, 30857, 4481, 3719, 327, 1577, 16, 1140, 18, 803, 18, 7959, 3593, 12, 1214, 13, 225, ...
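The `site.py` record above centres on `makepath`, which returns both an absolute path and its case-normalised twin so duplicate `sys.path` entries can be detected portably (the diff drops the extra `os.path.exists` check from the membership test). A self-contained sketch of that helper, reconstructed from the context field rather than the exact upstream source:

```python
import os

def makepath(*paths):
    # Join and absolutise the path, then also return its case-normalised
    # form so 'C:\Foo' and 'c:\foo' compare equal on case-folding systems.
    dir = os.path.abspath(os.path.join(*paths))
    return dir, os.path.normcase(dir)

dir_abs, dir_case = makepath('/tmp', 'sub', 'x')
```

Comparing the `normcase` forms is what makes the duplicate check work on Windows as well as POSIX.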
sage: f.coeff(sin(y))
sage: f.coefficient(sin(y))
def coeff(self, x, n=1): """ Returns the coefficient of $x^n$ in self.
2e32b7247af3731db881ec5bd651880a8fc8c790 /local1/tlutelli/issta_data/temp/all_python//python/2008_temp/2008/9890/2e32b7247af3731db881ec5bd651880a8fc8c790/calculus.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 11943, 12, 2890, 16, 619, 16, 290, 33, 21, 4672, 3536, 2860, 326, 16554, 434, 271, 92, 66, 82, 8, 316, 365, 18, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 11943, 12, 2890, 16, 619, 16, 290, 33, 21, 4672, 3536, 2860, 326, 16554, 434, 271, 92, 66, 82, 8, 316, 365, 18, 2, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -...
ExtraData="\t%s [%s]" % (str(Module), Arch))
ExtraData="\t%s [%s]" % (str(Module), self._Arch))
def _ResolveLibraryReference(self, Module): EdkLogger.verbose("") EdkLogger.verbose("Library instances of module [%s] [%s]:" % (str(Module), self._Arch)) LibraryConsumerList = [Module]
23fd1ab4b6b323e8188cc3432dd7bd25e9d3512a /local1/tlutelli/issta_data/temp/all_python//python/2008_temp/2008/914/23fd1ab4b6b323e8188cc3432dd7bd25e9d3512a/WorkspaceDatabase.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 389, 8460, 9313, 2404, 12, 2890, 16, 5924, 4672, 512, 2883, 3328, 18, 11369, 2932, 7923, 512, 2883, 3328, 18, 11369, 2932, 9313, 3884, 434, 1605, 9799, 87, 65, 9799, 87, 65, 2773, 738, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 389, 8460, 9313, 2404, 12, 2890, 16, 5924, 4672, 512, 2883, 3328, 18, 11369, 2932, 7923, 512, 2883, 3328, 18, 11369, 2932, 9313, 3884, 434, 1605, 9799, 87, 65, 9799, 87, 65, 2773, 738, ...
aclf.write('group admins bob@QPID \ ')
aclf.write('group admins bob@QPID \n')
def test_illegal_extension_lines(self): """ Test illegal extension lines """ aclf = ACLFile() aclf.write('group admins bob@QPID \ ') aclf.write(' \ \n') aclf.write('joe@QPID \n') aclf.write('acl allow all all') aclf.close() result = self.reload_acl() if (result.text.find("contains illegal characters",0,len(result.text)) == -1): self.fail(result)
bb51d6fcd30b80ed081071c77eacf2185544bc0e /local1/tlutelli/issta_data/temp/all_python//python/2010_temp/2010/198/bb51d6fcd30b80ed081071c77eacf2185544bc0e/acl.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1842, 67, 31751, 67, 6447, 67, 3548, 12, 2890, 4672, 3536, 7766, 16720, 2710, 2362, 3536, 225, 7895, 74, 273, 10098, 812, 1435, 7895, 74, 18, 2626, 2668, 1655, 31116, 800, 70, 36, 53, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1842, 67, 31751, 67, 6447, 67, 3548, 12, 2890, 4672, 3536, 7766, 16720, 2710, 2362, 3536, 225, 7895, 74, 273, 10098, 812, 1435, 7895, 74, 18, 2626, 2668, 1655, 31116, 800, 70, 36, 53, ...
result = 0
def DllCanUnloadNow(): # First ask ctypes.com.server then comtypes.server if we can unload or not. # trick py2exe by doing dynamic imports result = 0 # S_OK try: ctcom = __import__("ctypes.com.server", globals(), locals(), ['*']) except ImportError: pass else: result = ctcom.DllCanUnloadNow() if result != 0: # != S_OK return result
80f9f5c16d81936875e8e0bd489075580d0dafd6 /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/12029/80f9f5c16d81936875e8e0bd489075580d0dafd6/__init__.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 463, 2906, 2568, 984, 945, 8674, 13332, 468, 5783, 6827, 6983, 18, 832, 18, 3567, 2353, 532, 2352, 18, 3567, 309, 732, 848, 27060, 578, 486, 18, 468, 28837, 2395, 22, 14880, 635, 9957, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 463, 2906, 2568, 984, 945, 8674, 13332, 468, 5783, 6827, 6983, 18, 832, 18, 3567, 2353, 532, 2352, 18, 3567, 309, 732, 848, 27060, 578, 486, 18, 468, 28837, 2395, 22, 14880, 635, 9957, ...
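The `DllCanUnloadNow` context above relies on the dynamic-import-with-fallback idiom (calling `__import__` inside `try`/`except ImportError`) so that py2exe's static analysis never sees the module name. A stripped-down sketch of that pattern, with a hypothetical `try_import` helper:

```python
def try_import(name):
    # Import a module by name at runtime; return None instead of raising
    # if the module is absent, mirroring the pass-on-ImportError pattern.
    try:
        return __import__(name, globals(), locals(), ['*'])
    except ImportError:
        return None
```

Callers then branch on the result being `None` rather than wrapping every import site in its own handler.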
notebook.quit_idle_worksheet_processes()
def notebook_idle_check(): notebook.quit_idle_worksheet_processes() global last_idle_time t = walltime() if t > last_idle_time + idle_interval: notebook.save() last_idle_time = t
869a6dfb8251d1563e9848fed16c4ca4f125e07b /local1/tlutelli/issta_data/temp/all_python//python/2008_temp/2008/9890/869a6dfb8251d1563e9848fed16c4ca4f125e07b/twist.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 14718, 67, 20390, 67, 1893, 13332, 2552, 1142, 67, 20390, 67, 957, 268, 273, 17662, 957, 1435, 309, 268, 405, 1142, 67, 20390, 67, 957, 397, 12088, 67, 6624, 30, 14718, 18, 5688, 1435, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 14718, 67, 20390, 67, 1893, 13332, 2552, 1142, 67, 20390, 67, 957, 268, 273, 17662, 957, 1435, 309, 268, 405, 1142, 67, 20390, 67, 957, 397, 12088, 67, 6624, 30, 14718, 18, 5688, 1435, ...
def serialize(self, sw, pyobj, name=None, attrtext='', **kw):
def serialize(self, sw, pyobj, name=None, attrtext='', unsuppressedPrefixes=[], **kw):
def serialize(self, sw, pyobj, name=None, attrtext='', **kw): if not self.wrapped: Canonicalize(pyobj, sw, comments=self.comments) return objid = '%x' % id(pyobj) n = name or self.oname or ('E' + objid) if type(pyobj) in _stringtypes: print >>sw, '<%s%s href="%s"/>' % (n, attrtext, pyobj) elif kw.get('inline', self.inline): self.cb(sw, pyobj) else: print >>sw, '<%s%s href="#%s"/>' % (n, attrtext, objid) sw.AddCallback(self.cb, pyobj)
1460ab4d6d3e9581d168ae904fd5c9a83c05f0cf /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/14538/1460ab4d6d3e9581d168ae904fd5c9a83c05f0cf/TC.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 4472, 12, 2890, 16, 1352, 16, 2395, 2603, 16, 508, 33, 7036, 16, 1604, 955, 2218, 2187, 640, 2859, 10906, 11700, 22850, 6487, 2826, 9987, 4672, 309, 486, 365, 18, 18704, 30, 19413, 554, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 4472, 12, 2890, 16, 1352, 16, 2395, 2603, 16, 508, 33, 7036, 16, 1604, 955, 2218, 2187, 640, 2859, 10906, 11700, 22850, 6487, 2826, 9987, 4672, 309, 486, 365, 18, 18704, 30, 19413, 554, ...
cline = cline + " -GAPDIST=%s" % self.gap_sep_range
cline += " -GAPDIST=%s" % self.gap_sep_range
def __str__(self): """Write out the command line as a string.""" cline = self.command + " " + self.sequence_file
2eec844bd4b813ca4a021c23a58ac4936477a812 /local1/tlutelli/issta_data/temp/all_python//python/2007_temp/2007/7167/2eec844bd4b813ca4a021c23a58ac4936477a812/__init__.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1001, 701, 972, 12, 2890, 4672, 3536, 3067, 596, 326, 1296, 980, 487, 279, 533, 12123, 927, 558, 273, 365, 18, 3076, 397, 315, 315, 397, 365, 18, 6178, 67, 768, 2, 0, 0, 0, 0, 0, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1001, 701, 972, 12, 2890, 4672, 3536, 3067, 596, 326, 1296, 980, 487, 279, 533, 12123, 927, 558, 273, 365, 18, 3076, 397, 315, 315, 397, 365, 18, 6178, 67, 768, 2, -100, -100, -100, ...
itemToModify = self.cartitem_set.filter(product__id = chosen_item.id)[0] if 'CustomProduct' in itemToModify.product.get_subtypes(): itemToModify = CartItem(cart=self, product=chosen_item, quantity=0) except IndexError: itemToModify = CartItem(cart=self, product=chosen_item, quantity=0) config=Config.get_shop_config()
item_to_modify = self.cartitem_set.filter(product__id = chosen_item.id)[0] if 'CustomProduct' in item_to_modify.product.get_subtypes(): item_to_modify = self.cartitem_set.create(product=chosen_item, quantity=0) except IndexError: item_to_modify = self.cartitem_set.create(product=chosen_item, quantity=0) config = Config.get_shop_config()
def add_item(self, chosen_item, number_added, details={}): try: itemToModify = self.cartitem_set.filter(product__id = chosen_item.id)[0] # Custom Products will not be added, they will each get their own line item #TODO: More sophisticated checks to make sure the options really are different if 'CustomProduct' in itemToModify.product.get_subtypes(): itemToModify = CartItem(cart=self, product=chosen_item, quantity=0) except IndexError: #It doesn't exist so create a new one itemToModify = CartItem(cart=self, product=chosen_item, quantity=0) config=Config.get_shop_config() if config.no_stock_checkout == False: if chosen_item.items_in_stock < (itemToModify.quantity + number_added): return False
ee9e71f9cdb82607fc4d435dc8c8e69afb1b1ff5 /local1/tlutelli/issta_data/temp/all_python//python/2008_temp/2008/13656/ee9e71f9cdb82607fc4d435dc8c8e69afb1b1ff5/models.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 527, 67, 1726, 12, 2890, 16, 10447, 67, 1726, 16, 1300, 67, 9665, 16, 3189, 12938, 4672, 775, 30, 761, 774, 11047, 273, 225, 365, 18, 11848, 1726, 67, 542, 18, 2188, 12, 5896, 972, 3...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 527, 67, 1726, 12, 2890, 16, 10447, 67, 1726, 16, 1300, 67, 9665, 16, 3189, 12938, 4672, 775, 30, 761, 774, 11047, 273, 225, 365, 18, 11848, 1726, 67, 542, 18, 2188, 12, 5896, 972, 3...
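The cart diff above replaces direct `CartItem(cart=self, ...)` construction with the related manager's `create()`, which binds the new item to its cart in one step. A plain-Python sketch of that design, with hypothetical `Cart`/`CartItemSet` classes standing in for the Django models and manager:

```python
class CartItem:
    def __init__(self, cart, product, quantity):
        self.cart, self.product, self.quantity = cart, product, quantity

class CartItemSet:
    # Minimal stand-in for a related manager: create() builds the item
    # already bound to its owning cart, so callers never pass cart= themselves.
    def __init__(self, cart):
        self._cart = cart
        self.items = []

    def create(self, product, quantity=0):
        item = CartItem(cart=self._cart, product=product, quantity=quantity)
        self.items.append(item)
        return item

class Cart:
    def __init__(self):
        self.cartitem_set = CartItemSet(self)

cart = Cart()
item = cart.cartitem_set.create(product='widget')
```

Centralising construction in `create()` is what lets the diff drop the repeated `cart=self` argument at both call sites.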
index.
index. EXAMPLES: sage: E = EllipticCurve('11a1') sage: a = RIF(sqrt(2))-1.4142135623730951 sage: E._EllipticCurve_rational_field__adjust_heegner_index(a) [0.0000000... .. 1.490116...e-8]
def __adjust_heegner_index(self, a): r""" Take the square root of the interval that contains the Heegner index. """ if a.lower() < 0: a = IR((0, a.upper())) return a.sqrt()
7cc7405bdd7fadfff0b54f05addf730f5046a467 /local1/tlutelli/issta_data/temp/all_python//python/2008_temp/2008/9890/7cc7405bdd7fadfff0b54f05addf730f5046a467/ell_rational_field.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1001, 13362, 67, 580, 1332, 1224, 67, 1615, 12, 2890, 16, 279, 4672, 436, 8395, 17129, 326, 8576, 1365, 434, 326, 3673, 716, 1914, 326, 8264, 1332, 1224, 770, 18, 225, 5675, 8900, 11386,...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1001, 13362, 67, 580, 1332, 1224, 67, 1615, 12, 2890, 16, 279, 4672, 436, 8395, 17129, 326, 8576, 1365, 434, 326, 3673, 716, 1914, 326, 8264, 1332, 1224, 770, 18, 225, 5675, 8900, 11386,...
elif not res_ids: domain[i] = ('id', '=', '0')
def _rec_get(ids, table, parent): if not ids: return [] ids2 = table.search(cursor, user, [(parent, 'in', ids), (parent, '!=', False)], order=[], context=context) return ids + _rec_get(ids2, table, parent)
956bf728d5d1e352a8f111e2a24615e1b217049c /local1/tlutelli/issta_data/temp/all_python//python/2010_temp/2010/9266/956bf728d5d1e352a8f111e2a24615e1b217049c/modelsql.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 389, 3927, 67, 588, 12, 2232, 16, 1014, 16, 982, 4672, 309, 486, 3258, 30, 327, 5378, 3258, 22, 273, 1014, 18, 3072, 12, 9216, 16, 729, 16, 306, 12, 2938, 16, 296, 267, 2187, 3258, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 389, 3927, 67, 588, 12, 2232, 16, 1014, 16, 982, 4672, 309, 486, 3258, 30, 327, 5378, 3258, 22, 273, 1014, 18, 3072, 12, 9216, 16, 729, 16, 306, 12, 2938, 16, 296, 267, 2187, 3258, ...
if comment: comment = unicode(comment, config.charset))
if not page.exists(): return xmlrpclib.Fault(1, _('No such page %s' % pagename))
def delete(request, pagename, comment = None): _ = request.getText # Using the same access controls as in MoinMoin's xmlrpc_putPage # as defined in MoinMoin/wikirpc.py if (request.cfg.xmlrpc_putpage_trusted_only and not request.user.trusted): return xmlrpclib.Fault(1, _("You are not allowed to delete this page")) # check ACLs if not request.user.may.delete(pagename): return xmlrpclib.Fault(1, _("You are not allowed to delete this page")) # Delete the page page = PageEditor(request, pagename, do_editor_backup=0) if comment: comment = unicode(comment, config.charset)) page.deletePage(comment) return True
33a4c7e4c0e68b350654f703e54a809a8919fa94 /local1/tlutelli/issta_data/temp/all_python//python/2008_temp/2008/888/33a4c7e4c0e68b350654f703e54a809a8919fa94/DeletePage.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1430, 12, 2293, 16, 4262, 1069, 16, 2879, 273, 599, 4672, 389, 273, 590, 18, 588, 1528, 468, 11637, 326, 1967, 2006, 11022, 487, 316, 490, 885, 49, 885, 1807, 31811, 67, 458, 1964, 468...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1430, 12, 2293, 16, 4262, 1069, 16, 2879, 273, 599, 4672, 389, 273, 590, 18, 588, 1528, 468, 11637, 326, 1967, 2006, 11022, 487, 316, 490, 885, 49, 885, 1807, 31811, 67, 458, 1964, 468...
class MSNDispatchClient(MSNEventBase): """ This class provides support for clients connecting to the dispatch server @ivar userHandle: your user handle (passport) needed before connecting. """
class DispatchClient(MSNEventBase): """ This class provides support for clients connecting to the dispatch server @ivar userHandle: your user handle (passport) needed before connecting. """
def gotError(self, errorCode): """ called when the server sends an error which is not in response to a sent command (i.e. it has no matching transaction ID) """ log.msg('Error %s' % (errorCodes[errorCode]))
9b2b7163e441396d9e5e7f46775cd7809b4655be /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/12595/9b2b7163e441396d9e5e7f46775cd7809b4655be/msn.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 2363, 668, 12, 2890, 16, 12079, 4672, 3536, 2566, 1347, 326, 1438, 9573, 392, 555, 1492, 353, 486, 316, 766, 358, 279, 3271, 1296, 261, 1385, 18, 518, 711, 1158, 3607, 2492, 1599, 13, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 2363, 668, 12, 2890, 16, 12079, 4672, 3536, 2566, 1347, 326, 1438, 9573, 392, 555, 1492, 353, 486, 316, 766, 358, 279, 3271, 1296, 261, 1385, 18, 518, 711, 1158, 3607, 2492, 1599, 13, ...
def stop(self):
def stop(self, renew=True):
def stop(self): """ Stops the topology. This will fail if the topology has not been prepared yet. """ self.renew() task = self.start_task(self.stop_run) return task.id
63a41aa539dde6f78e4619ca15d3a72854ffb7ab /local1/tlutelli/issta_data/temp/all_python//python/2010_temp/2010/3860/63a41aa539dde6f78e4619ca15d3a72854ffb7ab/topology.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 2132, 12, 2890, 16, 15723, 33, 5510, 4672, 3536, 934, 4473, 326, 9442, 18, 1220, 903, 2321, 309, 326, 9442, 711, 486, 2118, 8208, 4671, 18, 3536, 365, 18, 1187, 359, 1435, 1562, 273, 3...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 2132, 12, 2890, 16, 15723, 33, 5510, 4672, 3536, 934, 4473, 326, 9442, 18, 1220, 903, 2321, 309, 326, 9442, 711, 486, 2118, 8208, 4671, 18, 3536, 365, 18, 1187, 359, 1435, 1562, 273, 3...
Basic.__init__(self, binding)
Basic.__init__(self, binding, **kwargs)
def __init__(self, binding): """ @param binding: A binding object. @type binding: L{binding.Binding} """ Basic.__init__(self, binding) self.resolver = NodeResolver(self.schema)
e36d638d64bca24d8fa2a5d209115c9c0f351a4e /local1/tlutelli/issta_data/temp/all_python//python/2008_temp/2008/5377/e36d638d64bca24d8fa2a5d209115c9c0f351a4e/unmarshaller.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1001, 2738, 972, 12, 2890, 16, 5085, 4672, 3536, 632, 891, 5085, 30, 432, 5085, 733, 18, 632, 723, 5085, 30, 511, 95, 7374, 18, 5250, 97, 3536, 7651, 16186, 2738, 972, 12, 2890, 16, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1001, 2738, 972, 12, 2890, 16, 5085, 4672, 3536, 632, 891, 5085, 30, 432, 5085, 733, 18, 632, 723, 5085, 30, 511, 95, 7374, 18, 5250, 97, 3536, 7651, 16186, 2738, 972, 12, 2890, 16, ...
cfg.set(section_name, "default", "True")
cfg[section_name][u"default"] = u"True"
def save(rv, filename): """Save selected settings information, including all sharing accounts and shares (whether published or subscribed), to an INI file""" cfg = ConfigParser.ConfigParser() # Sharing accounts currentAccount = schema.ns("osaf.sharing", rv).currentWebDAVAccount.item section_prefix = "sharing_account" counter = 1 for account in sharing.WebDAVAccount.iterItems(rv): if account.username: # skip account if not configured section_name = "%s_%d" % (section_prefix, counter) cfg.add_section(section_name) cfg.set(section_name, "type", "webdav account") cfg.set(section_name, "uuid", account.itsUUID) cfg.set(section_name, "title", account.displayName) cfg.set(section_name, "host", account.host) cfg.set(section_name, "path", account.path) cfg.set(section_name, "username", account.username) cfg.set(section_name, "password", account.password) cfg.set(section_name, "port", account.port) cfg.set(section_name, "usessl", account.useSSL) if account is currentAccount: cfg.set(section_name, "default", "True") counter += 1 # Subscriptions mine = schema.ns("osaf.pim", rv).mine section_prefix = "share" counter = 1 for col in pim.ContentCollection.iterItems(rv): share = sharing.getShare(col) if share: section_name = "%s_%d" % (section_prefix, counter) cfg.add_section(section_name) cfg.set(section_name, "type", "share") cfg.set(section_name, "title", share.contents.displayName) cfg.set(section_name, "mine", col in mine.sources) uc = usercollections.UserCollection(col) if getattr(uc, "color", False): color = uc.color cfg.set(section_name, "red", color.red) cfg.set(section_name, "green", color.green) cfg.set(section_name, "blue", color.blue) cfg.set(section_name, "alpha", color.alpha) urls = sharing.getUrls(share) if sharing.isSharedByMe(share): cfg.set(section_name, "publisher", "True") cfg.set(section_name, "url", share.getLocation()) else: cfg.set(section_name, "publisher", "False") url = share.getLocation() cfg.set(section_name, "url", url) if url != urls[0]: 
cfg.set(section_name, "ticket", urls[0]) ticketFreeBusy = getattr(share.conduit, "ticketFreeBusy", None) if ticketFreeBusy: cfg.set(section_name, "freebusy", "True") counter += 1 # SMTP accounts section_prefix = "smtp_account" counter = 1 for account in pim.mail.SMTPAccount.iterItems(rv): if account.isActive and account.host: section_name = "%s_%d" % (section_prefix, counter) cfg.add_section(section_name) cfg.set(section_name, "type", "smtp account") cfg.set(section_name, "uuid", account.itsUUID) cfg.set(section_name, "title", account.displayName) cfg.set(section_name, "host", account.host) cfg.set(section_name, "auth", account.useAuth) cfg.set(section_name, "username", account.username) cfg.set(section_name, "password", account.password) cfg.set(section_name, "name", account.fromAddress.fullName) cfg.set(section_name, "address", account.fromAddress.emailAddress) cfg.set(section_name, "port", account.port) cfg.set(section_name, "security", account.connectionSecurity) counter += 1 # IMAP accounts currentAccount = schema.ns("osaf.pim", rv).currentMailAccount.item section_prefix = "imap_account" counter = 1 for account in pim.mail.IMAPAccount.iterItems(rv): if account.isActive and account.host: section_name = "%s_%d" % (section_prefix, counter) cfg.add_section(section_name) cfg.set(section_name, "type", "imap account") cfg.set(section_name, "uuid", account.itsUUID) cfg.set(section_name, "title", account.displayName) cfg.set(section_name, "host", account.host) cfg.set(section_name, "username", account.username) cfg.set(section_name, "password", account.password) cfg.set(section_name, "name", account.replyToAddress.fullName) cfg.set(section_name, "address", account.replyToAddress.emailAddress) cfg.set(section_name, "port", account.port) cfg.set(section_name, "security", account.connectionSecurity) if account.defaultSMTPAccount: cfg.set(section_name, "smtp", account.defaultSMTPAccount.itsUUID) if account is currentAccount: cfg.set(section_name, "default", "True") counter 
+= 1 # POP accounts currentAccount = schema.ns("osaf.pim", rv).currentMailAccount.item section_prefix = "pop_account" counter = 1 for account in pim.mail.POPAccount.iterItems(rv): if account.isActive and account.host: section_name = "%s_%d" % (section_prefix, counter) cfg.add_section(section_name) cfg.set(section_name, "type", "pop account") cfg.set(section_name, "uuid", account.itsUUID) cfg.set(section_name, "title", account.displayName) cfg.set(section_name, "host", account.host) cfg.set(section_name, "username", account.username) cfg.set(section_name, "password", account.password) cfg.set(section_name, "name", account.replyToAddress.fullName) cfg.set(section_name, "address", account.replyToAddress.emailAddress) cfg.set(section_name, "port", account.port) cfg.set(section_name, "security", account.connectionSecurity) cfg.set(section_name, "leave", account.leaveOnServer) if account.defaultSMTPAccount: cfg.set(section_name, "smtp", account.defaultSMTPAccount.itsUUID) if account is currentAccount: cfg.set(section_name, "default", "True") counter += 1 # Show timezones cfg.add_section("timezones") showTZ = schema.ns("osaf.app", rv).TimezonePrefs.showUI cfg.set("timezones", "type", "show timezones") cfg.set("timezones", "show_timezones", showTZ) # Visible hours cfg.add_section("visible_hours") cfg.set("visible_hours", "type", "visible hours") calPrefs = schema.ns("osaf.framework.blocks.calendar", rv).calendarPrefs cfg.set("visible_hours", "height_mode", calPrefs.hourHeightMode) cfg.set("visible_hours", "num_hours", calPrefs.visibleHours) # Event Logger cfg.add_section("event_logger") eventHook = schema.ns("eventLogger", rv).EventLoggingHook cfg.set("event_logger", "type", "event logger") active = eventHook.logging cfg.set("event_logger", "active", active) output = file(filename, "w") cfg.write(output) output.close()
5fcc26e162e0c42f8a4ee0116787795eb4f10175 /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/9228/5fcc26e162e0c42f8a4ee0116787795eb4f10175/settings.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1923, 12, 4962, 16, 1544, 4672, 3536, 4755, 3170, 1947, 1779, 16, 6508, 777, 21001, 9484, 471, 24123, 261, 3350, 2437, 9487, 578, 16445, 3631, 358, 392, 2120, 45, 585, 8395, 225, 2776, 2...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1923, 12, 4962, 16, 1544, 4672, 3536, 4755, 3170, 1947, 1779, 16, 6508, 777, 21001, 9484, 471, 24123, 261, 3350, 2437, 9487, 578, 16445, 3631, 358, 392, 2120, 45, 585, 8395, 225, 2776, 2...
EditDesc(self.name, subject=line1(self.desc), desc=self.desc,
EditDesc(self.name, desc=self.desc,
def Flush(self, ui, repo): if self.name == "new": self.Upload(ui, repo) dir = CodeReviewDir(ui, repo) path = dir + '/cl.' + self.name f = open(path+'!', "w") f.write(self.DiskText()) f.close() os.rename(path+'!', path) if self.web: EditDesc(self.name, subject=line1(self.desc), desc=self.desc, reviewers=JoinComma(self.reviewer), cc=JoinComma(self.cc))
3923648111167109b264da8caaf1d3a99d9cfe9a /local1/tlutelli/issta_data/temp/all_python//python/2009_temp/2009/5761/3923648111167109b264da8caaf1d3a99d9cfe9a/codereview.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 11624, 12, 2890, 16, 5915, 16, 3538, 4672, 309, 365, 18, 529, 422, 315, 2704, 6877, 365, 18, 4777, 12, 4881, 16, 3538, 13, 1577, 273, 3356, 9159, 1621, 12, 4881, 16, 3538, 13, 589, 2...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 11624, 12, 2890, 16, 5915, 16, 3538, 4672, 309, 365, 18, 529, 422, 315, 2704, 6877, 365, 18, 4777, 12, 4881, 16, 3538, 13, 1577, 273, 3356, 9159, 1621, 12, 4881, 16, 3538, 13, 589, 2...
def testbuildrequires(self): self.launchtest("./cyg-apt install " + self.package_name, self.v)[0] requiresout = self.launchtest\ ("./cyg-apt buildrequires " + self.package_name, self.v)[0].split() self.assert_(self.package_name_2 in requiresout)
def testrequires(self): # package 2 is a dependency for package utilpack.popen_ext\ ("./cyg-apt install " + self.package_name, self.v)[0] requiresout = utilpack.popen_ext\ ("./cyg-apt requires " + self.package_name, self.v)[0].split() self.assert_(self.package_name_2 in requiresout)
48fce58f11cf6632c412de438496f6b0f42d5b00 /local1/tlutelli/issta_data/temp/all_python//python/2009_temp/2009/9945/48fce58f11cf6632c412de438496f6b0f42d5b00/test-cyg-apt.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1842, 18942, 12, 2890, 4672, 468, 2181, 576, 353, 279, 4904, 364, 2181, 1709, 2920, 18, 84, 3190, 67, 408, 64, 7566, 18, 19, 2431, 75, 17, 1657, 3799, 315, 397, 365, 18, 5610, 67, 52...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1842, 18942, 12, 2890, 4672, 468, 2181, 576, 353, 279, 4904, 364, 2181, 1709, 2920, 18, 84, 3190, 67, 408, 64, 7566, 18, 19, 2431, 75, 17, 1657, 3799, 315, 397, 365, 18, 5610, 67, 52...
dev_linkername = env.LinkerNameLink( install_dirs.lib+'/'+linker_name, install_dirs.lib+'/'+soname) env.Alias( 'install_'+name+'_dev', [install_headers,dev_linkername, install_descriptor])
env.Alias( 'install_'+name+'_runtime', [ install_lib, install_soname ]) env.Alias( 'install_'+name+'_dev', [ install_headers, install_linkername, install_descriptor, ]) install_tgt = env.Alias( 'install_' + name, [ 'install_'+name+'_dev', 'install_'+name+'_runtime', ])
def posix_lib_rules( name, version, headers, sources, install_dirs, env, moduleDependencies=[]) : #for file in sources : # print "file to compile: " + str(file) lib_descriptor = env.File( 'clam_'+name+'.pc' ) # We expect a version like " X.Y-possibleextrachars " versionnumbers = version.split('.') if len(versionnumbers) != 3: print " ERROR in buildtools.posix_lib_rules: version name does not follow CLAM standard " print " Check the variable 'version' in the main SConstruct" sys.exit(1) if sys.platform == 'linux2' : libname = 'libclam_'+name+'.so.%s.%s.%s' % (versionnumbers[0], versionnumbers[1], versionnumbers[2]) middle_linker_name = 'libclam_'+name+'.so.%s.%s' % (versionnumbers[0], versionnumbers[1]) soname = 'libclam_'+name+'.so.%s' % versionnumbers[0] linker_name = 'libclam_'+name+'.so' env.Append(SHLINKFLAGS=['-Wl,-soname,%s'%soname ] ) lib = env.SharedLibrary( 'clam_' + name, sources, SHLIBSUFFIX='.so.%s'%version ) soname_lib = env.SonameLink( soname, lib ) # lib***.so.X.Y -> lib***.so.X.Y.Z linkername_lib = env.LinkerNameLink( linker_name, soname_lib ) # lib***.so -> lib***.so.X env.Depends(lib, ['../%s/libclam_%s.so.%s'%(module,module,versionnumbers[0]) for module in moduleDependencies ]) else : #darwin soname = 'libclam_'+name+'.%s.dylib' % versionnumbers[0] middle_linker_name = 'libclam_'+name+'.%s.%s.dylib' % (versionnumbers[0], versionnumbers[1]) linker_name = 'libclam_'+name+'.dylib' env.Append( CCFLAGS=['-fno-common'] ) env.Append( SHLINKFLAGS=['-dynamic', '-Wl,-install_name,%s'%(install_dirs.lib + '/' + 'libclam_' + name + '.%s.dylib'%(version))] ) lib = env.SharedLibrary( 'clam_' + name, sources, SHLIBSUFFIX='.%s.dylib'%version ) soname_lib = env.LinkerNameLink( middle_linker_name, lib ) # lib***.X.Y.dylib -> lib***.X.Y.Z.dylib middlelinkername_lib = env.LinkerNameLink( soname, soname_lib ) # lib***.so.X -> lib***.so.X.Y linkername_lib = env.LinkerNameLink( linker_name, middlelinkername_lib) # lib***.dylib -> lib***.X.dylib tgt = env.Alias( name, 
linkername_lib ) install_headers = env.Install( install_dirs.inc+'/CLAM', headers ) env.AddPostAction( install_headers, "chmod 644 $TARGET" ) install_lib = env.Install( install_dirs.lib, lib) env.AddPostAction( install_lib, Action(make_lib_names, make_lib_names_message ) ) install_descriptor = env.Install( install_dirs.lib+'/pkgconfig', lib_descriptor ) install_tgt = env.Alias( 'install_' + name, [install_headers, install_lib, install_descriptor] ) runtime_lib = env.Install( install_dirs.lib, lib ) runtime_soname = env.SonameLink( install_dirs.lib + '/' + soname, runtime_lib ) env.Alias( 'install_'+name+'_runtime', [runtime_lib, runtime_soname] ) env.Append(CPPDEFINES="CLAM_MODULE='\"%s\"'"%name)
67ac7ca4e8c2db221b1217ff72c187ecefac0571 /local1/tlutelli/issta_data/temp/all_python//python/2007_temp/2007/1456/67ac7ca4e8c2db221b1217ff72c187ecefac0571/rulesets.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 16366, 67, 2941, 67, 7482, 12, 508, 16, 1177, 16, 1607, 16, 5550, 16, 3799, 67, 8291, 16, 1550, 16, 1605, 8053, 22850, 5717, 294, 225, 468, 1884, 585, 316, 5550, 294, 468, 202, 1188, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 16366, 67, 2941, 67, 7482, 12, 508, 16, 1177, 16, 1607, 16, 5550, 16, 3799, 67, 8291, 16, 1550, 16, 1605, 8053, 22850, 5717, 294, 225, 468, 1884, 585, 316, 5550, 294, 468, 202, 1188, ...
return self.replyToAddress.fullName
if self.replyToAddress: return self.replyToAddress.fullName return None
def fget(self): return self.replyToAddress.fullName
f368abdd7cb773f9ab5ef9e88bd1637612f221bb /local1/tlutelli/issta_data/temp/all_python//python/2007_temp/2007/9228/f368abdd7cb773f9ab5ef9e88bd1637612f221bb/mail.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 284, 588, 12, 2890, 4672, 327, 365, 18, 10629, 774, 1887, 18, 2854, 461, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 284, 588, 12, 2890, 4672, 327, 365, 18, 10629, 774, 1887, 18, 2854, 461, 2, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -1...
library_dirs, self.runtime_library_dirs,
library_dirs, runtime_library_dirs,
def link_executable (self, objects, output_progname, output_dir=None, libraries=None, library_dirs=None, debug=0, extra_preargs=None, extra_postargs=None): (objects, output_dir, libraries, library_dirs) = \ self._fix_link_args (objects, output_dir, takes_libs=1, libraries=libraries, library_dirs=library_dirs)
a64ed69ae2481319820dc294beb9a6b0f90cd046 /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/8125/a64ed69ae2481319820dc294beb9a6b0f90cd046/unixccompiler.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1692, 67, 17751, 261, 2890, 16, 2184, 16, 876, 67, 14654, 529, 16, 876, 67, 1214, 33, 7036, 16, 14732, 33, 7036, 16, 5313, 67, 8291, 33, 7036, 16, 1198, 33, 20, 16, 2870, 67, 1484, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1692, 67, 17751, 261, 2890, 16, 2184, 16, 876, 67, 14654, 529, 16, 876, 67, 1214, 33, 7036, 16, 14732, 33, 7036, 16, 5313, 67, 8291, 33, 7036, 16, 1198, 33, 20, 16, 2870, 67, 1484, ...
x -- number m -- integer
x,m -- numbers or symbolic expressions Either x or x-m must be an integer.
def binomial(x,m): r""" Return the binomial coefficient $$ x (x-1) \cdots (x-m+1) / m! $$ which is defined for $m \in \Z$ and any $x$. If $m<0$ return $0$. INPUT:: x -- number m -- integer OUTPUT:: number EXAMPLES:: sage: binomial(5,2) 10 sage: binomial(2,0) 1 sage: binomial(1/2, 0) 1 sage: binomial(3,-1) 0 sage: binomial(20,10) 184756 sage: binomial(RealField()('2.5'), 2) 1.87500000000000 """ if not isinstance(m, (int, long, integer.Integer)): raise TypeError, 'm must be an integer' if isinstance(x, (int, long, integer.Integer)): return integer_ring.ZZ(pari(x).binomial(m)) try: P = x.parent() except AttributeError: P = type(x) if m < 0: return P(0) return misc.prod([x-i for i in xrange(m)]) / P(factorial(m))
4b14ad40e17006a91ec9c2ff9115046fc951e4f2 /local1/tlutelli/issta_data/temp/all_python//python/2007_temp/2007/9417/4b14ad40e17006a91ec9c2ff9115046fc951e4f2/arith.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 4158, 11496, 12, 92, 16, 81, 4672, 436, 8395, 2000, 326, 4158, 11496, 16554, 5366, 619, 261, 92, 17, 21, 13, 521, 4315, 6968, 261, 92, 17, 81, 15, 21, 13, 342, 312, 5, 5366, 1492, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 4158, 11496, 12, 92, 16, 81, 4672, 436, 8395, 2000, 326, 4158, 11496, 16554, 5366, 619, 261, 92, 17, 21, 13, 521, 4315, 6968, 261, 92, 17, 81, 15, 21, 13, 342, 312, 5, 5366, 1492, ...
mismatched.append('%s, expected: %s, actual: %s' % (key, value, actual[key]))
mismatched.append('%s, expected: %s, actual: %s' % (key, value, actual[key]))
def assertDictContainsSubset(self, expected, actual, msg=None): """Checks whether actual is a superset of expected.""" missing = [] mismatched = [] for key, value in expected.iteritems(): if key not in actual: missing.append(key) elif value != actual[key]: mismatched.append('%s, expected: %s, actual: %s' % (key, value, actual[key]))
60c19399c1aa047e5f8d8048b445290bca0a69fe /local1/tlutelli/issta_data/temp/all_python//python/2009_temp/2009/8125/60c19399c1aa047e5f8d8048b445290bca0a69fe/case.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1815, 5014, 10846, 20315, 12, 2890, 16, 2665, 16, 3214, 16, 1234, 33, 7036, 4672, 3536, 4081, 2856, 3214, 353, 279, 1169, 21686, 434, 2665, 12123, 3315, 273, 5378, 7524, 11073, 273, 5378, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1815, 5014, 10846, 20315, 12, 2890, 16, 2665, 16, 3214, 16, 1234, 33, 7036, 4672, 3536, 4081, 2856, 3214, 353, 279, 1169, 21686, 434, 2665, 12123, 3315, 273, 5378, 7524, 11073, 273, 5378, ...
def getfield(self, packet, s): opsz = (packet.dataofs-5)*4
def getfield(self, pkt, s): opsz = (pkt.dataofs-5)*4
def getfield(self, packet, s): opsz = (packet.dataofs-5)*4 if opsz < 0: warning("bad dataofs (%i). Assuming dataofs=5"%packet.dataofs) opsz = 0 return s[opsz:],self.m2i(pkt,s[:opsz])
722a4073400c335f86f3f613ddc3e31e4e032485 /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/7311/722a4073400c335f86f3f613ddc3e31e4e032485/scapy.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 336, 1518, 12, 2890, 16, 11536, 16, 272, 4672, 6727, 94, 273, 261, 5465, 88, 18, 892, 792, 87, 17, 25, 17653, 24, 309, 6727, 94, 411, 374, 30, 3436, 2932, 8759, 501, 792, 87, 6142, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 336, 1518, 12, 2890, 16, 11536, 16, 272, 4672, 6727, 94, 273, 261, 5465, 88, 18, 892, 792, 87, 17, 25, 17653, 24, 309, 6727, 94, 411, 374, 30, 3436, 2932, 8759, 501, 792, 87, 6142, ...
sage: [v.weight() for v in CrystalOfLetters(['E',7])] [(0, 0, 0, 0, 0, 1, -1/2, 1/2), (0, 0, 0, 0, 1, 0, -1/2, 1/2), (0, 0, 0, 1, 0, 0, -1/2, 1/2), (0, 0, 1, 0, 0, 0, -1/2, 1/2), (0, 1, 0, 0, 0, 0, -1/2, 1/2), (-1, 0, 0, 0, 0, 0, -1/2, 1/2), (1, 0, 0, 0, 0, 0, -1/2, 1/2), (1/2, 1/2, 1/2, 1/2, 1/2, 1/2, 0, 0), (0, -1, 0, 0, 0, 0, -1/2, 1/2), (-1/2, -1/2, 1/2, 1/2, 1/2, 1/2, 0, 0), (0, 0, -1, 0, 0, 0, -1/2, 1/2), (-1/2, 1/2, -1/2, 1/2, 1/2, 1/2, 0, 0), (1/2, -1/2, -1/2, 1/2, 1/2, 1/2, 0, 0), (0, 0, 0, -1, 0, 0, -1/2, 1/2), (-1/2, 1/2, 1/2, -1/2, 1/2, 1/2, 0, 0), (1/2, -1/2, 1/2, -1/2, 1/2, 1/2, 0, 0), (1/2, 1/2, -1/2, -1/2, 1/2, 1/2, 0, 0), (-1/2, -1/2, -1/2, -1/2, 1/2, 1/2, 0, 0), (0, 0, 0, 0, -1, 0, -1/2, 1/2), (-1/2, 1/2, 1/2, 1/2, -1/2, 1/2, 0, 0), (1/2, -1/2, 1/2, 1/2, -1/2, 1/2, 0, 0), (1/2, 1/2, -1/2, 1/2, -1/2, 1/2, 0, 0), (-1/2, -1/2, -1/2, 1/2, -1/2, 1/2, 0, 0), (1/2, 1/2, 1/2, -1/2, -1/2, 1/2, 0, 0), (-1/2, -1/2, 1/2, -1/2, -1/2, 1/2, 0, 0), (-1/2, 1/2, -1/2, -1/2, -1/2, 1/2, 0, 0), (1/2, -1/2, -1/2, -1/2, -1/2, 1/2, 0, 0), (0, 0, 0, 0, 0, 1, 1/2, -1/2), (0, 0, 0, 0, 0, -1, -1/2, 1/2), (-1/2, 1/2, 1/2, 1/2, 1/2, -1/2, 0, 0), (1/2, -1/2, 1/2, 1/2, 1/2, -1/2, 0, 0), (1/2, 1/2, -1/2, 1/2, 1/2, -1/2, 0, 0), (-1/2, -1/2, -1/2, 1/2, 1/2, -1/2, 0, 0), (1/2, 1/2, 1/2, -1/2, 1/2, -1/2, 0, 0), (-1/2, -1/2, 1/2, -1/2, 1/2, -1/2, 0, 0), (-1/2, 1/2, -1/2, -1/2, 1/2, -1/2, 0, 0), (1/2, -1/2, -1/2, -1/2, 1/2, -1/2, 0, 0), (0, 0, 0, 0, 1, 0, 1/2, -1/2), (1/2, 1/2, 1/2, 1/2, -1/2, -1/2, 0, 0), (-1/2, -1/2, 1/2, 1/2, -1/2, -1/2, 0, 0), (-1/2, 1/2, -1/2, 1/2, -1/2, -1/2, 0, 0), (1/2, -1/2, -1/2, 1/2, -1/2, -1/2, 0, 0), (0, 0, 0, 1, 0, 0, 1/2, -1/2), (-1/2, 1/2, 1/2, -1/2, -1/2, -1/2, 0, 0), (1/2, -1/2, 1/2, -1/2, -1/2, -1/2, 0, 0), (0, 0, 1, 0, 0, 0, 1/2, -1/2), (1/2, 1/2, -1/2, -1/2, -1/2, -1/2, 0, 0), (0, 1, 0, 0, 0, 0, 1/2, -1/2), (1, 0, 0, 0, 0, 0, 1/2, -1/2), (0, -1, 0, 0, 0, 0, 1/2, -1/2), (0, 0, -1, 0, 0, 0, 1/2, -1/2), (0, 0, 0, -1, 0, 0, 1/2, -1/2), (0, 0, 0, 0, -1, 
0, 1/2, -1/2), (0, 0, 0, 0, 0, -1, 1/2, -1/2), (-1/2, -1/2, -1/2, -1/2, -1/2, -1/2, 0, 0), (-1, 0, 0, 0, 0, 0, 1/2, -1/2)]
sage: [v.weight() for v in CrystalOfLetters(['E',7])] [(0, 0, 0, 0, 0, 1, -1/2, 1/2), (0, 0, 0, 0, 1, 0, -1/2, 1/2), (0, 0, 0, 1, 0, 0, -1/2, 1/2), (0, 0, 1, 0, 0, 0, -1/2, 1/2), (0, 1, 0, 0, 0, 0, -1/2, 1/2), (-1, 0, 0, 0, 0, 0, -1/2, 1/2), (1, 0, 0, 0, 0, 0, -1/2, 1/2), (1/2, 1/2, 1/2, 1/2, 1/2, 1/2, 0, 0), (0, -1, 0, 0, 0, 0, -1/2, 1/2), (-1/2, -1/2, 1/2, 1/2, 1/2, 1/2, 0, 0), (0, 0, -1, 0, 0, 0, -1/2, 1/2), (-1/2, 1/2, -1/2, 1/2, 1/2, 1/2, 0, 0), (1/2, -1/2, -1/2, 1/2, 1/2, 1/2, 0, 0), (0, 0, 0, -1, 0, 0, -1/2, 1/2), (-1/2, 1/2, 1/2, -1/2, 1/2, 1/2, 0, 0), (1/2, -1/2, 1/2, -1/2, 1/2, 1/2, 0, 0), (1/2, 1/2, -1/2, -1/2, 1/2, 1/2, 0, 0), (-1/2, -1/2, -1/2, -1/2, 1/2, 1/2, 0, 0), (0, 0, 0, 0, -1, 0, -1/2, 1/2), (-1/2, 1/2, 1/2, 1/2, -1/2, 1/2, 0, 0), (1/2, -1/2, 1/2, 1/2, -1/2, 1/2, 0, 0), (1/2, 1/2, -1/2, 1/2, -1/2, 1/2, 0, 0), (-1/2, -1/2, -1/2, 1/2, -1/2, 1/2, 0, 0), (1/2, 1/2, 1/2, -1/2, -1/2, 1/2, 0, 0), (-1/2, -1/2, 1/2, -1/2, -1/2, 1/2, 0, 0), (-1/2, 1/2, -1/2, -1/2, -1/2, 1/2, 0, 0), (1/2, -1/2, -1/2, -1/2, -1/2, 1/2, 0, 0), (0, 0, 0, 0, 0, 1, 1/2, -1/2), (0, 0, 0, 0, 0, -1, -1/2, 1/2), (-1/2, 1/2, 1/2, 1/2, 1/2, -1/2, 0, 0), (1/2, -1/2, 1/2, 1/2, 1/2, -1/2, 0, 0), (1/2, 1/2, -1/2, 1/2, 1/2, -1/2, 0, 0), (-1/2, -1/2, -1/2, 1/2, 1/2, -1/2, 0, 0), (1/2, 1/2, 1/2, -1/2, 1/2, -1/2, 0, 0), (-1/2, -1/2, 1/2, -1/2, 1/2, -1/2, 0, 0), (-1/2, 1/2, -1/2, -1/2, 1/2, -1/2, 0, 0), (1/2, -1/2, -1/2, -1/2, 1/2, -1/2, 0, 0), (0, 0, 0, 0, 1, 0, 1/2, -1/2), (1/2, 1/2, 1/2, 1/2, -1/2, -1/2, 0, 0), (-1/2, -1/2, 1/2, 1/2, -1/2, -1/2, 0, 0), (-1/2, 1/2, -1/2, 1/2, -1/2, -1/2, 0, 0), (1/2, -1/2, -1/2, 1/2, -1/2, -1/2, 0, 0), (0, 0, 0, 1, 0, 0, 1/2, -1/2), (-1/2, 1/2, 1/2, -1/2, -1/2, -1/2, 0, 0), (1/2, -1/2, 1/2, -1/2, -1/2, -1/2, 0, 0), (0, 0, 1, 0, 0, 0, 1/2, -1/2), (1/2, 1/2, -1/2, -1/2, -1/2, -1/2, 0, 0), (0, 1, 0, 0, 0, 0, 1/2, -1/2), (1, 0, 0, 0, 0, 0, 1/2, -1/2), (0, -1, 0, 0, 0, 0, 1/2, -1/2), (0, 0, -1, 0, 0, 0, 1/2, -1/2), (0, 0, 0, -1, 0, 0, 1/2, -1/2), (0, 0, 0, 0, -1, 
0, 1/2, -1/2), (0, 0, 0, 0, 0, -1, 1/2, -1/2), (-1/2, -1/2, -1/2, -1/2, -1/2, -1/2, 0, 0), (-1, 0, 0, 0, 0, 0, 1/2, -1/2)]
def weight(self): """ Returns the weight of self.
74bd21d220c0f1ff36446f7fd328a071d1649391 /local1/tlutelli/issta_data/temp/all_python//python/2009_temp/2009/9890/74bd21d220c0f1ff36446f7fd328a071d1649391/letters.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 3119, 12, 2890, 4672, 3536, 2860, 326, 3119, 434, 365, 18, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 3119, 12, 2890, 4672, 3536, 2860, 326, 3119, 434, 365, 18, 2, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, ...
def setassignee(self,id,assigned_to='',reporter='',qa_contact='',comment=''): '''Set any of the assigned_to, reporter, or qa_contact fields to a new bugzilla account, with an optional comment, e.g. setassignee(id,reporter='sadguy@brokencomputer.org', assigned_to='wwoods@redhat.com') setassignee(id,qa_contact='wwoods@redhat.com',comment='wwoods QA ftw') You must set at least one of the three assignee fields, or this method will throw a ValueError. Returns [bug_id, mailresults].''' if not assigned_to or reporter or qa_contact: raise ValueError, "You must set one of assigned_to, reporter, or qa_contact" return self._setassignee(id,assigned_to=assigned_to,reporter=reporter, qa_contact=qa_contact,comment=comment)
def _setassignee(self,id,**data): '''Raw xmlrpc call to set one of the assignee fields on a bug. changeAssignment($id, $data, $username, $password) data: 'assigned_to','reporter','qa_contact','comment' returns: [$id, $mailresults]''' return self._proxy.bugzilla.changeAssignment(id,data,self.user,self.password)
aa7576fa472b30b51688436cb9924b3c0e76dc75 /local1/tlutelli/issta_data/temp/all_python//python/2007_temp/2007/5050/aa7576fa472b30b51688436cb9924b3c0e76dc75/bugzilla.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 389, 542, 6145, 1340, 12, 2890, 16, 350, 16, 636, 892, 4672, 9163, 4809, 31811, 745, 358, 444, 1245, 434, 326, 2683, 1340, 1466, 603, 279, 7934, 18, 2549, 7729, 4372, 350, 16, 271, 892...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 389, 542, 6145, 1340, 12, 2890, 16, 350, 16, 636, 892, 4672, 9163, 4809, 31811, 745, 358, 444, 1245, 434, 326, 2683, 1340, 1466, 603, 279, 7934, 18, 2549, 7729, 4372, 350, 16, 271, 892...
else:
elif hasattr(i, "DesktopEntry"):
def __init__(self, obj): gtk.ToggleButton.__init__(self) self.set_relief(gtk.RELIEF_NONE) self.set_name("EdgeButton") self.show() icon = obj.getIcon() try: image = gtk.Image() pixbuf = gtk.gdk.pixbuf_new_from_file_at_size(\ xdg.IconTheme.getIconPath(icon, 24), 24, 24) image.set_from_pixbuf(pixbuf) image.show() except: image = gtk.image_new_from_stock(gtk.STOCK_EXECUTE, gtk.ICON_SIZE_BUTTON) image.show() pixbuf = None self.add(image) self.menu_name = obj.getName().replace('&', '&amp;') self.menu_description = obj.getComment().replace('&', '&amp;') self.items = FoopanelMenuWindow(self.menu_name, self.menu_description, pixbuf) for i in obj.getEntries(): if isinstance(i, xdg.Menu.Separator): ib = separator() else: ib = item(i.DesktopEntry) if len(i.DesktopEntry.getOnlyShowIn()) > 0: del(ib) continue ib.connect("button-release-event", self.cb_clicked) self.items.append(ib)
4d652a352ff9487abf5cea294de3c89ecd5af1f5 /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/2746/4d652a352ff9487abf5cea294de3c89ecd5af1f5/__init__.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1001, 2738, 972, 12, 2890, 16, 1081, 4672, 225, 22718, 18, 17986, 3616, 16186, 2738, 972, 12, 2890, 13, 225, 365, 18, 542, 67, 266, 549, 10241, 12, 4521, 79, 18, 862, 2053, 26897, 67, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1001, 2738, 972, 12, 2890, 16, 1081, 4672, 225, 22718, 18, 17986, 3616, 16186, 2738, 972, 12, 2890, 13, 225, 365, 18, 542, 67, 266, 549, 10241, 12, 4521, 79, 18, 862, 2053, 26897, 67, ...
do_cmd('cl vmmain.c /D "inline=" /Od /Zi /MD /Fdvm.pdb /Fmvm.map /Fevm.exe')
do_cmd('cl vmmain.c /Od /Zi /MD /Fdvm.pdb /Fmvm.map /Fevm.exe')
def build_vs(): # How to compile on windows with Visual Studio: # Call the batch script that sets environement variables for Visual Studio and # then run this script. # For VS 2005 the script is: # "C:\Program Files\Microsoft Visual Studio 8\Common7\Tools\vsvars32.bat" # For VS 2008: "C:\Program Files\Microsoft Visual Studio 9.0\Common7\Tools\vsvars32.bat" # Doesn't compile with vc6 (no variadic macros) # Note: /MD option causes to dynamically link with msvcrt80.dll. This dramatically # reduces size (for vm.exe 159k => 49k). Downside is that msvcrt80.dll must be # present on the system (and not all windows machine have it). You can either re-distribute # msvcrt80.dll or statically link with C runtime by changing /MD to /MT. mods = CORE[:]; mods.append('tests') os.chdir(os.path.join(TOPDIR,'tinypy')) do_cmd('cl vmmain.c /D "inline=" /Od /Zi /MD /Fdvm.pdb /Fmvm.map /Fevm.exe') do_cmd('python tests.py -win') for mod in mods: do_cmd('python py2bc.py %s.py %s.tpc'%(mod,mod)) do_cmd('vm.exe tests.tpc -win') for mod in mods: do_cmd('vm.exe py2bc.tpc %s.py %s.tpc'%(mod,mod)) build_bc() do_cmd('cl /Od tpmain.c /D "inline=" /Zi /MD /Fdtinypy.pdb /Fmtinypy.map /Fetinypy.exe') #second pass - builts optimized binaries and stuff do_cmd('tinypy.exe tests.py -win') for mod in mods: do_cmd('tinypy.exe py2bc.py %s.py %s.tpc -nopos'%(mod,mod)) build_bc(True) do_cmd('cl /Os vmmain.c /D "inline=__inline" /D "NDEBUG" /Gy /GL /Zi /MD /Fdvm.pdb /Fmvm.map /Fevm.exe /link /opt:ref /opt:icf') do_cmd('cl /Os tpmain.c /D "inline=__inline" /D "NDEBUG" /Gy /GL /Zi /MD /Fdtinypy.pdb /Fmtinypy.map /Fetinypy.exe /link /opt:ref,icf /OPT:NOWIN98') do_cmd("tinypy.exe tests.py -win") do_cmd("dir *.exe")
939971255330bfcaadc0de23f441278958bf5e35 /local1/tlutelli/issta_data/temp/all_python//python/2008_temp/2008/3895/939971255330bfcaadc0de23f441278958bf5e35/setup.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1361, 67, 6904, 13332, 468, 9017, 358, 4074, 603, 9965, 598, 26832, 934, 4484, 30, 468, 3049, 326, 2581, 2728, 716, 1678, 5473, 820, 3152, 364, 26832, 934, 4484, 471, 468, 1508, 1086, 33...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1361, 67, 6904, 13332, 468, 9017, 358, 4074, 603, 9965, 598, 26832, 934, 4484, 30, 468, 3049, 326, 2581, 2728, 716, 1678, 5473, 820, 3152, 364, 26832, 934, 4484, 471, 468, 1508, 1086, 33...
if token[0]=='BREAK' or token[0]=='\n': break
if token[0]=='BREAK': break elif token[0]=='\n': b.append(Text(token[1])) self.next()
def _parseStyledTag(self, style=None): token = self.token[0] if style is None: style = Style(token.t)
f69217c38d0410c9893bea4dbb7002511807b608 /local1/tlutelli/issta_data/temp/all_python//python/2008_temp/2008/12391/f69217c38d0410c9893bea4dbb7002511807b608/parser.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 389, 2670, 24273, 1259, 1805, 12, 2890, 16, 2154, 33, 7036, 4672, 225, 1147, 273, 365, 18, 2316, 63, 20, 65, 309, 2154, 353, 599, 30, 2154, 273, 9767, 12, 2316, 18, 88, 13, 2, 0, 0...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 389, 2670, 24273, 1259, 1805, 12, 2890, 16, 2154, 33, 7036, 4672, 225, 1147, 273, 365, 18, 2316, 63, 20, 65, 309, 2154, 353, 599, 30, 2154, 273, 9767, 12, 2316, 18, 88, 13, 2, -100, ...
'hour': float(wc_use.hour_nbr + (wc.time_start+wc.time_stop+cycle*wc.time_cycle) * (wc.time_efficiency or 1.0)),
'hour': float(wc_use.hour_nbr*mult + (wc.time_start+wc.time_stop+cycle*wc.time_cycle) * (wc.time_efficiency or 1.0)),
def _bom_explode(self, cr, uid, bom, factor, properties, addthis=False, level=0): factor = factor / (bom.product_efficiency or 1.0) factor = rounding(factor, bom.product_rounding) if factor<bom.product_rounding: factor = bom.product_rounding result = [] result2 = [] phantom=False if bom.type=='phantom' and not bom.bom_lines: newbom = self._bom_find(cr, uid, bom.product_id.id, bom.product_uom.id, properties) if newbom: res = self._bom_explode(cr, uid, self.browse(cr, uid, [newbom])[0], factor*bom.product_qty, properties, addthis=True, level=level+10) result = result + res[0] result2 = result2 + res[1] phantom=True else: phantom=False if not phantom: if addthis and not bom.bom_lines: result.append( { 'name': bom.product_id.name, 'product_id': bom.product_id.id, 'product_qty': bom.product_qty * factor, 'product_uom': bom.product_uom.id, 'product_uos_qty': bom.product_uos and bom.product_uos_qty * factor or False, 'product_uos': bom.product_uos and bom.product_uos.id or False, }) if bom.routing_id: for wc_use in bom.routing_id.workcenter_lines: wc = wc_use.workcenter_id d, m = divmod(factor, wc_use.workcenter_id.capacity_per_cycle) cycle = (d + (m and 1.0 or 0.0)) * wc_use.cycle_nbr result2.append({ 'name': bom.routing_id.name, 'workcenter_id': wc.id, 'sequence': level+(wc_use.sequence or 0), 'cycle': cycle, 'hour': float(wc_use.hour_nbr + (wc.time_start+wc.time_stop+cycle*wc.time_cycle) * (wc.time_efficiency or 1.0)), }) for bom2 in bom.bom_lines: res = self._bom_explode(cr, uid, bom2, factor, properties, addthis=True, level=level+10) result = result + res[0] result2 = result2 + res[1] return result, result2
ff9a7525c40e32d3d382be194ccc15459a802003 /local1/tlutelli/issta_data/temp/all_python//python/2009_temp/2009/7397/ff9a7525c40e32d3d382be194ccc15459a802003/mrp.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 389, 70, 362, 67, 2749, 2034, 12, 2890, 16, 4422, 16, 4555, 16, 28626, 16, 5578, 16, 1790, 16, 527, 2211, 33, 8381, 16, 1801, 33, 20, 4672, 5578, 273, 5578, 342, 261, 70, 362, 18, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 389, 70, 362, 67, 2749, 2034, 12, 2890, 16, 4422, 16, 4555, 16, 28626, 16, 5578, 16, 1790, 16, 527, 2211, 33, 8381, 16, 1801, 33, 20, 4672, 5578, 273, 5578, 342, 261, 70, 362, 18, ...
print '%d slices each of %0.2f radians here: %s' % (n, angleBetween, repr(angles))
def draw(self): # normalize slice data g = Group()
bc401eccaecda1f7c0fa807f2550dc7992d6f227 /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/7053/bc401eccaecda1f7c0fa807f2550dc7992d6f227/spider.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 3724, 12, 2890, 4672, 468, 3883, 2788, 501, 314, 273, 3756, 1435, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 3724, 12, 2890, 4672, 468, 3883, 2788, 501, 314, 273, 3756, 1435, 2, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,...
'share' : self.itsUUID,
'share' : self.share.itsUUID,
def _get(self, updateCallback=None):
afc475e96a713e7cb2c4273fea6d4b077b5b7127 /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/9228/afc475e96a713e7cb2c4273fea6d4b077b5b7127/Sharing.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 389, 588, 12, 2890, 16, 1089, 2428, 33, 7036, 4672, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 389, 588, 12, 2890, 16, 1089, 2428, 33, 7036, 4672, 2, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, ...
return self._group(other)
return self.parse(mv)
def __call__(self,other): """ EXAMPLES: sage: rubik = CubeGroup() sage: rubik(1) () """ return self._group(other)
ec7f2353d5f3404d448b1713a40a91d345a6d70b /local1/tlutelli/issta_data/temp/all_python//python/2007_temp/2007/9890/ec7f2353d5f3404d448b1713a40a91d345a6d70b/cubegroup.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1001, 1991, 972, 12, 2890, 16, 3011, 4672, 3536, 5675, 8900, 11386, 30, 272, 410, 30, 24997, 1766, 273, 385, 4895, 1114, 1435, 272, 410, 30, 24997, 1766, 12, 21, 13, 1832, 3536, 327, 3...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1001, 1991, 972, 12, 2890, 16, 3011, 4672, 3536, 5675, 8900, 11386, 30, 272, 410, 30, 24997, 1766, 273, 385, 4895, 1114, 1435, 272, 410, 30, 24997, 1766, 12, 21, 13, 1832, 3536, 327, 3...
jsfile = path.join(package_dir, 'locale', self.config.language, 'LC_MESSAGES', 'sphinx.js') if path.isfile(jsfile): self.script_files.append('_static/translations.js')
jsfile_list = [path.join(package_dir, 'locale', self.config.language, 'LC_MESSAGES', 'sphinx.js'), path.join(sys.prefix, 'share/sphinx/locale', self.config.language, 'sphinx.js')] for jsfile in jsfile_list: if path.isfile(jsfile): self.script_files.append('_static/translations.js') break
def init(self): # a hash of all config values that, if changed, cause a full rebuild self.config_hash = '' self.tags_hash = '' # section numbers for headings in the currently visited document self.secnumbers = {}
be23bc91da4c93fcf3d09d20430a92f9d9f25fc3 /local1/tlutelli/issta_data/temp/all_python//python/2010_temp/2010/5532/be23bc91da4c93fcf3d09d20430a92f9d9f25fc3/html.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1208, 12, 2890, 4672, 468, 279, 1651, 434, 777, 642, 924, 716, 16, 309, 3550, 16, 4620, 279, 1983, 13419, 365, 18, 1425, 67, 2816, 273, 875, 365, 18, 4156, 67, 2816, 273, 875, 468, 2...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1208, 12, 2890, 4672, 468, 279, 1651, 434, 777, 642, 924, 716, 16, 309, 3550, 16, 4620, 279, 1983, 13419, 365, 18, 1425, 67, 2816, 273, 875, 365, 18, 4156, 67, 2816, 273, 875, 468, 2...
author = "bbum, RonaldO, SteveM, many others stretching back through the reachtes of time...",
author = "bbum, RonaldO, SteveM, LeleG, many others stretching back through the reaches of time...",
def package_version(): fp = open('Modules/objc/pyobjc.h', 'r') for ln in fp.readlines(): if ln.startswith('#define OBJC_VERSION'): fp.close() return ln.split()[-1][1:-1] raise ValueError, "Version not found"
527df15af6d42dffdf094ccf38d5acd37d8f8929 /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/97/527df15af6d42dffdf094ccf38d5acd37d8f8929/setup.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 2181, 67, 1589, 13332, 4253, 273, 1696, 2668, 7782, 19, 2603, 71, 19, 2074, 2603, 71, 18, 76, 2187, 296, 86, 6134, 364, 7211, 316, 4253, 18, 896, 3548, 13332, 309, 7211, 18, 17514, 191...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 2181, 67, 1589, 13332, 4253, 273, 1696, 2668, 7782, 19, 2603, 71, 19, 2074, 2603, 71, 18, 76, 2187, 296, 86, 6134, 364, 7211, 316, 4253, 18, 896, 3548, 13332, 309, 7211, 18, 17514, 191...
self.top.wm_deiconify() self.top.tkraise()
def close(self): self.top.wm_deiconify() self.top.tkraise() reply = self.maybesave() if reply != "cancel": self._close() return reply
67716b5f53715e57d147cde9539b8d76a5a56e11 /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/8546/67716b5f53715e57d147cde9539b8d76a5a56e11/EditorWindow.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1746, 12, 2890, 4672, 4332, 273, 365, 18, 24877, 70, 281, 836, 1435, 309, 4332, 480, 315, 10996, 6877, 365, 6315, 4412, 1435, 327, 4332, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1746, 12, 2890, 4672, 4332, 273, 365, 18, 24877, 70, 281, 836, 1435, 309, 4332, 480, 315, 10996, 6877, 365, 6315, 4412, 1435, 327, 4332, 2, -100, -100, -100, -100, -100, -100, -100, -100...
r"""\b(?P<bt>(([a-z]+)?\s+bugs?|[a-z]+))\s+
r"""\b(?P<bt>(([a-z0-9]+)?\s+bugs?|[a-z]+))\s+
def bugSnarfer(self, irc, msg, match): r"""\b(?P<bt>(([a-z]+)?\s+bugs?|[a-z]+))\s+#?(?P<bug>\d+(?!\d*\.\d+)((,|\s*(and|en|et|und|ir))\s*#?\d+(?!\d*\.\d+))*)""" if msg.args[0][0] == '#' and not self.registryValue('bugSnarfer', msg.args[0]): return
94c2d4d21db1453bb737d293d825c11761514569 /local1/tlutelli/issta_data/temp/all_python//python/2007_temp/2007/3104/94c2d4d21db1453bb737d293d825c11761514569/plugin.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 7934, 10461, 297, 586, 12, 2890, 16, 277, 1310, 16, 1234, 16, 845, 4672, 436, 8395, 64, 70, 3680, 52, 32, 23602, 34, 12, 3816, 69, 17, 94, 7941, 10936, 87, 15, 19381, 35, 24162, 69, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 7934, 10461, 297, 586, 12, 2890, 16, 277, 1310, 16, 1234, 16, 845, 4672, 436, 8395, 64, 70, 3680, 52, 32, 23602, 34, 12, 3816, 69, 17, 94, 7941, 10936, 87, 15, 19381, 35, 24162, 69, ...
def __init__(self, is_default, tag, *args, **kw):
def __init__(self, tag, is_default=False, *args, **kw):
def __init__(self, is_default, tag, *args, **kw): self.set_default_val(is_default) super(RetentionTag, self).__init__(tag, *args, **kw) session.flush()
835466dda6773486ec7d952dfff4da194fdcaf8b /local1/tlutelli/issta_data/temp/all_python//python/2010_temp/2010/14755/835466dda6773486ec7d952dfff4da194fdcaf8b/model.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1001, 2738, 972, 12, 2890, 16, 1047, 16, 353, 67, 1886, 33, 8381, 16, 380, 1968, 16, 2826, 9987, 4672, 365, 18, 542, 67, 1886, 67, 1125, 12, 291, 67, 1886, 13, 2240, 12, 14688, 1805,...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1001, 2738, 972, 12, 2890, 16, 1047, 16, 353, 67, 1886, 33, 8381, 16, 380, 1968, 16, 2826, 9987, 4672, 365, 18, 542, 67, 1886, 67, 1125, 12, 291, 67, 1886, 13, 2240, 12, 14688, 1805,...
def splitbins(bins): bytes = sys.maxint for shift in range(16): bin1 = [] bin2 = []
def splitbins(t, trace=0): """t, trace=0 -> (t1, t2, shift). Split a table to save space. t is a sequence of ints. This function can be useful to save space if many of the ints are the same. t1 and t2 are lists of ints, and shift is an int, chosen to minimize the combined size of t1 and t2 (in C code), and where for each i in range(len(t)), t[i] == t2[(t1[i >> shift] << shift) + (i & mask)] where mask is a bitmask isolating the last "shift" bits. If optional arg trace is true (default false), progress info is printed to sys.stderr. """ import sys if trace: def dump(t1, t2, shift, bytes): print >>sys.stderr, "%d+%d bins at shift %d; %d bytes" % ( len(t1), len(t2), shift, bytes) print >>sys.stderr, "Size of original table:", len(t)*getsize(t), \ "bytes" n = len(t)-1 maxshift = 0 if n > 0: while n >> 1: n >>= 1 maxshift += 1 del n bytes = sys.maxint t = tuple(t) for shift in range(maxshift + 1): t1 = [] t2 = []
def splitbins(bins): # split a sparse integer table into two tables, such as: # value = t2[(t1[char>>shift]<<shift)+(char&mask)] # and value == 0 means no data bytes = sys.maxint for shift in range(16): bin1 = [] bin2 = [] size = 2**shift bincache = {} for i in range(0, len(bins), size): bin = bins[i:i+size] index = bincache.get(tuple(bin)) if index is None: index = len(bin2) bincache[tuple(bin)] = index for v in bin: if v is None: bin2.append(0) else: bin2.append(v) bin1.append(index>>shift) # determine memory size b = len(bin1)*getsize(bin1) + len(bin2)*getsize(bin2) if b < bytes: best = shift, bin1, bin2 bytes = b shift, bin1, bin2 = best
2101348830ff0d65cebd4caf886011f45bcc7618 /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/8546/2101348830ff0d65cebd4caf886011f45bcc7618/makeunicodedata.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1416, 11862, 12, 11862, 4672, 468, 1416, 279, 9387, 3571, 1014, 1368, 2795, 4606, 16, 4123, 487, 30, 468, 282, 460, 273, 268, 22, 63, 12, 88, 21, 63, 3001, 9778, 4012, 65, 17685, 4012,...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1416, 11862, 12, 11862, 4672, 468, 1416, 279, 9387, 3571, 1014, 1368, 2795, 4606, 16, 4123, 487, 30, 468, 282, 460, 273, 268, 22, 63, 12, 88, 21, 63, 3001, 9778, 4012, 65, 17685, 4012,...
debug(" non-local domain %s contains no embedded dot", domain)
_debug(" non-local domain %s contains no embedded dot", domain)
def set_ok_domain(self, cookie, request): if self.is_blocked(cookie.domain): debug(" domain %s is in user block-list", cookie.domain) return False if self.is_not_allowed(cookie.domain): debug(" domain %s is not in user allow-list", cookie.domain) return False if cookie.domain_specified: req_host, erhn = eff_request_host(request) domain = cookie.domain if self.strict_domain and (domain.count(".") >= 2): # XXX This should probably be compared with the Konqueror # (kcookiejar.cpp) and Mozilla implementations, but it's a # losing battle. i = domain.rfind(".") j = domain.rfind(".", 0, i) if j == 0: # domain like .foo.bar tld = domain[i+1:] sld = domain[j+1:i] if sld.lower() in ("co", "ac", "com", "edu", "org", "net", "gov", "mil", "int", "aero", "biz", "cat", "coop", "info", "jobs", "mobi", "museum", "name", "pro", "travel", "eu") and len(tld) == 2: # domain like .co.uk debug(" country-code second level domain %s", domain) return False if domain.startswith("."): undotted_domain = domain[1:] else: undotted_domain = domain embedded_dots = (undotted_domain.find(".") >= 0) if not embedded_dots and domain != ".local": debug(" non-local domain %s contains no embedded dot", domain) return False if cookie.version == 0: if (not erhn.endswith(domain) and (not erhn.startswith(".") and not ("."+erhn).endswith(domain))): debug(" effective request-host %s (even with added " "initial dot) does not end end with %s", erhn, domain) return False if (cookie.version > 0 or (self.strict_ns_domain & self.DomainRFC2965Match)): if not domain_match(erhn, domain): debug(" effective request-host %s does not domain-match " "%s", erhn, domain) return False if (cookie.version > 0 or (self.strict_ns_domain & self.DomainStrictNoDots)): host_prefix = req_host[:-len(domain)] if (host_prefix.find(".") >= 0 and not IPV4_RE.search(req_host)): debug(" host prefix %s for domain %s contains a dot", host_prefix, domain) return False return True
7fde00696ae6c872310a407df8b8923ab907f247 /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/8125/7fde00696ae6c872310a407df8b8923ab907f247/cookielib.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 444, 67, 601, 67, 4308, 12, 2890, 16, 3878, 16, 590, 4672, 309, 365, 18, 291, 67, 23156, 12, 8417, 18, 4308, 4672, 1198, 2932, 282, 2461, 738, 87, 353, 316, 729, 1203, 17, 1098, 3113...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 444, 67, 601, 67, 4308, 12, 2890, 16, 3878, 16, 590, 4672, 309, 365, 18, 291, 67, 23156, 12, 8417, 18, 4308, 4672, 1198, 2932, 282, 2461, 738, 87, 353, 316, 729, 1203, 17, 1098, 3113...
self.sendReply(protocol, message, protocol, address)
self.sendReply(protocol, message, address)
def recursiveLookupFailed(self, failure, message, protocol, address): message.rCode = dns.ESERVER self.sendReply(protocol, message, protocol, address) if self.verbose: log.msg("Recursive lookup failed")
1f97260d0d06fc0eba00bb6002d5778b7d491bc0 /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/12595/1f97260d0d06fc0eba00bb6002d5778b7d491bc0/server.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 5904, 6609, 2925, 12, 2890, 16, 5166, 16, 883, 16, 1771, 16, 1758, 4672, 883, 18, 86, 1085, 273, 6605, 18, 41, 4370, 365, 18, 4661, 7817, 12, 8373, 16, 883, 16, 1758, 13, 309, 365, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 5904, 6609, 2925, 12, 2890, 16, 5166, 16, 883, 16, 1771, 16, 1758, 4672, 883, 18, 86, 1085, 273, 6605, 18, 41, 4370, 365, 18, 4661, 7817, 12, 8373, 16, 883, 16, 1758, 13, 309, 365, ...
raise Exception
raise Exception("No phil object or path name supplied.")
def get_standard_phil_label (phil_object=None, phil_name=None, append="") : if phil_object is None and phil_name is None : raise Exception if phil_object is not None : if phil_object.short_caption is None : if phil_name is not None : phil_object.short_caption = reformat_phil_name(phil_name) else : phil_object.short_caption = reformat_phil_name(phil_object.name) return phil_object.short_caption + append else : return reformat_phil_name(phil_name)
7cd0159e0d6be37c22eb27f3dc0a85cb7d64734f /local1/tlutelli/issta_data/temp/all_python//python/2009_temp/2009/696/7cd0159e0d6be37c22eb27f3dc0a85cb7d64734f/interface.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 336, 67, 10005, 67, 844, 330, 67, 1925, 261, 844, 330, 67, 1612, 33, 7036, 16, 1844, 330, 67, 529, 33, 7036, 16, 714, 1546, 7923, 294, 309, 1844, 330, 67, 1612, 353, 599, 471, 1844, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 336, 67, 10005, 67, 844, 330, 67, 1925, 261, 844, 330, 67, 1612, 33, 7036, 16, 1844, 330, 67, 529, 33, 7036, 16, 714, 1546, 7923, 294, 309, 1844, 330, 67, 1612, 353, 599, 471, 1844, ...
scripts = ['idle']
scripts = [os.path.join(package_dir, 'idle')]
def _bytecode_filenames(self, files): files = [n for n in files if n.endswith('.py')] return install_lib._bytecode_filenames(self,files)
dc46175dc35d47df87e2955fec899cf471d3135f /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/8546/dc46175dc35d47df87e2955fec899cf471d3135f/setup.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 389, 1637, 16651, 67, 19875, 12, 2890, 16, 1390, 4672, 1390, 273, 306, 82, 364, 290, 316, 1390, 309, 290, 18, 5839, 1918, 2668, 18, 2074, 6134, 65, 327, 3799, 67, 2941, 6315, 1637, 166...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 389, 1637, 16651, 67, 19875, 12, 2890, 16, 1390, 4672, 1390, 273, 306, 82, 364, 290, 316, 1390, 309, 290, 18, 5839, 1918, 2668, 18, 2074, 6134, 65, 327, 3799, 67, 2941, 6315, 1637, 166...
[tags.invisible(render=tags.directive("diff")),
[tags.slot("diff"),
def render_user(self, context, data): return data.user
eeb67df63d05845b53ad47cd1ff7658325b074cc /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/5471/eeb67df63d05845b53ad47cd1ff7658325b074cc/server.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1743, 67, 1355, 12, 2890, 16, 819, 16, 501, 4672, 327, 501, 18, 1355, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1743, 67, 1355, 12, 2890, 16, 819, 16, 501, 4672, 327, 501, 18, 1355, 2, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,...
wait_time -= 1 time.sleep(1)
attempts -= 1 time.sleep(3)
def _UrlIsAlive(self, url): """Checks to see if we get an http response from |url|. We poll the url 5 times with a 1 second delay. If we don't get a reply in that time, we give up and assume the httpd didn't start properly.
e7896714379322d0c76eddee27af4e2c7cdcbefb /local1/tlutelli/issta_data/temp/all_python//python/2009_temp/2009/5060/e7896714379322d0c76eddee27af4e2c7cdcbefb/http_server.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 389, 1489, 2520, 10608, 12, 2890, 16, 880, 4672, 3536, 4081, 358, 2621, 309, 732, 336, 392, 1062, 766, 628, 571, 718, 96, 18, 1660, 7672, 326, 880, 1381, 4124, 598, 279, 404, 2205, 462...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 389, 1489, 2520, 10608, 12, 2890, 16, 880, 4672, 3536, 4081, 358, 2621, 309, 732, 336, 392, 1062, 766, 628, 571, 718, 96, 18, 1660, 7672, 326, 880, 1381, 4124, 598, 279, 404, 2205, 462...
parent = cols[i].get_widget().get_ancestor(gtk.Button)
parent = self.cols[i].get_widget().get_ancestor(gtk.Button)
def __init__(self, frame, ChatNotebook): self.frame = frame self.joinedrooms = {} self.autojoin = 1 self.rooms = [] self.privaterooms = {} self.OtherPrivateRooms = self.frame.np.config.sections["private_rooms"]["membership"] self.roomsmodel = gtk.ListStore(str, str, int, int) frame.roomlist.RoomsList.set_model(self.roomsmodel) cols = InitialiseColumns(frame.roomlist.RoomsList, [_("Room"), 150, "text", self.RoomStatus], [_("Users"), -1, "text", self.RoomStatus], ) cols[0].set_sort_column_id(0) cols[1].set_sort_column_id(2) self.roomsmodel.set_sort_func(2, self.PrivateRoomsSort, 2) self.roomsmodel.set_sort_column_id(2, gtk.SORT_ASCENDING) #cols[1].set_sort_indicator(True) for i in range (2): parent = cols[i].get_widget().get_ancestor(gtk.Button) if parent: parent.connect('button_press_event', PressHeader)
3461deb06acecf71d609bd51524fb250228a39af /local1/tlutelli/issta_data/temp/all_python//python/2007_temp/2007/8738/3461deb06acecf71d609bd51524fb250228a39af/chatrooms.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1001, 2738, 972, 12, 2890, 16, 2623, 16, 16903, 19751, 4672, 365, 18, 3789, 273, 2623, 365, 18, 5701, 329, 13924, 87, 273, 2618, 365, 18, 6079, 5701, 273, 404, 365, 18, 13924, 87, 273,...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1001, 2738, 972, 12, 2890, 16, 2623, 16, 16903, 19751, 4672, 365, 18, 3789, 273, 2623, 365, 18, 5701, 329, 13924, 87, 273, 2618, 365, 18, 6079, 5701, 273, 404, 365, 18, 13924, 87, 273,...
if len(prev_misc_txt) > 0: count_misc += 1
def convert_processed_reference_line_to_marc_xml(line): """Given a processed reference line, convert it to MARC XML. @param line: (string) - the processed reference line, in which the recognised citations have been tagged. @return: (tuple) - + xml_line (string) - the reference line with all of its identified citations marked up into the various subfields. + count_misc (integer) - number of sections of miscellaneous found in the line + count_title (integer) - number of title-citations found in the line + count_reportnum (integer) - number of report numbers found in the line + count_url (integer) - number of URLs found in the line """ count_misc = count_title = count_reportnum = count_url = 0 xml_line = "" previously_cited_item = None processed_line = line.lstrip() ## 1. Extract reference line marker (e.g. [1]) from start of line and tag it: ## get patterns to identify numeration markers at the start of lines: marker_patterns = get_reference_line_numeration_marker_patterns() marker_match = perform_regex_search_upon_line_with_pattern_list(processed_line, marker_patterns) if marker_match is not None: ## found a marker: marker_val = marker_match.group(u'mark') ## trim the marker from the start of the line: processed_line = processed_line[marker_match.end():].lstrip() else: marker_val = u" " ## Now display the marker in marked-up XML: xml_line += _refextract_markup_reference_line_marker_as_marcxml(marker_val) ## 2. Loop through remaining identified segments in line and tag them into MARC XML segments: cur_misc_txt = u"" ## a marker to hold gathered miscellaneous text before a citation tag_match = sre_tagged_citation.search(processed_line) while tag_match is not None: ## found a tag - process it: tag_match_start = tag_match.start() tag_match_end = tag_match.end() tag_type = tag_match.group(1) if tag_type == "TITLE": ## This tag is an identified journal TITLE. It should be followed by VOLUME, ## YEAR and PAGE tags. 
cur_misc_txt += processed_line[0:tag_match_start] ## extract the title from the line: idx_closing_tag = processed_line.find(CFG_REFEXTRACT_MARKER_CLOSING_TITLE, tag_match_end) ## Sanity check - did we find a closing TITLE tag? if idx_closing_tag == -1: ## no closing </cds.TITLE> tag found - strip the opening tag and move past it processed_line = processed_line[tag_match_end:] else: ## Closing tag was found: title_text = processed_line[tag_match_end:idx_closing_tag] ## Now trim this matched title and its tags from the start of the line: processed_line = processed_line[idx_closing_tag+len(CFG_REFEXTRACT_MARKER_CLOSING_TITLE):] ## Was this title followed by the tags of recognised VOLUME, YEAR and PAGE objects? numeration_match = sre_recognised_numeration_for_title.match(processed_line) if numeration_match is not None: ## recognised numeration immediately after the title - extract it: reference_volume = numeration_match.group(2) reference_year = numeration_match.group(3) reference_page = numeration_match.group(4) ## Skip past the matched numeration in the working line: processed_line = processed_line[numeration_match.end():] if previously_cited_item is None: ## There is no previously cited item - this should be added as the previously ## cited item: previously_cited_item = { 'type' : "TITLE", 'misc_txt' : cur_misc_txt, 'title' : title_text, 'volume' : reference_volume, 'year' : reference_year, 'page' : reference_page, } ## Now empty the miscellaneous text and title components: cur_misc_txt = "" title_text = "" reference_volume = "" reference_year = "" reference_page = "" elif (previously_cited_item is not None) and \ (previously_cited_item['type'] == "REPORTNUMBER") and \ (len(cur_misc_txt.lower().replace("arxiv", "").strip(".,:;- []")) == 0): ## This TITLE belongs with the REPORT NUMBER before it - add them both into ## the same datafield tag (REPORT NUMBER first, TITLE second): prev_report_num = previously_cited_item['report_num'] prev_misc_txt = 
previously_cited_item['misc_txt'].lstrip(".;, ").rstrip() xml_line += \ _refextract_markup_title_followed_by_report_number_as_marcxml(title_text, reference_volume, reference_year, reference_page, prev_report_num, prev_misc_txt) ## Increment the stats counters: if len(prev_misc_txt) > 0: count_misc += 1 count_title += 1 count_reportnum += 1 ## reset the various variables: previously_cited_item = None cur_misc_txt = u"" title_text = "" reference_volume = "" reference_year = "" reference_page = "" else: ## either the previously cited item is NOT a REPORT NUMBER, or this cited TITLE ## is preceeded by miscellaneous text. In either case, the two cited objects are ## not the same and do not belong together in the same datafield. if previously_cited_item['type'] == "REPORTNUMBER": ## previously cited item was a REPORT NUMBER. ## Add previously cited REPORT NUMBER to XML string: prev_report_num = previously_cited_item['report_num'] prev_misc_txt = previously_cited_item['misc_txt'].lstrip(".;, ").rstrip() xml_line += _refextract_markup_reportnumber_as_marcxml(prev_report_num, prev_misc_txt) ## Increment the stats counters: if len(prev_misc_txt) > 0: count_misc += 1 count_reportnum += 1 elif previously_cited_item['type'] == "TITLE": ## previously cited item was a TITLE. 
## Add previously cited TITLE to XML string: prev_title = previously_cited_item['title'] prev_volume = previously_cited_item['volume'] prev_year = previously_cited_item['year'] prev_page = previously_cited_item['page'] prev_misc_txt = previously_cited_item['misc_txt'].lstrip(".;, ").rstrip() xml_line += _refextract_markup_title_as_marcxml(prev_title, prev_volume, prev_year, prev_page, prev_misc_txt) ## Increment the stats counters: if len(prev_misc_txt) > 0: count_misc += 1 count_title += 1 ## Now add the current cited item into the previously cited item marker previously_cited_item = { 'type' : "TITLE", 'misc_txt' : cur_misc_txt, 'title' : title_text, 'volume' : reference_volume, 'year' : reference_year, 'page' : reference_page, } ## empty miscellaneous text cur_misc_txt = u"" title_text = "" reference_volume = "" reference_year = "" reference_page = "" else: ## No numeration was recognised after the title. Add the title into misc and carry on: cur_misc_txt += " %s" % title_text elif tag_type == "REPORTNUMBER": ## This tag is an identified institutional report number: ## Account for the miscellaneous text before the citation: cur_misc_txt += processed_line[0:tag_match_start] ## extract the institutional report-number from the line: idx_closing_tag = processed_line.find(CFG_REFEXTRACT_MARKER_CLOSING_REPORT_NUM, tag_match_end) ## Sanity check - did we find a closing report-number tag? 
if idx_closing_tag == -1: ## no closing </cds.REPORTNUMBER> tag found - strip the opening tag and move past this ## recognised reportnumber as it is unreliable: processed_line = processed_line[tag_match_end:] else: ## closing tag was found report_num = processed_line[tag_match_end:idx_closing_tag] ## now trim this matched institutional report-number and its tags from the start of the line: processed_line = processed_line[idx_closing_tag+len(CFG_REFEXTRACT_MARKER_CLOSING_REPORT_NUM):] ## Now, if there was a previous TITLE citation and this REPORT NUMBER citation one has no ## miscellaneous text after punctuation has been stripped, the two refer to the same object, ## so group them under the same datafield: if previously_cited_item is None: ## There is no previously cited item - this should be added as the previously ## cited item: previously_cited_item = { 'type' : "REPORTNUMBER", 'misc_txt' : "%s" % cur_misc_txt, 'report_num' : "%s" % report_num, } ## empty miscellaneous text cur_misc_txt = u"" report_num = u"" elif (previously_cited_item is not None) and \ (previously_cited_item['type'] == "TITLE") and \ (len(cur_misc_txt.lower().replace("arxiv", "").strip(".,:;- []")) == 0): ## This REPORT NUMBER belongs with the title before it - add them both into ## the same datafield tag (TITLE first, REPORT NUMBER second): prev_title = previously_cited_item['title'] prev_volume = previously_cited_item['volume'] prev_year = previously_cited_item['year'] prev_page = previously_cited_item['page'] prev_misc_txt = previously_cited_item['misc_txt'].lstrip(".;, ").rstrip() xml_line += \ _refextract_markup_title_followed_by_report_number_as_marcxml(prev_title, prev_volume, prev_year, prev_page, report_num, prev_misc_txt) ## Increment the stats counters: if len(prev_misc_txt) > 0: count_misc += 1 count_title += 1 count_reportnum += 1 ## Reset variables: previously_cited_item = None cur_misc_txt = u"" else: ## either the previously cited item is NOT a TITLE, or this cited REPORT 
NUMBER ## is preceeded by miscellaneous text. In either case, the two cited objects are ## not the same and do not belong together in the same datafield. if previously_cited_item['type'] == "REPORTNUMBER": ## previously cited item was a REPORT NUMBER. ## Add previously cited REPORT NUMBER to XML string: prev_report_num = previously_cited_item['report_num'] prev_misc_txt = previously_cited_item['misc_txt'].lstrip(".;, ").rstrip() xml_line += _refextract_markup_reportnumber_as_marcxml(prev_report_num, prev_misc_txt) ## Increment the stats counters: if len(prev_misc_txt) > 0: count_misc += 1 count_reportnum += 1 elif previously_cited_item['type'] == "TITLE": ## previously cited item was a TITLE. ## Add previously cited TITLE to XML string: prev_title = previously_cited_item['title'] prev_volume = previously_cited_item['volume'] prev_year = previously_cited_item['year'] prev_page = previously_cited_item['page'] prev_misc_txt = previously_cited_item['misc_txt'].lstrip(".;, ").rstrip() xml_line += _refextract_markup_title_as_marcxml(prev_title, prev_volume, prev_year, prev_page, prev_misc_txt) ## Increment the stats counters: if len(prev_misc_txt) > 0: count_misc += 1 count_title += 1 ## Now add the current cited item into the previously cited item marker previously_cited_item = { 'type' : "REPORTNUMBER", 'misc_txt' : "%s" % cur_misc_txt, 'report_num' : "%s" % report_num, } ## empty miscellaneous text cur_misc_txt = u"" report_num = u"" elif tag_type == "URL": ## This tag is an identified URL: ## Account for the miscellaneous text before the URL: cur_misc_txt += processed_line[0:tag_match_start] ## extract the URL information from within the tags in the line: idx_closing_tag = processed_line.find(CFG_REFEXTRACT_MARKER_CLOSING_URL, tag_match_end) ## Sanity check - did we find a closing URL tag? 
if idx_closing_tag == -1: ## no closing </cds.URL> tag found - strip the opening tag and move past it processed_line = processed_line[tag_match_end:] else: ## Closing tag was found: ## First, get the URL string from between the tags: url_string = processed_line[tag_match_end:idx_closing_tag] ## Now, get the URL description string from within the opening cds tag. E.g.: ## from <cds.URL description="abc"> get the "abc" value: opening_url_tag = processed_line[tag_match_start:tag_match_end] if opening_url_tag.find(u"""<cds.URL description=\"""") != -1: ## the description is present - extract it: ## (Stop 2 characters before the end of the string - we assume they are the ## closing characters '">'. url_descr = opening_url_tag[22:-2] else: ## There is no description - description should now be the url string: url_descr = url_string ## now trim this URL and its tags from the start of the line: processed_line = processed_line[idx_closing_tag+len(CFG_REFEXTRACT_MARKER_CLOSING_URL):] ## Build the MARC XML representation of this identified URL: if previously_cited_item is not None: ## There was a previously cited item. We must convert it to XML before we can ## convert this URL to XML: if previously_cited_item['type'] == "REPORTNUMBER": ## previously cited item was a REPORT NUMBER. ## Add previously cited REPORT NUMBER to XML string: prev_report_num = previously_cited_item['report_num'] prev_misc_txt = previously_cited_item['misc_txt'].lstrip(".;, ").rstrip() xml_line += _refextract_markup_reportnumber_as_marcxml(prev_report_num, prev_misc_txt) ## Increment the stats counters: if len(prev_misc_txt) > 0: count_misc += 1 count_reportnum += 1 elif previously_cited_item['type'] == "TITLE": ## previously cited item was a TITLE. 
## Add previously cited TITLE to XML string: prev_title = previously_cited_item['title'] prev_volume = previously_cited_item['volume'] prev_year = previously_cited_item['year'] prev_page = previously_cited_item['page'] prev_misc_txt = previously_cited_item['misc_txt'].lstrip(".;, ").rstrip() xml_line += _refextract_markup_title_as_marcxml(prev_title, prev_volume, prev_year, prev_page, prev_misc_txt) ## Increment the stats counters: if len(prev_misc_txt) > 0: count_misc += 1 count_title += 1 ## Empty the previously-cited item place-holder: previously_cited_item = None ## Now convert this URL to MARC XML cur_misc_txt = cur_misc_txt.lstrip(".;, ").rstrip() if url_string.find("http //") == 0: url_string = u"http://" + url_string[7:] elif url_string.find("ftp //") == 0: url_string = u"ftp://" + url_string[6:] if url_descr.find("http //") == 0: url_descr = u"http://" + url_descr[7:] elif url_descr.find("ftp //") == 0: url_descr = u"ftp://" + url_descr[6:] xml_line += \ _refextract_markup_url_as_marcxml(url_string, url_descr, cur_misc_txt) ## Increment the stats counters: if len(cur_misc_txt) > 0: count_misc += 1 count_url += 1 cur_misc_txt = u"" elif tag_type == "SER": ## This tag is a SERIES tag; Since it was not preceeded by a TITLE tag, ## it is useless - strip the tag and put it into miscellaneous: (cur_misc_txt, processed_line) = \ _convert_unusable_tag_to_misc(processed_line, cur_misc_txt, \ tag_match_start,tag_match_end, CFG_REFEXTRACT_MARKER_CLOSING_SERIES) elif tag_type == "VOL": ## This tag is a VOLUME tag; Since it was not preceeded by a TITLE tag, ## it is useless - strip the tag and put it into miscellaneous: (cur_misc_txt, processed_line) = \ _convert_unusable_tag_to_misc(processed_line, cur_misc_txt, \ tag_match_start,tag_match_end, CFG_REFEXTRACT_MARKER_CLOSING_VOLUME) elif tag_type == "YR": ## This tag is a YEAR tag; Since it's not preceeded by TITLE and VOLUME tags, it ## is useless - strip the tag and put the contents into miscellaneous: (cur_misc_txt, 
processed_line) = \ _convert_unusable_tag_to_misc(processed_line, cur_misc_txt, \ tag_match_start,tag_match_end, CFG_REFEXTRACT_MARKER_CLOSING_YEAR) elif tag_type == "PG": ## This tag is a PAGE tag; Since it's not preceeded by TITLE, VOLUME and YEAR tags, ## it is useless - strip the tag and put the contents into miscellaneous: (cur_misc_txt, processed_line) = \ _convert_unusable_tag_to_misc(processed_line, cur_misc_txt, \ tag_match_start,tag_match_end, CFG_REFEXTRACT_MARKER_CLOSING_PAGE) else: ## Unknown tag - discard as miscellaneous text: cur_misc_txt += processed_line[0:tag_match.end()] processed_line = processed_line[tag_match.end():] ## Look for the next tag in the processed line: tag_match = sre_tagged_citation.search(processed_line) ## If a previously cited item remains, convert it into MARC XML: if previously_cited_item is not None: if previously_cited_item['type'] == "REPORTNUMBER": ## previously cited item was a REPORT NUMBER. ## Add previously cited REPORT NUMBER to XML string: prev_report_num = previously_cited_item['report_num'] prev_misc_txt = previously_cited_item['misc_txt'].lstrip(".;, ").rstrip() xml_line += _refextract_markup_reportnumber_as_marcxml(prev_report_num, prev_misc_txt) ## Increment the stats counters: if len(prev_misc_txt) > 0: count_misc += 1 count_reportnum += 1 elif previously_cited_item['type'] == "TITLE": ## previously cited item was a TITLE. 
## Add previously cited TITLE to XML string: prev_title = previously_cited_item['title'] prev_volume = previously_cited_item['volume'] prev_year = previously_cited_item['year'] prev_page = previously_cited_item['page'] prev_misc_txt = previously_cited_item['misc_txt'].lstrip(".;, ").rstrip() xml_line += _refextract_markup_title_as_marcxml(prev_title, prev_volume, prev_year, prev_page, prev_misc_txt) ## Increment the stats counters: if len(prev_misc_txt) > 0: count_misc += 1 count_title += 1 ## free up previously_cited_item: previously_cited_item = None ## place any remaining miscellaneous text into the appropriate MARC XML fields: cur_misc_txt += processed_line if len(cur_misc_txt.strip(" .;,")) > 0: ## The remaining misc text is not just a full-stop or semi-colon. Add it: xml_line += _refextract_markup_miscellaneous_text_as_marcxml(cur_misc_txt) ## Increment the stats counters: count_misc += 1 ## return the reference-line as MARC XML: return (xml_line, count_misc, count_title, count_reportnum, count_url)
6258c992576ce615917d5d2bfa3f925bb23de3b5 /local1/tlutelli/issta_data/temp/all_python//python/2007_temp/2007/12027/6258c992576ce615917d5d2bfa3f925bb23de3b5/refextract.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1765, 67, 11005, 67, 6180, 67, 1369, 67, 869, 67, 3684, 71, 67, 2902, 12, 1369, 4672, 3536, 6083, 279, 5204, 2114, 980, 16, 1765, 518, 358, 490, 27206, 3167, 18, 632, 891, 980, 30, 2...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1765, 67, 11005, 67, 6180, 67, 1369, 67, 869, 67, 3684, 71, 67, 2902, 12, 1369, 4672, 3536, 6083, 279, 5204, 2114, 980, 16, 1765, 518, 358, 490, 27206, 3167, 18, 632, 891, 980, 30, 2...
data["bl_type"] = "file"
data["bl_type"] = "dir"
def blosxom_process_path_info(args): """ Process HTTP PATH_INFO for URI according to path specifications, fill in data dict accordingly The paths specification looks like this: - C{/foo.html} and C{/cat/foo.html} - file foo.* in / and /cat - C{/cat} - category - C{/2002} - year - C{/2002/Feb} (or 02) - Year and Month - C{/cat/2002/Feb/31} - year and month day in category. To simplify checking, four digits directory name is not allowed. @param args: dict containing the incoming Request object @type args: L{Pyblosxom.pyblosxom.Request} """ request = args['request'] config = request.getConfiguration() data = request.getData() pyhttp = request.getHttp() logger = tools.getLogger() form = request.getForm() # figure out which flavour to use. the flavour is determined # by looking at the "flav" post-data variable, the "flav" # query string variable, the "default_flavour" setting in the # config.py file, or "html" flav = config.get("default_flavour", "html") if form.has_key("flav"): flav = form["flav"].value data['flavour'] = flav data['pi_yr'] = '' data['pi_mo'] = '' data['pi_da'] = '' path_info = pyhttp.get("PATH_INFO", "") data['path_info'] = path_info data['root_datadir'] = config['datadir'] data["pi_bl"] = path_info # first we check to see if this is a request for an index and we can pluck # the extension (which is certainly a flavour) right off. 
newpath, ext = os.path.splitext(path_info) if newpath.endswith("/index") and ext: # there is a flavour-like thing, so that's our new flavour # and we adjust the path_info to the new filename data["flavour"] = ext[1:] path_info = newpath if path_info.startswith("/"): path_info = path_info[1:] absolute_path = os.path.join(config["datadir"], path_info) if os.path.isdir(absolute_path): # this is an absolute path data['root_datadir'] = absolute_path data['bl_type'] = 'dir' elif absolute_path.endswith("/index") and \ os.path.isdir(absolute_path[:-6]): # this is an absolute path with /index at the end of it data['root_datadir'] = absolute_path[:-6] data['bl_type'] = 'dir' else: # this is either a file or a date ext = tools.what_ext(data["extensions"].keys(), absolute_path) if not ext: # it's possible we didn't find the file because it's got a flavour # thing at the end--so try removing it and checking again. newpath, flav = os.path.splitext(absolute_path) if flav: ext = tools.what_ext(data["extensions"].keys(), newpath) if ext: # there is a flavour-like thing, so that's our new flavour # and we adjust the absolute_path and path_info to the new # filename data["flavour"] = flav[1:] absolute_path = newpath path_info, flav = os.path.splitext(path_info) if ext: # this is a file data["bl_type"] = "file" data["root_datadir"] = absolute_path + "." + ext else: data["bl_type"] = "dir" path_info = path_info.split("/") # it's possible to have category/category/year/month/day # (or something like that) so we pluck off the categories # here. pi_bl = "" while len(path_info) > 0 and \ not (len(path_info[0]) == 4 and path_info[0].isdigit()): pi_bl = os.path.join(pi_bl, path_info.pop(0)) # handle the case where we do in fact have a category # preceeding the date. 
if pi_bl: data["pi_bl"] = pi_bl data["root_datadir"] = os.path.join(config["datadir"], pi_bl) if len(path_info) > 0: item = path_info.pop(0) # handle a year token if len(item) == 4 and item.isdigit(): data['pi_yr'] = item item = "" if (len(path_info) > 0): item = path_info.pop(0) # handle a month token if item in tools.MONTHS: data['pi_mo'] = item item = "" if (len(path_info) > 0): item = path_info.pop(0) # handle a day token if len(item) == 2 and item.isdigit(): data["pi_da"] = item item = "" if len(path_info) > 0: item = path_info.pop(0) # if the last item we picked up was "index", then we # just ditch it because we don't need it. if item == "index": item = "" # if we picked off an item we don't recognize and/or # there is still stuff in path_info to pluck out, then # it's likely this wasn't a date. if item or len(path_info) > 0: data["bl_type"] = "file" data["root_datadir"] = absolute_path # figure out the blog_title_with_path data variable blog_title = config["blog_title"] if data['pi_bl'] != '': data['blog_title_with_path'] = '%s : %s' % (blog_title, data['pi_bl']) else: data['blog_title_with_path'] = blog_title # construct our final URL data['url'] = '%s/%s' % (config['base_url'], data['pi_bl'])
33949689d26954f7584aa20aa6ad7f2ac488be88 /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/11836/33949689d26954f7584aa20aa6ad7f2ac488be88/pyblosxom.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 324, 383, 30319, 362, 67, 2567, 67, 803, 67, 1376, 12, 1968, 4672, 3536, 4389, 2239, 7767, 67, 5923, 364, 3699, 4888, 358, 589, 21950, 16, 3636, 316, 501, 2065, 15905, 225, 1021, 2953, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 324, 383, 30319, 362, 67, 2567, 67, 803, 67, 1376, 12, 1968, 4672, 3536, 4389, 2239, 7767, 67, 5923, 364, 3699, 4888, 358, 589, 21950, 16, 3636, 316, 501, 2065, 15905, 225, 1021, 2953, ...
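The pyblosxom row above hinges on `os.path.splitext` peeling a flavour-like extension off the tail of PATH_INFO. A minimal sketch of that idea, with illustrative names (this is not pyblosxom's actual API):

```python
import os.path

def split_flavour(path_info, default="html"):
    """Peel a trailing extension off an /index-style path as the flavour."""
    newpath, ext = os.path.splitext(path_info)
    if newpath.endswith("/index") and ext:
        # ".txt" -> flavour "txt"; the path keeps its "/index" tail
        return newpath, ext[1:]
    return path_info, default

print(split_flavour("/cat/index.txt"))  # ('/cat/index', 'txt')
print(split_flavour("/cat/2002"))       # ('/cat/2002', 'html')
```

The same `splitext` trick is applied a second time in the row's `else` branch, to recover a flavour from an entry filename that was not found on disk verbatim.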
for elements in dirs + files:
for element in dirs + files:
def copytree(src, dst, symlink=False): mkdirChain(os.path.dirname(dst)) if not os.path.isdir(src): shutil.copy2(src, dst) return shutil.copytree(src, dst, symlink) _copystat(src, dst) for srcdir, dirs, files in os.walk(src): dstdir = os.path.join(dst, srcdir[len(src):]) for elements in dirs + files: srcE = os.path.join(srcdir, element) dstE = os.path.join(dstdir, element) _copystat(srcE, dstE)
f5d5d93257908e94b67f80283dd4dd4c9dad817e /local1/tlutelli/issta_data/temp/all_python//python/2008_temp/2008/7645/f5d5d93257908e94b67f80283dd4dd4c9dad817e/util.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1610, 3413, 12, 4816, 16, 3046, 16, 10563, 33, 8381, 4672, 6535, 3893, 12, 538, 18, 803, 18, 12287, 12, 11057, 3719, 225, 309, 486, 1140, 18, 803, 18, 291, 1214, 12, 4816, 4672, 11060,...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1610, 3413, 12, 4816, 16, 3046, 16, 10563, 33, 8381, 4672, 6535, 3893, 12, 538, 18, 803, 18, 12287, 12, 11057, 3719, 225, 309, 486, 1140, 18, 803, 18, 291, 1214, 12, 4816, 4672, 11060,...
this = apply(_quickfix.new_RefOrderIDSource, args)
this = _quickfix.new_RefOrderIDSource(*args)
def __init__(self, *args): this = apply(_quickfix.new_RefOrderIDSource, args) try: self.this.append(this) except: self.this = this
7e632099fd421880c8c65fb0cf610d338d115ee9 /local1/tlutelli/issta_data/temp/all_python//python/2010_temp/2010/8819/7e632099fd421880c8c65fb0cf610d338d115ee9/quickfix.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1001, 2738, 972, 12, 2890, 16, 380, 1968, 4672, 333, 273, 389, 19525, 904, 18, 2704, 67, 1957, 2448, 734, 1830, 30857, 1968, 13, 775, 30, 365, 18, 2211, 18, 6923, 12, 2211, 13, 1335, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1001, 2738, 972, 12, 2890, 16, 380, 1968, 4672, 333, 273, 389, 19525, 904, 18, 2704, 67, 1957, 2448, 734, 1830, 30857, 1968, 13, 775, 30, 365, 18, 2211, 18, 6923, 12, 2211, 13, 1335, ...
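The quickfix row replaces the long-deprecated `apply()` builtin with argument unpacking: `apply(f, args)` and `f(*args)` are equivalent, and only the latter survives into Python 3. A quick illustration:

```python
def make(*args):
    # Echo back whatever positional arguments were received
    return args

packed = (1, 2, 3)
# Python 2 only: apply(make, packed)
# Python 2 and 3: unpack the tuple into positional arguments
result = make(*packed)
print(result)  # (1, 2, 3)
```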
def __init__(self, *args, **kwds): Method.__init__(self, *args, **kwds) self.__doc__ += os.linesep.join(Node.fields.keys())
def __init__(self, *args, **kwds): Method.__init__(self, *args, **kwds) # Update documentation with list of default fields returned self.__doc__ += os.linesep.join(Node.fields.keys())
aaacce646b89974ce85ffde13fd47b5417224b98 /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/7598/aaacce646b89974ce85ffde13fd47b5417224b98/GetNodes.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1001, 2738, 972, 12, 2890, 16, 380, 1968, 16, 2826, 25577, 4672, 2985, 16186, 2738, 972, 12, 2890, 16, 380, 1968, 16, 2826, 25577, 13, 468, 2315, 7323, 598, 666, 434, 805, 1466, 2106, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1001, 2738, 972, 12, 2890, 16, 380, 1968, 16, 2826, 25577, 4672, 2985, 16186, 2738, 972, 12, 2890, 16, 380, 1968, 16, 2826, 25577, 13, 468, 2315, 7323, 598, 666, 434, 805, 1466, 2106, ...
if matches(values, unicode(txt, DEFAULT_ENCODING, "replace")):
if matches(values, txt.decode(DEFAULT_ENCODING, "replace")):
def matches(searches, text): for search in searches: if not search in text: return False return True
c9854d38e3d2b64a5be58b946d7d99fcd94605fb /local1/tlutelli/issta_data/temp/all_python//python/2010_temp/2010/12904/c9854d38e3d2b64a5be58b946d7d99fcd94605fb/aptBackend.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1885, 12, 3072, 281, 16, 977, 4672, 364, 1623, 316, 16662, 30, 309, 486, 1623, 316, 977, 30, 327, 1083, 327, 1053, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1885, 12, 3072, 281, 16, 977, 4672, 364, 1623, 316, 16662, 30, 309, 486, 1623, 316, 977, 30, 327, 1083, 327, 1053, 2, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, ...
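The aptBackend row swaps the Python-2-only `unicode(txt, enc, "replace")` constructor for the equivalent `bytes.decode`, which also exists in Python 3. A sketch of the error-handling behaviour (the byte string here is illustrative):

```python
raw = b"caf\xe9 latte"  # Latin-1 bytes; 0xe9 is invalid as UTF-8 here
# "replace" substitutes U+FFFD for undecodable bytes instead of raising
text = raw.decode("utf-8", "replace")
print(text)  # 'caf\ufffd latte'
```

With the default `"strict"` handler the same call would raise `UnicodeDecodeError`, which is why the original code passes `"replace"` when matching user search terms against package descriptions.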
sep + "vppen size=r vpstyle=n gridnum=%d,1 $SOURCES" % n,
vppen + " size=r vpstyle=n gridnum=%d,1 $SOURCES" % n,
def retrieve(target=None,source=None,env=None): "Fetch data from the web" top = env.get('top') folder = top + os.sep +env['dir'] private = env.get('private') if private: login = private['login'] password = private['password'] server = private['server'] try: session = ftplib.FTP(server,login,password) session.cwd(folder) except: print 'Could not establish connection with "%s/%s" ' % (server, folder) return 3 for file in map(str,target): remote = os.path.basename(file) try: download = open(file,'wb') session.retrbinary('RETR '+remote, lambda x: download.write(x)) download.close() except: print 'Could not download file "%s" ' % file return 1 if not os.stat(file)[6]: print 'Could not download file "%s" ' % file os.unlink(file) return 4 session.quit() else: server = env.get('server') for file in map(str,target): remote = os.path.basename(file) rdir = string.join([server,folder,remote],'/') try: urllib.urlretrieve(rdir,file) if not os.stat(file)[6]: print 'Could not download file "%s" ' % file os.unlink(file) return 2 except: print 'Could not download "%s" from "%s" ' % (file,rdir) return 5 return 0
9826f7cd9d8f2da6836e7df78950cf03bbee4dc5 /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/3143/9826f7cd9d8f2da6836e7df78950cf03bbee4dc5/rsfproj.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 4614, 12, 3299, 33, 7036, 16, 3168, 33, 7036, 16, 3074, 33, 7036, 4672, 315, 5005, 501, 628, 326, 3311, 6, 1760, 273, 1550, 18, 588, 2668, 3669, 6134, 3009, 273, 1760, 397, 1140, 18, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 4614, 12, 3299, 33, 7036, 16, 3168, 33, 7036, 16, 3074, 33, 7036, 4672, 315, 5005, 501, 628, 326, 3311, 6, 1760, 273, 1550, 18, 588, 2668, 3669, 6134, 3009, 273, 1760, 397, 1140, 18, ...
grown in opposite directions. (See also XXX [nim, to be modified from make_pi_bond_obj], which calls us twice and does this.)
grown in opposite directions, or one bond list if bond is part of a ring. (See also XXX [nim, to be modified from make_pi_bond_obj], which calls us twice and does this.)
def grow_bond_chain(bond, atom, next_bond_in_chain): #bruce 070415; generalized from grow_pi_sp_chain """Given a bond and one of its atoms, grow the bond chain containing bond (as defined by next_bond_in_chain, called on a bond and one of its atoms) in the direction of atom, adding newly found bonds and atoms to respective lists (listb, lista) which we'll return, until you can't or until you notice that it came back to bond and formed a ring (in which case return as much as possible, but not another ref to bond or atom). Return value is the tuple (ringQ, listb, lista) where ringQ says whether a ring was detected and len(listb) == len(lista) == number of new (bond, atom) pairs found. Note that each (bond, atom) pair found (at corresponding positions in the lists) has a direction (in bond, from atom) which is backwards along the direction of chain growth. Note that listb never includes the original bond, so it is never a complete list of bonds in the chain. In general, to form a complete chain, a caller must piece together a starting bond and two bond lists grown in opposite directions. (See also XXX [nim, to be modified from make_pi_bond_obj], which calls us twice and does this.) The function next_bond_in_chain(bond, atom) must return another bond containing atom in the same chain or ring, or None if the chain ends at bond (on the end of bond which is atom), and must be defined in such a way that its progress through any atom is consistent from either direction. That means it's not possible to find a ring which comes back to bond but does not include bond (by coming back to atom before coming to bond's other atom), so if that happens, we raise an exception. 
""" listb, lista = [], [] origbond = bond # for detecting a ring origatom = atom # for error checking while 1: nextbond = next_bond_in_chain(bond, atom) # this is the main difference from grow_pi_sp_chain if nextbond is None: return False, listb, lista nextatom = nextbond.other(atom) if nextbond is origbond: assert nextatom is not origatom, "grow_bond_chain(%r, %r, %r): can't have 3 bonds in chain at atom; data: %r" % \ (origbond, origatom, next_bond_in_chain, (listb, lista)) return True, listb, lista listb.append(nextbond) lista.append(nextatom) bond, atom = nextbond, nextatom pass
d964b3e68c182c35dec802f8c628767cb53060b1 /local1/tlutelli/issta_data/temp/all_python//python/2007_temp/2007/11221/d964b3e68c182c35dec802f8c628767cb53060b1/bonds.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 13334, 67, 26425, 67, 5639, 12, 26425, 16, 3179, 16, 1024, 67, 26425, 67, 267, 67, 5639, 4672, 468, 2848, 3965, 10934, 3028, 3600, 31, 7470, 1235, 628, 13334, 67, 7259, 67, 1752, 67, 5...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 13334, 67, 26425, 67, 5639, 12, 26425, 16, 3179, 16, 1024, 67, 26425, 67, 267, 67, 5639, 4672, 468, 2848, 3965, 10934, 3028, 3600, 31, 7470, 1235, 628, 13334, 67, 7259, 67, 1752, 67, 5...
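`grow_bond_chain` above is, at its core, a generic "follow a successor function until it ends or loops back to the start" walk. A stripped-down sketch of the same control flow over a plain successor function (hypothetical names, not the original module's API):

```python
def grow_chain(start, next_link):
    """Collect successors of start until next_link returns None (open chain)
    or the walk returns to start (ring). Returns (is_ring, items)."""
    items = []
    node = start
    while True:
        node = next_link(node)
        if node is None:
            return False, items   # chain ended
        if node is start:
            return True, items    # came back around: a ring
        items.append(node)

open_succ = {1: 2, 2: 3, 3: None}.get
print(grow_chain(1, open_succ))  # (False, [2, 3])

ring_succ = {1: 2, 2: 3, 3: 1}.get
print(grow_chain(1, ring_succ))  # (True, [2, 3])
```

The original carries extra state (parallel bond/atom lists and a consistency assertion) but follows this same loop shape.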
page.write("||<-2> H1H2L1||||<-2> H1L1||||<-2> H2L1||\n")
page.write("|| || H1H2L1|||| || H1L1|||| || H2L1||\n")
def finish(self): self.file.close()
e411f70d1fee81d2ee12abcba0977a60c8d9ee40 /local1/tlutelli/issta_data/temp/all_python//python/2009_temp/2009/5758/e411f70d1fee81d2ee12abcba0977a60c8d9ee40/make_summary_page.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 4076, 12, 2890, 4672, 365, 18, 768, 18, 4412, 1435, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 4076, 12, 2890, 4672, 365, 18, 768, 18, 4412, 1435, 2, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, ...
'DecadeAD' : (lambda v: 0<=v and v<2051, 0,2051),
'DecadeAD' : (lambda v: 0<=v and v<2501, 0,2501),
def makeMonthNamedList( lang, pattern, makeUpperCase = None ): """Creates a list of 12 elements based on the name of the month. The language-dependent month name is used as a formating argument to the pattern. The pattern must be have one %s that will be replaced by the localized month name. Use %%d for any other parameters that should be preserved. """ if makeUpperCase == None: f = lambda s: s elif makeUpperCase == True: f = lambda s: s[0].upper() + s[1:] elif makeUpperCase == False: f = lambda s: s[0].lower() + s[1:] return [ pattern % f(monthName(lang, m)) for m in range(1,13) ]
9a30db2f5b4d0436101187e400c1c1079182f310 /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/4404/9a30db2f5b4d0436101187e400c1c1079182f310/date.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1221, 5445, 7604, 682, 12, 3303, 16, 1936, 16, 1221, 8915, 273, 599, 262, 30, 3536, 2729, 279, 666, 434, 2593, 2186, 2511, 603, 326, 508, 434, 326, 3138, 18, 1021, 2653, 17, 10891, 313...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1221, 5445, 7604, 682, 12, 3303, 16, 1936, 16, 1221, 8915, 273, 599, 262, 30, 3536, 2729, 279, 666, 434, 2593, 2186, 2511, 603, 326, 508, 434, 326, 3138, 18, 1021, 2653, 17, 10891, 313...
llop.debug_print(lltype.Void, "\tremember_young_pointer", addr_struct, "<-", addr)
def remember_young_pointer(addr_struct, addr): llop.debug_print(lltype.Void, "\tremember_young_pointer", addr_struct, "<-", addr) ll_assert(not self.is_in_nursery(addr_struct), "nursery object with GCFLAG_NO_YOUNG_PTRS") if self.is_in_nursery(addr): self.old_objects_pointing_to_young.append(addr_struct) self.header(addr_struct).tid &= ~GCFLAG_NO_YOUNG_PTRS elif addr == NULL: return self.write_into_last_generation_obj(addr_struct, addr)
c730ced88525d2e4d21df516ca62487214cdec5c /local1/tlutelli/issta_data/temp/all_python//python/2009_temp/2009/6934/c730ced88525d2e4d21df516ca62487214cdec5c/generation.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 11586, 67, 93, 465, 75, 67, 10437, 12, 4793, 67, 1697, 16, 3091, 4672, 282, 6579, 67, 11231, 12, 902, 365, 18, 291, 67, 267, 67, 82, 295, 550, 93, 12, 4793, 67, 1697, 3631, 315, 82...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 11586, 67, 93, 465, 75, 67, 10437, 12, 4793, 67, 1697, 16, 3091, 4672, 282, 6579, 67, 11231, 12, 902, 365, 18, 291, 67, 267, 67, 82, 295, 550, 93, 12, 4793, 67, 1697, 3631, 315, 82...
if self.request.get("download"): m = email.message_from_string(self.context.data) parts = [item for item in m.walk() if item.get_filename() != None] if parts[int(self.request.get("download"))].is_multipart(): data = str(parts[int(self.request.get("download"))]) else: data = parts[int(self.request.get("download"))].get_payload(decode=1) REQUEST = self.request RESPONSE = REQUEST.RESPONSE filename = REQUEST.get("filename") mimetype = REQUEST.get("mimetype") if filename is not None: header_value = contentDispositionHeader( disposition='attachment', filename=filename) RESPONSE.setHeader("Content-disposition", header_value) RESPONSE.setHeader("Content-Type", mimetype) return data else: return self.render()
return self.render()
def __call__(self): if self.request.get("download"): m = email.message_from_string(self.context.data) parts = [item for item in m.walk() if item.get_filename() != None] if parts[int(self.request.get("download"))].is_multipart(): data = str(parts[int(self.request.get("download"))]) else: data = parts[int(self.request.get("download"))].get_payload(decode=1) REQUEST = self.request RESPONSE = REQUEST.RESPONSE filename = REQUEST.get("filename") mimetype = REQUEST.get("mimetype") if filename is not None: header_value = contentDispositionHeader( disposition='attachment', filename=filename) RESPONSE.setHeader("Content-disposition", header_value) RESPONSE.setHeader("Content-Type", mimetype) return data else: return self.render()
725abadcc72c7459f07270446129b1e79939cd6b /local1/tlutelli/issta_data/temp/all_python//python/2008_temp/2008/10466/725abadcc72c7459f07270446129b1e79939cd6b/emailview.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1001, 1991, 972, 12, 2890, 4672, 309, 365, 18, 2293, 18, 588, 2932, 7813, 6, 4672, 312, 273, 2699, 18, 2150, 67, 2080, 67, 1080, 12, 2890, 18, 2472, 18, 892, 13, 2140, 273, 306, 1726...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1001, 1991, 972, 12, 2890, 4672, 309, 365, 18, 2293, 18, 588, 2932, 7813, 6, 4672, 312, 273, 2699, 18, 2150, 67, 2080, 67, 1080, 12, 2890, 18, 2472, 18, 892, 13, 2140, 273, 306, 1726...
self.set_colors([stroke, WHITE])
self._set_colors([stroke, WHITE])
def check_card(self, n, style, stroke, fill): svg_string = "" if style == "none": self.set_colors([stroke, WHITE]) elif style == "gradient": self.set_colors([stroke, fill]) else: self.set_colors([stroke, stroke]) if n == 1: svg_string += self._svg_check(45.5) elif n == 2: svg_string += self._svg_check(25.5) svg_string += self._svg_check(65.5) else: svg_string += self._svg_check( 5.5) svg_string += self._svg_check(45.5) svg_string += self._svg_check(85.5) return svg_string
3abdde2d4415b99e072401aa95cc310d13046e05 /local1/tlutelli/issta_data/temp/all_python//python/2010_temp/2010/7609/3abdde2d4415b99e072401aa95cc310d13046e05/gencards.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 866, 67, 3327, 12, 2890, 16, 290, 16, 2154, 16, 11040, 16, 3636, 4672, 9804, 67, 1080, 273, 1408, 309, 2154, 422, 315, 6102, 6877, 365, 6315, 542, 67, 9724, 3816, 16181, 16, 24353, 571...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 866, 67, 3327, 12, 2890, 16, 290, 16, 2154, 16, 11040, 16, 3636, 4672, 9804, 67, 1080, 273, 1408, 309, 2154, 422, 315, 6102, 6877, 365, 6315, 542, 67, 9724, 3816, 16181, 16, 24353, 571...
print "Character", `c`
if DEBUG: print "Character", `c`
def do_char(self, c, event): print "Character", `c`
62092368395e2928a1a1027a33b6cb9078768e6f /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/12029/62092368395e2928a1a1027a33b6cb9078768e6f/FrameWork.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 741, 67, 3001, 12, 2890, 16, 276, 16, 871, 4672, 309, 6369, 30, 1172, 315, 7069, 3113, 1375, 71, 68, 225, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 741, 67, 3001, 12, 2890, 16, 276, 16, 871, 4672, 309, 6369, 30, 1172, 315, 7069, 3113, 1375, 71, 68, 225, 2, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -10...
try: connection, client_address = ear.accept() except socket.error: continue handle_request(connection, self)
iready, oready, eready = select(iwtd, owtd, ewtd) for x in iready: if x is ear: try: connection, client_address = ear.accept() except socket.error: continue iwtd.append(connection) else: iwtd.remove(x) handle_request(x, self)
def start(self): if self.pid_file is not None: pid = os.getpid() open(self.pid_file, 'w').write(str(pid))
3c3e916fde6f1404e3cce6469359682f899a279c /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/12681/3c3e916fde6f1404e3cce6469359682f899a279c/server.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 787, 12, 2890, 4672, 309, 365, 18, 6610, 67, 768, 353, 486, 599, 30, 4231, 273, 1140, 18, 588, 6610, 1435, 1696, 12, 2890, 18, 6610, 67, 768, 16, 296, 91, 16063, 2626, 12, 701, 12, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 787, 12, 2890, 4672, 309, 365, 18, 6610, 67, 768, 353, 486, 599, 30, 4231, 273, 1140, 18, 588, 6610, 1435, 1696, 12, 2890, 18, 6610, 67, 768, 16, 296, 91, 16063, 2626, 12, 701, 12, ...
for x in os.listdir(download_dir): linkpath = os.path.join(download_dir, x) if os.path.islink(linkpath): if fullnukepath == os.readlink(linkpath): os.unlink(linkpath) os.symlink(nukemoveto, linkpath) break
for x in os.listdir(download_dir): linkpath = os.path.join(download_dir, x) if os.path.islink(linkpath): if fullnukepath == os.readlink(linkpath): os.unlink(linkpath) os.symlink(nukemoveto, linkpath) break
def find_relink(fullnukepath, nukemoveto): for x in os.listdir(download_dir): linkpath = os.path.join(download_dir, x) if os.path.islink(linkpath): if fullnukepath == os.readlink(linkpath): os.unlink(linkpath) os.symlink(nukemoveto, linkpath) break
0dace5e13519b49a10fec8a5cdaff7b36ee5fb6b /local1/tlutelli/issta_data/temp/all_python//python/2010_temp/2010/13915/0dace5e13519b49a10fec8a5cdaff7b36ee5fb6b/tvwrangler.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1104, 67, 266, 1232, 12, 2854, 29705, 803, 16, 9244, 79, 351, 1527, 11453, 4672, 225, 364, 619, 316, 1140, 18, 1098, 1214, 12, 7813, 67, 1214, 4672, 1692, 803, 273, 1140, 18, 803, 18, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1104, 67, 266, 1232, 12, 2854, 29705, 803, 16, 9244, 79, 351, 1527, 11453, 4672, 225, 364, 619, 316, 1140, 18, 1098, 1214, 12, 7813, 67, 1214, 4672, 1692, 803, 273, 1140, 18, 803, 18, ...
'classo3d_1_1_', '')
'classo3d_1_1_', '', '')
def BuildO3DDocsFromJavaScript(js_files, ezt_output_dir, html_output_dir): RunJSDocToolkit(js_files, ezt_output_dir, html_output_dir, 'classo3d_1_1_', '')
9d9f2f5d1a2c2c49b0ce17b27baaf2eca3663f36 /local1/tlutelli/issta_data/temp/all_python//python/2009_temp/2009/5060/9d9f2f5d1a2c2c49b0ce17b27baaf2eca3663f36/build_docs.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 3998, 51, 23, 40, 12656, 1265, 16634, 12, 2924, 67, 2354, 16, 8012, 88, 67, 2844, 67, 1214, 16, 1729, 67, 2844, 67, 1214, 4672, 1939, 6479, 1759, 6364, 8691, 12, 2924, 67, 2354, 16, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 3998, 51, 23, 40, 12656, 1265, 16634, 12, 2924, 67, 2354, 16, 8012, 88, 67, 2844, 67, 1214, 16, 1729, 67, 2844, 67, 1214, 4672, 1939, 6479, 1759, 6364, 8691, 12, 2924, 67, 2354, 16, ...
result = execCommand( 0, ['mysqladmin', '-u', 'root', '-p%s' % mysqlRootPwd, 'flush-privileges'] ) if not result['OK']: return result
def installMySQL(): """ Attempt an installation of MySQL mode: Master Slave None """ if mysqlInstalled( doNotExit = True )['OK']: gLogger.info( 'MySQL already installed' ) return S_OK() if mysqlMode.lower() not in [ '', 'master', 'slave' ]: error = 'Unknown MySQL server Mode' if exitOnError: gLogger.fatal( error, mysqlMode ) exit( -1 ) gLogger.error( error, mysqlMode ) return S_ERROR( error ) if mysqlHost: gLogger.info( 'Installing MySQL server at', mysqlHost ) if mysqlMode: gLogger.info( 'This is a MySQl %s server' % mysqlMode ) fixMySQLScripts() try: os.makedirs( mysqlDbDir ) os.makedirs( mysqlLogDir ) except: error = 'Can not create MySQL dirs' gLogger.exception( error ) if exitOnError: exit( -1 ) return S_ERROR( error ) try: f = open( mysqlMyOrg, 'r' ) myOrg = f.readlines() f.close() f = open( mysqlMyCnf, 'w' ) for line in myOrg: if line.find( '[mysqld]' ) == 0: line += '\n'.join( [ 'innodb_file_per_table', '' ] ) elif line.find( 'innodb_log_arch_dir' ) == 0: line = '' elif line.find( 'innodb_data_file_path' ) == 0: line = line.replace( '2000M', '200M' ) elif line.find( 'server-id' ) == 0 and mysqlMode.lower() == 'master': # MySQL Configuration for Master Server line = '\n'.join( ['server-id = 1', '# DIRAC Master-Server', 'sync-binlog = 1', 'replicate-ignore-table = mysql.MonitorData', '# replicate-ignore-db=db_name', 'log-bin = mysql-bin', 'log-slave-updates', '' ] ) elif line.find( 'server-id' ) == 0 and mysqlMode.lower() == 'slave': # MySQL Configuration for Slave Server import time line = '\n'.join( ['server-id = %s' % int( time.time() ), '# DIRAC Slave-Server', 'sync-binlog = 1', 'replicate-ignore-table = mysql.MonitorData', '# replicate-ignore-db=db_name', 'log-bin = mysql-bin', 'log-slave-updates', '' ] ) elif line.find( '/opt/dirac/mysql' ) > -1: line = line.replace( '/opt/dirac/mysql', mysqlDir ) if mysqlSmallMem: if line.find( 'innodb_buffer_pool_size' ) == 0: line = 'innodb_buffer_pool_size = 200M\n' elif mysqlLargeMem: if line.find( 
'innodb_buffer_pool_size' ) == 0: line = 'innodb_buffer_pool_size = 10G\n' f.write( line ) f.close() except: error = 'Can not create my.cnf' gLogger.exception( error ) if exitOnError: exit( -1 ) return S_ERROR( error ) gLogger.info( 'Initializing MySQL...' ) result = execCommand( 0, ['mysql_install_db', '--defaults-file=%s' % mysqlMyCnf, '--datadir=%s' % mysqlDbDir ] ) if not result['OK']: return result gLogger.info( 'Starting MySQL...' ) result = startMySQL() if not result['OK']: return result gLogger.info( 'Setting MySQL root password' ) result = execCommand( 0, ['mysqladmin', '-u', 'root', 'password', mysqlRootPwd] ) if not result['OK']: return result if mysqlHost: result = execCommand( 0, ['mysqladmin', '-u', 'root', '-p%s' % mysqlRootPwd, '-h', '%s' % mysqlHost, 'password', mysqlRootPwd] ) if not result['OK']: return result result = execCommand( 0, ['mysqladmin', '-u', 'root', '-p%s' % mysqlRootPwd, 'flush-privileges'] ) if not result['OK']: return result if not _addMySQLToDiracCfg(): return S_ERROR( 'Failed to add MySQL logging info to local configuration' ) return S_OK()
637d0dedad7d7de824841e68b5eb367adf86bdc2 /local1/tlutelli/issta_data/temp/all_python//python/2010_temp/2010/12864/637d0dedad7d7de824841e68b5eb367adf86bdc2/InstallTools.py
self.conf.outputdir = os.path.join(self.conf.basedir, self.conf.relative_dir)
self.conf.outputdir = os.path.join(self.conf.basedir, self.conf.relative_dir)
def _parse_directory(self): """pick up the first directory given to us and make sure we know where things should go""" if os.path.isabs(self.conf.directory): self.conf.basedir = os.path.dirname(self.conf.directory) self.conf.relative_dir = os.path.basename(self.conf.directory) else: self.conf.basedir = os.path.realpath(self.conf.basedir) self.conf.relative_dir = self.conf.directory
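The `_parse_directory` method above splits an absolute directory into a parent `basedir` and a leaf `relative_dir`, and otherwise resolves `basedir` and keeps the directory relative. A standalone sketch of that branching (function name is illustrative):

```python
import os

def split_directory(directory, basedir='.'):
    # Absolute path: parent becomes basedir, leaf becomes relative_dir.
    # Relative path: resolve the given basedir, keep directory as-is.
    if os.path.isabs(directory):
        return os.path.dirname(directory), os.path.basename(directory)
    return os.path.realpath(basedir), directory
```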
473cb91d36b5c88e3a55fa60404eb172a1c6e32b /local1/tlutelli/issta_data/temp/all_python//python/2009_temp/2009/9557/473cb91d36b5c88e3a55fa60404eb172a1c6e32b/__init__.py
imagePage = wikipedia.ImagePage(self.site, 'Image:%s' % self.image) duplicates = imagePage.getDuplicates()
imagePage = wikipedia.ImagePage(self.site, 'Image:%s' % self.image) hash_found = imagePage.getHash() duplicates = self.site.getImagesFromAnHash(hash_found)
def checkImageDuplicated(self, image): """ Function to check the duplicated images. """ # {{Dupe|Image:Blanche_Montel.jpg}} dupText = wikipedia.translate(self.site, duplicatesText) dupRegex = wikipedia.translate(self.site, duplicatesRegex) dupTalkHead = wikipedia.translate(self.site, duplicate_user_talk_head) dupTalkText = wikipedia.translate(self.site, duplicates_user_talk_text) dupComment_talk = wikipedia.translate(self.site, duplicates_comment_talk) dupComment_image = wikipedia.translate(self.site, duplicates_comment_image) self.image = image duplicateRegex = r'\n\*(?:\[\[:Image:%s\]\] has the following duplicates:|\*\[\[:Image:%s\]\])$' % (self.convert_to_url(self.image), self.convert_to_url(self.image)) imagePage = wikipedia.ImagePage(self.site, 'Image:%s' % self.image) duplicates = imagePage.getDuplicates() if duplicates == None: return False # Error, we need to skip the page. if len(duplicates) > 1: if len(duplicates) == 2: wikipedia.output(u'%s has a duplicate! Reporting it...' % self.image) else: wikipedia.output(u'%s has %s duplicates! Reporting them...' 
% (self.image, len(duplicates) - 1)) if self.duplicatesReport: repme = "\n*[[:Image:%s]] has the following duplicates:" % self.convert_to_url(self.image) for duplicate in duplicates: if self.convert_to_url(duplicate) == self.convert_to_url(self.image): continue # the image itself, not report also this as duplicate repme += "\n**[[:Image:%s]]" % self.convert_to_url(duplicate) result = self.report_image(self.image, self.rep_page, self.com, repme, addings = False, regex = duplicateRegex) if not result: return True # If Errors, exit (but continue the check) if not dupText == None and not dupRegex == None: time_image_list = list() time_list = list() for duplicate in duplicates: DupePage = wikipedia.ImagePage(self.site, u'Image:%s' % duplicate) imagedata = DupePage.getLatestUploader()[1] # '2008-06-18T08:04:29Z' data = time.strptime(imagedata, "%Y-%m-%dT%H:%M:%SZ") data_seconds = time.mktime(data) time_image_list.append([data_seconds, duplicate]) time_list.append(data_seconds) older_image = self.returnOlderTime(time_image_list, time_list) # And if the images are more than two? Page_oder_image = wikipedia.ImagePage(self.site, u'Image:%s' % older_image) string = '' images_to_tag_list = [] for duplicate in duplicates: if wikipedia.ImagePage(self.site, u'%s:%s' % (self.image_namespace, duplicate)) == \ wikipedia.ImagePage(self.site, u'%s:%s' % (self.image_namespace, older_image)): continue # the older image, not report also this as duplicate DupePage = wikipedia.ImagePage(self.site, u'Image:%s' % duplicate) try: DupPageText = DupePage.get() older_page_text = Page_oder_image.get() except wikipedia.NoPage: continue # The page doesn't exists if re.findall(dupRegex, DupPageText) == [] and re.findall(dupRegex, older_page_text) == []: wikipedia.output(u'%s is a duplicate and has to be tagged...' 
% duplicate) images_to_tag_list.append(duplicate) #if duplicate != duplicates[-1]: string += "*[[:%s%s]]\n" % (self.image_namespace, duplicate) #else: # string += "*[[:%s%s]]" % (self.image_namespace, duplicate) else: wikipedia.output(u"Already put the dupe-template in the image's page or in the dupe's page. Skip.") return True # Ok - No problem. Let's continue the checking phase older_image_ns = '%s%s' % (self.image_namespace, older_image) # adding the namespace if len(images_to_tag_list) > 1: for image_to_tag in images_to_tag_list[:-1]: self.report(re.sub(r'__image__', r'%s' % older_image_ns, dupText), image_to_tag, commImage = dupComment_image, unver = True) if len(images_to_tag_list) != 0: self.report(re.sub(r'__image__', r'%s' % older_image_ns, dupText), images_to_tag_list[-1], dupTalkText % (older_image_ns, string), dupTalkHead, commTalk = dupComment_talk, commImage = dupComment_image, unver = True) if older_image != self.image: return False # The image is a duplicate, it will be deleted. return True # Ok - No problem. Let's continue the checking phase
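The fix above moves from `imagePage.getDuplicates()` to computing the image's hash and asking the site for all images sharing it. The core idea, hash-based duplicate grouping, can be sketched locally; the `images` mapping of file names to raw bytes is a hypothetical stand-in for wiki uploads:

```python
import hashlib

def group_duplicates(images):
    # Group files whose content hashes collide; any group with more
    # than one member is a set of duplicates, analogous to
    # site.getImagesFromAnHash(hash_found) in the change above.
    by_hash = {}
    for name, data in images.items():
        digest = hashlib.sha1(data).hexdigest()
        by_hash.setdefault(digest, []).append(name)
    return [names for names in by_hash.values() if len(names) > 1]
```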
305ededf5d8ce4db2a30ebd9251b4b7af13faa5a /local1/tlutelli/issta_data/temp/all_python//python/2008_temp/2008/4404/305ededf5d8ce4db2a30ebd9251b4b7af13faa5a/checkimages.py
if name is None: global _field_count _field_count += 1 name = 'field.%s' % _field_count
def __init__(self, name=None, **kwargs): """ Assign name to __name__. Add properties and passed-in keyword args to __dict__. Validate assigned validator(s). """ DefaultLayerContainer.__init__(self)
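The added lines give unnamed fields an auto-generated `field.N` name from a module-level counter. A small sketch of the same fallback, using `itertools.count` in place of the global integer (names are illustrative):

```python
import itertools

_field_counter = itertools.count(1)

def next_field_name(name=None):
    # When no explicit name is supplied, synthesize 'field.N' from a
    # shared counter, as the __init__ above does with _field_count.
    if name is None:
        name = 'field.%s' % next(_field_counter)
    return name
```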
453dbec64114494e84772b5599152b8e99e1eeab /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/12165/453dbec64114494e84772b5599152b8e99e1eeab/Field.py
Label(frame["Commands"], text=desc, state=getStatus(status), width=35, anchor=W).grid(row=row, column=1, sticky=W)
Label(frame[menu[3]], text=desc, state=getStatus(status), width=35, anchor=W).grid(row=row, column=1, sticky=W)
def main(): global root, info_line, menu_frame root = Tkinter.Tk() root.title(tabs['Common']['Title']) #-- Create the menu frame, and add menus to the menu frame menu_frame = Tkinter.Frame(root) menu_frame.pack(fill=Tkinter.X, side=Tkinter.TOP) menu_frame.tk_menuBar(file_menu(), help_menu()) #-- Create the info frame and fill with initial contents info_frame = Tkinter.Frame(root) info_frame.pack(fill=Tkinter.X, side=Tkinter.BOTTOM, pady=1) #-- Tabs frame main_frame = conf_tab(info_frame, LEFT) #-- Common frame frame["Common"] = Tkinter.Frame(main_frame()) Label(frame["Common"], text=tabs['Common']['Title'], width=25, background="green").grid(row=0, column=0) Label(frame["Common"], text="", width=35, background="green").grid(row=0, column=1) Label(frame["Common"], text="Программа предназначенная для началь-", width=35).grid(row=1, column=1) Label(frame["Common"], text="ной инициализации и тестирования ап-", width=35).grid(row=2, column=1) Label(frame["Common"], text="паратуры. А так же для ее отладки. 
А", width=35).grid(row=3, column=1) Label(frame["Common"], text="так же для отладки системного кода для", width=35).grid(row=4, column=1) Label(frame["Common"], text=" дальнейшего переноса кода в Линукс ", width=35).grid(row=5, column=1) global common_var #-- Arch subframe common_var["Arch_num"] = IntVar() Label(frame["Common"], text="Arch", width=25, background="lightblue").grid(row=1, column=0) for ar, value in tabs['Common']['Arch']: Radiobutton(frame["Common"], text=ar, value=value, variable=common_var["Arch_num"], anchor=W).grid(row=value+2, column=0, sticky=W) common_var["Arch_num"].set(tabs["Common"]["Arch_num"]) #-- Compiler subframe Label(frame["Common"], text="Compiler", width=25, background="lightblue").grid(row=5, column=0) common_var["Compiler"] = StringVar() Entry(frame["Common"], width=25, textvariable=common_var["Compiler"]).grid(row=6, column=0) common_var["Compiler"].set(tabs["Common"]["Compiler"]) #-- LDFLAGS subframe Label(frame["Common"], text="LDFLAGS", width=25, background="lightblue").grid(row=7, column=0) common_var["Ldflags"] = StringVar() Entry(frame["Common"], width=25, textvariable=common_var["Ldflags"]).grid(row=8, column=0) common_var["Ldflags"].set(tabs["Common"]["Ldflags"]) #-- CFLAGS subframe Label(frame["Common"], text="CFLAGS", width=25, background="lightblue").grid(row=9, column=0) common_var["Cflags"] = StringVar() Entry(frame["Common"], width=25, textvariable=common_var["Cflags"]).grid(row=10, column=0) common_var["Cflags"].set(tabs["Common"]["Cflags"]) #-- Target subframe Label(frame["Common"], text="Target", width=25, background="lightblue").grid(row=11, column=0) common_var["Target"] = StringVar() Entry(frame["Common"], width=25, textvariable=common_var["Target"]).grid(row=12, column=0) common_var["Target"].set(tabs["Common"]["Target"]) #-- Drivers frame frame["Drivers"] = Tkinter.Frame(main_frame()) Label(frame["Drivers"], text="Driver", width=25, background="lightblue").grid(row=0, column=0) Label(frame["Drivers"], 
text="Description", width=35, background="lightblue").grid(row=0, column=1) vard = IntVar() row = 1 for driver, inc, status, desc in tabs['Drivers']: setattr(vard, driver, IntVar()) Checkbutton(frame["Drivers"], text=driver, state=getStatus(status), anchor=W, variable = getattr(vard, driver), \ command=(lambda row=row: onPress(tabs['Drivers'], row-1, 1))).grid(row=row, column=0, sticky=W) getattr(vard, driver).set(inc) Label(frame["Drivers"], text=desc, state=getStatus(status), width=35, anchor=W).grid(row=row, column=1, sticky=W) row = row + 1 #-- Tests frame frame["Tests"] = Tkinter.Frame(main_frame()) Label(frame["Tests"], text="Start testing", width=25, background="lightblue").grid(row=0, column=0) Label(frame["Tests"], text="Description", width=35, background="lightblue").grid(row=0, column=1) vart = IntVar() row = 1 for desc, inc, status, test_name in tabs['Tests']: setattr(vart, test_name, IntVar()) Checkbutton(frame["Tests"], text=test_name, state=getStatus(status), anchor=W, variable = getattr(vart, test_name), \ command=(lambda row=row: onPress(tabs['Tests'], row-1, 1))).grid(row=row, column=0, sticky=W) getattr(vart, test_name).set(inc) Label(frame["Tests"], text=desc, state=getStatus(status), width=35, anchor=W).grid(row=row, column=1, sticky=W) row = row + 1 #-- Commands frame frame["Commands"] = Tkinter.Frame(main_frame()) Label(frame["Commands"], text="Shell commands", width=25, background="lightblue").grid(row=0, column=0) Label(frame["Commands"], text="Description", width=35, background="lightblue").grid(row=0, column=1) varc = IntVar() row = 1 for cmd, pack, inc, status, desc in tabs['Commands']: setattr(varc, cmd, IntVar()) Checkbutton(frame["Commands"], text=cmd, state=getStatus(status), anchor=W, variable = getattr(varc, cmd), \ command=(lambda row=row: onPress(tabs['Commands'], row-1, 2))).grid(row=row, column=0, sticky=W) getattr(varc, cmd).set(inc) Label(frame["Commands"], text=desc, state=getStatus(status), width=35, anchor=W).grid(row=row, 
column=1, sticky=W) row = row + 1 #-- Level frame global level_var frame["Levels"] = Tkinter.Frame(main_frame()) Label(frame["Levels"], text="Verbous level", width=25, background="lightblue").grid(row=0, column=0) Label(frame["Levels"], text="", width=35).grid(row=0, column=1) for i in range( len(tabs['Levels'].keys()) ): name = str(tabs['Levels'].keys()[i]) level_var[name] = IntVar() Checkbutton(frame["Levels"], text=tabs['Levels'].keys()[i], state=NORMAL, anchor=W, \ variable = level_var[name]).grid(row=i+1, column=0, sticky=W) level_var[name].set(tabs["Levels"][name]) #-- Build frame global build_var frame["Build"] = Tkinter.Frame(main_frame()) Label(frame["Build"], text="Build", width=25, background="lightblue").grid(row=0, column=0) Label(frame["Build"], text="", width=35).grid(row=0, column=1) for i in range( len(tabs['Build'].keys()) ): name = str(tabs['Build'].keys()[i]) build_var[name] = IntVar() Checkbutton(frame["Build"], text=tabs['Build'].keys()[i], state=NORMAL, anchor=W, \ variable = build_var[name]).grid(row=i+1, column=0, sticky=W) build_var[name].set(tabs["Build"][name]) #-- build tabs for i in range( len(menu) ): main_frame.add_screen(frame[menu[i]], menu[i]) root.mainloop()
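The change above replaces the hard-coded `frame["Commands"]` with `frame[menu[3]]`, so the label goes to whichever frame the shared `menu` list names at that index. A GUI-free sketch of that lookup (the `menu` and `frames` globals here are illustrative stand-ins for the Tkinter ones):

```python
# Indexing frames through the shared menu list keeps the widget and its
# tab in sync if tabs are ever reordered or renamed in one place.
menu = ['Common', 'Drivers', 'Tests', 'Commands', 'Levels', 'Build']
frames = {name: [] for name in menu}

def add_row(tab_index, desc):
    # frame[menu[3]] rather than frame["Commands"]
    frames[menu[tab_index]].append(desc)
    return menu[tab_index]
```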
dacad817a6c80c16abe6d085d389452e80f65113 /local1/tlutelli/issta_data/temp/all_python//python/2009_temp/2009/2128/dacad817a6c80c16abe6d085d389452e80f65113/configure.py
text1 += _("We hope that you will appreciate GTG. Please send us bug reports and ideas for improvement using : ")
text1 += _("We hope that you will appreciate GTG. Please send us bug reports and ideas for improvement using: ")
def populate() : doc,root = cleanxml.emptydoc("project") #Task 0@1 : Getting started with GTG title1 = _("Getting started with GTG") text1 = _("Welcome in Getting Things Gnome!, your new task manager.") text1 += "\n\n" text1 += _("In GTG, everything is a task. From building a bridge over the Pacific Ocean to changing a light bulb or organizing a party. When you edit a task, it is automatically saved.") text1 += "\n\n" text1 += _("Once a task is done, you can push the &quot;Mark as done&quot; button. If the task is not relevant any-more, simply press &quot;Dismiss&quot;.") text1 += "\n\n" text1 += _("A task might be composed of multiple subtasks that appear as links in the description. Simply click on the following link :") text1 += "\n" text1 += "<subtask>1@1</subtask>\n" text1 += "\n\n" text1 += _("Don't forget to mark this subtask as done !") text1 += "\n\n" text1 += _("Other stuff you should read :") text1 += "\n" text1 += "<subtask>2@1</subtask>\n" text1 += "<subtask>3@1</subtask>\n" text1 += "<subtask>4@1</subtask>\n" text1 += "\n\n" text1 += _("We hope that you will appreciate GTG. Please send us bug reports and ideas for improvement using : ") text1 += "https://bugs.launchpad.net/gtg" text1 += "\n\n" text1 += _("Thank you for trying out GTG :-)") t1 = addtask(doc,"0@1",title1,text1,["1@1","2@1","3@1","4@1"]) root.appendChild(t1) #Task 1@1 : Learn to use subtasks title2 = _("Learn to use subtasks") text2 = _("In the task description (this window), if you begin a line with &quot;-&quot;, it will be considered as a &quot;subtask&quot;, something that needs to be done in order to accomplish your task. 
Just try to write &quot;- test subtask&quot; on the next line and press enter.") text2 += "\n\n" text2 += _("You can also use the &quot;insert subtask&quot; button.") text2 += "\n\n\n" text2 += _("Task and subtasks can be re-organized by drag-n-drop in the tasks list.") text2 += "\n\n" text2 += _("Some concept come with subtasks : for example, a subtask due date can never be after its parent due date.") text2 += "\n\n" text2 += _("Also, marking a parent as done will mark all the subtasks as done.") t2 = addtask(doc,"1@1",title2,text2,[]) root.appendChild(t2) #Task 2@1 : Learn to use tags title3 = _("Learn to use tags") text3 = _("A tag is a simple word that begin with &quot;@&quot;.") text3 += "\n\n" text3 += _("Try to type a word beginning with @ here :") text3 += "\n\n" text3 += _("It becomes yellow, it's a tag.") text3 += "\n\n" text3 += _("Tags are useful to sort your tasks. In the view menu, you can enable a sidebar which displays all the tags you are using so you can easily see tasks for a given tag. There's no limit to the number of tags a task can have.") text3 += "\n\n" text3 += _("If you right click on a tag in the sidebar you can also set its color. It will permit you to have a more colorful list of tasks, if you want it that way.") text3 += "\n\n" text3 += _("A new tag is only added to the current task. There's no recursivity and the tag is not applied to subtasks. But when you create a new subtask, this subtask will inherit the tags of its parent as a good primary default (it will also be the case if you add a tag to a parent just after creating a subtask). Of course, you can modify at any time the tags of this particular subtask. It will never be changed by the parent.") t3 = addtask(doc,"2@1",title3,text3,[]) root.appendChild(t3) #Task 3@1 : Using the Workview title4 = _("Using the Workview") text4 = _("If you press the &quot;Workview&quot; button, only actionable tasks will be displayed.") text4 += "\n\n" text4 += _("What is an actionable task? 
It's a task you can do directly, right now.") text4 += "\n\n" text4 += _("It's a task that is already &quot;start-able&quot;, i.e. the start date is already over.") text4 += "\n\n" text4 += _("It's a task that doesn't have open subtasks, i.e. you can do the task itself directly.") text4 += "\n\n" text4 += _("Thus, the workview will only show you tasks you should do right now.") text4 += "\n\n" text4 += _("If you use tags, you can right click on a tag in the sidebar and choose to not display tasks with this particular tag in the workview. It's very useful if you have a tag like &quot;someday&quot; that you use for tasks you would like to do but are not particularly urgent.") t4 = addtask(doc,"3@1",title4,text4,[]) root.appendChild(t4) #Task 4@1 : Reporting bugs title5 = _("Reporting bugs") text5 = _("GTG is still very alpha software. We like it and use it everyday but you will encounter some bugs.") text5 += "\n\n" text5 += _("Please, report them ! We need you to make this software better. Any contribution, any idea is welcome.") text5 += "\n\n" text5 += _("If you have some trouble with GTG, we might be able to help you or to solve your problem really quickly.") t5 = addtask(doc,"4@1",title5,text5,[]) root.appendChild(t5) return doc
e5bb674db8dc80fc0b68053f5ab1782bdfcbe3f3 /local1/tlutelli/issta_data/temp/all_python//python/2009_temp/2009/7036/e5bb674db8dc80fc0b68053f5ab1782bdfcbe3f3/firstrun_tasks.py
def __init__(self, default='', save_mode=False):
def __init__(self, default="", save_mode=False):
def __init__(self, default='', save_mode=False): gtk.FileChooserButton.__init__(self, _("Python-Fu File Selection")) self.set_action(gtk.FILE_CHOOSER_ACTION_OPEN) if default: self.set_filename(default)
11f094f565c8843d3b52447f6bf7564fe383f605 /local1/tlutelli/issta_data/temp/all_python//python/2009_temp/2009/11058/11f094f565c8843d3b52447f6bf7564fe383f605/gimpfu.py
markeredgewidth=1, linewidth=1, label = plot_name)
markeredgewidth=1, linewidth=2, label = plot_name)
def efficiencyplot(found, missed, col_name, ifo=None, plot_type = 'linear', \ nbins = 40, output_name = None, plotsym = 'k-', plot_name = '', \ title_string = '', errors = False): """ function to plot the difference if col_name_a in two tables against the value of col_name_b in table1. @param found: metaDataTable containing found injections @param missed: metaDataTable containing missed injections @param col_name: name of column used to plot efficiency @param ifo: name of ifo (default = None), used in extracting information (e.g. which eff_dist) @param plot_type: either 'linear' or 'log' plot on x-axis @param plot_sym: the symbol to use when plotting, default = 'k-' @param plot_name: name of the plot (for the legend) @param title_string: extra info for title @param errorbars: plot errorbars on the efficiencies (using binomial errors) default = False """ if not ifo and found.table[0].has_key('ifo'): ifo = found.table[0]["ifo"] foundVal = readcol(found,col_name, ifo) missedVal = readcol(missed,col_name, ifo) if plot_type == 'log': foundVal = log10(foundVal) missedVal = log10(missedVal) step = (max(foundVal) - min(foundVal)) /nbins bins = arange(min(foundVal),max(foundVal), step ) fig_num = gcf().number figure(100) [num_found,binsf,stuff] = hist(foundVal, bins) [num_missed,binsm,stuff] = hist(missedVal ,bins) close(100) figure(fig_num) num_found = array(num_found,'d') eff = num_found / (num_found + num_missed) error = sqrt( num_found * num_missed / (num_found + num_missed)**3 ) error = array(error) if plot_type == 'log': bins = 10**bins if plot_name: semilogx(bins, eff, plotsym,markersize=12, markerfacecolor=None,\ markeredgewidth=1, linewidth=1, label = plot_name) else: semilogx(bins, eff, plotsym,markersize=12, markerfacecolor=None,\ markeredgewidth=1, linewidth=1) if errors: errorbar(bins, eff, error,markersize=12, markerfacecolor=None,\ markeredgewidth=1, linewidth = 1, label = plot_name, \ fmt = plotsym) else: if errors: errorbar(bins, eff, error, fmt = plotsym, 
markersize=12,\ markerfacecolor=None,\ markeredgewidth=1, linewidth=1, label = plot_name) else: plot(bins, eff, plotsym,markersize=12, markerfacecolor=None,\ markeredgewidth=1, linewidth=1, label = plot_name) xlabel(col_name, size='x-large') ylabel('Efficiency', size='x-large') ylim(0,1.1) if ifo: title_string += ' ' + ifo title_string += ' ' + col_name title_string += ' efficiency plot' title(title_string, size='x-large') grid(True) if output_name: if ifo: output_name += '_' + ifo output_name += '_' + col_name + '_eff.png' savefig(output_name)
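`efficiencyplot` above computes a per-bin efficiency `num_found / (num_found + num_missed)` and a binomial error `sqrt(num_found * num_missed / (num_found + num_missed)**3)`. A sketch of just that arithmetic, without the plotting (function name is illustrative):

```python
import math

def efficiency_with_errors(num_found, num_missed):
    # Per-bin efficiency and binomial error, matching the formulas in
    # the plotting function above.
    effs, errs = [], []
    for f, m in zip(num_found, num_missed):
        n = float(f + m)
        effs.append(f / n)
        errs.append(math.sqrt(f * m / n ** 3))
    return effs, errs
```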
d01096d2265b0e3bf51665276108fe5a57425330 /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/3592/d01096d2265b0e3bf51665276108fe5a57425330/viz.py
if file[:1] != '_': path = os.path.join(object.__path__[0], file) modname = modulename(file) if modname and modname not in modpkgs: modpkgs.append(modname) elif ispackage(path): modpkgs.append(file + ' (package)')
path = os.path.join(object.__path__[0], file) modname = inspect.getmodulename(file) if modname and modname not in modpkgs: modpkgs.append(modname) elif ispackage(path): modpkgs.append(file + ' (package)')
def docmodule(self, object, name=None): """Produce text documentation for a given module object.""" name = object.__name__ # ignore the passed-in name namesec = name lines = split(strip(getdoc(object)), '\n') if len(lines) == 1: if lines[0]: namesec = namesec + ' - ' + lines[0] lines = [] elif len(lines) >= 2 and not rstrip(lines[1]): if lines[0]: namesec = namesec + ' - ' + lines[0] lines = lines[2:] result = self.section('NAME', namesec)
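The fix above swaps a local `modulename()` helper for the standard `inspect.getmodulename`, which returns the module name for a source-like file and `None` otherwise. A sketch of the package-listing loop built on it:

```python
import inspect

def list_submodules(filenames):
    # Collect unique module names from a directory listing, skipping
    # files that are not modules (getmodulename returns None for those).
    names = []
    for fname in filenames:
        modname = inspect.getmodulename(fname)
        if modname and modname not in names:
            names.append(modname)
    return names
```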
af4215be4e43ddf4f8775f622d9db3233f712894 /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/12029/af4215be4e43ddf4f8775f622d9db3233f712894/pydoc.py
'stylesheet_path': fixpath('data/stylesheets/pep.css'), 'tab_width': 8, 'template': fixpath('data/pep-html-template'), 'trim_footnote_reference_space': 1}, 'two': {'footnote_references': 'superscript', 'generator': 0,
u'stylesheet_path': fixpath(u'data/stylesheets/pep.css'), u'tab_width': 8, u'template': fixpath(u'data/pep-html-template'), u'trim_footnote_reference_space': 1}, 'two': {u'footnote_references': u'superscript', u'generator': 0,
def fixpath(path): return os.path.abspath(os.path.join(*(path.split('/'))))
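`fixpath` above converts a `'/'`-separated relative path into a platform-correct absolute path by splitting on `'/'` and rejoining with `os.path.join`. The same helper, verbatim in spirit:

```python
import os

def fixpath(path):
    # Split on '/' so the path in the settings dict stays portable,
    # then rejoin with the platform separator and absolutize.
    return os.path.abspath(os.path.join(*path.split('/')))
```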
457c6201c5a142eed56fbbbbf566fe842c81fdb4 /local1/tlutelli/issta_data/temp/all_python//python/2006_temp/2006/1532/457c6201c5a142eed56fbbbbf566fe842c81fdb4/test_settings.py
def test_attributes(self): p = self.thetype(hex) try: del p.__dict__ except TypeError: pass else: self.fail('partial object allowed __dict__ to be deleted')
def f(x, y): x // y
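The `test_attributes` case above expects `del p.__dict__` on a `functools.partial` object to raise `TypeError`. A sketch of a predicate capturing that check (my understanding of CPython's behavior; the helper name is illustrative):

```python
import functools

def dict_delete_blocked(p):
    # CPython's partial objects expose __dict__ but refuse to let it
    # be deleted, raising TypeError, which the test above relies on.
    try:
        del p.__dict__
    except TypeError:
        return True
    return False
```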
a34f87f98120bb470136f3be212c67d5ef981379 /local1/tlutelli/issta_data/temp/all_python//python/2010_temp/2010/8546/a34f87f98120bb470136f3be212c67d5ef981379/test_functools.py
try:
if hasattr(system.__class__, 'chdir'):
def syseval(system, cmd, dir=None): if dir: try: system.chdir(dir) except (AttributeError, TypeError): pass try: return system.eval(cmd, sage_globals, locals = sage_globals) except TypeError: return system.eval(cmd)
9274eb20f7143986d488113fbbed13f2d2d8bd71 /local1/tlutelli/issta_data/temp/all_python//python/2009_temp/2009/9890/9274eb20f7143986d488113fbbed13f2d2d8bd71/support.py
if options.output:
if hasattr(options, 'output'):
#~ def render(text, output, options): #~ """helper function for tests. scan the given image and create svg output""" #~ import pprint #~ aaimg = AsciiArtImage(text) #~ print text #~ aaimg.recognize() #~ aav = aa.AsciiOutputVisitor() #~ pprint.pprint(aaimg.shapes) #~ aav.visit(aaimg) #~ print aav #~ svgout = svg.SVGOutputVisitor( #~ file('aafigure_%x.svg' % (long(hash(text)) & 0xffffffffL,), 'w'), #~ scale = 10 #~ ) #~ svgout.visit(aaimg)
d9770d814e0873e62a4c9ada4481f65c646fa0e6 /local1/tlutelli/issta_data/temp/all_python//python/2009_temp/2009/5620/d9770d814e0873e62a4c9ada4481f65c646fa0e6/aafigure.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 468, 98, 1652, 1743, 12, 955, 16, 876, 16, 702, 4672, 468, 98, 3536, 4759, 445, 364, 7434, 18, 4135, 326, 864, 1316, 471, 752, 9804, 876, 8395, 468, 98, 1930, 18771, 468, 98, 279, 4581, 75...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 468, 98, 1652, 1743, 12, 955, 16, 876, 16, 702, 4672, 468, 98, 3536, 4759, 445, 364, 7434, 18, 4135, 326, 864, 1316, 471, 752, 9804, 876, 8395, 468, 98, 1930, 18771, 468, 98, 279, 4581, 75...
print "Opening browser"
def run_command(self, options={}, args=[]): import webbrowser print "Opening browser" params = {} if options.description: params['description'] = options.description if options.description: params['cachefile'] = options.cachefile if options.owner: params['owner'] = options.owner if options.url: params['url'] = options.url if options.ifos: for ifo in mkIfos(options.ifos.split(','),warn=True).split(','): params['ifo_'+ifo] = 'CHECKED' if options.gpstime: params['gpstime'] = options.gpstime if params: url = urljoin(options.server, "/searchResults") url += "?" + urlencode(params) else: url = urljoin(options.server, "/search") if options.browser: webbrowser.open_new(url) else: server = Server(options.server) rv = server.search(**params) for result in rv['results']: printAnalysis(result) printErrors(rv)
d85258fbd97f497c005ca78e3bd1ffbe064b7fe8 /local1/tlutelli/issta_data/temp/all_python//python/2008_temp/2008/5758/d85258fbd97f497c005ca78e3bd1ffbe064b7fe8/__init__.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1086, 67, 3076, 12, 2890, 16, 702, 28793, 833, 33, 8526, 4672, 1930, 3311, 11213, 225, 859, 273, 2618, 309, 702, 18, 3384, 30, 859, 3292, 3384, 3546, 273, 702, 18, 3384, 309, 702, 18, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1086, 67, 3076, 12, 2890, 16, 702, 28793, 833, 33, 8526, 4672, 1930, 3311, 11213, 225, 859, 273, 2618, 309, 702, 18, 3384, 30, 859, 3292, 3384, 3546, 273, 702, 18, 3384, 309, 702, 18, ...
return (cmp(astat[0], bstat[0]) or cmp(astat[1], bstat[1]) or cmp(astat[2], bstat[2]) or cmp(astat[3], bstat[3]))
if len(bstat) == 5: if bstat[0] == bstat[1] == bstat[2] == bstat[3] == "0": return -1 if len(astat) == 5: if astat[0] == astat[1] == astat[2] == astat[3] == "0": return 1 return (cmp((10000 * astat[3] + 1000*astat[2] + 100 * astat[1] + astat[0]), (10000 * bstat[3] + 1000*bstat[2] + 100 * bstat[1] + bstat[0])))
def status_cmp(a, b): bstat = build_status_vals(b) astat = build_status_vals(a)
e4357c01db7c9139998d37e45875ae7b887e933d /local1/tlutelli/issta_data/temp/all_python//python/2010_temp/2010/7314/e4357c01db7c9139998d37e45875ae7b887e933d/build.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1267, 67, 9625, 12, 69, 16, 324, 4672, 324, 5642, 273, 1361, 67, 2327, 67, 4524, 12, 70, 13, 3364, 270, 273, 1361, 67, 2327, 67, 4524, 12, 69, 13, 2, 0, 0, 0, 0, 0, 0, 0, 0, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1267, 67, 9625, 12, 69, 16, 324, 4672, 324, 5642, 273, 1361, 67, 2327, 67, 4524, 12, 70, 13, 3364, 270, 273, 1361, 67, 2327, 67, 4524, 12, 69, 13, 2, -100, -100, -100, -100, -100, ...
self.headers['Cache-Control'] = 'no-cache'
def __init__(self, environ): self.environ = environ # This isn't "state" really, since the object is derivative: self.headers = EnvironHeaders(environ) # Default caching of requests using Response to not cache self.headers['Cache-Control'] = 'no-cache' defaults = self.defaults._current_obj() self.charset = defaults.get('charset') if self.charset: # There's a charset: params will be coerced to unicode. In that # case, attempt to use the charset specified by the browser browser_charset = self.determine_browser_charset() if browser_charset: self.charset = browser_charset self.errors = defaults.get('errors', 'strict') self.decode_param_names = defaults.get('decode_param_names', False) self._languages = None
858f63097dc501682148fb67d360d89f3453188f /local1/tlutelli/issta_data/temp/all_python//python/2007_temp/2007/2097/858f63097dc501682148fb67d360d89f3453188f/wsgiwrappers.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1001, 2738, 972, 12, 2890, 16, 5473, 4672, 365, 18, 28684, 273, 5473, 468, 1220, 5177, 1404, 315, 2019, 6, 8654, 16, 3241, 326, 733, 353, 16417, 30, 365, 18, 2485, 273, 27912, 3121, 12...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1001, 2738, 972, 12, 2890, 16, 5473, 4672, 365, 18, 28684, 273, 5473, 468, 1220, 5177, 1404, 315, 2019, 6, 8654, 16, 3241, 326, 733, 353, 16417, 30, 365, 18, 2485, 273, 27912, 3121, 12...
print "]"
print "]"
def main(): if __name__ != '__main__': return if sys.argv[1:] == ['-g']: for statements, kind in ((exec_tests, "exec"), (single_tests, "single"), (eval_tests, "eval")): print kind+"_results = [" for s in statements: print repr(to_tuple(compile(s, "?", kind, 0x400)))+"," print "]" print "main()" raise SystemExit test_main()
d0b5c412a69e26255972e86ba320eba3d19747ea /local1/tlutelli/issta_data/temp/all_python//python/2009_temp/2009/6753/d0b5c412a69e26255972e86ba320eba3d19747ea/test_ast.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 2774, 13332, 309, 1001, 529, 972, 480, 4940, 5254, 972, 4278, 327, 309, 2589, 18, 19485, 63, 21, 26894, 422, 10228, 17, 75, 3546, 30, 364, 6317, 16, 3846, 316, 14015, 4177, 67, 16341, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 2774, 13332, 309, 1001, 529, 972, 480, 4940, 5254, 972, 4278, 327, 309, 2589, 18, 19485, 63, 21, 26894, 422, 10228, 17, 75, 3546, 30, 364, 6317, 16, 3846, 316, 14015, 4177, 67, 16341, ...
rms first mass out __rms.dat
rms first mass out __RMS.dat
def printusage(): import sys print "analyzeConvergence.py <mdout> <mdcrd> <prmtop> {<gnuplot_script_prefix>}" sys.exit()
a39f83c60a278365cde5a32c526bdbef17d0a810 /local1/tlutelli/issta_data/temp/all_python//python/2010_temp/2010/8376/a39f83c60a278365cde5a32c526bdbef17d0a810/analyzeConvergence.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1172, 9167, 13332, 1930, 2589, 1172, 315, 304, 9508, 442, 502, 15570, 18, 2074, 411, 1264, 659, 34, 411, 1264, 3353, 72, 34, 411, 683, 1010, 556, 34, 288, 32, 1600, 89, 4032, 67, 4263,...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1172, 9167, 13332, 1930, 2589, 1172, 315, 304, 9508, 442, 502, 15570, 18, 2074, 411, 1264, 659, 34, 411, 1264, 3353, 72, 34, 411, 683, 1010, 556, 34, 288, 32, 1600, 89, 4032, 67, 4263,...
for result in data['responseData']['results']: if result: print '[%s]'%(urllib.unquote(result['titleNoFormatting'])) print result['content'].strip("<b>...</b>").replace("<b>",'').replace("</b>",'').replace("&#39;","'").strip() print urllib.unquote(result['unescapedUrl'])+'\n'
if data['responseStatus'] == 200: for result in data['responseData']['results']: if result: print '[%s]'%(urllib.unquote(result['titleNoFormatting'])) print result['content'].strip("<b>...</b>").replace("<b>",'').replace("</b>",'').replace("&#39;","'").strip() print urllib.unquote(result['unescapedUrl'])+'\n'
def __search__(self,print_results = False): results = [] for page in range(0,self.pages): args = {'q' : self.query, 'v' : '1.0', 'start' : page, 'rsz': RSZ_LARGE, 'safe' : SAFE_OFF, 'filter' : FILTER_ON, } q = urllib.urlencode(args) search_results = urllib.urlopen(URL+q) data = json.loads(search_results.read()) if print_results: for result in data['responseData']['results']: if result: print '[%s]'%(urllib.unquote(result['titleNoFormatting'])) print result['content'].strip("<b>...</b>").replace("<b>",'').replace("</b>",'').replace("&#39;","'").strip() print urllib.unquote(result['unescapedUrl'])+'\n' results.append(data) return results
52bde302eb267a4f6ed28911af4539771bc8e9c0 /local1/tlutelli/issta_data/temp/all_python//python/2010_temp/2010/7667/52bde302eb267a4f6ed28911af4539771bc8e9c0/pygoogle.py
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1001, 3072, 972, 12, 2890, 16, 1188, 67, 4717, 273, 1083, 4672, 1686, 273, 5378, 364, 1363, 316, 1048, 12, 20, 16, 2890, 18, 7267, 4672, 833, 273, 13666, 85, 11, 294, 365, 18, 2271, ...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...
[ 1, 8585, 326, 22398, 316, 326, 981, 30, 1652, 1001, 3072, 972, 12, 2890, 16, 1188, 67, 4717, 273, 1083, 4672, 1686, 273, 5378, 364, 1363, 316, 1048, 12, 20, 16, 2890, 18, 7267, 4672, 833, 273, 13666, 85, 11, 294, 365, 18, 2271, ...