The Python WSGI Utility Library
Werkzeug is a WSGI utility library for Python. It's widely used and BSD licensed.
from werkzeug.wrappers import Request, Response

@Request.application
def application(request):
    return Response('Hello World!')

if __name__ == '__main__':
    from werkzeug.serving import run_simple
    run_simple('localhost', 4000, application)
Werkzeug is the base of frameworks such as Flask and more, as well as of in-house frameworks developed for commercial products and websites.
Have you looked at werkzeug.routing? It's hard to find anything that's simpler, more self-contained, or purer-WSGI than Werkzeug, in general — I'm quite a fan of it! — Alex Martelli
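For a taste of werkzeug.routing, mentioned in the quote above, here is a minimal hypothetical sketch; the URL rules are invented for illustration:

from werkzeug.routing import Map, Rule

url_map = Map([
    Rule('/', endpoint='index'),
    Rule('/user/<int:user_id>', endpoint='user'),
])

# Bind the map to a host name to get a URL adapter, then match a path.
urls = url_map.bind('example.com')
print(urls.match('/user/42'))  # -> ('user', {'user_id': 42})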
Found a bug? Have a good idea for improving Werkzeug? Head over to Werkzeug's new github page and create a new ticket or fork. If you just want to chat with fellow developers, visit the IRC channel or join the mailing list.
http://werkzeug.pocoo.org/
I have the word infinity appear in my output and I am trying to change its color when it becomes an actual number.
So when I check to see if it isFinite, it changes to orange, but when it actually becomes a number I can't get it to change to black. I am so close with this; am I writing this wrong?
<span ng-
Am I writing this wrong?
Yes... several issues
isFinite is a function, first off. Also, it lives in the global namespace, whereas Angular expressions are evaluated against scope variables.
You would need to define isFinite(testValue) in Angular scope (the controller) to use it in the view:
// be sure to inject $window in controller
$scope.isFinite = $window.isFinite;
And in the view it would be:
ng-class="{'test3':isFinite(users2) , 'test2':!isFinite(users2)}"
Reference MDN isFinite()
https://codedump.io/share/NeMgLlz0Kohu/1/angularjs-check-if-ng-model--isfinite
ContactData Class
The ContactData class returns information about a pinned link when the GetUserColleagues method of the UserProfileService class is called.
The User Profile Web service namespace is an arbitrary name for a reference to the UserProfileService.asmx Web service in Microsoft Office SharePoint Server 2007.
You access the Web Services Description Language (WSDL) for the User Profile Web service endpoint through UserProfileService.asmx?wsdl.
The following example shows the format of the URL to the User Profile Web service WSDL file.
If you do not have a custom site, you can use the following URL temporarily:
It is recommended that you create a custom site, and then use the URL that includes the custom site in the URL format.
The following table describes each element in the URL.
For more information about the WSDL format, see the World Wide Web Consortium (W3C) WSDL specification.
For code examples about how to use the ContactData class, see How to: Use the Web Service to Find What's Common Between Two User Profiles and How to: Use the Web Service to Retrieve Profile Data of a User.
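As a rough illustration of the call that returns ContactData objects, here is a hedged C# sketch; the site URL, credentials and account name are placeholders, and it assumes a proxy class generated from the WSDL described above:

// UserProfileService is the proxy generated from UserProfileService.asmx?wsdl.
UserProfileService svc = new UserProfileService();
svc.Url = "http://MyServer/sites/MySite/_vti_bin/UserProfileService.asmx";
svc.Credentials = System.Net.CredentialCache.DefaultCredentials;

// GetUserColleagues returns one ContactData per pinned colleague link.
ContactData[] colleagues = svc.GetUserColleagues(@"DOMAIN\username");
foreach (ContactData contact in colleagues)
{
    Console.WriteLine(contact.Name);
}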
User Profile.ContactData
https://msdn.microsoft.com/en-us/library/aa980910(v=office.12).aspx
I'm hoping anybody could help me with the following.
I have 2 lists of arrays, arr1 and arr2, which should be linked to each other. Each list stands for a certain object.

import numpy as np
arr1 = [np.array([1, 2, 3]), np.array([1, 2]), np.array([2, 3])]
arr2 = [np.array([20, 50, 30]), np.array([50, 50]), np.array([75, 25])]

The arrays in arr1 hold column indices and the arrays in arr2 hold the corresponding values, so the 1 in arr1 corresponds to the 20 in arr2, and so on per row. The desired output is a dense array with the arr2 values placed at the arr1 column positions:

array([[ 0, 20, 50, 30],
       [ 0, 50, 50,  0],
       [ 0,  0, 75, 25]])
Here's an almost* vectorized approach -
lens = np.array([len(i) for i in arr1])   # number of entries per row
N = len(arr1)
row_idx = np.repeat(np.arange(N), lens)   # row index repeated for every value
col_idx = np.concatenate(arr1)            # column indices, flattened
M = col_idx.max() + 1
out = np.zeros((N, M), dtype=int)
out[row_idx, col_idx] = np.concatenate(arr2)
*: Almost, because of the list comprehension at the start, but that should be computationally negligible as it doesn't involve any real computation.
https://codedump.io/share/aGnqYrGqSwtT/1/combine-list-of-numpy-arrays-and-reshape
Doing BFS right-to-left means we can simply return the last node's value and don't have to keep track of the first node in the current row or even care about rows at all. Inspired by @fallcreek's solution (not published) which uses two nested loops to go row by row but already had the right-to-left idea making it easier. I just took that further.
Python:
def findLeftMostNode(self, root):
    queue = [root]
    for node in queue:
        # Appending while iterating: the for loop walks the growing queue,
        # right child first, so the last node visited is the bottom-left one.
        queue += filter(None, (node.right, node.left))
    return node.val
Java:
public int findLeftMostNode(TreeNode root) {
    Queue<TreeNode> queue = new LinkedList<>();
    queue.add(root);
    while (!queue.isEmpty()) {
        root = queue.poll();
        if (root.right != null) queue.add(root.right);
        if (root.left != null) queue.add(root.left);
    }
    return root.val;
}
dude you do know whole lotta tricks, you just simply switch the order of left and right while i spent time to write the naive level-order traversal, brilliant!
@SB.Hu Well like I said, I got that idea from @fallcreek.
Edit: I just checked again, actually @fallcreek's solution does have nested loops, the inner loop going over one whole level. But the right-to-left idea was already there, and I just took full advantage of it.
@StefanPochmann aha, didn't notice there are description on top, good job anyway.
Dude, thanks for inspiring. Just one quick question about the "(node.right, node.left)" that I did not figure out by researching: does that work like appending node.right first and then node.left to the queue? And is "(node.right, node.left)" just a group of elements, or list-like?
Thanks very much!
@StefanPochmann I have a question: For below example, the code will return 4.
It is not left leaf, why it is correct?
  1
 / \
2   3
 \ /
 4 5
@yanchun said in Right-to-Left BFS (Python + Java):
It is not left leaf
Yes. And?
@yanchun hi, this question asks for left most value of last level, it does not have to be a left left. Hope it helps.
@StefanPochmann
Got it. I think I misunderstand the question. It is not left node value. Thank you.
@jinping
Now I got it. I think I misunderstand the question. :) It is not left node value. Thank you.
Whoa, I didn't know a for-loop variable keeps the last element after the loop ends.
And since we never pop anything from the queue, we don't have to use a deque instead.
@Hellokitty_2015 What does that have to do with this thread / my solution?
@StefanPochmann brilliant idea. A C++ version:
int findBottomLeftValue(TreeNode* root) {
    queue<TreeNode*> q;
    q.push(root);
    TreeNode* last;
    while (!q.empty()) {
        last = q.front(), q.pop();
        if (last->right) q.push(last->right);
        if (last->left) q.push(last->left);
    }
    return last->val;
}
https://discuss.leetcode.com/topic/78981/right-to-left-bfs-python-java
Himanshu Vashishtha commented on HBASE-9539:
--------------------------------------------
Thanks for the review Stack.
Re: naming: This DOT_XXX naming is borrowed from existing code, the NameSpaceUpgrade class. But no problem changing them.
Re: +if (upgrade) {
Yes, that is to add mutually exclusive behavior in the upgrade script between the two modes (check and upgrade). I think it is good to have.
Re: more javadoc
Sure, will add more of it.
bq. Does this make it so this tool only works against pre-0.96 snapshot layout?
Not really. This patch adds more paths to look for when we want to open the FileLink (the FileLink takes an array of paths to check when opening the link). So, we just add the pre-96 snapshot layout to those options, and make it work on a pre-96 layout (we are not removing post-96 related paths).
> Handle post namespace snapshot files when checking for HFile V1
> -----------------------------------------------------------------
>
> Key: HBASE-9539
> URL:
> Project: HBase
> Issue Type: Bug
> Components: migration
> Affects Versions: 0.95.2
> Reporter: Himanshu Vashishtha
> Fix For: 0.98.0, 0.96.0
>
> Attachments: HBase-9539.patch
>
>
> When checking for HFileV1 before upgrading to 96, the snapshot file links try to read
> from post-namespace locations. The migration script needs to be run on a 94 cluster, and it
> requires reading the old (94) layout to check for HFileV1.
> {code}
> Got exception while reading trailer for file: hdfs://xxx:41020/cops/cluster_collection_events_snapshot/2086db948c484be62dcd76c170fe0b17/meta/cluster_collection_event=42037b88dbc34abff6cbfbb1fde2c900-c24b358ddd2f4429a7287258142841a2
> java.io.FileNotFoundException: Unable to open link: org.apache.hadoop.hbase.io.HFileLink
locations=[hdfs://xxx:41020/hbase-96/.tmp/archive/data/default/cluster_collection_event/42037b88dbc34abff6cbfbb1fde2c900/meta/c24b358ddd2f4429a7287258142841a2]
> {code}
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see:
http://mail-archives.apache.org/mod_mbox/hbase-issues/201309.mbox/%3CJIRA.12668695.1379268479455.141289.1379356733628@arcas%3E
Primarily, I suggested this change on the ML, but I didn't explain the details to Nagano-san. Sorry.

Paul Eggert <address@hidden> writes:

>> From: Keiichiro Nagano <address@hidden>
>> Date: Mon, 22 Apr 2002 05:15:17 +0900
>>
>> init_buffer uses the environment variable PWD to identify the current
>> working directory. I think we should not use it on Windows. On
>> Windows with Cygwin, PWD is unreliable and confusing
>
> PWD is unreliable on all platforms, but Emacs works around the problem
> with a similar method on all platforms by statting $PWD and ".", and
> using $PWD only if stat results agree. What is the problem with
> this workaround on Windows?

Yes, the current Emacs implementation compares the result of stat("."), but stat() on Windows cannot strictly emulate the behavior, especially for inodes. So if the "PWD" environment variable is wrongly set, default_directory would also be wrongly set.

But, in the first place, is this code necessary on all platforms? Even now, is it really efficient on almost all of the platforms? I don't think we should stick to such hacked code for the simple job of getting the current directory.

FYI, I've tested the following code on Debian GNU/Linux (sid) to check the efficiency.

----------------------------------------
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

#define MAXPATHLEN 2048

#ifndef DIRECTORY_SEP
#define DIRECTORY_SEP '/'
#endif
#ifndef IS_DIRECTORY_SEP
#define IS_DIRECTORY_SEP(_c_) ((_c_) == DIRECTORY_SEP)
#endif
#ifndef IS_DEVICE_SEP
#ifndef DEVICE_SEP
#define IS_DEVICE_SEP(_c_) 0
#else
#define IS_DEVICE_SEP(_c_) ((_c_) == DEVICE_SEP)
#endif
#endif

/* The body of this function was mangled in the archive; the version
   below follows the workaround described above: use $PWD only if
   stat($PWD) and stat(".") agree. */
char *
getcwd_stat (char *buf)
{
  char *pwd;
  struct stat pwdstat, dotstat;
  if ((pwd = getenv ("PWD")) != 0
      && strlen (pwd) < MAXPATHLEN
      && stat (pwd, &pwdstat) == 0
      && stat (".", &dotstat) == 0
      && dotstat.st_ino == pwdstat.st_ino
      && dotstat.st_dev == pwdstat.st_dev)
    strcpy (buf, pwd);
  else if (getcwd (buf, MAXPATHLEN + 1) == 0)
    return NULL;
  return buf;
}

char *
getcwd_normal (char *buf)
{
  return getcwd (buf, MAXPATHLEN + 1);
}

int
main (int argc, char **argv)
{
  char buf[MAXPATHLEN + 1];
  int i;

  if (argv[1][0] == 's')
    {
      for (i = 0; i < 1000000; i++)
        getcwd_stat (buf);
    }
  else
    {
      for (i = 0; i < 1000000; i++)
        getcwd_normal (buf);
    }
  return 1;
}
----------------------------------------

The results are:

[~:] time ./a.out s
./a.out s  1.22s user 2.68s system 100% cpu 3.883 total
[~:] time ./a.out n
./a.out n  0.29s user 0.92s system 100% cpu 1.200 total

----------------------------------------

Of course, this check only covers a narrow situation. It may depend on the file system where the current directory is located. But at least, I can state that there exist situations where the hacked code is much slower than a simple getcwd() call.

Therefore, I'd like to propose removing this hacked optimization from init_buffer(). Do you have any comments?

With regards,
from himi
https://lists.gnu.org/archive/html/emacs-devel/2002-04/msg00782.html
4. The Smart Card Product Model
The demo provided by cookiecutter-django-shop, using the product model “smartcard”, shows how to set up a shop with a single product type. In our example we use a Smart Card for it. Here the Django model is managed by the merchant implementation.
Smart Cards have many different attributes such as their card type, the manufacturer, storage capacity and the maximum transfer speed. Here it’s the merchant’s responsibility to create the database model according to the physical properties of the product. The model class to describe a Smart Card therefore is not part of the shop’s framework, but rather in the merchant’s implementation as found in our example.
Creating a customized product model, requires only a few lines of declarative Python code. Here is a simplified example:
from django.db import models
from shop.models.product import BaseProduct, BaseProductManager, CMSPageReferenceMixin
from shop.money.fields import MoneyField

# ProductPage and ProductImage are the through-models for the two
# many-to-many relations below; they belong to the merchant's
# implementation and are not shown here.

class SmartCard(CMSPageReferenceMixin, BaseProduct):
    product_name = models.CharField(
        max_length=255,
        verbose_name="Product Name",
    )

    slug = models.SlugField(verbose_name="Slug")

    caption = models.TextField(
        "Caption",
        help_text="Short description used in the catalog's list view.",
    )

    description = models.TextField(
        "Description",
        help_text="Long description used in the product's detail view.",
    )

    order = models.PositiveIntegerField(
        "Sort by",
        db_index=True,
    )

    cms_pages = models.ManyToManyField(
        'cms.Page',
        through=ProductPage,
        help_text="Choose list view this product shall appear on.",
    )

    images = models.ManyToManyField(
        'filer.Image',
        through=ProductImage,
    )

    unit_price = MoneyField(
        "Unit price",
        decimal_places=3,
        help_text="Net price for this product",
    )

    card_type = models.CharField(
        "Card Type",
        choices=[(t, t) for t in ('SD', 'SDXC', 'SDHC', 'SDHC II')],
        max_length=9,
    )

    product_code = models.CharField(
        "Product code",
        max_length=255,
        unique=True,
    )

    storage = models.PositiveIntegerField(
        "Storage Capacity",
        help_text="Storage capacity in GB",
    )

    class Meta:
        verbose_name = "Smart Card"
        verbose_name_plural = "Smart Cards"
        ordering = ['order']

    lookup_fields = ['product_code__startswith', 'product_name__icontains']

    objects = BaseProductManager()

    def get_price(self, request):
        return self.unit_price

    def __str__(self):
        return self.product_name

    @property
    def sample_image(self):
        return self.images.first()
Let’s examine this product model. Our SmartCard inherits from the abstract shop.models.product.BaseProduct, which is the base class for any product. It only contains a minimal amount of fields, because django-SHOP doesn’t make any assumptions about the product’s properties. Additionally this class inherits from the mixin shop.models.product.CMSPageReferenceMixin, which adds some functionality to handle CMS pages as product categories.
In this class declaration, we use one field for each physical property of our Smart Cards, such as card type, storage, transfer speed, etc. Using one field per property allows us to build much simpler interfaces, unlike e-commerce solutions that use a one-size-fits-all approach, attempting to represent all of a product's possible properties in a single generic model. Otherwise, this product model class behaves exactly like any other Django model.
In addition to the properties, the example above contains these extra fields:
slug: This is the URL part after the category part.
order: This is an integer field to remember the sorting order of products.
cms_pages: A list of CMS pages, this product shall appear on.
images: A list of images of this product.
The list in lookup_fields is used by the Select2 widget when searching for a product. This is often required while setting internal links onto products.

In django-SHOP, the field unit_price is optional. Instead, each product class must provide a method get_price(), which shall return the unit price for the catalog's list view. This is because products may have variations with different price tags, or prices for different groups of customers. Therefore the unit price must be computed per request, rather than being hard coded into a database column.
5. An Internationalized Smart Card Model
If support for multiple languages (I18N) is enabled in the demo provided by cookiecutter-django-shop, the product model for our Smart Card changes slightly.
First ensure that django-parler is installed and 'parler' is listed in the project's INSTALLED_APPS. Then import some extra classes into the project's models.py and adapt the product class. Only the relevant changes to our model class are shown here:
...
from parler.managers import TranslatableManager, TranslatableQuerySet
# These parler imports are implied by the classes used below:
from parler.models import TranslatableModelMixin, TranslatedField, TranslatedFieldsModel
from polymorphic.query import PolymorphicQuerySet
...

class ProductQuerySet(TranslatableQuerySet, PolymorphicQuerySet):
    pass


class ProductManager(BaseProductManager, TranslatableManager):
    queryset_class = ProductQuerySet

    def get_queryset(self):
        qs = self.queryset_class(self.model, using=self._db)
        return qs.prefetch_related('translations')


class SmartCard(CMSPageReferenceMixin, TranslatableModelMixin, BaseProduct):
    ...
    caption = TranslatedField()
    description = TranslatedField()
    ...


class SmartCardTranslation(TranslatedFieldsModel):
    master = models.ForeignKey(
        SmartCard,
        related_name='translations',
        null=True,
    )

    caption = models.TextField(
        "Caption",
        help_text="Short description used in the catalog's list view.",
    )

    description = models.TextField(
        "Description",
        help_text="Long description used in the product's detail view.",
    )

    class Meta:
        unique_together = [('language_code', 'master')]
For this model we decided to translate the fields caption and description. The product name of a Smart Card is international anyway and doesn't have to be translated into different languages. Hence we use a translatable field neither for the product name nor for its slug. On the other hand, if it makes sense to translate the product name, then we'd simply move these fields into the related class SmartCardTranslation. This gives us all the flexibility we need to model our products according to their physical properties, and spares the site administrator from entering redundant data through the administration backend while creating or editing an instance.
6. Add Product Model to Django Admin
In order to make our Smart Card editable, we have to register it in the Django administration backend:
from django.contrib import admin
from adminsortable2.admin import SortableAdminMixin
from shop.admin.product import CMSPageAsCategoryMixin, ProductImageInline, InvalidateProductCacheMixin
from myshop.models import SmartCard


@admin.register(SmartCard)
class SmartCardAdmin(InvalidateProductCacheMixin, SortableAdminMixin,
                     CMSPageAsCategoryMixin, admin.ModelAdmin):
    fields = ['product_name', 'slug', 'product_code', 'unit_price',
              'active', 'caption', 'description', 'storage', 'card_type']
    inlines = [ProductImageInline]
    prepopulated_fields = {'slug': ['product_name']}
    list_display = ['product_name', 'product_code', 'unit_price', 'active']
This is a typical implementation of a Django ModelAdmin. This class uses a few additions however:
shop.admin.product.InvalidateProductCacheMixin: After saving a product instance, all caches are going to be cleared.
adminsortable2.admin.SortableAdminMixin: Is used to add sorting capabilities to the backend list view.
shop.admin.product.CMSPageAsCategoryMixin: Is used to assign a product to one or more CMS pages, tagged as Categories.
shop.admin.product.ProductImageInline: Is used to assign one or more images to a product and sort them accordingly.
6.1. With I18N support
If multilingual support is required, then we also must add a possibility to make some fields translatable:
from parler.admin import TranslatableAdmin
...

class SmartCardAdmin(InvalidateProductCacheMixin, SortableAdminMixin,
                     TranslatableAdmin, CMSPageAsCategoryMixin, admin.ModelAdmin):
    ...
For details, please refer to the documentation provided by django-parler.
7. Next Chapter
In the next chapter of this tutorial, we will see how to organize the Cart and Checkout.
https://django-shop.readthedocs.io/en/latest/tutorial/smartcard-product.html
On Aug 6, 2007, at 12:06, Xell Zhang wrote:
> I have compiled and installed SVN 1.4.4 and want to configure it to
> run by using apache2. (On Ubuntu 7.04)
> I followed the book <Version Control with Subversion> and apache2
> successfully started up with svn mods. My configuration in
> httpd.conf is like below:
> <Location /svn>
> DAV svn
> SVNParentPath /var/svnroot
> SVNListParentPath on
> AuthType Basic
> AuthName "Subversion repository"
> AuthUserFile /etc/svn-auth-file
> Require valid-user
> </Location>
>
> My apache is started as apache:apache user so I change the owner
> of /var/svnroot from root to apache.apache.
> Then I use:
> svn import dummy.test -m "test"
> to create the very first test. But after I input the password I got:
> svn: PROPFIND request failed on '/svn/test'
> svn: Could not open the requested SVN filesystem
>
> In apache2 error logs I found:
> [Tue Aug 07 01:00:02 2007] [error] [client 127.0.0.1] (20014)
> Internal error: Can't open file '/var/svnroot/test/format': No such
> file or directory
> [Tue Aug 07 01:00:02 2007] [error] [client 127.0.0.1] Could not
> fetch resource information. [500, #0]
> [Tue Aug 07 01:00:02 2007] [error] [client 127.0.0.1 ] Could not
> open the requested SVN filesystem [500, #2]
> [Tue Aug 07 01:00:02 2007] [error] [client 127.0.0.1] Could not
> open the requested SVN filesystem [500, #2]
>
> Then I use:
> sudo svnadmin create /var/svnroot/dummy
> It seems successful to create a repository because in my browser
> when I visit "" I can get a page of
> which content is like:
> Revision 0: /Powered by Subversion version 1.4.4 (r25188).
>
> So I don't know why I cannot import a file. I google this for a
> long time but cannot find useful information...
> Thanks for helping!
When you use "SVNParentPath /var/svnroot" you are informing
Subversion that you would like to host multiple repositories under
that directory. Each repository must be created with "svnadmin
create" before it can be used. You created a repository called
"dummy" and were able to access it just fine. You should be able to
import into it too. You were not able to import into the "test"
repository because it doesn't sound like you ever "svnadmin create"d
a test repository.
If you would prefer to have just a single repository, then you would
use "SVNPath /var/svnroot" and you would "svnadmin create" just the
single repository /var/svnroot.
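In concrete terms, the fix would be along these lines (an untested sketch; paths and ownership follow the poster's setup above):

# Create the missing repository, give it to the Apache user, then import:
sudo svnadmin create /var/svnroot/test
sudo chown -R apache:apache /var/svnroot/test
svn import dummy.test http://localhost/svn/test -m "test"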
https://svn.haxx.se/users/archive-2007-08/0127.shtml
Plotting of activation maps
The module nipy.labs.viz provides functions to plot visualizations of activation maps in a non-interactive way.
2D cuts of an activation map can be plotted and superimposed on an anatomical map using matplotlib. In addition, Mayavi2 can be used to plot 3D maps, using volumetric rendering. Some emphasis is made on automatic choice of default parameters, such as cut coordinates, to give a sensible view of a map in a purely automatic way, for instance to save a summary of the output of a calculation.
Warning
The content of the module will change over time, as neuroimaging volumetric data structures are used instead of plain numpy arrays.
An example
from nipy.labs.viz import plot_map, mni_sform, coord_transform

# First, create a fake activation map: a 3D image in MNI space with
# a large rectangle of activation around Broca Area
import numpy as np
mni_sform_inv = np.linalg.inv(mni_sform)
# Color an asymmetric rectangle around Broca area:
x, y, z = -52, 10, 22
x_map, y_map, z_map = coord_transform(x, y, z, mni_sform_inv)
map = np.zeros((182, 218, 182))
map[x_map-30:x_map+30, y_map-3:y_map+3, z_map-10:z_map+10] = 1

# We use a masked array to add transparency to the parts that we are
# not interested in:
thresholded_map = np.ma.masked_less(map, 0.5)

# And now, visualize it:
plot_map(thresholded_map, mni_sform, cut_coords=(x, y, z), vmin=0.5)
This creates the following image:
The same plot can be obtained fully automatically, by letting plot_map() find the activation threshold and the cut coordinates:
plot_map(map, mni_sform, threshold='auto')
In this simple example, the code will easily detect the bar as activation and position the cut at the center of the bar.
3D plotting utilities
The module nipy.labs.viz3d provides helpers to represent neuroimaging volumes with Mayavi2.
For more versatile visualizations the core idea is that given a 3D map and an affine, the data is exposed in Mayavi as a volumetric source, with world space coordinates corresponding to figure coordinates. Visualization modules can be applied on this data source as explained in the Mayavi manual.
http://nipy.org/nipy/labs/viz.html
Fermat numbers have the form Fn = 2^(2^n) + 1.
Fermat numbers are prime if n = 0, 1, 2, 3, or 4. Nobody has confirmed that any other Fermat numbers are prime. Maybe there are only five Fermat primes and we’ve found all of them. But there might be infinitely many Fermat primes. Nobody knows.
There’s a specialized test for checking whether a Fermat number is prime, Pépin’s test. It says that for n ≥ 1, the Fermat number Fn is prime if and only if 3^((Fn − 1)/2) ≡ −1 (mod Fn).
We can verify fairly quickly that F1 through F4 are prime and that F5 through F14 are not with the following Python code.
def pepin(n):
    x = 2**(2**n - 1)             # (Fn - 1)/2
    y = 2*x                       # Fn - 1
    return y == pow(3, x, y + 1)  # y + 1 == Fn

for i in range(1, 15):
    print(pepin(i))
After that the algorithm gets noticeably slower.
We have an efficient algorithm for testing whether Fermat numbers are prime, efficient relative to the size of numbers involved. But the size of the Fermat numbers is growing exponentially. The number of digits in Fn is floor(2^n log10 2) + 1.
So F14 has 4,933 digits, for example.
The Fermat numbers for n = 5 to 32 are known to be composite. Nobody knows whether F33 is prime. In principle you could find out by using Pépin’s test, but the number you’d need to test has 2,585,827,973 digits, and that will take a while. The problem is not so much that the exponent is so big but that the modulus is also very big.
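To sanity-check these digit counts with the formula above:

from math import floor, log10

def fermat_digits(n):
    # Number of decimal digits in Fn = 2**(2**n) + 1
    return floor(2**n * log10(2)) + 1

print(fermat_digits(14))  # 4933
print(fermat_digits(33))  # 2585827973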
The next post presents an analogous test for whether a Mersenne number is prime.
http://www.statsblogs.com/2018/11/27/searching-for-fermat-primes/
Configuration macros for platform, compiler, etc. More...
Configuration macros for platform, compiler, etc.
#include <mi/base/base.h>
The operating system specific default filename extension for shared libraries (DLLs)
Creates an identifier from concatenating the values of X and Y, possibly expanding macros in X and Y.
Creates a string from the value of X, possibly expanding macros in X.
This macro is defined if the compiler supports rvalue references.
The compiler-specific, strong inline keyword.
The C++ language keyword inline is a recommendation to the compiler. Whether an inline function is actually inlined or not depends on the optimizer. In some cases, the developer knows better than the optimizer. This is why many compilers offer a separate, stronger inline statement. This define gives portable access to the compiler-specific keyword.
Pre-define MI_FORCE_INLINE to override the setting in this file.
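A minimal usage sketch (hypothetical; it assumes the macro is available via the header named at the top of this page):

#include <mi/base/base.h>

// Ask the compiler to inline this accessor even at low optimization levels.
MI_FORCE_INLINE int add(int a, int b) { return a + b; }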
Empty macro that can be used after function names to prevent expansion of macros that happen to have the same name, for example, min or max functions.
https://raytracing-docs.nvidia.com/iray/api_reference/iray/html/group__mi__base__config.html
Additional utils
class snakemake.utils.AlwaysQuotedFormatter(quote_func=None, *args, **kwargs)
Subclass of QuotedFormatter that always quotes.
Usage is identical to QuotedFormatter, except that it always acts like "q" was appended to the format spec.
class snakemake.utils.QuotedFormatter(quote_func=None, *args, **kwargs)
Subclass of string.Formatter that supports quoting.
Using this formatter, any field can be quoted after formatting by appending "q" to its format string. By default, shell quoting is performed using "shlex.quote", but you can pass a different quote_func to the constructor. The quote_func simply has to take a string argument and return a new string representing the quoted form of the input string.
Note that if an element after formatting is the empty string, it will not be quoted.
snakemake.utils.R(code)
Execute R code.
This is deprecated in favor of the script directive. This function executes the R code given as a string. The function requires rpy2 to be installed.
class snakemake.utils.SequenceFormatter(separator=' ', element_formatter=<string.Formatter object>, *args, **kwargs)
snakemake.utils.argvquote(arg, force=True)
Returns an argument quoted in such a way that CommandLineToArgvW on Windows will return the argument string unchanged. This is the same thing Popen does when supplied with a list of arguments. Arguments in a command line should be separated by spaces; this function does not add these spaces. This implementation follows the suggestions outlined here:
snakemake.utils.available_cpu_count()
Return the number of available virtual or physical CPUs on this system. The number of available CPUs can be smaller than the total number of CPUs when the cpuset(7) mechanism is in use, as is the case on some cluster systems.
Adapted from
snakemake.utils.format(_pattern, *args, stepout=1, _quote_all=False, **kwargs)
Format a pattern in Snakemake style.
This means that keywords embedded in braces are replaced by any variable values that are available in the current namespace.
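For illustration, a small hypothetical usage:

from snakemake.utils import format

sample = "A"
# Placeholders are filled from variables in the calling namespace:
print(format("results/{sample}.txt"))  # -> results/A.txt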
snakemake.utils.listfiles(pattern, restriction=None, omit_value=None)
Yield a tuple of existing filepaths for the given pattern.
Wildcard values are yielded as the second tuple item.
snakemake.utils.makedirs(dirnames)
Recursively create the given directory or directories without reporting errors if they are present.
snakemake.utils.min_version(version)
Require minimum snakemake version, raise workflow error if not met.
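A typical Snakefile usage might look like this; the version number is just an example:

from snakemake.utils import min_version

# Abort with a workflow error on older snakemake installations.
min_version("5.4.0")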
snakemake.utils.read_job_properties(jobscript, prefix='# properties', pattern=re.compile('# properties = (.*)'))
Read the job properties defined in a snakemake jobscript.
This function is a helper for writing custom wrappers for the snakemake --cluster functionality. Applying this function to a jobscript will return a dict containing information about the job.
snakemake.utils.report(text, path, stylesheet='/home/docs/checkouts/readthedocs.org/user_builds/snakemake/checkouts/latest/snakemake/report.css', defaultenc='utf8', template=None, metadata=None, **files)
Create an HTML report using python docutils.
This is deprecated in favor of the --report flag.
Attention: This function needs Python docutils to be installed for the python installation you use with Snakemake.
All keywords not listed below are interpreted as paths to files that shall be embedded into the document. The keywords will be available as link targets in the text. E.g. append a file as keyword arg via F1=input[0] and put a download link in the text like this:

report('''
==============
Report for ...
==============

Some text. A link to an embedded file: F1_.

Further text.
''', outputpath, F1=input[0])

Instead of specifying each file as a keyword arg, you can also expand the input of your rule if it is completely named, e.g.:

report('''
Some text...
''', outputpath, **input)
snakemake.utils.update_config(config, overwrite_config)
Recursively update dictionary config with overwrite_config.
See for details.
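A small hypothetical example of the recursive merge:

from snakemake.utils import update_config

config = {"threads": 4, "params": {"a": 1}}
update_config(config, {"params": {"b": 2}})
# config is now {"threads": 4, "params": {"a": 1, "b": 2}}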
https://snakemake.readthedocs.io/en/latest/api_reference/snakemake_utils.html
I have decided to name this edition #130-2 so that eventually (well, in about a week), we will be back to uninflated post numbers. Nobody likes inflation. Except perhaps tyres. And balloons.
Your brain at work part 2: Dopamine and more mindfulness
Ironically, the incorrectly numbered post #130 dealt with the many ways in which our brains fail us every day. (Now that I’ve finally gotten around to installing the WP Anchor Header plugin, we can link directly down to any heading in any post, as demonstrated in the previous sentence.)
At least some clouds do seem to have a silver lining.
Your Brain at Work, the book I mentioned last week, has turned out to be a veritable treasure trove of practical human neuroscience, and I still have about 30% to go. My attempt at meteorological humour above was inspired by part of the book’s treatment of the important role of dopamine in your daily life.
For optimal results, one is supposed to remain mildly optimistic about expected future rewards, but not too optimistic, which would result in a sharp dopamine drop when those rewards don't crystallise, and a greater increase when they do. In other words, one should try to remain in a perpetual state of mildly optimistic expectations, while being continually pleasantly surprised when those expectations are slightly exceeded.
More generally, the book deals really well with the intricacies of trying to keep one’s various neural subsystems happy and in balance. Too much stress, and the limbic system starts taking over (you want to run away, more or less), blocking your ability to think and make new connections, which in this modern life could very well be your only ticket out of Stress Town.
To my pleasant surprise (argh, I’ll stop), mindfulness made its appearance at about 40% into the book, shortly after I had published last week’s WHV. In my favourite mindfulness book, Mindfulness: A Practical Guide to Peace in a Frantic World by Mark Williams and Danny Penman, two of the major brain states are called doing, the planning and execution mode we find ourselves in most of the time, also in the middle of the night when we’re worrying about things we can do nothing about at that point, and being, the mode of pure, unjudgemental observation the activation and cultivation of which is practised in mindfulness.
In David Rock’s book, these two states are described as being actual brain networks, and they have different but complementary names: The narrative network corresponds to the doing mode, and the direct experience network corresponds to the being mode.
The narrative network processes all incoming sensory information through various filters, moulding it to fit into one's existing mental model of the world. David Rock describes it in the book and in this HuffPost piece as follows:
This is certainly useful most of the time, but it can get tiring and increase stress when you least need it.
The much more attractively named direct experience network is active when you feel all of your senses opening up to the outside world to give you that full HD IMAX™ surround sound VR experience. No judging, no mental modelling, just sensory bliss and inner calm. Rock sez:
Again, these two systems are on opposite sides of a neurophysiological see-saw. When you are worrying and planning, no zen for you! On the other hand, when you're feeling the breeze flowing over and through each individual hair on your arms and the sun rays seemingly feeding energy directly into your cells, your stress is soon forgotten.
Fortunately, mindfulness gives us practical tools to distinguish more easily when we’re on which path, and, more importantly, to switch mental modes at will.
I hope you don’t mind me concluding this piece by recursively quoting David Rock quoting John Teasdale, one of the three academic founders of Mindfulness Based Cognitive Therapy (MBCT):
(If the book has any more interesting surprises, I’ll be sure to report on them in future WHV editions.)
Miscellany at the end of week 5 of 2018
- The rather dire water situation has not changed much, except that due to more citizens putting their backs into the water saving efforts, day zero (when municipal water is to be cut off) has been postponed by 4 days to April 16. We are now officially limited to 50 litres per person per day, for everything. Practically, this means even more buckets of grey water are being carried around in my house every day in order to be re-used.
- I ran 95km in January, which is nicely on target for my modest 2018 goal. Although January was a long month, and Winter Is Coming (And Then We Run Much Less Often), I am mildly optimistic that I might be able to keep it up.
- Python type hinting is brilliant. I have started using it much more often, but I only recently discovered how to specify a type which can have a value or None, an often-occurring pattern:
from typing import Optional, Tuple

# Attachment is the author's own model class (not defined in this snippet).
def get_preview_filename(attachment: Attachment) -> Tuple[Optional[str], Optional[str]]:
    pass
- On Wednesday, January 31, GOU #3 had her first real (play) school day, that is, without any of us present at least for a while. We’re taking it as gradually as possible, but it must be pretty intense when you’re that young (but old enough to talk, more or less) and all of a sudden you notice that you’re all alone with all those other little human beings, none of whom are the family members you’re usually surrounded with.
The End
Thank you dear reader for coming to visit me over here, I really do enjoy it when you do!
I hope to see you again next week, same time, same place.
https://cpbotha.net/2018/02/04/weekly-head-voices-130-2-direct-experience-dopamine/
Guide to EJB Set-up
Last modified: November 14, 2018
1. Overview
In this article, we’re going to discuss how to get started with Enterprise JavaBean (EJB) development.
Enterprise JavaBeans are used for developing scalable, distributed, server-side components and typically encapsulate the business logic of the application.
We’ll use WildFly 10.1.0 as our preferred server solution, however, you are free to use any Java Enterprise application server of your choice.
2. Setup
Let’s start by discussing the Maven dependencies required for EJB 3.2 development and how to configure the WildFly application server using either the Maven Cargo plugin or manually.
2.1. Maven Dependency
In order to use EJB 3.2, make sure you add the latest version to the dependencies section of your pom.xml file:
<dependency> <groupId>javax</groupId> <artifactId>javaee-api</artifactId> <version>7.0</version> <scope>provided</scope> </dependency>
2.2. WildFly Setup With Maven Cargo
Let’s talk about how to use the Maven Cargo plugin to setup the server.
Here is the code for the Maven profile that provisions the WildFly server:
<profile>
    <id>wildfly-standalone</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.codehaus.cargo</groupId>
                <artifactId>cargo-maven2-plugin</artifactId>
                <version>${cargo-maven2-plugin.version}</version>
                <configuration>
                    <container>
                        <containerId>wildfly10x</containerId>
                        <zipUrlInstaller>
                            <url>
                                wildfly/10.1.0.Final/wildfly-10.1.0.Final.zip
                            </url>
                        </zipUrlInstaller>
                    </container>
                    <configuration>
                        <properties>
                            <cargo.hostname>127.0.0.1</cargo.hostname>
                            <cargo.jboss.management-http.port>
                                9990
                            </cargo.jboss.management-http.port>
                            <cargo.servlet.users>
                                testUser:admin1234!
                            </cargo.servlet.users>
                        </properties>
                    </configuration>
                </configuration>
            </plugin>
        </plugins>
    </build>
</profile>
We use the plugin to download the WildFly 10.1 zip directly from the WildFly website, which is then configured by making sure that the hostname is 127.0.0.1 and setting the port to 9990.
Then we create a test user, by using the cargo.servlet.users property, with the user id testUser and the password admin1234!.
Now that the configuration of the plugin is complete, we should be able to call a Maven target and have the server downloaded, installed and launched, and the application deployed.
To do this, navigate to the ejb-remote directory and run the following command:
mvn clean package cargo:run
When you run this command for the first time, it will download the WildFly 10.1 zip file, extract it and execute the installation and then launch it. It will also add the test user discussed above. Any further executions will not download the zip file again.
2.3. Manual Setup of WildFly
In order to setup WildFly manually, you must download the installation zip file yourself from the wildfly.org website. The following steps are a high-level view of the WildFly server setup process:
After downloading and unzipping the file’s contents to the location where you want to install the server, configure the following environment variables:
JBOSS_HOME=/Users/$USER/../wildfly.x.x.Final
JAVA_HOME=`/usr/libexec/java_home -v 1.8`
Then in the bin directory, run the ./standalone.sh for Linux based operating systems or ./standalone.bat for Windows.
After this, you will have to add a user. This user will be used to connect to the remote EJB bean. To find out how to add a user you should take a look at the ‘add a user’ documentation.
For detailed setup instructions please visit WildFly’s Getting Started documentation.
The project POM has been configured to work with the Cargo plugin and manual server configuration by setting two profiles. By default, the Cargo plugin is selected. However, to deploy the application to an already installed, configured and running WildFly server, execute the following command in the ejb-remote directory:
mvn clean install wildfly:deploy -Pwildfly-runtime
3. Remote vs Local
A business interface for a bean can be either local or remote.
A @Local annotated bean can only be accessed if it is in the same application as the bean that makes the invocation, i.e. if they reside in the same .ear or .war.
A @Remote annotated bean can be accessed from a different application, i.e. an application residing in a different JVM or application server.
There are some important points to keep in mind when designing a solution that includes EJBs:
- The java.io.Serializable, java.io.Externalizable and interfaces defined by the javax.ejb package are always excluded when a bean is declared with @Local or @Remote
- If a bean class is remote, then all implemented interfaces are to be remote
- If a bean class contains no annotation or if the @Local annotation is specified, then all implemented interfaces are assumed to be local
- Any interface that is explicitly defined for a bean which contains no interface must be declared as @Local
- The EJB 3.2 release tends to provide more granularity for situations where local and remote interfaces need to be explicitly defined
4. Creating the Remote EJB
Let’s first create the bean’s interface and call it HelloWorld:
@Remote
public interface HelloWorld {
    String getHelloWorld();
}
Now we will implement the above interface and name the concrete implementation HelloWorldBean:
@Stateless(name = "HelloWorld")
public class HelloWorldBean implements HelloWorld {

    @Resource
    private SessionContext context;

    @Override
    public String getHelloWorld() {
        return "Welcome to EJB Tutorial!";
    }
}
Note the @Stateless annotation on the class declaration. It denotes that this bean is a stateless session bean. This kind of bean does not have any associated client state, but it may preserve its instance state and is normally used to do independent operations.
The @Resource annotation injects the session context into the remote bean.
The SessionContext interface provides access to the runtime session context that the container provides for a session bean instance. The container then passes the SessionContext interface to an instance after the instance has been created. The session context remains associated with that instance for its lifetime.
The EJB container normally creates a pool of stateless bean objects and uses these objects to process client requests. As a result of this pooling mechanism, instance variable values are not guaranteed to be maintained across lookup method calls.
5. Remote Setup
In this section, we will discuss how to setup Maven to build and run the application on the server.
Let’s look at the plugins one by one.
5.1. The EJB Plugin
The EJB plugin which is given below is used to package an EJB module. We have specified the EJB version as 3.2.
The following plugin configuration is used to setup the target JAR for the bean:
<plugin>
    <artifactId>maven-ejb-plugin</artifactId>
    <version>2.4</version>
    <configuration>
        <ejbVersion>3.2</ejbVersion>
    </configuration>
</plugin>
5.2. Deploy the Remote EJB
To deploy the bean in a WildFly server ensure that the server is up and running.
Then to execute the remote setup we will need to run the following Maven commands against the pom file in the ejb-remote project:
mvn clean install
Then we should run:
mvn wildfly:deploy
Alternatively, we can deploy it manually as an admin user from the admin console of the application server.
6. Client Setup
After creating the remote bean we should test the deployed bean by creating a client.
First, let’s discuss the Maven setup for the client project.
6.1. Client-Side Maven Setup
In order to launch the EJB3 client we need to add the following dependencies:
<dependency>
    <groupId>org.wildfly</groupId>
    <artifactId>wildfly-ejb-client-bom</artifactId>
    <type>pom</type>
    <scope>import</scope>
</dependency>
We depend on the EJB remote business interfaces of this application to run the client. So we need to specify the EJB client JAR dependency. We add the following in the parent pom:
<dependency>
    <groupId>com.baeldung.ejb</groupId>
    <artifactId>ejb-remote</artifactId>
    <type>ejb</type>
</dependency>
The <type> is specified as ejb.
6.2. Accessing The Remote Bean
We need to create a file under src/main/resources and name it jboss-ejb-client.properties that will contain all the properties that are required to access the deployed bean:
remote.connections=default
remote.connection.default.host=127.0.0.1
remote.connection.default.port=8080
= ${host.auth:JBOSS-LOCAL-USER}
remote.connection.default.username=testUser
remote.connection.default.password=admin1234!
7. Creating the Client
The class that will access and use the remote HelloWorld bean has been created in EJBClient.java which is in the com.baeldung.ejb.client package.
7.1 Remote Bean URL
The remote bean is located via a URL that conforms to the following format:
ejb:${appName}/${moduleName}/${distinctName}/${beanName}!${viewClassName}
- The ${appName} is the application name of the deployment. Here we have not used any EAR file but a simple JAR or WAR deployment, so the application name will be empty
- The ${moduleName} is the name we set for our deployment earlier, so it is ejb-remote
- The ${distinctName} is a specific name which can be optionally assigned to the deployments that are deployed on the server. If a deployment doesn’t use distinct-name then we can use an empty String in the JNDI name, for the distinct-name, as we did in our example
- The ${beanName} variable is the simple name of the implementation class of the EJB, so in our example it is HelloWorld
- ${viewClassName} denotes the fully-qualified interface name of the remote interface
7.2 Look-up Logic
Next, let’s have a look at our simple look-up logic:
public HelloWorld lookup() throws NamingException {
    String appName = "";
    String moduleName = "remote";
    String distinctName = "";
    String beanName = "HelloWorld";
    String viewClassName = HelloWorld.class.getName();
    String toLookup = String.format("ejb:%s/%s/%s/%s!%s",
        appName, moduleName, distinctName, beanName, viewClassName);
    return (HelloWorld) context.lookup(toLookup);
}
In order to connect to the bean we just created, we will need a URL which we can feed into the context.
7.3 The Initial Context
We’ll now create/initialize the session context:
public void createInitialContext() throws NamingException {
    Properties prop = new Properties();
    prop.put(Context.URL_PKG_PREFIXES, "org.jboss.ejb.client.naming");
    prop.put(Context.INITIAL_CONTEXT_FACTORY,
        "org.jboss.naming.remote.client.InitialContextFactory");
    prop.put(Context.PROVIDER_URL, "http-remoting://127.0.0.1:8080");
    prop.put(Context.SECURITY_PRINCIPAL, "testUser");
    prop.put(Context.SECURITY_CREDENTIALS, "admin1234!");
    prop.put("jboss.naming.client.ejb.context", false);
    context = new InitialContext(prop);
}
To connect to the remote bean we need a JNDI context. The context factory is provided by the Maven artifact org.jboss:jboss-remote-naming and this creates a JNDI context, which will resolve the URL constructed in the lookup method, into proxies to the remote application server process.
7.4 Define Lookup Parameters
We define the factory class with the parameter Context.INITIAL_CONTEXT_FACTORY.
The Context.URL_PKG_PREFIXES is used to define a package to scan for additional naming context.
The parameter org.jboss.ejb.client.scoped.context = false tells the context to read the connection parameters (such as the connection host and port) from the provided map instead of from a classpath configuration file. This is especially helpful if we want to create a JAR bundle that should be able to connect to different hosts.
The parameter Context.PROVIDER_URL defines the connection schema and should start with http-remoting://.
8. Testing
To test the deployment and check the setup, we can run the following test to make sure everything works properly:
@Test
public void testEJBClient() {
    EJBClient ejbClient = new EJBClient();
    HelloWorldBean bean = new HelloWorldBean();

    assertEquals(bean.getHelloWorld(), ejbClient.getEJBRemoteMessage());
}
With the test passing, we can now be sure everything is working as expected.
9. Conclusion
So we have created an EJB server and a client which invokes a method on a remote EJB. The project can be run on any application server by properly adding the dependencies for that server.
The entire project can be found over on GitHub.
https://www.baeldung.com/ejb-intro
I've got a bunch of OpenStreetMap content that I'm trying to label (ArcGIS Pro 2.2.4) based on whether content is found in the other_tags field. Here's what I'm trying to do in the expression editor (Python):
def FindLabel ( [other_tags] ):
import re
ls = [other_tags]
slabel = re.search('"name:en"=>"(.*?)","', ls)
return slabel.group(1)
It says "NameError: name 'FindLabel' is not defined
Which is weird because if I remove the "," from the regex operation and do this:
def FindLabel ( [other_tags] ):
import re
ls = [other_tags]
slabel = re.search('"name:en"=>"(.*?)"', ls)
return slabel.group(1)
it renders as valid, although it prints out everything beyond the "name:en"=>" string.
I have also tried this, which validates but returns nothing:
def FindLabel ( [other_tags] ):
ss = [other_tags]
ls = ss[ss.find('"name:en"=>"')+1:ss.find('"')]
return ls
Here is a bit of the sort of substring someone could expect to find in OSM's other_tags:
"int_name"=>"Dubai","is_in:continent"=>"Asia","is_in:country_code"=>"AE","name:en"=>"Dubai","population"=>"1984000"....
Can anyone help me to get this label expression working?
Eric,
I was curious about this one and using ArcGIS Pro 2.3 I see the following:
seems to work?
2.3 will be available soon.
Mark
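For reference, one way the original expression could be written so the parser does not trip over it (an untested sketch; the field token and return handling follow the question above):

def FindLabel ( [other_tags] ):
    import re
    ls = [other_tags]
    # A raw string avoids escaping surprises, and the trailing ',' from the
    # first attempt is unnecessary: (.*?) already stops at the closing quote.
    m = re.search(r'"name:en"=>"(.*?)"', ls)
    return m.group(1) if m else None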
https://community.esri.com/thread/227390-why-isnt-this-label-code-working
Hi all,
I have a program, which is invoked from a shell script. I need to return back to the shell script a status, whether the program succeeded or failed, so the shell script can determine whether to continue with its normal course of actions or to stop upon failure.
So, here's what I am assuming, and please do correct me if I am wrong.
I am planning to catch all the possible exceptions and, upon catching and properly handling each one, do a System.exit(2). Besides, at the very end of the program flow, I do a System.exit(0).
So, in the shell script, I will check the value of $? by echoing it, and if it is 0, then the program ran successfully, else, there was some problems.
public class HelloWorld {
    public static void main(String[] args) {
        try {
            // callFirstMethod and callSecondMethod are the poster's placeholders
            callFirstMethod();
            callSecondMethod();
            System.exit(0);
        } catch (Exception e) {
            e.printStackTrace();
            System.exit(2);
        }
    }
}
is my theory correct? please comment.
Thanks.
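For reference, the shell side of this scheme might look like the following; this is a minimal sketch, and "java HelloWorld" stands in for however the program is actually launched:

java HelloWorld
status=$?          # capture the exit code before anything overwrites it
if [ $status -eq 0 ]; then
    echo "program succeeded, continuing"
else
    echo "program failed with exit status $status" >&2
    exit 1
fi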
https://www.daniweb.com/programming/software-development/threads/130740/system-exit-in-catch-block
In many apps there is a need to have a distributed cache. For years now, adopting a distributed cache on Windows has been a challenge. The most popular product, Redis, is no longer supported on the Windows platform. The easiest way to get it going on Windows today is Docker for Windows, in my opinion.

To do so, first you have to install Docker for Windows. I am a fan of graphical user interfaces, so I would also recommend installing Kitematic. Once that is done, run Kitematic, click New and in the search box type Redis. In the search results pick Redis (redis:latest) and click Create. Now just wait a bit while you are downloading and installing Redis. Eventually, you will see in Kitematic that Redis is running.

Now you can test Redis. Open a command prompt; I use Terminal for Windows. You can use PowerShell or Command Prompt, either one.
Type the following and hit Enter
docker exec -it redis sh
At the next prompt type the following and hit Enter
redis-cli
Now type the following and hit Enter
set name John
You should see Ok. Now type the following and hit Enter.
get name
You should see John, meaning that you were successful in storing and retrieving a cache entry with the key of name and value of John.
Now let’s create a new .NET Core console app. Run Visual Studio, create a new project, pick .NET Core Console App. Add a new NuGet package, Microsoft.Extensions.Caching.Redis.Core. For testing reasons I am just going to create an options class.
public class MyRedisOptions : IOptions<RedisCacheOptions>
{
    public RedisCacheOptions Value => new RedisCacheOptions
    {
        Configuration = "127.0.0.1:32768",
        InstanceName = "serge"
    };
}
The main thing you see is the port number. You can get that from Kitematic: if you click on the Redis image, under IP and Ports you will see the Access URL, which is what we need. I would like to store an instance of a class, so I am going to create a Person class.
[Serializable]
public class Person
{
    public string Name { get; set; }
    public Person Parent { get; set; }
}
You will notice that I added the Serializable attribute. This comes from a requirement of the IDistributedCache interface: it only operates on byte arrays. I am going to lean on the BinaryFormatter class, new in .NET Core 2.1, to serialize an object into a byte array. I am going to create a little extensions class for that which I can reuse everywhere.
public static class IDistributedCacheExtensions
{
    public static async Task SetAsync<T>(this IDistributedCache cache, string key, T value,
        DistributedCacheEntryOptions options) where T : class, new()
    {
        using (var stream = new MemoryStream())
        {
            new BinaryFormatter().Serialize(stream, value);
            await cache.SetAsync(key, stream.ToArray(), options);
        }
    }

    public static async Task<T> GetAsync<T>(this IDistributedCache cache, string key)
        where T : class, new()
    {
        var data = await cache.GetAsync(key);
        if (data == null)
        {
            // Missing key: return null instead of letting MemoryStream throw.
            return null;
        }
        using (var stream = new MemoryStream(data))
        {
            return (T)new BinaryFormatter().Deserialize(stream);
        }
    }

    public static async Task SetAsync(this IDistributedCache cache, string key, string value,
        DistributedCacheEntryOptions options)
    {
        await cache.SetAsync(key, Encoding.UTF8.GetBytes(value), options);
    }

    public static async Task<string> GetAsync(this IDistributedCache cache, string key)
    {
        var data = await cache.GetAsync(key);
        if (data != null)
        {
            return Encoding.UTF8.GetString(data);
        }
        return null;
    }
}
I am ready for the final test in my console app.
class Program
{
    public static async Task Main(string[] args)
    {
        RedisCache cache = new RedisCache(new MyRedisOptions());

        var person = new Person { Name = "Sergey", Parent = new Person { Name = "John" } };
        await cache.SetAsync("sergey", person, new DistributedCacheEntryOptions());

        var p = await cache.GetAsync<Person>("sergey");
        Console.WriteLine(p.Name + " " + p.Parent.Name);

        await cache.SetAsync("random", "some value", new DistributedCacheEntryOptions());

        Console.ReadKey();
    }
}
As you can see I am creating an instance of RedisCache, new instance of a Person class, then just set and retrieve the value of Person and some random string.
A few thoughts in conclusion.
- Did you notice that I used async main? Console apps now support async entry point!
- In normal asp.net core app I would not use Redis options directly, instead I would use methods available in Microsoft.Extensions.Caching.Redis.Core to inject and configure Redis as distributed cache.
- Now anytime I need distributed cache, I would inject IDistributedCache and set or retrieve the data.
- I can also use Json.Net to serialize or deserialize object.
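To make that last point concrete, here is a minimal sketch of Json.NET-based cache extensions. This block is my own illustration, not code from the original post: the SetJsonAsync / GetJsonAsync names are invented, and it assumes the Newtonsoft.Json NuGet package is referenced in addition to the usings used above.
// Sketch: JSON-based variants of the extensions above. Unlike
// BinaryFormatter, this does not require [Serializable] on cached types.
public static class JsonCacheExtensions
{
    public static async Task SetJsonAsync<T>(
        this IDistributedCache cache, string key, T value,
        DistributedCacheEntryOptions options)
    {
        var json = JsonConvert.SerializeObject(value);
        await cache.SetAsync(key, Encoding.UTF8.GetBytes(json), options);
    }

    public static async Task<T> GetJsonAsync<T>(
        this IDistributedCache cache, string key)
    {
        var data = await cache.GetAsync(key);
        if (data == null)
        {
            return default(T);
        }
        return JsonConvert.DeserializeObject<T>(Encoding.UTF8.GetString(data));
    }
}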
Thanks and enjoy.
well done! Nice article as always Sergey. Could you correct the syntax in the samples, though: the code uses typographic quotes (e.g. “127.0.0.1:32768”) where straight quotes ("127.0.0.1:32768") are needed. thanks
|
http://www.dotnetspeak.com/asp-net-core/distributed-cache-redis-and-net-core/
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
#include <rte_mbuf.h>
#include <rte_memory.h>
#include <rte_mempool.h>
#include <rte_common.h>
#include "rte_crypto_sym.h"
Go to the source code of this file.
RTE Cryptography Common Definitions
Definition in file rte_crypto.h.
Crypto operation types
Definition at line 28 of file rte_crypto.h.
Status of crypto operation
Definition at line 36 of file rte_crypto.h.
Crypto operation session type. This is used to specify whether a crypto operation has session structure attached for immutable parameters or if all operation information is included in the operation data structure.
Definition at line 59 of file rte_crypto.h.
Reset the fields of a crypto operation to their default values.
Definition at line 127 of file rte_crypto.h.
Returns the size of private data allocated with each rte_crypto_op object by the mempool
Definition at line 163 of file rte_crypto.h.
Creates a crypto operation pool
Bulk allocate raw elements from the mempool and return them as crypto operations
Definition at line 208 of file rte_crypto.h.
Allocate a crypto operation from a mempool with default parameters set
Definition at line 236 of file rte_crypto.h.
Bulk allocate crypto operations from a mempool with default parameters set
Definition at line 266 of file rte_crypto.h.
Returns a pointer to the private data of a crypto operation if that operation has enough capacity for requested size.
Definition at line 296 of file rte_crypto.h.
Free crypto operation structure. If the operation has been allocated from an rte_mempool, then the operation will be returned to the mempool.
Definition at line 319 of file rte_crypto.h.
Allocate a symmetric crypto operation in the private data of an mbuf.
Definition at line 337 of file rte_crypto.h.
Allocate space for symmetric crypto xforms in the private data space of the crypto operation. This also defaults the crypto xform type and configures the chaining of the xforms in the crypto operation
Definition at line 371 of file rte_crypto.h.
Attach a session to a crypto operation
Definition at line 397 of file rte_crypto.h.
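To show how the functions on this page fit together, here is a hedged usage sketch. It is not part of the generated reference; the pool name and sizing parameters are illustrative placeholders, and error handling is abbreviated.
#include <rte_crypto.h>
#include <rte_mempool.h>

/* Illustrative sketch: create an op pool, allocate an op, and attach a
 * previously created symmetric session to it. Parameter values are
 * placeholders, not recommendations. */
static struct rte_crypto_op *
alloc_sym_op(struct rte_cryptodev_sym_session *sess)
{
    struct rte_mempool *op_pool;
    struct rte_crypto_op *op;

    op_pool = rte_crypto_op_pool_create("op_pool",
        RTE_CRYPTO_OP_TYPE_SYMMETRIC,
        8192 /* nb_elts */, 128 /* cache_size */,
        0 /* priv_size */, 0 /* socket_id */);
    if (op_pool == NULL)
        return NULL;

    op = rte_crypto_op_alloc(op_pool, RTE_CRYPTO_OP_TYPE_SYMMETRIC);
    if (op == NULL)
        return NULL;

    /* Attach the session so the device can find immutable parameters. */
    if (rte_crypto_op_attach_sym_session(op, sess) != 0) {
        rte_crypto_op_free(op);
        return NULL;
    }
    return op;
}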
|
https://doc.dpdk.org/api-18.05/rte__crypto_8h.html
|
CC-MAIN-2022-21
|
en
|
refinedweb
|
New and Improved
Coming changes to unittest in Python 2.7 & 3.2
The Pycon Testing Goat.
Note
This article started life as a presentation at PyCon 2010. You can watch a video of the presentation:
Since that presentation lots of features have been added to unittest in Python 2.7 and unittest2.
This article also introduces a backport of the new features in unittest to work with Python 2.4, 2.5 & 2.6:
For a more general introduction to unittest see: Introduction to testing with unittest.
There are now ports of unittest2 for both Python 2.3 and Python 3. The Python 2.3 distribution is linked to from the unittest2 PyPI page. The Python 3 distribution is available from:
New and improved: Coming changes to unittest
- Introduction
- unittest is changing
- New Assert Methods
- Deprecations
- Type Specific Equality Functions
- Set Comparison
- Unicode String Comparison
- Add New type specific functions
- assertRaises
- Command Line Behaviour
- Test Discovery
- load_tests
- Cleanup Functions with addCleanup
- Test Skipping
- More Skipping
- As class decorator
- Class and Module Level Fixtures
- Minor Changes
- The unittest2 Package
- The Future
Introduction
unittest is the Python standard library testing framework. It is sometimes known as PyUnit and has a rich heritage as part of the xUnit family of testing libraries.
Python has the best testing infrastructure available of any of the major programming languages, but by virtue of being included in the standard library unittest is the most widely used Python testing framework.
unittest has languished whilst other Python testing frameworks have innovated. Some of the best innovations have made their way into unittest which has had quite a renovation. In Python 2.7 and 3.2 a whole bunch of improvements to unittest will arrive.
This article will go through the major changes, like the new assert methods, test discovery and the load_tests protocol, and also explain how they can be used with earlier versions of Python.
unittest is changing
The new features are documented in the Python 2.7 development documentation at: docs.python.org/dev/library/unittest.html. Look for "New in 2.7" or "Changed in 2.7" for the new and changed features.
An important thing to note is that this is evolution not revolution, backwards compatibility is important. In particular innovations are being brought in from other test frameworks, including test frameworks from large projects like Zope, Twisted and Bazaar, where these changes have already proved themselves useful.
New Assert Methods
The point of assertion methods in unittest is to provide useful messages on failure and to provide ready made methods for common assertions. Many of these were contributed by google or are in common use in other unittest extensions.
- assertGreater / assertLess / assertGreaterEqual / assertLessEqual
- assertRegexpMatches(text, regexp) - verifies that regexp search matches text
- assertNotRegexpMatches(text, regexp)
- assertIn(value, sequence) / assertNotIn - assert membership in a container
- assertIs(first, second) / assertIsNot - assert identity
- assertIsNone / assertIsNotNone
And even more...
- assertIsInstance / assertNotIsInstance
- assertDictContainsSubset(subset, full) - Tests whether the key/value pairs in dictionary full are a superset of those in superset.
- assertSequenceEqual(actual, expected) - ignores type of container but checks members are the same
- assertItemsEqual(actual, expected) - ignores order, equivalent of assertEqual(sorted(first), sorted(second)), but it also works with unorderable types
It should be obvious what all of these do, for more details refer to the friendly manual.
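For a quick flavour of how these read in practice, here is a tiny illustrative test case (mine, not from the article):
import unittest

class NewAssertsDemo(unittest.TestCase):
    def test_new_asserts(self):
        self.assertGreater(10, 3)
        self.assertIn('spam', ['spam', 'eggs'])
        self.assertIsNone(None)
        self.assertRegexpMatches('unittest2 rocks', r'unittest\d')
        self.assertItemsEqual([3, 1, 2], [1, 2, 3])  # order is ignored

if __name__ == '__main__':
    unittest.main()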
As well as the new methods a delta keyword argument has been added to the assertAlmostEqual / assertNotAlmostEqual methods. I really like this change because the default implementation of assertAlmostEqual is never (almost) useful to me. By default these methods round to a specified number of decimal places. When you use the delta keyword the assertion is that the difference between the two values you provide is less than (or equal to) the delta value. This permits them to be used with non-numeric values:
import datetime

delta = datetime.timedelta(seconds=10)
second_timestamp = datetime.datetime.now()
self.assertAlmostEqual(first_timestamp, second_timestamp, delta=delta)
Deprecations
unittest used to have lots of ways of spelling the same methods. The duplicates have now been deprecated (but not removed).
- assert_ -> use assertTrue instead
- fail* -> use assert* instead
- assertEquals -> assertEqual is the one true way
New assertion methods don't have a fail... alias as well. If you preferred the fail* variant, tough luck.
Not all the 'deprecated' methods issue a PendingDeprecationWarning when used. assertEquals and assert_ are too widely used for official deprecations, but they're deprecated in the documentation. In the next version of the documentation the deprecated methods will be expunged and relegated to a 'deprecated methods' section.
Methods that have deprecation warnings are:
failUnlessEqual, failIfEqual, failUnlessAlmostEqual, failIfAlmostEqual, failUnless, failUnlessRaises, failIf
Type Specific Equality Functions
More important new assert methods are the type specific ones. These provide useful failure messages when comparing specific types.
- assertMultiLineEqual - uses difflib, default for comparing unicode strings
- assertSetEqual - default for comparing sets
- assertDictEqual - you get the idea
- assertListEqual
- assertTupleEqual
The nice thing about these new assert methods is that they are delegated to automatically by assertEqual when you compare two objects of the same type.
Add New type specific functions
- addTypeEqualityFunc(type, function)
Functions added will be used by default for comparing the specified type. For example if you wanted to hookup assertMultiLineEqual for comparing byte strings as well as unicode strings you could do:
self.addTypeEqualityFunc(str, self.assertMultiLineEqual)
addTypeEqualityFunc is useful for comparing custom types, either for teaching assertEqual how to compare objects that don't define equality themselves, or more likely for presenting useful diagnostic error messages when a comparison fails.
Note that functions you hook up are only used when the exact type matches, it does not use isinstance. This is because there is no guarantee that sensible error messages can be constructed for subclasses of the registered types.
assertRaises
The changes to assertRaises are one of my favourite improvements. There is a new assertion method and both methods can be used as context managers with the with statement. If you keep a reference to the context manager you can access the exception object after the assertion. This is useful for making further asserts on it, for example to test an error code:
# as context manager
with self.assertRaises(TypeError):
    add(2, '3')

# test message with a regex
msg_re = "^You shouldn't Foo a Bar$"
with self.assertRaisesRegexp(FooBarError, msg_re):
    foo_the_bar()

# access the exception object
with self.assertRaises(TypeError) as cm:
    do_something()
exception = cm.exception
self.assertEqual(exception.error_code, 3)
Command Line Behaviour
python -m unittest test_module1 test_module2
python -m unittest test_module1.suite_name
python -m unittest test_module.TestClass
python -m unittest test_module.TestClass.test_method
The unittest module can be used from the command line to run tests from modules, suites, classes or even individual test methods. In earlier versions it was only possible to run individual test methods and not modules or classes.
If you are running tests for a whole test module and you define a load_tests function, then this function will be called to create the TestSuite for the module. This is the load_tests protocol.
You can run tests with more detail (higher verbosity) by passing in the -v flag:
python -m unittest -v test_module
For a list of all the command line options:
python -m unittest -h
There are also new verbosity and exit arguments to the main() function. Previously main() would always sys.exit() after running tests, making it not very useful to call programmatically. The new parameters make it possible to control this:
>>> from unittest import main
>>> main(module='test_module', verbosity=2, exit=False)
Passing in verbosity=2 is the equivalent of the -v command line option.
failfast, catch and buffer command line options
There are three more command line options for both standard test running and test discovery. These command line options are also available as parameters to the unittest.main() function.
-f / --failfast
Stop the test run on the first error or failure.
-c / --catch
Control-c during the test run waits for the current test to end and then reports all the results so far. A second control-c raises the normal KeyboardInterrupt exception.
There are a set of functions implementing this feature available to test framework writers wishing to support this control-c handling. See Signal Handling in the development documentation [1].
-b / --buffer
The standard out and standard error streams are buffered during the test run. Output during a passing test is discarded. Output is echoed normally on test fail or error and is added to the failure messages.
The command line can also be used for test discovery, for running all of the tests in a project or just a subset.
Test Discovery
Test discovery has been missing from unittest for a long time, forcing everyone to write their own test discovery / collection system.
python -m unittest discover
The options can also be passed in as positional arguments. The following two command lines are equivalent:
python -m unittest discover -s project_directory -p '*_test.py'
python -m unittest discover project_directory '*_test.py'
There are a few rules for test discovery to work; these may be relaxed in the future. For test discovery, all test modules must be importable from the top level directory of the project.
Test discovery also supports using dotted package names instead of paths. For example:
python -m unittest discover package.test
There is an implementation of just the test discovery (well, plus load_tests) to work with standard unittest. The discover module:
pip install discover
python -m discover
load_tests
If a test module defines a load_tests function it will be called to create the test suite for the module.
This example loads tests from two specific TestCases:
def load_tests(loader, tests, pattern):
    suite = unittest.TestSuite()
    case1 = loader.loadTestsFromTestCase(TestCase1)
    case2 = loader.loadTestsFromTestCase(TestCase2)
    suite.addTests(case1)
    suite.addTests(case2)
    return suite
The tests argument is the standard tests that would be loaded from the module by default as a TestSuite. If you just want to add extra tests you can just call addTests on this. pattern is only used in the __init__.py of test packages when loaded from test discovery. This allows the load_tests function to continue (and customize) test discovery into the package. In normal test modules pattern will be None.
Cleanup Functions with addCleanup
This is an extremely powerful new feature for improving test readability and making tearDown obsolete! Push clean-up functions onto a stack, at any point including in setUp, tearDown or inside clean-up functions, and they are guaranteed to be run when the test ends (LIFO).
def test_method(self):
    temp_dir = tempfile.mkdtemp()
    self.addCleanup(shutil.rmtree, temp_dir)
    ...
No need for nested try: ... finally: blocks in tests to clean up resources.
The full signature for addCleanup is: self.addCleanup(function, *args, **kwargs). Any additional args or keyword arguments will be passed into the cleanup function when it is called. It saves the need for nested try:..finally: blocks to undo actions performed by the test.
If setUp() fails, meaning that tearDown() is not called, then any cleanup functions added will still be called. Exceptions raised inside cleanup functions will cause the test to report an error, but all cleanup functions will still run.
If you want to manually clear out the cleanup stack you can call doCleanups().
Test Skipping
Decorators that work as class or method decorators for conditionally or unconditionally skipping tests:
@skip("skip this test")
def test_method(self):
    ...

@skipIf(sys.version_info[:2] < (2, 5), "only Python > 2.5")
def test_method(self):
    ...

@skipUnless(sys.version_info[:2] < (2, 5), "only Python < 2.5")
def test_method(self):
    ...
More Skipping
def test_method(self):
    self.skipTest("skip, skippety skip")

def test_method(self):
    raise SkipTest("whoops, time to skip")

@expectedFailure
def test_that_fails(self):
    self.fail('this *should* fail')
Ok, so expectedFailure isn't for skipping tests. You use it for tests that are known to fail currently. If you fix the problem, so the test starts to pass, then it will be reported as an unexpected success. This will remind you to go back and remove the expectedFailure decorator.
Skipped tests appear in the report as 'skipped (s)', so the number of tests run will always be the same even when skipping.
As class decorator
If you skip an entire class then all tests in that class will be skipped.
# for Python >= 2.6
@skipIf(sys.platform == 'win32', 'does not run on Windows')
class SomeTest(TestCase):
    ...

# Python pre-2.6
class SomeTest(TestCase):
    ...

SomeTest = skipIf(sys.platform == 'win32', 'does not run on Windows')(SomeTest)
Class and Module Level Fixtures
You can now define class and module level fixtures; these are versions of setUp and tearDown that are run once per class or module. setUpClass / tearDownClass run once around all the tests in a class, and setUpModule / tearDownModule run once around all the tests in a module; after all the tests in the suite have run, the final tearDownClass and tearDownModule are run.
setUpClass and tearDownClass must be implemented as class methods, so the usual pattern is to decorate them with classmethod.
The details: if there are any exceptions raised during one of these fixture functions / methods then the tests in that class or module are not run and the failure is reported as an error.
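As an illustration (this example is mine, not from the original article), a class-level fixture looks like this:
import sqlite3
import unittest

class DatabaseTests(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        # Run once, before the first test in this class.
        cls.connection = sqlite3.connect(':memory:')

    @classmethod
    def tearDownClass(cls):
        # Run once, after the last test in this class.
        cls.connection.close()

    def test_query(self):
        cursor = self.connection.execute('SELECT 1')
        self.assertEqual(cursor.fetchone()[0], 1)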
Caution!
Note that shared fixtures do not play well with features like test parallelization and they also break test isolation. They should be used with care.
A setUpModule or setUpClass that raises a SkipTest exception will be reported as skipped instead of as an error.
Minor Changes
There are a host of other minor changes, some of them steps towards making unittest more extensible. For full details on these see the documentation:
- unittest is now a package instead of a module
- Better messages with the longMessage class attribute
- TestResult: startTestRun and stopTestRun
- TextTestResult is public, and TextTestRunner takes a resultclass argument for providing a custom result class (you used to have to subclass TextTestRunner and override _makeResult)
- TextTestResult adds the test name to the test description even if you provide a docstring
setuptools test command
Included in unittest2 is a test collector compatible with the setuptools test command. This allows you to run:
python setup.py test
and have all your tests run. They will be run with a standard unittest test runner, so a few features (like expected failures and skips) don't work fully, but most features do. If you have setuptools or distribute installed you can see it in action with the unittest2 test suite.
To use it specify test_suite = 'unittest2.collector' in your setup.py. This starts test discovery with the default parameters from the directory containing setup.py, so it is perhaps most useful as an example (see the unittest2/collector.py module).
The unittest2 Package
To use the new features with earlier versions of Python:
pip install unittest2
- Tested with Python 2.4, 2.5 & 2.6
- This is the documentation...
Replace import unittest with import unittest2. An alternative pattern for conditionally using unittest2 where it is available is:
try:
    import unittest2 as unittest
except ImportError:
    import unittest
python -m unittest ... works in Python 2.7 even though unittest is a package. In Python 2.4-2.6 this doesn't work (packages can't be executed with -m).
The unittest2 command line functionality is provided with the unit2 / unit2.py script.
There is also the discover module if all you want is test discovery: python -m discover (same command line options).
The Future
The big issue with unittest is extensibility. This is being addressed in an experimental "plugins branch" of unittest2, which is being used as the basis of a new version of nose:
Please:
- Use unittest2 and report any bugs or problems
- Make feature requests on the Python issue tracker: bugs.python.org
- Join the Testing in Python mailing list
|
http://www.voidspace.org.uk/python/articles/unittest2.shtml
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
Required by state, using the states argument in the field ?
Hi,
In early versions I used to do this:
states={'close': [('required', True), ('readonly',True)]}
in the definition of the field. When I did this it prevented changing the state while the field (the one required in the next state) wasn't set.
In Odoo 9 the state changes anyway, and only if you then try to edit the record do you see that the field is required.
So, my question is: how do I declare my field in order to prevent any change of state if the field is not set?
For now, I did this:
@api.multi
def to_close(self):
    for record in self:
        if record.reason and record.so_id:
            record.write({'state': 'close'})
        else:
            raise ValidationError(_('Missing required fields'))
|
https://www.odoo.com/forum/help-1/question/required-by-state-using-the-states-argument-in-the-field-97800
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
How to Use ES6 for Universal JavaScript Apps
Now that the dust has settled a bit, I’m finally beginning to use ES6 for production apps — and because I write universal JavaScript, it has to work for both Node.js and browsers.
This won’t be an in-depth tutorial about ES6 features or universal JavaScript (aka isomorphic JavaScript). We’re just going to cover the basics to get you up and running.
Use Babel
Babel.js is a great tool that lets you transpile your ES6 code into ES5 code that runs in today’s JavaScript environments, including Node.js and browsers, but it isn’t obvious how to get it set up.
Install Babel
Nearly every tutorial you find will tell you to install Babel globally. That’s fine if nobody has to share your code, but if you’re on a team or producing open-source libraries, install it locally per-package, as well:
$ npm install -g babel-cli
$ npm install --save-dev babel-cli babel-preset-es2015 babel-preset-stage-0
Now you can launch the `babel-node` CLI/REPL:
$ babel-node
> Object.assign({}, {msg: 'wow!'}); // => { msg: 'wow!' }
For Browserify workflows, you may need these, as well:
$ npm install --save-dev babelify browserify
This will let you use all the cool new ES6 syntax, like arrow functions:
(x) => x + 1;
But it won’t let you use the new built-in methods like `Object.assign()`, `Object.is()`, etc…
This isn’t immediately obvious, because these features work great using the `babel-node` REPL:
Object.is(NaN, NaN); // => true
Lucky for us, this is easily solved with a polyfill:
$ npm install --save core-js
Then, at the top of your entry file:
import 'core-js';
Linting
Worried that you’ll have to give up linting your code? No worries. ESLint has you covered!
$ npm install --save-dev eslint babel-eslint
The ES6 love is in the `env` and `ecmaFeatures` keys. You’ll need those to prevent errors when ESLint encounters your ES6-specific code.
If you also want object rest & spread (an ES7 proposal commonly used in React code), you’ll also need `babel-eslint`.
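The original post embedded an .eslintrc gist at this point that did not survive extraction. The following is a hedged reconstruction of what an ESLint 1.x-era config with those keys looked like; the exact set of entries is my assumption:
{
  "parser": "babel-eslint",
  "env": {
    "browser": true,
    "node": true,
    "es6": true
  },
  "ecmaFeatures": {
    "modules": true,
    "experimentalObjectRestSpread": true
  }
}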
Compiling
For most cases, you can simply replace your `node` calls with `babel-node` calls. `babel-node` caches modules in memory, so there's a significant startup delay and potentially a lot of memory usage. Consider compiling to ES5 for production. Read on.
Babel’s docs make compiling look like a breeze:
$ babel script.js --out-file script-compiled.js
Couldn’t be easier, right? Well, if you don’t try to import any modules, sure, this will work fine. But if you do anything non trivial, you’ll want to compile your whole code base, not just one file. For that, you want to use the `-d` option.
$ babel -d build-dir source-dir
Note that the output directory comes first.
If you want the debugger to work properly, you will want to add source maps with the `-s` option:
$ babel -d build-dir source-dir -s
Doing so will tell Babel that for each file it compiles, it should also produce a source map file that will tell debuggers where to find the original source code while you’re stepping through the live code in the engine. In other words, you’ll see the code that you wrote, instead of the compiled output that Babel generated. That’s usually what you want.
To compile for the browser, you want to use Webpack, or the Babelify Browserify transform. I typically use babelify for quick compiles at the terminal. For instance, to run some unit tests:
npm install -g browserify browser-run
browserify -t babelify script.js | browser-run -p 2222
- Install `browserify` and `browser-run` so that you can use them anywhere in your terminal.
- Create a bundle from `script.js` and run the script in Chrome. Hit localhost:2222 from your favorite browser, and the script will run in the browser. Console log output will get piped back to the console.
Compile a bundle:
$ browserify script.js -t babelify --outfile bundle.js
Configuring Webpack is too much for this quick tutorial, but if you want to skip all the busywork, you can use this boilerplate for production modules.
Using Existing Modules
Using the above mentioned tools, you can import both ES6 and Node-style modules using ES6 syntax:
import 'core-js'; // Node module
import harness from './harness'; // ES6 module
So you can keep using all the standard modules you’ll find in the massive npm ecosystem in your modern ES6 codebase — but since so many people are still using ES5, I recommend that you publish compiled versions of your modules to npm.
For public libraries, I put the source in a `source` directory, and compiled ES5 in `dist` or `lib`. For now it’s probably a good idea to point to the compiled ES5 version with the `main` key in `package.json`.
Automation
I like to put these commands in my npm scripts:
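The gist that accompanied this sentence was lost in extraction. As a stand-in, here is my own sketch of the kind of package.json scripts block the surrounding commands suggest; the paths are placeholders:
{
  "scripts": {
    "start": "babel-node source/index.js",
    "build": "babel -d build-dir source-dir -s",
    "bundle": "browserify source/index.js -t babelify --outfile bundle.js",
    "lint": "eslint source"
  }
}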
React
Bonus for React users: you’re covered, too. Both Babel and ESLint support JSX. Nice!
EDIT: As of Nov 8, 2015, changes to the Babel 6 API have broken babel-react-transform. If you need it, you can install the latest compatible version of Babel:
npm install --save-dev babel@5.8.29
Party Time
Congratulations! You’re ready to start using ES6 for your universal apps. In case you’re wondering, here are a few of my favorite ES6 things you should explore:
- Compact object literals
- Destructuring
- Arrow functions (Great for one-line lambdas)
- Default params
- Rest params
- Generators
You should memorize the new defaults/overrides pattern right now:
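The code sample for that pattern was also lost in extraction; here is a minimal reconstruction of the idiom (my sketch, not the original gist):
// Merge user-supplied overrides onto defaults without mutating either.
const createConfig = (overrides) => {
  const defaults = {
    host: 'localhost',
    port: 8080
  };
  return Object.assign({}, defaults, overrides);
};

createConfig({ port: 3000 }); // => { host: 'localhost', port: 3000 }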
Enjoy!
Eric Elliott is the author of “Programming JavaScript Applications” (O'Reilly) and host of the documentary film-in-production, “Programming Literacy”.
|
https://medium.com/javascript-scene/how-to-use-es6-for-isomorphic-javascript-apps-2a9c3abe5ea2
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
API Basics
This document provides an overview of the basics of the Draft API. A working example is also available to follow along.
Controlled Inputs
The Editor React component is built as a controlled ContentEditable component, with the goal of providing a top-level API modeled on the familiar React controlled input API.
As a brief refresher, controlled inputs involve two key pieces:
- A value to represent the state of the input
- An onChange prop function to receive updates to the input
This approach allows the component that composes the input to have strict control over the state of the input, while still allowing updates to the DOM to provide information about the text that the user has written.
class MyInput extends React.Component {
  constructor(props) {
    super(props);
    this.state = {value: ''};
    this.onChange = (evt) => this.setState({value: evt.target.value});
  }

  render() {
    return <input value={this.state.value} onChange={this.onChange} />;
  }
}
The top-level component can maintain control over the input state via this value state property.
Controlling Rich Text
In a React rich text scenario, however, there are two clear problems:
- A string of plaintext is insufficient to represent the complex state of a rich editor.
- There is no such onChange event available for a ContentEditable element.
State is therefore represented as a single immutable EditorState object, and onChange is implemented within the Editor core to provide this state value to the top level.
The EditorState object is a complete snapshot of the state of the editor, including contents, cursor, and undo/redo history. All changes to content and selection within the editor will create new EditorState objects. Note that this remains efficient due to data persistence across immutable objects.
import {Editor, EditorState} from 'draft-js';

class MyEditor extends React.Component {
  constructor(props) {
    super(props);
    this.state = {editorState: EditorState.createEmpty()};
    this.onChange = (editorState) => this.setState({editorState});
  }

  render() {
    return (
      <Editor editorState={this.state.editorState} onChange={this.onChange} />
    );
  }
}
For any edits or selection changes that occur in the editor DOM, your onChange handler will execute with the latest EditorState object based on those changes.
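As a small illustration beyond the page itself (this snippet is mine, not from the official docs), the EditorState snapshot can be inspected inside onChange, for example to read the plain text of the current content:
this.onChange = (editorState) => {
  // EditorState exposes the content snapshot.
  const content = editorState.getCurrentContent();
  console.log(content.getPlainText());
  this.setState({editorState});
};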
|
https://draftjs.org/docs/quickstart-api-basics.html
|
CC-MAIN-2018-34
|
en
|
refinedweb
|
Configuration push services can be classified into global configuration push and intra-application configuration push in EDAS.
- The Global Configuration Push service pushes configurations to all applications under a given username.
- The (Intra-Application) Configuration Push service only pushes configurations within a given application.
This article describes the global configuration push service. For information about the intra-application configuration push service, see (Intra-application) configuration push.
Configuration in EDAS is a trio that contains Group, DataId, and Content. The three elements are defined as follows:
- Group: Name of a group, which is created in Service Group and used to isolate services in a namespace, for example, like a package in Java. The maximum length allowed is 128 characters.
- DataId: Configuration name, for example, a class name in Java. The maximum length allowed is 256 characters.
- Content: Configuration value. The maximum length allowed is 1,024 characters.
A piece of configuration is identified by Group and DataId collectively and corresponds to a value. The symbols that are allowed in the names of Group and DataId are period (.), colon (:), hyphen (-), and underscore (_).
You can add, modify, and delete configurations in real time, and apply configurations dynamically without the need to modify code, republish services, or restart services.
Note: If you have not created any services, the configuration list displays a piece of configuration that is automatically generated by the system, which you can ignore.
Create a global configuration
Log on to the EDAS console.
Choose Service Market > Service Groups on the left-side menu bar.
In the container upgrade prompt, select Upgrade Later.
Click Create Service Group in the upper-right corner of the page. In the Create HSF Service Group dialog box, enter the Service Group Name and click Create.
Click Global Configuration on the left-side menu bar.
In the Configuration List page, select the region and then select the service group you just created. Then, click Create Configuration in the upper-right corner of the page.
In the Create Configuration dialog box, enter the DataId and Content and then click OK.
Note: The group has already been selected on the Configuration List page. It is not editable in this dialog box.
View configuration list
Click Global Configuration on the left-side menu bar of the EDAS Console.
In the Configuration List page, select the region in which to view configurations.
View the global configuration list for this region.
By default, this page shows all the configuration information for the first group. From the group drop-down menu, you can select the group of which you want to view the configurations.
View global configuration details
On the Configuration List page, click the View button in the Actions column of the desired configuration.
The dialog box that appears shows the Group, DataId, and Content for the selected configuration.
Update global configuration
On the Configuration List page, click the Update button in the Actions column of the configuration to update.
The dialog box that appears allows you to modify the content of the configuration.
After completing the modification, click OK to update the configuration.
Delete global configuration
You can delete any global configuration that will no longer be used.
Note: A piece of configuration can no longer be used once deleted. Please proceed with caution.
On the Configuration List page, click the Delete button in the Actions column of the configuration to delete.
In the Delete Configuration dialog box, confirm the information and click Delete.
|
https://www.alibabacloud.com/help/doc-detail/54463.htm
|
CC-MAIN-2018-34
|
en
|
refinedweb
|
Welcome to our free Java tutorial. This tutorial is based on Webucator's Introduction to Java Training course.
In this lesson, you will learn about error handling within Java.
Lesson Goals
try ... catch structures to catch expected exceptions.
finally blocks to guarantee execution of code.
Exceptions are generated when a recognized condition, usually an error condition, occurs during the execution of a method. There are a number of standard error conditions defined in Java, and you may define your own error conditions as well.
When an exception is generated, it is said to be thrown.
Java syntax includes a system for managing exceptions, by tracking the potential for each method to throw specific exceptions. Note that:
There are two ways to handle an exception: catch it and process it where it occurs, or declare that your method throws it back to the calling method.
So, if you use a method in your code that is marked as throwing a particular exception, the compiler will not allow that code unless you handle the exception in one of these two ways.
Once an exception is thrown, it propagates backward up the chain of methods, from callees to callers, until it is caught. Note that:
If an exception is not caught in your code (which would happen if main was marked as throwing the exception) then the JVM will catch the exception, end that thread of execution, and print a stack trace.
There are cases where the compiler does not enforce these rules. Exceptions that fit this category are called unchecked exceptions.
Let's say we are writing a method called getThatInt(ResultSet rs) and we want to use the method getInt(int column) from the ResultSet passed in as a parameter:
public int getThatInt(ResultSet rs) {
    int i = 0;
    // will not compile as written: getInt() throws the checked SQLException
    return rs.getInt(3);
}
A look at the API listing for ResultSet tells us that the getInt() method throws SQLException, so we must handle that in our code. One option is to try the risky code and catch the exception:
public int getThatInt(ResultSet rs) {
    int i = 0;
    try {
        i = rs.getInt(3);
    } catch (SQLException e) {
        System.out.println("Exception occurred!");
        System.out.println(e.getMessage());
        e.printStackTrace();
    }
    return i;
}
The other option is to declare that our method throws the exception, leaving it to the caller to handle:
public int getThatInt(ResultSet rs) throws SQLException {
    int i = 0;
    i = rs.getInt(3);
    return i;
}
Note that although you are required to "handle" the exception, you aren't necessarily required to do anything useful about it!
Your decision as to which approach to use should be based on where you think responsibility for handling the exception lies. In the example above, the second approach is probably better, so that the code that works more closely with the SQL handles the exception.
When an exception is thrown, an exception object is created and passed to the catch block much like a parameter to a method. Note that:
There is an API class called Exception. Note that:
Most exception classes derive from Exception, which inherits from Throwable.
A second branch, Error, also inherits from Throwable. You are generally not expected to handle Error subtypes (like OutOfMemoryError or StackOverflowError).
RuntimeException is a subclass of Exception that is a base class for all the exception classes that you are not obligated to handle, but still might want to anyway (examples are ArithmeticException, from dividing by zero, NullPointerException, and ArrayIndexOutOfBoundsException).
So, there are several classes of exceptions you are not required to handle (the unchecked exceptions). Note that:
They are subclasses of Error or RuntimeException.
Examples include ArrayIndexOutOfBoundsException and IllegalArgumentException.
You may still handle them with try / catch / finally if you choose to.
A method that generates an exception can be written to not catch it. Instead it can let it be thrown back to the method that called it. The possibility that a method may throw an exception must be defined with the method.
[modifiers] returnType functionName(arguments) throws ExceptionClassName {
    // body including risky code
}
Then an instance of ExceptionClassName, or a class that extends it, may be thrown. Exceptions form an inheritance hierarchy, so stating that a method throws Exception is about as generic as you can get (stating that it throws Throwable is as generic as you can get, but not recommended). A method can throw more than one type of exception, in which case you would use a comma-separated list of exception types.
In this way, the method is now marked as throwing that type of exception, and code that calls this method will be obligated to handle it.
When you extend a class and override a method, you cannot add exceptions to the throws list, but a base class method can list exceptions that it does not throw in the expectation that an overriding method will throw the exception. This is another example of the "inheritance cannot restrict access" principle we saw earlier.
If main() throws an exception, the JVM, which runs under Java rules, will handle the exception (by printing a stack trace and closing down the offending thread; in a single-threaded program, this will shut down the JVM).
The keyword throw is used to trigger the exception-handling process (or "raise" the exception, as it is often termed in other languages). That word is followed by an instance of a throwable object, i.e., an instance of a class that extends Throwable. Usually, a new instance of an appropriate exception class is created to contain information about the exception.
For example, suppose a setAge() method expects a nonnegative integer; we can have it throw an IllegalArgumentException if it receives a negative value. It makes sense for the method that calls setAge() to do something about the problem, since it is where the illegal number came from. So, we can declare setAge() as throws IllegalArgumentException.
public void setAge(int age) throws IllegalArgumentException {
    if (age < 0)
        throw new IllegalArgumentException("Age must be >= 0");
    else
        this.age = age;
}
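To complete the picture, here is an illustrative caller (my own sketch, not part of the original lesson); it assumes a Person class exposing the setAge() method above:
// Hypothetical caller of setAge().
public static void readAge(Person p, int candidateAge) {
    try {
        p.setAge(candidateAge);
    } catch (IllegalArgumentException e) {
        // The bad value came from here, so this is the right place to react.
        System.out.println("Bad age rejected: " + e.getMessage());
    }
}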
NumberFormatException in Payroll
Our program to this point has been prone to potential bad numeric inputs when reading from the keyboard. The parsing methods all throw NumberFormatException.
We could now put each line that requests a number inside a small loop.
The loop could be controlled by a boolean variable, perhaps with a name like isInvalid and initially set to true (using the reverse approach is also a possible strategy).
Should the exception be handled in the main method or in the KeyboardReader class?
As a general principle, tools shouldn't attempt to handle exceptions when the handling logic would vary depending on the code using the tool. But, it would be a tremendous burden to put each step of a program that requests a numeric input in a looped try/catch block.
Instead, we could recognize that a common approach would be to loop until the input is numeric, printing an error message each time.
We could overload the get methods in KeyboardReader to accept an error message string, so it could do the looping for us. This way a reasonable solution would be provided, but the original method would still be available if the programmer wants to customize the exception handling.
If a programmer wants a different approach, they are still free to write it in their code and use the original KeyboardReader methods.
Add overloads of the numeric get methods to KeyboardReader that accept an error message string as a second parameter; have each loop on the numeric input request until it succeeds without throwing the exception (printing the error message each time the exception occurs).
Modify Payroll to call these methods.
This approach still doesn't solve the problem of limited employee types, valid department numbers, etc. Can you think of an approach that would? (Hint: interfaces are a powerful tool ...).
package util;

import java.io.*;

public class KeyboardReader {

    ---- C O D E O M I T T E D ----

    public static int getPromptedInt(String prompt, String errMsg) {
        for ( ; ; ) {
            try {
                return Integer.parseInt(getPromptedString(prompt));
            } catch (NumberFormatException nfe) {
                System.out.println(errMsg);
            }
        }
    }

    public static float getPromptedFloat(String prompt, String errMsg) {
        for ( ; ; ) {
            try {
                return Float.parseFloat(getPromptedString(prompt));
            } catch (NumberFormatException nfe) {
                System.out.println(errMsg);
            }
        }
    }

    public static double getPromptedDouble(String prompt, String errMsg) {
        for ( ; ; ) {
            try {
                return Double.parseDouble(getPromptedString(prompt));
            } catch (NumberFormatException nfe) {
                System.out.println(errMsg);
            }
        }
    }
}
import employees.*;
import vendors.*;
import finance.*;
import util.*;

public class Payroll {
    public static void main(String[] args) {
        Employee[] e = new Employee[5];
        String fName = null;
        String lName = null;
        int dept = 0;
        double payRate = 0.0;
        double hours = 0.0;

        ---- C O D E O M I T T E D ----

        dept = KeyboardReader.getPromptedInt(
                "Enter department: ", "Department must be numeric");

        ---- C O D E O M I T T E D ----

        System.out.println(e[i].getPayInfo());

        ---- C O D E O M I T T E D ----
    }
}
The revised code uses the new overloads of the getPromptedXXX methods.
package util;

public interface IntValidator {
    public boolean accept(int candidate);
}
This interface specifies a method that will be used to validate integers. A validator for a specific field (like department) would implement this with code to test for legal values for that field. The package contains similar interfaces for floats and doubles.
package employees;

import util.IntValidator;

public class DeptValidator implements IntValidator {
    @Override
    public boolean accept(int dept) {
        return dept > 0 && dept <= 5;
    }
}

/* Sample usage in Payroll
dept = KeyboardReader.getPromptedInt(
        "Enter department: ", "Dept must be numeric",
        new DeptValidator(), "Valid depts are 1 - 5");
*/
This class validates department numbers to be from 1 - 5 inclusive. We also could create separate validators for pay rates, etc.
package util;

import java.io.*;

public class KeyboardReader {

    ---- C O D E O M I T T E D ----

    public static int getPromptedInt(String prompt) {
        return Integer.parseInt(getPromptedString(prompt));
    }

    public static int getPromptedInt(String prompt, String errMsg) {
        for ( ; ; ) {
            try {
                return Integer.parseInt(getPromptedString(prompt));
            } catch (NumberFormatException e) {
                System.out.println(errMsg);
            }
        }
    }

    public static int getPromptedInt(
            String prompt, String formatErrMsg,
            IntValidator val, String valErrMsg) {
        for ( ; ; ) {
            try {
                int num = Integer.parseInt(getPromptedString(prompt));
                if (val.accept(num))
                    return num;
                else
                    System.out.println(valErrMsg);
            } catch (NumberFormatException e) {
                System.out.println(formatErrMsg);
            }
        }
    }

    public static float getPromptedFloat(String prompt) {
        return Float.parseFloat(getPromptedString(prompt));
    }

    public static float getPromptedFloat(String prompt, String errMsg) {
        for ( ; ; ) {
            try {
                return Float.parseFloat(getPromptedString(prompt));
            } catch (NumberFormatException e) {
                System.out.println(errMsg);
            }
        }
    }

    public static float getPromptedFloat(
            String prompt, String formatErrMsg,
            FloatValidator val, String valErrMsg) {
        for ( ; ; ) {
            try {
                float num = Float.parseFloat(getPromptedString(prompt));
                if (val.accept(num))
                    return num;
                else
                    System.out.println(valErrMsg);
            } catch (NumberFormatException e) {
                System.out.println(formatErrMsg);
            }
        }
    }

    public static double getPromptedDouble(String prompt) {
        return Double.parseDouble(getPromptedString(prompt));
    }

    public static double getPromptedDouble(String prompt, String errMsg) {
        for ( ; ; ) {
            try {
                return Double.parseDouble(getPromptedString(prompt));
            } catch (NumberFormatException e) {
                System.out.println(errMsg);
            }
        }
    }

    public static double getPromptedDouble(
            String prompt, String formatErrMsg,
            DoubleValidator val, String valErrMsg) {
        for ( ; ; ) {
            try {
                double num = Double.parseDouble(getPromptedString(prompt));
                if (val.accept(num))
                    return num;
                else
                    System.out.println(valErrMsg);
            } catch (NumberFormatException e) {
                System.out.println(formatErrMsg);
            }
        }
    }
}
If a base class method throws an exception, that behavior will also occur in any derived classes that do not override the method.
An overriding method may throw the same exception(s) that the base class method threw.
An overriding method cannot add new exceptions to the throws list. Similar to placing more strict access on the method, this would restrict the derived class object in ways that a base class reference would be unaware of.
If the derived class method does not throw the exception that the base class threw, it can either still declare the exception in its throws list, or omit it entirely (narrowing the contract for callers that know the derived type).
If you have a base class method that does not throw an exception, but you expect that subclasses might, you can declare the base class to throw that exception.
Exception Class Constructors and Methods
There are several forms of constructors defined in the base class for the exception hierarchy.
The forms involving a cause are used in situations like Servlets and Java Server Pages, where a specific exception is thrown by the JSP engine, but it may be rooted in an exception from your code.
Rather than declaring the servlet methods as throws Exception, they settled on throws IOException, ServletException (or JSPException for Java Server Pages). You could wrap other exceptions in ServletException objects if you did not want to handle them.
An Exception object has several useful methods: getMessage() returns the detail message, getCause() returns the underlying cause (if any), toString() combines the class name and message, and printStackTrace() prints the message and stack trace.
Also worth noting is that the Java Logging API has logging methods that will accept a Throwable parameter and make a log entry with the stack trace.
You can create your own exception class by extending an existing exception class.
[modifiers] class NewExceptionClassName extends ExceptionClassName {
    // create constructors that usually delegate to super-constructors
}
You could then add any fields or methods that you wish, although often that is not necessary.
You must, however, override any constructors you wish to use:
Exception(), Exception(String message), Exception(String message, Throwable cause), and Exception(Throwable cause). Usually you can just call the corresponding super-constructor.
If you extend RuntimeException or one of its subclasses, your exception will be treated as a runtime exception (it will not be checked).
When a situation arises for which you would want to throw the exception, use the throw keyword with a new object from your exception class, for example:
throw new ExceptionClassName(messageString);
class NewException extends Exception {
    NewException() {
        super();
    }
    NewException(String message) {
        super(message);
    }
    NewException(String message, Throwable cause) {
        super(message, cause);
    }
    NewException(Throwable cause) {
        super(cause);
    }
}

public class NewExceptionTest {

    public void thrower() throws NewException {
        if (Math.random() < 0.5) {
            throw new NewException("This is my exception");
        }
    }

    public static void main(String[] args) {
        NewExceptionTest t = new NewExceptionTest();
        try {
            t.thrower();
        } catch (NewException e) {
            System.out.println("New Exception: " + e.getMessage());
        } finally {
            System.out.println("Done");
        }
    }
}
The thrower method randomly throws a NewException, by creating and throwing a new instance of NewException. main tries to call thrower, and catches the NewException when it occurs.
Our payroll program can now handle things like a bad numeric input for pay rate (valid format, but not sensible, like a negative number) in a more comprehensive manner. We already are checking the numeric inputs from the keyboard, but there is no guarantee that later code will remember to do this. Using an exception mechanism guarantees protection from invalid values.
In the util package, create an exception class for InvalidValueException. Note that the Java API already contains a class for this purpose, IllegalArgumentException, but it is a RuntimeException, whereas we would like ours to be a checked exception.
In Employee (and potentially its subclasses), change the constructors that accept pay rate and the setPayRate methods to now throw that exception (a question to ask yourself: is it necessary to actually test the pay rate value anywhere other than in the Employee class setPayRate method?). You will see that the effect of throwing the exception ripples through a lot of code.
package util;

public class InvalidValueException extends Exception {
    public InvalidValueException() {
        super();
    }
    public InvalidValueException(String message) {
        super(message);
    }
    public InvalidValueException(Throwable cause) {
        super(cause);
    }
    public InvalidValueException(String message, Throwable cause) {
        super(message, cause);
    }
}
package employees;

import finance.Payable;
import util.*;

public class Employee extends Person implements Payable {

    ---- C O D E O M I T T E D ----

    public Employee(String firstName, String lastName, double payRate)
            throws InvalidValueException {
        super(firstName, lastName);
        setPayRate(payRate);
    }

    public Employee(String firstName, String lastName, int dept, double payRate)
            throws InvalidValueException {
        this(firstName, lastName, dept);
        setPayRate(payRate);
    }

    ---- C O D E O M I T T E D ----

    public void setPayRate(double payRate) throws InvalidValueException {
        DoubleValidator val = new ValidatePayRate();
        if (!val.accept(payRate))
            throw new InvalidValueException(payRate + " is an invalid pay rate");
        this.payRate = payRate;
    }

    ---- C O D E O M I T T E D ----
}
The marking of setPayRate throws InvalidValueException ripples through the constructors, so they should be marked as well.
package employees;

import util.*;

public class ExemptEmployee extends Employee {

    public ExemptEmployee(String firstName, String lastName, double payRate)
            throws InvalidValueException {
        super(firstName, lastName, payRate);
    }

    public ExemptEmployee(String firstName, String lastName, int dept, double payRate)
            throws InvalidValueException {
        super(firstName, lastName, dept, payRate);
    }

    public String getPayInfo() {
        return "Exempt Employee " + getId() + " dept " + getDept() + " "
                + getFirstName() + " " + getLastName() + " paid " + getPayRate();
    }
}
Calling super-constructors that throw our exception requires that these constructors also be marked. The other classes, not shown, should be similarly marked.
import employees.*;
import vendors.*;
import util.*;
import finance.*;

public class Payroll {
    public static void main(String[] args) {

        ---- C O D E O M I T T E D ----

        for (int i = 0; i < e.length; i++) {
            try {

                ---- C O D E O M I T T E D ----

            } catch (InvalidValueException ex) {
                System.out.println(ex.getMessage());
                i--; // failed, so back up counter to repeat this employee
            }
        }

        ---- C O D E O M I T T E D ----
    }
}
Since we are already checking the values of the pay rate and hours, we shouldn't expect to see any exceptions thrown, so it is reasonable to put the entire block that gets the employee data and creates an employee in a try block. If we decrement the counter upon a failure, then that employee's data will be requested again.
You might want to test your logic by temporarily changing one of the test conditions you use when reading input (like hours > 0 to hours > -20), so that you can see the result (keep count of how many employees you are asked to enter).
An exception may be rethrown.
When we throw an exception, it does not necessarily have to be a new object. We can reuse an existing one.
This allows us to partially process the exception and then pass it up to the method that called this one to complete processing. This is often used in servlets and JSPs to handle part of the problem (possibly just log it), but then pass the problem up to the servlet or JSP container to abort and send an error page.
String s = "1234X";
try {
    Integer.parseInt(s);
} catch (NumberFormatException e) {
    System.out.println("Bad number, passing buck to JVM");
    throw e;
}
The stack trace will still have the original information. The fillInStackTrace method for the exception object will replace the original information with information detailing the line on which fillInStackTrace was called as the origin of the exception.
|
https://www.webucator.com/tutorial/learn-java/exceptions.cfm
|
CC-MAIN-2018-34
|
en
|
refinedweb
|
MPI_File_delete - Deletes a file.
C Syntax
#include <mpi.h>
int MPI_File_delete(char *filename, MPI_Info info)
Fortran Syntax
INCLUDE 'mpif.h'
MPI_FILE_DELETE(FILENAME, INFO, IERROR)
CHARACTER*(*) FILENAME
INTEGER INFO, IERROR
C++ Syntax
#include <mpi.h>
static void MPI::File::Delete(const char* filename, const MPI::Info& info)
filename Name of file to delete (string).
info Info object (handle).
IERROR Fortran only: Error status (integer).
MPI_File_delete deletes the file identified by the file name filename, provided it is not currently open by any process. It is an error to delete the file with MPI_File_delete if some process has it open, but MPI_File_delete does not check this. If the file does not exist, MPI_File_delete returns an error in the class MPI_ERR_NO_SUCH_FILE.
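A short usage sketch (not part of the man page itself); the file name is a placeholder:
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rc;

    MPI_Init(&argc, &argv);

    /* Delete a file that no process currently has open. */
    rc = MPI_File_delete("scratch.dat", MPI_INFO_NULL);
    if (rc != MPI_SUCCESS) {
        printf("MPI_File_delete failed (file may not exist)\n");
    }

    MPI_Finalize();
    return 0;
}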
|
http://icl.cs.utk.edu/open-mpi/doc/v1.2/man3/MPI_File_delete.3.php
|
CC-MAIN-2016-22
|
en
|
refinedweb
|
set coordinate value (in radian) to a point
The coordinate value will be set correctly: if the coordinate system of the point is in degrees, the radian value will be converted to degrees.
template<std::size_t Dimension, typename Geometry>
void set_from_radian(Geometry & geometry,
                     typename fp_coordinate_type<Geometry>::type const & radians)
Either
#include <boost/geometry/geometry.hpp>
Or
#include <boost/geometry/core/radian_access.hpp>
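A brief usage sketch, not part of the reference page itself. It assumes a spherical point type measured in degrees so that the radian-to-degree conversion described above actually takes place:
#include <boost/geometry.hpp>

namespace bg = boost::geometry;

int main()
{
    // Point in a spherical coordinate system measured in degrees.
    typedef bg::model::point<
        double, 2,
        bg::cs::spherical_equatorial<bg::degree> > point_type;

    point_type p;

    // Set coordinates from radian values; they are stored converted to
    // degrees because of the point's coordinate system.
    bg::set_from_radian<0>(p, 1.0);   // ~57.2958 degrees
    bg::set_from_radian<1>(p, 0.5);   // ~28.6479 degrees

    return 0;
}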
|
http://www.boost.org/doc/libs/1_47_0/libs/geometry/doc/html/geometry/reference/access/set/set_from_radian.html
|
CC-MAIN-2016-22
|
en
|
refinedweb
|
Thanks a lot, now it works without linking directly libpython to myserver ;)

On Fri, Aug 8, 2008 at 8:27 PM, Giuseppe Scrivano <address@hidden> wrote:
> Hello,
>
> The python symbols must be loaded globally. To ensure it (by default
> a plugin is not loaded globally) you need to specify these lines in
> the myserver.xml configuration file:
>
> <PLUGIN namespace="executors" name="python">
>   <ENABLED>YES</ENABLED>
>   <GLOBAL>YES</GLOBAL>
> </PLUGIN>
>
> This is the same behaviour you get linking libpython2.5 directly to
> the myserver executable instead of the python plugin as it should be.
>
> Regards,
> Giuseppe
>
> "Daniele Perrone" <address@hidden> writes:
>
>> Hello, I'm working on the python plugin to add the capability of calling
>> python callbacks from myserver. I've created a python class that
>> manages a thread pool for executing the callbacks. At this point the
>> work is in progress, but there is a linking problem when I attempt to
>> load the python plugin into myserver. Momentarily I solved it only by
>> linking libpython into myserver.
>> I've attached the patch to this email; if there are ideas for resolving
>> this issue they are welcome!!

--
Perrone Daniele
|
http://lists.gnu.org/archive/html/bug-myserver/2008-08/msg00002.html
|
CC-MAIN-2016-22
|
en
|
refinedweb
|
In Nagios it is easy to check that a log message happened in the last 48 hours and sound alarm.
But how can I configure Nagios that it should sound alarm when a message did not occur in the last 48 hours? Why is this so hard?
I'm using the "Check WMI Plus" plugin (no agent required) to check the event log on a windows box.
I think the question is really about how to structure the WMI query so that it returns true when no results are returned. (I would add "WQL" or "WMI" or both as tags to the question).
One of the best ways to get some experience with WMI querying is to download the WMI Code Creator from Microsoft. Of course you have to run this on a windows box, but you'll be able to zero in on the query you need to feed into the Nagios plugin using the GUI.
The querying language used for WMI is WMI Query Language (WQL), similar to SQL you can query whether a particular eventcode exists within the last 48 hours. Here are some useful links about syntax that is acceptable for WQL.
WQL Keywords: link
WQL Operators: link
WQL Supported Date Formats: link
Namespace: root\CIMV2
Event Class: [depends on what you're looking for specifically]
TargetClass: Win32_NTLogEvent link
You'll be using the most common root\CIMV2 namespace, and the class you'll need is the Win32_NTLogEvent class to obtain the information you're looking for. The rest is just the structure of the query.
Since we don't know which particular event you're looking for there are a couple of properties you can use to change up the query.
Logfile = Which Event log do you want to look in? "Application" or "System" etc...
User = Who generated the event? "NT AUTHORITY\SYSTEM" or maybe you're looking for someone specifically.
You can narrow the query using the WHERE clause, just like in SQL, using the TimeGenerated property. TimeGenerated is in IntervalFormat (or UTC format) link.
Here is a quick guide on working with Dates and Times using WMI. link
WHERE DateDiff("hh",TimeGenerated,GetDate()) < 48
So to put all that together it should look something like this.
SELECT * FROM Win32_NTLogEvent WHERE EventCode=4001
AND DateDiff(hh,TimeGenerated,GetDate()) < 48
4001 is just a made-up number, look up the event ID for what you're wanting to query on. ;)
You can add additional AND statements to include properties to narrow the results as needed. This in addition to Phil's answer should get you where you need to be.
I don't have a lot of experience with WMI, so I'm not sure how the queries for getting things from the event log go, but assuming you can write that part (and you indicate that you can), you can use Check WMI Plus to set a lower threshold for the number of matching log messages with something like this:
[section check]
query=SELECT * FROM <your query here ...>
test=_ItemCount
display=_DisplayMsg||~|~| - ||
display=_ItemCount|#
perf=_ItemCount||Log Entries
With that in place, you can run check_wmi_plus.pl with -c 1: to return a CRITICAL status if there are fewer than one log entries found. (More information about thresholds in Check WMI Plus is at "Can you show me some example warning/critical criteria?".)
check_wmi_plus.pl
-c 1:
It's not hard. You can just combine your typical log check with the standard negate plugin to achieve this.
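For reference, here is a rough sketch of what that looks like as a Nagios command definition; the plugin paths, mode, and arguments below are assumptions for illustration, not taken from a working setup:
# wrap an ordinary "alert when the message IS present" check with negate,
# which swaps the OK and CRITICAL exit codes
define command {
    command_name  check_message_absent
    command_line  $USER1$/negate $USER1$/check_wmi_plus.pl -H $HOSTADDRESS$ -u $ARG1$ -p $ARG2$ -m checkeventlog
}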
|
http://serverfault.com/questions/511034/notify-when-a-message-did-not-occur
|
CC-MAIN-2016-22
|
en
|
refinedweb
|
In this article I will explain how to open and start a web page (.aspx page) from a Windows application in C#.
Step 1: Create a new Windows-based project. Start Visual Studio, then go to "File" -> "New" -> "Project..." and select "Windows" -> "Windows Forms Application".
Step 2: Design your form: drag a button onto your form.
Step 3: Add the required namespace to the top of your application code.
Step 4: Write the code for the button Click event; here you can provide the web page URL. (A sketch of both is shown below.)
Step 5: Run your application, click the button, and see the output.
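Steps 3 and 4 might look something like the following sketch; the URL here is a placeholder for your own page:
using System.Diagnostics;

private void button1_Click(object sender, EventArgs e)
{
    // opens the page in the user's default browser
    Process.Start("http://localhost/MyWebSite/Default.aspx");
}
Output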
This is my web page.
|
http://www.c-sharpcorner.com/UploadFile/7d3362/call-aspx-page-from-window-application-in-C-Sharp/
|
CC-MAIN-2016-22
|
en
|
refinedweb
|
OCaml Labs.
Tasks
Multicore
In Progress by KC Sivaramakrishnan (Mar 2013 - Apr 2015)
Github Paper Video
In the case of OCaml, users have come to rely on certain operations being cheap, and OCaml's C API exposes quite a lot of internals.
A previous design by Doligez et al. for Caml Light was based on many thread-private heaps and a single shared heap. It maintains the invariant that there are no pointers from the shared to the private heaps. Thus, storing a pointer to a private object into the shared heap causes the private object and all objects reachable from it to be promoted to the shared heap en masse. Unfortunately this eagerly promotes many objects that were never really shared: just because an object is pointed to by a shared object does not mean another thread is actually going to attempt to access it.
Our design is similar but lazier, along the lines of the multicore Haskell work, where objects are promoted to the shared heap whenever another thread actually tries to access them. This has a slower sharing operation, since it requires synchronisation of two different threads, but it is performed less often.
Please see the OCaml 2014 short paper linked above for more details.
Emission of DWARF debugging information
In Progress by Mark Shinwell (Jan 2013 - Apr 2015)
4.00.1-allocation-profiling Video
Debuggers such as the GNU debugger gdb are valuable tools when tracking down problems in low-level or parallel applications. The programmer experience when using such a debugger to examine natively-compiled OCaml programs currently lacks lustre. Recent versions of the compiler can emit a limited amount of debugging information which enables the recovery of correct stack traces in the debugger. However names of functions still appear in mangled form, it is not possible to reference local variables by name, and traversal of OCaml values is troublesome. This is unfortunately by no means an exhaustive list of deficiencies.
This project aims to equip the native-code OCaml compiler and the GNU debugger with the necessary infrastructure to improve debugging of OCaml programs. The compiler will be enhanced to emit the standard DWARF debugging information format in order to describe the naming and placement of data together with relevant type information. At the same time the debugger will gain functionality to understand the OCaml-specific parts of this information including the ability to demangle OCaml names. It is planned to implement much of the DWARF output stage in the compiler and the debugger-side support in libraries such that they might be re-used in other projects.
It is hoped that, as support for native-code debugging of OCaml programs in the traditional manner evolves, it will become more easily possible to build more advanced debugging tools. These might exploit the scripting capabilities of gdb, for example, and target environments such as large-scale concurrent systems.
This work is ongoing in the dwarf branch of the OCaml repository.
Modular Implicits prototype
Complete by Leo White (Jan 2014 - Feb 2015)
Github Demo Paper Video
Modular implicits enable ad-hoc polymorphism; the canonical example is a print function which works across multiple types.
Taking inspiration from Scala’s implicits and "Modular Type Classes", we propose a system for ad-hoc polymorphism in OCaml based on using modules as type-directed implicit parameters. You can try out an interactive REPL of a prototype implementation online.
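For a flavour of the prototype, here is a small sketch in the proposal's syntax (this is not standard OCaml, and the names are illustrative):
module type Show = sig
  type t
  val show : t -> string
end

let show {S : Show} x = S.show x

implicit module Show_int = struct
  type t = int
  let show = string_of_int
end

let () = print_endline (show 5)  (* Show_int is selected implicitly *)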
Namespaces and module aliases
Complete by Leo White (Feb 2013 - Sep 2014)
Blog Epic Mail Thread Paper Video
Namespaces provide a means for grouping the components of a library together.
Up to now this has been achieved using the OCaml module system. Since the components of an OCaml library are modules, a module can be created that contains all the components of the library as sub-modules. However, there are some critical problems with creating a single module containing the whole library:
The module is a single unit that has to be linked or not as a whole. This means that any program using part of the library must include the entire library.
The module is a choke-point in the dependency graph. If a file depends on one thing in the library then it needs to be recompiled if anything in the library changes.
Opening a very large module is slow and can seriously affect build performance.
These problems are caused by the runtime semantics of modules. Namespaces have no runtime sematics and could provide a solution to these problems.
While the namespaces feature continues to be refined, support for type-level module aliases was added to the OCaml 4.02 compiler. This is a trivial extension of the ML module system that helps to avoid unnecessary code dependencies, and provides an alternative to strengthening for type equalities.
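A minimal sketch of the idiom (the module names are assumptions): a small wrapper file re-exports prefixed compilation units as type-level aliases, so a program that uses only Mylib.Foo does not pick up a code dependency on Mylib_bar.
(* mylib.ml, compiled with -no-alias-deps *)
module Foo = Mylib_foo  (* type-level alias: no runtime code, no link dependency *)
module Bar = Mylib_bar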
Higher kinded polymorphism
Complete by Jeremy Yallop (Jun 2013 - Aug 2014)
Paper Github.
The higher library shows how to express higher-kinded polymorphism in OCaml
without functors, using an abstract type
app to represent type application,
and opaque brands to denote abstractable type constructors. The flexibility of
our approach can be seen by using it to translate a variety of standard
higher-kinded programs into functor-free OCaml code. Read more about this in
the FLOPS 2014 paper linked above.
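A condensed sketch of the encoding; the names follow the paper, but treat this as illustrative rather than the library's exact interface:
(* 'app' represents the application of a type constructor 'f to 'a *)
type ('a, 'f) app

(* an opaque brand for the list constructor, with identity coercions *)
module ListH = struct
  type t
  external inj : 'a list -> ('a, t) app = "%identity"
  external prj : ('a, t) app -> 'a list = "%identity"
end

(* abstracting over an arbitrary unary type constructor, without functors *)
module type Functor = sig
  type f
  val map : ('a -> 'b) -> ('a, f) app -> ('b, f) app
end

module ListFunctor : Functor with type f = ListH.t = struct
  type f = ListH.t
  let map g xs = ListH.inj (List.map g (ListH.prj xs))
end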
Exception matches
Complete by Jeremy Yallop (Nov 2013 - Jun 2014)
Blog Post Bug report
OCaml's
try construct is good at dealing with exceptions, but not so good at
handling the case where no exception is raised. This feature (now part of
OCaml 4.02.0) implements a simple extension to
try that adds support for
handling the "success" case.
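For instance, a minimal sketch using the 4.02 syntax:
let read_line_opt ic =
  match input_line ic with
  | exception End_of_file -> None
  | line -> Some line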
Open types
Complete by Leo White (Oct 2012 - May 2014)
Github Website Bug report
Add open extensible types to OCaml. One open type already exists
within OCaml: the
exn type used for exceptions. This project extends
this mechanism to allow the programmer to create their own open types.
This has previously been proposed for functional languages a number of
times, for instance as part of a solution to the expression problem
(Loh et al. "Open Data Types and Open Functions").
Unlike "exn", these extensible types can have type parameters, allowing
for extensible GADTs.
For example:
type foo = ..
type foo += A
type foo += B of int
let is_a x = match x with A -> true | _ -> false
This feature was merged upstream into OCaml 4.02 and is now available
as standard. To try it with OPAM if you have an older system compiler,
just do
opam switch 4.02.1.
OCaml Java
Complete by Xavier Clerc (Apr 2013 - Aug 2013)
OCaml Java is a compiler from OCaml source code to Java bytecode, that can run on any modern Java runtime. This is an interesting way to explore the multicore runtime performance of OCaml with a highly concurrent collector, as is present in the latest JVMs.
The goal of this work is to stabilise and release the preview of 2.0, which greatly improves CPU utilisation and memory footprint.
Syntax extensions
Complete by Leo White (Dec 2012 - Jun 2013)
Working group Blog (part 1) Blog (part 2)
Since its creation camlp4 has proven to be a very useful tool. People have used it to experiment with new features for OCaml and to provide interesting meta-programming facilities. However, there is general agreement that camlp4 is too powerful and complex for the applications that it is most commonly used for, and there is a growing movement to provide a simpler alternative.
A working group was formed (wg-camlp4@lists.ocaml.org) regarding the future of syntax extensions in OCaml. The aim of the working group is to formulate a solid transition plan to create a 'basic OCaml ecosystem' that does not require camlp4. Alain Frisch's introductory email has more detail and can be found in the archive.
Record disambiguation
Complete by Leo White (Sep 2012 - Feb 2013)
Bug report
Type-based record disambiguation: Leo helped with the record-disambiguation branch of OCaml by Jacques Garrigue. This branch uses type-information to disambiguate between record labels and variant constructors with the same names. For discussions of the semantics of this feature see Gabriel's or Alain's blog posts. Leo rewrote the record-disambiguation branch to use an alternative semantics and improved the error messages. The branch has since been merged into OCaml trunk
|
http://www.cl.cam.ac.uk/projects/ocamllabs/tasks/compiler.html
|
CC-MAIN-2016-22
|
en
|
refinedweb
|
This article is Day #16 in a series called 31 Days of Windows 8. Each of the articles in this series will be published for both HTML5/JS and XAML/C#. You can find additional resources, downloads, and source code on our website.
Today, our focus is on context menus aka PopupMenu. These are those small little popup commands that appear from time to time in your application when you right click on something. Microsoft offers some very specific guidance on when to use these context menus vs. when to use the AppBar control instead, so we will be following those rules in this article.
What Is a Context Menu?
If you’ve used Windows 8 at all, you’ve likely encountered these before. Often they result from a right-click on something you couldn’t select, or on text you wanted to interact with. Here’s a quick sample of a context menu:
You could also launch a context menu from an element on your page that is unselectable, like this image in my sample app:
Right-clicking the image launches the context menu center right above it. (I will show you how to make this happen next.) Each of the command items will have an action assigned to it, which is executed when the item is clicked.
Creating the Context Menu
Creating the Context Menu that you see above is pretty straight forward. To start with, we will wire up a handler to our image element such that anytime someone right clicks on the object they will get our context menu. This looks like any other event listener, and we will listen on ‘contextmenu’ like such.
document.getElementById("myImage").addEventListener("contextmenu", imageContentHandler, false);
Next, we need to create our handler, which I have managed to call imageContentHandler, I know pretty awesome naming there. In our function we will simply start out by creating our PopupMenu object and adding the items we want to be shown. Each item can have its on event handler as well and in the example below I am just giving them all the same event handler of somethingHandler ( I know more awesome naming ).));
With our PopupMenu defined, now we just need to show it. We can do that by calling showAsync and passing it some x,y coordinates like you see below.
contextMenu.showAsync({ x: [someCoords], y: [someCoords] });
Now how do we figure out where exactly to put it?
Determining The Location
You may have noticed that context menus appear directly above and centered to the element that has been selected. This doesn’t happen by magic. We’ll actually have to determine the position of the clicked element by ourselves (as well as any applicable offsets), and pass that along when we show the menu. Time to sharpen some pencils.
When a user clicks on our image, our contextmenu event is going to fire, calling our handler, imageContentHandler. It's going to pass some arguments that we need to kick off our fact finding. There are three critical pieces of data we're looking for here: 1. the x and y of the click, 2. the offset of the element that was its target, and 3. the width of the target object (the target object here is the image). Let's see the code, and then come back to it.
function showWhereAbove(clickArgs) {
    var zoomFactor = document.documentElement.msContentZoomFactor;
    var position = {
        x: (clickArgs.pageX - clickArgs.offsetX - window.pageXOffset + (clickArgs.target.width / 2)) * zoomFactor,
        y: (clickArgs.pageY - clickArgs.offsetY - window.pageYOffset) * zoomFactor
    }
    return position;
}
So I have created this simple function called showWhereAbove taking something called the clickArgs. clickArgs is nothing more than the args that was passed into our contextmenu handler. To figure out both x and y are actually pretty much the same.
x = ( where x click was – offset to left of target – window x offset + ( target width /2 )) * window zoom factor
Let’s continue to break that down
- where x click was = the x position of the mouse at the time of click
- offset to left of target = how much to the right of the element's left edge we are
- window x offset = where are we in relation to the window being scrolled
- target width / 2 = we need to move the PopupMenu to the middle of the object so we’re going to take the width and get middle
- window zoom factor = if the screen is zoomed in put it in apply that factor
Now the y axis is the same in principle as x, except we don't have to worry about the height of the target. Since we're working with the bottom center of the context menu, just getting our y offset is enough to put the bottom of the PopupMenu in the right position.
Again remember – we’re trying to position the the bottom middle of the Popup Menu. Now that we know our position, we just need to show it.
contextMenu.showAsync(showWhereAbove(args));
Here is the finished function:
function imageContentHandler(args) {
    var contextMenu = new Windows.UI.Popups.PopupMenu();
    contextMenu.commands.append(new Windows.UI.Popups.UICommand("Command 1", somethingHandler));
    contextMenu.commands.append(new Windows.UI.Popups.UICommand("Command 2", somethingHandler));
    contextMenu.showAsync(showWhereAbove(args));
}
Launching a Context Menu From Selected Text
Actually, nearly everything about this process is the same, except for the math on where to pop the box from. Initially when setting out to solve this problem, I was hoping to add on to the existing context menu that already appears when you right-click on text:
As it turns out, you can’t ( at least that we’re aware of ). So, for our example, let’s say that we want to keep all of those options, but also add a new one titled “Delete” at the bottom. To do this, we’re going to have to create our own. While we know how to do so already there are two differences:
- We need to place our PopupMenu at the top of our selected text and in the middle.
- Stop the default behavior.
Like before, we wire up an event handler to the contextmenu event of something like a <p> element.
document.getElementById("myInputBox").addEventListener( "contextmenu", inputHandler, false);
- tip – I am using a paragraph element here for demonstration. Because of that I need to make my element selectable. I did so in CSS like so
p { -ms-user-select: element; }
Now we need to create our handler, in this case inputHandler. Remember I said that we needed to prevent the default and do our own thing. To do so, we're going to call preventDefault AND we're going to see if some text was in fact selected. At that point, we can then create our PopupMenu and show it.
function inputHandler(args) {
    args.preventDefault();
    if (isTextSelected()) {
        //create the PopupMenu and its items
    }
}

function isTextSelected() {
    return (document.getSelection().toString().length > 0);
}
After your PopupMenu is created we need to show it, but unlike before when we called showAsync(), this time we need to call showForSelectionAsync. Since we're going to show above the selection, we need to understand what exactly was selected rather than just the element itself.
contextMenu.showForSelectionAsync( showForSelectionWhere( document.selection.createRange().getBoundingClientRect()));
You can see I have created again a helper function called showForSelectionWhere to take care of that math. We're going to pass into it the bounding rectangle of our selection. Let's look at the code and then come back to it.
function showForSelectionWhere(boundingRect) {
    var zoomFactor = document.documentElement.msContentZoomFactor;
    var position = {
        x: (boundingRect.left + document.documentElement.scrollLeft - window.pageXOffset) * zoomFactor,
        y: (boundingRect.top + document.documentElement.scrollTop - window.pageYOffset) * zoomFactor,
        width: boundingRect.width * zoomFactor,
        height: boundingRect.height * zoomFactor
    }
    return position;
}
showForSelectionAsync takes an object with 4 properties; x, y, width, height. Let’s look at x quickly.
- x = left edge of the bounding rect + any pixels scrolled within the container element – the window scroll offset * zoom factor
- y is basically the same as x, different axis.
Very similar to when we were working with the image element, other than we have to figure out our position from within the element we're part of. The results should be as you would expect.
Summary
Today we looked at the art of creating inline context menus for our users. They are an excellent way to provide interactions with elements that aren’t selectable, or for commands that make more sense directly adjacent to the element being interacted with.
If you would like to see the entire code sample from this article, click the icon below:
Tomorrow, we will venture further into the Clipboard functionality available to us in Windows 8 development. See you then!
|
https://dzone.com/articles/31-days-windows-8-html5-day-16
|
CC-MAIN-2016-22
|
en
|
refinedweb
|
On Wednesday, August 13, 2003, at 03:19 pm, David Jencks wrote:
> On Wednesday, August 13, 2003, at 08:06 AM, James Strachan wrote:
>> On Wednesday, August 13, 2003, at 05:15 am, David Jencks wrote:
>>> On Tuesday, August 12, 2003, at 03:13 PM,) ?
>>>
>>> -- you can deploy a pojo as an mbean with __no__ modification or
>>> interface required. For instance, in JBoss 4, the jca adapter
>>> objects are deployed directly as xmbeans. Thus you can see and
>>> modify all the properties of a deployed ManagedConnectionFactory
>>> instance.
>>> --"artificial" attributes that are not actually in the underlying
>>> object.
>>> --attributes whose accessors don't follow the standard mbean naming
>>> convention
>>> --you can include lots of descriptive info to display in a
>>> management console. This makes the console self documenting.
>>> --with an interceptor based model mbean implementation, you can add
>>> interceptors that implement additional operations. For instance, in
>>> the JBoss 4 jca 1.5 support, ActivationSpec instances are deployed
>>> directly as xmbeans. There is an no-arg "start" operation
>>> implemented by an interceptor that is called by the JBoss lifecycle
>>> management and in turn calls the jca start(ResourceAdapter) method.
>>> (the ObjectName of the deployed resourceadapter instance is held in
>>> an "artificial" attribute).
>>
>> Agreed. This is good. I like the idea of supporting POJOs and beans.
>> The nice thing from Geronimo's perspective is they can be easily
>> converted to MBeans and it doesn't need to worry about the different
>> ways you can use to make an MBean.
>>
>>
>>>>
>>>> ?).
>>>
>>> How can you generate the metadata without at least minimal source
>>> code markup? What is the useful stuff you are thinking of?
>>
>> Incidentally, I've often found making MBeans via XDoclet painful.
>> Typically when I have some service, I want all its public methods and
>> attributes to be visible to JMX unless I explicitly say not to.
>> Typically the methods I put on the service are for JMX anyways. So I
>> don't wanna have to litter my code with @jmx:attribute and
>> @jmx:operation and so forth all over the code - I find it easier to
>> use the *MBean.java interface approach.
>
> You find writing a *MBean.java class yourself easier than including
> the xdoclet tags?
Yes! :) Maybe its just me, but I can take advantage of refactoring
tools etc. So I can add a method to the interface & the IDE will
generate a method body, or use 'extract to interface' and so forth. It's
so easy to miss a doclet tag and mess things up - hacking doclet tags
feels like lots of extra error-prone typing - maybe I'm just lazy.
>> So I can imagine a way of generating MBeans metadata in a simple way
>> with most of the metadata defaulted unless overridden by metadata
>> (doclet tags or XML etc). e.g. use the normal javadoc descriptions of
>> the service & attributes & methods unless I explicitly override it -
>> plus default to use all public methods for introspection.
>>
>> e.g. this bean should be usable to generate pretty much all of the
>> JMX metadata.
>>
>> /** An egg timer... */
>> public class EggTimer implements Serializable {
>>
>> /** Starts the egg timer */
>> public void start() {...}
>>
>> public int getFoo() {...}
>>
>> /** sets the foo thingy */
>> public void setFoo() {...}
>>
>> /** @jmx:hide */
>> public void hack();
>> // the above method will be hidden from JMX
>> // which is the exception
>> }
>>
>>
>> Then in your build system you could have something like this
>>
>> <xdoclet:makeJmx>
>> <fileset dir="src/java" includes="**/*Service.java"/>
>> </xdoclet:makeJmx>
>>
>
> This would be a pretty easy xdoclet template to write, although
> xdoclet might need some help to distinguish between operations and
> attributes. I guess we could use the javadoc for the jmx comment?
Absolutely - both on the class, methods & properties. Of course it can
be overloaded - I'm just discussing a defaulting mechanism if you don't
happen to specify any jmx-related doclet tags.
> I'm not sure how to choose between the getter and setter javadoc for
> an attribute.
Agreed. I've been meaning to have a go at trying this out for some
time. I've discussed this with Aslak of XDoclet a few times - now he
lives a mile away there's no excuse for not sorting it out. AFAIK
Xdoclet2 should allow this fairly easily I hope. I basically want a
pipeline approach.
extract doclet tags -> transformer -> xdoclet template.
e.g. take all the doclet tags & then process them with rules - adding
sensible defaults for a project (like for all classes of a certain
kind, name pattern, interface or package, apply some doclet tag
defaulting rules first before generating the code). Should be easy to
do for all kinds of XDoclet generations.
>>>>> ?
>>> How would it decide which operations/attributes should be exposed?
>>> For the jca stuff I mentioned, the needed metadata is specified in
>>> the ra.xml file.
>>
>> Default to all of them that are available via introspection unless
>> any metadata says not to (or hides certain things). Whether this is
>> via XML config file or doclet tags is optional.
>
> This seems quite reasonable.
Phew :)
James
-------
|
http://mail-archives.apache.org/mod_mbox/geronimo-dev/200308.mbox/%3C8B09F9AC-CD9B-11D7-A504-000A959D0312@yahoo.co.uk%3E
|
CC-MAIN-2016-22
|
en
|
refinedweb
|
Answered by:
get numa node from pointer address
- Is there a way to learn on which NUMA node a memory block resides? I have an application where large blocks of memory that are externally allocated need to be processed by multiple threads in parallel. I would like something like:
GetNumaNodeNumber(
__in LPVOID ptr,
__out PULONG NodeNumber
);
Question
Answers
All replies
- I might have missed something here, but I don't know of anything. I will definitely ask and see if I have missed something, but there doesn't appear to be an equivalent to VirtualAllocNumaEx which lets you query the NUMA node from an address.
Windows Server 2008 R2 did extend the NUMA API set and there is a good overview here:
Are you hitting measurable performance issues as a result of this and if I may ask what is the rough HW config you're looking at here.
Thanks,
Rick
Rick Molloy Parallel Computing Platform :
- Marked as answer by rickmolloy Friday, September 18, 2009 4:14 PM
- Unmarked as answer by mattijsdegroot Friday, December 04, 2009 9:27 AM
- Thanks a lot for your input and sorry for not responding sooner. Forgot to check on any replies while I was busy with some other things.
To be honest I am not sure at this point if I am hitting performance issues since I have no way of knowing if the data resides on the node processing it or not. I have a camera that spits out frames of data at a high rate (ca. 500 MBytes/s). These frames need to be processed and I would like to do the processing on the node where the data resides. Since I have no control over the memory allocation for the camera buffers I would like to check on which node they reside.
Even if there is no API function that does this, is there a way to determine it based on the pointer address? Is the first half of the address space allocated to Node 0 and the second half to Node 1 perhaps?
My hardware is the following:
- HP Proliant ML370 G6
- 2x Intel Xeon E5540
- 8GB memory
All comments are highly appreciated,
Mattijs
- Thanks for the information. I'll have a look at that code.
Do you agree that it would be reasonable to expect such a function in the API? It would seem to me that having to deal with externally allocated memory is a fairly general problem in NUMA aware applications.
Mattijs
- I, personally, can see no reason why such information should be withheld or not presented via the API if the information is already available to the Operating System itself (Full system topology info is a related case in point that IMO should always have been available since even before dual processor machines first emerged, Win7 at least addresses that lack now).
There may even be some undocumented API that publishes the "memory@node" location that the other APIs use, buried someplace that could be exposed as part of the NUMA API ?
- As you stated above, the information is indeed available. If I interpret the code for The Numa Explorer correctly it is possible to get the information through the process status API (PSAPI), but that is a very cumbersome process.
Where could I file such an API feature request?
- Indeed QueryWorkingSetEx is the way to go for now.
I got the following response from microsoft after sending an API request:
QueryWorkingSetEx can be used for this. There is an example here:
Note that physical pages in a given virtual buffer are not necessarily allocated from the same NUMA node. In most cases, checking only the first page in the buffer should work, but there might be situations where the first page is allocated from node X and the rest of the pages are from node Y (even if all nodes in the system have plenty of available pages). This could happen for example if the contents of first page are initialized by one thread, and the rest of the pages are initialized from a different thread whose ideal processor is part of a different node.
If they don't control how the buffer is allocated and initialized it might make sense to check several pages at random and select the node that
appears most often.
========
The example at the above link is "Allocating Memory from a NUMA Node." See the DumpNumaNodeInfo function for the QueryWorkingSetEx call.
I will pass your API request on to the NUMA product team; they are currently in the planning stage for the next version of Windows. But in the
meantime, I hope QueryWorkingSetEx works for your application so you don't have to wait. :-)
- Proposed as answer by Mohamed Ameen Ibrahim Friday, August 06, 2010 5:15 PM
- After a long delay I revisited this problem and I have come up with the code listed below to directly determine the NUMA Node from a pointer. It was actually much easier than I thought. It seems to work correctly, but beware that the pointer needs to point to initialized memory. If you like this function or have suggestions to improve it I would love to see a reply.
Mattijs
#define _WIN32_WINNT 0x0600
#include <windows.h>
#include <psapi.h>

int GetNumaNodeFromAdress (PVOID Buffer)
{
    //PCHAR StartPtr = (PCHAR)(Buffer);
    PSAPI_WORKING_SET_EX_INFORMATION WsInfo;
    WsInfo.VirtualAddress = Buffer;
    BOOL bResult = QueryWorkingSetEx(
        GetCurrentProcess(),
        &WsInfo,
        sizeof(PSAPI_WORKING_SET_EX_INFORMATION)
        );
    if (!bResult)
        return -2;
    PCHAR Address = (PCHAR)WsInfo.VirtualAddress;
    BOOL IsValid = WsInfo.VirtualAttributes.Valid;
    DWORD Node = WsInfo.VirtualAttributes.Node;
    if (IsValid)
        return Node;
    else
        return -1;
}
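Following the product team's suggestion above, here is a rough sketch that samples one address per page and tallies the nodes; buffer, length, and the choice of 64 as a node bound are assumptions:
SYSTEM_INFO si;
GetSystemInfo(&si);
int counts[64] = {0}; /* per-node tally; 64 is an arbitrary upper bound */
for (SIZE_T off = 0; off < length; off += si.dwPageSize)
{
    int node = GetNumaNodeFromAdress((PCHAR)buffer + off);
    if (node >= 0 && node < 64)
        counts[node]++;
}
/* pick the node with the highest count as the buffer's likely home node */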
|
https://social.msdn.microsoft.com/Forums/vstudio/en-US/37a02e17-e160-48d9-8625-871ff6b21f72/get-numa-node-from-pointer-adress?forum=parallelcppnative
|
CC-MAIN-2016-22
|
en
|
refinedweb
|
I am migrating an SL 4 project to SL 5 and I got this build error:
The tag 'TimeUpDown' does not exist in XML namespace 'clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Input.Toolkit'
So where did it go to?
thanks for your help.
did you install SL 5 Toolkit ?
Nope. I thought I did. thanks!
|
https://social.msdn.microsoft.com/Forums/silverlight/en-US/9b93e595-d0c5-46b9-a150-581a03f8a6f5/where-is-timeupdown-in-sl-5?forum=silverlightgen
|
CC-MAIN-2016-22
|
en
|
refinedweb
|
Submission only takes namespaces from specific elements: instance, model and main document node. So any namespaces declared in between are not included.
It must be possible to ask the document for all namespaces visible at a certain node instead of the specific copying we are doing now.
Created attachment 215118 [details]
Testcase
"It must be possible to ask the document for all namespaces visible at a certain
node instead of the specific copying we are doing now."
When I wrote the code, I couldn't figure out how to do that. smaug might know more :)
It seems that transformiix does not keep namespace declarations made at root level; it only dumps namespaces which are actually used in the resulting XML.
That is a problem when trying to transform schemas, as namespaces which are declared sometimes refer to element declarations like <xsd:element.
The namespace foo is not used in the markup of the schemas, but it has to be kept in the resulting XML.
As far as I know, the attachment does not even describe the actual behaviour. Other XSLT processors keep the namespaces declared at root level.
RIP xforms
|
https://bugzilla.mozilla.org/show_bug.cgi?id=330557
|
CC-MAIN-2016-22
|
en
|
refinedweb
|
glibmm: Glib::Dispatcher Class Reference
Signal class for inter-thread communication. More...
#include <glibmm/dispatcher.h>
Detailed Description
Signal class for inter-thread communication. Notes about usage:
- Only one thread may connect to the signal and receive notification, but multiple senders are allowed even without locking.
- The GLib main loop must run in the receiving thread (this will be the GUI thread usually).
- The Dispatcher object must be instantiated by the receiver thread.
- The Dispatcher object should be instantiated before creating any of the sender threads, if you want to avoid extra locking.
- The Dispatcher object must be deleted by the receiver thread.
- All Dispatcher objects instantiated by the same receiver thread must use the same main context.
Notes about performance:
- After instantiation, Glib::Dispatcher will never lock any mutexes on its own. The interaction with the GLib main loop might involve locking on the receiver side. The sender side, however, is guaranteed not to lock, except for internal locking in the
write() system call.
- All Dispatcher instances of a receiver thread share the same pipe. That is, if you use Glib::Dispatcher only to notify the GUI thread, only one pipe is created no matter how many Dispatcher objects you have.
Using Glib::Dispatcher on Windows:
Glib::Dispatcher also works on win32-based systems. Unfortunately though, the implementation cannot use a pipe on win32 and therefore does have to lock a mutex on emission, too. However, the impact on performance is likely minor and the notification still happens asynchronously. Apart from the additional lock the behavior matches the Unix implementation.
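A minimal usage sketch; everything except the Dispatcher calls themselves (the window class and slot name) is illustrative:
// receiver (GUI) thread: create the Dispatcher and connect a slot
Glib::Dispatcher signal_done;
signal_done.connect(sigc::mem_fun(*this, &MyWindow::on_work_done));

// sender (worker) thread: notify the receiver asynchronously
signal_done.emit(); // on_work_done() later runs in the GUI thread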
- Examples:
- thread/dispatcher.cc.
Constructor & Destructor Documentation
Create new Dispatcher instance using the default main context.
Create new Dispatcher instance using an arbitrary main context.
|
https://developer.gnome.org/glibmm/unstable/classGlib_1_1Dispatcher.html
|
CC-MAIN-2016-22
|
en
|
refinedweb
|
How to realize the idea of live data in a Grid
Robbie Cheng,
Engineer, Potix Corporation
March 19, 2007
Version
Applicable to ZK Freshly after March 12, 2007
Introduction
In the past, only Listbox supported "live data", since ZK's version 1.0. It's encouraging news that Grid also supports "live data" from now on. Both Grid and Listbox are used to display data in the form of a table; however, there exists one major difference in their purposes. Since the purpose of Listbox is especially for the scenario of selectable data, Grid is more suitable for the case of table-like presentation data. Thus, developers could adopt the appropriate choice depending on different circumstances.
The idea of "live data" is to load data into the component only when necessary, instead of loading all of it at the first time. To fulfill this requirement, it requires implementations of two interfaces, including org.zkoss.zul.ListModel and org.zkoss.zul.RowRenderer. RowRenderer is responsible for rendering data stored in the ListModel to the specified row in the Grid. In the following paragraphs, the usage and demonstration of "live data" in Grid will be introduced.
Single-Column Example
In the following example, an array of data ("data") is prepared; it is passed as a parameter for the generation of a ListModel ("strset"). Last, assign this ListModel into a Grid by setting the model of the Grid.
<window title="Live grid" border="normal">
  <zscript>
    String[] data = new String[30];
    for(int j=0; j < data.length; ++j) {
      data[j] = "option "+j;
    }
    ListModel strset = new SimpleListModel(data);
  </zscript>
  <grid width="100px" height="100px" model="${strset}">
    <columns>
      <column label="options"/>
    </columns>
  </grid>
</window>
Two-Columns Example
Since the default RowRenderer only satisfies the single-column scenario, a customized RowRenderer must be implemented to handle the multi-column scenario. In the following example, two-column live data for a Grid is demonstrated.
1. Prepare the Data Model
The first step is to prepare the required data. A two-way array is adopted in this example, and it is passed as a parameter for the generation of a ListModel, which is implemented by SimpleListModel.
2. Implement the RowRenderer
In addition to the ListModel, a RowRenderer now has to be implemented, with the method render(Row row, java.lang.Object data):
row - The row to render the result.
data - Returned from ListModel.getElementAt()
In addition to passing the required parameters into the method render(), the form of view (UI Component) for displaying data in the Grid also has to be defined, and any component which is supported by Grid could be adopted. In this example, Label is adopted, and the last step is to render the Label into the specified row in the Grid by calling Label.setParent(Row row).
rowRenderer.java
//define the RowRenderer class
public class rowRenderer implements RowRenderer {
    public void render(Row row, Object data) {
        String[] values = (String[]) data;
        new Label(values[0]).setParent(row);
        new Label(values[1]).setParent(row);
    }
}
<window border="normal">
  <grid id="mygrid" height="100px" width="400px">
    <columns>
      <column label="key"/>
      <column label="value"/>
    </columns>
  </grid>
  <zscript><![CDATA[
    //prepare the data model
    String[][] model = new String[100][2];
    for(int j=0; j < model.length; ++j) {
      model[j][0] = "key"+j;
      model[j][1] = "value"+j;
    }
    ListModel strset = new SimpleListModel(model);
    mygrid.setModel(strset);
    mygrid.setRowRenderer(new rowRenderer());
  ]]></zscript>
</window>
Example Code
Download the example code from here.
- Deploy the livedata.war file
- Visit.
More Interfaces to Implement
- For better control, your RowRenderer can also implement RowRendererExt.
- In addition, you could also implement RendererCtrl to render all rows for the same request while starting a transaction.
Download the example code here.
|
http://www.zkoss.org/smalltalks/livedata/livedataforgrid.dsp
|
crawl-002
|
en
|
refinedweb
|
There are many ways to communicate from one form to another in a Windows
Forms .NET application. However, when using MDI (Multiple Document Interface)
mode, you may often find that the wiring up of simple events may better
suit your needs. This short article illustrates one of the simplest techniques
for having a Child MDI form call back to its parent Container Form and
even how to pass an object instance (which could contain an entire class,
if you like) back to the Parent.
Fire up Visual Studio .NET and create a new Windows Application project.
In the Properties window with your Default Form1 selected, change the
IsMDIContainer property to "true". Add a Panel at the top, and add a
button and a label to it, with the Panel taking up, say, the top 20%
of the form's designer surface.
Now add a new form to the project.
Let's call it MDIChild. On the Kid form, add a button which we will
use to call the event to the Daddy form and also close the Kid.
At this point, your Daddy form should look something like this:
Now let's wire up our event and event handlers, starting with the Kid:
First we need a custom EventArgs derived class so we can pass the
information we need:
using System;
namespace MDIEvents  // namespace name here is illustrative
{
    // custom EventArgs class that can carry any object back to the parent
    public class DaddyEventArgs : EventArgs
    {
        private object _payload;
        public DaddyEventArgs(object payload)
        {
            _payload = payload;
        }
        public object Payload
        {
            get { return _payload; }
        }
    }
}
In our Codebehind for the Kid form, let's add our event and event handler,
and make the call in our Button1_Click event:
// declare the EventHandler
public event EventHandler NotifyDaddy;

private void button1_Click(object sender, System.EventArgs e)
{
    // pass some text as the sender and, optionally, any object
    // in the custom EventArgs
    if (NotifyDaddy != null)
        NotifyDaddy("Howdy Pop", new DaddyEventArgs(this));
    this.Close();
}
Note that we are passing "Howdy Pop" in the
EventArgs parameter. This can really be any business
logic you want. If your Kid form created a DataSet that you needed for
the Daddy to receive, you would plug it in here. All Daddy would need
to know is that he is expecting a DataSet from the Kid in the EventArgs
parameter.
Now, in our Daddy Form, we are going to wire up everything
we need to show the kid and also to receive his event message:
private void button1_Click(object sender, System.EventArgs
e)
{
// declare and instantiate the Kid
MDIChild chForm = new MDIChild();
//set parent form for the child window
chForm.MdiParent=this;
// make it fill the container
chForm.WindowState=FormWindowState.Maximized;
// add the event handler we wired up
chForm.NotifyDaddy+=new EventHandler(chForm_NotifyDaddy);
// show the Kid
chForm.Show();
}
private void chForm_NotifyDaddy(object sender, System.EventArgs e)
{
    // display the message sent by the Kid
    label1.Text = "The Kid says: " + sender.ToString();
}
Note that we create chForm.NotifyDaddy and add chForm_NotifyDaddy
to the Delegates. Then in the handler itself, we write out the message
including the value of the sender (in this case it is just some text).
Your result when the Kid form is closed should look something like this:
And that's all it takes to teach all your kids to call home to Daddy!
The full solution is ready to go at the link below.
Download the Source Code that accompanies this article
|
http://www.eggheadcafe.com/articles/20040229.asp
|
crawl-002
|
en
|
refinedweb
|
I have a function which makes use of memory on the heap and it will go badly wrong if it is called before another instance of the same function has completed.
How can I prevent this from happening at compile time?
Detecting recursion with any amount of determinism at compile time is going to be quite difficult. Some static code analysis tools might be able to do it, but even then you can get into run-time scenarios involving threads that code analyzers won't be able to detect.
You need to detect recursion at run-time. Fundamentally, it's very simple to do this:
bool MyFnSimple()
{
    static bool entered = false;
    if( entered )
    {
        cout << "Re-entered function!" << endl;
        return false;
    }
    entered = true;
    // ...
    entered = false;
    return true;
}
The biggest problem with this, of course, is it is not thread safe. There are a couple of ways to make it thread safe, the simplest being to use a critical section and block the second entry until the first has left. Windows code (no error handling included):
bool MyFnCritSecBlocking()
{
    static HANDLE cs = CreateMutex(0, 0, 0);
    WaitForSingleObject(cs, INFINITE);
    // ... do stuff
    ReleaseMutex(cs);
    return true;
}
If you want the function to return an error when a function has been reentered, you can first test the critsec before grabbing it:
bool MyFnCritSecNonBlocking()
{
    static HANDLE cs = CreateMutex(0, 0, 0);
    DWORD ret = WaitForSingleObject(cs, 0);
    if( WAIT_TIMEOUT == ret )
        return false; // someone's already in here
    // ... do stuff
    ReleaseMutex(cs);
    return true;
}
There are probably an infinite number of ways to skin this cat other than the use of static bools and critsecs. One that comes to mind is a combination of testing a local value with one of the Interlocked functions in Windows:
bool MyFnInterlocked()
{
    static LONG volatile entered = 0;
    LONG ret = InterlockedCompareExchange(&entered, 1, 0);
    if( ret == 1 )
        return false; // someone's already in here
    // ... do stuff
    InterlockedExchange(&entered, 0);
    return true;
}
And, of course, you have to think about exception safety and deadlocks. You don't want a failure in your function to leave it un-enterable by any code. You can wrap any of the constructs above in RAII in order to ensure the release of a lock when an exception or early exit occurs in your function.
After reading comments I realized I could have included code that illustrates how to implement an RAII solution, since any real code you write is going to use RAII to handle errors. Here is a simple RAII implementation that also illustrates what happens at runtime when things go wrong:
#include <windows.h>
#include <cstdlib>
#include <stdexcept>
#include <iostream>

class CritSecLock
{
public:
    CritSecLock(HANDLE cs) : cs_(cs)
    {
        DWORD ret = WaitForSingleObject(cs_, INFINITE);
        if( ret != WAIT_OBJECT_0 )
            throw std::runtime_error("Unable To Acquire Mutex");
        std::cout << "Locked" << std::endl;
    }
    ~CritSecLock()
    {
        std::cout << "Unlocked" << std::endl;
        ReleaseMutex(cs_);
    }
private:
    HANDLE cs_;
};

bool MyFnPrimitiveRAII()
{
    static HANDLE cs = CreateMutex(0, 0, 0);
    try
    {
        CritSecLock lock(cs);
        // ... do stuff
        throw std::runtime_error("kerflewy!");
        return true;
    }
    catch(...)
    {
        // something went wrong
        // either with the CritSecLock instantiation
        // or with the 'do stuff' code
        std::cout << "ErrorDetected" << std::endl;
        return false;
    }
}

int main()
{
    MyFnPrimitiveRAII();
    return 0;
}
|
https://codedump.io/share/X4Ern1wjv8PS/1/in-c-how-can-i-prevent-a-function-from-being-called-recursively
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
I'm not sure I understand. 1. The default for JSON strings seems to be UTF-8. 2. If a JSON string uses an encoding other than UTF-8, the entire string should be transcoded. This needs to be done when the data is retrieved. For example, by passing an encoding parameter to file:read-text.
-- You received this bug notification because you are a member of Zorba Coders, which is the registrant for Zorba.
Title: data-converter module problems with non utf-8 characters
Status in Zorba - The XQuery Processor: Incomplete
Bug description:
In public Json streams lots of non-utf8 character escapes can be found, causing some problems when parsing json or tidying the contained html (as, for example, the fisted-hand-sign).
The following example Query causes a whole bunch of problems:
import module namespace json = "";
import module namespace html = "";
declare namespace j = "";
let $text := "<p>" || json:parse("{""text"":""Let's get it. \ud83d\udc4a""}")/j:pair[@name="text"]/text() || "</p>"
return html:parse($text)
Problems:
1. html:parse() has return type document-node(), but tries to return an empty-sequence in this example (discovered by ghislain)
* --> moved to bug #1025194 *
2. in file src/com/zorba-xquery/www/modules/converters/html.xq.src/tidy_wrapper.h the function createHtmlItem(...) doesn't throw a proper error message (discovered by ghislain), which makes debugging really hard. In contrast, parse-xml throws a very helpful error:
dynamic error [err:FODC0006]: invalid content passed to fn:parse-xml(): loader parsing error: Char 0xD83D out of allowed range;
Could html:parse report the same error?
* --> moved to bug #1025193 *
3. json:parse() doesn't report an error here, which is good in my opinion. Yet, as these utf-16 (?) encoded characters are used a lot in json, would it be possible to transform them into valid utf-8 (e.g. \ud83d\udc4a -> 👊)? Maybe these findings are going to be a problem in Jsoniq as well?
To manage notifications about this bug go to:
|
https://www.mail-archive.com/zorba-coders@lists.launchpad.net/msg12076.html
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
Java.io.RandomAccessFile.readShort() Method
Description
The java.io.RandomAccessFile.readShort() method reads a signed 16-bit number from this file. The method reads two bytes from this file, starting at the current file pointer.
Declaration
Following is the declaration for java.io.RandomAccessFile.readShort() method.
public final short readShort()
Parameters
NA
Return Value
This method returns the next two bytes of this file, interpreted as a signed 16-bit number.
Exception
IOException -- if an I/O error occurs.
EOFException -- if this file reaches the end before reading two bytes.
Example
The following example shows the usage of java.io.RandomAccessFile.readShort() method.
package com.tutorialspoint;

import java.io.*;

public class RandomAccessFileDemo {
   public static void main(String[] args) {
      try {
         short s = 15000;

         // create a new RandomAccessFile with filename test
         RandomAccessFile raf = new RandomAccessFile("c:/test.txt", "rw");

         // write something in the file
         raf.writeShort(s);

         // set the file pointer at 0 position
         raf.seek(0);

         // print the short
         System.out.println("" + raf.readShort());

         // set the file pointer at 0 position
         raf.seek(0);

         // write something in the file
         raf.writeShort(134);

         // set the file pointer at 0 position
         raf.seek(0);

         // print the short
         System.out.println("" + raf.readShort());
      } catch (IOException ex) {
         ex.printStackTrace();
      }
   }
}
Assuming we have a text file c:/test.txt with the following content, which will be used by our example program:
ABCDE
Let us compile and run the above program, this will produce the following result:
15000 134
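Internally, readShort() combines the two bytes high-byte first; a sketch of the equivalent computation (the variable names are illustrative):
int b1 = raf.read();   // high byte, 0..255
int b2 = raf.read();   // low byte, 0..255
short value = (short)((b1 << 8) | (b2 & 0xFF));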
|
http://www.tutorialspoint.com/java/io/randomaccessfile_readshort.htm
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
Background Worker Threads
IronPython & Windows Forms, Part IX
Note
This is part of a series of tutorials on using IronPython with Windows Forms.
Follow my exploration of living a spiritual life and finding the kingdom at Unpolished Musings.
The BackgroundWorker Class
Recently we tried to add an 'activity indicator' (throbber) to a long running process in our Windows Forms application. Unfortunately we ran into difficulties.
The main problem was that we were using the wrong event to detect when a control lost the focus. You might think that LostFocus was the obvious choice [1]. In fact this is a low level event only used when updating UICues. The correct event to use is Leave.
LostFocus is raised when the user clicks on the exit button, but Leave
isn't. We spent part of today fixing all the places we used GotFocus
and LostFocus and replacing them with Enter
and Leave. Luckily it wasn't too many.
Using the BackgroundWorker, suggested by Andriy in a comment, the code is quite nice [2].
You provide the BackgroundWorker with your long running process as an event handler. It has a method to detect if one is already running, and raises an event when it has finished.
A common idiom in our code is to have our own event hooks. Rather than tightly coupling our objects together, they can raise events.
An approximation of the code structure we used is shown below. This is also a good [3] example of how to use the BackgroundWorker.
This code shows an event hook class [4], which provide the LongRunningStart and LongRunningEnd events which enable and disable the activity indicator: the throbber.
This is automatically triggered when the textbox Leave event is raised. (But I've omitted all the boiler-plate in setting up the form and textbox of course.)
clr.AddReference('System')
clr.AddReference('System.Windows.Forms')

from System.ComponentModel import BackgroundWorker
from System.Windows.Forms import Form, TextBox

class EventHook(object):
    def __init__(self):
        self.__handlers = []

    def __iadd__(self, handler):
        self.__handlers.append(handler)
        return self

    def __isub__(self, handler):
        self.__handlers.remove(handler)
        return self

    def fire(self, *args, **keywargs):
        for handler in self.__handlers:
            handler(*args, **keywargs)

class LongRunning(object):
    def __init__(self):
        self._worker = BackgroundWorker()
        self.LongRunningStart = EventHook()
        self.LongRunningEnd = EventHook()
        self._worker.DoWork += lambda _, __: self.__longRunningProcess()
        self._worker.RunWorkerCompleted += lambda _, __: self.LongRunningEnd.fire()

    def LongRunningProcess(self):
        # This can be called directly if you need a
        # synchronous call as well.
        # The long running process will block the GUI from
        # updating though.
        self.LongRunningStart.fire()
        self.__longRunningProcess()
        self.LongRunningEnd.fire()

    def LongRunningProcessAsync(self):
        # Just drop out if one is already running
        if not self._worker.IsBusy:
            # This starts __longRunningProcess on a background thread
            self.LongRunningStart.fire()
            self._worker.RunWorkerAsync()

    def __longRunningProcess(self):
        # Do *lots* of stuff :-)
        pass

class MainForm(Form):
    def __init__(self):
        self.longRunning = LongRunning()
        self.longRunning.LongRunningStart += self.enableThrobber
        self.longRunning.LongRunningEnd += self.disableThrobber
        self.textBox = TextBox()
        self.Controls.Add(self.textBox)
        self.textBox.Leave += lambda _, __: self.longRunning.LongRunningProcessAsync()

    def enableThrobber(self):
        # do something
        pass

    def disableThrobber(self):
        # do something
        pass
To check if the BackgroundWorker is in the middle of running, we use the IsBusy Property.
To tell it what to do when started, we add our long running process to the DoWork Event; this is kicked off on a separate thread, so be careful!
This is actually launched by the RunWorkerAsync Method.
When our process (bad choice of word, hey) has finished, the RunWorkerCompleted Event is raised.
Notice that if we wrap our .NET event handlers in a lambda we don't need the sender and event arguments.
lambda _, __: self.LongRunningEnd.fire()
You can use the event argument sent to DoWork handlers to pass arguments when RunWorkerAsync is called, but this isn't shown.
Note
Thanks to Davy Mitchell for pointing out a couple of errors in the code example which have now been fixed.
Last edited Fri Nov 27 18:32:35 2009.
|
http://www.voidspace.org.uk/ironpython/winforms/part9.shtml
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
Traceback (most recent call last):
File "/tmp/Twisted/bin/trial", line 23, in <module>
from twisted.scripts.trial import run
File "/tmp/Twisted/twisted/scripts/trial.py", line 10, in <module>
from twisted.application import app
File "/tmp/Twisted/twisted/application/app.py", line 10, in <module>
from twisted.application import service
File "/tmp/Twisted/twisted/application/service.py", line 20, in <module>
from twisted.python import components
File "/tmp/Twisted/twisted/python/components.py", line 37, in <module>
from zope.interface.adapter import AdapterRegistry
File "/tmp/python-buildbot/local/lib/python2.6/site-packages/zope/interface/adapter.py", line 201
for with, objects in v.iteritems():
^
SyntaxError: invalid syntax
It would be great if the Zope folks fixed their code so that the Twisted tests will start passing again. This issue actually impacts all packages that depend on zope.interface -- for example zanshin, which also fails in the Pybots buildslave for the OSAF libraries (graciously set up by Bear from OSAF).
If you're interested in following such issues as they arise in the Pybots buildbot farm, I encourage you to subscribe to the Pybots mailing list.
Update
I mentioned the 'with' keyword already causing problems. As it turns out, Seo Sanghyeon's buildslave, which is running tests for docutils and roundup, uncovered an issue in roundup, related to the 'as' keyword:
Traceback (most recent call last):
File "run_tests.py", line 889, in
process_args()
File "run_tests.py", line 879, in process_args
bad = main(module_filter, test_filter, libdir)
File "run_tests.py", line 671, in main
runner(files, test_filter, debug)
File "run_tests.py", line 585, in runner
s = get_suite(file)
File "run_tests.py", line 497, in get_suite
mod = package_import(modname)
File "run_tests.py", line 489, in package_import
mod = __import__(modname)
File "/home/buildslave/pybots/roundup/./test/test_actions.py", line 6, in
from roundup import hyperdb
File "/home/buildslave/pybots/roundup/roundup/hyperdb.py", line 29, in
import date, password
File "/home/buildslave/pybots/roundup/roundup/date.py", line 735
as = a[0]
^
SyntaxError: invalid syntax
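A two-line illustration of why such code stopped compiling: both words were ordinary identifiers in Python 2.5 but are reserved keywords from 2.6 on.
with = {}      # SyntaxError under Python 2.6: 'with' is now a keyword
as = "alias"   # SyntaxError under Python 2.6: 'as' is now a keyword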
8 comments:
This was "fixed" about 4 months ago in the zope.interface trunk. I don't know if any new releases have been made since then, though.
Cool, thanks for the heads up. Maybe it's time for a new release :-)
Grig, thanks again for setting this up! This is exactly the kind of issue I was hoping would be uncovered.
Cool! I'm excited to see that Pybots is starting to actually uncover issues :-)
I think you meant Python 2.5, not 2.6.
No, I meant Python 2.6. The checkin was made in the Python trunk, which is 2.6. The 2.5 branch is tested against separately.
See:
vs.
Great blog - I'v really enjoyed your articles!
You may want to check on your RSS feed, is producing an error... :-(
Anonymous -- thanks for the tip on atom.xml. I fixed it, so it should work now.
|
http://agiletesting.blogspot.com/2006/09/pay-attention-to-new-with-and-as.html
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
I'm trying to develop a simple database web application in NetBeans, but I'm getting a problem at the database connection: my database is not being connected for some reason. The username, password and hostname are correct, and MySQL Workbench is registered with my NetBeans as well. Is there something wrong with this code?
public class test {
    public static void main(String args[]) {
        Connection conn = null;
        try {
            conn = DriverManager.getConnection("jdbc:mysql://localhost:3306/crud_news", "root", "zunairaabira");
            if (conn != null) {
                System.out.println("successful");
            }
        } catch (Exception e) {
            System.out.println("notconnected");
        }
    }
}
You need to load an instance of the MySQL driver too. Add
Class.forName("com.mysql.jdbc.Driver").newInstance();
before your conn= statement.
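Putting it together, a sketch of the corrected try block (the connection details are copied from the question); also make sure the MySQL Connector/J jar is on the project's classpath:
try {
    Class.forName("com.mysql.jdbc.Driver").newInstance();
    conn = DriverManager.getConnection(
            "jdbc:mysql://localhost:3306/crud_news", "root", "zunairaabira");
    if (conn != null) {
        System.out.println("successful");
    }
} catch (Exception e) {
    System.out.println("notconnected");
}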
|
https://codedump.io/share/74NYruaKs0ID/1/mysql-database-connection-in-java-netbeans
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
Red Hat Bugzilla – Bug 132850
add nscd support for initgroups()
Last modified: 2007-11-30 17:07:04 EST
Description of problem:
When you set "group: files ldap" in "/etc/nsswitch.conf" and you have
a statically built application, a call to "initgroups()" causes a
segmentation fault.
Version-Release number of selected component (if applicable):
glibc-2.3.2-95.20
How reproducible:
Steps to Reproduce:
1. Set "group: files ldap" in "/etc/nsswitch.conf"
2. Use the following reproducer program. The user is "mysql", but you
can choose another.
#include <stdio.h>
#include <grp.h>
#include <pwd.h>
#include <errno.h>
main()
{
struct passwd *pw_ptr;
char *user = "mysql";
pw_ptr = getpwnam(user);
printf("pw_ptr->pw_gid = %d\n", pw_ptr->pw_gid);
initgroups((char*) user, pw_ptr->pw_gid);
}
3. Compile with "cc filename.c -static"
4. Run "a.out".
Actual results:
# ./a.out
pw_ptr->pw_gid = 101
Segmentation fault
Expected results:
# ./a.out
pw_ptr->pw_gid = 101
Additional info:
This only happens when compiling with "-static".
*** Bug 133116 has been marked as a duplicate of this bug. ***
I have further analyzed the problem and have determined the exact
cause of the problem. I am hoping the RedHat could provide a fix for
this problem now that the cause of the problem is understood. The
details are below.
Problem: Nested "dlopen()" calls from a statically built application
will cause a segmentation fault.
Example: A statically built application a.out does a dlopen() of
libfoo1.so. In turn, libfoo1.so does a dlopen() of libfoo2.so. The
second dlopen(), which is libfoo2.so, will cause a segmentation fault.
Cause: The segmentation fault occurs in the dynamic loader ld.so in
the function _dl_catch_error() [elf/dl-error.c] due to an
uninitialized function pointer GL(dl_error_catch_tsd) which, after
macro expansion, is really _rtld_local._dl_error_catch_tsd
[sysdeps/generic/ldsodefs.h]. Thus, the question becomes, why isn't
GL(dl_error_catch_tsd) being initialized during the second dlopen()?
Keep in mind that I'm picking on GL(dl_error_catch_tsd) because that
is where the segmentation fault occurred. There are likely other
variables in the _rtld_local structure that may be uninitialized as well.
An explanation follows for both the statically built case, which
crashes, and the dynamically built case, which works.
Application Built Statically (segmentation fault)
-------------------------------------------------
For libc.a, the GL(dl_error_catch_tsd) macro expands to the variable
shown below [elf/dl-tsd.c]
# ifndef SHARED
...
void **(*_dl_error_catch_tsd) (void) __attribute__ ((const)) =
&_dl_initial_error_catch_tsd;
...
#endif
Thus, libc.a has an initialized copy of _dl_error_catch_tsd which
points to the _dl_initial_error_catch_tsd routine.
# nm -A /usr/lib64/libc.a | grep error_catch_tsd
/usr/lib64/libc.a:dl-error.o: U _dl_error_catch_tsd
/usr/lib64/libc.a:dl-tsd.o:0000000000000000 D _dl_error_catch_tsd
/usr/lib64/libc.a:dl-tsd.o:0000000000000000 T
_dl_initial_error_catch_tsd
Also in libc.a, the _dl_catch_error function is defined, which is the
routine in which the segmentation fault occurs.
# nm -A /usr/lib64/libc.a | grep dl_catch_error
/usr/lib64/libc.a:dl-deps.o: U _dl_catch_error
/usr/lib64/libc.a:dl-error.o:0000000000000000 T _dl_catch_error
/usr/lib64/libc.a:dl-open.o: U _dl_catch_error
/usr/lib64/libc.a:dl-libc.o: U _dl_catch_error
For libc.so, none of the symbols mentioned above are defined.
The a.out has the symbols because it was compiled with libc.a.
Thus, the first call to dlopen( libfoo1.so ) resolves its symbols
from the a.out address space. That is, it calls the _dl_catch_error
routine in the a.out address space which, in turn, accesses the
_dl_error_catch_tsd function pointer in the a.out address space which
was initialized with the address of the _dl_initial_error_catch_tsd
routine, which also exists in the a.out address space.
By the way, the reason I know what address space things are coming
from is because I put "_dl_printf" statements in the "glibc" sources
and compared the addresses that were printed at runtime with the
addresses shown in "/proc/<pid>/maps".
The second call to dlopen( libfoo2.so ) tries to resolve its symbols
from the ld.so (loader) address space.
Before I continue, let me say a few words about ld.so. During the
compilation of the loader, the GL(dl_error_catch_tsd) macro expands
to _rtld_local._dl_error_catch_tsd [sysdeps/generic/ldsodefs.h], a
totally different variable than the one in libc.a. That is, GL
(dl_error_catch_tsd) expands to a different variable in libc.a than
ld.so as can be seen by the code snippet shown below
from "sysdeps/generic/ldsodefs.h"
#ifndef SHARED
# define EXTERN extern
# define GL(name) _##name
#else
# define EXTERN
# ifdef IS_IN_rtld
# define GL(name) _rtld_local._##name
# else
# define GL(name) _rtld_global._##name
# endif
As you can see, during the compilation of libc.a, which is NOT
SHARED, GL(dl_error_catch_tsd) becomes _dl_error_catch_tsd. In the
compilation of ld.so, GL(dl_error_catch_tsd) expands to
_rtld_local._dl_error_catch_tsd. The reason I mention this is
because we can't even think about using libc.a's object because they
are completely different.
Anyway, back to the second call to dlopen( libfoo2.so ). This is
going to call the _dl_catch_error routine in the ld.so's address
space. The problem is that, for the loader, GL(dl_error_catch_tsd)
gets initialized in dl_main [elf/rtld.c], but dl_main only gets
called for shared applications, not during a dlopen. Therefore, GL
(dl_error_catch_tsd) never gets initialized and, when it is
referenced in _dl_catch_error [elf/dl-error.c], it contains a value
of "0" (a NULL pointer), which causes a segmentation fault.
So, why does the first dlopen( libfoo1.so ) execute routines in the
a.out, while the second dlopen( libfoo2.so ) execute routines in
ld.so?
The reason is that when the a.out calls dlopen() it uses the dlopen
statically linked in from libdl.a . When the first library calls
dlopen() it get resolved to the one in the pulled-in libdl.so.
That's because the a.out does NOT have a ** dynamic symbol table **
(separate from externals and debug symbols) so the first library
can't hook back to the dlopen() in the a.out. Thus it must use the
one pulled in from libdl.so.
Application Built With Shared Libraries (works)
-----------------------------------------------
In the case where the a.out is built with shared libraries, the
ld.so's (loader) dl_main [elf/rtld.c] routine is called which will
initialize GL(dl_error_catch_tsd), so we don't get a segmentation
fault since the variable is properly initialized.
Conclusion
----------
One possible fix would be to put a check in either _dl_catch_error
[elf/dl-error.c] or dlerror_run [elf/dl-libc.c] to see if we are in
the loader code and if dl_main has NOT been called. If we are in the
loader code and dl_main has not been called, then we need to
initialize GL(dl_error_catch_tsd) and other needed variables so that
we don't get a segmentation fault due to uninitialized variables.
I will be adding a small reproducer for this problem shortly.
Rigoberto Corujo
Created attachment 104377 [details]
Reproducer for the problem where nested dlopen()'s cause segmentation fault
Untar this file and compile with the "compile.sh" script.
Set LD_LIBRARY_PATH to your working directory.
Run the "a.out"
dlopen support in statically linked apps is very limited, not meant
to be general purpose library loader for any kind of libraries.
Its role is just to support NSS modules (built against the same
libc as later run on).
dlopen from within the dlopened libraries is definitely not supported.
If libnss_ldap.so.* calls dlopen, then the bug is in that library.
For NSS purposes there is _dl_open_hook through which libraries
that call __libc_dlopen/__libc_dlsym/__libc_dlclose can use the
loader in the statically linked binary.
Using any NSS functionality in statically linked applications is only
supportable if nscd is used. Without nscd you are on your own. We
will not and *can not* handle anything else.
I don't think it makes any sense to keep this bug open. It is an
installation problem if nscd is not running.
Ulrich,
Are you saying that "service nscd start" would prevent the
segmentation fault from occurring? I just tried that with the initial
reproducer that I provided (the one that calls initgroups()) and I
get the same results (segmentation fault). Have you guys been
successful in running my reproducer with nscd?
As a follow-up to Jakub's comment, I just want to add that it is
actually "libsasl.a" that is doing the dlopen().
The "libnss_ldap.so" library links against "libldap.a".
The "libldap.a" links against "libsasl.a".
If the solution to this problem is to run nscd, then so be it. But,
there must be more to it than that because, like I said before, I
don't see a difference. I need some clarification, because I
understood Jakub to mean that what was going on was illegal but
Ulrich seems to suggest that this should work as long as nscd is
running.
Also, if dlopen'ing a shared library from a dlopen'ed library is not
allowed, then it would be beneficial to put a check in "glibc" so
that an error is returned to the calling dlopen() rather than letting
a segmentation fault occur.
Rigoberto
> I just tried that with the initial
> reproducer that I provided (the one that calls initgroups()) and I
> get the same results (segmentation fault). Have you guys been
> successful in running my reproducer with nscd?
That is impossible unless the program cannot communicate with the nscd
and falls back on using NSS itself or you hit a different problem.
There has been at one point a change in the protocol but I don't think
there are any such binaries out there.
Run the program using strace and eventually start nscd by hand and add
-d -d -d (three -d) to the command line. It won't fork then and spit
out lots of information.
Ulrich,
I followed your instructions. Every time I run my "a.out" there is
output from "nscd", so there is communication going on. The
segmentation fault is still occurring.
Can you confirm that you have indeed run my reproducer that calls
initgroups() and have not had a segmentation fault?
The man page for "nscd" states that it is used to cache data. I'm
not sure why running this daemon would solve my problem?
Rigoberto
> Can you confirm that you have indeed run my reproducer that calls
> initgroups() and have not had a segmentation fault?
Which reproducer calls initgroups? There is only one attachment
and this is code which uses dlopen() for other purposes than NSS.
This is not supported. If it breaks, you keep the pieces.
Run your applications which uses NSS and make sure there are no other
dlopen calls in the statically linked code. Use strace to see what is
going on.
> The man page for "nscd" states that it is used to cache data. I'm
> not sure why running this daemon would solve my problem?
It's not the caching part which is interesting here, it's the "nscd
takes care of using the LDAP NSS module" part. All the statically
linked application has to do is to communicate the request via a
socket to nscd and receive the result. No NSS modules involved on the
client side. Which is why I say that if you still see NSS modules
used, something is wrong.
One possibility is that you use services other than passwd, group, or
hosts. Is this the case? These services are currently not supported
in nscd. There is usually no need for this since plain files are
enough (/etc/services etc don't change).
So, please make sure your code does not use dlopen() for anything but
NSS and that after starting nscd either it is used or only
libnss_files is used.
Ulrich,
Either I'm misunderstanding you, you're misunderstanding me, or we're
both misunderstanding each other. Please take a look at the very
first entry I made to this bugzilla. Would you please compile and
run the code as I described and then tell me whether you see the same
problem I'm seeing? This problem has nothing to do with any
application that I'm writing. The second reproducer, which I had
attached, was merely to show what is happening under the covers in an
easy to understand way. The first reproducer, which I embedded
directly into the text I entered, is at the heart of the problem.
Please take a look at that and then we can continue our discussion.
Rigoberto
Why don't you just attach the data I'm looking for? Yes, your code
uses initgroups and this cannot fail if nscd is used. Which is why I
ask for the strace output related to the initgroups call and the
actual crash.
Since I do not believe that you can continue to see the same crash
with and without nscd (unless there is something broken in nscd) I
also asked for other places you might use dlopen (explicitly or
implicitly).
So, run strace.
FWIW, with a FC3t2 system I have no problem using the LDAP NSS module
from the statically linked executable but this pure luck. Important
is that once nscd runs no NSS module is used.
Created attachment 104426 [details]
output of the strace with the statically built a.out
The LDAP database contains only one user "johndoe" as well as the group
"johndoe". Running the "id johndoe" command verifies that communications with
the slapd server is good. The "nscd -d -d -d" is also running. Communication
with it also appears to be good. I will attach the output of "ncsd -d -d -d"
shortly.
Created attachment 104427 [details]
output of the "nscd -d -d -d"
Comment on attachment 104427 [details]
output of the "nscd -d -d -d"
The "nscd -d -d -d" is started freshly. The "strace a.out" is immediately run.
The output of "nscd" is shown. The "a.out" is still getting a segmentation
fault.
I see what is going on. The initgroup calls do not try to use nscd at
all but instead use the NSS modules directly. This is fatal in this
situation.
We might be able to get some code changes into one of the next RHEL3
updates but there is not much we can do right now. Except questioning
why you have to link statically. This is nothing but disadvantages.
Ulrich,
I, like you, work for support. You work for RedHat support and I
work for HP support. Our XC (Extreme Clusters) product is based on
RedHat Linux. One of our customers had asked us to document how to
configure LDAP. While configuring LDAP, I found that "mysqld" did
not start when LDAP was configured. After further analysis, I found
that mysqld was linked statically and called initgroups(). To work
around the mysqld problem we simply used a non-static version of
mysqld. However, this was a concern to me because there may be other
packages, or customer written applications, which could potentially
run into this problem. So, I had to get to the bottom of the
situation and find out why statically built applications which called
initgroups() would seg fault. This has led to this conversation that
you and I have been having. As you can see, it is not I who is
developing statically linked applications, but I am concerned that
customers who do develop statically linked applications and turn on
LDAP may run into this problem.
At the very least, for the short term, that second dlopen() should
return an error and not seg fault. Maybe errno could be set to EPERM
(operation not permitted) or something along those lines.
So, we are leaving this as a "to be fixed in a future release",
correct?
Rigoberto
I'm reassigning this bug to glibc and marked it as an enhancement.
This is what it is, NSS simply isn't supported in statically linked
applications. The summary has been changed to reflect the status.
If you are entitled to support for these kind of issues you should
bring this issue up with your Red Hat representative so that it can be
added to IssueTracker. If you don't know what this is then you are
likely not entitled and you might want to consider getting appropriate
service agreements.
> At the very least, for the short term, that second dlopen() should
> return an error and not seg fault.
No, since there are situations when it works. NSS in statically
linked code is simply an "if it breaks you keep the pieces" thing, if
it works you can be very happy, if not, you'll have the find another
way. I cannot prevent people from having at least the opportunity to
get it to work.
> So, we are leaving this as a "to be fixed in a future release",
> correct?
Yes. I'll keep this bug open so that once we have code for this, I
can announce it. Whether we can use this in code in future RHEL3
updates is another issue.
I added support for caching initgroups data in the current upstream
glibc. Backporting the changes to RHEL3 is likely not going to happen
since the whole program changed dramatically since the fork of the
sources for RHEL3. If it is essential, contact your representative
for support from Red Hat. I close this bug since the improvement has
been implemented.
|
https://bugzilla.redhat.com/show_bug.cgi?id=132850
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
Catalyst::Plugin::DateTime - DateTime plugin for Catalyst.
SYNOPSIS
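A minimal sketch of typical usage, based on the METHODS section below (the plugin name and the datetime/dt methods are as documented; the surrounding lines are illustrative):

    use Catalyst qw/DateTime/;

    # later, inside a controller action: both calls return a DateTime object
    my $dt  = $c->datetime;
    my $dt2 = $c->dt;       # dt is an alias to datetime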
METHODS
- datetimeas a default.
- dt
Alias to datetime.
DESCRIPTION
This module's intention is to make the wonders of DateTime easily accessible within a Catalyst application via the Catalyst::Plugin interface.
It adds the methods datetime and dt to the Catalyst namespace.
AUTHOR
James Kiser james.kiser@gmail.com
SEE ALSO
COPYRIGHT & LICENSE
Copyright (c) 2006 the aforementioned author(s). All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
1 POD Error
The following errors were encountered while parsing the POD:
- Around line 11:
=pod directives shouldn't be over one line long! Ignoring all 2 lines of content
|
https://metacpan.org/pod/Catalyst::Plugin::DateTime
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
In this article we will see how to use Java interfaces correctly, with some examples. Interfaces in Java are often underused or wrongly used. This article demonstrates the practical uses of interfaces in Java. First of all, it is important to understand: what is an interface?
Let us create a “contract” that any implementing class shall obey. In Listing 1 we see how to create a simple interface in Java.
Listing 1 : My first interface
package net.javabeat;

public interface MyFirstInterface {

    /**
     * Methods that must be implemented by any
     * class that implements this interface
     */
    public void method1();

    public int method2();

    public String metodo3(String parameter1);
}
Note that the methods in this interface have no body; they only have a signature. Now we have a “contract” that must be followed by anyone who implements this interface. See in Listing 2 a class that implements the above interface.
Listing 2 : Implementing the Interface
package net.javabeat;

public class MyClass implements MyFirstInterface {

    @Override
    public void method1() {
        // TODO Auto-generated method stub
    }

    public int method2() {
        // TODO Auto-generated method stub
        return 0;
    }

    @Override
    public String metodo3(String parameter1) {
        // TODO Auto-generated method stub
        return null;
    }

    /**
     * @param args
     */
    public static void main(String[] args) {
        // TODO Auto-generated method stub
    }
}
When using the keyword “implements” in MyClass, you will see that the IDE (Eclipse, Netbeans, etc.) warns you to implement the methods declared in the interface.
Practical Uses of Interface
Having a basic knowledge of interfaces, let us now understand their use in a real project.
Interfaces are widely used in large projects to force programmers to follow the project's patterns: an interface is a contract, so a coder is obliged to implement its methods, and the class must always follow the standard set by the interface. Assume the following case: an interface BasicoDAO in our project contains the declarations of the methods that any DAO class should have; say every DAO class will have CRUD methods.
Listing 3 : Our DAO Interface
package net.javabeat;

import java.util.List;

public interface BasicoDAO {

    public void save(Object bean);

    public void update(Object bean);

    public void delete(int id);

    public Object GetById(int id);

    public List<Object> getAll();
}
Now let's say one of the developers working on the HR module wants to create a DAO to perform CRUD operations on employee objects. He will implement the above interface.
Listing 4 : Implemented Interface DAO
package net.javabeat;

import java.util.List;

public class EmployeeDAO implements BasicoDAO {

    @Override
    public void save(Object bean) {
        // TODO Auto-generated method stub
    }

    @Override
    public void update(Object bean) {
        // TODO Auto-generated method stub
    }

    @Override
    public void delete(int id) {
        // TODO Auto-generated method stub
    }

    @Override
    public Object GetById(int id) {
        // TODO Auto-generated method stub
        return null;
    }

    @Override
    public List<Object> getAll() {
        // TODO Auto-generated method stub
        return null;
    }

    // Extra method created and defined by the programmer
    public void SalaryCalculation() {
    }
}
We now have all the pieces; let's use them in our application. Suppose a new programmer (one who did not create the class EmployeeDAO) wants to insert a new employee. This new programmer has no idea how EmployeeDAO implements the interface, and he doesn't even have access to this class, but he knows something much more important: he can use the power of polymorphism and create a new EmployeeDAO object of type BasicoDAO. See Listing 5.
Listing 5 : Using polymorphism
package net.javabeat;

public class MyApp {

    /**
     * @param args
     */
    public static void main(String[] args) {
        // employee001 is an Employee bean instance (not shown in the article)
        BasicoDAO employeeDAO = new EmployeeDAO();
        employeeDAO.save(employee001);
    }
}
Note that we create the EmployeeDAO object as type BasicoDAO, so we can only call the methods of the interface BasicoDAO. Most important, the new programmer who uses the class EmployeeDAO can call the methods described in the interface. But what forces the programmer who created the class EmployeeDAO to implement BasicoDAO?
Few Java Basic Tutorials:
You can create EmployeeDAO without implementing the interface, but then the new programmer that uses this class will not achieve polymorphism, and you will see that the class is wrong as per the project standards. There is another way of knowing whether the class implements BasicoDAO; see Listing 6.
Listing 6 : Using the instanceof
package net.javabeat;

public class MyApp {

    /**
     * @param args
     */
    public static void main(String[] args) {
        EmployeeDAO employeeDAO = new EmployeeDAO();
        if (employeeDAO instanceof BasicoDAO)
            employeeDAO.save(employee001);
        else
            System.err.println("The class EmployeeDAO does not implement BasicoDAO, no procedure was performed");
    }
}
Marker Interface
There is also a concept that we call a marker interface. These interfaces serve only to mark classes, so that we can test a class with "instanceof".
Let's try another example: we have an Employee interface without any methods or attributes, because it is just a marker interface. See Listing 7.
Listing 7 : Interface Markup Employee
public interface Employee { }
Now let us create 3 beans, which correspond to three distinct types of employees: Manager, Coordinator and Operator, all implementing Employee.
Listing 8 : Creating Manager, Coordinator and Operator
package net.javabeat;

public class Manager implements Employee {
    private int id;
    private String name;
}

package net.javabeat;

public class Coordinator implements Employee {
    private int id;
    private String name;
}

package net.javabeat;

public class Operator implements Employee {
    private int id;
    private String name;
}
Now in our application we have a method that performs the salary calculation for each different type of employee. First, let's see what the implementation looks like if we don't use the interface.
Listing 9 : Misuse Interface Markup
package net.javabeat;

public class MyApp {

    public void toCalculateManagerSalary(Manager manager) {
    }

    public void toCalculateCoordinatorSalary(Coordinator coordinator) {
    }

    public void toCalculateOperatorSalary(Operator operator) {
    }
}
All of this can be reduced to a single method, as shown in Listing 10.
List 10 : Using the marker interface
package net.javabeat;

public class MyApp {

    public void calculaEmployeeSalary(Employee employee) {
        if (employee instanceof Manager) {
            // Calculate for manager Salary
        } else if (employee instanceof Coordinator) {
            // Calculate for coordinator salary
        } else if (employee instanceof Operator) {
            // Calculate for operator salary
        }
    }
}
Rather than creating a method for each type of employee, everything comes together in just one method by using the marker interface.
Summary
When properly used, an interface eventually becomes a powerful tool in the hands of a programmer or analyst, and it is essential in day-to-day programming. Good programming practices governing the use of interfaces are essential for good software design.
Very good explanation about interfaces!
HI Siddu,
Thank you for the comments.
Thanks,
Krishna
Could you give one scenario when to use abstract class with real time examples?
Dear Siddu,
You can check Java Collections API, say for example it has List (java.util.List) and AbstractList. This is a real time example used by most programmers in day to day programming. Hope this will help you understand and come with differences between Interface and Abstract classes.
Imagine a scenario where you have a set of methods out of which few methods have common implementation and then few others are abstract methods. In such cases you can have an abstract class with implementation for few methods and the rest methods being abstract.
HI Manisha , Sanaulla,
Great explanation. Keep up the good work!
Thanks,
Krishna
I have a question on the JDBC API. I am retrieving ten records from a table. Since a result set is a cursor, how will the records be stored? Will they be stored in an array of objects? Could you explain, please.
Below URL also have good details:
|
http://javabeat.net/java-interface/
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
package org.jboss.ejb3.test.jca.inflowmdb;

import javax.ejb.Stateless;

/**
 * Comment
 *
 * @author <a HREF="mailto:bill@jboss.org">Bill Burke</a>
 * @version $Revision: 44837 $
 */
@Stateless
public class JMSTestBean implements JMSTest
{
   public boolean wasCalled()
   {
      return JMSMDBBean.called;
   }

   public void clearCalled()
   {
      JMSMDBBean.called = false;
   }
}
|
http://kickjava.com/src/org/jboss/ejb3/test/jca/inflowmdb/JMSTestBean.java.htm
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
source: Several implementations of the Java Virtual Machine have been reported to be prone to a denial of service condition. This vulnerability occurs in several methods in the java.util.zip class. The methods can be called with certain types of parameters; however, there does not appear to be a proper check to see whether the parameters are NULL values. When these native methods are called with NULL values, the JVM will reach an undefined state, which will cause it to behave in an unpredictable manner and possibly crash.

import lotus.domino.*;
import java.util.zip.*;

public class JavaAgent extends AgentBase {

    public void NotesMain() {
        try {
            Session session = getSession();
            AgentContext agentContext = session.getAgentContext();

            CRC32 crc32 = new CRC32();
            crc32.update(new byte[0], 4, 0x7ffffffc);

            // (Your code goes here)
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
|
https://www.exploit-db.com/exploits/22360/
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
package org.mr.core.groups;

/**
 * holds the information about a multicast group and serves as a key in the groups map
 *
 * @author Amir Shevat
 */
public class GroupKey {
    // the multicast address
    private String groupIP;
    // the multicast port
    private int groupPort;

    public GroupKey(String groupIP, int groupPort) {
        this.groupIP = groupIP;
        this.groupPort = groupPort;
    }

    /**
     * @return Returns the groupIP.
     */
    public String getGroupIP() {
        return groupIP;
    }

    /**
     * @return Returns the groupPort.
     */
    public int getGroupPort() {
        return groupPort;
    }

    public int hashCode() {
        return groupIP.hashCode() + groupPort;
    }

    public boolean equals(Object obj) {
        if (obj instanceof GroupKey) {
            return ((GroupKey) obj).groupIP.equals(this.groupIP) && ((GroupKey) obj).groupPort == this.groupPort;
        } else {
            return false;
        }
    }
}
|
http://kickjava.com/src/org/mr/core/groups/GroupKey.java.htm
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
This code can be copied and pasted in the footer area of any .aspx
file in your website.
The first thing you'll want to do is create a text file, put a single 0 in it (the code below parses the existing value, so an empty file would fail on the first hit), call it counter.txt and save it to your root directory. The next step is even easier: copy and paste the code below into your .aspx file and save it.
Be sure to import the System.IO namespace into your page, something like this:
<%@ import namespace="System.IO" %>
<script runat="server">
public string counter()
{
    StreamReader re = File.OpenText(Server.MapPath("counter.txt"));
    string input = null;
    string mycounter = "";
    while ((input = re.ReadLine()) != null)
    {
        mycounter = mycounter + input;
    }
    re.Close();
    int myInt = int.Parse(mycounter);
    myInt = myInt + 1;
    TextWriter tw = new StreamWriter(Server.MapPath("counter.txt"));
    tw.WriteLine(Convert.ToString(myInt));
    tw.Close();
    // read the file back so the page shows the new value
    re = File.OpenText(Server.MapPath("counter.txt"));
    input = null;
    mycounter = "";
    while ((input = re.ReadLine()) != null)
    {
        mycounter = mycounter + input;
    }
    re.Close();
    return mycounter;
}
</script>
Copy this code to the bottom of your .aspx page:
<% Response.Write(counter() + " visited"); %>
A brief description of what is going on in this code goes as follows:
a. create a method called counter
b. Call the StreamReader class from the system.IO library and read your text file
c. Store the value of the text file in a variable
d. Close the StreamReader
e. Open the StreamWriter
f. Add 1 to the variable holding the value from the text file
g. Write the new incremented by 1 value to the text file
h. Close the StreamWriter
This last line calls the counter() method and writes the updated visit count to the page:
<% Response.Write(counter() + " visited"); %>
Hi,
you can use three technologies for this...
1. Application variable
2. Storing counter in Database
3. Storing counter in files
storing counter in database will be the best method.
to store in application variable
Application["Counter"] = 1;
but this will be cleared when you restart IIS or your website.
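For illustration, a minimal sketch of that approach using ASP.NET's built-in application state (Lock/UnLock guard the counter against concurrent requests; losing the count on restart is the drawback mentioned above):

// e.g. in Global.asax
void Application_Start(object sender, EventArgs e)
{
    Application["Counter"] = 0;
}

// wherever you count a hit
Application.Lock();
Application["Counter"] = (int)Application["Counter"] + 1;
Application.UnLock();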
This code is useful, thanks also to all you guys.
|
http://www.nullskull.com/q/10121166/hit-counter.aspx
|
CC-MAIN-2017-04
|
en
|
refinedweb
|
IRC bot - full featured, yet extensible and customizable
Project description
pmxbot is a bot for IRC and Slack written in Python. Originally built for internal use at YouGov, it's been sanitized and set free upon the world. You can find out more details on the project website.
Commands
pmxbot listens to commands prefixed by a ‘!’.
Dependencies
pmxbot requires Python 3. It also requires a few Python packages as defined in setup.py. Some optional dependencies are installed with extras:
- mongodb: Enable MongoDB persistence (instead of sqlite).
- irc: IRC bot client.
- slack: Slack bot client.
- viewer: Enable the web viewer application.
Testing
pmxbot includes a test suite that does some functional tests written against the Python IRC server and quite a few unit tests as well. Install tox and run tox to invoke the tests.
To extend pmxbot, declare your handler module under pmxbot's entry-point group in your package's setup.py, for example:
entry_points={
    'pmxbot_handlers': [
        'plugin name = pmxbot.mymodule',
    ],
},
During startup, pmxbot will load pmxbot.mymodule. The plugin name can be anything, but should be a name suitable to identify the plugin (and it will be displayed during pmxbot startup).
Note that the pmxbot package is a namespace package, and you’re welcome to use that namespace for your plugin (e.g. pmxbot.nsfw).
If your plugin requires any initialization, specify an initialization function (or class method) in the entry point. For example:
'plugin name = pmxbot.mymodule:initialize_func'
On startup, pmxbot will call initialize_func with no parameters.
Within the script you’ll want to import the decorator(s) you need to use with:
from pmxbot.core import command, contains, regexp, execdelay, execat
You’ll then decorate each function with the appropriate line so pmxbot registers it.
A command (!g) gets the @command decorator:
@command(aliases=('tt', 'tear', 'cry'))
def tinytear(rest):
    "I cry a tiny tear for you."
    if rest:
        return "/me sheds a single tear for %s" % rest
    else:
        return "/me sits and cries as a single tear slowly trickles down its cheek"
A response (when someone says something) uses the @contains decorator:
@contains("sqlonrails")
def yay_sor():
    karma.Karma.store.change('sql on rails', 1)
    return "Only 76,417 lines..."
Each handler may solicit any of the following parameters:
- channel (the channel in which the message occurred)
- nick (the nickname that triggered the command or behavior)
- rest (any text after the command)
A more complicated response (when you want to extract data from a message) uses the @regexp decorator:
@regexp("jira", r"(?<![a-zA-Z0-9/])(OPS|LIB|SALES|UX|GENERAL|SUPPORT)-\d\d+")
def jira(client, event, channel, nick, match):
    return "" % match.group()
For an example of how to implement a setuptools-based plugin, see one of the many examples in the pmxbot project itself or one of the popular third-party projects.
pmxbot as a Slack bot (native)
To use pmxbot as a Slack bot, install with pmxbot[slack], and set slack token in your config to the token from your Bot User. Easy, peasy.
pmxbot as a Slack bot (IRC)
As Slack provides an IRC interface, it’s easy to configure pmxbot for use in Slack. Here’s how:
Install with pmxbot[irc].
Enable the IRC Gateway <>.
Create an e-mail for the bot.
Create the account for the bot in Slack and activate its account.
Log into Slack using that new account and get the IRC gateway password <> for that account.
Configure the pmxbot as you would for an IRC server, but use these settings for the connection:
message rate limit: 2.5
password: <gateway password>
server_host: <team name>.irc.slack.com
server_port: 6667
The rate limit is necessary because Slack will kick the bot if it issues more than 25 messages in 10 seconds, so throttling it to 2.5 messages per second avoids hitting the limit.
Consider leaving ‘log_channels’ and ‘other_channels’ empty, especially if relying on Slack logging. Slack will automatically re-join pmxbot to any channels to which it has been /invited.
|
https://pypi.org/project/pmxbot/1121.1/
|
CC-MAIN-2018-26
|
en
|
refinedweb
|
I am using Java to create a program that gives the user the option to either add two numbers, multiply two numbers, or see if the two numbers are equal. This is what I have so far. I know how to do the menu option, so that is omitted. How can I do the equals? Like equals(x,y) or equals(x, plus(x,y))? Am I on the right track? Any feedback would be appreciated.
equals(x, plus(x, y));
sum = plus(x, y);
product = times(x, y);

public int plus(int x, int y) {
    return (x + y);
}

public int times(int x, int y) {
    return (x * y);
}

public boolean equals(int x, int y) {
    if (x == y)
        return true;
    else
        return false;
}
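For what it's worth, a self-contained sketch of what the poster seems to be after (names made consistent; the comparison method is renamed isEqual so it does not collide with Object.equals):

public class Calc {
    public static int plus(int x, int y) {
        return x + y;
    }

    public static int times(int x, int y) {
        return x * y;
    }

    public static boolean isEqual(int x, int y) {
        return x == y;
    }

    public static void main(String[] args) {
        int x = 3, y = 4;
        System.out.println("sum = " + plus(x, y));
        System.out.println("product = " + times(x, y));
        // e.g. isEqual(x, plus(x, y)) tests x against x + y
        System.out.println("equal? " + isEqual(x, plus(x, y)));
    }
}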
|
https://www.daniweb.com/programming/software-development/threads/381372/adding-multiplying-and-seeing-if-it-is-equal-program-using-java
|
CC-MAIN-2018-26
|
en
|
refinedweb
|
Simple Introduction to Generics in C#
Generics in C# gives the ability to write type independent code rather than allowing a class or method to work with only a specific type.
Let’s consider a scenario where we have to consume a REST service and Deserialize the response to a defined object. The code would be something similar to below.
public static MovieData GetMovieData(string url)
{
    // FetchResponse is a hypothetical helper standing in for the
    // HTTP call that was lost from this copy of the article
    string responseData = FetchResponse(url);
    return JsonConvert.DeserializeObject<MovieData>(responseData);
}
The above code fulfills the purpose, but assume we need to consume another service method and it's going to return "CastData" instead of "MovieData". So are we going to write another "GetCastData" method? Of course we could write another method, but deep down in your heart you know that there should be a better way to do this.
That’s where generics comes into play. Generics is a way of telling your class or method,
“Yo Bro, You don’t worry about the Type you are going to deal with. When I call you, I’ll let you know that information. Cool?”.
Notice that the above "GetMovieData" method deserializes the object as "MovieData" and returns "MovieData". We need to change those two places to be type independent using generics.
This is how we can achieve this in C#.
public class ServiceConsumer<T>
{
    public static T GetData(string url)
    {
        // FetchResponse: the same hypothetical helper as above
        string responseData = FetchResponse(url);
        return JsonConvert.DeserializeObject<T>(responseData);
    }
}
The “T” denotes the type. So this class will deal with the type that we specify when we create the object.
MovieData mData = ServiceConsumer<MovieData>.GetData("Movie URL as String"); //T => MovieData CastData cData = ServiceConsumer<CastData>.GetData("Cast URL as String"); //T => CastData
The above two lines of code specify, at the point of use, the type the class is going to deal with. So when GetData is called it is going to deserialize the data to the specified type and return an object of that type. Freaking Awesome Right?
|
http://raathigesh.com/Simple-Introduction-to-Generics-in-C/
|
CC-MAIN-2018-26
|
en
|
refinedweb
|
Scan input from a character string
#include <stdio.h> int sscanf( const char* in_string, const char* format, ... );
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The sscanf() function scans input from the character string in_string, under control of the argument format. Following the format string is the list of addresses of items to receive values.
The number of input arguments for which values were successfully scanned and stored, or EOF when the scanning is terminated by reaching the end of the input string.
It's safe to call this function in a signal handler if the data isn't floating point.
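A minimal example (not from the QNX reference) showing sscanf() and its return value in use:

#include <stdio.h>

int main(void)
{
    const char *in_string = "width=640 height=480";
    int w, h;

    /* two conversions requested; sscanf() reports how many succeeded */
    if (sscanf(in_string, "width=%d height=%d", &w, &h) == 2) {
        printf("%d x %d\n", w, h);
    }
    return 0;
}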
|
http://www.qnx.com/developers/docs/6.6.0.update/com.qnx.doc.neutrino.lib_ref/topic/s/sscanf.html
|
CC-MAIN-2018-26
|
en
|
refinedweb
|
Handle various OS signals.
#include "angband.h"
#include "game-world.h"
#include "savefile.h"
#include "ui-game.h"
#include "ui-signals.h"
#include "ui-term.h"
#include <signal.h>
#include <sys/types.h>
Handle various OS signals.
Handle signal – abort, kill, etc.
References character_generated, character_saved, COLOUR_RED, player::died_from, my_strcpy(), quit(), savefile, savefile_save(), signal_aux, signals_ignore_tstp(), Term_erase(), Term_fresh(), Term_putstr(), and void().
Referenced by signals_init().
Handle signals – simple (interrupt and quit)
This function was causing a huge number of problems, so it has been simplified greatly. We keep a global variable which counts the number of times the user attempts to kill the process, and we commit suicide if the user does this a certain number of times.
We attempt to give "feedback" to the user as he approaches the suicide threshold, but without penalizing accidental keypresses.
To prevent messy accidents, we should reset this global variable whenever the user enters a keypress, or something like that.
References character_generated, character_saved, close_game(), COLOUR_WHITE, player::died_from, FALSE, player::is_dead, my_strcpy(), player_upkeep::playing, quit(), signal_aux, signal_count, Term_erase(), Term_fresh(), Term_putstr(), Term_xtra(), TERM_XTRA_NOISE, TRUE, player::upkeep, and void().
Referenced by signals_init().
Handle signals – suspend.
Actually suspend the game, and then resume cleanly
References signal_aux, Term_fresh(), Term_redraw(), Term_xtra(), TERM_XTRA_ALIVE, and void().
Referenced by signals_handle_tstp(), and signals_init().
Handle SIGTSTP signals (keyboard suspend)
References handle_signal_suspend(), signal_aux, and void().
Referenced by close_game(), and save_game().
Ignore SIGTSTP signals (keyboard suspend)
References signal_aux, and void().
Referenced by close_game(), handle_signal_abort(), and save_game().
Prepare to handle the relevant signals.
SIGDANGER: This is not a common (POSIX, SYSV, BSD) signal, it is used by AIX(?) to signal that the system will soon be out of memory.
References handle_signal_abort(), handle_signal_simple(), handle_signal_suspend(), signal_aux, and void().
Wrapper around signal() which it is safe to take the address of, in case signal itself is hidden by some some macro magic.
Referenced by handle_signal_simple(), and inkey_ex().
|
http://buildbot.rephial.org/builds/restruct/doc/ui-signals_8c.html
|
CC-MAIN-2018-26
|
en
|
refinedweb
|
Flush all pending data in a PCM channel's queue and stop the channel
#include <sys/asoundlib.h> int snd_pcm_channel_flush( snd_pcm_t *handle, int channel );
The snd_pcm_channel_flush() function flushes all unprocessed data in the driver queue by calling snd_pcm_capture_flush() or snd_pcm_playback_flush(), depending on the value of channel.
EOK on success, a negative errno upon failure. The errno values are available in the errno.h file.
QNX Neutrino
This function is not thread safe if handle (snd_pcm_t) is used across multiple threads.
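A minimal sketch (not from the QNX reference), flushing the playback direction of a PCM handle that is assumed to have been opened earlier, e.g. via snd_pcm_open():

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/asoundlib.h>

int flush_playback(snd_pcm_t *handle)
{
    /* "handle" is assumed to be an already-opened PCM handle */
    int rc = snd_pcm_channel_flush(handle, SND_PCM_CHANNEL_PLAYBACK);
    if (rc != EOK) {
        fprintf(stderr, "flush failed: %s\n", strerror(-rc));
    }
    return rc;
}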
|
http://www.qnx.com/developers/docs/6.6.0.update/com.qnx.doc.neutrino.audio/topic/libs/snd_pcm_channel_flush.html
|
CC-MAIN-2018-26
|
en
|
refinedweb
|
Get a string of characters from a file
#include <stdio.h> char* input_line( FILE* fp, char* buf, int bufsize ); extern int _input_line_max;
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
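A minimal sketch (not from the QNX reference); it assumes input_line() returns NULL at end-of-file, by analogy with fgets():

#include <stdio.h>

int main(void)
{
    char buf[128];

    /* echo stdin line by line until end-of-file */
    while (input_line(stdin, buf, sizeof(buf)) != NULL) {
        printf("got: %s\n", buf);
    }
    return 0;
}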
|
http://www.qnx.com/developers/docs/6.6.0.update/com.qnx.doc.neutrino.lib_ref/topic/i/input_line.html
|
CC-MAIN-2018-26
|
en
|
refinedweb
|
ITT: Relationship red flags.
Here, I'll start.
You realize the girl you've been dating for the past 4 weeks only has male friends.
She has fucked a nigger
She has an enema kit under her bathroom sink.
>>592335655
Shit nigger that means she's down for anal.
Red Flag- Her family is really religious.
You find her fucking another guy
>>592335998
>she's down for anal.
Yep. With anyone
>always ready
>>592335106
>she has a "just friend" online
>she says she is "bored and wishes you both could do more stuff together"
>she is a virgin
You find out that the name you've been calling her all this time is actually her middle name
her mother didn't age well
"I am not like other girls."
Red flag: she often stares into the abyss with an empty look on her face
Protip: she dumb yo
She uses you as a punching bag.
She refuses to meet/let you meet parents.
She was dating someone else when you started fucking her.
NO DAD
>>592336513
does that mean im dumb?
>>592336563
I love those girls. They're easy to fuck and they like old guys.
>she complaines about being bored all the time
she will expect you to entertaine her, she is a boreing person in general, if you get married she will find other men to entertain her while you work your ass off to pay for shit she demands
Was ever raped
No dad
Your penis burns after banging her
Can't wait more than 10-20 minutes between texts
>>592336645
Trick question: this anon can't read the response if we answer.
she is suspicious even for the small things.
>>592335106
She has a penis
You catch her going through your locked cell phone.
She is he
>>592336187
This guy
She says things like "Get away from me" and "Who are you"
Has a penis
>>592336513
Look its the Bad Grandpa kid.
ANY mention of the following:
Femenism
Vegan
Patriarchy
Red Flag- You hate all of her friends.
>"i would never have kids"- means, i want them now
>parents dine at hooters
>tells stories about being hit as a child
>parents are divorced
>300 pictures of herself in her bathroom on her phone
>instagram/facebook are >50% herself
>more than 300 friends on IG
>>592336553
This for sure.
Also,
>2 weeks into relationship
>"anon, do you think you could marry me?"
>>592336915
>gril w/ no peen
das gay bro
She only has "guy" friends.
Says she gets bored and lonely when you're away for any period of time.
Translation: Obsessive and making her whole world revolve around you so she can control every aspect of it.
She never initiates sex anymore
>>592337066
we're talking about women in ACTUAL relationships
>>592336915
kek
>>592336563
Oh man I wonder how many of these the chick I'm with has.
No Dad
Started fucking her before she had backed out of her relationship
mom did not age well
I'm "the friend" from "online"
she was raped once
I hate all her friends
so far i wonder what else will pop up
>>592335106
She's a single mom.
>>592337410
ummm...
Are your really that fucking retarded or are you just trolling? Op made that point in his post dumb ass.
this
>>592337413
>>592336744
hell yah homie 34y/o can confirm, banging 19 y/o slutbox
>>592336830
im staring into the abyss too much to read it
she tries to kill you with knives
>>592337743
You aren't the only one banging her.
>>592337795
I'm a relationship expert and I can confirm this
>>592335106
She blurts out she loves you and YOU DONT LOVE HER BACK.
> How I knew my first relationship was at an end.
> Broke up a few days later and told her I don't love her back and that it was unfair of me to hold her back
> She tried to get me to come over or go out with her for months until I got another GF.
>>592335106
"Can we talk"(..about our relationship?)
"We need to talk"(..about our relationship)
i hate other women 08q345-918345134859q345-123????????
wtf? move on.
>>592337743
Currently in a 20 year age gap. I fucking love teenage girls.
>>592335106
complains and is proud of being overweight simultaneously.
>"I know I'm not a 0, but god damn am I sexy."
>>592335106
is this a trap or not? moar?
>>592335106
shes ill
she takes care of an ill person
hen pecking in a cute way, turns into demands
def agree with never date a girl thats already seeing someone or just broke up.
relationships tire me out
honestly, it can all be boiled down to just a few:
1. she doesn't seem excited about you, hanging out with you (insincere)
2. she gets angry, frustrated, sad, or extremely emotional with zero legitimate reason (i.e.; someone died)
3. she has no interests, or requires other people for entertainment (passionless)
4. she has no desire for clear relationship boundaries, and thus doesn't respect you as a person (i'll summarize that as "whore")
TL;DR - she's young.
sorry kids, you can have either the body of a teenager or the wisdom of an adult. just make your choice and adjust your little online dating profiles to match
Turns tv on while talking to you on the phone.
Addicted to drugs or alcohol.
Fat
Believes in Jesus
Doesn't like Vidya
Shopping all the time
Already has a baby
Starts conversations by complaining
Is mean to everyone.
>>592336233
What's wrong with virgins?
Also, nice dubs.
>>592335106
I also think this would be a red flag.
>>592337950
oh god. hep us both. christ. then what right? you thought it was all good and no...... hahah..... oh shit. here we go.
>>592335106
she refers to her friends as just a friend. If they were just a friend then she would just call them a friend.
>>592337895
>implying im not a cuck
>>592337385
>>2 weeks into relationship>"anon, do you think you could marry me?"
fuck this happens every time I have anything deeper than a Fwbs with a woman, it doesn't even make any sense to me I'm poor and have thinly covered emotional problems, I shouldn't be showing up at all on the marriage radar.
>>592335106
during the part in a movie that opens with lots of shit that will most likely be explained a little later in the film she asks "whats going on?"
>>592336231
You dense motherfucker.
>HAve condoms in your drawer
>don't use condoms with gf
not a red flag
>>592338405
You could turn a girl that nuts into a willing sex slave within a month.
>>592338101
was the ages my dude
>>592335106
> she only has male friends
oh man. this one hits home.
I learned that lesson the hard way. fuck man.
She claims to be a feminist.
She also does not swallow.
>>592338697
me too
>>592337895
That's a good one... Red Flag
>>592335106
Whats wrong with this OP?
>>592338563
>implying cucks aren't worse than fagots because at least fagots have the courage to accept that they are homosexual.
Come out of the closet already your fucking coward.
>>592338697
>tfw you know that feel
Shits weak man
>>592338697
>>592338862
Me three, bros.
>>592337354
> tons of selfies
oh yeah. spot on on that one.
I was on my second date with a girl and she was taking selfies of her self while in the passenger seat of my car. Holy shit what a vapid cunt.
>>592335106
>You realize the girl you've been dating for the past 4 weeks only has male friends.
>red flag
>being this insecure
Need /b/ honest opinion
Gf of fifteen months will not stop going through phone. Keeps getting mad over dumb bs etc. etc.
>Solve problem with screen lock
>Gf bursts into treats cuz i'm tired of her going through my shit
Should I just dump this nosy insecure cunt and get on with my life ??
pic unrelate
You're scared to mention your other girlfriends to her.
>>592335106
She constantly sees John Redcorn whenever she had a headache.
>>592338521
>she refers to her friends as just friends
>if they were her friend, she would call them a friend
>>592335106
Correction:
>You realize the girl you've been dating for the past 4 weeks only has male friends. Because she's a man.
>>592338392
They can't duck dick for shit.
They can't ride dick for shit.
Super boring in bed.
Starting from square 1.
>>592335106
if she says the words, "if you really care about me you'll..."
for real yo
>dated some bitch in the summer
>had the feminist fist tattooed on her foot
>was all for equal rights but liked sub and dom bedroom play.
>i brought up the fact that she was a walking contradiction every chance i got
>got dumped
>doesn't matter had sex
>>592338697
so what happened?
>>592335106
LOVES disney movies. Insists that you watch them with her.
>>592339063
She doesn't trust you, so yes.
>>592335998
>Red Flag- Her family is really religious.
Naw dude. Women that have repressed their urges and feelings for a long time are massive sluts.
>>592339063
Leave her now. Never stick your dick in crazy (and insecure)
>>592338966
Oh plz explain to me obi wan about how her fucking other people makes me a faggot.
>>592339063
100% Yes.
The instant you go through someones phone you might as well plaster, "I don't trust you." on your forehead.
Question for you: Will a relationship without trust work in the long term?
>>592338408
ok im high right now what the fuck did you just say?
>>592338405
Shit, now he has to change his name and leave the state.
>>592338697
me too...cheers
I have a list of rules about dating, all learned from experience.
- Never trust a girl who's fucked someone you don't trust.
- Never trust a girl who doesn't go by her real name.
- Never trust a ballerina
- Never trust a girl who won't make it facebook official
- Never trust a girl who uses the word casual at any point to describe your relationship
- Never trust a girl from Virginia
- Never trust a girl who prioritizes world travel
-Never trust a girl with more than 550 facebook friends
-Never trust a girl with more guy friends than girl friends
- Never trust a girl who offers anal the first sexual interaction or immediately following
And the most important:
- NEVER, EVER, UNDER ANY CIRCUMSTANCE trust a girl with a tattoo of a dagger...
doesnt eat any other meat than chicken
>>592337556
Your insanity, believe me
>>592338249
omg bro have you dated my baby mama too?
>>592336472
Yeah, she started out pretty cool now she's kind of a cunt. It's usually the other way around isn't it?
>>592338318
> wants to talk to you on the phone and do other shit at the same time.
nope. I had this last chick I dated want me to listen to her as she just went thru her day, like call me and be shopping and shit and I could just hear her in the background
hellanope.
>>592339450
how often do you find yourself in a relationship with a ballerina? stfu faggot
>>592336187
kek
>>592339124
try it in a sentence
You: So who was that on the phone?
>> Oh that was my friend john.
>> Oh that was john, we are just friends.
You honestly cant understand the different implications here?
>>592338697
>In a year long relationship with a girl that believes in "lying to avoid the other person being hurt" who only has male friends.
>her excuse is "do you know how hard it is to make female friends"
>haven't found any legit evidence to prove she's cheating
>have her tell me she doesn't trust me almost weekly
Awww shieeet.
>>592337477
fuckin A m8
>>592339124
I actually agree with what he's saying.
"Who's that?" or "Who are you going out with?"
Answer: "Oh it's just [guys name], it's ok we're just friends."
That's the red flag.
If she normally answers with something like: "Oh I'm going out with [guys name]." it implies less because she's not instantly defending herself by saying, "JUST friends".
Once she starts instantly defending herself from a simple question like that, it somewhat implies she knows she's guilty and trying to hide it.
>>592339063
yeah dude. I had a nosy ass bitch too. always accused me of cheating. Turns out she was the one that was cheating. Thats why she was so god damn paranoid
i dated a girl, who was vegan, ate glutein-free and she also didn't like soya.
>>592338929
she probably is accustomed to stringing guys along for attention, and probably fucks around.
there are exceptions, there are a minority of tomboyish women who genuinely connect platonically better with males do to having more male interests, but even then they usually have a female friend they do girly shit with on the side. or an actual girlfriend.
>>592337066
That's my girlfriend and she's pretty cool, but I'm also a vegan feminist so it's all good
if she is super close with her mom
>>592339383
haha. im high too. maybe tghat shy i dunno. your bein cool and gettin laid and shes not she want more an im igh... yeah so..
>>592339690
Explained like that it makes sense, but nobody is going to read the first message and understand what the fuck you meant.
way too much time spent riding horses
>>592339062
>being this whiteknight
>>592338124
oh god
>>592339063
Yeah, probably dude. Problems like this are only going to get worse down the line. If you're already having trouble with that shit now it's just going to intensify the longer you stay with her.
>>592339137
duck dick
>>592336817
this.
She's completely oblivious to the fact that her "best friend" has a huge crush on her.
>>592339621
Several time actually. Do you doubt my wisdom? Trust me on this one. Avoid Ballerina's. All that perfectionist high maintenance bullshit they are trained to do makes them insane.
>>592339949
why is that bad? less time for you?
>>592339249
This
>>592339064
>implying /b/ has >1 girlfriend
>implying /b/ has =1 girlfriend
>>592339063
Next time you sex her hit it as hard as you can that will shut the bitch up.
she refuses to get divorced and/or change her married name
>>592335998
This means nothing. If SHE is religious, on the other hand..
My girlfriend was a virgin when I met her. She likes me because I entertain her. She has more guy friends than girl friends. Hates it when I take too long to reply. Gets emotional very easily. She has a hot body though. I think I want to marry her. I would not mind tapping datass for the rest of my life.
>>592340096
Naw son, she knows she just doesn't love you back.
>>592338318
Lol is the current girl I'm banging. Like 90% of those things.
When she carries a whistle
>>592339450
who the fuck doesn't have over 550 facebook friends now-a-days?
>>592335106
She keeps having her friend Darel hidden in our closet buttnaked whenever I show up.
>>592339980
oh this brother. this. listen. lo.
<<<<<<<<<<<<<<<<<<<<<<<
>>592340359
>2015
>not dating a referee
I refuse to date a woman who has male friends.
Not because I think she'll cheat on me with them, but because they're going to hit on her all the time and I don't want to have to be kicking asses 24/7. I got shit to do.
Honestly the only reason you have a female friend is if you want to fuck her. Any single one of your girl friends you would fuck daily.
>>592337066
i wouldn't mind dating a vegan girl. she would probably be caring and compassionate... and healthy from all the fruit n veg eating.
long-term vegans are rarely overweight.
>>592339063
Femanon here
She cheated on you a little while ago and now she's desperately trying to find evidence that you're doing the same so that she can stop feeling guilty.
>>592339191
well basically all the male "friends" were men she was either considering dating, or had fucked in the past. This was a woman with a mind of a man. And just wanted to pit all the min against eachother. Like a bidding war at an auction.
she was just all around crazy. she was into rape-sex, which is very much a turn-off for me.
She just ended up being a crazy cunt, and then told everyone at work that I was some wierdo faggot after I stopped talking to her.
Mega game player.
just all around a weird incident. and I made the mistake of dipping the pen in company ink so it fucked my life up. Ultimately, the fact that all her friends were men should have been a massive red flag. I mean how fucking wierd is that? It'd be like a guy whose only friends are a flock of women. Something wrong there.
>>592340145
um yeah, this girl I'm seeing has forced me to watch some disney movies. Why is this red flag? Doesn't seem to be the worst thing. I watch a shitty movie like frozen then she fucks me for the next two weeks. Don't get the huge problem....
>>592337066
My girlfriend is starting to turn feminazi. I hate it and she always avoids the conversation when I start telling her what's wrong with the stupid shit she's saying. It usually goes "wow, these people are stupid" "no they're not they have a right to blah blah blah false justification" "if it really mattered they'd do logical decision rather than stupid shit they did" "I don't want to start with this, I'll just get mad" I don't think I can ever forgive my cool female friend for showing my girlfriend how to use tumblr.
>>592340102
you could easily make that connection without dating them which is why i dont believe you actually dated them. unless you have some connection to a ballerina (studio?) or are one yourself. either way you are a liar or a faggot but im just gonna kill two birds with one stone and say you are a lying faggot.
>>592340388
i dont. I keep it to important people only. Which is maybe 12 people
>>592339863
>girlfriend
yeah, like lesbian life partner level stuff. big red flag.
What if I the booty is really really good?
Also I've been in love with her for 6 years and I had to go through a really elaborate plot to get her boyfriend fired from his job, addicted to meth, and into prison for domestic assault, in order to free her up.
So I mean I should be fine right?
>>592335106
She has never masturbated in her life. (protip most likely asexual enjoy your sexual frustration)
>fml
>>592340454
>kicking asses 24/7
y so insecurr m8
Talks about her Dad all the time.
Has a wierd relationship with her dad.
>>592339249
I wouldn't call this a red flag.
Disney has some great content and it can be fun to watch with someone.
Hates Weed. Like fucking hates it.
>>592339008
>vapid
I like that word, it sounds really harsh and serious.
>>592339786
so much this.
3 years of her bs, getting all insecure and playing the victim when you even brush the subject of something as simple as "who is texting you right now?". crying, blaming ect. should have been more obvious, but they make YOU feel like you are being the bad guy by not trusting her, when its really just your instinct trying to get the bitch out.
>>592339939
I understood. Not being an autistic nigger helps.
>>592335106
MGTOW
Men/Man.
>>592340657
i had a gf that would masturbate herself for me. shit was fuckin cash.
When she's actually a potato with Mac and cheese on it
>>592339450
>Never trust a ballerina
Why?
>>592339939
I'm high fuck off also thanks other guy for correction
>>592338966
Moron.
Red flags
You find out she has had fuck buddies "i didnt enjoy it and dont do it anymore" happened twice a few months before you start dating
If shes ever model'd amateur
Ever dated a non famous "musician"
Has had a "pay pig" ( someone who pays you to sexually humiliate or dom them)
Tells you if you broke up shed still try to stick out being friends if you moved on
Unfortunatly these were all from the same chick in one relationshit
Has physically abused all her ex-bfs.
Yeah, I'm scared as fuck.
>>592335106
>>592336513
>>592336513
she visits 9fag or posts traps as OP on /b/
>>592340557
damn bra, but I hope you got over it (mostly), sounds like she was legit crazy
has an unattractive laugh.
>it ruins humor and therefore good times
>>592339063
Pro-tip: Girls that don't trust, cannot be trusted. They think you would do, what they would do. Which is cheat.
Sad but true.
>>592335106
I've been in a few before.
1. She talks about marriage or kids too much in the first year.
2. She "forgets" to take the pill on occasions. My tip is to get her the coil, and maybe go on the male pill too.
3. She acts too quickly as if she's moving herself into your apartment, like taking up clothes drawers for herself without asking.
4. She starts moving your own stuff around your apartment.
5. She starts to become less and less independent, and relies on you more for money. Any two people who can't stand on their own two feet will have a rocky relationship filled with future arguments.
6. She bans your "you time" with friends and consoles.
>>592340673
They deserve it. She's not dating them for a reason.
When asking about her dad
>I'm glad he's dead
Annnnnnd that was a can of worms I wasn't about to open.
>>592339949
naw, that's a pro, its a hobby that is 95% women and builds great thigh and pelvic muscles.
>>592338101
>I love fucking teenage girls
FTFY
>>592335106
She has a penis
>>592340745
Weed is one of the best drugs there is*. How can anyone hate it unless they're willfully ignorant?
*personal opinion, faggots.
>>592339330
You want to be her. You're fucking the people she's fucking vicariously through her.
>>592339063
>>592335106
32 year oldfag here. I'll share a few
1) Multicolored or unnaturally colored hair. Sign of a personality disorder or extreme mental instability.
2) Has 'rebelled' against deeply religious parents or is otherwise estranged from them.
3) Majority of her friends are men. She has fucked or will fuck nearly all of them. They're waiting their turn.
3b) Other women cannot stand her but 'can't quite put their finger on why'. Especially women that you're friends with (Protip: Your buddies' girlfriends/wives are amazing litmus tests to use for potential new women)
4) Any time you point out shady behavior she goes immediately to accusing you of being 'controlling'. She's deflecting. Just outright say what you think she's doing, chances are you're right. That's why she's deflecting.
5) Any woman you can confirm has cheated before. Immediately proves she cannot ever be trusted.
5b) If she'll cheat with you, she'll cheat on you. It's a matter of time and her convenience.
5c) Any time a woman claims she was 'raped' by one of her beta-orbiter men. She fucked him willingly. She'll do it again too if she thinks you won't find out.
6) Compulsive liars.
7) Women who can't hold a job.
8) Women that insist on going with no condom ridiculously early in the relationship (read <6 months). One of the times you try to pull out she's going to leglock you you and refuse to let you. Congratulations, you've just created an 18 year government-mandated stipend for her.
9) Women that were actually raped or molested at an early age. The amount of psychological damage this cause will never fully heal. They're complete emotional nightmares as a result.
10) The ones that say 'I love you' ridiculously early. Bitch you've known me 4 weeks and you love me?
10b) The ones that refuse to say they love you even after a year or more. You're being used. Soon as she finds someone else to fufill the purpose she's assigned to you, you're going to be out on your ass.
And that's just the big ones.
>>592340674
My gf's dad apparently gets upset if he has a day off work and she's not available to spend the day with him.
>mfw my gf's dad's probably getting it in with my girl more than I am
>>592340868
>masturbate herself for me
wtf are you saying
>>592339063
There's clearly a fundamental problem with the way both of you view what a relationship means and that's just as much your fault as hers. I personally wouldn't give a damn if my girlfriend of over a year wanted to looks at everything aside from asking for access to my bank account, passwords, and social security number. If she wants to ask me to look through my phone I take that as a good sign. I'd rather have a girl afraid that I'm too good for her than one that doesn't give a shit. But that's just me. I see relation ships as requiring complete transparency to work. Other people have different opinions. If you don't mesh on the nature of what a relationship fundamentally is then yes, you should leave her.
>>592338588
oh god, i used to have to explain that what was hapenning was foreshadowing and that they were building up to something that was gonna happen, like it doesn't make sense yet because the thing they are referring to hasn't even happened yet. be patient give it time.
i can't remember but that probably just started another argument because i pointed out... the truth.
>>592340657
My current one never came before I came along. She literally doesn't do anything in bed except get fucked silly. Ugh.
>>592338572
that's what happens when you date chicks with low self esteem
>>592335106
When she wants "eletro therapy"
When she hangs out with one guy, and he's just "a friend", or "like a brother".
Second one is the worst. You know she's fucking bullshiting, but she keeps saying it. I know he's not "just a friend'.
>>592341083
That too.
I've had all good relationships actually. The best one
>liked anime but kept it away from me
>was vegetarian but i didn't know about it for a while
>owned one cat and one dog
>didn't have a facebook
>didn't talk much
I still love her so much. I wonder where she went.
>>592336817
>Your penis burns after banging her
what this?
>>592337354
>"i would never have kids"- means, i want them now
My gf and I are in agreement that we never want kids.
>>592339180
> women
> logic
they just can't man. and won't. never will.
and really when you think about it, why should they? They can think however they want in whatever strange ways they want and men will still want to fuck them.
really they run shit when you think about it. We strive to meet their illogical perceptions.
>>592339949
>>592341078
It's also not the cheapest thing to get in to. Implies wealth.
>>592340976
end it now you dumb fuck its only gonna get worse
>>592338101
>>592338101
>>592338644
are you retarded?, thats not a "redflag" thats a fuckin relaitionship ender right there bro.
>>592340976
This happened to me recently.
>bitch clocks me in face drunk, breaking $400 prescription glasses
>fuck it, we're on lease together - forgive it
>3 weeks later claws at my face, scraping glasses off durr 'open hands'
>fuck you
>50% power into fist into torso
>bitch flies 5ft back
"This is your equality. Fuck you. Next time it will be 100% power."
Hate this cunt so much. Can't wait til lease is up. Sucks I'll have to be a master carpenter to repair all the damage I've done here.... (punched a hole in about everything solid b/c typically I don't EVER hit people unless provoked)
>>592340903
See:
>>592340102
>>592337413
id prefer that over "yeah anon im just hanging out with my guy friend but hes just a friend dont worry :)"
Pageant girls
>>592339869
>>592340447
>>592340841
weird how it works that way. Ive never cheated on someone. If you dont want to be with someone then fucking leave. I aint keeping you here. I get around when Im single and have broken up with girls when there is another girl i really wanna fuck. Its not hard. I guess some bitches are just crazy.
>>592339926
This conversation is the biggest shit show I've ever seen
>>592341160
>6) Compulsive liars.
Hit the nail right there for me anon. Fucking hate liars. Especially when you call them out on it and they continue to try and feed you bullshit.
>>592341350
Super AIDS.
>>592341160
>1) Multicolored or unnaturally colored hair. Sign of a personality disorder or extreme mental instability.
So you're saying this is a no?
>>592335106
She has a facebook.
>>592341053
fucking A bro. went out with a girl 3 times in my hometown before she started asking me if i was fucking girls at my college. when i said no, she said its not a big deal if i am. i realized she already had it in her head that i was and i wasnt getting through to her so i ended it right there. didnt realize until you said it anon, but she originally said she had trust issues and i brushed it off
>>592339754
Been there my friend.
>>592341197
i said masturbate herself as opposed to masturbating me. I know. Masturbation means sexual pleasure of yourself. I get it. But sometimes people are retarded. Like you.
>>592340349
i think i just realized my first relationship of nearly 4 years i was getting cheated on left and right. well shit
>>592341472
Daddy issues are my bread and butter.
I'm pregnant, anon
>>592341589
amen brother.
>>592341197
That she would get herself off while the dude watched, come on bro.
>>592341209
she cheated on him that's why she is looking for evidence of the same simple man.
>>592340447
I'm on a different machine so I don't have an appropriate reaction image but fuck that was good. Thanks.
>>592335106
When the girl you've been dating is part of a clique of female friends who spend a lot of their time discussing you and trying to convince her to break up with you so they can all be "single gurlzz" together
>>592339754
yea shes cheating on you
>>592339863
Nailed it.
the "tons of guy friends" chick loves stringing men along for kicks.
>>592341618
Holy cunts, I know a girl named Angel that looks the exact same minus the hair and she's nuts as fuck.
>>592340604
Trial and error.
I dated two ballerinas by chance. How is that so unbelievable? You must be a ballerina getting so easily rustled.
I'm surprised you didn't call me out on Virginians. I approve. This must mean you're from West Virginia, the one true Virginia.
She has a tattoo. Any size, any place.
>>592341634
every girl has a fucking facebook account even the shy bookworm girl that I used to know
>>592339150
I hate it when a girl can't duck my dick.
>>592340388
I have like 60 max, I don't add family or people I'm not close enough that I would hang out with 1v1
>her bestfriend is a guy who was her Ex/someone that she had feelings for at one point/He had feelings for her.
>>592341160
2 is okay. Better than being religious
>>592341145
underrated post
kek
She has borderline personality disorder. Run for the fucking hills and never look back. It'll save you from being manipulated, lied to, ignored and being cheated on.
>>592336395
fucking underrated post
>>592341790
You're better off without her m8.
Get yourself a better gf with a bigger penis so the ex knows you traded up.
>>592340604
I can back up his claim. I only dated one but I met her friends and there was a common thread that wore me out after two weeks.
>>592335106
>>592338697
Fuck this is the girl I am just starting to date. She was a part of our guy friend group for a long time though so maybe it is a bit different? I t's awesome right now though because she is fucking hot, and yes albiet a little crazy in a good way, and she does all the 'manly' things I like to do, like climbing backpacking, drinking, smoking
fuckkkkkkk am I getting myself into some shit?
>>592341497
>(punched a hole in about everything solid b/c typically I don't EVER hit people unless provoked)
That's still not healthy, anon.
>>592336395
or her mother is hotter
>>592339104
underrated post
>>592341160
OH hell I'll throw in a few more.
11) Has no father/absentee father. This woman will crave male attention and approval. How do adult women get this? By fucking as many men as they can. These are the women that end up working graveyard at the C-string stripjoint in their 40s.
12) Has two kids or more that are extremely close in age, and look absolutely nothing alike. They have separate fathers, and she fucked both of their fathers within a very short timeframe. This is a whore. You are her next mark..
14) any woman that constantly brings up multiple ex-boyfriends in casual conversation. This should tell you her deep-down opinion of men. They're conversation pieces. A means to an end. That's exactly what you'll end up being.
15) Any woman that asks you for any amount of money above $50 within the first 5 months. You know what she's using you for now.
16) If she's a student when you meet her and suddenly isn't after about 6 months of dating.. you're being primed to be her retirement fund.
17) If she insists on going somewhere and when you say you'd like to come along, she actually gets mad while insisting you can't. You're interfering with her ability to wrangle a side-dick. Insist and if she still refuses, tell her to take her shit with her when she leaves.
>>592340454
This. I hang out with men because we have things in common.
I hang out with women cause i want to get inside them.
This just seems like the normal way of things. Why do some women seem to deny this is true?
She shaves her eyebrows and gets tattoo replacements.
She gets short, more manageable haircut.
She doesn't shave her legs or armpits for you, but for special occasions.
Socks as gifts.
>>592341898
ok james dean
1.She's always sick or hurt. Either she's really fucking whiny or she's poor breeding stock, either way you don't want that.
2.You always have to explain movies to her. Explaining inception is one thing. I had to explain the fucking Princess Bride to a bitch. It's a fucking kids movie dumb cunt, a 3rd grader could follow the plot...
>she has a dick
>>592339104
>>592342220
.
holy,fucking, shit..... this is actually pretty close to what my ex was.... damn it.
all that love talk n shit ...why.
>her dicks smaller than yours
>>592338929
Girls who have all male friends are usually oblivious to the fact that they all just want to fuck her but don't know how to make a move. Dating a girl like that is a nightmare, because all of her friends see you as a rival and all they do is put you down and talk shit about you when your not there.
Or worse, she knows they all want to fuck her and she gets a kick out of being desired by so many men and she uses their attention to compensate for the fact that her father left when she was five.
>>592339450
welllllllll seems like you are projecting arent cha son?
Good advice so far.
If she cries first time or regularly during or after sex. Had 2 gfs like this, both crazy. The one was hot as fuck though so you know...held on to her a little longer than I should have
>>592335106
When you tell her "you got what I need" but she says he's just a friend
>>592339319
>Never stick your dick in crazy (and insecure)
yeah, good luck with that, it's amazing how many women out there are severly damaged
>>592342554
Biz dat you?
>>592340648
>>592339542
shit meant to link
>>592339966
>not knowing what the fuck white knighting is
>getting double dubs
>>592341160
>8) Women that insist on going with no condom ridiculously early in the relationship (read <6 months)
This doesn't always have anything to do with:
>Congratulations, you've just created an 18 year government-mandated stipend for her.
Mostly it means theyre on the pill and want to have sex without a condom.
>>592342085
>a little crazy in a good way
you can stick your dick in her but don't do anything serious.
>>592337743
>>592338101
How'd you guys meet these sluts? 28yo, looking for a youngin'.
>>592342110
I don't like to hurt anything. Objects can't feel pain.
>was brought up by old school parents where 'beating' meant 'fixing'
All I ever feel is hate, resentment, regret, and pain. Job is pure stress, no friends b/c of long-term bitchcunt, the only thing keeping me alive is my prodigy of a child.
>bitchcunt 'fiance/gf' thinks we're still going strong
>fuck you cunt, installed packet sniffers on network 4 months ago I know everything
>she will never, ever, ever win a custody battle
I play for keeps.
>>592340557
>she was into rape-sex
Almost all girls are, most just won't tell you unless they feel 100% sure you aren't going to judge them for it.
>>592342235
>She gets short, more manageable haircut.
What's the problem with that?
>>592342511
mx ex had a best male guy friend.
always told me he was "only a friend" and " nothing will happen".
yeah fuck me, after 2 years of relationship she breaks up under blameshifting and stupid reasons and gets together with him. wow....just fuckin wow..
she was beautiful, the right mixture of humor, looks,character...and then fucks me over like that
>>592341006
thanks im good.
i think ultimatelley people know she is a cunt. plus i fucked a bunch of girls right after.
whatev. that's life.
>You realize the girl you've been dating for the past 4 weeks only has male friends.
Implying that a girl can't be anything other than straight when dating your swampass
Implying that by having all male friends means she's a slut
Implying that if she had all female friends she could be just as slutty if bi
fucking christ niggertits
>>592342229
They think they're actually interesting. I guess you can have a good time with them, but it will never be beyond a real flirty time.
Like, you can't wait to go bowling with your friend. You'll joke and drink and it's good. If you went bowling with her, you'd constantly think about not fucking up because you still want to get your dick in her. It's just not as comfortable, even if you don't like her.
>>592342766
Pic related
>>592339450
/b/ro i'm feeling the hardship of a GF who keeps begging to travel. To be honest i'm piss poor and a student and it just is not feasible currently. But she goes on like it is her destiny or that her current life is fulfilling. Although she unintentionally is shitting on the relationship the implications are sufficient for me to prepare for the incoming breakup.
Also life experience, if she is catholic and wishes to argue with you about the moral nature of sex related topics, it is not worth it.
|
https://4archive.org/board/b/thread/592335106/itt-relationship-red-flags-here-i-039-ll-start-you-realize-the
|
CC-MAIN-2018-26
|
en
|
refinedweb
|
Leonardo Uribe commented on MYFACES-1834:
-----------------------------------------
After many hours of trying, I could not find a way to generate this properly. Maybe
the maxim "keep it simple..." applies to this problem, and the better option is to leave it as is.
> suffix added to component id when including files
> -------------------------------------------------
>
> Key: MYFACES-1834
> URL:
> Project: MyFaces Core
> Issue Type: Bug
> Components: JSR-252
> Affects Versions: 1.2.2
> Reporter: Simon Kitching
> Priority: Minor
>
> In core 1.2 to 1.2.2, any use of jsp:include causes the ids of components in the included
file to have a random string appended to them.
> This results in some ugly ids. However more significantly, the id of a component is not
predictable even when an id attribute is explicitly assigned.
> In addition, this breaks the tomahawk forceId feature, because although the namespace
prefix is omitted the rest of the id changes making "forceId" useless.
> The cause is class UIComponentClassicTagBase, where checkIfItIsInAnIterator() returns
true if the current component is in an included or forwarded page. Later, the createUniqueId
method adds a suffix to the user-specified id if the member isInAnIterator is true.
> Unfortunately, documentation on why things are implemented as they are is lacking.
> Checking for iteration is needed to support
> <c:forEach ..>
> <h:inputText
> </c:forEach>
> Checking for includedOrForwarded might be to allow:
> <jsp:include
> <jsp:include
> However this is a very rare usage; supporting it should not hurt everyone else.
> And Sun Mojarra does *not* mess with the ids like this...testing shows
> that the ids of components are the same regardless of whether they are
> inline or in an included file.
> Maybe the "isInIterator" check could look to see whether the *same file* is being included
twice, eg by keeping a list of the files included so far, and returning true only if the same
string is encountered twice? That would allow multiple inclusions, but not mess with ids for
a single inclusion.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
|
http://mail-archives.apache.org/mod_mbox/myfaces-dev/200806.mbox/%3C1918940356.1214240445089.JavaMail.jira@brutus%3E
|
CC-MAIN-2018-26
|
en
|
refinedweb
|
I saw a code here at Cpp Quiz [Question #38]
#include <iostream>
struct Foo
{
Foo(int d) : x(d) {}
int x;
};
int main()
{
double x = 3.14;
Foo f( int(x) );
std::cout << f.x << std::endl;
return 0;
}
The statement Foo f( int(x) ); turns out to declare not a Foo object but a function f, as if it were written Foo f( int ); or Foo f( int x );.
What does the syntax int(x) in the statement Foo f( int(x) ); mean? 
The parentheses around x are superfluous and will be ignored. So int(x) is the same as int x here, meaning a parameter named x with type int.
Is it the same as Foo f( int x );?
Yes. Foo f( int(x) ); is a function declaration: it declares a function named f, which returns Foo and takes one parameter named x with type int.
Here's the explanation from the standard. $8.2/1 Ambiguity resolution [dcl.ambig.res]:
(emphasis mine)
The ambiguity arising from the similarity between a function-style cast and a declaration mentioned in [stmt.ambig] can also occur in the context of a declaration; the resolution is to consider any construct that could possibly be a declaration a declaration. [ Note: A declaration can be explicitly disambiguated by adding parentheses around the argument. The ambiguity can be avoided by use of copy-initialization or list-initialization syntax, or by use of a non-function-style cast. — end note ] [ Example:
struct S { S(int); };

void foo(double a) {
  S w(int(a));     // function declaration
  S x(int());      // function declaration
  S y((int(a)));   // object declaration
  S y((int)a);     // object declaration
  S z = int(a);    // object declaration
}
— end example ]
So, int(x) will be considered as a declaration (of a parameter) rather than a function-style cast.
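For completeness, here is a small sketch (not part of the original answer) of the usual ways to force the object declaration; all three appear in the standard note quoted above:

#include <iostream>

struct Foo
{
    Foo(int d) : x(d) {}
    int x;
};

int main()
{
    double x = 3.14;

    Foo a((int(x)));   // extra parentheses around the argument: object declaration
    Foo b = int(x);    // copy-initialization: object declaration
    Foo c{ int(x) };   // list-initialization (C++11): object declaration

    std::cout << a.x << b.x << c.x << std::endl;   // prints "333"
    return 0;
}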
|
https://codedump.io/share/ulNdqTvGI5Gp/1/most-vexing-parse
|
CC-MAIN-2018-26
|
en
|
refinedweb
|
Introduction
The existing built-in Grafana dashboards provided by Red Hat OpenShift Container Platform clusters have a fixed template with many metric details. Whereas with a customized dashboard, a system administrator can focus only on required monitoring parameters. However, it is not easy to accomplish because of constraints with writing custom queries. This step-by-step tutorial explains how to deploy the community edition of the Grafana Operator and leverage an existing Prometheus as a Grafana data source to create customizable Grafana dashboards.
A bit of background
OpenShift Container Platform includes a Prometheus-based monitoring stack by default. However, this built-in monitoring capability provides read-only cluster monitoring and does not allow monitoring any additional target. The built-in monitoring feature monitors the cluster components such as pods, workspaces, nodes, and provides a set of Grafana dashboards that are non-customisable.
Prerequisites
- Red Hat OpenShift on IBM Cloud is a managed service that simplifies deployment and configuration of the OpenShift Container Platform.
- Red Hat CodeReady Containers are preconfigured OpenShift 4.1 (or newer) clusters. They offer a minimal OpenShift cluster environment for test and development purposes on local computers.
Deployment
Install Grafana Operator community edition
- Log in to the OpenShift Container Platform cluster console with the cluster-admin role.
Create a new project named grafana (OpenShift project names must be lowercase DNS labels).

$ oc adm new-project grafana
Created project grafana
On the web console, click Operators, and then click OperatorHub.
Search for Grafana Operator and install the community edition of the Grafana Operator.
On Create Operator Subscription, under Installation Mode, click A specific namespace on the cluster; under Update Channel, click alpha; and under Approval Strategy, click Automatic. Then click Subscribe.
Check the pod status to see if the install is complete.
$ oc get pods -n grafana -o name
pod/grafana-operator-655f76684-7jsz5
Create a Prometheus user
Before creating Grafana and Grafana data source instances, you need to create a special user in the existing Prometheus on openshift-monitoring project.
Navigate to the openshift-monitoring namespace:
$ oc project openshift-monitoring
Now using project "openshift-monitoring" on server
Load the prometheus-k8s-htpasswd data into a tmp file:
$ oc get secret prometheus-k8s-htpasswd -o jsonpath='{.data.auth}' | base64 -d > /tmp/htpasswd-tmp
Create a special user to the existing Prometheus secret:
$ htpasswd -s -b /tmp/htpasswd-tmp grafana-user mysupersecretpasswd
Adding password for user grafana-user
Check the content of /tmp/htpasswd-tmp for grafana-user:
$ cat /tmp/htpasswd-tmp | tail -1
grafana-user:{SHA}xxxxxxSwuJxNmjPI6vdZEyyyyy=
Replace the prometheus-k8s-htpasswd secret data with our /tmp/htpasswd-tmp:
$ oc patch secret prometheus-k8s-htpasswd -p "{\"data\":{\"auth\":\"$(base64 -w0 /tmp/htpasswd-tmp)\"}}"
secret/prometheus-k8s-htpasswd patched
Delete the Prometheus pods to restart the pods with new data:
$ oc delete pods -l app=prometheus
pod "prometheus-k8s-0" deleted
pod "prometheus-k8s-1" deleted
$ oc get pods -l app=prometheus -o name
pod/prometheus-k8s-0
pod/prometheus-k8s-1
Create Grafana instances
On the grafana namespace, click Installed Operators > Grafana Operator > Create Instance on the Grafana card.
On the Create Grafana YAML file, edit the name and spec.config.security.admin_user and spec.config.security.admin_password as required, and then click Create. A sketch of such a YAML file is shown below.
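The apiVersion and field names in this sketch follow the community Grafana Operator's examples of that era, and the credentials are placeholders — verify them against your operator version:

apiVersion: integreatly.org/v1alpha1
kind: Grafana
metadata:
  name: example-grafana
  namespace: grafana
spec:
  config:
    security:
      admin_user: admin        # edit as required
      admin_password: secret   # edit as required
  dashboardLabelSelector:
    - matchExpressions:
        - key: app
          operator: In
          values:
            - grafana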
Make sure the Grafana pods are created and running:
$ oc get pods -n grafana -o name
pod/grafana-deployment-689d864797-n4lpl
pod/grafana-operator-655f76684-7jsz5
Create Grafana data source instances
- Click Installed Operators > Grafana Operator > Create Grafana DataSource instance.
Modify the metadata.name, spec.name, basicAuthUser, and basicAuthPassword in the Create GrafanaDataSource YAML file.
Note: Make sure to add spec.datasources.jsonData.tlsSkipVerify and spec.datasources.basicAuth set to true. Also, verify the preconfigured Prometheus url and add the right one under spec.datasources.url in the Create GrafanaDataSource YAML file.
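A minimal sketch of a GrafanaDataSource resource reflecting those notes — the apiVersion and the Prometheus URL are assumptions (a typical openshift-monitoring endpoint is shown), so verify both in your cluster:

apiVersion: integreatly.org/v1alpha1
kind: GrafanaDataSource
metadata:
  name: prometheus-grafanadatasource
  namespace: grafana
spec:
  name: prometheus-grafanadatasource.yaml
  datasources:
    - name: Prometheus
      type: prometheus
      access: proxy
      url: https://prometheus-k8s.openshift-monitoring.svc:9091   # verify in your cluster
      basicAuth: true
      basicAuthUser: grafana-user
      basicAuthPassword: mysupersecretpasswd
      jsonData:
        tlsSkipVerify: true
      editable: true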
The Operator automatically replaces the grafana-deployment-xxx-xxx pods to reflect the new configuration.
Now, you are ready to access the Grafana route:
$ oc get route
NAME            HOST/PORT
grafana-route   grafana-route-grafana.xxxx.appdomain.cloud
Import dashboards
Now, you export an existing dashboard from the built-in Grafana and import it into the new Grafana instance created using an operator to check if the Prometheus data source is integrated.
On the openshift-monitoring stack:
- Log in to the openshift-monitoring stack Grafana.
- Select any of the dashboards (for example, Kubernetes / Compute Resources / Workload).
Click the Share dashboard icon, open the Export tab, and copy the .json file.
Now open the newly created Grafana instance route.
Go to Dashboard > Manage > Import. Paste the .json file and click Load.
Modify the Name as required and click Import to import the dashboard.
Review the dashboard.
- Once you import the dashboard onto the Grafana UI, create an instance of the Grafana dashboard to preserve the dashboards when Grafana instances are restarted.
Create Grafana dashboard instances
- On the Grafana namespace, create an instance of the Grafana dashboard (Operators > Grafana Operator > Grafana Dashboard).
- Copy the .json file of the dashboard imported into Grafana (as mentioned in Steps 3 and 4 under Import Dashboards).
- Paste the copied .json file under spec.json in the Create Grafana Dashboard YAML space.
- Modify the metadata.name as required and click Save (a minimal sketch is shown below).
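A minimal sketch of a GrafanaDashboard resource — the apiVersion and labels are assumptions, and spec.json holds the dashboard JSON you copied (truncated here):

apiVersion: integreatly.org/v1alpha1
kind: GrafanaDashboard
metadata:
  name: my-custom-dashboard
  namespace: grafana
  labels:
    app: grafana
spec:
  name: my-custom-dashboard.json
  json: >
    {
      "title": "My Custom Dashboard",
      "panels": []
    }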
With the custom Grafana dashboard, you can create a custom dashboard as required for individual pods, workspaces, namespaces, and more.
Summary
These custom Grafana dashboards with a preconfigured data source save computing resources on the cluster and give you the option to create your own dashboard views, providing a high-level picture of compute resource usage for each application and making monitoring easy for infrastructure administrators.
|
https://developer.ibm.com/components/redhat-openshift-ibm-cloud/tutorials/custom-grafana-dashboards-from-pre-configured-prometheus-ocp-43-datasource/
|
CC-MAIN-2020-50
|
en
|
refinedweb
|
How do you transform a legacy app into one that uses K8s?
Preparing your Application for Migration
These tasks:
- Extract Configuration Data
- Offload Application State
- Implement Health Checks
- Instrument code for Logging and Monitoring
- Build Administration Logic into API
Extract Configuration Data
Any information that varies across deployments should be removed.
- service endpoints
- database addresses
- credentials
- various parameters and options
So these things should be in a settings.py file or in environment variables.
Your app becomes more portable and secure - not bound to a code change or environment.
for example:
from flask import Flask

DB_HOST = 'mydb.mycloud.com'
DB_USER = 'sammy'
over:
import os
from flask import Flask

DB_HOST = os.environ.get('APP_DB_HOST')
DB_USER = os.environ.get('APP_DB_USER')
Before running the app set the environment variables:
export APP_DB_HOST=mydb.mycloud.com
export APP_DB_USER=sammy
flask run
Offload Application State
Applications must be designed in a stateless fashion: they should not store persistent client and application data locally, so that nothing is lost when containers restart.
Application cache and session data should also be available no matter which node handles the request - for example, by using a Redis cache.
This can be achieved by attaching persistent block storage volumes to containers.
To ensure that a Pod can maintain state and access the same persistent volume after a restart, the StatefulSet workload must be used
Implement Health Checks
The control plane can repair broken applications. It does this by checking their health.
- Readiness probe - lets k8s know your application is ready for traffic
- Liveness probe - Lets k8s know when the app is healthy and running
Methods of probing:
- HTTP - kubelet makes an HTTP request
- Container command - runs a command on container - if exit code is 0 it succeeds.
- TCP - If it can establish a TCP connection
For example a flask health check:
@app.route('/health')
def return_ok():
    return 'Ok!', 200
then in your pod specification manifest:
livenessProbe:
  httpGet:
    path: /health
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 2
Instrument code for Logging and Monitoring
Error Logging and performance metrics are important.
Remember the 12-factor app: treat logs as event streams - output to STDOUT and STDERR.
Use Prometheus and the Prometheus client library to expose performance metrics.
Ensure that your logging configuration sends output to STDOUT and STDERR.
Use the RED method:
- R - Rate - number of requests received
- E - Errors
- D - Duration - time to respond
Check what to measure in Google's Site Reliability Engineering (SRE) book.
The container orchestrator will catch the logs and send them to an EFK (Elasticsearch, Fluentd, and Kibana) stack.
Build Administration Logic into API
With your app in k8s, you no longer have shell access to your app.
Deploying on Kubernetes
After containerising and publishing to a registry…
- Write Deployment and Pod Configuration Files
- Configure Pod Storage
- Injecting Configuration Data with Kubernetes
- ConfigMaps and Secrets
- Create Services
- Logging and Monitoring
Write Deployment and Pod Configuration Files
Pods typically consist of an application container (like a containerized Flask web app), or an app container and any “sidecar” containers that perform some helper function like monitoring or logging
Containers in a Pod share storage resources, a network namespace, and port space
Configure Pod Storage
Kubernetes manages Pod storage using Volumes, Persistent Volumes (PVs) and Persistent Volume Claims (PVCs)
If your application requires one persistent volume per replica, which is the case with many databases, you should not use Deployments but the StatefulSet controller, as sketched below.
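A minimal sketch of what that looks like — volumeClaimTemplates gives each replica its own PersistentVolumeClaim (the image, names and sizes are illustrative assumptions):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:12
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi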
Injecting Configuration Data with Kubernetes
Kubernetes provides env and envFrom fields. This example pod spec sets HOSTNAME to my_hostname:
spec:
  containers:
    - name: nginx
      image: nginx:1.7.9
      ports:
        - containerPort: 80
      env:
        - name: HOSTNAME
          value: my_hostname
This allows you to move configuration out of Dockerfiles and into Pod and Deployment configuration files
The advantage of this is that you can now modify these Kubernetes workload configurations without needing to test, rebuild and push the image to a registry.
You can also version these configurations
ConfigMaps and Secrets
ConfigMaps allow you to save configuration data as objects that you then reference in your Pod, so you can avoid hardcoding configuration data and reuse it across workloads.
Create a configmap with:
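For example, reusing the variables from the flask example above (the configmap name app-config is an assumption):

kubectl create configmap app-config \
  --from-literal=APP_DB_HOST=mydb.mycloud.com \
  --from-literal=APP_DB_USER=sammy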
You can also use the --from-file flag.
Secrets provide the same essential functionality as ConfigMaps, but should be used for sensitive data like database credentials as the values are base64-encoded
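A matching sketch for a Secret (the name and placeholder value are assumptions):

kubectl create secret generic app-credentials \
  --from-literal=APP_DB_PASSWORD=changeme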
Create Services
A Service provides a stable IP address that load balances requests across its containers.
4 types:
- ClusterIP - grants the Service a stable internal IP accessible from anywhere inside of the cluster
- NodePort - exposes your Service on each Node at a static port, between 30000-32767 by default. When a request hits a Node at its Node IP address and the NodePort for your service, the request will be load balanced and routed to the application containers for your service.
- LoadBalancer - uses the cloud provider's load balancing product
- ExternalName - maps a Kubernetes Service to a DNS record
To manage routing external requests to multiple services using a single load balancer, you can use an Ingress Controller
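As a concrete sketch, a NodePort Service for the flask app from the earlier examples might look like this (the label, port numbers and names are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: flask-app
spec:
  type: NodePort
  selector:
    app: flask-app
  ports:
    - port: 80          # Service port inside the cluster
      targetPort: 5000  # flask's default listening port
      nodePort: 30080   # static port exposed on every Node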
Logging and Monitoring
Parsing through individual container and Pod logs using kubectl logs and docker logs can get tedious as the number of running applications grows.
To help you debug application or cluster issues, you should implement centralized logging
In a standard setup, each Node runs a logging agent like Filebeat or Fluentd that picks up container logs created by Kubernetes.
The logging agent should be deployed as a DaemonSet so that it runs on all nodes.
Then use Prometheus and Grafana to view the data.
For added resiliency, you may wish to run your logging and monitoring infrastructure on a separate Kubernetes cluster, or using external logging and metrics services
|
https://fixes.co.za/kubernetes/converting-modernising-applications-for-k8s/
|
CC-MAIN-2020-50
|
en
|
refinedweb
|
Vehicle Detection, Tracking and Counting
Last page update: 12/04/2017 (Added Python API & OpenCV 3.x support)
Last version: 1.0.0 (see Release Notes for more info)
Hi everyone,
There are several ways to perform vehicle detection, tracking and counting. Here is a step-by-step guide to one of the simplest ways to do it:
If you use this code for your publications, please cite it as:
@ONLINE{vdtc,
  author = "Andrews Sobral",
  title = "Vehicle Detection, Tracking and Counting",
  year = "2014",
  url = ""
}
Dependencies:
* OpenCV 3.x (tested with OpenCV 3.2.0)
* GIT (tested with git version 2.7.2.windows.1)
* CMAKE for Windows (tested with cmake version 3.1.1)
* Microsoft Visual Studio (tested with VS2015)
Note: the procedure is similar for OpenCV 2.4.x and Visual Studio 2013.
Please follow the instructions below:
1) Go to Windows console.
2) Clone git repository:
git clone --recursive
3) Go to simplevehiclecounting/build folder.
4) Set your OpenCV PATH:
set OpenCV_DIR=C:\OpenCV3.2.0\build
5) Launch CMAKE:
cmake -DOpenCV_DIR=%OpenCV_DIR% -G "Visual Studio 14 Win64" ..
6) Include OpenCV binaries in the system path:
set PATH=%PATH%;%OpenCV_DIR%\x64\vc14\bin
7) Open the bgs.sln file in your Visual Studio and switch to 'RELEASE' mode
8) Click on 'ALL_BUILD' project and build!
9) If everything goes well, copy simplevehiclecounting.exe to simplevehiclecounting/ and run!
For Linux and Mac users, a CMakefile is provided to compile the source code.
~/ git clone --recursive
~/ cd simple_vehicle_counting
~/simple_vehicle_counting/ cd build
~/simple_vehicle_counting/build/ cmake ..
~/simple_vehicle_counting/build/ make
~/simple_vehicle_counting/run_simple_vehicle_counting.sh
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui_c.h>   // legacy C GUI API (cvWaitKey, cvDestroyAllWindows)
#include <opencv2/videoio/videoio_c.h>   // legacy C capture API (CvCapture, cvCaptureFromAVI)

#include "package_bgs/PBAS/PixelBasedAdaptiveSegmenter.h"
#include "package_tracking/BlobTracking.h"
#include "package_analysis/VehicleCouting.h"

int main(int argc, char **argv)
{
  /* Open video file */
  CvCapture *capture = 0;
  capture = cvCaptureFromAVI("dataset/video.avi");
  if(!capture){
    std::cerr << "Cannot open video!" << std::endl;
    return 1;
  }

  /* Background Subtraction Algorithm */
  IBGS *bgs;
  bgs = new PixelBasedAdaptiveSegmenter;

  /* Blob Tracking Algorithm */
  cv::Mat img_blob;
  BlobTracking *blobTracking;
  blobTracking = new BlobTracking;

  /* Vehicle Counting Algorithm */
  VehicleCouting *vehicleCouting;
  vehicleCouting = new VehicleCouting;

  std::cout << "Press 'q' to quit..." << std::endl;
  int key = 0;
  IplImage *frame;
  while(key != 'q')
  {
    frame = cvQueryFrame(capture);
    if(!frame) break;

    cv::Mat img_input = cv::cvarrToMat(frame);
    cv::imshow("Input", img_input);

    // bgs->process(...) internally processes and shows the foreground mask image
    cv::Mat img_mask;
    bgs->process(img_input, img_mask);

    if(!img_mask.empty())
    {
      // Perform blob tracking
      blobTracking->process(img_input, img_mask, img_blob);

      // Perform vehicle counting
      vehicleCouting->setInput(img_blob);
      vehicleCouting->setTracks(blobTracking->getTracks());
      vehicleCouting->process();
    }

    key = cvWaitKey(1);
  }

  delete vehicleCouting;
  delete blobTracking;
  delete bgs;

  cvDestroyAllWindows();
  cvReleaseCapture(&capture);

  return 0;
}
A python demo shows how to call the Python API. It is similar to the C++ demo.
To use the Python API, you should copy the "python" directory to overwrite the generated one.
~/simple_vehicle_counting/ cd build
~/simple_vehicle_counting/build/ cmake ..
~/simple_vehicle_counting/build/ make -j 8
~/simple_vehicle_counting/build/ cp -r ../python/* python/
~/simple_vehicle_counting/build/ ../run_python_demo.sh
If you have previously built the project at the project root, make sure there are no previously generated libraries in the "python" directory by running make clean.
12/04/2017: Added OpenCV 3.x support. Removed vs2013 template project (use CMAKE instead).
07/04/2017: Added Python API, thanks to @kyu-sz.
Version 1.0.0: First version.
|
https://xscode.com/andrewssobral/simple_vehicle_counting
|
CC-MAIN-2020-50
|
en
|
refinedweb
|
Pod::Man is a module to convert documentation in the POD format (the preferred language for documenting Perl) into *roff input using the man macro set. The resulting *roff code is suitable for display on a terminal using nroff(1), normally via man(1)...
RRA/podlators-4.14 - 04 Jan 2020 23:32:29 UTC - Search in distribution
- pod2man - Convert POD data to formatted *roff input
- Pod::Text - Convert POD data to formatted text
- perlpodstyle - Perl POD style guide
- 1 more result from podlators »
This is a "plug-in" class that allows Perldoc to use Pod::Man and "groff" for reading Pod pages. The following options are supported: center, date, fixed, fixedbold, fixeditalic, fixedbolditalic, quotes, release, section (Those options are explained ...MALLEN/Pod-Perldoc-3.28 - 16 Mar 2017 01:14:07 UTC - Search in distribution
- perldoc - Look up Perl documentation in Pod format.
- Pod::Perldoc::ToNroff - let Perldoc convert Pod to nroff
Send a module's pod through pod2text and your pager. This is mostly here for people too lazy to type $ pod2text `pmpath CGI` | $PAGER...
MLFISHER/pmtools-2.2.0 - 15 Mar 2018 15:25:35 UTC - Search in distribution
This tool generates POD documentation for all of the commands in a tree for a given executable. This command must be run from within the namespace directory....
BRUMMETT/UR-0.47 - 06 Aug 2018 14:29:10 UTC - Search in distribution
ack is designed as an alternative to grep for programmers. ack searches the named input FILEs or DIRECTORYs for lines containing a match to the given PATTERN. By default, ack prints the matching lines. If no FILE or DIRECTORY is given, the current di...
PETDANCE/ack-v3.4.0 - 30 Jun 2020 04:13:07 UTC - Search in distribution
ETJ/PDL-2.025 - 19 Nov 2020 13:17:38 UTC - Search in distribution
- PDL::PP - Generate PDL routines from concise descriptions
- PDL::BadValues - Discussion of bad value support in PDL
JOHNH/Fsdb-2.71 - 17 Nov 2020 05:00:30 UTC - Search in distribution
- PDLA::PP - Generate PDLA routines from concise descriptions
- PDLA::BadValues - Discussion of bad value support in PDLA
- todo - Perl TO-DO list
- perlretut - Perl regular expressions tutorial
- perlhist - the Perl history records
- 16 more results from perl »
ASPIERS/Stow-v2.3.1 - 28 Jul 2019 12:27:30 UTC - Search in distribution
"pp2html" creates a set of HTML files for a foilset based on a simple textfile slide_text. Due to its formatting features and the capability of creating navigation, table of contents and index pages, "pp2html" is also a suitable tool for writing onli...LDOMKE/PerlPoint-Converters-1.0205 - 08 Feb 2006 15:33:27 UTC - Search in distribution
Does this happen often with you: you install a CPAN module: % cpanm -n Finance::Bank::ID::BCA The CPAN distribution is supposed to contain some CLI utilities, but it is not obvious what the name is. So you do: % man Finance::Bank::ID::BCA to find out...
PERLANCAR/App-PMUtils-0.734 - 13 Jun 2020 00:59:10 UTC - Search in distribution
|
https://metacpan.org/search?q=Pod%3A%3Aman
|
CC-MAIN-2020-50
|
en
|
refinedweb
|
Batch.com iOS SDK
About
Batch is the leading mobile engagement & push notification suite engineered for the mobile era.
Installation
Carthage
github "BatchLabs/ios-sdk"
CocoaPods
pod 'Batch'
Manual
- Download the SDK
- Drag and drop the xcframework into your project
- Add libsqlite3, lz and Batch to Linked Frameworks and Libraries in your project settings
- Add -ObjC in Other Build Flags
- Enjoy
Note: If you can't add -ObjC, you can use -force_load:
Usage
Importing the framework
If you're in Swift:
import Batch
or Objective-C
@import Batch;
or
#import <Batch/Batch.h>
Using it
Describing what Batch can do is a little too big for this README. Read our setup documentation to follow a step by step tutorial for integrating Batch features into your app.
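As a quick taste, the SDK is usually started from the app delegate. A minimal Swift sketch follows — the API key is a placeholder, and the exact start method should be checked against the setup documentation for your SDK version:

import UIKit
import Batch

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Start Batch as early as possible (the API key is a placeholder)
        Batch.start(withAPIKey: "YOUR_API_KEY")
        return true
    }
}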
Releases
1.16.0 -
A migration guide from Batch 1.15 and lower is available here.
BREAKING: This version drops support for iOS 8 and 9. Batch requires Xcode 12 and iOS 10.0 or higher.
Batch now depends on libz. This might require a change in your project:
- Cocoapods: The podspec has been updated to add the required linker flag. No action is needed.
- Carthage/Manual integration: When building, Xcode should automatically link your project with libz. If you get a compilation error, add -lz to your linker flags, or add libz to "Frameworks and Libraries" in your app's target.
Batch and Apple Silicon
In order to support Apple Silicon, Batch will adopt the XCFramework format: it is not possible to support the iOS Simulator on Apple Silicon macs in a "fat" framework. What it means for you:
- Cocoapods users will be migrated to the XCFramework automatically
- Carthage users will stay on the old distribution style until Carthage supports XCFrameworks.
- Manual integrations should update to XCFramework as soon as possible. Batch will be distributed in both formats for a couple of versions.
Note that the armv7s slice is not included in the XCFramework distribution.
BatchExtension
BatchExtension isn't distributed with the SDK zip anymore. It will be released on github soon after this release.
Core
- Batch is now compatible with iOS 14's tracking consent and IDFA changes.
- Added UIScene support. If your application uses it, you must add a UNUserNotificationCenterDelegate, otherwise Direct Opens, Deeplinks and Mobile Landings will not work: UIScene disables legacy push application delegate methods.
- eSIMs are now supported. Phones that only have an eSIM will now properly report back their carrier, if the feature hasn't been disabled.
- More nullability annotations have been added. As those annotations match Apple's own, we do not expect source compatibility to break.
- Support for TLS versions older than 1.2 has been removed.
- Added a new builtin action named batch.ios_tracking_consent, which requests tracking consent via AppTrackingTransparency. More info in the documentation.
Event Dispatchers
BatchEventDispatcherTypeNotificationOpen is no longer broadcast when the application is processing a background push.
Inbox
- Enhanced the way the SDK fetches notifications from the servers to greatly reduce bandwidth usage. No public API change.
Push
- Add a new method BatchPush.isBatchPush to check if a received push comes from Batch.
- Added BatchUNUserNotificationCenterDelegate, an UNUserNotificationCenterDelegate implementation that forwards events for Batch. Call BatchUNUserNotificationCenterDelegate.register() to automatically set it as your delegate. BatchUNUserNotificationCenterDelegate can display notifications when your app is in the foreground using the showForegroundNotifications property.
- Batch will now emit a warning in the console when your app does not register a UNUserNotificationCenterDelegate by the time application:didFinishLaunchingWithOptions: returns. Implementing this delegate improves Batch's handling of various features that rely on notification interaction feedback, such as analytics or deeplinks: it is strongly recommended that you implement it if you don't already.
- Batch now emits the BatchPushOpenedNotification NSNotification when a notification has been opened by the user. This deprecates BatchPushReceivedNotification: see BatchPush.h for more info.
- In automatic mode, application:didReceiveRemoteNotification:fetchCompletionHandler:'s completion handler is no longer called on your behalf when a deeplink is opened.
Messaging
- The "modal" format now correctly triggers actions after it has been dismissed. This will have an impact on when the custom actions are executed, making them more consistent with the Fullscreen format.
- This fixes an issue where deeplinks could not be opened in the application using SFSafariViewController with modal formats.
- The image format is now properly sized when in a freeform window (iPad in Split View, Catalyst)
- Fix a rare race condition with the interstitial format where it would fail to layout properly when the image server could not be reached.
- Modally presented messages will not be allowed to be dismissed unless they're the frontmost view controller. This fixes an issue where a message with an autodismiss might dismiss a view controller presented on top of it.
- Improved dismissal logic. While automatic dismissal may fail in some rare occasions due to iOS refusing to dismiss a modal view controller when one is animating, it will not prevent the user from manually dismissing the view anymore.
User
- Added new strongly typed methods for setting attributes on BatchUserDataEditor. They greatly enhance Swift usage of the API. See setBooleanAttribute:forKey:error and similar methods (set(attribute: Bool, forKey: String) in Swift).
- Those new methods return validation errors: you can now know if your attribute key/value does not pass validation and will be discarded.
- nil values are not supported in the new methods. Use removeAttributeForKey: explicitly.
Debug
- Clicking the "share" buttons in the debug views no longer crash on iPads.
1.15.2 -
Compiled with Xcode 11.5. This minor release is the last one to support iOS 8 and 9.
Event Dispatcher
- Fix an issue where event dispatchers might not be called.
User
- Fix an issue where events that had not been sent to the server would be lost when the app's process was killed.
1.15.1 -
Compiled with Xcode 11.5
User
- Added support for Date in BatchEventData.
- BatchEventData now supports up to 15 attributes (from 10).
1.15.0 -
Requires Xcode 11
This release has been built using Xcode 11.3.1.
This is the LAST release that supports iOS 8 and 9. Future releases will require iOS 10+.
Core
- Changed how notification status is reported: The SDK will now tell Batch's servers that notifications are enabled if:
- The app has requested and holds a provisional authorization.
- The user has disabled banners but kept lockscreen or notification center delivery enabled.
- Added support for external analytics using BatchEventDispatcher. See the documentation for more details.
Messaging
- New enum property BatchMessagingContentType contentType on class BatchInAppMessage to help cast to the right content class.
- A new optional delegate method BatchMessagingDelegate.presentingViewControllerForBatchUI allows you to specify which view controller to display Batch messages on.
- Fixed an issue where the last line of a label could be cut.
- Improved accessibility of all message formats
Inbox
- Added the markNotificationAsDeleted: method on BatchInboxFetcher, allowing the deletion of notifications
1.14.2 -
This release has been built using Xcode 11.1.
Messaging
- Fix an issue where mobile landings might fail to be shown after opening a push for a backgrounded app on iOS 13
- Fix an issue where BatchDeeplinkDelegate does not fire in certain conditions when opening a push from the lockscreen
- Fix an issue where banners would move more than expected when swiping on them on iOS 13
-
This release officially supports iOS 13. It has been built using and requires Xcode 11.0 GM 2.
This is the LAST release that supports iOS 8 and 9. Apple has already removed their simulators from Xcode 11.
Future releases of the SDK will be distributed as a dynamic framework, as opposed to the current static framework, and will require the Swift runtime in your app.
It will be distributed in the XCFramework format for supported package managers and manual downloads.
Messaging
- Add UIScene support
- Add an analytics event for iOS 13's modal swipe-to-dismiss
1.14.0 -
Core
- Bug fix: deeplinks from actions are properly sent to Deeplink delegate method
User
- High level data (language/region/custom user id) can now be read back.
- User data (attributes and tags) can now be read back. Documentation
Messaging
Added support for two new UI formats: Modal, and Image. See the documentation for more information.
Added support for GIFs in Mobile Landings and In-App messages
Added support for rich text.
Added support for text scrolling in all formats. Banners will now have a maximum body height of ~160pt, and their text will scroll.
Deeplinks can now be opened directly in the app using an SFSafariViewController for Push Notifications, Mobile Landings and In-App Messages
Added new methods on the messaging delegate allowing you to track more information such as close/autoclose and button presses. More info in the Mobile Landings documentation.
In Swift, BatchMessaging.setAutomaticMode has been renamed to BatchMessaging.setAutomaticMode(on:)
Push
- BatchPushAlertDidResignNotification is sent when the user dismisses the remote notification authorization alert. The notification's userInfo dict contains the user's choice in BatchPushNotificationDidAcceptKey
1.13.2 -
Core
- Fixed a rare race condition crash that could happen when tracking an event while In-App campaigns are fetched from the server.
User
- Fixed an issue where adding tags would not work with 1.13.1
1.13.1 -
Re-release of 1.13.0, compiled with Xcode 10.1. Batch now includes an arm64e slice.
Note: This release comes with an update to the included BatchExtension framework.
1.13.0 -
Batch SDK 1.13.0 Release
Core
Fixed a rare crash that could happen when Batch's internal database failed to initialize in [BAUserDataManager startAttributesSendWSWithDelay:].
Opting-out from the SDK now sends an event notifying the server of this. If a data wipe has been asked, the request will also be forwarded to the server. New methods have been introduced to be informed of the status of the request to update your UI accordingly, and possibly revert the opt-out if the network is unreachable.
Added the BatchDeeplinkDelegate protocol. Adopting it allows you to manually process deeplink open requests from Batch, rather than having to implement openURL. See Batch.deeplinkDelegate for more information.
Push
- The SDK will report whether notifications are allowed, denied, or undecided more accurately on iOS 10 or higher
- Added a method to easily open iOS' settings for the current application's notification settings
- Split +[BatchPush registerForRemoteNotifications] into more explicit methods.
- +[BatchPush requestNotificationAuthorization] shows the system notification authorization prompt, and then fetches the push token. Equivalent to calling +[BatchPush registerForRemoteNotifications].
- +[BatchPush refreshToken] will only ask iOS for a new token. This needs to be called on every application start to handle upgrades from versions without Batch, or if iOS changes the push token.
- Added support for iOS 12's notification changes:
- You can now ask for provisional notification authorization using +[BatchPush requestProvisionalNotificationAuthorization]. This method does nothing on versions lower than iOS 12.
- You can now ask Batch to tell iOS that your app supports opening an in-app notification settings view from the system settings by calling +[BatchPush setSupportsAppNotificationSettings:true]. Note that this still requires you to implement a UNUserNotificationCenterDelegate, and the appropriate method to open the settings.
Events
Event data support has been overhauled. As a result:
- Introduced BatchEventData. Use this class to attach attributes and tags to an event. See this class' documentation for more information about limits.
- +[BatchUser trackEvent:withLabel:data:] has been deprecated
- Calls to this method will log deprecation warnings in the console
- Legacy data (NSDictionary) will be converted to BatchEventData. Same data format restrictions apply: Any key/value entry that can't be converted will be ignored, and logged. Tags are not supported
- Introduced +[BatchUser trackEvent:withLabel:associatedData:] which uses BatchEventData, replacing the deprecated method.
- Swift users: Since Swift allows methods to be overloaded by type, the method and argument names do not change: simply replace your data dictionary with a BatchEventData instance. Example:
BatchUser.trackEvent("event_name", withLabel: "label", data: BatchEventData())
|
https://swiftpack.co/package/BatchLabs/Batch-iOS-SDK
|
CC-MAIN-2020-50
|
en
|
refinedweb
|
Multi-class text categorization using state-of-the-art pre-trained contextualized language models, e.g. BERT.
Project description
Text2Class
Build multi-class text classifiers using state-of-the-art pre-trained contextualized language models, e.g. BERT. Only a few hundred samples per class are necessary to get started.
Background
This project is based on our study: Transfer Learning Robustness in Multi-Class Categorization by Fine-Tuning Pre-Trained Contextualized Language Models.
Citation
To cite this work, use the following BibTeX citation.
@article{transfer2019multiclass, title={Transfer Learning Robustness in Multi-Class Categorization by Fine-Tuning Pre-Trained Contextualized Language Models}, author={Liu, Xinyi and Wangperawong, Artit}, journal={arXiv preprint arXiv:1909.03564}, year={2019} }
Installation
pip install text2class
Example usage
Create a dataframe with two columns, such as 'text' and 'label'. No text pre-processing is necessary.
import pandas as pd
from text2class.text_classifier import TextClassifier

df = pd.read_csv("data.csv")
train = df.sample(frac=0.9, random_state=200)
test = df.drop(train.index)

cls = TextClassifier(
    num_labels=3,
    data_column="text",
    label_column="label",
    max_seq_length=128
)

cls.fit(train)
predictions = cls.predict(test["text"])

To use a different pre-trained model, specify it via hub_module_handle (available models are listed on BERT's GitHub):

cls = TextClassifier(
    num_labels=3,
    data_column="text",
    label_column="label",
    max_seq_length=128,
    hub_module_handle=""
)
Contributing
Text2Class is an open-source project founded and maintained to better serve the machine learning and data science community. Please feel free to submit pull requests to contribute to the project. By participating, you are expected to adhere to Text2Class's code of conduct.
Questions?
For questions or help using Text2Class, please submit a GitHub issue.
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/text2class/
The evolution of D has been a process of steady, if not always speedy, improvement. In the early days, Walter was the only person working on the language, the reference compiler, and the standard library, though he did accept contributions. Today, numerous people contribute to or maintain a piece of every aspect of D’s development. Semi-formal processes exist where none did before. Things have come a long way.
As with anything in life, those who come later are unaware of the improvements that came before. If you’ve just stepped off the boat into D Land, you don’t remember the days before core D development moved to github, or the time before the first release of Visual D, the D plugin for Visual Studio developed and maintained by Rainer Schuetze. All you’re going to see is that your pet Pull Request hasn’t yet been merged, or that Visual D doesn’t allow you to do all of the things that the core of Visual Studio allows. You’ll rightly head over to the forums to complain about the PR merge process, or request a new feature for Visual D (or gripe about its absence).
And so the story goes, ad infinitum. Along the way, the community loses some members who once found promise in the language, but become frustrated because their pet peeves have not been addressed. And herein lies the point of this post.
The D Programming Language is a community driven language. As such, there is no formal process for creating opportunities to fix and improve the areas that need fixing and improving. Walter has his head down in compiler code and often barely has time to come up for air. Andrei is stretched thin in the additions he has planned for Phobos, the work he does on behalf of the D Foundation, and his numerous other commitments. It’s up to the community to get things moving on other fronts.
Most often, improvements come at the instigation of one highly motivated individual, who either gets busy with the rolling up of sleeves or keeps pushing until someone better positioned takes up the challenge. Still others want to contribute, but aren’t sure where to begin. One place is the documentation, as described the last time this blog touched on this theme. For other areas, the place to look is the D Vision Document.
Andrei started the Vision Document as a means to let the community know the direction in which he and Walter expect the language to go, with plans to release an updated document biannually. Each document lists progress that has been made in the preceding six months and the goals for the coming six. The first document focused on goals the D leadership will personally enable or make happen. The latest document, which was put up for feedback recently in the forums, includes additional goals the leadership strongly believe are important for the success of the D language. This new addition is the result of an initiative by community member Robert Schadek.
Robert took the simple step of creating a Wiki page to collate the wishlist items that Walter and Andrei mention in the forum now and again. These are things they would like to see completed, but have no time to make happen themselves. Andrei merged these items into the Vision Document so that there is only one document to maintain and one source for the information. This is a perfect example of a low-barrier community contribution that has led to an overall improvement (though Robert is no stranger to higher-barrier contributions) and shows how there are ways other than submitting Pull Requests to improve the D Programming Language experience.
A higher-level means of helping to improve the language is through D Improvement Proposals. A DIP is a formal proposal to make changes or additions to the language. The author of the DIP need not be the implementer. For some time, these have primarily been loosely maintained in an informal process at the D Wiki. Recently, Mihails Strasuns decided the process needed better management, so he created a github repository for DIPs as part of a new process to address some shortcomings of the old system. Then he made a forum announcement about it, including the news that he is, for now, volunteering his services as the new DIP manager.
So, to recap, if you are itching to contribute to D in some way that you know will have an impact, the latest Vision Document is the place to go for inspiration. As I write, that’s the 2016H2 document. If you have an idea for a new feature or improvement to the language, creating a DIP is the way to go (a future post here will say more about the DIP process). There are numerous other areas where contributions are welcome, like fixing or reporting bugs, reviewing PRs, and taking the initiative to contribute to the D process like Robert and Mihails have done.
If you don’t have the time, motivation, or inclination to contribute, that’s fine, too. Just keep in mind when you come to the forums to vent about one frustration or another (and please, feel free to do so) that much of what exists in the D ecosystem today, along with nearly every improvement that will come for the foreseeable future, is the result of someone in the community expending their own time and effort to make it so, usually with no financial incentive (the exception being bug bounties). Please let that knowledge color your perspective and your expectations. If you can’t, or won’t, work to resolve shortcomings in the language, the processes, or the ecosystem yourself, your goal in bringing them to the community should be to motivate someone else to do so. Not only will that increase the odds of fixing your issues, it will help all D users inch closer to the ultimate D Nirvana.
3 thoughts on “The DLang Vision and Improvement Process”
It is nice to know that DIP management will be improved. Is there any formal procedure for approving new std library packages?
As far as I’m aware, the semi-formal process in place has worked fairly well so far. However, the addition of the std.experimental namespace has mixed things up a bit. A package is proposed in the forum, someone volunteers to manage the voting, if it’s approved it goes into std.experimental for a time. Since then, no new packages have graduated into std. Not sure what the process is for getting out of std.experimental. It may be laid out on the Wiki somewhere. I suggest you ask about this is the forum. If there’s anyone with a definitive answer, you’re more likely to find them there.
Thanks for your work!
https://dlang.org/blog/2016/07/13/the-dlang-vision-and-improvement-process/
They?
Noisy Cricket, perhaps?
I find the nesting SD adapters (Micro->Mini->SD) oddly comforting.
For added yucks you can get CF->SD adapters, and then put the CF adapter in a PCMCIA card.
I don't know if you can still find these anywhere, but those and the CF/IDE adapters are great old-laptop-ressurection widgets.
I can't believe I actually have both -- an SD to CF adapter and CF to PCMCIA adapter. However, I no longer have a PCMCIA slot. I have instead photographers' storage that reads straight from CF.
I think the memory stick micro for my phone is smaller than that. I'd take it out for a photo, but i'm afraid i would lose it.
How annoying is the keyboard at that size?
Is PalmOS still the best the world has to offer as a phone OS? I despair a little...
The keyboard itself is only slightly smaller than on the 700; it's fine.
And: yes. It is.
Palm . . . does it then run Dali?
(As in; or have they somehow 'invented themselves' beyond earlier standards, not as in; programmer's time and resources have no value; please fix me.)
Yeah, DaliClock works fine on modern PalmOS devices. Though it runs in 160x160 instead of 320x320, because it runs as a 68020-emulated app instead of a native ARM app, since there exists no ARM-targeted PalmOS development environment that runs on MacOS or Linux.
It works in color, though.
What won't run from an ARMlet? The graphics initialization?
I have never heard of an ARMlet before today, and still haven't found any documentation on what they are or how to use them, but if you can tell me how to compile my code (on a Mac) such that at runtime (on the Palm device), WinGetBitmap() gives me a 320x320 frame buffer instead of a 160x160 frame buffer, well, then we'd be getting somewhere.
I had been under the impression that the only way to emit ARM executables for PalmOS was to be running NT.
Download and install PRC-tools OSX first.
Then look at this simple sample code which is far better than the instructions for explaining what to do. Basically in your armlet .c:
#include <PceNativeCall.h>
#include <Standalone.h>
STANDALONE_CODE_RESOURCE_ID (id)
Where id is usually 1 for just one armlet; compile it with:
arm-palmos-gcc -Wall -palmos5 -O2 -c myarmlet.c
link it with:
arm-palmos-gcc -nostartfiles -e start -o myarmlet myarmlet.o
where start is a function of the form unsigned long start(const void *emulStateP, char *userData68KP, Call68KFuncType *call68KFuncP) {...} containing your code.
In your 68k application wrapper.c:
#include <PalmOS.h>
#include <PceNativeCall.h>
and get the function pointer to start from
MemHandle startH = DmGetResource('armc', id);
void *startp = MemHandleLock(startH);
and call it with:
PceNativeCall(startp, NULL);
compile the wrapper with:
m68k-palmos-gcc -Wall -palmos5 -O2 wrapper.c
then hook them together with:
build-prc -n Armlets -c ARML wrapper myarmlet
For accessing WinGetBitmap(), do this or this with libarmboot.a.
Good luck.
oops, compile the wrapper so it doesn't end up as a.out:
m68k-palmos-gcc -Wall -palmos5 -O2 wrapper.c -o wrapper
oh and libarmboot.a is here
Ok, I've looked at this stuff and it still leaves me saying what the HELL?
I have no idea what's going on here. Why is there no documentation on this? When would I use any of these things and what exactly do I accomplish by so doing?
The docs are here I've never built Palm apps on OSX so I didn't know that you wouldn't get the docs with the OSX PRC-Tools.
If you put your code inside the armlet's start function as above, and declare WinGetBitmap() as
PACE_CLASS_WRAPPER(BitmapType *) WinGetBitmap(WinHandle winHandle)
{
PACE_PARAMS_INIT()
PACE_PARAMS_ADD32(winHandle)
PACE_EXEC_RP(sysTrapWinGetBitmap,BitmapType *)
}
and link the armlet against libarmtools.a from the above site, you should get a 320x320 bitmap.
The answer to the underlying question is that phone manufacturers are not motivated to document or otherwise support their APIs because (1) nobody buys phones based on the number of applications available for them, and (2) they are very ambivalent about competition for application sales. It's the same way with Symbian, WinCE, and even Java ME has sucky docs and stupid hoops you have to jump through compared to their mainstream x86 etc. stuff.
Damn this is hairy.
I assume that the only way to make the same executable work on older PalmOS devices (including POSE) is to include two copies of my code, one in 68k and one in ARM, and run the 68k version if the ARM version fails to load... but do all of those -palmos5 command-line arguments mean that I'm building executables that won't even get that far on older systems?
And will I have to do that PACE nonsense for every system call I make? (WinScreenMode, ErrDisplay, etc.)?
(I haven't actually gotten any of this shit to link yet, so I can't tell for myself.)
Yes :(
How annoying is the keyboard at that size?
It doesn't matter. You're not in the USA, so forget about the Centro. A Nokia E90 is the only sane option, and the keyboard on that is fine (the only downside is that it doesn't play nicely with US networks for anything other than voice calls). Mine was 20 quid on O2.
For now - a GSM Centro has been spotted, leaked, and photographed several times now. A release seems inevitable.
Perhaps so, but an E90 will still remain the only sane option. The problem isn't the lack of a GSM Centro, it's the lack of CDMA E90. Which is fine for those of us in Europe, as we get the better phone...
but europeans (including romania) don't want to buy stuff from nokia anymore, do we?
Heh, europeans complaining about globalization. Thanks, you made my evening.
Once you're a member of the United Nations, NATO, the G8, the G4, Kyoto signatory, OECD, WTO, and the most powerful european country in the IMF, you've long ago signed away the right to complain about globalization.
au contraire, mon ami. the europeans may complain, because the globalization rules are not made by them but the worldwide capital and market. the role of given voters of given countries is to neglect. ...so, no pointy fingers when it comes to world politics, please. who governs your country?
You get the government you deserve.
It is more than a driver issue - it is a Palm OS issue. Palm OS has a 32-bit addressing limit == 4GB.
A SD card is not a RAM chip though. From a software point of view it is a storage device, similar to a hard drive. 32 bits operating systems are not limited to 4G storage. So this is a driver/filesystem issue.
Palm OS doesn't know from storage devices. Remember, the guts of Palm OS are ancient and it all evolved from running on RAM. Everything since then has been tacked on, including their file system support. The 4GB limit is deep in Palm OS, it can't address anything beyond 32-bits. That was one of the things Palm OS 6 was supposed to fix with new underpinnings. And now Palm OS II, their Linux-based project which won't be out until 2009 (if anyone still cares by then - and I say that as a Palm OS user myself), is the Next Best Hope.
Palm OS Garnet, the current incarnation, is seriously creaky at this point. It was meant to be replaced several years ago, so it has soldiered on well past its design life. But it is also why we don't have 3G GSM Palm OS phones - Garnet can't handle the requirements of UMTS/HSDPA without a major overhaul. The implementation differences in the systems allowed it to handle CDMA/EV-DO, but just barely.
So, it turns out that if I manually put 8GB of data on the card, the Centro can access it all: it sees all the file names, and I can play all 8GB of MP3s. But it still reports it as a 4GB card with less-than-4GB on it.
So I suppose the reason Missing Sync refuses to put 8GB of data on the card is because it's believing the PalmOS lie about remaining space.
Hopefully this means it's a relatively easy fix, which means maybe it will actually happen some day (here's me holding my breath).
Funky - the FAT32 table is accessible, I guess that gives the Palm a back door.
I could swear I saw something about a Centro-only software update that lets you use a >4G card. I ignored it because I don't have a Centro, and now I can't find it.
Why did you choose a Centro instead of a 755p?
Because it's smaller.
"It's like a joke: like the "Noisy Cricket" gun from Men in Black."
These actually remind me of the 'microsofts' that people stuck in their head in Gibson's Neuromancer oevre.
If I remember correctly, the original ad campaign for Sony's Memory Stick format (from several years ago now) featured photos of shaved heads with memory slots at the base of their neck and volume controls behind their ears.
Is that what you're sporting in that usericon?
That's my emergency caffeine reserve.
It is the new hotness. The new teensy awesome hotness.
MicroSD is coming way too close to the Event Horizon of shrinking storage mediums: the limit beyond which it is possible to accidentally inhale your last three months' worth of photography.
(And man do I wish the Centro were available on Verizon. Oh well, my old 700p seems to be mostly soldiering on...)
Seriously, I've taken pills bigger than this thing. I've found larger items in my nose.
Pshaw. I've had much larger fragments of my anatomy surgically removed (also through my nose, BTW).
Luckily, I have a large nose.
Someone needs a manicure.
There must be something screwy/ambiguous about the SD standard. So many phones/PDAs seem to have a hard coded limit of 1GB, 2GB or 4GB.
I bought a 2GB miniSD card about a year ago that had a formatted capacity of only 1GB. I thought I had a knockoff card, but it turned out the card was "partitioned" in the factory to a single 1GB volume so it would work with all the buggy phones out there. SDFix2G fixed the card and my phone saw all 2GB of it. There might be a similar utility for the 8GB cards.
Well, there is the SDHC clusterfuck, but that only causes issues at 2GB. I suspect most of the rest of it is just lazy programmers saying "nobody could possibly need more than $X GB."...
Well, it's definitely formatted as 8GB; when I mount it on my Mac directly, it sees the full size.
I remember reading (and, of course, I don't remember where) that MicroSD cards are not meant to be removable. They're sort of a user-decides-how-much-storage-they-want-but-don't-really-take-it-out sort of idea. Transfers from card to computer were supposed to be via USB or Bluetooth or WiFi or some other magic/nonsense.
"Hello, my name is LohPhat and I have a problem."
I dumped Treos 2 years ago because the damned OS still has no MMU features to keep a buggy app from trashing memory. That and poor QA + suicidal firmware updates.
I toked on the big corporate weenie and got a crackberry. Mmm java OS. Useable. No more stylus.
My new 8320 from t-mobile has UMA () -- basically voip and data over wifi. Since I travel a lot out of the country it has dropped my phone bills dramatically -- no more $1.30 (or $5 in Russia) per minute roaming fees. On wifi the phone thinks it's in the US and all US calls are local and free (don't count towards my minute plan). All t-mobile hotspots auto-connect, again free calling; so many airports and Starbucks to choose from.
The Nokia tablets already have a VM or two to choose from to emulate Palms. Upgrading might be slicker outside Palm soon.
Not sure whether you care, but Matt Siber moved the URL for the Untitled Project (and appears to have some other newer stuff along the same lines, not all of which I'm sure was there four years ago).
Have you verified that you can only see 4GB of files on the card. IIRC, the drivers in the Centro can access the full 8GB of storage on the card, but the VFSVolumeSize API in Palm OS Garnet only can return unsigned 32-bit values for the total space and space used.
Well, Missing Sync believes the card is 4GB and won't put more MP3s on it.
Did you get the delicious pink flavor?
-bZj
the same specs as the treo you say...
didn't you have problems with your sd not working on your treo?
if those are your fingers holding the microsd card in the photo please for the love of god either trim or paint your long fingernails!!
you need those in order to grasp onto something that small. (no not talking about a weenie)
https://www.jwz.org/blog/2008/01/treo-700p-old-and-busted-centro-new-hotness/
Originally published in my blog:
When talking about "bad code" people almost certainly mean "complex code" among other popular problems. The thing about complexity is that it comes out of nowhere. One day you start your fairly simple project, the other day you find it in ruins. And no one knows how and when did it happen.
But, this ultimately happens for a reason! Code complexity enters your codebase in two possible ways: with big chunks and incremental additions. And people are bad at reviewing and finding both of them.
When a big chunk of code comes in, the reviewer will be challenged to find the exact location where the code is complex and to decide what to do about it. Then, the reviewer will have to prove the point: why is this code complex in the first place? And other developers might disagree. We all know these kinds of code reviews!
The second way complexity gets into your code is incremental addition: when you submit one or two lines to an existing function. And it is extremely hard to notice that your function was alright one commit ago, but now it is too complex. It takes a good portion of concentration, reviewing skill, and good code navigation practice to actually spot it. Most people (like me!) lack these skills and allow complexity to enter the codebase regularly.
So, what can be done to prevent your code from getting complex? We need to use automation! Let's make a deep dive into the code complexity and ways to find and finally solve it.
In this article, I will guide you through the places where complexity lives and show how to fight it there. Then we will discuss how well-written simple code and automation enable the "Continuous Refactoring" and "Architecture on Demand" development styles.
Complexity explained
One may ask: what exactly is "code complexity"? And while the term sounds familiar, there are hidden obstacles in understanding where exactly the complexity is located. Let's start with the most primitive parts and then move to higher-level entities.
Remember that this article is named "Complexity Waterfall"? I will show you how complexity overflows from the simplest primitives into the highest abstractions.
I will use python as the main language for my examples and wemake-python-styleguide as the main linting tool to find the violations in my code and illustrate my point.
Expressions
All your code consists of simple expressions like a + 1 and print(x). While expressions themselves are simple, they might unnoticeably overflow your code with complexity at some point. Example: imagine that you have a dictionary that represents some User model and you use it like so:
def format_username(user) -> str:
    if not user['username']:
        return user['email']
    elif len(user['username']) > 12:
        return user['username'][:12] + '...'
    return '@' + user['username']
It looks pretty simple, doesn't it? In fact, it contains two expression-based complexity issues. It overuses the 'username' string and uses the magic number 12 (why do we use this number in the first place, why not 13 or 10?). It is hard to find these kinds of things all by yourself. Here's how the better version would look:
#: That's how many chars fit in the preview box.
LENGTH_LIMIT: Final = 12

def format_username(user) -> str:
    username = user['username']
    if not username:
        return user['email']
    elif len(username) > LENGTH_LIMIT:  # See? It is now documented
        return username[:LENGTH_LIMIT] + '...'
    return '@' + username
There are different problems with expressions as well. We can also have overused expressions: when you use the some_object.some_attr attribute everywhere instead of creating a new local variable. We can also have too complex logic conditions or too deep dot access.
Solution: create new variables, arguments, or constants. Create and use new utility functions or methods if you have to.
Lines
Expressions form code lines (please, do not confuse lines with statements: a single statement can take multiple lines, and multiple statements might be located on a single line).
The first and the most obvious complexity metric for a line is its length. Yes, you heard it correctly. That's why we (programmers) prefer to stick to the 80-chars-per-line rule, and not because it was previously used in teletypewriters. There are a lot of rumors about it lately, saying that it does not make any sense to use 80 chars for your code in 2k19. But that's obviously not true.
The idea is simple. You can have twice as much logic in a line with 160 chars as in a line with only 80 chars. That's why this limit should be set and enforced. Remember, this is not a stylistic choice. It is a complexity metric!
The second main line complexity metric is less known and less used. It is called Jones Complexity. The idea behind it is simple: we count code (or ast) nodes in a single line to get its complexity. Let's have a look at an example. These two lines are fundamentally different in terms of complexity but have the exact same width in chars:
print(first_long_name_with_meaning, second_very_long_name_with_meaning, third)
print(first * 5 + math.pi * 2, matrix.trans(*matrix), display.show(matrix, 2))
Let's count the nodes in the first one: one call, three names. Four nodes in total. The second one has twenty-one ast nodes. Well, the difference is clear. That's why we use the Jones Complexity metric: it allows the first long line and disallows the second one, based on internal complexity, not just raw length.
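To make the idea concrete, here is a minimal sketch of counting ast nodes with the standard library. The helper is mine, not the linter's actual implementation, and its counts are a bit higher than the hand count above because ast also emits load/store context nodes:

import ast

def jones_complexity(line: str) -> int:
    """Count the ast nodes that a single line of code produces."""
    tree = ast.parse(line)
    # Subtract one so the wrapping Module node is not counted.
    return len(list(ast.walk(tree))) - 1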
What to do with lines with a high Jones Complexity score?
Solution: Split them into several lines or create new intermediate variables, utility functions, new classes, etc.
print(
    first * 5 + math.pi * 2,
    matrix.trans(*matrix),
    display.show(matrix, 2),
)
Now it is way more readable!
Structures
The next step is analyzing language structures like if, for, with, etc. that are formed from lines and expressions. I have to say that this point is very language-specific. I'll showcase several rules from this category using python as well.
We'll start with if. What can be easier than a good old if? Actually, if starts to get tricky really fast. Here's an example of how one can reimplement switch with if:
if isinstance(some, int):
    ...
elif isinstance(some, float):
    ...
elif isinstance(some, complex):
    ...
elif isinstance(some, str):
    ...
elif isinstance(some, bytes):
    ...
elif isinstance(some, list):
    ...
What's the problem with this code? Well, imagine that we have tens of data types that should be covered, including custom ones that we are not aware of yet. Then this complex code is an indicator that we are choosing the wrong pattern here. We need to refactor our code to fix this problem. For example, one can use typeclasses or singledispatch. They do the same job, but nicer.
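For instance, functools.singledispatch from the standard library replaces the isinstance chain with per-type registrations (this sketch uses the annotation-based register syntax available since Python 3.7):

from functools import singledispatch

@singledispatch
def handle(some):
    """Fallback for all types without a registered handler."""
    raise NotImplementedError('Unsupported type: {0}'.format(type(some)))

@handle.register
def _(some: int):
    ...

@handle.register
def _(some: str):
    ...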
python never stops to amuse us. For example, you can write a with statement with an arbitrary number of cases, which is too mentally complex and confusing:
with first(), second(), third(), fourth():
    ...
You can also write comprehensions with any number of if and for expressions, which can lead to complex, unreadable code:
[
    (x, y, z)
    for x in x_coords
    for y in y_coords
    for z in z_coords
    if x > 0
    if y > 0
    if z > 0
    if x + y <= z
    if x + z <= y
    if y + z <= x
]
Compare it with the simple and readable version:
[
    (x, y, z)
    for x, y, z in itertools.product(x_coords, y_coords, z_coords)
    if valid_coordinates(x, y, z)
]
You can also accidentally include multiple statements inside a try case, which is unsafe, because an exception can be raised and handled in an unexpected place:
try:
    user = fetch_user()  # Can also fail, but don't expect that
    log.save_user_operation(user.email)  # Can fail, and we know it
except MyCustomException as exc:
    ...
And that's not even 10% of the cases that can and will go wrong with your python code. There are many, many more edge cases that should be tracked and analyzed.
Solution: The only possible solution is to use a good linter for the language of your choice. And refactor complex places that this linter highlights. Otherwise, you will have to reinvent the wheel and set custom policies for the exact same problems.
Functions
Expressions, statements, and structures form functions. Complexity from these entities flows into functions. And that's where things start to get intriguing. Because functions have literally dozens of complexity metrics: both good and bad.
We will start with the most known ones: cyclomatic complexity and the function's length measured in code lines. Cyclomatic complexity indicates how many turns your execution flow can take: it is almost equal to the number of unit tests that are required to fully cover the source code. It is a good metric, because it respects the semantics and helps the developer to do the refactoring. On the other hand, a function's length is a bad metric. It does not agree with the previously explained Jones Complexity metric, since we already know: multiple lines are easier to read than one big line with everything inside. We will concentrate on good metrics only and ignore bad ones.
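For instance, this small function (my example) has a cyclomatic complexity of three, one per possible execution path, so at least three unit tests are needed for full coverage:

def sign(number: int) -> int:
    if number > 0:    # first path
        return 1
    elif number < 0:  # second path
        return -1
    return 0          # third path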
Based on my experience multiple useful complexity metrics should be counted instead of regular function's length:
- Number of function decorators; lower is better
- Number of arguments; lower is better
- Number of annotations; higher is better
- Number of local variables; lower is better
- Number of returns, yields, awaits; lower is better
- Number of statements and expressions; lower is better
The combination of all these checks really allows you to write simple functions (all rules are also applied to methods as well).
When you try to do some nasty things with your function, you will surely break at least one metric. This will disappoint our linter and fail your build. As a result, your function will be saved.
Solution: when one function is too complex, the only solution you have is to split this function into multiple ones.
Classes
The next level of abstraction after functions is classes. And as you already guessed, they are even more complex and fluid than functions, because classes might contain multiple functions inside (called methods) and have other unique features like inheritance and mixins, class-level attributes, and class-level decorators. So, we have to check all methods as functions and the class body itself.
For classes we have to measure the following metrics:
- Number of class-level decorators; lower is better
- Number of base classes; lower is better
- Number of class-level public attributes; lower is better
- Number of instance-level public attributes; lower is better
- Number of methods; lower is better
When any of these is overly complicated - we have to ring the alarm and fail the build!
Solution: refactor your failed class! Split one existing complex class into several simple ones or create new utility functions and use composition.
Notable mention: one can also track cohesion and coupling metrics to validate the complexity of your OOP design.
Modules
Modules contain multiple statements, functions, and classes. And as you might have already noticed, we usually advise splitting functions and classes into new ones. That's why we have to keep an eye on module complexity: it literally flows into modules from classes and functions.
To analyze the complexity of the module we have to check:
- The number of imports and imported names; lower is better
- The number of classes and functions; lower is better
- The average complexity of functions and classes inside; lower is better
What do we do in the case of a complex module?
Solution: yes, you got it right. We split one module into several ones.
Packages
Packages contain multiple modules. Luckily, that's all they do.
The number of modules in a package can soon become too large, and you will end up with too many of them. This is the only complexity that can be found in packages.
Solution: you have to split packages into sub-packages and packages of different levels.
Complexity Waterfall effect
We have now covered almost all possible types of abstractions in your codebase. What have we learned from it? The main takeaway, for now, is that most problems can be solved by ejecting complexity to the same or an upper abstraction level.
This leads us to the most important idea of this article: do not let your code be overflowed with complexity. I will give several examples of how it usually happens.
Imagine that you are implementing a new feature. And that's the only change you make:
--- if user.is_active and user.has_sub():
+++ if user.is_active and user.has_sub() and sub.is_due(tz.now() + delta):
Looks ok, I would pass this code on review. And nothing bad would happen. But the point I am missing is that complexity overflowed this line! That's what wemake-python-styleguide reports.
Ok, we now have to solve this complexity. Let's make a new variable:
class Product(object):
    ...

    def can_be_purchased(self, user_id) -> bool:
        ...
        is_sub_paid = sub.is_due(tz.now() + delta)
        if user.is_active and user.has_sub() and is_sub_paid:
            ...
        ...

    ...
Now, the line complexity is solved. But wait a minute. What if our function has too many variables now? We have created a new variable without first checking the number of variables inside the function. In this case, we have to split this method into several ones, like so:
class Product(object):
    ...

    def can_be_purchased(self, user_id) -> bool:
        ...
        if self._has_paid_sub(user, sub, delta):
            ...
        ...

    def _has_paid_sub(self, user, sub, delta) -> bool:
        is_sub_paid = sub.is_due(tz.now() + delta)
        return user.is_active and user.has_sub() and is_sub_paid

    ...
Now we are done! Right? No, because we now have to check the complexity of the Product class. Imagine that it now has too many methods, since we have created the new _has_paid_sub one.
Ok, we run our linter to check the complexity again. And it turns out our Product class is indeed too complex right now. Our actions? We split it into several classes!
class Policy(object):
    ...

class SubscriptionPolicy(Policy):
    ...

    def can_be_purchased(self, user_id) -> bool:
        ...
        if self._has_paid_sub(user, sub, delta):
            ...
        ...

    def _has_paid_sub(self, user, sub, delta) -> bool:
        is_sub_paid = sub.is_due(tz.now() + delta)
        return user.is_active and user.has_sub() and is_sub_paid

class Product(object):
    _purchasing_policy: Policy
    ...
Please, tell me that it is the last iteration! Well, I am sorry, but we now have to check the module complexity. And guess what? We now have too many module members. So, we have to split modules into separate ones! Then we check the package complexity. And also possibly split it into several sub-packages.
Have you seen it? Because of the well-defined complexity rules our single-line modification turned out to be a huge refactoring session with several new modules and classes. And we haven't made a single decision ourselves: all our refactoring goals were driven by the internal complexity and the linter that reveals it.
That's what I call a "Continuous Refactoring" process. You are forced to do the refactoring. Always.
This process also has one interesting consequence. It allows you to have "Architecture on Demand". Let me explain. With "Architecture on Demand" philosophy you always start small. For example, with a single logic/domains/user.py file. And you start to put everything User-related there. Because at this moment you probably don't know what your architecture will look like. And you don't care. You only have like three functions.
Some people fall into the architecture-vs-code-complexity trap. They can overly complicate their architecture from the very start with full repository/service/domain layers. Or they can overly complicate the source code with no clear separation. They struggle and live like this for years (if they are able to live with code like this for years!).
The "Architecture on Demand" concept solves these problems. You start small; when the time comes, you split and refactor things:
- You start with logic/domains/user.py and put everything in there
- Later you create logic/domains/user/repository.py when you have enough database-related stuff
- Then you split it into logic/domains/user/repository/queries.py and logic/domains/user/repository/commands.py when the complexity tells you to do so
- Then you create logic/domains/user/services.py with http-related stuff
- Then you create a new module called logic/domains/order.py
- And so on and so on
That's it. It is a perfect tool to balance your architecture and code complexity. And get as much architecture as you truly need at the moment.
Conclusion
A good linter does much more than find missing commas and bad quotes. A good linter allows you to rely on it for architecture decisions, and it helps you with the refactoring process.
For example, wemake-python-styleguide might help you with your python source code complexity. It allows you to:
- Successfully fight the complexity at all levels
- Enforce an enormous amount of naming standards, best practices, and consistency checks
- Easily integrate it into a legacy code base with the help of the diff option or the flakehell tool, so old violations will be forgiven, but new ones won't be allowed
- Enable it in your CI, even as a GitHub Action
Do not let complexity overflow your code: use a good linter!
Discussion
Nice article, might send some of our juniors this way. :)
I can forgive most of the complexity linting can fix.
Proper compartmentalization, like using modules, can limit the cost of it a lot.
Code made without some compartmentalization/deprecation/refactoring thought is usually more time-consuming to work with, in my experience.
Glad that you liked it!
Feel free to share your feedback with juniors. I would love to hear that.
Being able to recognize unnecessarily complex code and break it down into simpler code is what made me feel like I had "graduated" from being a junior developer into a more intermediate one. It has the potential to cut down on your overall lines of code, make it less prone to bugs, easier to scale, and in my experience simpler code is easier to optimize.
Although I have limited knowledge of linters, I am going to bite the bullet and try out wemake-python-styleguide since Python is my hobby language of choice.
Great write up! Thanks for taking the time to share this with all of us.
Thanks!
Feel free to ask any questions you will (possibly) have about wemake-python-styleguide.
Awesome post, love it when there are actual real-world examples. Helps drive the point across. Lots of good tips!
Thanks!
https://dev.to/wemake-services/complexity-waterfall-n2d
With this post, I start my last, very exciting topic on concepts: defining your own concepts. Consequently, I answer the questions I opened in previous posts.
First and foremost, most of the concepts I define are already available in the C++ 20 draft. Therefore, there is no need to define them. To distinguish my concepts from the predefined concepts, I capitalize them. To remind you, my previous post gave an overview of the predefined concepts: C++20: Concepts - Predefined Concepts.
There are two typical ways to define concepts: use the direct definition or use requires-expressions.
The syntactic form changed a little bit from the syntax based on the concepts TS (Technical Specification) to the proposed syntax for the C++20 standard.
template<typename T>
concept bool Integral(){
return std::is_integral<T>::value;
}
template<typename T>
concept Integral = std::is_integral<T>::value;
The C++20 standard syntax is less verbose. Both use under the hood the function std::is_integral<T>::value from the C++11 type-traits library. T fulfills the concept if the compile-time predicate std::is_integral<T>::value evaluates to true. Compile-time predicate means that the function runs at compile time and returns a boolean. Since C++17, you can write std::is_integral<T>::value less verbosely: std::is_integral_v<T>.
I'm not sure if the two terms variable concept for direct definition and function concept for requires-expressions are still used but they help to keep the difference between the direct definition and the requires-expressions in mind.
I skip the example of the usage of the concept Integral. If you are curious, read my previous post: C++ 20: Concepts, the Placeholder Syntax.
Analogous to the direct definition, the syntax of requires-expressions also changed from the concepts TS to the proposed draft C++20 standard.
template<typename T>
concept bool Equal(){
return requires(T a, T b) {
{ a == b } -> bool;
{ a != b } -> bool;
};
}
template<typename T>
concept Equal =
requires(T a, T b) {
{ a == b } -> bool;
{ a != b } -> bool;
};
As before, the C++20 syntax is more concise. T fulfills the concept if the operators == and != are overloaded and return a boolean. Additionally, the types of a and b have to be the same.
Now, it's time to use the concept Equal:
// conceptsDefinitionEqual.cpp
#include <iostream>
template<typename T>
concept Equal =
requires(T a, T b) {
{ a == b } -> bool;
{ a != b } -> bool;
};
bool areEqual(Equal auto a, Equal auto b) { // (1)
return a == b;
}
/*
struct WithoutEqual{
bool operator==(const WithoutEqual& other) = delete;
};
struct WithoutUnequal{
bool operator!=(const WithoutUnequal& other) = delete;
};
*/
int main() {
std::cout << std::boolalpha << std::endl;
std::cout << "areEqual(1, 5): " << areEqual(1, 5) << std::endl;
/*
bool res = areEqual(WithoutEqual(), WithoutEqual());
bool res2 = areEqual(WithoutUnequal(), WithoutUnequal());
*/
std::cout << std::endl;
}
I used the concept Equal in the function areEqual (line 1). To remind you: by using a concept as a function parameter, the compiler creates under the hood a function template with the constraints on the parameters specified by the concept. To get more information about this concise syntax, read my already mentioned post: C++ 20: Concepts, the Placeholder Syntax.
The output is not so exciting:
Now, it becomes exciting. What happens if I use the types WithoutEqual and WithoutUnequal? I intentionally set the == operator and the != operator to delete. The compiler complains immediately that both types do not fulfill the concept of Equal.
When you look carefully at the error message, you see the reason: (a == b) would be ill-formed and (a != b) would be ill-formed.
Before I continue I have to make a short detour. You can skip the detour if you don't want to compile the program.
I faked the output of the program conceptsDefinitionEqual.cpp. The output is from the Concepts TS implementation of GCC. At this point in time, there is no C++20 standard-conforming implementation of the concepts syntax available.
I already mentioned in the previous post C++20: Two Extremes and the Rescue with Concepts, that concepts at first remind me of Haskell's type classes. Type classes in Haskell are interfaces for similar types. The main difference to concepts is that a type such as Int in Haskell has to be an instance of a type class and, therefore, has to implement the type class. On the contrary, with concepts the compiler checks if a type fulfills a concept.
Here is a part of the Haskell type class hierarchy.
This is my crucial observation. Haskell supports the type class Eq. When you compare the definition of the type class Eq and the concept Equal, they look quite similar.
class Eq a where
(==) :: a -> a -> Bool
(/=) :: a -> a -> Bool
Haskell's type class Eq requires that its instances, such as Int, implement the two functions (==) and (/=).
Let's look once more at the type class hierarchy of Haskell. The type class Ord is a refinement of the type class Eq. The definition of Ord makes this clear.
class Eq a => Ord a where
compare :: a -> a -> Ordering
(<) :: a -> a -> Bool
(<=) :: a -> a -> Bool
(>) :: a -> a -> Bool
(>=) :: a -> a -> Bool
max :: a -> a -> a
min :: a -> a -> a
The most interesting point about the definition of the type class Ord is its first line. An instance of the type class Ord has to already be an instance of the type class Eq. Ordering is an enumeration having the values EQ, LT, and GT. This refinement of type classes is highly elegant.
Here is the challenge for the next post. Can concepts be refined in a similarly elegant way?
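As a teaser, such a refinement could look like the following sketch in the proposed C++20 syntax. This is my code, not code from the next post:

template<typename T>
concept Ord =
    Equal<T> &&  // refinement: an Ord type has to fulfill Equal first
    requires(T a, T b) {
        { a < b } -> bool;
        { a <= b } -> bool;
        { a > b } -> bool;
        { a >= b } -> bool;
    };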
In my next post, I accept the challenge to refine the concept of Equal. Additionally, I write about the important concepts Regular and Semiregular and, of course, I define them.
http://www.modernescpp.com/index.php/c-20-concepts-defining-concepts
Today, I conclude my story on the myths of my blog readers about C++. These myths are around function parameters, the initialisation of class members, and pointers versus references.
When a function takes a parameter and doesn't want to modify it, you have two options: the function can take the parameter by value (a copy), or it can take it by reference-to-const. This was the correctness perspective, but what can be said about the performance? The C++ core guidelines are specific about performance. Let's look at the following example.
void f1(const string& s); // OK: pass by reference to const; always cheap
void f2(string s); // bad: potentially expensive
void f3(int x); // OK: Unbeatable
void f4(const int& x); // bad: overhead on access in f4()
Presumably based on experience, the guidelines state a rule of thumb: a parameter is cheap to copy if it is roughly the size of two machine words or less, that is, sizeof(par) <= 2 * sizeof(void*); take such parameters by value, and take bigger ones by reference-to-const.
Okay, now you should know how big your data types are. The program sizeofArithmeticTypes.cpp gives the answers for arithmetic types.
// sizeofArithmeticTypes.cpp
#include <iostream>
int main(){
std::cout << std::endl;
std::cout << "sizeof(void*): " << sizeof(void*) << std::endl;
std::cout << std::endl;
std::cout << "sizeof(5): " << sizeof(5) << std::endl;
std::cout << "sizeof(5l): " << sizeof(5l) << std::endl;
std::cout << "sizeof(5ll): " << sizeof(5ll) << std::endl;
std::cout << std::endl;
std::cout << "sizeof(5.5f): " << sizeof(5.5f) << std::endl;
std::cout << "sizeof(5.5): " << sizeof(5.5) << std::endl;
std::cout << "sizeof(5.5l): " << sizeof(5.5l) << std::endl;
std::cout << std::endl;
}
sizeof(void*) tells you whether it is a 32-bit or a 64-bit system. Thanks to the online compiler rextester, I can execute the program with GCC, Clang, and cl.exe (Windows). Here are the numbers, all from 64-bit systems.
cl.exe behaves differently from GCC and Clang. A long int has only 4 bytes, and a long double has 8 bytes. On GCC and Clang, long int and long double have double the size.
Deciding when to take a parameter by value or by const reference is just math. If you want to know the exact performance numbers for your architecture, there is only one answer: measure.
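Here is a minimal sketch of such a measurement with std::chrono. The code is mine, not from the guidelines, and the absolute numbers depend on your compiler, optimization level, and hardware:

// passByValue.cpp
#include <chrono>
#include <iostream>
#include <string>

std::size_t sink = 0;  // global sink, so the calls are not optimized away

void byValue(std::string s) { sink += s.size(); }
void byConstRef(const std::string& s) { sink += s.size(); }

template <typename Func>
std::chrono::milliseconds measure(Func func, const std::string& s) {
    const auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < 1000000; ++i) func(s);  // one million calls
    const auto end = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(end - start);
}

int main() {
    const std::string s(1000, 'x');  // expensive to copy
    std::cout << "byValue:    " << measure(byValue, s).count() << " ms" << std::endl;
    std::cout << "byConstRef: " << measure(byConstRef, s).count() << " ms" << std::endl;
    std::cout << sink << std::endl;  // keep the optimizer honest
}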
First, let me show you initialisation and assignment in the constructor.
class Good{
int i;
public:
Good(int i_): i{i_}{}
};
class Bad{
int i;
public:
Bad(int i_){ i = i_; }
};
The class Good uses initialisation, but the class Bad uses assignment. The consequences are: the assignment in the constructor body is, on one hand, slower, because each member is first default-constructed and then assigned; on the other hand, it does not work at all for const members, references, or members which cannot be default-constructed.
// constructorAssignment.cpp
struct NoDefault{
NoDefault(int){};
};
class Bad{
const int constInt;
int& refToInt;
NoDefault noDefault;
public:
Bad(int i, int& iRef){
constInt = i;
refToInt = iRef;
}
// Bad(int i, int& iRef): constInt(i), refToInt(iRef), noDefault{i} {}
};
int main(){
int i = 10;
int& j = i;
Bad bad(i, j);
}
When I try to compile the program, I get three different errors.
In the second, successful compilation, I used the commented-out constructor, which uses initialisation instead of assignment.
The example used references instead of raw pointers for a good reason.
Motivated by a comment from Thargon110, I want to be dogmatic: NNN. What? I mean No Naked New. From an application perspective, there is no reason to use raw pointers. If you need pointer-like semantics, put your pointer into a smart pointer (you see: NNN) and you are done.
In essence, C++11 has std::unique_ptr for exclusive ownership and std::shared_ptr for shared ownership. Consequently, when you copy a std::shared_ptr, the reference counter is incremented, and when you delete a std::shared_ptr, the reference counter is decremented. Ownership means that the smart pointer keeps track of the underlying memory and releases the memory when it is not necessary any more. The memory is not necessary any more, in the case of the std::shared_ptr, when the reference counter becomes 0.
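A small example shows the reference counting in action:

// sharedPtrCount.cpp
#include <iostream>
#include <memory>

int main() {
    auto shared1 = std::make_shared<int>(2011);
    std::cout << shared1.use_count() << std::endl;      // 1
    {
        auto shared2 = shared1;                         // copy increments the counter
        std::cout << shared1.use_count() << std::endl;  // 2
    }                                                   // shared2 dies: counter decrements
    std::cout << shared1.use_count() << std::endl;      // 1
}                                                       // counter becomes 0: the int is released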
So memory leaks are gone with modern C++. Now I hear your complaints. I'm happy to destroy them.
The last complaint, that a std::unique_ptr cannot be passed around because it cannot be copied, is quite dominant. The small example should make my point:
// moveUniquePtr.cpp
#include <algorithm>
#include <iostream>
#include <memory>
#include <utility>
#include <vector>
void takeUniquePtr(std::unique_ptr<int> uniqPtr){ // (1)
std::cout << "*uniqPtr: " << *uniqPtr << std::endl;
}
int main(){
std::cout << std::endl;
auto uniqPtr1 = std::make_unique<int>(2014);
takeUniquePtr(std::move(uniqPtr1)); // (1)
auto uniqPtr2 = std::make_unique<int>(2017);
auto uniqPtr3 = std::make_unique<int>(2020);
auto uniqPtr4 = std::make_unique<int>(2023);
std::vector<std::unique_ptr<int>> vecUniqPtr;
vecUniqPtr.push_back(std::move(uniqPtr2)); // (2)
vecUniqPtr.push_back(std::move(uniqPtr3)); // (2)
vecUniqPtr.push_back(std::move(uniqPtr4)); // (2)
std::cout << std::endl;
std::for_each(vecUniqPtr.begin(), vecUniqPtr.end(), // (3)
[](std::unique_ptr<int>& uniqPtr){ std::cout << *uniqPtr << std::endl; } );
std::cout << std::endl;
}
The function takeUniquePtr in line (1) takes a std::unique_ptr by value. The key observation is that you have to move the std::unique_ptr inside. The same argument holds for the std::vector<std::unique_ptr<int>> (line 2). std::vector, like all containers of the standard template library, wants to own its elements, but copying a std::unique_ptr is not possible. std::move solves this issue. You can apply an algorithm such as std::for_each on the std::vector<std::unique_ptr<int>> (line 3) if no copy semantics is used.
In the end, I want to refer to the key concern of Thargon110. Admittedly, this rule is way more important in classical C++ without smart pointers, because smart pointers, in contrast to raw pointers, are owners.
Use a reference instead of a pointer, because a reference always has a value. Boring checks such as the following one are gone with references.
if(!ptr){
std::cout << "Something went terrible wrong" << std::endl;
return;
}
std::cout << "All fine" << std::endl;
Additionally, you can forget the check. References behave just like constant pointers.
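A tiny sketch makes the difference visible; the function name is mine:

void showValue(const int& value) {
    // no null check necessary: a reference is always bound to an object
    std::cout << "All fine: " << value << std::endl;
}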
The C++ core guidelines define profiles. Profiles are subsets of rules. They exist for type safety, bounds safety, and lifetime safety. They will be my next topic.
https://www.modernescpp.com/index.php/more-myths-of-my-blog-readers
Running into a real stumper. This is the first time I've needed to set up a new site in our Kentico suite since we upgraded to V12 (currently on v12.0.25), and it is not going smoothly.
I tried to do an Export but am getting this error: (Removing the query for brevity)
ERROR: Error exporting objects.
[DataConnection.HandleError]:
Query:
FROM (
(.....)
Message: Incorrect syntax near the keyword 'AS'.
Incorrect syntax near the keyword 'with'. If this statement is a common table expression, an xmlnamespaces clause or a change tracking context clause, the previous statement must be terminated with a semicolon.
Incorrect syntax near the keyword 'with'. If this statement is a common table expression, an xmlnamespaces clause or a change tracking context clause, the previous statement must be terminated with a semicolon.
Incorrect syntax near ')'.
Exception type: System.Data.SqlClient.SqlException
Stack trace: CMS.DataEngine.AbstractDataConnection.ExecuteQuery(String queryText, QueryDataParameters queryParams, QueryTypeEnum queryType, Boolean requiresTransaction)
It seems like the system is malforming the query; this always happens when it gets to the Pages part of the export.
Update:
I managed to do the export from our Stage and production servers, make the modifications for the new site and even do a test export of the new site with no issues. I'll post the full error down below.
I tried posting but get an error...
Would it be possible to post the entire error or a link to download the txt file with the error? What is the SQL query in the error? It looks like you have omitted this part, which is crucial to know...
Do you have any other "odd" errors or functions not working as expected that appeared after the upgrade? Do you know if the upgrade was finished correctly to begin with?
Is this a secondary environment onto which the upgrade was rolled-out on, or was it performed on this instance?
Are you using CI and did you properly sync those to disk first after the upgrade?
https://devnet.kentico.com/questions/exporting-site-in-portal-mode-causes-error-when-getting-to-the-pages
@Component public class DefaultExceptionResponseFactory extends java.lang.Object implements ExceptionResponseFactory
Default implementation of ExceptionResponseFactory. The default implementation creates an ApiErrorResponseResource if the exception is an ApiException and an ErrorResponseResource in all other cases.
Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public DefaultExceptionResponseFactory()
public ErrorResponseResource create(java.lang.Throwable throwable)
If present, the first HawaiiException found in the cause chain of the throwable is used to determine the response type. If there is no cause, or if it doesn't contain a HawaiiException, the throwable itself is used.
As an example, assume throwable is some type of HttpException, caused by an ApiException. In such a case, we want the error information to be derived from the ApiException.
Specified by: create in interface ExceptionResponseFactory
Parameters: throwable - the throwable
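A minimal usage sketch, based only on the signatures documented above (the exception instance is arbitrary):

// The factory walks the cause chain and picks the matching response type.
ExceptionResponseFactory factory = new DefaultExceptionResponseFactory();
Throwable throwable = new IllegalStateException("boom");
ErrorResponseResource resource = factory.create(throwable);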
http://www.hawaiiframework.org/docs/2.0.0.M12/api/org/hawaiiframework/web/exception/DefaultExceptionResponseFactory.html
ASPxGaugeControl Class
Represents an ASPxGaugeControl.
Namespace: DevExpress.Web.ASPxGauges
Assembly: DevExpress.Web.ASPxGauges.v18.2.dll
Declaration
public class ASPxGaugeControl : ASPxWebControl, IGaugeContainerEx, IGaugeContainer, ILayoutManagerContainer, INamingContainer, IXtraSerializable, IRequiresLoadPostDataControl, ISharePointEmptyDesignTimeControl
Public Class ASPxGaugeControl Inherits ASPxWebControl Implements IGaugeContainerEx, IGaugeContainer, ILayoutManagerContainer, INamingContainer, IXtraSerializable, IRequiresLoadPostDataControl, ISharePointEmptyDesignTimeControl
Remarks
The ASPxGaugeControl represents a container of gauges. It can display multiple gauges of different types simultaneously. There are four gauge types: circular, linear, digital gauges and state indicators.
At design time, you can create gauges on the fly from predefined presets, using the Preset Manager. Numerous presets are ready-to-use gauges that you can quickly load, and then customize as required. Any gauge element can also be customized. You can choose among numerous paint styles of needles, background layers and scales, and specify the appearance of labels, the position of scales and needles, etc.
The ASPxGaugeControl stores gauges within the ASPxGaugeControl.Gauges collection. Individual gauges can be accessed using indexed notation. To add empty gauges to this collection in code, use the ASPxGaugeControl.AddGauge method.
Typically, an ASPxGaugeControl contains only one gauge. You can use the ASPxGaugeControl.Value property to assign a value to the gauge. For instance, you can use this property to assign text to be displayed in a DigitalGauge, or assign a scale value to a LinearGauge or CircularGauge.
NOTE
The ASPxGaugeControl control provides you with a comprehensive client-side functionality implemented using JavaScript code:
- The control's client-side equivalent is represented by the ASPxClientGaugeControl object.
- On the client side, the client object can be accessed directly by the name specified via the ASPxGaugeControl.ClientInstanceName property.
- The available client events can be accessed by using the ASPxGaugeControl.ClientSideEvents property.
The control's client-side API is enabled if the ASPxGaugeControl.EnableClientSideAPI property is set to true, or the ASPxGaugeControl.ClientInstanceName property is defined, or any client event is handled.
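A minimal C# sketch of the members described above (gaugeControl stands for a control declared in markup, and the exact AddGauge overloads vary by version):

using DevExpress.Web.ASPxGauges;

public partial class GaugePage : System.Web.UI.Page
{
    protected void Page_Load(object sender, System.EventArgs e)
    {
        // ClientInstanceName enables the client-side API (ASPxClientGaugeControl).
        gaugeControl.ClientInstanceName = "clientGauge";

        // With a single gauge hosted, Value drives its scale, needle, or text.
        gaugeControl.Value = 42;

        // Gauges live in the Gauges collection and are accessed by index;
        // empty gauges can be added in code via gaugeControl.AddGauge(...).
        var firstGauge = gaugeControl.Gauges[0];
    }
}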
|
https://docs.devexpress.com/AspNet/DevExpress.Web.ASPxGauges.ASPxGaugeControl?v=18.2
|
CC-MAIN-2020-50
|
en
|
refinedweb
|
- When to write a migration test
- How does it work?
- Testing an ActiveRecord::Migration class
- Testing a non-ActiveRecord::Migration class
Testing Rails migrations at GitLab
In order to reliably check Rails migrations, we need to test them against a database schema.
When to write a migration test
- Post migrations (/db/post_migrate) and background migrations (lib/gitlab/background_migration) must have migration tests performed.
- If your migration is a data migration then it must have a migration test.
- Other migrations may have a migration test if necessary.
How does it work?
Adding a :migration tag to a test signature enables some custom RSpec before and after hooks in our spec/support/migration.rb to run.
A before hook will revert all migrations to the point that the migration under test is not yet migrated.
In other words, our custom RSpec hooks will find a previous migration, and migrate the database down to the previous migration version.
With this approach you can test a migration against a database schema.
An after hook will migrate the database up and reinstitute the latest schema version, so that the process does not affect subsequent specs and ensures proper isolation.
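For instance, a spec opts in simply by carrying the tag, as in this minimal sketch (the migration class name is illustrative):

require 'spec_helper'

# The :migration tag activates the before/after hooks described above.
describe MyMigration, :migration do
  it 'runs against the pre-migration schema' do
    # ... expectations ...
  end
end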
Testing an ActiveRecord::Migration class
To test an ActiveRecord::Migration class (i.e., a regular migration in db/migrate or a post-migration in db/post_migrate), you will need to manually require the migration file because it is not autoloaded with Rails. Example:
require Rails.root.join('db', 'post_migrate', '20170526185842_migrate_pipeline_stages.rb')
Test helpers
table
Use the table helper to create a temporary ActiveRecord::Base-derived model for a table. FactoryBot should not be used to create data for migration specs. For example, to create a record in the projects table:
project = table(:projects).create!(id: 1, name: 'gitlab1', path: 'gitlab1')
migrate!
Use the migrate! helper to run the migration that is under test. It will not only run the migration, but will also bump the schema version in the schema_migrations table. It is necessary because in the after hook we trigger the rest of the migrations, and we need to know where to start. Example:
it 'migrates successfully' do
  # ... pre-migration expectations
  migrate!
  # ... post-migration expectations
end
reversible_migration
Use the reversible_migration helper to test migrations with either a change or both up and down hooks. This will test that the state of the application and its data after the migration is reversed is the same as it was before the migration ran in the first place. The helper:
- Runs the before expectations before the up migration.
- Migrates up.
- Runs the after expectations.
- Migrates down.
- Runs the before expectations a second time.
Example:
reversible_migration do |migration|
  migration.before -> {
    # ... pre-migration expectations
  }

  migration.after -> {
    # ... post-migration expectations
  }
end
Example database migration test
This spec tests the db/post_migrate/20170526185842_migrate_pipeline_stages.rb migration. You can find the complete spec in spec/migrations/migrate_pipeline_stages_spec.rb.
require 'spec_helper'
require Rails.root.join('db', 'post_migrate', '20170526185842_migrate_pipeline_stages.rb')

describe MigratePipelineStages do
  # Create test data - pipeline and CI/CD jobs.
  let(:jobs) { table(:ci_builds) }
  let(:stages) { table(:ci_stages) }
  let(:pipelines) { table(:ci_pipelines) }
  let(:projects) { table(:projects) }

  before do
    projects.create!(id: 123, name: 'gitlab1', path: 'gitlab1')
    pipelines.create!(id: 1, project_id: 123, ref: 'master', sha: 'adf43c3a')
    jobs.create!(id: 1, commit_id: 1, project_id: 123, stage_idx: 2, stage: 'build')
    jobs.create!(id: 2, commit_id: 1, project_id: 123, stage_idx: 1, stage: 'test')
  end

  # Test just the up migration.
  it 'correctly migrates pipeline stages' do
    expect(stages.count).to be_zero

    migrate!

    expect(stages.count).to eq 2
    expect(stages.all.pluck(:name)).to match_array %w[test build]
  end

  # Test a reversible migration.
  it 'correctly migrates up and down pipeline stages' do
    reversible_migration do |migration|
      # Expectations will run before the up migration,
      # and then again after the down migration
      migration.before -> {
        expect(stages.count).to be_zero
      }

      # Expectations will run after the up migration.
      migration.after -> {
        expect(stages.count).to eq 2
        expect(stages.all.pluck(:name)).to match_array %w[test build]
      }
    end
  end
end
Testing a non-ActiveRecord::Migration class
To test a non-ActiveRecord::Migration class (a background migration), you will need to manually provide a required schema version. Please add a schema tag to the context within which you want to switch the database schema. If not set, schema defaults to :latest.
Example:
describe SomeClass, schema: 20170608152748 do
  # ...
end
Example background migration test
This spec tests the lib/gitlab/background_migration/archive_legacy_traces.rb background migration. You can find the complete spec in spec/lib/gitlab/background_migration/archive_legacy_traces_spec.rb.
require 'spec_helper'

describe Gitlab::BackgroundMigration::ArchiveLegacyTraces, schema: 20180529152628 do
  include TraceHelpers

  let(:namespaces) { table(:namespaces) }
  let(:projects) { table(:projects) }
  let(:builds) { table(:ci_builds) }
  let(:job_artifacts) { table(:ci_job_artifacts) }

  before do
    namespaces.create!(id: 123, name: 'gitlab1', path: 'gitlab1')
    projects.create!(id: 123, name: 'gitlab1', path: 'gitlab1', namespace_id: 123)
    @build = builds.create!(id: 1, project_id: 123, status: 'success', type: 'Ci::Build')
  end

  context 'when trace file exists at the right place' do
    before do
      create_legacy_trace(@build, 'trace in file')
    end

    it 'correctly archive legacy traces' do
      expect(job_artifacts.count).to eq(0)
      expect(File.exist?(legacy_trace_path(@build))).to be_truthy

      described_class.new.perform(1, 1)

      expect(job_artifacts.count).to eq(1)
      expect(File.exist?(legacy_trace_path(@build))).to be_falsy
      expect(File.read(archived_trace_path(job_artifacts.first))).to eq('trace in file')
    end
  end
end
|
https://docs.gitlab.com/12.10/ee/development/testing_guide/testing_migrations_guide.html
|
CC-MAIN-2020-50
|
en
|
refinedweb
|
At this point our application is complete and tested. At this point you can dive right in to the User’s Guide, but there are a few extra things we can add to further demonstrate some more of Ferris’ capabilities.
Ferris uses Components as a way of organizing commonly used functionality for controllers. Ferris comes with a handful of built-in components and includes one for automatic pagination for list methods.
It’s pretty easy to use. First import it:
from ferris.components.pagination import Pagination
Then add it to our controller’s Meta component list and set the limit:
class Posts(Controller):
    class Meta:
        components = (scaffold.Scaffolding, Pagination)
        pagination_limit = 5
If you open up the list page, you'll see that it shows no more than five posts. However, we don't currently have a way to move between pages. Luckily, the scaffolding macros can handle that.
Add this to our list.html template right before the end of the layout_content block:
{{scaffold.next_page_link()}}
Now there is a paginator at the bottom of the page.
Similar to components, Ferris uses Behaviors as a way of organizing commonly used functionality for models. A useful behavior is the Searchable behavior.
First, we need to modify our model:
from ferris.behaviors.searchable import Searchable

class Post(BasicModel):
    class Meta:
        behaviors = (Searchable,)
Note
Any posts created before you made this change will not be searchable until you edit and resave them.
Now we’ll use the Search component in our controller to use the behavior:
from ferris.components.search import Search

class Posts(Controller):
    class Meta:
        components = (scaffold.Scaffolding, Pagination, Search)
Now let’s the ability to search to our list action:
def list(self):
    if 'query' in self.request.params:
        self.context['posts'] = self.components.search()
    elif 'mine' in self.request.params:
        self.context['posts'] = self.Model.all_posts_by_user()
    else:
        self.context['posts'] = self.Model.all_posts()
Import the search macros into templates/posts/list.html:
{% import 'macros/search.html' as search with context %}
Finally, add these somewhere at the top of the inside of the layout_content block:
{{search.search_filter(action='search')}} {{search.search_info()}}
Now when we visit the list page there should be a search box from which we can search through all posts.
|
http://ferris-framework.appspot.com/docs21/tutorial/7_extras.html
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
Find the error in the given code :
1 public class Example20{
2 public static void main(String[] args){
3 int a = 0;
4 int b=2;
5 int c = 3;
6 if(a++) {
7 b = a * c;
8 a /= 2;
9 }else {
10 b = b * b;
11 ++c;
12}}}
Find the error in the given code :
1. Error at line number 2
2. Error at line number 6
3. Error at line number 7
4. Error at line number 10
(2)
Due to the code at line number 6, this code will not compile, since the "if" statement takes a boolean (conditional) expression as its argument, and a++ evaluates to an int.
The output of the following program will be :
Exception in thread "main" java.lang.Error: Unresolved compilation problem: Type mismatch: cannot convert from int to boolean
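One hedged fix: make the condition an explicit boolean comparison, since Java (unlike C) does not treat an int as a truth value:

if (a++ > 0) { // boolean-valued condition, so this compiles
    b = a * c;
    a /= 2;
} else {
    b = b * b;
    ++c;
}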
|
http://www.roseindia.net/tutorial/java/scjp/part2/question20.html
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
Project 1: Speech Detection. Due: Friday, February 1st, 11:59pm. Outline: Motivation, Problem Statement, Details, Hints, Grading, Turn in. Motivation: Speech recognition needs to detect word boundaries in raw audio data. "Silence Is Golden".
Due: Friday, February 1st, 11:59pm
“Silence Is Golden”
[Figure: speech waveforms (clean, SNR = 5 dB, SNR = -5 dB) showing speech corrupted by additive background noise at decreasing SNRs.]
[Figure: energy (dB) over time for the utterance “Five”.]
Needs to be done in ‘real-time’
play sound.raw
play speech.raw
wFormatTag: set to WAVE_FORMAT_PCM
nChannels, nSamplesPerSec, wBitsPerSample: set to voice quality audio settings
nBlockAlign: set to number of channels times number of bytes per sample
nAvgBytesPerSec: set to number of samples per second times the nBlockAlign value
cbSize: set this to zero
gets invoked when sound device has a sample of audio
Useful data types:
HWAVEOUT
writing audio device
HWAVEIN
reading audio device
WAVEFORMATEX
sound format structure
LPWAVEHDR
buffer
MMRESULT
Return type from wave system calls
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <mmsystem.h>
#include <winbase.h>
#include <memory.h>
#include <string.h>
#include <signal.h>
extern "C"
Online documentation from Visual C++ for more information (Visual Studio Help Samples)
How it works: Linux audio explained
/dev/dsp
open("/dev/dsp", O_RDWR)
read() to record
write() to play
fd = open("/dev/dsp", O_RDWR);
arg = 8;
ioctl(fd, SOUND_PCM_WRITE_BITS, &arg);
#include <alsa/asoundlib.h>
snd_pcm_t *handle;
/* open playback device (e.g. speakers default) */
snd_pcm_open(&handle, "default", SND_PCM_STREAM_PLAYBACK, 0);
/* open record device (e.g. microphone default) */
snd_pcm_open(&handle, "default", SND_PCM_STREAM_CAPTURE, 0);
/* write to audio device */
snd_pcm_writei(handle, buffer, frames);
/* read from audio device */
snd_pcm_readi(handle, buffer, frames);
[Tra04] J. Tranter. Introduction to Sound Programming with ALSA, Linux Journal, Issue #126, October, 2004.
open sound device
set sound device parameters
record silence
set algorithm parameters
while(1)
record sound
compute algorithm stuff
detect speech
write data to file
write sound to file
if speech, write speech to file
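A sketch of the "compute algorithm stuff" step above: short-term energy in dB over one frame of 16-bit PCM samples (the frame size and the 10 dB margin are assumptions, to be tuned in the threshold-adjustment step):

#include <math.h>

/* Energy in dB of one frame of 16-bit PCM samples. */
double frame_energy_db(const short *samples, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += (double)samples[i] * (double)samples[i];
    return 10.0 * log10(sum / n + 1e-12);  /* epsilon avoids log(0) on pure silence */
}

/* Speech if the frame exceeds the recorded silence level by a margin. */
int is_speech(double energy_db, double silence_db)
{
    return energy_db > silence_db + 10.0;
}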
When done, turn in brief answers (in answers.txt)
25% basic recording of sound
25% basic playback of sound
20% speech detection
10% adjustment of thresholds
10% proper file output (sound, speech, data)
10% answers to questions
|
http://www.slideserve.com/zorina/project-1-speech-detection
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
The XPath 2.0 Data Model
February 2, 2005
In everything I've written in this column so far about XSLT 2.0, I've been cherry-picking—taking fun new features and plugging them into what were otherwise XSLT 1.0 stylesheets in order to demonstrate these features. As XSLT 2.0 and its companion specification XQuery 1.0 approach Recommendation status, it's time to step back and look at a more fundamental difference between 2.0 and 1.0: the underlying data models. A better understanding of the differences gives you a better understanding of what you can get out of XSLT 2.0 besides a wider selection of function calls.
The current XSLT 2.0 Working Draft's short Data Model section opens by saying, "The data model used by XSLT is the XPath 2.0 and XQuery 1.0 data model, as defined in [the W3C TR XQuery 1.0 and XPath 2.0 Data Model document]. XSLT operates on source, result, and stylesheet documents using the same data model." The XSLT 2.0 Data Model section goes on to describe a few details about issues such as white space and attribute type handling, but these concern XSLT processor developers more than stylesheet developers. The "XQuery 1.0 and XPath 2.0 Data Model" document that it mentions is what the majority of us really want to look at.
Before looking more closely at this document, however, let's back up for some historical context. Many people felt that the original XML 1.0 Recommendation released in February of 1998 failed to describe a rigorous data model. This raised the possibility of different applications interpreting document information differently, increasing the danger of incompatibilities. To remedy this, the W3C released the XML Information Set Recommendation (also known as "the infoset") in October of 2001. It described the exact information to expect from an XML document more formally and less ambiguously than the original 1998 XML Recommendation did.
Like the XSLT 2.0 spec, the XSLT 1.0 Recommendation includes a Data Model section that describes its basic dependence on the XPath data model (in this case, XPath 1.0) before going on to describe a few new details. The XPath 1.0 Recommendation's Data Model section, which is about six pages when printed out, provides a subsection for each possible node type in the XPath representation of an XML tree: root nodes, element nodes, text nodes, attribute nodes, namespace nodes, processing instruction nodes, and comment nodes. (Remember that there are different ways to model an XML document as a tree—for example, a DOM tree, an entity structure tree, an element tree, and an XPath tree—so certain basic tree structure ideas such as "root" will vary from one model to another.) This spec also includes a brief, non-normative appendix that describes how you can create an XPath 1.0 tree from an infoset, so that no guesswork is needed regarding the relationship of the XPath 1.0 data model to the official model describing the information to find in an XML document.
The Data Model sections of these W3C technical reports are all fairly short. When I printed the latest draft of the new XQuery 1.0 and XPath 2.0 Data Model document, it added up to 90 pages, although nearly half consist of appendices that restate earlier information in more mechanical or tabular form. The document has a reputation for being complex, but once you have a good overview of its structure, the complexity appears more manageable.
The Data Model document's Introduction tells us that it's based on the infoset with two additions: "support for [W3C] XML Schema types" and "representation of collections of documents and of complex values."
This talk of "complex values" makes the second addition sound more complicated, but it's actually simpler, so I'll cover it first. This part of the data model has actually simplified a messier aspect of XPath 1.0, and we've already seen the payoff in an earlier column on using temporary trees in XSLT 2.0. Here are the key points, including some direct quotes from the document:
- "Every instance of the data model is a sequence." (One class of "pervasive changes" from XSLT 1.0 to 2.0 is "support for sequences as a replacement for the node-sets of XPath 1.0.")
- "A sequence is an ordered collection of zero or more items."
- All items are either atomic values (like the number 14 or "this string") or a node of one of the types listed above: element node, attribute node, text node, and so forth. The choice of seven types now lists "document node" instead of "root node," presumably because temporary trees can have root nodes that lack the special properties found in the root node of an actual document tree.
Sequences
There's nothing special that must happen to a node for it to be considered part of a sequence; a single item is treated as a sequence with one item in it. One sequence can't contain another sequence, which simplifies things: a sequence's items are the node items and atomic value items in that sequence, period. Because an XPath expression describes a set of nodes, the value of that XPath expression is a sequence.
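For example, each of the following XPath 2.0 expressions yields a sequence (the element names in the path expression are illustrative):

(1, 2, 3)         (: a sequence of three atomic values :)
//chapter/title   (: a sequence of zero or more title nodes :)
42                (: a single item is a one-item sequence :)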
The idea of a "sequence constructor" comes up often in XSLT 2.0—the spec uses the term about 300 times. The "Sequence Constructors section defines one as "a sequence of zero or more sibling nodes in the stylesheet that can be evaluated to return a sequence of nodes and atomic values." In 2.0, a template rule contains a sequence constructor that gets evaluated to return a sequence for use in your result tree, a temporary tree, or wherever you like. Variables, stylesheet-defined functions, and elements such as xsl:for-each, xsl:if, and xsl:element are all defined in the specification as having sequence constructors as their contents.
A node can have several properties, such as a node name, a parent, and children. Not all node types have the same properties; for example, a document node has no parent property. The data model document talks a lot about retrieving the values of node properties with "accessors," which are abstract versions of functions that represent ways to get the values of node properties. For example, the dm:parent accessor returns the node that is the parent of the node whose parent property you want to know about. (The data model document uses the "dm:" prefix on all accessor functions without declaring a namespace URI for it, because these aren't real functions. If you want real functions, see the XQuery 1.0 and XPath 2.0 Functions and Operators document.)
A tree consists of a node and all the nodes that you can reach from it using the dm:children, dm:attributes, and dm:namespaces accessors. If a tree's root node is a document node, the tree is an XML document. If it's not, the tree is considered to be a fragment. This idea of tree fragments is an improvement over the XSLT 1.0 data model, with its concept of Result Tree Fragments, because the operations allowed on Result Tree Fragments were an often-frustrating subset of those permitted on node-sets. The ability to perform the same operations on a 2.0 source tree, a temporary result tree, a subtree of either, or a temporary tree created on the fly or stored in a variable gives you a lot more flexibility, because you can apply template rules to the nodes of any of them.
The new data model offers more ways to address a collection of documents than the XPath 1.0 model did. XSLT 2.0 offers several ways to create a sequence. One simple XPath 2.0 way is to put a comma-delimited list of sequence items inside of parentheses, as I demonstrated in an earlier column on writing your own functions in XSLT 2.0. The list's items can even be entire documents, as shown in the following xsl:value-of instruction (the document URIs here are placeholders):
<xsl:value-of select="reverse((document('doc1.xml'), document('doc2.xml')))"/>
The outer parentheses belong to the reverse function, which expects a sequence as an argument, and the next parentheses in from those create a sequence of documents, with each document being pulled in with a call to the document function. (I used the reverse call to test whether my XSLT 2.0 processor would treat the list of documents as a proper sequence.)
The new collection function returns a collection of documents. The argument to pass to it (for example, the name of a directory containing documents, or a document listing document URIs) depends on the implementation.
XSLT and W3C Schema Types
The Data Model document's Types section tells us that "the data model supports strongly typed languages such as [XPath 2.0] and [XQuery] that have a type system based on [Schema Part 1]." Note that it's not based on "Schema Part 2," the Datatypes part of the W3C Schema specification but on XML Schema Part 1: Structures, which includes Part 2. Part 2 defines built-in types such as xs:integer, xs:dateTime, and xs:boolean. It also lets you define new simple types by restricting the existing ones (for example, an employeeAge type defined as an integer between 15 and 70), and Part 1 builds on that by letting you define complex types that have structure: content models (for example, an article type consisting of a title followed by one or more para elements) and attributes.
An XSLT 2.0 stylesheet can use the typing information provided by a schema to ensure the correctness of a source document, of a result document, and even of temporary trees and expressions in the stylesheet itself. You can declare stylesheet functions to return values of a certain type, and you can declare parameters and values to be of specific types so that attempts to create invalid values for them will result in an error. These features help you find data and stylesheet errors earlier and with more helpful error messages.
Another nice feature of type-aware XSLT 2.0 processors is their ability to process source nodes by matching against their type. For example, a stylesheet can include a template rule that processes all elements and attributes of type xs:date.
I mentioned above how the XPath 1.0 spec includes an appendix that describes the relationship of its data model to the infoset's data model. The XQuery 1.0 and XPath 2.0 Data Model document includes detailed descriptions of how to map its data model to an infoset and how to create each component of the data model from an infoset and from a PSVI. The latter is a huge job, accounting for a good portion of the new data model spec's length, because a Post Schema Validation Infoset can hold more information than a pre-validation infoset. A W3C schema can include information about typing, defaults, and more, so validation against that schema can turn an element such as <length>2</length> into <length unit="in">2.0</length> along with the associated information that the "2.0" represents a decimal number. In XPath 2.0, the type-name node property stores the name of the data type declared for the value in the schema, if available from a validation stage of the processing, and "xdt:untyped" otherwise. The rules for determining the value of the type-name property from an associated W3C schema are laid out in the Mapping PSVI Additions to Type Names section of the Data Model document, but be warned—only a big fan of W3C schemas could enjoy reading it.
Things are a bit simpler when the types that may come up in your XSLT processing are limited to the Schema Part 2 types. Without having your stylesheet refer to content models and other aspects of complex types, it can be handy to identify some values as integers, some as booleans, and some as URIs. The Data Model document adds five new types to the 19 primitive types defined in the Part 2 Recommendation: the xdt:untyped one mentioned above and the xdt:untypedAtomic "type," which also serves as more of an annotation about typing status than as the name of an actual type; xdt:anyAtomicType, an abstract type that plugs a newly-discovered architectural hole; and xdt:dayTimeDuration and xdt:yearMonthDuration, two types that offer totally ordered ways to measure elapsed time. That is, a sequence of values of either of these types can be properly sorted, which wasn't always the case with the duration types offered in the Schema Part 2 specification—comparing "one month" with "30 days" wouldn't always give you a clear answer.
For now, remember that if you completely ignore type-aware XSLT 2.0 processing—which will be a perfectly valid approach for much XSLT 2.0 development—the big difference between the XSLT 1.0 and 2.0 data models is that a single node, a complete document, a subtree representing a fragment of a document, and any other set of nodes described by an XPath expression are all sequences, and that much of the processing is now described in terms of these sequences. From a practical standpoint, an XSLT 1.0 stylesheet with the version attribute of its xsl:stylesheet element set to "2.0" and a few new functions called here and there will still work, as we've seen in my earlier columns on XSLT 2.0. Also remember that the new Data Model document lays the groundwork for not only XPath 2.0 (and therefore XSLT 2.0), but also XQuery, so an appreciation of sequences will help you learn XQuery more quickly.
In a future column, I'll demonstrate what type-aware XSLT processing adds to your stylesheet and its handling of source and result documents.
|
http://www.xml.com/pub/a/2005/02/02/xpath2.html
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
Definition
An instance of type random_source is a random source. It allows you to generate uniformly distributed random bits, characters, integers, and doubles. It can be in either of two modes: in bit mode it generates a random bit string of some given length p (1 <= p <= 31), and in integer mode it generates a random integer in some given range [low..high] (low <= high < low + 2^31). The mode can be changed at any time, either globally or for a single operation. The output of the random source can be converted to a number of formats (using standard conversions).
#include <LEDA/core/random_source.h>
Creation
Operations
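A minimal usage sketch based on the description above (set_range and the >> extraction operator are assumed from the LEDA manual's conventions; consult the operations reference for exact signatures):

#include <LEDA/core/random_source.h>

int main()
{
    leda::random_source S;   // starts in bit mode
    S.set_range(1, 6);       // switch to integer mode: uniform on [1..6]

    int die;
    S >> die;                // standard conversion: extract one random integer

    return 0;
}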
|
http://www.algorithmic-solutions.info/leda_manual/random_source.html
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
I'm using MSVC++ 6.0 and I have created a class. The declaration is in a .hpp file and the implementation is in a .cpp file. Both files have been manually added to the project and the .hpp file is included in the main file. Yet it won't build as it says my class is not a class or namespace. The compiler can clearly find the header file (because it's not saying it can't) but still the class declaration cannot be seen by the compiler.
I'm tearing my hair out - as far as I'm aware I've done everything necessary and it just refuses to build.
|
https://cboard.cprogramming.com/cplusplus-programming/63425-cant-see-header.html
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
I am getting very strange results from the following:
def list(self):
    import shelve
    db = shelve.open(testfile)
    list = []
    cnt = 0
    for id in db.keys():
        print db[id].info()
the testfile was written with an class defined object;
What i am finding, is if I write 50 objects, the above code prints
only 38 objects, and there are odd gaps,. like,with the name of the
objects: test1, test2, test3...test50, the print outputs
test1, test3, test5, test6, ... etc with gaps in the file names output.
But using another function, that reads the files one by one, I can read all 50 object entries, so I know they are there on the shelf. Note when I read them one by one, I supply the id,
not from the "in db.keys()" method.
I have a feeling when you list a shelved set of objects, that scanning it
via "keys" is not correct, and that is why I am getting weird results.
Actually, when you think about it, what IS the "key", for a saved object?
There is a __dict__.keys for each record, but that implies multiple keys
within one record.
Man, I am very confused....
Well, then how DO you walk through a variable length list, that you stored
via shelve?
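For what it's worth, a minimal sketch of walking a shelf (Python 2 syntax to match the post; note that an unclosed shelf can silently lose pending writes, which would explain "missing" records):

import shelve

db = shelve.open(testfile)
try:
    # keys() is the complete key list; records absent here were never
    # flushed to disk by the writer.
    for key in sorted(db.keys()):
        print key, db[key].info()
finally:
    db.close()  # close() flushes pending writes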
|
https://www.daniweb.com/programming/software-development/threads/100904/how-do-you-iterate-though-a-file-on-a-shelf
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
Models are responsible for persisting data and containing business logic. Models are direct interfaces to Google App Engine’s Datastore. It’s recommended that you encapsulate as much of your business and workflow logic within your models (or a separate service layer) and not your controllers. This enables you to use your models from multiple controllers, task queues, etc without the worry of doing something wrong with your data or repeating yourself. Models can be heavily extended via Behaviors which allow you to encapsulate common model functionality to be re-used across models.
An example model to manage posts might look like this:
from ferris import BasicModel, ndb
from ferris.behaviors import searchable

class Post(BasicModel):
    class Meta:
        behaviors = (searchable.Searchable,)
        search_index = ('global',)

    title = ndb.StringProperty()
    idk = ndb.BlobProperty()
    content = ndb.TextProperty()
Models are named with singular nouns (for example: Page, User, Image, Bear, etc.). Breaking convention is okay but scaffolding will not work properly without some help.
Each model class should be in its own file under /app/models and the name of the file should be the underscored class name. For example, to create a model to represent furry bears, name the file /app/models/furry_bear.py. Inside the file, define a class named FurryBear.
The Model class is built directly on top of App Engine’s google.appengine.ext.ndb module. You may use any regular ndb.Model class with Ferris. Documentation on propeties, querying, etc. can be found here.
Base class that augments ndb Models by adding easier find methods and callbacks.
Ferris provides two simple query shortcuts but it’s unlikely you’ll ever use these directly. Instead, you’ll use the automatic methods described in the next section.
Generates an ndb.Query with filters generated from the keyword arguments.
Example:
User.find_all_by_properties(first_name='Jon',role='Admin')
is the same as:
User.query().filter(User.first_name == 'Jon', User.role == 'Admin')
Similar to find_all_by_properties, but returns either None or a single ndb.Model instance.
Example:
User.find_by_properties(first_name='Jon',role='Admin')
The Model class automatically generates a find_by_[property] and a find_all_by_[property] classmethod for each property in your model. These are shortcuts to the above methods.
For example:
class Show(Model):
    title = ndb.StringProperty()
    author = ndb.StringProperty()

Show.find_all_by_title("The End of Time")
Show.find_all_by_author("Russell T Davies")
The Model class also provides aliases for the callback methods. You can override these methods in your Model and they will automatically be called after their respective action.
Called before an item is saved.
Called after an item has been saved.
Called before an item is retrieved. Note that this does not occur for queries.
Called after an item has been retrieved. Note that this does not occur for queries.
Called before an item is deleted.
Called after an item is deleted.
These methods are useful for replicating database triggers, enforcing application logic, validation, search indexing, and more.
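For example, a model can enforce validation just before saving; a minimal sketch using the before_put callback listed above:

from ferris import BasicModel, ndb

class Post(BasicModel):
    title = ndb.StringProperty()

    def before_put(self):
        # Runs just before the entity is saved.
        if not self.title:
            raise ValueError('Posts must have a title')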
The BasicModel adds automatic access fields. These are useful for a variety of situations where you need to track who created and updated an entity and when.
Adds the common properties created, created_by, modified, and modified_by to Model
Stores the created time of an item as a datetime (UTC)
Stores the modified time of an item as a datetime (UTC)
Stores the user (a google.appengine.api.users.User) who created an item.
Stores the user (a google.appengine.api.users.User) who modified an item.
|
http://ferris-framework.appspot.com/docs21/users_guide/models.html
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
Introduction
Notifications are a very useful way to interact with your application's users and, with Android Wear, we now also have wearable devices running Android. It's therefore a good idea to learn how to take advantage of these new features by adding appropriate actions to notifications or by creating actions that are only visible on wearable devices.
In this tutorial, I'm going to show you the modern implementation of notifications, as shown during this year's Google I/O. We'll use the new support package and extend its capabilities by adding actions that are only visible on smartwatches, the only wearable devices available with Android Wear at the time of writing.
1. Prerequisites
For this project, you can use either Android Studio or the Android Developer Tools. If you're using Android Studio, then make sure to add the following line to your build.gradle file.
compile "com.android.support:support-v4:20.0.+"
2. Setting Up the Project
Launch your IDE and create a new Android project, or open a project you've created previously. For this tutorial, I'm going to create a new project and name it ImprovedNotifications. Don't forget to use a unique package name.
While setting up the project, make sure that you select the Empty Activity option in the Create Activity step.
Once the project
is created, create a new Activity,
ActivatedActivity. This Activity is going to be called from a notification on your
mobile or wearable device.
Before we move on, we need to update the strings.xml file by adding the strings that we're going to be using a bit later in this tutorial.
<?xml version="1.0" encoding="utf-8"?> <resources> <string name="app_name">ImprovedNotifications</string> <string name="title_activity_activated">ActivatedActivity</string> <string name="message">Hi"!" I"'"m the activated activity</string> <string name="button_text">Try me for a new notification</string> <string name="notification_title">Hey Mom"!" I"'""m a title</string> <string name="notification_text">Look at me"!" I"'"m a sexy notification content</string> <string name="first_action">Let"'"s see the author"'"s twitter profile</string> <string name="wearable_action">I only appear here</string> </resources>
3. Creating the Layout
The next step is to create a layout for the
MainActivity and the
ActivatedActivity classes. The layout for the
MainActivity class is shown below.
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">
    <!-- Attributes reconstructed; the id and text resource match the code below. -->
    <Button
        android:id="@+id/notification_button"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_centerInParent="true"
        android:text="@string/button_text" />
</RelativeLayout>
And this is the layout for the
ActivatedActivity class.
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">
    <!-- Attributes reconstructed; the text resource comes from strings.xml. -->
    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_centerInParent="true"
        android:text="@string/message" />
</RelativeLayout>
4. Creating a Notification
We create a notification in the
MainActivity class. In the code snippet below, you can see what steps are involved in creating a notification. I've commented the code block to help you understand the various steps, but let's walk through the snippet step by step.
package com.androiheroes.improvednotifications;

import android.app.Activity;
import android.app.Notification;
import android.app.PendingIntent;
import android.content.Intent;
import android.net.Uri;
import android.os.Bundle;
import android.support.v4.app.NotificationCompat;
import android.support.v4.app.NotificationManagerCompat;
import android.view.View;
import android.widget.Button;

public class MainActivity extends Activity {

    /* Widgets you are going to use */
    private Button button;

    /*
     * This is the notification id
     * You can use it to dismiss the notification calling the .cancel() method on the notification_manager object
     */
    private int notification_id = 1;
    private final String NOTIFICATION_ID = "notification_id";

    /* These are the classes you use to start the notification */
    private NotificationCompat.Builder notification_builder;
    private NotificationManagerCompat notification_manager;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        /*
         * Step 1
         * Instantiation of the button you use to start the notification
         */
        button = (Button) findViewById(R.id.notification_button);

        /*
         * Step 2
         * Create the intent you are going to launch when your notification is pressed
         * and let the PendingIntent handle it
         */
        Intent open_activity_intent = new Intent(this, ActivatedActivity.class);
        open_activity_intent.putExtra(NOTIFICATION_ID, notification_id);
        PendingIntent pending_intent = PendingIntent.getActivity(this, 0,
                open_activity_intent, PendingIntent.FLAG_CANCEL_CURRENT);

        /*
         * Step 3
         * Build the notification (title and text come from strings.xml; the
         * small icon resource here is illustrative).
         * Our notification must have all the default characteristics of a notification
         * like sound and vibration.
         */
        notification_builder = new NotificationCompat.Builder(this)
                .setSmallIcon(android.R.drawable.ic_dialog_info)
                .setContentTitle(getString(R.string.notification_title))
                .setContentText(getString(R.string.notification_text))
                .setDefaults(Notification.DEFAULT_ALL)
                /* This method is going to dismiss the notification once it is pressed */
                .setAutoCancel(true)
                .setContentIntent(pending_intent);

        /*
         * Step 4
         * Here we instantiate the Notification Manager object to start/stop the notifications
         */
        notification_manager = NotificationManagerCompat.from(this);
    }

    @Override
    protected void onStart() {
        super.onStart();
        /*
         * Step 5
         * The notification is going to appear when you press the button on the screen
         */
        button.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                notification_manager.notify(notification_id, notification_builder.build());
            }
        });
    }
}
Step 1
We first instantiate the button that we'll use to launch the notification. You could also create the notification directly in the
onCreate method, but by using a button you have more control over the exact timing of the notification.
Step 2
In the second step, we instantiate an
Intent object with the task to perform when the notification is tapped. We pass the object to a
PendingIntent instance to handle it later when it's called.
Step 3
Using the Android Support Library, we create the notification using the
Builder class of the
NotificationCompat object and set its attributes.
Step 4
In this step, we instantiate a
NotificationManagerCompat instance to start and/or stop the notification anytime we want. This will make testing much easier.
Step 5
When the button is tapped, the notification is fired using the
notify method.
Don't forget to use the classes from the Android Support Library. This way you can be sure your notification is going to look fine on older versions of Android.
You can now run the app, tap the button, and see the notification appear at the top of the screen. If you tap the notification, it should take you to the
ActivatedActivity activity. With the notification set up and working, it's time to start adding actions to it.
5. Adding Actions to the Notification
You can add extra actions to the notification by invoking the
addAction method on the
notification_builder object. For this to work, you need to pass a
PendingIntent instance with the task you like to perform.
In the following code snippet, I show you the steps you have to implement to create an action with a custom task. In this example, I'm going to take you to my Twitter profile in the Twitter app. This means I need a
URI instance pointing to my Twitter profile, add this to the
Intent, and let the
PendingIntent handle it when the action is tapped. Insert this code block before the
instantiation of the
notification_builder object.
/* The action in the handheld notification must perform some task * In this case the author's twitter profile is going to be opened, in the twitter app, when it is clicked * but you can change it with your profile if you want ;) */ Intent open_twitter_profile = new Intent(Intent.ACTION_VIEW); Uri twitter_profile_location = Uri.parse("twitter://user?screen_name=@kerpie"); open_twitter_profile.setData(twitter_profile_location); PendingIntent twitter_intent = PendingIntent.getActivity(this, 0, open_twitter_profile, 0);
To add the action, invoke the addAction method on the notification_builder object and pass in the twitter_intent PendingIntent we just created.
/* Add the Twitter action to the builder (the icon choice is illustrative) */
notification_builder.addAction(android.R.drawable.ic_menu_share,
        getString(R.string.first_action), twitter_intent);
Run the
application, tap the button to trigger the notification, and you should see the notification
appear along with the action we just created.
While you can add more actions to a notification using the
addAction method, make sure the user isn't overwhelmed by the number of actions they can choose from.
6. Supporting Android Wear
So far, we have used the classes from the Android Support Library to make sure the notifications are also shown on smartwatches running Android Wear. You can run the application on a physical smartwatch or you can try it on the emulator from the Android Virtual Device Manager. Either way, you need to sync your device with the smartwatch.
Before syncing your device with the smartwatch emulator, you need to install the Android Wear app, which is available on Google Play. After you've unplugged every other Android device connected to your computer, execute the following command from the command line.
adb devices
This command lists the devices connected to your development machine. You should see two of them, the smartwatch emulator and your device. Then run the following command from the command line to enable port forwarding.
adb -d forward tcp:5601 tcp:5601
You can now connect your device with the emulator and turn on the notifications using the Android Wear app. Run the app again and trigger the notification. The notification should look similar to the one shown below.
7. Adding Wearable-Only Actions
It is possible to add actions that are only visible on wearables. This is accomplished by invoking the
addAction method of the
WearableExtender class. The result is that any actions added through the
NotificationCompat.Builder class are ignored.
As we did before, to trigger the action, we make use of an
Intent and a
PendingIntent instance, but we'll create the action displayed on the wearable device using the
Builder class of a special
Action class, which is part of the
NotificationCompat class as shown below.
/* Here we instantiate the Intent we want to use when the action in the smartwatch is pressed */ Intent wearable_intent = new Intent(this, ActivatedActivity.class); PendingIntent wearable_pending_intent = PendingIntent.getActivity(this, 0, wearable_intent, PendingIntent.FLAG_UPDATE_CURRENT); /* Now we have an intent for the wearable created we have to create a wearable action using it*/ NotificationCompat.Action wearable_action = new NotificationCompat.Action.Builder( android.R.drawable.ic_dialog_email, getString(R.string.wearable_action), wearable_pending_intent).build();
We then add
this action to the
notification_builder object using the
extend method as shown below.
notification_builder = new NotificationCompat.Builder(this)
        /* ... same configuration as in the previous steps ... */
        /*
         * Here you add a wearable-only action
         * This action won't be visible in the handheld device
         */
        .extend(new WearableExtender().addAction(wearable_action));
Run the app and tap the button to display the notification on your device. It should be different than the notification that pops up on the wearable emulator.
Conclusion
Smartwatches are here to stay, for a while at least, and it's therefore important to take advantage of this new way of communicating with your application's users. I hope you've found the tutorial helpful and don't forget to share it if you liked it.
|
https://code.tutsplus.com/tutorials/enhanced-and-wearable-ready-notifications-on-android--cms-21868
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
Note
The part of uicfg that deals with primary views is in the Primary view configuration chapter.
This module (cubicweb.web.views.uicfg) regroups a set of structures that may be used to configure various options of the generated web interface.
To configure the interface generation, we use RelationTag objects.
from cubicweb.web.views import uicfg
# force hiding
uicfg.indexview_etype_section['HideMe'] = 'subobject'
# force display
uicfg.indexview_etype_section['ShowMe'] = 'application'
This module provide highlevel helpers to avoid uicfg boilerplate for most common tasks such as fields ordering, widget customization, etc.
Here are a few helpers to customize action box rendering:
and a few other ones for form configuration:
The module also provides a FormConfig base class that lets you gather uicfg declarations in the scope of a single class, which can sometimes be clearer to read than a bunch of sequential function calls.
helper base class to define uicfg rules on a given entity type.
In all descriptions below, attributes list can either be a list of attribute names of a list of 2-tuples (relation name, role of the edited entity in the relation).
Attributes
- etype
- which entity type the form config is for. This attribute is mandatory
- formtype
- the formtype the class tries toc customize (i.e. main, inlined, or muledit), default is main.
- hidden
- the list of attributes or relations to hide.
- rels_as_attrs
- the list of attributes to edit in the attributes section.
- inlined
- the list of attributes to edit in the inlined section.
- fields_order
- the list of attributes to edit, in the desired order. Unspecified fields will be displayed after specified ones, their order being consistent with the schema definition.
- widgets
- a dictionary mapping attribute names to widget instances.
- fields
- a dictionary mapping attribute names to field instances.
- uicfg_afs
- an instance of cubicweb.web.uicfg.AutoformSectionRelationTags Default is None, meaning cubicweb.web.uicfg.autoform_section is used.
- uicfg_aff
- an instance of cubicweb.web.uicfg.AutoformFieldTags Default is None, meaning cubicweb.web.uicfg.autoform_field is used.
- uicfg_affk
- an instance of cubicweb.web.uicfg.AutoformFieldKwargsTags Default is None, meaning cubicweb.web.uicfg.autoform_field_kwargs is used.
Examples:
from cubicweb.web import uihelper, formwidgets as fwdgs

class LinkFormConfig(uihelper.FormConfig):
    etype = 'Link'
    hidden = ('title', 'description', 'embed')
    widgets = dict(
        url=fwdgs.TextInput(attrs={'size': 40}),
    )

class UserFormConfig(uihelper.FormConfig):
    etype = 'CWUser'
    hidden = ('login',)
    rels_as_attrs = ('in_group',)
    fields_order = ('firstname', 'surname', 'in_group', 'use_email')
    inlined = ('use_email',)
|
https://docs.cubicweb.org/book/devweb/rtags.html
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
Multiple Projects
When multiple projects use the same or similar resources, you may wish to share or integrate these project resources. Here we address four scenarios: Sharing helper methods between projects, sharing data sources between projects, and importing tests from other projects.
Sharing Helper Methods Between Projects
If your projects use common methods, you may wish to place them in a custom DLL. You can then add references to this utility class in your different projects in Standalone Test Studio or the Visual Studio Plugin.
Sharing Data Sources Between Projects
Generally, data sources must be in the same project folder as the tests bound to them. Test Studio creates a copy of the target data source for this purpose, so that after data binding, changes to the original data source do not automatically affect the data source for the test.
A SQL database, however, behaves differently: any test configured to connect to a SQL database as a data source will retrieve up-to-date data from that database. So, any number of projects can data bind to the same SQL database and access the same data.
Importing Tests from Another Project
All Test Studio projects are file-based. To import tests from one project to another, you can copy the test files (.tstest) from those projects into the target project. Next, manually update the namespace for any copied code-behind files. Test Studio will automatically detect this new test on restart or Project view refresh.
|
http://docs.telerik.com/teststudio/knowledge-base/project-configuration-kb/multiple-projects
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
Lazy Operations
Some of the operations in RavenDB can be evaluated lazily (performed only when needed).
This section will describe each of the available lazy operations:
* Querying
* Faceted searching
* Suggesting
Querying
To perform the query below
IEnumerable<User> users = session .Query<User>() .Where(x => x.Name == "john");
as a lazy operation, we have introduced the Lazily() extension method, which marks any type of query as lazy. To perform the above query as a lazy operation, just mark it with this extension method as in the example below.
Lazy<IEnumerable<User>> lazyUsers = session .Query<User>() .Where(x => x.Name == "John") .Lazily();
To evaluate our lazyUsers, just access the Value property.
IEnumerable<User> users = lazyUsers.Value;
An action to execute when the value gets evaluated can be passed to Lazily. This is very handy when you want to perform additional work on evaluation, or when you want to execute all pending lazy operations at once.
IEnumerable<User> users = null;
IEnumerable<City> cities = null;

session
    .Query<User>()
    .Where(x => x.Name == "John")
    .Lazily(x => users = x);

session
    .Query<City>()
    .Where(x => x.Name == "New York")
    .Lazily(x => cities = x);

session.Advanced.Eagerly.ExecuteAllPendingLazyOperations();
Lazily is part of LinqExtensions, available in the Raven.Client namespace, so
using Raven.Client;
is mandatory.
Lazy Lucene queries are also possible.
Lazy<IEnumerable<User>> users = session.Advanced .LuceneQuery<User>() .WhereEquals("Name", "John") .Lazily();
Loading lazily is done in a slightly different manner; it is achieved by using one of the methods available on the session.Advanced.Lazily property. To perform the query below
User user = session.Load<User>("users/1");
as a lazy operation, just use one of the methods from session.Advanced.Lazily, as in the example below
Lazy<User> lazyUser = session.Advanced.Lazily.Load<User>("users/1");
The value can be evaluated in the same way as when querying, and an action to perform when the value gets evaluated can also be passed.
User user = lazyUser.Value;
User user = null;

session.Advanced.Lazily.Load<User>("users/1", x => user = x);

session.Advanced.Eagerly.ExecuteAllPendingLazyOperations();
Other available lazy loading methods are:
1. LoadStartingWith, where you can load all users with a given key prefix.
var users = session.Advanced.Lazily.LoadStartingWith<User>("users/1");
2. Includes, where additional documents will be loaded by a given path.
If we consider having User and City classes as defined below
private class User
{
    public string Id { get; set; }
    public string Name { get; set; }
    public string CityId { get; set; }
}

private class City
{
    public string Id { get; set; }
    public string Name { get; set; }
}
and store one User and one City
using (var session = store.OpenSession())
{
    var user = new User
    {
        Id = "users/1",
        Name = "John",
        CityId = "cities/1"
    };

    var city = new City
    {
        Id = "cities/1",
        Name = "New York"
    };

    session.Store(user);
    session.Store(city);
    session.SaveChanges();
}
then we will be able to perform code such as
var lazyUser = session.Advanced.Lazily
    .Include("CityId")
    .Load<User>("users/1");

var user = lazyUser.Value;
var isCityLoaded = session.Advanced.IsLoaded("cities/1"); // will be true
Faceted search
To take advantage of lazy faceted search, use the ToFacetsLazy() extension method from LinqExtensions, found in the Raven.Client namespace.
To change the faceted search from the last step described here into a lazy operation, just substitute ToFacets with ToFacetsLazy.
Lazy<FacetResults> lazyFacetResults = session
    .Query<Camera>("CameraCost")
    .Where(x => x.Cost >= 100 && x.Cost <= 300)
    .ToFacetsLazy("facets/CameraFacets");

FacetResults facetResults = lazyFacetResults.Value;
Suggesting
A similar practice to the faceted search is used for lazy suggestions. The SuggestLazy() extension method is available in LinqExtensions and can be used in place of Suggest() to mark the operation as lazy.
Lazy<SuggestionQueryResult> lazySuggestionResult = session
    .Query<User>()
    .Where(x => x.Name == "John")
    .SuggestLazy();

SuggestionQueryResult suggestionResult = lazySuggestionResult.Value;
|
https://ravendb.net/docs/article-page/2.5/csharp/client-api/querying/lazy-operations
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
#include <quad.h>
Inheritance diagram for aeQuad:
Definition at line 35 of file quad.h.
Create a quad with no name.
Naturally you won't be able to find this object from the engine by name.
Create a quad given a name.
[virtual]
Draw the quad.
This is called by aeEngine::Render(). You shouldn't need to call this yourself.
Implements aeObject.
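A minimal usage sketch based on the two constructors documented above (the parameter lists are not shown in this reference extract, so the signature here is an assumption):

#include <quad.h>

// Named quads can later be looked up from the engine by name (signature assumed).
aeQuad* quad = new aeQuad("hud_background");

// Draw() is invoked by aeEngine::Render(); you shouldn't call it yourself.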
|
http://aeengine.sourceforge.net/documentation/online/pubapi/classaeQuad.html
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
Integrating BowerStatic¶
Introduction¶
This tutorial explains how to use BowerStatic with a WSGI application. BowerStatic doesn’t have a huge API, but your web framework may provide more integration, in which case you may only have to know even less.
The Bower object¶
To get started with BowerStatic you need a
Bower
instance. Typically you only have one global
Bower instance in
your application.
You create it like this:
import bowerstatic

bower = bowerstatic.Bower()
Integrating BowerStatic with a WSGI app¶
For BowerStatic to function, we need to wrap your WSGI application
with BowerStatic’s middleware. Here’s to do this for our
bower
object:
app = bower.wrap(my_wsgi_app)
Your web framework may have special BowerStatic integration instead that does this for you.
Later on we will go into more details about what happens here (both an injector and publisher get installed).
Declaring Bower Directories¶
Bower manages a directory in which it installs components (jQuery,
React, Ember, etc). This directory is called
bower_components by
default. Bower installs components into this directory as
sub-directories. Bower makes sure that the components fit together
according to their dependency requirements.
Each
bower_components directory is an “isolated universe” of
components. Components in a
bower_components directory can depend
on each other only – they cannot depend on components in another
bower_components directory.
You need to let BowerStatic know where a
bower_components
directory is by registering it with the
bower object:
components = bower.components('components', '/path/to/bower_components')
BowerStatic needs an absolute path to the components. With the help of module_relative_path, you can use a path relative to the calling module:
components = bower.components('components', bowerstatic.module_relative_path('path/relative/to/calling/module'))
You can register multiple
bower_components directories with the
bower object. You need to give each a unique name; in the example
it is
components. This name is used in the URL used to serve
components in this directory to the web.
The object returned we assign to a variable
components that we use
later.
Including Static Resources in a HTML page¶
Now that we have a
components object we can start including static
resources from these components in a HTML page. BowerStatic provides
an easy, automatic way for you to do this from Python.
Using the
components object we created earlier for a
bower_components directory, you create a
include function:
include = components.includer(environ)
You need to create the
include function within your WSGI
application, typically just before you want to use it. You need to
pass in the WSGI
environ object, as this is where the inclusions
are stored. You can create the
include function as many times as
you like for a WSGI environ; the inclusions are shared.
Now that we have
include, we can use it to include resources:
include('jquery/dist/jquery.js')
This specifies you want to include the
dist/jquery.js resource
from within the installed
jquery component. This refers to an
actual file in the jQuery component; in
bower_components there is
a directory
jquery with the sub-path
dist/jquery.js inside. It
is an error to refer to a non-existent file.
If you call
include somewhere in code where also a HTML page is
generated, BowerStatic adds the following
<script> tag to that
HTML page automatically:
<script type="text/javascript" src="/bowerstatic/components/jquery/2.1.1/dist/jquery.js"> </script>
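Putting the pieces together, a minimal WSGI application might look like this (the component path and page body are illustrative):

import bowerstatic

bower = bowerstatic.Bower()
components = bower.components('components', '/path/to/bower_components')

def my_wsgi_app(environ, start_response):
    include = components.includer(environ)
    include('jquery/dist/jquery.js')
    start_response('200 OK', [('Content-Type', 'text/html')])
    # The wrapping middleware injects the <script> tag into this page.
    return [b'<html><head></head><body>Hello</body></html>']

app = bower.wrap(my_wsgi_app)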
Supporting additional types of resources¶
There are all kinds of resource types out there on the web, and BowerStatic does not know how to include all of them on an HTML page. Additional types can be added by making a renderer and registering that renderer for an extension.
A renderer takes a resource and returns an HTML snippet, which will be injected into the HTML head element. Renderers can be defined as a callable.
The callable needs to take the resource as its single argument. Based on the resource, the callable can create an HTML snippet. The following attributes of the resource are useful for creating the HTML:
- url
- The url which can be used to load the resource
- content
- The content of the resource, which can used to make an inline resource. This is mainly useful for small resources as it reduces the numbers of http requests
An example:
def render_foo(resource): return "<foo>%s</foo>" % resource.url()
A renderer can be registered to resources types by:
bower.register_renderer('.foo', render_foo)
If you now include a resource like example.foo, that resource gets included on the web page as <foo>/path/to/example.foo</foo>.
Because most of the time, as above, the HTML can be constructed with a format string, it is also possible to supply a string. For example:
bower.register_renderer('.foo', "<foo>{url}</foo>")
You can use url and content as variables in the format string.
You can also use register_renderer() to override existing behavior of how a resource with a particular extension is to be included.
If you include a resource with an unrecognized extension, a bowerstatic.Error is raised.
Custom renderer¶
It’s also possible to specify the renderer to use for an individual included resource; the renderer registered for that resource type is then overridden just for the given resource. When you specify the renderer, you can again do so either as a callable or as a format string:
include('static/favicon.ico', '<link rel="shortcut icon" type="image/x-icon" href="{url}"/>')
or:
include('static/favicon.ico', lambda resource: '<link rel="shortcut icon" type="image/x-icon" href="' + resource.url() + '"/>')
Rendering inline¶
In some cases, you may want to render the content of a resource directly into the web page, instead of referring to it through a URL:
include('static/something.js', bowerstatic.render_inline_js)
include('static/something.css', bowerstatic.render_inline_css)
URL structure¶
Let’s look at the URLs used by BowerStatic:
/bowerstatic/components/jquery/2.1.1/dist/jquery.js
- bowerstatic: the BowerStatic signature. You can change the default signature used by passing a signature argument to the Bower constructor.
- components: the unique name of the bower_components directory which you registered with the bower object.
- jquery: the name of the installed component as given by the name field in bower.json.
- 2.1.1: the version number of the installed component as given by the version field in bower.json.
- dist/jquery.js: a relative path to a file within the component.
Caching¶
BowerStatic makes sure that resources are served with caching headers set to cache them forever. This means that after the first time a web browser accesses them, it does not have to request them from the server again. This takes load off your web server.
To take even more load off your web server, you can install a caching proxy like Varnish or Squid in front of your web server, or use Apache’s mod_cache. With one of those installed, the WSGI server only has to serve each resource once, and then it is served from cache after that.
Caching forever would not normally be advisable as it would make it hard to upgrade to newer versions of components. You would have to teach your users to issue a shift-reload to get the new version of JavaScript code. But with BowerStatic this is safe, because it busts the cache automatically for you. When a new version of a component is installed, the version number is updated, and new URLs are generated by the include mechanism.
Main endpoint¶
Bower has a concept of a main endpoint for a component in its bower.json. You can include the main endpoint by including the component by its name, without any file path after it:
include('jquery')
This includes the file listed in the main field in bower.json. In the case of jQuery, this is the same file as we already included in the earlier examples: dist/jquery.js.
A component can also specify an array of files in main. In this case only the first endpoint listed in this array is included.
The endpoint system is aware of Bower component dependencies. Suppose you include ‘jquery-ui’:
include('jquery-ui')
The jquery-ui component specifies in the dependencies field of its bower.json that it depends on the jquery component. When you include the jquery-ui endpoint, BowerStatic automatically also includes the jquery endpoint for you. You therefore get two inclusions in your HTML:
<script type="text/javascript"
        src="/bowerstatic/static/jquery/2.1.1/dist/jquery.js"></script>
<script type="text/javascript"
        src="/bowerstatic/static/jquery-ui/1.10.4/ui/jquery-ui.js"></script>
If main lists a resource with an extension that has no renderer registered for it, that resource is not included.
WSGI Publisher and Injector¶
Earlier we described bower.wrap, which wraps your WSGI application with the BowerStatic functionality. This is enough for many applications. Sometimes, however, you may want to use the static resource publishing and the injecting-into-HTML behavior separately from each other.
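For reference, that wrapping is a one-liner (the same call the Flask example below uses):
app = bower.wrap(my_wsgi_app)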
Publisher¶
BowerStatic uses the publisher WSGI middleware to wrap a WSGI application so it can serve static resources automatically:
app = bower.publisher(my_wsgi_app)
app is now a WSGI application that does everything my_wsgi_app does, as well as serve Bower components under the special URL /bowerstatic.
Injector¶
BowerStatic also automates the inclusion of static resources in your HTML page, by inserting the appropriate <script> and <link> tags. This is done by another WSGI middleware, the injector. You need to wrap the injector around your WSGI application as well:
app = bower.injector(my_wsgi_app)
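If you use the two middlewares separately but still want both behaviors, they can be nested. With the publisher outermost this mirrors the WebOb tween example below, and is presumably what wrap composes for you:
app = bower.publisher(bower.injector(my_wsgi_app))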
Morepath integration¶
See static resources with Morepath for information on how the more.static extension helps you use BowerStatic in the Morepath web framework.
Pyramid integration¶
For integration into the Pyramid web framework, there is a pyramid_bowerstatic extension or you can use djed.static.
Example Flask integration¶
The Flask web framework does not have a specific extension integrating BowerStatic yet, but you can use BowerStatic’s WSGI integration layer to do so. Here is an example of how you integrate BowerStatic with Flask. This code assumes you have a bower_components directory next to this module:
from flask import Flask, request
import bowerstatic
import os.path

app = Flask(__name__)

bower = bowerstatic.Bower()

components = bower.components(
    'components',
    os.path.join(os.path.dirname(__file__), 'bower_components'))


@app.route('/')
def home():
    include = components.includer(request.environ)
    include('jquery')
    # it's important to have head and body elements in the
    # HTML so the includer has a point to include into
    return "<html><head></head><body></body></html>"


if __name__ == "__main__":
    # wrap app.wsgi_app, not the Flask app itself
    app.wsgi_app = bower.wrap(app.wsgi_app)
    app.run(debug=True)
In the example we used a simple text string, but you can use Jinja templates too. No special changes to the templates are necessary; the only thing required is that they have HTML <head>, </head>, <body> and </body> tags so that the includer has a point where it can include the static resources.
Using the Publisher and Injector with WebOb¶
The Injector and Publisher can also be used directly with WebOb request and response objects. This is useful for integration with web frameworks that already use WebOb:
from morepath import InjectorTween, PublisherTween


def handle(request):
    ...  # do the application's work, returning a response


# use wrapped_handle instead of handle to handle application
# requests with BowerStatic support
wrapped_handle = PublisherTween(bower, InjectorTween(bower, handle))
All that is required is a WebOb request and a response.
The Morepath and Pyramid integrations mentioned above already make use of this API.
|
http://bowerstatic.readthedocs.io/en/latest/integrating.html
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
We can add functional tests to our application to verify all of the functionality that we claim to have. Ferris provides you with the ability to test our application’s controllers in the same way that we tested our data model.
Before we get started testing, we need to add a root route so that / redirects to /posts.
Modify app/routes.py and remove these lines:
from webapp2 import Route
from ferris.handlers.root import Root

ferris_app.router.add(Route('/', Root, handler_method='root'))
Add these lines in their place:
from webapp2_extras.routes import RedirectRoute

ferris_app.router.add(RedirectRoute('/', redirect_to='/posts'))
At this point, opening the root URL of the application should redirect us to /posts.
Note
To run tests execute scripts/backend_test.sh or alternatively python ferris/scripts/test_runner.py app just as we did in the Data Model section.
Ferris provides an example test called SanityTest that resides in app/tests/backend/test_sanity.py.
Here’s the code for that test:
class SanityTest(AppTestCase):
    def testRoot(self):
        self.loginUser()
        resp = self.testapp.get('/')
        self.loginUser(admin=True)
        resp = self.testapp.get('/admin')
        self.assertTrue('Ferris' in resp)
Let’s walk through this: the test logs in a regular user and requests the front page, then logs in an administrative user, requests /admin, and asserts that the admin page contains ‘Ferris’.
We’re going to build on this example and create tests to verify that posts are listed correctly (including the mine filter), that posts can be added via the form, and that users can only edit their own posts.
While this isn’t an exhaustive list, it will demonstrate how to test various aspects of our application.
In this test, we’ll create four posts: two from the first user, and two from the second user. We will then check the results of our list methods. Our tests should verify that the data was created and appears as expected.
First, let’s create our test class in app/tests/backend/test_posts.py:
from ferris.tests.lib import AppTestCase
from app.models.post import Post


class TestPosts(AppTestCase):
    def testLists(self):
        self.loginUser("user1@example.com")
We have a user logged in, so let’s create some posts as that user:
Post(title="Test Post 1").put()
Post(title="Test Post 2").put()
Now let’s log in the second user and create some more posts:
self.loginUser("user2@example.com")
Post(title="Test Post 3").put()
Post(title="Test Post 4").put()
At this point we can now make requests and verify their content. Let’s start with /posts and verify that all of the posts are showing up:
resp = self.testapp.get('/posts')
assert 'Test Post 1' in resp.body
assert 'Test Post 2' in resp.body
assert 'Test Post 3' in resp.body
assert 'Test Post 4' in resp.body
Very well, let’s continue with /posts?mine and verify that only user2@example.com's posts are present:
resp = self.testapp.get('/posts?mine')
assert 'Test Post 1' not in resp.body
assert 'Test Post 2' not in resp.body
assert 'Test Post 3' in resp.body
assert 'Test Post 4' in resp.body
Additionally, let’s make sure the ‘edit’ links are present:
assert 'Edit' in resp.body
Let’s add a new method and make a request to /posts/add:
def testAdd(self):
    self.loginUser("user1@example.com")
    resp = self.testapp.get('/posts/add')
Now let’s get the form from the response, try to submit it without filling it out, and verify that it caused a validation error:
form = resp.form
error_resp = form.submit()
assert 'This field is required' in error_resp.body
With that in place, let’s fill out the form, submit it, and verify that it went through:
form['title'] = 'Test Post'
good_resp = form.submit()
assert good_resp.status_int == 302  # Success redirects us to the list
Finally, load up the list and verify that the new post is there:
final_resp = good_resp.follow()
assert 'Test Post' in final_resp
To test to make sure that a user can only edit his own posts, we’re going to need to create posts under two different users like we did before:
def testEdit(self):
    self.loginUser("user1@example.com")
    post_one = Post(title="Test Post 1")
    post_one.put()

    self.loginUser("user2@example.com")
    post_two = Post(title="Test Post 2")
    post_two.put()
Now, let’s load the edit page for post two. This should succeed:
self.testapp.get('/posts/:%s/edit' % post_two.key.urlsafe())
Finally, load the edit page for post one. We should expect this to fail:
self.testapp.get('/posts/:%s/edit' % post_one.key.urlsafe(), status=401)
|
http://ferris-framework.appspot.com/docs21/tutorial/6_functional_testing.html
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
Captures loop safety information. More...
#include "llvm/Analysis/MustExecute.h"
Captures loop safety information.
It keeps information on whether the loop or its header may throw an exception or otherwise exit abnormally on any iteration of the loop which might actually execute at runtime. The primary way to consume this information is via isGuaranteedToExecute below, but some callers bail out or fall back to alternate reasoning if a loop contains any implicit control flow.
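A minimal usage sketch, assuming the LLVM 7-era free functions referenced on this page (computeLoopSafetyInfo and isGuaranteedToExecute):
#include "llvm/Analysis/MustExecute.h"
using namespace llvm;

// Compute safety info once per loop, then query whether an
// instruction is guaranteed to execute on each loop iteration.
static bool mustExecuteInLoop(const Instruction &I, Loop *L,
                              const DominatorTree *DT) {
  LoopSafetyInfo SafetyInfo;
  computeLoopSafetyInfo(&SafetyInfo, L);
  return isGuaranteedToExecute(I, DT, L, &SafetyInfo);
}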
Definition at line 39 of file MustExecute.h.
BlockColors
Definition at line 44 of file MustExecute.h.
Referenced by canSplitPredecessors(), CloneInstructionInExitBlock(), llvm::computeLoopSafetyInfo(), isNotUsedOrFreeInLoop(), and splitPredecessorsOfLoopExit().
HeaderMayThrow
Definition at line 42 of file MustExecute.h.
Referenced by llvm::computeLoopSafetyInfo(), and llvm::isGuaranteedToExecute().
MayThrow
Definition at line 40 of file MustExecute.h.
Referenced by llvm::computeLoopSafetyInfo(), deleteDeadInstruction(), llvm::isGuaranteedToExecute(), llvm::isSafeToUnrollAndJam(), and llvm::promoteLoopAccessesToScalars().
|
http://llvm.org/doxygen/structllvm_1_1LoopSafetyInfo.html
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
I have been investigating several ways of generating files suitable for use in Excel from a C# application.
As with most problems, there is more than one way to crack a nut. Various examples on the web show how to generate formatted sheets in Excel, either by controlling Excel from a C# application or by transforming XML data. The XML transformation has the disadvantage that it limits your clients to the most recent versions of Excel, whereas direct manipulation of Excel requires that you have it installed on the server. You also have to be very careful not to leave instances of Excel running in the background, eventually grinding your server to a halt.
This article provides a demonstration of a very simple method to generate a file that will load into Excel. It is a bit of a hack that I used from a Java platform a few years ago, but it works if all you need is a simple data export (with fewer than 65536 rows). All you do is set the response stream to the MIME type "application/vnd.ms-excel" and then pass a tab-delimited set of data with newlines at the end of each row.
Below is a sample code snippet showing a method to generate a simple set of data that will load into Excel (it should work with most versions, including Excel 95 and above). The version below also highlights how to ask the browser to treat the file as a download, forcing the user to save to disk rather than load it in the browser. This also has the advantage that the filename is easily controlled.
using System.Web;

/// <summary>
/// Demo class showing how to generate an Excel Attachment
/// </summary>
public class XLWriter
{
public static void Write(HttpResponse response)
{
// make sure nothing is in response stream
response.Clear();
response.Charset = "";
// set MIME type to be Excel file.
response.ContentType = "application/vnd.ms-excel";
// add a header to response to force download (specifying filename)
response.AddHeader("Content-Disposition", "attachment; filename=\"MyFile.xls\"");
// Send the data. Tab delimited, with newlines.
response.Write("Col1\tCol2\tCol3\tCol4\n");
response.Write("Data 1\tData 2\tData 3\tData 4\n");
response.Write("Data 1\tData 2\tData 3\tData 4\n");
response.Write("Data 1\tData 2\tData 3\tData 4\n");
response.Write("Data 1\tData 2\tData 3\tData 4\n");
// Close response stream.
response.End();
}
}
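As a usage sketch, the method could be invoked from a hypothetical ASP.NET WebForms button handler (ExportButton_Click is illustrative; Response is the page's HttpResponse):
// Stream the generated file to the client when an "Export" button is clicked.
protected void ExportButton_Click(object sender, EventArgs e)
{
    XLWriter.Write(Response);
}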
Why not just output a comma-separated-values file?
Column 1, Column 2
Row 1, Row 2
...etc.
I wanted the file generated to load into excel and appear to the user like it was a native excel file. A CSV formatted file cannot be saved with the XLS extension (it needs to be CSV).
Our ActiveReports for .NET product () includes a component we call SpreadBuilder that provides a full API for creating binary excel files. If our customers continue to show interest in the product, we may improve this component even further in the future. Of course, ActiveReports can also export any of your reports to Excel automatically.
...and ActiveReports for .NET can be obtained for the sum of.... ;-)
Don't know ActiveReports price, but ExcelLite .NET component () does the same and costs 160 EUR (~$190). You pay per developer; server or client deployment is royalty free.
Hi, how can I write the percentage sign % into Excel using C#? Please respond.
Hi,
How can I implement and use this xmlwriter class? I copy it to XMLWriter.cs but httpresponse gives error? I'm a beginner to c# and it would be really good if you help me. Thanks
Bibi,
If you post the error that may help. My guess is that you need the appropriate "using" statement at the top of the class. Also - the class is called XLWriter, not XMLWriter.
Cheers,
Martin.
hi all...i have an excel image button when I click it will take the data from my page and download it as an excel file. When i drag and drop/or save it as a .txt file it shows all the crazy html/xml codes. I just want it to display as tab delimited contents. I am having very difficulty. can anyone help? thanks,
hafiz (guile786@aol.com)
Hi, I am not able to read data from excel cells if it contains errors. When the cell has data which has been converted from number to text, a green triangle appears on top left of the cell. It is then that the data reader reads a null value from that field. Using data set made no difference neither did disabling the background error checking of excel through code. Please help!!!
Hello .
Thanks about your sample code
this code is very well
Any tips on how to do simple formatting on the cells? Such as Bold and Italics?
Good
Dear sir..
I am using Windows application.
i want to create Excel file, rename the Sheets and i want to add sheets...
Hey thnx......
This is great good stuff..
Only if you could have added how can one format the cell it would have been excellent
Neways a great help..
The type or namespace name 'HttpResponse' could not be found (are you missing a using directive or an assembly reference?)
This is the error that i encountered. What is the appropriate using statement that i'm missing here.
Thanks and regards
Akhil.
See MSDN, it's System.Web.
I need to format my Excel report which is being generated through C#. Please let me know how to format each cell, if i have a given format.
Also I want the user to get a prompt or dialogue box to save the Excel file. Please let me know how these 2 functionalities can be achieved.
Please can someone help me to do the export to excel but into differetn sheets.
Thanks for help
Joe Way
Hi
Can anyone tell how to remove '1' from excel work sheetname..
For eg: I want the worksheet name as 'project', not 'project 1'
thanks for your code but how can i create second worksheet from this code???????????????????
hi
thanks for the approach. its working superb. but i am seeing a save/open dialogue box that tells open/save the excel file which i dont need. because by default when i click submit button the data should be saved with the file name that i provided in the code.
plz kindly let me know what i have to change to avoid the popup and save automatically at spcefied location in the code.
thanks in advance.
Raj
Thanks a lot it worked for me.
I wanted to export a big datatable to a excel/csv file, and It works great!
Hello,
I'm new at C# and coding in general.
I created a C# Windows form from within Visual Studio 2008.
I have a button that I want to execute a Stored Procedure; essentially a "Select" Sql query.
How do I implement the code that you have provided to execute my stored procedure and ouput to csv or excel either one?
Thanks for your time.
HELLO SIR,
I'm new at .net
I want to create a windows form which will consume the excel sheet's data and save it in sqlserver2008 database through a button or any other action control.
can u please help me in doing this?
Thanks and Regards in advance.
|
http://www.woodwardweb.com/dotnet/generating_exce.html
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
Okay, I don't think I understand definitions, really. How does this work? With this def trip_cost function, can't I just call hotel_cost, plane_cost and rental_cost? I don't understand. Please help!
def hotel_cost(nights):
    return 140 * nights

def plane_ride_cost(city):
    if city == "Charlotte":
        return 183
    elif city == "Tampa":
        return 220
    elif city == "Pittsburgh":
        return 222
    elif city == "Los Angeles":
        return 475
    else:
        return "I'm sorry, that city is not in our database. Please choose from the following: Charlotte, Tampa, Pittsburgh or Los Angeles. Thank you"

def rental_car_cost(days):
    if days >= 7:
        return days * 40 - 50
    elif not days >= 7 and days >= 3:
        return days * 40 - 20
    else:
        return days * 40

def trip_cost(city, days):
    return hotel_cost + plane_ride_cost + rental_car_cost
|
https://discuss.codecademy.com/t/i-dont-think-i-understand-definitions/81527
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
GameFromScratch.com
Since the release of Phaser 3.0 earlier this year, the HTML5 game framework has seen a rapid succession of updates. Today Phaser 3.8.0 was released, this release focusing heavily on the plugin system, making it easier to acquire and ultimately use them in your game. This release also enables you to provide your own already created WebGL context when initializing Phaser. Of course the release is also packed with other smaller fixes, features and improvements.
Further details from the change log:
New Features
- You can now pass in your own canvas and context elements and Phaser will use those to render with instead of creating its own.
- WebGLRenderer has a new property maxTextures, which is derived from gl.MAX_TEXTURE_IMAGE_UNITS; you can get it via the new method getMaxTextures().
- WebGLRenderer.config has a new property maxTextureSize, which is derived from gl.MAX_TEXTURE_SIZE; you can get it via the new method getMaxTextureSize().
- WebGLRenderer has a new property compression, which holds the browser / device's compressed texture support gl extensions and is populated during init.
- When calling generateFrameNames to define animation frames from a texture atlas, you can now leave out the config properties and the frame names will be generated from every frame in the atlas.
- Keyboard events can be listened to per key, e.g. this.input.keyboard.on('keydown_NUMPAD_ZERO').
- Game Objects have a new method setRandomPosition, which will randomly position them anywhere within the defined area, or if no area is given, anywhere within the game size.
Updates
- Game.step now emits a prestep event, which some of the global systems hook in to, like Sound and Input. You can use it to perform pre-step tasks, ideally from plugins.
- Game.step now emits a step event. This is emitted once per frame. You can hook into it from plugins or code that exists outside of a Scene.
- Game.step now emits a poststep event. This is the last chance you get to do things before the render process begins.
- Optimized TextureTintPipeline.drawBlitter so it skips bobs that have an alpha of zero and only calls setTexture2D if the bob sourceIndex has changed; previously it called it for every single bob.
- Game.context used to be undefined if running in WebGL. It is now set to the WebGLRenderingContext during WebGLRenderer.init. If you provided your own custom context, it is set to that instead.
- The Game onStepCallback has been removed. You can now listen for the new step events instead.
- Phaser.EventEmitter was incorrectly namespaced; it's now only available under Phaser.Events.EventEmitter (thanks Tigran)
Bug Fixes
- Calling getBounds on a nested container would fail. Fix #3624 (thanks @poasher)
- Calling a creator, such as GraphicsCreator, without passing in a config object would cause an error to be thrown. All Game Object creators now catch against this.
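As an illustration of the new per-frame events, here is a minimal sketch (it assumes a constructed Phaser.Game instance; the config values are placeholders):
var config = { type: Phaser.AUTO, width: 800, height: 600 };
var game = new Phaser.Game(config);

// The game-level event emitter relays the new lifecycle events.
game.events.on('prestep', function () {
    // pre-step tasks, e.g. work a plugin needs done before the systems update
});

game.events.on('step', function (time, delta) {
    // runs once per frame, outside of any Scene
});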
If you are interested in learning Phaser 3, be sure to check out our Getting Started video, also embedded below:
GameDev News
Phaser is a popular open source HTML5 2D game framework that just released version 3.3.0. Phaser has been on a rapid release schedule since Phaser 3 was released just last month.
Highlights of this release include a new destroy event.
Additionally, the documentation has seen heavy focus, which will hopefully result in TypeScript definitions being available soon™. In addition to the above features there were several other smaller improvements and bug fixes. You can read the full change log here.
If you are interested in getting started with Phaser, be sure to check out our recently released Getting Started with Phaser 3 video tutorial, also embedded below.
Another quick update for the recently released Phaser 3 game engine, this one bringing Phaser to version 3.2.1. Phaser is a popular and full featured 2D framework for developing HTML5 games. This release is almost entirely composed of bug fixes and quality of life improvements.
Details of the release from the release notes:
Bug Fixes
Fixed issue with Render Texture tinting. Fix #3336 (thanks @rexrainbow)
Fixed Utils.String.Format (thanks @samme)
The Matter Debug Layer wouldn't clear itself in canvas mode. Fix #3345 (thanks @samid737)
TimerEvent.remove would dispatch the Timer event immediately based on the opposite of the method argument, making it behave the opposite of what was expected. It now only fires when requested (thanks @migiyubi)
The TileSprite Canvas Renderer did not support rotation, scaling or flipping. Fix #3231 (thanks @TCatshoek)
Fixed Group doesn't remove children from Scene when cleared with the removeFromScene argument set (thanks @iamchristopher)
Fixed an error in the lights pipeline when no Light Manager has been defined (thanks @samme)
The ForwardDiffuseLightPipeline now uses sys.lights instead of the Scene variable to avoid errors due to injection removal.
Phaser.Display.Color.Interpolate would return NaN values because it was loading the wrong Linear function. Fix #3372 (thanks @samid737)
RenderTexture.draw was only drawing the base frame of a Texture. Fix #3374 (thanks @samid737)
TileSprite scaling differed between WebGL and Canvas. Fix #3338 (thanks @TCatshoek)
Text.setFixedSize was incorrectly setting the text property instead of the parent property. Fix #3375 (thanks @rexrainbow)
RenderTexture.clear on canvas was using the last transform state, instead of clearing the whole texture.
Updates
The SceneManager.render will now render a Scene as long as it's in a LOADING state or higher. Before it would only render RUNNING scenes, but this precluded those that were loading assets.
A Scene can now be restarted by calling scene.start() and providing no arguments (thanks @migiyubi)
The class GameObject has now been exposed, available via Phaser.GameObjects.GameObject (thanks @rexrainbow)
A Camera following a Game Object will now take the zoom factor of the camera into consideration when scrolling. Fix #3353 (thanks @brandonvdongen)
Calling setText on a BitmapText object will now recalculate its display origin values. Fix #3350 (thanks @migiyubi)
You can now pass an object to Loader.atlas, like you can with images. Fix #3268 (thanks @TCatshoek)
The onContextRestored callback won't be defined any more unless the WebGL Renderer is in use in the following objects: BitmapMask, Static Tilemap, TileSprite and Text. This should allow those objects to now work in HEADLESS mode. Fix #3368 (thanks @16patsle)
The SetFrame method now has two optional arguments: updateSize and updateOrigin (both true by default) which will update the size and origin of the Game Object respectively. Fix #3339 (thanks @Jerenaux)
Phaser is available for download here. If you are interested in learning more about Phaser 3 development, be sure to check out our getting started video, available here and embedded below.
|
http://www.gamefromscratch.com/?tag=/Phaser
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
Container management and automation tools are today a great necessity, since IT organisations need to manage and monitor distributed and cloud/native based applications. Let’s take a look at some of the best open source tools that are available to us today for containerisation.
LXC (Linux Containers), the building block that sparked the development of containerisation technologies, was added to the Linux kernel in 2008. LXC combined the use of kernel cgroups, which allow groups of processes to be separated so that they cannot “see” each other, to implement lightweight process isolation.
Recently, Docker has evolved into a powerful way of simplifying the tooling required to create and manage containers. It basically uses LXC as its default execution driver. With Docker, containers have become accessible to novice developers and systems administrators, as it uses simple processes and standard interfaces.
Containerisation is regarded as a lightweight alternative to the virtualisation of the full machine, which involves encapsulating an application in a container with its own operating environment. This, in turn, provides many unique advantages of loading an application into a virtual machine, as applications can run on any suitable physical machine without any dependencies.
Containerisation became popular through Docker, whose containers are designed and developed in such a manner that they are capable of running on everything ranging from physical and virtual machines to OpenStack cloud clusters, physical instances and all sorts of servers.
The following points highlight the unique advantages of containers.
- Host system abstraction: Containers are standardised systems, which means that they connect the host machine to anything outside of the container using standard interfaces. Container applications rely on host machine resources and architectures.
- Scalability: Abstraction between the host system and containers gives an accurate application design, scalability and an easy-to-operate environment. Service-Oriented-Design (SoD) is combined with container applications to provide high scalability.
- Easy dependency management: Containers give a powerful edge to developers, enabling them to combine an application or all application components along with their dependencies as one unit. The host system doesn’t face any sort of challenge regarding the dependencies required to run any application. As the host system can run Docker, everything can run on Docker containers.
- Lightweight and isolation execution operating environments: Though containers are not as powerful as virtualisation in providing isolation and resource management, they still have a clear edge in terms of being a lightweight execution environment. Containers are isolated at the process level and share the kernel of the host machine, which means that a container doesn’t include a complete operating system, leading to faster start-up times.
- Layering: Containers, being ultra-lightweight in the operating environment, function in layers and every layer performs individual tasks, leading to minimal disk utilisation for images.
With the general adoption of cloud computing platforms, integrating and monitoring container based technologies is of utmost necessity. Container management and automation tools are regarded as important areas for development as today’s IT organisations need to manage and monitor distributed and cloud/native based applications.
Containers can be visualised as the future of virtualisation, and strong adoption of containers can already be seen in cloud and Web servers. So, it is very important for systems administrators to be aware of various container management tools that are also open source. Let’s take a look at what I consider are the best tools in this domain.
Apache Aurora
Apache Aurora is a service scheduler that runs on top of Apache Mesos, enabling users to run long-running services and to take appropriate advantage of Mesos’ scalability, fault tolerance and resource isolation. Aurora runs applications and services across a shared pool of host machines and is responsible for keeping them up and running 24×7. In case of any technical failures, Aurora intelligently does the task of rescheduling over other running machines.
Components
- Scheduler: This is regarded as the primary interface for the user to work on the cluster and it performs various tasks like running jobs and managing Mesos.
- Client: This is a command line tool that exposes primitives, enabling the user to interact with the scheduler. It also contains Admin_Client to run admin commands especially for cluster administrators.
- Executor: This is responsible for carrying out workloads, executing user processes, performing task checks and registering tasks in Zookeeper for dynamic service discovery.
- Observer: This provides browser based access to individual tasks running on worker machines and gives detailed analysis of processes being executed. It enables the user to browse all the sandbox task directories.
- Zookeeper: It performs the task of service discovery.
- Mesos Master: This tracks all worker machines and ensures resource accountability. It acts as the central node that controls the entire cluster.
- Mesos Agent: This receives tasks from the scheduler and executes them. It basically interfaces with Linux isolation groups like cgroups, namespace and Docker to manage resource consumption.
Features
- Scheduling and deployment of jobs.
- Resource quota and multi-user support.
- Resource isolation, multi-tenancy, service discovery and Cron jobs.
Latest version: 0.18.0
Apache Mesos
Apache Mesos is an open source cluster manager developed by the University of California, Berkeley, and provides efficient resource isolation and sharing across distributed applications or frameworks. Apache Mesos abstracts the CPU, memory, storage and other computational resources away from physical or virtual machines and performs the tasks of fault tolerance.
Apache Mesos is being developed on the same lines as the Linux kernel. The Mesos kernel is compatible with every machine and provides a platform for various applications like Apache Hadoop, Spark, Kafka and Elasticsearch with APIs for effective resource management, and scheduling across data centres and cloud computing functional environments.
Components
- Master: This enables fine-grained resource sharing across frameworks by making resource offers.
- Scheduler: Registers with the Master to be offered resources.
- Executor: It is launched on agent nodes to run the framework tasks.
Features
- Isolation of computing resources like the CPU, memory and I/O in the entire cluster.
- Supports thousands of nodes, thereby providing linear scalability.
- Is fault-tolerant and provides high availability.
- Has the capability to share resources across multiple frameworks and implement a decentralised scheduling model.
Latest version: 1.3.0
Docker Engine
Docker Engine creates and runs Docker containers. A Docker container can be defined as a live instance of a Docker image, which is regarded as a file created to run a specific service or program in the operating system. Docker is an effective platform for developers and administrators to develop, share and run applications. It is also used to deploy code, test it and implement it as fast as possible.
Docker Engine is Docker’s open source containerisation technology combined with a workflow for building and containerising applications. Docker containers can effectively run on desktops, servers, virtual machines, data centres-cum-cloud servers, etc.
Features
- Faster delivery of applications as it is very easy to build new containers, enabling rapid iteration of applications; hence, changes can be easily visualised.
- Easy to deploy and highly scalable, as Docker containers can run almost anywhere and run on so many platforms, enabling users to move the applications around — from testing servers to real-time implementation environments.
- Docker containers don’t require a hypervisor and any number of hosts can be packed. This gives greater value for every server and reduces costs.
- Overall, Docker speeds up the work, as it requires only a few changes rather than huge updates.
Latest version: 17.06
Docker Swarm
Docker Swarm is an open source native clustering tool for Docker. It converts a pool of Docker hosts into a single, virtual Docker host. As Docker Swarm serves the standard Docker API, any tool communicating with the Docker daemon can use Swarm to scale to multiple hosts.
The following tools support Docker Swarm:
- Dokku
- Docker Compose
- Docker Machine
- Jenkins
Unlike other Docker based projects, the ‘swap, plug and play’ principle is utilised by Swarm. This means that users can swap the scheduled backend Docker Swarm out-of-the-box with any other one that they prefer.
Features
- Integrated cluster management via Docker Engine.
- Decentralised design: Users can deploy both kinds of nodes, managers and workers, via Docker Engine.
- Has the Declarative Service Model to define the desired states of various services in the application stack.
- Scaling: Swarm Manager adapts automatically by adding or removing tasks to maintain the desired state of application.
- Multi-host networking, service discovery and load balancing.
- Docker Swarm uses TLS mutual authentication and encryption to secure communications between the nodes.
- It supports rolling updates.
Version: Supports Docker Engine v1.12.0 (the latest edition)
Kontena
Kontena is an open source project for organising and running containerised workloads on a cluster, which is composed of a number of nodes (virtual machines) and a Master (which monitors the nodes).
Applications can be developed via Kontena Service, which includes the container image, networking, scaling and other attributes of the application. Service is highly dynamic and can be used to create any sort of architecture. Every service is assigned a DNS address, which can be used for inter-service communication.
Features
- In-built private Docker image registry.
- Load balancing service to maintain operational load of nodes and Master.
- Access control and roles for secure communication.
- Intelligent scheduler with affinity filtering.
Latest version: 1.3.3
oVirt
oVirt is an open source virtualisation platform, developed by Red Hat primarily for the centralised management of virtual machines, as well as compute, storage and networking resources, via an easy Web based GUI with platform independent access. oVirt is built on the powerful Kernel-based Virtual Machine (KVM) hypervisor, and is written in Java and the GWT Web toolkit.
Components
- oVirt Node: Has a highly scalable, image based and small footprint hypervisor written in Python.
- oVirt Engine: Has a centralised virtualisation management engine with a professional GUI interface to administer a cluster of nodes.
Features
- High availability and load balancing.
- Live migration and Web based administration.
- State-of-art security via SELinux, and access controls for virtual machines and the hypervisor.
- Powerful integration with varied open source projects like OpenStack Glance, Neutron and Katello for provisioning and overall administration.
- Highly scalable and self-hosted engine.
Latest version: 4.1.2
Weaveworks
Weaveworks comprises a set of tools for clustering, viewing and deploying microservices and cloud-native applications across intranets and the Internet.
Tools
- Weave Scope: Provides a visual monitoring GUI interface for software development across containers.
- Weave Cortex: Prometheus-as-a-Service open source plugin for data monitoring in Kubernetes based clusters.
- Weave Flux: Facilitates the deployment of containerised applications to Kubernetes clusters.
- Weave Cloud: Combines various open source projects to be delivered as Software-as-a-Service.
Features
- Powerful GUI interface for viewing processes, containers and hosts in order to perform all sorts of operations and microservices.
- Real-time monitoring of containers via a single-node click.
- Easy integration via no coding requirements.
Wercker
Wercker is an open source autonomous platform to create and deploy containers for multi-tiered, cloud native applications. It can build containers automatically, and deploy them to public and private Docker registries. Wercker provides a CLI based interface for developers to create Docker containers that deploy and build processes, and implement them on varied cloud platforms ranging from Heroku to AWS and Rackspace.
It is highly integrated with Docker containers and includes application code for easy mobility between servers. It works on the concept of pipelines, which are called ‘automated workflows’. The API provides programmatic access to information on applications, builds and deployments.
Features
- Tight integration with GitHub and Bitbucket.
- Automates builds, tests and deployments via pipelines and workflows.
- Executes tests in parallel, and saves wait times.
- Works with private containers and container registries.
|
https://opensourceforu.com/2017/09/open-source-tools-for-containerisation/
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
libssh2_channel_send_eof man page
libssh2_channel_send_eof — send EOF to remote server
Synopsis
#include <libssh2.h>
int libssh2_channel_send_eof(LIBSSH2_CHANNEL *channel);
Description
Tell the remote host that no further data will be sent on the specified channel. Processes typically interpret this as a closed stdin descriptor.
Return Value
Return 0 on success or negative on failure. It returns LIBSSH2_ERROR_EAGAIN when it would otherwise block. While LIBSSH2_ERROR_EAGAIN is a negative number, it isn't really a failure per se.
Errors
LIBSSH2_ERROR_SOCKET_SEND - Unable to send data on socket.
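Example
A minimal usage sketch, assuming an established LIBSSH2_CHANNEL on a possibly non-blocking session:
#include <libssh2.h>

/* Signal EOF on an open channel, retrying while the
 * non-blocking session reports LIBSSH2_ERROR_EAGAIN. */
static int send_eof_blocking(LIBSSH2_CHANNEL *channel)
{
    int rc;
    while ((rc = libssh2_channel_send_eof(channel)) == LIBSSH2_ERROR_EAGAIN)
        ; /* wait for the socket to become writable, then retry */
    return rc; /* 0 on success, negative on failure */
}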
See Also
libssh2_channel_wait_eof(3) libssh2_channel_eof(3)
Referenced By
libssh2_channel_wait_closed(3), libssh2_channel_wait_eof(3).
1 Jun 2007 libssh2 0.15 libssh2 manual
|
https://www.mankier.com/3/libssh2_channel_send_eof
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
A Configuration and Tweak System
Setting up a configuration system for a game sounds like a trivial task. What exactly is there to store beyond some graphics, sound and controller information? For the final released game, those items cover most of the needs of a simple game, but during development it is handy to store considerably more information. Simple items such as auto-loading a specific level you are working on, whether bounding boxes are displayed, and other items can greatly speed debugging. Of course it is possible to write different systems, one for the primary configuration and another for tweakables, but that is a duplication of effort and not required. Presented in this article is a configuration system which supports both standard configuration and development-time configuration. This article builds on the CMake environment presented in the articles here and extends from the updated version presented with the SIMD articles here. The code for the article can be found here starting at the tag 'ConfigSystem' and contained in the "Game" directory.
Goals
The primary goals of the configuration system are fairly simple: provide configuration serialization without getting in the way of normal game programming or requiring special base classes. While this seems simple enough, there are some tricky items to deal with. An example could be the choice of a skin for the in-game UI. The configuration data will be loaded when the game starts up in order to set up the main window's position, size and whether it is fullscreen or not, but the primary game UI is created later, after the window is set up. While it is possible to simply set a global flag for later inspection by the UI, it is often preferable to keep the data with the objects which use it. In order to solve this sort of delayed configuration, the system maintains a key-value store of the configuration such that access is available at any point during execution.
Keeping the solution as simple and non-intrusive as possible is another important goal. It should take no more than a minute or two to hook up configuration file persistence without requiring multiple changes. If a local variable needs to be attached to a configuration item, it should not be required to change the local to a global or move it to a centralized location. The system should work with local values just as well as member variables and globals in order to remain non-intrusive.
Finally, while not a requirement of the configuration data directly, it should be possible to control and display the items from within the game itself. For this purpose, a secondary library is supplied which wraps the open source library AntTweakBar () and connects it to variables in the game. This little library is a decent enough starting point to get going after hacking it a bit to fit into the CMake build environment. Eventually the library will likely be replaced with the chosen UI library for the game being written as part of these articles. For the time being though, it serves the purpose as something quick to use with some nice abilities.
Using the System
A basic overview of using the configuration system is presented through a series of small examples. For full examples, the current repository contains an application called XO which is the beginnings of a game and includes a number of useful additions beyond the scope of this article. It currently builds off of SFML 2.0 and includes a test integration of the 'libRocket' UI framework, in addition to a number of utilities for logging and command line argument parsing. For more detailed examples, please see the application being created.
The Singleton
The configuration system requires a singleton object to store the in-memory database. While there are many ways to implement singletons, the choice of singleton style here is to use a scoped object somewhere within the startup of the game in order to explicitly control creation and shutdown. A very simple example of usage:
#include <Config.h>  // hypothetical name for the configuration system's header

int main( int argc, char** argv )
{
    Config::Initializer configInit( "config.json" );
    return 0;
}

Compared to other solutions, such as the Phoenix singleton, there are no questions as to the lifetime of the object, which is very important in controlling when data is loaded and saved.
Simple Global Configuration Items
The first use of the configuration system will show a simple global configuration item. This will be shown without any of the helpers which ease usage in order to provide a general overview of the classes involved. The example is simply a modification of the standby "Hello World!" example:
#include <iostream>
#include <string>
#include <Config.h>  // hypothetical name for the configuration system's header

std::string gMessage( "Hello World!" );
Config::TypedItem< std::string > gMessageItem( "HelloWorld/Message", gMessage );

int main( int argc, char** argv )
{
    Config::Initializer configInit( "HelloWorld.json" );

    // Initialize the message from the configuration.
    // If the item does not exist, the initial value is retained.
    Config::Instance->Initialize( gMessageItem );

    std::cout << gMessage;

    // Update the message in the configuration registry.
    // This example never changes the value but it is
    // possible to modify it in the json file after
    // the first run.
    Config::Instance->Store( gMessageItem );
    return 0;
}

When you run this the first time, the output is as expected: "Hello World!". Additionally, the executable will create the file "HelloWorld.json" with the following contents:
{ "Registry" : [ { "Name" : "HelloWorld\/Message", "Value" : "Hello World!" } ] }If you edit the value string in the JSON file to be "Goodbye World!" and rerun the example, the output will be changed to the new string. The default value is overwritten by the value read from the configuration file. This is not in itself all that useful, but it does show the basics of using the system.
Auto Initialization and Macros
In order to ease the usage of the configuration system there is a utility class and a set of macros. Starting with the utility class, we can greatly simplify the global configuration item. Rewrite the example as follows:
#include <iostream>
#include <string>
#include <Config.h>  // hypothetical name for the configuration system's header

std::string gMessage( "Hello World!" );
Config::InitializerList gMessageItem( "HelloWorld/Message", gMessage );

int main( int argc, char** argv )
{
    Config::Initializer configInit( "HelloWorld.json" );
    std::cout << gMessage;
    return 0;
}

The InitializerList class works with any global or static item and automates the initialization and storage of the item. This system uses a safe variation of static initialization in order to build a list of all items wrapped with the InitializerList class. When the configuration initializer is created and the configuration is loaded, items in the list are automatically configured. At shutdown, the configuration data is updated from current values, and as such the user does not need to worry about global and static types when the InitializerList is in use.
Using the static initializer from a static library in release builds will generally cause the object to be stripped as it is not directly referenced. This means that the item will not be configured in such a scenario. While this can be worked around, it is not currently done in this implementation. If you require this functionality before I end up adding it, let me know and I'll get it added.

A further simplification using macros is also possible. The two lines defining the variable and wrapping it with an initializer list are helpfully combined into the CONFIG_VAR macro. The macro takes the type of the variable, the name, the key to be used in the registry and the default starting value. The macro expands identically to the two lines and is not really needed, but it does make things more readable at times.
Scoped Configuration
Other forms of configuration, such as local scoped variables and members, can be defined in similar manners to the global items. Using the InitializerList class is safe in the case of locals and members as it will initialize from the registry on creation and of course store to the registry on destruction. The class automatically figures out if the configuration is loaded and deals with the referenced item appropriately. So, for instance, the following addition to the example works as intended:
#include <iostream>
#include <cstdint>
#include <Config.h>  // hypothetical name for the configuration system's header

CONFIG_VAR( std::string, gMessage, "HelloWorld/Message", "Hello World!" );

int main( int argc, char** argv )
{
    Config::Initializer configInit( "HelloWorld.json" );

    CONFIG_VAR( int32_t, localVar, "HelloWorld/localVar", 1234 );
    std::cout << gMessage << " : " << localVar;
    return 0;
}

The currently configured message will be printed out followed by ": 1234" and the configuration JSON will reflect the new variable. Changing the variable in the configuration file will properly be reflected in a second run of the program, and if you changed the value within the program it would be reflected in the configuration file.
Class Member Configuration
Configuring classes using the system is not much more difficult than using globals; the primary difference is in splitting the header file declaration from the implementation initialization and adding some dynamic key building abilities if appropriate. Take the following example of a simple window class:
class MyWindow
{
public:
    MyWindow( const std::string& name );

private:
    const std::string mName;
    Math::Vector2i    mPosition;
    Math::Vector2i    mSize;
};

MyWindow::MyWindow( const std::string& name )
    : mName( name )
    , mPosition( Math::Vector2i::Zero() )
    , mSize( Math::Vector2i::Zero() )
{
}

In order to add configuration persistence to this class, simply make the following modifications:
class MyWindow
{
public:
    MyWindow( const std::string& name );

private:
    const std::string mName;
    CONFIG_MEMBER( Math::Vector2i, mPosition );
    CONFIG_MEMBER( Math::Vector2i, mSize );
};

MyWindow::MyWindow( const std::string& name )
    : mName( name )
    , CONFIG_INIT( mPosition, name+"/Position", Math::Vector2i::Zero() )
    , CONFIG_INIT( mSize, name+"/Size", Math::Vector2i::Zero() )
{
}

Each differently named window will now have configuration data automatically initialized from and stored in the configuration registry. On first run, the values will be zero'd, and after the user moves the windows and closes them, they will be properly persisted and restored for the next use. Once again, the macros are not required and are simply a small utility to create the specific type instance and the helper object which hooks it to the configuration system. While it would be possible to wrap the actual instance item within the configuration binding helpers and avoid the macro, it was preferable to leave the variables untouched so as not to affect other pieces of code. This was a tradeoff required to prevent intrusive behavior when converting items to be configured.
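With that change in place, each window name yields its own registry keys; a quick sketch (the window names are illustrative):
// "Main/Position", "Main/Size", "Tools/Position" and "Tools/Size"
// now persist independently in the configuration registry.
MyWindow mainWindow( "Main" );
MyWindow toolWindow( "Tools" );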
Adding New Types
Adding new types to the configuration system is intended to be fairly simple. It is required to add a specialized template class for your type and implement three items: the constructor, a load and a save function. The following structure will be serialized in the example:
struct MyStruct
{
    int32_t     Test1;
    uint32_t    Test2;
    float       Test3;
    std::string Test4;
};

While prior examples only dealt with single types, it is quite simple to deal with composites such as this structure given the underlying nature of the JSON implementation used for serialization; the outline of the implementation is as follows:
#include <Config.h>  // hypothetical name for the configuration serializer header

namespace Config
{
    template<>
    struct Serializer< MyStruct > : public Config::Item
    {
        Serializer( const std::string& key ) : Item( key ) {}

    protected:
        bool _Load( MyStruct& ms, const JSONValue& inval );
        JSONValue* _Save( const MyStruct& inval );
    };
}

That is everything you need to do to handle your type, though of course we need to fill in the _Load and _Save functions, which is also quite simple. The _Load function is:
inline bool Serializer< MyStruct >::_Load( MyStruct& ms, const JSONValue& inval )
{
    if( inval.IsObject() )
    {
        const JSONObject& obj = inval.AsObject();
        ms.Test1 = (int32_t)obj.at( L"Test1" )->AsNumber();
        ms.Test2 = (uint32_t)obj.at( L"Test2" )->AsNumber();
        ms.Test3 = (float)obj.at( L"Test3" )->AsNumber();
        ms.Test4 = string_cast< std::string >( obj.at( L"Test4" )->AsString() );
        return true;
    }
    return false;
}

Obviously this code does very little error checking and can cause problems if the keys do not exist. But other than adding further error checks, this code is representative of how easy the JSON serialization is in the case of reading value data. The save function is just as simplistic:
inline JSONValue* Serializer< MyStruct >::_Save( const MyStruct& inval )
{
    JSONObject obj;
    obj[ L"Test1" ] = new JSONValue( (double)inval.Test1 );
    obj[ L"Test2" ] = new JSONValue( (double)inval.Test2 );
    obj[ L"Test3" ] = new JSONValue( (double)inval.Test3 );
    obj[ L"Test4" ] = new JSONValue( string_cast< std::wstring >( inval.Test4 ) );
    return new JSONValue( obj );
}

This implementation shows just how easy it is to implement new type support, thanks to both the simplicity of the library requirements and the JSON object serialization format in use.
The JSON library used works with L literals or std::wstring by default; the string_cast functions are simple helpers to convert std::string to/from std::wstring. These conversions are not code-page aware or in any way safe to use with real Unicode strings; they simply trim/expand the width of the char type, since most of the data in use is never intended to be presented to the user.
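With the specialization in place, the new type should work with the same helpers shown earlier; a hypothetical sketch (the key and default value are illustrative):
// MyStruct now participates in configuration persistence like a
// fundamental type.
CONFIG_VAR( MyStruct, gStructSettings, "Example/Settings", MyStruct() );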
The Tweak UI
As mentioned in the usage overview, the UI for tweaking configuration data is currently incorporated into the beginnings of a game application called XO. The following image shows a sample of configuration display, some debugging utility panels and a little test of the UI library I incorporated into the example. While this discussion may seem tied to the configuration system itself, that is only a side effect of the two systems going together. There is no requirement that the panels refer only to data marked for persistence; the tweak UI can refer to any data, persisted or not. This allows hooking up panels to debug just about anything you could require.
A Basic Panel
Creating a basic debug panel and hooking it up for display in the testbed is fairly easy, though it uses a form of initialization not everyone will be familiar with. Due to the fact that AntTweakBar provides quite a number of different settings and abilities, I found it preferable to wrap up the creation in a manner which does not require a lot of default arguments, filling in structures or other repetitive details. The solution is generally called chained initialization, which looks rather odd at first but can reduce the amount of typing in complicated initialization scenarios. Let's create a simple empty panel in order to start explaining chaining:
TweakUtils::Panel myPanel = TweakUtils::Panel::Create( "My Panel" );

If that were hooked into the system and displayed, it would be a default blue colored panel with white text and no content. Nothing too surprising there, but let's say we want to differentiate the panel for quick identification by turning it bright red with black text. In a traditional initialization system you would likely have default arguments in the Create function which could be redefined. Unfortunately, given the number of options possible in a panel, such default arguments become exceptionally long chains. Consider that a panel can have defaults for position, size, color, font color, font size, iconized or not, and even a potential callback to update or read data items it needs to function; the number of default arguments gets out of hand. Chaining the initialization cleans things up, though it looks a bit odd as mentioned:
TweakUtils::Panel myPanel = TweakUtils::Panel::Create( "My Panel" )
    .Color( 200, 40, 40 )
    .DarkText();

If you look at the example and say "Huh???", don't feel bad; I said the same thing when I first discovered this initialization pattern. There is nothing really fancy going on here, it is normal C++ code. Color is a member function of Panel with the following declaration:
Panel& Color( uint8_t r, uint8_t g, uint8_t b, uint8_t a = 200 );

Because Panel::Create returns a Panel instance, the call to Color works off the returned instance and modifies the rgba values in the object, returning a reference to itself, which just happens to be the originally created panel instance. DarkText works off that reference and modifies the text color, once again returning a reference to the original panel instance. You can chain such modifiers as long as you want, provided they all return a reference to the panel. At the end of the chain, the modified panel object is assigned to your variable with all the modifications in place. When you have many possible options, this chaining is often cleaner than definition structures or default arguments. This is especially apparent with default arguments, where you may wish to set only one option; if that option were at the end of the defaults, you would have to write out all the intermediates just to reach the final argument.
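A stripped-down sketch of the pattern (simplified; the real Panel has many more settings) shows why the chain works:

class Panel
{
public:
    static Panel Create( const std::string& title )  { return Panel( title ); }

    // Each modifier mutates this instance and returns *this,
    // so the next call in the chain operates on the same object.
    Panel& Color( uint8_t r, uint8_t g, uint8_t b, uint8_t a = 200 )
    {
        mR = r; mG = g; mB = b; mA = a;
        return *this;
    }

    Panel& DarkText()
    {
        mDarkText = true;
        return *this;
    }

private:
    explicit Panel( const std::string& title ) : mTitle( title ) {}

    std::string  mTitle;
    uint8_t      mR = 45, mG = 45, mB = 200, mA = 200;
    bool         mDarkText = false;
};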
Adding a Button

With the empty panel modified as desired, it is time to add something useful to it. For the moment, just adding a simple button which logs when it is pressed will be enough. Adding a button also uses the initializer chaining, though there is one additional requirement I will discuss after the example:
TweakUtils::Panel myPanel = TweakUtils::Panel::Create( "My Panel" )
    .Color( 200, 40, 40 )
    .DarkText()
    .Button( "My Button", []{ LOG( INFO ) << "My Button was pressed."; } ).End();

Using a lambda as the callback for the button press, we simply log the event. But you may be wondering what the End function is doing. When adding controls to the panel, the controls don't return references to the panel; they return references to the created control. In this way, if it were supported, you could add additional settings to the button, such as a unique color. The initialization chain would affect the control being defined and not the panel itself, even if the functions were named the same. When you are done setting up the control, End is called to return the owning Panel object so that further chaining is possible (a sketch of this mechanism follows the next example). So, adding to the example:
static uint32_t sMyTestValue = 0;

TweakUtils::Panel myPanel = TweakUtils::Panel::Create( "My Panel" )
    .Color( 200, 40, 40 )
    .DarkText()
    .Button( "My Button", []{ LOG( INFO ) << "My Button was pressed."; } ).End()
    .Variable< uint32_t >( "My Test", &sMyTestValue ).End();

This adds an editable uint32_t variable to the panel. Panel variables can be most fundamental types, std::string, and the math library Vector2i, Vector3f and Quaternionf types. With the Vector3f and Quaternionf types, AntTweakBar displays a graphical representation of direction and orientation respectively, which helps when debugging math problems. Further and more detailed examples exist in the XO application within the repository.
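Stepping back to the End call: it can be pictured as a small proxy object which keeps a reference back to its owning panel (again a simplified sketch, not the actual implementation):

class Panel
{
public:
    class ButtonProxy
    {
    public:
        explicit ButtonProxy( Panel& owner ) : mOwner( owner ) {}

        // Button-specific modifiers would live here, each returning ButtonProxy&.

        Panel& End()  { return mOwner; }  // hand the chain back to the panel

    private:
        Panel& mOwner;
    };

    ButtonProxy Button( const std::string& name, std::function< void() > onPress )
    {
        // ... create and store the button control, keyed by name ...
        return ButtonProxy( *this );
    }
};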
The current implementation of the TweakUI is fairly preliminary, which means it is both dirty and subject to rapid change. As with the configuration system, it gets the required job done but is missing some features which would be nice to have. An additional note: in order to prevent direct dependencies, the Vector and Quaternion types are handled in a standalone header. If you do not want them, simply do not include the header and there will be no dependency on the math library.
Adding a Panel to the Screen

Adding a panel to the system can be done in a number of ways. The obvious method is to create a panel, hook up a key in the input processing and toggle it from there. This is perfectly viable and is in fact how the primary 'Debug Panel' works. I wanted something a little easier though, and as such I added a quick (and dirty) panel management ability to the application framework itself. The function RegisterPanel exists in the AppWindow class; you hand it a pointer to your panel and it will be added to the 'Debug Panel' as a new toggle button. At the bottom of the 'Debug Panel' in the screenshot, you see the 'Joystick Debugger' button; that button is the result of registering the panel in this manner. It is a quick and simple way to add panels without having to bind specific keys to each panel.
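Registering a panel in this manner might look like the following (a sketch; appWindow and the two float variables are illustrative names, not taken from the repository):

static float gLeftStickX = 0.0f;
static float gLeftStickY = 0.0f;

TweakUtils::Panel joystickPanel = TweakUtils::Panel::Create( "Joystick Debugger" )
    .Variable< float >( "Left X", &gLeftStickX ).End()
    .Variable< float >( "Left Y", &gLeftStickY ).End();

// Adds a toggle button for this panel to the 'Debug Panel'.
appWindow->RegisterPanel( &joystickPanel );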
Currently the 'Debug Panel' is bound to the F5 function key and all panels default to hidden. Pressing F5 opens or closes the panel, which on the first run will be very small in the upper-left corner. Move it, resize it and exit the application; on the next run it will be shown (or not) at the location and size from the last exit. Also worth noting: the current panel creation code may change to better integrate with other sections of the codebase. The chained initialization is likely to remain unchanged, but the returned types may be modified a bit.
|
https://www.gamedev.net/articles/programming/general-and-gameplay-programming/configuration-and-tweaking-r3154/
|
CC-MAIN-2018-39
|
en
|
refinedweb
|
Custom TraceSource and debugging using IntelliTrace
December 16, 2014.
Step 1 – Create a custom TraceListener

using System;
using System.Diagnostics;

namespace CustomTracing
{
    class TraceListenerForIntelliTrace : TraceListener
    {
        ///<summary>
        /// A class scope variable that holds the contents of the current
        /// trace message being sent to this custom TraceListener.
        /// We can expect multiple .Write(string) calls and one final
        /// .WriteLine(string) call that will signify the end of the message.
        ///</summary>
        private string message = String.Empty;

        public override void WriteLine(string message)
        {
            // Since we are told to WriteLine now, it means
            // this is the last part of the message
            this.message += message;
            this.WriteMessage(this.message);

            // Since we just wrote the last part of the message,
            // reset the class scope variable to an empty string
            // to prepare for the next message
            this.message = String.Empty;
        }

        public override void Write(string message)
        {
            // Since we are told to just Write and not WriteLine,
            // it means there is more to come for this message, so
            // use a class scope variable to build up the message
            this.message += message;
        }

        public void WriteMessage(string message)
        {
            // Do nothing here, we just need the method to exist
            // so that IntelliTrace can hook into it and extract
            // the parameter "message"
        }
    }
}
Step 2 – Enable the custom TraceListener for your application
Make the following changes to the app's config file. The highlighted parts will need to be updated with the appropriate namespace and class name that you chose for your custom TraceListener.
<system.diagnostics>
  <sources>
    <source name="TraceTest" switchName="SourceSwitch"
            switchType="System.Diagnostics.SourceSwitch" >
      <listeners>
        <add name="TraceListenerForIntelliTrace"
             type="CustomTracing.TraceListenerForIntelliTrace, CustomTracing"/>
        <remove name="Default" />
      </listeners>
    </source>
  </sources>
  <switches>
    <!-- You can set the level at which tracing is to occur -->
    <add name="SourceSwitch" value="Information" />
    <!-- You can turn tracing off -->
    <!--add name="SourceSwitch" value="Off" -->
  </switches>
</system.diagnostics>
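As an aside, if you would rather not edit the config file, the same wiring can be done in code. This is a sketch using the standard TraceSource APIs (the config-based approach above is what the rest of this post assumes):

using System.Diagnostics;

// Equivalent of the <listeners> and <switches> config sections, done in code.
var ts = new TraceSource("TraceTest")
{
    Switch = new SourceSwitch("SourceSwitch", "Information")
};
ts.Listeners.Remove("Default");
ts.Listeners.Add(new CustomTracing.TraceListenerForIntelliTrace());

ts.TraceInformation("Listener attached programmatically");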
Step 3 – Update the IntelliTrace collection plan

First, register your application's executable as a module specification alongside the existing <ModuleSpecification> entries in the collection plan (update the highlighted executable name to match your application):

<ModuleSpecification Id="custom">CustomTracing.exe</ModuleSpecification>
Add the following XML fragment just before the </DiagnosticEventSpecifications> closing tag. The highlighted parts will need to be updated with the appropriate namespace and class name that you choose for your custom TraceListener.
<DiagnosticEventSpecification enabled="true">
  <CategoryId>tracing</CategoryId>
  <SettingsName _locID="settingsName.Custom">Custom TraceSource</SettingsName>
  <SettingsDescription _locID="settingsDescription.Custom">Custom TraceSource</SettingsDescription>
  <Bindings>
    <Binding>
      <ModuleSpecificationId>custom</ModuleSpecificationId>
      <TypeName>CustomTracing.TraceListenerForIntelliTrace</TypeName>
      <MethodName>WriteMessage</MethodName>
      <MethodId>CustomTracing.TraceListenerForIntelliTrace.WriteMessage(System.String):System.Void</MethodId>
      <ShortDescription _locID="shortDescription.Custom">"{0}"</ShortDescription>
      <LongDescription _locID="longDescription.Custom">"{0}"</LongDescription>
      <DataQueries>
        <DataQuery index="1" maxSize="4096" type="String" name="message"
                   _locID="dataquery.Custom.text" _locAttrData="name" query=""></DataQuery>
      </DataQueries>
    </Binding>
  </Bindings>
</DiagnosticEventSpecification>
Where to make this change when using F5/Attach
If you are using F5/Attach to debug the application, then you need to change the “collectionplan.xml” file found here:
C:\Program Files (x86)\Microsoft Visual Studio [Version of your VS]\Common7\IDE\CommonExtensions\Microsoft\IntelliTrace\[Version of your VS]\en
For example, for VS 2013:
C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\CommonExtensions\Microsoft\IntelliTrace\12.0.0\en

When collecting with the Microsoft Monitoring Agent instead, a corresponding collection plan lives under the agent's installation folder:

...\Microsoft Monitoring Agent\Agent\Int\VisualStudio\12.0\Cloud
With the listener, config and collection plan in place, a simple test application exercises the TraceSource:

using System;
using System.Diagnostics;

namespace CustomTracing
{
    class Program
    {
        static TraceSource ts = new TraceSource("TraceTest");

        static void Main(string[] args)
        {
            ts.TraceInformation("Trace using TraceSource.TraceInformation");
            ts.TraceEvent(TraceEventType.Information, 1, "Trace using TraceSource.TraceEvent");
            ts.TraceEvent(TraceEventType.Error, 2, "Trace using TraceSource.TraceEvent");

            Console.WriteLine("Hit enter to exit...");
            Console.ReadLine();
        }
    }
}
The ShortDescription and LongDescription elements in the binding control how each captured event is rendered in the IntelliTrace Events window. The defaults above simply show the captured message:

<ShortDescription _locID="shortDescription.Custom">"{0}"</ShortDescription>
<LongDescription _locID="longDescription.Custom">"{0}"</LongDescription>

Changing the format strings makes the events easier to identify at a glance, for example:

<ShortDescription _locID="shortDescription.Custom">"{0}"</ShortDescription>
<LongDescription _locID="longDescription.Custom">Trace captured by custom TraceListener: "{0}"</LongDescription>
CustomTracing (source code).zip
|
https://blogs.msdn.microsoft.com/devops/2014/12/16/custom-tracesource-and-debugging-using-intellitrace/
|
CC-MAIN-2018-39
|
en
|
refinedweb
|